We need to talk about AI
In recent years, artificial intelligence, particularly in the form of “generative AI”, has attracted significant controversy.
From self-driving cars to text-to-image generation, from chat bots to speech recognition, AI seems to be poised to change our world in ways that we cannot even imagine.
Not surprisingly, AI has its supporters, and its detractors.
Many artists and other creative people believe that AI is “stealing”, because the AI is “trained” on images that may not be in the public domain.
Many others believe that AI could revolutionise the world, providing anything from self-driving cars to quick diagnosis of medical issues, to AI-assisted therapy.
From my background in computer science, and my love of sci-fi, I am generally pro-AI; however, from my perspective as an aspiring writer, I can also see the threat that it poses to the creative arts.
After all, why commission an artist or a freelance writer when you can just get an AI to do it for free?
I would argue that the state of the art of AI for producing images (using, for example, Stable Diffusion, Midjourney, etc.) is quite a bit more advanced than it is for writing or code generation.
This is why artists are probably the “canaries in the mineshaft” in relation to AI: their livelihood is the first to be threatened, but it will certainly not be the last (we are also seeing AI-generated plagiarism starting to become an issue in academia, and even in journalism).
The field of AI is moving phenomenally fast, and it is almost inevitable that in the coming years we are going to see AI encroach more and more into other creative fields, such as writing and programming, as well as many other aspects of our society.
The purpose of this post is not so much to discuss the ethics of AI in general, since that is far too broad a topic, but instead to look at the different extents to which AI can be used by writers, and to discuss the ethics of each.
So, how can a writer take advantage of Generative AI?
On one end of the scale, there are writing tools (like Grammarly), which use what could generally be considered AI not only to correct spelling, but to suggest grammatical changes, and even to match the tone and mood of a piece of writing (for example, professional and academic for a CV or resume; more relaxed and natural for a creative piece).
I think most people would consider these types of tools to be acceptable, and many professional writers use them on a daily basis.
But what about the other extreme?
On the other end of the scale, we have generative AI programs like ChatGPT, which can generate an entire story (often with surprising quality!) from just a simple writing prompt (such as: write me a sci-fi story about a man called Jack fighting against an evil empire).
This is quite different. I think the majority of people (myself included) would consider attempting to publish or sell a story generated in this way to be at the very least unethical. There could even be legal concerns: the legal status of AI-generated work is still highly controversial, as it is a very new field.
But what about the middle ground?
What about a situation where a writer writes a story completely without the help of AI, but then uses that story as an input to an AI program, and asks the AI to improve it or extend it?
Is this equivalent to using Grammarly to improve your work? Or is it equivalent to using the AI to generate a story for you?
The answer is that it is both; the ethics here are quite murky and grey.
There could already be authors out there who are using AI in this way.
To what extent is using an AI-generated tool acceptable? At what point does it become unethical? Should writers (or artists) be required to divulge whether or not they have used generative AI as part of their work? What if they don’t?
I feel that using generative AI for personal use or for enjoyment is fine. Not everyone can be an artist, or a writer, and the power of AI can allow these people to realise their dreams of creating incredible art and stories when they otherwise couldn’t.
I have no issue with this.
The problem arises when people attempt to monetise content that is largely created using AI (i.e., generating a story from a prompt using ChatGPT and trying to get it published).
Today, the state of the art of AI isn’t good enough for a story created in this way to actually be published or sold, but this will almost certainly change.
Ideally, I think that people should be allowed to create images, text, code, and even videos (generative AI for video is in its infancy, but does exist) at their leisure, for personal use, and they should be able to share them, but I feel that monetising this content is unethical.
Any content produced largely or entirely using AI should be essentially considered public domain. I believe it has already been established by some courts that AI generated images cannot be copyrighted, which is an important first step in this direction.
However, using AI assistance tools is probably fine, and as long as the vast majority of the work was done without the benefit of AI, it is acceptable to sell such a work.
So, for example, an author using AI to generate a book cover for a book written without generative AI, or an author using AI to make slight improvements to the tone or wording of some paragraphs, should not be prevented from monetising that work.
The problem is that it is generally not possible to tell whether an article or a story has been AI-generated, or to what extent. The only methods are subjective: AI-generated stories tend to be unemotional, to have an inconsistent tone, or to repeat words and phrases, but these tells will become even harder to spot in the coming years.
As mentioned, we are already at the point where plagiarism using AI-generated text is becoming a problem in academia (academic articles are usually short and intended to be unemotional, so AI-generated text is likely harder to detect in this field).
In the coming years, we may see AI-generated short stories and even novels, that are indistinguishable from human-generated text.
Ideally, what would be needed is a way to detect whether AI was used to generate a piece of writing. Academia already has plagiarism detectors, which are used to determine how much of a particular piece of writing “matches” other texts, but these would be largely ineffective against AI-generated text, since the text is unique.
The problem is that effective AI detectors may be impossible to build. There are some AI detectors out there today, but most seem to generate an unacceptably high number of false positives.
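To make the limitation concrete, here is a toy sketch (the function names and example strings are entirely my own, for illustration only, and this is not how any real detector is implemented) of the kind of word-sequence matching that plagiarism detectors rely on. It shows why such matching says nothing about AI-generated text: a newly generated passage shares almost no word sequences with any known source, so there is simply nothing to match.

```python
def ngrams(text, n=3):
    """Return the set of word n-grams in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, source, n=3):
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping cat"  # partly lifted
novel  = "a silver ship drifted silently between the distant stars"

print(overlap_score(copied, source))  # high: shares several trigrams
print(overlap_score(novel, source))   # zero: unique text, nothing matches
```

A lightly paraphrased copy still scores well above zero, while text that exists nowhere else, plagiarised in spirit or not, scores exactly zero. Detecting AI authorship therefore needs a fundamentally different approach than detecting copying.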
How the creative arts will handle this influx of AI-generated content remains to be seen.
Established artists and writers should have nothing to fear: they are recognised and renowned; people read their books and buy their artwork not just because of their quality, but also because of their reputation.
But what about small time creative professionals?
People who make a living freelancing, writing articles or stories, or drawing or painting on commission?
These individuals may find themselves essentially in the same position as manual labourers did when mechanisation and industrialisation began.
Despite being a supporter of Generative AI, I do acknowledge the need for regulation, and these regulations need to be passed quickly.
AI is tremendously powerful, much more so than most people (including even the detractors of AI!) realise, and within a very short space of time its effects on our world will be undeniable and irreversible.
The law has always lagged behind the technology in these cases. Just think about the advent of Napster and file sharing and its effects on the recording industry: by the time the laws caught up to the technology, the damage was done, and to this day the recording industry is still feeling the after-effects.
I believe that even now AI is close to the point where regulation will be impossible. AI is growing exponentially, and unless laws are passed quickly it could be too late; with the sheer power of AI, there is no telling how much damage could be done if that power is misused.
In conclusion, even though the advent of AI will cause great upsets in our world, I do believe that, ultimately, AI is, and will continue to be, a net positive, rather than a net negative, provided that it is properly regulated and monitored.
Many new technologies caused social upheaval when they were introduced: industrialisation, electrification, the motor car, air travel. It would be hard to imagine our world without them now.
–
“We Need to Talk about AI”
-AdAstraPhoenicia-
