Sci-Fi Magazine Clarkesworld Overwhelmed with Flood of AI-Generated Stories

Danika Ellis

Associate Editor

Danika spends most of her time talking about queer women books at the Lesbrary. Blog: The Lesbrary. Twitter: @DanikaEllis

Clarkesworld Magazine is one of the largest and best-known sci-fi/fantasy magazines, publishing respected SFF authors like Catherynne Valente, Jeff VanderMeer, and Caitlin R. Kiernan. It recently had to close submissions after being flooded with stories created with chatbots like ChatGPT.

The irony is not lost on them.

But while Clarkesworld has received plenty of ill-informed replies and retweets that they should welcome our robot overlords with open arms, this isn’t a matter of sentient AIs authoring stories so good they can compete with human authors. Instead, the threat posed by these word prediction machines — still far from a true Artificial Intelligence — is the sheer amount of noise they produce.

While Clarkesworld has dealt with spam submissions before, especially in recent years by people running famous short stories through programs that slightly reword them, chatbots like ChatGPT have dramatically increased the number of spam submissions.

Asking ChatGPT or similar programs to write a short story and then submitting the result to paying magazines is being touted as a get-rich-quick scheme by “entrepreneurs” outside of author circles. At the moment, it isn’t particularly difficult to tell which submissions are generated this way — they share a pattern that’s easy to spot when viewed together — but the volume is difficult for any literary magazine to handle.

Clarkesworld usually keeps submissions open, unlike other magazines that close and open submissions in a cycle, and it also pays well, which made it a great target for this scam.

Neil Clarke said that this open submission strategy is meant to encourage new and underrepresented authors, and that only allowing paid submissions or submissions from established writers “sacrifices too many legit authors.” There are tools to weed out AI-generated writing automatically, but they are imperfect, which means that some authors would be disqualified through no fault of their own.

Clarke has noted that as these AI text generators keep improving, it will become harder to detect them. This will likely make it harder for new writers to break into the industry, as many magazines and other outlets may have to limit their submissions to prevent AI-generated spam. Getting published for the first time is already a difficult task, especially for marginalized authors, and it is likely to get even harder.

Clarkesworld is not unique in this situation: it’s likely just one of the first to identify this pattern and raise the alarm, since they are constantly reviewing submissions. It’s not limited to short stories, either: there are already hundreds of books on Amazon that credit ChatGPT as an author, and countless more are likely using ChatGPT without crediting it. This includes children’s books accompanied by AI-generated artwork. Both the generated text and art rely on work by artists and authors that these systems were trained on without the creators’ permission.

While many might imagine these AI-generated SFF stories as the next generation of art, I’m reminded more of Google’s degradation into a service that offers a wide variety of “SEO soup”: blog posts generated just to match common search terms with no real human author. It’s also a cousin to the influx of drop-shippers that has made shopping online a tedious task: items listed on Etsy are often mass-produced and misrepresented as handmade in their listings. Amazon lists the same items over and over, often with many sellers all relisting the same single product, which they will then purchase themselves and send to you — after a long shipping delay. These are all versions of “get rich quick” schemes that have made the internet a worse place to be.

It’s easy to see a world in which every possible keyword search on Amazon or other ebook retailers will return hundreds of AI-generated ebooks on every topic: dog grooming, how to teach kids about financial literacy, queer cozy fantasy novels, everything you need to know about the next election, and more. Chatbots at the moment have a ton of flaws in the writing they create, including confidently offering up incorrect information, so these books would carry those same limitations with them, making it harder to find reliable information even in book form.

The problem right now, then, isn’t that AI is replacing authors by writing better than they can. Instead, it’s AI writing worse than human authors do — but a lot more of it.

This won’t be true forever, of course. Likely these systems will keep improving over time, and they may even be able to write coherent — perhaps even beautiful — short stories at some undetermined date. But even when, or if, that day comes, we’re left with several problems.

One concern is with the ethics of training a program on work that is under copyright without the creators’ permission. At the moment, while chatbots aren’t supposed to copy the writing they’re trained on, there have still been many instances of plagiarism, whether by lightly rewording existing work or — more rarely — using their words exactly. Even without technical plagiarism, though, can we reconcile AI-generated text being so closely inspired by stolen writing — especially if they start directly competing with these same authors?

Now that we’ve looked at the practical (a flood of low-quality ebooks polluting the market) and the ethical (training on art without creators’ permission), it’s worth also addressing a more ephemeral, philosophical issue with AI-generated text: is this what we want from art?

It’s one thing to be satisfied with a chatbot’s version of an article summarizing key facts — assuming it is doing so accurately. It’s another, though, to turn to computers and AI for art. It comes down to a fundamental question: why do we tell stories, and why do we seek them out?

If stories are about better understanding the human condition, then relying on computer-generated text is a poor substitute for just about any human pen. A six-year-old’s story can tell us more about the reality and emotional impact of being a person in this world than ChatGPT can. If we turn to stories to be entertained and need no insight into the human condition, no commentary on the world, no real originality, then perhaps an AI-generated story will serve just fine — though likely no better than hundreds of thousands of human-generated stories that already exist.

It’s clear that AI-generated text and images are a significant advancement that will shake up many industries. What they’ll do to books and reading is still up in the air, but at the moment, I’m finding it difficult to see what value they can bring.

I asked ChatGPT whether chatbots pose a threat to human authors, and this is what it had to say:

It’s important to note that AI chatbots are not capable of creating entirely original content on their own, and they rely on existing material to generate responses. While chatbots can assist with tasks like editing, proofreading, and content curation, they cannot replace the creativity and imagination that human authors bring to their work.

Additionally, while AI chatbots are becoming increasingly sophisticated, they are still limited by the data they are trained on and the algorithms they use. They are not capable of fully understanding the nuances of human language and emotions in the same way that humans can. Therefore, while chatbots may offer some benefits to authors, they cannot replace the value of human creativity and talent in the writing process.

Hmm. Couldn’t have said it better myself.


To learn more about the situation at Clarkesworld, read Neil Clarke’s blog post, “A Concerning Trend,” and The Washington Post’s article “Sci-fi magazine Clarkesworld flooded with AI-generated work.”