Generative AI programs are fueling many ongoing discussions about artistic integrity, labor, and originality. But that isn't stopping people from trying to take advantage of the systems right now. On Tuesday, Reuters profiled several people who turn to text generators like ChatGPT to churn out book manuscripts, which they then polish and sell through services like Amazon's self-publishing e-book platform. The rising tide of chatbot-assisted stories has grown so severe that it is even forcing a temporary pause in submissions at one of the web's leading sci-fi magazines.
According to Reuters, Amazon's e-book store hosts at least 200 titles that openly list ChatGPT as author or co-author. These titles include space-inspired poetry, children's books about thrifty woodland animals, and how-tos on using ChatGPT to supposedly improve one's dating life. "I could see people making a whole career out of it," one of the AI-assisted writers told Reuters.
[Related: No, the AI chatbots (still) aren’t sentient.]
Since Amazon currently has no explicit policy requiring individuals to credit generative text programs as authors, its library of AI-powered titles is likely much larger.
"All books in store must comply with our content policies, including compliance with intellectual property rights and all other applicable laws," Amazon spokeswoman Lindsay Hamilton told Reuters by email.
But even as AI-powered titles spread to literary markets like Amazon's Kindle store, other outlets are being forced to halt all submissions while they develop new strategies. In a blog post published last week, Neil Clarke, publisher and editor-in-chief of the popular science fiction magazine Clarkesworld, announced that the site would temporarily pause its submissions portal due to an unsustainable influx of AI-assisted spam stories.
[Related: Just because an AI can hold a conversation does not make it smart.]
Clarke revealed in his post that spam entries resulting in bans on future submissions have skyrocketed since ChatGPT's public debut. Within the first 20 days of February, the editors flagged over 500 submitted stories as plagiarized. Before ChatGPT, the magazine typically caught fewer than 30 plagiarized stories per month. While a number of tools can help identify plagiarized material, the time and cost involved make them difficult to deploy for publications like Clarkesworld that operate on a small budget.
"If the field doesn't find a way to address this situation, things will collapse," Clarke wrote on his blog. "Response times are going to get worse and I don't even want to think about what happens to my colleagues who provide feedback on submissions." While he doesn't believe this will kill short stories as readers know them, he warns that it will undeniably "complicate things" as opportunists and scammers continue to exploit the rapid advances in generative text.