It’s no secret that I’m a proponent of AI as a useful tool for creatives. But, as with any new tool, there are huge downsides that accompany the limitless upside potential. This is especially true in these early days of AI adoption as individuals and companies struggle to learn how best to implement these tools into their workflows. One of the most significant downsides we’re seeing right now involves AI detectors mistakenly flagging human-written articles as being AI-generated, which is causing real harm to writers’ careers.
Consider one of the major factors behind these false flags: Grammarly, a widely used tool for fixing punctuation and grammar mistakes. Grammarly helps make writing clearer and error-free, which is essential for professional work. However, that polished prose can sometimes read as though it were created by AI, causing detectors to flag it. Writers who try to improve their work may end up having it flagged as AI-generated instead. The issue is especially frustrating because many companies actually require their employees to use Grammarly to clean up their work.
AI detectors are not very reliable, which is a big problem given what’s at stake. Bars Juhasz, speaking with Gizmodo, pointed out these concerns: “We have a lot of concerns around the reliability of the training process these AI detectors use. These guys are claiming they have 99% accuracy, and based on our work, I think that’s impossible. But even if it’s true, that still means for every 100 people there’s going to be one false flag. We’re talking about people’s livelihoods and their reputations.”
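To make the stakes of that arithmetic concrete, here is a back-of-the-envelope sketch in Python. It takes the quote’s own framing at face value (99% accuracy means roughly one false flag per hundred human-written pieces), and the review volume is a made-up assumption for illustration, not a figure from any detector vendor.

```python
# Back-of-the-envelope: how many human writers get falsely flagged even if a
# detector's accuracy claim is taken at face value. All numbers here are
# illustrative assumptions, not measurements.

claimed_accuracy = 0.99                      # the "99% accuracy" claim quoted above
false_positive_rate = 1 - claimed_accuracy   # the quote's framing: 1 in 100 human pieces flagged

# Hypothetical workload: an outlet screening 5,000 human-written drafts a year.
human_drafts_screened = 5_000

expected_false_flags = human_drafts_screened * false_positive_rate
print(f"Expected wrongly flagged drafts per year: {expected_false_flags:.0f}")
# Prints: Expected wrongly flagged drafts per year: 50
```

Even a seemingly small error rate, applied at publishing scale, adds up to dozens of writers wrongly accused every year, which is exactly the point Juhasz is making.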
For writers, being wrongly accused of using AI can have serious consequences. Many depend on a reputation for original work to land assignments and keep steady clients. If their work gets flagged, they can miss out on opportunities, see that reputation damaged, and even lose their jobs. Freelance writers and those working under strict contracts are especially at risk, because clients may decide to end agreements rather than risk publishing something that might be AI-generated.
On top of the financial hit, these false flags take a toll on writers’ mental health. Writers put enormous time, creativity, and effort into their work. Being wrongly accused of using AI dismisses that skill and dedication, sometimes tarnishing a career that took years to build, and the resulting stress and loss of motivation hurt both productivity and well-being.
AI is here to stay, and it should be treated as a helpful tool, not something to be feared. Some of the biggest companies in the world recognize its potential and are investing heavily to build AI into their products: Microsoft and Google are pouring money into AI to enhance their software and services, and Adobe has added AI tools to its Creative Cloud apps to help artists work more effectively. It seems hypocritical to provide and encourage AI tools for graphic artists while making their use taboo for people whose creative work is more lexical. Instead of punishing employees based on dubious results from AI detectors, companies should help their employees use AI responsibly. By providing training and clear guidelines on ethical AI use, companies can empower writers to improve their work without fear of being made a pariah.
Boiled down to its core, the idea of AI reliably detecting other AI-generated content seems questionable. As one Gizmodo commenter put it, “AI trains on millions of human-written articles. AI gets really good at imitating human writing. AI thinks everything is AI.” The quip captures a real flaw in current detection methods: the better AI becomes at imitating human writing, the less reliably any detector can tell the two apart.
There may never be an AI detection tool that’s infallible, and employers should question whether using these tools to make crucial employment decisions is really the best path forward. Instead, the quality of the work itself should be what gets examined, regardless of the tools used to create it.
Writers need to stand up for themselves and each other. The writing community should raise awareness of this issue and push for improvements in AI detectors. By working with tech companies and industry groups, we can create an environment that embraces AI as a tool that makes all of our lives easier.
After all, AI really can be a great tool for writers, helping them enhance their work. But false accusations from AI detectors pose a serious threat. By recognizing the problem and working toward better solutions, we can ensure that human writers are appreciated and valued for their contributions.