The AI Arms Race: Using AI to avoid AI-detection tools
The use of artificial intelligence (AI) to detect AI-generated content has gained increasing attention in recent years as the capabilities of AI technologies have advanced. With the ability to generate highly realistic text, images, and videos, AI has the potential to revolutionize content creation. However, this also raises concerns about AI being used to produce misleading or malicious content. In education, the concern is especially acute for student reports and take-home exams.
Obviously, ChatGPT from OpenAI has been on everybody’s mind in recent months. But ChatGPT is not the only player in this space. Far from it. Several specialized AI-powered tools and services are available for both generating and detecting content. So, here is one short experiment I did to develop my own understanding of the possibilities.
Can AI be used to pass through AI content detection software?
First, I started in the most obvious way: by asking ChatGPT to generate some text. This is what came out from the prompt “Explain quantum computing in simple terms”:
Readers familiar with ChatGPT shouldn’t be surprised. This is a standard output for a very typical question (it’s even listed as a suggested prompt on the ChatGPT start page). However, if you have spent some time using ChatGPT, you have probably started to develop a tacit “feel” for these outputs. After a while, you see recurrent patterns. (I generated a complete Medium post a while back, and some people claim that they could tell about halfway through that it was probably AI-generated.) And if humans can detect such patterns, AI will be able to as well.
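The intuition that AI text has recurrent, detectable patterns can be made concrete. One signal that detectors such as GPTZero reportedly use is “burstiness”: human writing tends to vary sentence length more than model output does. Here is a minimal toy sketch of that idea (my own illustrative heuristic, not the algorithm any commercial detector actually uses):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.
    A low value means very uniform sentences, which is one
    (weak!) hint that text may be machine-generated."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat here. The dog ran off. The bird flew away."
varied = "Stop. The quick brown fox jumped over the lazy dog while we all watched. Why?"

# Uniform prose scores lower than varied prose on this toy metric.
print(burstiness(uniform) < burstiness(varied))  # → True
```

Real detectors combine many such statistical signals (perplexity under a language model being the best known), which is exactly why simple rewording can throw them off.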
So, of course, there are AI-spotters out there. I used Writer’s AI Content Detector to see if it could spot the above text from ChatGPT as AI-generated.
Indeed, a 73 percent human-generated score is not high enough. The tool itself advises me to “edit your text until there’s less detectable AI content.” If one of my students turned in a paper with this score, I would probably sit down and have a serious talk...
But the story doesn’t end here.
Now, in the current AI arms race, there is of course an AI solution to this. QuillBot is an AI-powered grammar tool that changes texts by using paraphrasing, synonyms, and even structural changes. Let’s see what happens when I run ChatGPT’s text through it:
Note: I didn’t do anything myself to this paraphrasing. There are several things you can do here: change the “mode”, adjust the rate of synonyms, and more. But for the purpose of this little experiment, I just pasted the text and hit “Rephrase”. The resulting text isn’t as fluent or polished as the original (I think!), but it retains the meaning and is definitely readable.
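QuillBot’s internals aren’t public, but the simplest ingredient it mentions, synonym substitution, is easy to sketch. The dictionary below is hand-made for illustration only; a real paraphraser uses a trained language model to rewrite whole sentences, not a lookup table:

```python
# Toy synonym-substitution "paraphraser" (illustration only; tools
# like QuillBot use neural models, not word-for-word lookup tables).
SYNONYMS = {
    "use": "utilize",
    "simple": "straightforward",
    "fast": "rapid",
    "big": "large",
}

def paraphrase(text: str) -> str:
    out = []
    for word in text.split():
        # Strip trailing punctuation so "fast," still matches "fast".
        core = word.rstrip(".,;:!?")
        tail = word[len(core):]
        replacement = SYNONYMS.get(core.lower(), core)
        # Preserve the capitalization of the original word.
        if core and core[0].isupper():
            replacement = replacement.capitalize()
        out.append(replacement + tail)
    return " ".join(out)

print(paraphrase("Quantum computers use simple qubits, and they are fast."))
# → Quantum computers utilize straightforward qubits, and they are rapid.
```

Even this crude substitution changes the surface statistics a detector sees, which hints at why a proper neural paraphraser, rewriting word choice and sentence structure together, is so effective at lowering “AI-generated” scores.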
Now for the final test: What happens when we take the paraphrased text and run it through Writer’s AI Content Detector?
My conclusion is that educational institutions currently have no reliable way of detecting AI-generated content. Most of you reading this are probably already aware of that, but if you are just starting out in this area, perhaps this experiment has convinced you that teachers stand little chance of catching AI-generated text, at least in any general sense.
Oh, and traditional plagiarism-detection software now seems obsolete. Why would anyone plagiarize when tools like ChatGPT and QuillBot can generate unique, high-quality text on the fly?