Brian Thuer of West Chester, PA is a business professional with vast experience in emergent technologies. In the following article, Brian Thuer discusses how some evolving artificial intelligence (AI) technologies can present new challenges for academia, and how educators are struggling to recognize advanced AI content.

Imagine a college professor reading through students’ papers. The writing is astoundingly effective, but the professor has an intuitive twinge that something is off. Is this actually the students’ work? It looks like quality work, yet the papers are disproportionately well-written.

Brian Thuer says that this professor has probably caught students writing with a relatively new AI program called ChatGPT. Although its primary function is chatting, students have instead been using ChatGPT to write entire documents and essays. Brian Thuer asks, what can educators do about programs like these? Are there any ways to tell if a machine did their students’ work? Read on to find out.


ChatGPT is a powerful writing program released by OpenAI. Technically, it is not new; the original program, GPT-3, has been around since 2020. But the latest update, released as “ChatGPT” in late 2022, is already making waves, according to Brian Thuer. The program was originally designed as a digital conversation partner, but chatting is not the primary way students are using it. If the end-user commands ChatGPT to write a paper on the Civil War, that’s exactly what it will do — and it will do it with startling accuracy and fluency, unlike many other AI writing technologies available today.

Brian Thuer explains that ChatGPT raises the bar for other writing programs, challenging its peers to improve on this benchmark technology. It also gives the industry an opportunity to release AI countermeasures; while those may make ChatGPT’s work easier to detect, other writing programs will probably sneak under the radar.

How it Works

Brian Thuer explains that AI must be trained to do things. ChatGPT was trained on casual conversation, so anyone who chats with the application gets a relatively natural response back. But that training also means the technology has an impressive handle on how the English language works.

It does this by anticipating what a human will expect, explains Brian Thuer. Plenty of AI tools already do this. For example, Google Docs includes a feature that suggests the next word in a sentence, and several email services offer something similar. ChatGPT works on the same principle, only it can write full academic papers.
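The next-word prediction idea described above can be illustrated with a toy bigram model — a deliberately simplified, hypothetical sketch. ChatGPT itself uses a large neural network trained on vastly more text; this example only shows the basic "guess the most likely next word" mechanic that autocomplete features share.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models train on billions of words.
corpus = (
    "the civil war began in 1861 . "
    "the civil war ended in 1865 ."
).split()

# Count which word follows which (a "bigram" model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("civil"))  # -> war
```

Autocomplete in a word processor works on this principle at a far larger scale; a model like ChatGPT repeats the prediction step over and over to generate whole paragraphs.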

How Good Is It?

If the question is “how good is this bot at writing?”, then the answer is that, when comparing writing samples, the bot produces more effective writing than the majority of collegiate-level students. Brian Thuer says that academics were stunned at how easily ChatGPT wrote quality, legible essays. It knows when it is being lied to, self-censors, and writes work that would be “given full marks.”

Just because an AI tool is effective does not mean the computer-generated output is written in a moral or ethical context. Recently, ChatGPT was discovered to produce anti-Hindu and anti-Islamic content. Cultural insensitivity may be a red flag that a student used ChatGPT to write a paper, so both students and educators should be on the lookout for discriminatory language. If they are not careful, some students may find themselves in the uncomfortable position of accepting accountability for discriminatory remarks rather than admitting they violated an institutional code of conduct by cheating.

In short, ChatGPT is a surprisingly strong writing platform that is quickly being adopted by higher education students, while not being recognized as computer-generated work by teachers and professors. Educators are right to be alarmed. So, what can educators do to make sure their students’ work was not written by a digital ghost? Can advancing AI technologies evolve to distinguish between AI and human writing?

Ways to Pinpoint Digital Writing

The first thing to know is that ChatGPT’s work will not trigger the usual plagiarism checker sites. Although some of those services are working on methods to catch AI writing, plagiarism is not the issue.

Brian Thuer says that the good news is that web platforms exist to counter this challenge. GLTR is made specifically to check for ChatGPT-style output, although it does not do as well against other AI writing algorithms. OpenAI also released a countermeasure, the GPT-2 Output Detector Demo. Both of these “fight fire with fire” by using AI to check for AI.
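The core idea behind detectors like GLTR is to score how predictable each word in a passage is under a language model: machine-generated text tends to pick highly probable words, while human writing is more surprising. The sketch below is a hypothetical toy version with hand-made probabilities, not the actual GLTR algorithm, which relies on a large neural model.

```python
# Hypothetical per-word probabilities, standing in for a real language model.
model_probs = {
    "the": 0.9, "civil": 0.8, "war": 0.95, "began": 0.7,
    "whimsical": 0.01, "zephyr": 0.005,
}

def avg_predictability(words, default=0.05):
    """Mean model probability of each word; higher suggests machine-like text."""
    return sum(model_probs.get(w, default) for w in words) / len(words)

machine_like = ["the", "civil", "war", "began"]
human_like = ["whimsical", "zephyr", "the", "war"]

# Machine-like text scores as more predictable than the quirkier human text.
assert avg_predictability(machine_like) > avg_predictability(human_like)
```

Real detectors compute these probabilities with a neural language model and visualize them per token, but the comparison logic is essentially the same.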

In short, Brian Thuer says that the “war” against AI writing is more of an arms race. People will create technology to detect AI-generated writing, only to have someone else develop a better algorithm that beats it. This war will have no winner. But humans may be the losers when the dust settles.


While numerous AI technologies have been recently introduced to the market, ChatGPT might be the most impactful AI platform for academia in the past decade. It takes the “autofill” feature found in most email and writing utilities miles further, allowing it to write essays that don’t sound auto-generated. Even the keenest professors cannot tell these papers apart at a glance, by close reading, or by running them through plagiarism checkers. At this point, institutions should consistently seek superior AI tools to distinguish between AI and human writing.
