Even GPT can’t detect AI-generated content, claims OpenAI


OpenAI recently published a promotional blog post aimed at educators, highlighting ways some teachers are integrating ChatGPT into their classrooms. The post showcases these educators’ success stories and provides sample prompts to help others incorporate ChatGPT into their own teaching.

In a related Frequently Asked Questions (FAQ) section, OpenAI openly addresses a persistent problem at the intersection of AI and education: AI writing detectors do not work reliably, yet they are widely used to penalize students on the basis of potentially inaccurate results.

In response to the question, “Do AI detectors work?” OpenAI firmly asserts that they do not. The company, along with others in the field, has released tools that claim to detect AI-generated content, but none have demonstrated consistent and reliable accuracy in distinguishing between AI-generated and human-generated text.

This sentiment is reinforced by a July report from Ars Technica, which highlighted the ineffectiveness of AI detection tools, including the popular GPTZero, even labeling them as “mostly snake oil.” The core problem is their reliance on unproven detection metrics, which leads to frequent false results. Ultimately, there is no foolproof method to consistently distinguish AI-generated text from human-written text, and determined individuals can evade detectors simply by paraphrasing.

Even OpenAI’s own ChatGPT cannot identify AI-written text. As the FAQ states, ChatGPT has no “knowledge” of what content could be AI-generated. Consequently, when asked questions like “Did you write this essay?” or “Could this have been written by AI?”, its answers are essentially random and have no basis in fact.