Marley Stevens, a University of North Georgia (UNG) student, was stunned when an email from her professor accused her of using artificial intelligence (AI) to cheat and gave her a failing grade on a paper. The alleged offense? Using Grammarly, a proofreading tool endorsed on UNG’s own website, to check her spelling and grammar. The accusation triggered a six-month misconduct investigation that ended in academic probation, a plummeting GPA, and the loss of her scholarship. For Stevens, who was already managing anxiety and a chronic heart condition, the ordeal worsened her mental health. “I couldn’t sleep or focus. I felt helpless,” she recalls.
Her story reflects a growing trend: students nationwide face disciplinary action over AI-related accusations as schools scramble to address ChatGPT’s rise since late 2022. While institutions deploy AI detectors like Turnitin to identify AI-generated work, flawed software and vague policies have led to false accusations, upending students’ academic and emotional lives.
New York education consultant Lucie Vágnerová, who has handled over 100 AI misconduct cases since late 2023, notes a surge in students battling false positives. Consequences range from revoked scholarships to visa risks for international students. Cases often drag on for months, with some students even confronting post-graduation plagiarism claims. “Anxiety is the most common word I hear,” Vágnerová says. “Students stop eating, sleeping, and feel overwhelming guilt.”
At Texas A&M University–Commerce, an instructor temporarily withheld diplomas from an entire animal science class in 2023 after pasting their papers into ChatGPT and incorrectly concluding, from its answer, that the chatbot had written them. Experts stress that ChatGPT cannot reliably detect its own output, yet such methods persist. Liberty University student Maggie Seabolt faced a 20% grade reduction when her work was flagged as 35% AI-generated, a claim she denies. “I felt alone and had no idea how to prove my innocence,” says Seabolt, a first-generation student who lacked guidance.
Detection tools like Turnitin, which warns against relying solely on its AI scores, struggle with accuracy. A 2024 University of Pennsylvania study found AI detectors could be tricked by minor text alterations, while a Stanford study highlighted bias against non-native English speakers. OpenAI abandoned its own detection tool due to inaccuracy.
University of Colorado’s Casey Fiesler, a tech ethics researcher, argues that basing misconduct decisions on error-prone detectors is irresponsible. “The risk of false positives is too high,” she emphasizes. Compounding the issue, EDUCAUSE’s 2024 report reveals most institutions lack clear AI guidelines, with only 8% confident in their cybersecurity policies.
Kathryn Conrad, a University of Kansas English professor, distinguishes AI detection from plagiarism checking: rather than matching a submission against a database of sources, AI detectors score statistical signals such as “perplexity” (how predictable the text is to a language model) and “burstiness” (how much that predictability varies from sentence to sentence). She advocates transparent classroom policies to prevent confusion.
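To make the perplexity signal concrete, the short Python sketch below scores two invented sentences with the openly available GPT-2 model via the Hugging Face transformers library. It is a minimal illustration under those assumptions, not the method Turnitin or any other commercial detector actually uses, but it shows why bland, formulaic prose can look “machine-like” to such tools while idiosyncratic writing usually does not.

```python
# Minimal perplexity sketch: not any vendor's detector, just the general idea.
# Assumes the `torch` and `transformers` packages and the public GPT-2 model;
# the two sample sentences are invented for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity for `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model report the average
        # cross-entropy loss over the sequence; exponentiating gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

quirky = "My grandmother's kitchen always smelled faintly of burnt toast and lavender."
formulaic = "Artificial intelligence is a technology that is used in many different fields."
print(perplexity(quirky), perplexity(formulaic))  # the formulaic line typically scores lower
```

The same mechanics help explain the false positives Fiesler warns about: a writer who happens to compose in plain, predictable sentences, including many non-native English speakers, can produce low-perplexity text for entirely innocent reasons.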
For students, experts recommend drafting in tools that keep a version history, such as Google Docs, saving research trails, and understanding each course’s specific AI rules. Legal advisor Richard Asselta urges accused students to remain calm, seek support, and follow institutional processes meticulously. “Responding hastily can backfire,” he warns.
As Grammarly—which compensated Stevens and promoted her advocacy—introduces features to combat false accusations, the academic world grapples with balancing innovation and integrity. For students like Stevens, the stakes extend beyond grades: “It’s about fairness and mental survival in an opaque system.”