Madeleine was finishing her nursing internship when an email arrived with the subject line "Academic Integrity Concern." Inside was a (false) accusation that she had used artificial intelligence to write an assignment. She had done nothing of the sort, but it didn't matter. The university had a report generated by another AI, and that was enough. Six months later, she was still there, trying to prove that the system had gotten it wrong.
Meanwhile, her CV read "results pending," and no hospital would take her. It's a perfect short circuit: an AI that accuses students of using AI, based on another AI that doesn't work well. And it's people who pay the price, not algorithms. This isn't Black Mirror, and I didn't make it up: it really happened.
Six thousand cases, ninety percent wrong
The Australian Catholic University recorded nearly 6,000 cases of alleged plagiarism in 2024. 90% of these involved artificial intelligence. The problem? A significant number of those students had done nothing wrong. All false accusations. The detector used by the university, Turnitin, had simply decided that those texts were too well written to be human. A compliment, in theory. A condemnation, in practice.
Tania Broadley, vice-chancellor of ACU, told the ABC that the figures were "substantially overstated." She acknowledged an increase in cases of academic misconduct, but declined to say how many students had been victims of false accusations. A quarter of all referrals were dismissed after investigation. But the lost months aren't recovered.
Madeleine's case is not isolated. Hundreds of students had to submit complete browsing histories, handwritten notes, and detailed justifications for things they hadn't done. As one emergency medicine student told the ABC:
“They're not the police. They don't have warrants to request your browsing history. But when you're in danger of having to repeat a course, they do what they want.”
Turnitin, the system that admits it's unreliable
On its own website, Turnitin warns that its AI detector "should not be used as the sole basis for adverse action against a student." The company has admitted that its document-level false positive rate is less than 1% for documents with 20% or more AI content. But at the sentence level, the rate rises to 4%. And when a text mixes human writing and AI, the problems multiply.
One student told ABC: “It’s AI detecting AI, and almost my entire essay was highlighted in blue: 84% supposedly written by AI.” His text was completely original. But the algorithm didn't care.
Vanderbilt University disabled Turnitin's AI detector completely as early as 2023, citing concerns about its reliability. The university calculated that with 75,000 papers submitted annually and a false positive rate of "only" 1%, roughly 750 students a year would be falsely accused. Too many.
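Vanderbilt's estimate is simple base-rate arithmetic: even a small error rate, multiplied by a large volume of submissions, produces hundreds of false accusations. A minimal sketch of that calculation, using only the figures cited above:

```python
def expected_false_flags(papers_per_year: int, false_positive_rate: float) -> float:
    """Expected number of genuinely human-written papers flagged as AI,
    assuming the false positive rate applies uniformly to all submissions."""
    return papers_per_year * false_positive_rate

# Vanderbilt's figures: 75,000 papers a year, a 1% document-level false positive rate
print(expected_false_flags(75_000, 0.01))  # → 750.0
```

The point is not the exact number but the scaling: any nonzero error rate, applied to tens of thousands of students, guarantees a steady stream of innocent people under investigation.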
The problem is not only technical
The Australian university knew about the system's problems for over a year before decommissioning it in March 2025. Internal documents seen by the ABC show that staff were aware of the "limited reliability of the tool" and its "inability to distinguish AI-assisted editing from full generation." Yet the false accusations continued for months.
The phenomenon is not limited to Australia. In Italy, several universities are implementing advanced anti-plagiarism software. The University of Padua has introduced a new, more sophisticated anti-plagiarism server. The University of Verona has integrated software capable of identifying artificially generated texts. And the University of Ferrara cancelled an entire Psychobiology exam after discovering that some students had used ChatGPT, without being able to identify those responsible.
One detail I wouldn't overlook: a Stanford study found that AI detectors are biased against non-native English speakers. The algorithms flag texts written by foreign students as "AI-generated" more frequently, adding discrimination on top of the harm.
False accusations of plagiarism: a paradoxical situation
Universities are embracing artificial intelligence. Many have partnerships with technology companies to integrate AI tools into teaching. Then they accuse students of using those same tools. The message is confusing: AI is the future, but if you use it, you're a fraud.
Meanwhile, according to an Italian survey, 75% of students who took their final exams in 2023 admitted to using AI tools to prepare. 18% of students between the ages of 16 and 18 use AI during regular classwork. The technology is there, it's accessible, and students are using it. Expecting them not to is unrealistic.
What happens now
As mentioned, the "anti-plagiarism" system has been abandoned, and individual cases have been gradually (and too slowly) dismissed. For Madeleine, it's now too late. She has lost six months, a job opportunity, and her trust in the system. "I didn't know what to do. Go back to school? Give up and do something other than hospital nursing?" she told the ABC.
The case shows what happens when we delegate judgment to systems we don't fully understand. Algorithms can be useful, but they are not infallible. And when they get it wrong, it's people who pay. Not with corrupted data or miscalculations, but with ruined careers and wasted years. Artificial intelligence may appear more compassionate than humans in certain contexts, but false accusations like these show how dangerous it is to trust it with life-changing decisions.
As always, the problem isn't AI. It's the blind faith some people place in it.