Academic Integrity in the Age of AI: Why Clear Proctoring Rules Matter

Updated May 2026 · 8 min read · MonitorExam


⚡ Quick Answer

When a PhD student at the University of Minnesota was expelled for allegedly using AI on an exam — losing his visa, his career, and his right to stay in the US — it exposed a systemic problem: most institutions still don't have clear, consistent, enforceable policies around AI use in assessments. Here's what happened, what it means for higher education, and what institutions must do now.

The Case That Changed Everything

In August 2024, Haishan Yang — a third-year PhD student in health economics at the University of Minnesota — sat for an eight-hour preliminary exam. The exam permitted class materials but explicitly prohibited the use of "any sort of Artificial Intelligence tools, such as ChatGPT."

Yang completed the exam remotely from Morocco. He submitted his answers. "I think I did perfect," he said in an interview with KARE 11. "I was very happy."

What followed became one of the most significant academic integrity cases in recent US higher education history.


What Happened: The Full Timeline

August 2024 — Yang submits his preliminary exam. A four-member faculty grading committee notices his answers appear inconsistent with his previous writing and contain concepts not covered in class.

Professor Hannah Neprash inputs the exam questions into ChatGPT and compares the outputs to Yang's answers. She finds near-identical phrasing and structure in multiple sections. Another professor notes Yang's use of the acronym PCO (Primary Care Organization) — an acronym none of the four faculty members had ever seen used in their field, but which appeared in ChatGPT's output.

The committee refers the case to the Office for Community Standards. Yang is informed of the allegations and submits a 35-page written defence denying all charges. He acknowledges using AI to check his English on other assignments — but categorically denies using it to generate exam answers.

The university offers informal resolution: expulsion. Yang chooses a formal hearing instead.

At the hearing, both sides present evidence. Yang's advocate attempts to generate new ChatGPT outputs live during the hearing to challenge the university's methodology — the panel chair stops the demonstration, stating that the committee "does not create new evidence" during hearings.

The panel unanimously finds Yang responsible for scholastic dishonesty. He is expelled.

The consequences are immediate and severe:

  • Expulsion triggers termination of his SEVIS record
  • He loses his international student status
  • He loses his legal right to remain in the United States
  • His academic career — built over years of work from a rural village in China — is effectively over

Yang calls it "a death penalty."


Yang files multiple lawsuits between December 2024 and March 2025, and the courts rule on them through early 2026:

  • December 2024 — Sues Professor Neprash in Hennepin County District Court
  • January 2025 — Files federal lawsuit against university employees involved in the hearing, seeking over $4 million in damages, reinstatement, and expungement of his record
  • March 2025 — Petitions the Minnesota Court of Appeals challenging the university's conduct decision
  • March 2025 — Files complaint against the university under the Minnesota Government Data Practices Act
  • October 2025 — Federal court dismisses Yang's due process lawsuit
  • February 2026 — Minnesota Court of Appeals upholds the expulsion, finding the university's decision was supported by substantial evidence and that Yang received reasonable notice and a meaningful opportunity to be heard

As of 2026, some cases remain open.


Why This Case Is So Important

Yang's case is not just about one student. It exposes five systemic failures that affect universities worldwide.

1. AI detection tools are unreliable — and institutions are using them anyway

The faculty committee ran exam questions through ChatGPT and compared outputs to Yang's answers. This is not a validated scientific methodology. AI detection software is inconsistent at best, and at worst displays bias against neurodivergent students and non-native English speakers.

Research consistently shows AI detectors produce false positives — text written by humans that gets flagged as AI-generated. Yang's advisor, Professor Brian Dowd, described the evidence against Yang as "inconclusive" and called Yang "the best-read student" he had encountered.

The hearing panel itself noted it did not rely on AI detection evidence — it relied on faculty judgment. But the entire investigation was triggered by a ChatGPT comparison.

2. Policies were unclear about what constituted AI use

The exam said no AI tools "such as ChatGPT." Yang acknowledged using AI to check his English grammar on other assignments. At what point does grammar checking become academic dishonesty? The policy didn't say.

Most institutions still lack acceptable-use AI policies. Without clear boundaries, enforcement becomes arbitrary — and arbitrary enforcement creates legal and reputational risk.

3. International students face asymmetric consequences

For domestic students, an expulsion is devastating but recoverable. For international students on student visas, expulsion triggers immediate loss of legal status. The stakes are categorically different — yet the same disciplinary process applies to both.

4. The prior incident created bias

A year before the exam in question, Yang submitted a homework assignment containing what appeared to be an AI prompt in the body of his answer: "re write it, make it more casual, like a foreign student write but no ai." The professor who raised it later dropped the allegation and Yang received only a warning — but the incident was brought up again during the exam hearing.

Yang believes this prior conflict fuelled bias against him. Whether or not that's true, the inclusion of dropped allegations in a subsequent hearing raises serious due process questions.

5. No proctoring infrastructure caught anything in real time

The entire case was built on post-submission analysis — faculty comparing text to ChatGPT outputs. There was no proctoring during the exam. No tab switch detection. No session monitoring. No identity verification that the same person completed the whole exam.

If the university had used a proctoring layer during the exam, there would be objective data — not competing interpretations of writing style.


What Institutions Must Do Now

1. Write a clear AI acceptable-use policy before the next exam cycle

Every exam should explicitly state:

  • What AI tools are permitted (none / grammar only / summarisation / full use)
  • What constitutes a violation
  • What evidence will be used in a misconduct investigation
  • What the process is — and what the possible consequences are

Vague policies like "no AI tools such as ChatGPT" leave too much room for interpretation and litigation.

2. Never rely solely on AI detection tools as evidence

AI detectors should be a flag for further investigation — not evidence of misconduct. Any disciplinary action based primarily on AI detection output is legally and ethically vulnerable.

Institutions should always require corroborating evidence: proctoring session data, identity verification logs, behavioural anomalies, and direct conversation with the student before escalating.

3. Implement proctoring that creates objective session data

If Yang's exam had been proctored — even lightly — there would be a timestamped record of every tab switch, every application opened, every change of window focus. That's objective data. Not a comparison of writing styles.

Proctoring during the exam doesn't prevent every form of cheating, but it creates an evidence base that is far more defensible than post-submission text analysis.
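The session record described above can be pictured as an append-only event log with server-side timestamps. Here is a minimal Python sketch of the idea — an illustration only, not MonitorExam's actual implementation; the event names and fields are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SessionLog:
    """Append-only record of proctoring events for one exam session."""
    student_id: str
    events: list = field(default_factory=list)

    def record(self, kind: str, detail: str = "") -> None:
        # Timestamps are captured at record time (server-side in practice),
        # so the sequence of events cannot be disputed after the fact.
        self.events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "kind": kind,
            "detail": detail,
        })

    def flags(self) -> list:
        # Events that warrant human review. A flag is a prompt for
        # investigation, not proof of misconduct.
        suspicious = {"tab_switch", "window_blur", "paste"}
        return [e for e in self.events if e["kind"] in suspicious]

log = SessionLog(student_id="demo-001")
log.record("exam_start")
log.record("tab_switch", "focus left exam window for 41s")
log.record("paste", "312 characters pasted into answer field")
log.record("exam_end")
print(f"{len(log.flags())} events flagged for review")
```

The point of the sketch is the asymmetry it creates: a reviewer starts from a timestamped sequence of observed events rather than from a retrospective comparison of writing styles.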

4. Build a separate policy for non-native English speakers and international students

AI grammar correction is now near-universal. Holding international students to the same stylistic consistency standards as native English speakers — when those students routinely use AI for language support — creates asymmetric risk. Institutions should explicitly address this in their policies.

5. Consider AI-forward assessment design

The deeper question Yang's case raises is whether traditional exam formats are still fit for purpose in an AI-enabled world. Open-book, open-internet, open-AI exams that test synthesis and reasoning — rather than recall — are harder to cheat on and more representative of real-world work.


Is your institution prepared for an AI misconduct case? MonitorExam creates a complete session record for every exam — tab switches, identity verification, and behavioural logs — so if a concern arises, you have objective data, not competing interpretations. See how it works →

What MonitorExam Does Differently

The Yang case is, at its core, a proctoring failure — not because someone cheated, but because there was no objective record of what happened during the exam.

MonitorExam creates that record:

What happened in Yang's exam → What MonitorExam would have recorded:

  • Remote exam, no monitoring → Full session log with timestamps
  • No tab switch detection → Every tab switch flagged and logged
  • No identity verification during exam → FIDO passkey + face presence throughout
  • Post-submission text analysis only → Real-time behavioural anomaly detection
  • No audit trail → CredScore report with full evidence chain

This doesn't mean automated proctoring catches everything — it doesn't. But it means any misconduct investigation starts with objective session data, not competing interpretations of writing style.
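To make "behavioural anomaly detection" concrete, here is a toy heuristic in Python: a long idle gap followed by a sudden burst of text is a classic signal of content composed elsewhere and pasted in. This is a hedged sketch, not MonitorExam's actual algorithm; the thresholds and the event shape are assumptions:

```python
def find_timing_anomalies(events, idle_threshold_s=120, burst_chars=200):
    """Flag points where a large block of text appears after a long idle gap.

    `events` is a chronological list of (elapsed_seconds, chars_added) pairs.
    A match is a flag for human review, not proof of misconduct.
    """
    anomalies = []
    prev_t = 0
    for t, chars in events:
        gap = t - prev_t
        if gap >= idle_threshold_s and chars >= burst_chars:
            anomalies.append({"at_s": t, "idle_s": gap, "chars": chars})
        prev_t = t
    return anomalies

# A session with steady typing, then a 5.5-minute silence followed by
# 450 characters arriving at once.
session = [(30, 40), (90, 55), (150, 60), (480, 450), (540, 35)]
print(find_timing_anomalies(session))
# → [{'at_s': 480, 'idle_s': 330, 'chars': 450}]
```

Even a heuristic this simple illustrates the evidentiary difference: the flag points to a specific, timestamped moment in the session that a reviewer can examine, rather than an after-the-fact impression of the finished text.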


The Broader Lesson

Yang's case will be studied in education law and policy courses for years. What it tells us is clear:

The problem was not that AI exists. The problem was that the institution had no system for managing it — before, during, or after the exam.

Universities that build clear policies, implement transparent proctoring, and create objective evidence trails are not just protecting themselves legally. They are protecting students like Yang — who may be entirely innocent, but who have no evidence to prove it.

Academic integrity must evolve. Not by banning AI, but by building the systems that make integrity verifiable.


Frequently Asked Questions

What happened to Haishan Yang? Yang was a PhD student expelled from the University of Minnesota in 2024 for allegedly using ChatGPT on a preliminary exam. He filed multiple lawsuits challenging the decision. Both the federal court and the Minnesota Court of Appeals upheld the university's expulsion in 2025–2026. Some cases remain open.

Are AI detection tools reliable enough to expel a student? Research suggests AI detection tools are inconsistent and can produce false positives — particularly for non-native English speakers. The Yang hearing panel itself stated it did not rely on AI detection evidence, instead relying on faculty judgment. Most legal and academic experts advise against using AI detection as the sole basis for academic misconduct decisions.

What should universities include in their AI exam policy? At minimum: which tools are permitted, what constitutes a violation, what evidence is required, what the process is, and what the possible consequences are. Policies should be specific — "no AI tools such as ChatGPT" is not sufficient.

How does proctoring help in AI misconduct cases? Proctoring creates a real-time objective record — tab switches, application changes, timing anomalies, identity verification — that can either corroborate or refute allegations of AI use. Without a session record, misconduct cases rely entirely on post-submission analysis, which is legally vulnerable.

Does MonitorExam detect AI use during exams? MonitorExam detects behavioural signals — tab switching, browser exits, unusual timing, copy-paste activity — that may indicate external AI tool use. It cannot read what was typed in another application, but it creates an objective audit trail that supports or refutes post-submission concerns.


References

  1. MPR News — 'A death penalty': Ph.D. student says U of M expelled him over unfair AI allegation (January 17, 2025) https://www.mprnews.org/story/2025/01/17/phd-student-says-university-of-minnesota-expelled-him-over-ai-allegation
  2. Minnesota Lawyer — Minnesota court upholds U of M student expulsion for alleged AI exam cheating (February 13, 2026) https://minnlawyer.com/2026/02/13/u-of-m-ai-cheating-expulsion-upheld-appeal/
  3. Liebert Cassidy Whitmore — Federal Court Upholds University's Disciplinary Process in AI Misconduct Case (December 4, 2025) https://www.lcwlegal.com/news/federal-court-upholds-universitys-disciplinary-process-in-ai-misconduct-case/
  4. KARE 11 — PhD student expelled from University of Minnesota for allegedly using AI (February 2025) https://www.kare11.com/article/news/local/kare11-extras/student-expelled-university-of-minnesota-allegedly-using-ai/89-b14225e2-6f29-49fe-9dee-1feaf3e9c068
  5. The Minnesota Daily — Ph.D. student sues UMN, files human rights complaint after AI plagiarism expulsion (March 3, 2025) https://mndaily.com/campus/ph-d-student-sues-umn-files-human-rights-complaint-after-ai-plagiarism-expulsion/03/03/2025/
  6. MinnPost — Instead of punishing students for using AI, schools must provide clear, consistent guidelines (May 16, 2025) https://www.minnpost.com/community-voices/2025/05/instead-of-punishing-students-for-using-ai-schools-must-provide-clear-consistent-guidelines-and-rules/
  7. KARE 11 — Court upholds U of M PhD student's expulsion over AI-use allegations (February 2026) https://www.kare11.com/article/news/local/court-upholds-u-of-m-phd-students-expulsion-over-ai-use-allegations/89-01e98fc3-a0b5-43ff-b072-d8ac3c3eedb0
  8. TechStory — PhD student expelled from the University of Minnesota for allegedly using AI (February 22, 2025) https://techstory.in/phd-student-expelled-from-the-university-of-minnesota-for-allegedly-using-ai/

Build the Evidence Trail Before You Need It

The best time to implement proctoring is before a misconduct case — not after.

Yang's case had no session data, no tab switch logs, no identity verification record during the exam. The entire case rested on post-submission text comparison — which is legally vulnerable, factually disputed, and destroyed a student's career.

MonitorExam creates a complete, timestamped session record for every student on every exam — automatically. If a concern arises, your institution has objective data to work with, not competing interpretations of writing style.

For institutions:

  • Book a 30-min walkthrough with our team
  • See how MonitorExam fits your exam workflow
  • Discuss compliance, scale, and integration

For individual educators:

  • Set up a proctored exam in 5 minutes
  • No IT team or installation required
  • Free to try — no credit card needed

Book an Institution Demo →
Try MonitorExam Free →