Academic Integrity in the Age of AI: Why Clear Proctoring Rules Matter
The recent expulsion of Haishan Yang from the University of Minnesota underscores a growing crisis in higher education: the lack of clear, consistent policies around AI use in assessments.
Yang, a promising researcher, was accused of using generative AI during an open-book online exam — a charge he denies. The consequences were severe: the loss of his visa and the derailment of his academic career. His ongoing legal battle illustrates what happens when institutions lack a cohesive strategy for academic integrity in an AI-enabled world.
As tools like ChatGPT become mainstream, universities must shift from reactive discipline to proactive design. That shift should include:
- Defining AI usage boundaries for exams and assignments
- Implementing fair and transparent proctoring methods, especially for remote assessments
- Educating students and faculty on ethical AI use
Without such clarity, we risk inconsistent enforcement, student mistrust, and long-term damage to the credibility of higher education.
Academic integrity must evolve — not by ignoring AI, but by thoughtfully integrating it into modern proctoring and pedagogy.