Academic Integrity in a Post-AI Era
In our experience, the initial launch of ChatGPT precipitated a wave of concern about the technology's implications for academic integrity. Much work has been done, and is still being done, to ‘detect’ or ‘catch’ AI use in assignments, whether through AI detectors, conversations with students, or other means.
While we have witnessed a shift in many areas away from a sole focus on academic integrity, we continue to see understandable attention to issues of academic misconduct. This attention is understandable in the sense that, at the course and program level, the institution needs confidence that students have accomplished the intended learning outcomes. That confidence provides assurance that graduates of the program know, can do, or care about the essential learning of their program.
At most institutions, CTL leaders do not hold primary responsibility for academic integrity, but they are called upon to work with academic integrity staff to advise on policy or to support faculty. In this capacity, your role might be to advocate for a changing definition of academic misconduct.
In an April 2025 piece, Sarah Elaine Eaton puts forward a call for a ‘post-plagiarism’ approach. She writes:
“Technology changes how we learn and create knowledge. AI writing tools now generate sophisticated text. Students need skills to use these tools ethically. A post-plagiarism approach acknowledges this reality. Rather than banning technology, we teach students to use it responsibly. We help them understand when AI assistance is appropriate and when independent work matters.”
In our function as teaching and learning leaders, we can advocate for this changed reality – one where individual authorship is complicated and reimagined, and where responsible use is an institutionally owned value. What we mean by that is that not only do we expect our students to use AI responsibly in a post-plagiarism era, but we expect our administrators, staff and faculty to do the same. We establish and uphold institutional expectations and norms around appropriate use and transparency, and we model those norms for our students and one another.
Certainly, the work of learning differs from the work of a staff member. As a staff member, you already know much of what you need to evaluate an AI output carefully for accuracy and bias; a student is still learning the core knowledge and skills that will prepare them to work confidently with an AI tool to produce something of value.
We do not mean to conflate a post-plagiarism era with one in which students have unfettered use of AI, which could put their learning at risk. Rather, we suggest that we need to work as a whole institution to bring AI literacy* to all members of the community, where student learning begins from an assumption of AI use and we teach students how to use AI well, rather than banning their engagement with a tool that can both support their learning and serve as an essential skill for future employment.
We would be glad to see examples of where and how institutions across Canada are changing policies or shifting practices toward a post-plagiarism approach, where faculty can spend less time evaluating academic misconduct related to AI and more time engaging with AI literacy and the core learning of the course and program.
Leverage existing academic integrity networks provincially, regionally, and internationally, for guidance, support, and resources on exploring AI in ways that align to academic, research, and institutional integrity:
- Academic Integrity Council of Ontario
- Manitoba Academic Integrity Network
- Alberta Council on Academic Integrity
- British Columbia Academic Integrity Network
- European Network for Academic Integrity
- International Center for Academic Integrity
- Centre for Teaching Support & Innovation – Academic Integrity and the Role of the Instructor
*AI literacy is a contested term, with differing definitions depending on who you ask, their disciplinary background, and their organization. Your institution may have an existing definition or framework; if not, we offer this definition from the University of Calgary as a starting point: “Being ‘AI literate’ is the ability to understand, use, and reflect critically on AI applications without necessarily being able to create and develop AI models like someone with a computer science background. Understanding how to use AI as a tool to enhance your work is how you become AI literate.” See also the Digital Education Council AI Literacy Framework, the UNESCO AI Competency Framework for Students, or A Framework for AI Literacy from Educause.