Since COVID forced the move to remote learning, universities have sounded the alarm about a concomitant spike in academic misconduct. Instructors are worried: according to one study, 93% believe that cheating is easier and more prevalent in online learning.
Did COVID bring with it a pandemic of cheating?
Objective empirical data on the increase in cheating over the past year and a half are scant. Studying cheating is inherently challenging because of the reliance on perceptions and self-reports of dishonest behaviour. Studies comparing cheating rates in online and face-to-face courses, both before and during COVID, are equivocal: some find that students cheat more in the online environment, some find they cheat less, and others find they cheat about the same amount.
What all results seem to agree on is that cheating has occurred at sometimes alarming rates since long before the COVID pandemic, indeed since long before online learning was possible. This age-old problem has also been tackled with equally age-old solutions:
The first is the proctored exam. In the face-to-face version, proctors stroll the aisles between carefully spaced desks, confiscating devices, crib sheets, and other prohibited resources. Online proctoring attempts to recreate this familiar environment, but the mechanism is a bit different; proctors can no longer stand unobtrusively in the back of the room or casually wander closer to investigate surreptitious activities, but are confined to what’s immediately visible via a student’s web camera. This limited field of view makes it much more difficult to determine what students are peering at, what sort of alternative device they might be querying, or what notes they may have pinned to the seams of their clothing.
As a consequence, online proctoring flags students for virtually any movement, whether they stand up and leave their desks or simply glance briefly away from their computer screens. Anything but sitting still and looking directly at the screen can appear to be a sign of nefarious activity. To uncover any hidden aids a student may have carefully stashed away, the exam is sometimes preceded by a “room check,” in which the student is required to scan the environment so that proctors can see whether any texts, notes, or alternate devices are present. None of this surveillance is sufficient to prevent cheating, however: the internet is rife with creative methods for fooling online proctoring services. Anecdotal data suggest that online proctoring is more likely to prevent someone from going to the bathroom than from cheating. Worse, the facial-recognition algorithms behind online proctoring have been shown to embed racial bias, leading some students to shine a bright light directly into their faces during a high-pressure exam, just to appease the cameras.
Given these downsides, why is the surveillance model of assessment considered ideal? Decades of research show that fear of being caught does give pause to students tempted to cheat, but it is by no means a panacea. Students cheat on traditional exams as much as, and perhaps more than, in the online environment. So why is proctoring continually invoked as the key to successful assessment?
One answer lies at the heart of the very purpose of assessment: ensuring students have acquired the desired knowledge and skills, are able to demonstrate these on demand, independently, and to a specified standard. Because proctored exams require students to answer many complex and possibly high-level questions in a high-pressure environment, quickly, and without reference to external supports, they are sometimes regarded as the only way to truly assess an individual’s mastery of the material.
This assessment type is also said to reflect the reality that students will confront after graduation. For many professionals, including lawyers, nurses, doctors, engineers, technical staff, editors, and poets, to name a few, it’s essential to be able to quickly diagnose a problem, independently exercise good judgement, and rapidly arrive at a sound, evidence-based solution. While there may also be a team to work with, or opportunities to consult references for further information, each needs a solid foundation of skills and knowledge to draw upon.
At the same time, decades of research have taught us that, of all assessment strategies, exams are the least likely to either teach or validly assess these high-level skills. In study after study, the artificial nature of exams has been shown to foster superficial approaches to learning. Questions that focus on retrieval of information prompt students to prepare by cramming facts into their memories, facts often forgotten once the exam is finished. Exams that rely heavily on multiple-choice or other objective questions are particularly problematic because it is impossible to discern whether success reflects mastery or simply superb test-taking skills, and whether errors stem from a true lack of mastery, from minor calculation errors, or from confusion over ambiguous or idiomatic language. These problems can be mitigated by writing questions that require application or analysis, which foster deeper approaches to learning, but in general students assessed through timed, summative, high-stakes exams tend to perform poorly when asked to retrieve, apply, or transfer information even weeks, let alone months or years, later.
To teach and adequately assess students’ abilities to solve problems effectively, to persevere in the face of uncertainty, and to communicate their analyses and findings coherently to a range of audiences, it’s essential to provide opportunities for them to practice and demonstrate these skills in different ways and different contexts, receiving meaningful feedback along the way. This strategy, assessment designed for learning, focuses more on the process and creates opportunities for the instructor to guide growth formatively. It also tends to be more engaging for students, reducing the temptations and several of the motivating factors for committing academic misconduct.
Unfortunately, this type of assessment tends to be much more expensive, requiring greater investment of time and energy in providing feedback and working with students individually, which is challenging for under-resourced instructors, particularly those with large classes. Some assignment types that help alleviate this workload, such as group work, can make it difficult to assess individual contributions: while students will reap the benefits of collaboration, they may cross the line into inappropriate “collusion”.
What’s the solution? There is no perfect assessment strategy, but in thinking through assessment design, it’s useful to recognize that certain practices are known to exacerbate the problem, cultivating a culture of cheating. These include assuming that students are cheaters and focusing energy on surveillance and punishment rather than on assessment design and learning; creating busywork assignments that are submitted for grades but not read thoroughly or given meaningful feedback; and focusing testing on memorization of information rather than on application, problem solving, and analysis. In contrast, strategies that are well known to build a culture of academic integrity include creating high-level and relevant assignments that engage students; providing low-stakes opportunities for students to practice, take risks, possibly fail, and then recover; being transparent about expectations; and providing meaningful and formative feedback to help students succeed.
The online environment, it turns out, is similar to the real world in many ways: when students are lost, frustrated or disengaged, anxious, and under a great deal of pressure, they are much more likely to turn to inappropriate solutions. As this era of COVID has undoubtedly been trying for all, it is more important than ever to concentrate effort on what makes teaching most rewarding: designing engaging and meaningful experiences to support students in their honest efforts to learn.
Bailey, J. (2021, April 22). How bad was the pandemic for academic integrity? Plagiarism Today. https://www.plagiarismtoday.com/2021/04/22/how-bad-was-the-pandemic-for-academic-integrity/
 Lederman, D. (2020). Best way to stop cheating in online courses? ‘Teach better’. Inside Higher Ed. https://www.insidehighered.com/digital-learning/article/2020/07/22/technology-best-way-stop-online-cheating-no-experts-say-better
 Reedy, A., Pfitzner, D., Rook, L. et al. (2021). Responding to the COVID-19 emergency: student and academic staff perceptions of academic integrity in the transition to online exams at three Australian universities. International Journal for Educational Integrity 17(9). https://doi.org/10.1007/s40979-021-00075-9
Harwell, D. (2020, Nov. 12). Cheating-detection companies make millions during the pandemic. Now students are fighting back. The Washington Post. https://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/
 Swauger, S. (2020). Our bodies encoded: Algorithmic test proctoring in higher education. Hybrid Pedagogy. https://hybridpedagogy.org/our-bodies-encoded-algorithmic-test-proctoring-in-higher-education/
McCabe, D. L., & Trevino, L. K. (1997). Individual and contextual influences on academic dishonesty: A multicampus investigation. Research in Higher Education, 38, 379-396; McCabe, D. L., Trevino, L. K., & Butterfield, K. D. (2001). Cheating in academic institutions: A decade of research. Ethics & Behavior, 11(3), 219-232; Tittle, C. R., & Rowe, A. R. (1974). Fear and the student cheater. Change, 6(3), 47–48; Wideman (2008). Academic dishonesty in postsecondary education: A literature review. https://bit.ly/3pX8r2W
Beck, V. (2014). Testing a model to predict online cheating—Much ado about nothing. Active Learning in Higher Education 15 (1), 65–75; Harris, L., Harrison, D., McNally, D., & Ford, C. (2019). Academic Integrity in an Online Culture: Do McCabe’s Findings Hold True for Online, Adult Learners? Journal of Academic Ethics, 1-16; Owunwanne, D., Rustagi, N., & Dada, R. (2010). Students’ perceptions of cheating and plagiarism in higher institutions. Journal of College Teaching and Learning, 7(1), 59–68; McGee, P. (2013). Supporting academic honesty in online courses. Journal of Educators Online, 10(1). https://www.thejeo.com/archive/2013_10_1/mcgee.
Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports student learning. Learning and Teaching in Higher Education, (1), 3-31.
Allyson Skene is a teaching and learning specialist with the Centre for Teaching and Learning. In her role, she supports faculty and graduate students in the development of effective and engaging courses, both online and in the classroom.