Peer Assess Pro Blog

Analyse Early Teammate Peer Feedback to Deter Academic Dishonesty

Dr Peter Mellalieu asks: How do you improve the chance of detecting that a struggling student has used a contract assignment-writing service?

The #NZHerald front-page article ‘University cheating scandal exposed’ raises the challenge of how universities respond to several varieties of #academicdishonesty undertaken by struggling students. Investigative journalist Kurt Bayer reports that a whistleblower, working for China-based assignment writing service #AssignmentJoy, revealed the extent to which university students throughout New Zealand submitted assignments written by professional, paid assignment writers: #contractcheating (Bayer, 2022). Coincidentally, the Sydney Morning Herald reported that
‘Serious cheating by students at … [New South Wales’] two biggest universities was found at record levels last year as the tertiary sector confronts a major battle trying to clamp down on students paying to have assignments done for them.’ (Carroll & White, 2022)

I demonstrate here how the Swiss Cheese Model of accident causation (Reason 1990), adapted to mitigate student cheating, illustrates why it is attractive for struggling students to ‘game the system’ and risk suffering material penalties for their academic dishonesty. I propose that improved #gradebook analysis and #collaborativelearning teaching practices provide cost-effective, transparent, early-warning barriers to cheating, whilst also raising #studentengagement and success.

I concur with Dr Leon Fourie’s reported observation that ‘Suspected cheating … at times can be hard to detect and prove.’ In my tertiary teaching experience, the chance is remote that a student submitting an assignment written by a contract cheating service will be detected, investigated, substantiated and materially penalised. A commensurate penalty may see the cheating student expelled from their university programme, losing their student visa and being transported promptly back to their homeland. Just once over my 35 years’ university teaching have I observed the full progression of material penalties – some 5000 students ago.

Source: New Zealand Weekend Herald, October 29, 2022

Consequently, I have seen students cross the graduation stage to my amazement and that of my close colleagues. In one particular case, I was asked for a reference by a company operating in the outdoor adventure training industry. I recalled the student had failed my course because he did not attend and participate in the compulsory final capstone training exercise, where students were expected to prove their competence in effective risk and safety management. I recall my evasive reference: ‘There are safer candidates’. I later discovered the student had graduated ‘on appeal’ to the university, a process from which the student’s teacher had been excluded. The Academic Appeal: more on this topic later.

The technological fix?

According to journalist Kurt Bayer, universities claim they have appropriate academic policies, robust procedures, technological solutions and appropriate penalties to address the issue of contract cheating and academic dishonesty in general. Certainly, I’ve used #plagiarismchecking software, such as #turnitin, for over a decade. However, students and essay-writing services use the same software to ensure that any ‘red flag’ plagiarism, either accidental or deliberate, is rewritten before the assignment is submitted. Furthermore, plagiarism-checking software, by definition, cannot detect an assignment written uniquely ‘to order’ by a contract cheating service.

Detecting that an assignment was written by someone other than the student who claims authorship takes skill, time, and a police detective-like dedication to pursue and substantiate a potential case of cheating. The detective’s ethos is the opposite of that held by many of us called to the teaching profession: teachers seek co-production of learning with their students and success for the less privileged, the neuro-compromised and those from culturally unfamiliar contexts. It takes several faint whispers to prompt me to suspect and investigate an incident of cheating.

Specifically, a tutor paid minimum wage, subcontracted to mark one academic script every 15 minutes, is unlikely to realise that the style of writing, the frequent citations and the abundant list of academic references bear little resemblance to the same student’s prior poor academic performance or substandard classroom participation. Accordingly, it takes the skill of the course academic to notice these symptoms and investigate more thoroughly. However, academics teaching courses of a hundred, even a thousand, students are neither willing nor able to invest the time to interrogate every suspect assignment submitted.

Teachers, operating as the first line of defence against academic cheating, need simple, cost-effective solutions for identifying cheating that also add value to their teaching practice, student engagement and success.

‘I see nothing!’

Once a case of academic dishonesty, such as an outsourced assignment, is suspected, time is required for the teaching academic and academic administrators to undertake a fair, due process to investigate, substantiate and determine a penalty appropriate to the seriousness of the offence. These processes are long-established, well-defined, robust, fair and proven in practice. Although somewhat elaborate, they are necessary to defend the university against the increasing probability of a legal challenge by a student who receives an adverse outcome.

In my experience, these processes initially consume a few hours of the teacher’s ‘quality time’ they would prefer to devote to research and professional activities, plus 10 to 20 hours of educational administrators’ time. The path of least resistance is to declare ‘I see nothing!’ (Schultz, 1965): give the student a bare pass and promote them to the next course. Besides, the teacher faces managerial pressure to deliver corporate goals of ‘student success and retention’. If the student is expelled from the university programme, the institution loses substantial future fees and suffers the reputational damage illustrated by articles such as that published in the NZ Herald.

‘I did not understand’: Pascal’s Wager

The material consequences faced by a student for whom a case of contract cheating has been substantiated might include termination of academic enrolment, an incomplete or annulled qualification, loss of prospective resident migrant status and employment, and personal disgrace. These consequences motivate the student to appeal or even litigate the decision, perhaps through a judicial review. Normally, the student first draws upon the university’s own Academic Appeal process, supported by advocates from their student union.

A winning defence is the student’s invocation of Pascal’s Wager: ‘I didn’t understand … ’. That is, ‘I didn’t understand the assignment, the academic policies, the importance of maintaining academic integrity…’. There is invariably something a student failed to understand. They were inattentive or absent from class. They failed to ask their classmates or their teacher in a timely manner…

Given the high stakes, the student sees value in ‘lawyering up’ with a team of legal and/or public relations advisors. By this stage, senior academics, deans and even deputy vice-chancellors might be involved in the dispute. Now the university’s very expensive quality time is being consumed. A marginal pass grade, C- in New Zealand, is a compelling solution for making the problem go away… until next time. A marginal grade on a student’s academic transcript is a ‘tell’, to use poker parlance. Employers, beware!

The Swiss Cheese Model for mitigating student cheating

What I’ve described provides the basis for an adaptation of the Swiss Cheese Model of accident causation first proposed by Reason (1990). In essence,

‘The Swiss Cheese Model of accident causation illustrates that, although many layers of defence lie between hazards and accidents, there are flaws in each layer that, if aligned, can allow the accident to occur.’ (Wikipedia, 2022).

Extending the Swiss Cheese Model to prevent student cheating in universities, the defences, illustrated as cheese slices, include

  1. The definitions and scope of academic dishonesty
  2. Well-communicated policies defining the aim of academic integrity
  3. Training students in academic integrity
  4. Technology to assist in detecting plagiarism
  5. The process for investigating and substantiating suspected cheating
  6. The process for a student to appeal an adverse decision
  7. The administration of consequences for cheating
The Swiss Cheese Model for Mitigating Student Cheating (Author 2022. Inspired by MacKay 2020 from an original idea by Reason 1990)

From a struggling or ill-motivated student’s perspective, the chances are high that they’ll navigate through all the holes in the Swiss Cheese and succeed in their intention to graduate from their course. Conversely, the chances are remote that the university’s policies, processes, assessment graders, teachers or #plagiarismchecking software will deter, detect and successfully substantiate a student motivated to cheat. Despite all the defences, cheating is not prevented. The holes in the slices are rather large, and they readily align in favour of the motivated cheat. Universities need to reduce the size of the holes or add some slices of beef!

Is there an alternative? Introducing the Swiss Beefburger

In early 2021, during New Zealand’s mid-Covid times, I designed and taught a new-to-me, new-to-institution postgraduate course. The class met four-weekly in full-day sessions, with optional weekly video check-ins. I’ve learned from prior teaching experience that it’s crucial to obtain regular and early feedback from my students when I develop and introduce a new course. Soliciting early feedback was especially relevant as the course was a hybrid combining online and on-campus teaching, designed to be ‘Covid resilient’ should another pandemic health lockdown be required.

My hybrid design included a #flippedclassroom learning environment where students were assigned critical course readings to complete before each four-weekly contact workshop. The flipped classroom is one of several techniques found to raise student engagement and accountability (Fink 2003, 2013). Conducting a #formative Individual Readiness Assurance Test (iRAT) at each on-campus class helped me track each student’s academic preparation and progress from early in the course.

Team-Based Learning Process

In addition, before our first class meeting, each student was assigned to a virtual #collaborativelearning group, founded initially as a ‘band of buddies’. Later, each group was tasked to undertake a team #capstoneproject delivering an industry-based investigative report, a team presentation and, ultimately, an individual #reflectivepractice essay. This variety of assessments raises difficulties for students attempting to use a cheating service. To write a credible reflective essay, I argue the student must have been a regular, effective contributor to the team’s experience.

Crucially, though, at the start of the course, the collaborative groups were tasked to mutually support their members’ learning and individual assignment writing, the ‘band of buddies’. Furthermore, as part of the flipped class teaching, the collaborative groups each worked together to discuss and deliver one group-as-a-whole response to a repeat of the iRAT, the so-called Team Readiness Assurance Test (tRAT).

The flipped classroom, Individual Readiness Assurance Test and Team Readiness Assurance Test are several elements drawn from the #teambasedlearning, #tbl educational movement for #significantlearning advocated by Fink.

Early-warning symptoms

In my teaching, one critical early-warning feedback process I use incorporates formative #teammatepeerfeedback. In the figure summarising a team-based learning process, this peer feedback is presented as #peerreview or #peerevaluation, Step 5. I reserve an additional summative #teammatepeerassessment for the conclusion of the course. Specifically, this summative peer assessment rewards students with a teamwork grade proportional to their relative teamwork and leadership contributions.

It’s a foundational pillar that students gain experience and training in the peer review process. To assure fairness, students should be given the opportunity to understand how peer assessment will impact their final grades (Mellalieu 2022; Sprague et al., 2019). In addition, students should receive early feedback through an anonymised formative Personal Feedback Report. The report helps students learn how they might need to adjust their contribution to the team’s output and teamwork processes to achieve the grade they seek (Mellalieu, 2021, Step 6). Administering the survey, processing the responses and delivering the results of formative #teammatepeerfeedback is a straightforward, cost-effective component of a comprehensive digital platform for teammate peer assessment.

By the end of my course’s second contact session, the students in each collaborative team had anonymously peer-reviewed the contribution of their team members to their learning to date. In my course, one specific piece of evidence students used to support the formative peer assessment feedback included their teammates’ ability to contribute to the academic success of their Team Readiness Assurance Test, tRAT.

When I later came to grade students’ first (individual) assignment, I noticed Student A had presented an assignment with an impressive list of academic citations and references. However, Student A had written eloquently about a topic that bore only tangential relevance to the assignment specification and my in-class discussion of the assignment. Reviewing my gradebook records, I noticed the same student had earlier been rated ‘at risk’, a weak team contributor, according to the Active Warnings notified by the #teammatepeerassessment platform. Furthermore, Student A was amongst the lowest in the class on the formative Individual Readiness Assurance Test, iRAT.

This #triangulation of readily-gathered evidence made it a straightforward matter for me to promptly refer Student A for investigation as a postgraduate student at risk of failure, therefore requiring immediate teaching and learning assistance. At that point, I did not allege, nor even suspect that the assignment had been written by a contract assignment-writing service. Student A engaged regularly and productively with the institution’s student success/teaching and learning service and began to cope with her learning struggles.

Student B, in contrast, failed their first individual assignment. This early poor academic performance was forewarned by the at-risk status alerted through their group’s formative teammate peer feedback and by their low formative Individual Readiness Assurance Test (iRAT) results. Despite receiving an extension to resubmit the assignment, and ignoring the objective and subjective evidence urging improved performance, Student B eventually suspended their enrolment in the course to concentrate their efforts on success elsewhere. Student B, an obviously struggling student, appeared to resist the temptation to gain academic success through alternative means.

In the case of struggling students, like both Students A and B, early formative assessment confronts the student with stark evidence that their struggles are known to both the teacher and their teammates.
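
The triangulation applied in the cases of Students A and B can be sketched as a simple decision rule. The function and signal names below are hypothetical illustrations, not any platform’s actual API; the two-of-three threshold and the 25th-percentile cut-off are likewise assumptions chosen for the sketch:

```python
def refer_for_support(peer_at_risk, irat_percentile, assignment_anomalous):
    """Combine three early, independent signals into a referral decision.

    peer_at_risk: an active warning raised by formative teammate peer feedback
    irat_percentile: the student's class percentile on the formative iRAT (0-100)
    assignment_anomalous: the grader's judgement that a submission misses the
        assignment specification, or is out of character for the student

    Any two of the three signals triggers a referral to learning support --
    a prompt for teaching assistance, not an accusation of cheating.
    """
    signals = [peer_at_risk, irat_percentile < 25, assignment_anomalous]
    return sum(signals) >= 2

# Student A's profile: peer-rated at risk, bottom-quartile iRAT, off-topic essay.
refer_a = refer_for_support(True, 10, True)        # True: refer promptly
# A single weak signal alone does not trigger a referral.
borderline = refer_for_support(False, 20, False)   # False: monitor only
```

The point of requiring at least two independent signals is the one made above: no single whisper justifies action, but converging evidence does.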

Students value early courageous conversations with shirkers

Students are more willing to confront poor contributions to their team’s output early in the course, rather than at the course conclusion (Mentzer et al., 2017), especially when that notification can be made anonymously. Specifically, it is in the team’s interest to report early evidence of absent or malingering teammates’ poor team contribution to their teacher. Receiving early advice, the teacher can proactively engage in ‘courageous conversations’ with at-risk students in a timely manner. The at-risk student can undertake remedial action and avoid the temptation to cheat.

Incidentally, student teams are generally consistent in their rating of high, average, and poor contributions by teammates. This is more apparent when students have been trained in peer assessment (Dodd, 2019). In the Peer Assess Pro platform for teammate peer assessment, for example, advanced statistical tests (concordance) confirm the degree of intra-team agreement (Mellalieu, 2021).
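
Peer Assess Pro’s concordance tests are internal to the platform, but the underlying idea can be illustrated with a standard statistic, Kendall’s coefficient of concordance (W). The sketch below computes W from scratch under the simplifying assumption that every teammate supplies a complete ranking of every member’s contribution, with no tied ranks:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m raters ranking n items.

    rankings: a list of m lists, each a ranking (1 = strongest contributor)
        of the same n team members, with no ties.
    Returns W in [0, 1]; values near 1 indicate strong intra-team agreement
    about who contributed most and least.
    """
    m = len(rankings)      # number of raters (teammates giving feedback)
    n = len(rankings[0])   # number of items ranked (team members rated)
    # Total rank received by each team member across all raters.
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_rank_sum = m * (n + 1) / 2
    # Sum of squared deviations of rank sums from their mean.
    s = sum((rs - mean_rank_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three teammates rank four members' contributions almost identically:
agreement = kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [2, 1, 3, 4]])   # ~0.91
# Three teammates disagree widely:
discord = kendalls_w([[1, 2, 3, 4], [4, 3, 2, 1], [2, 4, 1, 3]])     # ~0.11
```

A high W gives the teacher confidence that a low-rated teammate reflects genuine team consensus rather than one rater’s grudge.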

Future improvements to teamwork behaviour and written assignments are both intended and very welcome. However, longitudinal, out-of-character mismatches between written assignment quality and substandard performance within collaborative learning projects should prompt teachers to investigate further the possibility of contract cheating. The astute student, tempted to cheat, will also realise they have been ‘red flagged’ for special observation.

In summary


  1. Universities deploy several defences against academic dishonesty, such as a student submitting an assignment written by another person (contract cheating). These defences include well-communicated policies defining the aim of academic integrity, the definitions and scope of academic dishonesty, the consequences of cheating, training in academic integrity, and the process for investigating and substantiating suspected cheating.
  2. Investigating and substantiating a case of cheating imposes a high cost in professional time on the academic who first suspects it, combined with the time required of the supporting actors (academic programme managers, deans) who become involved. The gain for the institution’s reputation as a credible academy can be marginal when compared with the cost of professional time and the loss of the student’s future tuition fees.
  3. #plagiarismchecking software is available for academics to detect copying. However, such software is easily defeated since students and contract cheating services have access to the same software and can rewrite flagged passages before submission. Furthermore, an assignment written by a contract cheating service is a unique, ‘made-to-order’ product that plagiarism checking, by definition, cannot detect.
  4. The more extreme penalties that might be applied for using a contract cheating service, such as cancellation of enrolment, are sparingly applied.
  5. The Swiss Cheese Model helps focus attention on why ‘Suspected cheating … at times can be hard to detect and prove’. While universities deploy several barriers against cheating, even in combination these barriers are likely to be ineffective in substantiating a claim of serious cheating.
  6. Two cost-effective defences ‘beef up’ the initial Swiss Cheese Model to create the Swiss Beefburger Model for Mitigating Student Cheating. These defences are early, formative #teammatepeerassessment and the Individual Readiness Assurance Test, iRAT. Together, the results of these early assessments provide baseline data for identifying unexpected outlier results that later arise from individual assignments produced through contract cheating.
  7. In compensation for the cost of the proposed additional defences, important educational benefits arise from introducing early formative teammate peer assessment, the flipped classroom and team-based (collaborative) learning. The benefits include higher student #engagement, fairer grading, more valid assessment, and the development of teamwork and leadership capabilities relevant to employers’ requirements for future-ready capabilities (Dodd & Mellalieu, 2022; Geertshuis & Lewis, 2020).
Swiss Beefburger Model for Mitigating Student Cheating (Author 2022. Inspired by MacKay 2020 after an original idea by Reason 1990)

Advice for good practice

In many institutions, learning content and assessment results are conveniently delivered to students and their teachers through a single portal, the Learning Management System #lms. Within each course, basic statistics can highlight to a teacher and their programme managers where the grades for individual written assessments are unexpected outliers when compared with the results of early formative teammate peer feedback and/or, ideally, Individual Readiness Assurance Tests, iRAT. We propose adapting the approach taken by Fernando & Mellalieu (2012), where students, early in their course, are provided with a predictive model of their end-of-course grades, which they can use to productively adjust their academic behaviour.
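
As a sketch of such an outlier check, assuming the LMS can export per-student scores (the gradebook columns, sample numbers and the 1.5-standard-deviation threshold below are invented for illustration), one might standardise each student’s written-assignment result against a baseline built from the early formative assessments:

```python
from statistics import mean, stdev

def zscores(values):
    """Standardise a list of scores to z-scores within the class."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

def flag_outliers(students, baseline, assignment, gap=1.5):
    """Flag students whose written-assignment result sits far above their
    early formative baseline (e.g. a blend of iRAT and peer-feedback scores).

    Returns the names whose assignment z-score exceeds their baseline
    z-score by more than `gap` standard deviations -- candidates for a
    closer look by the course academic, not an accusation of cheating.
    """
    zb, za = zscores(baseline), zscores(assignment)
    return [s for s, b, a in zip(students, zb, za) if a - b > gap]

# Hypothetical gradebook extract: Student D scores far above their baseline.
students   = ["A", "B", "C", "D", "E"]
baseline   = [72, 65, 80, 40, 75]   # early iRAT + formative peer feedback
assignment = [70, 60, 85, 88, 74]   # first written assignment
flagged = flag_outliers(students, baseline, assignment)   # ["D"]
```

Run over every course each marking period, a check like this surfaces exactly the longitudinal mismatch described above without adding marking workload.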

A few minutes’ investigation of a Personal Feedback Report should provide confirming evidence that a team member’s contributions were viewed by teammates as late or unforthcoming, unprofessional or incoherent, or plagiarised.

Triangulating this evidence gives the teacher and/or academic programme manager the confidence to confront, early and proactively, a struggling student who might later be tempted towards contract cheating. Instead, the teacher can nudge the struggling student towards the university’s student success, teaching and learning services. Should the student continue their academic disengagement (failure to attend, failure to contribute, cheating), the evidence trail has been established, the student has been confronted, and there can be no defence of ‘I did not understand’.

Join the conversation

Discuss this article on LinkedIn

Frequently Asked Questions (FAQs)

  1. How can struggling students be identified early through active warnings presented by a digital platform for teammate peer assessment?
  2. How can symptoms of cheating, such as outlier assessment results, be identified through educational analytics?
  3. What are best practices, principles and policies for the conduct of fair and valid collaborative learning and assessment?

Contact: Mobile +64 21420118 (Auckland, GMT+12)

About the author

Dr Peter Mellalieu, BTech(Hons), MPubPol, PhD (Operations Research and Information Systems) has taught risk management, quality management, operations management, strategy, innovation management and design thinking at several universities in New Zealand, the United States and Botswana. He has a long-standing interest in academic quality management, drawing from his foundation membership of the New Zealand Organisation for Quality Assurance (now NZOQ) in 1978. As an academic programme leader and industrial professor over several decades, he has successfully substantiated claims of academic dishonesty against an admittedly infinitesimally small number of students.


Dr Mellalieu’s research and teaching applying collaborative learning led to his development of the Peer Assess Pro™ platform for teammate peer assessment and feedback. Peter is Chief Technology Officer at Peer Assess Pro™.


I acknowledge the feedback from Phillip Dawson, author of Defending Assessment Security in a Digital World, who provided valuable ‘further reading’.

Further reading

Swiss Cheese Model. (2022). In Wikipedia.

Team-Based Learning Process, TBL. (n.d.). LAMS Team-Based Learning. Retrieved 31 October 2022.

Bayer, K. (2022, October 29). Cheating scandal: NZ uni students paying Chinese company’s ghostwriters for papers. NZ Herald. Online edition.

Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., Saddiqui, S., & van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837–1856.

Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., van Haeringen, K., Saddiqui, S., & Rozenberg, P. (2019). Contract cheating and assessment design: Exploring the relationship. Assessment & Evaluation in Higher Education, 44(5), 676–691.

Carroll, L., & White, D. (2022, November 6). University students caught paying others to do their work at record levels. The Sydney Morning Herald.

Dawson, P. (2020). Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education. Routledge.

Dodd, P., & Mellalieu, P. J. (2022, August 18). Innovative team collaboration for fair, effective teamwork, Peer Assess Pro Academy; Conversations in Learning Innovations.

Ellis, C., van Haeringen, K., Harper, R., Bretag, T., Zucker, I., McBride, S., Rozenberg, P., Newton, P., & Saddiqui, S. (2020). Does authentic assessment assure academic integrity? Evidence from contract cheating data. Higher Education Research & Development, 39(3), 454–469.

Fink, L. D. (2003). A Self-Directed Guide to Designing Courses for Significant Learning. Instructional Development Program, University of Oklahoma.

Fink, L. D. (2013). Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses (Kindle; 2nd edition). Jossey-Bass.

Fernando, D. A. K., & Mellalieu, P. J. (2012). Effectiveness of an evidence-based predictive model for motivating success in freshmen engineering students [Paper 95]. The Profession of Engineering Education: Advancing Teaching, Research and Careers, 23, 1037–1045.

Geertshuis, S., & Lewis, N. (2020). Future Ready Graduates: Embedding Employability in the Curriculum: Strategies for the Development of Future-Ready Employability Attributes Within Advanced and Research Informed Programmes. Ako Aotearoa National Centre for Tertiary Teaching Excellence.

MacKay, I. M. (2020). The Swiss Cheese Model Applied to Covid19 Prevention.

McCabe, D. L., & Trevino, L. K. (1993). Academic Dishonesty: Honor Codes and Other Contextual Influences. The Journal of Higher Education, 64(5), 522–538.

Mellalieu, P. J. (2022). Future-Ready Capability through Collaborative Learning: Principles, Policies and Practice for Fair, Valid and Effective Assessment. Peer Assess Pro.

Mellalieu, P. J. (2021, April 28). Is peer assessment valid for determining individual grades in group work? Better Feedback. Better Teams.

Mellalieu, P. J. (2020, November 5). How to teach using group assignments: The 7 step formula for fair and effective team assessment [Online]. Peer Assess Pro.

Mentzer, N., Laux, D., Zissimopoulos, A., & Richards, K. A. R. (2017). Peer Evaluation of Team Member Effectiveness as a Formative Educational Intervention. Journal of Technology Education, 28(2).

Reason, J. (1990). The contribution of latent human failures to the breakdown of complex systems. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 327(1241), 475–484.

Rundle, K., Curtis, G., & Clare, J. (2020). Why students choose not to cheat. In Frontiers in Psychology (pp. 100–111).

Sprague, M., Wilson, K. F., & McKenzie, K. S. (2019). Evaluating the quality of peer and self evaluations as measures of student contributions to group projects. Higher Education Research & Development, 38(5), 1061–1074.
