

Step 7 – Improve the next cycle of your students’ group assignments



Improve the next cycle

Examine the feedback, charts, data and analytics resulting from your peer assessment to guide improvements to the design of future group assignments and your next round of teammate peer assessment.


  • Identify critical areas of feedback provided to you by your students. Refine your assignment specifications and the teaching and learning process.
  • Identify training to address relatively weak teamwork capabilities in your academic programme and/or related studies.
  • Identify training opportunities to improve the validity and fairness of future peer assessments.
  • Identify opportunities to overcome self-enhancement bias and develop students’ Exceptionally Realistic Self-Image (ERSI).
  • Identify the reasons for persistent at-risk and failed students. Establish steps for proactive observation and intervention in their future studies.
  • Advance the extent of innovation and authentic learning in your future group assignments.

Feedback from students

Refine your assignment specifications and your teaching and learning process based on feedback and cues from students. Possible areas for refinement students might suggest include

  • Initiate the group assignment earlier in your academic programme
  • Conduct formative peer assessment earlier in the academic programme, so you can more promptly identify and intervene with at-risk students and dysfunctional teams
  • Shorten the availability period of the peer assessment survey. Publish feedback results promptly so that courageous conversations amongst students have a longer period to impact positively on delivered outputs and teamwork processes
  • Coach the students to conduct effective team meetings especially when conducted using virtual meeting technologies.
  • Alert students to proactively solicit contributions from less vocal, less communicative or timid teammates
  • Encourage students to review regularly their effectiveness against their team charter.
  • Encourage students to be proactive in addressing issues of dysfunctional team or individual behaviour according to your academic policies for group assignments and peer assessment.
  • Improve the fairness of rewards for teamwork contribution by adjusting the personal result method and/or the scale factor (weight) the peer assessment carries when students’ personal results are determined from the team result.

Future priorities for teamwork capability development

TIP! Compare the statistics and patterns in charts resulting from your team assessment with examples of good practice. Anticipate issues of ineffective peer assessment practice by improving your assignment design, peer assessment training, and support for courageous conversations within teams following your formative feedback.

Raise the overall performance of your next class by focussing on specific training interventions to improve the teamwork capabilities identified as weaker than most. For example, the bar chart of average class capabilities in Figure 9.1 suggests improving your students’ capabilities related to the factors of Chairmanship and Encouraging the contribution of others, and, to a lesser extent, Listening to and welcoming the contributions of others.

For your next class delivery, you could coach these top priority items in the early weeks of your student teams meeting together, rather than waiting until after you receive the results of your first peer assessment. Whilst you could deliver the skills development yourself, there are alternatives. Your institution’s teaching and learning centre may offer a specialist consultant in group assignments and group processes. Alternatively, you could commission the students who are strongest in the capability to make a brief presentation about the way they operate. The latter option is consistent with the action learning approach, where the class learns new skills and knowledge from its own members.

Finally, if your institution is pursuing a teamwork across the curriculum teaching and learning strategy, you now have evidence for these weaker capabilities to be developed in academic programmes pre-requisite to your own. Advocate through your department’s usual channels for these programme adjustments.

Figure 9.1 Identify opportunities to improve class teamwork capabilities

Improve validity through fairness, honesty and accuracy

One fundamental principle of peer assessment and feedback is that students learn to rate each other honestly, accurately and fairly – Pillar 3. When one or more of these features is absent, the assessment lacks validity.

There is a mythical land, Lake Wobegon, “where all the women are strong, all the men are good looking, and all the children are above average.” (‘Lake Wobegon: The Lake Wobegon Effect’, 2017). To what extent have your students imbibed from the waters of Lake Wobegon, thereby rating the majority of their team members above average?

Identify the extent to which the Lake Wobegon phenomenon applies to your class by examining a histogram showing the frequency distribution of peer assessed scores, Figure 9.2.

You should aspire towards a distribution of peer assessed scores with a mean of 50, and a bell-curved symmetrical distribution either side, the ‘Ideal’ green bars shown in the histogram.

If you observe the class mean well above 50, that is a symptom of systematically over-generous peer assessment ratings by your class. For example, the orange bars show a class average peer assessed score of around 70. This class presents with a marked Lake Wobegon Effect, but a reasonably normal, bell-shaped distribution of results above and below the average.

In contrast, the red bars in Figure 9.2 illustrate a peer assessment with 80 out of the 149 peer assessment ratings in the upper range of 90 to 100. The average peer assessed score is 86, and the shape of the distribution is markedly skewed. This example is symptomatic of an ‘Invalid’ peer assessment survey.

Figure 9.2 The need for more accurate training in rating to reduce the Lake Wobegon Effect and other invalid surveys
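The screening described above can be sketched as a small script. The following is a minimal Python sketch, assuming peer assessed scores on a zero to 100 scale; the thresholds (a class mean above 60, more than half the ratings in the 90 to 100 bin) are illustrative choices, not rules prescribed by Peer Assess Pro:

```python
from statistics import mean, pstdev

def diagnose_class(scores, ideal_mean=50.0):
    """Flag symptoms of the Lake Wobegon Effect in peer assessed scores (0-100)."""
    m = mean(scores)
    s = pstdev(scores)
    # Share of ratings in the top bin (90-100): a crude indicator of skew
    top_bin_share = sum(1 for x in scores if x >= 90) / len(scores)
    flags = []
    if m > 60:
        flags.append("over-generous ratings (class mean well above 50)")
    if top_bin_share > 0.5:
        flags.append("markedly skewed distribution; survey likely invalid")
    return {"mean": round(m, 1), "stdev": round(s, 1),
            "top_bin_share": round(top_bin_share, 2), "flags": flags}
```

For a class with scores clustered symmetrically around 50, `diagnose_class` returns an empty list of flags; for the red-bar pattern described above, both flags are raised.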

Extra for experts

You expect a mid-point of 50 based on the assumption that

  • The peer assessment rubric is similar to either of those presented in Gallery 3.1. In each of these rubrics, the average teammate should be rated midway along the BARS or Likert scales
  • The peer assessed score is scaled to range from zero to 100, as illustrated in the example calculations shown in Table 3.4 and Table 3.5. Thus, the mean value for the class will be 50/100
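The scaling assumption in the second bullet amounts to a one-line linear rescaling. A sketch, assuming a 1 to 5 Likert rubric; adjust `low` and `high` for other scales:

```python
def scale_to_100(raw, low=1, high=5):
    """Linearly rescale an average rubric rating (default 1-5 Likert) to 0-100.

    The scale midpoint (3 on a 1-5 scale) maps to 50, so a class of
    'average' teammates yields a mean peer assessed score of 50/100.
    """
    return (raw - low) / (high - low) * 100
```

For example, `scale_to_100(3)` returns 50.0 and `scale_to_100(5)` returns 100.0.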

Quality of typical team peer assessments

Another indicator that some of your class teams are failing to assess fairly and accurately was discussed in relation to Figure 7.1, the first, formative peer assessment conducted for a class of 848 students. Re-presenting the data from Figure 9.2 to include the range of peer assessed scores within each team, Figure 9.3, note that not one team in this class has an average peer assessed score below 50! That is, most of the teams exhibit the Lake Wobegon Effect to a marked extent. For example, the members of Team Charley give rise to an average peer assessed score of 99, with a range of just 6 peer assessed score units from highest to lowest within the team.

Specifically, 10 of 28 teams, more than one-third, have submitted especially low quality team ratings. The average peer assessed score across all members of these teams is above 90/100, and the range of peer assessed scores within each of these teams is less than 10 peer assessed score units. Furthermore, the median of the (average) peer assessed scores across all teams is 80, well above the ideal value of 50. This class certainly requires more effective peer assessment training to forestall recurrences of these extreme results.
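The screening rule just described (a team average above 90 combined with a within-team range under 10) is easily automated. A Python sketch; the thresholds mirror the example in the text but are parameters you could tune:

```python
def low_quality_teams(team_scores, avg_threshold=90, range_threshold=10):
    """Flag teams whose ratings suggest everyone rated everyone near the top.

    team_scores maps a team name to the list of peer assessed scores (0-100)
    of its members. A team is flagged when its average exceeds avg_threshold
    AND its highest-to-lowest spread is below range_threshold.
    """
    flagged = {}
    for team, scores in team_scores.items():
        avg = sum(scores) / len(scores)
        spread = max(scores) - min(scores)
        if avg > avg_threshold and spread < range_threshold:
            flagged[team] = {"average": round(avg, 1), "range": spread}
    return flagged
```

A team like Charley (average 99, range 6) is flagged; a team with a realistic spread of scores is not.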

Minimizing the prospect of the Lake Wobegon Effect

Should you observe the Lake Wobegon effect in your class results, next time you use teammate peer assessment, allocate more time to training your students how to rate their teammates more accurately. For example, use the exercises presented in Step 3 – Train your students, Chapter 5.

Consider penalising the grades of teams that fail to rate fairly, honestly and accurately, according to Academic Policy 10, such as those with the most extreme low quality team ratings illustrated in Figure 9.3.

Figure 9.3 Identifying the extent of low quality team peer assessments

Another tactic for persuasion is to select a personal result method such as the Normalized Personal Result (NPR) or Rank-Based Personal Result (RPR). By mathematical definition, these methods award the same team result to all students when they all rate at the extreme end of the scale. Alternatively, select a Standardized Peer Assessed Score (SPAS) that translates each team’s peer assessed scores to achieve a mean of 50/100. Either of these tactics might help students realise that peer assessing their team members at 100 (or over a narrow range) does not lead to a personal result of 100, but rather less. Plus, their abdication of responsibility to rate honestly is clearly visible to the teacher.
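One plausible reading of the SPAS translation is a simple mean shift of each team's scores. A sketch under that assumption; the actual platform calculation may differ:

```python
def standardize_team(scores, target_mean=50):
    """Shift a team's peer assessed scores so the team mean becomes target_mean.

    Relative differences within the team are preserved; only the level moves.
    """
    shift = target_mean - sum(scores) / len(scores)
    return [s + shift for s in scores]
```

Note the persuasive consequence: a team whose members all rate each other 100 ends up with every member at exactly 50, removing any advantage from inflated ratings, while genuine within-team differences survive the shift.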

Develop students’ realistic self-image

A related issue to the Lake Wobegon Effect is self-enhancement bias, the extent to which students rate themselves superior to the peer assessment rating awarded by their teammates. Self-enhancement bias is the phenomenon whereby humans rate themselves more highly than others would rate them, or than is measured by objective tests (Loughnan et al., 2011). According to Sedikides and Strube (1995), self-enhancement is a type of motivation “that refers to people’s desire to enhance the positivity or decrease the negativity of their self-concept”. For example, most people tend to think they are better drivers than the average driver. Students, in particular, rate their leadership capabilities well above the average.

I find from my experience with teammate peer assessment that the histogram of self-assessed scores is shifted about 10 points higher than the class distribution of peer assessed scores, as illustrated by the orange bars in Figure 9.4. Consequently, consider discussing with your students the benefits for their employability of developing an Exceptionally Realistic Self-Image (ERSI). An Exceptionally Realistic Self-Image means that a person knows their strengths and weaknesses, and perceives these features in a way that matches how others would perceive them.

In teammate peer assessment a high ERSI is calculable from both

  • The degree to which a peer assessed score matches a student’s self-assessed score, and
  • The extent of the match amongst the ratings of the component factors measured by the survey instrument

Figure 9.4 Self-enhancement bias in self and peer assessments

Developing students’ Exceptionally Realistic Self-Image

Quinn, Bright, Faerman et al. (2015) in the chapter Understanding Self and Others introduce concisely the notions of personality, and the importance of understanding yourself before you can attempt to understand others. The Johari Window, Gallery 9.1, is a useful tool for increasing students’ self-awareness through

  • Sharing knowledge that is known to you but hidden from others. Increasing the size of the OPEN quadrant and reducing the HIDDEN quadrant.
  • Seeking to learn knowledge that is known to others but to which you are BLIND. Reducing the size of the BLIND quadrant.

Gallery 9.1 Know thyself as others see you – The Johari Window

For students, the benefits of developing a high ERSI include

  • Having a good sense of who you are enables you to build upon your strengths and correct your weaknesses. In turn, that can make you more successful at work and in your personal life
  • You are able to understand, predict and cope with others more effectively
  • You can better distinguish valid and invalid inputs
  • You are more likely to select (and achieve) realistic personal goals.

Factors that detract from having an Exceptionally Realistic Self-Image

  • The way you were brought up, such as the inaccurate comments (good or bad) you heard from family, friends and teammates that you internalized.
  • The messages you send yourself through negative self-talk. (“No wonder I didn’t pass that test. I always mess up in tests!”)
  • The way you perceive yourself. (“Why doesn’t my team leader do it my way? I’m so much smarter than she is!”) (Adapted from ‘ERSI: Exceptionally Realistic Self-Image’, 2012).

How is a high Exceptionally Realistic Self-Image developed?
A three-step programme to develop an Exceptionally Realistic Self-Image includes

  • Make a commitment. Decide you really want to know yourself better and that you are willing to pay the price (in time, effort, or temporary unease) that may be required
  • Learn to recognize and to reduce your defenses against the cues from reality. In other words, learn to see yourself as you really are
  • Receive and review those cues to assess how your current self-image and your authentic self may be incongruent or inconsistent (‘ERSI – Exceptionally Realistic Self-Image’, 2012)

Other indicators of realistic self assessment
Note that the self-enhancement effect is a subtly different phenomenon from the issue of extreme outlier self-assessments – overconfidence and under-confidence – addressed in Step 5 – Manage the peer assessment, Chapter 7.

In my experience, the class as a whole will present with a self-enhancement effect of about ten peer assessment units, but some students will be beyond this typical measure. Gallery 9.2 illustrates two graphical methods to explore this phenomenon using data from the same class presented in Figure 9.3. A scattergram simply plots the peer assessed score of each student against their self-assessment. Overconfident self-assessments are colour coded in red, whilst underconfident assessments are colour coded in blue. The histogram presents the same data showing the count for each bin of IRSA, the percentage ratio of self-assessed score to peer assessed score. The histogram shows that most students in this class evidenced a reasonably realistic self-assessment that matched their peer assessed score, with an IRSA index between 80 and 120, colour coded in green.
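The IRSA classification above can be sketched directly. Assuming IRSA is the self-assessed score expressed as a percentage of the peer assessed score, with the 80 to 120 band treated as realistic, as in the histogram:

```python
def irsa(self_score, peer_score):
    """IRSA: self-assessed score as a percentage of the peer assessed score."""
    return 100 * self_score / peer_score

def classify_self_image(self_score, peer_score, band=(80, 120)):
    """Label a student's self-assessment relative to the realistic band."""
    ratio = irsa(self_score, peer_score)
    if ratio > band[1]:
        return "overconfident"   # red on the scattergram
    if ratio < band[0]:
        return "underconfident"  # blue on the scattergram
    return "realistic"           # green on the histogram
```

For example, a self-assessment of 90 against a peer assessed score of 60 gives an IRSA of 150, classified as overconfident.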

To improve students’ measures of realistic self-assessment, recall the training exercises in Step 3 – Train your students, Chapter 5.

Gallery 9.2 Indicators of realistic self-image

Persistent at-risk students

Identify the students who failed your course from a tabular sort or histogram of the class final personal results. Typically, programme directors seek to know the reasons for these students’ failure, and what might be necessary to improve their future success.

From the peer assessment personal reports for the most at risk students you can report

  • The students who failed to improve their peer-assessed performance following a low formative peer assessment
  • The guidance teammates have suggested would improve students’ performance in future group assignments
  • Students who had an unrealistic, overconfident assessment of their contribution
  • Students who failed because their team as a whole failed the group assignment

These at-risk students can be flagged for special attention by their teachers in future academic programmes.

Based on the earlier sections in this chapter, you can report your proposals for improving the learning outcomes arising from your teaching and learning process for future students in your class.

You should also advise what teamwork capabilities could be developed in programmes that precede and follow your own, based on your analysis of the weakest capabilities you observed through peer assessment. You might also advocate for teammate peer assessment in other courses, in accordance with a teamwork across the curriculum (TAC) institutional teaching and learning strategy.

Review peer assessment tactics for innovation

Three tactics were suggested for introducing peer assessment into your teaching in Step 1 – Prepare the group assignments, Chapter 3.

Tactic 1 – Immediate shallow-entry formative
Tactic 2 – Incremental summative
Tactic 3 – Innovative redesign

Consider advancing beyond the tactic you selected for your current group assignment, drawing on these suggestions.

Beyond a shallow entry formative feedback assessment

Did you adopt the shallow entry approach of introducing formative teammate peer feedback using your existing assignment (Tactic 1)? For your next group assignment, consider introducing at least two teammate peer assessment events.

  • Formative peer assessment and feedback within two to three weeks of the students being assigned the group assignment
  • Summative peer assessment and feedback following your teams’ submission of the final delivered outputs.

By extending the use of teammate peer assessment to determine students’ personal results, you begin the journey towards addressing Pillar 1 – ‘Awarding all teammates the same grade is not valid, fair, nor motivating for students‘ and Pillar 2 – ‘Freeloading in group assignments is less likely if students’ contributions will determine their grades’.

Plan to conduct more extensive training of your students in the practice of peer assessment (Step 3 – Train your students). In your group assignment specification, make explicit the Academic Policies that will apply. Schedule pre-emptive training in the weakest teamwork capabilities identified from your recent teammate peer assessments.

Peer assessment over an extended period

If the group project extends over more than ten weeks, consider a mid-point summative peer assessment and feedback session drawing on interim team deliverables such as a project plan or draft executive summary. A mid-point summative event complements an early (week two or week three) formative peer feedback event and your final, summative peer assessment.

Reflective learning

If the group assignment accounts for a substantial portion of the grade for the academic programme, consider requiring students to complete a personal reflective journal and/or final reflective essay. In their reflective essay, students make critical, evaluative reference to some of the key issues and challenges they experienced during the course. Students can also provide in appendices their personal feedback reports and action plans as evidence supporting the personal and professional development they achieved, and their future intentions.

Innovation, and beyond!

Consider extending the innovation in your teaching practice by redesigning your assignment to include the principles and practices of authentic learning, problem-based learning, project-based learning, team-based learning (TBL), action learning, agile, or hybrids.

Courses that adopt agile learning typically include many peer assessments, perhaps every two weeks over an entire year. Team arrangements are formed initially by the teacher. Small group assignments are conducted for the first few weeks of the academic programme, similar to the approach taken in team-based learning.

Following a few peer assessment events, students self-select into new team memberships based around the industry-based projects available for pursuit. As part of the recruitment and on-boarding processes for each team, potential teammates can present their peer feedback reports in evidence of the strengths they would bring to the team, and the teamwork capabilities they wish to develop.

The class as a collaborative enterprise

In some jurisdictions, there is an argument that postgraduate students’ grades should be constrained to include only a small proportion of work arising from group work. Postgraduates are expected to demonstrate they can deliver ‘masterworks’ of magnificence through their sole efforts. Paradoxically, the curricula for applied postgraduate programmes in management, engineering and design often state explicitly that the graduate will demonstrate achievement of teamwork and leadership capabilities. How can these disparate expectations be resolved practicably?

I faced such a paradox in my teaching of an applied master’s degree in business innovation and entrepreneurship. My solution is informed by Bruffee’s collaborative learning. There is no group assignment in the class. Instead, there is a rather elaborately-assessed individual assignment with collaborative elements.

  • Each student has a single, significant individual masterwork assignment they are responsible for delivering. The delivered outputs include a new venture Dragons’ Den-style pitch and a documented business plan.
  • Each student offers and promotes their talents, skills and time to other students in the class, to help them produce their masterwork
  • Each student draws upon the skills of their classmates to produce their masterwork
  • Each student uses teammate peer assessment across the entire class based on their perception of how each classmate contributed towards their individual assignment, and their overall learning during the class.
  • The teacher calculates a class grade from the average of the grades awarded by the teacher to each student’s masterwork
  • Each student receives two grades: (1) the teacher’s grade for their individual masterwork; (2) a personal result calculated from their peer assessed score combined with the class grade. The class grade is a proxy for the ‘team result’ typically used in a peer assessment platform.
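The grading scheme in the final two bullets can be sketched as follows, assuming an NPR-style calculation in which the class grade plays the role of the team result; the exact formula used by a peer assessment platform may differ:

```python
def personal_result(class_grade, peer_score, class_mean_peer_score):
    """NPR-style personal result: scale the class grade (the proxy 'team
    result') by the student's peer assessed score relative to the class mean.

    A student rated exactly at the class mean receives the class grade;
    above-mean contributors earn more, below-mean contributors less.
    """
    return class_grade * peer_score / class_mean_peer_score
```

For example, with a class grade of 70 and a class mean peer assessed score of 50, a student peer assessed at 50 receives 70, whilst a student peer assessed at 60 receives 84.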

The foregoing process is highly motivating and engaging for students. Students play to their strengths. They learn how to work with others and draw upon others’ capabilities (in a simulation of the ‘gig’ economy). Each student is responsible ultimately for the creation and delivery of their own masterwork. However, their total grade is determined not just by how effectively they work on their own masterwork, but also by the standard of achievement of the entire class. Each student takes pride in seeing the achievements of their classmates. The result is a win-win outcome.


As part of the specification of the ‘significant individual masterwork assignment’ you can include a reflective essay as another delivered output. Since a reflective essay is usually submitted after all other class work tasks have been completed, the grade from this component would be excluded from the calculation of the average grade achieved by the class or learning set.

Extra for experts

For larger classes, the teacher should arrange the class into large action learning sets of at least seven students. Substitute the average class grade mentioned earlier with the average grade per learning set awarded by the teacher to each of the set members’ masterworks.

How Peer Assess Pro helps

Table 9.1 How Peer Assess Pro helps improve future group assignments and assessment

Class strengths and weaknesses – Identifies the weakest teamwork capability factors that could be addressed by just-in-time coaching.
Self-enhancement bias – Identifies the extent to which self-enhancement bias and the Lake Wobegon Effect pervade class-level peer assessments.
Unacceptable ratings – Identification of invalid and unacceptable rating behaviour by teams and individuals.
Teacher feedback – Students may provide the teacher with anonymous feedback about concerns or suggestions about the conduct of peer assessment, group work, or the class generally.
Persistent at-risk students – The teacher can quickly investigate the feedback reports for an individual team and its teammates to review the peer assessed scores given and received, and to confirm the qualitative feedback corroborates the peer assessed ratings.
Gradebook and dataset download – A comprehensive set of download options enables convenient import of results to a gradebook system and bespoke education analytics.
Survey history log – A permanent track-and-trace log of actions taken by the platform, and notifications communicated to students and teacher. Convenient recall of full reports for high-performing and at-risk students.
© Peer Assess Pro Ltd

