Step 1 – Prepare the group assignment as an authentic learning experience
See how we create better teams through better feedback
- Introduction
- Why group assignments?
- Why peer feedback?
- STEP 1 — Prepare the team assignment
- STEP 2 — Build your teams
- STEP 3 — Train your students
Prepare the group assignment
Before we introduce our class to their group assignment, we create an engaging assignment that meets academic learning outcomes and develops teamwork capabilities valued by employers. An authentic assignment that addresses real-world issues, problems, or applications will mirror the complexities and ambiguities of professional life.
Checklist
- Design your assignment to demand the highest-level course and programme outcomes appropriate for your class.
- Frame the team’s deliverables as outputs likely to be valued by a client, the students’ profession, industry or field of employment.
- Design to require a collaborative effort rather than ‘divide and conquer’ tactics.
- Incorporate relevant academic policies into your assignment specification.
- Ensure the assignment description emphasises that teammate peer assessment will be used to reward extra contribution whilst penalising freeloaders and counter-productive team behaviour.
- Select the teammate peer assessment rubric that teammates will use to assess each other’s contribution.
- Explore how a digital peer assessment platform facilitates the conduct of a peer assessment survey.
Tactics for designing the group assignment
You can improve the conduct of your existing group assignments simply by introducing formative teammate peer feedback during the assignment. At the other extreme, you can dive in at the deep end by courageously redesigning your assignment whilst introducing summative peer assessment. We suggest three broad, alternative tactics for ‘getting started’ with teammate peer assessment and feedback, presented in Table 3.1. We discuss the advantages of each tactic later.
Table 3.1 Tactics for introducing teammate peer assessment and feedback
| Tactic | Description |
|---|---|
| Tactic 1 – Immediate shallow-entry formative | Introduce formative teammate peer feedback using a group assignment that has already been designed and/or is currently being undertaken by students. |
| Tactic 2 – Incremental summative | Extend an existing group assignment by incorporating formative teammate peer feedback and summative peer assessment with feedback. |
| Tactic 3 – Innovative redesign | Create a novel group assignment focussed on realizing the full benefits of group assignments and teammate peer feedback and assessment. |

© Peer Assess Pro
TIP! Consider introducing formative teammate peer assessment and feedback with your group assignments currently in progress. Early introduction will help you to identify and intervene proactively with at-risk teams and individuals.
TACTIC 1 – Immediate shallow entry
Introduce formative teammate peer feedback using a group assignment that has already been designed and/or is currently being undertaken by students.
The immediate, shallow entry tactic can be applied even if you have already designed your group assignment and begun teaching your class. The tactic gives you the opportunity to experience the benefits of using teammate peer feedback and use of a modern digital peer feedback platform.
Simply introduce teammate peer feedback in conjunction with your existing group assignment. Make a small addition to your teaching plan. Specifically, add a formative peer assessment early in the group work, perhaps by week three or just before the mid-point. Next, distribute the collated survey responses back to each teammate. Now, encourage and support each team to undertake the courageous conversations they need to confront teammates whose behaviour may be unproductive. Simultaneously, students can offer positive recognition to their more constructive teammates and leaders. The detailed processes for managing your students’ courageous conversations through a feedback event are presented in Step 6 – Courageous conversations.
Given this timely formative feedback, your teams will be nudged towards improving their processes of working together, with consequent improvement to the quality of their delivered outputs. More importantly, you will receive early warning of at-risk students and teams. You can address such issues by adjusting the composition of teams or by providing other appropriate counselling.
To support this shallow entry tactic, you may wish to announce and adopt Academic Policy 5 – Three strikes and you’re out, and Academic Policy 6 – The power of one, introduced as academic policies in Chapter 2.
TACTIC 2 – Incremental summative
Extend your existing group assignment by incorporating both formative teammate peer feedback and summative peer assessment.
Deploying Tactic 1 gains you the limited benefits arising from the process of formative peer feedback: you can address manifestly dysfunctional and at-risk behaviour identified by your students earlier in the academic programme. However, you and your students gain the additional benefits of improved fairness and minimized freeloading when you incorporate summative peer assessment into your assessment regime. Specifically, you determine each student’s final result in some way proportional to their contribution to the team’s outputs, combined with the result you award to the team as a whole for its delivered outputs.
Practically, you deploy formative peer feedback early in your teams’ work (as per Tactic 1), with summative peer assessment conducted at the conclusion of the group assignment. As with Tactic 1, you need not adjust your group assignment specification apart from announcing that the academic policies will apply. Students will want to know the mechanism of peer assessment, and how the process impacts their academic grades. These matters are detailed later in this chapter. In summary, implementing Tactic 2 progresses your implementation of Pillar 1 and Pillar 2.
TACTIC 3 – Innovative
Design a new group assignment focussed on realizing the full benefits of group assignments and teammate peer feedback and assessment.
There are several modern, well-defined teaching approaches that focus on delivering the benefits of group assignments mentioned in Why group assignments?, Chapter 1, in addition to those realized by Tactic 2. These benefits include delivered outputs produced to a higher academic quality; students feeling a greater sense of engagement and satisfaction with the results achieved; and students developing professional teamwork capabilities valued by employers. The teaching approaches include authentic learning, project-based learning (PBL), team-based learning (TBL), and action learning.
Authentic learning
Authentic learning refers to a wide variety of educational and instructional techniques focused on connecting what students are taught at university to real-world issues, problems, and applications. The principles of authentic learning underpin several of the teaching approaches discussed later. The basic idea is that students are more likely to be interested in what they are learning, more motivated to learn new concepts and skills, and better prepared to succeed in college, careers, and adulthood if what they are learning mirrors real-life contexts, equips them with practical and useful skills, and addresses topics that are relevant and applicable to their lives outside of school (Great Schools Partnership, 2013).
One important principle of authentic learning is that it mirrors the complexities and ambiguities of real life. In its purest expression, authentic learning culminates in students making some form of genuinely useful contribution to their community or to a field of study.
Project-based Learning (PBL)
Project-based learning (PBL) is a teaching method strongly informed by the authentic learning approach. Students gain knowledge and skills by working for an extended period of time to investigate and respond to an authentic, engaging, and complex question, problem, or challenge. Extensive resources to support the adoption of project-based learning are available from the Buck Institute for Education, https://www.pblworks.org. Although the Buck Institute focusses mainly on education at the pre-university level, its principles certainly apply in higher education, especially as the primary vehicle to demonstrate competencies in a student’s intended profession. Reports on the progressive uptake of project-based learning in higher education include Wurdinger (n.d.), Wurdinger (2016), Ríos et al. (2010) and Shannon (2016).
Team-based learning (TBL)
Team-based learning is a specialized form of group learning with distinctive goals and procedures developed by Michaelsen, Knight, and Fink (2002). In team-based learning, student groups engage in meaningful, problem-focused tasks. The premise of the method is that team cohesion will lead to learning (Michaelsen & Sweet, 2012). Compared with group learning and problem-based learning, TBL is distinctive through the application of four principles (Sisk, 2011).
- Form heterogeneous teams, composed of five to seven members assigned anonymously
- Stress student accountability
- Provide meaningful group assignments focussed on solving a real-world problem
- Provide feedback to the students.
Team-based learning has been used in a variety of fields such as business, engineering, the natural sciences, mathematics, medicine, nursing, informatics, and the humanities. The state of the science related to TBL is unclear. However, a systematic research review is presented in Sisk (2011).
In some implementations, the teacher incorporates an extension of the flipped classroom. Specifically, the teacher prescribes homework for students to prepare before each class. Immediately upon arrival in class, the teacher conducts a test to establish the level of each student’s preparation, the Individual Readiness Assurance Test (IRAT). Without revealing the results, the teams into which the students have been allocated now repeat the same test. In this second round, the teammates discuss their responses before a single response is made on behalf of the team. The teacher now grades the second test, the Team Readiness Assurance Test (TRAT). Ideally, each team’s result will be better than the average of the individual teammates’ first-round grades, but this is by no means guaranteed: a domineering but misguided student may push a team’s performance downwards!
The teacher focusses their teaching on the most critical gaps of knowledge revealed by the results of the IRAT and TRAT tests, rather than teaching everything scheduled for the day. This practice leaves time for the teacher and class to engage on additional group activity that practices application of the knowledge learned, for example a case study discussion.
To promote rapid development of team cohesion and mitigate freeloading, teammate peer assessment and feedback can be used intensively (weekly or fortnightly) in a team-based learning programme. Whilst the individual test results of students are confidential, teammates are quick to discern who is well-prepared for the TRAT test. Consequently, they reward or sanction their teammates through their responses to the teammate peer assessment. A personal result proportional to contribution can be readily determined either using a peer assessed score alone, or by combining the peer assessed score and TRAT result, as detailed later in this chapter. An example of this implementation is presented in Mellalieu & Quifors (2018).
Action learning
The focus of action learning is to balance a group solving one or more real, organisational problems with the group members’ own personal and professional development. Unusually, the group is coached to learn from its own resources rather than relying on an external trainer or specialist to present new skills and knowledge. Crucially, an action learning facilitator coaches the group through identifying and defining the problem(s) to be investigated, and identifying the resources the group will use to learn how to solve the problem and implement the recommendations developed. Typically, the problem is chosen specifically to be valuable to the host organization and sufficiently ill-defined in its initial specification to demand exploration of diverse and creative approaches to its investigation and resolution.
Action learning comprises six components (Marquardt, 2004):
- A significant, urgent problem, project, challenge or issue the resolution of which is important to an individual, team, or the organization,
- An action learning group of four to eight individuals with diverse backgrounds and experience
- A process of insightful questioning and reflective listening that takes precedence over a focus on finding the ‘right answer’
- The expectation that action will be undertaken to implement the solutions recommended by the action learning group
- A commitment to broad learning about the organization, problem solving under uncertainty, systems thinking, and personal development
- A coach who helps the group reflect on its problem solving process and learning.
Action learning is more often associated with professional development in the workplace than with higher education, though examples in the latter are known, such as the case example presented in Chapter 2 and in Leberman & Mellalieu (1996).
Implementing innovative group assignments
When adopting Tactic 3, as with Tactic 2, you deploy formative peer feedback early in your teams’ work. You conduct summative peer assessment at the conclusion of the group assignment. If the group assignment is conducted over a long period and/or contributes significantly to the weight of assessment grade for the programme, then consider several additional intermediate formative or summative peer assessments with feedback.
As with Tactic 2, your group assignment specification will announce the academic policies that will apply. Similarly, students will want to know the mechanism of peer assessment, and how the process impacts on their academic grades. These matters are detailed later in this chapter.
Assessment rubrics
When you introduce a revised or new group assignment you have the opportunity to examine afresh the rubric(s) that will be used to grade the group’s delivered outputs and teamwork processes undertaken by the group. Consider your options by reviewing the section Assessment of outputs delivered from group assignments, Chapter 1.
Coaching for teamwork and peer assessment
Pillar 3 and Pillar 4 assert that the full benefits of group work and peer assessment arise from training in teamwork, peer assessment, and peer feedback. The section Turbocharging teamwork, Chapter 2, illustrated how these full benefits arise. Consequently, ensure you schedule time in your teaching plan for these activities, which are discussed in later chapters.
TIP! Review and adapt good examples of group assignments presented in educational forums discussing project-based learning (PBL), team-based learning (TBL), and action learning.
Advise academic policies for group work
Check your institution or department’s academic policies for group assignments and peer assessment. Select and document those applicable to your teaching context and aims from the schedule detailed in Chapter 2.
- Policy 1 – Individual contributions to group work must be identified
- Policy 2 – Peer assessed grades are advisory
- Policy 3 – Fair opportunity to adjust behaviour and outcome
- Policy 4 – Failure is an option
- Policy 5 – Three strikes and you’re out
- Policy 6 – The power of one
- Policy 7 – No last minute team arrangement changes
- Policy 8 – Exclusion of self-assessment
- Policy 9 – Limit on group work contribution
- Policy 10 – Academic dishonesty in peer assessment
Personal results from peer assessment
Pillar 1 asserts that ‘awarding all group members the same grade is not valid, fair, nor motivating for students’. Furthermore, Pillar 2 asserts that ‘freeloading on group projects is less likely if students’ contributions will determine their grades’. To emphasise your commitment to these pillars, present Figure 3.1 to convince your students you ‘mean business’ through your intent to apply teammate peer assessment as illustrated.
As an example, let’s assume that 50/100 is the pass grade criterion for our class Ornithology 101 in which our teams are named after native and exotic birds of Aotearoa New Zealand. Figure 3.1 illustrates how ‘Failure is an option’, for one member in each of the teams Yellow Mohua, Grey Warblers and Black Robins. Someone in team Brown Kiwi is borderline. All the members of Team Red Ruru failed. However, that is due to the teacher awarding a fail team result.
Specifically, in team Black Robins, teammate Kamryn MILLER is clearly identified as a freeloader by their teammates! The team result awarded by the teacher was 95. However, when teammate peer assessment is used to determine a personal result, Kamryn is awarded 42 in this example, which is 47 marks below the median of the team’s personal results, 89.
You’ll observe for team Grey Warblers the wide range of personal results spread from 100 through 5. You’ll infer that a wide range, as in both teams Grey Warblers and Black Robins, is symptomatic of serious team dysfunction and/or a freeloader poorly managed by the team!
Figure 3.1 Example of peer assessment used to determine personal result
The methods of calculation are defined and demonstrated mathematically in Further Resources: FAQ. © Peer Assess Pro. All rights reserved.
Extra for experts
More precisely, when the spread of personal results within a team presents a ‘long drop’ highlighted by the red-coloured bar beneath the team median, you should suspect a severe outlier underperformance in the team. The severe outlier is confirmed by the table of data for the same team. For example, the team median for Black Robins is 89, the boundary between the red and yellow bars, in contrast to the team result of 95. The median is the middle-ranked value when a team has an odd number of members. However, in a team with an even number of teammates, as in the case of Black Robins, the median is calculated as the arithmetic mean of the two middle-ranked values, (100 + 78.6) / 2 ≈ 89, as you can verify by inspecting the personal result table for the team. There will be more discussion of this topic in Step 5 – Manage the peer assessment and Step 6 – Courageous conversations, Chapters 7 and 8.
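The odd-versus-even median rule described above is easy to verify with Python’s standard library. The team values below are hypothetical illustrations, not the full data from Figure 3.1:

```python
import statistics

# Even-sized team: the median is the mean of the two middle-ranked values.
even_team = [100, 78.6, 60, 42]  # hypothetical personal results
assert statistics.median(even_team) == (78.6 + 60) / 2

# Odd-sized team: the median is simply the middle-ranked value.
odd_team = [95, 70, 40]  # hypothetical personal results
assert statistics.median(odd_team) == 70
```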
Calculation process for personal result
The basic process for undertaking the calculations required to produce a peer assessed score and personal result through using teammate peer assessment is
- Teammates rate each other using a teammate peer assessment survey rubric such as those illustrated in Gallery 3.1.
- Calculate a sub-total peer assessed score FOR each student in the team from the sum of the ratings on each survey rubric item FROM each student in that team. A calculation example is illustrated in Table 3.4.
- Calculate a total peer assessed score FOR each team member by averaging the sub-total peer assessed scores each has received FROM their teammates, Table 3.5.
- Find your set of team results. The team result is the grade the teacher awards for the total set of delivered outputs from each team, such as a report, presentation or performance.
- Calculate a personal result for each student in the class from their peer assessed score ‘mathematically combined’ with the team result.
- Collect self-ratings. The self-rating is used to calculate an Index of realistic self-appraisal (IRSA).
As an academic policy, self-assessment ratings are generally excluded from the calculation of the total peer assessed score. However, you should still collect self-assessments, as they have value for risk-managing the conduct of group assignments and for counselling at-risk and under-confident students.
You’ll quickly conclude that once your class comprises more than a few teams, you’ll need a digital survey system to gather your students’ peer assessment ratings. Furthermore, once you have the ratings, you’ll need at least a spreadsheet to produce your gradebook of peer-assessment-based personal results for your assignment. The gradebook is derived from each student’s peer assessed score ‘mathematically combined’ with their team’s result, as I explain later.
An example of a digital platform that conducts these survey management and gradebook calculation processes is presented in Movie 3.1.
Now let’s delve more deeply into the comparative advantages of peer assessment rubrics. Next, I’ll illustrate how personal results are ‘mathematically combined’ from peer assessed scores and team results using several approaches commonly used by teachers.
Rubrics for teammate peer assessment surveys
Two rubrics often used for teammate peer assessment are presented in Gallery 3.1. An ideal survey rubric for a teammate peer assessment must
- Assess a team member’s contributions to both the output of the team and the processes of working together as a team
- Distinguish between different elements of contribution to the team’s work
- Use language that can be readily understood by students and teachers from many academic contexts, and diverse backgrounds
- Enable the calculation of a fair personal result proportional to the peer assessed contribution
- Have authoritative foundations based on group psychology scholarship and the requirements of employers and professions
- Provide the basis for enabling comparisons with achievement over multiple survey instances within or between academic programmes
- Provide useful guidance to behavioural change when feedback results are provided to teammates
One ten-factor peer assessment rubric that meets several of these criteria is that developed by Deacon Carr, Herman, Keldsen, Miller & Wakefield (2005). The rubric is used as one component in their Team Learning Assistant. The first five factors, Initiative through Ideas and Learning, shown in Table 3.2, measure a teammate’s Overall Contribution towards Task Accomplishment. The second set of factors, Focus and Task Allocation through Chairmanship, measures contribution towards Leadership and Teamwork Processes.
Table 3.2 Ten-factor teammate peer assessment rubric

| Factor | Description |
|---|---|
| Initiative | Shows initiative by doing research and analysis. Takes on relevant tasks with little prompting or suggestion. |
| Attendance | Prepares for, and attends, scheduled team and class meetings. |
| Contribution | Makes positive contributions to meetings. Helps the team achieve its objectives. |
| Professionalism | Reliably fulfills assigned tasks. Work is of professional quality. |
| Ideas and learning | Contributes ideas to the team’s analysis. Helps my learning of course and team project concepts. |
| Focus and task allocation | Keeps the team focused on priorities. Facilitates goal setting, problem solving, and task allocation to teammates. |
| Encourages contribution | Supports, coaches, or encourages all teammates to contribute productively. |
| Listens and welcomes | Listens carefully and welcomes the contributions of others. |
| Conflict management and harmony | Manages conflict effectively. Helps the team work in a harmonious manner. |
| Chairmanship | Demonstrates effective leadership for the team. Chairs meetings productively. |

Rubric adapted from Deacon Carr et al. (2005)
The Deacon Carr et al. approach requires students to rate their teammates on a one-to-five-point Likert scale across the ten factors of the rubric. An example of how a survey question for the Contribution factor appears to students as a Likert scale is shown in Figure 3.2.
Figure 3.2 Rating the Contribution factor using a five-point Likert Scale
The five Likert scale points are defined as 1 = Almost never, 2 = Seldom, 3 = Average, 4 = Better than most, 5 = Outstanding. Adapted from Deacon Carr et al. © Peer Assess Pro. All rights reserved.
Another rubric, by Ohland, Loughry, Woehr et al. (2012), is defined by five broad sets of factors illustrated in Gallery 3.1.
- Contributing to the team’s work
- Interacting with teammates
- Keeping the team on track
- Expecting quality
- Having relevant knowledge, skills, and attributes.
Whilst Carr et al. uses a Likert Scale, Ohland et al. propose a Behaviourally Anchored Rating Scale (BARS). Specifically, for each factor, the rater makes distinctions based on observed behaviours defined explicitly. For example, Table 3.3 illustrates the BARS scale for just one of the five factors, Contributing to the team’s work, and the rating value associated with the levels of behaviour that might be observed.
Table 3.3 BARS scale for the factor Contributing to the team’s work

| Rating value | Behaviour observed |
|---|---|
| 5 | Does more or higher-quality work than expected. Makes important contributions that improve the team’s work. Helps to complete the work of teammates who are having difficulty. |
| 4 | Demonstrates behaviours described in both 3 and 5. |
| 3 | Completes a fair share of the team’s work with acceptable quality. Keeps commitments and completes assignments on time. Fills in for teammates when it is easy or important. |
| 2 | Demonstrates behaviours described in both 1 and 3. |
| 1 | Does not do a fair share of the team’s work. Delivers sloppy or incomplete work. Misses deadlines. Is late, unprepared, or absent for team meetings. Does not assist teammates. Quits if the work becomes difficult. |

Source: Ohland et al. (2012)
Selecting from an established rubric
The Likert scale and BARS rubrics each have advantages and drawbacks. The Likert approach is simpler and quicker for students to rate. However, the BARS approach is more likely to yield ratings that are comparable across rating factors, across several courses, and over time, provided students have been trained to rate fairly and validly. Alternatively, the Deacon Carr et al. approach gives you the flexibility to define the specific behaviours that relate to each of the five levels of performance for the ten factors, tailored to your academic programme’s learning outcomes. This adaptation process is detailed in Step 3 – Train your students – Practice applying the survey rubric.
Teacher-created rubrics
Using authoritative rubrics such as those of Deacon Carr et al. or Ohland et al. is superior to teachers creating their own rubrics for teammate peer assessment. Furthermore, either rubric can be used to calculate a fair personal result proportional to a peer assessed contribution calculated from the sum of ratings across the component rating factors. Consequently, either rubric can be used to produce standardized indexes that enable comparisons of a student’s progress from assignment to assignment and from year to year, for example as part of a teamwork across the curriculum (TAC) strategy. This progression cannot be determined validly if each teacher constructs and applies their own rubric for teammate peer assessment.
Sometimes teachers want students to evaluate the quality of components of the delivered outputs being developed by the team. For example, the quality of text submitted for a business plan, or the quality of software code forming part of a software application project. In general, these requirements should not be assessed using teammate peer assessment, but through peer assessment using genre-specific rubrics. The distinction between peer assessment and teammate peer assessment is defined in Figure 1.1, Chapter 1. Software platforms such as Turnitin PeerMark facilitate this type of peer assessment.
Calculate a fair grade from peer assessment
Once the teammates have rated each other using a teammate peer assessment survey, you calculate a peer assessed score (PA Score) FOR each team member by averaging the peer assessed sub-scores each has received FROM their teammates. It is common practice to scale the peer assessed score to fit within a range from zero to 100.
Table 3.4 shows the calculations based on the survey form submitted by one teammate, Alexander SAMPSON, FOR his four teammates in team Blue Robins. Alexander has assessed himself and the other three teammates using the ten-factor rubric presented in Table 3.2. For example, for the factor Contribution, Alexander has rated Kamryn MILLER 1 out of 5 corresponding to ‘Almost never’ on the five-point Likert scale presented in Figure 3.2.
Table 3.4 Peer assessment survey form submitted by Alexander SAMPSON, team Blue Robins

| I am rating → | Alexander SAMPSON (Self) | Mikaela RAY | Ramon MCNIGHT | Kamryn MILLER |
|---|---|---|---|---|
| 1. Initiative | 5 | 4 | 3 | 2 |
| 2. Attendance | 5 | 5 | 3 | 3 |
| 3. Contribution | 5 | 5 | 3 | 1 |
| 4. Professionalism | 5 | 5 | 3 | 2 |
| 5. Ideas and learning | 4 | 4 | 3 | 3 |
| 6. Focus and task allocation | 5 | 4 | 3 | 2 |
| 7. Encourages contribution | 4 | 4 | 2 | 2 |
| 8. Listens and welcomes | 4 | 4 | 2 | 1 |
| 9. Conflict management and harmony | 4 | 4 | 2 | 1 |
| 10. Chairmanship | 5 | 4 | 2 | 1 |
| Subtotal | 46 | 43 | 26 | 18 |
| Peer assessed sub-score | 90 | 83 | 40 | 20 |

My name is: Alexander SAMPSON. Peer assessed sub-score = 100 × (Subtotal − 10) / 40. Assumes each factor is equally weighted.
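The scaling formula above maps a ten-factor subtotal (minimum 10, maximum 50) onto the zero-to-100 range. A minimal Python sketch, using Alexander’s ratings FOR Kamryn MILLER from the right-hand column of the table:

```python
def peer_assessed_sub_score(ratings):
    """Scale a ten-factor, 1-5 Likert rating sheet to a 0-100 sub-score.

    A subtotal of 10 (all ones) maps to 0; a subtotal of 50 (all fives)
    maps to 100. Assumes all ten factors are equally weighted.
    """
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    return 100 * (sum(ratings) - 10) / 40

# Alexander SAMPSON's ratings FOR Kamryn MILLER, factors 1 through 10
kamryn_ratings = [2, 3, 1, 2, 3, 2, 2, 1, 1, 1]
print(peer_assessed_sub_score(kamryn_ratings))  # subtotal 18 → sub-score 20.0
```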
You’ll receive a further three survey forms from the other teammates of Blue Robins. After you calculate the peer assessed sub-scores for every combination of students in the team, you’ll produce a table like Table 3.5. The row Received FROM Alexander SAMPSON (marked ‡) matches the sub-scores from Table 3.4.
Next, you’ll calculate a peer assessed score FOR each team member by averaging the peer assessed sub-scores each has received FROM their teammates. The self-assessment score is excluded, according to Academic Policy 8 introduced in Chapter 2. Table 3.5 illustrates the calculation of the peer assessed score of 85 FOR Alexander SAMPSON from the sub-scores received from his three teammates, namely (75 + 85 + 95) / 3 = 85. Note this set of peer assessed scores matches the values shown for team Blue Robins in Figure 3.1.
Table 3.5 Peer assessed scores FOR each member of team Blue Robins

| Received FROM | Alexander SAMPSON | Mikaela RAY | Ramon MCNIGHT | Kamryn MILLER |
|---|---|---|---|---|
| Alexander SAMPSON ‡ | 90# | 83 | 40 | 20 |
| Mikaela RAY | 75 | 70# | 60 | 28 |
| Ramon MCNIGHT | 85 | 78 | 100# | 40 |
| Kamryn MILLER | 95 | 72 | 51 | 30# |
| Peer assessed score | 85 | 77.5 | 50.8 | 29.1 |

Columns give the peer assessed sub-score FOR each teammate. ‡ From Table 3.4, row Peer assessed sub-score. Example: peer assessed score FOR Alexander SAMPSON = (75 + 85 + 95) / 3 = 85. # Self-assessed sub-scores are excluded from the calculation of the peer assessed score.
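The aggregation in Table 3.5 — averaging the sub-scores each student receives FROM teammates, with the self-rating excluded under Academic Policy 8 — can be sketched in Python. Because the displayed sub-scores are rounded, some columns will differ slightly from the table’s published scores:

```python
def peer_assessed_scores(sub_scores, names):
    """Average the sub-scores received FOR each student, excluding the
    diagonal: sub_scores[i][j] is the sub-score given BY student i FOR
    student j, and self-assessments (i == j) are omitted (Policy 8)."""
    n = len(names)
    result = {}
    for j, name in enumerate(names):
        received = [sub_scores[i][j] for i in range(n) if i != j]
        result[name] = sum(received) / len(received)
    return result

names = ["Alexander SAMPSON", "Mikaela RAY", "Ramon MCNIGHT", "Kamryn MILLER"]
sub_scores = [  # rows: ratings FROM each student, as displayed in Table 3.5
    [90, 83, 40, 20],
    [75, 70, 60, 28],
    [85, 78, 100, 40],
    [95, 72, 51, 30],
]
scores = peer_assessed_scores(sub_scores, names)
print(scores["Alexander SAMPSON"])  # (75 + 85 + 95) / 3 = 85.0
```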
Does the rubric matter?
It does not matter greatly which rubric you select for teammate peer assessment, provided you can combine the component ratings into a single peer assessed score. Some rubrics apply different weights to the components. Some teachers require a student to rank-order the relative contribution of each teammate to the team’s outputs. Finally, a student may be asked to ‘divide a cake’ of, say, 100 pieces amongst their teammates in proportion to each one’s contribution.
Ultimately, whatever method is used, you can manipulate the resulting ratings of contribution into a set of numbers that rate the relative performance of each teammate within the team: a peer assessed score. Once you have such a set of numbers, like Table 3.5, you can apply one of many methods to distribute the teacher-assessed team result according to each teammate’s relative contribution measured by their peer assessed score.
Can peer assessed scores be compared?
In short, peer assessed scores, such as those in Table 3.5, can be compared validly only within a team, and then only assuming there have been no deliberately unfair ratings. Alexander SAMPSON (85) is clearly the best performer in team Blue Robins, whilst Kamryn MILLER (29.1) is the weakest contributor. However, peer assessed scores cannot be compared validly between teams of the same class unless the students have been well-trained to rate fairly and accurately. Furthermore, comparing peer assessed scores between different classes, or even between different assessment events in the same class, is not a robustly valid process.
Nevertheless, despite the dubious comparability of peer assessed scores, it is possible to apply mathematical treatments that enable valid comparison of students’ results between teams and between classes. More importantly, it is quite possible to calculate a valid, academically defensible personal result from the peer assessed score. These matters are discussed in the following sections.
Calculate personal result from peer assessed score
Earlier, we used the term ‘mathematically combined’ somewhat evasively when referring to how we determine a personal result from a peer assessed score. Simply multiplying the peer assessed score by the team result will yield inadequate results for reasons suggested in the previous section. Furthermore, when students know that simple multiplication is your approach, our experience is that some teams will collude and award each other the highest rating on the rubric scales!
There are several methods for calculating personal results from peer assessed scores. Some of these methods address the issue of enabling valid results to be determined. Table 3.6 compares features of several alternative methods.
Method | Symbol | Example feature |
---|---|---|
Peer assessed score | PA Score | The raw peer assessed scores calculated from the peer assessment survey rubric. Scaled arithmetically to range from zero to 100. Not comparable between teams within the same or different academic programmes. |
Indexed peer assessed score | PA Index | The best-rated team member in each team is awarded 100/100. All others scaled down proportional to the peer assessed score. |
Indexed personal result | IPR | The best-rated team member in each team is awarded the team result. All others scaled down proportional to the peer assessed score. |
Normalized personal result | NPR | The average team member in the team is awarded the team result. All others scaled above or below the average, so that the average of the team’s result matches the team result. |
Rank-based personal result | RPR | The average team member in the team is awarded the team result. All others scaled above or below the average proportional to the relative rank of the teammates’ peer assessed score. |
Standard peer assessed score | SPAS | The average-rated team member in each team is awarded 50/100. Feature-scaled transformation adjusts all others below or above 50 to enable intra-institutional comparisons over time. |
The methods of calculation are defined and demonstrated mathematically in Frequently asked questions. © Peer Assess Pro. All rights reserved.
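Assuming the definitions in Table 3.6, the two index-based methods can be sketched as follows. This is an illustrative reading of those definitions, not the platform's code; the formal definitions are in the Frequently asked questions.

```python
# PA Index rescales scores so the best-rated teammate receives 100;
# IPR rescales so the best-rated teammate receives exactly the team result.
pa_scores = {"Alexander": 85, "Mikaela": 77.5, "Ramon": 50.8, "Kamryn": 29.1}
team_result = 95

def pa_index(scores):
    best = max(scores.values())
    return {name: 100 * s / best for name, s in scores.items()}

def indexed_personal_result(scores, team_result):
    best = max(scores.values())
    return {name: team_result * s / best for name, s in scores.items()}

print({n: round(v, 1) for n, v in indexed_personal_result(pa_scores, team_result).items()})
# {'Alexander': 95.0, 'Mikaela': 86.6, 'Ramon': 56.8, 'Kamryn': 32.5}
```

Notice how IPR never awards anyone more than the team result, which is exactly why teachers averse to exceeding the team mark favour it.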
Select a personal result method
TIP! You need not make the final choice of personal result method until after you have received your survey responses from students.
Consider announcing to your students that you will (a) adopt the Normalized Personal Result (NPR) method, and (b) apply Academic Policy 2 – Peer assessed grades are advisory to your assessment process.
The Normalized Personal Result method rewards students who contribute above average with a personal result above the team result. Similarly, below-average contributors and freeloaders are penalized with a personal result below the team result. Incidentally, the NPR method was used in the example of personal results in Figure 3.1.
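One plausible way to realise this behaviour is a mean-anchored formulation: anchor the average contributor at the team result, spread the others around it by a scale factor, and cap at 100. This is a sketch of the idea only; it approximates but does not exactly reproduce the published NPR figures, and the platform's exact formula is given in the Frequently asked questions.

```python
# Assumed mean-anchored NPR sketch (not Peer Assess Pro's exact formula):
# the team-average contributor receives the team result; others are spread
# above or below it in proportion to a scale factor, capped at 100.
def normalized_personal_result(pa_scores, team_result, scale=1.0, cap=100):
    mean = sum(pa_scores.values()) / len(pa_scores)
    return {name: min(cap, team_result + scale * (s - mean))
            for name, s in pa_scores.items()}

# Black Robins scores with team result 95: the two strongest contributors
# hit the 100 cap; the weaker contributors fall below the team result.
black_robins = {"Alexander": 85, "Mikaela": 77.5, "Ramon": 50.8, "Kamryn": 29.1}
print(normalized_personal_result(black_robins, 95))
```

Before capping, the team's average personal result equals the team result, which is the defining property of normalization.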
Adopting Academic Policy 2, you have the flexibility to adjust ‘in the field’ your team arrangements, choice of method, and scaling and weighting factors as you receive your first set of responses from your peer assessment survey.
To help you consider your choice of personal result method, let’s compare the features and benefits of the methods presented in Table 3.6 that are based on the peer assessed scores calculated in Table 3.5.
First, consider the effect of choosing each method for 27 students spread across our seven teams from our class Ornithology 101, illustrated in Gallery 3.2. We’ll highlight the effect by focussing specifically on members of the team Black Robins, which achieved a team result of 95.
The first two gallery images present methods that exclude the team result from the calculation of a personal result: the Peer Assessed Score (PA Score) and the Indexed Peer Assessed Score (PA Index). Note the peer assessed scores are those demonstrated in Table 3.5. Often you have no team result when you conduct a formative peer assessment, so these two methods are the only feasible options in that circumstance.
The subsequent images in Gallery 3.2 show methods that combine mathematically the teacher’s team result with the peer assessed score to give a personal result: Indexed Personal Result (IPR), Normalized Personal Result (NPR) and Rank-based Personal Result (RPR).
Table 3.7 summarises and compares all the results presented in Gallery 3.2 for the single team Black Robins that was used to demonstrate the calculations in earlier tables. The underlying mathematics of the personal result calculation methods shown in Table 3.7 are presented in the Frequently asked questions below.
TIP! Curious or ambitious students will want to know the potential impact of teammate peer assessment on their grade. Show illustrations of the potential outcomes that may be experienced by both high achievers and freeloaders.
Show students Figure 3.1 and the example calculations in Table 3.4 and Table 3.5. Finally, refer students to the Frequently asked questions below.
Teammates’ personal result by method:

Method | Symbol | Alexander SAMPSON | Mikaela RAY | Ramon MCNIGHT | Kamryn MILLER |
---|---|---|---|---|---|
Peer assessed score | PA Score | 85 | 77.5 | 50.8 | 29.1 |
Indexed peer assessed score | PA Index | 100 | 91.1 | 59.8 | 34.3 |
Indexed personal result | IPR‡ | 95 | 86.6 | 56.8 | 32.5 |
Normalized personal result | NPR‡ (scale factor 1.0) | 100 | 100 | 84 | 59.8 |
Normalized personal result | NPR‡ (scale factor 1.5) | 100 | 100 | 78.6 | 42.2 |
Rank-based personal result | RPR‡ | 100 | 100 | 76 | 38 |
Standard peer assessed score | SPAS± | 72 | 65.2 | 41.2 | 21.6 |
‡ Team result is 95. ± Standardized to mean 50, standard deviation 20; team result excluded.
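The SPAS row can be reproduced directly from the footnote's definition: feature-scale the peer assessed scores to mean 50 and standard deviation 20. Using the population standard deviation reproduces the published figures exactly.

```python
import statistics

# SPAS: rescale peer assessed scores to mean 50, SD 20, per the footnote above.
def spas(pa_scores):
    mean = statistics.fmean(pa_scores.values())
    sd = statistics.pstdev(pa_scores.values())  # population SD matches the table
    return {name: 50 + 20 * (s - mean) / sd for name, s in pa_scores.items()}

scores = {"Alexander": 85, "Mikaela": 77.5, "Ramon": 50.8, "Kamryn": 29.1}
print({n: round(v, 1) for n, v in spas(scores).items()})
# {'Alexander': 72.0, 'Mikaela': 65.2, 'Ramon': 41.2, 'Kamryn': 21.6}
```

Because SPAS fixes the mean and spread within every team, scores become comparable across teams, assignments, and years, which is precisely the analytic use case Table 3.8 assigns to it.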
Table 3.8 suggests the typical applications for the alternative methods for determining personal result presented earlier.
Method | Symbol | Typical applications |
---|---|---|
Peer assessed score | PA Score | No team result is available. When you want to see how honestly and competently students assess each other according to the Likert or BARS ratings. |
Indexed peer assessed score | PA Index | No team result is available. When you want to give the best performer in each team 100. WARNING: invites collusion by students so everyone scores maximum 100. |
Indexed personal result | IPR | A team result is available. Used by teachers averse to giving any student more than the result achieved by the team. |
Normalized personal result | NPR | A team result is available. Used by most teachers. Works effectively and transparently to reward above average contribution and penalize freeloaders or counter-productive teammates. A scale factor adjusts the reward:penalty impact. |
Rank-based personal result | RPR | A team result is available. Used to apply a standard, very severe reward:penalty spread of personal results within each team. Amplifies minor differences in peer assessed scores between teammates based on rank within team. |
Standard peer assessed score | SPAS | You want to compare peer assessments between teams, assignments, programmes, and years for the purposes of educational analytics investigations. |
Explore a digital peer assessment platform
Gallery 3.1 presents two peer assessment rubrics in a form that could be used to conduct a paper-based survey. However, to conduct formative and summative teammate peer assessment and feedback for anything but the smallest of classes, you will need a mechanism that, as a minimum:
- Surveys all students enabling them to rate each teammate using a peer assessment rubric
- Collates the survey results calculating a peer assessed score for each teammate, a measure of each student’s relative contribution to the team’s work
- Calculates a personal result for each student through combining every peer assessed score with the teacher-determined team result
- Identifies dysfunctional and at-risk behaviour by teams and individuals to enable proactive intervention by the teacher
- Delivers feedback results to each surveyed student
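The first four requirements above can be sketched as a tiny pipeline. Everything here is illustrative — the data shape, the function names, and especially the at-risk threshold are assumptions for the sketch, not any platform's API.

```python
def collate(ratings):
    """rater -> {ratee: rating}  ==>  ratee -> peer assessed score (self-rating excluded)."""
    people = list(ratings)
    return {p: sum(ratings[r][p] for r in people if r != p) / (len(people) - 1)
            for p in people}

def flag_at_risk(pa_scores, threshold=40):
    # Teammates rated well below their peers may be freeloading or being
    # excluded by the team; the threshold of 40 is purely illustrative.
    return [name for name, s in pa_scores.items() if s < threshold]

ratings = {
    "Ana": {"Ana": 90, "Ben": 80, "Cai": 20},
    "Ben": {"Ana": 70, "Ben": 75, "Cai": 30},
    "Cai": {"Ana": 90, "Ben": 60, "Cai": 50},
}
scores = collate(ratings)   # {'Ana': 80.0, 'Ben': 70.0, 'Cai': 25.0}
print(flag_at_risk(scores))  # ['Cai']
```

A real platform layers survey delivery, result feedback, and the personal-result methods discussed earlier on top of this core collation step.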
Several of these minimum requirements are illustrated in the digital platform shown in Movie 3.1. As you work through the remaining steps in this book, you’ll discover additional requirements for a peer assessment platform that will improve your practice and make life easier.
How Peer Assess Pro helps
The Peer Assess Pro teammate peer assessment digital platform provides teammates with personal reports that encourage timely, constructive conversations around teammates’ strengths and opportunities to improve their contribution to the team’s future achievement. Table 3.9 summarises the key features that Peer Assess Pro contributes as you consider the design of your group assignment.
Feature | Benefit |
---|---|
Authoritative survey rubric | Rubric factors aligned with requirements of academic programmes, professions and employers. |
Fixed survey rubric | Reduces the need for teachers to train students how to use the rubric to rate teammates accurately and fairly. Enables valid comparisons between different groups and programmes from year to year. |
Survey administration | Administers the distribution and collation of peer assessment surveys, and the feedback of results to students. |
Team dysfunction alert | Identifies dysfunctional and at-risk behaviour by teams and individuals to enable proactive intervention by the teacher. |
Alternative result methods | A comprehensive selection of established methods for determining each student’s personal result, combining their peer assessed score (contribution) with the teacher-awarded team result. |
Deferred selection | The choice of personal result calculation method may be postponed until late in the survey’s deployment and management, after some initial survey results are available. |
Knowledge base | Comprehensive examples and explanation of the principles and mathematics underpinning each method of personal result calculation. Advice on when best to apply each method. |