Identify how to select the peer assessment platform that meets your needs
Peer assessment platforms
We explore the features relevant to evaluating and choosing a digital peer assessment platform. We examine how the features are prioritized according to the particular needs of a user and/or their institution. We propose methods for conducting a comparative evaluation of alternative digital platforms when multiple stakeholders from the institution are involved in decision making.
Checklist
- Identify the general and specific features of an ideal teammate peer assessment platform
- Identify the features crucial to your peer assessment requirements
- Evaluate the competitive benefits of alternative peer assessment platforms
Considerations for undertaking teammate peer assessment
Minimum features required of a digital platform
The chapter Why Peer assessment? (Chapter 2) observes that a teacher requires, as an absolute minimum, a teammate peer assessment platform that
- Surveys all students enabling them to rate each teammate using a peer assessment rubric
- Collates the survey results and calculates a peer assessed score for each teammate, a measure of each student’s relative contribution to the team’s work
- Calculates a personal result for each student through combining every peer assessed score with the team result determined by the teacher for each team’s delivered outputs
- Identifies dysfunctional and at risk behaviour by teams and individuals to enable proactive intervention by the teacher and proactive responses by teammates.
- Delivers feedback results to each surveyed student including the peer assessed result, personal result, team result, and guidance on the behaviour required to improve their future personal result.
These minimum requirements are illustrated in the digital platform shown in Movie 3.1.
Broader considerations for platform selection and adoption
When you consider the activities required to conduct each of the 7 Steps for successful group assignments you will recognise that additional features are required from your digital teammate peer assessment platform. For example, occasionally you must adjust your team arrangements during a survey to respond to new students arriving in your class and/or students who need to be reallocated to another team. Moreover, there are wider factors to be considered, such as compatibility with your institution’s platforms for gradebook management, single sign-on for students and teachers, and integration with learning management systems. Equally important, there are technology transfer issues.
- How will teachers and academic programme leaders discover the need and opportunity for conducting teammate peer assessment? … as part of existing group assignments conducted within the institution? … as part of new academic programme design or redesign?
- How will teachers and students learn to make effective use of a teammate peer assessment platform?
- Who will assure the claims for validity, reliability and fairness are delivered by the approaches used to conduct teammate peer assessment? What standards and processes will assessors from within and beyond the institution apply?
- How can the institutional risks of using group assignments and teammate peer assessment be minimized through correct selection and application of academic policies?
A tale of three platforms
This chapter presumes you are leading a project team empowered to investigate and recommend adoption of a teammate peer assessment platform, and to propose its roll-out to prospective users. You have identified three teammate peer assessment platforms that appear to meet your initial assessment of requirements. If you pursue a well-structured, rational, analytic decision-making process for advanced technology adoption, these are the essential steps, which I’ll illustrate with a case example later.
- Identify the stakeholder groups material to the decision-making process
- Identify a schedule of requirements and Features by consulting representatives of the stakeholder groups. See Table 10.1.
- Prioritise stakeholders’ requirements according to degree of Importance, Table 10.2 and Table 10.3.
- Generate a schedule of candidate technology platform suppliers
- Evaluate the extent to which each candidate Platform meets every stakeholder requirement, Table 10.4.
- Calculate a heatmap contribution value for every platform evaluation, Table 10.5. The heatmap value combines Importance and Performance for each Factor.
- For each candidate platform sum the heatmap contribution values across all requirements.
- Select the candidate platform that achieves the highest heatmap sum while meeting all requirements rated Moderately important or better, Figure 10.1.
Identify the stakeholder groups
Material stakeholders include those actors with interests in adopting and using educational technology, or in assuring the validity of educational assessments. Specifically, a material stakeholder may be affected by the platform approach adopted, or they could influence matters appreciably. It’s important to identify their current roles, interests and power positions. Stakeholder groups in a higher education setting include
- Teachers of small and large classes
- Teachers’ tutors and administrative support
- Learning consultant support for teachers and students
- Budget holders, for example, Departmental or faculty manager, Director of teaching and learning
- Technology support
- Institutional analysts, for example, student success and retention
- Senior policy makers, for example Faculty deans, Academic programme directors
- …. and what about the students?
Identify a schedule of platform requirements
Before you consult with representatives of your stakeholder groups, you’ll form a preliminary assessment of the key benefits sought from deploying a teammate peer assessment platform. As you consult your stakeholders, your assessment will be extended! A preliminary assessment of key benefits should include
- Decision support and time savings The teacher saves time deploying and managing the survey, interpreting results, and arranging for feedback to students. Outlier, dysfunctional or missing responses from students and teams are highlighted for the teacher’s attention, supporting exploratory data analysis.
- Grade fairness and student development Each student receives a timely, personalized peer assessment report that gives them confidence their personal result is fair and provides guidance on what behaviours and attitudes they must adopt to achieve improved results from future teamwork.
- Confidence, validity and reliability The survey rubric and calculations undertaken by the platform conform with good practice and are correct. The peer assessed results from one teammate peer assessment are comparable with other surveys from within the same class, between classes in the same academic programme, and across the entire institution…. and beyond.
- Flexibility The teacher can accommodate rearrangements of teams whilst the survey is in progress. Alternative methods for calculating personal results according to peer assessed scores. Scale factors that weight the degree to which the team result and/or peer assessed score influence each student’s personal result.
- Workflow integration The workflow required by the platform’s users (students, teachers, tutors, and support staff) is logical, familiar and integrates with other institutional systems such as learning management systems. For example, gradebook integration; Single sign on for students, Single sign-on for teachers and tutors administering the same academic programme or assessment event.
- End user support. Guidance from the platform provider to enable secure, timely adoption and productive use of the platform. For example, help guides, training, consultancy, and feature extension.
Once you have identified your preliminary assessment of requirements, you consult with representatives of each stakeholder group to establish a comprehensive, ideal ‘wish list’ schedule of requirements. To achieve a schedule like Table 10.1, you might conduct a survey, focus group, brainstorm or brainwriting session. One approach to identifying requirements is to ask each stakeholder to complete the statement “As stakeholder … I want the Feature … so that I can receive the Benefit …”.
Table 10.1 presents a preliminary schedule of stakeholders, Features (“I want…”) and Benefits (“So that I can…”) sought from a teammate peer assessment platform. If you have several stakeholder groups, each with their favorite platform requirements, the size of your version of Table 10.1 could be substantial. For example, eight stakeholder groups, each with four requirements, could give you more than 30 items, assuming few overlaps. Consequently, the next step is to weight the Importance of each of these requirements.
Table 10.1 Illustrative schedule of stakeholder requirements
As a | I want ... | So that I can ... |
---|---|---|
Teacher | Calculation of a personal result from peer assessed contribution and team results | Award a fair grade to students undertaking group assignments. |
Teacher | Identification of outlier, dysfunctional or at risk responses | Intervene with a proactive coaching moment, or request a survey resubmission from the student or team. |
Teacher | Convenient rearrangement of team compositions | Cope with late enrollments, no shows, or dysfunctional team members with minimal disruption to the survey process. |
Teacher | Convenient guidance on how to use the platform | Save my time, reduce the risk of problems, help my students get fair grades, help my students develop their teamwork capabilities, and minimise the risk of student complaints. |
Teacher | Training module for peer assessment rating | Assessment validity is raised through my students learning to rate each other accurately and fairly. |
Teacher | Several calculation methods and scale factors for formative and summative assessment | Weight the importance of either the team result or peer assessed score in determining a student's personal result. |
Teacher | Log of all communications to students | Confirm what and when students have received requests to complete the survey, review results, or make a survey resubmission. |
Teacher | Single sign on of teacher through learning management system | Delegate management of my peer assessment surveys to my tutors and teacher aides. |
Teacher | Evaluation and grading of qualitative feedback and rating behaviour | Influence students to produce better quality feedback and ratings. |
Teacher | Flexible peer assessment rubrics | Adapt the platform to some other purpose. |
Student | Informative feedback report | Understand why a student received the grade awarded through peer assessment, and what behaviours must be adjusted to gain a better result in future teamwork. |
Student | Anonymous ratings of teammates | Provide my feedback without fear of retribution. |
Student | Self-help teamwork resources | Improve a student's ability to contribute to teamwork, develop leadership and learn from peer feedback. |
Student | Single sign on of student through learning management system | A student can locate all their peer assessment surveys and results in one place. |
Learning consultant | Single platform with standard teamwork capabilities rubric | Develop a common set of self-help resources for teachers and students for teamwork capability development and teammate peer assessment processes. |
Learning consultant | Identification of persistent at-risk students | Ensure future teachers can proactively support these students’ success. |
Learning consultant | Industry-standard teammate peer assessment rubric Learning consultant | Compare students’ progress developing teamwork capabilities across the entire institution. |
Faculty dean | Valid, fair, and reliable regime for group assessments | Assure the academy, the professions and employers that graduates meet the leadership and teamwork capabilities specified for the qualification they have been awarded. |
Faculty dean | Common set of academic policies for group assignments | Minimise complaints and appeals from students about the conduct of teammate peer assessment and group assignments. |
Technology director | Cost-effective installation, operation and support | The tangible and intangible benefits to all stakeholders exceed lifetime costs. |
Technology director | Capacity for timely addition of new platform features. | Be confident the relationship with platform suppliers is worth the initial investment and engagement. |
Technology director | Learning management system integration | Ensure convenient, but strict control over access to the platform and its data by all stakeholders. |
© Peer Assess Pro Ltd |
Prioritise the importance of your requirements
It is most unlikely that an existing technology platform will achieve your stakeholders’ schedule of ideal requirements identified in Table 10.1. Consequently, you now seek consensus amongst your stakeholders about what you “really, really want….”. Here is where ‘angels may fear to tread’! … However, this prioritization is a crucial step before issuing a request for proposal to a supplier, or before settling down to evaluate the relative worth of available platforms.
Begin by conducting a survey in which every stakeholder rates the Importance of every “I want…” Feature in your list through applying a Likert Scale of Importance.
- Not at all important
- Slightly important
- Moderately important
- Very important
- Extremely important
Suppose your survey results in the Importance ratings shown in Table 10.2. For the purposes of illustration, the table shows responses from representatives of two stakeholder groups, a Teacher and a Learning Consultant.
Note that in addition to rating each feature with a different Importance, each stakeholder group could perceive different Benefits arising from the same feature. In the same initial survey, therefore, you might request each stakeholder to report their “So that I can…” justification statement as part of the same survey, for reasons that will become apparent later when the Delphi Method for obtaining consensus is introduced.
Develop consensus on relative importance of features
In Table 10.2, observe the ‘high agreement’ where both stakeholders rate the feature Informative feedback report as Very Important. Both agree wholeheartedly. In contrast, they disagree markedly about several features. For example, the Teacher rates the feature Single sign on of student through learning management system as Slightly Important, whilst the Learning Consultant rates the same factor as Extremely Important. How do we resolve disagreements like this? Do we give more weight to the teacher’s rating (“I just want to get the job done!”) or more weight to the strategically perceptive Learning Consultant (“There are broader considerations concerning students’ privacy and security that must inform our technology adoption strategy”)?
Through some process of discussion, negotiation – and, perhaps, brutal assertion of budgetary power – the decision-making group must agree on a combined set of features and importance ratings. You seek to produce a schedule like Table 10.3. But how? Two broad approaches exist to achieve a combined importance of the ideal features:
- Quick and dirty, for example simply average the importance rating provided by each stakeholder.
- Slow and robust consensus, for example, by applying the Delphi Method.
A quick and dirty method involves simply averaging the rating provided by each stakeholder. This presumes each stakeholder is well-informed and has equal responsibility, authority and interest in the outcome of the decision: an improbable, even presumptuous, assumption. Note that the statistical median is a better statistic to use than the arithmetic mean.
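As a minimal sketch of the quick and dirty approach, assuming the hypothetical two-stakeholder ratings below and the 0 to 4 numeric recoding given in the note to Table 10.2, the agreed Importance of each feature is simply the median of the recoded ratings:

```python
from statistics import median

# Numeric recoding of the Importance Likert Scale (see the note to Table 10.2):
# 0 = Not at all important ... 4 = Extremely important
IMPORTANCE = {
    "Not at all important": 0,
    "Slightly important": 1,
    "Moderately important": 2,
    "Very important": 3,
    "Extremely important": 4,
}

# Hypothetical first-round ratings: one label per stakeholder for each feature
ratings = {
    "Informative feedback report": ["Very important", "Very important"],
    "Single sign on of student through learning management system":
        ["Slightly important", "Extremely important"],
}

# 'Quick and dirty' agreement: the median of the recoded ratings per feature
agreed = {
    feature: median(IMPORTANCE[label] for label in labels)
    for feature, labels in ratings.items()
}
print(agreed)
# {'Informative feedback report': 3.0,
#  'Single sign on of student through learning management system': 2.5}
```

With an even number of stakeholders the median can fall between Likert levels, as the 2.5 above illustrates, which is one reason the divergent second feature deserves further discussion rather than a mechanical average.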
In contrast, the Delphi Method is a long-established and formal approach to obtaining opinion consensus amongst stakeholders with divergent expertise, competing interests and priorities. The method relies on an iterative procedure through which stakeholders provide several rounds of answers to a series of questions. The key features of the method are (a) anonymity (b) controlled feedback (c) statistical group responses (Gigch, 1978; Turoff & Linstone, 2002). An example applying the Delphi Method to prioritise the design elements for the information technology experience in a university business school is reported in Mellalieu & Hooper (1994).
In simple terms, the Delphi Method proceeds as follows.
- The anonymised results of the first survey, which rated Importance and rationale (Benefit sought), are circulated to each expert respondent. Statistical reports highlight areas of divergent opinion (dispersion) related to the Importance ratings, such as those observed in Table 10.2.
- Through a second round of the survey, respondents are invited to reconsider their ratings of Importance in light of the anonymised justifications provided by previous respondents about their need for the Feature. In some cases, a stakeholder might argue that a particular feature must definitely not be part of the requirements.
- After the first round, Features where there is high consensus on the level of Importance are dropped from consideration in future rounds. For example, considering Table 10.2, there is complete agreement amongst stakeholders that the feature Anonymous ratings of teammates rates Extremely Important and that Self-help teamwork resources rates Moderately Important. The column Revisit signifies all Features where there is disagreement of at least two units on the Likert Scale of Importance. These Revisit items should be included in the second round of the survey; a minimal sketch of this dispersion check follows this list.
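For instance, a minimal sketch of the dispersion check, assuming the same 0 to 4 recoding and three hypothetical features drawn from Table 10.2, flags every feature whose ratings differ by more than one Likert Scale unit:

```python
# Numeric recoding of the Importance Likert Scale, as in the note to Table 10.2
IMPORTANCE = {
    "Not at all important": 0,
    "Slightly important": 1,
    "Moderately important": 2,
    "Very important": 3,
    "Extremely important": 4,
}

# Hypothetical first-round ratings from two stakeholders (Teacher, Learning Consultant)
ratings = {
    "Anonymous ratings of teammates":
        ["Extremely important", "Extremely important"],
    "Convenient guidance on how to use the platform":
        ["Moderately important", "Extremely important"],
    "Single sign on of student through learning management system":
        ["Slightly important", "Extremely important"],
}

def needs_revisit(labels):
    """Flag a feature when stakeholder disagreement exceeds one Likert Scale unit."""
    values = [IMPORTANCE[label] for label in labels]
    return max(values) - min(values) > 1

second_round = [feature for feature, labels in ratings.items() if needs_revisit(labels)]
print(second_round)
# ['Convenient guidance on how to use the platform',
#  'Single sign on of student through learning management system']
```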
For example, a teacher might determine the importance ratings shown in Table 10.2. In contrast, the academic vice-chancellor may regard the feature Valid, fair, and reliable regime for group assessments as Extremely Important, whereas the teacher rated the same feature only Slightly Important. Similarly, the student success and retention analyst could regard the feature Identification of persistent at-risk students as Extremely Important.
Table 10.2 Illustrative ratings of feature importance by a stakeholder
Feature | Importance to Teacher | Importance to Learning Consultant | Revisit? |
---|---|---|---|
Calculation of a personal result from peer assessed contribution and team results | Extremely important | Very important | |
Identification of outlier, dysfunctional or at risk responses | Very important | Extremely important | |
Convenient rearrangement of team compositions | Very important | Extremely important | |
Convenient guidance on how to use the platform | Moderately important | Extremely important | Revisit |
Training module for peer assessment rating | Moderately important | Extremely important | Revisit |
Several calculation methods and scale factors for formative and summative assessment | Slightly important | Moderately important | |
Log of all communications to students | Moderately important | Very important | |
Single sign on of teacher through learning management system | Slightly important | Moderately important | |
Evaluation and grading of qualitative feedback and rating behaviour | Slightly important | Moderately important | |
Flexible peer assessment rubrics | Slightly important | Not at all important | |
Informative feedback report | Very important | Very important | |
Anonymous ratings of teammates | Extremely important | Extremely important | |
Self-help teamwork resources | Moderately important | Moderately important | |
Single sign on of student through learning management system | Slightly important | Extremely important | Revisit |
Single platform with standard teamwork capabilities rubric | Slightly important | Very important | Revisit |
Identification of persistent at-risk students | Very important | Very important | |
Industry-standard teammate peer assessment rubric | Slightly important | Very important | Revisit |
Valid, fair, and reliable regime for group assessments | Slightly important | Very important | Revisit |
Common set of academic policies for group assignments | Slightly important | Very important | Revisit |
Cost-effective installation, operation and support | Slightly important | Very important | Revisit |
Capacity for timely addition of new platform features. | Slightly important | Moderately important | |
Learning management system integration | Slightly important | Very important | Revisit |
Importance Rating Likert Scale: 0 = Not at all important, 1 = Slightly important, 2 = Moderately important, 3 = Very important, 4 = Extremely important. ‘Revisit’ is indicated when the disagreement between stakeholders is greater than one Likert Scale unit. © Peer Assess Pro. All rights reserved. |
The process is repeated several times until acceptable consensus is achieved on most Features. That is, when the dispersion of Importance ratings across the stakeholders is low. For example, an ‘acceptable’ consensus could mean a dispersion of no more than one Likert Scale unit, such as illustrated by the first three features listed in Table 10.2.
Correctly conducted, the Delphi Method removes several shortcomings of group decision processes, such as face-to-face confrontation, the effect of status, and the order of voting. However, the Delphi Method is not appropriate in situations where changes of opinion are not possible or are hard to admit (Gigch, 1978).
Table 10.3 Final importance of features agreed across all stakeholder groups
Feature | Agreed Importance (0 to 4) | Importance Rating |
---|---|---|
Anonymous ratings of teammates | 4 | Extremely important |
Calculation of a personal result from peer assessed contribution and team results | 3.7 | Extremely important |
Informative feedback report | 3.3 | Very important |
Identification of outlier, dysfunctional or at risk responses | 3 | Very important |
Training module for peer assessment rating | 2.7 | Very important |
Identification of persistent at-risk students | 2.7 | Very important |
Convenient rearrangement of team compositions | 2.3 | Moderately important |
Convenient guidance on how to use the platform | 2.3 | Moderately important |
Self-help teamwork resources | 2.3 | Moderately important |
Single sign on of student through learning management system | 2.3 | Moderately important |
Log of all communications to students | 2 | Moderately important |
Single platform with standard teamwork capabilities rubric | 2 | Moderately important |
Valid, fair, and reliable regime for group assessments | 2 | Moderately important |
Common set of academic policies for group assignments | 2 | Moderately important |
Several calculation methods and scale factors for formative and summative assessment | 1.3 | Slightly important |
Evaluation and grading of qualitative feedback and rating behaviour | 1.3 | Slightly important |
Industry-standard teammate peer assessment rubric | 1.3 | Slightly important |
Cost-effective installation, operation and support | 1.3 | Slightly important |
Learning management system integration | 1.3 | Slightly important |
Single sign on of teacher through learning management system | 1 | Slightly important |
Capacity for timely addition of new platform features. | 1 | Slightly important |
Flexible peer assessment rubrics | 0.3 | Not at all important |
Agreed Importance value deduced from recoding of Likert Scale for Importance. See Table 10.2 © Peer Assess Pro. All rights reserved. |
For the purpose of explanation, let’s presume that several stakeholders have agreed to the final rating of Importance of Features presented in Table 10.3. The table shows the agreed level of importance determined through either a quick and dirty or a Delphi process, sorted in descending order of the agreed Importance. Now the stakeholders are well-prepared to evaluate the Performance of candidate platforms against the schedule of features.
Compare alternative platforms
Suppose you have identified a shortlist of three candidate platforms, Alpha, Charlie and Papa. The expert group of stakeholders selects and rates the expected Performance of each platform against the required Features and Benefits sought. Ignore matters related to the Importance of the platform features at this stage. So long as a Feature rates as Slightly Important or better, you would consider the Feature worth evaluating. The Importance rating will be reconsidered later.
The stakeholders obtain consensus about each platform’s Performance through either
- Delegating the evaluation to a technical specialist or group
- Meeting and arguing their assessment
- Conducting a survey
- Engaging in another process using a set of surveys informed by the Delphi Method.
Some aspects of performance can be measured technically. For example, “How many students can the platform handle?”, “How many teams can the platform handle in a single peer assessment survey?”, “What is the cost per survey?” Other features are binary: they either exist in the platform or do not. Finally, the performance evaluation of some features is a matter of opinion and judgement: “How adequately does the platform draw the teacher’s attention to outlier, dysfunctional, or at risk responses?” The latter question can be evaluated using a Performance Evaluation Likert Scale.
- Far below requirements
- Below requirements
- Meets requirements
- Above requirements
- Far above requirements
In general terms, these different types of measurement can and should all be tabulated. However, be cautious about technical measurements. Suppose the largest class in your institution is 1500 students, with a typical team size of four to five teammates. Therefore, you require a platform with the capacity to handle 300 to 400 teams fluently. You also anticipate substantial future growth in class sizes: perhaps in ten years’ time you anticipate 4500 students per class, implying a capacity of around 1200 teams. Is a platform that can handle 4000 teams in one assessment ten times better than a platform that can handle only 400 teams, your current maximum anticipated requirement? I suggest the 4000-team platform can be evaluated as performing ‘Far above requirements’. Nice to have in reserve, but probably not necessary or valuable for a reasonable three-year planning horizon. It’s worth a small bonus in your overall evaluation scheme, but not a major bonus when you have many other requirements to consider.
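One way to make that caution concrete, purely as an illustration, is to cap the credit a raw capacity measurement can earn on the Performance Evaluation Likert Scale. The thresholds and the capacity_rating function below are assumptions invented for this sketch, not measurements of any actual platform:

```python
def capacity_rating(platform_team_capacity: int, required_teams: int) -> str:
    """Map a measured team capacity onto the Performance Evaluation Likert Scale.

    Illustrative thresholds only: capacity comfortably beyond the planning-horizon
    requirement earns 'Far above requirements', with no extra credit beyond that.
    """
    ratio = platform_team_capacity / required_teams
    if ratio < 0.5:
        return "Far below requirements"
    if ratio < 1.0:
        return "Below requirements"
    if ratio < 1.5:
        return "Meets requirements"
    if ratio < 3.0:
        return "Above requirements"
    return "Far above requirements"   # capped: 10x capacity earns no more than 3x

# Current maximum requirement: 1500 students in teams of 4 to 5, about 400 teams
print(capacity_rating(4000, 400))   # Far above requirements
print(capacity_rating(450, 400))    # Meets requirements
```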
You have decided to survey each stakeholder asking a sequence of questions to which the response is a rating on the Performance Evaluation Likert Scale presented above. For example, ‘What is the extent to which the following platforms meet our Performance requirement for the Feature Anonymous ratings of teammates?’
The response from one stakeholder, the Learning Consultant, is tabulated in Table 10.4. Expect a set of different Performance evaluations by each stakeholder.
Score contribution from importance and performance
You now combine the Importance rating of each feature with all respondents’ Performance evaluation ratings of each platform to yield a table of contributions, Table 10.5. The data are presented subsequently as the summative bar chart comparison, Figure 10.1.
Inspecting Figure 10.1, you might rush to conclude that Platform Papa is clearly the superior platform and should be your choice. However, carefully probe the heatmap Table 10.5 to ensure that
- Your most important requirements are at least met by your most preferred platform(s). A minimal sketch of this screening rule follows this checklist.
- You are not seduced by paying for features that are far above your requirements. The costs of installation, operation, training and support have not yet been included in the analysis.
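The following is a minimal sketch of that screening-then-ranking rule, using invented Importance and Performance values and interpreting ‘met’ as a median Performance of at least Meets requirements (2 on the 0 to 4 scale); the 1.5 cut-off for Moderately important is likewise an assumption for the sketch:

```python
# Hypothetical agreed Importance (0 to 4) and median Performance (0 to 4) per platform
importance = {
    "Anonymous ratings of teammates": 4.0,          # Extremely important
    "Informative feedback report": 3.3,             # Very important
    "Log of all communications to students": 2.0,   # Moderately important
    "Flexible peer assessment rubrics": 0.3,        # Not at all important
}
performance = {
    "Alpha": {"Anonymous ratings of teammates": 2, "Informative feedback report": 2,
              "Log of all communications to students": 0, "Flexible peer assessment rubrics": 3},
    "Papa":  {"Anonymous ratings of teammates": 2, "Informative feedback report": 3,
              "Log of all communications to students": 4, "Flexible peer assessment rubrics": 0},
}

SCALE = 100 / 16
MODERATE, MEETS = 1.5, 2   # 'Moderately important or better' cut-off; 'Meets requirements'

def qualifies(perf):
    """A platform qualifies only if every Moderately-important-or-better feature is met."""
    return all(perf[f] >= MEETS for f, imp in importance.items() if imp >= MODERATE)

def heatmap_sum(perf):
    return sum(imp * perf[f] * SCALE for f, imp in importance.items())

shortlist = {p: heatmap_sum(perf) for p, perf in performance.items() if qualifies(perf)}
best = max(shortlist, key=shortlist.get)   # Alpha drops out: it fails the communications log
print(best, round(shortlist[best]))        # Papa 162
```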
Table 10.4 Performance evaluation of several platforms by one stakeholder
Feature | Platform Alpha | Platform Charlie | Platform Papa |
---|---|---|---|
Calculation of a personal result from peer assessed contribution and team results | Below | Meets | Meets |
Identification of outlier, dysfunctional or at risk responses | Far below | Meets | Far above |
Convenient rearrangement of team compositions | Below | Meets | Meets |
Convenient guidance on how to use the platform | Far below | Meets | Above |
Training module for peer assessment rating | Far below | Far above | Meets |
Several calculation methods and scale factors for formative and summative assessment | Far below | Meets | Far above |
Log of all communications to students | Far below | Far below | Far above |
Single sign on of teacher through learning management system | Far below | Far below | Far below |
Evaluation and grading of qualitative feedback and rating behaviour | Far above | Far below | Below |
Flexible peer assessment rubrics | Above | Far below | Far below |
Informative feedback report | Meets | Meets | Above |
Anonymous ratings of teammates | Meets | Meets | Meets |
Self-help teamwork resources | Far below | Below | Meets |
Single sign on of student through learning management system | Far below | Far below | Below |
Single platform with standard teamwork capabilities rubric | Far below | Meets | Meets |
Identification of persistent at-risk students | Below | Below | Meets |
Industry-standard teammate peer assessment rubric | Far below | Meets | Meets |
Valid, fair, and reliable regime for group assessments | Meets | Meets | Far above |
Common set of academic policies for group assignments | Below | Below | Above |
Cost-effective installation, operation and support | Above | Meets | Meets |
Capacity for timely addition of new platform features. | Far below | Below | Far above |
Learning management system integration | Far below | Far below | Below |
Performance Rating Likert Scale. 0 = Far below requirements, 1 = Below requirements, 2 = Meets requirements, 3 = Above requirements, 4 = Far above requirements © Peer Assess Pro. All rights reserved. |
The colour-coded heatmap of Table 10.5 highlights visually the relative contribution to your stakeholders’ requirements from each of the Features delivered by each platform. For example, Platform Alpha is clearly below requirements on many of your top priority ‘Must have’ features. Platform Alpha can reasonably be discarded from future consideration by the project team. In contrast, platforms Charlie and Papa meet or exceed many of the features rated as Very Important or Extremely Important. The deep green heatmap cells for Platform Papa highlight its superior performance in producing an Informative feedback report for students and Identification of outlier, dysfunctional or at risk responses. In contrast, platform Charlie has a superior Training module for peer assessment rating. Perhaps it’s time to start a bidding war between the suppliers of these two platforms!
Table 10.5 Heatmap comparing all peer assessment platforms evaluated by all stakeholders
Feature | Agreed Importance | Alpha | Charlie | Papa |
---|---|---|---|---|
Anonymous ratings of teammates | Extremely important | 50 | 50 | 50 |
Calculation of a personal result from peer assessed contribution and team results | Extremely important | 23 | 46 | 46 |
Informative feedback report | Very important | 42 | 42 | 63 |
Identification of outlier, dysfunctional or at risk responses | Very important | 19 | 38 | 75 |
Training module for peer assessment rating | Very important | 0 | 67 | 33 |
Identification of persistent at-risk students | Very important | 17 | 17 | 33 |
Convenient rearrangement of team compositions | Moderately important | 29 | 29 | 29 |
Convenient guidance on how to use the platform | Moderately important | 15 | 29 | 44 |
Self-help teamwork resources | Moderately important | 0 | 29 | 29 |
Single sign on of student through learning management system | Moderately important | 0 | 0 | 0 |
Log of all communications to students | Moderately important | 0 | 0 | 38 |
Single platform with standard teamwork capabilities rubric | Moderately important | 0 | 25 | 25 |
Valid, fair, and reliable regime for group assessments | Moderately important | 25 | 25 | 50 |
Common set of academic policies for group assignments | Moderately important | 13 | 13 | 38 |
Several calculation methods and scale factors for formative and summative assessment | Slightly important | 0 | 17 | 25 |
Evaluation and grading of qualitative feedback and rating behaviour | Slightly important | 33 | 0 | 8 |
Industry-standard teammate peer assessment rubric | Slightly important | 0 | 17 | 17 |
Cost-effective installation, operation and support | Slightly important | 25 | 17 | 17 |
Learning management system integration | Slightly important | 0 | 8 | 8 |
Single sign on of teacher through learning management system | Slightly important | 0 | 0 | 0 |
Capacity for timely addition of new platform features. | Slightly important | 0 | 6 | 25 |
Flexible peer assessment rubrics | Not at all important | 6 | 0 | 0 |
Total Heatmap Contribution | 291 | 475 | 653 | |
Average Heatmap Contribution by Platform | 14 | 22 | 30 | |
Heatmap = (Importance)(Performance)(Scale), where Importance and Performance ratings are recoded from the Likert Scales presented from Tables 10.3 and 10.4. Scale = 100/(4)(4). Maximum possible Heatmap value is 100 per Feature. © Peer Assess Pro. All rights reserved. |
Extra for experts
The Heatmap values that produce Table 10.5 are determined from the formula
Heatmap = (Importance)(Performance)(Scale)
The procedure for calculation of Heatmap values.
- All Importance and Performance Likert Scale values are recoded to their numerical equivalent
- Each cell of the heatmap table is the product of the Feature’s Importance and the platform’s Performance aggregated over all stakeholder respondents. The statistical median rating is the recommended statistic.
- For the purpose of visualization, the product is scaled to the range 0 to 100. Since the maximum value on both Likert Scales is 4, the Scale factor is 100/(4)(4) = (100/16)
For instance, consider the cell in Table 10.5 for the feature Informative feedback report and the column Platform Papa. Table 10.3 shows the numerical rating of Importance agreed by all stakeholders is 3.3333, rounded to 3.3 in the table. On the Likert Scale, this evaluation is labelled Very Important (values from 2.5 up to, but not including, 3.5).
The Performance evaluations from the three stakeholders for Platform Papa for this feature are
- Learning Consultant = Above requirements = 3 (as reported in Table 10.4)
- Student = Above requirements = 3
- Teacher = Above Requirements = 3
Therefore, the median Performance evaluation across all stakeholders is = 3
Applying the formula to the cell for Informative feedback report and Platform Papa yields the value shown (after rounding) in Table 10.5.
Heatmap = (Importance)(Performance)(Scale) = (3.3333)(3)(100/16) = 62.5
Note that the maximum value for Heatmap that can be obtained is 100, equivalent to a Feature whose Importance is rated Extremely Important AND whose Performance is evaluated as Far Above Requirements.
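For readers who prefer working code, here is a minimal sketch that reproduces the worked example above; the three stakeholder ratings are the hypothetical ones listed for Platform Papa:

```python
from statistics import median

# Numeric recoding of the Performance Evaluation Likert Scale (note to Table 10.4)
PERFORMANCE = {
    "Far below requirements": 0,
    "Below requirements": 1,
    "Meets requirements": 2,
    "Above requirements": 3,
    "Far above requirements": 4,
}
SCALE = 100 / (4 * 4)   # so the maximum Heatmap value per Feature is 100

def heatmap(importance, performance_labels):
    """Heatmap = (Importance)(median Performance)(Scale)."""
    performance = median(PERFORMANCE[label] for label in performance_labels)
    return importance * performance * SCALE

# Feature: Informative feedback report; column: Platform Papa
agreed_importance = 10 / 3   # 3.3333..., shown rounded to 3.3 in Table 10.3
stakeholder_ratings = ["Above requirements", "Above requirements", "Above requirements"]
print(round(heatmap(agreed_importance, stakeholder_ratings), 1))   # 62.5
```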
Figure 10.1 Summative comparison of peer assessment platforms evaluated by all stakeholders
Sensitivity analysis
The calculation that Platform Papa earns an average Heatmap Contribution of 30/100, whereas Platform Alpha scores 14/100, does not mean that Platform Papa is twice as good as Platform Alpha. Only the rank order of platforms is material. In technical terms, “No cardinal significance can be assigned to the particular scores obtained … considering that the weights [Importance] result from purely subjective judgements without regard to any scaling concerns” (Gigch, 1978, p. 301).
The preceding tables and figures are not the end of the analysis, but a basis for provoking further discussion amongst the stakeholders. Consequently, it’s time to conduct ‘What if?’ analysis, technically known as sensitivity analysis. How robust is the recommended platform in light of the assumptions and judgements made about Importance and Performance evaluations? To what extent can these evaluations vary before the top-ranked platform is replaced by the next best alternative? How realistic are those variations?
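A minimal sketch of one such what-if check follows, using a deliberately tiny, hypothetical set of Importance weights and median Performance ratings: perturb the agreed Importance of one feature by a full Likert unit and see whether the rank order of platforms changes.

```python
# Hypothetical agreed Importance (0 to 4) and median Performance (0 to 4) per platform
SCALE = 100 / 16

importance = {
    "Informative feedback report": 3.3,
    "Anonymous ratings of teammates": 4.0,
}
performance = {
    "Alpha":   {"Informative feedback report": 2, "Anonymous ratings of teammates": 2},
    "Charlie": {"Informative feedback report": 2, "Anonymous ratings of teammates": 2},
    "Papa":    {"Informative feedback report": 3, "Anonymous ratings of teammates": 2},
}

def ranking(importance_weights):
    """Rank platforms by their total Heatmap contribution under the given weights."""
    totals = {
        platform: sum(importance_weights[f] * perf[f] * SCALE for f in importance_weights)
        for platform, perf in performance.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

baseline = ranking(importance)

# What if stakeholders downgrade one feature by a full Likert unit?
perturbed = {**importance, "Informative feedback report": 2.3}
print(baseline)
print(ranking(perturbed))
# The recommendation is robust only if the top-ranked platform stays the same
```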
For example, in Table 10.5 there is one pink row that should make you cautious. Specifically, none of the platforms offers Single sign on of students through a learning management system, a feature rated Moderately Important by the stakeholders. Its absence from every platform might prompt your platform evaluation team to search for a further alternative platform for consideration. Alternatively, your team might consider commissioning their most-favored platform supplier to develop the feature.
Make the commitment
Once you have identified the platform(s) most likely to meet your foreseeable requirements it is time to consider how the platform – and supplier – will perform in your environment.
- How promptly can a small team of pathfinder teachers ‘get started’ using the platform to deliver real teammate peer assessments with their classes?
- How will lessons learned from the team of your institution’s pathfinders be diffused to other teachers?
- What are the risks and costs of delaying implementation? For example, is there a material risk to the institution’s authority to grant qualifications if group assignments fail to be assessed in a fair, reliable, and valid manner?
- If teammate peer assessment enables more of the institution’s academic programmes to incorporate group assignments, what is the value of teachers’ time saved?
- What are the platform provider’s development plans? How are the priorities determined in relation to current and future users? … and to competitors’ platforms?
- What are the platform provider’s charges for support, training, consulting, and bespoke feature enhancements?
How Peer Assess Pro helps
Table 10.6 How Peer Assess Pro helps platform assessment and adoption
Feature | Benefit |
---|---|
Consulting and training services | Advisory services that help departments identify and evaluate requirements and priorities for adopting technology platforms for improving group assignments through teammate peer assessment. Advice on academic policy and technology adoption strategy. Process facilitation for decision support including Delphi Method. |
Competitively-priced per student license fees | Teachers can self-register to use the platform and create a series of teammate peer assessments at an affordable cost and minimal technical difficulty. Fee discounts for departmental and institution-wide use. |
Development service | New features regularly released in response to current users’ requirements, requests, and our own research-based insights. Bespoke technical developments for special requirements available on request. |
Get started | Comprehensive guide that focusses on guiding a teacher towards confident creation of their first teammate peer assessment. See Chapter 11. |
Workflow guidance | Platform onboarding assistance for new teachers through active warnings presented during the conduct of peer assessment and email prompts during the survey duration. Extensive knowledge base including video tutorials, frequently asked questions, technical support and the 7 step guide. |
Training services | On-line training, in-house workshops and masterclass sessions that raise prospective and current users’ expertise from good to great. |
7 step infographic | Succinct overview providing a visual checklist covering key steps for effective improvements to group assignments, and successful, timely introduction of formative and summative teammate peer assessment. Each step is elaborated in the corresponding eBook chapter. |
© Peer Assess Pro Ltd |