
What peer assessment platforms work for me?

Identify how to select the peer assessment platform that meets your needs


Peer assessment platforms

We explore the features relevant to evaluating and choosing a digital peer assessment platform. We examine how the features are prioritized according to the particular needs of a user and/or their institution. We propose methods for conducting a comparative evaluation of alternative digital platforms when multiple stakeholders from the institution are involved in decision making.


Checklist

  • Identify the general and specific features of an ideal teammate peer assessment platform
  • Identify the features crucial to your peer assessment requirements
  • Evaluate the competitive benefits of alternative peer assessment platforms

Considerations for undertaking teammate peer assessment

Minimum features required of a digital platform

The earlier chapter Why Peer Assessment? (Chapter 2) observes that a teacher requires a teammate peer assessment platform that, as an absolute minimum

  • Surveys all students, enabling them to rate each teammate using a peer assessment rubric
  • Collates the survey results and calculates a peer assessed score for each teammate: a measure of each student’s relative contribution to the team’s work
  • Calculates a personal result for each student by combining their peer assessed score with the team result the teacher awards for each team’s delivered outputs (a generic illustration follows this list)
  • Identifies dysfunctional and at-risk behaviour by teams and individuals, enabling proactive intervention by the teacher and proactive responses by teammates
  • Delivers feedback results to each surveyed student, including the peer assessed result, personal result, team result, and guidance on the behaviour required to improve their future personal result.
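To make the third bullet concrete, here is a purely illustrative sketch of how a personal result might be derived by combining a team result with a peer assessed score. The formula and the weight parameter are hypothetical, generic examples; they are not the calculation used by any particular platform.

    def personal_result(team_result, peer_factor, weight=1.0):
        """Hypothetical illustration only: scale the team result by a normalised
        peer assessed factor (1.0 = contributed at the team average).
        'weight' controls how strongly peer assessment moves the grade."""
        return team_result * (1 + weight * (peer_factor - 1))

    # A student rated at 90% of the average teammate contribution, in a team
    # whose delivered output earned 80/100, with full weighting:
    print(round(personal_result(80, 0.9), 1))   # -> 72.0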

These minimum requirements are illustrated in the digital platform shown in Movie 3.1.

Movie 3.1 A digital platform to enable teammate peer assessment and feedback

Broader considerations for platform selection and adoption

When you consider the activities required to conduct each of the 7 Steps for successful group assignments, you will recognise that additional features are required from your digital teammate peer assessment platform. For example, you must occasionally adjust your team arrangements during a survey to respond to new students arriving in your class and/or students who need to be reallocated to another team. Moreover, there are wider factors to consider, such as compatibility with your institution’s platforms for gradebook management, single sign-on for students and teachers, and integration with learning management systems. Equally important, there are technology transfer issues.

  • How will teachers and academic programme leaders discover the need and opportunity for conducting teammate peer assessment? … as part of existing group assignments conducted within the institution? … as part of new academic programme design or redesign?
  • How will teachers and students learn to make effective use of a teammate peer assessment platform?
  • Who will assure that the claims of validity, reliability and fairness made for the approaches used to conduct teammate peer assessment are actually delivered? What standards and processes will assessors from within and beyond the institution apply?
  • How can the institutional risks of using group assignments and teammate peer assessment be minimized through correct selection and application of academic policies?

A tale of three platforms

This chapter presumes you are leading a project team empowered to investigate and recommend the adoption of a teammate peer assessment platform, and to propose its roll-out to prospective users. You have identified three teammate peer assessment platforms that appear to meet your initial assessment of requirements. If you pursue a well-structured, rational, analytic decision-making process for advanced technology adoption, these are the essential steps, which I’ll illustrate with a case example later.

  • Identify the stakeholder groups material to the decision-making process
  • Identify a schedule of requirements and Features by consulting representatives of the stakeholder groups. See Table 10.1.
  • Prioritise stakeholders’ requirements according to degree of Importance, Table 10.2 and Table 10.3.
  • Generate a schedule of candidate technology platform suppliers
  • Evaluate the extent to which each candidate Platform meets every stakeholder requirement, Table 10.4.
  • Calculate a heatmap contribution value for every platform evaluation, Table 10.5. The heatmap value combines Importance and Performance for each Feature.
  • For each candidate platform sum the heatmap contribution values across all requirements.
  • Select the candidate platform that achieves the highest heatmap sum that meets all requirements that are Moderately important or better, Figure 10.1.

Identify the stakeholder groups

Material stakeholders include those actors with an interest in adopting and using educational technology, or in assuring the validity of educational assessments. Specifically, a material stakeholder may be affected by the platform approach adopted, or may be able to influence matters appreciably. It is important to identify their current roles, interests and power positions. Stakeholder groups in a higher education setting include

  • Teachers of small and large classes
  • Teachers’ tutors and administrative support
  • Learning consultant support for teachers and students
  • Budget holders, for example, Departmental or faculty manager, Director of teaching and learning
  • Technology support
  • Institutional analysts, for example, student success and retention
  • Senior policy makers, for example Faculty deans, Academic programme directors
  • …. and what about the students?

Identify a schedule of platform requirements

Before you consult with representatives of your stakeholder groups, you’ll form a preliminary assessment of the key benefits sought from deploying a teammate peer assessment platform. As you consult your stakeholders, your assessment will be extended! A preliminary assessment of key benefits should include

  • Decision support and time savings. The teacher saves time deploying and managing the survey, interpreting results, and arranging for feedback to students. Outlier, dysfunctional or missing responses from students and teams are highlighted for the teacher’s attention, supported by exploratory data analysis.
  • Grade fairness and student development. Each student receives a timely, personalized peer assessment report that gives them confidence their personal result is fair and provides guidance on what behaviours and attitudes they must adopt to achieve improved results from future teamwork.
  • Confidence, validity and reliability. The survey rubric and calculations undertaken by the platform conform with good practice and are correct. The peer assessed results from one teammate peer assessment are comparable with other surveys from within the same class, between classes in the same academic programme, and across the entire institution… and beyond.
  • Flexibility. The teacher can accommodate rearrangements of teams whilst the survey is in progress. Alternative methods are available for calculating personal results from peer assessed scores, together with scale factors that weight the degree to which the team result and/or peer assessed score influence each student’s personal result.
  • Workflow integration. The workflow required of the platform’s users (students, teachers, tutors, and support staff) is logical, familiar and integrates with other institutional systems such as learning management systems: for example, gradebook integration, and single sign-on for students, teachers and tutors administering the same academic programme or assessment event.
  • End user support. Guidance from the platform provider to enable secure, timely adoption and productive use of the platform: for example, help guides, training, consultancy, and feature extension.

Once you have identified your preliminary assessment of requirements, you consult with representatives of each stakeholder group to establish a comprehensive, ideal ‘wish list’ schedule of requirements. To achieve a schedule like Table 10.1, you might conduct a survey, focus group, brainstorm or brainwriting session. One approach to identifying requirements is to ask each stakeholder to complete the statement “As stakeholder … I want the Feature … so that I can receive the Benefit …”.

Table 10.1 presents a preliminary schedule of stakeholders, Features (“I want…”) and Benefits (“So that I can…”) sought from a teammate peer assessment platform. If you have several stakeholder groups, each with their favorite platform requirements, the size of your version of Table 10.1 could be substantial. For example, eight stakeholder groups each with four requirements gives around 32 items, assuming no overlaps. Consequently, the next step is to weight the Importance of each of these requirements.

Table 10.1 Illustrative schedule of stakeholder requirements

As a | I want … | So that I can …
Teacher | Calculation of a personal result from peer assessed contribution and team results | Award a fair grade to students undertaking group assignments.
Teacher | Identification of outlier, dysfunctional or at risk responses | Intervene with a proactive coaching moment, or request a survey resubmission from the student or team.
Teacher | Convenient rearrangement of team compositions | Cope with late enrollments, no shows, or dysfunctional team members with minimal disruption to the survey process.
Teacher | Convenient guidance on how to use the platform | Save my time, reduce the risk of problems, help my students get fair grades, help my students develop their teamwork capabilities, and minimise the risk of student complaints.
Teacher | Training module for peer assessment rating | Raise assessment validity through my students learning to rate each other accurately and fairly.
Teacher | Several calculation methods and scale factors for formative and summative assessment | Weight the importance of either the team result or peer assessed score in determining a student’s personal result.
Teacher | Log of all communications to students | Confirm what and when students have received requests to complete the survey, review results, or make a survey resubmission.
Teacher | Single sign on of teacher through learning management system | Delegate management of my peer assessment surveys to my tutors and teacher aides.
Teacher | Evaluation and grading of qualitative feedback and rating behaviour | Influence students to produce better quality feedback and ratings.
Teacher | Flexible peer assessment rubrics | Adapt the platform to some other purpose.
Student | Informative feedback report | Understand why I received the grade awarded through peer assessment, and what behaviours I must adjust to gain a better result in future teamwork.
Student | Anonymous ratings of teammates | Provide my feedback without fear of retribution.
Student | Self-help teamwork resources | Improve my ability to contribute to teamwork, develop leadership and learn from peer feedback.
Student | Single sign on of student through learning management system | Locate all my peer assessment surveys and results in one place.
Learning consultant | Single platform with standard teamwork capabilities rubric | Develop a common set of self-help resources for teachers and students for teamwork capability development and teammate peer assessment processes.
Learning consultant | Identification of persistent at-risk students | Ensure future teachers can proactively support these students’ success.
Learning consultant | Industry-standard teammate peer assessment rubric | Compare students’ progress developing teamwork capabilities across the entire institution.
Faculty dean | Valid, fair, and reliable regime for group assessments | Assure the academy, the professions and employers that graduates meet the leadership and teamwork capabilities specified for the qualification they have been awarded.
Faculty dean | Common set of academic policies for group assignments | Minimise complaints and appeals from students about the conduct of teammate peer assessment and group assignments.
Technology director | Cost-effective installation, operation and support | Ensure the tangible and intangible benefits to all stakeholders exceed lifetime costs.
Technology director | Capacity for timely addition of new platform features | Be confident the relationship with platform suppliers is worth the initial investment and engagement.
Technology director | Learning management system integration | Ensure convenient, but strict, control over access to the platform and its data by all stakeholders.
© Peer Assess Pro Ltd

Prioritise the importance of your requirements

It is most unlikely that an existing technology platform will satisfy the full schedule of ideal requirements identified in Table 10.1. Consequently, you now seek consensus amongst your stakeholders about what you “really, really want…”. Here is where ‘angels may fear to tread’! However, this prioritisation is a crucial step before issuing a request for proposal to a supplier, or before settling down to evaluate the relative worth of available platforms.
Begin by conducting a survey in which every stakeholder rates the Importance of every “I want…” Feature in your list through applying a Likert Scale of Importance.

  1. Not at all important
  2. Slightly important
  3. Moderately important
  4. Very important
  5. Extremely important

 
Suppose your survey results in the Importance ratings shown in Table 10.2. For the purposes of illustration, the table shows responses from representatives of two stakeholder groups: Teachers and Learning Consultants.

Note that in addition to rating each feature with a different Importance, each stakeholder group could perceive different Benefits arising from the same feature. Therefore, you might also request each stakeholder to report their “So that I can…” justification statement as part of the same initial survey, for reasons that will become apparent when the Delphi Method for obtaining consensus is introduced.

Develop consensus on relative importance of features

In Table 10.2, observe the ‘high agreement’ where both stakeholders rate the feature Informative feedback report as Very Important. Both agree wholeheartedly. In contrast, they disagree markedly about several features. For example, the Teacher rates the feature Single sign on of student through learning management system as Slightly Important, whilst the Learning Consultant rates the same feature as Extremely Important. How do we resolve disagreements like this? Do we give more weight to the teacher’s rating (“I just want to get the job done!”) or more weight to the strategically perceptive Learning Consultant? (“There are broader considerations concerning students’ privacy and security that must inform our technology adoption strategy”).

Through some process of discussion, negotiation – and, perhaps, brutal assertion of budgetary power – the decision-making group must agree on a combined set of features and importance ratings. You seek to produce a schedule like Table 10.3. But how? Two broad approaches exist to achieve a combined importance of the ideal features:

  • Quick and dirty, for example simply average the importance rating provided by each stakeholder.
  • Slow and robust consensus, for example, by applying the Delphi Method.

A quick and dirty method involves simply averaging the ratings provided by each stakeholder. This presumes each stakeholder is well-informed and has equal responsibility, authority and interest in the outcome of the decision. This is an improbable, even presumptuous, assumption. Note that the statistical median is a better statistic to use here than the arithmetic mean.
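As a minimal sketch of the quick and dirty approach, the ratings can be recoded to the 0 to 4 scale used later in the heatmap calculation and then aggregated with the median. The stakeholder names and ratings below are illustrative assumptions, not data from Table 10.2.

    from statistics import median

    # Recode the Importance Likert Scale to its numerical equivalent (0 to 4)
    IMPORTANCE = {"Not at all important": 0, "Slightly important": 1,
                  "Moderately important": 2, "Very important": 3,
                  "Extremely important": 4}

    # Illustrative ratings of one feature by three stakeholders
    ratings = {"Teacher": "Very important",
               "Learning consultant": "Extremely important",
               "Faculty dean": "Very important"}

    recoded = [IMPORTANCE[label] for label in ratings.values()]
    print(median(recoded))   # -> 3; the median resists distortion by outlier ratings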

In contrast, the Delphi Method is a long-established and formal approach to obtaining opinion consensus amongst stakeholders with divergent expertise, competing interests and priorities. The method relies on an iterative procedure through which stakeholders provide several rounds of answers to a series of questions. The key features of the method are (a) anonymity (b) controlled feedback (c) statistical group responses (Gigch, 1978; Turoff & Linstone, 2002). An example applying the Delphi Method to prioritise the design elements for the information technology experience in a university business school is reported in Mellalieu & Hooper (1994).

In simple terms, the Delphi Method proceeds as follows.

  • The anonymised results of the first survey, which rated Importance and the rationale (Benefit sought), are circulated to each expert respondent. Statistical reports highlight areas of divergent opinion (dispersion) in the Importance ratings, such as those observed in Table 10.2.
  • In a second round of the survey, respondents are invited to reconsider their ratings of Importance in light of the anonymised justifications provided by previous respondents about their need for the Feature. In some cases a stakeholder might argue that a particular feature must definitely not be part of the requirements.
  • After the first round, Features where there is high consensus on the level of Importance are dropped from consideration in future rounds. For example, considering Table 10.2, there is complete agreement amongst stakeholders that the feature Anonymous ratings of teammates rates Extremely Important and Self-help teamwork resources rates Moderately Important. The column Revisit signifies all Features where there is disagreement of at least two units on the Likert Scale of Importance. These Revisit items should be included in the second round of the survey.

For example, a teacher might determine the importance ratings shown in Table 10.2. In contrast, the academic vice-chancellor may regard the feature Valid, fair, and reliable regime for group assessments as Extremely Important (a ‘must have’), whereas the teacher rated the same feature only Slightly Important (‘prefer to have’). Similarly, the student success and retention analyst could regard the feature Identification of persistent at-risk students as a must have.

Table 10.2 Illustrative ratings of feature importance by two stakeholders

Feature | Importance to Teacher | Importance to Learning Consultant | Revisit?
Calculation of a personal result from peer assessed contribution and team results | Extremely important | Very important
Identification of outlier, dysfunctional or at risk responses | Very important | Extremely important
Convenient rearrangement of team compositions | Very important | Extremely important
Convenient guidance on how to use the platform | Moderately important | Extremely important | Revisit
Training module for peer assessment rating | Moderately important | Extremely important | Revisit
Several calculation methods and scale factors for formative and summative assessment | Slightly important | Moderately important
Log of all communications to students | Moderately important | Very important
Single sign on of teacher through learning management system | Slightly important | Moderately important
Evaluation and grading of qualitative feedback and rating behaviour | Slightly important | Moderately important
Flexible peer assessment rubrics | Slightly important | Not at all important
Informative feedback report | Very important | Very important
Anonymous ratings of teammates | Extremely important | Extremely important
Self-help teamwork resources | Moderately important | Moderately important
Single sign on of student through learning management system | Slightly important | Extremely important | Revisit
Single platform with standard teamwork capabilities rubric | Slightly important | Very important | Revisit
Identification of persistent at-risk students | Very important | Very important
Industry-standard teammate peer assessment rubric | Slightly important | Very important | Revisit
Valid, fair, and reliable regime for group assessments | Slightly important | Very important | Revisit
Common set of academic policies for group assignments | Slightly important | Very important | Revisit
Cost-effective installation, operation and support | Slightly important | Very important | Revisit
Capacity for timely addition of new platform features. | Slightly important | Moderately important
Learning management system integration | Slightly important | Very important | Revisit
Importance Rating Likert Scale: 0 = Not at all important, 1 = Slightly important, 2 = Moderately important, 3 = Very important, 4 = Extremely important. ‘Revisit’ is indicated when the disagreement between stakeholders is greater than one Likert Scale unit. © Peer Assess Pro. All rights reserved.

The process is repeated several times until acceptable consensus is achieved on most Features; that is, when the dispersion of Importance ratings across the stakeholders is low. For example, an ‘acceptable’ consensus could mean a dispersion of no more than one Likert Scale unit, as illustrated by the first three features listed in Table 10.2.
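A minimal sketch of the consensus test described above, assuming ratings have already been recoded to the 0 to 4 scale. The three features and their ratings are taken from Table 10.2 (Teacher first, Learning Consultant second).

    # Flag features whose Importance ratings disagree by more than one Likert unit;
    # these are the Revisit items to carry into the next Delphi round.
    def needs_revisit(ratings, tolerance=1):
        return max(ratings) - min(ratings) > tolerance

    first_round = {
        "Convenient guidance on how to use the platform": [2, 4],  # Moderately vs Extremely
        "Informative feedback report": [3, 3],                     # full agreement
        "Single sign on of student through LMS": [1, 4],           # Slightly vs Extremely
    }

    revisit = [feature for feature, r in first_round.items() if needs_revisit(r)]
    print(revisit)
    # -> ['Convenient guidance on how to use the platform',
    #     'Single sign on of student through LMS']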

Correctly conducted, the Delphi Method removes several shortcomings of group decision processes, such as face-to-face confrontation, the effect of status, and the order of voting. However, the Delphi Method is not appropriate in situations where changes of opinion are not possible or are hard to admit (Gigch, 1978).

Table 10.3 Final importance of features agreed across all stakeholder groups

Feature | Agreed Importance | Importance
Anonymous ratings of teammates | 4 | Extremely important
Calculation of a personal result from peer assessed contribution and team results | 3.7 | Extremely important
Informative feedback report | 3.3 | Very important
Identification of outlier, dysfunctional or at risk responses | 3 | Very important
Training module for peer assessment rating | 2.7 | Very important
Identification of persistent at-risk students | 2.7 | Very important
Convenient rearrangement of team compositions | 2.3 | Moderately important
Convenient guidance on how to use the platform | 2.3 | Moderately important
Self-help teamwork resources | 2.3 | Moderately important
Single sign on of student through learning management system | 2.3 | Moderately important
Log of all communications to students | 2 | Moderately important
Single platform with standard teamwork capabilities rubric | 2 | Moderately important
Valid, fair, and reliable regime for group assessments | 2 | Moderately important
Common set of academic policies for group assignments | 2 | Moderately important
Several calculation methods and scale factors for formative and summative assessment | 1.3 | Slightly important
Evaluation and grading of qualitative feedback and rating behaviour | 1.3 | Slightly important
Industry-standard teammate peer assessment rubric | 1.3 | Slightly important
Cost-effective installation, operation and support | 1.3 | Slightly important
Learning management system integration | 1.3 | Slightly important
Single sign on of teacher through learning management system | 1 | Slightly important
Capacity for timely addition of new platform features. | 1 | Slightly important
Flexible peer assessment rubrics | 0.3 | Not important at all
Agreed Importance value deduced from recoding of the Likert Scale for Importance. See Table 10.2. © Peer Assess Pro. All rights reserved.

For the purpose of explanation, let’s presume that the stakeholders have agreed the final ratings of Importance presented in Table 10.3. The table shows the agreed level of importance for each Feature, determined either through quick and dirty averaging or a Delphi process, and is sorted in descending order of that agreed rating. Now the stakeholders are well prepared to evaluate the Performance of candidate platforms against the schedule of features.
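The Importance label in Table 10.3 can be reproduced from the agreed numeric value by mapping it back onto the nearest Likert band, where, for example, values from 2.5 up to (but not including) 3.5 are labelled Very important, as stated in the Extra for experts section below. A minimal sketch:

    LABELS = ["Not important at all", "Slightly important", "Moderately important",
              "Very important", "Extremely important"]

    def importance_label(agreed_value):
        """Map an agreed numeric Importance (0 to 4) to its Likert label.
        Band boundaries fall at 0.5, 1.5, 2.5 and 3.5."""
        return LABELS[min(4, int(agreed_value + 0.5))]

    for value in (4, 3.7, 3.3, 2.3, 1.3, 0.3):
        print(value, importance_label(value))
    # 4 Extremely important, 3.7 Extremely important, 3.3 Very important,
    # 2.3 Moderately important, 1.3 Slightly important, 0.3 Not important at all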


Compare alternative platforms

Suppose you have identified a shortlist of three candidate platforms: Alpha, Charlie and Papa. The expert group of stakeholders rates the expected Performance of each platform against the required Features and Benefits sought. Ignore matters related to the Importance of the platform features at this stage: so long as a Feature rates Slightly Important or better, it is worth evaluating. The Importance rating will be reconsidered later.

The stakeholders obtain consensus about each platform’s Performance through one of the following approaches

  • Delegating the evaluation to a technical specialist or group
  • Meeting and arguing their assessment
  • Conducting a survey
  • Engaging in another process using a set of surveys informed by the Delphi Method.

Some aspects of performance can be measured technically: for example, “How many students can the platform handle?”, “How many teams can the platform handle in a single peer assessment survey?”, “What is the cost per survey?” Other features are binary: they either exist in the platform or do not. Finally, the performance evaluation of some features is a matter of opinion and judgement: “How adequately does the platform draw the teacher’s attention to outlier, dysfunctional, or at risk responses?” The latter kind of question can be evaluated using a Performance Evaluation Likert Scale.

  1. Far below requirements
  2. Below requirements
  3. Meets requirements
  4. Above requirements
  5. Far above requirements

 
In general terms, these different types of measurement can and should all be tabulated. However, be cautious about technical measurements. Suppose the largest class in your institution is 1500 students, with a typical team size of four to five teammates. You therefore require a platform with the capacity to handle 300 to 400 teams fluently. You anticipate incredible future growth in class sizes: perhaps in ten years’ time you anticipate 4500 students per class, implying a requirement to handle around 1200 teams. Is a platform that can handle 4000 teams in one assessment ten times better than a platform that can handle only 400 teams, your current maximum anticipated requirement? I suggest the 4000-team platform be evaluated as performing ‘Far above requirements’. Nice to have in reserve, but probably not necessary or valuable for a reasonable three-year planning horizon. It’s worth a small bonus in your overall evaluation scheme, but not a major bonus when you have many other requirements to consider.
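The capacity arithmetic above is simple to check. A minimal sketch, assuming teams are formed at a fixed size:

    from math import ceil

    def teams_required(students, team_size):
        return ceil(students / team_size)

    print(teams_required(1500, 5), teams_required(1500, 4))   # -> 300 375
    print(teams_required(4500, 4))                            # -> 1125, roughly 1200 teams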

You have decided to survey each stakeholder, asking a sequence of questions to which the response is a rating on the Performance Evaluation Likert Scale presented above. For example, ‘What is the extent to which the following platforms meet our Performance requirement for the Feature Anonymous ratings of teammates?’

The response from one stakeholder, the Learning Consultant, is tabulated in Table 10.4. Expect a set of different Performance evaluations by each stakeholder.

Score contribution from importance and performance

You now combine the Importance rating of each feature with all respondents’ Performance evaluations of each platform to yield a table of contributions, Table 10.5, the data from which is presented subsequently as the summative bar chart comparison, Figure 10.1.

Inspecting Figure 10.1, you might rush to conclude that Platform Papa is the superior platform and should be your choice. However, probe the heatmap in Table 10.5 carefully to ensure that

  • Your most important requirements are at least met by your most preferred platform(s); a simple screening check is sketched after this list
  • You are not seduced by paying for features that are far above your requirements. The costs of installation, operation, training and support have not yet been included in the analysis.
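The first check in the list above can be automated. This minimal sketch flags any Very or Extremely important feature (agreed Importance of 2.5 or more) where a candidate platform performs below Meets requirements; the three features and ratings shown are abbreviated from Tables 10.3 and 10.4.

    # Screen a platform for Very/Extremely important features it fails to meet
    def unmet_priorities(importances, performance, threshold=2.5):
        return [feature for feature, imp in importances.items()
                if imp >= threshold and performance.get(feature, 0) < 2]

    importances = {"Informative feedback report": 3.3,
                   "Training module for peer assessment rating": 2.7,
                   "Flexible peer assessment rubrics": 0.3}
    alpha = {"Informative feedback report": 2,                  # Meets
             "Training module for peer assessment rating": 0,   # Far below
             "Flexible peer assessment rubrics": 3}             # Above

    print(unmet_priorities(importances, alpha))
    # -> ['Training module for peer assessment rating']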

Table 10.4 Performance evaluation of several platforms by one stakeholder

Feature | Platform Alpha | Platform Charlie | Platform Papa
Calculation of a personal result from peer assessed contribution and team results | Below | Meets | Meets
Identification of outlier, dysfunctional or at risk responses | Far below | Meets | Far above
Convenient rearrangement of team compositions | Below | Meets | Meets
Convenient guidance on how to use the platform | Far below | Meets | Above
Training module for peer assessment rating | Far below | Far above | Meets
Several calculation methods and scale factors for formative and summative assessment | Far below | Meets | Far above
Log of all communications to students | Far below | Far below | Far above
Single sign on of teacher through learning management system | Far below | Far below | Far below
Evaluation and grading of qualitative feedback and rating behaviour | Far above | Far below | Below
Flexible peer assessment rubrics | Above | Far below | Far below
Informative feedback report | Meets | Meets | Above
Anonymous ratings of teammates | Meets | Meets | Meets
Self-help teamwork resources | Far below | Below | Meets
Single sign on of student through learning management system | Far below | Far below | Below
Single platform with standard teamwork capabilities rubric | Far below | Meets | Meets
Identification of persistent at-risk students | Below | Below | Meets
Industry-standard teammate peer assessment rubric | Far below | Meets | Meets
Valid, fair, and reliable regime for group assessments | Meets | Meets | Far above
Common set of academic policies for group assignments | Below | Below | Above
Cost-effective installation, operation and support | Above | Meets | Meets
Capacity for timely addition of new platform features. | Far below | Below | Far above
Learning management system integration | Far below | Far below | Below
Performance Rating Likert Scale: 0 = Far below requirements, 1 = Below requirements, 2 = Meets requirements, 3 = Above requirements, 4 = Far above requirements. © Peer Assess Pro. All rights reserved.

The colour-coded heatmap of Table 10.5 highlights visually the relative contribution made to your stakeholders’ requirements by each Feature of each platform. For example, Platform Alpha is clearly below requirements on many of your top priority ‘must have’ features, so it can reasonably be discarded from further consideration by the project team. In contrast, platforms Charlie and Papa meet or exceed many of the features rated Very Important or Extremely Important. The deep green heatmap cells for Platform Papa highlight its superior performance in producing an Informative feedback report for students and in Identification of outlier, dysfunctional or at risk responses. In contrast, Platform Charlie has a superior Training module for peer assessment rating. Perhaps it’s time to start a bidding war between the suppliers of these two platforms!

Table 10.5 Heatmap comparing all peer assessment platforms evaluated by all stakeholders

Feature | Agreed Importance | Alpha | Charlie | Papa
Anonymous ratings of teammates | Extremely important | 50 | 50 | 50
Calculation of a personal result from peer assessed contribution and team results | Extremely important | 23 | 46 | 46
Informative feedback report | Very important | 42 | 42 | 63
Identification of outlier, dysfunctional or at risk responses | Very important | 19 | 38 | 75
Training module for peer assessment rating | Very important | 0 | 67 | 33
Identification of persistent at-risk students | Very important | 17 | 17 | 33
Convenient rearrangement of team compositions | Moderately important | 29 | 29 | 29
Convenient guidance on how to use the platform | Moderately important | 15 | 29 | 44
Self-help teamwork resources | Moderately important | 0 | 29 | 29
Single sign on of student through learning management system | Moderately important | 0 | 0 | 0
Log of all communications to students | Moderately important | 0 | 0 | 38
Single platform with standard teamwork capabilities rubric | Moderately important | 0 | 25 | 25
Valid, fair, and reliable regime for group assessments | Moderately important | 25 | 25 | 50
Common set of academic policies for group assignments | Moderately important | 13 | 13 | 38
Several calculation methods and scale factors for formative and summative assessment | Slightly important | 0 | 17 | 25
Evaluation and grading of qualitative feedback and rating behaviour | Slightly important | 33 | 0 | 8
Industry-standard teammate peer assessment rubric | Slightly important | 0 | 17 | 17
Cost-effective installation, operation and support | Slightly important | 25 | 17 | 17
Learning management system integration | Slightly important | 0 | 8 | 8
Single sign on of teacher through learning management system | Slightly important | 0 | 0 | 0
Capacity for timely addition of new platform features. | Slightly important | 0 | 6 | 25
Flexible peer assessment rubrics | Not important at all | 6 | 0 | 0
Total Heatmap Contribution | | 291 | 475 | 653
Average Heatmap Contribution by Platform | | 14 | 22 | 30
Heatmap = (Importance)(Performance)(Scale), where the Importance and Performance ratings are recoded from the Likert Scales presented in Tables 10.3 and 10.4. Scale = 100/(4 × 4) = 6.25. Maximum possible Heatmap value is 100 per Feature. © Peer Assess Pro. All rights reserved.

Extra for experts

The Heatmap values that produce Table 10.5 are determined from the formula

Heatmap = (Importance)(Performance)(Scale)

The procedure for calculating Heatmap values is as follows.

  • All Importance and Performance Likert Scale values are recoded to their numerical equivalents (0 to 4)
  • Each cell of the heatmap table is the product of the Feature’s Importance and the platform’s Performance aggregated across all stakeholder respondents; the statistical median is the recommended aggregation statistic
  • For the purpose of visualization, the product is scaled to the range 0 to 100. Since the maximum value on both Likert Scales is 4, the Scale factor is 100/(4 × 4) = 100/16 = 6.25

For instance, consider the cell in Table 10.5 for the feature Informative feedback report and the column Platform Papa. Table 10.3 shows the numerical rating of Importance agreed by all stakeholders is 3.3333, rounded to 3.3 in the table. On the Likert Scale, this value is labelled Very Important (values from 2.5 up to, but not including, 3.5).

The Performance evaluations from the three stakeholders for Platform Papa for this feature are

  • Teaching and Learning Consultant = Above requirements = 3 (As reported in Table 10.4)
  • Student = Above requirements = 3
  • Teacher = Above Requirements = 3

Therefore, the median Performance evaluation across all stakeholders is 3.

Applying the formula to the cell for the feature Informative feedback report and the column Platform Papa yields the value shown (rounded to 63) in Table 10.5.

Heatmap = (Importance)(Performance)(Scale) = (3.3333)(3)(100/16) = 62.5

Note that the maximum value for Heatmap that can be obtained is 100, equivalent to a Feature whose Importance is rated Extremely Important AND whose Performance is evaluated as Far Above Requirements.
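A minimal sketch of the cell calculation just described, together with the summation that produces a platform’s Total and Average Heatmap Contribution. The short list of contributions at the end is an abbreviated, illustrative subset of a platform’s column, not the full Table 10.5.

    from statistics import median

    SCALE = 100 / (4 * 4)   # = 6.25, so the maximum heatmap value per Feature is 100

    def heatmap(importance, performances):
        """importance: agreed numeric Importance (0 to 4).
        performances: recoded Performance ratings (0 to 4) from all stakeholders."""
        return importance * median(performances) * SCALE

    # Worked cell: Informative feedback report x Platform Papa
    cell = heatmap(3.3333, [3, 3, 3])   # Teacher, Student and Learning Consultant all rate Above
    print(round(cell, 1))               # -> 62.5, shown rounded to 63 in Table 10.5

    # Total and average contribution across features for one platform
    contributions = [50, 46, 63, 75, 33]   # abbreviated, illustrative column
    print(sum(contributions), round(sum(contributions) / len(contributions)))   # -> 267 53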

Figure 10.1 Summative comparison of peer assessment platforms evaluated by all stakeholders

Sensitivity analysis

The calculation that Platform Papa earns an average Heatmap Contribution of 30/100, whereas Platform Alpha scores 14/100, does not mean that Platform Papa is twice as good as Platform Alpha. Only the rank order of platforms is material. In technical terms, “No cardinal significance can be assigned to the particular scores obtained … considering that the weights [Importance] result from purely subjective judgements without regard to any scaling concerns” (Gigch, p. 301).

The preceding tables and figures are not the end of the analysis, but a basis for provoking further discussion amongst the stakeholders. Consequently, it’s time to conduct ‘What if?’ analysis, technically known as sensitivity analysis. How robust is the recommended platform in light of the assumptions and judgements made about Importance and Performance? To what extent can these evaluations vary before the top-ranked platform is replaced by the next best alternative? How realistic are those variations?

For example, in Table 10.5 there is one pink row that should make you cautious: none of the platforms offers Single sign on of students through a learning management system, a feature the stakeholders rated Moderately Important. Its absence from every platform might prompt your evaluation team to search for an additional candidate platform, or to consider commissioning the most-favoured supplier to develop the feature.
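One practical way to run a ‘What if?’ test is to recompute each platform’s total heatmap contribution after perturbing an agreed Importance value and check whether the ranking changes. The sketch below uses an abbreviated, illustrative set of three features whose Performance ratings correspond to the Charlie and Papa columns of Table 10.5; the what-if scenario is a hypothetical assumption.

    from statistics import median

    SCALE = 100 / 16

    def total_heatmap(importances, platform_performance):
        return sum(importances[f] * median(perf) * SCALE
                   for f, perf in platform_performance.items())

    importances = {"Informative feedback report": 3.3,
                   "Identification of outlier responses": 3.0,
                   "Training module for peer assessment rating": 2.7}
    platforms = {
        "Charlie": {"Informative feedback report": [2],
                    "Identification of outlier responses": [2],
                    "Training module for peer assessment rating": [4]},
        "Papa":    {"Informative feedback report": [3],
                    "Identification of outlier responses": [4],
                    "Training module for peer assessment rating": [2]},
    }

    # What if the training module were judged Extremely important (4) rather than 2.7?
    for scenario, weight in (("agreed", 2.7), ("what-if", 4.0)):
        importances["Training module for peer assessment rating"] = weight
        totals = {p: round(total_heatmap(importances, perf)) for p, perf in platforms.items()}
        print(scenario, totals)   # Papa stays ahead in both scenarios: the choice is robust here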


Make the commitment

Once you have identified the platform(s) most likely to meet your foreseeable requirements, it is time to consider how the platform – and its supplier – will perform in your environment.

  • How promptly can a small team of pathfinder teachers ‘get started’ using the platform to deliver real teammate peer assessments with their classes?
  • How will lessons learned from the team of your institution’s pathfinders be diffused to other teachers?
  • What are the risks and costs of delaying implementation? For example, is there a material risk to the institution’s authority to grant qualifications if group assignments fail to be assessed in a fair, reliable, and valid manner?
  • If teammate peer assessment enables more of the institution’s academic programmes to incorporate group assignments, what is the value of teachers’ time saved?
  • What are the platform provider’s development plans? How are priorities determined in relation to current and future users? … and to competitors’ platforms?
  • What are the platform provider’s charges for support, training, consulting, and bespoke feature enhancements?

How Peer Assess Pro helps

Table 10.6 How Peer Assess Pro helps platform assessment and adoption

Feature | Benefit
Consulting and training services | Advisory services that help departments identify and evaluate requirements and priorities for adopting technology platforms that improve group assignments through teammate peer assessment. Advice on academic policy and technology adoption strategy. Process facilitation for decision support, including the Delphi Method.
Competitively-priced per-student license fees | Teachers can self-register to use the platform and create a series of teammate peer assessments at an affordable cost and with minimal technical difficulty. Fee discounts for departmental and institution-wide use.
Development service | New features regularly released in response to current users’ requirements, requests, and our own research-based insights. Bespoke technical developments for special requirements available on request.
Get started | Comprehensive guide that focuses on guiding a teacher towards confident creation of their first teammate peer assessment. See Chapter 11.
Workflow guidance | Platform onboarding assistance for new teachers through active warnings presented during the conduct of a peer assessment and email prompts during the survey period. Extensive knowledge base including video tutorials, frequently asked questions, technical support and the 7 step guide.
Training services | Online training, in-house workshops and masterclass sessions that raise prospective and current users’ expertise from good to great.
7 step infographic | Succinct overview providing a visual checklist covering key steps for effective improvements to group assignments, and for successful, timely introduction of formative and summative teammate peer assessment. Each check step is elaborated in the corresponding eBook chapter.
© Peer Assess Pro Ltd