Enhanced consensus scoring
From the June 2017 print edition
In recent years, group evaluations and consensus scoring have been the subject of numerous successful legal challenges that have exposed serious flaws in the evaluation practices of many purchasing institutions. This article discusses enhanced consensus scoring protocols, which allow evaluation teams to operate efficiently while staying within the bounds of their due diligence duties.
Scoping: balancing efficiency and defensibility
Enhanced consensus scoring seeks to strike a balance between two extremes: the mechanical averaging of individual evaluator scores, which forgoes the benefit of any peer review, and the collective group scoring of every evaluation score, which tends to lead to protracted evaluation sessions with questionable end-results. By reserving group discussion for genuine divergences, enhanced consensus scoring aims to capture the benefits of peer review without the delays of full group scoring, balancing efficiency and defensibility.
Enhanced consensus scoring sessions occur after each evaluator is provided sufficient time to conduct an individual evaluation and to record his or her initial evaluation results. Only those scores that fall outside the pre-set variation tolerance are addressed during the consensus scoring sessions. Scores that fall within the pre-established range of variation tolerance are not discussed during consensus scoring and are instead simply averaged.
One of the main advantages of enhanced consensus scoring is that, instead of diluting the limited resources of the evaluation team, it focuses the team’s efforts on the areas with the largest divergence in initial scoring, since those areas are most likely to contain errors that would impact the ultimate rankings and undermine the fairness of the process.
Setting the variation tolerance
The variation tolerance should be set before the enhanced consensus scoring session. There is no single formula for variation tolerance since evaluation teams use different scoring ranges between and even within specific RFPs. For example, if evaluators are scoring a category on a one-to-ten scale, the variation tolerance could be set to flag any scores that are more than three points apart. When using a one-to-five scale, it could be set to flag any scores that are more than two points apart. Those scores that fall within the acceptable range would be averaged at the end of evaluations and would not be raised for group discussion.
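As a rough illustration, the tolerance check described above can be sketched in a few lines of code. This is a minimal sketch, not part of any standard protocol: the function name, the data layout (a mapping of criteria to each evaluator's initial scores), and the sample figures are all assumptions made for the example.

```python
def flag_outliers(initial_scores, tolerance):
    """Return the criteria whose score spread exceeds the variation tolerance.

    initial_scores: dict mapping criterion -> list of evaluator scores
    tolerance: maximum acceptable spread (max - min); anything wider is
               raised for group discussion rather than simply averaged
    """
    outliers = {}
    for criterion, scores in initial_scores.items():
        spread = max(scores) - min(scores)
        if spread > tolerance:
            outliers[criterion] = scores
    return outliers

# Hypothetical example: one-to-ten scale, tolerance of three points.
scores = {
    "technical approach": [7, 8, 6],  # spread of 2 -> averaged, not discussed
    "experience": [9, 4, 7],          # spread of 5 -> raised for discussion
}
flagged = flag_outliers(scores, tolerance=3)
```

Only the "experience" scores would be carried into the consensus session here; the "technical approach" scores fall within tolerance and default to averaging.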
Coordinating the enhanced consensus scoring sessions
The outlier scores (those falling outside of the variation tolerance) should be identified by the group evaluation moderators (typically procurement advisors who are not members of the evaluation team) prior to the enhanced consensus scoring sessions.
Where possible, those items with the greatest range of divergence should be addressed first during the enhanced consensus scoring sessions. This is particularly important in situations where the evaluation group has a finite amount of time allocated to group scoring since this will better ensure that the items of greatest potential impact on the ultimate rankings are addressed by the group within the allocated time. This is less critical in situations where the project team is prepared to allow for additional sessions if all outliers are not addressed in the initial session. Either way, this issue should be addressed prior to the first group session to confirm whether the sessions will be conducted based on: (i) prioritization within a finite amount of time (where the remaining outliers would default to averaging); or (ii) non-prioritized discussion of outliers with the contingency of additional sessions if time runs out before all outliers are discussed.
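The prioritization approach above — address the widest divergences first and let any remaining outliers default to averaging when time runs out — can be sketched as follows. The function name, the cutoff parameter, and the sample data are illustrative assumptions, not prescribed by any protocol.

```python
def plan_session(outlier_scores, max_items):
    """Split outlier criteria into those to be discussed (largest spread
    first) and those that default to averaging if time runs out.

    outlier_scores: dict mapping criterion -> list of evaluator scores
    max_items: how many items the group can address in the allocated time
    """
    ordered = sorted(
        outlier_scores,
        key=lambda c: max(outlier_scores[c]) - min(outlier_scores[c]),
        reverse=True,
    )
    return ordered[:max_items], ordered[max_items:]

# Hypothetical outliers, with time for only two items of group discussion.
outliers = {
    "experience": [9, 4, 7],     # spread of 5
    "price realism": [8, 3, 2],  # spread of 6
    "methodology": [10, 5, 6],   # spread of 5
}
discuss, default_to_average = plan_session(outliers, max_items=2)
```

Under these assumed figures, "price realism" (the widest spread) is taken up first, and whichever items fall below the cutoff are averaged rather than discussed — mirroring option (i) above.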
Rules of order for enhanced consensus scoring
Enhanced consensus scoring sessions should be conducted in accordance with formal rules of order and presided over by the moderator, who chairs the meeting and prepares the agenda ahead of time based on a pre-identified summary of outliers.
The moderator should brief evaluators before initiating the sessions to clarify these enhanced consensus scoring protocols and confirm other meeting logistics, including how each evaluator will be provided an opportunity to speak to the group before an item is closed. The moderator should also remind evaluators to record the reasons for any revisions to their initial scores as part of their overall individual evaluation records. Once the evaluators have been given the opportunity to record any adjustments to their scores resulting from consensus scoring, either during or shortly after the meeting, the final individual scores are then averaged to arrive at the final overall scores.
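The closing step — averaging each evaluator's final individual scores, including any revisions recorded during consensus scoring — can be sketched as below. The function name and sample figures are assumptions for illustration only.

```python
def average_final_scores(final_individual_scores):
    """Average each criterion's final individual evaluator scores to
    arrive at the final overall scores.

    final_individual_scores: dict mapping criterion -> list of each
    evaluator's final (possibly revised) score
    """
    return {
        criterion: sum(scores) / len(scores)
        for criterion, scores in final_individual_scores.items()
    }

# Hypothetical example: "experience" was discussed in session and one
# evaluator revised an initial score of 4 up to 6, recording the reason.
final = average_final_scores({
    "technical approach": [7, 8, 6],  # within tolerance, never discussed
    "experience": [9, 6, 7],          # revised after group discussion
})
```

Note that the averaging is applied uniformly at the end: scores that were never discussed and scores revised in session are treated identically once the individual records are final.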
In summary, by applying enhanced consensus scoring protocols, project teams can streamline their evaluation process while maintaining proper evaluation records. This helps shield the evaluation team from inappropriate interference and preserves the ultimate fairness of the evaluation process.