Peer calibration feature for analyst evaluations
We want to implement a peer calibration process in which analysts re-evaluate each other's completed evaluations (reviews) so we can assess scoring consistency and alignment with our quality standards.
The process should involve:
Selection: Choose a completed evaluation from each evaluator to serve as the calibration case.
Assignment: Assign other team members to evaluate the same case.
Comparison: Compare the results to identify scoring misalignments across quality standards.
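To illustrate the comparison step, here is a minimal sketch assuming per-criterion scores from each evaluator on the same case; the data and names are hypothetical and do not reflect Playvox's data model or API. For each quality criterion it collects every evaluator's score and flags criteria where the spread exceeds a tolerance.

```python
# Illustrative sketch only: hypothetical data and names, not the Playvox API.
from statistics import mean

# Per-criterion scores from the original analyst and two peers on the same case.
scores_by_evaluator = {
    "original_analyst": {"accuracy": 5, "tone": 4, "compliance": 5},
    "peer_1":           {"accuracy": 5, "tone": 2, "compliance": 5},
    "peer_2":           {"accuracy": 4, "tone": 3, "compliance": 5},
}

TOLERANCE = 1  # maximum acceptable score spread per criterion

def find_misalignments(scores, tolerance=TOLERANCE):
    """Return criteria whose score spread across evaluators exceeds the tolerance."""
    criteria = next(iter(scores.values())).keys()
    misaligned = {}
    for criterion in criteria:
        values = [s[criterion] for s in scores.values()]
        spread = max(values) - min(values)
        if spread > tolerance:
            misaligned[criterion] = {"scores": values, "spread": spread, "mean": mean(values)}
    return misaligned

print(find_misalignments(scores_by_evaluator))
# -> {'tone': {'scores': [4, 2, 3], 'spread': 2, 'mean': 3}}
```

In this example only "tone" is flagged, because its scores range from 2 to 4 while the other criteria stay within the tolerance.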
Currently, this process is manual and requires significant effort to manage and to analyze the results. We request a dedicated feature in Playvox that allows us to:
Select and assign evaluations for peer review.
Facilitate the comparison of results among evaluators.
Identify and address scoring discrepancies.
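For the last point, a per-evaluator summary could help show who may need re-calibration. The sketch below, again using the hypothetical data from the previous example, computes each evaluator's average signed deviation from the group mean on the shared case; a consistently negative or positive value suggests that evaluator tends to score lower or higher than their peers.

```python
# Illustrative sketch only (hypothetical data): summarise how far each evaluator
# tends to sit from the group mean on the shared calibration case.
from statistics import mean

scores_by_evaluator = {
    "original_analyst": {"accuracy": 5, "tone": 4, "compliance": 5},
    "peer_1":           {"accuracy": 5, "tone": 2, "compliance": 5},
    "peer_2":           {"accuracy": 4, "tone": 3, "compliance": 5},
}

def evaluator_bias(scores):
    """Average signed deviation of each evaluator from the group mean."""
    criteria = next(iter(scores.values())).keys()
    group_mean = {c: mean(s[c] for s in scores.values()) for c in criteria}
    return {
        evaluator: round(mean(s[c] - group_mean[c] for c in criteria), 2)
        for evaluator, s in scores.items()
    }

print(evaluator_bias(scores_by_evaluator))
# -> {'original_analyst': 0.44, 'peer_1': -0.22, 'peer_2': -0.22}
```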