Calibrations to represent which feedback item was selected/entered
The calibration system represents the answer and the comment, but there is no visibility of the selected feedback, so the calibrations do not represent the full picture of the evaluation.
It would be useful to have an option to add feedback as part of the calibration, in order to cover all of the areas.
1 vote -
Avoiding score going below 0
In the scoresheet we would like to deduct points for each wrong answer, but not allow the final score to become negative.
For example, right now if an agent fails all of the answers they can end up with a score of, say, -200 on a scorecard; we are looking for a setup where the lowest possible score is zero and it cannot go below that (a minimal clamping sketch follows this item).
1 vote -
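A minimal sketch of the clamping behaviour requested above; the function and parameter names are hypothetical and only illustrate the "never below zero" rule, not how Playvox actually computes scorecard scores.

```python
# Hypothetical sketch (not Playvox code): subtract a deduction for each wrong
# answer, but never let the final scorecard score drop below zero.

def scorecard_score(max_points: int, deductions: list[int]) -> int:
    """Apply all deductions to the starting score and floor the result at 0."""
    raw_score = max_points - sum(deductions)
    return max(0, raw_score)  # clamp: a run of failed answers cannot go negative

# Example: deductions that would reach -200 today would instead yield 0.
print(scorecard_score(100, [100, 100, 100]))  # 0
```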
Filter Improvement: Agent Must Have Had 2+ Interactions on a Thread
For QA filters, it would be great to add a filter requiring an agent to have had at least two interactions within a thread and/or to have been the last one to connect with the guest.
This would help ensure that the agents we evaluate are the ones who finished the conversation and had a substantial impact on it (see the rough filter sketch after this item).
1 vote -
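A rough sketch of the kind of filter being requested; the data model and names below (Message, qualifies_for_review) are hypothetical and not part of the Playvox filter API.

```python
# Hypothetical sketch: keep a thread for review only if the agent sent at least
# two messages in it, and optionally only if the agent was the last team member
# to reply to the guest.

from dataclasses import dataclass

@dataclass
class Message:
    author_id: str
    is_agent: bool

def qualifies_for_review(thread: list[Message], agent_id: str,
                         min_interactions: int = 2,
                         require_last_agent_reply: bool = False) -> bool:
    agent_messages = [m for m in thread if m.author_id == agent_id]
    if len(agent_messages) < min_interactions:
        return False
    if require_last_agent_reply:
        agent_replies = [m for m in thread if m.is_agent]
        if not agent_replies or agent_replies[-1].author_id != agent_id:
            return False
    return True

thread = [Message("guest-1", False), Message("agent-7", True),
          Message("guest-1", False), Message("agent-7", True)]
print(qualifies_for_review(thread, "agent-7"))  # True
```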
Calibration score based on complete evaluations in a calibration session
Currently, Playvox gives a 0% score to an evaluation that was not completed because the deadline was missed. That 0% counts towards the calibration final score, even when many other evaluations in the session were completed.
We would like an enhancement that only takes completed evaluations in a calibration session into account when generating the score (sketched after this item).
11 votes -
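A small sketch of the requested change, under the assumption that a calibration session score is an average of evaluation scores; the structures and names here are illustrative, not Playvox internals.

```python
# Hypothetical sketch: average only completed evaluations in a calibration
# session, instead of letting a missed-deadline evaluation count as 0%.

def calibration_score(evaluations: list[dict]) -> float | None:
    completed_scores = [e["score"] for e in evaluations if e["completed"]]
    if not completed_scores:
        return None  # no completed evaluations yet, so no score
    return sum(completed_scores) / len(completed_scores)

session = [
    {"completed": True, "score": 92.0},
    {"completed": True, "score": 88.0},
    {"completed": False, "score": 0.0},  # missed the deadline
]
print(calibration_score(session))  # 90.0 instead of 60.0
```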
Use different term than "Rejected"
Our quality program is heavily soft-skill focused, and the red "Rejected" notice is not ideal. I wonder if a different term could be used instead. Even "Denied" or "No changes warranted" might be less abrasive. Also, maybe a different color than red?
8 votes -
Completion Thresholds for Workload Report
I love the new Reports section of Workloads, showing the completion percentages for my analysts. What I'd like to see there is an option to set completion percentage targets for assigned workloads; for example, those with 0-30% of their assigned evaluations are in red, those with 31-60% are in yellow, etc. (a possible banding is sketched after this item). It would allow for at-a-glance checks on how the analysts are doing and call attention to those who are behind.
4 votes -
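A minimal sketch of the banding described in the item above; the thresholds and names are hypothetical and would presumably be configurable targets rather than fixed values.

```python
# Hypothetical sketch: map a workload completion percentage to a colour band
# using configurable upper bounds for each colour.

def completion_band(pct: float,
                    thresholds=((30, "red"), (60, "yellow"))) -> str:
    """Return the colour band for a completion percentage between 0 and 100."""
    for upper_bound, colour in thresholds:
        if pct <= upper_bound:
            return colour
    return "green"  # anything above the last threshold

print(completion_band(25))  # red
print(completion_band(45))  # yellow
print(completion_band(90))  # green
```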
Results for Fail Reasons in Scorecard within Reporting tab
We would like to be able to see results from the fail reasons in the scorecard within the “Reporting” tab. Right now, we have to manually pull data from “Evaluations” into Excel and build pivot tables from there.
3 votes -
Add a marking guide to the answers of each criterion
Much like the description line underneath each criterion question, we would like to be able to add a marking guide explaining what is considered 'Achieved' or 'Not achieved', so agents can see what we look for and consider when reviewing. We currently keep these in Word docs, but it would be great to have them in PV.
1 vote -
Blind Audits
Hide the name of the agent during the evaluation process, as this would reduce bias between the monitor and the agent.
8 votes -
More reporting based on custom fields - for example, trends over time and trends based on the section a field is attached to
There is the ability to add custom field selections within the scorecard, but the reporting only shows the occurrences; I would like it to show trends of those occurrences over time. Also, the custom fields created do not show as connected to the question they are attached to on the scorecard. For example, we have a process-related question with a department-specific custom field, but the custom field doesn't relate to the question itself in advanced reports.
2 votes -
Disputes - Change the name or add an option to make the function seem less negative / extreme
How disputes should be used:
The function should be used for disputes as well as for clarification requests and general questions regarding a certain question or section in an evaluation.
The agent can create a dispute without choosing any specific topic (dispute, clarification, general question). After a dispute is received, the evaluator can then change it to a clarification request, if applicable.
Problem:
Employees are a bit scared to use the feature as they assume it is only for pure disputes. They don't open a dispute for unclear cases (clarification) because they think it would be too big of a…
3 votes -
Mandatory Dispute Categories
An option to enable Mandatory Dispute Categories
3 votes -
Question description UI is difficult
Having the question description only show up while hovering over the "?" is terrible. It should be visible at all times.
3 votes -
Active Users for Signed Evaluations
Currently, when we run a report on all of our signed evaluations, it includes agents who have been removed from the platform and are no longer with the team. We are hoping for a way to pull a list of unsigned evaluations that only includes currently active users.
4 votes -
Include custom fields under DISPLAY FIELDS in Quality filters
Include custom fields under DISPLAY FIELDS.
Include the custom field "Numero de Pedido de la PQR" as a DISPLAY FIELD within the Zendesk filters.
2 votes -
Live Filter date range on Evaluations tab
On the Evaluations tab, it would be great to have a "live" option for the date filter.
Instead of locking the end date, the end date should stay live and update to the current date.
3 votes -
Quality trigger to create coachings
The possibility of creating an option where an advisor (agent) with two or more failed audits is not allowed to have more evaluations loaded until a coaching session is loaded for them.
1 vote -
Calibration comparison granular data
Exporting data from calibrations: we need the granular data from our calibrations, such as a comparison between the analyst evaluation and the expert evaluation, and the analyst answers and expert answers separated out.
5 votes -
Audio waves
For audio interactions, it would be helpful to have the sound waveform back so people can see where the silent moments are and skip over them.
1 vote -
Do not load any filters after opening "Interactions" tab
Whenever we open "Interactions" under the Quality tab, it automatically loads the first filter in the list, forcing the user to wait until it finishes loading. This is especially impactful when users have to go to the Interactions tab frequently. It should not load any filter until the user selects a specific one.
2 votes