Restrict the result view for participants of a calibration to their own result only
Currently, calibration results form part of our auditors' KPIs. Being able to see how everyone else is performing in calibrations is causing problems and could prevent us from moving our calibrations off our current platform and into Playvox.
Please can there be the ability within roles management to set what any role can view, e.g. own, team, all. Thank you.
6 votes -
Add feedback options to scale question type
We use feedback options as "tags" to create more reporting insights. Currently this option is only available if we go with a "Points"-based question type. This can be set up with Points, but a slider or scale option has a nicer look and keeps the scorecard more condensed (it takes several seconds to scroll a scorecard with several Points answers and feedback options).
1 vote -
Evaluations increase or decrease based on QA score
The client would like automated workloads: the number of evaluations increases when the agent's overall QA score is low and decreases when the agent's overall QA score is high.
9 votes -
Export agent stats per scorecard
The ability to export Quality Agent Results. Go to Reports > Agent tab > View Reports > View Details. Currently, it's not possible to export this summary. We report on score per section per agent, and having the ability to export this would help with reporting needs.
1 vote -
Improve the consistency of the header names between the Evaluator Review export and the Playvox report
There is an inconsistency in the naming of fields in the Evaluator Review reporting. Specifically, there is a discrepancy between the field names in the Playvox review tab and the header names in the export report. For instance, the field called "Reviewer" in the Playvox report is referred to as "Analyst" in the export report, and the field called "Analyst" in the Playvox report is named "Reviewed_by" in the export report. This lack of clarity can easily result in reporting mistakes.
1 vote -
Review tab filters (Evaluate the analyst)
Currently, when exporting Evaluate the analyst reviews, there are very limited filter options. The biggest pain point is that there is no date range filter, so every export contains ALL completed reviews to date, which is far from ideal as it is not efficient and leads to additional manual work to isolate the required date range. I would like to recommend the addition of a date range filter and another filter for the Reviewer so that we can track and trend Reviewer productivity and scoring trends.
1 vote -
Ability to archive scorecards without impacting Calibrations / ATA
If we archive scorecards, Playvox doesn't allow Calibrations / ATA to be completed for evaluations that used the archived scorecard. This means when we revamp scorecards, QAs will see both old and new scorecards when creating an evaluation. To show only one, we have to create a completely separate team.
3 votes -
Add display of agents who are expected to have interactions assigned, but no available interactions were found
When workloads are assigned to specific agents within a team, but not the entire team, it's difficult to figure out which agents don't have interactions available, but are supposed to.
In the image of agent assignments, the first two 0/0 entries are on the team but were never added to interaction assignments in the workload configuration. An agent who was added to assignments and should show 4/4 but has 0 will also show as 0/0. Adding a clear distinction of who is missing interactions would make it much easier to follow up.
2 votes -
Populate links within email/customer interaction when reviewing evaluations
A few of my analysts and I have come across the issue of hyperlinks not showing up when evaluating. Portions of the interaction that are hyperlinked look like plain text while evaluating, and it appears as though there are no links when there are.
This has caused unintentional mark-offs for agents for not including a link to resources when one was included in the Salesforce case. This has only become an issue recently. If possible, can we have the hyperlinks included (again) when evaluating? They do appear in the interaction itself before clicking to start the evaluation, but not during…
6 votes -
dashboard edits
The main dashboards need to be editable. For example, there are certain statistics which we consider more important, such as fail-alls; for us these should be top and centre of the dashboard. "Errors" are not important and could be removed.
3 votes -
Ability for team leaders/managers to flag interactions that need to be reviewed
My org has team leaders who routinely escalate cases for review that are typically outside of workloads and need to be evaluated on the spot. Currently there is no way for a team leader/manager to flag an interaction in Playvox that needs to be reviewed by another analyst (the manager/lead is not doing the reviewing), so this is a manual effort captured in Google spreadsheets.
It would be critical to have the ability to flag an interaction for review through the interactions tab, have a dropdown of categories giving the analyst the reason for review, leave…
3 votes -
Export should include highlights
The exported file should include any highlights and the comments that were attached.
1 vote -
Calibration score based on complete evaluations in a calibration session
Currently, Playvox gives a 0% score to an evaluation that was not completed because the deadline was missed. That 0% counts towards the calibration's final score, even though there are many other completed evaluations.
We would like an enhancement that takes only completed evaluations in a calibration session into account when generating the score.
9 votes -
Use different term than "Rejected"
Our quality is heavily soft skill focused, and the Red "Rejected" notice is not ideal. I wonder if a different term could be used instead. Even "Denied" or "No changes warranted" might be less abrasive. Also, maybe a different color than red?
8 votes -
Calibration Due Time
Currently the calibration feature only allows you to set a due date (end of day). We would like to be able to set a date and time by which calibration results are due (e.g. 9/29/2022 at 4pm). This would give us a bit more flexibility in how long we allow users to provide their input.
3 votes -
Ability for admin to bypass SAML
The ability for an admin/super admin to bypass SAML rather than having to reach out to Playvox support. If there is an issue during SSO setup, our internal IT team can't log in to resolve it.
1 vote -
deletion
Our security team requires we delete (or securely archive) old evaluation data. Deleting thousands of evaluations individually is not feasible.
A process to bulk-delete or securely archive evaluations is necessary for many organizations from a compliance perspective.
1 vote -
Date Filter Field Update
Add a Date Filter Field in Filters to allow pulling up interactions based on the email sent date. Currently this is not possible, which creates problems pulling interactions from Salesforce, as the available Date Filter Fields are not based on email dates.
3 votes -
dispute
Allow disputes to adjust the evaluation score either up or down. Currently we can only accept a change that increases the score; if the dispute is incorrect and the score should actually be lower, we cannot make that change. This applies when using a score range.
3 votes -
The ability to edit a participant's calibration answers
Sometimes participants submit their answers for a calibration and then realize they have made a mistake, or wish to change or add something they thought of afterwards. If the participant reaches out to the expert before the calibration call happens, it would be nice to be able to edit the answer in question or add something useful. This could also affect the overall results and prevent misalignment between the participant's answers and the expert's answers. Perhaps this could be added under the calibration options.
3 votes