Automated Quality Coaching Recommendations
The user can set the percentage that triggers the coaching session recommendation
Options to trigger the coaching recommendation:
Question or section score
Critical fail
5 votes -
Ability to use decimals in a workload for QA assignment
We need the ability to use decimals when dividing the estimated volume among QAs in a workload. For example: currently, if you have 5,000 interactions estimated to be divided among 100 QAs, the difference between an allocation of 1% (50 interactions) and 2% (100 interactions) does not provide a fair distribution.
5 votes -
Reviews
Evaluations Reviews:
There's no option for the Analyst to comment on and dispute a review.
Drafts are created once the review scorecard is initiated, and there is no option to save.
When toggling between the actual Playvox evaluation (selecting "view evaluation details") and the review evaluation, the information entered is not saved and you have to start from scratch.
After sending the reviewed evaluation, there's no option to make an edit.
5 votes -
See the drafted version of the evaluation they started earlier on the actual workload
Whenever the analyst is completing a workload, the client would like to see the drafted version of the evaluation they started earlier on the actual workload, without needing to go to the "Drafts" tab. Also, to avoid confusion, the client would like to have only one draft per ticket/evaluation, rather than a new one every time they save the same evaluation as a draft.
5 votes -
Ability to save "Reviews" of evaluations as drafts before sending the review.
While having the ability to review the evaluations of QA specialists is super helpful, it would be especially beneficial to have the ability to save reviews as drafts before sending them!
5 votes -
Pull reports on scorecard header filters not just custom filters
We can add header filters in scorecards, which we can use to filter evaluations in the evaluations tab; however, it would be beneficial to be able to filter on these in reports as well, much like we can with custom filters within the scorecard.
5 votes -
Calibration comparison granular data
Exporting data from calibrations: we need the granular data from our calibrations, such as a comparison between the analyst evaluation and the expert evaluation, with the analyst answers and expert answers separated.
5 votes -
Filter by Intercom teams / groups
The ability to filter interactions in Playvox by Intercom teams and/or groups. This would make it easier to evaluate specific agents and teams.
5 votes -
Add the ability to approve or remove the ability to skip interactions in WL
Analysts are skipping interactions that they consider too hard, too long, or too low-scoring and don't want to score; requiring approval would make this easier to track.
5 votes -
Dashboard with overall data for agents
There is currently no dashboard view that gives an overall average for all agents and all sections. There is also no view of a team's average section score by month. These are integral for reporting purposes. Being able to quickly and accurately share easy-to-understand overviews with our Leadership team would save us a lot of time. It is frustrating to export many different types of reports, get only partial information, and have to build my own report outside of Playvox.
5 votes -
Add comments on Export evaluations file
Add all comments from the comment box inside the evaluations when exporting that data.
5 votes -
Selection filters
Improve the filters to make it easier to select different teams and scorecards.
Today you have to select them one by one.
We should be able to select several boxes at a time with a multi-select option.
Having teams on different levels would also be great.
5 votes -
Possibility to hide evaluations from the Agents
We need to be able to evaluate agents and hide only some evaluations from their view (to be able to select which evaluations to make visible to agents and which not). That way, if they check their profile, they won't be able to see the evaluations we don't want them to see. For example: we want to do audits for investigation purposes, but we don't want to give the agents visibility, so these audits won't have any impact on them.
I know we have the option when doing an evaluation to mark if we want to send the notification to agent…
4 votes -
Enhance Workload Reporting
We'd like the option of an automated "Workload Status Report" email that can be sent out daily, weekly, monthly, quarterly, etc., providing insights into the current period's workload activity: total time spent evaluating so far per analyst, individual and overall completion rates, number of skips, average scores by team and scorecard, and any trends, perhaps tracked by AI. We'd also like the ability to export all workload data/reports, which currently doesn't seem to exist on the Quality > Workloads > Reports pages.
4 votes -
Edit scorecards post publishing
We would love the ability to edit scorecards after publishing, specifically the feedback sections, to be re-enabled or even improved upon with more edit options so we don't have to clone and archive when we want to update our cards.
This would improve our efficiency and clean up the DOMO reporting as well.
4 votes -
Marking an agent as out of office/sick/vacation
After sending a workload that can't be edited, we would like to be able to mark employees as on vacation or sick, so we know why the analyst didn't complete their workload, and for future reference, as there is currently no data to show why an employee hasn't been evaluated.
Include columns for start and end dates, as well as the reason for the absence.
4 votes -
View All Quality Evaluation Drafts
It would be very helpful if the admin and super admin role could see all drafted evaluations on the platform, not just the ones they started. Evaluations are tied to other parts of the platform, like scorecards, and having the ability to edit and delete other draft evaluations would be great!
4 votes -
New Analyst report: Disputes per Analyst (accepted and not accepted)
The new Analyst Report is great but we are missing Disputes per Analyst and also how many of those Disputes per analyst were accepted or rejected.
This will give insight into the accuracy of evaluations and can help with coaching of Analysts.
4 votes