307 results found
Filter by interaction/call date
Ability to filter evaluations by the interaction or call date, rather than just the 'Created date'. This would let us export reports based on when the call was made rather than when it was evaluated, and would eliminate the need to open every evaluation to check whether the call date falls in the current or previous month.
2 votes -
New type of sampling -> Percentage or exact number of evaluations by team
We'd like an additional sampling option based on a percentage or exact number of evaluations per team, rather than per team member or per filter, which are the only options available now.
6 votes -
Workload progress bar color
It would be great if the workload progress bar color changed with status, so that we could easily distinguish completed workloads from workloads in progress. I have attached an example screenshot: when a workload reaches 100%, the progress bar turns green.
2 votes -
Workload Open Queue
Currently, Workloads assign a static list of evaluations per analyst. This is not an efficient workflow for larger teams.
A setting that ran workloads without analyst assignments would be helpful. The workload would run with the typical date, frequency, and filter settings, but instead of assigning lists to each analyst, the user would add their analysts and let them work from an open 'bucket' of interactions.
Analysts would take ownership of an interaction by completing the evaluation and it would be removed from the open queue of interactions.
Users would be able to see which analysts completed evaluations and…
14 votes -
Definitions Doc
Add a section of the platform for admins to create and store a scorecard definitions doc for analyst and agent reference.
Example: Did the agent verbalize empathy during the conversation?
Definitions doc:
- Scorer notes
- Examples
- Internal resource links
- Comments section
4 votes -
Cumulative Report
A cumulative report highlighting insights on specific agents, to identify the areas in which they are failing or excelling.
4 votes -
Create Custom Filters
Have the ability to create your own custom filters at the settings level (reports tab, scorecard, employee profile, etc.), all usable as filters. Example: you could have a category for a line of business, so you can see results at an aggregate level without needing to export.
4 votes -
Large Team Workload Automations
Create a more efficient way for larger groups of analysts to be updated and managed within workload creation and edits as team changes happen.
6 votes -
Substitutions for Multiple Analysts
Redistribute evaluations across multiple analysts (Team Leaders, etc.) for both substitutions (e.g., analysts on PTO) and re-assignments.
19 votes -
Add ability for the user to see calibration completion date
Currently, the exact time a calibration was completed (the completed_at field) can only be found through the API.
Given that this field is used by the system to find calibrations over a period of time, it would be extremely useful to have it visible in the system.
1 vote -
Advanced Dispute Reporting
Currently, the information from the dispute export is fairly limited. Being able to export data such as which question(s) were disputed would be helpful in seeing trends with agents or evaluators. This way, we can see if it is a specific question that an agent or evaluator may have a misunderstanding of, or if there are other factors that may affect a certain question being disputed more frequently than others.
9 votes -
Scorecard Question results for Calibrations
It would be very helpful to see the average scorecard question results based on Calibrations.
Example:
Show the Analyst's average scorecard question results for a specific period of time
VS.
the Expert's average scorecard question results for the same period of time.
Similarly, it would be very helpful to feature said data in the API for easier exporting.
2 votes -
Manual agent assignment in workloads
It would be very helpful to have the option of assigning specific agents to specific analysts for evaluation. Reason: some analysts may not be fully capable of evaluating certain agents due to language barriers, etc. Yes, one could technically create a separate workload for that language, but there are other instances in which this could be useful, especially if agents could be reassigned mid-workload (e.g., when an analyst is away on vacation or is ill).
10 votes -
Level 2 Dispute
It would be good if agents could make an appeal or still comment after a dispute has been resolved.
There are instances where agents still have concerns or questions after their disputes are resolved, but there is no way for them or evaluators to re-open the case.
10 votes -
Filter by Intercom teams / groups
The ability to filter Interactions in PlayVox by Intercom teams and/or groups. This would make it easier to evaluate specific agents and teams.
5 votes -
Duplicate use of filters
Currently, if I select a filter for emails only, I cannot add another filter to exclude voice or chat, so many of the emails coming through are associated with phone calls or chats. It would be nice to have this feature, since I'm looking strictly for an inbound customer email with a reply from a representative to review, rather than skipping interactions for quite a while until I find an eligible one.
0 votes -
Target specific conversation types or user queries for QA evaluations
I want to be able to target specific conversation types or user queries for evaluations, e.g. feature requests/user feedback, bug reports, refunds, subscription failures, so we can take a deeper dive into evaluating these conversations. This will help us learn how to improve our customer service and better understand our customers.
3 votes -
Reporting trends
A report dashboard that can be filtered by agent, scorecard type, team, etc., showing the trend per section, the trend by question, and so on, while also including the evaluation comment for each occurrence, rather than requiring us to go back to each evaluation and review them individually.
18 votes -
Feedback summary to be at the top of the evaluation
It would be great for the feedback summary to appear at the top of the evaluation, so the agent can read the overall summary first and then delve into the particulars of each criterion.
3 votes -