Quality

284 results found

  1. Would love to see the ability to adjust evaluation data so that if a monitor is submitted against the wrong team, the results can be fixed. I'd prefer that evaluations (at least with respect to team assignment) be dynamic rather than static.

    EXAMPLE - If I submit against Team B but meant to evaluate Team A, I want to be able to adjust the evaluation retroactively to show up against Team A without having to delete it, submit again, and scrub the initial results from our data lake. This ensures that all of our front-end reporting matches results outside of Playvox and that…

    3 votes

    Acknowledged  ·  0 comments  ·  Evaluations
  2. When a question has a 0-point value and the agent gets the question correct, it is still counted as part of the error rate. We ask that 0-point questions be counted as an error only when the question is answered incorrectly.
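
The requested scoring rule can be sketched as a small function. This is a hypothetical model of a scorecard, not Playvox's actual data format or API:

```python
def error_rate(answers):
    """Compute an error rate where 0-point questions count as errors
    only when answered incorrectly.

    `answers` is a list of (point_value, is_correct) pairs -- a
    hypothetical representation of scorecard responses.
    """
    errors = 0
    counted = 0
    for points, correct in answers:
        if points == 0 and correct:
            # Requested behavior: a correct 0-point answer is not an
            # error (here it is excluded from the calculation entirely).
            continue
        counted += 1
        if not correct:
            errors += 1
    return errors / counted if counted else 0.0
```

Whether a correct 0-point answer should still count in the denominator is an assumption here; the request only specifies that it must not count as an error.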

    2 votes

    Acknowledged  ·  1 comment  ·  Other
  3. Automate the metrics export on a schedule. For example, instead of manually exporting the QA metrics every x amount of time, there should be a feature where we could establish a time parameter and have the metrics downloaded to an Excel or Google Sheets file automatically.
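
A minimal sketch of the requested scheduling, assuming a placeholder `export_fn` stands in for whatever pulls the QA metrics and writes the spreadsheet (Playvox exposes no such hook today; that is the request):

```python
import threading

def schedule_export(export_fn, interval_seconds):
    """Run `export_fn` every `interval_seconds` until the returned
    event is set. `export_fn` is a placeholder for the actual metrics
    pull (e.g. a call that writes to Excel or Google Sheets)."""
    stop = threading.Event()

    def loop():
        # Event.wait doubles as an interruptible sleep: it returns True
        # (ending the loop) as soon as `stop` is set.
        while not stop.wait(interval_seconds):
            export_fn()

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

In practice the "time parameter" the idea asks for would map to `interval_seconds`, and stopping the schedule is just `stop.set()`.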

    2 votes

    Acknowledged  ·  0 comments  ·  Reports
  4. When an Analyst is added to or removed from a workload, the evaluations are redistributed equally amongst all Analysts. This requires us to take note of the existing distribution and then recreate it. Redistribution should not happen by default; users should be able to click a button to reset it if needed.

    1 vote

    Acknowledged  ·  0 comments  ·  Workloads
  5. How AHT is measured is currently inaccurate. The AHT clock starts when the Analyst clicks the "Start Evaluation" button, but the interaction is reviewable even before the click. We also cannot instruct Analysts not to review before clicking, because a ticket can only be skipped before "Start Evaluation" is clicked; Analysts therefore have to review the ticket beforehand to decide whether to skip it.

    The AHT clock should start before the analyst can review the interaction, and the evaluation should remain skippable even after it is started. Record the time from…

    1 vote

    Acknowledged  ·  0 comments  ·  Evaluations
  6. Currently, users can only be searched by a single name (e.g. Mike, Arun, Luke). With 7,000+ agents, the list Playvox returns can span multiple pages. Please allow searching by full name.

    1 vote

    Acknowledged  ·  0 comments  ·  Other
  7. When an evaluation is deleted, it is marked as "Not Evaluated" for the analyst, impacting their completion rate. A deletion should not impact the completion rate, as deletions are sometimes outside QA's control.

    1 vote

    Acknowledged  ·  0 comments  ·  Workloads
  8. 9 votes

    Acknowledged  ·  3 comments  ·  Settings
  9. It would be great if a Super Admin could grant edit access on particular workloads to new admins, rather than creating brand-new workloads for select teams (or broadening permissions so those admins can edit all workloads).
    OR: let new admins assigned as team leaders access existing workloads tied to their team that were created by other/previous team leaders.

    There's quite a bit of admin/team-leader turnover throughout the year, and having to create new workloads that the new admins can edit for their team is pretty tedious.…

    2 votes

    Acknowledged  ·  0 comments  ·  Workloads
  10. If a workload is not completed before a new assignment is distributed, analysts are unable to work on the previous items assigned to them, which can create a gap in the work QA needs to complete. Today this must be captured and completed manually outside of the workload.

    Would love the ability to complete the previous assignment, and/or functionality that catches the shortfall so the future assignment reflects what was missed. This would ensure there are no gaps from a controls perspective.

    2 votes

    Acknowledged  ·  1 comment  ·  Workloads
  11. On the evaluations tab, it would be great to have a "live" option for the date filter.
    Instead of locking in the end date, let the end date stay live and update to the current date.

    3 votes

    Acknowledged  ·  1 comment  ·  Evaluations
  12. Currently, workloads can only be set up from past interactions. I can't set up a workload that finds one random call per agent from calls taken this week unless I reassign the workload every day, and even then the analyst has to keep track of whom they have and haven't evaluated.

    4 votes

    Acknowledged  ·  1 comment  ·  Workloads
  13. We recently went through some attrition and had to deactivate some agents. We noticed that even though the agents are deactivated, all of their evaluations still affect our overall QA score, and the only way to get rid of them is to delete them. Currently, the only workaround reporting-wise is to dig into Insights, or to handle it manually via our analytics team.

    Would love an option to archive these reviews when an agent is deactivated, instead of deleting them, so we can toggle this information when needed.

    1 vote

    Acknowledged  ·  0 comments  ·  Reports
  14. Add a section of the platform for admins to create and store a scorecard definitions doc for analyst and agent reference.

    Example: Did the agent verbalize empathy during the conversation?

    Definitions doc:
    - Scorer notes
    - Examples
    - Internal resource links
    - Comments section

    4 votes

    Acknowledged  ·  0 comments  ·  Scorecards
  15. To be able to see, in the “Reporting” tab, results for the fail reasons in the scorecard. Right now we have to manually pull data from “Evaluations” into Excel and build pivot tables from there.

    3 votes

    Acknowledged  ·  0 comments  ·  Reports
  16. Wants a cumulative report providing insight on specific agents, to identify the areas in which they are failing or excelling.

    4 votes

    Acknowledged  ·  0 comments  ·  Reports
  17. Have the ability to create your own custom filters at the settings level, usable across the platform (reports tab, scorecards, employee profiles, etc.). Example: a category for a line of business, so you can see results at an aggregate level without needing to export.

    4 votes

    Acknowledged  ·  0 comments  ·  Filters
  18. In a calibration session, the expert should be able to see the answers of all participants before submitting their own evaluation.

    2 votes

    Acknowledged  ·  0 comments  ·  Calibrations
  19. Having the question description only show up while hovering over the "?" is terrible. It should be visible at all times.

    3 votes

    Acknowledged  ·  0 comments  ·  Scorecards
  20. Include further filters in the reporting of review stats (by analyst, team, etc.)

    2 votes

    Acknowledged  ·  0 comments  ·  Reports