Quality

341 results found

  1. When using a Standalone workload, the ticket ID input doesn't pull up the interaction in Playvox, so analysts can't use the full scoring features, such as highlighting text. This results in longer evaluation times and less specific feedback.

    1 vote

    Acknowledged · 0 comments · Workloads
  2. Currently, workloads that are scheduled to run daily will run every day. We do not have the ability to restrict workloads from running on certain days. If, for example, all of our QAs are off on weekends, we want to be able to stop the workload from running on Saturday or Sunday (a minimal sketch of such a day-of-week check follows this item).

    4 votes

    Acknowledged · 2 comments · Workloads
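A minimal sketch of the requested day-of-week restriction (illustration only; the function and configuration names are hypothetical, not part of Playvox):

```python
from datetime import date

# Hypothetical configuration: days the workload may run,
# using Python's weekday numbering (Monday = 0 ... Sunday = 6).
ALLOWED_WEEKDAYS = {0, 1, 2, 3, 4}  # Monday-Friday only

def should_run_today(allowed_weekdays, today=None):
    """Return True if a daily workload is allowed to run on the given day."""
    today = today or date.today()
    return today.weekday() in allowed_weekdays

# Example: the workload runs Monday-Friday and skips the weekend.
if should_run_today(ALLOWED_WEEKDAYS):
    print("Run the workload")
else:
    print("Skip today: the QA team is off")
```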
  3. Sometimes participants submit their answers for a calibration and then realize they have made a mistake, or wish to change or add something they thought of afterwards. If the participant reaches out to the expert before the calibration call happens, it would be nice to be able to edit the answer to a question or add something that could be useful. This would also prevent misalignment between the participant's answers and the expert's answers from affecting the overall results. Perhaps this could be added under the calibration options.

    5 votes

    Acknowledged · 0 comments · Calibrations
  4. Ability to add an internal note to evaluations that are scheduled, in addition to those that are completed.

    2 votes

    Acknowledged · 0 comments · Evaluations
  5. It would be nice for QA Reporting to add filters for 'active users' and 'inactive users'. When running MTD QA performance, it would be useful and more accurate to filter out any inactive users.

    1 vote

    Acknowledged · 0 comments · Reports
  6. Currently, calibration results form part of our auditors' KPIs. Being able to see how everyone else is performing in calibrations is causing problems and could mean we are unable to continue moving our calibrations from our current platform into Playvox.
    Please can there be the ability within roles management to set what any role can view, e.g. own, team, or all (a sketch of such a check follows this item). Thank you.

    7 votes

    Acknowledged · 1 comment · Calibrations
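A sketch of the requested own/team/all visibility model (hypothetical names and shape; Playvox's roles management does not expose this today):

```python
from enum import Enum

class Scope(Enum):
    OWN = "own"    # auditors see only their own calibration results
    TEAM = "team"  # auditors see results for their team
    ALL = "all"    # auditors see everyone's results

def can_view_result(viewer_id, viewer_team, owner_id, owner_team, scope):
    """Hypothetical visibility check for one calibration result."""
    if scope is Scope.ALL:
        return True
    if scope is Scope.TEAM:
        return viewer_team == owner_team
    return viewer_id == owner_id  # Scope.OWN

# An auditor restricted to 'own' cannot see a teammate's result,
# but the same auditor with 'team' scope can.
print(can_view_result("u1", "t1", "u2", "t1", Scope.OWN))   # False
print(can_view_result("u1", "t1", "u2", "t1", Scope.TEAM))  # True
```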
  7. We currently have the possibility to mark a question as Delivered, Almost There or On its Way.
    Almost There and On its Way answers require feedback options to be selected so we can better understand why the agent did not achieve a Delivered score.
    However, sometimes we mark the lowest-scoring option, 'On its Way', and select the relevant feedback options, but the agent also displayed some of the feedback options from the 'Almost There' feedback reasons, which we are unable to select. From a reporting and agent-feedback point of view, it would be great to be able to…

    1 vote

    Acknowledged · 0 comments · Evaluations
  8. The client would like to have automated workloads: the number of evaluations increases when the agent's overall QA score is low, and decreases when the agent's overall QA score is high (a sketch of one such scaling rule follows this item).

    10 votes

    Prioritized · 1 comment · Workloads
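One way such a rule could work, as a sketch only (the linear shape and the bounds are assumptions, not a Playvox feature):

```python
def evaluations_for_agent(qa_score, min_evals=2, max_evals=10):
    """Map an agent's overall QA score (0-100) to an evaluation count.

    Assumed linear rule: a 0% score gets max_evals, a 100% score gets
    min_evals, and scores in between interpolate linearly.
    """
    qa_score = max(0.0, min(100.0, qa_score))
    return round(max_evals - (max_evals - min_evals) * qa_score / 100.0)

# A low-scoring agent gets more evaluations than a high-scoring one.
print(evaluations_for_agent(55))  # 6 evaluations
print(evaluations_for_agent(95))  # 2 evaluations
```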
  9. Like the Calibration tab, allow a Review to be started from the Review tab, instead of just being able to start a Review from an evaluation.

    1 vote

    Acknowledged · 0 comments · Other
  10. A report dashboard that can be filtered by agent, scorecard type, team, etc., showing the trend per section, the trend by question, and so on, while also including the evaluation comment for each occurrence, rather than going back to the actual evaluations to review them individually.

    18 votes

    Planned · 2 comments · Reports
  11. Currently, Playvox gives a 0% score to an evaluation that was not completed before the deadline, and that 0% counts towards the calibration final score even when there are many other completed evaluations.
    We would like an enhancement that takes only completed evaluations in a calibration session into account when generating the score (see the sketch after this item).

    11 votes

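In other words, a missed evaluation should be excluded from the average rather than counted as 0%. A small worked sketch of the difference (None stands in for a missed deadline; hypothetical code, not Playvox's scoring implementation):

```python
def calibration_score(scores):
    """Average over completed evaluations only; None marks a missed deadline."""
    completed = [s for s in scores if s is not None]
    if not completed:
        raise ValueError("no completed evaluations to score")
    return sum(completed) / len(completed)

# Three completed evaluations and one missed deadline.
scores = [92.0, 88.0, 95.0, None]
print(round(calibration_score(scores), 2))          # 91.67 (requested behavior)
print(sum(s or 0.0 for s in scores) / len(scores))  # 68.75 (current 0% behavior)
```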
  12. We would like to be able to track the time to complete (similar to Evaluations), so that we can track QA productivity as well as compare original auditing time to evaluator review time.

    4 votes

    Acknowledged · 0 comments · Evaluations
  13. We would like our agents to see the Effectiveness score or pass rate of their evaluations rather than the QA score, as we work with COPC standards. Also, in the Reporting section and the Recommendations, a pass rate or critical error rate would be more significant and helpful for our quality program.

    1 vote

    Acknowledged · 0 comments · Evaluations
  14. Include the reassignment feature that's available for the "evaluate the agent" workloads and extend it to "evaluate the analyst", so I'm able to reassign a new analyst to help with calibrations.

    6 votes

    Acknowledged · 3 comments · Calibrations
  15. We use feedback options as "tags" to create more reporting insights. Currently this option is only available with the "Points" question type. This can be set up with Points, but a slider or scale option has a nicer look and keeps the scorecard more condensed (it takes several seconds to scroll through a scorecard with several Points answers and feedback options).

    1 vote

    Acknowledged · 0 comments · Scorecards
  16. The ability to export Quality Agent Results. Go to Reports > Agent tab > View Reports > View Details. Currently, it's not possible to export this summary. We report on score per section per agent, and having the ability to export this would help with reporting needs.

    1 vote

    Acknowledged · 0 comments · Reports
  17. There is an inconsistency in the naming of fields in the Evaluator review reporting. Specifically, there is a discrepancy between the field names in the Playvox review tab and the header names in the export report. For instance, the field called "Reviewer" in the Playvox report is referred to as "Analyst" in the export report. Another example is that the field called "Analyst" in the Playvox report is named "Reviewed_by" in the export report. This lack of clarity can easily result in reporting mistakes.

    1 vote

    Acknowledged · 0 comments · Reports
  18. If we archive scorecards, Playvox doesn't allow Calibrations/ATA to be completed for evaluations that were completed using the archived scorecard. This means that when we revamp scorecards, QAs will see both old and new scorecards when creating an evaluation; to show only one, we have to create a completely separate team.

    3 votes

    Prioritized · 0 comments · Scorecards
  19. A few of my analysts and I have come across the issue of hyperlinks not showing up when evaluating. Portions of the interaction that are hyperlinked look like plain, regular text while evaluating, and it appears as though there are no links when there are.
    This has caused unintentional mark-offs for agents for not including a link to resources when there was one in the Salesforce case. This has only become an issue recently. If possible, can we have the hyperlinks included (again) when evaluating? They do appear in the interaction itself before clicking to start the evaluation, but not during…

    6 votes

    Acknowledged · 2 comments · Evaluations
  20. The main dashboards need to be editable. For example, there are certain statistics we consider more important, such as fail-alls; for us, these should be top and centre of the dashboard. "Errors" are not important and could be removed.

    3 votes

    Acknowledged · 0 comments · Reports