Quality

335 results found

  1. The client would like automated workload sizing: the number of evaluations increases when an agent's overall QA score is low and decreases when the score is high.

    10 votes

    Acknowledged  ·  1 comment  ·  Workloads
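
The requested inverse scaling could work as sketched below. The function name, the linear interpolation, and the min/max bounds are illustrative assumptions, since the idea does not specify a formula:

```python
def evaluations_for(qa_score, min_evals=1, max_evals=10):
    """Hypothetical workload sizing: a low overall QA score yields
    more evaluations, a high score yields fewer (linear interpolation).
    qa_score is a percentage in [0, 100]."""
    n = max_evals - (qa_score / 100) * (max_evals - min_evals)
    return round(min(max_evals, max(min_evals, n)))
```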
  2. Point Value scorecards have an option to add checkboxes for each score (unable to provide a screenshot; JPEG, PNG, GIF, and HTML uploads were all rejected as unsupported).
    These are a great way to tag behaviors based on the score and help analysts/supervisors see what behavior led to a particular score. However, these checkboxes are not trackable in reporting and do not export easily.

    2 votes

    Acknowledged  ·  2 comments  ·  Reports
  3. Allow changing the scorecard during an evaluation, for example switching it from mail to call.

    4 votes

    Acknowledged  ·  0 comments  ·  Evaluations
  4. Include further filters in the reporting of review stats (by analyst, team, etc.)

    2 votes

    Started  ·  0 comments  ·  Reports
  5. Ability to pull a report showing all deleted evaluations: who deleted each one, when it was deleted, and key details of the evaluation (agent name, evaluation date, quality score, etc.).

    3 votes

    Acknowledged  ·  0 comments  ·  Reports
  6. Currently, we can pull individual agent reports in the scorecard section, but not by question group. It would be great to export this directly instead of exporting all evaluations individually and then reworking the data to get the information we need.

    2 votes

    Acknowledged  ·  2 comments  ·  Reports
  7. In the "review the analyst" scorecard, allow weighting on the scorecard.
    Currently there is no way to set weights, so scores on completed evaluations do not match when compared. This leads analysts to ask why the scores differ even though the evaluation questions match.

    1 vote

    Acknowledged  ·  0 comments  ·  Scorecards
  8. Would love to see the ability to adjust evaluation data so that if a monitor is submitted against the wrong team, the results can be fixed. I'd prefer that evaluations (at least in the team sense) are dynamic instead of static.

    EXAMPLE - If I submit against Team B and meant to evaluate Team A, I want to be able to adjust the evaluation to show up against Team A retroactively without having to delete, submit again, and scrub the initial results from our data lake. This ensures that all of our front-end reporting matches results outside of Playvox and that…

    3 votes

    Acknowledged  ·  0 comments  ·  Evaluations
  9. When a question has a 0-point value and the agent gets the question correct, it is still counted as part of the error rate. We would ask that 0-point questions count as an error only when the question is answered incorrectly.

    2 votes

    Acknowledged  ·  1 comment  ·  Other
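
The requested change amounts to counting an error only on incorrect answers, regardless of point value. A minimal sketch, assuming answers can be represented as (point_value, is_correct) pairs:

```python
def error_rate(answers):
    """answers: list of (point_value, is_correct) tuples.
    A question counts as an error only if answered incorrectly,
    even when its point value is 0."""
    errors = sum(1 for _, correct in answers if not correct)
    return errors / len(answers)
```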
  10. There is no option on a review scorecard to have a scale score (e.g. 0-5), only 'points'. There are questions that need to be marked on a scale.

    1 vote

    Acknowledged  ·  0 comments  ·  Scorecards
  11. We currently have 2 questions that zero the scorecard, but agents find this very demotivating: the 0% score overrides all the good they have achieved in the other 10 questions.

    Would it be possible to present the score in the notification email, and within Playvox, broken down into 2 parts?
    e.g.
    Your quality score (Q1-10) is 80%
    Your compliance score is 0%
    Your overall score is 0%

    Your quality score (Q1-10) is 95%
    Your compliance score is N/A
    Your overall score is 95%

    8 votes

    Acknowledged  ·  1 comment  ·  Evaluations
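
The proposed breakdown could be computed as below; the function name and field names are illustrative only, and the values follow the examples in the idea:

```python
def score_breakdown(quality_pct, compliance_failed):
    """Split the notification score into quality, compliance,
    and overall parts. A compliance failure zeroes the overall
    score without hiding the underlying quality score."""
    if compliance_failed:
        return {"quality": f"{quality_pct}%", "compliance": "0%", "overall": "0%"}
    return {"quality": f"{quality_pct}%", "compliance": "N/A", "overall": f"{quality_pct}%"}
```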
  12. If a workload is not completed before a new assignment is distributed, analysts are unable to work on the previously assigned items, which can create a gap in the work QA needs to complete. That gap must be manually captured and completed outside of the workload.

    Would love to see the ability to complete the previous assignment, and/or functionality that catches the gap so the future assignment reflects what was missed. This would ensure there are no gaps from a controls perspective.

    2 votes

    Acknowledged  ·  1 comment  ·  Workloads
  13. In a calibration, the system shows the answer and the comment, but there is no visibility of the selected feedback, so calibrations do not represent the full picture of the evaluation.

    It would be useful to have an option to add feedback as part of the calibration, in order to cover all of the areas.

    1 vote

    Acknowledged  ·  0 comments  ·  Calibrations
  14. In the scoresheet we would like to deduct points for each wrong answer, but not allow the final score to be negative.

    Right now, if an agent fails all of the answers they can accumulate, for example, a -200 score on a scorecard. We are looking for a setup where the lowest possible score is zero and the score cannot go below 0.

    1 vote

    Acknowledged  ·  0 comments  ·  Evaluations
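
The requested floor is a one-line clamp; a minimal sketch, assuming per-question scores (including deductions) are simply summed:

```python
def final_score(question_scores, floor=0):
    """Sum per-question scores but never report below the floor,
    so a run of wrong answers cannot drive the result negative."""
    return max(floor, sum(question_scores))
```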
  15. For QA filters, it would be great to add a filter requiring an agent to have had at least 2 interactions within a thread and/or to have been the last one to connect with the guest.

    This would help us ensure that the agents we evaluate are the ones who finished the conversation and had a substantial impact on it.

    1 vote

    Acknowledged  ·  0 comments  ·  Workloads
  16. Currently, Playvox gives a 0% score to an evaluation that was not completed by the deadline. That 0% counts towards the calibration final score, even when there are many other completed evaluations.
    We would like an enhancement that takes only completed evaluations in a calibration session into account when generating the score.

    11 votes

    Started  ·  3 comments  ·  Calibrations
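
The requested behavior is to average only completed evaluations. A minimal sketch, assuming each evaluation can be represented as a (completed, score) pair:

```python
def calibration_score(evaluations):
    """evaluations: list of (completed, score) pairs.
    Average only completed evaluations, so a missed deadline
    no longer contributes a 0% to the final calibration score."""
    done = [score for completed, score in evaluations if completed]
    return sum(done) / len(done) if done else None
```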
  17. Our quality is heavily soft-skill focused, and the red "Rejected" notice is not ideal. I wonder if a different term could be used instead; even "Denied" or "No changes warranted" might be less abrasive. Also, maybe a different color than red?

    8 votes

    Acknowledged  ·  3 comments  ·  Disputes
  18. I love the new Reports section of Workloads, showing the completion percentages for my analysts. What I'd like to see there is an option to set completion percentage targets for assigned workloads; for example, those with 0-30% of their assigned evaluations are in red, those with 31-60% are in yellow, etc. It would allow for at-a-glance checks on how the analysts are doing and call attention to those who are behind.

    4 votes

    Acknowledged  ·  0 comments  ·  Reports
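
The target banding could be as simple as the sketch below, using the thresholds suggested in the idea (0-30% red, 31-60% yellow); the third band and its color are an assumption:

```python
def completion_band(pct):
    """Map a workload completion percentage to a status color
    for at-a-glance checks on analyst progress."""
    if pct <= 30:
        return "red"
    if pct <= 60:
        return "yellow"
    return "green"
```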
  19. To be able to see results for the scorecard's fail reasons in the "Reporting" tab. Right now, we have to manually pull data from "Evaluations" into Excel and build pivot tables from there.

    3 votes

    Acknowledged  ·  0 comments  ·  Reports
  20. Much like the description line underneath each criterion question, to be able to add the marking guide for what is considered 'Achieved' or 'Not achieved', so agents can see what we look for and consider when reviewing. We currently keep these in Word docs, but it would be great to have them in PV.

    1 vote

    Acknowledged  ·  0 comments  ·  Scorecards