
Quality


307 results found

  1. Right now it isn't possible to audit the logs for all evaluations at once; there is no aggregate visibility. We have to access each evaluation individually, which doesn't scale for reporting.
    This could be fixed by adding the ability to export the logs, or at least view them in one place, much like we export evaluations or view Audit logs for account creation.

    4 votes

    Acknowledged  ·  1 comment  ·  Evaluations  ·  Admin →

  2. The client would like the expert in a calibration session to be able to see all participants' answers before submitting their own evaluation.

    2 votes

    Acknowledged  ·  0 comments  ·  Calibrations  ·  Admin →

  3. Whenever the analyst is completing a workload, the client would like them to see the drafted version of an evaluation they started earlier directly on the workload itself, without needing to go to the "Draft" tab. To avoid confusion, the client would also like only one draft per ticket/evaluation, rather than a new draft each time the same evaluation is saved.

    5 votes

    Acknowledged  ·  0 comments  ·  Workloads  ·  Admin →

  4. The client would like to have automated workloads. Whenever a workload does not get enough samples to match the number requested when it was created (because no other interactions match the filter and date-range requirements), the missing evaluations should be added automatically to the next trigger, completing the established quota of tickets to evaluate.

    1 vote

    Acknowledged  ·  0 comments  ·  Workloads  ·  Admin →

  5. The client would like to have automated workloads in which the number of evaluations increases when the agent's overall QA score is low and decreases when the agent's overall QA score is high.
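A dynamic sampling rule like the one requested could be sketched as follows; the thresholds, base count, and function name are illustrative assumptions, not Playvox behavior:

```python
def evaluations_per_period(qa_score: float, base: int = 4,
                           low: float = 70.0, high: float = 90.0) -> int:
    """Illustrative sketch: choose next period's evaluation count from a QA score.

    Agents under the low threshold get double the base sample; agents over
    the high threshold get half (never fewer than one). All numbers here
    are assumptions for illustration, not product defaults.
    """
    if qa_score < low:
        return base * 2
    if qa_score > high:
        return max(1, base // 2)
    return base
```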

    9 votes

    Acknowledged  ·  1 comment  ·  Workloads  ·  Admin →

  6. Point Value scorecards have an option to add checkboxes for each score (unable to provide a screenshot: the form says the file type is not supported, though JPEG, PNG, GIF, and HTML were all tried).
    These are a great way to tag behaviors based on the score and help analysts/supervisors see what behavior led to a particular score. However, these checkboxes are not trackable in reporting and do not export easily, making it hard to track these behaviors.

    2 votes

    Acknowledged  ·  2 comments  ·  Reports  ·  Admin →

  7. For example, to change the scorecard from mail to call during an evaluation.

    4 votes

    Acknowledged  ·  0 comments  ·  Evaluations  ·  Admin →

  8. Include further filters in the reporting of review stats (by analyst, team, etc.)

    2 votes

    Acknowledged  ·  0 comments  ·  Reports  ·  Admin →

  9. Ability to pull a report showing all evaluations that have been deleted: who deleted them, when, and some details of the evaluation (agent name, evaluation date, quality score, etc.).

    3 votes

    Acknowledged  ·  0 comments  ·  Reports  ·  Admin →

  10. Currently, we can pull individual agent reports on the scorecard section, but not by question group. It would be great for us to export this easily instead of needing to export all evals individually and then work with the data to get the information we need.

    2 votes

    Acknowledged  ·  2 comments  ·  Reports  ·  Admin →

  11. In the "review the analyst" scorecard, allow weighting on the scorecard.
    Currently, there is no way to set weights, and when comparing completed evaluations the scores do not match. This leads analysts to ask why the scores differ when the evaluation questions match.

    1 vote

    Acknowledged  ·  0 comments  ·  Scorecards  ·  Admin →

  12. Would love to see the ability to adjust evaluation data so that if a monitor is submitted against the wrong team, the results can be fixed. I'd prefer that evaluations (at least in the team sense) are dynamic instead of static.

    EXAMPLE - If I submit against Team B but meant to evaluate Team A, I want to be able to retroactively adjust the evaluation to show up against Team A without having to delete it, submit again, and scrub the initial results from our data lake. This ensures that all of our front-end reporting matches results outside of Playvox and that…

    3 votes

    Acknowledged  ·  0 comments  ·  Evaluations  ·  Admin →

  13. When a question has a 0-point value and the agent gets the question correct, it is still counted as part of the error rate. We ask that 0-point questions be counted as an error only when the question is answered incorrectly.
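The requested counting rule can be sketched in a few lines; the tuple shape and function name are assumptions for illustration, not the product's data model:

```python
def error_rate(answers):
    """Illustrative sketch of the requested error-rate rule.

    `answers` is a list of (points_possible, answered_correctly) pairs.
    A 0-point question is included in the error-rate denominator only
    when it was answered incorrectly; correct 0-point answers are ignored.
    """
    counted = [(pts, ok) for pts, ok in answers if pts > 0 or not ok]
    if not counted:
        return 0.0
    errors = sum(1 for _, ok in counted if not ok)
    return errors / len(counted)
```

For example, `error_rate([(0, True), (5, True)])` is 0.0 because the correct 0-point answer is excluded, while `error_rate([(0, False), (5, True)])` is 0.5.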

    2 votes

    Acknowledged  ·  1 comment  ·  Other  ·  Admin →

  14. There is no option on a review scorecard to have a scale score (e.g. 0-5), only 'points'. Some questions need to be marked on a scale.

    1 vote

    Acknowledged  ·  0 comments  ·  Scorecards  ·  Admin →

  15. Even though the user-management option for reviews says "Manage reviews (create and update)", reviews currently cannot be updated after they have been submitted. We would like the option to update reviews after submission: errors can happen, and we want to be able to correct mistakes so that the data is accurate and evaluators receive correct feedback.

    4 votes

    Acknowledged  ·  0 comments  ·  Evaluations  ·  Admin →

  16. We currently have 2 questions that zero the scorecard, but agents are finding this very demotivating and the 0% score overrides all the good they have achieved in the other 10 questions.

    Would it be possible to present the score in the notification email, and within Playvox, broken down into 2 parts?
    e.g.
    Your quality score (Q1-10) is 80%
    Your compliance score is 0%
    Your overall score is 0%

    Your quality score (Q1-10) is 95%
    Your compliance score is N/a
    Your overall score is 95%
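The split described above can be sketched as a small function; the field names and the None-for-"N/A" convention are assumptions for illustration, not the Playvox data model:

```python
def score_breakdown(quality_pct, compliance_failed):
    """Illustrative sketch of the requested two-part score presentation.

    quality_pct: percentage over the non-compliance questions (Q1-10).
    compliance_failed: True if any zero-the-scorecard question failed.
    None for compliance means "N/A" (no compliance question triggered).
    """
    return {
        "quality": quality_pct,
        "compliance": 0.0 if compliance_failed else None,
        "overall": 0.0 if compliance_failed else quality_pct,
    }
```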

    8 votes

    Acknowledged  ·  1 comment  ·  Evaluations  ·  Admin →

  17. If a workload is not completed before a new assignment is distributed, analysts are unable to work on the items previously assigned to them, which can create a gap in the work QA needs to complete. Today this must be captured and completed manually, outside of the workload.

    Would love the ability to complete the previous assignment, and/or functionality that catches the gap so that a future assignment reflects what was missed. This would ensure there are no gaps from a controls perspective.

    2 votes

    Acknowledged  ·  1 comment  ·  Workloads  ·  Admin →

  18. In calibrations, the system shows the answer and the comment, but there is no visibility of the selected feedback, so calibrations do not represent the full picture of the evaluation.

    It would be useful to have an option to add feedback as part of the calibration, in order to cover all of the areas.

    1 vote

    Acknowledged  ·  0 comments  ·  Calibrations  ·  Admin →

  19. In the scoresheet we would like to deduct points for each wrong answer, but not allow the final score to go negative.

    For example, right now if an agent fails all of the answers they can end up with a score of -200 on a given scorecard; we are looking for a setup where the lowest possible score is zero.
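The requested floor can be expressed as a one-line clamp; the function name is an assumption for illustration:

```python
def final_score(question_points):
    """Sum per-question points (deductions are negative) and floor at zero.

    Sketch of the requested rule: deductions may exceed earned points,
    but the reported final score never drops below 0.
    """
    return max(0, sum(question_points))
```

So a sheet of pure deductions like `[-100, -100]` reports 0 instead of -200.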

    1 vote

    Acknowledged  ·  0 comments  ·  Evaluations  ·  Admin →

  20. For QA filters, it would be great to add a filter requiring that an agent had at least 2 interactions within a thread and/or was the last one to have connected with the guest.

    This would help us ensure that the agents we evaluate are the ones who finished the conversation and had a substantial impact on it.

    1 vote

    Acknowledged  ·  0 comments  ·  Workloads  ·  Admin →