Quality

284 results found

  1. Allow a coaching session on a specific dispute while it is still in the dispute process. This will let the arbitrator or admin discuss the specific issue or dispute more quickly.

    8 votes

    Acknowledged  ·  0 comments  ·  Disputes
  2. Evaluation preview option before sending the evaluation

    8 votes

    Acknowledged  ·  0 comments  ·  Evaluations
  3. Let the option selected in one custom field trigger the options shown in another custom field

    7 votes

    Acknowledged  ·  1 comment  ·  Scorecards
  4. Currently, we can't perform quality checks in Playvox without assigning scores. However, our quality process doesn't involve scoring agents; we focus on capturing criteria without emphasizing scores. It would be great if Playvox could support this by allowing assessments without associated scores, making it easier to align with our approach and minimizing disruptions.

    6 votes

    Acknowledged  ·  0 comments  ·  Scorecards
  5. Currently, calibration results form part of our auditors' KPIs. Being able to see how everyone else is performing in calibrations is causing problems, and could mean we are unable to move our calibrations from our current platform into Playvox.
    Please can there be the ability within roles management to set what any role can view, e.g. own, team, all. Thank you.

    6 votes

    Acknowledged  ·  1 comment  ·  Calibrations
  6. A few of my analysts and I have come across the issue of hyperlinks not showing up when evaluating. Hyperlinked portions of the interaction look like plain text while evaluating, so it appears there are no links when there are.
    This has caused unintentional mark-offs for agents for not including a link to resources when one was present in the Salesforce case. This has only become a recent issue. If possible, can we have the hyperlinks included (again) when evaluating? They do appear in the interaction itself before clicking to start the evaluation, but not during…

    6 votes

    Acknowledged  ·  2 comments  ·  Evaluations
  7. We'd like an additional sampling option based on a "percentage or exact number of evaluations by team", not by team member or by filter, which are the only options available now.

    6 votes

    Acknowledged  ·  1 comment  ·  Workloads
  8. Create a more efficient way to update and manage larger groups of analysts within workload creation and edits as team changes happen

    6 votes

    Acknowledged  ·  0 comments  ·  Workloads
  9. Allow specific metadata to be used as a filter for report views

    They have some forms in Zendesk and use them to create filters, so I suggested there that they could also create dedicated scorecards.

    REPORTING: Can we filter reports/agent dashboards? For example, can we filter an agent's dashboard by Ticket Form to see their performance with specific ticket types?

    Right now there is no easy way to take QA scores and cross-reference them with key Zendesk information, be that simply tag-based (i.e. show me QA scores for THESE contacts with THESE tags *or* show me QA scores for tickets with a…

    6 votes

    Acknowledged  ·  0 comments  ·  Reports
  10. We need to be able to evaluate agents and hide only some evaluations from their view (i.e. select which evaluations are visible to agents and which are not). That way, if they check their profile, they won't be able to see the evaluations we don't want them to. For example: we want to do audits for investigation purposes, but we don't want to give visibility to the agents, so these audits won't have any impact on them.

    I know we have the option when doing an evaluation to mark if we want to send the notification to agent…

    5 votes

    Acknowledged  ·  1 comment  ·  Evaluations
  11. Participation lists for Calibrations have been implemented (yay!), but they cannot be edited or deleted after creation. We need this to be added ASAP.

    5 votes

    Acknowledged  ·  3 comments  ·  Calibrations
  12. Changing the selected user/agent even after an evaluation is saved/sent.
    We should be able to change the selected user/agent even after an evaluation is saved/sent. We can currently change the date, time, link to the interaction, etc., but we still cannot change the selected user name for which we're making the evaluation. Please make this field editable, so that we can correct and amend scorecards efficiently. Thanks.

    5 votes

    Acknowledged  ·  2 comments  ·  Evaluations
  13. Upon submitting an evaluation review, there should be the capability to make edits to it. Currently, if an error is noticed after submitting the review, the only option is to delete it and resubmit a corrected version. However, this results in the original review having already been transmitted through our API call, and when the corrected review is resubmitted, as it has a new ID, this causes both the original and corrected reviews to be stored in our database, leading to inaccurate data.

    Additionally, there should be the ability for our evaluators to challenge evaluation reviews, similar to how agents…

    5 votes

    Acknowledged  ·  0 comments  ·  Evaluations
  14. The user can set the percentage that triggers the coaching session recommendation

    Options to trigger the coaching recommendation:

    Question or section score
    Critical fail

    5 votes

    Acknowledged  ·  1 comment  ·  Automations
  15. We need the ability to use decimals when dividing the estimated volume among QAs in a workload. For example, if 5,000 interactions are estimated to be divided among 100 QAs, the jump between an allocation of 1% (50 interactions) and 2% (100 interactions) does not provide a fair distribution.

    5 votes

    Acknowledged  ·  0 comments  ·  Workloads
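    The fairness gap in this request can be illustrated with a short sketch (the `allocate` function name is hypothetical; this is not Playvox's actual allocation logic). With whole-percentage steps, each of 100 QAs sharing 5,000 interactions can only receive 50 or 100; fractional percentages such as 1.5% (75 interactions) fill the gap. A largest-remainder split would honor decimal percentages while still assigning whole interactions:

    ```python
    from math import floor

    def allocate(total, percents):
        """Split `total` interactions by (possibly fractional) percentages,
        using the largest-remainder method so only whole interactions are
        assigned and the parts sum to the intended total.
        Hypothetical sketch, not Playvox's actual allocation logic."""
        ideals = [total * p / 100 for p in percents]
        parts = [floor(x) for x in ideals]
        # Hand the leftover whole units to the largest fractional remainders.
        leftover = round(sum(ideals)) - sum(parts)
        by_remainder = sorted(range(len(percents)),
                              key=lambda i: ideals[i] - parts[i], reverse=True)
        for i in by_remainder[:leftover]:
            parts[i] += 1
        return parts

    # 5,000 interactions: one senior QA at 1.5%, one trainee at 0.5%,
    # and 98 QAs at 1.0% — impossible to express with whole percentages.
    parts = allocate(5000, [1.5, 0.5] + [1.0] * 98)
    print(parts[:3], sum(parts))  # → [75, 25, 50] 5000
    ```

    With decimals, a workload owner could give a senior QA 1.5% (75 interactions) and a trainee 0.5% (25) instead of being forced to choose between 1% and 2%.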
  16. Evaluation Reviews:

    There's no option for the analyst to comment on or dispute a review.

    Drafts are created once the review scorecard is initiated, and there is no option to save.

    When toggling between the actual Playvox evaluation (selecting “view evaluation details”) and the review evaluation, the information entered is not saved and you have to start from scratch.

    After sending the reviewed evaluation, there's no option to make an edit.

    5 votes

    Acknowledged  ·  0 comments  ·  Scorecards
  17. When an analyst is completing a workload, the client would like to see the drafted version of an evaluation they started earlier on the actual workload, without needing to go to the "draft" tab. Also, to avoid confusion, the client wants only one draft per ticket/evaluation, not a new one each time the same evaluation is saved as a draft.

    5 votes

    Acknowledged  ·  0 comments  ·  Workloads
  18. Exporting data from calibrations: we need the granular data from our calibrations, such as a comparison between the analyst evaluation and the expert evaluation, and the analyst answers and expert answers separately.

    5 votes

    Acknowledged  ·  0 comments  ·  Calibrations
  19. Analysts are skipping interactions that they consider too hard, too long, or too low-scoring and don't want to score; requiring approval for skips would make them easier to track

    5 votes

    Acknowledged  ·  1 comment  ·  Workloads
  20. There is currently no dashboard view to get an overall average overview for all agents and all sections. There is also no view to see a team's average section score by month. These are integral for reporting purposes. To be able to quickly and accurately share easy to understand overviews with our Leadership team would save us lots of time. It can be frustrating exporting many different types of reports to get only partial information to build my own report outside of Playvox.

    5 votes

    Acknowledged  ·  0 comments  ·  Other