Quality

304 results found

  1. Currently, calibration results form part of our auditors' KPIs. Being able to see how everyone else is performing in calibrations is causing problems and could prevent us from moving our calibrations from our current platform into Playvox.
    Please can there be the ability within roles management to set what any role can view, e.g. own, team, all. Thank you.

    6 votes

    Acknowledged  ·  1 comment  ·  Calibrations
  2. We use feedback options as "tags" to create more reporting insights. Currently this option is only available if we go with a "Points" based question type. This can be set up with Points, but a slider or scale option has a nicer look and keeps the scorecard more condensed (it takes several seconds to scroll a scorecard with several points answers and feedback options).

    1 vote

    Acknowledged  ·  0 comments  ·  Scorecards
  3. The client would like to have automated workloads: the number of evaluations increases when the agent's overall QA score is low, and decreases when the agent's overall QA score is high.

    9 votes

    Acknowledged  ·  1 comment  ·  Workloads
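The behaviour requested in this idea, more evaluations for low scorers and fewer for high scorers, could be sketched as a simple threshold rule. The 70/85 thresholds and 1/4/8 counts below are illustrative assumptions, not Playvox settings:

```python
def evaluations_for_agent(qa_score, base_evals=4, min_evals=1, max_evals=8):
    """Return how many evaluations to assign, scaling inversely with QA score.

    qa_score is a percentage (0-100). All thresholds and counts here are
    hypothetical values chosen only to illustrate the requested rule.
    """
    if qa_score < 70:        # struggling agents get the most scrutiny
        return max_evals
    if qa_score < 85:        # mid-range agents keep the baseline workload
        return base_evals
    return min_evals         # consistently strong agents get a light check
```

For example, an agent scoring 62% would be assigned 8 evaluations under these assumed thresholds, while one scoring 93% would be assigned only 1.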
  4. The ability to export Quality Agent Results. Go to Reports > Agent tab > View Reports > View Details. Currently, it's not possible to export this summary. We report on score per section per agent, and having the ability to export this would help with reporting needs.

    1 vote

    Acknowledged  ·  0 comments  ·  Reports
  5. There is an inconsistency in the naming of fields in the Evaluator review reporting. Specifically, there is a discrepancy between the field names in the Playvox review tab and the header names in the Export report. For instance, the field called "Reviewer" in the Playvox report is referred to as "Analyst" in the export report. Another example is the field called "Analyst" in the Playvox report is named "Reviewed_by" in the export report. This lack of clarity can easily result in reporting mistakes.

    1 vote

    Acknowledged  ·  0 comments  ·  Reports
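Until the naming is made consistent, one workaround is to normalise the export headers back to the names shown in the Playvox review tab. The mapping below covers only the two mismatches described in this idea; any others would need to be confirmed against an actual export:

```python
# Export-report header -> field name shown in the Playvox review tab.
# Only the two documented mismatches are included.
EXPORT_TO_UI = {
    "Analyst": "Reviewer",
    "Reviewed_by": "Analyst",
}

def normalize_headers(headers):
    """Rewrite export headers to the review-tab names; pass others through."""
    return [EXPORT_TO_UI.get(h, h) for h in headers]
```

Note that because "Analyst" maps to "Reviewer" while "Reviewed_by" maps to "Analyst", the rename must be applied once per original header (as the comprehension does), not iteratively, or a column would be renamed twice.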
  6. Currently, when exporting analyst reviews from Evaluate, there are very limited filter options. The biggest pain point is that there is no date range filter, so every export contains ALL completed reviews to date, which is far from ideal: it is inefficient and leads to additional manual work to isolate the required date range. I would like to recommend adding a date range filter and a Reviewer filter so that we can track and trend Reviewer productivity and scoring trends.

    1 vote

    Acknowledged  ·  0 comments  ·  Filters
  7. If we archive scorecards, Playvox doesn't allow Calibrations / ATA to be completed for evaluations that used the archived scorecard. This means that when we revamp scorecards, QAs see both the old and new scorecards when creating an evaluation. To show only one, we have to create a completely separate team.

    3 votes

    Acknowledged  ·  0 comments  ·  Scorecards
  8. When workloads are assigned to specific agents within a team, but not the entire team, it's difficult to figure out which agents don't have interactions available, but are supposed to.

    In the image of agent assignments, the first two 0/0 agents are on the team but were never added to interaction assignments in the workload configuration. An agent who was added to assignments and should show 4/4, but has 0 interactions available, will also show as 0/0. A clear distinction of who is missing interactions would make it much easier to follow up.

    2 votes

    Acknowledged  ·  0 comments  ·  Workloads
  9. A few of my analysts and I have come across the issue of hyperlinks not showing up when evaluating. Portions of the interaction that are hyperlinked look like plain, regular text while evaluating, making it appear as though there are no links when there are.
    This has caused unintentional mark-offs for agents for not including a link to resources when there was one in the Salesforce case. This has only become a recent issue. If possible, can we have the hyperlinks included (again) when evaluating? They do appear in the interaction itself before clicking to start the evaluation, but not during…

    6 votes

    Acknowledged  ·  2 comments  ·  Evaluations
  10. The main dashboards need to be editable. There are certain statistics we consider more important, e.g. fail-alls; for us these should be top and centre of the dashboard, while "errors" are not important and could be removed.

    3 votes

    Acknowledged  ·  0 comments  ·  Reports
  11. My org has team leaders who routinely escalate cases for review that are typically outside of workloads and need to be evaluated on the spot. Currently there is no way for a team leader/manager to flag an interaction in Playvox for review by another analyst (the manager/lead is not doing the reviewing), so this is a manual effort captured in Google spreadsheets.

    It would be critical to have the ability to flag an interaction for review through the interactions tab, have a dropdown of categories for the analyst for the reason for review, leave…

    3 votes

    Acknowledged  ·  0 comments  ·  Interactions
  12. The exported file should include any highlights and the comments that were attached.

    1 vote

    Acknowledged  ·  0 comments  ·  Evaluations
  13. Currently, Playvox gives a 0% score for an evaluation that was not completed due to missing the deadline. The 0% is counting towards the calibration final score, even though there are many other completed ones.
    We would like an enhancement to just take completed evaluations in a calibration session into account to generate the score.

    9 votes

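The change requested in this idea amounts to averaging only the completed evaluations instead of counting a missed deadline as 0%. A minimal sketch, treating a missed evaluation as None:

```python
def calibration_score(scores):
    """Average only completed evaluations; None marks a missed deadline."""
    completed = [s for s in scores if s is not None]
    return sum(completed) / len(completed) if completed else None

# The behaviour described above counts a miss as 0%: (90 + 80 + 0) / 3 ≈ 56.7%.
# Excluding the miss instead gives (90 + 80) / 2 = 85%.
```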
  14. Our quality is heavily soft skill focused, and the Red "Rejected" notice is not ideal. I wonder if a different term could be used instead. Even "Denied" or "No changes warranted" might be less abrasive. Also, maybe a different color than red?

    8 votes

    Acknowledged  ·  3 comments  ·  Disputes
  15. Currently the calibration feature only allows you to set a due date (end of day). We would like to be able to set a date and time that calibration results are due (e.g. 9/29/2022 at 4 pm). This would give us a bit more flexibility in how long we allow users to provide their inputs.

    3 votes

    Acknowledged  ·  0 comments  ·  Calibrations
  16. The ability for an admin/super admin to bypass SAML rather than having to reach out to Playvox support. If there is an issue during SSO set-up, our internal IT team can't log in to resolve it.

    1 vote

    Acknowledged  ·  0 comments  ·  Settings
  17. Our security team requires we delete (or securely archive) old evaluation data. Deleting thousands of evaluations individually is not feasible.

    A process to bulk (securely archive) or delete evaluations is necessary for many organizations from a compliance perspective!

    1 vote

    Acknowledged  ·  0 comments  ·  Evaluations
  18. Add a Date Filter Field in Filters to allow pulling up interactions based on email sent date. Currently this is not possible, which creates problems pulling interactions from Salesforce, as the available Date Filter Fields are not based on email dates.

    3 votes

    Prioritized  ·  0 comments  ·  Interactions
  19. Allow disputes to edit the evaluation score either up or down. Currently we can only accept a change and increase the score; if the dispute is incorrect and the score should be lower, we cannot make that change. This applies when using a score range.

    3 votes

    Acknowledged  ·  0 comments  ·  Disputes
  20. Sometimes participants submit their answers for a calibration and realize they have made a mistake or wish to change or add something thought of afterwards. If the participant reaches out to the expert before the calibration call happens, it would be nice to be able to edit said question or to add something that could be useful. This could also affect the overall results in the end and prevent misalignment between the participant's answers and the expert's answers. Perhaps this could be added under the calibration options.

    3 votes

    Acknowledged  ·  0 comments  ·  Calibrations