Quality

11 results found

  1. When we assign an ETA workload, it first takes us to the calibration step and then to the review screen. Could we have an option in workloads to disable the calibration step so that it takes us directly to the review screen?

    Can we have this feature enabled for the Evaluate the Analyst workloads?

    15 votes

    Under Review  ·  3 comments  ·  Workloads
  2. If agents are deactivated after the workload runs, the evaluations assigned for those agents still remain in the QAs' queue. However, the QAs cannot evaluate the agents because they are inactive.

    14 votes

    2 comments  ·  Workloads
  3. Currently, the analyst distribution step in a workload only lets us distribute a percentage of the estimated interaction volume. We need to be able to set a fixed number of evaluations for each QA analyst when distributing the estimated volume. That way, across multiple workloads every week, we can control each analyst's total assignment (a rough sketch of this appears after the list of ideas).

    12 votes

    Under Review  ·  0 comments  ·  Workloads
  4. Redistribute evaluations across multiple analysts (Team Leaders, etc.) for both substitutions (e.g. analysts on PTO) and re-assignments.

    17 votes

    Under Review  ·  1 comment  ·  Workloads
  5. Currently, Workloads assign a static list of evaluations per analyst. This is not an efficient workflow for larger teams.

    A setting that runs workloads without analyst assignments would be helpful. The workload would run with the typical date, frequency, and filter settings, but instead of assigning a list to each analyst, the user would add their analysts and let them work from an open 'bucket' of interactions.

    Analysts would take ownership of an interaction by completing its evaluation, which would remove it from the open queue (a sketch of this queue follows the list of ideas).

    Users would be able to see which analysts completed evaluations and…

    13 votes

    Under Review  ·  2 comments  ·  Workloads
  6. We need to work out how choosing more than one filter will interact with sampling.

    Workloads: multiple filters. Assign multiple filters to a Workload to capture omni-channel tickets (chat, email, phone). The current workaround is to create a workload for each channel, which is not scalable. Also, some of our agents work on calls, chats, and email; by creating three workloads, we evaluate some of them more often than the scheduled number of evaluations (a sampling sketch for this follows the list of ideas).

    Two parts.…

    22 votes

    Under Review  ·  2 comments  ·  Workloads
  7. Currently, Playvox gives a 0% score to an evaluation that was not completed because its deadline was missed. That 0% counts towards the calibration final score, even though there are many other completed evaluations.
    We would like an enhancement that takes only completed evaluations in a calibration session into account when generating the score (a small arithmetic sketch of this follows the list of ideas).

    6 votes

    Under Review  ·  1 comment  ·  Calibrations
  8. 5 votes

    Under Review  ·  5 comments  ·  Filters
  9. I would like a feature where, once you have started a workload and finished your first evaluation, it automatically takes you to the next evaluation in the workload.
    Currently, once I finish an evaluation, I have to go back to the workload and start a new one, which adds unnecessary steps and has proven inefficient for my workflow.

    12 votes

    Under Review  ·  4 comments  ·  Workloads
  10. I want to be able to target specific conversation types or user queries for evaluations, e.g. feature requests/user feedback, bug reports, refunds, or subscription failures, so we can take a deeper dive into evaluating these conversations. This will help us learn how to improve our customer service and better understand our customers.

    3 votes

    Under Review  ·  1 comment  ·  Filters
  11. Disputes - Report on Trends

    On the Disputes report, show the original and new score.

    In the Quality dispute functionality, would it be possible to display reporting on which 'closed' disputes were deemed valid or invalid by the arbitrator, and to create coachings off of the valid disputes?
    If multiple points of an evaluation were disputed (i.e. in different or the same sections of the dispute) and some were deemed valid while others were not, these details would have to be displayed as well.
    It would also be ideal if the dispute functionality could connect to the coaching functionality so that if a…

    7 votes

    Under Review  ·  0 comments  ·  Disputes
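For idea 3 (fixed per-analyst counts instead of percentages), here is a minimal, hypothetical sketch of the requested behaviour. None of the names below come from Playvox; it only illustrates how fixed quotas across several weekly workloads add up to a predictable total per analyst.

```python
# Hypothetical sketch of fixed per-analyst quotas (not Playvox's API).
def distribute_fixed(interactions, quotas):
    """Give each analyst at most quotas[analyst] interactions from the pool."""
    assignments = {analyst: [] for analyst in quotas}
    pool = iter(interactions)
    for analyst, quota in quotas.items():
        for _ in range(quota):
            ticket = next(pool, None)
            if ticket is None:          # estimated volume exhausted
                return assignments
            assignments[analyst].append(ticket)
    return assignments

# Two weekly workloads with fixed quotas: each analyst's weekly total is known up front.
chat_workload  = distribute_fixed(range(100), {"alice": 10, "bob": 15})
email_workload = distribute_fixed(range(80),  {"alice": 5,  "bob": 10})
totals = {a: len(chat_workload[a]) + len(email_workload[a]) for a in ("alice", "bob")}
print(totals)   # {'alice': 15, 'bob': 25}
```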
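For idea 5 (an open 'bucket' instead of static per-analyst lists), a rough sketch of the claim-on-completion queue being described. The class and method names are made up for illustration; they are not existing Playvox objects.

```python
# Hypothetical open-bucket queue: interactions are not pre-assigned; an analyst
# takes ownership only by completing the evaluation, which removes it from the pool.
class OpenBucket:
    def __init__(self, interactions):
        self.pending = set(interactions)
        self.completed = {}                      # interaction -> analyst

    def complete(self, analyst, interaction):
        if interaction not in self.pending:
            raise ValueError("already evaluated or not in this workload")
        self.pending.remove(interaction)
        self.completed[interaction] = analyst

    def completed_by(self, analyst):
        return [i for i, a in self.completed.items() if a == analyst]

bucket = OpenBucket(["t1", "t2", "t3"])
bucket.complete("alice", "t1")
bucket.complete("bob", "t2")
print(bucket.completed_by("alice"), len(bucket.pending))   # ['t1'] 1
```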
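For idea 6 (multiple filters on one workload), the underlying question is how sampling behaves across channels: run as three separate workloads, an agent who handles chat, email and phone is sampled three times. A toy sketch, with invented data, of sampling once from the combined multi-filter pool so each agent stays at the scheduled number of evaluations:

```python
import random

# Invented interactions already matched by three channel filters.
interactions = (
    [{"agent": "sam", "channel": "chat",  "id": i} for i in range(20)] +
    [{"agent": "sam", "channel": "email", "id": i} for i in range(20, 35)] +
    [{"agent": "kim", "channel": "phone", "id": i} for i in range(35, 50)]
)

def sample_per_agent(pool, per_agent):
    """Sample per_agent interactions per agent from the combined pool, so working
    several channels does not multiply an agent's evaluations."""
    by_agent = {}
    for item in pool:
        by_agent.setdefault(item["agent"], []).append(item)
    return {agent: random.sample(items, min(per_agent, len(items)))
            for agent, items in by_agent.items()}

sampled = sample_per_agent(interactions, per_agent=4)
print({agent: len(picked) for agent, picked in sampled.items()})   # {'sam': 4, 'kim': 4}
```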
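For idea 7, the requested change is just to the averaging: drop the missed-deadline evaluations from the calibration score instead of counting them as 0%. A small illustration with made-up scores:

```python
# Scores from one calibration session; None marks an evaluation missed by deadline,
# which is currently treated as 0% and drags the final score down.
scores = [92.0, 88.0, None, 95.0]

current   = sum(s or 0.0 for s in scores) / len(scores)       # 68.75
completed = [s for s in scores if s is not None]
proposed  = sum(completed) / len(completed)                   # 91.67
print(round(current, 2), round(proposed, 2))
```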