Need & Focus
First, your programme needs clearly defined objectives, a defined scope and delineated focus areas. For example, will you be judging a single interaction, a number of interactions centred around a common thread, or even a process with a measurable output?
The next step is to define metrics and desired outcomes; together these effectively form your test criteria, often manifested as a questionnaire used to confirm, with checks along the way, that the desired output has been achieved. There can be questionnaires for different stages and aspects of the process, and the more granular the scope of your programme, the more preparation you will need. However, more detailed programmes are far more likely to deliver the desired outcome.
When building your questionnaire there are a number of considerations to bear in mind. Taking the example of a phone call, you would probably want a minimum of three sections: call opening, call body and call closing. Each group is made up of questions related to that section, and it is important to note that thoughtful grouping will help with your analytics at the other end of the process, rather than just producing an overall score. In other words, is the whole call failing or just one aspect? Groups also facilitate more focused revision and are great for spotlighting any trends that are developing.
With thoughts of the above in mind we actually have a framework for the programme:
Need >> Focus >> Questionnaire >> Section >> Question
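As a minimal sketch, this framework could be modelled as a nested data structure; the class and field names below are illustrative assumptions, not part of any particular Q.A. system:

```python
# Minimal sketch of the Need >> Focus >> Questionnaire >> Section >> Question
# hierarchy. All names and example questions are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    weight: int = 1          # relative importance of the question

@dataclass
class Section:
    name: str                # e.g. "Call opening"
    questions: list = field(default_factory=list)

@dataclass
class Questionnaire:
    focus: str               # the need/focus the programme addresses
    sections: list = field(default_factory=list)

questionnaire = Questionnaire(
    focus="Inbound phone calls",
    sections=[
        Section("Call opening", [Question("Did the agent give their name?")]),
        Section("Call body", [Question("Was the caller's need identified?")]),
        Section("Call closing", [Question("Was the call summarised?")]),
    ],
)
print(len(questionnaire.sections))  # -> 3
```

Structuring the questionnaire this way keeps each question tied to its section, which is what makes the per-group analytics discussed above possible.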
Your draft programme should be distributed to the relevant stakeholders for review. You need to do this because you probably don't know everything about the process. I have only ever seen this improve the questionnaire, and it really encourages operational buy-in.
With your questions and groups identified, the next step is to think about any weighting you want to apply. To help with this we usually categorise questions as:
- Standard
- Critical (Fail)
- Classification field
The weighting of a Standard question will be determined by its importance and, if you have a large questionnaire, you may want to sub-divide this category again. A good Q.A. system will work out the maths for you when it comes to final percentage scores, so you can focus on making sure that your weighting reflects the significance of the question. A Critical (Fail) question is one that has to be answered correctly; these are commonly related to, for instance, legislation or health and safety. Again, a good Q.A. system will let the answer to a critical question override all other scores and should red-flag the issue to a supervisor. A Classification field is a non-scoring field but can be critical to the way an assessment is ultimately classified. Examples might include: type of call, duration of call or agent's level of experience.
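A sketch of how these three categories might interact in scoring, assuming a system where a failed critical question overrides the percentage and classification fields are simply recorded (the dictionary keys and values here are assumptions for illustration):

```python
# Illustrative scoring sketch: Standard questions contribute weighted points,
# a failed Critical question overrides the whole score, and Classification
# fields are recorded but never scored. All names/values are assumptions.

def score_assessment(answers):
    """answers: list of dicts with keys 'kind', plus 'weight'/'passed' or
    'name'/'value' depending on the question category."""
    earned = possible = 0
    classifications = {}
    critical_failed = False
    for a in answers:
        if a["kind"] == "classification":
            classifications[a["name"]] = a["value"]   # non-scoring metadata
        elif a["kind"] == "critical":
            if not a["passed"]:
                critical_failed = True                # red-flag to a supervisor
        else:  # standard
            possible += a["weight"]
            if a["passed"]:
                earned += a["weight"]
    percent = 100.0 * earned / possible if possible else 0.0
    return {"score": 0.0 if critical_failed else percent,
            "classifications": classifications}

result = score_assessment([
    {"kind": "standard", "weight": 3, "passed": True},
    {"kind": "standard", "weight": 1, "passed": False},
    {"kind": "critical", "passed": True},
    {"kind": "classification", "name": "call_type", "value": "sales"},
])
print(result["score"])  # -> 75.0
```

Note that the weights (3 and 1 here) need not add up to any particular total; the percentage is calculated from whatever points were possible.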
You should consider linking training or support material to your programme to explain or demonstrate how the process should be done. This ensures that assessors and those being assessed understand what is expected. A properly organised assessment will demonstrate trends or common areas of weakness, and you should use this information to update the relevant sections of the support material.
Calibration means testing the assessors to ensure that they grade consistently across all assessments. Before assessing begins, they should be given the opportunity to review the programme and, when appropriate, set a model performance. They should be looking for and eliminating any ambiguity, because what you are ultimately aiming for is a consistently high standard of performance.
The next stage is about when and how your assessments are performed. Injected remains the traditional approach: effectively ad-hoc assessments as and when it suits the organisation and the availability of staff. However, with this method there is a danger that some people may be assessed more often than others whilst some are never assessed at all. It may also mean objectivity is lost because the same assessor is used for the same staff all the time. There are other approaches, though, notably:
- First Come
- Equal Distribution
These strategies open the door to some useful options, such as team leaders assessing each other's teams, outsourcing your assessments and balancing the workload across assessors.
First Come
This means creating a schedule of assessments such that the assessor performs the next assessment in line rather than, for instance, a supervisor only ever dealing with their own team.
Equal Distribution
This is broadly similar to the First Come approach, but it limits the number of assessments per assessor so that work is distributed equally.
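The two approaches can be sketched with a simple queue: assessors take the next assessment in line, and an optional per-assessor cap turns First Come into Equal Distribution. This is purely illustrative; real systems will have their own assignment logic.

```python
# Sketch of First Come vs Equal Distribution. Assessors take the next
# assessment in the queue; an optional cap limits each assessor's load so
# that work is spread evenly. Names and data are illustrative assumptions.
from collections import deque

def assign(queue, assessors, cap=None):
    """Assign queued assessments to the least-loaded assessor in turn."""
    load = {a: [] for a in assessors}
    pending = deque(queue)
    while pending:
        # pick the assessor with the lightest current load
        assessor = min(load, key=lambda a: len(load[a]))
        if cap is not None and len(load[assessor]) >= cap:
            break  # every assessor is at capacity; the remainder waits
        load[assessor].append(pending.popleft())
    return load

calls = [f"call-{i}" for i in range(7)]
loads = assign(calls, ["Ann", "Ben", "Cat"])
print({a: len(v) for a, v in loads.items()})  # -> {'Ann': 3, 'Ben': 2, 'Cat': 2}
```

Because assignments are drawn from a shared queue rather than tied to a specific supervisor, the same assessor no longer reviews the same staff every time, which helps preserve objectivity.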
When designing Q.A. programmes in the past, it was usual to require the weights to add up to 100% for convenience and ease of configuration. The right Q.A. system will eliminate this need and calculate percentages automatically.
You should also check that your system can handle unanswered questions; will this negate the whole test? Do you want it to?
Let’s say, for example, an insurance company is running a Q.A. programme on their sales line. They have an opening section, a home insurance section, a car insurance section and a closing section.
- Opening: 25 points
- Home: 25 points
- Car: 25 points
- Closing: 25 points
If the Q.A. is performed on a home insurance call, the car insurance section is not applicable. However, the call still needs to be scored out of 100%. These cases need to be handled by the system, or you’ll end up tearing your hair out trying to make the scoring work. Handling them properly also ensures that grouped trend reports are not skewed by non-applicable questions.
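Under the assumption that the system simply excludes non-applicable sections from the denominator, the insurance example works out like this (the per-section figures beyond the 25-point weights are invented for illustration):

```python
# Sketch: recalculate the percentage over applicable sections only, so a
# home insurance call is not penalised for the unused car section.
# Earned values are illustrative; unanswered questions could be treated
# the same way, by dropping them from both totals.

sections = {
    "Opening": {"points": 25, "earned": 25, "applicable": True},
    "Home":    {"points": 25, "earned": 20, "applicable": True},
    "Car":     {"points": 25, "earned": 0,  "applicable": False},  # N/A
    "Closing": {"points": 25, "earned": 25, "applicable": True},
}

possible = sum(s["points"] for s in sections.values() if s["applicable"])
earned = sum(s["earned"] for s in sections.values() if s["applicable"])
print(f"{100 * earned / possible:.1f}%")  # -> 93.3%
```

Scoring 70 out of an applicable 75 gives 93.3%, whereas naively scoring out of the full 100 points would have reported 70% and dragged down the trend figures for a perfectly reasonable call.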
Another potential fly in the ointment is an outright fail. Fail questions are traditionally weighted at -100, thus forcing a definite fail; but that can negate all the other good work highlighted in the assessment and can also lead to poor training focus. At this point, then, we need to think about the difference between Score and Outcome. A score is the percentage achieved, whilst the outcome is a pass or fail. A good system will let you mark a question as a Fail question, producing a fail outcome without affecting the score.
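The Score/Outcome separation can be sketched in a few lines; the function and field names are assumptions, not any particular product's API:

```python
# Sketch of separating Score from Outcome: a failed Fail question flips the
# outcome to "Fail" but leaves the percentage score untouched, so coaching
# can still target the genuinely weak areas. Names are illustrative.

def evaluate(score_percent, fail_questions_passed):
    """Return the score and outcome; the outcome is 'Fail' if any Fail
    question was answered incorrectly, regardless of the score."""
    outcome = "Pass" if all(fail_questions_passed) else "Fail"
    return {"score": score_percent, "outcome": outcome}

print(evaluate(88.0, [True, False]))  # -> {'score': 88.0, 'outcome': 'Fail'}
```

Here an otherwise strong 88% call still fails, but the 88% survives for trend reporting and training focus, rather than being crushed to zero by a -100 weighting.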
Once the assessments have been performed, addressing areas of weakness is vital; otherwise your Q.A. process is more or less pointless. The more carefully you set up your assessment, the more directed the support and the greater the operational benefit.
Trend reporting shows how an individual, team, department or the organisation as a whole is doing, all on the same report.
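A minimal sketch of rolling the same assessment scores up from individual to team to organisation level; the agents, teams and scores are invented for illustration:

```python
# Sketch of a trend report: the same assessment scores aggregated at team
# and organisation level. All data here is illustrative.
from statistics import mean

assessments = [
    {"agent": "Ann", "team": "Sales",   "score": 90},
    {"agent": "Ann", "team": "Sales",   "score": 80},
    {"agent": "Ben", "team": "Sales",   "score": 70},
    {"agent": "Cat", "team": "Service", "score": 60},
]

by_team = {}
for a in assessments:
    by_team.setdefault(a["team"], []).append(a["score"])

print({team: mean(scores) for team, scores in by_team.items()})
# -> {'Sales': 80, 'Service': 60}
print(mean(a["score"] for a in assessments))  # organisation-wide -> 75
```

Because the section groupings from the questionnaire are preserved in the data, the same roll-up could be run per section to show whether, say, call closings are the weak spot across a whole department.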
Sometimes applying scores to individuals is not possible because of union or other contractual constraints. Status reports provide simple counts without the individual focus that comes with performance management.
Having made the decision that Q.A. is something you need to engage in, it’s important to have the right system in place for your own particular processes which, although they may be similar to others’, will be unique to your organisation. So, if your Q.A. is going to have any point, it needs to show you exactly where help is needed, demonstrate trends, and score and report in a way that is acceptable to your organisation and the people in it.