Development Targets of Quality Assurance build

1 Development of the QA Monitoring Program (Input)

This process will be identical to the building of a data capture campaign.


The primary developments in this area will be:

  • Some questions and answers can be without weighting / score.
  • Questions can be grouped to provide sub / section scores (see the sketch below).
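
How an unweighted question and question grouping might be represented is sketched below in Python. The class and field names are illustrative assumptions, not the bxp data model.

 from dataclasses import dataclass
 from typing import Optional

 @dataclass
 class QAQuestion:
     text: str
     weight: Optional[float] = None   # no weighting / score: captured but not counted
     group: Optional[str] = None      # e.g. "Greeting", used for sub / section scores

 questions = [
     QAQuestion("Did the agent use the approved greeting?", weight=10, group="Greeting"),
     QAQuestion("Was the customer's issue resolved?", weight=20, group="Resolution"),
     QAQuestion("Any additional comments?"),          # unweighted question
 ]

 # Only weighted questions contribute to the maximum attainable score.
 print(sum(q.weight for q in questions if q.weight is not None))   # 30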


2 Data Sources (Input)

QA has a number of potential input sources, including:

  • Witness Systems
  • NICE
  • Nortel Quality Monitoring
  • Telstrat Call Parrot / IDVR (Integrated Digital Voice Recording)
  • Softex Ringmaster
  • and various other data formats, including CSV and Excel.


The quality process is then applied at a call / instance level or at a case / multi-contact level.
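
A minimal Python sketch, assuming a CSV export from one of the recording systems above with hypothetical column names (call_id, case_id, agent), of loading records and grouping them so QA can be applied per call or per case:

 import csv
 from collections import defaultdict

 def load_records(path):
     with open(path, newline="") as f:
         return list(csv.DictReader(f))

 records = load_records("calls_export.csv")    # call / instance level: one row per call
 cases = defaultdict(list)
 for row in records:
     cases[row["case_id"]].append(row)         # case / multi-contact level: calls grouped by case

 print(len(records), "calls in", len(cases), "cases")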


2.1 List preparation (Process)

There are a number of tools and approaches required for list preparation.


Examples

  • X per agent - This filter will ensure that only X records (for example, 3) are selected per agent.
  • 1 per agent / with fallback - This filter will present 1 record for grading. If the one grading fails, then a fallback is presented and the average of the 2 grades calculated.
  • Random Order - This approach will grade as many records as presented but will randomise the order of presentation to ensure that the records are not graded by graders sequentially.


Once these preparations have been applied, the lists will be locked and made available for grading.


  • Open Grading - This will allow records to be entered as required and will not require filters.
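
A minimal Python sketch (not the bxp implementation) of the example filters above, assuming each record is a dict with an "agent" key; open grading simply skips these filters:

 import random
 from collections import defaultdict

 def x_per_agent(records, x=3):
     """Keep at most x records per agent."""
     kept, counts = [], defaultdict(int)
     for r in records:
         if counts[r["agent"]] < x:
             kept.append(r)
             counts[r["agent"]] += 1
     return kept

 def one_per_agent_with_fallback(records):
     """One record per agent for grading, with a fallback held in reserve."""
     by_agent = defaultdict(list)
     for r in records:
         by_agent[r["agent"]].append(r)
     return {agent: rows[:2] for agent, rows in by_agent.items()}   # [primary, fallback]

 def random_order(records):
     """Shuffle so graders do not work through records sequentially."""
     shuffled = records[:]
     random.shuffle(shuffled)
     return shuffled

 calls = [{"agent": "alice"}, {"agent": "alice"}, {"agent": "bob"}]
 print(len(x_per_agent(calls, x=1)))   # 2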


2.2 Grading (Process)

QA scoring is then done either by processing the records loaded during list preparation or by adding a record to be graded directly.


Outcomes will manage the outputs of the system.
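
A minimal Python sketch of the two grading paths described above, assuming each grading answers a fixed question set with points given out of a weight; the names and the points-over-weight calculation are illustrative, not the documented bxp formula:

 QUESTIONS = [("Approved greeting used?", 10), ("Issue resolved?", 20)]   # (text, weight)

 def grade(record, points_awarded):
     """Score one record as points awarded over points available."""
     available = sum(weight for _, weight in QUESTIONS)
     awarded = sum(points_awarded.get(text, 0) for text, _ in QUESTIONS)
     return {"record": record, "score": awarded / available}

 # Path 1: process the records loaded by list preparation.
 prepared_list = [{"call_id": 1}, {"call_id": 2}]
 results = [grade(r, {"Approved greeting used?": 10, "Issue resolved?": 15}) for r in prepared_list]

 # Path 2: add a record to be graded on the spot (open grading).
 results.append(grade({"call_id": 99}, {"Approved greeting used?": 10, "Issue resolved?": 20}))
 print([round(r["score"], 2) for r in results])   # [0.83, 0.83, 1.0]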


2.3 Scoring Reports (Output)

Scoring reports follow the BE format of presenting reports (a filtering sketch follows the list below).

  • My : Scores on any QA that was performed on me
  • Team : Scores on any QA performed on any member in my team
  • Grouped : Filtered scores for any combination of QAs
  • All : Data output at all levels
  • Executive : Highly customised reports to summate the data into single totals
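
A minimal Python sketch of how these report scopes might filter a set of QA scores; the record fields ("agent", "team", "score") and the scope handling are assumptions for illustration, not the bxp reporting interface:

 def report(scores, scope, me=None, my_team=None, group_filter=None):
     if scope == "My":
         return [s for s in scores if s["agent"] == me]
     if scope == "Team":
         return [s for s in scores if s["team"] == my_team]
     if scope == "Grouped":
         return [s for s in scores if group_filter(s)]
     if scope == "All":
         return list(scores)
     if scope == "Executive":                     # summate into a single total
         return sum(s["score"] for s in scores) / len(scores) if scores else 0.0
     raise ValueError(scope)

 scores = [{"agent": "alice", "team": "A", "score": 0.9},
           {"agent": "bob", "team": "A", "score": 0.7}]
 print(report(scores, "My", me="alice"))              # [{'agent': 'alice', 'team': 'A', 'score': 0.9}]
 print(round(report(scores, "Executive"), 2))         # 0.8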