3-Sigma Learner Effectiveness

By: Marigrace McKay

Crazy as it may seem, my team and I were able to show that ‘the process’ of training design performed at a 3-sigma level. Whoa! I will explain; please read on.

The series of statistical analyses ‘proved’ that the process of instructional design we employed in developing an e-learning platform was a stable process. Stable means the process was reliable, repeatable, and had a calculated, defined level of performance (3-sigma).

Six Sigma is defined by a progressive series of statistical analyses called DMAIC (pronounced ‘duh-MAY-ick’). It stands for Define, Measure, Analyze, Improve, and Control. Various ‘tests/tools’ are conducted within each stage before progressing to the next.

  • A 3-sigma level means three standard deviations from the mean - think of a normal distribution (the bell curve that the central limit theorem describes).
  • Mathematically, 3-sigma means that a process produces fewer than 6.7 defects per 100 ‘opportunities’. A simple example: with 100 learner evaluations, fewer than 6.7 would be ‘bad’ evaluations (defective). 6-sigma is considered ‘near perfection’ – few defects, a highly controlled process, with near-zero variation.
  • ‘Variation’ in processes is considered ‘waste’; excessive variation makes a process unstable.
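The 6.7-defects-per-100 cutoff above is easy to express as a quick check. This is an illustrative sketch with made-up numbers, not the project's actual figures:

```python
# Illustrative check of the 3-sigma threshold described above:
# a process is at (shifted) 3-sigma if it yields fewer than
# ~6.7 defects per 100 opportunities.

def defects_per_100(defects, opportunities):
    """Defect rate expressed per 100 opportunities."""
    return 100.0 * defects / opportunities

def meets_three_sigma(defects, opportunities, threshold=6.7):
    """True if the observed defect rate is under the 3-sigma cutoff."""
    return defects_per_100(defects, opportunities) < threshold

# Hypothetical batch: 5 'bad' evaluations out of 100 collected.
rate = defects_per_100(5, 100)   # 5.0 defects per 100
ok = meets_three_sigma(5, 100)   # under the 6.7 cutoff
```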

Your Learning Strategy and Audience. Keep reading. I am not going to teach Six Sigma here. But I am going to explain the importance of knowing your learning strategy and audience. We considered a ‘defect’ to be a learner feedback score of ‘Disagree’ or ‘Strongly Disagree’ on a 10-question training evaluation using a 5-point Likert scale. We had a captive audience of 400 managers and supervisors over a 3-year period.
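The defect rule above (any ‘Disagree’ or ‘Strongly Disagree’ response counts as a defect, and each of the 10 Likert items is an ‘opportunity’) can be written out directly. The response data below is invented for illustration:

```python
# Count 'defects' in learner evaluations per the rule above:
# any 'Disagree' or 'Strongly Disagree' response is a defect,
# and each 10-item evaluation contributes 10 'opportunities'.

DEFECT_RESPONSES = {"Disagree", "Strongly Disagree"}

def count_defects(evaluations):
    """evaluations: list of response lists. Returns (defects, opportunities)."""
    defects = sum(r in DEFECT_RESPONSES for ev in evaluations for r in ev)
    opportunities = sum(len(ev) for ev in evaluations)
    return defects, opportunities

# Two hypothetical completed evaluations.
evals = [
    ["Agree"] * 9 + ["Disagree"],
    ["Strongly Agree"] * 10,
]
d, n = count_defects(evals)  # 1 defect out of 20 opportunities
```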

What we did. We designed an e-learning platform of 12 simple ‘classes’. After class #12, we were curious and wanted to know whether learner effectiveness could be improved. Q: If we loaded up the classes with all manner of color, bells and whistles, movement, and splashy Captivate functionality, would the online class be more effective?

  • A basic formula of Statistical Process Control (SPC) is y = f(x) (read as: ‘Y is a function of X’) - also stated as ‘Outputs are a function of Inputs’. Makes sense.

We had 10-question learner evaluation results from the first time the online class was taken. We then took that class content and loaded it up with new ‘inputs’ such as more text, content, more pictures, color, inline feedback, more movement, etc. (called ‘critical Xs’). Then we re-issued the class to the same audience (12 months had elapsed). This represented the ‘I’ for ‘Improve’ in the DMAIC process.

We compared the evaluation data from the first offering against the newly ‘re-designed’ class. There were 2 results: 1) The ‘improvement’ according to learner evaluations was minuscule - not material relative to the time and effort it took to go ‘overboard’ in design. And 2) the analysis showed that the initial class was within 3 standard deviations of the mean (it had fewer than 6.7 defects per 100 opportunities) according to the learners.
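The article does not name the specific test used to compare the two offerings; one common choice for comparing defect rates is a two-proportion z-test. This sketch uses invented counts, not the project's data or its exact analysis:

```python
import math

# Compare defect rates before and after a redesign with a
# two-proportion z-test (an assumed, generic choice of test).

def two_proportion_z(d1, n1, d2, n2):
    """z statistic and two-sided p-value for a difference in defect proportions."""
    p1, p2 = d1 / n1, d2 / n2
    p = (d1 + d2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return z, p_value

# Hypothetical: 60 defects/1000 opportunities before, 55/1000 after.
z, p = two_proportion_z(60, 1000, 55, 1000)
# A large p-value mirrors the finding above: the 'improvement'
# was too small to matter.
```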

We knew, or thought, the training design was good, but it was only one year into the project that we even thought to attempt to use SPC to ‘prove’ it. We followed the DMAIC process using appropriate tests within each of the 5 phases. Specifically, we tested the learner evaluation data for normality, built a SIPOC (Suppliers, Inputs, Process, Outputs, Customers) map, conducted an MSA (Measurement System Analysis), built a Control Plan, and used many other statistical tools that I promised not to teach you here.
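‘Within 3 standard deviations of the mean’ is the classic SPC stability test: plotted points stay inside control limits of mean ± 3 sigma. This is a generic sketch of that idea with made-up per-class defect rates, not the specific control chart the project built:

```python
import statistics

# Generic 3-sigma control limits and a crude stability check.

def control_limits(samples):
    """Return (lower, center, upper) 3-sigma control limits."""
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)  # population std dev
    return mean - 3 * sigma, mean, mean + 3 * sigma

def is_stable(samples):
    """Stability check: every point within the 3-sigma limits."""
    lo, _, hi = control_limits(samples)
    return all(lo <= x <= hi for x in samples)

# Hypothetical per-class defect rates (defects per 100 opportunities).
rates = [5.1, 4.8, 5.5, 5.0, 4.9, 5.2, 5.3, 4.7, 5.0, 5.1, 4.9, 5.4]
stable = is_stable(rates)
```

A real control chart would also apply run rules (trends, shifts) rather than only the out-of-limits test, but the limits themselves are computed exactly this way.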

Why you should care. Every training audience is unique. Wow! How do you write training to teach everyone? Well, you can’t. But over time, and through study of learner evaluations, trainers should begin to understand their audience. These are essential training design elements: Learner Characteristics, Training Content, and Environment - loosely drawn from and associated with What Every Manager Should Know About Training by Dr. Robert F. Mager.

We were able to design an effective e-learning platform because we had strong knowledge of these elements.

Learner Characteristics. We knew the audience. They were manufacturing supervisors and managers who were matter-of-fact, pressed for time, and chartered with making improvements to their operations for ‘better, quicker, fewer mistakes’. They were smart, but not necessarily book-smart. They required a practical, timely approach to workplace learning.

Training Content. We selected straightforward content derived from the learners’ ‘needs’ and feedback throughout the previous year. The content and course selection were a blend of what the learners said they ‘needed/wanted’ and what managers perceived was needed. We blended the data to offer classes that met both sets of perceived needs.

Environment. For learning to ‘stick’, the work environment must be supportive. Now that’s a loaded statement! This means a culture that is open to inquiry, reflection, and mistake management. Further, it means that managers know their important role in reinforcing training by providing opportunities to perform (practice, try new things), and by modeling that learning in their own behavior.

Closing. Returning to the formula y = f(x): my team and I carefully selected the correct inputs to produce high-performing (3-sigma) outputs. We thoroughly understood the Learner Characteristics and Training Content, and worked closely with managers to influence the Environment (culture) to support and reinforce learning in the workplace.

We were not ‘caught up’ in software functionality that was more fluff than was needed to develop effective synchronous learning for our audience. We stuck to the formal ADDIE training development model. We were also very consistent in using a standard training evaluation over a 3-year period (30,000+ training hours) for both classroom and e-learning delivery. Further, we used an easy, regular feedback mechanism (Zoomerang) to listen to, and stay close to, the needs of the audience(s).

The statistical analysis proved that the process we used (ADDIE, plus unique organizational elements) to develop training was 3-sigma ‘effective’. This is not the same as saying the learners ‘liked’ the classes (as a bulls-eye or smile sheet would), nor that the learners applied it (a measure of performance) - we had other learner data that showed those results.

At a time when some of the manufacturing processes were under-performing (less than 3-sigma, i.e., a high rate of defective products), we were pleased to have developed an effective business process for learning and development. We were gratified to know that the very first class we developed was certified 3-sigma, and so, by extension, the subsequent 11 classes were also at a 3-sigma level of process performance. ~~end

Industry Update: According to Josh Bersin in the recently released Deloitte 2015 Global Human Capital Trends report: ‘L&D needs are exploding, rising from the no. 8 to the no. 3 most important talent challenge in this year's study, yet despite this demand, capabilities in learning dropped significantly’.



Marigrace McKay is an ATD-Certified Professional in Learning and Performance (CPLP), a Lean/Six Sigma Black Belt, and a senior HR professional. She is a former Board member of the ATD Chattanooga Chapter. Her career hallmark is unique solutions in people and talent management and organizational effectiveness to meet mission-critical objectives.

Marigrace earned a Master’s degree in Management and OD from The Johns Hopkins University. She has served pro bono as Examiner on various national quality awards. She is published in Continuous Learning: Delivering Consistent Business Results (ISBN: 978-1475131253-3). Marigrace is available to assist progressive employers, and L&D colleagues alike. She can be reached at mg62529@att.net or via: www.linkedin.com/in/mckay1.

