Through Our Looking Glass: The Kirkpatrick Model

In the last few weeks we have analyzed and compared various Instructional Design models and learning theories. In the same spirit of understanding, we would like to present, in our own words, the famous Kirkpatrick model for evaluating training programs. It is also popularly known as Kirkpatrick’s 4 Levels.

The History

Donald L. Kirkpatrick, Professor Emeritus at the University of Wisconsin in the United States and a past president of the American Society for Training and Development (ASTD), described “The Four Levels of Learning Evaluation” in a series of articles that appeared in the US Training and Development Journal in 1959. The model was later published in Kirkpatrick’s 1994 book, “Evaluating Training Programs.”


As learning service providers, we are all well aware of the need for evaluating learning/training programs. Kirkpatrick’s 4 Levels categorizes evaluation into four levels: 1] Reaction, 2] Learning, 3] Behavior (Transfer), and 4] Results.

The Kirkpatrick Model

The Levels in Detail

While the first two levels are widely used and short-term, levels 3 and 4 are more observation-based and take longer to yield conclusions. Kirkpatrick-certified facilitators emphasize planning the learning design around the results expected, or the behavioral and skill changes envisaged.

Level 1: Learner’s Reaction

This is the most immediate evaluation, conducted as soon as a training session comes to an end. The learners are asked to give their opinion about the session. In the past this was done verbally, but these days smile sheets, survey forms, questionnaires, etc. are used to understand whether or not the learners find the learning relevant, interactive, easy to understand, and so on.

A few things to be wary of while creating level 1 evaluation elements:

  • Know clearly what you want to find out
  • Design something that quantifies the reactions, yet captures the emotions through written comments and suggestions
  • Try to get complete participation and honest responses
  • Use a rating scale that respondents find acceptable

Conducting this evaluation online makes the responses easier to organize and analyze.
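As an illustration of how online reaction data can be quantified, here is a minimal sketch. The five-point scale, the questions, and the responses are all assumptions for the example, not data from any real survey:

```python
# Summarize hypothetical Level 1 "reaction" survey responses
# collected on a 1-5 Likert scale.
from statistics import mean

responses = {
    "The content was relevant to my job": [5, 4, 4, 3, 5],
    "The session was interactive": [4, 4, 5, 4, 3],
    "The material was easy to understand": [3, 4, 4, 5, 4],
}

for question, ratings in responses.items():
    # Average rating per question, plus the response count.
    print(f"{question}: average {mean(ratings):.1f} / 5 ({len(ratings)} responses)")
```

Written comments would still be reviewed by hand; the point of quantifying is simply to spot the weakest-rated aspects of a session at a glance.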

Level 2: Learning Results

In simple words: how do you find out whether the learners have actually learnt? These days we use pre- and post-assessments to do just that. The results allow trainers and learning developers to understand the gaps that need to be filled, and also to analyze the impact of the learning program. In recent years, this level of evaluation has gained prominence and has almost become a standard part of eLearning modules, along with quizzes and other elements that provide better clarity.

A few things to keep in mind here:

  • Try to use a control group wherever applicable
  • Use a test to measure knowledge and attitudes, and a performance test to measure skills
  • Incorporate pre- and post-assessments
  • Ensure participation
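To make the pre-/post-assessment comparison concrete, here is a small sketch that computes each learner’s score change and normalized gain (the learner names and scores are hypothetical):

```python
# Compare hypothetical pre- and post-assessment scores (out of 100)
# to estimate how much each learner gained from the training.
pre_scores = {"ananya": 55, "bjorn": 70, "chen": 40}
post_scores = {"ananya": 80, "bjorn": 85, "chen": 75}

for learner, pre in pre_scores.items():
    post = post_scores[learner]
    # Normalized gain: improvement as a fraction of the room left to improve.
    gain = (post - pre) / (100 - pre) if pre < 100 else 0.0
    print(f"{learner}: {pre} -> {post} (normalized gain {gain:.2f})")
```

Comparing the same metric for a control group that did not take the training (as the first bullet suggests) helps separate the program’s effect from other influences.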

Level 3: Behavior in Workplace (Knowledge Transfer)

Tests give immediate results, but do they indicate that the learning sticks? Good scores do not always imply lasting knowledge. Knowledge and skills are acquired over time through action and implementation. Level 3 evaluation is quite tricky, as it looks at behavioral change as a result of learning.

This level of evaluation can only be conducted after a certain amount of time has passed. Observation surveys, scorecards, etc. are used by supervisors to keep tabs on observable changes. Surveys can also be used to analyze the change from both the learner’s and the administrator’s point of view.

A few things to note here:

  • Give it time.
  • Use a control group if possible.
  • Keep records of the behavior/skills before and after.
  • Ensure 100% participation.
  • Evaluate from time to time.
  • Think in terms of ROI.

Level 4: Business Results

Organizational growth, increased profit, an increase in overall performance, or the average turnaround: any of these factors can be used to evaluate the results. However, it isn’t as easy as it sounds. Each of those aspects is connected to various other variables that keep fluctuating, making it really difficult to attribute the changes to the learning program alone. Such interlinked analysis at an organizational level is often too difficult to quantify.
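When Level 4 is attempted, it is often paired with a simple return-on-investment calculation: net benefit divided by program cost. The figures below are purely illustrative assumptions, and in practice the hard part is the attribution, not the arithmetic:

```python
# Illustrative training ROI calculation; all figures are hypothetical.
program_cost = 50_000      # delivery, materials, learners' time
measured_benefit = 65_000  # productivity gain attributed to the training

# ROI as a percentage: (net benefit / cost) * 100.
roi_percent = (measured_benefit - program_cost) / program_cost * 100
print(f"Training ROI: {roi_percent:.0f}%")
```

The fragile input here is `measured_benefit`: deciding how much of a business outcome to credit to the training is exactly the interlinked-variables problem described above.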

Level 3 and Level 4 evaluation in Kirkpatrick’s model are conditional and greatly influenced by the operational environment, which is why most organizations tend to follow the 2011 modification of the learning and evaluation model by the Atlanta-based Kirkpatrick Partners, which gave us an easier indicator: the “return on expectations” (ROE) of stakeholders.

With the broadening of learning-delivery channels, ranging from face-to-face sessions to formal and informal learning, the levels of evaluation too have to be flexible and adaptive. xAPI does allow us to track and measure informal learning activities, but what we do with the data is entirely up to us.
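For instance, an xAPI statement recording an informal learning activity is just a small JSON document with an actor, a verb, and an object. The sketch below follows that statement structure, but the learner, activity name, and example.com URLs are made up for illustration; a real deployment would send the statement to a Learning Record Store:

```python
import json

# A minimal xAPI-style statement. The actor/verb/object shape follows
# the xAPI statement structure; the identifiers here are hypothetical.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.com/activities/peer-coaching-session",
        "definition": {"name": {"en-US": "Peer coaching session"}},
    },
}

print(json.dumps(statement, indent=2))
```

Statements like this accumulate in the record store over time, which is what makes them usable for the slow, observational Level 3 and Level 4 questions rather than only for immediate ones.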

So, how are you evaluating the effectiveness of your training programs? Let us know. We are open to hearing your views on this.
