Don’t confuse effectiveness with quality
Training organizations seek to organize their business, monitor and measure their shortcomings and ensure the smooth running of their internal processes. This attention to improvement may be a good thing, but beware of navel-gazing! All too often, evaluations only measure the organizational and logistical aspects of the training. The certifier may be satisfied with these quality-related chains of evidence, but let us not forget that they are not the client. Quality is an important approach, but should not be confused with the effectiveness of the training, which is another subject entirely and involves measuring the performance of the learner and of the client.
Focusing on the client’s return on expectations
In order to ensure that your evaluations correspond to the client’s real needs, it is important to take into account a simple tool: the Return on Expectations or ‘ROE’, not to be confused with the Return on Investment or ‘ROI’. The Return on Expectations process encourages a collaborative approach between the client and the training organization.
The first stage is to clearly identify the origin of the request for training. It could be that management notices a lack of skills or knowledge in its teams or in a particular manager and decides that certain employees must be trained based on information obtained on the ground.
It is from these individuals that you need to collect and define the need. The more involved they are in the work of designing the training, the more likely the training will be effective.
It is also at this stage that you should assess the relevance of an evaluation approach: training courses in the Maldives offered as a ‘reward’ for good employees are an industry reality! Evaluations should be used in proportion to the operational importance of the training.
The second stage is the identification of indicators. What new behaviour or measurable effect would the client like to see by the end of the training? It is important to be precise during this needs collection phase. The more detailed knowledge we have about what we want to measure, the easier it will be to measure it.
This process is more important than creating evaluations. It provides the teaching team with an analysis enabling them to design the right training to generate new behaviour.
Designing evaluations using a model familiar to HR
In order to involve clients in the ROE evaluation process, it is a good idea to use language and concepts familiar to general management and HR. This type of ‘scientific credibility’ also constitutes a useful sales tool.
The good news is that a standard evaluation model has existed since the 1950s, based on scientific principles and commonly taught to HR professionals: the Kirkpatrick model. This model distinguishes between 4 evaluation phases or levels.
The reaction questionnaire (level 1)
Everyone (or nearly everyone) has done this type of evaluation. You have probably already come across a standard form containing questions to detect the good and bad logistical and organizational aspects of a course, analyse the perceived quality of the teaching and gather learners’ immediate reactions.
Learning tests (level 2)
Has the learner understood and retained what has been taught? These tests can take several forms, from the most fun to the most austere. There is currently a trend for digital tools such as classroom quizzes. These are great tools for motivating learners, varying the pace of the training and, best of all, for testing knowledge acquisition in a very quantified manner. The importance of this level must not, however, be overestimated. Remembering knowledge is less important in a digital world where information is easily accessible.
Evaluation of behaviour (level 3)
Sometimes called ‘delayed evaluation’, this is the key tool for measuring the longer-term impact of the training. For practical reasons, it is better not to wait too long to conduct it: 2 to 3 months on average. Wait longer and the measurement will theoretically be more ‘reliable’, but the response rate will be very low. Indeed, this is the hardest evaluation to get learners to complete. The evaluation must therefore be designed upstream, in collaboration with the client and the learners’ managers. This maximizes the chance of it being taken seriously by those involved.
Evaluation of results (level 4)
The aim here is to measure the impact of the training on the organization that ordered it. If we take the example of training for a sales team, we would therefore seek to determine the impact on the company’s sales results following the training.
Level 4 evaluations are, of course, the most difficult to implement.
Adopting best practices
Switching to digital technology
Use specially designed software: paper evaluations are never compiled in digital format (or, if they are, at the cost of time that could have been better spent on higher value-added tasks). As for the emails asking learners to fill out a digital evaluation, it is vital to adhere to certain best practices: very sober content, to avoid being mistaken for spam, and a clear, catchy subject line. Don’t send emails during the holidays. The best days are Tuesday and Thursday, preferably before midday.
There are many benefits to offering a visual summary of the results, in the form of an infographic, for example. The promise of such a document is a powerful sales argument, particularly if it is designed to push the evaluation and analysis up to level 4, i.e. the client’s expected results.
Designing quantifiable evaluations
Questionnaires must be short, with precise and unambiguous questions. Maximize the number of questions with quantifiable results. If you wish to include an open question, do so following a closed and quantified question on a similar subject.
Beware of satisfaction surveys conducted hastily in the final hour of the course, when the learners are in a hurry to catch their train home. The responses you obtain will be of poor quality and often not very critical. It is better to send out the questionnaire the following day or a few days later and obtain considered responses that will enable you to progress. There may be a few answers missing, in which case do not hesitate to contact the learners concerned. It is better to have 90% high-quality responses than 100% hastily scribbled answer sheets.
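To illustrate why quantifiable, closed questions pay off, here is a minimal sketch of how such responses can be tallied automatically (the question names and the 1–5 rating scale are hypothetical, not taken from any particular evaluation tool):

```python
from statistics import mean

# Hypothetical closed-question responses on a 1-5 scale;
# each dict is one returned questionnaire (some may be incomplete).
responses = [
    {"content_relevance": 5, "trainer_clarity": 4, "pace": 3},
    {"content_relevance": 4, "trainer_clarity": 5, "pace": 4},
    {"content_relevance": 5, "trainer_clarity": 4},  # 'pace' left blank
]

def summarize(responses):
    """Average each closed question, skipping blanks,
    and report the per-question response rate."""
    questions = {q for r in responses for q in r}
    summary = {}
    for q in sorted(questions):
        scores = [r[q] for r in responses if q in r]
        summary[q] = {
            "average": round(mean(scores), 2),
            "response_rate": round(len(scores) / len(responses), 2),
        }
    return summary

print(summarize(responses))
```

Open questions, by contrast, require manual reading and interpretation, which is why the advice above is to attach them to a closed, quantified question on the same subject.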
Encourage trainees to commit to using what they have learned
During the training, ask trainees to define an action plan for the coming months. This is a simple way of encouraging them to make commitments and, ideally, to share them with their managers.
The response rate is often lower for delayed evaluations. Ideally, plan the delayed evaluation in advance, in collaboration with the client. It will be harder to skip if the managers make it a ‘mandatory’ part of the training. As such, get the managers involved by also sending them an evaluation to fill in concerning the behaviour of their trained staff.
In the absence of help from management, you will have to be a bit cunning! Rekindle interest in the training by sending additional content. This could be a PDF containing 10 mistakes to be avoided when applying the skills acquired, an amusing video found on the Internet on a theme similar to the training, etc. Motivate the trainees to respond to the questionnaire by promising them a deliverable (e.g. an analysis of the results, an infographic, etc.).
The landscape of professional training is undergoing major changes. Evaluating effectiveness can be a powerful tool for transforming the relationship between training company and client into a long-term partnership rather than a purchasing process focused on the lowest possible cost.
Co-founder of Digiforma.