ASSIGNMENT # 1
Types and Significance of Evaluation of Training Programs

INTRODUCTION
Training is vital for any and every organization. With the changing socio-economic and technological relevance of training, the definitions, scope, methods and evaluation of training programs have also changed. One of the earlier classic definitions of training is 'bringing lasting improvement in skills in jobs'. Present-day definitions take a multi-dimensional perspective, encompassing the needs of individuals, teams, organizations and society.
The steps in training program development are planning, programme implementation, and programme evaluation and follow-up. The evaluation of any training system helps measure the 'knowledge gap', defined by Reich as 'the gap between what the trainer teaches and what the trainee learns'. Evaluations help to measure Reich's gap by determining the value and effectiveness of a learning programme, using assessment and validation tools to provide the data. Evaluation of training systems, programmes or courses tends to be a demand of a social, institutional or economic nature.
A training program is not complete until you have evaluated its methods and results. A key to obtaining consistent success with training programs is to have a systematic approach to measurement and evaluation.

Training Evaluation Approach
Evaluation methods should be determined based on the goals of the training process and should meet the demands of the various stakeholders involved. Every organization has multiple stakeholders, and not everyone within the organization has the same information needs. Typically, organizational stakeholder groups include the training department, employees and business units.
Their information requirements fall into two categories: whether the competencies have been learned and whether the learning has been applied toward improved performance.

DEFINITION
* Goldstein (1993) defines evaluation as the "systematic collection of descriptive and judgmental information necessary to make effective decisions related to selection, adoption, value and modification of various instructional activities".
* Kirkpatrick (1996) defines evaluation as the determination of the effectiveness of a training programme.
* Evaluation of training can be described as any attempt to obtain information on the effects of a training programme, and to assess the value of the training in the light of that information.
* According to Van Dyk et al. (1997), definitions of evaluation have several implications:
• Evaluation is an ongoing process. It is not done only at the end of a course.
• The evaluation process is directed towards a specific goal and objectives.
• Evaluation requires the use of accurate and appropriate measuring instruments to collect information for decision making.
• Evaluation is a form of quality control. Evaluation is not only concerned with the evaluation of students but with the wider training system as a whole.

TYPES OF EVALUATION

1) Formative evaluation
Formative evaluation provides ongoing feedback to the curriculum designers and developers to ensure that what is being created really meets the needs of the intended audience. Formative evaluation may be defined as "any combination of measurements obtained and judgments made before or during the implementation of materials, methods, or programs to control, assure or improve the quality of program performance or delivery."
* It answers such questions as, "Are the goals and objectives suitable for the intended audience?", "Are the methods and materials appropriate to the event?" and "Can the event be easily replicated?" Formative evaluation furnishes information for program developers and implementers.
* It helps determine program planning and implementation activities in terms of (1) target population, (2) program organization, and (3) program location and timing.
* It provides "short-loop" feedback about the quality and implementation of program activities and thus becomes critical to establishing, stabilizing, and upgrading programs.

2) Process evaluation
Process evaluation provides information about what occurs during training. This includes giving and receiving verbal feedback. Process evaluation answers the question, "What did you do?" It focuses on the procedures and actions being used to produce results.
* It monitors the quality of an event or project by various means. Traditionally, working as an "onlooker," the evaluator describes this process and measures the results in oral and written reports.
* Process evaluation is the most common type of training evaluation. It takes place during training delivery and at the end of the event.
Most of you have probably done it in one form or another. The question we try to answer is "What did you do?"
* Following is a sample list of the kinds of information collected to answer this question:
* Demographic data (characteristics of participants and their physical location)
* What was taught and how long it took
* Whether or not the objectives were met
* Who did what to whom, and when

3) Outcome evaluation
Outcome evaluation determines whether or not the desired short-term results (e.g., what participants are doing) of applying new skills were achieved.
Outcome evaluation answers the question, "What happened to the knowledge, attitudes, and behaviors of the intended population?"
* Specific and observable changes in behavior that lead toward healthier or more productive lifestyles and away from problem-causing actions indicate a successful program.
* For example, a successful project is one that succeeds in causing a higher percentage of students to use condoms when…. This project would produce both "outcomes" and "impacts." Outcome evaluation is a long-term undertaking.
* Outcome evaluation answers the question, "What did the participants do?"
* Because outcomes refer to changes in behavior, outcome evaluation data are intended to measure what training participants were able to do at the end of training and what they actually did back on the job or in their community as a result of the training.

4) Impact evaluation
Impact evaluation determines how the results of the training affect the strategic goal, e.g., the health promotion goal of reducing the incidence and prevalence of HIV/AIDS. Impact evaluation takes even longer than outcome evaluation, and you may never know for sure that your project helped bring about the change. The focus is on changes that have occurred in key social indicators, which are used to gauge the levels of problem occurrence.
* Examples of "impacts" are a reduction in the incidence of HIV/AIDS and an increase in condom use among students.
* Impacts occur through an accumulation of "outcomes." Impact evaluation is meant to answer the question, "How did what was taught in the training affect the problem?" (Think back on the problem statements you developed.)
* Impact evaluation tries to measure whether or not training has affected the initial problem you identified.
In other words, an impact evaluation is meant to assess the extent to which what was learned is making a difference at the level of the community, targeted groups, or beneficiaries of the intervention. Though this type of evaluation usually takes a long time and costs a lot of money, it is the type that really focuses, for instance, on assessing whether or not there has been a reduction in the incidence and prevalence of specific problems in the community.
* The idea here is that the impact of training will hopefully be far-reaching and make a difference in people's lives.

Need for Evaluation
Since evaluation is an integral part of the whole process of training and development, its details have to be conceived much before the actual training activity, rather than being ritually tagged on at the end of training. The trainer should be fairly clear about:
• How to evaluate
• What to evaluate
• When to evaluate
Answers to these questions depend on the need for evaluation.

Why Should a Training Program Be Evaluated?
* To identify the program's strengths and weaknesses.
* To assess whether the content, organization, and administration of the program contribute to learning and the use of training content on the job.
* To identify which trainees benefited most or least from the program.
* To gather data to assist in marketing training programs.
* To determine the financial benefits and costs of the programs.
* To compare the costs and benefits of training versus non-training investments.
* To compare the costs and benefits of different training programs to choose the best program.

Principles of Evaluation
Schuman, E. A. describes evaluation as an integral part of an operating system meant to aid trainers/training managers to plan and adjust their training activities in an attempt to increase the probability of achieving the desired actions or goals.
In order to integrate training practices with business policy and objectives, evaluation has to be based on sound principles such as:
1. The trainer/evaluator must be clear about the purpose of evaluation to be able to set the standards and criteria of evaluation.
2. For an objective evaluation, the methodology and criteria of evaluation should be based on observable and, as far as possible, measurable standards of assessment which have been agreed upon by the evaluators and the users of the training system.
3. Evaluation has to be accepted as a process rather than an end product of training.
4. As a process, it has to be continuous.
A 'one-spot' assessment cannot guide trainers in improving subsequent programmes; therefore evaluation has to begin before the actual training activity and end much after the conclusion of the visible training activity.
5. The training objectives should be an outcome of overall organizational goals to permit tangible evaluation of training results.
6. Evaluation data should be directive rather than conclusive. It must be comprehensive enough to guide trainers in the collection of information that will enable them to comment on current training effectiveness and to improve subsequent training.
7. A good evaluation system is tailor-made and should provide specific data about the training's strengths and weaknesses. Generalizations drawn from one training activity may be inapplicable for training across different levels and to meet different standards. Besides, evaluators should refrain from using single instances for conclusions and generalizations.
8. A good evaluative system should provide sufficient scope for self-appraisal by the trainer/evaluator.
9. The evaluative data should try to balance quantitative and qualitative information.
10.
The role of the evaluator needs to be based on a sound working relationship with the participants, trainers, senior line managers and policy makers. Normally a researcher or a fresher is attached to the trainer to carry out end-of-course evaluation. This evaluator may have the expertise to develop and design evaluative tools and techniques, but that alone is insufficient for promoting utilization of evaluation results. The evaluator's acceptance by the participants, and the interpersonal sensitivity and trust needed for frank sharing of feedback, are a must.
This would modify their role to one of giving and receiving feedback rather than just receiving feedback. They have to be proactive rather than argumentative.
11. Effective communication and coordination are essential. Training and evaluation plans should be discussed so that there is commonality of purpose amongst the trainers, the evaluators and those sponsoring the trainees.
12. The reporting system for evaluative data should be simple, clear, adequate and available for interpretation. It requires the evaluator to be sensitive to the feelings of the audience, and to be tactful and honest.
As far as possible, the terminology used should be concise and free from jargon.
13. Realistic targets must be set. A sense of urgency is no doubt desirable, but deadlines that are unrealistically tight will result in poor quality.
14. Finally, a trainer who is sincere about training and its evaluation would always insist on complete, objective and continuous feedback on the progress and deficiencies of training, to be able to maintain the momentum of the training programme, its evaluation and subsequent improvement.

Benefits of Evaluation
• Improved quality of training activities
• Improved ability of the trainers to relate inputs to outputs
• Better discrimination of training activities between those that are worthy of support and those that should be dropped
• Better integration of training offered and on-the job development
• Better co-operation between trainers and line-managers in the development of staff
• Evidence of the contribution that training and development are making to the organization.

Kirkpatrick's Four-Level Training Evaluation Model
The four levels of Kirkpatrick's evaluation model essentially measure:
1. Reaction of student – what they thought and felt about the training
2.
Learning – the resulting increase in knowledge and/or capability
3. Behavior – the extent of behavior and capability improvement and implementation/application
4. Results – the effects on the business or environment resulting from the trainee's performance

Level 1 Evaluation – Reactions
This level measures how participants in a training program react to the training. Every program should at least be evaluated at this level to answer questions regarding the learners' perceptions and to improve training. This level gains knowledge about whether the participants liked the training and whether it was relevant to their work.
Negative reactions reduce the possibility of learning. Evaluation tools:
• Program evaluation sheets
• Face-to-face interviews
• Participant comments throughout the training
• Ability of the course to maintain interest
• Amount and appropriateness of interactive exercises
• Ease of navigation in Web-based and computer-based training
• Participants' perceived value and transferability to the workplace
This type of evaluation is inexpensive and easy to administer using interaction with the participants, paper forms and online forms.

Level 2 Evaluation – Learning
Level 2 evaluations are conducted before training (pre-test) and after training (post-test) to assess the amount of learning that has occurred due to a training program. Level 2 evaluations assess the extent to which learners have advanced in knowledge, skills or attitude. Level 2 evaluation methods range from self-assessment to team assessment, and from informal to formal assessment. Evaluation tools:
• Individual pre- and post-training tests for comparisons
• Assessment of action-based learning, such as work-based projects and role-plays
• Observations and feedback by peers, managers and instructors.

Level 3 Evaluation – Behavior
Level 3 involves the extent to which learners implement or transfer what they learned. This level differentiates between knowing the principles and techniques and using them on the job. Potential methodologies include formal testing or informal observation. This level of evaluation takes place post-training, when the learners have returned to their jobs, and is used to determine whether the skills are being used and how well. It typically involves contact with the learner and someone closely involved with the learner, such as the learner's supervisor. Evaluation tools:
• Individual pre- and post-training tests or surveys
• Face-to-face interviews
• Observations and feedback from others
• Focus groups to gather information and share knowledge.

Level 4 Evaluation – Results
This evaluation measures the success of the training program in terms that executives and managers can understand, such as increased production, increased sales, decreased costs, improved quality, reduced frequency of accidents, higher profits or return on investment, positive changes in management style or in general behavior, an increase in engagement levels of direct reports, and favorable feedback from customers, peers and subordinates.
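Level 4 results are often summarized as a return-on-investment figure. The sketch below shows the standard ROI calculation; the dollar figures are hypothetical and used only for illustration, not taken from the text.

```python
def training_roi(total_benefits: float, total_costs: float) -> float:
    """Return on investment for a training program, as a percentage:
    ROI % = (benefits - costs) / costs * 100."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical figures: a program costing $25,000 that yields
# $40,000 in measured benefits (e.g., reduced error and accident costs).
roi = training_roi(40_000, 25_000)
print(f"ROI: {roi:.0f}%")  # prints "ROI: 60%"
```

A positive ROI gives executives the results-level evidence this section describes; a persistently negative one signals that the program's costs outweigh its measurable benefits.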
Methods of Evaluation of Training Programs
It is extremely important to assess the results of any training program. The participants must be made aware of the goals and objectives of the training program, and on completion of the program they should be asked about its impact. Evaluation of any program is a difficult task, and more so of a training program. The first step toward evaluation of a training program is to define its goals and objectives. These goals and objectives should be stated in such a format that they can be measured statistically.
Also, both the trainer and the trainees must be well acquainted with their roles in the training program. In the evaluation of any training program, the first requirement is to collect valid and reliable data. The required data can be collected by using the following techniques:
1. Self-assessment answer sheets.
2. Questions confronted by the trainees.
3. Assessing the collected information and observations.
4. Final results based on earlier information plus the new data.
Each method of data collection has its advantages and disadvantages, which need to be taken into consideration.
The merits and demerits of each method are as follows.

Merits of Self-Assessment:
1. The cost factor is quite low.
2. Data can be collected easily.
3. The time consumed by the trainer and trainee is negligible.
4. Outside interference is completely avoided.
5. Effective relationships develop between the trainees.
6. A well-designed answer sheet can produce healthy results.

Demerits of Self-Assessment:
1. Self-assessment is basically self-evaluation, which can be based on biased responses. The assessment must have enough reliability to draw the right conclusions with regard to individual assessment.
2. The responses given by the trainees can be based on misrepresentation or misinterpretation of the questions asked. Thus self-assessment questions should be short and easy to understand; in addition, no information should be sought that would embarrass the trainees.
3. The information provided by the trainees cannot be evaluated in terms of its correctness. Not all trainees prefer to give the required information, lest it be used against them at any point of time.
All these problems can be easily solved. Self-assessment is adhered to by virtually all training programs.
However, what is important is to make proper, effective use of this technique, as the trainees provide valuable information which the trainer can use to formulate the training strategy. The second requirement for evaluating a training program is concerned with evaluating the program when part of it has been completed. The time factor must be decided before the program is initiated, and the evaluation criteria must be determined before the training program begins. The first evaluation will give the trainers adequate information on whether the program is moving in the right direction.
At the same time, the trainees will be able to assess the value of the program in terms of their needs and its usefulness. It is extremely important to realize whether the trainees have understood the need for and importance of the training program. At this stage adequate data should be collected from the trainees to make a proper evaluation of the training program. To collect data, the interview and questionnaire methods can be most effective. Interviews can be conducted by seeking information face to face, by telephone, or by other strategies such as group discussions.
Each of these methods has its own merits and demerits.

Merits of Interviews:
1. Face-to-face interviews ensure some response; if any responses need to be clarified, the trainer can do so instantly. Similarly, if the trainees want any clarification, the same can be done immediately. This helps in ensuring correct information.
2. As far as telephone interviews are concerned, though there is a lack of personal touch, the trainee does not feel pressured by the interviewer to give answers that suit the trainer. The trainer can answer all those questions that are complex in nature.
These answers have far more validity, as the responses are given without any pressure.

Demerits of Interviews:
1. The interview is a lengthy and costly process, as it requires trained and skilled personnel to get results that are reliable.
2. Another important drawback is the possibility of the trainer being involved in the interview.
3. Data collected through interview methods may be out of date and hence difficult to interpret.

A primary survey was done using a detailed questionnaire as a tool. The survey helped in establishing an understanding of all four levels of evaluation – reaction, learning, changes and results. The survey used the entire population of participants who attended the training programs of the institution over the selected three years. The institution on average trained 3000 participants every year from across the country in its 100 training programs per year. The questionnaire had three main parts:
I. Personal details – to build the profile of the participants;
II. 'Effectiveness of Program' – studied with key questions on whether the objectives of rural development were met within the program.
The participants were asked to rate the program content and design on the basic inputs of knowledge, skills and attitudes.
III. 'Professional relevance of training' – evaluated with key questions asking how relevant the program content was for meeting local needs and whether there was enough practical application which could be used in work or for transferring the knowledge to functionaries further down the line. It also probed whether the learning could be shared with other colleagues in the organization and, lastly, whether the course had helped organizational performance.
Merits and Demerits of Questionnaires
Questionnaires in one form or another appear in all kinds of research and surveys. Hence it is extremely vital that the questionnaire is framed with utmost care, so that it measures the variables in exactly the way it has been designed to. Once the initial design has been properly framed, a pre-test must be conducted to find out whether the questions mean the same thing to the trainer and the trainee; if found inappropriate, the questionnaire should be redesigned and a pilot survey should be conducted.
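One standard way to check a questionnaire's reliability on pilot-survey data (the text does not name a specific statistic, so this is offered as one common option) is Cronbach's alpha, which measures the internal consistency of a set of rating items. A minimal sketch on made-up pilot ratings:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a table of questionnaire scores:
    rows = respondents, columns = items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(scores[0])                      # number of items
    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    totals = [sum(row) for row in scores]   # each respondent's total score
    return k / (k - 1) * (1 - sum(item_vars) / var(totals))

# Hypothetical pilot data: 5 respondents x 4 Likert items (rated 1-5).
pilot = [
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
]
print(f"alpha = {cronbach_alpha(pilot):.2f}")  # values near 1 suggest good internal consistency
```

If alpha on the pilot data is low (a common rule of thumb is below about 0.7), the questionnaire should be redesigned before the full survey, exactly as the redesign loop above describes.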
If found appropriate, the full survey should be conducted; if found inappropriate, the questionnaire should be redesigned again. The reliability and validity of the questionnaire should be properly evaluated before going in for the full survey. In regard to the collection of data, it may be observed: "As with any method of data collection, it is vital to plan how the data is to be collected. However, with this method, since it does not usually involve the design of some sort of formal survey instrument such as a questionnaire.
It is all too easy to leap straight in without a plan. This can lead to a considerable waste of time and, even worse, the wrong data being collected; so the message is: plan and design your desk research in the same way as you would any more formal survey."

Database:
In the first instance, the database of 9000 participants was cleaned for missing names and incomplete addresses. The questionnaire was then posted to all the participants together with a stamped self-addressed envelope.
Three reminders were also posted over a period of three months to the trainees who had not replied. Questionnaires were also sent to e-mail IDs wherever available. The replies received were tabulated in SPSS format and analyzed.

BARRIERS TO EFFECTIVE TRAINING EVALUATION
* Lewis and Thornhill (1994) state that evaluation results that do not reflect positive changes or positive results may be a function of an incorrect decision to conduct training. This decision may have been taken higher in the organization's hierarchy. Companies fail to do training evaluations correctly and thus do not obtain valid business or performance results (Sims, 1993).
* According to Mann (1996), the question of what to evaluate is crucial to the evaluation strategy. The failure of training programme evaluations can be attributed to inadequate planning or design, lack of objectivity, evaluation errors of one sort or another, improper interpretation and inappropriate use of results, lack of sponsorship, and lack of budget (Abernathy, 1999; Goldstein, 1993; Sims, 1993).
ISSUES OR DILEMMAS IN EVALUATING TRAINING PROGRAMS
A. Perceptions and attitudes of learners about evaluation. For example, trainees seem to respond best to evaluation when: the instrument or technique is clear, sensible, agreed on (or expected), well planned, and integrated in the training design; and they understand the purpose of evaluation and see it as part of the training process.
B. Is learning measurable, observable? Can we measure or "objectify" the important learnings?
C. Is training cost-effective? For example: does it increase productivity, reduce absenteeism, lower turnover?
D. Confidentiality and other uses of evaluation. Ethical uses?
E. Who can really measure adult learning but the learner?
F. Systems-level evaluation of programs:
• The pilot phase
• The model phase
• The institutionalization phase

FOLLOW-UP: A COMPONENT OF EVALUATION
A. Evaluation of training on the job
• Behavioral change
• Results of application
B. Help in practical applications
• External services such as coaching consultancy
• Help by superiors and colleagues
C. Further personal development
• On-the-job
• Further training courses
D.
Liaison with former participants
• Personal contacts
• Associations
• Information and conferences
• Alumni peer mentoring

Assessing the Costs and Benefits of Training
To conduct a thorough evaluation of a training program, it is important to assess the costs and benefits associated with the program. This is difficult to do, but may be important for showing top management the value of training for the organization. For example, in one case, the net return of a training program for bank supervisors was calculated to be $148,400 over a 5-year period.
Generally, a utility model would be used to estimate the value of training (benefits minus costs). Some of the costs that should be measured for the training program include needs assessment costs, salaries of training designers, purchase of equipment (computers, video, handouts), program development costs, evaluation costs, trainers' costs (e.g., salaries, travel, lodging, meals), facilities rental, trainee wages during training, and other trainee costs (e.g., travel, lodging, meals).
It is important to compare the benefits of the training program with its costs. One benefit that should be estimated is the dollar payback associated with the improvement in trainees' performance after receiving training. Since the results of the experimental design will indicate any differences in behavior between those trained and those untrained, the HR professional can estimate, for that particular group of employees (e.g., managers, engineers), what this difference is worth in terms of the salaries of those employees.
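The dollar payback described here is often estimated with a utility-style calculation: the trained-vs-untrained performance difference, expressed in standard-deviation units, is multiplied by the dollar value of one standard deviation of performance (commonly approximated as a fraction of annual salary), by the number of people trained, and by the number of years the effect is expected to last; costs are then subtracted. The function below is a minimal sketch of that logic; all the figures in the example are hypothetical, not taken from the bank-supervisor case in the text.

```python
def training_payback(n_trained, effect_size_sd, sd_value_per_year,
                     years_sustained, cost_per_trainee):
    """Net dollar payback of a training program (utility-style estimate).

    effect_size_sd    : trained-vs-untrained performance gap, in SD units
    sd_value_per_year : dollar value of 1 SD of performance per employee-year
                        (a common rule of thumb is ~40% of annual salary)
    """
    benefits = n_trained * effect_size_sd * sd_value_per_year * years_sustained
    costs = n_trained * cost_per_trainee
    return benefits - costs

# Hypothetical: 50 supervisors, a 0.5 SD performance improvement, 1 SD of
# performance worth $16,000/year (40% of a $40,000 salary), the effect
# sustained for 2 years, and a cost of $3,000 per trainee.
net = training_payback(50, 0.5, 16_000, 2, 3_000)
print(f"Net payback: ${net:,.0f}")  # prints "Net payback: $650,000"
```

Because the duration term multiplies the benefits directly, programs whose effects persist longer show proportionally greater value, which is the point the next paragraph makes.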
Another factor that should be considered when estimating the benefits of training is the duration of the training's impact, that is, the length of time during which the improved performance will be maintained. While probably no programme will show benefits forever, those that produce longer-term performance improvements will have greater value to the organization.

Conclusion
The evaluation of any training program has certain aims to fulfil. These are concerned with the determination of change in organizational behavior and the change needed in the organizational structure.
Hence the evaluation of any training program must tell us whether the program has been able to deliver its goals and objectives in terms of the costs incurred and benefits achieved. The analysis of the information is the concluding part of any evaluation program. The analysis of the data should be summarized and then compared with the data of other training programs of a similar nature. On the basis of these comparisons, problems and strengths should be identified, which will help the trainer in future training programs.