Whether we are talking about professional training or higher education, evaluation is key. It allows us to measure the skills a learner has acquired and to steer the pedagogy accordingly. It is the determining factor of an accomplished, effective training.
While professional training places real emphasis on evaluation and its analysis, in higher education a number of factors cause assessing and grading to become disconnected from learning. The learner seeks only one thing: the best possible score, without thinking of improving their skills in a subject that interests them. Assessment then serves the employer's recruitment strategy rather than the learner's growth.
Yet with very low engagement rates in professional training, both in-house and in training centers, there seems to be a problem running from pedagogy through to evaluation. It is therefore necessary to revisit the basics of a good evaluation strategy, which both accelerates and signposts learning. Designing a good evaluation strategy, however, is a real puzzle. We must ask the right questions, accept certain biases, judge the usefulness of "trendy" evaluation types, and know what we are doing.
Docimology: laying the foundations for an evaluation
To evaluate properly, it is important to know about docimology, the science of testing, developed in the 1920s. Its goal is to understand a number of factors that affect an assessment's reliability. Some are environmental, such as a learner's fatigue or the pressure exerted on them. Others are psychological, such as the contrast effect: when a learner fails a series of questions that a previous student answered well, the grader may rate that learner's paper more strictly by comparison.
It is important to consider these factors before asking the following questions:
- Which skills to evaluate?
- For whom?
- How (in what format)?
The question of skills to be evaluated
In higher education, many of the skills to be validated are defined by international accreditations and national standards. Their expertise on the subject is clear; however, it remains important for trainers and teachers to take ownership of the question of skills, since they are the learners' coaches.
Studying the skills to be assessed is essential in order to establish an evaluation strategy, to know whether the training is effective, to track the learner's progress, and to adapt the follow-up of the learning accordingly.
Then comes the question: "How do we evaluate a specific skill using a specific test format?"
In this respect, two methods must be distinguished: free questioning and cued questioning. The first asks an open question, letting the learner draw on memory and structure the response as freely as possible, following their own reasoning. The second offers clues, possible paths, so the learner can get memory and reasoning started.
The first method is clearly more informative, though also more time-consuming, if we want specifically:
- to know what the learner knows
- to know what the learner believes they know
In any case, making questions as clear as possible is essential if the learner is to deliver their knowledge. Add pressure, competition, or a trick question, and the results will be distorted or needlessly undermined. Oddly enough, that is precisely what tends to happen both in professional training and in higher education.
Which evaluation format should I choose?
And this is where it becomes a real mess: multiple-choice questions, essays, single-answer quizzes, presentations, simulations... in short, we no longer know what to choose for which skills and in which training. So let's put some order into all this. First, we need to distinguish five types of evaluation.
- Summative or certificative assessment observes the acquisition of skills. We go straight to the point, asking questions that call for specific answers.
- Formative assessment diagnoses a learner's knowledge. It is an evaluation with an educational purpose, allowing the learner to better understand their progress and the trainer to better help them.
- In the same vein, continuous assessment runs throughout the year and pursues the same objective as formative assessment.
- Authentic assessment observes the acquisition of a learner's skills in the most realistic setting possible.
- Competency assessment, or positioning tests: these are also diagnostic tests, used to report on the progress made between a point A and a point B. They are also a way for the trainer to receive feedback on the quality of the training.
Now that we have all this, what should we choose? Today, alternative evaluations (simulations, role play, MCQs, presentations, etc.) are on the rise, and with good reason: their effectiveness has been proven. This does not mean that more traditional formats should be left behind. The point is to match the evaluation format to the skills whose acquisition we want to observe.
Moreover, to stay as pedagogical as possible and at the service of the learner, practice is essential: long-term memory is indeed stimulated and strengthened by practice, especially across a plurality of formats. Varying formats allows a learner to approach a problem or a piece of knowledge in all its facets.
You are ready to establish your evaluation strategy!
To give you more leads, check out Didask's blog!