Whether we are talking about professional training or higher education, evaluation is key: it allows us to measure a learner’s acquisition of skills and to steer the pedagogy accordingly. It is the determining factor of an accomplished and effective training.
While professional training places real emphasis on evaluation and its analysis, in higher education a number of factors cause assessing and grading to become disconnected from learning. The learner seeks only one thing: the best possible score, without thinking of improving their skills in a subject that interests them. The result is assessing for the sake of an employer’s recruitment strategy rather than for the learner’s growth.
Yet with very low engagement rates in professional training, both in-house and in training centers, there seems to be a problem running from pedagogy through to evaluation. It is therefore necessary to revisit the basics of a good evaluation strategy, which is both the accelerator and the marker of learning. Designing a good evaluation strategy, however, is a real puzzle: we must ask the right questions, accept certain biases, judge the actual usefulness of “trendy” evaluation formats, and know what we are doing.
To evaluate properly, it is important to know about docimology, the science of testing, developed in the 1920s. Docimology seeks to understand a number of factors that affect an assessment’s reliability. There are environmental factors, such as a learner’s fatigue or the pressure exerted on them. There are also psychological factors, such as the contrast effect: a learner who does not succeed on a series of questions as well as the previous student did may see their paper graded more strictly by contrast.
It is important to consider these factors before tackling the questions that follow: which skills should be assessed, and how?
In higher education, many of the skills to be validated are set by international accreditations and national standards. Their expertise on the subject is clear; however, it remains important for trainers and teachers to take ownership of the question of skills, since they are the learners’ coaches.
Studying the skills to be assessed is essential: it underpins the evaluation strategy, tells us whether the training is effective, marks the learner’s progress, and lets us adapt the follow-up of the learning accordingly.
Then comes the question: “How do we evaluate a specific skill with a specific test format?”
In this respect, two methods must be distinguished: free recall and cued questioning. The first is to ask an open question that lets learners draw on their memory and structure their response as freely as possible, according to their own reasoning. The second is to offer clues, possible paths, that help the learner initiate memory and reasoning.
Obviously, the first method seems the more interesting option, though also the longer one, if we want to observe the learner’s own recall and reasoning more precisely.
And that’s when it becomes a hell of a mess. Multiple-choice questions, essays, single-answer quizzes, presentations, simulations… in short, we do not know what to choose for which skills and in which training. So let’s put some order into all this. First, we need to distinguish five different types of evaluation.
Now that we have all this, what should we choose? Today, alternative evaluations (simulations, role plays, MCQs, presentations, etc.) are on the rise, and with good reason: their effectiveness has been demonstrated. That does not mean the more traditional formats should be left behind. What matters is matching the evaluation format to the skills whose acquisition we want to observe.
Moreover, to remain as pedagogical as possible and at the service of the learner, practice is essential: long-term memory is stimulated and consolidated by exercise, and especially by exercise across a plurality of formats. Varying formats allows a learner to approach a problem or a piece of knowledge in all its facets.
You are ready to establish your evaluation strategy!
To give you more leads, check out Didask’s blog!