The rapid evolution of new technologies and changing workforce demographics creates constant pressure to attract and retain people with the right skills to achieve business objectives. There’s more to mastering skills than quality content and instruction—there needs to be a solid plan for assessing those skills along the way.
Before the journey can begin, learners need a clear understanding of where they stand. With an estimated 100 million workers potentially needing to switch occupations by 2030, that understanding is more crucial than ever.
The training industry uses many assessment types to measure how effective training is and how learners apply newly gained proficiencies to advance their careers. Two approaches particularly suited to this are norm-referenced (normative) testing and criterion-referenced testing. While both are widely used, let’s explore why one is better suited than the other for evaluating a learner’s current comprehension of a topic.
The norm-referenced model compares a learner’s test results with those of other individuals who completed the same testing instrument. The challenge is that the resulting average “score” may not match the level of proficiency an organization actually needs to be successful.
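To see why this matters, here is a minimal sketch of norm-referenced scoring. The function, cohorts, and numbers are all hypothetical illustrations, not any vendor's actual scoring method; the point is only that the same raw score earns a very different percentile depending on who else is in the comparison pool.

```python
def percentile_rank(score, cohort_scores):
    """Percent of the cohort scoring at or below `score`
    (a common norm-referenced metric)."""
    at_or_below = sum(1 for s in cohort_scores if s <= score)
    return 100.0 * at_or_below / len(cohort_scores)

learner_score = 72  # the same performance in both cases

# Hypothetical comparison pools
strong_cohort = [88, 91, 85, 79, 94, 90]  # e.g., a pool of experts
weak_cohort = [55, 48, 62, 70, 51, 44]    # e.g., novices on the same test

print(percentile_rank(learner_score, strong_cohort))  # 0.0 — bottom of the pool
print(percentile_rank(learner_score, weak_cohort))    # 100.0 — top of the pool
```

Identical performance, opposite verdicts: nothing about the learner changed, only the pool their results were mixed into.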
Given the stakes of such grading, is it useful or fair to compare learners to others with completely different backgrounds, experiences, capabilities, knowledge, and skills? Imagine your performance is judged against a cohort dominated by MIT graduates. Or suppose an offer to win a free Xbox circulates the MIT dorms, and your scores end up mixed in with those of future tech titans. Now imagine the same ad makes its way through a junior high school: that pool’s results would skew far lower than those generated in a corporate scenario. Either way, the comparison produces a distorted measurement.
Granted, these are two extremes, but the point is that you never know the data pool your results are mixed into. Would you want your career judged on this sort of measurement?
In a criterion-referenced scenario, if you submit the correct answer or complete a section correctly, you get a positive score. Accumulate enough of these positive scores and you pass, with no comparison to “unknown” sources. Instead of measuring your score against everyone else who took the assessment, it is measured against a predetermined standard, a cut score set by subject matter experts.
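The criterion-referenced check can be sketched just as simply. Again, the cut score and numbers here are assumptions for illustration only; a real standard would be set by subject matter experts for the specific role.

```python
CUT_SCORE = 80  # assumed proficiency standard set by subject matter experts

def criterion_result(correct, total, cut_score=CUT_SCORE):
    """Return (percent_score, passed) judged against a fixed standard.
    Note: no cohort appears anywhere in this calculation."""
    pct = 100.0 * correct / total
    return pct, pct >= cut_score

score, passed = criterion_result(correct=42, total=50)
print(score, passed)  # 84.0 True — other test-takers are irrelevant
```

Whether one learner takes the test or ten thousand do, the verdict for 42 correct out of 50 is the same.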
By doing this, you can assess how you actually measure up to what is needed, rather than how you measure up against Ivy League students or the person next door. Which would you want for your career? For corporate learners, the only accurate measure of proficiency is to demonstrate both the knowledge obtained and the ability to put that knowledge into practice with a lab, case study, or project that measures true applicability.
Criterion-Referenced vs. Norm-Referenced Testing in the Workplace
Let’s discuss both assessment types together. An individual may rank in the top tier on a norm-referenced measure such as an IQ score, yet fail a criterion-referenced test on the actual job materials. As a learner or an employer, would you prefer someone who ranks high on a scale but struggles to execute the job they were hired for, or someone who can do the job? With norm-referenced testing, learners can perform poorly and still pass, as long as they outperform enough of the cohort.
Our educational institutions typically use norm-referenced testing so that results can be shaped around a class, providing more than a pass/fail outcome to students who need grades for graduation requirements. But in professional scenarios it makes sense to evaluate the knowledge and skills of your employees, your most valuable corporate assets, using criterion-referenced testing.
When you are navigating all the options in corporate learning and training, make sure you know what goes into a quotient, an IQ-style score, or any other measurement. Likewise, ensure you know whether your learners can pass criterion-referenced tests at a proficient level. At a time when research shows 76 percent of IT decision-makers experience critical skills gaps on their teams (a 145 percent increase since 2016), you need to know where your employees stand.
Responsible training providers offer assessments and testing that look at the objectives for each course, then test to ensure those objectives have been met and the knowledge has been acquired and retained.
Mike Hendrickson is Vice President of Technology and Developer products at Skillsoft. Prior to Skillsoft, Mike spent 15 years at O’Reilly Media, Inc., where he was the VP of content strategy.