CBME focuses on training competencies rather than "numbers of procedures"
Competence can be assessed with programmatic assessment
Sampling competence leads to "assessment FOR learning" rather than "assessment OF learning"
If you or your loved ones need a doctor, you want to be sure that this person is a
competent professional. But how do we know?
To stick with this example: pretty much all physicians go through a similar curriculum, a
certain number of years of undergraduate training followed by a certain number of years
of postgraduate training to become a specialist. Along the way, we need to take some
courses to compile certificates and pass a few exams to be allowed to proceed to the next
training unit. What counts is completing a fixed amount of time in training and fixed numbers of procedures.
But learners, as human beings, are all individually different. Each of us learns different
things at a different speed: some will need five repetitions to get it, while others still struggle after many more.
This insight is at the core of Competency-Based Medical Education, or CBME.
The goal of CBME is to focus on outcomes: making sure that learners are competent professionals at the end of their training. With competence as the constant, “time in training” and “numbers of procedures” become the variables.
Now, the question is: How do we measure competence?
To be able to measure something, we need to define what we are measuring. That is why
several frameworks for CBME have been developed over the last 20 years, among them the
CanMEDS framework. These frameworks describe the outcome in detail: what a competent
physician should look like. The CanMEDS model, for example, defines seven characteristic roles a competent physician should internalize and lists within each role descriptors of the required
traits or competencies. Using these descriptors, we can check whether the learner in front of us fulfills the requirements or not yet.
Training, especially in the medical field, is mainly on-the-job, so it is probably best to assess
competence at the point of care and not (only) with written tests.

The bad news: assessment of a learner’s performance in the workplace can never be objective. Workplace-based assessment will always be context-specific and dependent on many factors, such as the learner, the supervisor, the relationship between trainee and supervisor, and the task or task complexity.

The good news: a well-designed sampling strategy will do the trick. The only way to get a clear picture of a trainee’s competence is to capture many low-stakes workplace-based assessments. Within CBME, this concept is called “programmatic assessment”. Programmatic assessment means that we need a system in place for assessment, so that we cover all the important aspects of a learner. A high-stakes progress decision, like passing medical school, can only be made based on several low-stakes assessments. Ideally, not a single person but a “competency committee” compiles and analyzes all available data for the decision-making process.
One important aspect here is to have a system in place that ensures continuous sampling of workplace-based assessments and other assessment data or evidence of competence (course certificates, written in-training exams, multi-source feedback, etc.). “Sampling, sampling, sampling.”
With programmatic assessment also comes a culture change from “assessment OF learning” towards “assessment FOR learning”: every assessment situation should be optimized for learning.
CBME meets the expectations the community has of healthcare professionals.
When I need a doctor who has completed a true competency-based curriculum
including programmatic assessment (no matter how long it took), I can be confident that this
person is competent to help me with my health problem.