prepared CBME Blog

Why Counting Procedures No Longer Works – and What Modern Specialist Training Needs Instead

Written by Dr. Lukas Kandler | 05.11.2025

The Illusion of Safety in Medical Training

Most medical professionals are familiar with the checklists used during specialty training: proof of completed intubations, central venous catheters (CVCs), or transesophageal echocardiography (TEE) examinations. These logs create an illusion of safety, suggesting that merely counting procedures ensures competence and transparency in education.

In practice, this routine approach rarely guarantees that trainees can actually perform these tasks independently and competently.

The Flaw in Counting Procedures

Here lies the critical weakness: when documentation becomes a mere box-ticking exercise, the system loses its meaning.
The European Society of Cardiology (ESC) demonstrated this in an analysis showing that more interventions were documented annually than there were patients treated.

These “fake numbers” are not unique to cardiology. They occur elsewhere too, for instance when several people are listed on an operative report even though they were not genuinely involved in the procedure.

The result: counting cases or years of training does not reflect real expertise; instead, it encourages inaccurate or even fabricated documentation.

A Paradigm Shift: Competency-Based Training with EPAs

Today, quality counts more than quantity. Competency-Based Medical Education (CBME) focuses on workplace-based assessments (WBAs) and Entrustable Professional Activities (EPAs).

An EPA describes a discrete unit of professional work that a trainee may perform independently only once the required competence has been demonstrated and documented as a defined level of supervision.

Competence emerges from the interplay of knowledge, skills, and attitudes.
Therefore, continuous, workplace-based evaluations are replacing the one-off “high-stakes exam” at the end of training.
Over time, this approach builds a meaningful and trustworthy competence profile.
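To make the difference between counting and entrusting concrete, here is a minimal sketch of how a single workplace-based EPA assessment could be recorded. This is a hypothetical data model for illustration only, not the preparedEPA implementation; the field names and the five-level supervision scale are assumptions based on commonly used entrustment scales.

```python
from dataclasses import dataclass
from datetime import date
from enum import IntEnum


class SupervisionLevel(IntEnum):
    """Illustrative five-level entrustment-supervision scale."""
    OBSERVE_ONLY = 1          # trainee observes, does not yet perform
    DIRECT_SUPERVISION = 2    # supervisor present in the room
    INDIRECT_SUPERVISION = 3  # supervisor quickly available on site
    UNSUPERVISED = 4          # trainee performs independently
    SUPERVISES_OTHERS = 5     # trainee may supervise junior colleagues


@dataclass
class EpaAssessment:
    """One workplace-based assessment (WBA) of a single EPA."""
    epa: str                        # e.g. "Insert a central venous catheter"
    assessment_date: date           # when the observation took place
    supervision_level: SupervisionLevel
    complex_case: bool              # simple vs. complex, judged by trainee and supervisor
    assessor: str                   # who observed and entrusted
```

The point of a record like this is that it captures how much support was needed in a real clinical situation, and under what case complexity, rather than merely noting that a procedure took place.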

How preparedEPA Increases Objectivity

preparedEPA replaces rigid checklists with authentic, workplace-based assessments.
The focus is on assessing the level of supervision required for each EPA — not whether something was “right” or “wrong,” but how much support the trainee still needs.

Two features make this approach particularly objective and context-sensitive:

  1. Distinguishing simple from complex cases:
    The app records whether a given task (e.g., inserting an IV line) was simple or complex — as judged by both trainee and supervisor.

  2. Direct observation on site:
    Supervision takes place where the work happens.
    Evaluations conducted weeks later are neither accurate nor meaningful.
    Recent master’s theses from the U.S. and Canada revealed an average delay of 31 days between a clinical event and its assessment — an unacceptable gap.

Instead of grading “good” or “bad,” the scale measures how much supervision is required — enabling a fairer and more nuanced evaluation of trainees.
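Building on the hypothetical EpaAssessment record sketched above, a competence profile can then be read directly from the data: not a tally of procedures, but the trend of required supervision over time. The following sketch is again illustrative, not a description of how preparedEPA aggregates its data.

```python
from statistics import mean


def supervision_trend(assessments: list[EpaAssessment], epa: str, window: int = 5) -> float | None:
    """Compare the average supervision level of the earliest and most recent
    observations for one EPA. A positive value means less support is needed now."""
    relevant = sorted(
        (a for a in assessments if a.epa == epa),
        key=lambda a: a.assessment_date,
    )
    if len(relevant) < 2 * window:
        return None  # too few observations for a meaningful comparison
    early = mean(int(a.supervision_level) for a in relevant[:window])
    recent = mean(int(a.supervision_level) for a in relevant[-window:])
    return recent - early
```

A raw procedure count would treat every one of these entries identically; a trend like this instead shows whether entrusted independence is actually growing, and it can be computed separately for simple and complex cases.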

Conclusion: Making Competence Visible Instead of Checking Boxes

Those who move away from counting procedures and toward continuous, competency-based EPA assessment gain a holistic and credible picture of professional growth. Countries such as the Netherlands have already replaced traditional board exams with structured EPA-based curricula.

The path toward modern specialty training is not to wait for the perfect checklist, but to start now: with ongoing assessment, meaningful observation, and a strong culture of feedback.