Key takeaways from ICBME Webinar series

Last updated: November 9, 2020
Author: Sylvia Mioduszewska

We’re all in this together

Although the two CBME (competency-based medical education) webinars in this fall’s International Competency-Based Medical Education (ICBME) webinar series each had a different context and a panel with diverse representation, a common thread ran through both.

What bubbled up for me was the similarity of experience across the board. It was encouraging to see so many different organizations sharing a common experience, and it was apparent in both sessions: the first, on October 20, “10 Years of CBME,” and the second, on November 2, “Exploring Summative Entrustment: Learning from School Experiences in the AAMC Core EPA Pilot.” The one pain point that panelists from Canada, the US, and Europe all share is that implementing CBME is a challenge for every MedEd institution.

While everyone agrees that CBME is a better assessment method, and that dividing competencies into their component parts yields better, more quantitative assessment data, there is also agreement that we are not “there” yet. Recognizing the value of the data we can collect with CBME is not the same as being able to collect it. Implementing “learner-first” data collection and assessment models takes time. It was apparent to me that mere acceptance of CBME as a better way is only half the challenge. It will take stewardship, patience, and, of course, better technology tools to tap into the real power of CBME.

Creating information flow to help solve this challenge

When we talk about CBME, there are really two facets to it. Competence has been assessed for over 40 years; that’s not new. What is new is workplace-based assessment of competencies: assessing competence in a different way, aiming to align teaching with learning and assessment with progression.

CBME tries to ascertain, in a much more data-driven way, whether learners are ready to practice unsupervised. This is tough for UME, where there is little that learners may do without direct supervision. However, even with undergrads, there are activities that day-one interns and residents are expected to perform without direct supervision. These EPA data points could contribute to an entrustment decision, creating a flow of information from undergrad to postgrad, and from postgrad to the graduation of learners who you know can perform the tasks of their discipline not just competently, but expertly.

If CBME were a soufflé 

To better explain the challenge of collecting the right kind of data to harness the power of CBME, let’s use the example of trainee chefs making a soufflé. It’s easy to assess their competence by tasting the soufflé – the outcome – to see if it’s delicious. However, every chef has their own twist on a recipe, and a slightly different interpretation of delicious.

In CBME, we can break down the process into all of its component parts: oven temperature, egg separation, beating the whites to the correct consistency, and so on. Assessment of each of these components can be tracked and recorded, indicating progression on each component skill. That way, if a soufflé doesn’t quite turn out, it’s easier to figure out which specific tasks need to be worked on, and monitor the progression of fixing them. 

The challenge is in capturing the assessment in the room where it happens. To use our soufflé example, evaluators have to be there to watch, assess, and keep a record of the parts of the process that led to the overall formative assessment. 

Over time, with enough observations by qualified evaluators, you would have a picture of a trainee chef’s attempts, and be able to make a decision about whether they would be able to perform this task independently. Gradually, you have a picture of a trainee who is able to take on more responsibility, more complex tasks, and perform them independently. 

An action plan for MedEd institutions that want to expand CBME

One thing we’ve learned through our work with medical schools implementing CBME is that change requires overcoming the barrier of a perceived additional workload for evaluators. How can we begin to transition evaluators to the new model and introduce more data points into the assessments? 

We’ve seen that small pilots are a great way to start. In my experience, you don’t want to jump in with both feet. Select a couple of key EPAs that are easy to assess and easy to roll into current practice. For example, taking a history and performing a physical is something that happens frequently on most services. Make the evaluation tools short, but ensure they provide sufficient contextual information, such as case complexity or setting, as well as key sub-tasks to prime evaluators. The goal is to get started, not to get bogged down in the details.

Then you want to figure out a way to measure entrustment. There are a lot of different scales you can use, so you want to engage with your education consultants to see which scale would provide the information you need. Think about what you’re actually trying to measure, and then communicate that to your evaluators. 

If you are interested in exploring this topic further, here is a post on understanding the impact of assessing EPAs.


Our team at One45 has worked hand in hand with many LCME schools, planning and providing the tools to enable the transition to CBME. If you are interested in discussing the steps for successfully implementing competency-based evaluations at your school, we are happy to connect.

About Sylvia Mioduszewska

Sylvia is a Product Manager at One45. Prior to joining One45, she was the CBME technology lead for a Canadian medical school, and had previously managed evaluation and assessment for a large medical residency program. Sylvia loves to dig into customer problems to build products that make a difference for medical school users.