Author


Scientist & Associate Director - The Wilson Centre; Assistant Professor - Department of Medicine, University of Toronto

Dr. Ryan Brydges is an education scientist whose research focuses on health professions education and healthcare simulation. Dr. Brydges conducts research in three related domains: (i) clarifying how healthcare trainees and professionals manage (through self-regulation) their life-long learning, (ii) understanding how to optimize the instructional design of healthcare simulation (and other technology-enhanced learning modalities) for the training and assessment of healthcare professionals, and (iii) identifying best practices in training and assessment for bedside invasive medical procedures. Dr. Brydges is the Director of Research and a Scientist at the Allan Waters Family Simulation Centre, St. Michael’s Hospital, and an Education Scientist in the Department of Medicine, University of Toronto, and the Wilson Centre, UHN.

Big Ideas: Creating defensible simulation-based assessments

Competency-based health care education means many things to many people. One thing everyone agrees on, however, is that how and how often educators assess trainees is changing. As the frequency of assessments increases, so too does the potential for appeals, or trainees challenging the outcome of an assessment. Can your competency assessment program withstand such challenges? What evidence do you have to support the decisions you make using your program’s assessment data? Are your assessments defensible?

Defensible assessment requires program leaders to collect, analyze and evaluate the evidence generated through their assessments. Validity frameworks – ways of organizing one’s thinking about how to evaluate the quality of an assessment program – exist to help structure and interpret the resulting evidence. Ongoing validation underpins defensible assessment.

Health care professionals hear buzzwords like logistics, efficiency, patient safety and stewardship regularly in their practice. The pressures behind such terms can limit opportunities to assess trainees as they work alongside patients. Fortunately, workplace-based assessment can be supplemented by high-quality simulation-based assessment.

I often define simulation as a process of mimicking patients and health care processes, using any means possible, for the purpose of training and assessment. Simulation scenarios perceived as realistic are often lauded as ‘valid.’ Realistic simulation, therefore, becomes a surrogate for defensible assessment. Without evidence confirming that assumption, ‘realistic simulation’ can lead to speculative simulation-based assessment.

I’ll say it again (repetition breeds memory, after all): validity is not a property of a simulator or an assessment tool. Instead, validity is a property of the proposed interpretations and uses of the assessment scores. Gone are the days when one could simply claim to use “a validated tool”!

Here’s an example of a common proposed interpretation and use of simulation-based assessment (SBA): SBA scores can predict performance in real clinical settings, and thus can be interpreted as surrogate measures of clinical performance. A big claim to investigate! The validity framework developed by education scholar Michael Kane calls on educators to examine such claims, and then to collect evidence systematically to evaluate whether that evidence supports or refutes them.

Kane describes validity as a matter of degree: a fluid concept that may change over time as interpretations and uses develop and as new evidence accumulates. Kane’s framework pares validation down into four key inferences: scoring, generalization, extrapolation and implications. Rather than define each inference here, I’ll point you to a great article that does that and much more, an open access article from my colleagues David Cook and Rose Hatala: “Validation of educational assessments: a primer for simulation and beyond”.
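
To make those inferences concrete, here is a minimal sketch of how a program might begin examining two of them. It is my own illustration, not something from Kane or the article above, and every name and number in it is hypothetical: generalization is probed by asking whether two raters agree, and extrapolation by asking whether SBA scores track workplace-based ratings.

```python
# A minimal sketch, using invented data, of examining evidence for two of Kane's
# inferences: generalization (rater agreement) and extrapolation (link to workplace).
import numpy as np

# Hypothetical SBA scores for eight trainees, rated independently by two assessors
rater_a = np.array([72, 65, 88, 59, 91, 70, 77, 83])
rater_b = np.array([70, 68, 85, 62, 94, 66, 80, 81])

# Hypothetical workplace-based assessment ratings for the same eight trainees
workplace = np.array([68, 60, 90, 55, 89, 72, 75, 85])

# Generalization: do the two raters' scores agree? (A simple Pearson correlation
# here; a generalizability study or intraclass correlation would be stronger.)
r_raters = np.corrcoef(rater_a, rater_b)[0, 1]

# Extrapolation: do simulation scores track performance with real patients?
sba_mean = (rater_a + rater_b) / 2
r_extrapolation = np.corrcoef(sba_mean, workplace)[0, 1]

print(f"Inter-rater correlation (generalization evidence): {r_raters:.2f}")
print(f"SBA vs. workplace correlation (extrapolation evidence): {r_extrapolation:.2f}")
```

The analysis itself is deliberately simple; the point is that each inference names evidence you can actually collect, inspect and argue from.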

In that article, you’ll find a list of common mistakes associated with validity and validation. Here’s my Cole’s Notes version:

  • No matter what you do, use a validity framework to structure your approach. I like Kane’s, but there are many other options out there.
  • In any assessment program, clearly define your construct (what you think you’re measuring – like professionalism) and list out the claims and assumptions you’re making about how you will interpret and use the assessment scores.
  • Resist the desire to make your own tool. Instead, work with library services to review the literature and appraise the evidence for your assessment method.
  • During validation, revisit the claims you listed above and focus on the most important and most needed validity evidence. Resist the desire to collect evidence simply because it’s easy to collect. What quality process was ever easy?
  • After collecting your evidence, synthesize and critique the data, and compare it honestly and rigorously against your original claims (see the sketch after this list).
  • Disseminate evidence to your colleagues! We repeat mistakes because we don’t talk to each other enough, or because we think we only need to share our successes and hide our failures. Share your evidence!
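
To keep that synthesis honest, it helps to record each claim, the evidence gathered, and a frank judgment in one place. Below is a minimal sketch, my own illustration with invented entries rather than anything from the article, of structuring that record around Kane’s four inferences.

```python
# A minimal sketch, using invented entries, of recording claims and evidence
# against each of Kane's four inferences so the synthesis step stays honest.
from dataclasses import dataclass

@dataclass
class ValidityEvidence:
    inference: str       # scoring, generalization, extrapolation, or implications
    claim: str           # what the program assumes at this step
    evidence: str        # what was actually found
    supports_claim: bool

# Hypothetical entries for an imagined central-line insertion SBA program
evidence_log = [
    ValidityEvidence("scoring", "The checklist captures the key procedural steps",
                     "Expert review flagged two missing sterility items", False),
    ValidityEvidence("generalization", "Scores are consistent across raters",
                     "Inter-rater correlation of 0.85 across 40 trainees", True),
    ValidityEvidence("extrapolation", "SBA scores reflect bedside performance",
                     "No workplace comparison collected yet", False),
]

# Synthesis: contrast the evidence, inference by inference, with the original claims
for entry in evidence_log:
    status = "supported" if entry.supports_claim else "NOT yet supported"
    print(f"[{entry.inference}] {entry.claim} -> {status}: {entry.evidence}")
```

Even a simple record like this makes gaps obvious: in the hypothetical example, the extrapolation inference has no evidence yet, which is exactly the kind of finding worth sharing with colleagues.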

Educational leaders live in a world where the quality of assessment data matters. Data quality relates directly to data defensibility. Keep your program strong with ongoing validation. Validation is a growing area of educational scholarship in need of leaders. If you’re responsible for assessment in your program, what are you waiting for?


This blog post appears as part of the Big Ideas Lecture Series for the Research Institute of Health Care Education. Big Ideas explores simulation, artificial intelligence, telesimulation and e-health innovation, among others. The series brings together experts to share insights into emerging issues and trends, to advance understanding of issues affecting health care education, research, practice and society, and to support the research and work produced at the Research Institute of Health Care Education.
