EVALUATION

Year: 2000 | Volume: 13 | Issue: 1 | Page: 45-52
Examining and Recording Clinical Performance: A Critique and Some Recommendations
Ken Cox
School of Medical Education, University of New South Wales, Australia
Correspondence Address:
Ken Cox, Emeritus Professor of Surgery, Former Head, School of Medical Education, University of New South Wales, Sydney 2052, Australia
Source of Support: None, Conflict of Interest: None

Clinical performance is too complex and interactive to be measured; its assessment always requires judgment. Experienced clinicians judge trainee performance on many small details, and that clinical judgment turns on how the trainee handles the critical particulars of the patient and the malady.
But the recording of performance retreats to categories and checklists that capture nothing of those critical details or of the trainee's judgment. Checklists cannot identify what actually happened, and 'could do' categories have no predictive accuracy in establishing which cases a trainee can actually manage. Clinical examinations have even been subverted by the naive, pseudorational error that competence is defined by obedience to doing exactly what someone else expects you to do in every case, as in an OSCE examination.
Cases are the unit of clinical practice. The clinical curriculum should comprise the critical core cases the trainee must be able to handle in each discipline. Case management, procedural skills and professional behavior can be assessed accurately only in the context of daily clinical work. Formal examinations lack the range of cases and the open-ended time that allow examiners to explore a trainee's case knowledge and judgment. Habitual behavior can be assessed only by observing habitual behavior in everyday practice. Assessment and recording should take place only in real-world settings, focused on performance on the core cases trainees must be competent to manage.