
Year : 2009  |  Volume : 22  |  Issue : 2  |  Page : 308

Developing a Tool for Measuring Quality of Medical Education

1 Makerere University, Kampala, Uganda
2 Maastricht University, The Netherlands

Date of Submission: 27-Jan-2009
Date of Acceptance: 22-Jul-2009
Date of Web Publication: 01-Aug-2009

Correspondence Address:
M Galukande
# 20190, Kampala, Uganda

Source of Support: None, Conflict of Interest: None

PMID: 20029753


How to cite this article:
Galukande M, van Berkel H, Wolfhagen I. Developing a Tool for Measuring Quality of Medical Education. Educ Health 2009;22:308


Dear Editor,

Over the past decades, many institutions have put mechanisms in place to improve the quality of education and to better account for the use of resources. Whereas there is evidence for such mechanisms - including quality assessment - in institutions of higher learning in many developed countries, this is not necessarily the case in the developing world, East Africa being an example. A World Federation for Medical Education (WFME) booklet states that only a handful of the more than 1,600 medical schools worldwide undergo regular accreditation (WFME, 2001).

Measuring quality is essential for continuous improvement of study programs, teaching and support services (Karle, 2002; Brennan & Shah, 2000). This letter highlights the process and rationale of developing a rapid appraisal tool for measuring quality of higher education in the context of an environment in which full-fledged accreditation processes have not yet been fully implemented.


Quality is a concept that lacks a common definition applicable to all fields, phenomena or subjects. It is a highly debated concept with multiple meanings for people who conceive of higher education and quality differently (Biggs, 2001; Parri, 2006). Quality has been defined in several ways by different authors, with the definitions falling into several categories - e.g., quality as exceptional or unique, as refinement, as goal compliance, as value for money and as transformative (Harvey & Green, 1993; Newby, 1999; Tam, 2001).

At Makerere University, Uganda, the development of a rapid appraisal tool was in part a response to the challenge of improving quality. Additional justifications for a rapid appraisal tool, as opposed to a protracted accreditation procedure, were its cost-containment and time-saving advantages. Appraisal with the proposed tool would, within a short timeframe and in an objective manner, point to the areas most in need of attention. Areas of deficiency would then merit further in-depth probing by whatever means are available to the investigators and the institution.


Is quality measurable? We contend that it is, provided the variables to be measured - both quantitative and qualitative - are well defined. Quality assessment in higher education entails defining what quality is, setting assessment standards, comparing those standards with actual outcomes and deciding to what extent the standards are met (Lomas, 2002; Jones, 2003). The extent of compliance with the set standards should meet the needs of stakeholders such as sponsors of education and the community in which the students will serve (Massy, 2005). The institution, in turn, does what it promised and periodically allows inspection from within or by a third party (Scott et al., 1996; Lagrosen et al., 2004; Burke, 2005; Posner & Rudnitsky, 2006).

Rapid Appraisal Tool

The tool, in the form of a questionnaire, was developed based on the WFME template (2001). It consists of the following areas or domains, from which the specific items in the tool were derived: mission and objectives; educational program; assessment of students; students’ welfare; educational facilities; academic staff; evaluation; governance and administration; and continuous improvement. Although the proposed 125-item tool appears bulky, it averages out to roughly 14 items per domain. The appraisal tool is cheap, easy to use, easy to replicate and applicable to the development of any educational program.
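The letter names the nine domains and the 125-item total but not how the items are distributed. As a minimal sketch (only the domain names and the item total come from the letter; everything else is illustrative), the tool's structure and the per-domain average can be represented as:

```python
# The nine domains listed in the letter; the per-domain item counts
# are not published, so only the average can be computed.
DOMAINS = [
    "mission and objectives",
    "educational program",
    "assessment of students",
    "students' welfare",
    "educational facilities",
    "academic staff",
    "evaluation",
    "governance and administration",
    "continuous improvement",
]

TOTAL_ITEMS = 125

average = TOTAL_ITEMS / len(DOMAINS)
print(f"{TOTAL_ITEMS} items across {len(DOMAINS)} domains "
      f"= {average:.1f} items per domain on average")
```

Note that 125 items over 9 domains gives an average of about 14 items per domain.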


The specific variables comprising the tool were derived from a set of education standards that had earlier been identified through several workshops that included a wide representation of stakeholders from within the country and abroad. The process costs were kept low by running several half-day workshops, a strategy preferred to full days by stakeholders, since this allowed them to attend to other duties the other half of the day. Participatory Question-Based Facilitation (PQBF) and Visualization in Participatory Planning (VIPP) approaches were used to generate and sustain individual interest.

Participatory Question-Based Facilitation grows directly out of the need to improve the planning process in situations that are entrenched or stagnant, highly competitive or conflict-ridden. It is based on participatory techniques designed to defuse tensions, tackle core problems, generate relevant solutions, enhance commitment and create a culture conducive to teamwork. The approach creatively combines different planning methods, centered on professional, question-based facilitation: the facilitator raises questions related to the objectives of the planning process, and the answers to these questions in turn inform that process (Dodge et al., 2002; Dodge et al., 2003).

The set of variables derived from the standards-setting workshops was reviewed by a content-expert panel for validation. The variables were then analyzed using factor analysis and statistical tests of reliability (Field, 2005). The final appraisal tool retained all items and showed good reliability coefficients (Cronbach’s alpha) as well as internal consistency of the items within each subscale (domain). Overall, developing the tool required a few thousand dollars for rooms, lunches and transport of stakeholders and the expert panel, as well as the costs of photocopying, data analysis and a stipend for the individuals who distributed and collected the questionnaires.
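The letter does not publish the underlying item data, but the reliability check it describes is a standard computation. As a minimal sketch (the score matrix below is hypothetical; only the use of Cronbach's alpha is from the letter), the per-domain reliability could be computed as:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 5 respondents rating 4 items of one domain on a 1-5 scale.
domain_scores = [
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
]
print(cronbach_alpha(domain_scores))
```

An alpha of 0.7 or above is conventionally read as acceptable internal consistency for a subscale; in practice the computation would be run once per domain.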


This 125-item, validated, inexpensive and easy-to-use tool is the first of its kind in the East and Central African region for rapid appraisal of the quality of higher education for health professionals. In this respect, the project’s development and implementation have been a success.

Additionally, where external accreditation exercises are needed, the tool may serve as a means of preparation for external peer review.

Moses Galukande, M.Med, MScHPE, FCS

College of Health Sciences, School of Medicine, Makerere University, Kampala, Uganda

Henk van Berkel, PhD

School of Health Professions Education, Maastricht, Netherlands

Ineke Wolfhagen, PhD

School of Health Professions Education, Maastricht, Netherlands


Biggs, J. (2001). The reflective institution: Assuring and enhancing the quality of teaching and learning. Higher Education, 41, 221-238.

Brennan, J., & Shah, T. (2000). Quality assessment and institutional change: Experiences from 14 countries. Higher Education, 40, 331-349.

Burke, J.C. (2005). The many faces of accountability. In J.C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic and market demands. San Francisco: Jossey-Bass.

Dodge, C.P., Musisi, B.M., & Okumba, M. (2002). Participatory Question Based Planning: A Makerere Handbook for Facilitators. Kampala, Uganda: MISR.

Dodge, C.P., Sewankambo, N., & Kanyesigye E. (2003). Participatory planning for the transformation of the Faculty of Medicine into a College of Health Sciences. African Health Sciences, 3, 94-101.

Field, A. (2005). Discovering statistics using SPSS. (2nd ed.). London, United Kingdom: SAGE Publications.

Harvey, L., & Green, D. (1993). Defining quality. Assessment & Evaluation in Higher Education, 18, 9-34.

Jones, S. (2003). Measuring the quality of higher education: Linking teaching quality measures at the delivery level to administrative measures at the university level. Quality in Higher Education, 9, 223-229.

Karle, H. (2002). Global standards in medical education - An instrument in quality improvement. Medical Education, 36, 604–605.

Lagrosen, S., Seyyed-Hashemi, R. & Leitner, M. (2004). Examination of the dimensions of quality in higher education. Quality Assurance in Education, 12 (2), 61-69.

Lomas, L. (2002). Does the development of mass education necessarily mean the end of quality? Quality in Higher Education, 8, 71-80.

Massy, W.F. (2005). Academic audit for accountability and improvement. In J.C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic and market demands. San Francisco: Jossey-Bass.

Newby, P. (1999). Culture and quality in higher education. Higher Education Policy, 12, 261-275.

Parri, J. (2006). Quality in higher education. VADYBA/MANAGEMENT, 2, 107-111.

Posner, G.J., & Rudnitsky, A.N. (2006). Course design: A guide to curriculum development for teachers (7th ed.). Boston: Allyn and Bacon.

Scott, C., Burns, A., & Cooney, G. (1996). Reasons for discontinuing study: The case of mature age students with children. Higher Education, 31, 233-253.

Tam, M. (2001). Measuring quality and performance in higher education. Quality in Higher Education, 7, 47–54.

World Federation for Medical Education. (2001). WFME global standards for quality improvement in undergraduate medical education. Copenhagen: WFME.

