BRIEF COMMUNICATION
Year : 2008  |  Volume : 21  |  Issue : 1  |  Page : 113

An Example of Program Evaluation Project in Undergraduate Medical Education


B Musal, C Taskiran, Y Gursel, S Ozan, S Timbil, S Velipasaoglu
Dokuz Eylul University School of Medicine, Medical Education Department, Inciralti, Izmir, Turkey

Date of Submission: 05-Sep-2007
Date of Acceptance: 06-Feb-2008
Date of Web Publication: 14-Apr-2008

Correspondence Address:
B Musal
Dokuz Eylul University School of Medicine, Medical Education Department, Inciralti, Izmir
Turkey

Source of Support: None, Conflict of Interest: None


PMID: 19034833

Abstract

Revisions to the existing program evaluation approaches of the Dokuz Eylul University School of Medicine (DEUSM) were made by the Medical Education Department in June 2005. After considering several evaluation models, a mixed evaluation model was developed to meet institutional needs. The general program evaluation plan was structured as areas of inquiry under three main program evaluation questions: what are the effects of the educational program on students and graduates, what are the effects of the educational program on trainers, and is the educational program being implemented as planned. The School's first report under the new program evaluation approach was prepared in July 2006 and led to important revisions to the educational program. This article presents DEUSM's project to revise its program evaluation approach and briefly discusses its early implementation.

Keywords: Program evaluation, program revision


How to cite this article:
Musal B, Taskiran C, Gursel Y, Ozan S, Timbil S, Velipasaoglu S. An Example of Program Evaluation Project in Undergraduate Medical Education. Educ Health 2008;21:113


Introduction



Evaluation is one of the essential elements of the educational process. Program evaluation has been described as the effort to determine whether program objectives have been reached and as the gathering of information to assess a program's efficiency. In a more comprehensive context, program evaluation is described as the act of collecting systematic information on the nature and quality of educational objects (Nevo, 1995). Program evaluation seeks to answer how well educational needs have been met and how well objectives and educational standards have been attained. It also assesses an organization's educational quality and the efficiency of its training methods, and identifies aspects of the curriculum that can be improved through modification (Morrison, 2003). Ideally, program evaluations are planned at the beginning of the educational program and implemented concomitantly with it (Morrison, 2003).



Various evaluation approaches have been described, including objectives-oriented, expertise-oriented, management-oriented, naturalistic and participant-oriented approaches, as well as approaches built on models such as the logic model (Worthen & Sanders, 1987; Demirel, 2002; McNeil, 1996; Mennin, 2004; Logic Model Development Guide, 2001). Educational institutions have been advised to weigh the advantages and disadvantages of different evaluation models and approaches and to develop an institution-specific evaluation model that meets their particular needs (Worthen & Sanders, 1987). It has also been suggested that program evaluation should emphasize both educational processes and outcomes (Demirel, 2002).



Kirkpatrick has described four levels of program outcomes to be assessed (Kirkpatrick, 1998). The first level is learners' and instructors' reactions to, and contentment with, the program. The second level assesses the increase in learners' knowledge and skills, and the third level evaluates whether learners apply their new knowledge and skills through appropriate behavioral changes in their subsequent work and roles. The fourth level evaluates the impact of the program on the institution and the society in which it was implemented. It has been suggested that program evaluation should start with assessments at the first level and then, within practically achievable limits, continue through the second to fourth levels (Nickols, 2003; Kirkpatrick, 1998; Hutchinson, 1999).



When evaluating an educational program, an evaluation plan should be prepared in accordance with the program's general principles and objectives and the pressing questions that should be answered about it. For each program evaluation question, a format should be developed consisting of the parameter to be evaluated and its rationale, the data collection method, the indicator/criteria, the data analysis/interpretation, the implementation frequency, the authorities receiving the reports and the identity of the evaluators (Nevo, 1995; Curriculum 2000 Program Evaluation Study Group Final Report, 2000).
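To illustrate this format, the sketch below represents a single evaluation-plan entry as a small data structure. It is only an illustration of the format described above; the field names and sample values are hypothetical and are not taken from DEUSM's actual plan.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationPlanEntry:
    """One entry of a program evaluation plan (hypothetical field names)."""
    question: str                 # the program evaluation question addressed
    parameter: str                # the parameter to be evaluated
    rationale: str                # why this parameter matters
    data_collection: List[str]    # e.g. surveys, focus groups, examination results
    indicator_criteria: str       # what counts as a satisfactory result
    analysis: str                 # how the data will be analysed and interpreted
    frequency: str                # how often the data are collected
    report_to: List[str] = field(default_factory=list)   # bodies receiving the report
    evaluators: List[str] = field(default_factory=list)  # who carries out the evaluation

# A hypothetical entry, loosely modelled on a student satisfaction survey
entry = EvaluationPlanEntry(
    question="What are the effects of the educational program on students and graduates?",
    parameter="Student satisfaction with educational activities",
    rationale="Corresponds to Kirkpatrick's first level (reactions)",
    data_collection=["end-of-year satisfaction survey", "focus group interviews"],
    indicator_criteria="Mean item scores compared across academic years",
    analysis="Descriptive statistics; qualitative analysis of focus group notes",
    frequency="Once per academic year",
    report_to=["Dean's Office", "relevant educational committees"],
    evaluators=["Medical Education Department"],
)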



The Dokuz Eylul University School of Medicine (DEUSM) in Izmir, Turkey started a Problem-based Learning (PBL) program for its three pre-clinical years in 1997 and a Task-based Learning (TBL) program for its two clinical years in 2000 (Musal et al., 2006; Ozkan et al., 2006). Since the initiation of the PBL and TBL programs, as part of the evaluation approach then used by the School, student performance levels and oral and written feedback from students and trainers were assessed, and the reports of educational committees were reviewed. Additionally, educational research studies were carried out.



A systematic, school-wide revision of the School's evaluation approaches was undertaken as a project by the Medical Education Department in June 2005. With the approval of the Dean's Office, a systematic and multidimensional evaluation of the School's educational program was initiated. This article presents DEUSM's program evaluation project, its approaches, some of its early findings and the curricular decisions made based on those findings.



Planning the Program Evaluation Project at DEUSM



Following a comprehensive review of evaluation models and examples, a mixed evaluation model was selected to meet institutional needs. The model included the logic model's program elements (inputs, activities, outputs and outcomes) and their causal relationships (Logic Model Development Guide, 2001), and Kirkpatrick's first three evaluation levels (Nickols, 2003; Kirkpatrick, 1998; Hutchinson, 1999).



Based on the general educational goals and strategies of the DEUSM (Appendix 1), the following three program evaluation questions were developed:

  • What are the effects of the educational program on students and graduates?

  • What are the effects of the educational program on trainers?

  • Is the educational program being implemented as planned? (Appendix 2).


For each of these three evaluation questions, we further developed a schema for answering them by articulating the necessary data collection methods, the indicators and criteria for success, the data analysis methods, the frequency at which data were to be collected, and the reporting mechanisms (Tables 1, 2 and 3). The program evaluation activities for a one-year period were planned on a Gantt chart. Based on a written timetable, the data were collected, analyzed and interpreted in the planned manner.
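As a minimal sketch of how such a one-year timetable could be represented, the snippet below maps activities to the months in which they are planned. The activities and months are hypothetical and do not reproduce DEUSM's actual Gantt chart.

# Hypothetical one-year evaluation timetable: activity -> planned months.
timetable = {
    "Student satisfaction survey": ["June"],
    "Focus group interviews": ["March", "April"],
    "Trainer feedback forms": ["January", "June"],
    "Review of educational committee reports": ["September", "February", "June"],
}

for activity, months in timetable.items():
    print(f"{activity}: {', '.join(months)}")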



Table 1. What are the effects of the educational program on students and graduates? (Question 1)







Table 2. What are the effects of the educational program on trainers? (Question 2)







Table 3. Is the educational program being implemented as planned? (Question 3)







Implementation of the New Program Evaluation Approach at DEUSM



All planned activities were implemented except the observation of activities during a PBL module and a TBL week; due to time limitations, these omitted activities were moved to the following year's evaluation plan. No other problems were experienced while carrying out the evaluation, apart from some delays in reporting the findings on the implementation of program activities to the relevant educational committees.



The results of the program evaluation activities implemented throughout the year were presented to the relevant educational committees upon their completion and were used for program improvement. For instance, the results of a survey and of focus group studies assessing students' opinions on the educational program were discussed in the relevant educational committees. The survey, repeated at the end of each academic year, assesses students' levels of contentment with each educational activity (PBL sessions, lectures, clinical skills practicals, etc.), the evaluation methods, tutors' performance, the School's medical facilities and support for students. In this survey, the scores attributed to each item are compared across the years (Musal et al., 2006). The focus group interviews were used to evaluate students' opinions about the educational program.

The qualitative and quantitative data originating from these efforts have been used to revise the programs. For example, in focus groups the students indicated that the joint implementation of the pre-clinical disciplines' practicals (anatomy, histology, physiology, etc.) made learning difficult. As a result, the School decided to implement the practicals separately during the course of the PBL modules. The focus groups also revealed that concept maps were not developed in all PBL groups. In DEUSM, at the end of each PBL module students are expected to develop a concept map, a diagram exhibiting the main concepts pertaining to the PBL case and the relationships among them. In response to students' statements regarding the development of concept maps, a course on concept maps was started for PBL tutors.
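As a minimal sketch of how the survey's item scores might be compared across years, the snippet below computes mean scores per item and academic year. The items and scores are hypothetical and do not come from the actual survey data.

from statistics import mean

# Hypothetical satisfaction scores (1-5 scale) per survey item and academic year.
scores = {
    "PBL sessions": {"2004-05": [4, 5, 4, 3], "2005-06": [4, 4, 5, 5]},
    "Clinical skills practicals": {"2004-05": [3, 3, 4, 4], "2005-06": [4, 4, 4, 5]},
}

for item, by_year in scores.items():
    yearly_means = {year: round(mean(values), 2) for year, values in sorted(by_year.items())}
    print(item, yearly_means)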



The first program evaluation report on the results of all implemented activities was prepared in July 2006. Program revision proposals were developed and reported to the Dean’s Office and relevant educational committees.



After the success of the first year's evaluation, the subsequent year's evaluation plan was prepared. New evaluation activities were added for the second year, specifically to evaluate students' performance and to assess graduates' perceptions of the educational program and their perceived professional competencies.



Based on two successive years’ program evaluation studies, changes made in the School’s education programs include an overall revision of the curriculum, a reduction in the frequency of examinations, and a diversification of socio-cultural activities to meet students’ expectations. Numerous other changes were also made in response to the detailed content of the program evaluation report.



In the light of these outcomes, the School plans to continue its new program evaluation approach with its quantitative and qualitative methods assessing all components of the educational program.



References



CURRICULUM 2000 PROGRAM EVALUATION STUDY GROUP FINAL REPORT (2000). Southern Illinois University School of Medicine, Springfield, Illinois, U.S.A.



DEMIREL, O. (2002). Kuramdan Uygulamaya Egitimde Program Gelistirme (From Theory to Practice: Curriculum Development in Education). 4th ed. Ankara: Pegem A Publishers.



HUTCHINSON, L. (1999). Evaluating and researching the effectiveness of educational interventions. British Medical Journal, 318, 1267-1269.



KIRKPATRICK, D.L. (1998). Evaluating Training Programs. 2nd ed. San Francisco: Berrett-Koehler Publishers, Inc.



LOGIC MODEL DEVELOPMENT GUIDE (2001). Michigan: W.K. Kellogg Foundation.



MCNEIL, J.D. (1996). Curriculum: A Comprehensive Introduction. 5th ed. HarperCollins College Publishers.



MENNIN, S.P. (2004). Program Evaluation in Medical Education (workshop text). 3rd Turkish Medical Education Congress, April 12-16, 2004, Sanliurfa, Turkey.



MORRISON, J. (2003). ABC of learning and teaching in medicine: Evaluation. British Medical Journal, 326, 385-387.



MUSAL, B., GURSEL, Y., OZAN, S., TASKIRAN, H.C. & VAN BERKEL, H. (2006). The Satisfaction Levels of Students on Academic Support and Facilities, Educational Activities and Tutor Performance in a PBL Program. The Journal of the International Association of Medical Science Educators, 16(1), 1-8.



NEVO, D. (1995). School-based Evaluation: a dialogue for school improvement. London: Pergamon Press.



NICKOLS, F. (2003). Evaluating Training: There is no "cookbook" approach. http://home.att.net/~nickols/articles.htm. Retrieved 2 March 2006.



OZKAN, H., DEGIRMENCI, B., MUSAL, B., ITIL, O., AKALIN, E., KILINIC, O., OZKAN, S. & ALICI, E. (2006). Task-Based Learning Programme for Clinical Years of Medical Education. Education for Health, 19(1), 32-42.



WORTHEN, B.R. & SANDERS, J.R. (1987). Educational Evaluation. Alternative Approaches and Practical Guidelines. New York: Longman.

___________________________



Appendix 1. General Principles* (GP) of DEUSM’s Educational Program









Appendix 2. Program Evaluation Questions








 
