
Year : 2018  |  Volume : 31  |  Issue : 2  |  Page : 72-79

Development and testing of an analytic rubric for a master's course systematic review of the literature: A cross-sectional study

1 Nursing Science Program, Clinical Health Sciences; Department of Nursing Science, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
2 Center for Teaching and Learning, Utrecht University, Utrecht, The Netherlands
3 Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands

Date of Web Publication: 30-Nov-2018

Correspondence Address:
Thóra B Hafsteinsdóttir
P.O. Box 85500, Suite Str. 7.116, 3508 GA Utrecht
The Netherlands

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/efh.EfH_336_17


Background: Grading systematic reviews in master's-level health sciences education programs is a complex process. Students conduct systematic reviews under the supervision of course faculty in seminar groups, where both the draft and final versions of the literature review are assessed. The aim of this study was to develop a systematic review of the literature rubric (SRL-rubric) for the evaluation of systematic reviews in the SRL course of a master's Program of Health Care Sciences and to investigate students' and faculty members' experiences with, and the usability of, the SRL-rubric. Methods: The SRL-rubric was developed using a seven-step approach. Usability was investigated with a cross-sectional survey. Results: The SRL-rubric included nine categories and five proficiency levels. Fifty-two of 59 students and all six faculty members at the Utrecht University Program of Health Care Sciences completed the survey. Students rated the ease of working with the rubric at an average of 6.6 on a 10-point scale; faculty ratings ranged from 7 to 9. Problems were identified with the distinctions among cells describing proficiency levels and with final grading. Discussion: A structured process focused attention on the actions required to develop the SRL-rubric, which proved useful in writing and grading systematic reviews. However, some students indicated that they missed specific feedback and suggestions describing how to address their weaknesses. Further development and research are needed to enhance the grading reliability of the SRL-rubric, to establish its content validity, and to maintain consistency with criteria for conducting and reporting reviews.

Keywords: Graduate studies, performance criteria, rating scale, rubric, systematic review

How to cite this article:
Gamel C, van Andel SG, de Haan WI, Hafsteinsdóttir TB. Development and testing of an analytic rubric for a master's course systematic review of the literature: A cross-sectional study. Educ Health 2018;31:72-9


Background

Supervising and grading health sciences students' assignments and examinations in master's programs is a complex process. Students may have multiple supervisors to ensure both clinical and methodological expertise, and many supervisors may be involved in grading. An interdisciplinary faculty with distinct research competencies, and possibly a professional domain different from the student's, may further complicate the process. It is a challenge to develop unequivocal criteria that can be used by such a diverse group. Rubrics can assist in the supervision, grading, and evaluation process.[1] A rubric is an assessment tool that identifies the criteria and levels of achievement for the evaluation of a specific assignment.[2] It divides the assignment into distinct skills or proficiencies and describes the behaviors required to attain each achievement level.

A review of the use of rubrics in higher education identified multiple applications, including grading and enhancing student performance, improving teaching skills, and evaluating university courses.[1] However, the included studies did not describe the development of the rubrics. Although few studies have investigated the validity and reliability of rubrics, some report interrater reliability[3],[4],[5] and validity.[6],[7],[8] These studies identify clarity and appropriateness of language as central concerns and emphasize that graders must be sufficiently trained to achieve acceptable levels of reliability.[3],[4],[5],[6],[7],[8],[9] Published rubrics address academic writing, research competencies,[8] writing academic papers, and case studies.[10]

An often-cited benefit of rubrics for faculty is the ability to grade assignments accurately, rapidly, and objectively.[1],[2] Rubrics also help align disparate opinions concerning priorities in grading criteria:[2] the objective descriptions enable multiple users to reach consensus on the grade assigned. A benefit for students is the insight into learning goals and reporting research that is gained from feedback on work in progress, that is, formative assessment.[1]

This article describes the experiences of students and faculty in a Master of Science in Clinical Health Sciences Program in the Netherlands. Students are enrolled in the Nursing Science, Physiotherapy Science, and Clinical Language, Speech and Hearing Sciences Programs. The faculty members are clinical research experts who act as both supervisor and grader for a group of students and who have diverse experience in supervising and grading systematic reviews. Consequently, challenges may arise concerning the feedback students receive and in assuring accuracy and objectivity among graders. Against this background, our aim was first to develop a rubric for the evaluation of systematic reviews in a systematic review of the literature (SRL) course in a master's program and second to investigate students' and course faculty's experiences with, and the usability of, the rubric.

Methods

A descriptive, cross-sectional survey study was conducted.[11] The study took place in 2014–2015 within a Master of Science Program of a Dutch university.

Rubric development

The development of the SRL-rubric was guided by a seven-step approach.[10] The content description of behaviors, which included the research skills requisite to conducting and reporting a literature review, was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement,[12] an evidence-based minimum set of items for transparent reporting of systematic reviews. This was the PRISMA version available when development of the rubric commenced in 2013.

Participants in the development were faculty of Clinical Health Sciences, chosen for their expertise and because their involvement is important for successful implementation.[1],[2] Draft sections of the SRL-rubric were developed by two faculty members (TBH, CG) and were discussed and modified as needed by a core group of eight faculty members and two educational scientists. Further input was provided by 10 additional faculty members and a student.

The following steps guided the development:

  • Step 1. Reflection: This stepwise process required reflection and decision-making. A draft rubric had been set up by a working group of faculty based on the PRISMA statement.[12],[13],[14] The following decisions were made: (a) to write the rubric in English, because the systematic review was written in English, the most common language for international publication; (b) to use an analytic rather than a holistic rubric, because faculty wanted to score each criterion rather than assign an overall score; (c) to use the rubric in both formative and summative evaluation, because the provision of feedback is integrated into the courses; and (d) to involve all faculty/supervisors, the course coordinator, and a student in the development.
  • Step 2a. Criteria list: The performance criteria, which focused on the research skills requisite to conducting and reporting a review, drew on the PRISMA criteria and included seven content criteria.[12],[13],[15] Two generic criteria “manuscript organization” and “work style-own initiative” were added, reflecting important scholarly competencies.
  • Step 2b. Criteria weight and score: The weight percentage of each performance criterion was specified, with a cumulative weight of 100%. Faculty members were asked to assign a weight to each criterion, and a discussion of differences followed until consensus was reached on the final weights. A separate activity was the use of the criteria and associated descriptors to grade a review manuscript, which promoted scoring consistency and was essential to fulfilling the educational merits of the rubric.[13]
  • Step 3. Proficiency levels: Although various approaches are used to reflect proficiency progress on a continuum,[16] and it has been considered important to avoid labeling and merely identify a range (e.g., levels 1-4),[17] in the first version of the SRL-rubric the scoring categories were aligned with the respective university educational regulations, to anchor proficiency levels in a failure-to-excellent grading system as follows: unacceptable, failure, pass, satisfactory, good, and excellent. Furthermore, a description was written for each level of the nine performance criteria: abstract, introduction, aims, method, results, discussion, argumentation, manuscript organization, and work style. To achieve consistency in terminology, the variance in the quality of each performance attribute was described in terms of intensity, such as "50% of the criteria correct and complete."[17]
  • Step 4. Testing the SRL-rubric: Faculty graders from nursing, physical therapy, and speech-language therapy used the proposed rubric to grade two anonymous review manuscripts representing both endpoints of proficiency, and areas of difference were discussed. This resulted in further modifications and provided evidence of scoring validity and interrater reliability of the SRL-rubric.
  • Step 5. Teaching the students: Students were informed about the use of the SRL-rubric at the start of the course, and the rubric was included in the course syllabus. Students were encouraged to use the rubric in seminar group discussions when writing the concept review (formative evaluation) and the final review (summative evaluation).
  • Step 6. Application or Implementation: Faculty received information on how to use the rubric as both a formative and a summative assessment tool. A written instruction with an exemplar was sent to faculty before they used the rubric to grade (as fail or pass) and provide feedback on the concept version of the review, and again before they graded the final version. After using the rubric for the summative evaluation, the faculty met to reflect on the process.
  • Step 7. Revision: In 2014, two expert meetings were held with faculty to review the content of the SRL-rubric criteria. This first version of the SRL-rubric was used for the evaluation of the reviews that course year. Although the student and faculty course evaluations were positive, the need for further refinement was evident.
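The weighting scheme of Step 2b (criterion weights summing to 100%) amounts to a weighted sum of per-criterion scores. The sketch below uses the nine performance criteria named in Step 3 and the 2015 weights reported later (10% for seven criteria, 15% for two); the article does not specify which two criteria carry 15%, so assigning them to "method" and "results" is an assumption, and the scores are hypothetical.

```python
# Sketch of the weighted grading described in Step 2b: each criterion's
# score (Dutch 1-10 scale) is multiplied by its weight; weights sum to 100%.
# Assumption: "method" and "results" carry the two 15% weights (the article
# does not specify which criteria they are); all scores are hypothetical.
weights = {
    "abstract": 0.10, "introduction": 0.10, "aims": 0.10,
    "method": 0.15, "results": 0.15, "discussion": 0.10,
    "argumentation": 0.10, "manuscript organization": 0.10,
    "work style": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # cumulative weight of 100%

def final_grade(scores):
    """Weighted sum of per-criterion scores."""
    return sum(weights[c] * s for c, s in scores.items())

# Hypothetical grading: strong method section, weaker discussion.
scores = {c: 7.0 for c in weights}
scores["method"] = 8.0
scores["discussion"] = 6.0
print(round(final_grade(scores), 2))  # prints 7.05
```

With all criteria at 7.0 except method (8.0) and discussion (6.0), the weighted grade is 7.05, illustrating how the 15% weight makes the method section count more heavily than the discussion.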

Part two: Systematic review of the literature rubric evaluation

The study participants were the students enrolled in the 2015 SRL-course and the faculty who taught and supervised this cohort.

A questionnaire for students' evaluation, developed by university faculty, was adapted for the aim of this study.[18] It included 26 questions and statements on students' experiences with the SRL-rubric, focusing on organization, the supervisor's explanations, usability, weighting, clarity, grader objectivity, and consistency in feedback and assessment, rated on a 5-point Likert scale (strongly disagree to strongly agree). A questionnaire for faculty included 18 statements focusing on the process of assessment and the usability, clarity, and reliability of the SRL-rubric, also rated on a 5-point Likert scale (strongly disagree to strongly agree).

Data from students were collected in May 2015, after the course evaluation and following a lecture in a different course, to obtain a maximal response rate. Data from faculty were collected by e-mail in November 2015.

Data analysis

The data were analyzed with descriptive statistics using SPSS version 22 (IBM Corp., Armonk, NY, USA). The study was conducted in accordance with the Declaration of Helsinki.[19] Voluntary participation was confirmed. Participants could not be identified from the material presented, and no plausible harm to participating individuals could arise from the study. All participants were thoroughly informed about the study. As customary, informed consent was inferred when participants completed and returned the survey. Data were collected anonymously and within the context of the course evaluation.
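The descriptive statistics reported below (means and standard deviations of rubric ratings) were computed in SPSS, but they can be reproduced with any statistics tool. The minimal sketch below uses Python's standard library; the ratings are hypothetical examples, not the study data.

```python
# Hedged sketch of the descriptive analysis; the article used SPSS 22.
# The ratings below are hypothetical 10-point ease-of-use ratings,
# not the study data.
import statistics

ratings = [6, 7, 5, 8, 7, 6, 7, 8, 6, 7]
mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)  # sample SD, as SPSS reports by default
print(f"mean = {mean:.1f}, SD = {sd:.1f}")  # prints: mean = 6.7, SD = 0.9
```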

Results

Systematic review of the literature-rubric

Before the start of the SRL-course for the 2015 student cohort, the 2014 version of the SRL-rubric was reevaluated and adapted by the faculty. Specifically, the number of proficiency levels was reduced from six to five because of an overlap between the good and excellent levels. Furthermore, the proficiency scoring was modified so that graders could assign a score within a range for each proficiency level rather than using one fixed score. The large variation in criteria weights in the 2014 version (5% for one criterion, 7.5% for two, 10% for two, and 15% for four) was considered problematic and was changed in the 2015 version to 10% for seven criteria and 15% for two. Finally, textual descriptions were improved within and among the cells describing the proficiency scoring criteria [Table 1].
Table 1: Structure of systematic review of literature-rubrics version 2015


Students' evaluation

In total, 52 of the 59 students enrolled in the course completed questionnaires (88% response rate): nursing science (n = 26), physical therapy science (n = 17), and speech-language therapy science (n = 7). Most (85%) had no previous experience with rubrics. All students read the SRL-rubric before writing their final review. On a scale of 1-10, students rated working with the rubric at 6.6 (standard deviation = 1.4) [Table 2]a.
Table 2:


Most students found the descriptors for the five proficiency levels "reasonable" (56%) or "clear" (38%), while a small group found them "not clear" (7%). In their open comments, students identified two problems: no feedback being given concerning the choice of a proficiency column, and descriptors of proficiency columns containing inadequate information. More than half of the students (55%) considered the information provided at the beginning of the course "adequate," and 59% reported being encouraged by their supervisors to use the SRL-rubric [Table 2]b.

More than two-thirds of students (71%) used the SRL-rubric as a guideline when writing the review. Most found that the rubric provided a clear picture of what was expected (71%) and of how the work would be assessed (75%). Students were less positive about its usefulness in informing them about their strengths (35% agreed). Students used the SRL-rubric to conduct a self-assessment of their own concept review: half (51%) were "more certain about how to improve their concept review" after using the rubric, and 63% reported "actually having revised the concept review after completing the self-assessment."

Three-quarters (75%) of students found the faculty assessment "useful" (51%) or "very useful" (24%). An overwhelming majority (87%) agreed with the faculty assessment, and 83% rated the assessment as "motivating." Almost all students (91%) thought that the faculty assessed the concept review objectively.

To achieve objective grading of the final review, students were randomly allocated to a grading group, and an independent grader, who could be from a different profession, assessed the final review. Of the students, 44% found that the grading instructor "objectively assessed the review" and 32% found the explanation provided "sufficient to understand the grade."

Faculty evaluation

All six faculty members who had the roles of seminar group supervisor and grader completed the questionnaires. Faculty reported that they encouraged students to use the rubric during seminars. Students were asked to perform a self-assessment of their concept review with the rubric. All faculty members used the rubric to assess the concept and final review. Faculty gave additional written feedback on the concept review to explain the chosen proficiency level for each criterion in the rubric. Besides the written information, four faculty members used the rubric to give verbal feedback about the concept review, whereas two did not [Table 3].
Table 3: Faculty evaluation of the SRL-rubrics


Faculty considered the use of percentages in the cells of the SRL-rubric helpful when choosing the best proficiency level; however, one person was neutral. Although the descriptions in the cells were generally judged clear, several comments identified the need for more explanation.

The final review was graded by a faculty member, often from a discipline other than the student's. Four faculty members agreed that they were just as comfortable using the SRL-rubric when grading students from another discipline as from their own, whereas two were not comfortable with this.

Five faculty members agreed that filling in the SRL-rubric required little effort, whereas one was neutral. Concerning the statement that the rubric saved time, two faculty members were neutral and one disagreed. All agreed that assessment of the review with the SRL-rubric was better than other methods of assessment. The average faculty rating for the rubric was 8. In their comments, faculty emphasized the benefits of the SRL-rubric using terms such as efficient, thorough, clear, and transparent. Working with the SRL-rubric was seen as an ongoing process; consequently, regular review, reflection, and revision are essential.

Discussion

This study aimed to develop a rubric for the SRL course in a master's degree program in health sciences and to investigate its usability from student and faculty perspectives. During development, two versions of the SRL-rubric were used in the course over a 2-year period. Students and faculty completed a survey about their experience with the 2015 version of the SRL-rubric. Both groups found the SRL-rubric usable when writing the draft (concept) version of the review, conducting a self-assessment, and grading the concept review, and they reported good experiences with it. This is in line with findings of studies demonstrating positive experiences of students and instructors.[20],[21] However, our survey findings indicate that the SRL-rubric needs further development to maximize grading objectivity of the final review (summative evaluation) and to provide feedback about the assigned final grade that enables students to recognize strengths and weaknesses (usability).

We were able to include almost all students in the 2015 SRL-course and all faculty members. However, the sample was small, drawn from a single university, and restricted to professionals from the nursing, physical therapy, and speech therapy disciplines, which limits the representativeness of the study and the generalizability of its findings. To secure optimal reporting, we adhered to the STROBE statement, recommended for the reporting of observational studies.[22] A strength in the development of the SRL-rubric was the use of a seven-step approach to develop a scoring rubric.[10] A well-defined structure during development was essential because of the complexity of working with different health-care professionals who had to agree on criteria and on the differences among proficiency levels, namely, what makes a review unacceptable, fail, pass, good, or excellent. Reaching consensus required multiple work sessions enabling faculty to develop the content. Calibration sessions were used to evaluate agreement in scoring between faculty and to encourage faculty dialog about assigned grades.[23] The structured process extended over 2 years and included revision and refinement after each use in the SRL-course, which contributed to improved validity and reliability of the rubric as an evaluation tool.[10] Faculty members commented on the evolving science of systematic reviews and the need for ongoing updates of the SRL-rubric, and they highly valued the iterative process of developing the assessment tool: "SRL-rubrics is a living (evolving) document, in continuous development; we will need to continuously improve and develop the SRL-rubrics for this course." Earlier studies emphasized the need for further research on the development of rubrics and on the assessment of their validity and reliability.[1],[3],[4],[5],[6],[7],[8],[23],[24]

A striking finding of our study was that less than half of the students (44%) thought that the final SRL (summative) assessment was performed objectively, whereas almost all (91%) indicated that the concept SRL (formative) assessment by the seminar group supervisor was objective. In our situation, faculty members have two roles, seminar group supervisor and grader: each supervises 10-14 students from their own profession and grades 10-14 students from all professions. Possible explanations for students' responses concerning objective grading include the professional discipline of the graders, the criteria themselves, and the distinctions among proficiency levels for each criterion. Popham[25] advises that, when developing a rubric and faced with a choice between interrater agreement among graders and instructional impact, one should opt for the latter. Yet, rubrics are not instructionally useful if there are inconsistencies in the descriptions of performance criteria across proficiency scale levels.

Students were critical about the SRL-rubric's usefulness in helping them recognize their strengths and areas for improvement. A possible explanation is that some graders checked the same proficiency level for all criteria without providing feedback or clarification for each criterion; written and/or verbal feedback is indeed necessary for students to understand their strengths, their weaknesses, and what needs improvement. Another plausible explanation is the presence of a halo effect when graders mark the same proficiency level for all criteria.[24] This, however, may be a structural alignment issue, because a rubric matrix "forces" all criteria to be described on the same scale of proficiency levels.[24] This explanation concerns the content of the rubric and may not be resolved by providing feedback and clarification of the assigned grade. Further research is therefore needed to investigate the validity, reliability, and usability of the SRL-rubric.

Conclusions

The SRL-rubric was found to be a useful grading tool for objective assessment of systematic literature reviews conducted by students in a master's-level health-care sciences program. Ideally, the SRL-rubric should be optimized and further validated in different settings, with sufficiently large samples of students and faculty, before being broadly adopted.


Acknowledgments

The authors would like to thank the following faculty members who participated in and made meaningful contributions to the development of the SRL-rubric: Marco van Brussel, Agnes van den Hoogen, Irene Jongerden, Digna Kamalski, Marijke Kars, Manon Kluijtmans, Janneke de Man-van Ginkel, Janjaap van der Net, Harmieke van Os-Medendorp, Martijn Pisters, Irina Poslawsky, Marieke Schuurmans, Mirelle Stukstette, Tim Takken, Saskia Weldam, and Frank van Wijnen. In addition, the authors would like to thank all students who participated in the study. Finally, we thank Eugene Custers for his comments and editing of this paper.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.

References

1. Reddy YM, Andrade H. A review of rubric use in higher education. Assess Eval High Educ 2010;35:435-48.
2. Stevens DD, Levi A. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Sterling (VA): Stylus; 2013.
3. Simon M, Forgette-Giroux R. Rubric for scoring postsecondary academic skills. PARE 2001;7:1-7. Available from: http// [Last accessed on 2016 Dec 05].
4. Hafner JC, Hafner PM. Quantitative analysis of the rubric as an assessment tool: An empirical study of student peer-group rating. Int J Sci Educ 2003;25:1509-28.
5. Dunbar NE, Brooks CF, Kubicka-Miller T. Oral communication skills in higher education: Using a performance-based evaluation rubric to assess communication skills. Innov High Educ 2006;31:115-28.
6. Green R, Bowser M. Observations from the field: Sharing a literature review rubric. J Libr Adm 2006;45:185-202.
7. Lapsley R, Moody R. Teaching tip: Structuring a rubric for online course discussions to assess both traditional and non-traditional students. J Am Acad Bus 2007;12:167-72.
8. Van den Berg I, van de Rijt B, Prinzie P. Evaluating academic writing skills with digital rubrics. Onderzoek van Onderwijs 2014;43:6-14.
9. Moni RW, Beswick E, Moni KB. Using student feedback to construct an assessment rubric for a concept map in physiology. Adv Physiol Educ 2005;29:197-203.
10. Dennison RD, Rosselli J, Dempsey A. Evaluation Beyond Exams in Nursing Education: Designing Assignments and Evaluating with Rubrics. New York: Springer Publishing Company; 2014.
11. Polit DF, Beck CT. Nursing Research: Generating and Assessing Evidence for Nursing Practice. Philadelphia (PA): Wolters Kluwer Health; 2012.
12. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. J Clin Epidemiol 2009;62:1006-12.
13. Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: Checklist and explanations. Ann Intern Med 2015;162:777-84.
14. Equator Network. The Resource Center for Good Reporting of Health Research Studies. Available from: http// [Last accessed on 2015 Jan 26].
15. PRISMA: Transparent Reporting of Systematic Reviews and Meta-Analyses. Available from: http// [Last accessed on 2015 Jan 26].
16. Tractenberg RE, Umans JG, McCarter RJ. A mastery rubric: Guiding curriculum design, admissions and development of course objectives. Assess Eval High Educ 2010;35:15-32.
17. Tierney R, Simon M. What's still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Pract Assess Res Eval 2004;9:1-7. Available from: http// [Last accessed on 2015 Feb 18].
18. Conclusions and Results of the Scaffolding Assessment for Learning Project (SCALA project). Utrecht: SURF; 2014. Available from: [Last accessed on 2015 Jan 26].
19. World Medical Association. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects; 2013. Available from: http// [Last accessed on 2015 Jan 26].
20. Andrade H, Du Y. Student perspectives on rubric-referenced assessment. Pract Assess Res Eval 2005;10:1-11.
21. Schneider JF. Rubrics for teacher education in community college. Community Coll Enterp 2006;12:39-55.
22. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and elaboration. Int J Surg 2014;12:1500-24.
23. Bennett PR, Cherlin AJ, Reese MJ. Calibrating Multiple Graders. The Innovative Instructor, Best Practice Forum. Center for Educational Resources; 2011. p. 2. Available from: http// [Last accessed on 2017 Jan 02].
24. Humphry SM, Heldsinger SA. Common structural design features of rubrics may represent a threat to validity. Educ Res 2014;43:253-63.
25. Popham WJ. What's wrong, and what's right, with rubrics. Educ Leadersh 1997;55:72-5.


