Articles on Systematic Review

A systematic review involves the application of scientific strategies, in ways that limit bias, to the assembly, critical appraisal, and synthesis of all relevant studies that address a specific clinical question.

Following are some articles from the KT Library that address this topic. KTDRR staff reviewed a number of articles, developed a brief abstract for each, and assigned ratings based on strength of evidence, readability, and consumer orientation. For more information on these ratings, see KT Library Descriptor Scales.


Dijkers, M. P. J. M., for the NCDDR Task Force on Systematic Review and Guidelines. (2009). When the best is the enemy of the good: The nature of research evidence used in systematic reviews and guidelines.

Abstract: Evidence-based practice, according to authoritative statements by the founders of this approach to health care, involves using the "best available" evidence in addition to clinical expertise and patient preferences to make decisions on the care of patients. However, many systematic reviewers interpret "best available" as "best possible" and exclude from their reviews any evidence produced by research of a grade less than the highest possible (e.g., the randomized clinical trial [RCT] for interventions), even if that means making no recommendations at all. Voltaire's comment that "the best is the enemy of the good" is applicable here. Rehabilitation would be especially disadvantaged because, by its nature, it can boast few RCTs. The myopic focus on the "strongest" research designs may also steer researchers away from asking, "What is the best design to answer this research question?" Lastly, rehabilitation and other clinicians need to know not just which interventions are effective, but also how these interventions need to be delivered; information relevant to this latter aspect of knowledge translation is typically produced using "weak" research designs.

Descriptor Scales

Evidence: 3 - Qual./Quant. research
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Demner-Fushman, D., Few, B., Hauser, S. E., & Thoma, G. (2006). Automatically identifying health outcome information in MEDLINE records. Journal of the American Medical Informatics Association, 13, 52-60.

Abstract: Demner-Fushman et al. target health care professionals with limited time to review research. The authors describe an automated, evidence-based medicine model for quickly identifying relevant information in medical research without the need to analyze the entire document. The approach was evaluated against PubMed Clinical Queries, and the authors found that the outcome-based ranking provided significantly more accurate information.

Descriptor Scales

Evidence: 3 - Qual./Quant. research
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Glanville, J. M., Lefebvre, C., Miles, J. N. V., & Camosso-Stefinovic, J. (2006). How to identify randomized controlled trials in MEDLINE: Ten years on. Journal of the Medical Library Association, 94(2), 130-136.

Abstract: Glanville et al. examine whether the 1994 Cochrane Highly Sensitive Search Strategy for identifying randomized controlled trials in MEDLINE could be improved after ten years of use. They found that "clinical trial" was the best discriminating term. In years in which Cochrane had assessed MEDLINE records, few additional records were found. However, for records not assessed by Cochrane, the term "randomized controlled trial" was very accurate at identifying non-indexed trials, almost equaling the precision of the Cochrane Highly Sensitive Search Strategy.
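
For readers who want to experiment with this kind of search programmatically, the following is a minimal sketch that limits a PubMed (MEDLINE) query to records indexed with the randomized controlled trial publication type, using Biopython's Entrez utilities. It is an illustration only, not the Cochrane Highly Sensitive Search Strategy or the strategies evaluated in the article; the topic term and e-mail address are placeholders.

    # Illustration only: an RCT-limited PubMed search via Biopython's Entrez.
    from Bio import Entrez

    Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact address

    # Combine a topic of interest with the publication-type field tag that
    # indexers use to flag randomized controlled trials.
    query = 'stroke rehabilitation AND randomized controlled trial[Publication Type]'

    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
    result = Entrez.read(handle)
    handle.close()

    print(result["Count"])   # total number of matching records
    print(result["IdList"])  # PubMed IDs of the first 20 matches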

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Haynes, R. B., Cotoi, C., Holland, J., Walters, L., Wilczynski, N., Jedraszewski, D., McKinlay, J., Parrish, R., & McKibbon, K. A. (2006). Second-order peer review of the medical literature for clinical practitioners. JAMA, 295(15), 1801-1808.

Abstract: Haynes et al. describe the McMaster Online Rating of Evidence (MORE) system, which uses practicing physicians to rate peer-reviewed journal articles in their discipline as the basis for inclusion in the McMaster Premium Literature Service (PLUS) Internet access program. Following a review by staff, volunteer physicians rate articles on whether they are important to the field (relevance) and whether they present new information (newsworthiness). The ratings serve as a screen for articles to be included in an Internet service that notifies physicians of recent research. The project demonstrated the value of discipline-specific peer review of published journal articles.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


IOM (Institute of Medicine). (2011). Finding what works in health care: Standards for systematic reviews. Washington, DC: National Academies Press.

Abstract: This is a report from the Institute of Medicine (IOM) on a consensus study from the Committee on Standards for Systematic Reviews of Comparative Effectiveness Research, Board on Health Care Services. The IOM suggests a number of standards for conducting systematic reviews, including standards for initiating a systematic review, finding and assessing individual studies, synthesizing evidence, reporting, and improving the quality of systematic reviews.

Descriptor Scales

Evidence: 2 - Expert opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Johnston, M. V., Sherer, M., & Whyte, J. (2006). Applying evidence standards to rehabilitation research. American Journal of Physical Medicine and Rehabilitation, 85, 292-309.

Abstract: Johnston et al. explain evidence-based practice standards used in systematic reviews. In addition, the authors apply these evidence-based methods to analyze the quality of research in spinal cord injury, traumatic brain injury, and burn rehabilitation. The article concludes that although the rehabilitation field has experienced a dramatic increase in systematic reviews published each year, the number of studies that met the highest level of criteria was very small in all three areas of research.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Johnston, M. V., Vanderheiden, G. C., Farkas, M. D., Rogers, E. S., Summers, J. A., & Westbrook, J. D., for the NCDDR Task Force on Standards of Evidence and Methods. (2009). The challenge of evidence in disability and rehabilitation research and practice: A position paper. Austin, TX: SEDL.

Abstract: This position paper was developed by the NCDDR's Task Force on Standards of Evidence and Methods. The paper focuses on evidence for interventions in the field of disability and rehabilitation (D&R). The document's specific objectives are to clarify what is meant by the term evidence and to describe the nature of the contemporary systems used to identify and evaluate evidence in intervention research; to identify the challenges in meeting contemporary standards of evidence for D&R interventions; and to propose next steps for examining related issues and for taking action to promote the availability of evidence-based services and information in the field of D&R interventions.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


NCDDR. (2005). FOCUS Technical Brief (9). What are the standards for quality research?

Abstract: This issue of FOCUS discusses principles and standards for quality research, the basis for these standards, and strategies for reporting quality research. In the fields of disability and rehabilitation research, there is a healthy debate regarding the specific criteria for quality research and the specific checklists to be used to standardize reporting. As the debate continues, many ideas related to quality research and quality evidence are emerging in the public domain that can help guide the discussion.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Schlosser, R. W. (2006). FOCUS Technical Brief (15). The Role of Systematic Reviews in Evidence-Based Practice, Research, and Development.

Abstract: This issue of FOCUS, written by Ralf W. Schlosser, PhD, is part one of a three-part series on the topic of evidence-based technology. This issue provides an overview of systematic reviews in research and development. Systematic reviews aim to synthesize the results of multiple original studies by using strategies that delimit bias. Systematic reviews can be used to inform evidence-based practice, which is increasingly shaping the disability and rehabilitation research field.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Schlosser, R.W. (2007). FOCUS Technical Brief (17). Appraising the Quality of Systematic Reviews.

Abstract: This issue of FOCUS, written by Ralf W. Schlosser, PhD, is part two of a three-part series on systematic reviews. This issue describes critical considerations for appraising the quality of a systematic review, including the protocol, question, sources, scope, selection principles, and data extraction. The author also describes tools for appraising systematic reviews.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Shadish, W., & Myers, D. (2004). Campbell Collaboration research design policy brief.

Abstract: The Research Design Policy Brief provides a rationale and proposed policies regarding the Campbell Collaboration's systematic reviews of intervention effectiveness. The policies propose the development of two databases for randomized and nonrandomized studies, standard design codes to be used in reviews, and designated searchable fields to identify research in the databases.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Shadish, W. R., & Rindskopf, D. M. (2007). Methods for evidence-based practice: Quantitative synthesis of single-subject designs. New Directions for Evaluation, 113, 95-109.

Abstract: Shadish and Rindskopf describe the use of single-subject designs in meta-analyses. The article reviews methods for analyzing multiple single-subject designs, suggests methods for conducting a meta-analysis using single-subject designs, and includes a list of current meta-analyses.
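
To make the idea of quantitative synthesis concrete, here is a minimal sketch of generic fixed-effect, inverse-variance pooling of effect sizes. It illustrates the basic arithmetic behind a meta-analysis and is not the single-subject-design methodology discussed by Shadish and Rindskopf; the effect sizes and variances shown are made up for illustration.

    # Generic fixed-effect (inverse-variance) pooling of study effect sizes.
    # Illustration only; not the authors' single-subject-design methods.
    import math

    def pool_fixed_effect(effects, variances):
        """Return the pooled effect estimate and its standard error."""
        weights = [1.0 / v for v in variances]   # inverse-variance weights
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))       # standard error of the pooled effect
        return pooled, se

    # Hypothetical effect sizes and sampling variances from three studies.
    pooled, se = pool_fixed_effect([0.40, 0.55, 0.30], [0.04, 0.09, 0.02])
    print(f"pooled effect = {pooled:.2f}, "
          f"95% CI = ({pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f})")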

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Task Force on Systematic Review and Guidelines. (2013). Assessing the quality and applicability of systematic reviews (AQASR). Austin, TX: SEDL, Center on Knowledge Translation for Disability and Rehabilitation Research.

Abstract: The basic purpose of the AQASR document and checklist is to help busy clinicians, administrators, and researchers ask critical questions that reveal the strengths and weaknesses of a systematic review, both in general and as relevant to their particular clinical question or other practical concerns. The primary audience is clinicians, as most systematic reviews are optimized to answer clinical questions.

Descriptor Scales

Evidence: 2 - Expert opinion
Consumer Orientation: B - Some data
Readability: III - High (Grade 12 or above)


Teutsch, S. M., & Berger, M. L. (2005). Evidence synthesis and evidence-based decision making: Related but distinct processes. Medical Decision Making, 25, 487-489.

Abstract: Teutsch and Berger note the increasing use of evidence syntheses to assist a variety of leaders and policymakers in evidence-based decision making. The authors report that evidence-based reviews and syntheses follow very specific guidelines, including an appeals process, in order to make the process transparent to decision makers. However, the authors add that although evidence-based decision making should be similarly transparent to stakeholders, no comparable standards guide how the information is used in evidence-based decision making.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Turner, H. M., & Nye, C. (2007). FOCUS Technical Brief (16). The Campbell Collaboration: Systematic Reviews and Implications for Evidence-Based Practice.

Abstract: This issue of FOCUS, written by Herb M. Turner III, PhD, and Chad Nye, PhD, highlights the work of the Campbell Collaboration (C2) and the development of systematic reviews of research evidence.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Zaza, S., Carande-Kulis, V. G., Sleet, D. A., Sosin, D. M., Elder, R. W., Shults, R. A., et al. (2001). Methods for conducting systematic reviews of the evidence of effectiveness and economic efficiency of interventions to reduce injuries to motor vehicle occupants. American Journal of Preventive Medicine, 21(4), (Suppl. 1), 23-30.     

Abstract: Zaza et al. describe a step-by-step process for completing systematic reviews. To illustrate their methods, the authors use the topic of the effectiveness and economic efficiency of interventions to reduce injuries to motor vehicle occupants. The article describes the selection of team members, the development of a conceptual approach to identifying interventions for inclusion in the review, criteria for selecting studies, the search strategy, assessment of study quality, and the development of recommendations, as well as implications for future research. The authors also note the collection of data on barriers and other ancillary information that may help explain the findings.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)



Additional Articles

Dijkers, M. P. J. M., Brown, M., & Gordon, W. A. (2008). FOCUS Technical Brief (19). Getting Published and Having an Impact: Turning Rehabilitation Research Results Into Gold.

Abstract: This FOCUS, authored by Drs. Marcel Dijkers, Margaret Brown, and Wayne Gordon of the Mount Sinai School of Medicine, Department of Rehabilitation Medicine, New York, suggests strategies that rehabilitation researchers can use to maximize the impact of their work, turning "research results into gold." In the disability and rehabilitation research community, it is important for researchers to be cognizant of how the published results of research studies can facilitate or limit their use in answering important evidence-based questions.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)


Daumen, M. E., & Conley, D. J. (2009). FOCUS Technical Brief (23). The Use of CIRRIE's Database of International Rehabilitation Research in Conducting Systematic Reviews.

Abstract: This FOCUS, authored by Marcia E. Daumen and Daniel J. Conley of the Center for International Rehabilitation Research Information and Exchange (CIRRIE), describes CIRRIE's bibliographic database of international rehabilitation research. The database is useful for conducting systematic reviews: it includes research conducted in most geographic regions of the world, as well as citations to articles originally published in languages other than English.

Descriptor Scales

Evidence: 1 - Author(s) opinion
Consumer Orientation: C - No data
Readability: III - High (Grade 12 or above)