FOCUS
TECHNICAL BRIEF NO. 15
2006

The Role of Systematic Reviews in Evidence-Based Practice, Research, and Development

Ralf W. Schlosser, PhD, Department of Speech-Language Pathology and Audiology, Northeastern University

Evidence-Based Practice

The construct "evidence-based practice" (EBP) is increasingly shaping the disability and rehabilitation field as the preferred approach for professionals rendering services to individuals with disabilities. Historically, our practice may have been accurately labeled as "experience-based," "eminence-based," or "habit-based" (e.g., Law, 2002). In these times of increasing demands on accountability and research utilization, professionals rendering services to people with disabilities are being asked to consider research evidence as part of their clinical and educational decision making. There is a growing consensus that EBP should involve the integration of the best and most current research evidence with clinical/educational expertise and relevant stakeholder perspectives in the pursuit of making the best possible decisions for a particular consumer (e.g., Law, 2002; Straus, Richardson, Glasziou, & Haynes, 2005; Schlosser, 2003; Schlosser & Raghavendra, 2004). Thus, EBP is not practice that is driven by research evidence alone, which is a popular misconception. The key ingredient of this definition is integration. A practitioner has to consider not only the external research evidence related to a particular diagnostic tool or a treatment approach that is being considered for a consumer, but also the data (and other knowledge) generated from this consumer and his or her perspectives and preferences. This is by far not a cookie-cutter approach to practice but rather a creative process.

For practitioners who seek to implement EBP, the following process has been delineated: (1) ask a well-built question, (2) select evidence sources, (3) implement a search strategy, (4) appraise and synthesize the evidence, (5) apply the evidence, and (6) evaluate the application of the evidence (e.g., Sackett et al., 2000). Schlosser (2003) added the dissemination of the findings as a seventh step so that others may benefit from what has been learned. This last step also allows EBP to come full circle in that practitioners and relevant stakeholders have a means to influence the direction of future research.

This process requires knowledge and skills in several key areas, such as searching the literature efficiently for the best and most current evidence relevant to the question at hand, as well as critically appraising that evidence. The latter includes a working knowledge of the factors that contribute to the internal validity of evidence and to its external validity, in order to determine how transportable the data are to the particular question. Many practitioners in the disability and rehabilitation field may not be adequately prepared for this task, creating a formidable knowledge and skill barrier to EBP implementation. Beyond knowledge and skills, these activities can be very time-consuming, which may further deter practitioners from implementing EBP (Humphris, Littlejohns, Victor, O'Halloran, & Peacock, 2000). This is where systematic reviews may be a tremendous asset.

Role of Systematic Reviews in EBP

Practitioners can save considerable time and rely on someone else's expertise when they are provided with access to pre-filtered evidence. Pre-filtered evidence is established when someone with expertise in a substantive area has reviewed and presented the methodologically strongest data in the field (Guyatt & Rennie, 2002). Systematic reviews provide practitioners with a vehicle for gaining access to such pre-filtered evidence. Essentially, systematic reviews aim to synthesize the results of multiple original studies by using strategies that delimit bias (Cook, Mulrow, & Haynes, 1997). According to Petticrew and Roberts (2006), systematic reviews "… adhere closely to a set of scientific methods that explicitly aim to limit systematic error (bias), mainly attempting to identify, appraise and synthesize all relevant studies (of whatever design) in order to answer a particular question (or set of questions)" (p. 9). Systematic reviews substantially reduce the time and expertise it would otherwise take to locate, appraise, and synthesize individual studies.

The efficacy or effectiveness of a rehabilitation intervention is rarely established in a convincing manner with only one study. In fact, multiple studies are needed that are then synthesized to offer sound evidence in support of or against an intervention. It is for this reason, in addition to their systematic methods for minimizing bias, that systematic reviews, in particular those that employ meta-analyses, rank higher than both individual studies and non-systematic (or narrative) reviews on hierarchies of treatment evidence in medicine (e.g., Schlosser & Raghavendra, 2004). Meta-analyses involve the calculation of effect sizes across multiple studies in order to determine the effectiveness of an intervention (Cooper & Hedges, 1994). Not all systematic reviews are, or should be, meta-analytic. Whether a meta-analysis is appropriate depends in part on the question being explored as well as on how the primary data are reported. Some questions may not require any calculation of effect sizes. For example, a systematic review may describe the reporting patterns of treatment research in methodological terms. Such a review would rely on many other characteristics of systematic reviews in terms of searching for and appraising evidence, but omit effect size calculations. Other times, a meta-analysis may be desirable, but the data may not lend themselves to effect size calculation.
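To make the pooling of effect sizes concrete, the following minimal sketch illustrates one common approach, a fixed-effect, inverse-variance meta-analysis of standardized mean differences (see Cooper & Hedges, 1994, for the underlying methods). The study names and numbers are hypothetical, purely for illustration.

    import math

    # Hypothetical per-study results: a standardized mean difference (d)
    # and its variance, as extracted from each primary study.
    studies = [
        {"name": "Study A", "d": 0.45, "var": 0.04},
        {"name": "Study B", "d": 0.30, "var": 0.02},
        {"name": "Study C", "d": 0.62, "var": 0.09},
    ]

    # Fixed-effect model: weight each study by the inverse of its variance,
    # so that more precise studies contribute more to the pooled estimate.
    weights = [1.0 / s["var"] for s in studies]
    pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # 95% confidence interval for the pooled effect.
    low, high = pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se
    print(f"Pooled d = {pooled_d:.2f}, 95% CI [{low:.2f}, {high:.2f}]")

If the resulting confidence interval excludes zero, the synthesized evidence favors the intervention; a wide interval spanning zero would instead counsel caution.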

Role of Systematic Reviews in Research

What role do systematic reviews play in research itself? Although the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) supports many efforts that are directly or indirectly aimed at improving practices, one of NIDILRR's missions is to facilitate, fund, and disseminate disability and rehabilitation research. The production of systematic reviews is a rigorous process that is nothing short of research, allowing for convincing demonstrations of the efficacy or effectiveness of an intervention. Given the established importance of systematic reviews for practice, researchers need to produce more of them. Although this assertion is not based on a systematic literature search, the quantity of systematic reviews produced in the disability and rehabilitation field seems to lag behind that of other fields such as medicine and education. The production of systematic reviews, however, is time-consuming and requires adequate resources and funding: estimates range from 216 to 2,518 hours, with a mean of 1,139 hours, at an average cost of approximately $104,750 (Petticrew & Roberts, 2006).

Although systematic reviews help us determine what we know, they are also powerful tools for documenting knowledge gaps in the literature. These identified gaps can be used to shape future research agendas (Eagly & Wood, 1994). How does a systematic review do that? Systematic reviewers typically develop a coding protocol and manual in which all the categories of data to be extracted from primary studies are listed and defined. Often, these categories include subject characteristics, setting characteristics, intervention descriptors, and other contextually relevant variables. Once a review is completed, an analysis of these data might reveal that a certain intervention has only been evaluated with older individuals rather than with children from birth to age 3. This might then stimulate primary research with that population. Similarly, a meta-analysis of the effects of augmentative and alternative communication (AAC) interventions in promoting generalization and maintenance in children with developmental disabilities demonstrated that much of the intervention research took place in segregated rather than inclusive settings (Schlosser & Lee, 2000); this might help support research on AAC intervention in more inclusive settings. Once such gaps are identified, they may serve as data-based rationales for establishing the need for a particular research project subsequently submitted for funding. In fact, funding agencies increasingly require that a systematic review be completed prior to initiating a new research study.
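As a simple illustration of how coded data can surface such gaps, the sketch below tallies hypothetical coding records against two of the categories mentioned above (age group and setting); all identifiers and category values are invented for the example.

    from collections import Counter
    from dataclasses import dataclass

    # Hypothetical coding record: each primary study is coded against
    # categories defined in the review's coding protocol and manual.
    @dataclass
    class CodedStudy:
        study_id: str
        age_group: str      # e.g., "birth-3", "school-age", "adult"
        setting: str        # e.g., "segregated", "inclusive"
        intervention: str

    coded = [
        CodedStudy("S01", "adult", "segregated", "AAC"),
        CodedStudy("S02", "school-age", "segregated", "AAC"),
        CodedStudy("S03", "adult", "inclusive", "AAC"),
    ]

    # Tallying the coded categories makes gaps visible: here, no study
    # evaluated the intervention with children from birth to age 3.
    print(Counter(s.age_group for s in coded))
    print(Counter(s.setting for s in coded))

In practice, such tallies would feed directly into the review's discussion of where primary research is still needed.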

Role of Systematic Reviews in Development

Development projects differ from research projects in many ways. Perhaps most importantly, development projects aim to develop a product, whether that is a new piece of software or a new assistive technology device. In developing this product, developers typically rely on three tiers of evaluation: formative evaluation, process evaluation, and outcomes evaluation (Robinson, Patrick, Eng, & Gustafson, 1998). Therefore, although development projects share some evaluative aspects with research projects, the three tiers of evaluation in development projects tend to focus on the product as the bottom line. It is also fair to say that outcomes evaluation in development projects is not expected to be as rigorous, from a scientific point of view, as it is in research studies.

NIDILRR also funds development projects, so the question arises whether systematic reviews have any role in development activities. A non-systematic search suggests that surprisingly little has been written to address this question. However, several roles may be envisioned. Systematic reviews can be used to determine risk factors, which may be described as predictors of negative outcomes. Knowledge of such risk factors could serve as an impetus for the development of better technology. For example, a systematic review of studies of premature newborns in the intensive care unit might reveal that these newborns are at particular risk for pneumonia; this information could be used to propose the development of improved incubators. Likewise, a review of studies on the effects of certain device features in AAC might lead to the development of improved AAC devices. Clearly, systematic reviews can offer data-based rationales for the need to pursue certain development projects.

As mentioned earlier, development projects also involve the evaluation of outcomes. The question arises whether a body of development projects should be synthesized using systematic review methodology. One could argue that outcomes evaluation efforts offer valuable data concerning the effectiveness of newly developed products or technologies. Hence, it might be worthwhile to synthesize these data across development projects. In order to aggregate effect size measures, however, the products or technologies as well as the outcome measures must be sufficiently homogeneous; this requirement is no different from the aggregation of research data. Otherwise one ends up with the "apples and oranges" problem, rendering a meta-analysis meaningless (a check for this problem is sketched below). For the most part, development efforts are characterized by an attempt to create something novel, something that does not yet exist. Hence it is likely that one development project differs from the next to a degree that would contraindicate aggregation. It is conceivable, however, that a specific new product or technology is being evaluated across multiple types of users, contexts, and environments, either within the same project or across several projects. Here, an aggregation of effect sizes arising from outcomes evaluation might be appropriate.

Even in situations where an aggregation of effect sizes (i.e., a meta-analysis) is not possible, systematic review methodology has something to offer development projects. For example, a team conducting a systematic review might wish to determine whether the development projects funded by NIDILRR meet certain quality standards associated with state-of-the-art development efforts. Quality standards could pertain, for instance, to evidence of formative, process, and outcomes evaluation or, more specifically, to the involvement of consumers throughout the project. Such reviews would rely on a systematic search as well as systematic data extraction methods to arrive at sound conclusions, but would omit the meta-analysis.
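Returning to the homogeneity requirement discussed above, one conventional check is Cochran's Q test together with the I² statistic, which estimates the proportion of variability across studies that reflects real heterogeneity rather than chance. The sketch below uses hypothetical effect sizes from outcome evaluations of one product across different user groups; a large I² would counsel against pooling.

    import math

    # Hypothetical (effect size d, variance) pairs from outcome
    # evaluations of one product across different user groups.
    effects = [(0.50, 0.04), (0.55, 0.05), (1.40, 0.06)]

    # Inverse-variance weights and the pooled (fixed-effect) estimate.
    weights = [1.0 / v for _, v in effects]
    pooled = sum(w * d for w, (d, _) in zip(weights, effects)) / sum(weights)

    # Cochran's Q: weighted squared deviations from the pooled effect.
    q = sum(w * (d - pooled) ** 2 for w, (d, _) in zip(weights, effects))
    df = len(effects) - 1

    # I^2: share of variability attributable to heterogeneity (0-100%).
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    print(f"Q = {q:.2f} (df = {df}), I^2 = {i_squared:.0f}%")

With these invented numbers, I² comes out near 80%, the kind of "apples and oranges" result that would contraindicate aggregation.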

Conclusions

Systematic reviews are instrumental not only for implementing evidence-based practice but also for taking stock relative to a particular question (or set of questions) and for shaping future research. For development, the primary role of systematic reviews rests with the creation of data-based rationales for newly proposed development activities. Systematic reviews may also be used to extract valuable information concerning the quality of development efforts. If certain conditions are met, development projects may even be subjected to meta-analysis.

Despite these numerous benefits, systematic reviews are no panacea. As with primary research studies, systematic reviews vary greatly in quality and hence in the trustworthiness of their outcomes and recommendations. Therefore, it is important to be able to distinguish sound systematic reviews from those that are not. A subsequent issue of FOCUS will discuss the various considerations for appraising systematic reviews.

References

Cook, D. J., Mulrow, C. D., & Haynes, R. B. (1997). Synthesis of best evidence for clinical decisions. Annals of Internal Medicine, 126(5), 376-380.

Cooper, H., & Hedges, L. V. (1994). The handbook of research synthesis. New York: Russell Sage Foundation.

Eagly, A. H., & Wood, W. (1994). Using research to plan future research. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 485-500). New York: Russell Sage Foundation.

Guyatt, G., & Rennie, D. (2002). Users' guide to the medical literature: Essentials of evidence-based clinical practice. Chicago, IL: AMA Press.

Humphris, D., Littlejohns, P., Victor, C., O'Halloran, P., & Peacock, J. (2000). Implementing evidence-based practice: Factors that influence the use of research evidence by occupational therapists. British Journal of Occupational Therapy, 63(11), 516-522.

Law, M. (2002). Evidence-based rehabilitation: A guide to practice. Thorofare, NJ: Slack.

Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Publishing Co.

Robinson, T. N., Patrick, K., Eng, T. R., & Gustafson, D. (1998). An evidence-based approach to interactive health communication: A challenge to medicine in the Information Age. Journal of the American Medical Association, 280(14), 1264-1269.

Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). London: Churchill Livingstone.

Schlosser, R. W. (2003). The efficacy of augmentative and alternative communication: Toward evidence-based practice. San Diego, CA: Academic Press.

Schlosser, R. W., & Lee, D. (2000). Promoting generalization and maintenance in augmentative and alternative communication: A meta-analysis of 20 years of effectiveness research. Augmentative and Alternative Communication, 16(4), 208-227.

Schlosser, R. W., & Raghavendra, P. (2004). Evidence-based practice in augmentative and alternative communication. Augmentative and Alternative Communication, 20(1), 1-21.

Straus, S. E., Richardson, W. S., Glasziou, P., & Haynes, R. B. (2005). Evidence-based medicine: How to practice and teach EBM (3rd ed.). Edinburgh: Elsevier Science.
