Training and Technical Assistance for NIDILRR Grantees Interested in Doing a Systematic Review
- The Campbell Collaboration is awarding grants of up to $40,000 for new systematic reviews. Deadline: March 29, 2016
- May 3-4, Alexandria, VA: Participate in "Systematic Reviewing for Evidence-based Practice: An Introductory Workshop" (no charge; please register by April 17). Information/Register here
- Technical Assistance for Grantees: Experienced systematic review authors are available to help NIDILRR grantees who are working on systematic reviews.
More information about KTDRR's TA Services
SAVE THE DATES! Online Workshop on Scoping Reviews
Save the dates for an upcoming workshop focusing on scoping reviews. Dr. Chad Nye and Dr. Oliver Wendt are leading a two-part live online workshop that will take place on April 27 and May 25, 3:00-4:30 PM (ET). More information coming soon!
SAVE THE DATE!
2016 Online KT Conference: October 24, 26, and 28, 2016
The Center on KTDRR sponsors an annual Online KT Conference for NIDILRR-funded grantees and others, by invitation. This conference is designed to address strategies in the planning and implementation of effective and efficient KT approaches. The 4th Online Conference will take place on October 24, 26, and 28, 2016. Check back to the 2016 Conference Home Page for themes, speakers, and registration information.
This issue of KT Update presents another in a series of brief articles by Dr. Marcel Dijkers. This article highlights four types of abuse of the term and concept "evidence".
Four Types of Evidence Abuse
Evidence-based medicine (EBM) had its start in the late 1980s to early 1990s as a reaction to the then-prevailing "expert-based medicine," which had (self-appointed) experts writing on what physicians needed to do or must not do, opining based on their experience and often in defiance of the existing empirical evidence (Dijkers, Murphy, & Krellman, 2012). According to Smith and Rennie (2014), earlier related efforts were the "critical appraisal" of published research and "clinical epidemiology"; both were attempts to let empirical evidence speak loudly and to believe the experts only if they appropriately incorporated valid evidence into their pronouncements. The term "evidence-based medicine" was coined by Gordon Guyatt (1991). In disability and rehabilitation disciplines, professional leaders followed suit and developed evidence-based practice (EBP; Dijkers et al., 2012), which to date has not deviated much from EBM in goals or methods.
The claim of the EBP/EBM proponents was that evidence from empirical research, when published, was used haphazardly, if at all. It was not carefully collected, qualified, synthesized, and used to improve patient/client care. If there was synthesis, it was qualitative and subjective (Dijkers & Task Force on Systematic Reviews and Guidelines, 2009a), the result of filtering through the brain of the "experts," who might have their blind spots, biases, and even financial and intellectual conflicts of interest. The EBP/EBM methods developed over the last two decades emphasized protocols to
- specify methods of collecting relevant studies;
- carefully evaluate primary research to place it in an appropriate evidence hierarchy;
- create evidence tables to lay out the quality, quantity, and variety of the evidence relevant to a clinical question;
- qualitatively and quantitatively synthesize (meta-analyze) the evidence using forest plots and similar devices; and
- inform clinical practice using easily grasped summary measures such as the number needed to treat (NNT) and the number needed to harm (NNH), as sketched below.
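As a concrete illustration of that last point, the NNT/NNH arithmetic reduces to taking the reciprocal of the absolute risk difference between trial arms. The following minimal sketch (with hypothetical event rates and a made-up helper name) shows the computation; a risk difference of zero yields an infinite NNT, the "no better than doing nothing" benchmark that reappears later in this article.

```python
# Minimal sketch of the NNT/NNH arithmetic; all rates are hypothetical.

def number_needed(control_rate: float, treated_rate: float) -> float:
    """Patients to treat for one additional outcome (NNT or NNH).

    Computed as 1 / |risk difference|; a zero risk difference means the
    treatment performs no better than the comparator, so NNT is infinite.
    """
    risk_difference = control_rate - treated_rate
    if risk_difference == 0:
        return float("inf")
    return 1 / abs(risk_difference)

# Benefit: the bad outcome occurs in 30% of controls vs. 20% of treated
# patients, so treating about 10 patients prevents one event (NNT = 10).
print(round(number_needed(0.30, 0.20)))  # 10

# Harm: an adverse event occurs in 5% of controls vs. 25% of treated
# patients, so one of every 5 treated patients is harmed (NNH = 5).
print(round(number_needed(0.05, 0.25)))  # 5
```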
In this article, I aim to highlight four types of abuse of the term and concept "evidence" and of evidence itself—abuse committed by people inside and outside the EBM/EBP movement.
The first type of evidence abuse is the gratuitous use of the label "evidence-based." In the beginning, there was much criticism of EBM and EBP. Perhaps the most important criticism was that the evidence was used to run roughshod over the values and preferences of patients and clients, and maybe even over those of clinicians. In response, the definition of EBM/EBP was revised. Initially, it read something like: "Evidence-based medicine…is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research" (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 4). Later definitions read more like: "... access to, and application and integration of evidence to guide clinical decision making to provide best practice for the patient/client. Evidence-based practice includes the integration of best available research, clinical expertise, and patient/client values and circumstances related to patient/client management, practice management, and health care policy decision making" (American Physical Therapy Association, 2007).
Patient values have been built into the EBM process at various critical points and in various forms. As set forth by Guyatt et al. (2011), the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach tells systematic reviewers and guideline developers to focus on and give greatest importance to evidence for those outcomes that are most valued by patients or clients. Decision-making aids have been developed to help consumers of care and services weigh and select interventions with different balances of pros and cons. There also are plain language summaries of evidence as well as attempts at visualization and infographics to translate EBP parameters into consumer-friendly information.
EBM and EBP were developed and taken up by some of the brightest and most articulate senior clinician-scholars and methodologists, and what was a fringe phenomenon in 1991 now is mainstream and a sine qua non. Government agencies, professional organizations, university departments, and independent practitioners have spent millions of dollars and thousands of hours performing systematic reviews, developing practice guidelines based on these reviews, and fine-tuning the methods for creating both the reviews and guidelines. And they have been successful beyond what the pioneers may have expected. There still are rearguard battles that rehash arguments from 20 years ago, as laid out by Sackett et al. (1996), but it would be surprising today to see a leader in any profession stand up and condemn EBM/EBP as a useless fad.
On the contrary: now everything has to be "evidence-based" or it is no good. So people have started slapping the label "evidence-based" on almost everything they do and create. I come across this phenomenon while peer reviewing articles submitted for journal publication. To protect the innocent, I will not give names, but the last instance of gratuitous use of the "evidence-based" brand I saw was typical: "…until more evidence-based research is available." Strictly speaking, only practice can be evidence-based, and it is evidence-based only if it is founded on evidence, whether located and synthesized by the practitioner herself or by others in the form of a critically appraised topic or, even better, a systematic review. I admit, this abuse of the term "evidence-based" is just a minor annoyance, caused by people who misapply a term to make themselves look more grounded, scientific, and/or state of the art.
Creating evidence is not evidence based—although researchers may use evidence to write parts of their research protocols; for example, "What does the literature say is the best outcome measure for construct X?" It has even been recommended (and supposedly some research funding agencies in Europe are now taking this up) that no randomized clinical trial (RCT) or similar clinical investigation be proposed without a prior systematic review concluding that there indeed is an evidence (i.e., knowledge) gap, which the investigator is prepared to fill. Similarly, developing a new treatment program in response to patient need is not evidence based unless the developer actually uses clinical practice guidelines based on systematic reviews to shape the program.
A second type of evidence abuse is that committed by third-party payers, who use sleight of hand to turn lack of evidence as to the value of an intervention into evidence that the intervention lacks value (Katz, Ashley, O'Shanick, & Connors, 2006). They do not have solid evidence to point to in the form of well-performed studies showing that the intervention in question indeed does not produce results better than doing nothing (NNT = ∞) or generates results even worse than those of doing nothing (a finite NNH). This payer policy is especially harsh when the bar for what constitutes acceptable evidence is set very high—at a minimum, a single RCT. And for this we may thank EBP adherents, for instance in the Cochrane Collaboration, who have systematically disregarded all evidence not produced by RCTs, refusing even the admittedly weaker evidence from observational studies when no RCTs exist (Dijkers & Task Force, 2009b). Even if we are convinced that only the best possible evidence should be the basis for action, it is far from established that such evidence is always provided by RCTs, as suggested by Berlin and Golub (2014) and Landsman (2006).
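To see why this switch is a sleight of hand, consider what a small "negative" trial can actually show. The minimal sketch below (using hypothetical trial counts and the standard Wald interval) computes a 95% confidence interval for the risk difference; the interval is far too wide to rule out a meaningful benefit or harm, so "no significant difference" here is absence of evidence, not evidence of absence.

```python
# Minimal sketch, with hypothetical numbers: a small trial that "finds
# nothing" usually cannot show that an intervention lacks value.
import math

def risk_difference_ci(events1, n1, events2, n2, z=1.96):
    """Wald 95% confidence interval for a difference in two proportions."""
    p1, p2 = events1 / n1, events2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# Hypothetical underpowered trial: 6/20 events among controls vs. 4/20
# among treated patients.
low, high = risk_difference_ci(6, 20, 4, 20)
print(f"95% CI for the risk difference: ({low:.2f}, {high:.2f})")
# -> roughly (-0.17, 0.37): compatible with harm, no effect, or a large
#    benefit, so the trial cannot demonstrate that the intervention
#    does not work.
```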
A third type of evidence abuse has been brought up by the EBM Renaissance Group (Greenhalgh, Howick, & Maskrey, 2014). They claim that the "evidence-based brand" has been distorted by vested commercial interests—Big Pharma and the medical device industry. These entities have the money to pay for research and use this power to set the research agenda. The commercial interests may medicalize common human experiences (male baldness, for example), come up with treatments, and produce the research protocols that academic researchers follow to mine the ore from which Big Pharma staff distill the evidence supporting these treatments. Even in the case of nonfictitious disorders, they create the evidence they need to sell their products, using surrogate end points (outcome measures), inclusion/exclusion criteria that produce samples without comorbidities, comparison against placebo only, selective publication of study results, and more. EBP adherents may fight back, using such measures as registration of studies before enrollment starts, network meta-analysis, etc., but it is a rearguard fight against a well-capitalized enemy: "Evidence based medicine's quality checklists and risk of bias tools may be unable to detect the increasingly subtle biases in industry sponsored studies" (Greenhalgh et al., 2014, p. 2).
A fourth type of abuse, also brought up by the EBM Renaissance Group, is that evidence, even good evidence, is displacing the common sense, common touch, and common feeling of the clinician in his/her dealings with patients and clients. They claim that the health care bureaucracy translates the evidence into rules for clinician behavior, and, as happens in bureaucracies, the means (here, the rules) supplant the goals (here, good patient care). Clinicians are incentivized to follow the rules, and no one should be surprised that some are seduced into rule-following while disregarding what patients want or value.
The EBM Renaissance Group has as its reference point the "socialized medicine" of England, but one could wonder whether the same is happening in the United States (e.g., with the incentives the Centers for Medicare & Medicaid Services is building into its Medicare reimbursement system, in large health maintenance organizations, and especially in the nation's largest health care system, the Veterans Administration). Greenhalgh and her Renaissance colleagues stress that optimal care is individualized to the client or patient, who meets with an experienced senior clinician who has the ability and opportunity to sit down with the person and discuss what the presenting problem really is and what matters to this consumer. This dialogue can and should be evidence informed but not evidence driven, and certainly not distorted by prescriptive rules as to which tests to order in what sequence.
This is throwing down the gauntlet. EBM has hardly reached adulthood and is again under serious attack, this time from within. Among the members of the EBM Renaissance Group I recognize the names of a number of prominent EBM theoreticians and researchers, such as Glasziou and Heneghan. Or is this not a fight against EBM per se but one against the commercial interests and bureaucrats who have hijacked EBP or, more specifically, EBM?
Evidence abuse. The misuse of the term "evidence-based" is just an annoyance, experienced only by language or methods purists. The switch from "absence of evidence" to "evidence of absence" is of great concern, especially in fields such as rehabilitation, where often there is no evidence—or the existing evidence is weak—when judged by traditional standards. The hijacking of the EBP brand by commercial interests is disconcerting but may not be a major problem for disability and rehabilitation practitioners, because there is so little in what they do that can be monetized. Turning the results of evidence reviews into a straitjacket for clinicians may also be less of a concern to rehabilitation and disability practitioners, because so much of what they do is individualized, and it is clear that simple (or simplistic) rules just cannot capture the complexity of interdisciplinary care—a complex intervention. But it is a trend worth monitoring.
References
American Physical Therapy Association. (2007). Vision 2020. Retrieved from http://www.apta.org/vision2020/
Berlin, J. A., & Golub, R. M. (2014). Meta-analysis as evidence: Building a better pyramid. JAMA: The Journal of the American Medical Association, 312(6), 603–605.
Dijkers, M. P. J. M., & Task Force on Systematic Reviews and Guidelines. (2009a). The value of traditional reviews in the era of systematic reviewing. American Journal of Physical Medicine and Rehabilitation, 88(5), 423–430. doi:10.1097/PHM.0b013e31819c59c6
Dijkers, M. P. J. M. for the NCDDR Task Force on Systematic Review and Guidelines. (2009b). When the best is the enemy of the good: The nature of research evidence used in systematic reviews and guidelines. Austin, TX: SEDL. Retrieved from https://ktdrr.org/ktlibrary/articles_pubs/ncddrwork/tfsr_best/
Dijkers, M. P., Murphy, S. L., & Krellman, J. (2012). Evidence-based practice for rehabilitation professionals: Concepts and controversies. Archives of Physical Medicine and Rehabilitation, 93(8 Suppl), S164–S176.
Greenhalgh, T., Howick, J., & Maskrey, N. for the Evidence Based Medicine Renaissance Group. (2014). Evidence based medicine: A movement in crisis? BMJ, 348, g3725.
Guyatt, G. H. (1991). Evidence-based medicine. ACP Journal Club, 114(2), A16.
Guyatt, G., Oxman, A. D., Akl, E. A., Kunz, R., Vist, G., Brozek, J., ... Schünemann, H. J. (2011). GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology, 64(4), 383–394.
Katz, D. I., Ashley, M. J., O'Shanick, G. J., & Connors, S. H. (2006). Cognitive rehabilitation: The evidence, funding and case for advocacy in brain injury. McLean, VA: Brain Injury Association of America. Retrieved from http://www.biausa.org/_literature_49035/cognitive_rehabilitation_position_paper
Landsman, G. H. (2006). What evidence, whose evidence?: Physical therapy in New York State's clinical practice guideline and in the lives of mothers of disabled children. Social Science & Medicine, 62(11), 2670–2680.
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312(7023), 71–72.
Smith, R., & Rennie, D. (2014). Evidence-based medicine—An oral history. JAMA: The Journal of the American Medical Association, 311(4), 365–367.