American Institutes for Research

KT Update

An e-newsletter from the Center on Knowledge Translation for Disability and Rehabilitation Research

Vol. 6, No. 2 - February 2018


Stay Connected:
www.ktdrr.org

KTDRR Facebook page | KTDRR Twitter page

Send email to:
ktdrr@air.org




The contents of this newsletter were developed under grant number 90DPKT0001 from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR). NIDILRR is a Center within the Administration for Community Living (ACL), Department of Health and Human Services (HHS). The contents of this newsletter do not necessarily represent the policy of NIDILRR, ACL, or HHS, and you should not assume endorsement by the Federal Government.

Copyright © 2018 by American Institutes for Research

In-Person Workshop on Outreach to Policymakers for NIDILRR Grantees

KTDRR is sponsoring a free post-NARRTC Conference workshop for National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) grantees on Wednesday morning, March 28, 2018. Bobby Silverstein will present research-based strategies for networking and sharing research results with representatives on Capitol Hill and in grantees’ home states. Restrictions on lobbying will also be addressed.

A webcast will be held on Thursday, Feb. 22, at 3:00 p.m. (EST) to help grantees prepare for the workshop. Registrants will receive details about participating in the webcast.

  • Date/time: March 28, 2018, 8:30 a.m.–12:00 p.m. (Registration and refreshments
    8:30–9:00 a.m.)
  • Location: Plaza D, The Ritz-Carlton Pentagon City, 1250 S. Hayes St., Arlington, VA 22202
  • Presenter: Bobby Silverstein, Principal at Powers Pyles Sutter & Verville PC

Registration: www.surveygizmo.com/s3/3995885/Register-Workshop-032818

Latest from the EPPI-Centre

Listen to a series of brief webisodes presented by staff members of University College London’s Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre). The pre-recorded webisodes each introduce a product, tool, or project of the EPPI-Centre and include a short discussion of the topic and the outcomes of its use. Air dates: February 7, 14, and 21, 2018 (3:00 p.m. EST). Register here!


KTDRR’s Community of Practice on Evidence for Disability and Rehabilitation Research

New KTDRR funding includes continued support for the Community of Practice on Evidence for Disability and Rehabilitation Research (CoP-EDR). The focus of the CoP-EDR is on the creation, evaluation, and use of evidence and related topics that members identify. Over the next 5 years, the CoP will focus on the stages of research identified by NIDILRR as well as on standards, guidelines, and methods for evidence reporting.

Past members of the CoP, as well as anyone interested, will be invited to participate in the next teleconference on March 12 at 12:00 p.m. Eastern. Please contact Joann Starks (jstarks@air.org) if you would like more information about the CoP. Learn more about the CoP-EDR

John Westbrook Prize

John Westbrook was a leading disability researcher active in promoting the use of evidence. John initiated the creation of the Campbell Collaboration’s Disability Coordinating Group and served as principal investigator for the Center on Knowledge Translation for Disability and Rehabilitation Research (KTDRR). He also served as co-chair of Campbell’s Knowledge Translation and Implementation Coordinating Group.

John passed away in December 2016. As a strong supporter of Campbell, he left a bequest to support its work in knowledge translation (KT). Campbell used this bequest to create the John Westbrook Memorial Fund. This fund supports the John Westbrook Prize, which is awarded annually in recognition of outstanding contributions to KT and the dissemination and implementation of evidence. John was given the award posthumously in 2017.

The Campbell Collaboration has launched a fundraising appeal to keep the fund going strong. If the size of the fund permits, occasional awards may also be made to support the production and use of systematic reviews. Should you wish to contribute to the fund, please use the following link: https://campbellcollaboration.org/donation.

2017 Knowledge Translation Conference Archive Now Available

Archived presentations from the 2017 Online KT Conference are now available: https://ktdrr.org/conference2017/expo/conf_materials.html

The theme for 2017 was “Overcoming Barriers to Outreach.” Archived resources include captioned YouTube videos, edited transcripts, and downloadable copies of presentation files. The conference is preapproved for up to 9.5 CRC-CEUs through Oct. 29, 2018.


What Does Knowledge Translation Look Like in
NIDILRR Grantee Contexts?

Most NIDILRR grantees understand that KT goes beyond conference presentations and journal articles. But given constraints on time and other resources, it can be difficult to think through what else can be done to promote the use of project findings and products. According to the 2017 KT Centers’ Community of Practice survey of NIDILRR grantees, 58.7% of principal investigators who responded reported “assessing barriers to the use of your NIDILRR-funded research” as an area where training is needed.

KTDRR and AIR’s Center on Knowledge Translation for Employment Research (KTER) offer examples of how NIDILRR grantees “do KT.” Check out these resources to help you think through how these examples might apply to your project:

KTDRR KT Casebook

Modeled after an approach taken by the Canadian Institutes of Health Research, KTDRR develops and disseminates an online casebook that showcases NIDILRR grantees’ KT activities. The purposes of the KT Casebook are to (1) share examples of successful and less successful KT strategies for others to learn from and build on; (2) demonstrate change created through KT activities; (3) help identify factors that affect KT, including barriers and facilitators to implementing KT strategies in a variety of settings; and (4) provide documentation of the outcomes and impact of KT activities. To develop each iteration of the casebook, KTDRR staff gather basic information from NIDILRR grantees about their KT activities and outcomes. This includes the context and background of a project; a description of the KT challenge; a description of the type of KT activities carried out, such as the engagement of knowledge users; a discussion of how things worked and lessons learned; and a report on the impact of the KT activities (Ilott, Gerrish, & Booth, 2011).

KTDRR uses several strategies to identify and recruit KT examples. Staff review grantees’ project abstracts and submissions to the survey of grantee KT activity that the KT Centers' Community of Practice administers each year, attend grantee presentations, and consult with KTDRR’s NIDILRR project officer to identify projects of interest. Grantees can also self-nominate. KTDRR staff work with entrants to develop case descriptions that include links to pertinent information and contact information for future follow-up.

Will you be at this year’s NARRTC meeting on March 27? Representatives from several projects highlighted in the Casebook will be presenting in a panel that KTDRR organized, KTDRR’s Knowledge Translation Casebook: NIDILRR Grantees Showcase Their KT Activities. Some cross-cutting themes to be discussed include the importance of consumer involvement and partnerships, how to tailor knowledge to specific audiences, and being patient with the process—KT takes time! Grantees can also learn about how they might participate in upcoming editions of the Casebook during the new award cycle.

Can’t make NARRTC? Sign up to receive updates from KTER Today so you can attend upcoming broadcasts in KTER’s webcast series about the Projects Translating Research from Disability and Rehabilitation Research Into Practice that NIDILRR funds under its Disability and Rehabilitation Research Projects Program. Existing offerings include a 2016 webcast introducing audiences to the first three projects funded under this mechanism, with presentations from Marsha Langer Ellison (TEST—Translating Evidence to Support Transitions: Improving Outcomes of Youth in Transition With Psychiatric Disabilities by Use and Adoption of Best Practice Transition Planning project), Lynn Worobey (Translating Transfer Training and Wheelchair Maintenance Into Practice project), and Mark Harniss (Translating Evidence About TBI into Practice Within Washington State Department of Corrections project). In 2017, Sloane Huckabee gave KTER an update on TEST project activities. Upcoming webcasts include more information about TEST project activities in its third year, and news from Dr. Harniss about his project’s progress.

If you can’t make these events, get in touch and we can fill you in. No-cost assistance is available to NIDILRR grantees. E-mail us at ktdrr@air.org or fill out this technical assistance request form: https://ktdrr.org/ta/index.html.


In this issue of KT Update, Dr. Marcel Dijkers shares information about duplication of evidence within systematic reviews. How can authors ensure that evidence drawn from multiple sources is not counted twice? This article describes the problem and some possible solutions for deduplication.

Duplicate Publications and Systematic Reviews:
Problems and Proposals

Marcel Dijkers, PhD, FACRM
Icahn School of Medicine at Mount Sinai
Department of Rehabilitation Medicine

 [ Download PDF version (194 KB) ]

“Double dipped? What…what…what are you talking about?” George Costanza 1


The Popularity of Systematic Reviews

Disability and rehabilitation researchers increasingly use systematic reviews (SRs) to summarize the evidence that exists with regard to a particular issue, such as the effectiveness of an intervention, the quality of an outcome measure, or the prognosis after a sudden-onset disorder. The National Rehabilitation Information Center, the library of the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), which collects the articles, reports, curricula, guides, and other publications and products of the research projects funded by NIDILRR, contains the following numbers of documents submitted in recent years that have “systematic review” in their title or abstract (search conducted July 23, 2017):

2010: 78
2011: 85
2012: 90
2013: 110
2014: 142
2015: 122
2016: 117
2017: 49

A total of 793 documents have been published over 7.5 years, for an average of 105 per year. Some of these documents may not be SRs, but papers reporting on the methodology of SRs, or otherwise referring to SRs. A scan of the titles of the documents published in 2017 revealed that 41 were SRs (with or without meta-analysis), three described SR methodology, and five were “other.”

Footnote
1. David, L. (Creator), Seinfeld, J. (Creator), Mehlman, P. (Writer), & Cherones, T. (Director). (1993). The implant [Television series episode, season 4, episode 19]. In L. David, A. Scheinman, G. Shapiro, & H. West (Executive Producers), Seinfeld. Beverly Hills, CA: West/Shapiro.

The Problem of Duplicate Publication

SRs aim to collect all the relevant evidence (although it is common that non-English publications are excluded, and other convenience exclusions are made), carefully assess the quality of the study that generated each piece of evidence, and then combine the findings across studies to answer the question(s) that led to the review. Combining can be done quantitatively (producing a meta-analysis) or qualitatively. In both approaches, weighting of the studies (based on sample size, study quality, or other factors) may play a role. It goes without saying that each study should be included once only, which means that each unit of analysis (most commonly, a patient, client, or research subject) is counted only once.

Using multiple bibliographic databases (some say at least three should be consulted) to find potential studies is proper SR procedure, which may be supplemented by ancestor and descendant searches, hand searching of key journals, searches of the gray literature, contacts with authors, and other means of finding published and unpublished reports (Task Force on Systematic Review and Guidelines, 2013). This searching often leads to multiple inclusions of the same paper, which need to be deduplicated, a stage typically shown in the CONSORT flow diagram depicting the various steps in the identification of publications to be extracted (Schulz, Altman, Moher, & CONSORT Group, 2010). A reference manager such as Endnote or RefWorks is typically used to do this (Qi et al., 2013). Given the poor performance of reference managers, which require much double-checking of the results by the SR authors and deletion of duplicates that slipped through, other approaches have been developed (Rathbone, Carter, Hoffmann, & Glasziou, 2015).
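What a deduplication tool does at this stage is essentially normalized fuzzy matching of titles. As a concrete illustration, here is a minimal sketch in Python (the titles are invented for the example; no real tool’s code or interface is reproduced): titles are normalized, and near-identical pairs are flagged for human review rather than deleted automatically.

    from difflib import SequenceMatcher
    from itertools import combinations

    def normalize(title):
        """Lowercase and drop punctuation so that trivial formatting
        differences do not hide duplicate titles."""
        return "".join(c for c in title.lower() if c.isalnum() or c.isspace())

    def flag_possible_duplicates(titles, threshold=0.9):
        """Return index pairs whose normalized titles are highly similar.
        Flagged pairs still need checking by the SR authors."""
        flagged = []
        for (i, a), (j, b) in combinations(enumerate(titles), 2):
            score = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
            if score >= threshold:
                flagged.append((i, j, round(score, 2)))
        return flagged

    # Hypothetical search results retrieved from two databases:
    titles = [
        "Exoskeleton-assisted walking after spinal cord injury: a pilot study",
        "Exoskeleton assisted walking after spinal cord injury - a pilot study.",
        "Wheelchair maintenance training: a randomized trial",
    ]
    print(flag_possible_duplicates(titles))  # flags the first two titles as a pair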

Reference managers and these other approaches serve only to detect duplicate publication titles. A second step is needed to determine whether the same material has been published across multiple papers. There are legitimate reasons for conveying study results in multiple papers: for instance, follow-up outcomes are reported after the initial outcomes have been published, or a different analytical method (preferably one not available when the first paper was written) is used to take another look at the sample or a subgroup (Jamrozik, 2004). These legitimate additional papers generally refer to the earlier publication(s) so that readers (and SR authors) can easily find all reports resulting from a single study. However, “publish or perish” and similar pressures on researchers have led to many less legitimate follow-on reports, ranging from pure duplicates (although possibly with slightly different wording, or translated) to multiple publications each addressing a different outcome (the “salami science” approach to padding one’s résumé). One study (Abby, Massey, Galandiuk, & Polk, 1994) found that one first author had 83 publications that “expressed the same theme 83 different ways” (p. 107); in three of these, the Methods, Results, tables, and figures were virtually identical.

If the outcomes for one study sample are reported twice without detection by an SR author, this may have various impacts on the findings of the SR. In the case of intervention studies, duplicate entry in the evidence table of a study with a low effect size may bring down the pooled effect size. In the case of studies with a high effect size, the opposite may happen. Even double counting a study with an effect size that is about average for the entire set of studies has a deleterious consequence: the confidence interval around the pooled effect size narrows, potentially changing the interpretation of the clinical significance of the intervention. Similar problems may result from “double dipping” in SRs of a diagnostic measure, prognosis, or economic impact. Given the strong lure of “p < .05” for authors and editors, “significant” studies are the ones most likely to have duplicate reports and thus to be found. As the Cochrane Handbook for Systematic Reviews of Interventions states, “Studies with significant results are more likely to lead to multiple publications and presentations… which makes it more likely that they will be located and included in a meta-analysis” (Higgins & Green, 2011, section 10.2.2.1).
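A small worked example may help quantify this. The sketch below (Python; the effect sizes and standard errors are invented for illustration, not drawn from any actual review) pools three studies using standard inverse-variance fixed-effect weighting, then double counts the study whose effect is average for the set: the pooled estimate stays put, but the 95% confidence interval narrows by roughly 13%, inflating the apparent precision.

    import math

    def pooled_fixed_effect(effects, ses):
        """Inverse-variance fixed-effect pooling: each study is
        weighted by 1/SE^2; returns (pooled effect, 95% CI)."""
        weights = [1.0 / se ** 2 for se in ses]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

    # Three hypothetical studies; the second has the average effect (0.30).
    effects, ses = [0.10, 0.30, 0.50], [0.15, 0.15, 0.15]

    print(pooled_fixed_effect(effects, ses))  # 0.30, CI roughly (0.13, 0.47)
    # Double counting the average study leaves the estimate at 0.30
    # but narrows the CI to roughly (0.15, 0.45).
    print(pooled_fixed_effect(effects + [0.30], ses + [0.15]))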

Frequency and Types of Duplicate Publication

This issue has been on the radar of SR authors and SR methodologists for a long time (Bailey, 2002; Gøtzsche, 1989; Tramèr, Reynolds, Moore, & McQuay, 1997). von Elm, Poglia, Walder, and Tramèr (2004) analyzed 141 SRs in the anesthesia and analgesia domain, and found that 56 of these had to address duplicate publication issues (although 14 had not mentioned it in their report but responded positively to the authors’ queries). These 56 systematic reviews included 1,131 main articles and excluded 103 duplicates that originated from 78 main articles. “Sixty articles were published twice, 13 three times, 3 four times, and 2 five times” (von Elm et al., 2004, p. 974). The authors analyzed the papers involved, and identified six duplication patterns:

  1. The sample and outcomes reported were the same (n = 21 pairs). In three-quarters of these instances, the secondary article did not refer to the prior publication; 29% were translations.
  2. Two or more prior articles were “assembled to produce yet another article” (n = 16) (von Elm et al., 2004, p. 977).
  3. The same sample was used but different outcomes were reported (n = 24).
  4. A larger sample (additional cases) was used, but identical outcomes were described (n = 11).
  5. A smaller sample (subgroup of the cases in a large trial) and identical outcomes were reported (n = 11).
  6. Different samples and different outcomes were conveyed (n = 20). “In pattern [6] duplicates, both samples and outcomes were different from the main article. Confirmation of duplication was only possible through contact with the original authors” (von Elm et al., 2004, p. 977).

von Elm et al. (2004) concluded, “Duplication goes beyond simple copying. Six distinct duplication patterns were identified after comparing study samples and outcomes of duplicates and corresponding main articles. Authorship was an unreliable criterion” (p. 974).

Suggestions for Systematic Review Authors

The key phrase here for prospective SR authors is, “Authorship was an unreliable criterion.” The authors of the Cochrane Handbook (Higgins & Green, 2011, section 10.2.2.1) state: “There are examples where two articles reporting the same trial do not share a single common author…. Thus, it may be difficult or impossible for review authors to determine whether two papers represent duplicate publications of one study or two separate studies without contacting the authors, which may result in biasing a meta-analysis of this data.” These authors also assert that it can be difficult to detect duplicate publication, and detective work may be required on the part of SR authors (Higgins & Green, 2011, section 7.2.2). According to the Handbook, some of the most useful criteria for comparing reports are:

  • Names of authors (most duplicate reports have authors in common, although that is not always the case);
  • Location and setting of the study (particularly if institutions, such as hospitals, are named);
  • Unambiguous details of the interventions (e.g., dose, frequency);
  • The number of participants and the baseline data reported for them; and
  • The date and duration of the study, including recruitment periods and follow-up periods.

(I suggest adding two more criteria: grant number and granting agency, and the names of uncommon outcome measures.) Higgins and Green (2011) indicate that where uncertainties remain after considering these and other factors, it may be necessary to correspond with the authors of the reports.

Despite the commonness of duplicate publication and the widespread knowledge of its existence, the issue is not always addressed in handbooks for SR authors. For instance, the Joanna Briggs Institute (JBI) Reviewers’ Manual is “designed to provide authors with a comprehensive guide to conducting JBI systematic reviews. It describes in detail the process of planning, undertaking and writing up a systematic review of qualitative, quantitative, economic, text and opinion based evidence” (Joanna Briggs Institute, 2014, p. 9). Even so, the JBI Reviewers’ Manual describes title deduplication using a reference manager, but it does not explain how to address duplicate publication.

The following eight steps are recommended to identify duplicate publication:

Step 1. Create an alphabetical list of all authors (not just the primary ones) of all candidate studies, with all coauthors for each author provided on the same line. Table 1 shows an example, with the entries for two hypothetical papers, Cleary, Clarron, and Jones (2012) and Jones, Williams, and Smith (2015).

Table 1. Example of Alphabetical Author List

First Author    Coauthors
Clarron         Cleary, Jones
Cleary          Clarron, Jones
Jones           Cleary, Clarron
Jones           Williams, Smith
Smith           Jones, Williams
Williams        Jones, Smith

This type of table reveals that Jones may have published the same material twice, once as a coauthor of Cleary, and once as first author with the assistance of Smith and Williams.

Step 2. Analyze this list to identify authors who appear on two or more publications, as first author or coauthor.
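For reviews with many candidate studies, steps 1 and 2 lend themselves to partial automation. The following sketch (Python, using the hypothetical papers from Table 1; a real review would substitute its own bibliographic records) builds the author index and flags every author who appears on two or more candidate papers. Steps 3 through 6 remain manual detective work.

    from collections import defaultdict

    # All authors (not just the first) of each candidate paper,
    # mirroring the hypothetical papers behind Table 1.
    papers = {
        "Cleary2012": ["Cleary", "Clarron", "Jones"],
        "Jones2015": ["Jones", "Williams", "Smith"],
    }

    # Step 1: index every author, first author or coauthor, to their papers.
    author_index = defaultdict(list)
    for paper_id, authors in papers.items():
        for author in authors:
            author_index[author].append(paper_id)

    # Step 2: flag authors appearing on two or more candidate papers;
    # their papers get the full-text scrutiny of steps 3-6.
    for author, ids in sorted(author_index.items()):
        if len(ids) >= 2:
            print(author, "appears on:", ids)
    # Prints: Jones appears on: ['Cleary2012', 'Jones2015']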

Step 3. Obtain the full texts of the papers involved.

Step 4. Analyze the references the authors make to earlier, later, or parallel papers by any of the team members or associates.

Step 5. If no such reports are cited, analyze, in every suspected paper (i.e., every paper with at least one shared author), what is written about the sample, baseline demographic and clinical data, the intervention (if applicable), and key outcome measures. Determine whether there is any double reporting and, if so, of which von Elm pattern.

Step 6. If there is no clear-cut evidence of an overlap in subjects and no clear-cut evidence of a lack of overlap, write to the authors for clarification.

Step 7. Based on the results of this detective work, in the SR use the best report (von Elm pattern 1); a combination of the findings in two or more reports (pattern 3); the later paper with more subjects (pattern 4); the original paper, unless the subsample analysis has more useful data (pattern 5); or a combination of the articles (pattern 2). With a response from the authors, pattern 6 duplication instances presumably can be reassigned to patterns 1–5. No guidance can be given when there is no response from any of the (co)authors involved; SR authors should take their best guess, but it would not be wrong for them to add a comment such as: “Report x and report y possibly describe the same sample; however, this could not be confirmed.”
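For quick reference during data extraction, the step 7 rules can be collapsed into a simple lookup table (a paraphrase in code form, purely illustrative):

    # Handling of duplicates by von Elm pattern (paraphrase of step 7).
    HANDLING_BY_PATTERN = {
        1: "Use the best single report; discard the duplicate(s).",
        2: "Use a combination of the assembled articles.",
        3: "Combine the findings of the reports (same sample, different outcomes).",
        4: "Use the later paper with the larger sample.",
        5: "Use the original paper, unless the subsample analysis is more useful.",
        6: "Contact the authors, then reassign to patterns 1-5 where possible.",
    }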

Step 8. Steps 1–7 are predicated on overlapping authorships. Just in case there are multiple reports on a study that do not share a single author, at least one SR author should read all papers still being considered as potential sources of evidence. This SR author should be armed with a high level of suspicion to ferret out remaining duplications. Any suspicious twins or triplets should be subjected to the step 5 and, as necessary, step 6 actions.

Some Personal Experience

Two current projects have brought the importance, and the difficulty, of detecting duplicate publication to my attention. The first project is an SR of the frequency of, reasons for, costs of, and impacts of rehospitalization of individuals with spinal cord injury (SCI) after discharge from initial inpatient rehabilitation. A key set of papers comprises those based on data from the NIDILRR-supported SCI Model Systems, as deposited in the SCI National Data Base (NDB). The question “In the past year, have you been hospitalized? (and if so, for what and how long?)” has been part of the SCI NDB since 2000, and in that period there have been 11 publications by varying author teams. Here, determining that these publications are related is not difficult because each publication refers to the SCI NDB as the source of the data.

The difficult issue is what to do with the papers. Generally, the most recent publication tends to be the largest because it contains more follow-ups for more patients (Year 1, 5, 10, 15, and so on, after initial rehabilitation discharge). However, specifically what is analyzed, and how, may vary from one report to the next. (Were multiple readmissions within a year all considered? Added up to get a cumulative length of stay?) Data from multiple publications may be of relevance, even if there is overlap of patients and follow-up years.

This issue occurs with all large and longitudinal databases to which geographically dispersed researchers contribute new cases as well as follow-ups on previously registered cases, all of whom have a right to analyze the entire data set. Within the NIDILRR-supported SCI, traumatic brain injury (TBI), and burn model systems, there is some control over the unchecked growth of duplicative analyses: authors or author groups considering an analysis need to inform their colleagues of their plans. In the TBI Model Systems, this process is known as the notification system. All insiders and outsiders wanting to use the database need to inform all current project directors of the nature of their analysis plans. The project directors can object to an analysis because another group is already working on the same or an overlapping question, or has published a paper in recent years.

The second project colleagues and I are finishing is a systematic review of systematic reviews (an overview of SRs) of the use of exoskeletons with patients with neurological disorders (mostly stroke and SCI). Being familiar with the SCI literature (Bryce, Dijkers, & Kozlowski, 2015), I had a high level of suspicion, which was borne out by a careful comparison of the primary study reports. Most studies in this research area are small case series, and the authors tend to provide a table listing demographic and clinical information for each subject. A comparison of this information suggested that it was highly unlikely that, for example, “Jones & Williams, 2015” studied seven subjects who did not overlap at all with the eight subjects examined by “Williams & Jones, 2016” (Dijkers, Akers, Galen, Patzer, & Vu, 2016). Therefore, one question to be addressed in our SR of SRs is: Do the secondary studies (i.e., the SRs) indicate whether they have searched for duplication of cases in the primary studies, and what amount of duplication does exist, with or without the SR authors having looked for it?

To address this question, we created a list of all authors of all primary studies referenced by all SRs (step 1 above), sorted them into author groups, and obtained the full text of each publication. We analyzed these following steps 2–6. In some instances, it was clear that there was no duplication. In other situations, duplication was mathematically very likely, to say the least. We are following up with at least one author of all suspected studies to confirm our misgivings.

We found 13 SRs, each of which was based on 3 to 26 of a total of 98 different primary studies. Only three of the SRs mentioned issues of double dipping in the primary studies, and one of these three did not exclude the duplicates.

Recommendations

What is the bottom line for SR authors and users? First, duplicate reporting exists in disability and rehabilitation studies as much as in other areas of scholarly activity. Authors report the same study in two or more publications, for legitimate reasons (addition of significant new numbers of cases to a model system database) and illegitimate ones (republication of essentially the same data in a new paper to cope with “publish or perish”). It behooves the authors of SRs to make all reasonable efforts to find related papers, delete the complete duplicates (von Elm’s pattern 1), and carefully consider how to handle all others (patterns 2–6). SR authors should avoid double dipping; in doing so, they will avoid offering misleading, if not completely wrong, conclusions and advice.

The following recommendations are made for the various entities involved in the SR enterprise:

  • For authors of reporting guidelines for primary studies (e.g., CONSORT): Require that authors state what earlier reports on a study exist, and how they relate to the present paper (http://www.consort-statement.org).
  • For authors of reporting guidelines for SRs and other review papers (e.g., PRISMA): Require that authors describe how they searched for and handled duplicate publications (http://www.prisma-statement.org).
  • For authors of primary studies: Describe clearly what earlier reports on a study exist and how they relate to the current paper.
  • For authors of SRs and other review papers: Explain how duplicate publications were searched for and how evidence from duplicate and overlapping studies (if identified) was handled in the evidence synthesis.
  • For journal editors:
    • Demand that the authors of primary studies report on parallel publications, and that they submit a copy of these papers to allow editors and peer reviewers to determine whether the newly submitted manuscript really produces novel information that expands our knowledge base.
    • Require that the authors of SRs and similar manuscripts describe how steps 1–8 were handled in the extraction and synthesis of qualitative and quantitative information.
    • Ask peer reviewers (who are presumably selected for their expertise in the area addressed by the paper they are requested to review) whether they are familiar with similar papers already in the literature.
  • For collective owners of databases (like the SCI NDB): Create a mechanism by which newly proposed data analyses are made known to all parties with a proprietary interest.

With these steps, illegitimate doubling of reports can be minimized in SRs, and legitimate duplication can be more easily identified by SR authors. Until these measures are implemented across all areas of rehabilitation and disability research, reader beware: The systematic review you are reading and considering for implementation in your practice could have a serious flaw.

References:

Abby, M., Massey, M. D., Galandiuk, S., & Polk, H. C. Jr. (1994). Peer review is an effective screening process to evaluate medical manuscripts. JAMA, 272(2), 105–107.

Bailey, B. J. (2002). Duplicate publication in the field of otolaryngology-head and neck surgery. Otolaryngology-Head and Neck Surgery, 126(3), 211–216.

Bryce, T. N., Dijkers, M. P., & Kozlowski, A. J. (2015). Framework for assessment of the usability of lower-extremity robotic exoskeletal orthoses. American Journal of Physical Medicine & Rehabilitation, 94(11), 1000–1014.

Dijkers, M. P., Akers, K. G., Galen, S. S., Patzer, D. E., & Vu, P. T. (2016). Letter to the editor regarding "Clinical effectiveness and safety of powered exoskeleton-assisted walking in patients with spinal cord injury: systematic review with meta-analysis." Medical Devices: Evidence and Research, 9, 419–421.

Gøtzsche, P. C. (1989). Multiple publication of reports of drug trials. European Journal of Clinical Pharmacology, 36(5), 429–432.

Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions (Version 5.1.0). London, UK: The Cochrane Collaboration. Retrieved from http://handbook-5-1.cochrane.org/

Jamrozik, K. (2004). Of sausages and salami. Australian and New Zealand Journal of Public Health, 28(1), 5–6.

Joanna Briggs Institute. (2014). Joanna Briggs Institute reviewers’ manual: 2014 edition. Adelaide, Australia: Author. Retrieved from https://wiki.jbi.global/display/MANUAL

Qi, X., Yang, M., Ren, W., Jia, J., Wang, J., Han, G., & Fan, D. (2013). Find duplicates among the PubMed, EMBASE, and Cochrane Library Databases in systematic review. PLoS One, 8(8), e71838. doi:10.1371/journal.pone.0071838

Rathbone, J., Carter, M., Hoffmann, T., & Glasziou, P. (2015). Better duplicate detection for systematic reviewers: Evaluation of Systematic Review Assistant-Deduplication Module. Systematic Reviews, 4, Article 6. doi:10.1186/2046-4053-4-6

Schulz, K. F., Altman, D. G., Moher, D., & CONSORT Group. (2010). CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomised trials. Journal of Clinical Epidemiology, 63(8), 834–840.

Task Force on Systematic Review and Guidelines. (2013). Assessing the quality and applicability of systematic reviews (AQASR). Austin, TX: SEDL, Center on Knowledge Translation for Disability and Rehabilitation Research.

Tramèr, M. R., Reynolds, D. J., Moore, R. A., & McQuay, H. J. (1997). Impact of covert duplicate publication on meta-analysis: A case study. BMJ, 315(7109), 635–640.

von Elm, E., Poglia, G., Walder, B., & Tramèr, M. R. (2004). Different patterns of duplicate publication: An analysis of articles used in systematic reviews. JAMA, 291(8), 974–980.


Go to KT Update Archive