Search Database

KT Strategies - Search Results

You searched for records matching:

1. Citation: Newberry, S.J., Ahmadzai, N., Motala, A., Tsertsvadze, A., Maglione, M., Ansari, M.T., Hempel, S., Tsouros, S., Schneider Chafen, J., Shanman, R., Skidmore, B., Moher, D., & Shekelle, P.G. (2013). Surveillance and identification of signals for updating systematic reviews: Implementation and early experience. Agency for Healthcare Research and Quality, 1-156.
Title: Surveillance and identification of signals for updating systematic reviews: Implementation and early experience
Author(s): Newberry, S.J.; Ahmadzai, N.; Motala, A.; Tsertsvadze, A.; Maglione, M.; Ansari, M.T.; Hempel, S.; Tsouros, S.; Schneider Chafen, J.; Shanman, R.; Skidmore, B.; Moher, D.; Shekelle, P.G.
Year: 2013
Journal/Publication: Agency for Healthcare Research and Quality


The question of how to determine when a systematic review needs to be updated is of considerable importance. Changes in the evidence can have significant implications for clinical practice guidelines and for clinical and consumer decision-making that depend on up-to-date systematic reviews as their foundation. Methods have been developed for assessing signals of the need for updating, but these methods have been applied only in studies designed to demonstrate and refine the methods, and not as an operational component of a program for systematic reviews.


The Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program commissioned RAND's Southern California Evidence-based Practice Center (SCEPC) and the University of Ottawa Evidence-based Practice Center (UOEPC), with assistance from the ECRI EPC, to develop and implement a surveillance process for quickly identifying Comparative Effectiveness Reviews (CERs) in need of updating.


We established a surveillance program that implemented and refined a process to assess the need for updating CERs. The process combined methods developed by the SCEPC and the UOEPC for prior projects on identifying signals for updating: an abbreviated literature search, abstraction of the study conditions and findings for each new included study, solicitation of expert judgments on the currency of the original conclusions, and an assessment of whether the new findings provided a signal according to the Ottawa Method and/or the RAND Method, on a conclusion-by-conclusion basis. Lastly, an overall summary assessment was made that classified each CER as being of high, medium, or low priority for updating. If a CER was deemed to be a low or medium priority for updating, the process would be repeated 6 months later; if the priority for updating was deemed high, the CER would be withdrawn from subsequent 6-month assessments.
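The triage logic described above — roll per-conclusion signals up to an overall priority, then either re-enter the 6-month cycle or withdraw the CER — can be sketched in code. This is an illustrative sketch only: the function names and the thresholds used to map signal counts to priority classes are assumptions, not the rules used by the SCEPC/UOEPC program.

```python
# Illustrative sketch of the 6-month CER surveillance cycle.
# Priority thresholds are assumed for illustration; the report's
# actual Ottawa/RAND signal criteria are more detailed.

HIGH, MEDIUM, LOW = "high", "medium", "low"

def summarize_priority(conclusion_signals):
    """Roll up per-conclusion signals (True = signal for updating)
    into an overall priority class for the CER."""
    signaled = sum(1 for s in conclusion_signals if s)
    if signaled == 0:
        return LOW
    if signaled < len(conclusion_signals) / 2:
        return MEDIUM
    return HIGH

def next_action(priority):
    """Low/medium-priority CERs re-enter surveillance after 6 months;
    high-priority CERs are withdrawn from further 6-month assessments."""
    if priority == HIGH:
        return "withdraw from surveillance; flag for updating"
    return "reassess in 6 months"
```

For example, a CER with signals on most of its conclusions would be classed high priority and withdrawn from the cycle, while one with no signals would simply be reassessed 6 months later.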

Results and Conclusions

Between June 2011 and June 2012, we established a surveillance process and completed the evaluation of 14 CERs. Of the 14 CERs, 2 were classified as high priority, 3 as medium priority, and 9 as low priority. Of the 6 CERs released prior to 2010 (meaning over 18 months before the start of the program), 2 were judged high priority, 2 medium priority, and 2 low priority for updating. We have shown it is both useful and feasible to conduct such surveillance, in real time, across a program that produces a large number of systematic reviews on diverse topics.

From the Effective Health Care Program, Agency for Healthcare Research and Quality


Type of Item: Evaluation Instrument
Type of KT Strategy: Continuous Quality Improvement/Total Quality Management
Target Group: Decision Maker; Research Funders
Evidence Level: 4
Record Updated: 2014-02-28