WHICH DATABASES SHOULD BE USED TO IDENTIFY STUDIES FOR SYSTEMATIC REVIEWS OF ECONOMIC EVALUATIONS?

Publication Type: Journal Article
Year of Publication: 2018
Authors: Arber M, Glanville J, Isojarvi J, Baragula E, Edwards M, Shaw A, Wood H
Journal: International Journal of Technology Assessment in Health Care
Volume: 34
Issue: 6
Pagination: 547-554
Date Published: 2018 Jan
ISSN: 1471-6348
Keywords: Cost-benefit analysis; Databases, Factual; MEDLINE; Systematic Reviews as Topic; Technology Assessment, Biomedical
Abstract:

OBJECTIVES: This study investigated which databases and which combinations of databases should be used to identify economic evaluations (EEs) to inform systematic reviews. It also investigated the characteristics of studies not identified in database searches and evaluated the success of MEDLINE search strategies used within typical reviews in retrieving EEs in MEDLINE.

METHODS: A quasi-gold standard (QGS) set of EEs was collected from reviews of EEs. The number of QGS records found in nine databases was calculated and the most efficient combination of databases was determined. The number and characteristics of QGS records not retrieved from the databases were collected. Reproducible MEDLINE strategies from the reviews were rerun to calculate the sensitivity and precision of each strategy in finding QGS records.

RESULTS: The QGS comprised 351 records. Across all databases, 337/351 (96 percent) QGS records were identified. Embase yielded the most records (314; 89 percent). Four databases were needed to retrieve all 337 references: Embase + Health Technology Assessment database + (MEDLINE or PubMed) + Scopus. Four percent (14/351) of records could not be found in any database. Twenty-nine of forty-one (71 percent) reviews reported a reproducible MEDLINE strategy. Ten of twenty-nine (34.5 percent) strategies missed at least one QGS record in MEDLINE. Across all twenty-nine MEDLINE searches, 25/143 records were missed (17.5 percent). Mean sensitivity was 89 percent and mean precision was 1.6 percent.

CONCLUSIONS: Searching beyond key databases for published EEs may be inefficient, provided the search strategies in those key databases are adequately sensitive. Additional search approaches should be used to identify unpublished evidence (grey literature).
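The sensitivity and precision figures in the abstract follow the standard definitions used in search-filter evaluation (sensitivity: share of known relevant records a strategy retrieves; precision: share of retrieved records that are relevant). A minimal sketch of those calculations; the function names are illustrative, and only the 143-record and 25-missed counts come from the abstract:

```python
def sensitivity(relevant_retrieved: int, relevant_total: int) -> float:
    """Fraction of the quasi-gold-standard (QGS) records a strategy retrieved."""
    return relevant_retrieved / relevant_total

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Fraction of all retrieved records that are QGS records."""
    return relevant_retrieved / total_retrieved

# From the abstract's aggregate figures: 25 of 143 QGS records were missed
# across the 29 MEDLINE searches, so 118 were retrieved overall.
overall_recall = sensitivity(143 - 25, 143)  # ≈ 0.825 (pooled, not the per-strategy mean of 0.89)

# Illustrative precision: a strategy retrieving 625 records, 10 of them QGS.
example_precision = precision(10, 625)  # 0.016, matching the reported mean of 1.6 percent
```

Note that the pooled recall (118/143) differs from the reported mean sensitivity of 89 percent, which averages over the twenty-nine individual strategies.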
DOI: 10.1017/S0266462318000636
Alternate Journal: Int J Technol Assess Health Care