Summarized Research in Information Retrieval for HTA

Welcome to Summarized Research in Information Retrieval for HTA (SuRe Info), a web resource that provides research-based information relating to the information retrieval aspects of producing systematic reviews and health technology assessments. SuRe Info seeks to help information specialists stay up to date with the latest developments by providing easy access to current methods papers, and to support more research-based information retrieval practice.

The website has two sections, listed below:

  1. Information on general search methods common across all health technologies;
  2. Methods to use when searching for specific aspects of health technologies (mainly based on the structure of the HTA Core Model® developed by EUnetHTA).

Click on a heading name to reach the chapter summarizing the current research findings within that category. The references listed at the end of each chapter are linked to structured appraisals of the references, written by members of the SuRe Info project team.

SuRe Info methods and processes are described in the Authors' Manual (please see the bottom of this page).

General search methods

Searching for specific aspects of HTA


This resource is for personal, professional and non-commercial use only. Reproduction or republication of the content is not allowed. When using the HTA Core Model® we have followed the HTA Core Model Terms of Use. Views expressed in the publication appraisals are those of the SuRe Info reviewers and do not reflect the opinions of their respective organizations.

SuRe Info_Authors manual_version 1.1_04.2017.pdf (637.89 KB)

Search strategy development


The Cochrane Information Retrieval Methods Group have published an evidence-based chapter on search methods for the Cochrane Handbook (1), which provides the basis for this summary alongside guidance produced by the Centre for Reviews and Dissemination (2) and the Agency for Healthcare Research and Quality (AHRQ) (3). 

The revised and updated searching chapter of the Cochrane Handbook is in preparation. To avoid duplication of effort with the development of the Cochrane Handbook, appraisals have not been prepared for studies in this chapter. Once the revised Cochrane Handbook is available it will be used to update this chapter.

Sensitivity and precision

In order to retrieve as many studies relevant to the review as possible, and to compensate for the limitations of information source records and indexing, search strategy development and construction for systematic reviews (SRs) conventionally aims for sensitivity (1). Increasing the sensitivity of a search increases the possibility of identifying all relevant studies, but also tends to reduce precision because the number of irrelevant results is increased (1, 2). Sampson et al examined a cross-section of 94 health-related SRs that reported the flow of bibliographic records through the review process and found that search precision of approximately 3% was typical (4). The number of results retrieved, which must then be screened against eligibility criteria, has implications for the resources required to conduct a SR. This trade-off between sensitivity and precision should be acknowledged and discussed with the wider review team, and an appropriate balance sought within the context of the resources available.
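The trade-off described above can be illustrated with a small calculation. The sketch below compares a hypothetical sensitive strategy with a hypothetical precise one; all counts are invented for illustration, and only the "approximately 3% precision" figure echoes the Sampson et al finding cited in the text.

```python
# Compare a sensitive and a precise search strategy on the two standard measures.
# All record counts are hypothetical.

def measures(relevant_found: int, total_relevant: int, total_retrieved: int):
    """Return (sensitivity, precision) for a search strategy."""
    sensitivity = relevant_found / total_relevant   # share of all relevant records found
    precision = relevant_found / total_retrieved    # share of retrieved records that are relevant
    return sensitivity, precision

# Suppose 100 relevant studies exist for the review question.
sensitive_strategy = measures(relevant_found=97, total_relevant=100, total_retrieved=3200)
precise_strategy = measures(relevant_found=70, total_relevant=100, total_retrieved=400)

print(f"Sensitive strategy: sensitivity {sensitive_strategy[0]:.0%}, precision {sensitive_strategy[1]:.1%}")
print(f"Precise strategy:   sensitivity {precise_strategy[0]:.0%}, precision {precise_strategy[1]:.1%}")
```

The sensitive strategy finds nearly every relevant study but at roughly 3% precision, meaning thousands of records to screen; the precise strategy screens far fewer records but misses a substantial share of relevant studies.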

The emphasis on search strategy sensitivity over precision typically reflects the context of SRs of quantitative research on clinical interventions. This emphasis may not be the same in searches developed for different purposes within the health technology assessment (HTA) context. In the context of qualitative SRs or qualitative evidence syntheses, for example, there is discussion as to whether these types of review share the same need as SRs of quantitative research for ‘comprehensive’, ‘exhaustive’ bibliographic database searches (5). Guidance from the Cochrane Qualitative and Implementation Methods Group recommends that search procedures in the context of qualitative evidence synthesis should generally privilege specificity over sensitivity (6). Similarly, in the context of conducting a search to inform an ‘evidence-map’ (where an overview of the extent, nature and characteristics of a research area is of interest), research has indicated that less sensitive searches may be appropriate. In a study which compared a ‘highly sensitive’ search strategy with a ‘highly specific’ search strategy for an evidence-mapping exercise on diabetes and driving to inform clinical guidance development, the authors reported that the results of the ‘highly specific’ search would have been sufficient to answer the research question (7). The authors concluded that using highly specific instead of sensitive search strategies is “fully adequate for evidence maps with the aim of covering mainly the breadth rather than depth of a research spectrum”. Recent research has also suggested that the conventional approach to search methodology – with its focus on sensitive searches of bibliographic databases as the primary method of study identification – may not be optimal for some SRs on complex topics, or in areas other than clinical health.
In the context of a SR to evaluate the health benefits of environmental enhancement and conservation activities, Cooper et al (8) compared an approach led by searches of bibliographic databases with an approach led by supplementary search methods.  The authors found that extensive bibliographic database searching was of limited value in terms of contribution to synthesis, but that grey literature searching was valuable and identified studies that made unique contributions to both the quantitative and qualitative synthesis. The authors concluded that the approach led by supplementary search methods (where the primary methods of study identification were grey literature searching and contacting experts, supplemented by bibliographic database searches which emphasised precision over sensitivity) was valid when compared with the conventional approach.

Structuring the search

The Cochrane Handbook suggests a search strategy should be structured around the main concepts being examined by the review. For reviews of interventions, this can be expressed using PICO (Patient (or Participant or Population), Intervention, Comparison and Outcome). It is usually seen as undesirable to include all elements of the PICO in the search strategy, as some concepts are often poorly described or non-existent in the title and abstract of a database record or the assigned index terms. For reviews of many interventions a search may reasonably comprise the population, the intervention, and a study design filter (1) if appropriate. A validated search filter is recommended where one exists for the concept of interest (3). In some topic areas, for example complex interventions, where many of the concepts are particularly ill-defined, it may be preferable to use a broader search strategy (such as searching only for the population or intervention) and increase the resources allocated to sifting records (2).

Alternatives to the PICO framework have also been evaluated for searches in some fields; examples include the SPIDER tool to structure searches for qualitative and mixed methods research (9) and the BeHEMoTh tool to structure searches for theory (10). In a structured methodological review on searching for qualitative research, Booth lists 11 different notations for use in this context (including PICO, SPIDER and BeHEMoTh), but states that, as with quantitative reviews, there is little empirical data to support the merits of question formulation (5). In a SR published in 2018, Eriksen and Frandsen investigated whether the use of the PICO model as a search strategy tool affected the quality of a literature search (11). The authors found only three studies which assessed the effect of using the PICO model versus other available models or unguided searching. The authors concluded that no solid conclusions could be drawn about the effect of using the PICO model on the quality of the literature search.

Selecting search terms

The Cochrane Handbook recommends that in order to identify as many relevant records as possible, search strategies should combine subject headings selected from the database’s controlled vocabulary or thesaurus (with appropriate “explosions”) and a wide range of free-text terms (1). The choice of free-text terms should include consideration of synonyms, related terms and variant spellings. 

Methods for identifying search terms have traditionally included techniques such as checking the bibliographic records of known relevant studies, consulting topic experts and scanning database subject indexing guides (3), but alternative methods have also been proposed.

Bramer et al evaluated a structured approach where thesaurus terms and synonyms for title / abstract searching were collected from the Emtree thesaurus, combined into a search strategy, and then tested for completeness using an ‘optimization method’ (12). This method involved identifying articles indexed with identified Emtree thesaurus terms but which did not include the synonyms already used in the search strategy in their title or abstract. Relevant terms from the titles and abstracts of these records were then added to the search strategy, and their added value was evaluated in discussion with the researcher who had requested the search. Further optimisation was done by reversing this process: looking for new thesaurus terms in articles where the titles and/or abstracts contained one of the identified synonyms but lacked the thesaurus terms already identified.  The authors concluded that the method creates opportunities for faster development of SR search strategies that find more relevant studies than other methods with equivalent search precision.
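The first step of the optimisation method described above can be sketched as a filtering operation: among records indexed with the identified thesaurus terms, find those whose titles or abstracts contain none of the synonyms already in the strategy, since these are candidate sources of new free-text terms. The records and terms below are invented for illustration; this is not a re-implementation of Bramer et al's tooling.

```python
# Hypothetical records already indexed with the identified thesaurus terms.
records = [
    {"id": 1, "abstract": "myocardial infarction outcomes in older adults"},
    {"id": 2, "abstract": "cardiac muscle necrosis following ischemia"},
]

# Free-text synonyms already present in the draft search strategy (invented).
synonyms = {"myocardial infarction", "heart attack"}

# Records whose abstracts match none of the existing synonyms: their wording
# may suggest new free-text terms (here, e.g. "cardiac muscle necrosis").
candidates = [r for r in records
              if not any(s in r["abstract"] for s in synonyms)]
print([r["id"] for r in candidates])  # [2]
```

In the actual method, relevant terms from such candidate records are then discussed with the requesting researcher before being added to the strategy, and the process is reversed to discover missing thesaurus terms.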

Text mining is a rapidly developing tool with potential application in a range of tasks associated with the production of SRs, including the identification of search terms (2). AHRQ published a review on the use of text-mining tools as an emerging methodology within SR processes, including the literature search (13). The aim of the AHRQ project was to provide a ‘snapshot’ of the state of knowledge, rather than an in-depth assessment. The review referred to 12 studies where text-mining tools were used for development of ‘topic’ search strategies and identified several general approaches to development.  These included assessing word frequency in citations (using tools such as PubReminer or EndNote) and automated term extraction (using tools such as Termine).  The review reported that all of the identified studies found benefit in automating term selection for SRs, especially those comprising large unfocused topics. The AHRQ review made no conclusions which were specific to the use of text-mining tools for the literature search process. The general conclusions on the use of text-mining for SR processes were that text-mining tools appeared promising, but further research was warranted.
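The word-frequency approach mentioned above can be sketched in a few lines: count how often words occur across the titles or abstracts of known relevant records, and surface the most frequent as candidate search terms. This is a minimal illustration of the general technique, not of any specific tool such as PubReminer or Termine; the sample titles and stopword list are invented.

```python
from collections import Counter
import re

# A small, invented stopword list; real tools use much larger ones.
STOPWORDS = {"a", "and", "for", "in", "of", "on", "the", "to", "with"}

def candidate_terms(texts, top_n=5):
    """Count word frequencies across known relevant titles/abstracts,
    ignoring stopwords, and return the most frequent candidates."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(top_n)

sample = [
    "Community engagement in diabetes care: a randomised trial",
    "Engaging the community to improve diabetes outcomes",
]
print(candidate_terms(sample))
```

Here "community" and "diabetes" rise to the top of the candidate list; a searcher would review such output and decide which terms merit inclusion in the strategy.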

Studies cited in the AHRQ review included a study by O'Mara-Eves et al which evaluated whether additional search terms for the topic of ‘community engagement’ were generated when using the text-mining data-extraction tool Termine (14) in addition to typical search development techniques.  The study authors reported that although in many cases the terms generated by text-mining had already been identified by the reviewers as relevant, text-mining did reveal some useful synonyms and terms associated with the topic that had not previously been considered.  The study authors stated that the text-mining approach studied should never be used on its own but alongside usual search development processes. The authors concluded that text mining helped to identify relevant search terms for a broad topic that was inconsistently referred to in the literature.

The AHRQ review also cited two studies published by researchers at the German HTA agency IQWiG. In the first study, the authors proposed an 'objective approach' to strategy development using text analysis methods (15). The authors argued that this method ensures the process of selecting search terms is transparent and reproducible and allows a searcher with little specialist knowledge of the search topic to make decisions on the inclusion of terms that are informed by evidence. In the second study the authors aimed to validate the ‘objective approach’, and concluded that it was noninferior to the standard 'conceptual approach' (16). Subsequent correspondence on this publication (17, 18) and the authors' responses to this correspondence (19, 20) debated the study’s conclusions and the strengths and limitations of the methods used for this research. Since the publication of the AHRQ review, IQWiG researchers have published a third paper on their ‘objective approach’, comparing it with the ‘conceptual approach’ (21). The authors reported that the ‘objective approach’ yielded higher sensitivity than the ‘conceptual approach’, with similar precision, and stated that ‘objective approaches’ should be routinely used in the development of high-quality search strategies.

Stansfield et al (22) used a case study of searching to inform a guideline on the care and support of older people with learning disabilities, and other examples, to reflect on the utility of text-mining technologies in improving the precision and sensitivity of search strategies. The technologies investigated included term frequency–inverse document frequency (TF-IDF) analysis and the Lingo3G automated clustering tool within EPPI-Reviewer 4.0, as well as Termine, BibExcel, and EndNote. The authors concluded that text mining could aid the discovery of search terms for diversely-described topics to support an iterative search strategy development process, and that using multiple tools appeared to be particularly fruitful, though the overriding challenge of finding efficient ways to identify an unknown body of literature for incorporation in SRs still remained.

Combining search terms with Boolean operators and other search syntax

The Cochrane Handbook describes how a search strategy should be built up one concept at a time, using controlled vocabulary terms, text words, synonyms and related terms for each concept, and joining the terms within each concept with the Boolean ‘OR’ operator. The sets of terms may then be combined with the ‘AND’ operator, which limits the results to those records that contain at least one search term from each of the sets. If an article does not contain at least one of the search terms from each of the sets, it will not be retrieved. Cochrane advise against the use of the NOT operator where possible, to avoid inadvertently excluding relevant records (1).
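The set logic described above can be illustrated with Python sets standing in for the record IDs each term retrieves: ‘OR’ is a union within a concept, and ‘AND’ is an intersection across concepts. All record IDs and terms here are invented.

```python
# Records retrieved by each term, as sets of hypothetical record IDs.
# OR (union) within a concept:
population = {1, 2, 3, 5, 8} | {2, 3, 13}      # e.g. "diabetes" OR "diabetic"
intervention = {3, 5, 9} | {5, 8, 21}          # e.g. "metformin" OR "biguanide*"

# AND (intersection) across concepts: only records matching at least one
# term from EACH concept set are retrieved.
results = population & intervention
print(sorted(results))  # [3, 5, 8]
```

A record matching the population terms but none of the intervention terms (e.g. record 1) is not retrieved, which is why concepts that are poorly described in titles and abstracts are often left out of the strategy.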

The AHRQ manual refers searchers to the PRESS (Peer Review of Electronic Search Strategies) Checklist (23) and states that search strategies should make use of the advanced search techniques such as truncation, wildcards and proximity searching described in the PRESS document (3).  In 2015, the PRESS 2015 Guideline Statement was published, which updated and expanded on the previous PRESS publications (24).

Although search strategy development and construction for SRs conventionally aims for sensitivity, researchers have investigated the potential of 'focusing' search terms to reduce the number of search results and therefore screening burden. Focusing techniques which have been investigated include searching with subject headings limited to those with a major focus (major subject headings) and searching using terms in titles and abstracts alone (i.e. not including controlled vocabulary in the search strategy).

In a 2015 report produced by CADTH (25), researchers reran the search strategies reported in HTAs or SRs produced by a range of agencies, varying the use of major Emtree headings. The impact of the changes on the retrieval of the known relevant records (included studies) in the HTAs or SRs was assessed. The authors stated that overall their findings suggested that focusing Emtree headings was likely to reduce already suboptimal sensitivity for only small gains in precision. The report's recommendations for practice stated that searchers who were confident that their strategy was highly sensitive might wish to use focused Emtree terms for the intervention concept of their search. They suggested using caution when considering focusing the Emtree terms for the population concept, when considering focusing Emtree terms in more than two concepts, or when considering focusing terms in non-drug treatment reviews.

In a 2018 study, Bramer et al investigated whether researchers could use 'focused' searches to reduce the screening time burden (26). The original search strategies (designed by a single librarian) from a broad range of SRs were modified in four ways: by searching Embase thesaurus terms as major descriptors; by removing thesaurus terms from the Embase search so that terms were searched in the title and/or abstract fields only; by searching both MEDLINE and Embase thesaurus terms as major descriptors; and by searching both MEDLINE and Embase for terms in the title and/or abstract fields only. The authors concluded that if the number of search results retrieved was too high for the project resource context, search strategies in Embase alone or in both Embase and MEDLINE could be focused by searching for thesaurus terms as major descriptors. They stated that this approach 'may not ultimately have negative consequences in SRs', as long as thorough searches of other databases (such as Web of Science) were performed in addition to the MEDLINE and Embase searches. They also stated, however, that the reduction in search result numbers was likely to be limited. The authors did not recommend searching Embase and MEDLINE using terms in titles and abstracts alone, as this resulted in too many relevant articles being missed.

Checking and testing search strategies

Search strategies should be checked to ensure they are fit for purpose: that they are likely to find relevant studies. This is difficult to ascertain but checking of search strategies can be carried out by expert / peer review (for example, using the PRESS Checklist (23, 24)), comparing against previously published strategies, or by testing that known relevant documents are retrieved by the strategy (3). 

Alternatively, more formal testing can be undertaken. Such methods are summarised by Booth, whose brief review identified eight methods for determining optimal retrieval of studies for inclusion in HTAs (27). The review concluded that although numerous methods were described in the literature, there was little formal evaluation of the strengths and weakness of each approach.

Sampson and McGowan developed and assessed a method (Inquisitio Validus Index Medicus) for validation of MEDLINE search strategies (28). The method used a version of the known relevant item approach, testing recall of relevant indexed studies identified through all search methods and indexed in the database being tested. The validation occurred once screening had been completed and the eligible studies were known. Poorly performing search strategies could be amended, re-tested and re-run. New studies identified by the amended search could be screened and any relevant studies could be included in the review.  The authors reported that the validation method was robust and was able to demonstrate that the retrieval of relevant studies from MEDLINE in a sample of six updated Cochrane reviews was sub-optimal. The authors concluded that the Inquisitio Validus test was a simple method of validating the search, and could determine whether the search of the main database performed adequately or needed to be revised to improve recall, allowing the searcher an opportunity to improve their search strategy.

One aspect of testing searches is to inform reviewers when searching has retrieved 'enough' studies. There is little research evidence on empirically based 'stopping rules' but methods such as capture-mark-recapture have been explored for developing such rules (29). Capture-mark-recapture has also been reported as being used to evaluate searches by estimating retrospectively their closeness to capturing the total body of literature (30, 31). The process involved hand-searching a sample journal and running a search strategy on information sources indexing the same journal. The number of relevant records identified by each process was then used to gain a statistical estimate of what had been missed by all searches conducted (30).
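The capture-mark-recapture estimate mentioned above is commonly computed with the Lincoln-Petersen formula: if two independent identification methods "capture" relevant records, the overlap between them yields an estimate of the total relevant literature. The sketch below uses invented counts; the cited studies describe the general approach, not this exact calculation.

```python
def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Lincoln-Petersen estimate of total population size:
    n1 records found by method 1, n2 by method 2, with `overlap` found by both."""
    if overlap == 0:
        raise ValueError("No overlap between the two captures; total cannot be estimated")
    return (n1 * n2) / overlap

# Hypothetical example: handsearching found 40 relevant records, a database
# search found 60, and 30 records were found by both methods.
estimated_total = lincoln_petersen(40, 60, 30)
print(estimated_total)  # 80.0
```

Since the two methods together identified 70 unique records (40 + 60 - 30), the estimate of 80 suggests roughly 10 relevant records were missed by both, which can inform a decision about whether further searching is worthwhile.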

Despite these investigations, the AHRQ guidelines state that no currently available method can be easily applied to searches for comparative effectiveness reviews. It is argued that the searcher’s judgement is required to decide whether searching additional sources is likely to result in the retrieval of unique items or whether the search has reached the point of saturation. The decision must balance the desire to identify all relevant studies with the resources available to carry out the search (3).


Reference list

Search filters

What are search filters?

Search filters (sometimes called hedges) are collections of search terms designed to retrieve selections of records from a bibliographic database (1). Search filters may be designed to retrieve records of research using a specific study design (e.g. randomised controlled trial) or topic (kidney disease) or some other feature of the research question (age of study’s participants). They are usually combined with the results of a subject search using the AND operator.

Why would you use a search filter?

When included in a database search strategy, a robust search filter can significantly reduce the number of records that researchers need to sift, and recent research has shown that this is a key use of search filters (2). Search filters are not, however, available for all study types, all databases or all database interfaces.

Key features

Filters are typically designed for one purpose, which may be to maximise sensitivity (or recall) or to maximise precision (and reduce the number of irrelevant records that need to be assessed for relevance). Sensitivity is the proportion of relevant records retrieved by the filter and is the most frequently reported performance measure (3). Precision is the proportion of relevant records in the retrieved records and is also frequently reported (3). Specificity is the proportion of irrelevant records successfully not retrieved. Filters are database and interface specific. Performance measures can be difficult to interpret and alternative graphical approaches to presenting performance information may assist with making decisions about which filter to select (3).
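The three performance measures defined above can be computed from a filter's results against a gold-standard validation set. The sketch below uses invented counts purely to show how the measures relate to one another.

```python
def performance(relevant_retrieved: int, relevant_missed: int,
                irrelevant_retrieved: int, irrelevant_rejected: int) -> dict:
    """Compute filter performance measures from validation-set counts."""
    relevant = relevant_retrieved + relevant_missed
    retrieved = relevant_retrieved + irrelevant_retrieved
    irrelevant = irrelevant_retrieved + irrelevant_rejected
    return {
        # proportion of relevant records retrieved by the filter
        "sensitivity": relevant_retrieved / relevant,
        # proportion of retrieved records that are relevant
        "precision": relevant_retrieved / retrieved,
        # proportion of irrelevant records successfully not retrieved
        "specificity": irrelevant_rejected / irrelevant,
    }

# Hypothetical filter run against a gold standard of 200 relevant
# and 9,800 irrelevant records.
print(performance(relevant_retrieved=190, relevant_missed=10,
                  irrelevant_retrieved=2800, irrelevant_rejected=7000))
```

In this invented example the filter is highly sensitive (95%) and fairly specific (about 71%), yet its precision is still only around 6%, illustrating why sensitivity figures alone can be difficult to interpret when selecting a filter.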

Where can you find search filters?

Search filters of interest to researchers producing technology assessments are incorporated into some of the MEDLINE interfaces. For example, they are labelled as Clinical Queries in PubMed (4). Often searchers ‘translate’ filters or adapt them to run on different interfaces (2). Translations and adaptations should be undertaken carefully since different interfaces function in different ways, and different databases may have different indexing languages.

Study design search filters can also be identified from internet resources such as

Some guidance documents for the conduct of health technology assessments recommend specific filters and others leave the choice to the discretion of the searcher.

Critical appraisal of filters

When published, the methods used to compile search filters should be clearly described by the authors. It is also valuable to have access to critical assessments of filters in practice. Search filter development methods have developed over time to become more objective and rigorous (1, 4).  The quality of search filters can be appraised using critical appraisal tools (5, 6) which assess the focus of the filter, the methods used to create it and the quality of the testing and validation which have been conducted to ensure that it performs to a specific level of sensitivity, precision or specificity.

It is also important to know the date when the filter was created so an assessment can be made as to its currency. The value of a search filter can decrease over time as new terms are added to a database thesaurus.

Search filters are not quality filters in terms of identifying only high quality research evidence. All records resulting from the use of a search filter will require an assessment of relevance and quality. All search filters and all search strategies are compromises and an assessment of the performance of filters for each technology appraisal is recommended.

The increasing number of filters has led to assessments of the relative performance of different filters designed to find the same study design; these comparisons can be a good starting point for deciding which filter to use.

A systematic review of the performance of a large number of diagnostic test accuracy (DTA) filters has provided recommendations that search filters should not be used as the only method for searching for DTA studies for systematic reviews and technology appraisals (7). The review concluded that the filters risk missing relevant studies and do not offer benefits in terms of enhanced precision.

A comparison study (8) of the performance of search filters used to identify economic evaluations concluded that, while highly sensitive filters are available, their precision is low. The performance data provided in this paper can help researchers select the filter that is most appropriate to their needs.

More recently, a study (9) demonstrated that a search filter with adequate precision and sensitivity was not yet available to identify studies of epidemiology in the MEDLINE database.

Search filter development

Creating a search filter to identify database records of a specific study design or some other feature requires a "gold standard" reference set that can be used to measure performance. The reference set can be created by using relative recall (10) or by handsearching.

A recent case study (11) describes how such a gold standard set was created to support the development of a prognostic filter for studies of oral squamous cell carcinoma in MEDLINE. The methods used are generic and could be applied to both other databases and to other types of research studies.

The authors use a flowchart to illustrate the overall process and describe each of the stages: how to generate the initial set of records; the sample size required for filter development; use of an annotation tool and annotation guidelines; and the calibration process to measure inter-annotator agreement.

Reference list



Other limits: language, date


This summary is based on the Cochrane Handbook for Systematic Reviews of Interventions (1), the guidance for undertaking systematic reviews produced by the Centre for Reviews and Dissemination (CRD) (2) and the methods guide for effectiveness and comparative effectiveness reviews produced by the Agency for Healthcare Research and Quality (AHRQ) (3). The Cochrane Handbook and AHRQ methods guide are based on the best available evidence and the CRD guidance is recommended as a source of good practice by agencies such as the National Institute for Health and Clinical Excellence (NICE).

The revised and updated searching chapter of the Cochrane Handbook is in preparation. To avoid duplication of effort with the development of the Cochrane Handbook, appraisals have not been prepared for studies in this chapter. Once the revised Cochrane Handbook is available it will be used to update this chapter.

Limiting a search strategy by date

Limiting a search strategy by date may reduce the number of records retrieved for screening, but date limits should only be applied if there is a robust rationale for doing so.  For example, if a healthcare intervention was introduced at a certain date, limiting the search strategy to only retrieve studies reported from this date would be appropriate (3,4). 

Applying date limits to the search strategy can also be an option if an existing search is being updated (2,4). When conducting an update search, searchers should be cautious about how date limits are applied (2,4). If attempting to limit by date, an appropriate field (or fields), such as the update date rather than the publication date, should be used. Limiting searches by publication date risks missing relevant records (2). For databases where there is no update field, running the search without date limits is advised, using reference management software to de-duplicate the returned records against the original search results (2).
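The de-duplication step described above, which reference management software performs, amounts to a set difference between the update run and the original results. The sketch below illustrates this with invented record identifiers.

```python
# Hypothetical record IDs from the original search and an unrestricted update run.
original_results = {"PMID:100", "PMID:101", "PMID:102"}
update_run = {"PMID:101", "PMID:102", "PMID:200", "PMID:201"}

# Records in the update run but not in the original results are the
# genuinely new records requiring screening.
new_records = update_run - original_results
print(sorted(new_records))  # ['PMID:200', 'PMID:201']
```

This is why running the update search without date limits is safe in practice: records already screened in the original search are removed before screening begins.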

Limiting a search strategy by language

Including non-English language studies in a review can add to the resources necessary to complete the review (for example, time needed to identify results, translation costs, time needed to data-extract) (4). By only including English language studies, however, language bias is potentially introduced (2,3,5). For topic areas where research from, or relating to, non-English speaking regions is of increased significance, the issue of limiting by language may be a particular concern. Shenderovich et al studied methodological issues in systematic reviews which aimed to include evidence from low- and middle-income countries, using the example of a review of risk factors for child conduct problems and youth violence (6). The authors reported that 15% of the eligible studies were in a language other than English and would therefore not have been retrieved if English-language search limits had been applied. The impact of omitting the non-English studies on the conclusions of the review was not investigated.

Current guidance recommends that search strategies should not be restricted by language (2,3,4).  This is advised even if translation is not feasible (2, 3).  Reviewers may exclude non-English language studies from the review, but make a list of potentially relevant studies which were excluded on the basis of language.  This can help inform an assessment of the potential risk for language bias (2,3).

Reference list

Peer reviewing search strategies


Search strategy peer review, within the evidence synthesis context, is a process by which the searches for a Health Technology Assessment (HTA) or systematic review, ideally designed by an Information Specialist, are reviewed by another Information Specialist before they are run. The goal of peer review of search strategies is to detect errors in a timely fashion (that is, before the searches are run), to improve quality, and to reduce not only the risk of missing relevant studies but also the risk of identifying unnecessarily large numbers of irrelevant records.

As the search strategy is the cornerstone of a well-conducted HTA or systematic review, its quality could affect the results of the final review.  A study published in 2006 by Sampson and McGowan found that errors in search strategies were common: the principal mistakes being spelling errors, missed spelling variants, truncation errors, logical operator errors, use of wrong line numbers, missed or incorrect use of Medical Subject Heading index terms (e.g. MeSH), and the search strategy not being tailored for use in other databases (1).  A study by Franco et al (2) published in 2018 assessed a random sample of 70 Cochrane systematic reviews of interventions published in 2015, evaluating the design and reporting of their search strategies using the recommendations from the then current Cochrane Handbook for Systematic Reviews of Interventions (2011 version) (3), the then current Methodological Expectations of Cochrane Intervention Reviews (MECIR standards - 2013 version) (4) and the then current Peer Review of Electronic Search Strategies (PRESS) evidence‐based guideline (5, 6).  They found problems in the design of the search strategies in 73% of the reviews (95% CI, 60‐84%) and 53% of these contained problems (95% CI, 38‐69%) that could limit both the sensitivity and precision of the search strategies. More recently, a study by Salvador-Olivan et al (7) published in 2019 found that 92.7% of their 137 included systematic reviews, published in January 2018, contained some type of error in the MEDLINE/PubMed search strategy, and that 78.1% of these errors affected recall / sensitivity.

How is peer review of search strategies performed?

Peer review of search strategies has been performed informally since searching for studies for HTAs and systematic reviews began. HTA Information Specialists who are part of information teams have always been able to check colleagues’ search strategies for mistakes. The search strategy peer reviewer and the Information Specialist who designed the search strategy have been able to meet face-to-face (or, more recently, virtually) to discuss errors and revisions. Not all Information Specialists, however, are based in teams and so may be unable to call on colleagues to peer review their search strategies. A forum has been established to enable Information Specialists to submit their searches for peer review by a fellow Information Specialist, on a reciprocal basis.

In addition to the forum mentioned above, a tool has been developed which enables Information Specialists to check search strategies in a more formal, structured way. The PRESS (Peer Review of Electronic Search Strategies) checklist is an evidence-based checklist that summarizes the main potential mistakes made in search strategies. This checklist can help Information Specialists to improve and assure the quality of their own search strategies and those of their colleagues. It provides clear guidance for peer reviewers to follow. It can also help non-searchers understand how search strategies have been constructed and what it is they have been designed to retrieve. Full details about the original PRESS project can be found in the original funder’s report (8) and the accompanying journal article (5). Further information, including the original PRESS checklist (now superseded by PRESS 2015 (9, 10)) can be found elsewhere (6). An update of the PRESS processes was published in 2016 (9). This involved an updated systematic review, a web-based survey of experts and a consensus meeting to update the PRESS tools. The 2015 Guideline Explanation and Elaboration (PRESS E&E) incorporates four components:

The six main domains of the updated PRESS 2015 evidence-based checklist are:

It is recommended that peer review of search strategies is undertaken at the research protocol phase: it should be performed before the searches are conducted, the results downloaded and the selection of studies begun. The latest version of the Cochrane Handbook chapter on searching for and selecting studies recommends peer review of search strategies at the protocol stage (11), whilst the PRISMA-S checklist includes an item explicitly for peer review of search strategies (12). Both also suggest acknowledging search strategy peer reviewers: for example, the names, credentials and institutions of the peer reviewers should be noted (with their permission) in the Acknowledgements section of the review.

Is there any evidence of the value of the peer review of search strategies?

The Agency for Healthcare Research and Quality (AHRQ) has conducted a study assessing use of the PRESS checklist and found that it “seems to cut down the time needed to do the review, increase response, and do a better job of identifying actual errors in search strategies” (13). The time burden of the review process using the PRESS checklist was less than two hours.

We have not been able to identify any evidence on whether this tool affects the final quality of systematic reviews, or on its economic cost. CADTH, however, conducted an internal investigation to see whether peer review of search strategies has an effect on the number and quality of articles included in CADTH Rapid Response reports (14, 15, 16) and found that both the number and quality of relevant articles retrieved were improved. We have also found increased reporting of both peer review of search strategies and use of the PRESS checklist (though without evidence of effectiveness). Folb and colleagues evaluated workshops they were running for librarians on systematic reviews and found that pre-class only 9% of librarians had ever provided peer review of search strategies, but at 6-months post-class follow-up this had risen to 17% (17). With respect to seeking peer review of their own searches, they found that pre-class only 36% of librarians had ever sought peer review of search strategies, but at 6-months post-class follow-up this had risen to 48% (17).

It is worth noting that there is increasing interest, at least within the librarian and information specialist community, in librarian and information specialist involvement in peer reviewing search strategies at the manuscript submission for publication stage. For example, a recent online survey of medical librarians and information specialists conducted by Grossetta Nardini and colleagues found that only 22% (63/291) of respondents had ever been invited to peer review a systematic review or meta-analysis journal manuscript (18). The recent launch of the Librarian Peer Reviewer Database, which serves to connect librarians who have expertise in searching for evidence syntheses with journal editors who need peer reviewers with expertise in this area, should go some way towards remedying this situation. In April 2020, four of the major international library associations (the Canadian Health Libraries Association (CHLA/ABSC), the European Association for Health Information and Libraries (EAHIL), Health Libraries Australia (HLA-ALIA) and the US Medical Library Association (MLA)) submitted a joint letter to the International Committee of Medical Journal Editors (ICMJE) urging journal editors to actively seek Information Specialists as peer reviewers for knowledge synthesis publications and to advocate for the recognition of their methodological expertise. This letter has also been published in the journals of the respective library associations (19-22).

As noted above, peer review of searches at the pre-publication stage is, strictly speaking, beyond the scope of this summary, which focusses on peer review of search strategies before they are run; we mention it here to indicate the increasing awareness of this related topic.


Reference list

Health problem and current use of the technology


This domain describes the target conditions, target groups, epidemiology and the availability and patterns of use of the technology in question. Furthermore, the domain addresses the burden – both on individuals and on society – caused by the health problem, the alternatives to the technology in question, as well as the regulatory status of the technology and the requirements for its use. It covers the qualitative description of the target condition, including the underlying mechanism (pathophysiology), natural history (i.e. course of disease), available screening and diagnostic methods, prognosis, and epidemiology (incidence, prevalence), as well as the underlying risk factors for acquiring the condition and the available treatments. A description of subgroups or special indications should be included, especially when the technology does not target the whole population. (1)

Sources to search

Designing search strategies

Methodologies familiar from clinical or HTA research are not suitable for finding proper up-to-date answers to the questions of this domain. It may be much faster and more efficient to collect a background set of information through an international survey among HTA agencies, health ministries or health service providers than to perform extensive literature searches. If a literature search is conducted, the basic principles of systematic review methodology should be followed. (1)

Reference list

Description and technical characteristics of the technology


This domain describes the technology (or a sequence of technologies) and its technical characteristics, i.e. when it was developed and introduced, for what purpose(s); who will use the technology, in what manner, for what condition(s), and at what level of health care. Material requirements for the premises, equipment and staff are described, as are any specific training and information requirements. The regulatory status of the technology should be listed, where applicable. The issues in this domain need to be described in sufficient detail to differentiate the technology from its comparators. Terms and concepts should be used in a manner that allows those unfamiliar with the technology to get an overall understanding of how it functions and how it can be used. It is important to distinguish between scientifically proven versus suspected mechanisms of action. Important terms should be defined, and a glossary or a list of product names provided. The section may include pictures, diagrams, videos, or other visual material, in order to facilitate understanding for persons who are not experts in the field. The issues contained in this domain are related to the four main topics: (1) training and information needed to use the technology; (2) features of the technology; (3) investments and tools required to use the technology and (4) regulatory status. (1) 

Sources to search

The source of information will depend on the location of a technology within its product life cycle (1).

Designing search strategies

Gathering descriptive information does not necessarily imply a systematic literature search. However, for the transparency of the HTA, the approaches and sources of information used should be documented. If a systematic literature search is performed, the basic principles of systematic review methodology should be followed. (1)

Reference list



Safety is an umbrella term for any unwanted or harmful effects caused by using a health technology. Safety information, balanced with data on effectiveness, forms the basis for further assessment of the technology. (1)

Safety issues can be 

This chapter uses the term adverse effects to be consistent with the literature discussing information-seeking issues within this field. Most of the research findings included in this chapter are for adverse drug effects.

Sources to search

Relying solely on MEDLINE is not recommended, as it is unlikely to be a comprehensive source on adverse effects information (2,3).

A wide range of sources needs to be used for the search to be thorough and to provide the best results (4). In a systematic review (3) and a case study of a single drug (4), Golder and Loke identified a combination of sources and techniques that might be expected to provide comprehensive information on adverse effects (in alphabetical order):

Golder et al. (5) and Wieseler et al. (6) found that unpublished data such as company clinical trials reports and drug approval information could be valuable sources of adverse effects information.

The HTA Core Model® recommends the following additional sources: product data sheets, national and international safety monitoring systems, disease and technology registers, routinely collected statistics from health care institutions and Internet discussion forums (1).

In a case study on spinal fusion, Golder et al. (7) found that multiple sources need to be searched in order to identify all the relevant studies with safety data for a medical device. The minimum combination of sources in the study was Science Citation Index, Embase, CENTRAL and either MEDLINE or PubMed, in addition to reference checking, contacting authors and using automated current awareness services.

Designing search strategies

In a study conducted in 2012, Golder and Loke found that adverse effects terms were increasingly prevalent in the title, abstract and indexing of adverse effects papers in MEDLINE and Embase (8). They concluded, therefore, that reviewers could, with some caution, choose to use more focused search filters or specific adverse effects terms in their search strategies, rather than run broad non-specific searches (without adverse effects terms), followed by evaluation of large numbers of full-text articles, at least for articles published more recently.

Even though no single published adverse effects search filter has been shown to capture all relevant records, such filters may still be useful in retrieving adverse effects data (9). The purpose of the search, topic under evaluation, resources available and anticipated gain in precision are factors one should take into consideration when applying such filters. Golder and Loke found that adverse effects search filters, when combined with specific adverse effects search terms, could be applied in MEDLINE with an increase in precision without major loss of sensitivity (10). They also found that adverse effects search filters should be applied with caution in Embase as there might be too high a loss of sensitivity without much improvement in precision (10).
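The sensitivity/precision trade-off discussed above can be made concrete with a small illustrative calculation. This is a minimal sketch, not data from the cited studies: the record identifiers and counts are invented, and `sensitivity_and_precision` is a hypothetical helper showing how filter performance studies score a filtered search against a gold standard of relevant records.

```python
def sensitivity_and_precision(retrieved: set, relevant: set) -> tuple:
    """Compute the two metrics used to evaluate search filters.

    sensitivity (recall) = relevant records retrieved / all relevant records
    precision            = relevant records retrieved / all records retrieved
    """
    true_positives = len(retrieved & relevant)
    return true_positives / len(relevant), true_positives / len(retrieved)

# Invented record IDs: a broad search vs. the same search with a filter applied.
relevant = {"r1", "r2", "r3", "r4", "r5"}
broad = relevant | {f"n{i}" for i in range(95)}                     # 100 records
filtered = {"r1", "r2", "r3", "r4"} | {f"n{i}" for i in range(16)}  # 20 records

print(sensitivity_and_precision(broad, relevant))     # (1.0, 0.05)
print(sensitivity_and_precision(filtered, relevant))  # (0.8, 0.2)
```

In this toy example the filter quadruples precision while losing one relevant record, which is the kind of trade-off the studies above quantify.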

Performance measurement of individual search terms included in search filters in MEDLINE and Embase has shown that:

Studies by Golder et al. (9, 10) provide an overview and comparisons of published search filters. Papers dealing with development of search filters are not included in this SuRe Info chapter, but these can be found at the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource.

However, the currently available adverse effects search filters may not necessarily be useful when searching for adverse effects data on medical devices (11). In a case study, Golder et al. (11) found that the most successful search terms for identifying adverse effects data on medical devices differed from the most successful terms used in search filters for adverse drug effects. The authors emphasize the need to create specific search filters for adverse effects of medical devices.

Systematic reviews of adverse effects should not be restricted to specific study types (12). Golder et al. found that there was no difference on average between estimates of harm in meta-analyses of RCTs compared to observational studies (12).

Search approaches to identify systematic reviews of adverse effects should be similar to those used to identify primary studies of adverse effects. According to Golder et al. (13) ‘floating’ subheadings provided the highest sensitivity for searching the two major databases of systematic reviews: the Database of Abstracts of Reviews of Effects (DARE) and the Cochrane Database of Systematic Reviews (CDSR). In DARE, MeSH terms achieved the highest level of precision.


We acknowledge Carol Lefebvre and David Kaunelis for their work as co-authors of previous versions of the chapter.

Reference list

  1. EUnetHTA Joint Action 2, Work Package 8. HTA Core Model® version 3.0; 2016 (pdf). [Further reference details] [Publication appraisal] [Free full text]
  2. Golder S, Loke YK. Sources of information on adverse effects. Health Info Libr J 2010;27(3):176-190. [Further reference details] [Publication appraisal] [Free full text]
  3. Golder S. Optimising the retrieval of information on adverse drug effects. Health Info Libr J 2013;30(4):327-331. [Further reference details] [Publication appraisal] [Free full text]
  4. Golder S, Loke YK. The contribution of different information sources for adverse effects data. Int J Technol Assess Health Care 2012;28(2):133-137. [Further reference details] [Publication appraisal] [Free full text]
  5. Golder S, Loke YK, Bland M. Unpublished data can be of value in systematic reviews of adverse effects: methodological overview. J Clin Epidemiol 2010;63(10):1071-1081. [Further reference details] [Publication appraisal] [Free full text]
  6. Wieseler B, Wolfram N, McGauran N, et al. Completeness of reporting of patient-relevant clinical trial outcomes: comparison of unpublished clinical study reports with publicly available data. PLoS Med 2013;10(10):e1001526. [Further reference details] [Publication appraisal] [Free full text]
  7. Golder S, Wright K, Rodgers M. The contribution of different information sources to identify adverse effects of a medical device: a case study using a systematic review of spinal fusion. Int J Technol Assess Health Care 2014;30(4):1-7. [Further reference details] [Publication appraisal] [Free full text]
  8. Golder S, Loke YK. Failure or success of electronic search strategies to identify adverse effects data. J Med Libr Assoc 2012;100(2):130-134. [Further reference details] [Publication appraisal] [Free full text]
  9. Golder S, Loke Y. The performance of adverse effects search filters in MEDLINE and EMBASE. Health Info Libr J 2012;29(2):141-151. [Further reference details] [Publication appraisal] [Free full text]
  10. Golder S, Loke YK. Sensitivity and precision of adverse effects search filters in MEDLINE and EMBASE: a case study of fractures with thiazolidinediones. Health Info Libr J 2012;29(1):28-38. [Further reference details] [Publication appraisal] [Free full text]
  11. Golder S, Wright K, Rodgers M. Failure or success of search strategies to identify adverse effects of medical devices: a feasibility study using a systematic review. Syst Rev 2014;3:113. [Further reference details] [Publication appraisal] [Free full text]
  12. Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Med 2011;8(5):e1001026. [Further reference details] [Publication appraisal] [Free full text]
  13. Golder S, McIntosh HM, Loke Y. Identifying systematic reviews of the adverse effects of health care interventions. BMC Med Res Methodol 2006;6:22. [Further reference details] [Publication appraisal] [Free full text]

Diagnostic accuracy


HTA may include assessment of new diagnostic technologies or techniques. These can involve the identification and review of diagnostic test accuracy (DTA) studies designed to differentiate between individuals with and without a target condition (1). The Cochrane Collaboration has published an evidence-based guide to searching for DTA studies, which provides the basis for this summary (2).

DTA studies tend to be poorly reported and searching for them can be problematic due to this inadequate reporting and inconsistent terminology, the absence of appropriate indexing terms in some databases for this publication type, and inconsistent use of suitable indexing terms where they are available (2).

Sources to search

Relying only on searching MEDLINE is not recommended, as it is unlikely to be the most comprehensive source of diagnostic information and because diagnostic studies are not easy to retrieve efficiently in bibliographic databases (3). Relative recall analysis of systematic reviews has also suggested other databases might yield additional studies including Science Citation Index, BIOSIS and LILACS (3). Recent analyses have suggested that fewer databases might be adequate, but are weakened by their reliance on known-item searches (4,5,6). Review searches may not detect all the records in MEDLINE that might be relevant to a review, so searching other databases provides opportunities to pick up (MEDLINE indexed) studies by other routes. An analysis of ten meta-analyses found that only using studies indexed in MEDLINE did not impact significantly on the sensitivity and specificity estimates of the meta-analyses in those reviews (4). A second analysis of 16 meta-analyses of diagnostic accuracy studies of depression screening tools found 94% (range: 83-100%) of the primary studies included in the meta-analyses were indexed in MEDLINE (5). The remaining non-MEDLINE indexed studies were located in Scopus, PsycINFO, and Embase. The authors acknowledged that the quality of the majority of the original reviews could not be determined. Another recent study of nine reviews performed by a single research group found that the reviewers’ original searches would have found 85% of their included studies from MEDLINE and Embase (range: 60-100%) (6). Adding reference checking to the process would have found 93% of the included studies. The Cochrane Handbook (2), based on available research-evidence, currently recommends that searches should include the following databases for reviews and primary studies:

●      MEDLINE
●      Embase
●      ARIF
●      HTA database
●      DARE (closed to new records from March 2015)
●      Cochrane Database of Systematic Reviews
●      Searches for unpublished studies in dissertations databases and grey literature databases
●      Reference checking
●      Citation searches in services such as Science Citation Index, Scopus, and Google Scholar, as well as related articles options in interfaces such as PubMed or Ovid.

In addition the following new databases could also be searched:

●     PROSPERO - register of systematic reviews
●     Epistemonikos - collection of systematic reviews and their included studies
●     PDQ Evidence - collection of systematic reviews about health systems and their included studies 
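The relative recall analyses described above can be sketched programmatically. In this illustrative example (the study identifiers and indexing data are invented, and `relative_recall` is a hypothetical helper), the pooled set of a review's included studies acts as the gold standard against which each individual source is scored.

```python
def relative_recall(found_in_source: set, all_included: set) -> float:
    """Relative recall: the share of a review's included studies that a
    single source could have retrieved, using the pooled included studies
    from all sources as the gold standard."""
    return len(found_in_source & all_included) / len(all_included)

# Invented included studies of a review, and where each is indexed.
included = {f"s{i}" for i in range(1, 11)}  # s1 .. s10
indexed = {
    "MEDLINE": {"s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9"},
    "Embase": {"s2", "s3", "s4", "s5", "s6", "s7", "s8", "s10"},
}
for source, studies in indexed.items():
    print(f"{source}: {relative_recall(studies, included):.0%}")
# MEDLINE: 90%
# Embase: 80%
```

As the text notes, analyses of this kind depend on the quality of the original reviews' searches: a source can only be credited with studies the reviewers actually found somewhere.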

HTA agencies may also undertake assessments of diagnostic tests, so agency websites should also be explored; for example, NICE publishes diagnostics guidance on its website.

Although the proportion of ongoing studies investigating diagnostic test accuracy may still be relatively low (7), some are being recorded prospectively on trials registers such as the ICTRP portal (8). Searching for unpublished studies is important for reducing potential biases, and research has demonstrated that between 25% and 50% of DTA studies do not get published in peer-reviewed publications (9). A recent study of a large sample of 200 systematic reviews of DTA studies has demonstrated that searching for unpublished studies is not yet standard practice (9).

The evidence for the value of handsearching is currently sparse, with one recent study of a single topic showing that handsearching contributed little (10). It is possible that the topic was well defined and the database searches exemplary, so the contribution of handsearching could be different for other topics (10). More evidence is required on the yield and value of handsearching. Where a topic is published in journals that are not indexed in bibliographic databases, handsearching can still serve a purpose, but this needs to be evaluated question by question.

Designing search strategies

Search strategies should be designed to be highly sensitive, using a wide variety of search terms, both text words and subject indexing, to ensure that the many different ways a test may be described feature in the search (2). Information specialists should be aware of the weaknesses of reporting in abstracts of diagnostic accuracy studies. One exploratory study evaluating the comprehensiveness of reporting in the abstracts of 12 high-impact journals found that 50% of the articles did not identify the study as a diagnostic accuracy study in the title and that 65% included the sensitivity and/or specificity estimates in the abstract (11). In addition, the application of available indexing may not be consistent across databases and should not be relied upon. One study reported that the sensitivity of three key Emtree headings, including the checktag ‘diagnostic test accuracy study’, was individually below 50%, reaching only 72.7% when the headings were used together (12).

The search should reflect some, but not necessarily all, of the key concepts of the review (2). The search is likely to capture the index test being investigated and the target condition being diagnosed (2,11). A third set of terms can be considered to capture the patient description or the reference standard. The development of search strategies for DTA studies can be challenging and may involve several iterations to reach a strategy that captures the complex way records may present concepts of diagnosis (2). Cochrane Reviews of diagnostic test accuracy studies and the Cochrane handbook provide examples of search approaches for these, often complex, topics. Strategies may include both general terms (such as the generic type of diagnostic method, for example dipsticks) and specific terms such as named dipstick tests (2).

There are many published methodological search filters designed to capture studies of diagnostic test accuracy, typically including test measurement terms such as sensitivity and accuracy (13). The evidence on the performance of DTA search filters, however, suggests that combining filters with a search for a population and an index test is likely to miss relevant studies (14,15,16,17). Search filters for DTA studies do not seem to perform consistently and may result in unacceptable reductions in sensitivity (13,14,15,16,17,18). Some studies have found instances where these methodological filters could be used, but these are not within the context of information retrieval for the production of health technology assessments (19, 20). Taken together, current evidence suggests that, provided DTA filters are not the only approach, they may be useful as one component of a “multi-stranded” search strategy, in which multiple queries using different combinations of concepts are run sequentially. Search filters can be identified from the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource (13).
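The "multi-stranded" approach can be sketched as follows: each strand pairs a different combination of concepts, and the final record set is the deduplicated union of all strands. Everything in this sketch is hypothetical — `run_query` stands in for a real database interface, and the strands and record identifiers are invented for illustration.

```python
# Each strand combines a different subset of the review's concepts; filters
# appear in only one strand, so a filter miss in that strand can still be
# recovered by another strand.
strands = [
    "(index test terms) AND (target condition terms)",
    "(index test terms) AND (DTA filter terms)",
    "(target condition terms) AND (reference standard terms)",
]

def run_query(query: str) -> set:
    # Placeholder: a real implementation would call a database API and
    # return record identifiers. Canned results stand in here.
    canned = {
        strands[0]: {"pmid1", "pmid2", "pmid3"},
        strands[1]: {"pmid2", "pmid4"},
        strands[2]: {"pmid3", "pmid5"},
    }
    return canned[query]

results = set()
for strand in strands:
    results |= run_query(strand)  # set union deduplicates across strands

print(sorted(results))  # ['pmid1', 'pmid2', 'pmid3', 'pmid4', 'pmid5']
```

The design point is that no single strand needs to be sensitive on its own; sensitivity is a property of the union.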

Subheadings (floating subheadings and subheadings attached to the index test or the target condition) may be a helpful component of the search strategy (2).

Attention to proper translation of DTA search strategies into subsequent databases is important. To illustrate, a recent study identified suboptimal translation of MEDLINE DTA search strategies into the LILACS database and provides detailed guidance on how to search for DTA studies in LILACS (21).

Developing a search strategy can be iterative and complex. It can be helpful to have topic experts review samples of search results for relevance, and it is always helpful to be able to test retrieval against sets of known relevant records.

Reporting Search Strategy Methods

The PRISMA-DTA Checklist and the PRISMA-DTA for Abstracts Checklist provide guidance on minimum reporting methods for systematic reviews of DTA studies. A study evaluating the completeness of reporting of DTA systematic reviews against the PRISMA-DTA guidelines found that, although the information sources searched were reported in 87/100 of the systematic reviews analyzed, the last search date and the complete search strategies were only moderately reported and could be improved (22).


We have used the searching chapter of the Cochrane DTA Handbook (2) as our baseline and SuRe Info appraisals have only been prepared for recently identified studies.


Reference list


Clinical effectiveness


This domain focuses on the identification of evaluations of the efficacy or effectiveness of a technology (device, medicine, vaccine, procedure or system) or intervention. These evaluations focus on whether a technology works, as well as the magnitude of health benefits or harms caused by the technology. Searching for adverse effects is summarized in the SuRe Info Safety chapter, and diagnostic tests in the SuRe Info Diagnostic Accuracy chapter. Study designs used to assess clinical effectiveness of a technology or intervention include randomized controlled trials (RCTs), quasi-experimental studies, and observational studies. Clinical effectiveness searches will focus on the identification of reports of these study types (1,2).

This chapter is primarily based on Cochrane’s 2020 update of the Searching for and selecting studies chapter (Chapter 4) of the Cochrane Handbook for Systematic Reviews of Interventions (3) and the EUnetHTA guideline on information retrieval for systematic reviews and health technology assessments of clinical effectiveness (4).

This chapter is the result of extensive Cochrane work, and appraisals have not been prepared for studies cited in the Cochrane Handbook.

Sources to search

The choice and number of sources selected to search will depend on the research question, which resources can be accessed, and time and budget constraints (3,5). The Cochrane Handbook states the following databases should be searched as a minimum to identify as many relevant efficacy studies as possible and minimise the risk of publication bias:

While CENTRAL includes records from MEDLINE, Embase, clinical trial registers and other bibliographic databases, the Cochrane Handbook recommends supplementary searches of MEDLINE and Embase for comprehensiveness and currency, since there is a time delay between records being indexed in MEDLINE/Embase and appearing in CENTRAL.

There are various interfaces for these databases, some free and some fee-paying; check your own institution's library to see which resources you can access. For more information see the SuRe Info Service providers and search interfaces page.

Subject specific databases

The 2020 Cochrane Handbook states that it is highly desirable to search appropriate subject-specific databases (3). There is mixed evidence on whether searching subject-specific databases adds value in terms of identifying additional unique references, and the decision about whether to search subject-specific databases may depend on the research question or topic area (6–11).

Examples of subject-specific bibliographic databases are listed in an Appendix. Further information is available in the Technical Supplement associated with the Cochrane Handbook (12).

National/regional databases

The 2020 Cochrane Handbook recommends searching national and regional databases (3), as these may index journals not included in international bibliographic databases such as MEDLINE and Embase, and could minimize the risk of language bias (13–15). There may be particular relevance in searching regional databases for certain topic areas; for example, searching Chinese databases may identify additional trials in searches on Chinese traditional medicine (16). Examples of regional and national bibliographic databases are listed in the Appendix. Further information is available in the Technical Supplement associated with the Cochrane Handbook (12).

Bethel and Rogers (2019) recommend using search summary tables to evaluate the impact of decisions regarding bibliographic database selection (5). Search summary tables make it easier to identify the resources that contribute unique records and may aid decision-making for update searches.
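The logic of a search summary table can be sketched with set operations: for each database, count the included studies it retrieved and how many of those no other source found. The database names and study identifiers below are invented for illustration; a real table would be built from the review's deduplicated retrieval records.

```python
# Invented data: which of a review's included studies each database retrieved.
retrieved_included = {
    "MEDLINE": {"s1", "s2", "s3", "s4"},
    "Embase": {"s2", "s3", "s5"},
    "CINAHL": {"s3"},
}

print(f"{'Database':<10}{'Included':>10}{'Unique':>8}")
for db, studies in retrieved_included.items():
    # Records found by any other database.
    others = set().union(*(s for d, s in retrieved_included.items() if d != db))
    unique = studies - others  # studies only this database contributed
    print(f"{db:<10}{len(studies):>10}{len(unique):>8}")
```

In this toy example CINAHL retrieves no unique studies, which is exactly the kind of signal Bethel and Rogers suggest using when deciding which sources to keep for update searches.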

Clinical trial registers, ongoing studies and unpublished data

Sources of ongoing and unpublished studies should be included in the search. Searching for such studies can prove challenging, but should be undertaken to minimise bias (17, 30). Ongoing studies can be found in trials registries, whose records are updated when the trials are completed and published. 

The following is a selected list of trial registries and a search engine:

Some clinical trials registries are region or topic specific (for a more exhaustive list see Chapter 4 of the 2020 Cochrane Handbook (3) or the YHEC clinical trials website).

Investigators’ documentation and clinical study reports are another source of unpublished data, so contact with principal investigators or researchers may be considered. Isojarvi et al 2018 have summarised research evidence around identifying unpublished data (18), and research highlights the need to search multiple trials registers (19). Reports of RCTs and quasi-RCTs from ICTRP are now included within the CENTRAL database; however, research suggests searching CENTRAL alone may not be sufficient to identify all relevant clinical trial registrations (31).

Supplementary searching

Consider which supplementary search methods to use; appropriate sources will depend on the topic of the search (2020 Cochrane Handbook section 4.3.5). These methods may locate publications that were not found in the original search or identify concepts that have been missed. They may also identify comments, errata, retractions or related studies (20–22).

There is evidence that suggests hand searching may identify unique studies not identified through bibliographic database searches, particularly in non-English language journals (23). However, the value of hand searching may vary across subject areas (24).

Citation index searching can help identify studies particularly where subject searches are challenging (25,26). The 2020 Cochrane Handbook (section 4.3.5) recommends screening the reference lists of previous reviews on the same topic to identify studies (3). For further information on citation searching, see the SuRe Info section on the Value of using different search approaches.

Web-searching may identify individual studies or organisations such as academic units or research bodies that publish relevant material. The usefulness of this is also topic dependent and further information is available in the Technical Supplement associated with the Cochrane Handbook (12).

Searching for grey literature (literature not controlled by traditional commercial publishing, including reports, dissertations, theses and databases of conference abstracts) is considered highly desirable for Cochrane Reviews (2020 Cochrane Handbook section 4.3.5) (3). Sources of grey literature are listed in the Appendix. However, there is discussion regarding the value of grey literature for certain topics, so the time and effort spent searching grey literature sources should be carefully considered when planning the search (25,27).

Regulatory agency and manufacturer websites and clinical study reports may be useful for providing more extensive detail on interventions than is available in journal articles (18). Regulatory agencies provide access to detailed pharmaceutical submission documents, such as the EMA’s public assessment reports (EPARs) and the FDA’s drug approval packages (DAPs), as well as product recalls, market withdrawals and safety alerts.

Health technology assessments (HTAs) published by national health technology assessment agencies can provide detailed information on the clinical effectiveness, economic analysis and patient related issues around new health technologies.

Designing search strategies

The 2020 Cochrane Handbook (section 4.4) provides guidance on issues to consider when designing search strategies for systematic reviews (3). The PICO model (Patient or Population or Problem; Intervention; Comparison; Outcome) is commonly used to develop the structure of a search strategy for clinical effectiveness research questions, but other frameworks are available (28).

In many bibliographic database strategies, the search is likely to have three sets of terms, using a combination of subject headings and free-text terms to describe:

For further information, see the SuRe Info chapter on Search strategy development.

Search filters

Search filters are combinations of search terms designed to retrieve particular types of references, including specific methodological study designs. The 2020 Cochrane Handbook (section 4.4.7) and Glanville et al (2020) recommend using specially designed and tested filters (such as the Cochrane Highly Sensitive Search Strategies for identifying randomized trials in Ovid MEDLINE) when appropriate (3, 32). A search filter should not be used in a pre-filtered database such as CENTRAL.

Some systematic reviews may include non-randomized controlled trials (2020 Cochrane Handbook Chapter 24) (3). Search strategies for these study types can be problematic as they are not well defined or indexed consistently (29).

For further information on the sources of methodological filters (including filters to identify randomized controlled trials and observational studies), see the SuRe Info section on Search filters.

Documenting searches

For further information regarding documenting and reporting searches, refer to the SuRe Info chapter.


Reference list


ClinicalEffectivenessChapter_Appendix.pdf138.08 KB

Costs and economic evaluation

We are grateful to Eleanor Kotas for assistance with this chapter in 2019–2020 and to Kelly Farrah for assistance in 2020.


This domain focuses on the importance of obtaining information about costs and outcomes as well as efficacy and effectiveness when evaluating new technologies. Economic evaluation is an important part of health technology assessment because it assists with priority-setting between different health technologies. An economic evaluation identifies, measures, values and compares the costs and outcomes of a technology with its relevant comparator.

This domain overlaps with the effectiveness domain and the organizational domain (1).

Guidance on conducting searches as part of systematic reviews of economic evaluations and utilities has recently been published (2,3).

Sources to search

There are some databases which identify and collect economic evaluations and health economics studies (4,5,6,7,8,9) to promote efficient retrieval. These databases are built largely from MEDLINE and Embase, but offer a variety of value-added information such as critical appraisals, results, categorisations and indexing. They can save time in identifying economic evaluations, but may not be comprehensive because of publication lags or geographical focus (e.g. the Cost-Effectiveness Analysis (CEA) Registry). NHS EED ceased updating at the end of 2014 and is available only as a closed database; HEED is no longer available. This means that sensitive searches should also include general medical databases such as MEDLINE and Embase (4,5,8,9,10,11,12). Searching Science Citation Index and conference abstracts (via websites as well as Embase) may also increase retrieval (10,13). Pitt et al. conducted a bibliometric analysis of full economic evaluations of health interventions published in 2012–14, comparing, among other things, the sensitivity and specificity of searches in 14 databases (14). This study confirms that Econlit is not a high-yield resource for economic evaluations and suggests that Scopus may be a useful resource to search, which may merit further investigation.

Searching non-database sources is likely to identify further studies outside of commercial journal publications (10).

The majority of recent reviews of economic evaluations have not followed published searching approaches in detail, and their searches are often poorly reported (15). Reviews should report their searches explicitly and search a range of resources (2,9,15). The following information sources should be considered when searching for economic evaluations and utility studies:

Identifying information to populate economic models may involve searching sources ranging from statistical resources to bibliographic databases (4,5,21,22,23,24,25). Guidance on suggested minimum searching levels for model parameters is available, although the author notes that much of the guidance has not been empirically tested (25). Additional suggestions for identifying utility studies include standard approaches such as checking the reference lists of eligible studies, consulting experts, carrying out citation searches and named-author searching (3). One study has examined the use of routine data (typically obtained from health insurance funds or other reimbursement data sources rather than bibliographic databases) in economic evaluations and highlighted that these data may increasingly need to be included in economic evaluations (26).

Designing search strategies

Principles of systematic review methodology should be followed when designing search strategies to identify economic evaluations. Within the specific economic evaluation databases, the development of sensitive subject searches capturing the population and the intervention of interest is recommended (4,5,27). An overview of methods for systematic reviews of health economic interventions suggests that a systematic search should use relevant elements of PICO combined with an economic search filter (18). Shemilt is more cautious still, suggesting that only intervention search terms may be required and that focus can be achieved by adding the population concept (16). However, there is no requirement to add an economic evaluation search filter to searches within economic evaluation databases because they are pre-filtered (3,7). Search filters for economic studies can be considered (in combination with concepts capturing the population and/or intervention) in general bibliographic databases such as MEDLINE or Embase (18). Published search filters, which can be identified from the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource, tend to have high sensitivity but poor precision (28,29,30). A filter validation study conducted in 2018 highlights Embase filters with sensitivity ranging from 89.9% (2.9% precision) to 70.2% (14.1% precision) (31). CADTH offers a more precision-maximizing search filter for rapid reviews (32). Search strategies to identify cost-effectiveness information may need to be adapted from those developed for searching for effectiveness studies (33). Searching for particular economic methods may require the use of several techniques (34).
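The sensitivity and precision figures quoted for filter validation studies follow the standard definitions, which can be sketched as follows. The counts below are hypothetical and purely illustrative.

```python
# Standard filter performance measures:
# sensitivity = relevant records retrieved / all relevant records in the gold standard
# precision   = relevant records retrieved / all records retrieved by the filter

def sensitivity(relevant_retrieved: int, relevant_total: int) -> float:
    return relevant_retrieved / relevant_total

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    return relevant_retrieved / total_retrieved

# Hypothetical example: a filter finds 90 of 100 known economic
# evaluations while retrieving 3000 records overall.
print(f"sensitivity: {sensitivity(90, 100):.1%}")  # 90.0%
print(f"precision: {precision(90, 3000):.1%}")     # 3.0%
```

As the validation figures above illustrate, a filter with very high sensitivity typically pays for it with low precision, and vice versa.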

Searches to inform specific parameters of decision models may not need to be as extensive and systematic as those to identify economic evaluations: decision models are developed organically, some parameters do not require the identification of comprehensive evidence, and it may not be feasible to conduct extensive searches for all parameters of a model (4,22,25).

Health state utility values (HSUVs) are important parameters in decision models. Searching for them requires specific techniques (3,21,35), and the careful use of search filters can also be considered (3,36). There are few subject headings dedicated to utilities within MeSH and EMTREE, and although general subject headings such as ‘Quality of life’ will yield relevant studies, they are likely to demonstrate poor precision (3). Free-text terms should be included in searches, and three types may be helpful: general terms (such as QALY), instrument-specific terms (such as EQ-5D) and terms describing methods of utility elicitation (such as standard gamble) (3). Search filters make use of a selection of these terms and have been shown to perform well in practice (3,34,36). Arber et al. have published three validated filters for retrieving HSUVs (sensitivity-maximizing; a balance of sensitivity and precision; and precision-maximizing) (36). One study recommends the use of iterative searching for utilities, following an initial scoping search, and lists factors to consider when defining the search criteria (35).

Searching for cost of illness/burden of illness can make use of population search terms (perhaps taken from an accompanying effects review) (16).


Reference list

Ethical analysis


The ethical analysis domain considers prevalent social and moral norms relevant to the technology in question. It involves an understanding of implementing or not implementing a healthcare technology in two respects: with regard to the prevailing societal values, and with regard to the norms and values the technology itself constructs when it is put into use. The moral value that societies attribute to the consequences of implementing a technology is affected by socio-political, legal, cultural, religious and economic differences. However, many ethical considerations are common to all cultures and societies. There are also moral and ethical issues related to the consequences of performing a health technology assessment (e.g. the ethical consequences of choosing specific endpoints, and whether there are ethical problems related to economic evaluation) (1).

Ethical analyses in HTA are realized in different ways (2):

Different methods and approaches (3) for analyzing ethical aspects in HTA exist and are applied in some published HTA reports. These methods publications do not, however, provide detailed guidance on how and where to find relevant literature, which is the aim of this chapter.

Sources to search

Medical ethics is an interdisciplinary field of research (4), so searching beyond the major biomedical databases is recommended. Rauprich et al. (5) compared the search process and results of MEDLINE and the ethics database BELIT and, in their examples, found only a small overlap of 3–4% between the two. Fangerau (4) identified the highest quantity of medical ethics journal literature by searching a combination of databases, such as Current Contents, MEDLINE, Research Alert, Social Sciences Citation Index, Embase, AgeLine, CINAHL, E-psyche, Sociological Abstracts, and Family Index.

Droste et al. (6) recommended, depending on the topic in question, the following sources (searchable in English language) in information retrieval for ethical issues:

As ethical aspects are related to individual and public preferences, norms and values (the sense of morality), they differ regionally and nationally. Thus, for thorough reflection on ethical issues, relevant national databases are of interest too (6). In this context, Dracos introduced the Italian bioethics database SIBIL (7).

Ethical aspects, like legal aspects, refer to international or supranational rules and regulations and, in particular, to national law. For this reason, the information sources recommended for legal issues are of interest too (see the chapter “Legal aspects”).

Further, hand-searching for non-indexed journals, searching of the web pages of relevant ethics institutes or searching for experts may be considered (6).

Designing search strategies

No internationally established standard exists on how to develop search strategies on ethical analysis related to health technologies (6). A study by Droste et al. (6) introduced a proposal for an information retrieval procedure similar to the workflow of information retrieval for effectiveness assessments.

One should first try to identify relevant laws, rules and regulations, and the ethical issues relevant to the topic of interest and the methods approach chosen for analysis, then start the workflow:

Step 1: Translation of the search question, i.e. definition of the search components, using the PICO scheme and additional components.

Step 2: Concept building by modeling and linking search components with Boolean operators.

Step 3: Identification of synonyms in all relevant languages.

Step 4: Selection of relevant information sources.

Step 5: Design of search strategies for bibliographic databases.

Step 6: Execution of search strategies and information seeking, including hand-searching.

Step 7: Saving of retrieval results and standardized reporting of the process and results.

Step 8: Final quality check and calculation of precision and recall.

In the first step, it is recommended to add a component for the ethical aspects to the PICO scheme describing the population, intervention, comparator, and outcomes of interest (PICOE). In databases that allow advanced searching, subject headings and text words describing the relevant ethical issues are then combined, component by component, with the Boolean operator “AND”. The final steps of the proposed workflow are the final quality check, saving the results, and reporting the search process in a transparent and reproducible manner.
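The combination of components described above (terms within each component joined with OR, components joined with AND) can be sketched programmatically. The subject headings and free-text terms below are invented for illustration, and the syntax is generic rather than specific to any one database interface.

```python
# Sketch: build a Boolean query from PICOE-style components.
# Within each component, subject headings and free-text terms are
# combined with OR; components are then combined with AND.
# All terms below are hypothetical examples.

def or_block(terms):
    """Join a component's terms with OR inside parentheses."""
    return "(" + " OR ".join(terms) + ")"

components = {
    "population": ["exp Dementia/", "dementia", "alzheimer*"],
    "intervention": ["exp Telemedicine/", "telehealth", "telemonitoring"],
    "ethics": ["exp Ethics/", "ethic*", "moral*", "autonomy"],
}

query = " AND ".join(or_block(terms) for terms in components.values())
print(query)
```

In practice each block would be built as a numbered line set in the database interface rather than one long string, but the component-by-component logic is the same.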

Some support is provided by Kahn et al. (8, 9) who presented the Bioethics Thesaurus Keywords and MeSH equivalents as well as some recommendations on how to find publications by entering MeSH Headings and free-text queries in PubMed and further databases provided by the National Library of Medicine (NLM). Additionally, the National Reference Center for Bioethics Literature (10) published a guide on the bioethics literature databases at the Georgetown University and the National Library of Medicine (NLM), on BELIT (German Reference Centre for Ethics in the Life Sciences, DRZE) and the Global Ethics Observatory (UNESCO).

The paper of Droste et al. (6) provides (besides the overview of relevant databases) relevant subject headings for searching for ethical aspects in MEDLINE and Embase. The use of less sophisticated search strategies may need to be considered in other sources.

Some efforts have been undertaken by Petrova et al. (11) to develop a standard search filter for identifying publications on values, but the study results show that a) “values” are hard to define and are topic specific, b) “values” cannot be represented by a brief search filter (124 MeSH terms and 144 free-text words were processed), and c) sensitivity/external validity is too low for the filter to be applied in HTA or systematic reviews.

Some overlap does exist between ethical, legal and social aspects of health technologies (6). For example, issues of patient autonomy are part of each of these aspects. To avoid duplication of work, joint information retrieval processes for these three aspects may therefore be considered.

Some information on ethical aspects of health technologies is published in qualitative studies, so it is recommended to search for such studies from various disciplines too. Guidance on how to search for qualitative research will be provided later in a separate chapter.


Reference list


Social aspects

Organizational aspects


Health Technology Assessments (HTAs) not only evaluate a health technology and its effectiveness (and often cost-effectiveness) but also consider organizational aspects surrounding its implementation, or sometimes removal, within a specific context or setting. This domain of an HTA examines how various types of resources (administrative, human, technological, etc.) need to be structured when implementing a technology. Any impacts that may result within the health care organization, or the health system as a whole, are considered (1,2).

In general, the organizational domain explores the following issues, but may also consider others (1,2):

  1. Health care delivery processes and how the technology may affect current work flows
  2. The structure of health care services and equitable access to the new technology
  3. Process related costs for purchasing and setting up the new technology along with budget impacts
  4. Management issues
  5. Cultural issues including acceptance of the new technology by those within health care organizations

General Search Guidance

There is little information regarding the optimal methods for conducting analyses in this domain, and consequently little guidance on best practices for searching the evidence base. EUnetHTA’s HTA Core Model (1) and the Danish Centre for Health Technology Assessment’s Health Technology Assessment Handbook (2) offer the most detailed guidance in this area.

In general, both sources agree that this is a challenging area for information retrieval, as evidence on the organization and delivery of health services encompasses a wide range of disciplines and study types and is spread across a wide range of published and grey literature. The information required for this section of an HTA is often context- (and often country-) specific, which can result in little to no published literature being available (1,2).

It is recommended that, as a first step, an extensive literature search focusing on identifying systematic reviews of organizational aspects should be conducted. If no systematic reviews are available, the search should be revised to focus on guidelines and relevant primary studies. If no relevant data are identified, the third step is to gather primary data, which might involve conducting surveys or interviews of healthcare professionals and content experts. Data might also be obtained from administrative databases of the relevant organizations involved in the analysis (1). New primary qualitative research might be the only way to assess real-world practice use and misuse (2).

Sources to search

A wide range of sources of published and unpublished (grey) information should be searched. Other search techniques should also be considered, including contacting experts, scanning the reference lists of relevant papers and hand searching journals. Information should be gathered not just from traditional health sciences literature sources but also from sources of social sciences, business, and even education literature. The choice and number of resources to search will depend on the topic of the assessment and the time and resources available for searching. At a minimum, the most commonly used databases below should be consulted (1,2).

Resources recommended to search for the organizational domain (1,2) include:

In addition the following sources could also be searched:


Types of Research Studies to Include in the Search

Because of the complexity of the organizational domain, in terms of the variety of literature that is needed, no single type of research study is appropriate. The evidence base may encompass many varying types of studies, both quantitative and qualitative. The HTA Core Model (1) recommends searching for the following study types:

With this in mind, we also recommend that individuals consult the chapter (note: in development) on searching for qualitative literature to aid in the development of search strategies for the organizational domain.

Designing Search Strategies

In terms of designing the search strategy for this topic area, there is little guidance available and we suggest that information specialists explore published reviews and HTAs to see how others have searched for topics such as health delivery processes and health structures.

Search Filters

A number of search filters, each with a specific focus, can be employed in the design of search strategies. Wilczynski et al. (3) have developed a health services research filter, Simon et al. (4) have created a filter aimed at uncovering nurse staffing research, Van Walraven et al. (5) have developed a filter to identify studies that use administrative data, and Hempel et al. (6) have published a filter for quality improvement interventions. Each of these authors acknowledges that research in these topic areas is difficult to search for because of the wide variety of applicable subject headings/terms and the variable keywords and language used to describe the field (3-6). While all of these filters have quite good sensitivity, they have much poorer precision, which is likely to result in large search yields and many irrelevant records that need to be manually screened.
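The screening burden implied by a low-precision filter can be estimated from its sensitivity and precision. The figures in the sketch below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Rough estimate of screening burden for a search filter.
# If a filter with known sensitivity and precision is run on a topic
# with a given number of relevant studies, the expected total yield is:
#   (relevant studies x sensitivity) / precision
# The "number needed to read" per relevant record is 1 / precision.

def expected_yield(relevant_total: int, sens: float, prec: float) -> int:
    """Estimate total records retrieved, given filter performance."""
    relevant_retrieved = relevant_total * sens
    return round(relevant_retrieved / prec)

# Hypothetical topic with 50 relevant studies; filter with
# 95% sensitivity and 2% precision:
print(expected_yield(50, 0.95, 0.02))  # 2375 records to screen
```

Estimates like this can help decide whether a sensitivity-maximizing filter is affordable for a given project, or whether a more precise filter (at the cost of some missed studies) is the pragmatic choice.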

Additional search filters can be identified from the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource.


Reference list

Legal aspects


The HTA Core Model specifies four categories of legal aspects of importance to HTA (1):

In addition, when assessing diagnostic technologies, legal issues related to who is the end-user are also of importance (1).

Sources to search

Legal aspects related to health care policy refer to international or supranational rules and regulations (such as European law) and in particular to national law. Droste and Rixen (2) recommend the use of the following information sources for legal issues directly related to the health technology in question:

-Health policy administration, Decision-making bodies, Health insurances etc. (Benefit rights)
-European Medicines Agency (EMA), National agencies etc. (Pharmaceutical legislation)
-Notified Bodies (European Union), National authorities etc. (Medical device law)
-Governments, Parliaments, Authorities with legislative rights etc. (Rehabilitation and care legislation, Hospital law, Remuneration law)
-Medical Council etc. (Professional law)
-Government, Parliament, Legislative Authorities (Criminal law)
-Government, Parliament, Federal / National Court of Justice, Higher District Court etc. (Liability law)
-World Health Organization (WHO), EU Council of Ministers, Government, Parliament, Federal / National Social Court, Federal / National Constitutional Court etc. (Patient rights)

Sources for patents may be consulted when looking for information on whether the technology in question infringes some intellectual property rights or whether the introduction of the technology means there will be additional licensing fees to be paid (1).

Depending on the topic in question, the following information sources (searchable in English language) may be used when searching for legal issues directly related to the patient and e.g. his/her basic rights and freedoms (2):

Further, hand-searching for non-indexed journals and searching of the web pages of relevant health law and ethics institutes may be considered (2).

Designing search strategies

No internationally established standard exists for how to develop search strategies on legal aspects related to health technologies (2). A study by Droste and Rixen (2) introduces a proposal for an information retrieval procedure similar to the workflow of information retrieval for effectiveness assessments.

One should first try to identify relevant laws, rules and regulations, and legal issues relevant to the topic of interest (2). When defining the research question, it is recommended to add an additional category related to the legal aspects to the PICO scheme describing the population, intervention, comparator, and outcomes of interest (PICOL). In databases that allow advanced searching, subject headings and text words describing the relevant legal issues are then combined with search terms characterising the selected relevant PICO categories, with the Boolean operator AND. The paper of Droste and Rixen (2) provides an overview of relevant subject headings for searching for legal aspects in MEDLINE and Embase. Use of less sophisticated search strategies may need to be considered in other types of sources.

Patent information cannot be searched using standardized search approaches and is considered challenging (4). A human recombinant insulin case study by Dirnberger (4) compared the performance of three different search approaches: a “crude” keyword search strategy, a complex focused keyword search strategy, and a sequence search strategy. The best search results in terms of recall and precision were achieved by combining the focused keyword and sequence search approaches.

There is some overlap between ethical, legal and social aspects of health technologies (2). For example, issues of patient autonomy are part of each of these aspects. To avoid duplication of work, joint information retrieval processes for these three aspects may therefore be considered. 


Reference list