Summarized Research in Information Retrieval for HTA

Welcome to Summarized Research in Information Retrieval for HTA (SuRe Info), a web resource that provides research-based information relating to the information retrieval aspects of producing systematic reviews and health technology assessments. SuRe Info seeks to help information specialists stay up to date with the latest developments by providing easy access to current methods papers, and to support more research-based information retrieval practice.

The website has two sections, listed below:

  1. Information on general search methods common across all health technologies;
  2. Methods to use when searching for specific aspects of health technologies (mainly based on the structure of the HTA Core Model® developed by EUnetHTA).

Click on a heading name to reach the chapter summarizing the current research findings within that category. The references listed at the end of each chapter are linked to structured appraisals of the references, written by members of the SuRe Info project team.

If you have any questions, comments or suggestions relating to SuRe Info please complete this contact form.

General search methods

Searching for specific aspects of HTA

DISCLAIMER

This resource is for personal, professional and non-commercial use only. Reproduction or republishing the content is not allowed. When using the HTA Core Model® we have followed the HTA Core Model Terms of Use. Views expressed in the publication appraisals are those of the SuRe Info reviewers and do not reflect the opinions of their respective organizations.

Search strategy development

Author(s): 
Last revised: 
2017-03-31

Introduction

The Cochrane Information Retrieval Methods Group have published an evidence-based chapter on search methods for the Cochrane Handbook (1), which provides the basis for this summary alongside guidance produced by the Centre for Reviews and Dissemination (2) and the Agency for Healthcare Research and Quality (AHRQ) (3). 

The revised and updated searching chapter of the Cochrane Handbook is in preparation. To avoid duplication of effort with the development of the Cochrane Handbook, appraisals have not been prepared for studies in this chapter. Once the revised Cochrane Handbook is available it will be used to update this chapter.

Sensitivity and precision

In order to retrieve as many studies relevant to the review as possible, and to compensate for the limitations of information source records and indexing, search strategy development and construction for systematic reviews should generally aim for sensitivity (1). Increasing the sensitivity of a search increases the possibility of identifying all relevant studies, but also tends to reduce precision because the number of irrelevant results is increased (1, 2). Sampson et al examined a cross-section of 94 health-related systematic reviews that reported the flow of bibliographic records through the review process and found that a search precision of approximately 3% was typical (4). The number of results retrieved, all of which must then be screened against eligibility criteria, has implications for the resources required to conduct a systematic review. This trade-off between sensitivity and precision should be acknowledged and discussed with the wider review team, and an appropriate balance sought within the context of the resources available.
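The trade-off can be made concrete with the standard definitions of the two measures. A minimal sketch in Python, using hypothetical counts chosen to echo the approximately 3% precision figure reported by Sampson et al:

```python
# A minimal sketch (hypothetical counts) of the two measures behind the
# sensitivity/precision trade-off.

def sensitivity(relevant_retrieved, total_relevant):
    """Proportion of all relevant studies that the search retrieved."""
    return relevant_retrieved / total_relevant

def precision(relevant_retrieved, total_retrieved):
    """Proportion of retrieved records that are relevant."""
    return relevant_retrieved / total_retrieved

# A sensitive strategy that retrieves 3,000 records to capture 90 of
# 100 known relevant studies:
print(f"sensitivity = {sensitivity(90, 100):.0%}")  # sensitivity = 90%
print(f"precision   = {precision(90, 3000):.0%}")   # precision   = 3%
```

Here, raising sensitivity further (say, to capture the remaining 10 studies) would typically require broader terms and an even larger, less precise result set to screen.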

The emphasis on search sensitivity above precision typically reflects the context of systematic reviews of quantitative research. This emphasis may not be the same in searches developed for different purposes within the HTA context. In the context of qualitative systematic reviews or qualitative evidence syntheses, for example, there is discussion as to whether these types of review share the same need as systematic reviews of quantitative research for ‘comprehensive’, ‘exhaustive’ bibliographic database searches (5). Similarly, in the context of conducting a search to inform an ‘evidence map’ (where an overview of the extent, nature and characteristics of a research area is of interest), research has indicated that less sensitive searches may be appropriate. In a study which compared a ‘highly sensitive’ search strategy with a ‘highly specific’ search strategy for an evidence-mapping exercise on diabetes and driving to inform clinical guidance development, the authors report that the results of the ‘highly specific’ search would have been sufficient for answering the research question (6). The authors conclude that using highly specific instead of sensitive search strategies is “fully adequate for evidence maps with the aim of covering mainly the breadth rather than depth of a research spectrum”.

Structuring the search

The Cochrane Handbook suggests a search strategy should be structured around the main concepts being examined by the review. For reviews of interventions, this can be expressed using PICO (Patient (or Participant or Population), Intervention, Comparison and Outcome). It is usually seen as undesirable to include all elements of the PICO in the search strategy, as some concepts are often poorly described, or absent altogether, in the title and abstract of a database record and in the assigned index terms. For reviews of many interventions a search may reasonably comprise the population, the intervention and, if appropriate, a study design filter (1). A validated search filter is recommended where one exists for the concept of interest (3).

In some topic areas, for example complex interventions, where many of the concepts are particularly ill-defined, it may be preferable to use a broader search strategy (such as searching only for the population or intervention) and increase the resources allocated to sifting records (2). Alternatives to the PICO framework have also been evaluated for searches in some fields; examples include the SPIDER tool to structure searches for qualitative and mixed methods research (7) and the BeHEMoTh tool to structure searches for theory (8). In a structured methodological review on searching for qualitative research, Booth lists 11 different notations for use in this context (including PICO, SPIDER and BeHEMoTh), but states that, as with quantitative reviews, there is little empirical data to support the merits of question formulation (5). Methley et al tested the SPIDER search tool in a systematic narrative review of qualitative literature, comparing it with use of the PICO tool and a modified version of PICO with added qualitative search terms (PICOS) (9). The authors conclude that where comprehensiveness is a key factor the PICO tool should be used preferentially due to the risk of missing relevant studies using the SPIDER tool.

Selecting search terms

The Cochrane Handbook recommends that in order to identify as many relevant records as possible, search strategies should combine subject headings selected from the database’s controlled vocabulary or thesaurus (with appropriate “explosions”) and a wide range of free-text terms (1). The choice of free-text terms should include consideration of synonyms, related terms and variant spellings. 

Methods for identifying search terms have traditionally included techniques such as checking the bibliographic records of known relevant studies, consulting topic experts and scanning database subject indexing guides (3). However, text mining is a rapidly developing tool with potential application in a range of tasks associated with the production of systematic reviews, including the identification of search terms (2). AHRQ have published a review on the use of text-mining tools as an emerging methodology within systematic review processes, including the literature search (10). The aim of the AHRQ project was to provide a ‘snapshot’ of the state of knowledge, rather than an in-depth assessment. The review refers to 12 studies where text-mining tools were used for the development of ‘topic’ search strategies and identifies several general approaches. These include assessing word frequency in citations (using tools such as PubReminer or EndNote) and automated term extraction (using tools such as Termine). The review reports that all of the identified studies found benefit in automating term selection for systematic reviews, especially for large, unfocused topics. The AHRQ review drew no conclusions specific to the use of text-mining tools for the literature search process. Its general conclusion on the use of text mining for systematic review processes was that text-mining tools appear promising, but further research is warranted.
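The word-frequency approach can be sketched in a few lines. This is an illustrative sketch only (the record titles and stopword list are invented), not a reimplementation of PubReminer or Termine:

```python
from collections import Counter
import re

# Titles of known relevant records (hypothetical examples)
records = [
    "Community engagement in public health interventions",
    "Engaging communities to improve health outcomes",
    "Public involvement and community participation in health research",
]

stopwords = {"in", "to", "and", "of", "the"}
word_counts = Counter()
for text in records:
    for word in re.findall(r"[a-z]+", text.lower()):
        if word not in stopwords:
            word_counts[word] += 1

# The most frequent words suggest candidate free-text search terms
for term, count in word_counts.most_common(5):
    print(term, count)
```

Note that a simple count treats “community” and “communities” as different words, which is itself a reminder that variant word forms and truncation need to be considered when frequent words are turned into search terms.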

Studies cited in the AHRQ review include a study by O'Mara-Eves et al which evaluated whether additional search terms for the topic of ‘community engagement’ were generated when using the text-mining data-extraction tool Termine (11), in addition to typical search development techniques.  The study authors report that although in many cases the terms generated by text-mining had already been identified by the reviewers as relevant, text-mining did reveal some useful synonyms and terms associated with the topic that had not previously been considered.  The study authors state that the text-mining approach studied should never be used on its own but alongside usual search development processes. The authors conclude that text mining helped to identify relevant search terms for a broad topic that was inconsistently referred to in the literature.

The AHRQ review also cited 2 studies published by researchers at the German HTA agency IQWiG. In the first study, the authors propose an 'objective approach' to strategy development using text analysis methods (12). The authors argue that this method ensures the process of selecting search terms is transparent and reproducible and allows a searcher with little specialist knowledge of the search topic to make decisions on the inclusion of terms that are informed by evidence.  In the second study the authors aim to validate the ‘objective approach’, and conclude that it was noninferior to the standard 'conceptual approach' (13). Subsequent correspondence on this publication (14, 15) and the authors' responses to this correspondence (16, 17) has debated the study’s conclusions and the strengths and limitations of the methods used for this research.  Since the publication of the AHRQ review, IQWiG researchers have published a third paper on their ‘objective approach’, comparing it with the ‘conceptual approach’ (18). The authors report that the ‘objective approach’ yielded higher sensitivity than the ‘conceptual approach’, with similar precision and state that ‘objective approaches’ should be routinely used in the development of high-quality search strategies.

Combining search terms with Boolean operators and other search syntax

The Cochrane Handbook describes how a search strategy should be built up one concept at a time, joining the controlled vocabulary terms, text words, synonyms and related terms within each concept with the Boolean ‘OR’ operator. The sets of terms may then be combined with ‘AND’, which limits the results to those records that contain at least one search term from each of the sets; an article that does not contain at least one search term from each set will not be retrieved. Cochrane advise against the use of the ‘NOT’ operator where possible, to avoid inadvertently excluding relevant records (1).

The AHRQ manual refers searchers to the PRESS (Peer Review of Electronic Search Strategies) Checklist (19) and states that search strategies should make use of advanced search techniques such as truncation, wildcards and proximity searching described in the PRESS document (3). In 2015, the PRESS 2015 Guideline Statement was published, which updated and expanded on the previous PRESS publications (20).

Testing search strategies and deciding when to stop searching

Search strategies should be tested to ensure they are fit for purpose: that they find relevant studies. This is difficult to ascertain but testing of search strategies can be carried out informally by expert review, checking that known relevant documents are retrieved by the strategy, or by comparing against previously published strategies (3). 

Alternatively, more formal testing can be undertaken. Such methods are summarised by Booth, whose brief review identified eight methods for determining optimal retrieval of studies for inclusion in HTAs (21). The review concludes that although numerous methods are described in the literature, there is little formal evaluation of the strengths and weaknesses of each approach. Sampson and McGowan developed and assessed a method (Inquisitio Validus Index Medicus) for validation of MEDLINE search strategies (22). The method uses a version of the known relevant item approach, testing recall of the relevant studies that were identified through all search methods and are indexed in the database being tested. The validation occurs once screening has been completed and the eligible studies are known. Poorly performing search strategies can be amended, re-tested and re-run; new studies identified by the amended search can be screened and any relevant studies included in the review. The authors report that the validation method was robust and was able to demonstrate that the retrieval of relevant studies from MEDLINE in a sample of six updated Cochrane reviews was sub-optimal. They conclude that the Inquisitio Validus test is a simple method of validating the search, and can determine whether the search of the main database performs adequately or needs to be revised to improve recall.
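A sketch of the known relevant item approach underlying this kind of validation: once screening is complete, recall is measured against the eligible studies known to be indexed in the database under test (all record identifiers below are hypothetical):

```python
# Eligible studies known (after screening) to be indexed in the database
# under test, and the records the strategy actually retrieved.
eligible_indexed = {"101", "102", "103", "104", "105"}
retrieved = {"101", "103", "104", "250", "260"}

missed = eligible_indexed - retrieved
recall = len(eligible_indexed & retrieved) / len(eligible_indexed)

print(f"recall = {recall:.0%}")   # recall = 60%
print("missed:", sorted(missed))  # missed: ['102', '105']
```

Examining why the missed records were not retrieved (absent terms, indexing differences) indicates how the strategy should be amended before it is re-run.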

One aspect of testing searches is to inform reviewers when searching has retrieved 'enough' studies. There is little research evidence on empirically based 'stopping rules', but methods such as capture-mark-recapture have been explored for developing such rules (23). Capture-mark-recapture has also been used to evaluate searches by retrospectively estimating how close they came to capturing the total body of literature (24, 25). It involves hand-searching a sample journal and running a search strategy on information sources indexing the same journal. The number of relevant records identified by each process is then used to produce a statistical estimate of what has been missed by all of the searches conducted (24).
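In its simplest form, capture-mark-recapture uses the Lincoln-Petersen estimator: the overlap between two independent identification methods yields an estimate of the total body of relevant literature. A sketch with hypothetical counts:

```python
# Lincoln-Petersen estimate of the total number of relevant records,
# based on two independent identification methods (counts hypothetical).

def lincoln_petersen(n1, n2, overlap):
    """n1, n2: relevant records found by each method; overlap: found by both."""
    if overlap == 0:
        raise ValueError("no overlap between methods; estimate undefined")
    return n1 * n2 / overlap

handsearch = 40   # relevant records found by handsearching the journal
database = 60     # relevant records found by the database search
both = 30         # relevant records found by both methods

total_estimate = lincoln_petersen(handsearch, database, both)
identified = handsearch + database - both
print(total_estimate)               # 80.0
print(total_estimate - identified)  # 10.0 records estimated missed
```

The larger the overlap between the two methods, the closer the searches are estimated to be to capturing the whole relevant literature.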

Despite these investigations, the AHRQ guidelines state that no currently available method can be easily applied to searches for comparative effectiveness reviews. It is argued that the searcher’s judgement is required to decide whether searching additional sources is likely to result in the retrieval of unique items or whether the search has reached the point of saturation. The decision must balance the desire to identify all relevant studies with the resources available to carry out the search (3).

 

Reference list

 

Search filters

Last revised: 
2017-04-05

What are search filters?

Search filters (sometimes called hedges) are collections of search terms designed to retrieve selections of records from a bibliographic database (1). Search filters may be designed to retrieve records of research using a specific study design (e.g. randomised controlled trial), on a specific topic (e.g. kidney disease), or with some other feature of the research question (e.g. the age of a study’s participants). They are usually combined with the results of a subject search using the AND operator.

Why would you use a search filter?

When included in a database search strategy, a robust search filter can significantly reduce the number of records that researchers need to sift, and recent research has shown that this is a key use of search filters (2). Search filters are not, however, available for all study types, all databases or all database interfaces.

Key features

Filters are typically designed for one purpose, which may be to maximise sensitivity (or recall) or to maximise precision (and so reduce the number of irrelevant records that need to be assessed for relevance). Sensitivity is the proportion of relevant records that are retrieved by the filter and is the most frequently reported performance measure (3). Precision is the proportion of retrieved records that are relevant and is also frequently reported (3). Specificity is the proportion of irrelevant records successfully not retrieved. Filters are database and interface specific. Performance measures can be difficult to interpret, and alternative graphical approaches to presenting performance information may assist with making decisions about which filter to select (3).
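The three performance measures can be expressed over a 2x2 classification of database records against a filter. A sketch with invented counts:

```python
# Hypothetical 2x2 classification of database records against a filter
tp = 180     # relevant records retrieved by the filter
fn = 20      # relevant records the filter missed
fp = 1800    # irrelevant records retrieved
tn = 98000   # irrelevant records correctly not retrieved

sensitivity = tp / (tp + fn)  # proportion of relevant records retrieved
precision = tp / (tp + fp)    # proportion of retrieved records that are relevant
specificity = tn / (tn + fp)  # proportion of irrelevant records not retrieved

print(f"{sensitivity:.1%} {precision:.1%} {specificity:.1%}")  # 90.0% 9.1% 98.2%
```

Because irrelevant records vastly outnumber relevant ones in a large database, a filter can have high specificity and yet low precision, which is why both measures are worth reporting.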

Where can you find search filters?

Search filters of interest to researchers producing technology assessments are incorporated into some of the MEDLINE interfaces. For example, they are labelled as Clinical Queries in PubMed (4). Often searchers ‘translate’ filters or adapt them to run on different interfaces (2). Translations and adaptations should be undertaken carefully since different interfaces function in different ways, and different databases may have different indexing languages.

Study design search filters can also be identified from internet resources such as

Some guidance documents for the conduct of health technology assessments recommend specific filters and others leave the choice to the discretion of the searcher.

Critical appraisal of filters

When search filters are published, the methods used to compile them should be clearly described by the authors. It is also valuable to have access to critical assessments of filters in practice. Search filter development methods have evolved over time to become more objective and rigorous (1, 4). The quality of search filters can be appraised using critical appraisal tools (5, 6), which assess the focus of the filter, the methods used to create it, and the quality of the testing and validation conducted to ensure that it performs to a specific level of sensitivity, precision or specificity.

It is also important to know the date when the filter was created so an assessment can be made as to its currency. The value of a search filter can decrease over time as new terms are added to a database thesaurus.

Search filters are not quality filters in terms of identifying only high quality research evidence. All records resulting from the use of a search filter will require an assessment of relevance and quality. All search filters and all search strategies are compromises and an assessment of the performance of filters for each technology appraisal is recommended.

Increasing numbers of filters have led to the assessment of the relative performance of different filters to find the same study design and these can be a good starting point for deciding which filter to use. 

A systematic review of the performance of a large number of diagnostic test accuracy (DTA) filters has provided recommendations that search filters should not be used as the only method for searching for DTA studies for systematic reviews and technology appraisals (7). The review concludes that the filters risk missing relevant studies and do not offer benefits in terms of enhanced precision.

A comparison study (8) of the performance of search filters used to identify economic evaluations concluded that, while highly sensitive filters are available, their precision is low. The performance data provided in this paper can help researchers select the filter that is most appropriate to their needs.

More recently, a study (9) demonstrated that a search filter with adequate precision and sensitivity was not yet available to identify epidemiological studies in the MEDLINE database.

Search filter development

Creating a search filter to identify database records of a specific study design or some other feature requires a "gold standard" reference set that can be used to measure performance. The reference set can be created by using relative recall (10) or by handsearching.

A recent case study (11) describes how such a gold standard set was created to support the development of a prognostic filter for studies of oral squamous cell carcinoma in MEDLINE. The methods used are generic and could be applied to both other databases and to other types of research studies.

The authors use a flowchart to illustrate the overall process and describe each of the stages: how to generate the initial set of records; the sample size required for filter development; use of an annotation tool and annotation guidelines; and the calibration process to measure inter-annotator agreement.
 

Reference List

 

 

Other limits: language, date

Author(s): 
Last revised: 
2016-03-31

Introduction

This summary is based on the Cochrane Handbook for Systematic Reviews of Interventions (1), the guidance for undertaking systematic reviews produced by the Centre for Reviews and Dissemination (CRD) (2) and the methods guide for effectiveness and comparative effectiveness reviews produced by the Agency for Healthcare Research and Quality (AHRQ) (3). The Cochrane Handbook and AHRQ methods guide are based on the best available evidence and the CRD guidance is recommended as a source of good practice by agencies such as the National Institute for Health and Care Excellence (NICE).

The revised and updated searching chapter of the Cochrane Handbook is in preparation. To avoid duplication of effort with the development of the Cochrane Handbook, appraisals have not been prepared for studies in this chapter. Once the revised Cochrane Handbook is available it will be used to update this chapter.

Limiting a search strategy by date

Limiting a search strategy by date may reduce the number of records retrieved for screening, but date limits should only be applied if there is a robust rationale for doing so.  For example, if a healthcare intervention was introduced at a certain date, limiting the search strategy to only retrieve studies reported from this date would be appropriate (3,4). 

Applying date limits to the search strategy can also be an option if an existing search is being updated (2,4). When conducting an update search, searchers should be cautious about how date limits are applied (2,4). If attempting to limit by date, an appropriate field (or fields), such as update date rather than publication date, should be used. Limiting searches by publication date risks missing relevant records (2). For databases where there is no update field, running the search without date limits is advised, using reference management software to de-duplicate the returned records against the original search results (2).
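The de-duplication step can be sketched as a simple set difference on record identifiers, which is essentially what reference management software does (all identifiers below are hypothetical):

```python
# Records (by identifier) from the original search and the update search
original_results = {"pmid:100", "pmid:101", "pmid:102"}
update_results = {"pmid:101", "pmid:102", "pmid:103", "pmid:104"}

# Only records not seen in the original search need to be screened now
new_records = update_results - original_results
print(sorted(new_records))  # ['pmid:103', 'pmid:104']
```

In practice, matching on identifiers alone can miss duplicates where the same record carries different identifiers across databases, so reference managers also compare fields such as title, authors and year.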

Limiting a search strategy by language

Including non-English language studies in a review can add to the resources necessary to complete the review (for example, time needed to identify results, translation costs, time needed to data-extract) (4). By only including English language studies, however, language bias is potentially introduced (2,3,5). For topic areas where research from, or relating to, non-English speaking regions is of increased significance, the issue of limiting by language may be a particular concern. Shenderovich et al studied methodological issues in systematic reviews which aimed to include evidence from low- and middle-income countries, using the example of a review of risk factors for child conduct problems and youth violence (6). The authors reported that 15% of the eligible studies were in a language other than English and would therefore not have been retrieved if English-language search limits had been applied. The impact of omitting the non-English studies on the conclusions of the review was not investigated.

Current guidance recommends that search strategies should not be restricted by language (2,3,4). This is advised even if translation is not feasible (2,3). Reviewers may exclude non-English language studies from the review, but should keep a list of potentially relevant studies excluded on the basis of language. This can help inform an assessment of the potential risk of language bias (2,3).

Reference list

Peer reviewing search strategies

Last revised: 
2017-10-31

Introduction

Search strategy peer review is a process by which the searches designed and conducted for a Health Technology Assessment (HTA) or systematic review, ideally by an Information Specialist, are subsequently reviewed by another Information Specialist. The goal of peer review of search strategies is to detect errors, to improve quality, and to reduce not only the risk of missing relevant studies but also the risk of retrieving unnecessarily large numbers of irrelevant records.

As the search strategy is the cornerstone of a well-conducted HTA or systematic review, its quality could affect the results of the final review.  A study found that errors in search strategies are common: the principal mistakes being spelling errors, missed spelling variants, truncation errors, logical operator errors, use of wrong line numbers, missed or incorrect use of Medical Subject Heading index terms (e.g. MeSH), and the search strategy not being tailored for use in other databases (1).

How is peer review of search strategies performed?

Peer review of search strategies has been performed informally since literature searching for HTA and systematic reviews began. HTA Information Specialists who are part of information teams have always been able to check colleagues’ search strategies for mistakes, if project time allowed. The search strategy peer reviewer and the Information Specialist who designed the search strategy have been able to meet face-to-face to discuss errors and revisions. However, not all Information Specialists are based in teams and so may be less able to call on colleagues to peer review their search strategies. A forum has been established to enable Information Specialists to submit their searches for peer review by a fellow Information Specialist, on a reciprocal basis (http://pressforum.pbworks.com).

In addition to the forum mentioned above, a tool has been developed which enables Information Specialists to check search strategies in a more formal, structured way. The PRESS (Peer Review of Electronic Search Strategies) checklist is an evidence-based checklist that summarizes the main potential mistakes made in search strategies. This checklist can help Information Specialists to improve and assure the quality of their own search strategies and those of their colleagues. It provides clear guidance for peer reviewers to follow. It can also help non-searchers understand how search strategies have been constructed and what they have been designed to retrieve. Full details about the original PRESS project can be found in the original funder’s report (2) and the accompanying journal article (3). Further information, including the original PRESS checklist itself, can be found elsewhere (4). An update of the PRESS processes has recently been published (5). This involved an updated systematic review, a web-based survey of experts and a consensus meeting to update the PRESS tools. The 2015 Guideline Explanation and Elaboration (PRESS E&E) incorporates four components:

A Guideline Statement has been published as a companion document to the updated CADTH report (6).

The six main domains of the updated PRESS 2015 evidence-based checklist are:

It is recommended that peer review of search strategies is undertaken at the research protocol phase. If peer review of the search strategy is performed before the searches are conducted, the results are downloaded and researchers start selecting studies, there will be no need to revisit strategies and rerun searches.

Is there any evidence of the value of the peer review of search strategies?

The Agency for Healthcare Research and Quality (AHRQ) has conducted a study assessing use of the PRESS checklist and found that it “seems to cut down the time needed to do the review, increase response, and do a better job of identifying actual errors in search strategies” (7). The time burden of the review process using the PRESS checklist was less than two hours. There is no evidence on whether this tool affects the final quality of systematic reviews or their economic cost. However, CADTH conducted an internal investigation to see whether peer review of search strategies has an effect on the number and quality of articles included in CADTH Rapid Responses reports (8, 9), and found that both the number and quality of relevant articles retrieved were improved.

Reference list

Health problem and current use of the technology

Author(s): 
Last revised: 
2017-10-05

Introduction

This domain describes the target conditions, target groups, epidemiology and the availability and patterns of use of the technology in question. Furthermore, the domain addresses the burden – both on individuals and on society – caused by the health problem, the alternatives to the technology in question, and the regulatory status of the technology and the requirements for its use. It covers the qualitative description of the target condition, including the underlying mechanism (pathophysiology), natural history (i.e. course of disease), available screening and diagnostic methods, prognosis, epidemiology (incidence, prevalence), the underlying risk factors for acquiring the condition, and available treatments. A description of subgroups or special indications should be included, especially when the technology does not target the whole population. (1)

Sources to search

Designing search strategies

Methodologies familiar from clinical or HTA research are not well suited to finding proper, up-to-date answers to the questions in this domain. It may be much faster and more efficient to collect a proper background set of information through an international survey among HTA agencies, health ministries or health service providers than to perform extensive literature searches. If a literature search is conducted, the basic principles of systematic review methodology should be followed. (1)

Reference list

Description and technical characteristics of the technology

Author(s): 
Last revised: 
2017-10-05

Introduction

This domain describes the technology (or a sequence of technologies) and its technical characteristics, i.e. when it was developed and introduced, for what purpose(s); who will use the technology, in what manner, for what condition(s), and at what level of health care. Material requirements for the premises, equipment and staff are described, as are any specific training and information requirements. The regulatory status of the technology should be listed, where applicable. The issues in this domain need to be described in sufficient detail to differentiate the technology from its comparators. Terms and concepts should be used in a manner that allows those unfamiliar with the technology to get an overall understanding of how it functions and how it can be used. It is important to distinguish between scientifically proven versus suspected mechanisms of action. Important terms should be defined, and a glossary or a list of product names provided. The section may include pictures, diagrams, videos, or other visual material, in order to facilitate understanding for persons who are not experts in the field. The issues contained in this domain are related to the four main topics: (1) training and information needed to use the technology; (2) features of the technology; (3) investments and tools required to use the technology and (4) regulatory status. (1) 

Sources to search

The source of information will depend on the location of a technology within its product life cycle (1).

Designing search strategies

Gathering descriptive information does not necessarily imply a systematic literature search. However, for the transparency of HTA the approaches and sources of information should be documented. If a systematic literature search is performed, the basic principles of systematic review methodology should be followed. (1)

Reference list

Safety

Last revised: 
2017-03-08

Introduction

Safety is an umbrella term for any unwanted or harmful effects caused by using a health technology. Safety information, balanced with data on effectiveness, forms the basis for further assessment of the technology. (1)

Safety issues can be 

This chapter uses the term adverse effects to be consistent with the literature discussing information-seeking issues within this field. Most of the research findings included in this chapter are for adverse drug effects.

Sources to search

Relying solely on MEDLINE is not recommended, as it is unlikely to be a comprehensive source on adverse effects information (2,3).

A wide range of sources needs to be searched for the results to be thorough and comprehensive (4). In a systematic review (3) and a case study of a single drug (4), Golder and Loke identified a combination of sources and techniques that might be expected to provide comprehensive information on adverse effects (in alphabetical order):

Golder et al. (5) and Wieseler et al. (6) found that unpublished data such as company clinical trials reports and drug approval information could be valuable sources of adverse effects information.

The HTA Core Model® recommends the following additional sources: product data sheets, national and international safety monitoring systems, disease and technology registers, routinely collected statistics from health care institutions and Internet discussion forums (1).

In a case study on spinal fusion, Golder et al. (7) found that multiple sources need to be searched in order to identify all the relevant studies with safety data for a medical device. The minimum combination of sources in the study was Science Citation Index, Embase, CENTRAL and either MEDLINE or PubMed, in addition to reference checking, contacting authors and using automated current awareness services.

Designing search strategies

In a study conducted in 2012, Golder and Loke found that adverse effects terms were increasingly prevalent in the title, abstract and indexing of adverse effects papers in MEDLINE and Embase (8). They concluded, therefore, that reviewers could, with some caution, choose to use more focused search filters or specific adverse effects terms in their search strategies, rather than run broad non-specific searches (without adverse effects terms), followed by evaluation of large numbers of full-text articles, at least for articles published more recently.

Even though no single published adverse effects search filter has been shown to capture all relevant records, such filters may still be useful in retrieving adverse effects data (9). The purpose of the search, topic under evaluation, resources available and anticipated gain in precision are factors one should take into consideration when applying such filters. Golder and Loke found that adverse effects search filters, when combined with specific adverse effects search terms, could be applied in MEDLINE with an increase in precision without major loss of sensitivity (10). They also found that adverse effects search filters should be applied with caution in Embase as there might be too high a loss of sensitivity without much improvement in precision (10).
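The combination Golder and Loke describe (10) — generic filter terms and specific adverse effects terms OR'd together, then AND'd with the intervention concept — can be sketched as a simple query-assembly step. All terms below are illustrative examples, not a validated published filter:

```python
# Sketch of the query structure suggested by Golder and Loke (10):
# a generic adverse effects filter OR'd with specific adverse effects
# terms, then AND'd with the intervention concept.
# All terms are illustrative examples, not a validated filter.

def or_block(terms):
    """Join a list of terms into a single parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

filter_terms = ["adverse effects", "side effect*", "safety"]   # generic filter terms
specific_terms = ["fracture*", "bone loss"]                    # topic-specific harms
drug_terms = ["thiazolidinedione*", "pioglitazone"]            # intervention concept

query = f"{or_block(filter_terms + specific_terms)} AND {or_block(drug_terms)}"
print(query)
```

The OR'd harms block keeps the harms concept broad, while the AND with the intervention concept supplies the focus; this mirrors the reported MEDLINE finding of increased precision without major loss of sensitivity.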

Performance measurement of individual search terms included in search filters in MEDLINE and Embase has shown that:

Studies by Golder et al. (9, 10) provide an overview and comparisons of published search filters. Papers dealing with development of search filters are not included in this SuRe Info chapter, but these can be found at the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource.

However, the currently available adverse effects search filters may not necessarily be useful when searching for adverse effects data on medical devices (11). In a case study, Golder et al. (11) found that the search terms most successful in identifying adverse effects data for medical devices differed from the most successful terms used in search filters for adverse drug effects. The authors emphasize the need to create specific search filters for the adverse effects of medical devices.

Systematic reviews of adverse effects should not be restricted to specific study types (12). Golder et al. found that there was no difference on average between estimates of harm in meta-analyses of RCTs compared to observational studies (12).

Search approaches to identify systematic reviews of adverse effects should be similar to those used to identify primary studies of adverse effects. According to Golder et al. (13) ‘floating’ subheadings provided the highest sensitivity for searching the two major databases of systematic reviews: the Database of Abstracts of Reviews of Effects (DARE) and the Cochrane Database of Systematic Reviews (CDSR). In DARE, MeSH terms achieved the highest level of precision.

Acknowledgement

We acknowledge Carol Lefebvre and David Kaunelis for their work as co-authors of previous versions of the chapter.

Reference list

(1) EUnetHTA Joint Action 2, Work Package 8. HTA Core Model® version 3.0; 2016 (pdf). [Further reference details] [Publication appraisal] [Free full text]
(2) Golder S, Loke YK. Sources of information on adverse effects. Health Info Libr J 2010;27(3):176-190. [Further reference details] [Publication appraisal] [Free full text]
(3) Golder S. Optimising the retrieval of information on adverse drug effects. Health Info Libr J 2013;30(4):327-331. [Further reference details] [Publication appraisal] [Free full text]
(4) Golder S, Loke YK. The contribution of different information sources for adverse effects data. Int J Technol Assess Health Care 2012;28(2):133-137. [Further reference details] [Publication appraisal] [Free full text]
(5) Golder S, Loke YK, Bland M. Unpublished data can be of value in systematic reviews of adverse effects: methodological overview. J Clin Epidemiol 2010;63(10):1071-1081. [Further reference details] [Publication appraisal] [Free full text]
(6) Wieseler B, Wolfram N, McGauran N, et al. Completeness of reporting of patient-relevant clinical trial outcomes: comparison of unpublished clinical study reports with publicly available data. PLoS Med 2013;10(10):e1001526. [Further reference details] [Publication appraisal] [Free full text]
(7) Golder S, Wright K, Rodgers M. The contribution of different information sources to identify adverse effects of a medical device: a case study using a systematic review of spinal fusion. Int J Technol Assess Health Care 2014;30(4):1-7. [Further reference details] [Publication appraisal] [Free full text]
(8) Golder S, Loke YK. Failure or success of electronic search strategies to identify adverse effects data. J Med Libr Assoc 2012;100(2):130-134. [Further reference details] [Publication appraisal] [Free full text]
(9) Golder S, Loke Y. The performance of adverse effects search filters in MEDLINE and EMBASE. Health Info Libr J 2012;29(2):141-151. [Further reference details] [Publication appraisal] [Free full text]
(10) Golder S, Loke YK. Sensitivity and precision of adverse effects search filters in MEDLINE and EMBASE: a case study of fractures with thiazolidinediones. Health Info Libr J 2012;29(1):28-38. [Further reference details] [Publication appraisal] [Free full text]
(11) Golder S, Wright K, Rodgers M. Failure or success of search strategies to identify adverse effects of medical devices: a feasibility study using a systematic review. Syst Rev 2014;3:113. [Further reference details] [Publication appraisal] [Free full text]
(12) Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Med 2011;8(5):e1001026. [Further reference details] [Publication appraisal] [Free full text]
(13) Golder S, McIntosh HM, Loke Y. Identifying systematic reviews of the adverse effects of health care interventions. BMC Med Res Methodol 2006;6:22. [Further reference details] [Publication appraisal] [Free full text]

Diagnostic accuracy

Last revised: 
2017-10-31

Introduction

HTA may include assessment of new diagnostic technologies or techniques. These can involve the identification and review of diagnostic test accuracy (DTA) studies designed to differentiate between individuals with and without a target condition (1). The Cochrane Collaboration has published an evidence-based guide to searching for DTA studies, which provides the basis for this summary (2).

DTA studies tend to be poorly reported and searching for them can be problematic due to this inadequate reporting and inconsistent terminology, the absence of appropriate indexing terms in some databases for this publication type, and inconsistent use of suitable indexing terms where they are available (2).

Sources to search

Relying only on searching MEDLINE is not recommended, as it is unlikely to be the most comprehensive source of diagnostic information and because diagnostic studies are not easy to retrieve efficiently in bibliographic databases (3). Relative recall analysis of systematic reviews has also suggested that other databases, including Science Citation Index, BIOSIS and LILACS, might yield additional studies (3). Recent analyses have suggested that fewer databases might be adequate, but these analyses are weakened by their reliance on known-item searches (4,5,6). Review searches may not detect all the records in MEDLINE that might be relevant to a review, so searching other databases provides opportunities to pick up (MEDLINE-indexed) studies by other routes. An analysis of ten meta-analyses found that using only studies indexed in MEDLINE did not significantly affect the sensitivity and specificity estimates of the meta-analyses in those reviews (4). A second analysis of 16 meta-analyses of diagnostic accuracy studies of depression screening tools found that 94% (range: 83-100%) of the primary studies included in the meta-analyses were indexed in MEDLINE (5). The remaining non-MEDLINE-indexed studies were located in Scopus, PsycINFO, and Embase. The authors acknowledged that the quality of the majority of the original reviews could not be determined. Another recent study of nine reviews performed by a single research group found that the reviewers' original searches would have found 85% of their included studies from MEDLINE and Embase (range: 60-100%) (6). Adding reference checking to the process would have found 93% of the included studies. The Cochrane Handbook (2), based on available research evidence, currently recommends that searches should include the following databases for reviews and primary studies:

●      MEDLINE
●      Embase
●      ARIF
●      HTA database
●      DARE (closed to new records from March 2015)
●      Cochrane Database of Systematic Reviews
●      Searches for unpublished studies in dissertations databases and grey literature databases
●      Reference checking
●      Citation searches in services such as Science Citation Index, Scopus, and Google Scholar, as well as related articles options in interfaces such as PubMed or Ovid.

In addition the following new databases could also be searched:

●     PROSPERO - register of systematic reviews
●     Epistemonikos - collection of systematic reviews and their included studies
●     PDQ Evidence - collection of systematic reviews about health systems and their included studies 

HTA agencies may also undertake assessments of diagnostic tests, so agency websites should also be explored; NICE diagnostics guidance is one example.

Although the proportion of ongoing studies investigating diagnostic test accuracy may still be relatively low (7), some are being recorded prospectively on trials registers such as ClinicalTrials.gov and the ICTRP portal.

The evidence for the value of handsearching is currently sparse, with one recent single-topic study showing that handsearching contributed little (8). It is possible that the topic of that study was well defined and the database searches exemplary, and that handsearching would contribute more in other topics (8). More evidence is required on the yield and value of handsearching. Where a topic is published in journals that are not indexed in bibliographic databases, handsearching can still serve a purpose, but this needs to be evaluated question by question.
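Relative recall, the metric behind several of the database-yield analyses cited in this section, treats the pooled set of a review's included studies as the gold standard. A minimal sketch (the study identifiers are invented for illustration):

```python
# Relative recall: the proportion of a review's included studies that a
# single source (or search) retrieved. Study IDs here are invented.

def relative_recall(found_by_source, all_included):
    """Share of the included ("gold standard") studies found by one source."""
    included = set(all_included)
    if not included:
        raise ValueError("gold standard set is empty")
    return len(set(found_by_source) & included) / len(included)

included = {f"study{i}" for i in range(20)}       # 20 included studies
medline_hits = {f"study{i}" for i in range(17)}   # a source found 17 of them
print(relative_recall(medline_hits, included))    # 0.85
```

Because the denominator is built only from studies the reviews actually found, relative recall can overstate a source's coverage — one reason known-item analyses of "adequate" database sets should be read with caution.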

Designing search strategies

Search strategies should be designed to be highly sensitive, using a wide variety of search terms, both text words and subject indexing, to ensure that the many different ways a test may be described feature in the search (2). Information specialists should be aware of the weaknesses of reporting in abstracts of diagnostic accuracy studies. One exploratory study evaluating the comprehensiveness of reporting in the abstracts of 12 high-impact journals found that 50% of the articles did not identify the study as a diagnostic accuracy study in the title and that 65% included the sensitivity and/or specificity estimates in the abstract (9).

The search should reflect some, but not necessarily all, of the key concepts of the review (2). The search is likely to capture the index test being investigated and the target condition being diagnosed (2,9). A third set of terms can be considered to capture the patient description or the reference standard. The development of search strategies for DTA studies can be challenging and may involve several iterations to reach a strategy that captures the complex way records may present concepts of diagnosis (2). Cochrane Reviews of diagnostic test accuracy studies and the Cochrane handbook provide examples of search approaches for these, often complex, topics. Strategies may include both general terms (such as the generic type of diagnostic method, for example dipsticks) and specific terms such as named dipstick tests (2).

There are many published methodological search filters designed to capture studies of diagnostic test accuracy, including test measurement terms such as sensitivity and accuracy (10). The evidence on the performance of DTA search filters, however, suggests that combining filters with a search for a population and an index test is likely to miss relevant studies (11,12,13,14). Search filters for DTA studies do not seem to perform consistently and may result in unacceptable reductions in sensitivity (10,11,12,13,14). Some studies have found instances where these methodological filters could be used, but these are not within the context of information retrieval for the production of health technology assessments (15,16). When all the research is considered together, current evidence suggests that, for search strategies designed to support systematic reviews of diagnostic accuracy, DTA filters may be useful as one component of a “multi-stranded” approach, in which multiple queries using different combinations of concepts are run sequentially, as long as filters are not the only approach. Search filters can be identified from the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource (10).
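A “multi-stranded” strategy of this kind can be sketched as the pooled, deduplicated results of several differently constructed queries. The strand definitions and the `run_query` stand-in below are illustrative only, not a validated strategy:

```python
# Sketch of a "multi-stranded" search: several queries built from
# different concept combinations, run sequentially, with results pooled
# and deduplicated. run_query is a placeholder for a database interface;
# the record IDs are invented.

def run_query(query):
    demo_results = {
        "population AND index_test": {"r1", "r2", "r3"},
        "index_test AND dta_filter": {"r2", "r4"},
        "population AND target_condition": {"r3", "r5"},
    }
    return demo_results[query]

strands = [
    "population AND index_test",
    "index_test AND dta_filter",       # the DTA filter is ONE strand, never the only one
    "population AND target_condition",
]

records = set()
for strand in strands:
    records |= run_query(strand)       # set union removes duplicates across strands

print(sorted(records))  # ['r1', 'r2', 'r3', 'r4', 'r5']
```

The design point is that a record missed by the filter strand (here "r5") can still be caught by a strand that does not rely on methodological terms.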

Subheadings (floating subheadings and subheadings attached to the index test or the target condition) may be a helpful component of the search strategy (2).

Developing a search strategy can be iterative and complex. It can be helpful to have topic experts review samples of search results for relevance, and to test retrieval against sets of known relevant records.

We have used the searching chapter of the Cochrane Handbook (2) as our baseline and SuRe Info appraisals have only been prepared for recently identified studies.

Reference list

Clinical effectiveness

A major revision of the searching chapter of the Cochrane Handbook is currently underway, with publication expected in mid to late 2018.
To avoid duplication of effort, no content has been prepared for the SuRe Info chapter on clinical effectiveness. The revised Cochrane Handbook will be used to create this chapter.

Costs and economic evaluation

Last revised: 
2017-10-10

Introduction

This domain focuses on the importance of obtaining information about costs and outcomes as well as efficacy and effectiveness when evaluating new technologies. Economic evaluation is an important part of health technology assessment because it assists with priority-setting between different health technologies. An economic evaluation identifies, measures, values and compares the costs and outcomes of a technology with its relevant comparator.

This domain overlaps with the effectiveness domain and the organizational domain (1).

Guidance on conducting searching as part of systematic reviews of economic evaluations has recently been published (2).

Sources to search

There are some databases which identify and collect economic evaluations and health economics studies (3,4,5,6) to promote efficient retrieval. These databases are built largely from MEDLINE and Embase, but offer a variety of value-added information such as critical appraisals, results, categorisations and indexing. They can save time in identifying economic evaluations, but may not be comprehensive because of publication lags or geographical focus (e.g. the Cost-Effectiveness Analysis (CEA) Registry). NHS EED ceased updating at the end of 2014 and is available only as a closed database; HEED is no longer available. This means that sensitive searches should also include searches of general medical databases such as MEDLINE and Embase (3,4,7,8,9). Searching Science Citation Index and conference abstracts (via websites as well as Embase) may also increase retrieval (7). Pitt et al. conducted a bibliometric analysis of full economic evaluations of health interventions published in 2012-14, comparing, among other things, the sensitivity and specificity of searches in 14 databases (10). This study confirms that Econlit is not a high-yield resource for economic evaluations and suggests that Scopus may be a useful resource, which merits further investigation.

Searching non-database sources is likely to identify further studies outside of commercial journal publications (7).

The majority of recent reviews of economic evaluations have not followed published searching approaches in detail and are also currently poorly reported (11). Reviews should report the searches explicitly and search a range of resources (2,11). The following information sources should be considered when searching for economic evaluations and utility studies:

●      Specialist economic databases (the CEA Registry is a live database; the Paediatric Economic Database Evaluation (PEDE) is a database of pediatric economic evaluations; NHS EED closed at the end of 2014) (12,13,14)
●      Technology assessment databases (the Health Technology Assessment (HTA) database)
●      General medical literature databases (MEDLINE, Embase) (12)
●      Websites of HTA agencies
●      Grey literature (conferences such as ISPOR and HTAi; the RePEc economic working papers collection) (3,4)
●      Collections of utility studies including ScHARRHUD and instrument websites (15), as well as utility mapping collections (http://www.herc.ox.ac.uk/downloads/herc-database-of-mapping-studies).

Searches to identify information to populate economic models may involve a range of resources ranging from statistical sources to bibliographic databases (3,4,16,17,18,19). Guidance on suggested minimum searching levels for model parameters is available, although the author notes that much of the guidance has not been empirically tested (20).

Designing search strategies

Principles of systematic review methodology should be followed for the design of search strategies to identify economic evaluations. The development of sensitive subject searches within the specific economic evaluation databases is recommended to capture the population and the intervention of interest (3,4,21). An overview of methods for systematic reviews of health economic interventions suggests that a systematic search should use relevant elements of PICO combined with an economic search filter (14). Shemilt is more cautious still, suggesting that only intervention search terms may be required and that focus can be achieved by adding the population concept (12). However, there is no requirement to add an economic evaluation search filter to searches within economic evaluation databases because they are pre-filtered (3,6). Search filters for economic studies can be considered (in combination with concepts capturing the population and/or intervention) in general bibliographic databases such as MEDLINE or Embase (14). Published search filters, which can be identified from the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource, tend to have high sensitivity but poor precision (22,24). CADTH offers a more precision-maximizing search filter for rapid reviews (25). Search strategies to identify cost-effectiveness information may need to be adapted from those developed for searching for effectiveness studies (26). Searching for particular economic methods may require the use of several techniques (27).

Searches to inform specific parameters of decision models may not need to be as extensive and systematic as those to identify economic evaluations: decision models are developed in an organic way, some parameters do not require the identification of comprehensive evidence, and it may not be feasible to conduct extensive searches for all parameters of a model (3,17,20). Health state utility values (HSUVs) are important parameters in decision models, and searching for them requires specific techniques (16) as well as the use of a search filter (28). Searching for cost of illness/burden of illness can make use of population search terms (perhaps taken from an accompanying effects review) (12).

Reference list

●(1) EUnetHTA Joint Action 2 Work Package 8. HTA Core Model® version 2.0; 2013 (pdf). [Further reference details] [Publication appraisal] [Free full text]
●(2) Thielen FW, Van Mastrigt G, Burgers LT, Bramer WM, Majoie H, Evers S, et al. How to prepare a systematic review of economic evaluations for clinical practice guidelines: database selection and search strategy development (part 2/3). Expert Rev Pharmacoecon Outcomes Res. 2016; 1–17. [Further reference details] [Publication appraisal] [Free full text]
●(3) Glanville J, Paisley S. Searching for evidence on resource use, costs, effects and cost-effectiveness. In: Shemilt I et al (eds). Evidence based economics. Oxford:Wiley-Blackwell;2010. [Further reference details] [Publication appraisal] [Free Full text]
●(4) Glanville J, Paisley S. Identifying economic evaluations for health technology assessment. Int J Technol Assess Health Care 2010;26(4):436-440. [Further reference details] [Publication appraisal] [Free Full text]
●(5) Alton V, Eckerlund I, Norlund A. Health economic evaluations: how to find them. Int J Technol Assess Health Care 2006;22(4):512-517. [Further reference details] [Publication appraisal] [Free Full text]
●(6) Nixon J, Duffy S, Armstrong N, Craig D, Glanville J, Christie J, Drummond M, Kleijnen J. The usefulness of the NHS Economic Evaluation Database to researchers undertaking technology assessment reviews. Int J Technol Assess Health Care 2004;20(3):249-257. [Further reference details] [Publication appraisal] [Free Full text]
●(7) Royle P, Waugh N. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence appraisal system. Health Technol Assess 2003;7(34). [Further reference details] [Publication appraisal] [Free Full text]
●(8) Waffenschmidt S, Hausner E, Engel L, Volz F, Kaiser T. Benefit of searching different databases to identify health economic evaluations in German HTA-reports. Abstract presented at: Health Technology Assessment International (HTAi) 7th Annual Meeting; 2010 June 6-9; Dublin, Ireland. Abstract T4-29. [Further reference details] [Publication appraisal] [Free Full text]
●(9) Coyle KB, Trochlil K, Iversen P. MEDLINE and EMBASE for health economic literature reviews [abstract]. Value Health 2012;15(4):A162. [Further reference details] [Publication appraisal] [Free Full text]
●(10) Pitt C, Goodman C, Hanson K. Economic evaluation in global perspective: a bibliometric analysis of the recent literature. Health Econ. 2016 Feb;25 Suppl 1:9-28.  [Further reference details] [Publication appraisal] [Free full text]
●(11) Wood H, Arber M, Glanville JM. Systematic reviews of economic evaluations: How extensive are their searches? Int J Technol Assess Health Care. 2017; 27:1-7.[Further reference details] [Publication appraisal] [Free Full text]
●(12) Shemilt I, Mugford M, Vale L, Craig D. Searching NHS EED and HEED to inform development of economic commentary for Cochrane intervention reviews. Oxford: Cochrane Collaboration; 2011. [Further reference details] [Publication appraisal] [Free Full text]
●(13) Sullivan SM, Tsiplova K, Ungar WJ. A scoping review of pediatric economic evaluation 1980-2014: do trends over time reflect changing priorities in evaluation methods and childhood disease? Expert Rev Pharmacoecon Outcomes Res. 2016;16(5):599-607. [Further reference details] [Publication appraisal] [Free full text]
●(14) Mathes T, Walgenbach M, Antoine SL, Pieper D, Eikermann M. Methods for systematic reviews of health economic evaluations: a systematic review, comparison, and synthesis of method literature. Med Decis Making. 2014;34(7):826-840. [Further reference details] [Publication appraisal] [Free full text]
●(15) Dakin H. Review of studies mapping from quality of life or clinical measures to EQ-5D: an online database. Health and Quality of Life Outcomes 2013;11(1):151. [Further reference details] [Publication appraisal] [Free full text]
●(16) Golder S, Glanville J, Ginnelly L. Populating decision-analytic models: the feasibility and efficiency of database searching for individual parameters. Int J Technol Assess Health Care 2005;21(3):305-311. [Further reference details] [Publication appraisal] [Free Full text]
●(17) Papaioannou D, Brazier J, Paisley S. Systematic searching and selection of health state utility values from the literature. Value Health 2013;16(4):686-695. [Further reference details] [Publication appraisal] [Free Full text]
●(18) Zechmeister-Koss I, Schnell-Inderst P, Zauner G. Appropriate evidence sources for populating decision analytic models within health technology assessment (HTA): a systematic review of HTA manuals and health economic guidelines. Med Decis Making 2014;34(3):288-299. [Further reference details] [Publication appraisal] [Free Full text]
●(19) De Cock E, Cosmatos I, Kirsch E. Use of databases for health resource utilization and cost analyses in EU-5: results from a focused literature review [abstract]. Value in Health Conference: ISPOR 21st Annual International Meeting, Washington, DC. Value in Health 2016;19(3):A80-A81. [Further reference details] [Publication appraisal] [Free full text]
●(20) Paisley S. Identification of evidence for key parameters in decision-analytic models of cost effectiveness: a description of sources and a recommended minimum search requirement.  PharmacoEconomics. 2016 Jun;34(6):597-608. [Further reference details] [Publication appraisal] [Free full text]
●(21) Rosen AB, Greenberg D, Stone PW, Olchanski NV, Neumann PJ. Quality of abstracts of papers reporting original cost-effectiveness analyses. Med Decis Making 2005;25:424-428. [Further reference details] [Publication appraisal] [Free Full text]
●(22) Glanville J, Fleetwood K, Yellowlees A, Kaunelis D, Mensinkai S. Development and testing of search filters to identify economic evaluations in MEDLINE and EMBASE. Ottawa: Canadian Agency for Drugs and Technologies in Health; 2009. [Further reference details][Publication appraisal] [Free Full text]
●(23) Glanville J, Kaunelis D, Mensinkai S. How well do search filters perform in identifying economic evaluations in MEDLINE and EMBASE. Int J Technol Assess Health Care 2009;25(4):522-529. [Further reference details] [Publication appraisal] [Free Full text]
●(24) McKinlay RJ, Wilczynski NL, Haynes RB, Hedges Team. Optimal search strategies for detecting cost and economic studies in EMBASE. BMC Health Serv Res 2006;6:67. [Further reference details] [Publication appraisal] [Free Full text]
●(25) Kaunelis D. When everything is too much: development of a CADTH narrow economic search filter. Poster presented at: 2011 CADTH Symposium. 2011 Apr 3-5; Vancouver, BC. [Further reference details] [Publication appraisal] [Free Full text]
●(26) Droste S, Dintsios C-M. Informationsgewinnung für gesundheitsökonomische Evaluationen im Rahmen von HTA-Berichten. Gesundheitsökonomie & Qualitätsmanagement 2011;16(1):35-57. [Further reference details][Publication appraisal] [Free Full text]
●(27) Hinde S, Spackman E, Claxton K, Sculpher MJ. The cost-effectiveness threshold: the results of a novel literature review method. Value Health 2011;14:A354. [Further reference details] [Publication appraisal] [Free Full text]
●(28) Arber M, Garcia S, Veale T, Edwards M, Shaw A, Glanville J. Sensitivity of a search filter designed to identify studies reporting health state utility values [poster]. Presented at the Cochrane Colloquium, Vienna, 3-7 October 2015. [Further reference details] [Publication appraisal] [Free full text]

Ethical analysis

Last revised: 
2017-03-08

Introduction

The ethical analysis domain considers prevalent social and moral norms relevant to the technology in question. It involves an understanding of implementing or not implementing a healthcare technology in two respects: with regard to the prevailing societal values and with regard to the norms and values the technology itself constructs when it is put into use. The moral value that societies attribute to the consequences of implementing a technology is affected by socio-political, legal, cultural and religious and economic differences. However, many ethical considerations are common to all cultures and societies. There are also moral and ethical issues related to consequences of performing a health technology assessment (e.g. ethical consequences of choosing specific endpoints and whether there are ethical problems related to economic evaluation). (1)

Ethical analyses in HTA are realized in different ways (2):

Different methods and approaches (3) to analyze ethical aspects in HTA do exist and are applied in some of the published HTA reports. These methods publications do not provide detailed guidance on how and where to find relevant literature, which is the aim of this chapter.

Sources to search

Medical ethics is an interdisciplinary field of research (4). Searching beyond the major biomedical databases is recommended. Rauprich et al. (5) compared the search process and the results of MEDLINE and the ethics database BELIT and found, in their examples, only a small overlap of 3-4% between the two. Fangerau (4) identified the highest quantity of medical ethics journal literature by searching a combination of databases, such as Current Contents, MEDLINE, Research Alert, Social Sciences Citation Index, Embase, AgeLine, CINAHL, E-psyche, Sociological Abstracts, and Family Index.

Droste et al. (6) recommended, depending on the topic in question, the following sources (searchable in English) for information retrieval on ethical issues:

As ethical aspects are related to individual and public preferences, norms and values (sense of morality), they differ regionally and nationally. Thus, for thorough reflection on ethical issues, relevant national databases are also of interest (6). In this context, Dracos introduced the Italian bioethics database SIBIL (7).

Ethical aspects, like legal aspects, refer to international or supranational rules and regulations and in particular to national law. For these reasons, the information sources recommended for legal issues are also of interest (see chapter “Legal aspects”).

Further, hand-searching of non-indexed journals, searching the web pages of relevant ethics institutes, or identifying and contacting experts may be considered (6).

Designing search strategies

No internationally established standard exists for how to develop search strategies on ethical aspects of health technologies (6). A study by Droste et al. (6) proposed an information retrieval procedure similar to the workflow of information retrieval for effectiveness assessments.

One should first try to identify the relevant laws, rules and regulations, and the ethical issues relevant to the topic of interest and to the methods approach chosen for the analysis, and then start the workflow:

Step 1: Translation of the search question, i.e. definition of the search components, using the PICO scheme and additional components.

Step 2: Concept building by modeling and linking search components with Boolean operators.

Step 3: Identification of synonyms in all relevant languages.

Step 4: Selection of relevant information sources.

Step 5: Design of search strategies for bibliographic databases.

Step 6: Execution of search strategies and information seeking, including hand-searching.

Step 7: Saving of retrieval results and standardized reporting of the process and results.

Step 8: Final quality check and calculation of precision and recall.

In the first step, it is recommended to add a component for the ethical aspects to the PICO scheme describing the population, intervention, comparator, and outcomes of interest (PICOE). In databases that allow advanced searching, subject headings and text words describing the relevant ethical issues are then combined, component by component, with the Boolean operator “AND”. The final steps of the proposed workflow are saving the results, reporting the search process in a transparent and reproducible manner, and a final quality check.
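As an illustration, the component-by-component concept building described above can be sketched programmatically. The component names and search terms below are hypothetical examples chosen for this sketch, not a validated search strategy, and real database syntax will differ by interface.

```python
# Illustrative sketch of Boolean concept building for a PICOE search.
# All component names and terms are hypothetical examples,
# not a validated search strategy.

def build_query(components):
    """OR together the synonyms within each component,
    then AND the components together."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in components.values()]
    return " AND ".join(groups)

picoe = {
    "population": ["diabetes mellitus", "diabetic*"],
    "intervention": ["insulin pump*", "continuous subcutaneous insulin infusion"],
    "ethics": ["ethic*", "moral*", "personal autonomy"],
}

print(build_query(picoe))
# (diabetes mellitus OR diabetic*) AND (insulin pump* OR continuous
# subcutaneous insulin infusion) AND (ethic* OR moral* OR personal autonomy)
```

Synonyms identified in step 3 would simply be appended to the relevant component's term list before the query is assembled.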

Some support is provided by Kahn et al. (8, 9), who presented the Bioethics Thesaurus Keywords and their MeSH equivalents, as well as some recommendations on how to find publications by entering MeSH headings and free-text queries in PubMed and other databases provided by the National Library of Medicine (NLM). Additionally, the National Reference Center for Bioethics Literature (10) has published a guide on the bioethics literature databases at Georgetown University and the NLM, on BELIT (German Reference Centre for Ethics in the Life Sciences, DRZE), and on the Global Ethics Observatory (UNESCO).

The paper by Droste et al. (6) provides, besides the overview of relevant databases, relevant subject headings for searching for ethical aspects in MEDLINE and Embase. The use of less sophisticated search strategies may need to be considered in other sources.

Petrova et al. (11) have undertaken efforts to develop a standard search filter for identifying publications on values, but their results show that a) “values” are hard to define and topic-specific, b) “values” cannot be represented by a brief search filter (124 MeSH terms and 144 free-text words were processed), and c) the filter's sensitivity/external validity is too low for it to be applied in HTA or systematic reviews.

Some overlap does exist between ethical, legal and social aspects of health technologies (6). For example, issues of patient autonomy are part of each of these aspects. To avoid duplication of work, joint information retrieval processes for these three aspects may therefore be considered.

Some information on ethical aspects of health technologies is published in qualitative studies, so it is recommended to search for such studies from various disciplines too. Guidance on how to search for qualitative research will be provided in a separate chapter at a later date.

 

Reference list

 

Social aspects

Author(s): 

Organizational aspects

Last revised: 
2017-10-30

Introduction

Health Technology Assessments (HTAs) not only evaluate a health technology and its effectiveness (and often cost-effectiveness) but also consider organizational aspects surrounding its implementation, or sometimes removal, within a specific context or setting. This domain of an HTA examines how various types of resources (administrative, human, technological, etc.) need to be structured when implementing a technology. Any impacts that may result within the health care organization or the health system as a whole are also considered (1,2).

In general, the organizational domain explores the following issues, but may also consider others (1,2):

  1. Health care delivery processes and how the technology may affect current work flows
  2. The structure of health care services and equitable access to the new technology
  3. Process related costs for purchasing and setting up the new technology along with budget impacts
  4. Management issues
  5. Cultural issues including acceptance of the new technology by those within health care organizations

General Search Guidance

There is little information regarding optimal methods for conducting analyses in this domain and consequently little guidance on best practices for searching the evidence base. EUnetHTA’s HTA Core Model (1) and the Danish Centre for Health Technology Assessment’s Health Technology Assessment Handbook (2) offer the most detailed guidance in this area.

In general, both sources agree that this is a challenging area for information retrieval, as evidence on the organization and delivery of health services encompasses a wide range of disciplines and study types and is spread across a wide range of published and grey literature. The information required for this section of an HTA is often context- (and often country-) specific, which can result in little to no published literature being available (1,2).

It is recommended that, as a first step, an extensive literature search focusing on identifying systematic reviews of organizational aspects be conducted. If no systematic reviews are available, the search should be revised to focus on guidelines and relevant primary studies. If no relevant data are identified, the third step is to generate primary data, which might involve conducting surveys or interviews with healthcare professionals and content experts. Data might also be obtained from administrative databases of the relevant organizations involved in the analysis (1). New primary qualitative research might be the only way to assess real-world use and misuse in practice (2).

Sources to search

A wide range of sources of published and unpublished (grey) information should be searched. Other search techniques should also be considered, including contacting experts, scanning reference lists of relevant papers, and hand-searching of journals. Information should be gathered not just from traditional health sciences literature but also from social sciences, business, and even education literature. The choice and number of resources to search will depend on the topic of the assessment and the time and resources available for searching. At a minimum, the most commonly used databases below should be consulted (1,2).

Resources recommended to search for the organizational domain (1,2) include:

In addition the following sources could also be searched:

 

Types of Research Studies to Include in the Search

Because of the complexity of the organizational domain in terms of the variety of literature that is needed, no single type of research study is appropriate. The evidence base may encompass many types of studies, both quantitative and qualitative. The HTA Core Model (1) recommends searching for the following study types:

With this in mind, we also recommend that individuals consult the chapter (note: in development) on searching for qualitative literature to aid in the development of search strategies for the organizational domain.


Designing Search Strategies

In terms of designing the search strategy for this topic area, there is little guidance available and we suggest that information specialists explore published reviews and HTAs to see how others have searched for topics such as health delivery processes and health structures.

Search Filters

A number of search filters can be employed in the design of search strategies, each with a specific focus. Wilczynski et al. (3) have developed a health services research filter, Simon et al. (4) have created a filter aimed at uncovering nurse staffing research, van Walraven et al. (5) have developed a filter to identify studies that use administrative data, and Hempel et al. (6) have produced a filter for quality improvement interventions. Each of these authors acknowledges that research in these topic areas is difficult to search for because of the wide variety of applicable subject headings and the variable keywords and language used to describe the field (3-6). While all of these filters have quite good sensitivity, they have much poorer precision, which is likely to result in large search yields and many irrelevant records that need to be screened manually.
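The sensitivity/precision trade-off noted above can be made concrete with a small sketch. The record identifiers below are invented numbers standing in for database records, and the figures are illustrative only.

```python
# Sketch of the two search filter performance measures discussed above.
# Record IDs are invented examples, not real database records.

def sensitivity(retrieved, relevant):
    """Proportion of all relevant records that the search retrieved (recall)."""
    return len(retrieved & relevant) / len(relevant)

def precision(retrieved, relevant):
    """Proportion of retrieved records that are actually relevant."""
    return len(retrieved & relevant) / len(retrieved)

relevant = {1, 2, 3, 4, 5}
# A broad, sensitive filter: it finds most relevant records, but also
# returns many irrelevant ones that must be screened manually.
retrieved = {1, 2, 3, 4, 10, 11, 12, 13, 14, 15}

print(sensitivity(retrieved, relevant))  # 0.8
print(precision(retrieved, relevant))    # 0.4
```

In this illustration the filter retrieves four of the five relevant records (high sensitivity) but only four of its ten hits are relevant (low precision), mirroring the large yields described for the published filters.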

Additional search filters can be identified from the InterTASC Information Specialists' Sub-Group (ISSG) Search Filter Resource.

 

Reference list

Legal aspects

Last revised: 
2017-03-08

Introduction

The HTA Core Model specifies four categories of legal aspects of importance to HTA (1):

In addition, when assessing diagnostic technologies, legal issues related to the identity of the end-user are also of importance (1).

Sources to search

Legal aspects related to health care policy refer to international or supranational rules and regulations (such as European law) and in particular to national law. Droste and Rixen (2) recommend the use of the following information sources for legal issues directly related to the health technology in question:

-Health policy administration, Decision-making bodies, Health insurances etc. (Benefit rights)
-European Medicines Agency (EMA), National agencies etc. (Pharmaceutical legislation)
-Notified Bodies (European Union), National authorities etc. (Medical device law)
-Governments, Parliaments, Authorities with legislative rights etc. (Rehabilitation and care legislation, Hospital law, Remuneration law)
-Medical Council etc. (Professional law)
-Government, Parliament, Legislative Authorities (Criminal law)
-Government, Parliament, Federal / National Court of Justice, Higher District Court etc. (Liability law)
-World Health Organization (WHO), EU Council of Ministers, Government, Parliament, Federal / National Social Court, Federal / National Constitutional Court etc. (Patient rights)

Sources for patents may be consulted when looking for information on whether the technology in question infringes some intellectual property rights or whether the introduction of the technology means there will be additional licensing fees to be paid (1).

Depending on the topic in question, the following information sources (searchable in English language) may be used when searching for legal issues directly related to the patient and e.g. his/her basic rights and freedoms (2):

Further, hand-searching for non-indexed journals and searching of the web pages of relevant health law and ethics institutes may be considered (2).

Designing search strategies

No internationally established standard exists for how to develop search strategies on legal aspects related to health technologies (2). A study by Droste and Rixen (2) introduces a proposal for an information retrieval procedure similar to the workflow of information retrieval for effectiveness assessments.

One should first try to identify relevant laws, rules and regulations, and legal issues relevant to the topic of interest (2). When defining the research question, it is recommended to add an additional category related to the legal aspects to the PICO scheme describing the population, intervention, comparator, and outcomes of interest (PICOL). In databases that allow advanced searching, subject headings and text words describing the relevant legal issues are then combined with search terms characterising the selected relevant PICO categories, with the Boolean operator AND. The paper of Droste and Rixen (2) provides an overview of relevant subject headings for searching for legal aspects in MEDLINE and Embase. Use of less sophisticated search strategies may need to be considered in other types of sources.

Patent information cannot be searched using standardized approaches, and such searching is considered challenging (4). A human recombinant insulin case study by Dirnberger (4) compared the performance of three different search approaches: a “crude” keyword search strategy, a complex focused keyword search strategy, and a sequence search strategy. The best results in terms of recall and precision were achieved by combining the focused keyword and sequence search approaches.

There is some overlap between ethical, legal and social aspects of health technologies (2). For example, issues of patient autonomy are part of each of these aspects. To avoid duplication of work, joint information retrieval processes for these three aspects may therefore be considered. 

 

Reference list