Background

The increasing incidence of antibiotic resistance represents a serious worldwide problem. In November 2001, the European Council adopted a recommendation on the prudent use of antimicrobial agents in human medicine (2002/77/EC), with a focus on the surveillance of antimicrobial resistance, surveillance of antimicrobial use, control and preventive measures, education and training, and research [1].

The project proposal “Implementing antibiotic strategies (ABS) for appropriate use of antibiotics in hospitals in member states of the European Union—ABS International” was presented to the EU Commission in 2005. The project started in September 2006 and was implemented in nine Member States of the EU: Austria, Germany, Belgium, Italy, Poland, Hungary, Czech Republic, Slovenia and Slovakia [2, 3].

As part of the project, structure and process indicators were developed in order to provide antimicrobial stewardship committees or antimicrobial management teams (AMTs) with quality assessment tools for evaluating their activities [4]. Structure indicators describe the organisation and resources, as well as the communication and evaluation tools, available at the hospital level for implementing a multi-modal, multi-disciplinary antibiotic stewardship programme [5–7]. These indicators should focus on the appropriateness of antimicrobial drug prescribing and administration in hospital care, with reference to national standards and international, national or local practice guidelines. In addition to optimising individual patient care outcomes, the quality objective for antibiotic use also has an important ecological dimension, namely, to minimise the risk of antibacterial resistance selection and spread associated with individual and population antibiotic exposure.

Finally, in a general setting of budgetary limitations, the efficient use of financial and human resources should also be considered in recommending any interventions to modify or monitor antimicrobial drug use. Antibacterial drugs are among the most frequently administered drugs in hospital care and a significant driver of drug acquisition, administration and bio-monitoring costs.

This study describes the development of structure indicators for antimicrobial stewardship and antibiotic use in a hospital setting by a multi-national expert panel. Furthermore, it reports on the results of a validation survey based on the selected indicators across a pilot sample of European hospitals.

Methodology

Development of structure indicators

A multi-disciplinary team composed of five infectious disease specialists, two clinical microbiologists, three hospital pharmacists and three experts in quality of health care from four countries (Austria, Germany, Belgium, USA) developed and selected structure indicators on hospital organisation and resources, as well as drug use. This team was composed on an ad hoc basis from experts participating in the ABS International project. The development of structure indicators was achieved in three steps. In the first step, candidate quality indicators were identified based on the scientific literature and a structured list was compiled by all team members. In the second step, the listed quality indicators were scored and ranked using multi-criteria scoring based on their perceived scientific value and applicability. Finally, quality indicators were selected by consensus during a general discussion in a face-to-face meeting.

The identification of potential quality indicators was based on effective interventions and programme components identified in recent reviews of the literature, quality indicators as proposed in national/international guidelines and standards, as well as ABS/BAPCOC (The Belgian Antibiotic Policy Coordination Committee) questionnaires used in Austria and Belgium for auditing the quality of antibiotic stewardship programmes [8–16].

Multi-criteria decision analysis was used to score and rank the quality indicators based on scientific value and applicability. Multi-criteria decision analysis is a procedure aimed at supporting decision makers who need to assess a number of options against potentially conflicting criteria, combining those evaluations into an overall evaluation of relative value through a transparent and traceable process. It provides a clear audit trail for reporting the decision-making process.

The methodology for scoring and ranking the potential quality indicators was adapted from the procedure described by Schouten et al. [17]. After discussion in the consensus group, two sets of criteria were agreed upon: a first set of four criteria was used for ranking the potential value of all proposed indicators, and a second set of two criteria was scored to assess the assumed applicability across health care centres in Europe.

For both sets of criteria, each of the 13 team members remotely and independently scored every proposed quality indicator on a scale of 0 (lowest value) to 5 (maximum value) per criterion; for each indicator, the criterion scores within a set were summed and averaged across raters to give a final mean score (maximum of 20 for the value ranking). This ranking score was used to prioritise the structure indicators in descending order. The applicability score was used during a group discussion to decide upon suitability for inclusion in the field validation phase. The criteria for the ranking score were clinical relevance, ecological relevance, economic relevance and scientific validity (Table 1).

Table 1 Scoring criteria for ranking and applicability score

Previous systematic reviews, evidence-based guidelines and meta-analyses were used as the main sources for scoring the scientific validity criterion. The criteria for the applicability score were generalisability and assumed feasibility, based on expert experience (Table 1). Finally, a consensus meeting was organised to discuss the ranking results and select the quality indicators. Additionally, the ten indicators with the highest ranking and applicability scores were identified as the minimal set of key structure indicators.
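As an illustration of the calculation described above, the following minimal Python sketch computes the ranking and applicability scores for a single indicator. The two raters and their scores are hypothetical, and the implied applicability maximum of 10 points (two criteria scored 0–5) is an assumption, as only the 20-point maximum for the value ranking is stated above.

```python
# Minimal sketch of the score calculation (illustrative; raters and scores are hypothetical).
from statistics import mean

RANKING_CRITERIA = ["clinical", "ecological", "economic", "scientific_validity"]  # scored 0-5 each
APPLICABILITY_CRITERIA = ["generalisability", "feasibility"]                      # scored 0-5 each

def ranking_score(raters):
    """Sum the four ranking criteria per rater (max 20), then average over raters."""
    return mean(sum(r[c] for c in RANKING_CRITERIA) for r in raters)

def applicability_score(raters):
    """Sum the two applicability criteria per rater (assumed max 10), then average over raters."""
    return mean(sum(r[c] for c in APPLICABILITY_CRITERIA) for r in raters)

# Two hypothetical raters assessing one candidate indicator
raters = [
    {"clinical": 5, "ecological": 4, "economic": 3, "scientific_validity": 4,
     "generalisability": 5, "feasibility": 4},
    {"clinical": 4, "ecological": 4, "economic": 2, "scientific_validity": 5,
     "generalisability": 4, "feasibility": 3},
]
print(ranking_score(raters))        # (16 + 15) / 2 = 15.5
print(applicability_score(raters))  # (9 + 7) / 2 = 8.0
```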

Structured questionnaire survey

To pilot the feasibility and validate the discriminatory power of the selected indicators, a structured questionnaire survey comprising hospital information [hospital affiliation, number of beds, number of intensive care unit (ICU) beds] and questions to score the indicators was developed. The survey was administered by email (April 2008) to the director of the antimicrobial stewardship programme in 11 volunteer acute care hospitals participating in the ABS project: five in Austria, two in Belgium, one in the Czech Republic, two in Germany and one in Slovenia. Respondents could return the completed questionnaire by email or post to a central data manager. For further analysis, the yes/no answers for the indicators were converted into numbers in order to calculate the total scores for each dimension of structure: one point was given for a “yes” answer and zero points for a “no” answer. This calculation was made for both the extensive list of structure indicators and the top-ten key indicators.
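As an illustration, the sketch below reproduces this simple scoring rule in Python. The indicator labels, responses and subset of key indicators shown are hypothetical placeholders rather than items from the actual questionnaire.

```python
# Minimal sketch of the survey scoring: "yes" = 1 point, "no" = 0 points, summed over
# the full indicator list and over the top-ten key indicators (hypothetical data below).

def score(answers):
    """Convert yes/no answers to points and sum them."""
    return sum(1 if a.strip().lower() == "yes" else 0 for a in answers)

# Hypothetical responses for one hospital, keyed by indicator label
responses = {
    "formal AMT mandate": "yes",
    "antibiotic formulary": "yes",
    "bedside antibiotic advice": "no",
}
TOP_TEN = {"formal AMT mandate", "antibiotic formulary"}  # placeholder subset of key indicators

total_score = score(responses.values())
key_score = score(v for k, v in responses.items() if k in TOP_TEN)
print(total_score, key_score)  # 2 2
```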

Results

Development of indicators

A list of 74 potential quality indicators was identified based on a literature review and national quality indicators implemented in the countries participating in the project. Each indicator was scored, resulting in a ranking and applicability score. The scores were used during the consensus meeting to select and clarify the final indicators.

Based on the initial list of 74 structure indicators, and after screening for redundancy, a final list of 58 indicators was selected and categorised into the following topics: antimicrobial stewardship services (n = 12), tools (n = 16), human resources and mandate (n = 6), health care personnel development (n = 4), basic diagnostic capabilities (n = 6), microbiological rapid tests (n = 2), evaluation of microbiological data on antibiotic resistance (n = 3), antibiotic consumption control (n = 5) and drug use monitoring (n = 4) (Table 2). The top-ten structure indicators with the highest scores for ranking and applicability are identified with an asterisk (*) and were considered to be key elements of an effective antibiotic stewardship programme.

Table 2 Value ranking and applicability scores for selected potential indicators. The highest scores are indicated in bold

Validation survey

Eleven hospitals, including seven university and four general hospitals, participated in the pilot study. The size of the hospitals ranged from 280 to 2,392 beds, with the number of ICU beds ranging from 9 to 132.

As shown in Table 3, the total score of individual hospitals ranged from 32 to 50 points. The maximum possible score of 58 was not reached by any hospital. When only the ten indicators of key elements of an effective antibiotic stewardship programme were listed for the hospitals, the score ranged from 5 to 10 points (maximum possible score = 10).

Table 3 Overview of scores for the 11 acute hospitals in the structural indicator survey

Discussion

An extensive list of 58 potential structure quality indicators was selected as being useful for assessing the comprehensiveness and resource intensity of antibiotic stewardship programmes. The extensive list offers hospitals a tool to characterise and evaluate the activities and resources of the local programme. Because indicators ought to be few and simple if they are to be used in practice, we identified a set of ten key indicators recommended for monitoring the effective deployment of antimicrobial stewardship programmes in acute care hospitals. The top-ten key structure indicators focus on the availability of an antibiotic formulary and guidelines, and on the provision of a formal mandate for a multi-disciplinary AMT able to deliver bedside antibiotic advice, educate prescribers and audit compliance with local clinical guidelines. To strengthen the AMT's decisions, one of its members should also sit on the drugs and therapeutics committee.

One might presume that the selected indicators are already implemented in most hospitals, but the literature shows the opposite. A survey in 32 European hospitals showed that 52 % of the hospitals had no antibiotic committee and 23 % had no antibiotic formulary [18]. A survey of infectious diseases physician members of the Infectious Diseases Society of America Emerging Infections Network (IDSA EIN) revealed that 27 % of respondents reported that their institutions did not have or were not planning an antibiotic stewardship programme. Lack of funding and lack of personnel were reported as major barriers to implementing a programme. A recent Policy Statement on Antimicrobial Stewardship by the Society for Healthcare Epidemiology of America (SHEA), the IDSA and the Pediatric Infectious Diseases Society (PIDS) outlines recommendations for the mandatory implementation of antimicrobial stewardship throughout health care, suggests process and outcome measures to monitor these interventions, and addresses deficiencies in education [19]. Another survey, in Belgium, demonstrated a well-developed structure of AMTs in hospitals and a broad range of services provided [16]. The Belgian experience showed that the mandatory implementation of antimicrobial stewardship programmes in hospitals, together with the yearly mandatory review of structure indicators, was key to the extensive implementation of such programmes across the national hospital care system. The Scottish Antimicrobial Prescribing Group (SAPG) has likewise demonstrated that regularly reviewed national prescribing indicators, acceptable to clinicians and applied through regular systematic measurement, can drive improvement in the quality of antibiotic use in key clinical areas [20].

In this article, we describe the development of structure indicators for assessing antimicrobial stewardship programmes. The project “Implementing antibiotic strategies (ABS) for appropriate use of antibiotics in hospitals in member states of the European Union—ABS International” also validated process indicators for evaluating surgical antibiotic prophylaxis (indication, drug choice, timing and duration of administration) and process indicators for antibiotic therapy: (1) management of community-acquired pneumonia (blood culture and Legionella antigen tests and drug choice for empirical treatment); (2) management of Staphylococcus aureus bacteraemia (echocardiography, IV catheter removal and duration of effective therapy); and (3) IV–PO switch for treatment with fully bio-available antibiotics [4, 21, 22].

Less focus was given to outcome indicators, which were perceived to fall outside the scope of the validation in the ABS feasibility study. Nathwani et al. noted that “measurement for improvement is not focussed on judging whether data meet a compliance threshold or target but rather is a means of determining whether the changes we make to improve are effective and to what degree” [20]. Outcome indicators are, indeed, necessary to measure this. Recently, McGowan Jr et al. stated that antimicrobial stewardship programmes are associated with desirable outcomes for clinical care and cost reduction, but that less evidence exists for a reduction in antibiotic resistance as a result of antimicrobial stewardship programmes and for their cost-effectiveness [23]. They also highlighted the methodological problems in assessing outcomes, which are barriers to developing evidence-based outcome indicators.

Since the ABS study was carried out, other studies on indicators for assessing antimicrobial stewardship programmes have been published. The SAPG has developed prescribing indicators for hospital and primary care [24]; improvement in compliance with these indicators has been demonstrated, with resultant reductions in Clostridium difficile infection rates. In 2007, the New South Wales Therapeutic Advisory Group (NSW TAG) developed a set of process indicators to measure the quality use of medicines (QUM) in Australian hospitals in collaboration with the NSW Clinical Excellence Commission (CEC) [25]. As part of the European Commission concerted action Antibiotic Resistance Prevention and Control (ARPAC) Project, data on antibiotic stewardship were collected and their relationships with antibiotic consumption in European hospitals were investigated using antibiotic stewardship indicators, with a focus on the structure, design and content of written hospital antibiotic policies and formularies [18]. Policies and practices relating to antibiotic stewardship varied considerably across European hospitals. A ten-member expert panel from Canada and the United States defined five quality metrics for antimicrobial stewardship programmes, focusing on process and outcome indicators from three domains: antimicrobial consumption, antimicrobial resistance and clinical effectiveness [26].

Participants in the pilot validation survey had developed a local antibiotic stewardship programme with dedicated resources and provided a wide range of education, evaluation and regulation tools for local prescribers. In particular, 10 of the 11 centres had local multi-disciplinary practice guidelines for antibiotic prophylaxis and therapy, and seven centres had already performed a clinical audit of these guidelines. There was significant heterogeneity among participating centres with regard to their scoring for structural components of effective antibiotic stewardship, which ranged from 32 to 50 out of a maximum score of 58. Hospitals with a lower score for the complete set of indicators also performed poorly for the top-ten key indicators. These findings confirm the results of the previously mentioned surveys in Europe and the United States revealing heterogeneity among participating hospitals in the implementation of antimicrobial stewardship programmes [18, 27].

Our study has several limitations. The selected indicators were developed by consensus of a multi-disciplinary team of professionals (infectious disease specialists, clinical microbiologists, hospital pharmacists, and quality and health care scientists) from four countries. Although this composition reflects the range of expertise considered to be optimal for an antibiotic policy group for hospital care, no attempt was made to extend its composition beyond the ABS project group to represent all stakeholders in the field, owing to the timelines of the project. Therefore, it reflects only the subjective opinion and knowledge of a self-selected group of experts. A second limitation was the methodology used for scoring the scientific validity of quality indicators, which was based on the secondary literature and the ABS quality indicator team members' personal knowledge of the primary literature. A third limitation could be the use of multi-criteria decision analysis to score and rank the quality indicators. Although this methodology was recently also used by Rello et al. for the development of a European care bundle for the prevention of ventilator-associated pneumonia, most studies developing indicators in human medicine have used a modified Delphi method [26, 28]. Nevertheless, the stages of multi-criteria decision analysis are broadly similar to those of the modified Delphi method used, for instance, by Morris et al.: each expert scored each indicator against criteria taken from the literature, the individual ranking scores were then circulated to all experts, each expert scored the indicators again and, finally, the results were discussed in the experts' consensus group.

Benchmarking by comparison between hospitals can be an important stimulus for quality improvement [18, 29]. Variations may reflect real and important differences in actual health care quality, e.g. inappropriate antibiotic use, that merit further investigation and action, but some apparent variation may also arise from misleading factors, such as a lack of adjustment for case-mix differences.

We suggest that a selection from the potential structure indicators examined in this study, with a focus on the top-ten indicators proposed by the ABS International group, could be used for the regular assessment of the extent and strength of hospital antimicrobial stewardship programmes. This can be done by administering questionnaire surveys on a national or international basis. These organisational elements should be seen as part of the hospital patient safety and quality of care system. In order to operate, they should be adequately supported, empowered and funded by health authorities and hospital management. Verification of the actual implementation of these structure indicators may be considered by national or regional health authorities responsible for hospital accreditation.

Conclusion

An international multi-disciplinary team developed 58 potential structure indicators and tested their feasibility across health care settings; from these, a minimal set of ten key structure indicators was selected that may be used to monitor antibiotic stewardship programmes and to compare the efforts of health institutions to improve antimicrobial prescribing quality. In this pilot survey in five European countries, there was significant heterogeneity among participating centres in the results for both the extensive and the key indicator sets.