+++
Outcomes Research Versus Traditional Clinical Research
++
Outcomes research “focuses on the effects of medical care on individuals and society.”4 It is this research focus, not necessarily the methodology, that distinguishes outcomes research from traditional clinical research.4,13,18 Table 87–1 describes the differences between outcomes research and traditional clinical research.4 Traditional clinical research is hypothesis-driven and evaluates efficacy, asking whether an intervention works in an idealized setting. Outcomes research is concerned with effectiveness: Does the intervention help an individual patient in a real-world setting?19 Additionally, outcomes research is patient-centered, rather than disease-centered, and focuses on “what one ultimately wants health care to achieve,”20 for example, improvements in functional status or quality of life. Furthermore, outcomes research tends to be more inclusive in what it considers an intervention. Traditional clinical research often involves the evaluation of new drugs or devices, while the interventions in outcomes research can range from a new drug to a new structure for health care delivery. To achieve its goals, outcomes research tends to utilize observational study designs and draw methods from the social sciences more frequently than traditional clinical research (see the “Methods” section).
++
+++
Commonly Used Outcome Measures
++
Mortality is perhaps the most tangible and meaningful endpoint. It is easy to define and almost uniformly recorded. It sounds relatively simple: Did the patient survive the acute illness or insult? However, using mortality as an endpoint is actually not so straightforward. One must first determine the appropriate time at which to assess mortality: ICU or hospital discharge? 30, 60, or 90 days? Years? Selecting a timeframe that is too short may provide an inaccurate assessment of the impact of an intervention if the natural history of the condition being studied is longer than the study period. A specific intervention might reduce 30-day mortality but increase 90-day mortality. Conversely, long-term outcomes may reflect the patient’s disease prognosis, age, premorbid conditions,21 and preexisting functional status22,23 more than the ICU care received. Therefore, the appropriate mortality endpoint depends on the specific research question, the study design, and the mechanisms of the disease or treatment being studied.4
++
Even when an appropriate mortality endpoint is chosen, outcomes can be artificially affected by patient choice, as well as by ICU and hospital practice patterns. For instance, a patient’s decision to transition to comfort care may hasten death; the availability of palliative care suites, to which ICU patients receiving comfort measures only can be transferred, alters ICU mortality; and the availability of long-term care facilities for transfer of ventilator-dependent patients could alter hospital mortality. Additionally, comparing mortality rates across different hospitals and ICUs is fraught with problems, largely due to difficulty in adjusting for differences in case-mix.24 Using mortality as an endpoint may not be feasible if the mortality of the condition being studied is relatively low, as the sample size required to generate adequate power to detect a clinically relevant difference will be quite large. And, of course, the use of mortality as an endpoint does not consider morbidities or the quality of life of the survivor.
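++
To make the sample-size point concrete, the sketch below applies the standard two-proportion formula to estimate how many patients per arm would be needed to detect a modest absolute mortality reduction; the 30% baseline mortality, 2% absolute reduction, significance level, and power are illustrative assumptions rather than figures from the text.
```python
# Illustrative sketch: patients per arm needed to detect a small absolute
# mortality difference with a two-sided test (all values are assumed).
from scipy.stats import norm

p_control = 0.30      # assumed baseline mortality
p_treatment = 0.28    # assumed mortality with the intervention (2% absolute reduction)
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
z_beta = norm.ppf(power)            # ~0.84 for 80% power

variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
n_per_arm = ((z_alpha + z_beta) ** 2 * variance) / (p_control - p_treatment) ** 2

print(f"Approximately {n_per_arm:.0f} patients per arm")  # on the order of 8000 per arm
```
Even this modest effect requires several thousand patients per arm, illustrating why mortality can be an impractical endpoint when the mortality of the condition studied is low.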
++
Health status encompasses several measures of patient-assessed outcomes, including functional status and quality of life. Health status is an important endpoint for critical care outcomes research, as recent data have shown that the long-term consequences of critical illness on physical functional status, cognitive function, and quality of life can be profound.25,26,27,28
++
Measurement of health status is not standardized, and relies on patient interviews and questionnaires. Physical functional status can be assessed using the 6-minute-walk test, Medical Outcomes Study Short-Form 36 (SF-36),25,26 the Karnofsky Performance Status Scale score, the Barthel Index, and the Lawton-Instrumental Activities of Daily Living score.29,30 Tools to assess mental health status, focusing on symptoms of depression, anxiety, and post-traumatic stress disorder, include the Center for Epidemiologic Studies–Depression Scale,31 the Hospital Anxiety and Depression Scale,32 and the Impact of Events Scale-Revised.33 The Informant Questionnaire on Cognitive Decline in the Elderly, the Trail Making Test, and the Repeatable Battery for the Assessment of Neuropsychological Status have been used to determine cognitive function.27,28,34 Quality of life can be determined from the results of these various tests, or can be assessed specifically via tools such as the EuroQol-5D30 or SF-36.35 Quality of life data can then be used to calculate quality-adjusted life years (QALYs), which take into account both mortality and quality of life.35
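++
As a simple illustration of how quality-of-life data translate into QALYs, the sketch below weights survival time by a health-state utility between 0 (death) and 1 (perfect health); the utilities and durations are hypothetical values chosen only to show the arithmetic.
```python
# Hypothetical sketch: QALYs as survival time weighted by health-state utility
# (0 = death, 1 = perfect health). All values are invented for illustration.
health_states = [
    (0.5, 0.4),  # 6 months of early recovery at utility 0.4
    (0.5, 0.7),  # 6 months of ongoing recovery at utility 0.7
    (2.0, 0.9),  # 2 further years near baseline at utility 0.9
]

qalys = sum(years * utility for years, utility in health_states)
life_years = sum(years for years, _ in health_states)
print(f"{qalys:.2f} QALYs over {life_years:.1f} life-years")
# 0.5*0.4 + 0.5*0.7 + 2.0*0.9 = 2.35 QALYs over 3.0 life-years
```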
++
Of note, completion of these instruments by ICU survivors themselves is not always possible, and completion by surrogates may not yield equally useful information. Additionally, interpretation of these tests requires knowledge of the patient’s baseline prior to ICU admission, which may also require a surrogate for determination, and is subject to recall bias. Nonetheless, data are mounting that ICU patients continue to suffer long after their discharge from the hospital: Depression and anxiety may be present in nearly half of ICU survivors,36 and post-traumatic stress disorder in one-third.32,33 Decline in cognitive function is common and can persist for up to eight years.27 Physical disability is nearly ubiquitous; in one study, 100% of patients reported subjective weakness and decreased exercise capacity, and almost a quarter were unable to return to work 5 years after ICU discharge.25 Thus, continued work on targeting improvement in health status for ICU survivors is paramount.
++
Economic evaluation is “the comparative analysis of alternative health care interventions in their relative costs (resource use) and effectiveness (health effects).”37 With the aging of the US population and the availability of new, more expensive treatments, the cost of critical care is increasing.38,39 As mentioned earlier, ICU care is disproportionately expensive,6,7 costing over $80 billion per year in the United States alone.8 Therefore, identifying cost-effective interventions in the ICU is critically important. Cost-effectiveness analyses (CEAs) produce a ratio in which the numerator is the cost of the intervention, and the denominator is the benefit in terms of the clinical outcome.40 The clinical outcome is often reported in terms of QALYs, such that the ratio produced is cost per QALY.
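++
A minimal sketch of this ratio is shown below, comparing a hypothetical new intervention with usual care; all costs and QALY estimates are invented for illustration.
```python
# Hypothetical sketch of an incremental cost-effectiveness ratio (ICER):
# incremental cost divided by incremental QALYs gained. All numbers are invented.
cost_new, qalys_new = 48_000.0, 2.1        # assumed mean cost and QALYs, new intervention
cost_usual, qalys_usual = 40_000.0, 1.8    # assumed mean cost and QALYs, usual care

icer = (cost_new - cost_usual) / (qalys_new - qalys_usual)
print(f"ICER: ${icer:,.0f} per QALY gained")  # $8,000 / 0.3 QALYs ~= $26,667 per QALY
```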
++
Given the increasing importance of CEAs and the increasing frequency with which they are performed, both the US Public Health Service and the American Thoracic Society convened panels to address methodological issues and provide recommendations for the reporting of CEAs.40,41 Their recommendations include describing the model used; identifying model assumptions; describing how estimates of effectiveness, costs, and health states were obtained; and defining the type of costs, year of costs, inflation adjustment methods, and discount rates used. Even when following these recommendations, critical care outcomes researchers face additional obstacles that make CEAs particularly difficult in the ICU setting. These include the complexity of ICU patients, the lack of data on effectiveness of interventions in the ICU, the unavailability of cost data, and the infrequent collection of ideal outcomes measures for CEAs (eg, long-term quality-adjusted survival rates).40 The challenge of obtaining accurate cost data can make CEAs difficult to interpret and compare, and limits the generalizability of the results. Charge data (ie, what the patient is billed) are often substituted for cost; however, even when adjusted with cost-to-charge ratios, these data are department and institution specific, and may not reflect actual costs.37,42 Therefore, interpretation of CEAs must be undertaken with great caution.
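++
Because the recommendations above call for stating the discount rate used, the sketch below shows how future costs (the same approach applies to future QALYs) are discounted to present value; the 3% annual rate and yearly costs are assumptions for illustration.
```python
# Illustrative discounting of future costs to present value (assumed 3% annual rate).
discount_rate = 0.03
yearly_costs = [20_000.0, 5_000.0, 5_000.0]  # hypothetical costs incurred in years 0, 1, 2

present_value = sum(
    cost / (1 + discount_rate) ** year for year, cost in enumerate(yearly_costs)
)
print(f"Present value of costs: ${present_value:,.2f}")
# 20,000 + 5,000/1.03 + 5,000/1.03**2 ~= $29,567
```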
++
A common focus of outcomes research is quality improvement. Outcome measures for quality improvement research must be “granular enough to be meaningful to clinicians [and to] adequately drive quality improvement interventions.”3 Since preventability is one of the central tenets of quality improvement, the outcomes measured must also be viewed as preventable.
++
A recent study by Martinez et al. utilized a consensus process to identify meaningful outcomes measures for quality improvement in the ICU.3 In the study, 164 ICU providers identified five preventable outcomes: pressure ulcers, central line-associated bloodstream infections, pulmonary embolism, methicillin-resistant Staphylococcus aureus infection, and gastrointestinal bleeding. Indeed, data support the preventability of these outcomes. For instance, in a landmark study by Pronovost et al., central line-associated bloodstream infections were dramatically reduced across the state of Michigan.43 However, prior to this study, central line-associated bloodstream infections were not uniformly considered preventable. Therefore, although the development and use of outcomes measures for quality improvement must rely on existing data suggesting preventability, clinicians and researchers must also be creative and innovative, continuing to question our current knowledge base of what is preventable.
++
Although the focus of outcomes research differs from that of traditional clinical research, the methods used need not differ. Outcomes research can take the form of case-control studies, cohort studies, and even randomized controlled trials (RCTs). However, outcomes researchers tend to utilize observational study designs and draw methods from the social sciences more frequently than traditional clinical researchers. The use of large administrative datasets is a growing trend in outcomes research. Additionally, outcomes researchers use qualitative methods to generate hypotheses and describe complex phenomena that do not lend themselves to quantitative methods or traditional hypothesis testing.4
+++
Observational Studies
++
Observational studies can be prospective or retrospective, and include cohort studies, case-control studies, and cross-sectional studies.44 Such studies seek to identify associations between an exposure (eg, a medication, intervention, or organization of health care delivery) and outcomes, and can utilize primary or secondary data. Primary data are collected to answer a specific research question, while secondary data are data that already exist but are reemployed to answer a novel research question.45
++
Administrative data are a type of secondary data that were originally collected for reasons other than research. Administrative data include health care encounter data, enrollment data, clinical data, data registries, performance data, survey data, and national data.4 Examples of data sources utilized in critical care outcomes research include Medicare, the University HealthSystem Consortium, the National Inpatient Sample, and the National Hospital Discharge Database.46 There are several benefits to using administrative data for outcomes research. First, administrative data may provide answers to research questions that ethically, legally, or practically cannot be answered by RCTs. Second, large registries and administrative data may be broader in scope and thus more generalizable than primary data. Furthermore, large datasets can more efficiently answer questions associated with rare diseases or outcomes. Finally, policymakers may be more interested in outcomes assessed via administrative data. In fact, policy concerns regarding Medicare spending, racial disparities, and unexplained geographic variation in health care are fueled by analyses of administrative data.46
++
However, the use of administrative data is not without its drawbacks. As with all observational studies, studies using administrative data are subject to bias and confounding. Confounding is of particular concern since patients are not randomly assigned to the exposure of interest; thus, any association between the exposure and outcome could be due to a third, unmeasured variable (see the “Limitations” section). However, the primary concern specific to administrative data is data quality. Unlike traditional clinical research, in which the study design is completed before data collection begins, with administrative data, the quality of the data must be assessed before designing the study (but, of course, after the research question is defined).45 The Directory of Clinical Databases in the United Kingdom recently developed a framework for assessing the quality of administrative data.47 The framework focuses on data coverage and data accuracy (Table 87–2). Coverage is determined by the representativeness of the data, the completeness of recruitment, the variables included, and the extent of missing variables, while accuracy is determined by the collection of raw data, the definitions and rules utilized, the reliability of coding, the independence of observations, and the method of data validation.46
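++
As a practical illustration of the “coverage” checks described above, a data-quality assessment often begins by profiling the completeness and representativeness of the dataset; the sketch below does so with pandas, and the file name and column names are hypothetical.
```python
# Hypothetical sketch: profiling an administrative dataset before designing a study.
# The file and column names are assumptions for illustration only.
import pandas as pd

df = pd.read_csv("icu_discharges.csv")  # hypothetical administrative extract

# Extent of missing data for each variable (one facet of "coverage")
missing_pct = (df.isna().mean() * 100).sort_values(ascending=False)
print(missing_pct.round(1))

# Crude representativeness checks: years and hospitals captured (assumed columns)
print(df["admission_year"].value_counts().sort_index())
print(df["hospital_id"].nunique(), "hospitals represented")
```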
++
Once the quality of the data has been assessed, the process of research can continue (Figure 87–2). Several aspects of this process are unique to secondary data analyses and deserve mention. First, the analysis plan must be developed a priori in order to maintain the validity of the study. Since secondary data, by definition, already exist, it can be tempting to perform preliminary analyses before finalizing an analysis plan; this temptation should be avoided as it can bias the results. Similarly, “data-dredging” via post-hoc analyses may result in the identification of erroneous, or at least meaningless, associations, since exploring any 20 associations will, on average, produce one result that is statistically significant at P < 0.05.45 Thus, the number of statistical tests performed should be minimized. In addition, adjustment for bias and confounding must be performed. However, it is important to remember that even the most sophisticated statistics cannot compensate for poor data quality.
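++
The hazard of data-dredging can be made concrete with a small simulation: when many associations are tested in data containing no true effects, roughly 5% of the tests will still reach P < 0.05 by chance alone. The simulation below is purely illustrative.
```python
# Illustrative simulation: with no true effects, about 1 in 20 tests
# still reaches P < 0.05 purely by chance.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_tests, n_per_group = 1000, 100

false_positives = 0
for _ in range(n_tests):
    group_a = rng.normal(size=n_per_group)  # both groups drawn from the same distribution
    group_b = rng.normal(size=n_per_group)
    if ttest_ind(group_a, group_b).pvalue < 0.05:
        false_positives += 1

print(f"{false_positives / n_tests:.1%} of null tests were 'significant'")  # about 5%
```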
++
Qualitative methods are increasingly being used in outcomes research. Qualitative research uses methods such as interviews, focus groups, field observations, and document review (eg, diaries) in order to “understand complex social processes, organizational change, individual health behaviors, and nuanced aspects of environmental context that influence quality of care, health care delivery, and health outcomes for individuals and populations.”48 Qualitative research differs from quantitative research in that it describes the breadth and complexity of a phenomenon rather than measuring occurrences to determine frequency, incidence, prevalence, or magnitude.49 Given the complexity and nuance of critical care, qualitative research can be particularly useful in this setting.
++
Qualitative methods should be considered when: (1) the phenomena of interest are difficult to measure quantitatively, (2) a comprehensive understanding of a problem is desired, (3) the goal is to generate insight as to why an intervention has a specific impact, and (4) special populations are being studied.49 Once collected, qualitative data can be coded and analyzed. Analysis focuses on identifying taxonomies and themes that can explain and predict outcomes.50 Although qualitative methods alone can provide interesting and rich data, mixed methods—which combine quantitative and qualitative methods—are even more impactful as they benefit from the strengths of each approach.49
+++
Randomized Controlled Trials
++
RCTs are traditionally designed as efficacy studies—assessing the impact of an intervention in an ideal setting—and are thus most commonly used in conventional clinical research. RCTs can also be utilized in outcomes research if the study design focuses on effectiveness—the implications of an intervention or exposure in a “real-world” setting. Unlike observational studies, RCTs do not have the problem of confounding since, by definition, the exposure is allocated randomly. The lack of confounding makes it easier to infer causality. However, the use of RCTs in outcomes research is relatively rare for several reasons. First, RCTs are expensive and time-consuming. Second, it may not be legal or ethical to randomize patients to certain exposures or interventions. Finally, some exposures (eg, socioeconomic status or insurance status) may be impossible to randomize.4 Nonetheless, RCTs remain the “gold standard” and should be considered by outcomes researchers when feasible. Recently, two RCTs of early mobilization in the ICU, designed as effectiveness studies, showed improved functional status for patients who received physical therapy during their critical illness.51,52
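++
To illustrate why randomization removes confounding, the simulation below randomly allocates patients with a prognostic characteristic (severity of illness) to two arms and shows that the characteristic, whether measured or not, is balanced on average; all values are invented.
```python
# Illustrative simulation: random allocation balances measured and unmeasured
# patient characteristics across study arms (all values are invented).
import numpy as np

rng = np.random.default_rng(42)
n_patients = 10_000

severity = rng.normal(loc=50, scale=10, size=n_patients)  # a prognostic covariate
arm = rng.integers(0, 2, size=n_patients)                  # random 1:1 allocation

print(f"Mean severity, control:      {severity[arm == 0].mean():.2f}")
print(f"Mean severity, intervention: {severity[arm == 1].mean():.2f}")
# The two means differ only by chance, so differences in outcomes between arms
# are not confounded by severity of illness.
```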