Background: Policymakers in many countries face the question of whether, and how much, to invest in improving hospital quality. Despite an extensive literature, there is little consensus about the relationship between hospital quality and cost, which makes it difficult for policymakers to commit resources to improvement. The diversity of methods used also makes cross-study comparison difficult. To address this, two methodological issues that commonly arise are investigated: the choice of metric used to assess hospital quality, and the way the distribution of that measure is specified.
Methods: An empirical example is set up that resembles a typical study on this research topic. The purpose is to demonstrate that a change in the metric used to measure hospital quality, or in its distributional assumptions, leads to a different result even when the same data are used. For simplicity, the two most common metrics, patient mortality and unplanned readmissions, are used. The measurement statistic is the odds ratio, which allows for patient risk adjustment. A bootstrap-adjusted regression, modified from Lindley and Smith (1972), is used to account for the distribution of the quality statistic. Hospital cost data are derived from financial statements of a subset of Victorian public hospitals from 2002/03 to 2004/05. Patient data are sourced from encoded records in the Victorian Admitted Episodes Database (VAED) for the six years from 1999/00 to 2004/05 and are used for patient risk adjustment. Hospitals and patients are anonymously linked.
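The general mechanics of the approach can be illustrated with a minimal sketch. The code below is not the study's actual model: it uses synthetic data for hypothetical hospitals, a simplified odds ratio against a fixed baseline mortality rate rather than full patient-level risk adjustment, and a simple percentile bootstrap that resamples each hospital's death count to propagate the sampling noise of the quality measure into the cost-quality regression slope. All names, parameter values, and the data-generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def odds_ratio(deaths, n, expected_rate):
    # Simplified quality measure: observed mortality odds vs a fixed
    # expected odds (the real study risk-adjusts at the patient level).
    p_obs = deaths / n
    return (p_obs / (1 - p_obs)) / (expected_rate / (1 - expected_rate))

def slope(x, y):
    # OLS slope of y on a single centred regressor x.
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

# Synthetic data for 30 hypothetical hospitals (all values illustrative).
n_hosp = 30
cases = rng.integers(500, 5000, n_hosp)
expected = 0.05                              # assumed baseline mortality rate
true_quality = rng.normal(0.0, 0.3, n_hosp)  # latent log-odds deviation
deaths = rng.binomial(cases, expected * np.exp(true_quality))
cost = 1000 - 50 * true_quality + rng.normal(0, 20, n_hosp)  # cost per case

# Point estimate: regress cost on the log odds-ratio quality measure.
or_hat = odds_ratio(deaths, cases, expected)
beta_point = slope(np.log(or_hat), cost)

# Bootstrap adjustment: resample each hospital's deaths to reflect the
# sampling distribution of the odds ratio, then re-estimate the slope.
B = 2000
betas = np.empty(B)
for b in range(B):
    d_b = rng.binomial(cases, deaths / cases)
    betas[b] = slope(np.log(odds_ratio(d_b, cases, expected)), cost)

ci = np.percentile(betas, [2.5, 97.5])
print(f"slope: {beta_point:.2f}, 95% bootstrap CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The point of the sketch is the last step: ignoring the dispersion of the quality statistic amounts to treating `or_hat` as fixed, whereas the bootstrap interval reflects how much of the estimated cost-quality slope is driven by sampling noise in the measure itself.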
Results: The relationship between quality and cost is negative and significant at the 95% level when the bootstrap adjustment is applied, for both adjusted mortality and readmissions. When the adjustment is removed, the relationship becomes positive for mortality and close to zero for readmissions, both with much wider confidence intervals. A hypothetical static comparison exercise yields a difference in estimated costs of several million AUD per year in the operating costs of public hospitals in the state of Victoria.
Conclusion: When determining the effectiveness of hospital policies, more than one measure of quality should be used as a robustness check, and the dispersion of quality measures must be explicitly accounted for. MeSH codes: N03.219.262 (Hospital Economics); N05.300.375.500 (Costs, Hospital); N02.278.421.510 (Hospitals, Public); N04.452.871.715.800 (Risk Adjustment); E02.760.400.620 (Patient Readmission); E05.318.308.985.550.400 (Hospital Mortality); E05.318.740.750 (Regression Analysis).
Keywords: Hospital economics, risk adjustment, costs, hospital, regression analysis, hospitals, public, hospital mortality