Research Article: Featured Articles

A Method to Report Utilization for Quality Initiatives in Medical Facilities

M. A. Krousel-Wood, Richard N. Re, Ahmed Abdoh, Natalie Gomez, Richard B. Chambers, David Bradford and Andrew Kleit
Ochsner Journal October 2001, 3 (4) 200-206;
M. A. Krousel-Wood, MD, MSPH: Director, Clinical/Outcomes Research Department, Research Division, Alton Ochsner Medical Foundation, New Orleans, LA
Richard N. Re, MD: Director, Research Division, Alton Ochsner Medical Foundation, New Orleans, LA
Ahmed Abdoh, PhD: Clinical epidemiologist and biostatistician, University of Manitoba, Canada
Natalie Gomez, RN: Nurse researcher, Research Division, Alton Ochsner Medical Foundation, New Orleans, LA
Richard B. Chambers, MSPH: Biostatistician, Research Division, Alton Ochsner Medical Foundation, New Orleans, LA
David Bradford, PhD: Associate Professor, Economics, Center for Health Care Research & Department of Health Administration & Policy, Medical University of South Carolina, Charleston, SC
Andrew Kleit, PhD: Associate Professor, Economics, Pennsylvania State University, University Park, PA

Abstract

Objective: We undertook this project to outline a methodology for quantifying aggregate health care utilization of medical “technologies” that could be rank ordered by volume. The identification of specific high-volume technologies could guide future efforts for quality initiatives such as program planning, preventive services implementation, quality improvement activities, and innovative and cost-effective technology development. Design: This study utilized a retrospective cross-sectional study design. Methods: We generated combined ranks for the top 200 high-volume procedures from three data sources that incorporated in- and outpatient procedures. Data were collected using primarily ICD-9 and CPT-4 codes; all codes were translated into CPT-4 codes and collapsed into categories using truncated three-digit CPT-4 codes. Frequencies for each collapsed code were determined within each dataset; procedures were reranked based on the mean rank of the three sources. Main Outcome Measures: We itemized the individual procedure codes making up each of the top 20 categories and reported the unique codes making up at least 80% of the procedure code category. Results: The top five procedure categories identified in this study were patient visits (inpatient and outpatient), chest x-rays, mammograms, ophthalmological services, and electrocardiograms. Conclusion: The methodology described provides a new way to combine and concisely report on utilization of procedures that is relevant to data obtained from different sources. This methodology may be of potential benefit to health care administrators, technology developers, and other planners as they contemplate ways to identify quality and technology development initiatives that can have a broad impact on populations served by health care organizations.

In the current era of health care accountability, there is a need for a quantitative, population-based approach to assessing medical care utilization. In the United States, the ability to meet this demand is complicated in part by the lack of a central data repository, the lack of a standard information infrastructure, and variation in medical practices (1). Nevertheless, the availability of, and access to, this quantified information has important implications for quality initiatives such as planning programs, targeting prevention activities, identifying quality improvement areas, and identifying opportunities for the introduction of cost-effective technologies. Health care organizations and, in particular, managed care organizations with expanding networks, are increasingly called upon to aggregate data from multiple sources in order to meet reporting and quality performance requirements from internal and external groups. Selection of topics or conditions for study that are relevant to each of the different populations served by the organization or network is often necessary; this facilitates allocation of appropriate resources and increases the probability of a successful quality or technology development initiative. We undertook this project to outline a methodology for quantifying what we do in medicine that is relevant to different US health care populations (e.g. Medicare, managed care). This study focused on utilization of health care “technologies” that could be rank ordered by volume across three different large datasets. The goal was the identification of high-volume technologies in the top ranked categories that were relevant to each of the unique datasets analyzed.

Technology Defined

Health care technology has been defined by the Office of Technology Assessment as “the drugs, devices, and medical and surgical procedures used in medical care and the organizational and support systems within which such care is provided.” The National Health Service (NHS) study group on health technology defined technology as “all methods used by health professionals to promote health, to prevent and treat disease, and to improve rehabilitation and long-term care.” Both definitions cast a wide net, capturing procedures like the standard office visit which might not be traditionally thought of as a technology. In an effort to focus our task of rank ordering health care technologies used by various US medical components, we narrowed the definition of health care technology to those captured by billing codes in administrative databases of payer and health care delivery systems. This approach provides a quantitative estimate of technology utilization for rank ordering by volume and allows us to describe a methodology for quantifying and reporting on technology utilization in US medical facilities with datasets from different sources. This methodology is a potentially useful tool in identifying areas for quality initiatives.

Methods

The reliance on administrative databases to produce output for reporting mandates has escalated. Although administrative claims databases can be problematic, the payment system for procedures is based on submission of billing codes, which renders these databases potentially useful for assessing utilization of procedures. Thus, the inclusion criteria for this project required that the claims data be reflective of different payer groups (e.g. Medicare, managed care), be from different data sources, be procedure code-oriented, and reflect utilization of services. The most recent available data at the onset of this project were from calendar year 1994. A retrospective cross-sectional study design was employed using data obtained from three sources: Medicare, a Western managed care organization (MCO), and a Southern MCO.

The Medicare population consists of potentially frequent users of health care services. The Medicare population in 1994 consisted of approximately 37 million beneficiaries (over 13% of the US population): 57% women, 89% aged (vs. 11% disabled), and 84% white. To review health care utilization in inpatient and outpatient settings, the project team obtained from the US Health Care Financing Administration (HCFA) Part B Medicare files (2) the top 200 procedures (predominantly Current Procedural Terminology 4th revision [CPT-4] codes) ranked by frequency of allowed services. The Part B files include noninstitutional services and reflect procedures occurring in both inpatient and outpatient settings. These data are based on the HCFA's Common Procedure Coding System (HCPCS) codes, which are required when providers report services and procedures provided to Medicare beneficiaries. HCPCS is a three-level coding system: CPT-4 codes (3), HCPCS-national codes (4), and HCPCS-local codes.

The Western MCO provided information regarding its 1994 enrollees, which included over 2.4 million patients: 51% women, 62% between the ages of 21 and 65 years (11% older than 65 years), and 70% white (data on race available for enrollees 20 years of age and older). For hospitalizations, data were provided on the top 200 principal procedures (International Classification of Diseases 9th revision [ICD-9] procedure codes) by volume (descending order) for all inpatient and outpatient (i.e. same day surgery) procedures. The top 200 office visit procedures (CPT-4 codes) ranked by volume (descending order) for one representative month in 1994 were provided; these codes were translations of an appointment reason code. The top 200 radiology procedures (predominantly CPT-4 coding) were provided rank ordered by volume for one representative month in 1994. Laboratory data were not available. Prior to analysis, the ICD-9 codes were translated into CPT-4 coding; the single month data provided were annualized; and the individual data files from the MCO were aggregated.
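The preprocessing of the Western MCO files described above can be sketched in a few lines. This is an illustrative reconstruction only: the code pairings in the translation table and the counts are hypothetical, not taken from the study's data.

```python
# Sketch of the preprocessing applied to the Western MCO files (hypothetical
# codes and counts): ICD-9 procedure codes are mapped to CPT-4, one-month
# counts are annualized, and the separate files are aggregated.

from collections import Counter

# Hypothetical ICD-9 -> CPT-4 translation table. In the study, a coding
# specialist performed this mapping; these pairings are illustrative only.
ICD9_TO_CPT4 = {"87.44": "71020", "89.52": "93000"}

def annualize(monthly_counts):
    """Scale counts from one representative month to a yearly estimate."""
    return {code: n * 12 for code, n in monthly_counts.items()}

def aggregate(*files):
    """Sum per-code volumes across the MCO's separate data files."""
    total = Counter()
    for f in files:
        total.update(f)
    return dict(total)

inpatient = {ICD9_TO_CPT4["87.44"]: 1200}   # translated ICD-9 volumes (full year)
office    = annualize({"99213": 5000})      # one-month CPT-4 counts, annualized
radiology = annualize({"71020": 300})

print(aggregate(inpatient, office, radiology))
# {'71020': 4800, '99213': 60000}
```

The aggregated per-code volumes are then ready to be ranked alongside the other two data sources.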

The Southern MCO provided data on over 63,000 enrollees for 1994; Medicare patients were not included. The population was 48% women, 66% persons aged 20–64 years (17% were older than 65 years). No data were available regarding race/ethnic origin. A composite list of the top 200 procedures (CPT-4 and HCPCS codes) for 1994, rank-ordered by volume (descending order) was obtained. The report included inpatient and outpatient hospitalizations, office visits, and radiology and laboratory utilization.

These datasets were used to generate rank-ordered lists by volume. The procedures identified were captured primarily by ICD-9 or CPT-4 procedure codes, which were developed independently. For this study, the ICD-9 procedure codes were translated by a Medical Records Department coding specialist to CPT-4 codes. The two primary issues identified were: (a) for some ICD-9 codes, there were multiple related CPT-4 codes; (b) there was overlap in the code descriptions. To minimize the potential of these issues to cause an overestimation of the rank-ordered status of some procedures, we collapsed the CPT-4 or HCPCS codes to the first three digits. For each data source used, a rank-ordered list (descending order) by collapsed code was generated. For those categories with a missing rank (i.e., the procedure did not fall into the top 200 ranked procedures in one or two sources), the highest rank for the dataset (i.e., 200) was assigned. The procedures were reranked (ascending order) based on the mean rank generated from each data source for each procedure. The resulting ranks are independent of the actual volume of each data source for a given procedure and depend only on the rank relationship between the volumes in the datasets. This allows both large and small datasets to be combined for the final rank. Using the top two ranked categories in the Table as an example, we can calculate the mean rank for each. For patient visit, the mean of ranks 3, 1, and 1 equals 1.7. For chest x-ray, the mean of ranks 9, 5, and 8 equals 7.3. The mean for patient visit (1.7) is lower than that for chest x-ray (7.3); therefore, patient visit and chest x-ray have final rankings of 1 and 2, respectively.
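The rank-combination step above can be sketched directly. The sketch below uses the ranks for the top two categories from the Table (patient visit and chest x-ray across the Medicare, Western MCO, and Southern MCO sources); the function and variable names are ours, and a category missing from a source's top-200 list is imputed the worst rank, 200, as described.

```python
# Sketch of the mean-rank method: average each collapsed procedure category's
# rank across the sources (imputing 200 when absent), then rerank ascending.

MISSING_RANK = 200  # assigned when a category is absent from a source's top 200

def mean_rank(category, sources):
    """Average a category's rank across all sources, imputing MISSING_RANK."""
    return sum(src.get(category, MISSING_RANK) for src in sources) / len(sources)

def combined_ranking(sources):
    """Final ranking: sort all categories (ascending) by mean rank."""
    categories = set().union(*sources)
    ordered = sorted(categories, key=lambda c: mean_rank(c, sources))
    return {c: i + 1 for i, c in enumerate(ordered)}

# Ranks from the Table: Medicare, Western MCO, Southern MCO
medicare = {"patient visit": 3, "chest x-ray": 9}
western  = {"patient visit": 1, "chest x-ray": 5}
southern = {"patient visit": 1, "chest x-ray": 8}
sources = [medicare, western, southern]

print(round(mean_rank("patient visit", sources), 1))  # 1.7
print(round(mean_rank("chest x-ray", sources), 1))    # 7.3
print(combined_ranking(sources))  # patient visit -> 1, chest x-ray -> 2
```

Because only ranks enter the mean, a source's absolute volume never dominates; this is what lets a 63,000-enrollee dataset be combined with a 37-million-beneficiary one.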

Table. Rank order volume (ROV): High volume procedures comprising the top 20 ranked procedure categories for the combined inpatient and outpatient report


Main Outcome Measures

Once the highest ranked procedure categories were identified, we itemized the individual procedure codes making up each of the top 20 categories and reported the unique codes making up at least 80% of the procedure code category (Table).

Results

Using the data sources described in the Methods section, we generated the rank-ordered list. The procedures comprising the top 20 categories are shown in the Table. The top five procedure categories utilized in these US medical facilities were patient visits (inpatient and outpatient), chest x-rays, mammograms, ophthalmological services, and electrocardiograms (5).

Discussion

The goal of this project was to identify a methodology for quantifying and reporting on technology utilization for different patient populations in US medical facilities. The identification of high-volume technologies that are relevant to each contributing health care entity could guide future efforts for quality initiatives such as program planning, preventive services implementation, quality improvement activities, and innovation in cost-effective technology development.

Institutions or organizations involved in health care delivery or medical technology development are often interested in technologies or procedures that are highly utilized by multiple populations. For instance, an MCO may have two product lines: Medicare risk and commercial population. One population (e.g. commercial) may be significantly larger than the other (e.g. Medicare risk), and simple frequency calculations may result in overestimation of the relative importance of the procedure in the smaller population. The mean ranking allows identification of high-volume procedures that are relevant to both populations. Similarly, a technology developer may be looking to develop a tool that will be useful in multiple settings (e.g. inpatient and outpatient, managed care and Medicare). This methodology provides a means of assessing top-ranked categories derived from multiple settings or data sources. From this, one might be able to estimate the potential impact of an innovative technology. For example, patient visit (inpatient and outpatient) was the leading procedure category in this study. Although this was not unexpected, it provides quantitative information for services and program planning and supports the exploration of innovative technology (e.g. telemedicine) in the evaluation and management of patients in Medicare and selected managed care settings.

Another use of this methodology and its results is in the area of quality or performance improvement (QI or PI). PI activities are undertaken by many health care organizations (including MCOs) to improve the quality of patient care and to meet regulatory requirements (Joint Commission on Accreditation of Healthcare Organizations [JCAHO] and the National Committee for Quality Assurance [NCQA]). Conditions or procedures that are high volume meet one criterion used to select topics for QI/PI activities (6). In addition, conditions or procedures that impact all populations served by the health care organization are often considered high priority. Relevant QI activities can be identified using this ranking methodology: institution-specific databases can be accessed and aggregated, codes collapsed, and procedures (or diagnoses) ranked by volume, followed by calculation of the mean rank. Subsequently, a quantitative listing can be generated of high-volume procedures/conditions from the top-ranked categories derived from multiple data sources. From this listing, several areas that are relevant to each health care component can be selected for potential QI efforts.

Limitations

A potential limitation of this study's conclusions is that administrative data were used as the primary source of information. The results of any study using administrative data are dependent on accurate, reliable, and complete coding of the procedures and conditions under study (7–9). Procedures or conditions can be miscoded or left uncoded (missing data). In our experience, coding for procedures was reliable (10) and accurate (when compared with some diagnosis codes). Furthermore, administrative databases were designed predominantly to track utilization of services for billing purposes; therefore, their use to address the goal of rank ordering technologies utilized in US medical facilities was appropriate.

Some coding nuances may underestimate the impact of some procedures when evaluated by this method. For example, in the Table, thyroid tests are captured by two collapsed procedure categories: rank 9, collapsed code 800 “laboratory blood tests,” in which the thyroid panel is captured by CPT-4 codes 80091 and 80092; and rank 18, collapsed code 844 “thyroxine, TSH,” in which thyroxine and TSH are captured by codes 84436 and 84443, respectively. In addition, laboratory data were not available from the Western MCO. The methodology used in this study therefore underestimates the ranking of thyroid tests. Nevertheless, the information that thyroid tests are highly utilized (in the top 20 high volume procedures) is evident. Another example of the impact of coding nuances is the category 994, “evaluation and management.” The ranking of this category was driven primarily by the managed care organizations (it did not rank in the top 200 procedures for the Medicare population). The codes beginning with the three digits “994” are often used when an appropriate code beginning with the three digits “992” (patient visit) cannot be identified. Therefore, when reviewing these data, one might envision an even greater opportunity for innovative technologies that would support the interaction between patients and providers. In any event, this study identified the patient visit as the leading high volume procedure; therefore, although there may be need for some caution with the use of this methodology, these examples suggest that the impact of coding nuances on the identification of high-volume procedures is minimal.

In using the results reported in this paper to compare with subsequent data that become available from similar sources, one should be aware of the potential impact of annual coding changes and technology changes on documentation of procedure frequency (11). Misunderstanding of coding and technology/practice changes may lead to erroneous conclusions regarding technology utilization in US facilities.

Conclusion

We report a rank ordering methodology for the determination of high-volume medical procedures in the top-ranked categories of three different health care entities combined into one aggregate dataset. This methodology provides a new way to combine and concisely report on utilization of procedures, with respect to volume, that is relevant to each different source from which data are obtained. We identified leading high-volume in- and outpatient procedures performed in US facilities using the methods and assumptions described. Use of rank ordering to assess technology utilization is dependent on methods of coding, reimbursement, and changes in technology access. This methodology may be of potential benefit to health care administrators, technology developers, and other planners as they contemplate ways to identify quality initiatives that can involve and impact every population served by their health care organization(s).

Figure. Dr. Krousel-Wood is Ochsner's Director of Clinical/Outcomes Research

Acknowledgments

The authors would like to acknowledge the contributions of the following persons: William Bertrand, PhD (Tulane University); Mark Fendrick, MD (University of Michigan); Clifford Goodman, PhD (the Lewin Group); Blackford Middleton, MD, MPH (Medicalogic, Inc.); Robert Nease, PhD (Washington University School of Medicine); Haya Rubin, MD, PhD (Johns Hopkins University). The authors would also like to thank Carol Legendre and Lori Nunez for their secretarial assistance in the preparation of this article.

Ochsner Clinic and Alton Ochsner Medical Foundation

References

  1. Wennberg JE, Cooper MM, editors. The Dartmouth Atlas of Health Care. Chicago, IL: American Hospital Publishing, Inc.; 1996.
  2. Public use files catalog as of July 1, 1996. US Department of Health and Human Services, Health Care Financing Administration, Bureau of Data Management and Strategy. Bethesda, MD; July 1996.
  3. Physicians' Current Procedural Terminology: CPT '97. Chicago, IL: American Medical Association; 1996.
  4. St. Anthony's HCPCS Level II Code Book, 1995 edition. Reston, VA: St. Anthony's Publishing, Inc.; 1994.
  5. Krousel-Wood MA, Abdoh A, Re R. Technology utilization in US health facilities. International Society for Technology Assessment in Health Care Annual Meeting, Barcelona, Spain; May 1997 (abstract).
  6. McGlynn EA, Asch SM. Developing a clinical performance measure. Am J Prev Med 1998;14(3 Suppl):14–21.
  7. Edwards FH, Clark RE, Schwartz M. Practical considerations in the management of large multiinstitutional databases. Ann Thorac Surg 1994;58:1841–1844.
  8. Jollis JG, Ancukiewicz M, DeLong ER. Discordance of databases designed for claims payment versus clinical information systems: implications for outcomes research. Ann Intern Med 1993;119:844–850.
  9. Malenka DJ, McLerran D, Roos N, Fisher ES, Wennberg JE. Using administrative data to describe casemix: a comparison with the medical record. J Clin Epidemiol 1994;47:1027–1032.
  10. Krousel-Wood MA, Gomez NF, Re RN. The National Clinics Research Consortium's (NCRC) use of administrative databases for collaborative study: the case of prostatectomy. Am J Manag Care 1996;2:269–275.
  11. Krousel-Wood MA, Abdoh A, Re RN. Impact of coding changes on technology utilization in U.S. hospitals. International Society for Technology Assessment in Health Care Annual Meeting, Barcelona, Spain; May 1997 (abstract).