Review Article: Reviews and Contemporary Updates

Clinical Comparative Effectiveness Research Through the Lens of Healthcare Decisionmakers

Eboni G. Price-Haywood

Departments of Internal Medicine and Research, Ochsner Clinic Foundation, New Orleans, LA
The University of Queensland School of Medicine, Ochsner Clinical School, New Orleans, LA

Ochsner Journal June 2015, 15 (2): 154-161
Correspondence: eboni.pricehaywood@ochsner.org

Abstract

Background Healthcare expenditures in the United States exceed the healthcare expenditures of other countries, yet relatively unfavorable health outcomes persist. Despite the emergence of numerous evidence-based interventions, wide variations in clinical care have caused disparities in quality of care and cost. Comparative effectiveness and cost effectiveness research may better guide healthcare decisionmakers in determining which interventions work best, for which populations, under which conditions, and at what cost.

Methods This article reviews national health policies that promote comparative effectiveness research (CER), healthcare decisionmaker roles in CER, methodological approaches to CER, and future implications of CER.

Results This article provides a brief summary of CER health policy up to the Patient Protection and Affordable Care Act and its establishment of the Patient-Centered Outcomes Research Institute (PCORI). Through PCORI, participatory methods for engaging healthcare decisionmakers in the entire CER process have gained momentum as a strategy for improving the relevance of research and expediting the translation of research into practice. Well-designed, methodologically rigorous observational studies and randomized trials conducted in real-world settings have the potential to improve the quality, generalizability, and transferability of study findings.

Conclusion Learning health systems and practice-based research networks provide the infrastructure for advancing CER methods, generating local solutions to high-quality cost-effective care, and transitioning research into implementation and dissemination science—all of which will ultimately guide health policy on clinical care, payment for care, and population health.

Keywords
  • Comparative effectiveness research
  • health policy
  • patient outcome assessment

INTRODUCTION

Per capita healthcare spending in the United States continues to be among the highest in the world; however, this investment has not translated into better health outcomes compared to other high-income countries.1 Wide variations in treatments, outcomes, and costs clearly indicate a need for improvement in the US healthcare system. These problems are fueling demands from healthcare decisionmakers for more evidence of the comparative effectiveness and cost effectiveness of medical interventions.

Comparative effectiveness research (CER) is believed to be the mechanism that will fill current knowledge gaps in healthcare decisionmaking.2-3 The Institute of Medicine (IOM) National Priorities Committee defines CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care.”2 The purpose of CER is to “assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve healthcare at both the individual and population levels.”2 The IOM further emphasizes the need to directly compare alternative interventions, study patients in real-world clinical settings, and strive to tailor medical decisions to individual (or subgroup) values and preferences.

Given the increased prominence of CER for healthcare decisionmaking, this article provides a brief overview of the (1) historic context of national health policies supporting CER; (2) role of healthcare stakeholder engagement in CER; (3) methodological considerations in the conduct of CER; and (4) future implications for research, clinical practice, and health policy.

LITERATURE SEARCH

The author searched PubMed for methodological guidelines and publishing standards for CER using the following search strategy: (comparative effectiveness research [MeSH Terms] OR (comparative [All Fields] AND effectiveness [All Fields] AND research [All Fields]) OR comparative effectiveness research [All Fields]). The search, limited to review articles published between 2009 and 2014, yielded 941 articles. The search was further restricted to articles with the search terms in the title, yielding 20 articles. From these articles, the author selected general reviews on research methodology and standards for reporting and then searched PubMed for related articles. The author also searched the reference lists of articles identified as key references to find additional articles. The author employed the same search strategy to identify key reference articles that discussed CER within the context of the Patient Protection and Affordable Care Act (PPACA) and stakeholder engagement. Table 1 provides an overview of the key findings of this review process.
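
For readers who wish to rerun the search, the sketch below shows one way to execute the same PubMed query programmatically. It assumes the Biopython Entrez client and a placeholder email address; the query string mirrors the strategy above, with the review-article and 2009-2014 date restrictions applied as E-utilities parameters. This is an illustration only, not part of the original review methods.

```python
# Minimal sketch of the PubMed search using Biopython's Entrez client
# (assumptions: Biopython is installed; NCBI requires a contact email).
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder

query = (
    "(comparative effectiveness research[MeSH Terms] "
    "OR (comparative[All Fields] AND effectiveness[All Fields] AND research[All Fields]) "
    "OR comparative effectiveness research[All Fields]) AND review[Publication Type]"
)

# Limit to the 2009-2014 publication window used in this review.
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2009", maxdate="2014", retmax=1000)
record = Entrez.read(handle)
handle.close()

print(record["Count"])         # total matching citations
print(record["IdList"][:10])   # first 10 PMIDs for screening
```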

Table 1. Summary of Comparative Effectiveness Research

HEALTH POLICIES

After passage of the Social Security Amendments of 1965 that established Medicare and Medicaid, health services research became especially important to Congress as the members of the House of Representatives and Senate struggled to figure out how to contain rising healthcare expenditures.4 In 1999, Congress established the Agency for Healthcare Research and Quality (AHRQ) that was tasked with examining (1) outcomes, effectiveness, and cost effectiveness of medical practices and technologies; (2) utilization and access to care; (3) organization, delivery, and financing of services and their interaction with and impact on quality of care; (4) methods for measuring and strategies for improving quality of care; (5) strategies for engaging patients in their care; and (6) methods by which healthcare stakeholders learn best practices and use this information for healthcare delivery.5,6 In 2003, the Medicare Prescription Drug, Improvement, and Modernization Act created the Effective Health Care Program to expand AHRQ's responsibility to include CER.4 In 2009, the American Recovery and Reinvestment Act appropriated $1.1 billion for CER and tasked the IOM with establishing research priorities. The IOM subsequently recommended 100 CER priorities—many of which focus on the need to improve health service delivery.2

The PPACA moved the United States toward a national policy for CER to increase accountability for quality and cost of care. The PPACA established the Patient-Centered Outcomes Research Institute (PCORI) as a government-sponsored nonprofit organization to advance the quality and relevance of clinical evidence that patients, clinicians, health insurers, and policy makers can use to make informed decisions.4 PCORI's funding source is the Patient-Centered Outcomes Research Trust Fund, which receives funding from the Federal Hospital Insurance Trust Fund, the Federal Supplementary Medical Insurance Trust Fund, the Treasury general fund, and fees on health plans to support CER.7 The PPACA defines CER as "research evaluating and comparing health outcomes and clinical effectiveness, risks, and benefits of two or more medical treatments, services, and items."4,7 The PPACA defines treatments, services, and items as "healthcare interventions; protocols for treatment, care management, and delivery; procedures; medical devices; diagnostic tools; pharmaceuticals; integrative health practices; any other strategies or items being used in the treatment, management, and diagnosis of, or prevention of, illness or injury in individuals."4,7 The law further specifies that PCORI must ensure that CER accounts for differences in key subpopulations (eg, race/ethnicity, gender, age, and comorbidity) to increase the relevance of the research.

PCORI formally incorporated the concept of “patient-centeredness” into CER and characterized patient-centered outcomes research (PCOR) as (1) comparing alternative approaches to clinical management, (2) actively engaging patients and key stakeholders throughout the research process, (3) assessing outcomes that are meaningful to patients, and (4) implementing research findings in clinical settings.8 Examining the impact of interventions on patient-reported outcome measures such as symptom severity, functional status, and quality of life is an imperative component of PCOR. The best way to determine which outcomes matter most to patients and their caregivers is to engage them in the research process.

STAKEHOLDER ENGAGEMENT

Engaging stakeholders in research improves the relevance of study questions, increases transparency, enhances study implementation, and accelerates the adoption of research findings into practice and health policy.9 The degree of stakeholder participation depends on interest, expertise, negotiation, and/or project governance structure. Stakeholders are categorized into 7 groups: patients and the public, providers (individuals or organizations), purchasers (responsible for underwriting costs of care), payers (responsible for reimbursement), policy makers, product makers (drug/device manufacturers), and principal investigators (researchers or their funders).10 Two 2014 reviews of stakeholder engagement in CER and PCOR demonstrate that patients are the most frequently engaged stakeholder group, engagement most often occurs in the early stages of research (prioritization), and stakeholder roles in research are highly variable.10,11 Engagement strategies range from surveys, focus groups, and interviews to participation in study advisory boards or research teams. No clear evidence supports any particular strategy for engaging patient stakeholders as better than others.11

The PCORI Patient and Family Engagement Rubric describes stakeholder engagement in the study planning, study implementation, and dissemination of results for CER and PCOR.12 The rubric is not intended to be comprehensive or prescriptive. However, it outlines 4 PCOR engagement principles: (1) reciprocal relationships (clearly outlining the roles of all research partners, including patients); (2) colearning (a bidirectional process in which patient partners understand the research process and researchers understand the principles of patient-centeredness and engagement); (3) partnership (fair financial compensation, thoughtful consideration for the time commitment requested, and accommodation for cultural diversity); and (4) trust/transparency/honesty (inclusive decisionmaking, sharing information with all partners, commitment to open and honest communication, and communicating study findings in meaningful and usable ways).12

METHODOLOGICAL CONSIDERATIONS

Study Designs

Although CER and PCOR use participatory approaches to research, stakeholder engagement does not obviate the need for methodological rigor. Careful attention to study design is imperative. The principal methods for CER are observational studies (prospective and retrospective), randomized trials, decision analysis, and systematic reviews.13 Table 2 summarizes 3 studies that employ the 2 main forms of CER: randomized trials and observational studies. The Adherence and Intensification of Medications (AIM) study demonstrates the value of conducting randomized controlled pragmatic trials.14 The Swedish Obese Subjects (SOS) trial is one of the largest prospective observational cohort studies to date.15 The Initial Choice of Glucose-Lowering Medication for Diabetes Mellitus study used insurance claims data for a retrospective cohort study.16

Table 2. Examples of Comparative Effectiveness Research (CER) Using Pragmatic Clinical Trial and Observational Study Designs

The advantage of observational studies is that they can quickly provide low-cost, large study populations. Observational studies also include data from diverse patients obtained during routine clinical practice that strengthen the external generalizability of study findings. Nonetheless, observational studies are limited by the inherent bias and confounding of results that routinely occur in nonrandomized studies.13,17,18 To minimize the threats to the internal validity of observational studies, research guidelines recommend the following: a priori specification of research questions, targeted patient populations, comparative interventions, and postulated confounders; selection of study designs that are appropriate to the study questions; selection of the appropriate data source; and transparency in protocol development and prespecified analytic plans.18,19 A discussion of analytic methods (regression analysis, propensity scores, sensitivity analysis, instrumental variables, and structural model equations) for observational studies is beyond the scope of this article, but these methods are discussed in detail in several reviews of CER.13,17,18
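
As a concrete illustration of one analytic method mentioned above, the sketch below estimates propensity scores with logistic regression and applies inverse probability of treatment weighting. The dataframe df and its column names are hypothetical; the example shows the general technique, not an analysis from any study cited in this article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# df is a hypothetical observational cohort (pandas DataFrame): one row per
# patient with a binary intervention indicator ('treated'), a binary outcome
# ('outcome'), and confounders specified a priori.
confounders = ["age", "female", "baseline_severity", "comorbidity_count"]

# 1. Estimate the propensity score: probability of receiving the intervention
#    given the measured confounders.
ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["treated"])
df["ps"] = ps_model.predict_proba(df[confounders])[:, 1]

# 2. Inverse probability of treatment weights (IPTW) balance measured
#    confounders across groups; unmeasured confounding remains unaddressed.
df["iptw"] = np.where(df["treated"] == 1, 1.0 / df["ps"], 1.0 / (1.0 - df["ps"]))

# 3. Weighted outcome comparison between intervention groups.
adjusted = df.groupby("treated").apply(
    lambda g: np.average(g["outcome"], weights=g["iptw"])
)
print(adjusted)  # weighted (confounder-adjusted) outcome rate by group
```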

The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Good Research Practices Task Force provides detailed recommendations on how to determine when to do a prospective vs retrospective study, the advantages and disadvantages of different study designs, and analytic approaches to consider in study execution.18,19 Several designs have been developed for prospective observational studies.18 The single group, pretest/posttest design is a longitudinal study in which subjects serve as their own control, and outcomes are collected before and after an intervention. In contrast, the multiple group, pretest/posttest design collects outcomes for at least 2 comparison groups. The multiple group, cross-sectional design involves study participants with a particular condition who have already undergone one of multiple interventions. Prospective cohort studies are longitudinal studies in which outcomes are only collected after an intervention.
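
To make the multiple group, pretest/posttest design concrete, the following sketch fits a difference-in-differences regression in which the group-by-period interaction estimates the intervention effect. The dataset and column names are hypothetical, and the code is only a sketch of the design logic, not a procedure prescribed by the ISPOR task force.

```python
import statsmodels.formula.api as smf

# df is a hypothetical long-format dataset for a multiple group, pretest/posttest
# study: one row per subject per measurement period, with
#   group - 1 if the subject is in the intervention group, 0 if comparison group
#   post  - 1 for the post-intervention measurement, 0 for baseline
#   score - the outcome collected at each time point
# The group:post interaction is the difference-in-differences estimate of the
# intervention effect (change in the intervention group beyond the change seen
# in the comparison group).
model = smf.ols("score ~ group + post + group:post", data=df).fit()
print(model.params["group:post"])          # estimated intervention effect
print(model.conf_int().loc["group:post"])  # 95% confidence interval
```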

The ISPOR task force also provides guidelines for conducting retrospective observational studies on secondary data sources (eg, claims databases and electronic medical records).19 Electronic databases contain information collected for operational reasons rather than research purposes and therefore have minimal reporting bias. However, data quality (eg, missing/incomplete data), selection bias, and unmeasured confounding inherent to data collected in clinical practice are major threats to internal validity. Careful attention to the epidemiologic study design (cross-sectional, cohort, case-control, case-crossover) and statistical methods is critical to enhancing study validity. Cross-sectional studies provide a snapshot of data but limit the ability to establish the temporality of exposure to an intervention relative to the outcome of interest. When temporality of exposure is of particular interest, cohort studies are ideal. Case-control designs are historically helpful when the outcome of interest is rare. The case-crossover design, in which individuals serve as their own control, is ideal when transient exposures result in acute events or outcomes. Retrospective studies, if done well, can supplement evidence from prospective observational studies and randomized trials.
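
One common way to operationalize a retrospective cohort from claims data is a new-user design that anchors follow-up at the first dispensing of either comparator. The sketch below uses hypothetical tables (rx for dispensings, dx for outcome diagnoses) and is illustrative only; it does not reproduce the methods of any study cited here.

```python
import pandas as pd

# Hypothetical claims extracts (assumptions, not from any cited study):
#   rx - dispensing claims with columns patient_id, drug, fill_date (datetime)
#   dx - outcome diagnoses with columns patient_id, diagnosis_date (datetime)
# New-user design: the first fill of either comparator defines the index drug
# and index date, and only outcomes occurring after the index date are counted,
# preserving the temporality of exposure relative to outcome.
first_fill = (
    rx.sort_values("fill_date")
      .groupby("patient_id", as_index=False)
      .first()
)
cohort = first_fill.rename(columns={"drug": "index_drug", "fill_date": "index_date"})

# Flag patients whose outcome occurred after their index date.
events = dx.merge(cohort[["patient_id", "index_date"]], on="patient_id")
events = events[events["diagnosis_date"] > events["index_date"]]
cohort["event"] = cohort["patient_id"].isin(events["patient_id"]).astype(int)

# Crude event proportion by treatment strategy (before confounder adjustment).
print(cohort.groupby("index_drug")["event"].mean())
```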

While the more costly traditional randomized controlled trials (RCTs) have the advantage of strong internal validity, their restrictive inclusion criteria tend to result in homogeneous groups of study participants who do not reflect the patients clinicians see in real practice. For cost reasons, RCTs are often limited in size (limiting the ability to detect adverse effects) and in duration (limiting the ability to observe long-term outcomes).

RCTs require large populations to implement the key elements of CER as defined by the IOM (comparison of alternative interventions in real-world settings and tailored to the values of individuals or subpopulations) and to avoid false conclusions.13 Cluster randomized trials meet the CER criterion of direct comparison of interventions because randomization occurs at the practice level instead of the individual patient level. Implementation of a single intervention at a site resembles what happens in real practice. Pragmatic or practical trials examine interventions that are currently in use in typical practice settings and include patients with demographic profiles that are similar to patients routinely treated in real practice.3,13,20,21 The major drawback to pragmatic trials is that they usually require larger sample sizes and longer follow-up of major clinical outcomes than traditional RCTs to better reflect the natural history of disease. To minimize costs, pragmatic trials may limit outcome measurements to easily obtained data.13 Doing so, however, may result in not collecting cause-specific measures, thus limiting the ability to ascertain why an intervention is or is not effective. Adaptive trials change design in response to prespecified criteria and accumulating study data that may reveal early indications of a study's ultimate outcomes.13 The changes can occur in any aspect of the study: the number of arms, types of intervention, sample sizes, sampling strategy for subgroups of interest, or outcome measures. Adaptive changes maximize study efficiency and increase relevance.
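
One concrete reason cluster randomized and pragmatic trials need larger samples is the design effect of clustering: with an average of m patients per practice and intraclass correlation rho, the sample size required under individual randomization is inflated by roughly 1 + (m - 1) * rho. The sketch below applies this standard formula; the input numbers are illustrative assumptions, not values from the trials discussed in this article.

```python
# Design effect for a cluster randomized pragmatic trial: randomizing practices
# rather than patients inflates the required sample size by roughly
# 1 + (m - 1) * rho, where m is the average cluster size and rho is the
# intraclass correlation coefficient (ICC).
def inflated_sample_size(n_individual: int, cluster_size: int, icc: float) -> int:
    design_effect = 1 + (cluster_size - 1) * icc
    return int(round(n_individual * design_effect))

# Illustrative assumptions (not figures from the AIM trial or any cited study):
# 800 patients needed under individual randomization, 50 patients per practice,
# ICC of 0.05.
print(inflated_sample_size(800, 50, 0.05))  # -> 2760
```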

Regardless of which study design is employed (observational or randomized trial), transparency and adherence to methodological standards will enhance generalizability and transferability of study results across populations, settings, and systems of care. A number of research guidelines are available to help investigators appropriately plan and execute methodologically rigorous CER studies.13,17-19,22-28

Integrating Research in Clinical Practice

To expand CER, researchers must have conducive practice environments in which to conduct studies. Learning health systems and practice-based research networks may be uniquely positioned to meet the infrastructure needs of CER because they promote research prioritization, evidence generation, and translation of evidence into practice.29-31

In learning health systems, research and clinical practice are tightly integrated; thus research priorities are aligned with key issues clinicians face in everyday practice, and research on those issues informs best practice.21 Key attributes of learning health systems include proactively identifying problems to guide research priorities, testing pilot interventions to identify strategies for successfully implementing interventions in diverse settings, evaluating interventions with predefined impact measures, adjusting interventions to the contextual environment, and disseminating findings internally and externally.32 Learning health systems such as hospital-based CER centers may be the ideal model for improving evidence-based practice and cost containment.33 Hospital-based CER can harness local data on utilization, outcomes, and cost of care from electronic medical records and other data warehouses to identify gaps in service or practice. To close the gap, clinical decision support and quality improvement initiatives can be integrated into the health system while using administrative and clinical data to monitor performance.

Practice-based research networks (PBRNs) are organized networks of ambulatory practices involved in primary care research. According to the AHRQ, "PBRNs draw on the experience and insight of practicing clinicians to identify and frame research questions whose answers can improve the practice of primary care. By linking these questions with rigorous research methods, the PBRN can produce research findings that are immediately relevant to the clinician and, in theory, more easily assimilated into everyday practice."34 As of July 2012, the AHRQ reported that of the 136 PBRNs in the United States, 15% were national and 28% were regional network collaborations.35 Greater external generalizability of study results from these large networks is the obvious advantage. PBRNs are especially attractive for recruiting priority patient populations that are often underrepresented in clinical research (eg, patients with multiple chronic conditions) and for collecting comprehensive data on patient-reported outcomes, practice settings, and contextual factors that affect healthcare decisionmaking.31 Research in PBRNs is usually aligned with quality improvement activities and increasingly uses participatory methods not only to make the research relevant but also to improve the translation of research into practice.36 PBRNs are rapidly evolving into learning collaboratives, given the nature of the research and various engagement strategies.

FUTURE IMPLICATIONS

Improving the quality and relevance of evidence on the clinical effectiveness of healthcare services holds promise for shaping policies related to clinical care, payment for services, and population health. Increased emphasis on implementation science to accelerate the use of evidence-based interventions in clinical practice is needed.37 Participatory research methods are likely to produce evidence-based interventions that are more relevant and actionable to healthcare decisionmakers compared to nonparticipatory research. Healthcare organizations can advance CER by investing in (1) personnel dedicated to data collection, monitoring, and interpretation; (2) data infrastructure and analytics; (3) evidence-based quality improvement initiatives; and (4) adaptive implementation and dissemination strategies.38 A systems approach to implementing evidence-based interventions in research is necessary to account for the dynamic, complex environments in which patient care occurs.37

CONCLUSION

Broadening approaches to research whereby investigators use study designs that best match research questions rather than limiting discovery research to randomized trials is critical to advancing CER. Prospective, practice-based studies will help clarify the resources needed to implement evidence-based interventions and identify which interventions work best for specific populations and settings. Given the connection between local healthcare delivery and national healthcare expenditures, any local knowledge gained from CER in learning health systems and PBRNs automatically has implications for national health policy.

This article meets the Accreditation Council for Graduate Medical Education and the American Board of Medical Specialties Maintenance of Certification competencies for Medical Knowledge and Systems-Based Practice.

ACKNOWLEDGMENTS

The author has no financial or proprietary interest in the subject matter of this article.

© Academic Division of Ochsner Clinic Foundation

REFERENCES

1. Moses H 3rd, Matheson DH, Dorsey ER, George BP, Sadoff D, Yoshimura S. The anatomy of health care in the United States. JAMA. 2013;310(18):1947-1963. PMID: 24219951.
2. Institute of Medicine. Initial National Priorities for Comparative Effectiveness Research. Washington, DC: National Academies Press; 2009. http://books.nap.edu/openbook.php?record_id=12648. Accessed September 25, 2014.
3. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290(12):1624-1632. PMID: 14506122.
4. Kinney ED. Comparative effectiveness research under the Patient Protection and Affordable Care Act: can new bottles accommodate old wine? Am J Law Med. 2011;37(4):522-566. PMID: 22292212.
5. Healthcare Research and Quality Act of 1999, Pub L No. 106-129. https://www.congress.gov/bill/106th-congress/senate-bill/580. Accessed November 28, 2014.
6. Agency for Healthcare Research and Quality: General Provisions—Definitions, 42 USC §299-299c-7 (2010). http://uscode.house.gov/browse/prelim@title42/chapter6A&edition=prelim. Accessed November 28, 2014.
7. Thorpe JH. Comparative effectiveness research and health reform: implications for public health policy and practice. Public Health Rep. 2010;125(6):909-912. PMID: 21121237.
8. Hickam D, Slutsky J. PCORI funding announcements (PFAs), methodology standards, and national priorities. Presented at: Getting to Know PCORI: From Application to Closeout [PCORI workshop]; September 22-23, 2014; San Diego, CA. http://www.pcori.org/sites/default/files/PCORI-Getting-To-Know-PCORI-Workshop_PFA-Methodology-Standards-and-National-Priorities_092214.pdf. Accessed November 28, 2014.
9. Cargo M, Mercer SL. The value and challenges of participatory research: strengthening its practice. Annu Rev Public Health. 2008;29:325-350. PMID: 18173388.
10. Concannon TW, Fuster M, Saunders T, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med. 2014;29(12):1692-1701. PMID: 24893581.
11. Domecq JP, Prutsky G, Elraiyah T, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. 2014;14:89. PMID: 24568690.
12. Patient-Centered Outcomes Research Institute. PCORI Patient and Family Engagement Rubric. February 3, 2014. http://www.pcori.org/assets/2014/02/PCORI-Patient-and-Family-Engagement-Rubric.pdf. Accessed November 28, 2014.
13. Sox HC, Goodman SN. The methods of comparative effectiveness research. Annu Rev Public Health. 2012;33:425-445. PMID: 22224891.
14. Heisler M, Hofer TP, Schmittdiel JA, et al. Improving blood pressure control through a clinical pharmacist outreach program in patients with diabetes mellitus in 2 high-performing health systems: the adherence and intensification of medications cluster randomized, controlled pragmatic trial. Circulation. 2012;125(23):2863-2872. PMID: 22570370.
15. Sjöström L. Review of the key results from the Swedish Obese Subjects (SOS) trial—a prospective controlled intervention study of bariatric surgery. J Intern Med. 2013;273(3):219-234. PMID: 23163728.
16. Berkowitz SA, Krumme AA, Avorn J, et al. Initial choice of oral glucose-lowering medication for diabetes mellitus: a patient-centered comparative effectiveness study. JAMA Intern Med. Published online October 27, 2014. doi:10.1001/jamainternmed.2014.5294.
17. Schneeweiss S. Developments in post-marketing comparative effectiveness research. Clin Pharmacol Ther. 2007;82(2):143-156. PMID: 17554243.
18. Berger ML, Dreyer N, Anderson F, Towse A, Sedrakyan A, Normand SL. Prospective observational studies to assess comparative effectiveness: the ISPOR good research practices task force report. Value Health. 2012;15(2):217-230. PMID: 22433752.
19. Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part I. Value Health. 2009;12(8):1044-1052. PMID: 19793072.
20. Chalkidou K, Tunis S, Whicher D, Fowler R, Zwarenstein M. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials. 2012;9(4):436-446. PMID: 22752634.
21. Rosenthal GE. The role of pragmatic clinical trials in the evolution of learning health systems. Trans Am Clin Climatol Assoc. 2014;125:204-216; discussion 217-218. PMID: 25125735.
22. Luce BR, Drummond MF, Dubois RW, et al. Principles for planning and conducting comparative effectiveness research. J Comp Eff Res. 2012;1(5):431-440. PMID: 24236420.
23. Berger ML, Martin BC, Husereau D, et al. A questionnaire to assess the relevance and credibility of observational studies to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):143-156. Erratum in: Value Health. 2014;17(4):489. PMID: 24636373.
24. Dreyer NA, Schneeweiss S, McNeil BJ, et al. GRACE principles: recognizing high-quality observational studies of comparative effectiveness. Am J Manag Care. 2010;16(6):467-471. PMID: 20560690.
25. Schneeweiss S. On guidelines for comparative effectiveness research using nonrandomized studies in secondary data sources. Value Health. 2009;12(8):1041. PMID: 19744290.
26. Schneeweiss S, Seeger JD, Jackson JW, Smith SR. Methods for comparative effectiveness research/patient-centered outcomes research: from efficacy to effectiveness. J Clin Epidemiol. 2013;66(8 Suppl):S1-S4. PMID: 23849143.
27. Glasgow RE, Magid DJ, Beck A, Ritzwoller D, Estabrooks PA. Practical clinical trials for translating research to practice: design and measurement recommendations. Med Care. 2005;43(6):551-557. PMID: 15908849.
28. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009;62(5):464-475. PMID: 19348971.
29. Grossmann C, Sanders J, English RA; Roundtable on Value & Science-driven Health Care; Forum on Drug Discovery, Development, and Translation; Institute of Medicine. Large Simple Trials and Knowledge Generation in a Learning Health System: Workshop Summary. The Learning Health Systems Series. Washington, DC: National Academies Press; 2013. http://www.ncbi.nlm.nih.gov/books/NBK201274/. Accessed November 28, 2014.
30. Saag KG, Mohr PE, Esmail L, et al. Improving the efficiency and effectiveness of pragmatic clinical trials in older adults in the United States. Contemp Clin Trials. 2012;33(6):1211-1216. PMID: 22796098.
31. Hartung DM, Guise JM, Fagnan LJ, Davis MM, Stange KC. Role of practice-based research networks in comparative effectiveness research. J Comp Eff Res. 2012;1(1):45-55. PMID: 23105964.
32. Greene SM, Reid RJ, Larson EB. Implementing the learning health system: from concept to action. Ann Intern Med. 2012;157(3):207-210. PMID: 22868839.
33. Umscheid CA, Williams K, Brennan PJ. Hospital-based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352-1355. PMID: 20697961.
34. Agency for Healthcare Research and Quality. About. AHRQ Practice-Based Research Networks (PBRNs) website. http://pbrn.ahrq.gov/about. Updated May 2013. Accessed November 28, 2014.
35. Agency for Healthcare Research and Quality. PBRN Slides. AHRQ Practice-Based Research Networks (PBRNs) website. http://pbrn.ahrq.gov/pbrn-registry/pbrn-slides. Updated January 2014. Accessed November 28, 2014.
36. Mold JW, Peterson KA. Primary care practice-based research networks: working at the interface between research and quality improvement. Ann Fam Med. 2005;3(Suppl 1):S12-S20. PMID: 15928213.
37. Lobb R, Colditz GA. Implementation science and its application to population health. Annu Rev Public Health. 2013;34:235-251. PMID: 23297655.
38. Masica AL, Ballard DJ. The protean role of health care delivery organizations in comparative effectiveness research. Mayo Clin Proc. 2009;84(12):1062-1064. PMID: 19955242.