What Does Productivity Mean for Teaching?
Nutter DO, Bond JS, Coller BS, et al. Measuring faculty effort and contributions in medical education. Acad Med 2000;75:199–207.
Comments
Clinicians often have clear measures of productivity—number of patients seen, major benchmarks met, and so on. For those of us whose time is divided between clinical work and teaching, administration, or scholarship, what are the measures of productivity for the non-clinical work? The authors outline a four-step process to develop them. Step 1 lists the relevant activities, drawing on a comprehensive list provided in the article; each activity is assigned a unit of measure and a weight by which it is multiplied. Step 2 incorporates the quality of each activity through an additional multiplier. Step 3 groups activities into broad categories (teaching, curriculum development, administration, and scholarship) and asks whether a further multiplier should be applied to reflect the institution's priorities. Step 4 adds a final multiplier based on who the learner is (allied health student, medical student, resident) and on the institutional mission. This is a practical method for laying a program's groundwork for performance evaluation of teaching, administration, and scholarship.
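The four steps amount to a chained multiplication. The sketch below is illustrative only; the activity name, weight, and multiplier values are hypothetical and are not taken from the article.

```python
# Hypothetical sketch of the four-step RVU calculation described above.
# All activity names, weights, and multiplier values are invented for
# illustration; a school would set its own.

def education_rvu(units, activity_weight, quality_multiplier,
                  category_multiplier, mission_multiplier):
    """Combine the four steps into a single relative value unit score.

    Step 1: units of an activity times its assigned weight.
    Step 2: adjust for the rated quality of the activity.
    Step 3: adjust for the institutional priority of the category.
    Step 4: adjust for the learner type and institutional mission.
    """
    return (units * activity_weight
            * quality_multiplier
            * category_multiplier
            * mission_multiplier)

# Example: 20 hours of small-group teaching (weight 1.5), rated
# above-average quality (1.2), in a high-priority category (1.1),
# delivered to medical students (1.0).
score = education_rvu(20, 1.5, 1.2, 1.1, 1.0)
print(round(score, 1))  # 39.6
```

Because every step is a multiplier, a school can tune any one factor (for example, weighting resident teaching above allied-health teaching) without restructuring the rest of the system.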
The authors state that successful implementation depends on collecting and processing this information in an ongoing fashion using a web-based system. They argue that paper forms for regularly recording, reporting, and processing activity data are unworkable. That is believable, but a web-based system for real-time, online reporting and feedback is difficult for many of us to implement: it requires programmers to develop a useful web interface, an expense that is hard to fund as budgets tighten.
For those interested, additional comments on implementation issues are in the October 2000 issue of the same journal (pp. 993–5).
A national panel on medical education was appointed as a component of the AAMC's Mission-based Management Program and charged with developing a metrics system for measuring medical school faculty effort and contributions to a school's education mission. The panel first defined important variables to be considered in creating such a system: the education programs in which medical school faculty participate; the categories of education work that may be performed in each program (teaching, development of education products, administration and service, and scholarship in education); and the array of specific education activities that faculty could perform in each of these work areas. The panel based the system on a relative value scale, since this approach does not equate faculty performance solely to the time expended by a faculty member in pursuit of a specific activity. Also, a four-step process to create relative value units (RVUs) for education activities was developed. This process incorporates quantitative and qualitative measures of faculty activity and also can measure and value the distribution of faculty effort relative to a school's education mission. When adapted to the education mission and culture of an individual school, the proposed metrics system can provide critical information that will assist the school's leadership in evaluating and rewarding faculty performance in education and will support a mission-based management strategy in the school.
Do Teaching Hospitals Provide Better Quality of Care?
Allison JJ, Kiefe CI, Weissman NW, et al. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA 2000;284:1256–1262.
Comments
The authors use a large national data set of elderly patients to show that teaching hospitals gave aspirin, β-blockers, and ACE inhibitors to a greater proportion of Medicare patients hospitalized with acute myocardial infarction, with a gradient of increasing performance from nonteaching to minor to major teaching hospitals. Unadjusted mortality was lower at teaching hospitals, perhaps because they offer better care.
This is good news for those of us at teaching institutions. It is reassuring that our standard line—that patients cared for by interns and residents under staff supervision receive excellent medical care—is supported by this study, at least in the limited area it examined.
The next question is more central to the survival of teaching hospitals: is the better quality of care worth the extra cost? To take the largest of the three outcome differences in this study, is a 12.4% absolute increase in the use of β-blockers after AMI worth an X% increase in cost? What is X? Is it worth it?
Context: Issues of cost and quality are gaining importance in the delivery of medical care, and whether quality of care is better in teaching vs nonteaching hospitals is an essential question in this current national debate. Objective: To examine the association of hospital teaching status with quality of care and mortality for fee-for-service Medicare patients with acute myocardial infarction (AMI). Design, Setting, and Patients: Analysis of Cooperative Cardiovascular Project data for 114,411 Medicare patients from 4361 hospitals (22,354 patients from 439 major teaching hospitals, 22,493 patients from 455 minor teaching hospitals, and 69,564 patients from 3467 nonteaching hospitals) who had AMI between February 1994 and July 1995. Main Outcome Measures: Administration of reperfusion therapy on admission, aspirin during hospitalization, and β-blockers and angiotensin-converting enzyme inhibitors at discharge for patients meeting strict inclusion criteria; mortality at 30, 60, and 90 days and 2 years after admission. Results: Among major teaching, minor teaching, and nonteaching hospitals, respectively, administration rates for aspirin were 91.2%, 86.4%, and 81.4% (P<.001); for angiotensin-converting enzyme inhibitors, 63.7%, 60.0%, and 58.0% (P<.001); for β-blockers, 48.8%, 40.3%, and 36.4% (P<.001); and for reperfusion therapy, 55.5%, 58.9%, and 55.2% (P = .29). Differences in unadjusted 30-day, 60-day, 90-day, and 2-year mortality among hospitals were significant at P<.001 for all time periods, with a gradient of increasing mortality from major teaching to minor teaching to nonteaching hospitals. Mortality differences were attenuated by adjustment for patient characteristics and were almost eliminated by additional adjustment for receipt of therapy. Conclusions: In this study of elderly patients with AMI, admission to a teaching hospital was associated with better quality of care based on 3 of 4 quality indicators and lower mortality.
Sometimes Our Training Method May Not Matter Much!
Gazewood JD, Mehr DR. Predictors of physician nursing home practice: does what we do in residency training make a difference? Fam Med 2000;32:551–555.
Comments
The authors hypothesized that making nursing home rounds with attending physicians during residency would be associated with currently seeing nursing home patients. This makes sense, since we expect a training program to influence practice. For example, they point out that previous studies showed exposure in a family practice residency to faculty with a strong interest in obstetrics increased the likelihood of residency graduates' including obstetrics in their practice. However, rounding with attendings was not a predictor of whether a graduate practiced in nursing homes. The significant predictors were more hours worked per week, more outpatients seen per week, having an active hospital practice, living in a smaller community, and not being in a multispecialty group. The subjects came from a single Midwestern residency, so we may want to withhold judgment on whether this holds for residencies generally. Nevertheless, it raises some important points for medical education.
Training residents to be good at nursing home care and to navigate well in that particular setting may not translate into graduates including nursing homes in their practices. Sometimes our training methods may not matter much—from the viewpoint of whether a graduate has an active nursing home practice. Rather, other factors seem to determine this: how busy they are, whether they have a hospital practice, where they live, and what type of group they are in.
What we do hope is true is that a well-trained resident who chose to care for nursing home patients after graduation would provide high-quality care. And since we do not know who will enter which practice settings, high-quality nursing home care should be included in the training of all primary care residents.
Background and Objectives: The number of physicians who care for nursing home patients is inadequate. This study determined predictors of current nursing home practice, including whether making nursing home rounds with an attending physician during residency is a predictor of subsequent nursing home practice. Methods: We used a cross-sectional survey to study 170 family physicians in private or academic practice in a large, university-based Midwestern family practice residency program. Results: The response rate was 86%. Fifty-five percent of respondents had an active nursing home practice. Rounding in a nursing home with an attending during residency had no relation to current nursing home practice. In comparison to physicians without an active nursing home practice, physicians with an active nursing home practice were more likely to reside in a smaller community, have a hospital practice (60.5% versus 39.5%), see more outpatients per week (105 versus 78), and work more hours per week (57 versus 49). In a logistic regression model, decreasing community size, number of hours worked per week, and having an active hospital practice were associated with active nursing home practice. Conclusions: Factors other than educational experience have an effect on physician nursing home practice.
Can Concept Mapping Explain Why Some Residents' Exam Scores Seem Mismatched With Their Abilities?
West DC, Pomeroy JR, Park JK, et al. Critical thinking in graduate medical education: A role for concept mapping assessment? JAMA 2000;284:1105–1110.
Comments
This article provides a good description of concept mapping assessment. The method focuses on how learners organize their knowledge, especially the cross-links between different areas of learning about a subject. More experienced residents seemed to organize their knowledge differently, forming cross-links between concepts that less experienced residents did not make. Concept mapping appears to be a valid measure of a resident's conceptual framework in an area of knowledge and can be used to measure change in that framework with new learning. It may be a useful tool for measuring the effectiveness of specific learning experiences.
Concept mapping assessment may evaluate aspects of knowledge distinct from the usual "knowledge base" measured by standardized multiple-choice examinations. There seemed to be little, if any, correlation between examination scores and concept mapping scores. The authors feel the method can provide insight into why some residents score well on standardized tests but have difficulty applying their knowledge to clinical situations. Perhaps these residents know many of the facts that standardized examinations test but are weak in the cross-links among those facts, links that may be critical to clinical skill.
Context: Tools to assess the evolving conceptual framework of physicians-in-training are limited, despite their critical importance to physicians' evolving clinical expertise. Concept mapping assessment (CMA) enables teachers to view students' organization of their knowledge at various points in training.
Objective: To assess whether CMA reflects expected differences and changes in the conceptual framework of resident physicians, whether concept maps can be scored reliably, and how well CMA scores relate to the results of standard in-training examination. Design, Setting, and Participants: A group of 21 resident physicians (9 first-year and 12 second- and third-year residents) from a university-based pediatric training program underwent concept map training, drew a preinstruction concept map about seizures, completed an education course on seizures, and then drew a postinstruction map. Maps were scored independently by 3 raters using a standardized method. The study was conducted in May and June 1999. Main Outcome Measures: Preinstruction map total scores and subscores in 4 categories compared with postinstruction map scores; map scores of second- and third-year residents compared with first-year residents; and interrater correlation of map scores. Results: Total CMA scores increased after instruction from a mean (SD) preinstruction map score of 429 (119) to a mean postinstruction map score of 516 (196) (P = .03). Second- and third-year residents scored significantly higher than first-year residents before instruction (mean [SD] score of 472 [116] vs 371 [102], respectively; P = .04), but not after instruction (mean [SD] scores, 561 [203] vs 456 [179], respectively; P = .16). Second- and third-year residents had greater preinstruction map complexity as measured by cross-link score (P = .01) than first-year residents. The CMA score had a weak to no correlation with the American Board of Pediatrics In-training Examination score (r = 0.10–0.54). Interrater correlation of map scoring ranged from weak to moderate for the preinstruction map (r = 0.51–0.69) and moderate to strong for the postinstruction map (r = 0.74–0.88). 
Conclusions: Our data provide preliminary evidence that concept mapping assessment reflects expected differences and change in the conceptual framework of resident physicians. Concept mapping assessment and standardized testing may measure different cognitive domains.
What Happened After They Left the Hospital?
Wright SM, Durbin P, Barker LR. When should learning about hospitalized patients end? Providing housestaff with post-discharge follow-up information. Acad Med 2000;75:380–383.
Comments
Four to six weeks after discharge, the primary care physician was sent a copy of the patient's discharge summary along with a 1-page follow-up form. This was completed and mailed or faxed to the residents who had cared for the patient in the hospital. Most (87%) of the community primary care physicians agreed it was “not a big inconvenience” to complete the follow-up forms. The majority (89%) of the residents felt that receiving follow-up information about their inpatients was educationally valuable.
Feedback about patients post-discharge can serve as a valuable learning tool. When a working diagnosis at discharge is later found to be incorrect, sharing this information with the residents will help them identify errors in their clinical judgment and avoid similar errors in the future. Feedback to residents about patients who did well after discharge can provide some positive reinforcement for the resident's hard work.
The ease of communication provided by email may make instituting a similar system realistic for many residency programs that currently lack a mechanism for providing post-discharge feedback to their residents.
Purpose: As hospital stays grow shorter, many patients are discharged to follow up with their primary care physicians before their diagnoses and responses to treatment are clear. The authors studied the value and feasibility of providing housestaff with follow-up information about their former inpatients. Method: Patients included in the study (1) had been admitted to the housestaff service during the study period (January to March 1997), (2) had received follow-up care from a primary care physician in the Johns Hopkins Bayview Physicians' Professional Association, and (3) had been hospitalized for at least three days. The primary care physician completed a single-page follow-up form four to six weeks after the patient's discharge from the hospital; that form was given to the house officers who had cared for that patient. Results: Responses to a preintervention questionnaire completed by 28 of 39 house officers (72%) showed that 92% felt it to be important or extremely important to get follow-up information about inpatients; 86% indicated that they rarely or never receive such information. During the study period, house officers were sent follow-up information for 65 of 76 eligible patients (85%). In their responses to a post-intervention questionnaire (response rate 73%), the house officers most valued learning about the accuracy of the discharge diagnosis, the results of additional diagnostic tests, and information about the patient's quality of life since discharge. Housestaff's satisfaction with the follow-up information received about inpatients improved (p = .001). Conclusions: Providing follow-up information was a feasible intervention that was valued by housestaff.
Building a Better Lecture
Copeland HL, Longworth DL, Hewson MG, Stoller JK. Successful lecturing: A prospective study to validate attributes of the effective medical lecture. J Gen Intern Med 2000;15:366–371.
Comments
The medical lecture is one of the major teaching tools used in graduate medical education. While the content of a lecture is clearly important, a successful presentation requires much more than accurate and up-to-date information. This article provides objective validation for some of the commonly recommended components of an effective lecture. Case-based lectures were slightly preferred, while the use of humor did not correlate with the effectiveness of a lecture. Engaging the audience, identifying key points, and lecture clarity were identified as key components of an effective lecture.
Applying these features can help us create better lectures. Engage the audience by demonstrating the relevance of the lecture using sample cases or provocative questions. Demonstrate interest and enthusiasm to engage the audience. Identify key points and support them with clear examples. When concluding, summarize by applying the key points to the introductory case or question. Ensure clarity of the lecture by using clear and understandable visual aids. Slides should be easily read from anywhere in the room. Tables and charts taken from journal articles should be simplified to leave out distracting extraneous information.
Objective: In a study conducted over 3 large symposia on intensive review of internal medicine, we previously assessed the features that were most important to course participants in evaluating the quality of a lecture. In this study, we attempt to validate these observations by assessing prospectively the extent to which ratings of specific lecture features would predict the overall evaluation of lectures. Measurements and Main Results: After each lecture, 143 to 355 course participants rated the overall lecture quality of 69 speakers involved in a large symposium on intensive review of internal medicine. In addition, 7 selected participants and the course directors rated specific lecture features and overall quality for each speaker. The relations among the variables were assessed through Pearson correlation coefficients and cluster analysis. Regression analysis was performed to determine which features would predict the overall lecture quality ratings. The features that most highly correlated with ratings of overall lecture quality were the speaker's ability to identify key points (r = .797) and be engaging (r = .782), the lecture clarity (r = .754), and the slide comprehensibility (r = .691) and format (r = .660). The three lecture features of engaging the audience, lecture clarity, and using a case-based format were identified through regression as the strongest predictors of overall lecture quality ratings (R² = 0.67, P = .0001). Conclusions: We have identified core lecture features that positively affect the success of the lecture. We believe our findings are useful for lecturers wanting to improve their effectiveness and for educators who design continuing medical education curricula.
Medical Students Don't Decrease Patient Satisfaction
Simon SR, Peters AS, Christiansen CL, Fletcher RH. The effect of medical student teaching on patient satisfaction in a managed care setting. J Gen Intern Med 2000;15:457–461.
Comments
Many managed care organizations have avoided teaching medical students in ambulatory settings out of concern that it decreases patient satisfaction. This study compared the satisfaction of patients seen by the physician alone with that of patients seen by a medical student and the physician. There was no significant difference in patient satisfaction. This provides convincing evidence against the reluctance of managed care organizations to include medical student training in their clinics.
With proper training, medical students can contribute to productivity in the outpatient setting (Lipsky MS, Egan M. Students as assets. Fam Med 1999;31:387-388). Students can effectively counsel patients about smoking cessation, weight loss, and safe sex practices. They can also assist with making follow-up phone calls and notifying patients of lab and study results. Many students have computer skills that exceed those of their preceptor and can put those skills to use doing literature searches and finding patient education materials on the Internet. Medical students can learn a lot by studying the outside medical records of patients new to the practice and summarizing important information.
Contrary to the commonly held belief, medical students do not decrease patient satisfaction in the ambulatory clinic. The satisfaction physicians derive from teaching, along with the potential gain in productivity, is a strong argument for expanding medical student teaching in the outpatient setting.
Objective: To measure the effect on patient satisfaction of medical student participation in care and the presence of medical student teaching.
Design: Prospective cohort study.
Setting: Eight outpatient internal medicine departments of a university-affiliated HMO in Massachusetts. Patients: Two hundred seven patients seen on teaching days (81 patients who saw a medical student-preceptor dyad and 126 patients who saw the preceptor alone), and 360 patients who saw the preceptor on nonteaching days. Five hundred (88%) of 567 eligible patients responded.
Measurements and Main Results: Thirteen closed-response items on a written questionnaire, measuring satisfaction with specific dimensions of care and with care as a whole. Visit satisfaction was similar among patients on teaching and nonteaching days. Ninety-one percent of patients seeing a medical student, 93% of patients seeing the preceptor alone on teaching days, and 93% of patients on nonteaching days were satisfied or very satisfied with their visit; less than 2% of patients in each group were dissatisfied with their visit. Satisfaction on all measured dimensions of care was similar for patients seeing a medical student, patients seeing the preceptor alone on teaching days, and patients seeing the preceptor on nonteaching days. Conclusions: Medical student participation and the presence of medical student teaching had little effect on patient satisfaction. Concerns about patient satisfaction should not prevent managed care organizations from participating in primary care education.
Autopsy: Far From Obsolete
Durning S, Cation L. The educational value of autopsy in a residency training program. Arch Intern Med 2000;160:997–999.
Comments
Autopsy rates have declined from 50% in the 1950s to less than 11% today (Hanzlick R, Baker P. Institutional autopsy rates. Arch Intern Med 1998;158:1171–1172). The necessity of the autopsy has been called into question due to improved premortem diagnostic techniques, lack of reimbursement, and fear of malpractice suits. This study evaluated the autopsies done over a 2-year period on the patients of an internal medicine teaching service. Of the 29 autopsies performed, one third (10 cases) yielded an unexpected major diagnosis that contributed to the patient's death. These diagnoses included fungal sepsis (2 cases), pulmonary embolus (2 cases), and one case each of bacterial sepsis, subdural hematoma, ventricular wall rupture, bowel infarction, portal vein thrombosis, and myocardial infarction. Other studies have shown that physicians are unable to accurately predict which autopsies are likely to yield unexpected diagnoses. In this study, all but two of the 29 autopsies were rated by the attending physician as a valuable educational experience. The article supports the importance of the autopsy as a learning tool for residents.
Background: Historically, the autopsy has been an indispensable educational tool. Over the past several decades, however, the national autopsy rate has declined and the educational role of autopsy in modern medicine is being questioned. Objective: To assess the educational value of autopsy attendance in an internal medicine residency program. Methods: We performed a retrospective review of all autopsies performed on the general internal medicine teaching service between October 1996 and September 1998. Premortem and postmortem diagnoses were determined and compared and attending physician surveys were reviewed.
Results: Eighty-eight deaths occurred during the study period. Twenty-nine (33%) patients underwent autopsy. All autopsies were observed by the primary team and the attending physician completed an autopsy survey on each patient. An unexpected pathological diagnosis directly contributing to death was detected in 10 (34%) patients at autopsy. Additional unexpected pathological diagnoses were discovered in 23 (79%) cases. Attending physician surveys revealed that all 10 unexpected diagnoses contributing to death were observed by the primary team at the time of autopsy. Autopsy attendance was rated as a valuable educational experience in 27 cases (93%). Conclusion: Autopsy is a valuable educational tool and autopsy attendance should remain an integral part of internal medicine residency training.
- Ochsner Clinic and Alton Ochsner Medical Foundation