Monday, December 28, 2015

Book Smarts and Common Sense in Medicine - Why Highly Intelligent People Make Bad Decisions

In the presentation on Epistemic Problems in Medicine on the Medical Evidence Blog, I begin by highlighting the difference between intelligence (book smarts) and rationality (common sense).  Oftentimes thought to be one and the same, they are distinctly different, and understanding failures of common sense among very intelligent people can illuminate many problems that we see in medicine, several of which have been highlighted on this blog.

Intelligence is the ability of the mind to function algorithmically, like a computer.  Intelligent people are good at learning, through rote memorization, rules that can be applied to solve well-defined problems.  They are also good at pattern recognition, which allows them to recognize a problem type and know which rule applies to it.  This kind of intelligence is very precisely measured by IQ tests.  It is correlated with scores on college entrance exams like the ACT and SAT and with other entrance tests such as the MCAT.  Of course, intelligent people need to devote the time to learn the rules to answer the questions on these tests, which measure both aptitude and achievement.

Rationality, I think, is more closely aligned to the notion of common sense and it shows very little significant correlation to IQ in any domain in which it has been investigated.  Cognitive psychologists talk about two kinds of rationality.  The first is how well a person's beliefs map onto reality (the actual structure of the world), and it has been termed epistemic rationality (sometimes also called theoretical or evidential rationality).  Persons with epistemic rationality have beliefs that are congruent with the world around them and which are strong in proportion to the strength of the evidence supporting them.  Thus a physician who believes that bloodletting or mercury therapy cures disease in the 21st century would be considered to have suboptimal epistemic rationality, as would a person whose fear of Hantavirus while hiking in New Mexico is grossly disproportionate to the actual statistical risk.

Thursday, November 26, 2015

Jugular Venous Pulsations Video - How to Examine it Properly and not Mistake it for the Carotid Pulsations

In the video below, watch the jugular venous pulsations to know what you ought to be looking for.  In my experience, most of the time, physicians at all levels cannot confidently and accurately identify the pulsations that are clearly identified in the video.  Indeed, in many videos purporting to show the JVP on YouTube, the pulsations shown are in the external jugular veins, or carotid arterial pulsations are being mistaken for jugular venous pulsations.


In two other positions with this particular "jugular model" (keep OJ away from her!), the pulsations were not visible enough to make a compelling video image, emphasizing the finicky nature of the pulsations, the need to position the patient correctly to see them, and the general difficulty of confidently and accurately identifying the pulsations during a cardiac examination which is all too often cursory and unreliable in its findings.

The key feature of the JVP that differentiates it from the carotid arterial pulsations is whether the most prominent feature of the "waves" is a rapid descent or a rapid ascent.  In the former case, as in the video, it is the X and Y descents following the venous A and V waves that are most obviously seen.  All too often, the rapidly ascending waves of the carotid arterial pulses are mistaken for the JVP.  Look for rapid descents - when you find them, you know you have found what you're looking for.

Tuesday, November 17, 2015

Beliefs That Dictate Evidence: Open Visitation in the ICU (Again)

The cart belongs behind the horse.
I recently blogged on ideologues who haven't any interest in the truth; rather, their interest is in defending their beliefs.  For these true believers, evidence is sought selectively and strength of belief is not apportioned to strength of evidence.  Beliefs reign supreme, and evidence serves the beliefs.  The cart leads the horse.

And so let it be with open visitation in the ICU.  I'm interested in this because it is an issue of practical concern for me, and my interest was recently piqued because in nursing school, my wife was taught that open visitation is better for everyone and that ample evidence supported this contention.  Today, I came across a tweet about ICU visitation policies, a statement from the American Association of Critical Care Nurses.  So I decided to investigate a bit further the evidence upon which their policy proposals are predicated.

The very first statement in the "Supporting Evidence" section of this document is "In practice, 78% of ICU nurses in adult critical care units prefer unrestricted policies."  This statement is at odds with my personal experience working with ICU nurses for the better part of the past 20 years.  While they are patient and family advocates generally, they also recognize that the exigencies of the ICU environment require some limitation of visitation, and so does their own psychological well-being.  So I began by investigating references 7-13 which are proffered in support of this statement which for some (many?) lacks face validity.  Here are those seven references, a description, and a synopsis taken from the abstract of each:

Tuesday, November 10, 2015

Messed Up Seven Ways To Sunday: Communication About Course and Prognosis

I was reminded the other day about the importance of narrative storytelling and theory of mind in communicating with patients' families.  A good storyteller, say, Stephen King, has theory of mind - he can see into the minds of his readers and anticipate how they are going to react to what he writes, to the story he's narrating to them.  He knows full well that if he uses a vocabulary they don't understand, they can't possibly engage with his story.

So if you EVER use the word "intubation" while talking to a family, or "mesenteric ischemia" or "lumbar puncture" or similar technical jargon, you are not going to engage them with your story, and you are going to confuse and frustrate them.  You must use your theory of mind to infer which parts of your medical vocabulary laypeople do not understand (most of them).

Next, you cannot enter the room of a patient who, say, crumped from flash pulmonary edema and was intubated, and start talking about "mitral stenosis" and "wedge pressures" and "diuresis".  They have NO IDEA what those things mean.  A better narrative would be:
"She had rheumatic fever when she was a child, right?  Well rheumatic fever injures and inflames the heart valves and over the years they can stiffen from that inflammation and injury.  It's just like a guy who injures his knee playing football in high school and then years later has arthritis in the area of that injury.  Same thing, basically.  Anyway, the heart compensates for that stiff or constricted valve over the years by building up pressure behind the valve, just like pressure builds up behind the kink in a garden hose.  You can live like that for a long time because the heart and body are good at compensating, but there comes a point where the pressure behind the kink in the hose or the stiff valve causes fluid to leak into the lungs and then it's hard to breathe with the lungs wet and heavy.  This is essentially what's happened to her - she came in with low oxygen and trouble breathing from fluid in the lungs caused by a stiff valve in the heart.  So we have to remove fluid with water pills to get her breathing without the assistance of the breathing machine, and then she's going to need surgery to replace that valve at some point, which will be determined by the surgeon."

Wednesday, September 23, 2015

Moral Heuristics in Medicine: Judge Not Lest Ye Be Judged

We often evaluate people's health choices and the resulting outcomes through a moral lens, even though many people think it is politically incorrect to do so, to "pass judgment" on others, a proscription that is especially strong within the medical profession.

But it is natural to do so.  The invocation "Thy body is thy temple" rings true from both a moral and a medical perspective.  If only people would treat their body as their temple, how the woes of humanity would melt away, and we would not have a looming physician shortage but rather a surplus!

I am not here concerned with whether or not it is appropriate to pass a moral judgment upon people for behavioral choices that affect their health.  It could be argued either way, i.e., that morality is nonabsolute and individual and that healthcare professionals have no right to make judgments based upon their conception of what is moral, or that there are certain "sins" such as sloth and gluttony that should be universally frowned upon because they are inherently bad for individual and public health.  Rather I am concerned with whether we selectively apply moral condemnation but disguise it under the veil of medical judgment, and how other choices that have health impacts are not imbued with a moral essence, also selectively.

When the alcoholic with cirrhosis is admitted with terminal liver failure, there is a collective sigh with the subtext "he did this to himself."  We condemn alcoholism as a personal moral failing, and there is the ever-present lurking tendency to say, with moralistic overtones, "you reap what you sow."

I was reminded of this the other day when I admitted a woman with gastrointestinal hemorrhage who, surprisingly, was found to have esophageal varices.  Most likely, she is not an alcoholic but instead has NASH (non-alcoholic steatohepatitis) related to obesity, an increasingly common cause of end-stage liver disease in obese people (including children).  But there is no collective sigh of exasperation with her moral failings, her gluttony of food rather than alcohol.  She's the poor woman with NASH, rather than the wretch with self-induced disease.

Pulmonologists can hardly talk about tobacco and its ills without overtones of moral condemnation.  So it would appear that while moral judgments can be made under the pretext of medical considerations, the opposite can also happen: the health impacts of a certain substance or behavior can be so dire that the crusade against the substance or behavior takes on moral dimensions.

I'm not sure any of this is wrong.  Morality is a universal part of humanity and provides a set of intuitional guides for our daily behavior.  But I think we should be consistent.  We should separate morality from medicine or join them and consider the moral dimensions of behaviors that we don't currently have strong moral intuitions about.  Ask yourself which is worse among the following health behaviors and more importantly ask yourself why you make that judgment.  Do you have medical evidence that suggests that one behavior has worse overall holistic health impacts than another?

Are the health consequences worse from:

a.)  daily smoking of marijuana
b.)  drinking 4-5 drinks per night
c.)  being 50-100 pounds overweight
d.)  watching 3-4 hours of TV per day
e.)  not exercising (at all)
f.)  smoking 1-2 cigarettes while at the bar on weekends
g.)  working 80 hours per week
h.)  being socially isolated 
i.)  moving your family for your career every 3-4 years
j.)  using smokeless tobacco
k.)  taking prescription opioid medications chronically for pain
l.)  living in Baltimore, Maryland as compared to Salt Lake City, UT
m.)  remaining single throughout one's adult life
n.)  not reading and staying otherwise mentally active
o.)  riding a motorcycle or driving a car
p.)  owning a firearm or owning a swimming pool

My intuition is that, when we evaluate these behaviors, we utilize a heuristic based on moral intuitions about the "badness" or degree of "sin" of each of these behaviors, and that that heuristic is likely to be highly fallible.  The moral overtones of a behavior are poor surrogates for its real health effects, and moreover we can extend moral judgment to many behaviors that affect health but which we frequently overlook as important determinants of it.

Because of a comment from a former colleague on Facebook, I have modified and updated the list and will make an additional comment.  When I was attending the Johns Hopkins Bloomberg School of Public Health from 2002-2003, I would sometimes get weird looks as I showed up to class in motorcycle riding gear.  There were several group projects about mandatory helmet laws (Maryland already had one), and riding a motorcycle was considered in many ways an affront to the public health.  So was firearms ownership.  During one lecture, attended by two hundred or so students, I challenged a professor of public health who was maligning firearms ownership or motorcycle riding, I forget which, to justify why he "exposed" himself and his family to the astronomical violent crime rate in Baltimore, Maryland, for the satisfaction of working at Johns Hopkins.  He was as speechless in that moment as I was unpopular in the School of Public Health.

This anecdote is emblematic of what I'm trying to point out in the post - that some behaviors have moral weight, and other similarly "bad for you" behaviors do not.  There is moral weight to riding a motorcycle or owning a firearm; there is no moral weight in the choice to live in Baltimore or install a swimming pool.  But, compared to living in SLC, UT, living in Baltimore statistically increases your risk of being murdered by 30 per 100,000.  That number is very close to the increase in the risk of death between riding a motorcycle and driving a car (an increase of about 45/100,000).  But riding motorcycles has taken on a moral (or judgmental, if you will) character.  The choice of where you live has not.  Think very, very carefully about that.
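The comparison reduces to simple arithmetic.  A minimal sketch (my own illustration; the figures are the round approximations quoted above, not independent data):

```python
# Excess annual risk of death, using the post's approximate round numbers.
PER_100K = 100_000

excess = {
    "living in Baltimore vs. Salt Lake City (homicide)": 30 / PER_100K,
    "riding a motorcycle vs. driving a car": 45 / PER_100K,
}

for choice, risk in excess.items():
    print(f"{choice}: {risk * PER_100K:.0f} per 100,000 ({risk:.5f} per person-year)")

# The two excess risks differ by only a factor of 1.5, yet one choice
# carries moral weight and the other carries none.
print(45 / 30)  # 1.5
```

Same order of magnitude, wildly different moral loading - which is the point.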

Monday, August 17, 2015

A Poor Predictor is Worse than No Predictor: On the Superiority of Empiricism in Some Medical Decisions

John Locke, empiricist.
The moral of this story is that much maligned empiricism is sometimes (often?) both the only thing to guide you and also the best thing to guide you.

I recently received a call (at an odd hour on the weekend) from an otolaryngologist (ENT) regarding a patient from whom she had drained a large submandibular abscess.  She was calling to tell me that she planned to leave the patient intubated in the ICU overnight, and she wanted "help with ventilator management" (which of course the patient does not need - he can be managed with an endotracheal tube not connected to any mechanical ventilator).  The patient did not have airway compromise or concerns thereof prior to surgery, but, she said, there was swelling noted after the case that (for her) raised concerns about the patency of the patient's airway if the endotracheal tube were to be removed.

(There is a second moral to this story: far too often, patients such as this are left intubated post-operatively not for their own safety, but rather for the convenience of surgeons and anesthesiologists who do not wish to spend the extra time awakening them from anesthesia and observing them carefully in the post-anesthesia care unit.  It is far easier not to fret over the depth of anesthesia, atelectasis, oxygen levels, and fluid status, and instead to leave the patient intubated, send them to the ICU, and let somebody else sort it out.  If I had a family member undergo a relatively routine, even if urgent or emergent, operation at an odd hour [holidays, weekends, after hours] and they were sent to the ICU post-op for no apparently good reason, there would be hell to pay.  Note also that for the surgeon and anesthesiologist to save an hour of their time, another physician has to drive to the hospital to take over for them, spending hours of his time, and often a nurse must be called in from home to accommodate the unexpected post-op admission [as was the case here].  The sheer arrogance and egocentricity of this are mind-boggling.  But I digress.)

Back to the story.  I naturally inquired as to what criteria we would use the next day to determine if the patient's oropharyngeal swelling had abated sufficiently such that we could safely extubate him.  The ENT replied that she would scope (endoscopy) the patient again in the morning and if the swelling had decreased we could proceed with extubation (removing the endotracheal tube).  Well and good.  Or is it?

Monday, August 3, 2015

Accidental Survival from Beneficent Neglect: When "There's Nothing More We Can Do" Becomes Your Salvation

"There's nothing more we can do", according to this NYT article, is a terrible thing for a physician to say to a patient or his family member, even if the intention is much needed candor.

Yet sometimes, a physician's resignation or a patient's refusal becomes the patient's salvation.  There is something to be learned about the futility of many of our treatments and our arrogant ignorance of our impotence in many situations.  Several examples, I hope, will cause physicians to reflect on many of our practices.

A study showing that cancer patients choosing palliative care outlived those choosing aggressive care should have caused a lot of introspection about the possibility that many things we do harm rather than help patients.  How are we to know?  In the ICU, we have several unique opportunities to observe the futility or downright harm of many things we do.

A young woman came to the ICU with mental status changes, an EEG was ordered, and a diagnosis of "non-convulsive status epilepticus" (NCSE) was made.  She was intubated and heavily sedated and treated with every manner of anticonvulsant and CNS depressants and coma-inducing agents.  The EEG continued to show, according to the report, NCSE two weeks later.  The family was told that "there's nothing more we can do" and a decision was made to stop all therapy and withdraw care and prepare to send her to hospice.  This was done, but over the next 24 hours, she awakened and was alert and oriented. She walked out of the hospital later that week.

Tuesday, May 26, 2015

Technological Crutches and Agenesis and Atrophy of Procedural Skills

This article in the New York Times describes the possibility that with increasing reliance on technology and automation, there is atrophy of human skillsets which can lead to untoward outcomes, especially when technology fails and humans have to take back the steering wheel.  One example it called upon was a crash in 2009 of an Air France jetliner that was caused by icing over of the airspeed sensors upon which the autopilot program relied.  When the autopilot failed and the pilots took over, they were confused and ill prepared, and the plane crashed into the Atlantic Ocean.

I am no general fan of romanticizing dated technology (except for the pager) such as the physical examination when superior and ubiquitous technology supersedes it.  Spending five or ten minutes flipping the patient into different contortions trying to identify a gallop or a subtle murmur seems quixotic if an echo has been ordered or the result is pending (although if this interests you as it did me, indulge yourself; its performance and ponderment reinforce the underlying physiology poignantly).  On the other hand, if a patient in the coronary care unit crumps and you cannot identify the obvious holosystolic murmur from a chordae rupture….

I am reminded specifically of certain technological crutches graduates of internal medicine and critical care training programs have come to depend upon in the past decade such as ultrasounds for the placement of central lines and performance of thoracenteses, and fiberoptic aids for endotracheal intubations.  These devices certainly have a role in both training and patient care, and I am generally familiar with the favorable data on success and complication rates, but something is certainly lost when a trainee’s or a practitioner’s efficacy is overly dependent upon use of these technological crutches.

What to do during a Code Blue on the floor when there is no ultrasound and no intravenous access?  I recall several Code Blues where I inserted a subclavian line during brief epochs when chest compressions were held, but it is not uncommon nowadays that trainees leave a critical care fellowship with no proficiency in the subclavian approach whatsoever (or worse, that they learned erroneously that the jugular approach is generally superior to the subclavian approach).  What to do when there is a Code Blue but the Glidescope is in the ER, or there is no Glidescope, the Glidescope malfunctions, or there is a Glidescope but there is also a GI bleed or profuse vomiting and no fiberoptic visibility?  How can you know how to instinctually position the head and neck for a direct view of the larynx if you have trained almost exclusively on a device that obviates a direct view of the larynx?  How do you percuss and tap a pleural effusion when there is no ultrasound available if you have learned this procedure by the “point and poke” method?

One approach to this problem is to insist that trainees learn the tried and true methods first, and resort to the technological aids only for difficult cases or those in which the simple methods have failed.  Make an attempt with the Miller 2 blade (one brief attempt) and if that fails, proceed to the Glidescope.  Identify the internal jugular using proper patient positioning and identification of anatomical landmarks and make a pass with the finder needle before resorting to the use of the ultrasound, or use the ultrasound to confirm or refute your estimation of the jugular position prior to making a pass, rather than relying on it from the get-go.  In this way, the technology can be a way to calibrate predictions and can enhance learning of the underlying basic techniques, while also bolstering proficiency in their performance, and increasing optionality in procedural approaches.


Even with widespread availability of echocardiograms, cardiologists must be able to identify basic murmurs.  If trainees are leaving their programs where 90% or more of their procedures were performed with a technological crutch or aid, they may have rude awakenings when atrophy of basic skills (or the absence of their development) becomes apparent during exigent circumstances in real world settings.

Tuesday, February 24, 2015

Bayes' Theorem Explained, No Math Required


I was asked by a medical student to explain Bayes' Theorem.  This blog is about lack of common sense in medicine, so it follows that education about first principles will contribute to uncommon sense, and I will oblige.

Bayes' theorem is simply a long or holistic way of looking at the world, one which is more in keeping with reality than the competing frequentist approach.  A Bayesian (a person who subscribes to the logic of Bayes' Theorem) looks at the totality of the data, whereas a frequentist is concerned with just a specific slice of the data such as a test or a discrete dataset.  Frequentist hypothesis testing is where we get P-values from.  Frequentists are concerned with just the data from the current study.  Bayesians are concerned with the totality of the data, and they do meta-analyses, combining data from as many sources as they can.  (But alas they are still reluctant frequentists, because they insist on combining only frequentist datasets, and shun attempts to incorporate more amorphous data such as "what is the likelihood of something like this based on common sense?")

Consider a trial of orange juice (OJ) for the treatment of sepsis.  Suppose that 300 patients are enrolled and orange juice reduces sepsis mortality from 50% to 20% with P<0.001.  The frequentist says "if the null hypothesis is true and there is no effect of orange juice in sepsis, the probability of finding a difference as great or greater than what was found between orange juice and placebo is less than 0.001; thus we reject the null hypothesis."  The frequentist, on the basis of this trial, believes that orange juice is a thaumaturgical cure for sepsis.  But the frequentist is wrong.
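To make the Bayesian's objection concrete, here is a minimal sketch (my own illustration; the prior, power, and threshold are assumed numbers, not from any actual trial) treating the trial like a diagnostic test for the hypothesis "orange juice works in sepsis":

```python
def posterior_prob(prior, power, alpha):
    """P(hypothesis is true | significant result), by Bayes' theorem:
    true positives / (true positives + false positives)."""
    true_pos = prior * power           # hypothesis true AND trial significant
    false_pos = (1 - prior) * alpha    # hypothesis false, yet P < alpha by chance
    return true_pos / (true_pos + false_pos)

# Assumed inputs: a 1-in-1,000 prior that orange juice cures sepsis
# (common sense says it's implausible), 90% power, and the trial's
# significance threshold of 0.001.
print(round(posterior_prob(prior=0.001, power=0.9, alpha=0.001), 2))  # 0.47
```

Even with P < 0.001, a suitably skeptical prior leaves the claim near a coin flip - the totality of the data, not just the slice from this trial, is what the Bayesian weighs.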

Thursday, February 12, 2015

Countless Hours, Part 3: Uncalibrated Interns and Immediate and Accurate Feedback

Immediate, accurate feedback begets calibration
In this third and final installment of Countless Hours:  How to Become a Stellar Student and An Incredible Intern, I will discuss the role of immediate and accurate feedback for the refinement of a skill, prediction, or prognostication to expert levels.

Imagine you are learning to play golf, but you can't see where your balls are going - it would be very difficult, without any feedback, to learn to modify your swing to improve your game.  Similarly, if the feedback you received were from an observer with poor vision, and it was not accurate, you would be trying to calibrate your swing to unreliable information, and your game would not improve insofar as the feedback was inaccurate.  Finally, if you did not receive the feedback on your swings until days later, it would be difficult to analyze it and adapt your game to it, compared with iterative feedback incorporated after each swing.

The same principles apply to learning the practice of medicine.  One of the reasons that the case study books mentioned in the previous post are so instructive is that they provide immediate and accurate feedback - you get to know whether your diagnosis was correct immediately after rendering it, and this feedback, from the experts who wrote the book (often with formal or informal peer review and editing), is presumably as accurate as you can hope for.  Thus, case-based practice is a very, very effective way to become an expert.

Then there is the "hands on" work you do on the wards and in clinic.  Here, feedback is, on average, less immediate, and less accurate, and this is one of the ways your learning in "real life" scenarios is compromised - but there are several things you can do about it to maximize the immediacy and accuracy of feedback in these environments.

Monday, February 9, 2015

Countless Hours: How to Become a Stellar Student and an Incredible Intern, Part 2: Iterative Practice

Iterative practice is the second component of becoming an expert at medical diagnosis or therapeutics (or, arguably, anything).  It is what you begin to do in the third year of medical school after you have mastered the domain specific knowledge of medicine.  And, the more you practice, the more experience you get, the better you will become, especially with immediate, accurate feedback (the topic of Part 3 of this series).

The need to see many, many cases during your training so you can get lots of practice has been undermined in the last 10 years as efforts to limit "work hours" have scaled back the volume of patients interns and residents see.  I think this has created a deficit, and I wager that graduates are leaving residencies less prepared (and more entitled) than in the past.  If the problem was hours worked, it was not the volume of patients that was at its root, but rather the horse manure scut work that medical students and residents were expected to do as low-wage and low-status workers in the system.  Scut work has no meaningful educational value.  You should try to minimize scut work as much as possible, without drawing accusations that you're "not a team player" (a euphemism for "he fails to accept our prostitution of him"), so that you can focus on meaningful learning.  Sadly, scut work will always be a part of the culture of medicine, but make no mistake, it is the enemy of learning.  Running a sample to the lab is hardly more educational than cleaning the men's room because the hospital won't hire enough janitors.

Fortunately, there are ways to get iterative practice, lots of it, without the distractions of scut work, if you can escape the wards when you're not doing meaningful patient care activities with associated learning opportunities.  On my third year rotation, I used to sneak off to the medical library in the hospital and go through old issues of Chest, looking for the "Pearls" section in the back of each issue, and read the brief case summary and try to figure it out.  The answer and a brief discussion were on the second page of the case.  If "working up" a new admission takes 3 hours and you can read a Chest Pearl in 10 minutes, reading the Pearl is 18 times more efficient than working on the wards.  Moreover, as a medical student, you are often told or you overhear the diagnosis before you even see a new patient, so it is NOT an unknown case, and it does NOT qualify for iterative practice of diagnosis.  It has value to work up that patient from the perspective of eliciting the history and physical exam, organizing the narrative, and making the presentation to your superiors, but make no mistake, you are NOT practicing diagnosis when the diagnosis is known.

I soon learned that Sahn and Heffner, the editors of the Chest Pearls, began a book series called Pearls.  I bought and devoured almost every one they published (except Sleep Pearls and TB Pearls - in keeping with what I said above, they were not unknowns and were thus not valuable to me - you knew every case was going to be Sleep Apnea or TB!).  I also discovered the little picture books that the British put out, one after another, called Diagnostic Picture Tests in Clinical Medicine, that have just an image of a rash, a deformity, a physical finding, an image, a slide, whatever, with the answer on the next page.  They are awesome little books - I think I bought 30 or more of them (and they are going cheap on Amazon right now, so get on it!)  By going through these Pearls and Picture Tests books during Med 3 and Med 4 (hint: Picture Tests fit in your coat pocket, so when you're "hurrying up to wait" on the wards, you can study them), I "saw" literally thousands of unknown cases (with immediate, accurate feedback) - the epitome of efficient, expert learning and iterative practice.  Because of this, by the time I got to internship, many, many things were simple, rapid pattern recognition for me.  It was like I was years ahead of my training as a result of this kind of study.  I have not recently looked, but I would bet that nowadays the palette of such unknown-case practice books has expanded significantly.

Besides asking to take on more patients during your rotations (at the risk of being labelled a "gunner" - which is utter complete hogwash, by the way - you are not gunning for anybody, you just want to be the best physician you can be.  But you have been warned - being labelled a gunner can have impacts on your social reputation and thus your rotation evaluations, so conceal your "gunner" instincts if you can), there are other things you can do to enhance your learning opportunities.  One is to not blow off 4th year.  Fifteen years ago it was common to take easy "elective" rotations during the fourth year and travel and party a lot before the hard work of internship begins.  Do NOT do this.  I signed up for sub-internships, FOUR of them (maybe five, I don't remember).  I did the usual Sub-I in cardiology, but also did two in Critical Care (one at my medical school, another at the Cleveland Clinic), and finally a Hepatology Sub-I.  I recognized that there was a lot to learn in cardiology and in the ICU that I could not learn from my case books, and I wanted every opportunity to master those skills before internship, so I could, to my own satisfaction, take care of those patients as an intern.  I changed an elective rotation to a Sub-I in hepatology when, after my MICU rotation, I realized that I was still "scared of" bleeding - that is, nothing I had ever read about in my books or seen up until then on my rotations had prepared me for what to do on a practical level when a patient is "bleeding out".  (I learned on that hepatology rotation that it is actually quite simple, you get big IVs in place and order a lot of blood products.)  Sub-Internships are far more efficient learning rotations than are the third year rotations because you know more and you're more effective, and as a result "they 'let you' do more."  They were very very enriching experiences and they helped tremendously to prepare me for internship and residency.  
I suggest you fill your fourth year with as many "hard core" rotations (such as Sub-Internships) as you can to maximize your opportunities for dense, meaningful iterative practice.  It pays off in spades during internship and residency, trust me.

Do it however you must, but "see" as many patients as you can, whether on the wards or in case or picture test books.  But make sure the feedback you get on the accuracy of your predictions and diagnoses is both immediate and accurate - the subject of Part 3 of this series.

Saturday, February 7, 2015

Countless Hours: How to Become a Stellar Student and an Incredible Intern, Part 1: Domain Specific Knowledge

In his book Outliers, Malcolm Gladwell popularized the idea that to get really, really good at something, you need to work at it for 10,000 hours.  Some debate surrounds the validity of the 10,000-hour rule, but I accept it because it dovetails with the theory of expert decision making in terms of prediction, which I think is representative of medical diagnosis.  (The rule would also seem to apply to fields that require technical skill such as surgery - the more Whipples you do, the better you become at doing Whipples.)  In order to become a good predictor (the best ones are weather forecasters, professional bridge players, and horse race handicappers, by the way, for reasons I will touch on below) you need three things (besides base intelligence):

  1. Domain Specific Knowledge
  2. Iterative practice, the more the better
  3. Immediate, accurate feedback
I will discuss each of these in three parts in this mini-series, with critical commentary on how "the system" does either a good or a poor job of promoting them, and give suggestions on how to supplement the system to do even better.

Domain Specific Knowledge:  This is what you learn in the first two years of medical school in a structured way, and thereafter in a less structured way.  It is impossible to overemphasize how important most of this information is, with some variance depending on specialty (embryology did me absolutely no good, but if I had pursued OB/GYN it might have been crucial).  One of the best things you can do to foster basic knowledge and its retention during the first two years of medical school is to buy the board review books from the outset.  There are seven (give or take) sections of USMLE Step 1, and you can get a review book for each of them.  (BRS Pathology, BRS Physiology, BRS Behavioral Sciences, A&L Medical Microbiology and Immunology, A&L Pharmacology, and A&L Biochemistry were my preferred ones.)  If you study these books while you first learn the material, they serve as an ongoing review of that material and point out gaps in what they're teaching you in medical school lectures.  But more important, when you go to study for Step 1 after the second year, it will be a relative breeze: you're already familiar with the review materials and their organization, you have made annotations and cross-references in them, and you will have figured out anything you struggled with the first time through the books.  Almost every person who has followed this recommendation after I gave it to them (it was given to me by a good friend a year ahead of me, bless him) has scored in the top decile on boards, and many of them in the top percentile (scores over 250 - you know who you are).

But the studying does not end there.  You must continue to read through years 3 and 4, internship, residency, fellowship, and thereafter.  I am perhaps an extreme example, but my example can give you an idea of the upper limit that a person can take it to.  I read the 13th edition of Harrison's Principles of Internal Medicine from cover to cover during Med 3, again cover to cover during Med 4, and I read the 14th edition cover to cover during internship and almost made it through again during residency.  I also did the Harrison's and the Cecil's board question books during medical school, as well as any other question set I could get my hands on.  That's right, I was studying for Internal Medicine Boards as a medical student.  During Med 3 and Med 4 I also read Principles of Critical Care, Critical Care Medicine: The Essentials, and about 70% of Braunwald's textbook of cardiovascular medicine.  I even bought Principles and Practice of Infectious Diseases, but I didn't make it very far through that, and sold it before parting for internship.  And this list is not comprehensive; there were many more books and study guides and reviews I read, basically anything I could get my hands on.  I studied day in and day out, weekends and evenings, on rotations, on vacations.  And it paid off in spades in many many ways.  There was hardly a disease, a syndrome, a drug, a device that I was not familiar with when I first encountered it, and any case I did encounter was a far richer learning experience because I was able to see so much more nuance, so much more subtlety, because of the preparation I had done far ahead of time.

The system does a relatively good job of structured knowledge education for the first two years, but it largely falls apart after that; during the 3rd and 4th years and thereafter, you are expected to just absorb knowledge and experience, or to read in an unstructured way "about your patients".  In my opinion, this unstructured approach does not work optimally, because if you wait until you see a lupus patient to read about lupus on www.uptodate.com (a very good resource, by the way), you will a.) not be able to competently handle your first case of anything; b.) only learn about what you have seen; and c.) not be able to diagnose things on the fly.  Many things you will never see in your training or your career, but you must still be familiar with them.

In the next parts, I will segue to iterative practice and immediate, accurate feedback.

Friday, February 6, 2015

The Medical History as an Exposure Narrative: A Didactic for Medical Students and Young Physicians

I just received an email from a medical student on the other side of the pond asking for my advice for junior doctors for learning the practice of medicine.  I will oblige.  When students are taught how to take a medical history, they are taught a rote sequence and its components, but are not taught what the point really is, why we ask certain questions, how we string the answers together, which components need more or less emphasis in a given case.  Here I will present a framework for that understanding, which may make history taking more meaningful and useful for those learning and refining it.

What we are really trying to do with history taking is to make a narrative of the patient's exposures in his or her environment.  This exposure narrative allows us to use Bayes' Theorem to determine the most likely causes for a given chief complaint.  Bayes' Theorem should be reviewed for its own sake in order to understand its use here and elsewhere, but simply understanding that the base rate of a disease in a certain population is the "prior probability" of that disease in a patient will suffice for now.  So, if I asked you what mammal you are likely to see on your hike in the Rocky Mountains, you would list squirrel and deer and elk before mountain lion and badger.  It's just common to see deer and elk there.  In the jungle, or the desert, the answers would be different - because different animals have different probabilities in different environments.  Likewise, when a 70-year-old comes in with joint pain, osteo- and rheumatoid arthritis are more likely than lupus and juvenile rheumatoid arthritis, which would have higher probabilities in younger patients.  Thus, age is an exposure, perhaps one of the most important ones, and this is why a student's presentation often begins with something like "Mr. Jones is a 78-year-old man..."  (It is also why I frequently interrupt physicians who call me to admit a patient or consult on one, because I can't begin to order the probabilities until I know the age of the patient, which they frequently omit because of indolence.)  The older person has been exposed to wear and tear on the body for a longer time, and this figures prominently in the probabilities of the diseases that s/he is likely to have.
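To make the base-rate idea concrete, here is a minimal Python sketch of how an exposure (here, age) reorders a differential diagnosis.  The diagnoses come from the example above, but the numeric priors are entirely made up for illustration - the point is the ordering, not the numbers.

```python
# Made-up prior probabilities of causes of joint pain, conditioned on one
# "exposure" (age).  Real base rates differ; only the ranking matters here.
base_rates = {
    "70-year-old": {"osteoarthritis": 0.40, "rheumatoid arthritis": 0.10,
                    "lupus": 0.01, "juvenile RA": 0.001},
    "16-year-old": {"osteoarthritis": 0.01, "rheumatoid arthritis": 0.02,
                    "lupus": 0.03, "juvenile RA": 0.05},
}

def ranked_differential(age_group):
    """Order the candidate diagnoses by their prior probability for this exposure."""
    priors = base_rates[age_group]
    return sorted(priors, key=priors.get, reverse=True)

print(ranked_differential("70-year-old"))   # osteoarthritis tops the list
print(ranked_differential("16-year-old"))   # juvenile RA and lupus rise to the top
```

Each additional element of the history (occupation, travel, medications, family history) conditions the priors further, which is why a good history is a sequence of such exposures rather than a rote checklist.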

Tuesday, February 3, 2015

Running AMOC: How the ABIM Sowed the Seeds of Its Own Destruction

As I presciently predicted, in response to the ABIM's MOC mandates, a group of enterprising physicians has created a new certification board for internists, and today ABIM appears to have relented at least a little bit in response to the threat of competition.  David has brought Goliath to his knees by hitting him where it hurts - in the pocketbook - and Goliath is begging for mercy.

But hold on kids, the credits are not rolling and this battle is not over.  ABIM has not done away with MOC, they are just backpedaling - for now.  And conspicuously absent from the mea culpa sent out by their president, Richard Baron, is any mention of cost effectiveness, cost containment, or cost reform.  One of the biggest benefits of the new board, besides less onerous busywork requirements, is that it saves you upwards of 90% in certification fees.

And make no mistake, this is all about money, my friends.  Just three short weeks ago, the ABIM staunchly defended its position and MOC requirements in this NEJM piece that was published alongside this piece by the architect of the new board, Paul Teirstein.  Why the change of heart?  The answer is quite simple:  math and money.  20,000 physicians signed the petition against the MOC requirements.  If just those 20,000 physicians jump ship and join the new board, the ABIM stands to lose tens of millions of dollars in revenue.  So it is no coincidence that the same ogrenization that two weeks ago thumped on its chest backed down today, just one week after the new board opened for applications.

Surely, the ABIM has calculated that it can arrest the mutiny "aboard" the ship before all is lost if it acts quickly and placates and appeases its diplomates with lip service and token concessions.  I am reminded of the cheating girlfriend (or boyfriend).  You discover her infidelity and jump ship.  Then, when her romantic liaison turns sour, she comes crawling back to you begging for forgiveness and promising to never do it again.  But now that she has shown you her stripes and you know what she is capable of, you know better than to ever trust this conniving, duplicitous wo/man ever again.

And so let it be with ABIM.  I encourage all physicians to sign up for the new board.  The cost is nominal, just $169 for two years.  That is a small price to pay for a back-up plan as we wait to see if ABIM will make good on the promises outlined in the mea culpa issued today.  And even if it does, I'm already "aboard" the new ship - the cost savings alone are reason enough to make the change.

Wednesday, January 14, 2015

Specious Ideas: Trending Troponins and Chasing Lactates

I want to use this post to discuss an article in this week's JAMA called Lactate in Sepsis, which I think is fatally flawed and misleading.  But first...

Several years ago on the Medical Evidence Blog I talked about cardiac troponins and how their use is often misguided.  Not long after this post a young woman e-mailed me to describe a diagnostic and therapeutic misadventure that ensued after an abnormal troponin was "discovered" during work-up for a urological problem.  This led to transfer to another facility via ambulance for a cardiac catheterization with multiple complications including stroke.  It was a sad and unfortunate tale, but I fear it is not too uncommon.

Troponin, like all tests, needs to be ordered on the basis of a clinical suspicion (prior probability) that, when combined with the likelihood ratio of the test using Bayes' Theorem (see calculator on the right of the blog), results in a posterior probability of disease that crosses a decision threshold.  (Because of the woeful inadequacy of medical education with regard to basic decision theory, I would not be surprised if the majority of physicians cannot correctly describe priors, Bayes, posteriors, or decision thresholds.  But this is old news, and beyond the scope of this post.)  The low prior probability of acute coronary syndromes in critically ill patients with non-cardiac primary diagnoses (PE, AECOPD, sepsis, etc.) leads me to list "non-specific troponin increase in the setting of critical illness" as a problem (an artificially begotten one) in my assessments after colleagues regretfully order tests that should never have been ordered.  And I will defer discussion of all those d-dimers and the needless CT angiograms they engender, lest I descend into unmitigated belligerence.
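The prior-times-likelihood-ratio arithmetic behind this argument can be sketched in a few lines of Python.  The prior probability and likelihood ratio below are hypothetical numbers chosen only to illustrate the point; they are not published estimates for troponin or for any particular assay.

```python
def posterior_probability(prior, likelihood_ratio):
    """Bayes' Theorem in odds form: convert the prior probability to odds,
    multiply by the test's likelihood ratio, and convert back to a probability."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical: a septic patient with, say, a 2% prior probability of acute
# coronary syndrome, and a "positive" troponin with an assumed likelihood
# ratio of 5 in this setting.
post = posterior_probability(0.02, 5.0)
print(f"{post:.1%}")  # about 9% - a positive test barely moves a low prior
```

This is the whole point of the "non-specific troponin" problem: when the prior is low, even a positive result leaves the posterior probability well below any sensible threshold for catheterization, so the test should not have been ordered in the first place.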