Tuesday, February 24, 2015

Bayes' Theorem Explained, No Math Required


I was asked by a medical student to explain Bayes' Theorem.  This blog is about the lack of common sense in medicine, so it follows that education about first principles will contribute to uncommon sense, and I will oblige.

Bayes' Theorem is simply a long-view, holistic way of looking at the world, one more in keeping with reality than the competing frequentist approach.  A Bayesian (a person who subscribes to the logic of Bayes' Theorem) looks at the totality of the data, whereas a frequentist is concerned with just a specific slice of the data, such as a single test or a discrete dataset.  Frequentist hypothesis testing is where we get P-values from, and frequentists are concerned with just the data from the current study.  Bayesians are concerned with the totality of the data, so they do meta-analyses, combining data from as many sources as they can.  (But alas, they are still reluctant frequentists, because they insist on combining only frequentist datasets and shun attempts to incorporate more amorphous data such as "what is the likelihood of something like this based on common sense?")

Consider a trial of orange juice (OJ) for the treatment of sepsis.  Suppose that 300 patients are enrolled and orange juice reduces sepsis mortality from 50% to 20% with P<0.001.  The frequentist says "if the null hypothesis is true and there is no effect of orange juice in sepsis, the probability of finding a difference as great or greater than what was found between orange juice and placebo is less than 0.001; thus we reject the null hypothesis."  The frequentist, on the basis of this trial, believes that orange juice is a thaumaturgical cure for sepsis.  But the frequentist is wrong.
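For readers who do want a little math, here is a minimal sketch (in Python) of how a Bayesian would fold a skeptical prior into the same result.  The numbers are invented purely for illustration - the prior, and the probabilities of seeing this trial result under each hypothesis, are assumptions, not data from any real study:

    # A minimal Bayesian look at the orange-juice trial.
    # All numbers below are illustrative assumptions, not data from any real study.

    prior = 0.001            # prior probability that OJ truly cures sepsis (a "common sense" prior)
    p_data_if_true = 0.9     # assumed probability of a result this strong if OJ really works
    p_data_if_false = 0.001  # assumed probability of a result this strong by chance (~ the P-value)

    # Bayes' Theorem in odds form: posterior odds = prior odds * likelihood ratio
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_data_if_true / p_data_if_false
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)

    print(f"Posterior probability that OJ cures sepsis: {posterior:.2f}")
    # ~0.47 with these assumed inputs

With these assumed numbers, the posterior comes out to roughly 0.47 - even a P<0.001 result leaves the claim at about a coin flip when the prior is appropriately skeptical, which is the Bayesian's point.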

Thursday, February 12, 2015

Countless Hours, Part 3: Uncalibrated Interns and Immediate and Accurate Feedback

Immediate, accurate feedback begets calibration
In this third and final installment of Countless Hours: How to Become a Stellar Student and an Incredible Intern, I will discuss the role of immediate and accurate feedback in refining a skill, prediction, or prognostication to expert levels.

Imagine you are learning to play golf, but you can't see where your balls are going - without any feedback, it would be very difficult to learn to modify your swing and improve your game.  Similarly, if the feedback came from an observer with poor vision and was not accurate, you would be calibrating your swing to unreliable information, and your game would not improve to the extent that the feedback was inaccurate.  Finally, if you did not receive the feedback on your swings until days later, it would be difficult to analyze it and adapt your game to it, compared with iterative feedback incorporated after each swing.

The same principles apply to learning the practice of medicine.  One of the reasons that the case study books mentioned in the previous post are so instructive is that they provide immediate and accurate feedback - you get to know whether your diagnosis was correct immediately after rendering it, and this feedback, from the experts who wrote the book (often with formal or informal peer review and editing), is presumably as accurate as you can hope for.  Thus, case-based practice is a very, very effective way to become an expert.

Then there is the "hands on" work you do on the wards and in clinic.  Here, feedback is, on average, less immediate and less accurate, and this is one of the ways your learning in "real life" scenarios is compromised - but there are several things you can do to maximize the immediacy and accuracy of feedback in these environments.

Monday, February 9, 2015

Countless Hours: How to Become a Stellar Student and an Incredible Intern, Part 2: Iterative Practice

Iterative practice is the second component of becoming an expert at medical diagnosis or therapeutics (or, arguably, anything).  It is what you begin to do in the third year of medical school after you have mastered the domain specific knowledge of medicine.  And, the more you practice, the more experience you get, the better you will become, especially with immediate, accurate feedback (the topic of Part 3 of this series).

The need to see many, many cases during your training so you can get lots of practice has been undermined in the last 10 years as efforts to limit "work hours" have scaled back the volume of patients interns and residents see.  I think this has created a deficit, and I wager that graduates are leaving residencies less prepared (and more entitled) than in the past.  If the problem was hours worked, it was not the volume of patients that was at its root, but rather the horse manure scut work that medical students and residents were expected to do as low-wage and low-status workers in the system.  Scut work has no meaningful educational value.  You should try to minimize scut work as much as possible, without drawing accusations that you're "not a team player" (a euphemism for "he fails to accept our prostitution of him"), so that you can focus on meaningful learning.  Sadly, scut work will always be a part of the culture of medicine, but make no mistake, it is the enemy of learning.  Running a sample to the lab is hardly more educational than cleaning the men's room because the hospital won't hire enough janitors.

Fortunately, there are ways to get iterative practice, lots of it, without the distractions of scut work, if you can escape the wards when you're not doing meaningful patient care activities with associated learning opportunities.  On my third-year rotations, I used to sneak off to the medical library in the hospital and go through old issues of Chest, looking for the "Pearls" section in the back of each issue; I would read the brief case summary and try to figure it out.  The answer and a brief discussion were on the second page of the case.  If "working up" a new admission takes 3 hours and you can read a Chest Pearl in 10 minutes, reading the Pearl is about 18 times more efficient than working on the wards.  Moreover, as a medical student, you are often told or you overhear the diagnosis before you even see a new patient, so it is NOT an unknown case, and it does NOT qualify for iterative practice of diagnosis.  There is value in working up that patient from the perspective of eliciting the history and physical exam, organizing the narrative, and making the presentation to your superiors, but make no mistake, you are NOT practicing diagnosis when the diagnosis is known.

I soon learned that Sahn and Heffner, the editors of the Chest Pearls, had begun a book series called Pearls.  I bought and devoured almost every one they published (except Sleep Pearls and TB Pearls - in keeping with what I said above, they were not unknowns and were thus not valuable to me - you knew every case was going to be Sleep Apnea or TB!).  I also discovered the little picture books that the British put out, one after another, called Diagnostic Picture Tests in Clinical Medicine, which have just an image of a rash, a deformity, a physical finding, an imaging study, a slide, whatever, with the answer on the next page.  They are awesome little books - I think I bought 30 or more of them (and they are going cheap on Amazon right now, so get on it!).  By going through these Pearls and Picture Tests books during Med 3 and Med 4 (hint: Picture Tests fit in your coat pocket, so when you're "hurrying up to wait" on the wards, you can study them), I "saw" literally thousands of unknown cases (with immediate, accurate feedback) - the epitome of efficient, expert learning and iterative practice.  Because of this, by the time I got to internship, many, many things were simple, rapid pattern recognition for me.  It was as if I were years ahead of my training as a result of this kind of study.  I have not looked recently, but I would bet that nowadays the palette of such unknown-case practice books has expanded significantly.

Besides asking to take on more patients during your rotations (at the risk of being labelled a "gunner" - which is utter, complete hogwash, by the way - you are not gunning for anybody, you just want to be the best physician you can be; but you have been warned: being labelled a gunner can affect your social reputation and thus your rotation evaluations, so conceal your "gunner" instincts if you can), there are other things you can do to enhance your learning opportunities.  One is to not blow off 4th year.  Fifteen years ago it was common to take easy "elective" rotations during the fourth year and travel and party a lot before the hard work of internship began.  Do NOT do this.  I signed up for sub-internships, FOUR of them (maybe five, I don't remember).  I did the usual Sub-I in cardiology, but also did two in Critical Care (one at my medical school, another at the Cleveland Clinic), and finally a Hepatology Sub-I.  I recognized that there was a lot to learn in cardiology and in the ICU that I could not learn from my case books, and I wanted every opportunity to master those skills before internship, so I could, to my own satisfaction, take care of those patients as an intern.  I changed an elective rotation to a Sub-I in hepatology when, after my MICU rotation, I realized that I was still "scared of" bleeding - that is, nothing I had ever read in my books or seen up until then on my rotations had prepared me for what to do on a practical level when a patient is "bleeding out".  (I learned on that hepatology rotation that it is actually quite simple: you get big IVs in place and order a lot of blood products.)  Sub-Internships are far more efficient learning rotations than the third year rotations because you know more and you're more effective, and as a result "they 'let you' do more."  They were very, very enriching experiences and they helped tremendously to prepare me for internship and residency.  I suggest you fill your fourth year with as many "hard core" rotations (such as Sub-Internships) as you can to maximize your opportunities for dense, meaningful iterative practice.  It pays off in spades during internship and residency, trust me.

Do it however you must, but "see" as many patients as you can, whether on the wards or in case or picture test books.  But make sure the feedback you get on the accuracy of your predictions and diagnoses is both immediate and accurate - the subject of Part 3 of this series.

Saturday, February 7, 2015

Countless Hours: How to Become a Stellar Student and an Incredible Intern, Part 1: Domain Specific Knowledge

In his book Outliers, Malcolm Gladwell popularized the idea that to get really, really good at something, you need to work at it for 10,000 hours.  Some debate surrounds the validity of the 10,000-hour rule, but I accept it because it dovetails with the theory of expert decision making in terms of prediction, which I think is representative of medical diagnosis.  (The rule would also seem to apply to fields that require technical skill, such as surgery - the more Whipples you do, the better you become at doing Whipples.)  In order to become a good predictor (the best ones are weather forecasters, professional bridge players, and horse race handicappers, by the way, for reasons I will touch on below) you need three things (besides base intelligence):

  1. Domain Specific Knowledge
  2. Iterative practice, the more the better
  3. Immediate, accurate feedback
I will discuss each of these in three parts in this mini-series, with critical commentary on how "the system" does either a good or a poor job of promoting them, and give suggestions on how to supplement the system to do even better.

Domain Specific Knowledge:  This is what you learn in the first two years of medical school in a structured way, and thereafter in a less structured way.  It is impossible to overemphasize how important most of this information is, with some variance depending on specialty (embryology did me absolutely no good, but if I had pursued OB/GYN it may have been crucial).  One of the best things you can do to foster basic knowledge and its retention during the first two years of medical school is to buy the board review books from the outset.  There are seven (give or take) sections of USMLE Step 1, and you can get a review book for each of them.  (BRS Pathology, BRS Physiology, BRS Behavioral Sciences, A&L Medical Microbiology and Immunology, A&L Pharmacology, and A&L Biochemistry were my preferred ones.)  If you study these books while you first learn the material, they serve as an ongoing review of that material and point out gaps in what they're teaching you in medical school lectures.  But more important, when you go to study for Step 1 after the second year, it will be a relative breeze, because you're already familiar with the review materials and their organization, have made annotations and cross-references in them, and have figured out anything you struggled with the first time through.  Almost every person who has followed this recommendation after I gave it to them (it was given to me by a good friend a year ahead of me, bless him) has scored in the top decile on the boards, and many of them in the top percentile (scores over 250 - you know who you are).

But the studying does not end there.  You must continue to read through years 3 and 4, internship, residency, fellowship, and thereafter.  I am perhaps an extreme example, but my example can give you an idea of the upper limit that a person can take it to.  I read the 13th edition of Harrison's Principles of Internal Medicine from cover to cover during Med 3, again cover to cover during Med 4, and I read the 14th edition cover to cover during internship and almost made it through again during residency.  I also did the Harrison's and the Cecil's board question books during medical school, as well as any other question set I could get my hands on.  That's right, I was studying for the Internal Medicine Boards as a medical student.  During Med 3 and Med 4 I also read Principles of Critical Care, Critical Care Medicine: The Essentials, and about 70% of Braunwald's Textbook of Cardiovascular Medicine.  I even bought Principles and Practice of Infectious Disease, but I didn't make it very far through that, and sold it before leaving for internship.  And this list is not comprehensive; there were many more books and study guides and reviews I read, basically anything I could get my hands on.  I studied day in and day out, weekends and evenings, on rotations, on vacations.  And it paid off in spades in many, many ways.  There was hardly a disease, a syndrome, a drug, or a device that I was not familiar with when I first encountered it, and any case I did encounter was a far richer learning experience because I was able to see so much more nuance and subtlety, thanks to the preparation I had done far ahead of time.

The system does a relatively good job of structured knowledge education for the first two years, but it largely falls apart after that; during the 3rd and 4th years and thereafter, you are expected to just absorb knowledge and experience, or to read in an unstructured way "about your patients".  In my opinion, this unstructured approach does not work optimally, because if you're just reading about lupus on www.uptodate.com (a very good resource, by the way) when you see a lupus patient, you will a.) not be able to competently handle your first case of anything; b.) learn only about what you have seen; and c.) not be able to diagnose things on the fly.  Many things you will never see in your training or your career, but you must still be familiar with them.

In the next parts, I will segue to iterative practice and immediate, accurate feedback.

Friday, February 6, 2015

The Medical History as an Exposure Narrative: A Didactic for Medical Students and Young Physicians

I just received an email from a medical student on the other side of the pond asking for my advice to junior doctors on learning the practice of medicine.  I will oblige.  When students are taught how to take a medical history, they are taught a rote sequence and its components, but they are not taught what the point really is: why we ask certain questions, how we string the answers together, and which components need more or less emphasis in a given case.  Here I will present a framework for that understanding, which may make history taking more meaningful and useful for those learning and refining it.

What we are really trying to do with history taking is to construct a narrative of the patient's exposures in his or her environment.  This exposure narrative allows us to use Bayes' Theorem to determine the most likely causes of a given chief complaint.  Bayes' Theorem should be reviewed for its own sake in order to understand its use here and elsewhere, but simply understanding that the base rate of a disease in a certain population is the "prior probability" of that disease in a patient will suffice for now.  So, if I asked you what mammal you are likely to see on your hike in the Rocky Mountains, you would list squirrel and deer and elk before mountain lion and badger.  It's just common to see deer and elk there.  In the jungle, or the desert, the answers would be different - because different animals have different probabilities in different environments.  Likewise, when a 70-year-old presents with pain in the joints, osteoarthritis and rheumatoid arthritis are more likely than lupus and juvenile rheumatoid arthritis, which would have higher probabilities in younger patients.  Thus, age is an exposure, perhaps one of the most important ones, and this is why a student's presentation often begins with something like "Mr. Jones is a 78-year-old man..."  (It is also why I frequently interrupt physicians who call me to admit a patient or consult on one, because I can't begin to order the probabilities until I know the age of the patient, which they frequently omit out of indolence.)  The older person has been exposed to wear and tear on the body for a longer time, and this figures prominently in the probabilities of the diseases that s/he is likely to have.
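For those who want to see the idea in more concrete terms, here is a minimal sketch in Python; the base rates below are invented solely to illustrate how age reorders the differential and are not real epidemiological figures:

    # Toy illustration: age (one "exposure") shifts the prior probabilities, i.e. the
    # base rates, of the causes of joint pain. All numbers are invented for illustration.
    priors_by_age = {
        "78-year-old": {"osteoarthritis": 0.50, "rheumatoid arthritis": 0.20,
                        "lupus": 0.02, "juvenile rheumatoid arthritis": 0.001},
        "16-year-old": {"osteoarthritis": 0.01, "rheumatoid arthritis": 0.05,
                        "lupus": 0.10, "juvenile rheumatoid arthritis": 0.20},
    }

    for patient, priors in priors_by_age.items():
        ranked = sorted(priors.items(), key=lambda kv: kv[1], reverse=True)
        print(patient, "->", ", ".join(f"{dx} {p:.2f}" for dx, p in ranked))
    # The same chief complaint ("pain in the joints") yields a differently ordered
    # differential because the exposure narrative - here, just age - changes the priors.

The history's job is to supply the exposures that set those priors, before any test is ordered.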

Tuesday, February 3, 2015

Running AMOC: How the ABIM Sowed the Seeds of Its Own Destruction

As I presciently predicted, in response to the ABIM's MOC mandates, a group of enterprising physicians has created a new certification board for internists, and today the ABIM appears to have relented at least a little bit in response to the threat of competition.  David has brought Goliath to his knees by hitting him where it hurts - in the pocketbook - and Goliath is begging for mercy.

But hold on, kids, the credits are not rolling and this battle is not over.  The ABIM has not done away with MOC; it is just backpedaling - for now.  And conspicuously absent from the mea culpa sent out by its president, Richard Baron, is any mention of cost effectiveness, cost containment, or cost reform.  One of the biggest benefits of the new board, besides less onerous busywork requirements, is that it saves you upwards of 90% in certification fees.

And make no mistake, this is all about money, my friends.  Just three short weeks ago, the ABIM staunchly defended its position and MOC requirements in this NEJM piece, published alongside this piece by the architect of the new board, Paul Teirstein.  Why the change of heart?  The answer is quite simple: math and money.  Some 20,000 physicians signed the petition against the MOC requirements.  If just those 20,000 physicians jump ship and join the new board, the ABIM stands to lose tens of millions of dollars in revenue.  So it is no coincidence that the same organization that two weeks ago thumped its chest backed down today, just one week after the new board opened for applications.

Surely, the ABIM has calculated that it can arrest the mutiny "aboard" the ship before all is lost if it acts quickly and placates and appeases its diplomates with lip service and token concessions.  I am reminded of the cheating girlfriend (or boyfriend).  You discover her infidelity and jump ship.  Then, when her romantic liaison turns sour, she comes crawling back to you, begging for forgiveness and promising never to do it again.  But now that she has shown you her true colors and you know what she is capable of, you know better than to ever trust this conniving, duplicitous wo/man again.

And so let it be with ABIM.  I encourage all physicians to sign up for the new board.  The cost is nominal: just $169 for two years.  That is a small price to pay for a back-up plan as we wait to see whether the ABIM will make good on the promises outlined in the mea culpa issued today.  And even if it does, I'm already "aboard" the new ship - the cost savings alone are reason enough to make the change.