
Our Border Patrol

November 9, 2010

The so-called “immune system” is actually a border patrol.

Physicians and medical researchers have used the term “immune system” to identify the cellular and protein elements that fight off infecting organisms.  However, this system is only called into service when there has been a breach in the integument — the skin and other epithelial linings — that allows microbes to enter the interstitium in the first place.  It is a paradigm shift in medicine to recognize that this so-called “immune system” actually functions as a border patrol.  Like the border patrol, our immune system doesn’t respond at all unless a defect in the integument has allowed a microbial invasion.

Consider the strategic locations of our immunocytes:

  • Waldeyer’s ring: A large accumulation of lymphocytes and other immunologic cells that comprises the tonsils and adenoids at the back of the nose and mouth.  It serves as a protective station against potential invaders that enter our bodies when we breathe or swallow.
  • Lamina propria: This layer of the small intestine lies just under the epithelial lining and is home to a huge number of immunocytes.  The intestinal lumen on the other side of the epithelium is full of bacteria, so any break in the single-cell-thick epithelium could spell disaster without the border patrol lined up to engulf the invaders.
  • Peyer’s patches: Accumulations of lymphocytic tissue in the terminal ileum, where the concentration of bacteria is especially high.
  • Kupffer cells: Macrophages in the liver, positioned to deal with any microbes that enter the portal venous system through epithelial defects.
  • Lymph nodes: Accumulations of lymphoid tissue along the lymphatic ducts that drain interstitial fluid from integumentary regions.

It truly appears that our immune system is positioned around potential integumentary breaches.  In contrast, it is interesting to note that our most vital organs, such as the brain and the heart, don’t have any lymph nodes or other immunologic structures.  In our evolutionary past, no infection that reached that deep was survivable, so there was never a need for defenses stationed there.  Such a need arises only with modern critical care.

Blood Pressure and Shock

November 4, 2010

Shock is what happens when the energy supply to cells is inadequate for their demands.  Unfortunately, shock is often equated with a reduction in blood pressure, but there are several mechanisms for a reduction in blood pressure, and not all of them decrease the energy supply to cells.

A low blood pressure can result from one or more of these physiologic alterations:

  • Low circulating volume
  • Low pump performance, which can be due to
    • poor cardiac filling (from low circulating volume, but also from obstructive problems such as tension pneumothorax or cardiac tamponade)
    • poor contractility (often due to ischemia from coronary artery disease)
    • low metabolic demand
  • Vasodilation

The above phenomena relate to the hydraulic properties of the circulation, characterized by pressures and flows.  In many cases these conditions may be associated with a shock state, but not necessarily so.  For example, many individuals normally have a low blood pressure resulting from a low cardiac output and/or a vasodilated vascular bed.  However, this circulation matches their metabolic demands.
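
These relationships can be summarized with the standard hemodynamic analogue of Ohm’s law, written here simply as a reminder (MAP = mean arterial pressure, CVP = central venous pressure, CO = cardiac output, SVR = systemic vascular resistance, HR = heart rate, SV = stroke volume):

    \[
    \mathrm{MAP} - \mathrm{CVP} = \mathrm{CO} \times \mathrm{SVR},
    \qquad
    \mathrm{CO} = \mathrm{HR} \times \mathrm{SV}
    \]

Low circulating volume and poor pump performance lower the flow term (CO), while vasodilation lowers the resistance term (SVR).  Any of these drops the measured pressure, yet none of them, by itself, tells us whether cellular energy needs are being met.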

If metabolic demands are not met by the circulatory hydraulics, then shock is likely present or soon will be present.  On some occasions, this is due to circulatory insufficiency, such as with hypovolemic or cardiogenic shock.  But in other states, it may be due to excessive metabolic demands, such as during septic shock.

The possible mechanisms by which the energy supply to cells can deteriorate are (see the oxygen-delivery sketch after this list):

  • inadequate circulatory volume (hypovolemic shock)
  • poor pump performance (cardiogenic shock)
  • inadequate fuel content (acute anemia, severe hypoxemia)
  • poor uptake of fuel (septic [or distributive] shock, anaphylactic shock)
  • excessive metabolic demands (septic shock)
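
To see where the “fuel” items fit, consider the standard oxygen-delivery relationship (DO2 = oxygen delivery in mL/min, CO = cardiac output in L/min, CaO2 = arterial oxygen content in mL/dL, Hb = hemoglobin in g/dL, SaO2 = arterial oxygen saturation as a fraction, PaO2 = arterial oxygen tension in mmHg):

    \[
    \mathrm{DO_2} = \mathrm{CO} \times \mathrm{CaO_2} \times 10,
    \qquad
    \mathrm{CaO_2} = (1.34 \times \mathrm{Hb} \times \mathrm{SaO_2}) + (0.003 \times \mathrm{PaO_2})
    \]

Hypovolemic and cardiogenic shock attack the CO term, acute anemia and severe hypoxemia attack the CaO2 term, and septic shock can leave both terms normal or even elevated while the tissues either fail to take up the fuel or outstrip the supply with excessive demand.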

Neurogenic “shock” is not on the list above because the fuel supply is not impaired in neurogenic shock, unless the individual tries to stand up, in which case the brain gets inadequate oxygen delivery because the lower extremities lack the vascular tone to prevent blood from pooling there.  Of course, this is a self-correcting problem: the decreased oxygen delivery to the brain causes the individual to pass out, fall down, and once again lie level, so that blood flow to the brain is restored.

It would be a true paradigm shift in medicine if these physiological principles were understood in managing critically ill patients.

The Trendelenburg Position and Shock Management

November 2, 2010

For a century or more, clinicians have placed patients who are in circulatory shock into the Trendelenburg position, which is accomplished by tilting the patient’s bed into a head-down position.  It was thought that by having the legs higher than the heart, there would be an improvement in venous return for cardiac filling, thereby resuscitating the circulation.  The procedure appeared to raise the low blood pressure to more acceptable levels, suggesting that cardiac output was improved.  Physicians and nurses have been doing so ever since, believing that such a procedure was a life-saving intervention.  Disappointingly, there has never been any evidence that this practice alters outcome whatsoever.

On the other hand, there are data suggesting that the Trendelenburg position does not actually improve cardiac output in patients in shock.  In fact, the position could worsen their condition, despite the fact that the blood pressure is improved.  In 1979, Sibbald performed a study of the hemodynamic effects of the Trendelenburg position in 61 normotensive and 15 hypotensive patients with acute cardiac illness or sepsis; no beneficial hemodynamic effects were observed.  A more recent meta-analysis found adverse consequences of the Trendelenburg position and recommended that it be avoided.

It is not clear whether the gravitational effect of a head-down position improves blood flow to the brain.  Conceptually, it would increase both the arterial and venous pressures at head level.  Therefore, it is quite possible that it pools more blood in the brain without really circulating it.  It should also increase the intracranial pressure, which could be harmful in head-injured patients.
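
A rough back-of-the-envelope estimate illustrates the point.  A column of blood exerts roughly 0.78 mmHg of hydrostatic pressure per centimeter of vertical height, and a head-down tilt adds that same column to both the arterial and the venous sides at head level, so the gradient that actually drives cerebral perfusion may change very little:

    \[
    \Delta P_{\mathrm{hydrostatic}} \approx \rho g \, \Delta h \approx 0.78\ \mathrm{mmHg/cm},
    \qquad
    \mathrm{CPP} = \mathrm{MAP_{head}} - \mathrm{ICP}
    \]

If the intracranial and venous pressures rise along with the arterial pressure at head level, the cerebral perfusion pressure (CPP) is not necessarily improved, even though the cuff pressure measured at the arm looks better.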

Why do physicians and nurses still put patients into the Trendelenburg position when hypotension develops?  Has solid physiologic work escaped their attention?  It would be a real paradigm shift in medicine if physiology, rather than blind tradition, were the foundation for our actions.

The Surgical Infection Paradox

November 2, 2010

The Paradox: Cutting skin increases the chances for infection.  Closing skin increases the chances for infection.

The integument (i.e., the skin and the epithelium lining the digestive and urinary tracts) is our immune system.  When the integument is injured, microorganisms have the opportunity to invade the interstitium.  This invasion triggers a response by the so-called immune system, which really functions as a border patrol.  Many such invasions can be managed by the local defenses, the various types of white blood cells resident in the tissues.

For example, a paper cut on the finger is an opportunity for organisms to invade.  Because the number of organisms that actually invade through a paper cut is usually very small in people who wash their hands frequently, the local defenses can handle the invasion without even producing the signs of infection.  When a higher concentration of bacteria gets into the tissue, such as from a cut with a dirty, highly contaminated knife, the number of organisms overwhelms the local defenses.  As a result, a call for backup goes out, and white blood cells of various types are called into the fray, producing the typical signs of an inflammatory response (redness, pain, swelling, warmth, etc.), which we recognize as an infection.

When we surgeons incise skin, we create the potential for bacterial invasion.  If the amount of contamination is excessive, we will trap those organisms in the tissue when we close the wound.  In such cases, it has been demonstrated that timely administration of antibiotics immediately before and during the contamination reduces the rate of surgical infections.  Prolonging antibiotics after wound closure has been shown to offer no benefit; it only increases costs and the potential for resistant strains to emerge.  Despite extensive data showing that prophylactic antibiotics are effective only for a brief perioperative window (and national initiatives promoting discontinuation of antibiotics by 24 hours postoperatively), many clinicians still feel compelled to continue prophylactic perioperative antibiotics for days and days following surgery.

The paradigm shift in medicine is to understand the biological nature of surgical wound infections and use antibiotics in a targeted strike against the invaders.

How Monitors and Alarms Should Work

October 28, 2010

We seem to have become numb to the beeps. 

When I’ve stayed in an ICU room as a patient or with a relative, I’ve noticed that the only thing our current monitors seem to accomplish is waking up the patient.  The alarms start beeping, the patient is rudely awakened, but the nurse doesn’t come to the bedside.  It’s true that, in many cases, the alarms shut off on their own if the problem disappears.  But why should they go off in the room at all?  We don’t have one-to-one nursing anymore, so the nurse is more commonly out of the room than in it.  Thus, the only individual consistently hearing the alarms is the patient.

I guess I’ve got a personal beef with this issue, because it seemed like a stupid reason for a sleepless night.  I had just undergone back surgery for a herniated disc.  Because I was over 50, I was placed in a monitored stepdown unit on the hospital’s beta-blocker protocol to prevent my heart from getting stressed.  When I fell asleep, my heart rate would drop below 60 beats per minute, and the alarm would sound off.  I’d immediately wake up, which caused my heart rate to speed up, and the alarm would stop.   I’d fall back asleep, and the process would happen all over again.  I think I woke up needlessly about 4 or 5 times from the alarms firing off, but the nurse never responded.  (You also have to wonder if those sudden increases in the heart rate are considered a good thing.)  Finally, I hit the call button.  The nurse came in, and I explained what was happening.  I asked her if she could turn the alarm’s trigger to 50 beats per minute, and I promised her I wouldn’t code.  She did, and I was able to sleep through the rest of the night.

I’d like to see a paradigm shift in medicine where beeping alarms are obsolete.  Instead, an alarm should state the precise problem.  The technology for computerized devices to speak has existed for decades.  “The heart rate is 140 beats per minute” would be an important message, and one that would produce a more immediate response than a beep that sounds like all the other beeps.  The monitor should announce its message in the room while simultaneously sending the nurse (and possibly the physician) a page or text message with the information.  Or, even better, if the monitor knows that only the patient is in the room, it wouldn’t sound off there at all, but only send a page with the message to the responding clinicians.

That way we could get our sleep while others scurry around to fix the problem.
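
As a sketch of what that routing logic might look like, here is a minimal example in code; the function names, the presence detection, and the paging interface are hypothetical stand-ins, not any monitor vendor’s actual API:

    from dataclasses import dataclass

    @dataclass
    class Alarm:
        # A specific, spoken-language message, e.g. "The heart rate is 140 beats per minute"
        message: str

    def route_alarm(alarm: Alarm, clinician_in_room: bool, announce, page_clinician):
        """Send the specific message to the responders; speak it in the room
        only if someone who can act on it is actually there."""
        # The responding nurse (and possibly physician) always gets the full message.
        page_clinician(alarm.message)
        # Announce at the bedside only when a clinician is present, so a
        # sleeping patient is not the sole audience for the alarm.
        if clinician_in_room:
            announce(alarm.message)

    # Example with simple stand-ins for the paging and text-to-speech systems:
    route_alarm(Alarm("The heart rate is 140 beats per minute"),
                clinician_in_room=False,
                announce=lambda msg: print("ROOM SPEAKER:", msg),
                page_clinician=lambda msg: print("TEXT PAGE:", msg))

In this sketch the bedside announcement is simply suppressed when no clinician is present; a real system would presumably still want an override for the most critical alarms.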

The Meaning of a Shift to the Left

October 26, 2010

Physicians should stop confusing the presence of granulocytosis with a left shift.

One of the features that can be obtained in a complete blood count, or CBC, is a differential of the various cell forms making up the white blood count.  The primary white cell types include lymphocytes, polymorphonuclear neutrophils (PMNs, also called granulocytes), monocytes, basophils, and eosinophils.  Within the PMN cell line are several cell types that represent various degrees of immaturity.  These include promyelocytes, myelocytes, metamyelocytes, and band cells, in order of increasing maturity.  In the mid-1900s, clinical laboratories determined the differential count manually, with technicians counting the percentage of cells of each type.  To assist with their counting, a standard form was used consisting of individual columns for tallying the various cell forms.  At some point, a mechanical device was developed to facilitate the task further, tallying each cell type’s numbers as the technician depressed a specific button and stopping them when they’d pressed buttons for 100 cells.  In both systems, the columns and the buttons were oriented with the most immature PMN forms on the left and increasingly mature cells in the columns or buttons moving toward the right.  Thus, when the number of immature forms present exceeded their normally small or nonexistent percentages, it was often described as a “shift to the left”.

In recent years, it seems that many clinicians have confused an increase in the total number of granulocytes, or granulocytosis, with a left shift, presumably thinking that if the total granulocyte count is increased, it must be due to more immature forms.  However, that assumption is frequently incorrect.  For example, granulocytosis is often present after a dose of steroids is administered, because the steroids cause white blood cells to leave their stationary positions on the inner walls of blood vessels and spill into the circulating blood, a process known as demargination; despite the increased numbers of PMNs, there is no increase in immature forms.  The presence of an actual left shift indicates an overwhelming infectious process.  Granulocytosis without a left shift, on the other hand, indicates that the immune system is handling the infection without resorting to desperate measures.  These different degrees of immunologic response should warrant different degrees of clinical response.  The paradigm shift in medicine that must occur is for clinicians to understand their terminology and the implications it carries.
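
To keep the distinction straight, here is a minimal sketch of the logic in code; the cutoffs used (an absolute granulocyte count above roughly 7,700 cells/µL for granulocytosis, bands above 10% for a left shift) are illustrative assumptions, not laboratory standards:

    def interpret_differential(total_wbc, pct_neutrophils, pct_bands):
        """Distinguish simple granulocytosis from a true left shift.
        The thresholds are illustrative assumptions, not lab standards."""
        absolute_granulocytes = total_wbc * (pct_neutrophils + pct_bands) / 100.0
        granulocytosis = absolute_granulocytes > 7700   # cells/uL (assumed cutoff)
        left_shift = pct_bands > 10                     # percent bands (assumed cutoff)

        if left_shift:
            return "Left shift: the marrow is releasing immature forms"
        if granulocytosis:
            return "Granulocytosis without a left shift (e.g., steroid demargination)"
        return "No granulocytosis, no left shift"

    # A steroid-induced demargination might look like this: high count, few bands.
    print(interpret_differential(total_wbc=14000, pct_neutrophils=80, pct_bands=2))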

Creating the “Aha!” moment

October 20, 2010

As we all know, the practice of medicine is not a perfect science.  The study of human biology has some unique aspects that have challenged its progress. 

First, we didn’t invent, design, or build the human body.  While the field of communications evolved from smoke signals to newspapers to websites on a smartphone, the science of medicine has been one of reverse engineering the human body.  In order to fix it, we need to know how the darn thing works.  While we have made progress, it’s been estimated that we can only back up about 20% of what we do in our practice of medicine with actual science.  There’s a great deal more out there that we still have to discover.

A great deal has been learned about how the human body works in the past half-century.  In large part, this has been due to the growth in research funding provided by the National Institutes of Health (NIH) as well as growing opportunities in private industry.  Much of the work funded by the NIH has been done by extramural investigators, each working incrementally to unravel some area of biology that fascinates them.  Such work is critically important, albeit tedious and frustratingly slow.  Yet it produces the necessary background knowledge for major discoveries.  Several spectacular discoveries seem to have come about through epiphanies, such as the discovery of endorphins.  Once the opioid receptor was found in mammalian cells, investigators started to ask why there would be a receptor in the brain for a molecule derived from a plant.

Such curiosity is critical for the evolution of our industry.  This blog hopes to stimulate that curiosity.  We seek fresh perspectives in health care science and delivery, often using the 30,000 foot view, maybe even the 30,000 mile view, to stimulate new questions and, ultimately, new answers.

The physician and the self-educated patient

October 18, 2010

Physicians should realize that their patients are now able to learn a great deal about their condition.

I was confronted one day in the clinic by a patient and her family who were upset that she’d been treated with a “nose-stomach tube” while in the hospital.  After some questioning, I understood that they meant her nasogastric tube, a device routinely employed in patients following abdominal surgery to prevent intestinal distension.  It turned out they had researched the device and found a web page stating that the tube works contrary to popular belief and actually slows recovery from surgery.  The web page cited a review from the esteemed Cochrane Library that evaluated 37 randomized controlled trials, involving 2,866 patients randomized to routine nasogastric tube use and 2,845 randomized to selective use or non-use of the tubes.  Avoiding routine use of the tube resulted in a significantly earlier return of bowel function after surgery, along with a decrease in pulmonary complications and other potential benefits.  The tubes did not appear to reduce the rate of leakage from intestinal repairs.

So how was I supposed to defend our actions?  I told the family that using nasogastric tubes is standard practice, and that my residents often place them because they were taught to do so by others.  I also told them I was excited to learn about the study, as it reflected my own biases.  And I made a commitment to them and to myself to use the tubes selectively or not at all, and to pass the information on to more of my colleagues.

The real message to my colleagues, however, is to become much better informed, recognizing the vast amount of information that is available to our patients and their families.

Checklists for the Checklists

October 14, 2010

Atul Gawande’s book The Checklist Manifesto recommends that health care providers employ checklists more widely to prevent us from overlooking essential details.  This is an excellent suggestion, but the misapplication of a checklist may not be beneficial and could be potentially harmful.  That is, it is always tempting to extend a checklist’s oversight to areas where it does not belong.  In the operating room, for example, we have been performing a “Time-Out” for several years now.  During the Time-Out, the entire operating room crew goes through a brief preoperative checklist to make sure we’re operating on the correct patient, the planned procedure is announced, and so forth.  One of the items on the checklist is whether or not the operative site has been marked.  Normally this makes sense, as it helps make us think about whether we’re fixing the left groin hernia or the right, or amputating the right foot rather than the left.  However, when I was confronted one day about whether or not I’d marked the patient’s anus, I just about lost it.  I was operating on a huge abscess of the anal region.  Not only would marking the area cause severe pain to the patient, but the ink wouldn’t actually work on the moist anal lining.

We don’t seem to ask ourselves if the checklist should be applied as is for each patient.  For example, managed care organizations often use the performance of an annual eye exam in diabetics as a means of judging the health maintenance effectiveness of the organization.  Unfortunately, these processes have led to quality points against physicians who failed to perform an annual eye exam in their blind diabetics.

Rather than question the appropriateness of the checklist for any given patient, we seem perfectly content to follow the list mindlessly.  Will there be a paradigm shift in medicine to create checklists to see whether the checklist is suitable for the patient in question?
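
Purely as a thought experiment, a checklist for the checklist could be as simple as attaching an applicability test to each item.  The sketch below uses the two examples from this post (site marking for an anorectal procedure, the annual eye exam for a blind diabetic); the structure and field names are hypothetical:

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class ChecklistItem:
        description: str
        applies_to: Callable[[Dict], bool]   # the meta-check: does this item fit this patient?

    ITEMS = [
        ChecklistItem("Mark the operative site",
                      lambda pt: not pt.get("anorectal_procedure", False)),
        ChecklistItem("Annual diabetic eye exam",
                      lambda pt: pt.get("diabetic", False) and not pt.get("blind", False)),
    ]

    def applicable_items(patient: Dict):
        """Run the checklist for the checklist before running the checklist."""
        return [item.description for item in ITEMS if item.applies_to(patient)]

    # For a diabetic patient having an anorectal abscess drained, site marking
    # drops off the list while the eye exam remains.
    print(applicable_items({"anorectal_procedure": True, "diabetic": True, "blind": False}))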

The market in human urine

October 11, 2010

In the 1970s, physicians started to use diuretics in patients who appeared to be sliding into acute renal failure.  This was based on reports that nonoliguric renal failure carried a lower mortality rate (around 25%) than the more traditional oliguric renal failure (around 50%).  It appeared that, on occasion, patients could be converted from oliguric to nonoliguric renal failure by using a diuretic such as furosemide (Lasix).

However, when real science was applied to this concept in the form of prospective randomized trials, it turns out that, despite an increased urine output with the use of diuretics, the desired improvement in mortality did not happen:

  • Cantarovich, 1971
  • Cantarovich, 1973
  • Chandra, 1975
  • Kleinknecht, 1976
  • Lucas, 1977
  • Brown, 1981
  • Shilliday, 1997
  • Sirivella, 2000
  • Cantarovich, 2004
  • Bagshaw, 2007

To summarize the findings from these studies, diuretic administration to patients with impending renal failure does not alter their outcome compared to patients who do not receive diuretics: there is no difference in length of stay, need for dialysis, or mortality.  The only difference is an increased urine output in the diuretic group.

Thus, the primary outcome of such therapy is the production of more human urine.  This outcome would be beneficial if there were some kind of market for human urine that needed to be satisfied.  But, of course, no such market exists, which raises the question of why we are doing this.

Despite the overwhelming evidence that it would make sense to stop the practice of administering diuretics to patients with impending renal failure, it would actually be a paradigm shift in medicine to do so.