
How Monitors and Alarms Should Work

October 28, 2010

We seem to have become numb to the beeps. 

When I’ve stayed in an ICU room as a patient or with a relative, I’ve noticed that the only thing our current monitors seem to accomplish is waking up the patient.  The alarms start beeping, the patient is rudely awakened, but the nurse doesn’t come to the bedside.  It’s true that, in many cases, the alarms shut off on their own if the problem disappears.  But why should an alarm go off in the room at all?  We don’t have one-to-one nursing anymore, so the nurse is more commonly out of the room than in it.  Thus, the only individual consistently hearing the alarms is the patient.

I guess I’ve got a personal beef with this issue, because it seemed like a stupid reason for a sleepless night.  I had just undergone back surgery for a herniated disc.  Because I was over 50, I was placed in a monitored stepdown unit on the hospital’s beta-blocker protocol to prevent my heart from getting stressed.  When I fell asleep, my heart rate would drop below 60 beats per minute, and the alarm would sound off.  I’d immediately wake up, which caused my heart rate to speed up, and the alarm would stop.   I’d fall back asleep, and the process would happen all over again.  I think I woke up needlessly about 4 or 5 times from the alarms firing off, but the nurse never responded.  (You also have to wonder if those sudden increases in the heart rate are considered a good thing.)  Finally, I hit the call button.  The nurse came in, and I explained what was happening.  I asked her if she could lower the alarm’s trigger to 50 beats per minute, and I promised her I wouldn’t code.  She did, and I was able to sleep through the rest of the night.

I’d like to see a paradigm shift in medicine where beeping alarms are obsolete.  Instead, an alarm should state the precise problem.  The technology for computerized devices to speak has existed for decades.  “The heart rate is 140 beats per minute” would be an important message, and one that would produce a more immediate response than a beep that sounds like all the other beeps.  The monitor should announce its message in the room while simultaneously sending the nurse (and possibly the physician) a page or text message with the information.  Or, even better, if the monitor knows that only the patient is in the room, it wouldn’t sound off there at all, but only send a page with the message to the responding clinicians.

That way we could get our sleep while others scurry around to fix the problem.
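
For the curious, the routing rule I have in mind fits in a few lines of code.  What follows is only a minimal sketch of the idea: every name in it (route_alarm, staff_present_in_room, the paging and text-to-speech stubs) is hypothetical, and a real monitor would have to hook into the hospital’s actual paging and presence-detection systems.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Alarm:
    message: str          # e.g., "The heart rate is 140 beats per minute"
    room: str
    responders: List[str] # pager/phone IDs for the responding clinicians

def staff_present_in_room(room: str) -> bool:
    """Placeholder for presence detection (badge sensors, RTLS, etc.)."""
    return False  # assume the nurse is usually out of the room

def page(responder: str, message: str) -> None:
    """Placeholder for the hospital paging/texting gateway."""
    print(f"PAGE {responder}: {message}")

def announce_in_room(room: str, message: str) -> None:
    """Placeholder for text-to-speech on the bedside monitor."""
    print(f"[{room} speaker] {message}")

def route_alarm(alarm: Alarm) -> None:
    # Always notify the responding clinicians directly.
    for responder in alarm.responders:
        page(responder, f"{alarm.room}: {alarm.message}")
    # Speak in the room only if someone besides the patient is present,
    # so the patient isn't needlessly awakened.
    if staff_present_in_room(alarm.room):
        announce_in_room(alarm.room, alarm.message)

route_alarm(Alarm("The heart rate is 140 beats per minute", "ICU-7", ["nurse-42"]))
```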

The Meaning of a Shift to the Left

October 26, 2010
Physicians should stop confusing the presence of granulocytosis with a left shift.
One of the features that can be obtained in a complete blood count, or CBC, is a differential of the various cell forms making up the white blood count.  The primary white cell types include lymphocytes, polymorphonuclear neutrophils (PMNs, also called granulocytes), monocytes, basophils, and eosinophils.  Within the PMN cell line are several cell types that represent various degrees of immaturity.  These include promyelocytes, myelocytes, metamyelocytes, and band cells, in order of increasing maturity.  In the mid-1900s, clinical laboratories determined the differential count manually, with technicians counting the percentage of cells of each type.  To assist with their counting, a standard form was used consisting of individual columns for the tallies of the various cell forms.  At some point, a mechanical device was developed to facilitate the task further, tallying each cell type’s numbers as the technician depressed a specific button and stopping the count once buttons had been pressed for 100 cells.  In both systems, the columns and the buttons were oriented with the most immature PMN forms on the left and increasingly mature cells moving toward the right.  Thus, when the number of immature forms present exceeded their normally small or non-existent percentages, it was often described as a “shift to the left”. 

In recent years, it seems that many clinicians have confused an increase in the total number of granulocytes, or granulocytosis, with a left shift, presumably thinking that if the total granulocyte numbers are increased, it must be due to more immature forms.  However, that assumption is frequently incorrect.  For example, granulocytosis is often present after a dose of steroids is administered because the steroids cause white blood cells to leave their stationary positions on the inner walls of blood vessels and spill into the circulating blood, a process known as demargination; despite the increased numbers of PMNs, there is no increase in immature forms.  An actual left shift indicates an overwhelming infectious process, whereas granulocytosis without a left shift indicates that the immune system is handling the infection without resorting to desperate measures.  These different degrees of immunologic response should warrant different degrees of clinical response.  The paradigm shift in medicine that must occur is for clinicians to understand their terminology and the implications it carries.
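
To make the distinction concrete, here is a toy sketch in Python.  Everything in it is an illustrative assumption rather than a clinical reference: the differential values are invented, and the numeric cutoffs in the hypothetical interpret function are placeholders, not diagnostic thresholds.

```python
from typing import Dict, Tuple

# Differential counts per 100 cells, ordered as on the old tally forms:
# most immature PMN forms on the left, mature cells toward the right.
differential: Dict[str, int] = {
    "promyelocytes": 0,
    "myelocytes": 0,
    "metamyelocytes": 1,
    "bands": 3,
    "segmented_neutrophils": 82,
    "lymphocytes": 10,
    "monocytes": 3,
    "eosinophils": 1,
    "basophils": 0,
}

IMMATURE_FORMS = ("promyelocytes", "myelocytes", "metamyelocytes", "bands")

def interpret(diff: Dict[str, int], wbc_per_uL: float) -> Tuple[bool, bool]:
    """Return (granulocytosis, left_shift) for a differential and total WBC."""
    immature_pct = sum(diff[k] for k in IMMATURE_FORMS)
    pmn_pct = immature_pct + diff["segmented_neutrophils"]
    absolute_granulocytes = wbc_per_uL * pmn_pct / 100
    granulocytosis = absolute_granulocytes > 7700  # illustrative cutoff only
    left_shift = immature_pct > 10                 # illustrative cutoff only
    return granulocytosis, left_shift

# Elevated PMN count but almost all mature forms: granulocytosis without
# a left shift, as seen, for example, after steroid-induced demargination.
print(interpret(differential, wbc_per_uL=14000))  # (True, False)
```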

Creating the “Aha!” moment

October 20, 2010

As we all know, the practice of medicine is not a perfect science.  The study of human biology has some unique aspects that have challenged its progress. 

First, we didn’t invent, design, or build the human body.  While the field of communications evolved from smoke signals to newspapers to websites on a smart phone, the science of medicine has been that of reverse engineering the human body.  In order to fix it, we need to know how the darn thing works.  While we have made progress, it’s been estimated that we can only back up about 20% of what we do in our practice of medicine with actual science.  There’s a great deal more out there that we still have to discover.

A great deal has been learned about how the human body works in the past half-century.  In large part, this has been due to the growth in research funding provided by the National Institutes of Health (NIH) as well as growing opportunities in private industry.  Much of the work done by the NIH has been by extramural investigators, each working incrementally to unravel some area of biology that fascinates them.  Such work is critically important, albeit tedious and frustratingly slow.  Yet it produces the necessary background knowledge for major discoveries.  Several spectacular discoveries seem to have come about through epiphanies, such as the discovery of endorphins.  Once the opioid receptor was found in mammalian cells, investigators started to ask why there would be a receptor in the brain for a molecule derived from a plant.

Such curiosity is critical for the evolution of our industry.  This blog hopes to stimulate that curiosity.  We seek fresh perspectives in health care science and delivery, often using the 30,000-foot view, maybe even the 30,000-mile view, to stimulate new questions and, ultimately, new answers.

The physician and the self-educated patient

October 18, 2010

Physicians should realize their patients are now able to learn a great deal of information about their condition. 

I was confronted one day in the clinic by a patient and her family who were upset that she’d been treated with a “nose-stomach tube” while in the hospital.  After some questioning, I understood that they meant her nasogastric tube, which is a routine device employed in patients following abdominal surgery to prevent intestinal distension.  It turned out they had researched the device and found a web page stating that the tube works contrary to popular belief and actually slows recovery from surgery.  The web page cited a study from the esteemed Cochrane Library that evaluated 37 randomized controlled trials, involving 2,866 patients randomized to routine nasogastric tube use and 2,845 randomized to selective use or non-use of the tubes.  Avoiding routine use of the tube resulted in significantly earlier return of bowel function after surgery, along with a decrease in pulmonary complications and other potential benefits.  The tubes did not appear to reduce the rate of leakage from intestinal repairs.

So how was I supposed to defend our actions?  I told the family that using nasogastric tubes is standard practice, and that my residents often place them simply because they were taught to by others.  I also told them I was excited to learn about the study, as it reflected my own biases.  And I made a commitment, to them and to myself, to use the tubes selectively or not at all, and to pass the information on to more and more of my colleagues.

However, the real message to my colleagues is to become much better informed, recognizing the vast amount of information that is now available to our patients and their families.

Checklists for the Checklists

October 14, 2010

Atul Gawande’s book The Checklist Manifesto recommends that health care providers employ checklists more widely to prevent us from overlooking essential details.  This is an excellent suggestion, but the misapplication of a checklist may not be beneficial and could even be harmful.  That is, it is always tempting to extend a checklist’s reach into areas where it doesn’t belong.  In the operating room, for example, we have been performing a “Time-Out” for several years now.  During the Time-Out, the entire operating room crew goes through a brief preoperative checklist to make sure we’re operating on the correct patient, the planned procedure is announced, and so forth.  One of the items on the checklist is whether or not the operative site has been marked.  Normally this makes sense, as it helps us think about whether we’re fixing the left groin hernia or the right, or amputating the right foot rather than the left.  However, when I was confronted one day about whether or not I’d marked the patient’s anus, I just about lost it.  I was operating on a huge abscess of the anal region.  Not only would marking the area cause severe pain to the patient, but the ink wouldn’t actually adhere to the moist anal lining.

We don’t seem to ask ourselves if the checklist should be applied as is for each patient.  For example, managed care organizations often use the performance of an annual eye exam in diabetics as a means of judging the health maintenance effectiveness of the organization.  Unfortunately, these processes have led to quality points against physicians who failed to perform an annual eye exam in their blind diabetics.

Rather than question the appropriateness of the checklist for any given patient, we seem perfectly content to follow the list mindlessly.  Will there be a paradigm shift in medicine to create checklists to see if the checklist is suitable for the patient in question?
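
One way to picture such a meta-checklist: attach an applicability test to each item, so that “does this item apply to this patient?” is asked before “was it done?”.  The sketch below is a hypothetical illustration only; the Patient fields and the Time-Out items are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Patient:
    procedure: str
    laterality: Optional[str]  # "left", "right", or None

@dataclass
class ChecklistItem:
    name: str
    applies: Callable[[Patient], bool]  # the "checklist for the checklist"

TIME_OUT: List[ChecklistItem] = [
    ChecklistItem("Confirm patient identity", lambda p: True),
    ChecklistItem("Announce planned procedure", lambda p: True),
    # Site marking only makes sense when laterality is at stake.
    ChecklistItem("Operative site marked", lambda p: p.laterality is not None),
]

def run_time_out(patient: Patient) -> None:
    for item in TIME_OUT:
        if item.applies(patient):
            print(f"CHECK: {item.name}")
        else:
            print(f"SKIP (not applicable): {item.name}")

# The site-marking item is skipped rather than enforced blindly.
run_time_out(Patient(procedure="drainage of perianal abscess", laterality=None))
```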

The market in human urine

October 11, 2010

In the 1970s, physicians started to use diuretics in patients who appeared to be sliding into acute renal failure.  This was based on reports that nonoliguric renal failure carried a lower mortality rate (around 25%) than did the more traditional oliguric renal failure (with about a 50% mortality rate).  It appeared that, on occasion, patients could be converted from oliguric to nonoliguric renal failure by using a diuretic, such as furosemide (Lasix).

However, when real science was applied to this concept in the form of prospective randomized trials, it turns out that, despite an increased urine output with the use of diuretics, the desired improvement in mortality did not happen:

  • Cantarovich, 1971
  • Cantarovich, 1973
  • Chandra, 1975
  • Kleinknecht, 1976
  • Lucas, 1977
  • Brown, 1981
  • Shilliday, 1997
  • Sirivella, 2000
  • Cantarovich, 2004
  • Bagshaw, 2007

To summarize the findings from these studies, it appears that diuretic administration to patients with impending renal failure does not alter their outcome compared to patients who do not receive diuretics: there is no difference in length of stay, need for dialysis, or mortality.  However, urine output is indeed increased in the patients receiving diuretics.

Thus, the primary outcome of such therapy is the production of more human urine.  This outcome would be beneficial if there were some kind of market for human urine that needed to be satisfied.  But, of course, no such market exists, which raises the question of why we are doing this. 

Despite the overwhelming evidence that it would make sense to stop the practice of administering diuretics to patients with impending renal failure, it would actually be a paradigm shift in medicine to do so.

Crappy recommendations

October 7, 2010

Doctors should stop telling their patients with diverticular disease to avoid nuts, seeds, corn, and popcorn. Even though this has been our practice for nearly a century, there has never been any sound evidence that such dietary precautions result in fewer complications from diverticula. In fact, we now have some fairly sound evidence that those recommendations are completely off course: a large study following 47,228 men over an 18-year period found that higher rates of popcorn and nut consumption were associated with lower rates of diverticulitis. Yet large numbers of physicians and other health care providers still give their patients the outdated recommendations.

The concept that nuts, seeds, and kernels could cause problems with diverticulosis came about in the early 1900s. Prior to that time, diverticular disease of the colon was a rare condition.  The Industrial Revolution greatly reduced the fiber content of the Western diet as a result of milling and refining grains on an increasing scale. After 20 or 30 years of this, diverticulosis rates started climbing dramatically. Surgeons saw more and more cases of diverticular perforation, one of the more critical complications of diverticular disease. They encountered these patients with an obvious intraabdominal catastrophe, and had to operate on them to try to save their lives.  At surgery, they identified a hole in the colon with a little outpouching, the diverticulum, often with a kernel or seed stuck in it.  This observation led surgeons to conclude that the wedged item was somehow responsible for the calamity.

So the word went out that patients with diverticular disease should avoid these food items like the plague. A low-residue diet became the standard recommendation for diverticulosis patients. This concept persisted through the 1960s and 1970s, even as investigators were determining that diverticula develop from increased intraluminal pressure, which, in turn, is a major consequence of a low-fiber diet. The recommendations for a low-residue diet were 180 degrees in the wrong direction.

Even though there is now solid evidence that the previous recommendations were absolutely misguided, it would take a paradigm shift for everyone in medicine to start handing out the correct recommendations.