The Bugaboo of the “Never Event”

This is one of the better articles I’ve read recently about the flawed concept of the “never event.”

The name is the first flaw – “never” is a poor word, implying that when such an event occurs (and it will), it must be due to failure on the part of the care system. “Never” is a pejorative term that ultimately restricts the development of introspective self-regulation at the system level, leading to guilt and shame within the hospital culture and the temptation to blame the individual at the sharp end.

The assumption of tangibility is the second flaw – as the authors point out, the numerator in the ratio changes as a function of how “never events” are defined. Since these events are constructs, cognitively pieced together after the fact, the boundaries separating these events from all other adverse events are fuzzy and shifting. But the denominator is also a problem – how exactly is an opportunity for the “never event” operationalized? Beyond whatever we do to the event counts, we can also “cook the books” by expanding the definition of what an “opportunity” is, inflating the denominator without a corresponding change in the numerator. Our numbers look better, but the events are still there.
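
To make the arithmetic concrete, here is a purely illustrative calculation (the counts are invented, not drawn from the article): hold the number of events fixed at ten and simply broaden what counts as an “opportunity.”

$$\frac{10\ \text{events}}{1{,}000\ \text{opportunities}} = 1.0\% \qquad \longrightarrow \qquad \frac{10\ \text{events}}{2{,}000\ \text{opportunities}} = 0.5\%$$

The reported rate is cut in half, yet exactly the same ten events occurred.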

The authors state that, in their systems, they look beyond the singular concept of the “never event” and seek to understand adverse events as a whole, regardless of their artificial designation. By doing so, they show empirically validated reductions in adverse events without arguing over whether something should never happen.

If we truly want to advance safety, we must abandon a priori classifications of errors based on whether we feel that they should (or should not) occur frequently. Such classification limits our problem-solving ability and ultimately constrains our creativity as a discipline.

Current Topics: March 3, 2015

Occasionally I will post some links to current topics on patient safety along with some comments. This is the first of those posts.

  • Leadership rounds are discussed in this brief piece. This is not a new idea – physically connecting those with decision-making power to those who are immersed in the dynamic chaos of patient care. There are many psychological benefits to this in theory, but care should be taken when implementing it. There are several cultural, structural, and perceptual factors that could make leaders “on the floor” seem like prowling predators rather than helpful allies.
  • Minnesota hospitals report relatively stable numbers of adverse events and iatrogenic deaths. Monitoring our systems for adverse events that are connected to “marginally safe” behavior by employees is admirable. However, as I have explained in other posts, these “year-over-year” studies are no more than rough approximations of the system that generates them. Errors are not a static category or species, and adverse events are only defined after the fact.
  • Significant names in the patient safety world are arguing that our efforts need to be “rebooted.” This may sound surprising, but I argue that it is a normal phase in the development of a body of knowledge. Ultimately, “patient safety” is just “safety literacy.” Expertise in any area of knowledge requires consolidation and “pruning” from time to time, and it is encouraging to see leading voices acknowledge that.
  • Is the “safety logjam” breaking? This is an insightful piece that makes a number of important points, grounded in a healthy dose of realism. “Evidence” for whether health care is safer will be extremely hard to come by, because the definition of “safe” is so fuzzy. Are airplanes safer to fly in now than 50 years ago? Yes, statistically. But the “why” behind that statistic is fertile ground for what organizational psychologists call “garbage can” behavior: using an opaque issue as a vehicle for personal agendas that are primarily self-serving. Sometimes the answer is simple: safety is now valued more than it was. That alone will make a difference.

Please comment or contact me if you would like to discuss any of these points.