The Race to the Top of the Mountain

Remember the old joke about the two men racing each other to the top of the mountain, only to find a third man already there asking what took them so long? The guy at the top has been a rabbi, a priest, a woman, and a host of other things, depending on the point the comedian was trying to make.

I feel like that joke just became real after reading this article. Nurses, doctors, and health care researchers are clambering to the top of the mountain as they discuss how to interpret data from interventions in health care settings, only to realize that the organizational scientists have been there for some time.

I couldn’t agree more with the conclusions drawn in the article. Randomization is a great tool, but it loses power in quasi-experimental settings. Field studies trade control for realism, and while the ecological validity of a field study is very valuable, extraneous variance due to internal validity violations is always problematic.

Organizational scientists are in a unique position to help those involved in RCTs with the identification and modeling of these variance sources, both in terms of measurement AND with respect to the meaning that they provide. For example, the article states that “changes in the skill and confidence of practitioners” were observed (Results section, first paragraph). Of course, this is not surprising from a human performance perspective, but it also constitutes a history effect, which is an internal validity concern. How are these concerns being addressed? Psychologists have many ways to do so, but how well have we applied these ideas to health care practice?

I encourage health care practitioners to seek out partnerships with organizational science. We can help each other make patients’ experiences of care even higher in quality.

The Bugaboo of the “Never Event”

This is one of the better articles I’ve read recently about the flawed concept of the “never event.”

The name is the first flaw – “never” is a poor word, implying that when such an event occurs (and it will) it must be due to failure on the part of the care system. “Never” is a pejorative term that ultimately restricts the development of introspective self-regulation at the system level, leading to guilt and shame within the hospital culture and the temptation to blame the individual at the sharp end.

The assumption of tangibility is the second flaw – as the authors point out, the numerator in the ratio changes as a function of how “never events” are defined. Since these events are constructs, cognitively pieced together after the fact, the boundaries separating these events from all other adverse events are fuzzy and shifting. But the denominator is also a problem – how exactly is an opportunity for the “never event” operationalized? Even without touching the event counts, we can “cook the books” by expanding the definition of what an “opportunity” is, inflating the denominator without a requisite change in the numerator. Our numbers look better, but the events are still there.
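To make the arithmetic concrete with purely hypothetical numbers: 5 retained-item events against 1,000 qualifying procedures is a rate of 0.5%. Broaden the definition of a “qualifying procedure” so the denominator grows to 2,000, and the rate falls to 0.25% – the same 5 events, a better-looking number, and no change in actual harm.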

The authors state that, in their systems, they look beyond the singular concept of the “never event” and seek to understand adverse events as a whole, regardless of their artificial designation. By doing so, they show empirically validated reductions in adverse events without arguing about whether something should never happen.

If we truly want to advance safety, we must abandon a priori classifications of errors based on whether we feel that they should (or should not) occur frequently. Such classification limits our problem-solving ability and ultimately constrains our creativity as a discipline.

Safety News – April 10

Interesting article regarding the FAA’s scolding of UAL/Continental linked here. It is a good example of how one set of activities that are normal to business in so many ways (e.g., retirement, new hiring) can be easily scapegoated when regulations are violated. How much of the “problematic behavior” highlighted by the FAA might be due to power struggles between the union and the company? This is a potential side effect of bureaucratic safety – regulatory oversight forces some bad behaviors out while decreasing corporate adaptability and providing opportunities for employees to use safety as a means to achieve other ends. Solution? More training and oversight. There is much we don’t know, obviously, but this smells like good old Newtonian “broken parts” mentality.

Meanwhile, in another industry, we get the distinct scent of “system drift,” the gradual acceptance of abnormality as normal. PG&E will pay $1.6B in penalties for the San Bruno gas explosion in 2010. The article targets “safety failings by the utility and lax oversight by state regulators,” suggesting inter-organizational drift – likely years of sliding toward a riskier mode of operation, rooted in factors that had nothing to do with safety: motivation, costs, staffing, etc.

Punishing the “bad guys” still plays well in the papers, though. I wonder how much UAL and PG&E will actually learn. Or will they just be motivated to find sneakier ways to do what they want?

 

Current Topics: March 3, 2015

Occasionally I will post some links to current topics on patient safety along with some comments. This is the first of those posts.

 

 

  • Leadership rounds are discussed in this brief piece. This is not a new idea – physically connecting those who hold decision-making power with those who are immersed in the dynamic chaos of patient care. There are many psychological benefits to this in theory, but care should be taken when implementing it. There are several cultural, structural, and perceptual factors that could make leaders “on the floor” seem like prowling predators rather than helpful allies.
  • Minnesota hospitals report relatively stable numbers of adverse events and iatrogenic deaths. Monitoring our systems for adverse events that are connected to “marginally safe” behavior by employees is admirable. However, as I have explained in other posts, these “year-over-year” studies are no more than rough approximations of the system that generates them. Errors are not a static category or species, and adverse events are only defined after the fact.
  • Significant names in the PS world are arguing that our efforts need to be “rebooted.” This may sound surprising, but I argue that it is a normal phase in the development of a body of knowledge. Ultimately, “patient safety” is just “safety literacy”. Expertise in any area of knowledge requires consolidation and “pruning” from time to time, and it is encouraging to see that leading voices are admitting that.
  • Is the “safety logjam” breaking? This is an insightful piece that makes a number of important points, couched in a healthy substrate of realism. “Evidence” for whether health care is safer will be extremely hard to come by, because the definition of “safe” is so fuzzy. Are airplanes safer to fly in now than 50 years ago? Yes, statistically. But the “why” of that statistic is fertile ground for what organizational psychologists call garbage can behavior – attaching primarily self-serving personal agendas to an opaque issue. Sometimes the answer is simple – safety is now valued more than it was. That alone will make a difference.

Please comment or contact me if you would like to discuss any of these points.

Bullies

In my discipline (organizational psychology), we have studied for some time the issue of “counterproductive work behaviors.” These are activities that employees engage in for the purpose of harming the organization in some way. Typical examples include theft, sabotage, or abuse of breaks and lunch hours, but in other forms of the phenomenon another employee is the target of the counterproductive behavior. Organizations often act as breeding grounds for individuals to use social power against one another in order to enhance or preserve their own status, and those who do not enjoy structural power in the organization may find themselves suffering at the hands of those who do.

The attached article addresses what the authors call “bullying”, which appears to be a new organizing label attached to behaviors that have been discussed in patient safety for some time. Nurses have long reported that doctors treat them poorly, pharmacists have long complained of marginalization, and administrators have long been seen by employees as mistrustful, aloof, and arbitrary in their punishments. Remedies are suggested, such as improving communication skills, improving decision making, and advocating for collaboration. Leadership and staffing are also identified as factors that contribute to “bullying.”

However, many nuances of how power differentials play out in organizations are not addressed. For example, “bullies” in organizations are often not out to harm anyone, but to preserve their own position and self-perception. Have hospitals considered what structural and cultural norms are in place to encourage those with power to be so hesitant and fearful about sharing it? Teaching the appropriate skills is a wonderful idea, but skills are just tools that one has to be motivated to use.

Hope you enjoy the article, and comments are welcome.

 
