Remember the old joke about the two men racing to the top of a mountain, only to find a third man there asking what took them so long? The man at the top has been a rabbi, a priest, a woman, and a host of other things, depending on the point the comedian was trying to make.
I feel like that joke just became real after reading this article: nurses, doctors, and health care researchers clambering to the top of the mountain as they discuss how to interpret data from interventions in health care settings, only to find that the organizational scientists have been there for some time.
I couldn’t agree more with the conclusions drawn in the article. Randomization is a great tool, but it loses power in quasi-experimental settings. Field studies trade control for realism, and while the ecological validity of a field study is very valuable, extraneous variance from internal validity violations is always problematic.
Organizational scientists are in a unique position to help those involved in RCTs identify and model these variance sources, both in terms of measurement AND with respect to the meaning they provide. For example, the article states that “changes in the skill and confidence of practitioners” were observed (Results section, first paragraph). This is not surprising from a human performance perspective, but it also constitutes a history effect, which is an internal validity concern. How are such concerns being addressed? Psychologists have many ways of doing so, but how well have we applied those ideas to health care practice?
I encourage health care practitioners to seek out partnerships with organizational scientists. We can help each other raise the quality of patient care.
This is one of the better articles I’ve read recently about the flawed concept of the “never event.”
The name is the first flaw: “never” is a poor word choice, implying that when such an event occurs (and it will), it must be due to a failure of the care system. “Never” is a pejorative term that ultimately stifles introspective self-regulation at the system level, breeding guilt and shame within the hospital culture and the temptation to blame the individual at the sharp end.
The assumption of tangibility is the second flaw: as the authors point out, the numerator of the ratio changes as a function of how “never events” are defined. Because these events are constructs, pieced together cognitively after the fact, the boundaries separating them from all other adverse events are fuzzy and shifting. But the denominator is also a problem: how exactly is an opportunity for a “never event” operationalized? Regardless of what we do to event counts, we can also “cook the books” by expanding the definition of an “opportunity,” inflating the denominator without any corresponding change in the numerator. Our numbers look better, but the events are still there.
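The denominator game can be made concrete with a minimal sketch. The numbers below are invented for illustration only; the point is that broadening what counts as an “opportunity” improves the reported rate while every event still happened.

```python
def event_rate(events: int, opportunities: int) -> float:
    """Adverse-event rate per defined 'opportunity'."""
    return events / opportunities

events = 12                    # observed "never events" -- unchanged throughout
narrow_opportunities = 1_000   # original operationalization of "opportunity"
broad_opportunities = 4_000    # broadened definition inflates the denominator

rate_narrow = event_rate(events, narrow_opportunities)  # 0.012
rate_broad = event_rate(events, broad_opportunities)    # 0.003

# The reported rate falls fourfold, yet all 12 events still occurred.
print(rate_narrow, rate_broad)
```

The same arithmetic works in reverse on the numerator: narrowing the definition of a “never event” shrinks the count without making care any safer.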
The authors state that, in their systems, they look beyond the singular concept of the “never event” and seek to understand adverse events as a whole, regardless of their artificial designation. By doing so, they demonstrate empirically validated reductions in adverse events without arguing over whether something should never happen.
If we truly want to advance safety, we must abandon a priori classifications of errors based on how frequently we feel they should (or should not) occur. Such classifications limit our problem-solving ability and ultimately constrain our creativity as a discipline.