Not Just Human Error

We are taught to look for the person who made the mistake.
The nurse who gave the wrong medication.
The technician who missed the lab value.
The doctor who failed to notice the change in vitals.

We find the individual, we highlight the error, and far too often, we stop there.
We call it a “human error” and move on.

But human error isn’t the root cause.
It’s the symptom of a system that failed to protect its people from being human.

Humans Will Always Be Fallible. That's Not a Flaw; It's a Fact.
Every single person in healthcare, from front-line clinicians to support staff, works in environments full of noise, fatigue, interruptions, pressure, and information overload.
We expect them to make perfect decisions in imperfect settings.
We rely on memory where we should rely on systems.
We accept risk where we could build in resilience.
And when a mistake happens, we ask, “Who messed up?”
Rarely do we ask, “Why was it so easy to do the wrong thing?”

What If We Designed Systems That Assume People Will Make Mistakes, and Catch Them Before They Cause Harm?
Instead of asking humans to be perfect, what if we built systems that are forgiving?
What if systems were engineered to augment and protect their users, the way anti-lock brakes and transmission interlocks protect drivers?
What if every HVAC system critical to patient care were continuously monitored and commissioned?
What if every medication safety zone were designed, commissioned, and continuously assessed through rounding to confirm compliance with USP and FGI requirements?
What if devices and interfaces showed warnings that were actually clear, meaningful, and actionable?
What if our policies didn't normalize workarounds, but created conditions where shortcuts weren't needed?

Every time we call it “human error,” we obscure the real issue: the environment, process, or tool failed to protect the human from inevitable imperfection.

We Don’t Need Perfect People. We Need Better Systems.
The goal isn’t to eliminate human error. That’s impossible.
The goal is to design systems where a human error doesn’t become a patient tragedy.

So let’s stop asking: Who made the mistake?
And start asking: How did our system allow that mistake to reach the patient?

Because every preventable error is not just a failure of a person;
it's a failure of design.
And that’s something we can change.

Here’s How We Change It:
If we truly believe that every preventable error is a failure of design, then we must commit, without exception, to embedding safety and resilience into the DNA of every healthcare project. That means:

  • Implementing FGI-required functional programming at the very start of planning, so that the intended use of every space, system, and workflow is clearly defined and aligned with the realities of care delivery.

  • Conducting formal safety risk assessments for every project, no matter the size, to identify where human error could occur and to design safeguards that catch mistakes before they cause harm.

  • Developing and maintaining the Owner's Project Requirements (OPR) as a living document that captures the operational, safety, and performance expectations for every system and space, ensuring those expectations are tested, verified, and sustained over time.

This is not optional work. It is the foundation of a healthcare environment that protects patients and staff alike.

From this day forward, no project that touches the delivery of care should move forward without these three pillars in place: functional programming, safety risk assessment, and a robust OPR. Because when we design for human fallibility, we design for patient safety. And that is the only acceptable standard.
