
Getting the Data Correct, Every Time

— A lab collection report showed us that fixing problems requires getting the right information

    Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

We've all gotten data reports on different aspects of our practice that make us cringe, make us crazy, sometimes make our skin crawl.

Data can be really powerful in primary care, as in almost any other area of medicine and healthcare, and if we're getting it at the right time, in the right place, and in the right format, it can often make a huge difference. So can trusting that the data are correct.

Over and over in this heavily bureaucratized, monetized, and data-driven world of healthcare, we are inundated with reports that purport to help us -- showing us how we can improve our practice, take better care of our patients, and improve outcomes. But, as I've often written, the people who send us these data tend to think their reports are perfect; when we dig a little deeper, we see the flaws.

One example we recently encountered was a set of data reports about errors in our lab collection process -- errors reported by us or by the central lab regarding patients' labs and how they were drawn at our practice. Whenever any of us discovers an issue, or one is found by the central lab or anyone else involved in the specimen collection process, that person files a report so we can try to get to the root causes of what went wrong, with an eye to improving the process and preventing future errors.

The most recent data set we were shown gave a variety of reasons for lab collection errors in our practice. These included incorrect labeling, wrong tube for the specimen, illegible labels, wrong patient, specimen spilled during transport, tube overfilled or underfilled, and so on. But it turned out that the Number One reason why the lab reports an error on their end is "No specimen received."

Our practice improvement team is looking to address these many issues with the lab, in hopes of improving the processes and streamlining the delivery of care, from the doctor's orders in the electronic health record (EHR) to communication of the final results to the patients. The problem is, if the data are no good, then what you're fixing may not make things any better.

Most of these reasons for errors are assigned at the central lab, rather than by those involved upstream or downstream, and it seems strange to put all of that responsibility on them. It would probably be better to have everyone involved in the process analyze and talk about these things, to see where the flaws are that lead to these errors.

In particular, it turns out that the "No specimen received" error is generated not when the lab receives an empty bag with a printed order but no specimen, or even when they receive an empty specimen container such as a phlebotomy tube or urine cup. No, this particular error is generated when someone at our end electronically "releases" the order in the EHR -- the step that prints the barcoded labels for the phlebotomy tubes with the correct patient identifying information -- but then is unsuccessful in collecting the actual specimen, for any of a number of reasons.

If, for example, the phlebotomist goes into the room and the patient has now decided that they don't want to do their blood tests today, the specimen is already marked as active and collected in the lab's system, so the order turns into a "No specimen received" error. Or, if the phlebotomist makes multiple unsuccessful attempts to draw the patient's labs, the same error message is generated.

A smarter system would simply allow them to reset the order, canceling the one that had already been sent electronically to the central lab and letting the lab know that, in fact, nothing is heading their way. On the other hand, if we really are sending them that many empty tubes, we should definitely know about that.

I'm hoping that we'll be able to address this in a more systematic, system-wide way, and maybe design a smarter system using interventions like checklists and smarter fixes to the EHR. Scanning the barcodes, and giving the phlebotomist the ability to toggle things back to "ordered but not collected" if they are unsuccessful or if the patient chooses to do it another day, would help. So would some buy-in from the patient -- having them confirm that the lab was done on them, that the tube was filled in front of them with their blood, and that the correct specimen for the correct test was collected in the correct specimen container.
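To make that idea concrete, here's a minimal sketch -- purely illustrative, written in Python with hypothetical names, not drawn from any real EHR or lab-interface system -- of an order lifecycle in which "released but never collected" gets its own state instead of collapsing into "No specimen received":

```python
from enum import Enum, auto

class OrderState(Enum):
    ORDERED = auto()               # clinician signed the order in the EHR
    RELEASED = auto()              # barcoded labels printed; lab now expects a specimen
    COLLECTED = auto()             # tube actually filled and sent to the lab
    NO_SPECIMEN_RECEIVED = auto()  # lab-side error: expected specimen never arrived

class LabOrder:
    def __init__(self, order_id: str):
        self.order_id = order_id
        self.state = OrderState.ORDERED

    def release(self) -> None:
        # Printing labels tells the central lab a specimen is on its way.
        self.state = OrderState.RELEASED

    def collect(self) -> None:
        self.state = OrderState.COLLECTED

    # Today's behavior, as described above: any failure after release
    # gets scored at the lab as "No specimen received."
    def fail_current(self) -> None:
        if self.state is OrderState.RELEASED:
            self.state = OrderState.NO_SPECIMEN_RECEIVED

    # Proposed behavior: let the phlebotomist toggle the order back to
    # "ordered but not collected," so the lab never expects a ghost tube.
    def reset_proposed(self, reason: str) -> None:
        if self.state is OrderState.RELEASED:
            self.state = OrderState.ORDERED
            print(f"Order {self.order_id} reset ({reason}); lab notified: nothing inbound.")

# A patient declines the draw after the labels were already printed:
order = LabOrder("A123")
order.release()
order.reset_proposed("patient declined today")  # proposed fix
assert order.state is OrderState.ORDERED        # no false error generated
```

The point isn't the code itself; it's that "released but not collected" is a real state in the workflow, and a system that can name it won't keep scoring a patient's change of heart as a lab error.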

So many opportunities for improvement -- but blindly accepting the data has led this team to try to fix a problem that doesn't really exist. On paper, this many perceived errors certainly seems to warrant a massive amount of attention and practice-improvement resources, but there really aren't that many ghost lab orders arriving at the central lab with an empty urine container or an empty phlebotomy tube. Instead, we should apply our resources where they make the most sense, where we can really make a difference.

Putting the right people -- everyone involved -- to work on this kind of a project could lead to improvement in how we do things and how we get things done, and to an increase in the signal-to-noise ratio, cutting down the noise that makes us think we're doing it wrong, when really, we just need to be doing it better.

Each time one of these errors occurs, a smart system should be able to look around, figure out what happened, and suggest some alternative ways to improve the process and minimize errors. But this can't be an empty promise, just like it can't be an empty tube that shows up at the lab.