A local project allows patients and family members to give real-time feedback on patient safety and satisfaction.
By Rose Hoban
Last summer, a man with liver disease showed up at the UNC Hospitals emergency department because he had run out of a drug, lactulose, that removes toxins from his blood. Without the medication, he had started to have problems. But when he arrived at UNC, there was no lactulose in the emergency department.
After waiting five hours on a hard gurney without the medication, the man needed to be admitted to the hospital for an expensive stay.
Several weeks later, the situation recurred. But when the man arrived at the emergency department with his wife, the lactulose was there. Within an hour, he’d received the drug and was on his way home.
“That was a miracle,” his wife commented.
The difference was that between the two visits, the patient and his wife had responded to a survey. Their confidential comments came to UNC emergency room physician Abhi Mehrotra as part of a project he’s been piloting at UNC: using text messages to measure patients’ experience and satisfaction with their care.
“Patients have an evaluation done 24 to 48 hours after an ED visit,” Mehrotra said. “They get either an SMS text or email, or both, asking to respond to the survey.”
More information from fewer people
Many companies use smartphones to gather data about customer satisfaction. But what's unique about this technique is that embedded in the program running the survey is an algorithm that generates the questions emergency room managers need answered right now.
“Instead of asking 100 questions, we ask 10 questions, but we don’t ask questions that are already answered,” said Kevin Schulman, a health economist from Duke University who’s been working with Mehrotra on the text survey project.
The algorithm delves into information that’s been gathered from previous responses and determines if there’s enough information to move on to the next question.
“We can ask if the blue wall color is soothing,” Schulman said. “But you don’t have to ask everyone if the blue color on the wall is soothing.”
And in a health care climate that’s putting increased emphasis on quality – better outcomes, more efficiency – at less cost, having almost real-time data can give a health care provider an edge.
“Say we have a hand-washing campaign, and I think that it’s important to follow up on whether caregivers washed their hands before touching a patient. I can prioritize those questions and those get oversampled,” Mehrotra said.
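In outline, the selection logic Schulman and Mehrotra describe does two things: it skips questions that already have enough responses, and it oversamples questions a manager has flagged as a priority. The sketch below illustrates that idea only; the field names, thresholds and weighting scheme are invented for illustration and are not drawn from the Bivarus system itself.

```python
import random

def pick_questions(pool, n=10):
    """Choose up to n questions for one patient's survey.

    Each question in the pool is a dict with illustrative fields:
      - "text": the question itself
      - "responses": how many answers it already has
      - "needed": how many answers give a stable estimate
      - "priority": a weight set by managers (e.g. boosted during
        a hand-washing campaign so that question gets oversampled)
    """
    # Skip questions that are already answered well enough --
    # "we don't ask questions that are already answered."
    open_questions = [q for q in pool if q["responses"] < q["needed"]]

    # Weighted sampling without replacement: higher-priority
    # questions are more likely to appear on any given survey.
    chosen = []
    candidates = list(open_questions)
    while candidates and len(chosen) < n:
        weights = [q["priority"] for q in candidates]
        pick = random.choices(candidates, weights=weights, k=1)[0]
        chosen.append(pick)
        candidates.remove(pick)
    return chosen
```

With a pool where the wall-color question already has enough responses, only the remaining open questions are drawn, and the heavily weighted hand-washing question tends to appear on more surveys.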
Researcher Tom Croghan from Mathematica Policy Research, a think tank in Washington, D.C., said the technique is similar to the one used on standardized tests like the SAT.
“You ask the first question, and when you get a result, then the second question gets generated based on the response of the first question,” Croghan said. “This gets you more quickly to a valid result and it’s less burdensome to everyone.”
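The SAT-style branching Croghan describes, in which each response determines the next question, might look like the following in skeleton form. The question identifiers and branching rules here are hypothetical; the article does not disclose the actual decision rules.

```python
def next_question(history):
    """Pick the next survey question from the answers so far.

    `history` maps question ids to 1-5 ratings. Returning None
    means enough information has been gathered and the survey
    ends, keeping it short and less burdensome.
    """
    if "overall" not in history:
        return "overall"              # always start broad
    if history["overall"] <= 2:
        # A poor overall rating: drill into likely causes.
        if "wait_time" not in history:
            return "wait_time"
        return "staff_communication"
    # A good rating: confirm what went right instead.
    if "would_recommend" not in history:
        return "would_recommend"
    return None                       # stop: valid result reached
```

A dissatisfied patient is routed toward questions about waiting and communication, while a satisfied one is asked a short confirming question, so each respondent answers only the questions that are informative for them.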
Faster than paper
“How many times have you gone to a hotel and gotten a paper survey three to six weeks after you get home?” asked Schulman. “Who can remember?”
In that situation, Schulman said, people who were dissatisfied with their service are more likely to respond to pen-and-paper surveys, so negative responses are oversampled. And while it can be useful for a hotel, a doctor's office or an emergency department to find out what it is doing wrong, it loses out because it never learns what it is doing right.
Schulman said he found that most surveys today are long, arrive long after the patient encounter, draw low response rates and collect data that are often not meaningful.
“We wanted to use technology instead of pen and paper,” he said. “So we built a cloud-based platform that sends out a text or email – depending on what the patient has available – with an invitation to a survey.” Patients then click on a link that takes them back to a portal that’s protected for patient privacy.
After they answer their 10 questions – usually by choosing an answer based on a 1-5 scale – patients get a chance to write in additional comments. Schulman said about half the respondents add something.
According to Schulman, one person wrote, “I’m a pediatric ICU nurse and I brought my daughter to the emergency department and your nurses didn’t know how to properly use the pediatric pulse ox,” a device to measure blood oxygen levels.
Another comment read, “I brought my sister who’s a cancer patient to the emergency department and your nurses didn’t use sterile techniques to access her permacath,” an indwelling intravenous device.
Text comments also include compliments. Schulman said one emergency-department doctor told him that in 20 years of practice she’d never before gotten a positive comment.
More and more, insurers and government payers are looking to pay for quality, rather than for how many tasks doctors and nurses perform, so Schulman and Mehrotra think they’re in a sweet spot for marketing their technique to hospitals and physician practices.
Recently, UNC applied for a patent on the method and Schulman and Mehrotra formed a company, called Bivarus, to market the technique to other health care organizations.
Croghan, the Mathematica researcher, said his reservation about a Bivarus-type system is that patient satisfaction does not reliably correlate with quality or good clinical outcomes.
“There’s no relationship between mortality at a hospital and how much people like it,” he said. “Patients do not have a good sense of the quality of care they’re getting. While patient satisfaction is an important dimension, it’s not necessarily a quality measure.”
Schulman said they anticipated such concerns, but argued that they are after more than patient satisfaction: they are aiming at patient safety.
“Over the first nine months, we documented 220 different patient-safety concerns from the text responses,” he said. “These were concerns that were never known and never would have been caught on a traditional survey.”
And Schulman said their data can provide more meaningful results than services such as the popular website Angie’s List, which has gotten into the doctor-rating business.
“The Angie’s List people are trying to help patients grade their docs, but what are they rating them on?” he asked.
He expressed concern that emphasis on the behavior of front-desk staff and the condition of waiting rooms doesn’t really get at what’s important in a medical practice.
“This allows a practice to take ownership and circumvents any ability to game the system,” Schulman said.