Thursday, September 11, 2008

The SOAP Cycle: Collaborative Quality



Over the past two posts we looked at the care provider’s approach to quality at the level of the patient using the SOAP method, and at the hospital’s approach to systems improvement using the PDSA Cycle. We noted that both methods are very effective when used conscientiously, and also how easy it is to deviate from their intended use. It is interesting to observe that each method by itself, even when followed to the letter, understates key elements of its own effective use.

The effectiveness of the SOAP format requires that the care provider and patient cycle through the process continuously, and the format assumes that this will occur on a regular basis. As we have seen, care providers tend to come up with a care plan and then modify it only when a significant unanticipated outcome occurs. Adjustment is more reactive than proactive, and the adjustments that do occur frequently skip elements of the preceding “SOA” steps: we jump to solution and perpetuate suboptimal results.

The PDSA Cycle, while emphasizing the continuous nature of an improvement process, assumes that the initial plan is on target and meaningful. However, we have seen how frequently an improvement plan begins without thorough preparation. If we cycle a poorly conceived improvement initiative, we get an “improved” poorly conceived initiative, which is not the same as a good result. Again we get a misguided, data-driven result that is suboptimal.

_________________________________________



What happens if we combine both cycles? Suddenly we have a way of making these key assumptions visible for each individual method, creating a complementary formula for complete quality. The common element to both methods is “P,” or plan. When SOAP is inserted into the “P” of the PDSA cycle, a unifying message becomes clear. To the care provider, the message is that a care plan must continuously be evaluated and adjusted, even when things may appear on the surface to be going well. It is also a reminder that a care plan may be only one piece of the larger set of issues going on with the patient. Cycling through a care plan does not mean that the patient as a whole becomes secondary – we are reminded that on top of the treatment there is a patient whose other “systems” are being affected by our intervention. We need to maintain oversight of the big picture as well.

From the hospital systems improvement perspective, SOAP inserted into the “P” emphasizes that a successful PDSA Cycle outcome depends on a well-thought-out plan: one based on input from the patients and staff experiencing the challenge, on the collection of meaningful data to validate and narrow down the possible causes of the challenge, and on a joint assessment to define the critical intervention points, from which a collaborative plan is developed. The message is to look at the challenge from big picture to small, and to be inclusive, data-driven and collaborative.
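For readers who think in pseudocode, here is one way to picture the nesting. This is purely an illustrative sketch in Python; the function names and data shapes are my own shorthand, not part of SOAP, PDSA, or any QI toolkit. The only point it makes is that every turn of the PDSA loop re-enters the full S-O-A sequence before a new plan is acted on, rather than reusing a stale plan.

```python
# Illustrative sketch only: hypothetical names, not from any QI methodology or library.

def soap_plan(listen, measure, assess):
    """The "P" of each PDSA turn is produced by the full S-O-A sequence."""
    symptoms = listen()                  # S: subjective input from patient / frontline staff
    data = measure(symptoms)             # O: focused data collection guided by the symptoms
    diagnosis = assess(symptoms, data)   # A: joint assessment of symptoms plus data
    return {"diagnosis": diagnosis, "actions": []}  # P: the plan the cycle will test


def soap_pdsa_cycle(listen, measure, assess, do, study, optimized, max_turns=10):
    """PDSA keeps cycling, and every turn re-plans via SOAP instead of skipping S, O and A."""
    plan = soap_plan(listen, measure, assess)       # P: initial plan, built from S-O-A
    for _ in range(max_turns):
        results = do(plan)                          # D: execute in the pilot area
        findings = study(results)                   # S: analyze what was measured and how
        if optimized(findings):                     # A: act - stop when the pilot is optimized
            break
        plan = soap_plan(listen, measure, assess)   # ...otherwise cycle back through S-O-A-P
    return plan
```

The shape of the loop is the whole message: the plan is never treated as fixed, and it is never revised without first listening, measuring and assessing again.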

Not only does the SOAP Cycle serve care providers and hospitals in their respective areas of focus in quality care, but it can also serve to unify the language of quality in a way that enables care providers to become part of care process and system improvement at the hospital level. Although SOAP is used at the level of the individual patient, it is in essence a method of continuous improvement when properly used. In the context of SOAP, care providers can understand the PDSA Cycle with ease, and vice versa for hospital QI teams. Care providers bring invaluable expertise to hospital quality improvement, and in many ways the SOAP Cycle can align incentives and eliminate the barriers to entry into process improvement that result when outside methodologies such as Lean and Six Sigma are introduced. Indeed, if you look at any successful quality program you are likely to find that all of the SOAP Cycle elements have been incorporated into the process, regardless of quality program label. In that regard the SOAP Cycle can serve as a diagnostic tool to identify the weaknesses in any quality initiative that is not achieving the desired results. Very cool!

Food for thought:
Think about quality improvement initiatives that you are participating in, or whose results you are on the receiving end of. What made them successful? What was missing from those that didn’t work out as intended?

Next post will incorporate universal systems principles into the model, completing the core elements for any QI process: The Universal SOAP Cycle.

Friday, August 22, 2008

The PDSA Cycle: Spinning in the right direction


Last week we looked at the SOAP format as an effective but underutilized care improvement tool at the level of the individual patient. Let’s now look at the PDSA cycle, a systems improvement tool that underlies most of the complex and designer quality improvement (QI) methodologies that circulate in healthcare (e.g. Six Sigma, Lean, Kaizen, TQM). Like SOAP, the PDSA cycle addresses many of the same “new” concepts that surface in QI, including evidence-based care and continuous, collaborative improvement. When used as intended, the results are measurable, significant and sustained. As with SOAP, however, the challenge is to stay on course with the PDSA process; we often overlook key elements, which leads to suboptimal results.

“P” is for plan – the objective of using the PDSA cycle is to identify the critical variable(s) in a system needing improvement and to develop a corrective plan. Rather than implement the plan on a large scale, the corrective plan is executed in a pilot area, a controlled environment where data collection, action and oversight are manageable. Identification of the system in need of improvement and of its critical variable(s) is the key to a successful plan. This requires that the PDSA team completely understand and visualize the system being addressed, which is only possible if those using the system, namely frontline staff, are part of the team. Further, since it is the frontline staff who will be directly affected by the proposed systems changes, it is vital for them to take part in crafting the plan if the improvement is to be sustained. Finally, it is critical that frontline staff join a PDSA team voluntarily; by including only those who are interested in active participation, collaborative effort and a willingness to engage in the next step of the cycle are optimized.

“D” is for do, or action – this is where the team implements the plan in the pilot area. The key element for successful action is team member accountability: NO ACCOUNTABILITY = NO ACTION = NO CHANGE! It is the responsibility of the PDSA team leader(s) and senior leadership to hold team members accountable to the action plan. This is facilitated by well-defined action items: WHO, WHAT, and BY WHEN? Action is also facilitated by a plan that has been created and agreed upon by the voluntary frontline staff comprising the PDSA team. Action items must be manageable: SMALL ACTIONS TAKEN ALL THE TIME are what create great results. Team members in action must be proactively visible in their efforts so that other staff members in the pilot area are aware of the improvement initiative underway. Not only does this generate overall interest, but it can also generate a willingness of staff to assist team members with their action items.
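As a small illustration of what “well-defined” might mean in practice, here is a hypothetical sketch; the field names and the example item are mine, not from the post or any standard. The idea is simply that an action item isn’t actionable until WHO, WHAT and BY WHEN are all answered.

```python
# Hypothetical sketch: field names and the example item are illustrative, not prescriptive.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ActionItem:
    who: str                        # the accountable team member
    what: str                       # one small, manageable task
    by_when: Optional[date] = None  # the date the team will check on it

    def is_well_defined(self) -> bool:
        # No accountability without all three elements answered.
        return bool(self.who and self.what and self.by_when)


# A small action taken all the time, not a grand project:
item = ActionItem(who="pilot-area charge nurse",
                  what="tally missing instrument trays at the start of each shift",
                  by_when=date(2008, 8, 29))
assert item.is_well_defined()
```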

“S” is for study – the results of the prior action taken are analyzed for success and for opportunities to make further improvements. WHAT is measured and HOW it is measured are key (sound familiar?). This data drives the continuous improvement process. Wrong data = wrong direction. The team makes an assessment of the collected information relative to the initial plan; any revisions that might further improve the existing plan are discussed and incorporated before proceeding to the next implementation step.

“A” is for act, or do, or action. In this step the PDSA team returns to the pilot area with a revised action plan. Accountability, well-defined action items and proactive visibility again take center stage to bring successful action forward. Data is collected and results are measured so that critical revisions can again be incorporated into the plan in the next step, the “P” at the beginning of the cycle.

The PDSA cycle continues to turn in this fashion until an optimized improvement process has been created in the pilot area. If the PDSA cycle has been successful, not only is there an effective improvement plan, but there is also the necessary visibility and interest among other staff members and leadership for broader implementation. This may require the creation of additional teams made up of voluntary staff and, always, the continued strong support of leadership.

Food for thought:

- Think about the planning step: We often don’t spend the necessary time and effort on the planning process, nor do we include the right team members. How often do we bring expert consultants into our organizations to implement their one-size-fits-all improvement plan? How often do our own quality departments assume complete responsibility for improvement initiatives? How often are frontline staff actively included in the process of making the system to be improved visible and understandable? If there is staff participation, is it active, sustained and voluntary? What’s the buy-in of the staff who participate?
- Think about the action step: Absence of accountability plagues healthcare in general. Most improvement initiatives are not voluntary, and in an already stressed work environment, participants understandably try to avoid additional tasks. Again, how often are teams voluntary in nature? How many times do teams become gripe committees, all talk and little action? How frequently do we create well-defined action items that are small and manageable? Do staff in improvement areas really understand what is going on and what the goals are?
- Think about the study step: Many times our data collection and results measurement are poor and misdirected. Many times we take shortcuts in our reassessment or skip over it completely. How many times do we “jump” to solution? How many times do we let an improvement team disband? How many times do the “experts” assume control of the process once the initial plan is underway?

We have seen that both SOAP and the PDSA cycle have inherent weak spots when used individually. What happens when we combine them under an umbrella of universal systems principles? It turns out that we can create a robust model that incorporates all the critical elements and that can serve as a template for any care improvement process. Next week we will look at this combination: The Universal SOAP Cycle.

Saturday, August 16, 2008

SOAP: The Forgotten Systems Improvement Tool


A few weeks ago, David Dibble, my systems-thinking mentor at New Agreements Healthcare, and I were discussing the challenges of making systems improvement accessible to care providers. Most of the time, underlying system challenges manifest in the form of symptoms, and it takes some digging to uncover what is actually malfunctioning. I was struck by the similar challenges that care providers face when they are taking care of patients, and it occurred to me that we utilize a tool that organizes our care approach in a consistent manner: the SOAP note. Inherent in that four-letter acronym is a very powerful, time-tested template for ensuring that all the necessary bases are covered to correctly diagnose and treat a patient, irrespective of their condition. In a lot of ways, however, we have forgotten this systems improvement power, and SOAP has instead simply become a way of formatting a patient note. Many of the concepts that are now surfacing as new ideas in care improvement, such as patient centeredness, being data-driven, and taking a team approach to an evidence-based care plan, have long been embedded in this SOAP acronym. Let’s remind ourselves of this by taking the acronym apart.

“S” is for subjective, or symptoms. The entire care process begins with the care provider listening to the patient describe their complaint. It’s a frontline-driven approach (sound familiar?) – the patient guides the care provider to the health problem. An effective care provider listens with respect and without judgment, and asks questions to elicit the complete picture from the patient’s point of view. If additional information is required, a family member may be asked to contribute to the history. Not only is the patient heard and able to describe what it’s like to be experiencing the problem, but he/she is integral to identifying the next step: differential diagnosis and data collection.

“O” is for objective, or data collection. As we know, the most important elements of data collection are WHAT is measured and HOW it’s measured. The patient’s subjective input and the care provider’s skilled assessment of the described symptoms direct data collection. This includes the physical exam, review of SYSTEMS (interesting how that word shows up!), old records, labs, and studies. We all know what happens when the patient data collected is not focused on the critical elements (remember the Critical 20?) – think about the medical student or intern who orders everything “just to cover the bases.” Not only is it expensive and inefficient, but it frequently sends us all off on wild goose chases after unrelated incidental findings – as Deming would say, “off into the Milky Way we go!” Tight data collection based on the Critical 20 makes it much more likely that we proceed effectively to the next step: assessment and diagnosis.

“A” is for assessment, or likely diagnosis. When the patient and care provider combine the focused symptoms, signs and data, further narrowing of the differential diagnosis is usually possible. We arrive at a working diagnosis, which usually results in a treatment plan. Again, the working diagnosis and treatment plan are only as good as the information collected in the preceding two steps. Think about how many times we go down a less-than-optimal path because we overlooked, or were too rushed to recognize, a critical input.

“P” is for plan, which leads to action. An effective plan incorporates all of the prior steps, resulting in clear action and improvement in the patient’s condition. An effective plan requires excellent communication, interdisciplinary teamwork and patient input (if the patient isn’t part of this, implementing a sustainable improvement is not likely to occur! Remember, change is easy until we’re directly affected by it – nobody is closer to the change than the patient).

Once the plan is activated, effective care providers will ideally begin a SOAP "loop" to evaluate and to modify the care process as appropriate. Thus, on a regular basis we check in with the patient to see how they’re doing with the treatment (“S”), we do a physical exam, check labs and studies (“O”), we re-assess our care path (“A”) and continue or modify our plan as necessary (“P”). As the major issues are resolved, we evaluate the patient for any other treatment interventions that might present as important. If we’ve done our job, the patient improves and the patient continues with the care plan or makes lifestyle adjustments such that he/she remains well (sustained improvement).
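One way to see how much discipline the format carries is to model it as a structured record rather than free text. The sketch below is purely illustrative (the class and field names are mine, not a clinical standard): the plan simply cannot be written until the subjective, objective and assessment sections exist, which is exactly the guard against jumping to solution.

```python
# Illustrative sketch only; the names below are hypothetical, not a clinical standard.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SoapNote:
    subjective: Optional[str] = None    # S: the patient's own description of the problem
    objective: Optional[dict] = None    # O: focused exam findings, labs, studies
    assessment: Optional[str] = None    # A: the working diagnosis
    plan: Optional[str] = None          # P: written last, and revisited on every loop

    def write_plan(self, plan: str) -> None:
        # Refuse to "jump to solution": S, O and A must be completed first.
        if not (self.subjective and self.objective and self.assessment):
            raise ValueError("complete S, O and A before writing the plan")
        self.plan = plan
```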

Food for thought:
- Think about how easy it is for us to cut patients off when they describe their symptoms. Not only are they frequently frustrated with their care providers and disenfranchised from the care process, but we also head down a suboptimal care path by jumping to solution. Rather than taking the necessary time to be thorough, we end up course-correcting repeatedly in our data collection and treatments, which is an incredible waste of time and resources. Patient and staff satisfaction plummet.
- Think about how we deliver care in silos – every specialist for themselves. Data isn’t shared, it frequently conflicts, and the result is a patient who has no idea what’s going on and a care process that’s suboptimal. We can be almost certain that our care interventions will not be sustained when patients are on their own. Instead they continue to cycle through healthcare with the same issues.
- Think about how disjointedly we cycle through the SOAP process to make the necessary adjustments to our care intervention. Frequently the patient is excluded from further significant input once we’ve made the initial “diagnosis,” and we often shift from proactive to reactive data collection, many times at the cost of patient discomfort or harm.
- Think about how the SOAP process relates to improving the systems within which we all work. Who is affected by the broken processes? Who needs to be heard so that the symptoms and relevant data can be collected? Who needs to be part of the solution so that an effective plan resulting in sustainable change is made possible?

Next week we will look at a process improvement tool used in systems improvement at the hospital level and explore its connection to patient care. SOAP is applicable to process improvement and care providers have a lot more to contribute to this than we think.

Sunday, August 3, 2008

Rubber Side Down!


I am on a mountain biking trip in Southern Utah this week (Brian Head) - no new post until the week of 8/11. Happy Trails!

Saturday, July 26, 2008

Provider Training Provider: Variability Guaranteed


The goal in developing efficient, high-quality and user-friendly systems is to eliminate as much variability as possible. Creating such systems depends on the collection of meaningful data and on the identification of the best practices that will serve as the backbone of the processes. What is frequently overlooked in the implementation of care processes is how we train care providers effectively in their use, particularly at the physician level. We usually fail to recognize that training is itself a system and that the outcome of the training is greater than 90% dependent on how the training is set up. Contrary to popular belief in medicine, there are best practices for virtually all care processes, and to provide the best care we should be training directly to these practices. What we usually use instead is the model of care provider training care provider, and it guarantees the following:

1. Deviation from every best practice
2. Variability of care delivery between virtually every care provider
3. Inefficiency
4. Increased cost
5. Staff dissatisfaction
6. Compromised quality and safety

Care provider training care provider is suboptimal for the following reason: when a care provider relies on his or her own understanding and experience to pass on a best practice, it is inevitable that the transfer of the practice will be incomplete and that it will be modified; disagreement with parts of the best practice or a particular anecdotal patient experience is a frequent cause. The recipient of this training then applies his or her own interpretation and experience to the passed-on practice, which in turn gets passed on to the next recipient. And on and on… Before you know it, everybody is doing things their own way and the best practice becomes a distant memory. It’s not a big stretch to see how efficiency, quality and safety plummet while costs skyrocket.

Our suboptimal training model becomes invisible because it fits right into our culture of autonomous care – in fact it reinforces the culture! The dysfunctional training model, however, becomes visible to the user when it is taken to extremes (catchphrases used to explain processes, such as “do as I say, not as I do” or “this is the way we’ve always done it,” are good markers). Occasionally a dangerous practice points to the risk of the provider-training-provider model, as I experienced recently.

I was working with a senior resident who was supervising a very junior resident. We induced general anesthesia and the junior resident intubated our patient without incident. While I was holding the endotracheal tube in place so that the junior could presumably secure it with tape, he instead proceeded to protect the eyes with eye guards. I suggested to him that the first order of business should be to secure the airway and that he please do so, to which he replied, “I’m sorry, but I’ve been told by many others to tape the eyes first,” while continuing to take care of the eyes. The senior resident then informed me that several years prior she had missed a coffee break because her attending, while taking care of a patient in another room, had accidentally spilled benzoin (a liquid adhesive) into his patient’s eye during tube taping – since then she had trained everyone to protect the eyes first.

Fortunately, there were plenty of us around, so I could continue to hold the tube until it was taped in place, but you can imagine what could happen if this becomes his standard and less help is available, or during an emergency. The junior resident will be exposed to multiple variations in airway management as he works with different staff members during his training – what will he teach his junior when the time comes?

Food for Thought:
Everything is connected. Think about the perpetual loop of dysfunction that arises in a healthcare system that embraces autonomy and the provider-training-provider model on top of an underlying absence of basic care processes. Each component reinforces the next and around we go. Systems-thinking is the only way to break the cycle, and education is another area that needs a systems fix.

Friday, July 18, 2008

Bells and Whistles vs. Tape and Safety Pins


Our obsession with providing the latest clinical interventions and technologies for our patients is both a blessing and a curse. As we have seen in previous posts, our attraction to the shiny and new, the technically sophisticated, the next big thing results in our losing sight of the basic care processes that provide the critical support for innovation. Aside from the fact that we have never really learned how to implement and integrate new technologies using systems-thinking, our disregard for the basics repeatedly sabotages the success of any new intervention or technology.

As I was rounding on the post-operative pain service this week, I hit upon a great example of the new being implemented upon the crumbling old. We manage a large percentage of our post-surgical patients’ pain with epidurals and with peripheral nerve catheters. Regional anesthesia is particularly useful for major thoracic, abdominal and orthopedic cases, and by the end of the week we are usually managing pain for thirty to forty patients. Patients with epidurals and peripheral nerve catheters are generally very appreciative of the pain control provided with this modality, which validates and reinforces our use of regional anesthesia.

Regional anesthesia has been around for a long time, and traditionally blocks were placed without direct visualization. When I did my regional anesthesia fellowship eleven years ago, we relied on superficial anatomic landmarks and on tactile and verbal feedback from patients to guide our block placement. To reduce the risk of injury and to improve the success of the peripheral nerve block, we started using electrical nerve stimulators, which served as a type of homing signal and provided visual feedback as the nerve to be blocked was approached. The sophistication of the nerve stimulator increased, but this was not good enough, so along came the next technology: ultrasound guidance. Now the possibility exists for direct visualization of the nerves to be blocked, presumably with another jump in successful pain control and a reduction of complications.

On the post-surgical side, the technology focus was on the pumps used to deliver the local anesthetics. What started out as adaptations of pumps used for intravenous medications evolved into sophisticated, patient-controlled pumps designed specifically for local anesthetics. They were safer and provided the opportunity for patient-centered pain control. The technology development for these devices continues.

Enter the snafu. In the fifteen years that I have been doing regional anesthesia and acute post-operative pain management, I would be hard-pressed to say that the quality of pain management with nerve blocks has improved significantly. How can this be? We have better techniques for placing nerve catheters, the pumps are better, and the technology in these areas keeps advancing. There are numerous reasons, but the one that jumped out at me was in the picture above: the connector piece between the block catheter and the pump tubing. The technical challenge is to secure the small-bore nerve catheter to the large-bore pump tubing such that it is resistant to disconnection; disconnection disrupts the continuous delivery of local anesthetic, resulting in pain, and it contaminates the nerve catheter end, resulting in an increased risk of infection. Catheter disconnects occur on an almost daily basis, and in spite of this, look at the technology applied to the problem: safety pins, three-way stopcocks, tongue depressors, tape, more tape. This method of connection has not changed in at least fifteen years, not because we can’t improve upon it but because it just isn’t sexy enough to grab our attention! The absurd juxtaposition of the high tech supported by the primitive is not lost on the patients, who frequently let out a nervous chuckle as we jerry-rig their lifeline for pain control.

What struck me about this example is how analogous it is to the current condition of healthcare. Technology advances and the focus of most quality initiatives occur on top of an invisible infrastructure of basic care processes that are barely being held together by creative repair jobs. When they break, we add more tape. They get our reactive attention only when something catastrophic happens. Usually we add more tape…

Until we make these basic care processes visible and important (the critical 20%), quality improvement will remain marginal and expensive.

Thursday, July 3, 2008

Wrong Site Surgery: What do we expect?


An article was published in the Boston Globe this week about a wrong site surgery that happened at Beth Israel Deaconess Medical Center (BIDMC) on Monday (see http://www.boston.com/news/health/blog/2008/07/surgeon_operate.html). Unfortunately, wrong site surgery continues to occur in many hospitals in spite of all of the attention placed on prevention. Why does this continue? Here are two likely contributors.

1. Nobody is paying attention to the broken care processes that support the high profile safety initiatives. Even though my blog post from last week did not depict BIDMC, the chaotic OR work environment that the nurse describes in her hospital has parallels in almost every US hospital. Nurses spend as much as 45% of their time looking for missing instruments and equipment every day – if everyone is hunting and gathering just to get a case going, is it any wonder that site verification and the safety pause get overlooked? There’s another systems principle that states that an organization will only measure what it considers to be important – if it’s not measured, it’s not important. Reflect on the computer entries for OR efficiency in my preceding post and you will notice that the only thing that’s important is getting into the room on time; there is no meaningful measure of the underlying process for room preparation that would flag anything the nurse describes. If it’s not measured, it’s not important.

2. We have misunderstood the use of inspection in a quality process. Here’s a fundamental rule about inspectors: inspection does not create quality; inspectors are there to identify problems that have already occurred in the process so that the process can be fixed. Further, the more inspectors you have, the worse the quality oversight; nobody takes responsibility for overseeing the process. Think about how we do site verification. Every care provider involved is typically responsible for verifying the site before doing anything with the patient. Most of the time every care provider is already multi-tasking, hunting and gathering, and doesn’t have the time to be thorough; “No problem,” they think, “somebody else will catch any issues I miss before we get to the OR.” The patient comes into the OR, and now we have the safety pause being performed by everyone who has already done their own verification process in preop: think they’re focused, especially now that they’re under pressure to get the case started? To them it’s a redundant formality, and there are more important things to do.

Food for thought:
Care providers are set up for failure when they work within these care and quality processes. The 90/10 rule rears its ugly head again to create catastrophic patient harm! Until we make these underlying care processes visible and fix them with frontline input, we don’t have a prayer of eliminating wrong site surgery. We also have to learn to use inspection (site verification and the safety pause) in a way that makes it effective – with undiluted accountability and with the goal of improving the care process.

--------
I will be on vacation next week, so no blog post until the week of 7/14/08.