Much has been said about the unwinding of the Public Health Emergency for well over a year now. We were instructed to prepare ourselves and given various forms of guidance to do so. We attended webinars, hired consultants, and exchanged information. Now, six months into unwinding, it has lived up to expectations. By August, 5.3 million Americans had been disenrolled from Medicaid, with 75% of them disenrolled for procedural reasons. These procedural denials are creating a significant capacity crisis within agencies across the country. The crisis affects not only those who should not have been disenrolled but also the many workers and agencies striving to ensure that their customers have access to critical healthcare.
A portion of the procedural denials will require attention from the agency. In one state, for example, at least 40,000 clients had to reapply over a three-month period. This high volume poses a genuine problem for an agency that likely faces a vacancy rate of 20 to 30% among its workforce, not to mention the scarcity of experienced staff.
To address these challenges, we have two options and tools at our disposal. The first is to reduce the workload that workers need to handle through expedited, no-touch decisions. The second is to streamline the workflow and business processes for the remaining work that necessitates staff involvement. In today’s world, both approaches are absolutely necessary. Today, we will focus on how the use of data can contribute to achieving success in both approaches.
Analyzing the Numbers
To understand the magnitude of the task at hand, it’s crucial to examine the numbers involved. We start with the total number of redeterminations, denoted as X. This represents the pool of cases that require assessment for continued eligibility. For example, let’s say that X = 1,000,000.
Additionally, we anticipate a certain number of cases that can be resolved ex parte, meaning they can be determined without direct involvement from an eligibility worker. Let’s refer to this expected number as Y; for example, Y = 200,000 (a 20% ex parte rate).
Consequently, the remaining number of cases that demand manual review can be calculated by subtracting Y from X. Let’s denote this value as Z, where Z = 800,000.
X – Y = Z (1,000,000 – 200,000 = 800,000)
Considering the average time it takes to complete a redetermination case, denoted as T, we can estimate the total hours the redetermination process will consume. Multiplying Z by T gives the projected number of hours required. Here, T = 45 minutes (0.75 hours), based on our workload tracking across 12 states.
Z × T = Projected Hours (800,000 × 0.75 hours = 600,000 hours)
Depending on your agency’s staffing methodology, you can calculate how many staff it will take, and over how many months, to complete the work. Using the methodology that C!A deploys when working with clients, the example of 800,000 cases would require approximately 416 staff, whether state staff or contractors, over one year to complete. It is nearly impossible to staff at this level in the current workforce environment.
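The arithmetic above can be expressed as a small capacity model. This is a minimal sketch; the productive-hours figure is an assumption inferred from the numbers in this example (600,000 hours ÷ 416 staff, roughly 1,442 hours per worker per year), not a prescribed methodology, so substitute your agency’s own value.

```python
# Capacity model for the redetermination backlog described above.
# PRODUCTIVE_HOURS_PER_STAFF_YEAR is an assumption implied by the
# article's example (600,000 hours / 416 staff); your agency's
# staffing methodology may use a different figure.

PRODUCTIVE_HOURS_PER_STAFF_YEAR = 600_000 / 416  # ~1,442 hours

def staff_needed(total_cases, ex_parte_cases, minutes_per_case):
    """Return (manual cases, projected hours, staff required for one year)."""
    manual_cases = total_cases - ex_parte_cases             # Z = X - Y
    projected_hours = manual_cases * minutes_per_case / 60  # Z * T
    staff = projected_hours / PRODUCTIVE_HOURS_PER_STAFF_YEAR
    return manual_cases, projected_hours, round(staff)

# The baseline above: X = 1,000,000, Y = 200,000, T = 45 minutes.
print(staff_needed(1_000_000, 200_000, 45))  # → (800000, 600000.0, 416)
```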
Leveraging Data for Efficiency
Now comes the crucial question: How can we reduce the time and effort spent on redeterminations? The answer lies in harnessing the power of data.
Speeding up Validation and Redeterminations
It is essential to decrease both the amount of work and the time required to complete it. By leveraging data effectively, we can increase the percentage of ex parte renewals, thereby shrinking the pile of cases requiring manual intervention. Using the numbers above, increasing the ex parte rate from 20% to 30% would leave just 700,000 cases for manual intervention, a capacity savings of approximately 52 staff. Redirecting that capacity to the remaining 700,000 cases would be like hiring 52 new staff who were already trained!
Returning to the math: with 100,000 additional cases resolved ex parte, our need for 416 staff has dropped to 364. But we don’t have that many staff, and we need more capacity. Let’s explore.
(700,000 × 0.75 hours = 525,000 hours)
Another way data can be used is by providing eligibility workers with the necessary information to verify the circumstances of a household efficiently. By delivering the right case with the right information to the right worker at the right time, eligibility workers won’t be required to consult multiple sources, thereby eliminating a significant amount of manual labor.
Right now, many workers are so overwhelmed that they do not feel they have time to search multiple sources. Instead, they pend the case and place the burden on the customer so that they can move to the next one. Workers do not want to do this, but they feel it is a matter of survival to get to the next family. I recently visited with workers who were spending 10 to 20 minutes searching different data sources. Another unintended consequence is that state data contracts are often underutilized. Having the data is only half of the equation; making it useful in the logical flow of a worker’s process is the second critical piece to reduce pended cases and extra client contacts.
Providing curated verification data as part of the workflow keeps the worker focused and dramatically increases the rate of first contact resolution. Our capacity is often drained not by the initial client contact but by the second, third, and fourth contacts when we can’t achieve first contact resolution. This approach not only reduces the workload for eligibility workers but also decreases the elapsed time for each case (from 45 days to 1 day). In the example above, reducing work time from 45 minutes to 30 minutes for the remaining 700,000 cases would yield a capacity savings of 121 staff (again, already trained) to apply to the work.
Z × T = Projected Hours (700,000 × 0.50 hours = 350,000 hours)
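The combined effect of the two levers discussed above, a higher ex parte rate and a shorter per-case handling time, can be checked with a short calculation. As a hedged sketch: the figure of roughly 1,442 productive hours per worker per year is an assumption inferred from the baseline example (600,000 hours ÷ 416 staff), not a stated methodology.

```python
# Staff-savings check for the two capacity levers discussed above.
# HOURS_PER_STAFF_YEAR is inferred from the article's baseline
# (600,000 hours / 416 staff); it is an assumption, not a standard.

HOURS_PER_STAFF_YEAR = 600_000 / 416  # ~1,442 productive hours

def staff_for(cases, minutes_per_case):
    """Staff needed for one year at the given caseload and handle time."""
    return cases * minutes_per_case / 60 / HOURS_PER_STAFF_YEAR

baseline = staff_for(800_000, 45)        # no levers applied: ~416 staff
after_ex_parte = staff_for(700_000, 45)  # 30% ex parte rate: ~364 staff
after_both = staff_for(700_000, 30)      # plus 30-minute handling: ~243 staff

print(round(baseline - after_ex_parte))  # ex parte savings → 52
print(round(after_ex_parte - after_both))  # first-contact savings → 121
print(round(baseline - after_both))      # total capacity recovered → 173
```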
Improving and Optimizing Operations
In addition to expediting individual redeterminations, data can also provide valuable insights to improve overall operations. By gaining visibility into the workflow and workforce, we can gather key information such as the quantity of work, its age, who is handling it, and the number of ongoing determinations.
This level of visibility allows us to be proactive rather than reactive, enabling us to address potential bottlenecks or allocate resources strategically. By analyzing this data, we can optimize our operations, improve efficiency, and achieve better outcomes in the redetermination process.
Bringing our discussion full circle, it is evident that capacity is a central concern for Medicaid agencies across the country. The first step towards addressing this is to reduce the workload through ex parte resolutions. However, to truly empower the agency’s workforce, it is crucial to equip them with the right tools and eliminate the need to search through multiple systems. While various data options exist, the key lies in proper curation – how we utilize, serve, and refine the data. By doing so, we unlock the full capacity of our existing workforce. It’s important to note that the value lies not merely in data itself but in how it is strategically employed.
Effective curation enables states to fully leverage their data contracts. Low utilization often stems from a lack of integration into existing processes. By aligning the right worker with the right case at the right time, whether through ex parte resolutions or first contact resolution, states can harness their true capacity and derive maximum value from their data contracts.
In conclusion, by the math, saving the capacity of 52 staff through ex parte and 121 staff through better data integration and first contact resolution, an agency would essentially have recovered the capacity of 173 eligibility workers while tackling the 700,000 remaining cases. For agencies that never had the initial 416 workers, this is great news: the same 1,000,000 cases can now be addressed with 243 staff. For agencies with 416 workers, you can now reduce the number of months it takes to complete the unwinding, a win for both the agency and customers.