In 2020, a much-talked-about randomized controlled trial (RCT) found that the Camden Coalition’s care management program for people with complex medical and social needs — referred to as the Camden Core Model — had no significant impact on hospital readmissions. These results spurred many examinations of what led to the null findings, including questions about how to approach complex care evaluation differently to avoid closing the door on promising approaches. While the 2020 study evaluated program impact for all patients in the intervention group, a subsequent study published in JAMA Network Open in September 2023 used a new framework to analyze the results in a more nuanced way. The follow-up study, which specifically looked at participants identified as most likely to engage, found that the care management intervention led to a significant reduction in hospital readmissions. The new findings offer an opportunity to better understand the impact of care management programs for adults with complex needs and apply this new evidence to refine programs.
The Better Care Playbook recently spoke with Dawn Wiest, PhD, Director for Research and Evaluation at the Camden Coalition, to discuss findings from this analysis. This conversation is the first in a two-part series exploring takeaways from the evaluation. A follow-up discussion will explore insights on the study’s methodology and lessons for future complex care program evaluations.
Q. Can you describe how patient engagement with the care management program varied?
A. During the RCT, we were striving for a three- to four-month intervention duration with our clients. We considered that to be the average amount of time that an intervention participant would require to address their complex health and social needs. However, we’ve long recognized the significant variation in how much of our care teams’ time and effort a client will end up needing — some clients might drop off immediately and spend very little time with our care team and others might require considerable staff time and end up working with our team for months.
"Randomization cannot balance out intervention effort unless the trial is designed to take disparate levels of intervention exposure into account, so we knew we had to dig into the data to investigate whether dose variability contributed to the null results we saw in the original analysis."
When we studied the data on how many hours intervention participants spent engaged with the care teams, and the consistency of engagement over time, we weren’t surprised to see variability, but it really was astonishing to see such a high level. Building genuine, trusting relationships with participants is a central and necessary element of our model, and making those connections with participants as soon as possible after hospital discharge is critical. Yet, there were participants who had almost no contact with care team staff during the first week they were home from the hospital, while others had more than 10 hours of engagement that first week. Randomization cannot balance out intervention effort unless the trial is designed to take disparate levels of intervention exposure into account, so we knew we had to dig into the data to investigate whether dose variability contributed to the null results we saw in the original analysis.
Q. What characterized the patients who were the most likely — and the least likely — to engage in the care management program?
A. Our goal with this analysis and others is to dig into the data to learn what we can do for our participants and find sustainable pathways to do it. The individuals most likely to engage with the intervention were less likely to have been arrested prior to enrollment, to have been hospitalized three or more times in the six months prior to enrollment, or to have unstable housing. These findings affirmed what our care teams had already observed, which is why we have expanded our services to address these needs through our Housing First program, which provides participants with stable housing as well as wraparound services, and our Medical-Legal Partnership, which operates today in partnership with a Camden-based addiction medicine clinic, the Cooper University Health Care Center for Healing.
Some of the findings around the clinical conditions were interesting — for example, participants who were more likely to engage were also more likely to have kidney disease and chronic obstructive pulmonary disease (COPD); we are still working to understand what these findings imply for our model. A technique like cluster analysis could provide more actionable insight by helping us identify patient subgroups with various combinations of medical conditions and social risk factors that influence intervention outcomes.
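As a rough illustration of that idea, a minimal cluster analysis could group patients by their combinations of condition and risk indicators. In this sketch, the feature names, the synthetic data, and the choice of three clusters are all hypothetical assumptions for illustration, not elements of the Camden study or its data.

```python
# Hypothetical sketch: simple k-means clustering of patients on binary
# medical-condition and social-risk indicators. Features and data are
# illustrative, not drawn from the Camden study.
import random

random.seed(0)
FEATURES = ["kidney_disease", "copd", "unstable_housing", "prior_arrest"]

# 200 synthetic patients, each a 0/1 vector of indicators
patients = [[random.randint(0, 1) for _ in FEATURES] for _ in range(200)]

def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest center, then
    move each center to the mean of its assigned points."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        centers = [
            [sum(col) / len(c) for col in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

for i, cluster in enumerate(kmeans(patients, k=3)):
    if not cluster:
        continue
    # Prevalence of each indicator within the cluster
    prevalence = [round(sum(col) / len(cluster), 2) for col in zip(*cluster)]
    print(f"cluster {i} (n={len(cluster)}):", dict(zip(FEATURES, prevalence)))
```

In practice, inspecting each cluster's indicator prevalences is what would suggest actionable subgroups, such as a cluster combining COPD with housing instability.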
Q. What have been important takeaways from the multiple analyses of the RCT on how Camden Coalition has refined the Camden Core Model?
A. In the original trial paper, the authors asserted that earlier analyses attributing reductions in hospitalizations to the intervention failed to account for regression to the mean. In other words, the circumstances leading to high hospital utilization during a specific time period will often resolve on their own, irrespective of intensive care management. This is correct only to a degree. For some patients, reducing readmissions is the right goal and can be achieved through intensive care management focused on readmission drivers. But other intervention goals may be much more relevant to the health and well-being of other patients, and also for demonstrating program impact overall.
Q. How can complex care providers think about applying these lessons in their own work?
A. This study validates our thinking that short-term care management is just not enough for a certain segment of this population. Some of the individuals we engaged during the RCT are still connected to our care teams in some way, even if not at the most focused level of the core intervention. So program design should account for the variability in engagement that different patients require.
"Be very clear on what you hope to accomplish with your patients, what outcomes patients themselves are striving for, and whether your program is designed to achieve those outcomes."
Also, be very clear on what you hope to accomplish with your patients, what outcomes patients themselves are striving for, and whether your program is designed to achieve those outcomes. The old-fashioned logic model is very helpful there. Next, identify measures that will help you capture program operation and the impact of your program on desired outcomes and ensure that the measures are appropriate for your staff and program participants. This means you will spend time piloting new measures with your staff and assessing whether the act of measurement fits neatly into clinical workflows or is cumbersome, in which case you will need to reevaluate how the measure is being implemented or even if the measure is appropriate. All of that will require a data system that is well integrated into clinical workflows, so that staff can easily input data and access data to support their clinical work. It will also mean consistent oversight of data quality, and a willingness to be responsive to data collection challenges staff may face.
At the Camden Coalition, we’ve expanded our measurement efforts since the RCT. For example, as part of a national learning collaborative convened by the National Committee for Quality Assurance and supported by The SCAN Foundation and The John A. Hartford Foundation, we are now applying a measurement strategy called goal attainment scaling to more systematically capture patients’ self-defined goals and measure the extent to which those goals are accomplished through the intervention. We have also focused on improving our data-capture capabilities and refining our care management database, so data capture is less burdensome on clinical staff. This has allowed care teams to access data more readily at the point of care and helps the data team evaluate program operation, including frequency and consistency of contact between care teams and patients, and program impact in real time.
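For readers unfamiliar with goal attainment scaling, the conventional scoring arithmetic can be sketched in a few lines. The 5-point rating scale and the Kiresuk-Sherman T-score formula below are the standard published versions; the example goal ratings, equal weights, and the customary 0.3 inter-goal correlation default are illustrative assumptions, not details of the Camden Coalition's implementation.

```python
# Hypothetical sketch of goal attainment scaling (GAS). Each patient
# goal is rated on the conventional 5-point scale: -2 (much less than
# expected) .. 0 (expected outcome attained) .. +2 (much more than
# expected). Ratings are aggregated into a Kiresuk-Sherman T-score,
# where 50 represents goals attained exactly as expected.
import math

def gas_t_score(ratings, weights=None, rho=0.3):
    """Standard GAS T-score; rho is the assumed inter-goal correlation."""
    if weights is None:
        weights = [1] * len(ratings)
    num = 10 * sum(w * x for w, x in zip(weights, ratings))
    den = math.sqrt((1 - rho) * sum(w * w for w in weights)
                    + rho * sum(weights) ** 2)
    return 50 + num / den

# A patient with three self-defined goals, all attained as expected,
# scores exactly 50 -- the "expected outcome" benchmark.
print(gas_t_score([0, 0, 0]))   # -> 50.0
# Exceeding expectations on two of three goals pushes the score above 50.
print(gas_t_score([1, 1, 0]))
```

Tracked over a caseload, scores above and below 50 give a systematic view of whether patients' self-defined goals are being met, under- or over-achieved.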
Q. Another analysis of the Camden care management intervention published in the American Journal of Managed Care prior to the JAMA study demonstrated the association between a higher frequency of care management intervention engagements and reduced hospitalization rates. What are the important lessons for what staffing and funding structures are needed to maintain this higher frequency of care management intervention?
A. There is an important caveat here: not every patient will require the same amount of intervention, although it could be helpful to establish a minimum engagement threshold, such as a baseline number of hours of engagement during the first week of the intervention. Allocating staff resources efficiently is especially important for large patient panels, so it is critical to identify which patients need more care management, based on clinical and social needs, and match staff effort to program and patient goals. For example, if an intervention goal is to stabilize an individual’s medical situation to reduce readmission risk, someone who is experiencing housing instability will require more effort than an individual whose social situation is more stable. So staffing structures and workflows should reflect the diversity of needs that the intervention seeks to address, and draw on a mix of qualitative (e.g., from conversations with patients and providers) and quantitative data (e.g., from cross-sector data sources) to help staff ascertain patients’ risk level and engagement needs right from the start.
"Staffing structures and workflows should reflect the diversity of needs that the intervention seeks to address and draw on a mix of qualitative and quantitative data to help staff ascertain patients’ risk level and engagement needs right from the start."
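A minimum engagement threshold like the one described above can be operationalized very simply, for example as a weekly report flagging enrollees who fall below a baseline of first-week contact hours. In this sketch, the 2-hour threshold and the patient data are illustrative assumptions:

```python
# Hypothetical sketch: flag enrollees whose first-week engagement falls
# below a minimum threshold so care teams can prioritize outreach.
# The 2-hour threshold and the data are illustrative assumptions.
MIN_FIRST_WEEK_HOURS = 2.0

first_week_hours = {"patient_a": 0.5, "patient_b": 10.2, "patient_c": 1.9}

def needs_outreach(hours_by_patient, threshold=MIN_FIRST_WEEK_HOURS):
    """Return patients whose first-week engagement is below threshold."""
    return sorted(p for p, h in hours_by_patient.items() if h < threshold)

print(needs_outreach(first_week_hours))  # -> ['patient_a', 'patient_c']
```

The useful design decision is not the code but the threshold itself, which a program would calibrate against its own engagement and outcome data.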
We actually worked with Hari Balasubramanian, PhD, Associate Professor of Mechanical and Industrial Engineering at the University of Massachusetts Amherst, to figure out how to optimally staff care management programs given the variation in workload. Preliminary findings showed that more than 20 percent of staff effort occurs in the first two weeks of enrollment, 70 percent of care coordination effort occurs face-to-face, and indicators of social vulnerability (e.g., housing instability, behavioral health needs) were associated with more time-intensive program enrollments. These findings help us, and programs implemented elsewhere, better understand staffing needs, panel size construction, and other logistical questions of how to provide care management and care coordination efficiently and effectively.
Q. How can health care providers integrate the findings from this new analysis to build support for systems-level change to better meet the needs of people experiencing housing instability, poverty, and incarceration?
A. This work is hard and no one entity can go it alone. The factors that make an individual more or less likely to engage in care management are too varied for any single organization to address, which is why so much of our programmatic work in Camden is done in collaboration with our ecosystem partners and why our technical assistance and training work around the country brings this same ecosystem focus. For other providers, I would recommend that they seek out and build up these partnerships in their communities. Identify a shared goal, assess your ecosystem’s strengths and weaknesses, and build from there.
Care management is only one part of our portfolio. We are also working with our ecosystem partners to focus on populations we know are less likely to engage, such as individuals with substance use or behavioral health issues through our Pledge to Connect work, which was just designated by SAMHSA as one of 10 national program winners of its behavioral health equity challenge.
Q. Based on these study findings, what types of metrics (and time windows) are most important to demonstrate success in complex care management interventions?
A. Given what we know now about the wide variation of complexities this population faces, both social and medical, evaluating an intervention’s success against readmission rates is too narrow a metric. We also need to think beyond a three- to four-month window.
"The metrics that matter most are those that holistically capture what your program hopes to accomplish with patients. But this doesn’t mean 'more is better' — you don’t want to throw a bunch of metrics into the mix and see what comes out."
Really, the metrics that matter most are those that holistically capture what your program hopes to accomplish with patients. But this doesn’t mean “more is better” — you don’t want to throw a bunch of metrics into the mix and see what comes out. As stated previously, it is critical to be very clear on what you hope to accomplish, and what you can reasonably expect to accomplish with your patients. If you work with diverse populations, be prepared to apply different metrics to different patient subsets, as the outcomes appropriate to measure may not be the same across those subsets.
As for time windows, it depends on the intervention goals, the patient population, the nature of the intervention, and the quality of the ecosystem within which you are working. Intermediate measures are key here. For example, let’s say a goal of your intervention is to keep your patients out of the hospital unless necessary. So, if you want to take a long view and measure readmissions 180 days or even one year after program enrollment, you need to ask yourself, “What is put in place through my program to accomplish that goal?” An intermediate goal on the way to reducing readmission risk may be to connect patients to essential care and resources in the community. This would mean measuring (1) whether that connection is being made and (2) whether it is sustained, and then assessing the quality of that connection. So, in this case, it’s not necessarily the length of the intervention that matters from a program standpoint, but rather the effectiveness of the intervention on establishing those pathways for patients within the ecosystem and monitoring their journey along those pathways.
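Those two intermediate measures can be made concrete with a small sketch. The data model, the referral records, and the "at least two follow-up contacts" definition of a sustained connection are all hypothetical assumptions, not the Camden Coalition's actual metrics:

```python
# Hypothetical sketch: intermediate metrics for community connections.
# Fields, records, and the "sustained" definition are illustrative
# assumptions, not the Camden Coalition's actual data model.
from datetime import date

# Each record: (patient, referral made?, dates of follow-up contacts)
referrals = [
    ("patient_a", True,  [date(2023, 1, 5), date(2023, 2, 3), date(2023, 3, 1)]),
    ("patient_b", True,  [date(2023, 1, 8)]),
    ("patient_c", False, []),
]

# (1) Is the connection being made?
made = sum(1 for _, ok, _ in referrals if ok)
# (2) Is it sustained? Here defined as two or more follow-up contacts,
# an assumed operational definition a program would set for itself.
sustained = sum(1 for _, ok, contacts in referrals if ok and len(contacts) >= 2)

print(f"connections made: {made}/{len(referrals)}")
print(f"connections sustained: {sustained}/{made}")
```

Quality of the connection, the third element mentioned above, would typically come from qualitative follow-up rather than a count like this.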
If you have a short-term program but expect long-term results, you need to have eyes on the ecosystem to truly understand program impact and, importantly, to identify weaknesses/challenges within the ecosystem that need attention.
Lastly, it is extremely important to listen to the voices and feedback of program participants. As one of our community advisory members said to us after the first RCT results were published: “They said it’s not making a difference, but it is making a difference to us.”