The science of Alzheimer’s disease has never moved faster. Disease-modifying therapies are reaching patients for the first time. Blood-based tests can now detect biological signs of the disease years before symptoms appear. In 2025 alone, 182 Alzheimer’s treatment trials were actively recruiting.
But these trials need roughly 50,000 participants to enroll fully, and only about 11,000 people enroll each year. This shortfall is hardest on the communities that carry the greatest disease burden — Black, Hispanic, Asian, and American Indian and Alaska Native populations — who remain dramatically underrepresented in research.
Despite years of effort and growing urgency, we have almost no rigorous, standardized evidence about what actually works to recruit patients into trials, according to our new analysis of nearly 1,000 published studies. In fact, only 50 studies we reviewed provided usable data—and even within that small pool, the data were inconsistent, incomplete and largely drawn from non-pharmacological trials.
Put plainly, the field has been operating without an adequate evidence base, prompting the creation of the USC Clinical Trial Recruitment Lab (CTRL) to test and evaluate recruitment strategies. Even with these research gaps, however, our analysis indicates crucial opportunities to advance our knowledge of effective enrollment strategies.
Here are key takeaways from our study published in the Journal of Prevention of Alzheimer’s Disease:
Only a fraction of studies provided relevant data
Our research team started with 965 studies on Alzheimer’s clinical trials and applied a straightforward standard for inclusion: quantifiable recruitment outcomes. Ninety-five percent of the literature didn’t make the cut — not because the research lacked rigor, but because recruitment methods and results simply weren’t reported in ways that allow comparison or replication. Only 18% of studies reported how many individuals contacted to participate in trials were screened (yield rate), while only 14% reported how many of those screened were enrolled (conversion rate). Only 4% reported monthly enrollment figures (recruitment rate). And 65% reported nothing about the time, labor or cost of their recruitment efforts. The science of recruitment has been treated as an afterthought, not a discipline.
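The three metrics named above reduce to simple ratios. As a minimal sketch with hypothetical funnel numbers (illustrative only, not drawn from the study), here is how they are computed:

```python
# Hypothetical recruitment funnel for a single trial site.
# All figures below are made up for illustration.
contacted = 1200        # individuals contacted about the trial
screened = 216          # of those contacted, how many were screened
enrolled = 30           # of those screened, how many enrolled
months_recruiting = 10  # duration of the recruitment effort

yield_rate = screened / contacted               # share of contacts who were screened
conversion_rate = enrolled / screened           # share of screened who enrolled
recruitment_rate = enrolled / months_recruiting # enrollees per month

print(f"Yield rate:       {yield_rate:.1%}")             # 18.0%
print(f"Conversion rate:  {conversion_rate:.1%}")        # 13.9%
print(f"Recruitment rate: {recruitment_rate:.1f}/month") # 3.0/month
```

Reporting all three together is what makes trials comparable: two trials with the same enrollment total can have very different funnels, and only the ratios reveal where recruitment is breaking down.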
The handoff between clinical care and clinical trials is broken
Healthcare providers are considered one of the most reliable pathways for connecting patients with clinical trials, but there’s strikingly little evidence on how well that handoff works. We know that overburdened clinician workflows, insufficient tools for early diagnosis, underuse of existing codes, and limited provider awareness of ongoing trials are significant barriers. Only four studies that met review criteria reported on recruitment strategies from trials testing treatments — where the stakes are highest and enrollment challenges are most acute. That blind spot has real consequences for the pace of Alzheimer’s drug development.
What works requires more than one approach
Across the 50 studies that provided usable data, the clearest finding was that no single strategy wins consistently. Community engagement — including health fairs, memory screenings, and partnerships with faith communities and senior centers — outperformed other strategies when it was evaluated. But its effectiveness varied significantly across racial and ethnic groups. Social media showed real promise in some trials and negligible results in others. Direct mail, word-of-mouth referrals, registries, and digital tools all had moments of success and moments of failure. The lesson isn’t that community engagement is the answer. It’s that multi-pronged approaches, tailored to specific populations and settings, are necessary — and we still lack the data to know how to optimize them.
Pharma sponsors most Alzheimer’s trials but rarely shares recruitment data
This is the core structural problem underlying everything else. The biopharmaceutical industry sponsors 62% of Alzheimer’s clinical trials, yet recruitment data from those trials remain almost entirely unpublished. While this reflects the economic realities and incentives involved in commercial research, it also means the field is flying blind on the most consequential trials. Academic-sponsored trials didn’t fare much better in our review. The field must move toward transparent, standardized reporting of recruitment outcomes as a baseline practice, whether through changes to National Institutes of Health grant requirements, updated publication norms or voluntary data-sharing commitments.
Two paradigm shifts can close the gap
Our review points to two changes that would fundamentally advance this field. First, recruitment evaluation must be built into trial protocols from the start, not bolted on retroactively. Second, the field needs standardized metrics reported consistently across studies. Enrollment numbers alone aren’t enough. Conversion rates, yield rates, recruitment rates and cost per enrollee are the data points that enable comparison, replication and scale.
Nearly a decade after the National Institute on Aging called for an “applied science of recruitment” for Alzheimer’s clinical research, we are still in the early stages of building it. Without sweeping action, the consequences — slower trials, higher costs, treatments delayed — will fall on patients and families who cannot afford to wait.
Phyllis Barkman Ferrell is a Nonresident Scholar at the USC Schaeffer Institute for Public Policy & Government Service and co-leads the Clinical Trial Recruitment Lab, a joint initiative of the USC Schaeffer Center for Health Policy & Economics and the USC Epstein Family Alzheimer’s Therapeutic Research Institute.
The full article in the Journal of Prevention of Alzheimer’s Disease can be found here.