
Three Ways That Recruitment in Randomized Controlled Trials May Not Reflect Real Life

By: Chad Cook, Amy McDevitt, Derek Clewley, Bryan O’Halloran

As we wind up a year of recruitment on the SS-MECH trial [1], we are compelled to reflect on our recruitment strategies and study participants. Our study includes four recruitment sites, and we have enrolled over 110 participants, nearly 85% of our targeted sample. We are using well-rehearsed and successful strategies at our work sites, providing access to a wide range of individuals with chronic neck disorders. As an example, the recruitment process at Duke University uses the electronic medical record to identify individuals who have recently been seen for neck-related conditions but are not seeking a physical therapist’s care at that time. The processes at all recruitment sites have been very effective, leading to high conversion rates (enrollment) and strong study retention. The study investigators provide care for both arms, which increases the fidelity of the interventions, as each of us has a vested interest in doing this right. Further, thanks to generous external funding (https://foundation4pt.org/), we have financial support for our six-month follow-ups, which has also been instrumental in a very high completion rate.

All of this sounds like wonderful news for any clinical trialist. Indeed, by mid-2025, we will complete the last six-month follow-ups for the SS-MECH trial and will be able to report our findings. In fact, of the more than 20 randomized clinical trials (RCTs) that we have independently been involved in, this one has one of the strongest implementation plans and efforts toward improving study quality. However, we would be remiss if we did not outline some concerns that apply to ALL RCTs, concerns that are not specific to our study but should be considered when reading any published paper. The purpose of this blog is to outline the potential limitations of the samples in RCTs.

Concern Number One: All RCTs have specific inclusion/exclusion criteria, which may influence the type of participant seen in the trial. This can lead to selection bias, which occurs when those who volunteer for a study differ from those who do not. RCTs often select a more homogeneous group of patients to reduce variability. The homogeneity of the sample reduces the generalizability of the results, that is, the extent to which the results reflect the broader patient population seen in everyday clinical practice. RCTs identify a sample representative of a pre-specified target population [2], which may be dissimilar to the general population with chronic neck pain presenting to clinicians. Individuals who agree to participate in a study are often healthier, live close to the study site, are younger, have higher health literacy, and have higher socioeconomic status [3]. All of these features are also moderators of outcome and could influence the results of a study. An example of selection bias in our study is our requirement that research participants not attend physical therapy during the trial. This is likely to increase enrollment of non-care seekers, a very different population from care seekers [4]; care seekers tend to have more severe symptoms and may be more motivated to pursue a change in their status.

Concern Number Two: Non-pragmatic RCTs are conducted under idealized and controlled conditions, which may not accurately represent the complexities and variability of real-world clinical settings. These conditions often increase patient compliance and reduce dropouts, influencing a study’s results. Participants in RCTs are often more compliant with treatment protocols and follow-up visits than the general patient population, leading to differences in outcomes. Study dropouts can introduce bias, reduce power, and lead to missing data, resulting in an overestimation or underestimation of the treatment effect. With fewer participants completing the study, the statistical power to detect a difference between treatment groups is reduced. Lastly, missing data from dropouts can complicate the analysis and interpretation of results, requiring statistical methods to handle the missing information.
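To make the power point concrete, here is a minimal sketch in Python (using the statsmodels library) of how attrition erodes the ability to detect a real between-group difference. The numbers are illustrative assumptions, not figures from the SS-MECH trial: a two-arm design, a moderate standardized effect size of 0.5, 55 participants per arm at enrollment, and 20% dropout.

# A minimal sketch of how dropout erodes statistical power in a two-arm trial.
# All values are illustrative assumptions, not parameters from the SS-MECH protocol.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

enrolled_per_arm = 55     # assumed enrollment per arm
dropout_rate = 0.20       # assumed 20% attrition
completers_per_arm = round(enrolled_per_arm * (1 - dropout_rate))

# Power for a two-sided, two-sample t-test with a moderate effect (d = 0.5)
power_enrolled = analysis.power(effect_size=0.5, nobs1=enrolled_per_arm, alpha=0.05, ratio=1.0)
power_completers = analysis.power(effect_size=0.5, nobs1=completers_per_arm, alpha=0.05, ratio=1.0)

print(f"Power with {enrolled_per_arm} participants per arm: {power_enrolled:.2f}")
print(f"Power with {completers_per_arm} completers per arm: {power_completers:.2f}")

Under these assumptions, power falls from roughly 0.74 to roughly 0.64, illustrating why retention efforts and inflated recruitment targets matter so much.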

Concern Number Three: Because of costs, nearly all RCTs have shorter follow-up periods than what might be observed in clinical practice, potentially missing long-term effects and outcomes. The typical follow-up time for physical therapy-led RCTs varies, but it often ranges from 6 months to 1 year [5,6]. Short-term outcomes provide limited insight into long-term efficacy, fail to capture recurrence rates, and give a poorer understanding of variability in patient response. Past studies on trajectories demonstrate that outcomes change markedly over a 1-year period [7]. Lastly, short-term outcomes fail to capture the behavioral changes that occur because of the treatment and, conversely, the possibility that self-management strategies are not maintained over the long term. Participants might alter their behavior or adherence to treatment protocols once the trial ends, affecting long-term outcomes.

Summary: This blog highlights three concerns that are germane to all RCTs. We emphasize the importance of closely examining the inclusion/exclusion criteria to determine whether the study population accurately reflects the patients that clinicians encounter in clinical practice. Additionally, consider the demographics, social status, and other relevant factors that describe the sample. How you integrate the findings into your workflow and care plan should be guided by a clear understanding of these limitations.

References 

  1. Cook CE, O’Halloran B, McDevitt A, Keefe FJ. Specific and shared mechanisms associated with treatment for chronic neck pain: study protocol for the SS-MECH trial. J Man Manip Ther. 2024;32(1):85-95. 
  2. Stuart EA, Bradshaw CP, Leaf PJ. Assessing the generalizability of randomized trial results to target populations. Prev Sci. 2015;16(3):475-85. 
  3. Holmberg MJ, Andersen LW. Adjustment for Baseline Characteristics in Randomized Clinical Trials. JAMA. 2022;328(21):2155-2156. 
  4. Clewley D, Rhon D, Flynn T, Koppenhaver S, Cook C. Health seeking behavior as a predictor of healthcare utilization in a population of patients with spinal pain. PLoS One. 2018;13(8):e0201348. 
  5. Herbert RD, Kasza J, Bø K. Analysis of randomised trials with long-term follow-up. BMC Med Res Methodol. 2018;18:48. 
  6. Llewellyn-Bennett R, Bowman L, Bulbulia R. Post-trial follow-up methodology in large randomized controlled trials: a systematic review protocol. Syst Rev. 2016;5:214. 
  7. Nim C, Downie AS, Kongsted A, Aspinall SL, Harsted S, Nyirö L, Vach W. Prospective back pain trajectories or retrospective recall: which tells us most about the patient? J Pain. 2024;25(11):104555. 
