Preprint
Concept Paper

A Novel Feasibility Study Design Enabling Informative Clinical Trials in Low Resource Settings: the Pop-Up Prevalence study with Private Ethics (PUPPE)

This version is not peer-reviewed

Submitted: 29 December 2023
Posted: 03 January 2024

Abstract
The design of randomized controlled trials (RCTs) for low-resource settings presents unique challenges. One challenge is ensuring recruitment success in the face of missing information about the prevalence of disease in the population. What disease burden data do exist are often old and from distant regions, adding guesswork to site design. This creates a high chance that the sample size will not be recruited in any reasonable timeframe, a challenge even in high-resource settings. Cross-sectional prevalence studies could serve as a type of feasibility study, informing trials by generating near real-time prevalence estimates for the future trial site, but such studies take years to complete, making their insertion into trial design impossible. An empirically supported combination of new approaches that sidesteps unnecessary hurdles is presented: the Pop-Up Prevalence study with Private Ethics (PUPPE) design. PUPPE combines a COVID-validated, more targeted method of data collection; accepted ethics review from universities or hospitals, avoiding lengthy public ethics review; and, if needed, post-study in silico participant randomization. Only this design would enable clinical trial designers to insert fit-for-purpose prevalence studies within the design phase of their trials, leading to informative outcomes.
Keywords: 
Subject: Public Health and Healthcare - Other

1. Introduction

One summary measure of randomized controlled trial (RCT) success is trial informativeness. Trial informativeness is the state of an RCT ending with trustworthy answers to its research questions, enabling a change in policy or a go/no-go decision [1]. Trial informativeness is a challenge to attain even in the Global North, with a mean success rate as low as 26% [2]. The challenge is likely greater for RCTs in sub-Saharan Africa (sSA), where unique hurdles are well documented [3,4]. One of the most common points of RCT failure in all geographies is the infeasibility of recruiting participants in a reasonable timeframe [5,6,7].
One of the causal factors behind infeasible RCT designs is a poor estimate of future participant availability. When RCT designers settle on design choices and site selection, they use an estimate of disease burden at that location to gauge the volume of eligible potential participants. This disease burden, prevalence, or incidence estimate is the core building block for decisions on the number of sites, location, staffing, and even the core trial design. If the disease burden estimate is off, the trial may never recruit enough participants to finish. Losses to RCTs that used incorrect prevalence estimates can include increased cost and duration, prolonged product development cycles, and exposure of participants to risks with no scientific reward.
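To make concrete how sensitive recruitment planning is to the prevalence input, consider a minimal sketch in Python. All figures (target sample size, eligibility and consent rates, site throughput) are hypothetical and chosen only for illustration; this is not a planning tool from the literature cited here.

```python
# Minimal sketch (hypothetical figures): how the prevalence estimate drives
# the screening volume and timeline an RCT site must plan for.

def screening_needed(target_n: int, prevalence: float,
                     eligibility_rate: float, consent_rate: float) -> int:
    """People who must be screened to enroll target_n participants."""
    return round(target_n / (prevalence * eligibility_rate * consent_rate))

TARGET_N = 400             # required sample size (assumed)
SCREENS_PER_SITE_DAY = 25  # site throughput (assumed)

for prev in (0.10, 0.05):  # planned estimate vs. a true value half as large
    n_screen = screening_needed(TARGET_N, prev,
                                eligibility_rate=0.8, consent_rate=0.5)
    days = n_screen / SCREENS_PER_SITE_DAY
    print(f"prevalence {prev:.0%}: screen {n_screen:,} people "
          f"(~{days:.0f} site-days)")
```

In this toy example, a true prevalence of half the planned estimate doubles both the screening volume (from 10,000 to 20,000 people) and the site-days required, exactly the kind of shortfall that stalls recruitment.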
In sSA and other locations lacking finely mapped prevalence data, where electronic health records are rare and secondary-use health data virtually nonexistent, the only retrospective option is to use past estimates from more distant locations. This dated information, from communities dissimilar to the RCT target community, is likely to be inaccurate. Using such estimates can produce RCTs whose forecasts of participant volume are not helpful.
The solution offering the most accurate prevalence result is to estimate at the RCT site with no time or distance lag from the RCT start. Measuring prevalence at the target site, close to the intended start time, maximizes both proximity and recency. Classically, a cross-sectional prevalence study is the design for such a feasibility exercise [6]. Why don’t RCT designers in low-resource settings with no retrospective data use fit-for-purpose cross-sectional studies to inform design? Likely, it is the very long timeline of prevalence studies. The window between RCT conception and key design choices, which is the window for generating a prevalence estimate, is not long enough for today’s prevalence studies. Feasibility studies that measure prevalence last, on average, about two years [8,9]. If cross-sectional prevalence studies completed in months rather than years, they could fit inside the RCT conception-to-design window.

2. Current and Traditional Cross-Sectional Prevalence Studies in sSA

A recent scoping review [8] of prospective cross-sectional prevalence studies in sSA, tied exclusively to communicable, maternal, neonatal, and nutritional diseases and conditions, found certain characteristics. In a random sample of these studies, the most commonly measured conditions were malaria, intestinal parasitic infection, and malnutrition. The most common population measured was children, followed by antenatal or postnatal women. Many countries hosted studies; the most frequent was Ethiopia, with Cameroon and Uganda next most frequent. In the random sample, study sample sizes ranged from 35 to 9,812 (median: 384), and duration of data collection from 30 days to 510 days (median: 90 days). The most common settings for data collection were hospitals and clinics, followed by randomized individual households. Less frequently, data collection took place institutionally within schools, workplaces, and prisons, or in more private community approaches, such as referral sampling. Surprisingly, most of these studies were unfunded [8]. Taken together, the high volume of studies, the breadth of participating countries, the wide range of diseases, populations, and sample sizes, and the preponderance of unfunded studies indicate this type of research is thriving.

3. Challenges With Current and Traditional Cross-Sectional Prevalence Studies

All current design variants of these studies are slow. In a full coding of data from a recent scoping review [8], prospective cross-sectional prevalence studies concluding in sSA in recent years had a median duration of 2 years, a mean of 2.5 years, and a maximum of 14 years. Fewer than 3% of the total prevalence studies likely completed in six months or less [8]. Outside of sSA, in one set of traditional feasibility studies, every study took at least 20 months to complete; the mean completion time across a larger set was 31 months [9].
The largest barrier to timeliness in such studies in sSA was the length of a public ethics review, one administered by a national or regional authority. Prevalence studies ethically reviewed by these national or regional government entities, unaffiliated with public or private hospitals or universities, took much longer to progress to publication than those reviewed by established local university or hospital review boards. A full coding of the data set in a recent scoping review showed public ethics review adding, on average, at least a full 365 days to study duration compared to the span of prevalence studies with fully mature non-public, or ‘private’, ethics reviews (1,173 days vs. 808 days, n=286) [8].
Considering these timespans, using fit-for-purpose cross-sectional prevalence studies to contribute to a specific RCT’s planning is not feasible. This meshes with evidence showing that roughly 3% of non-industry trials attempt to use feasibility studies as design inputs [7,10]. Traditional prevalence study variants are not crafted or ready for inclusion as part of RCT design. In fact, a fully coded version of the data in Dolley et al. showed that only 2% of the 292 sSA cross-sectional prevalence studies were part of an RCT, and perhaps none of those actually informed RCT design [8].
Further, sSA prevalence studies are not designed to optimally target sub-populations. Since traditional designs are not mobile, population selection happens only at the macro level: large swaths such as districts or zones are randomized. Globally, though, there is movement toward selecting specific populations to understand. Whether in service of precision public health [11,12], tied to target product profiles, or “implementation forward”, the capability of measuring prevalence in ever more specific ways has consistent appeal.
Traditional design hides research from the community. The challenge of recruitment can be partially assuaged as individuals and groups come to see research as normative and part of daily life. However, most prevalence studies perform data collection and participant communication within houses, hospitals, and clinics: ‘behind closed doors’. A fully coded version of the recent sSA prevalence studies showed that at least 81% of studies included data collection in those settings [8]. While maximizing privacy, current designs are not helping break down suspicion or caution.

4. Introducing a New Study Design

The already high standard of ethics, measurement, and good clinical practice in sSA prevalence studies leaves study length as the prohibiting criterion for inserting such studies into RCT design windows. A novel study design that addresses the challenges uses new and empirically sound approaches in combination. The exclusive intention to use this study design to collect data for the design phase of an RCT, without the study’s rigorous adoption of the RCT’s eligibility criteria, assets, infrastructure, or pilot questions, makes this an “external feasibility study” in the Consolidated Standards for Reporting Trials (CONSORT) model, a type of exploratory study [6,13]. Delivering a prevalence-only study with testing and data collection performed in public spaces using a pop-up approach, exclusively with local university or hospital ethical review, and with in silico randomization after the study’s end is an innovative approach that may work in sSA to inform RCT design. The combination of those approaches, used together systematically, represents a new study design: the Pop-Up Prevalence study with Private Ethics (PUPPE).
First, a PUPPE design is very simple and narrow in terms of data collection. Investigators may be tempted to collect a wide variety of data beyond disease burden. This prolongs an already lengthy study and is not part of PUPPE. Equally disqualifying is an attempt to turn a feasibility study into a pilot study, in which various aspects of the follow-on RCT are tested. Such add-ons would slow what is meant to be a rapid measurement of prevalence alone. All decisions ought to be evaluated not only on participant impact and costs, but on the number of days added to or saved from total study length. PUPPE is a sprint. Keeping key decisions simple, watching recruitment daily, continuously forecasting the day the study will end (see the sketch below), and communicating frequently with sponsors are all investments in shortening study span.
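As a minimal illustration of the continuous forecasting just described, the following Python sketch projects a study’s end date by simple linear extrapolation from daily sample counts. The start date, counts, and target are hypothetical, and a real study might prefer a more robust trend estimate.

```python
from datetime import date, timedelta

def forecast_end(start: date, daily_counts: list[int], target: int) -> date:
    """Project the date a study reaches its target sample size,
    assuming collection continues at the mean daily rate so far."""
    collected = sum(daily_counts)
    rate = collected / len(daily_counts)          # mean samples per day
    days_remaining = max(0, target - collected) / rate
    return start + timedelta(days=len(daily_counts) + round(days_remaining))

# Hypothetical first week of collection against a target of 400 samples
print(forecast_end(date(2024, 3, 1), [28, 35, 31, 40, 36, 33, 38], 400))
# -> 2024-03-13
```

Run each evening against the study tracker, even this simple projection flags early whether the sprint is on pace or whether the site location or hours need to change.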
Second, the study’s data collection is done by way of a “pop-up” site, in order to maximize the number of samples collected per day. The trend of the modern ‘pop-up’ retail or services site began in the early 2000s. Originally relevant to retail sales, pop-ups have been slowly adapted to the health sphere. Pop-up health studies are temporary, targeted, and “appear at a location close to a population of participants…who can be invited to engage in the research studies” [14]. The COVID-19 pandemic (COVID) showed the most widespread and effective use of pop-ups to estimate prevalence. During COVID, New York City collected 239,109 specimens through pop-up testing [15]. Often, pop-up testing stations are mobile, such as a van or bus. Others are assembled daily, such as a tent. Some are semi-permanent structures, such as huts that might persist for weeks in one place. More critical than the form factor is the choice of sites, as “research is brought to the participants, rather than the other way around” [14].
“A major challenge with vaccine trials at fixed study sites is that unexpectedly low attack rates can delay progress” [16]. One of the largest, most credible vaccine trials decided to deploy “mobile (pop-up) research sites” to address this challenge [16]. Programs like NYC Test & Trace and the WHO Solidarity Vaccines Trial identified cohorts of prospective participants they were more interested in recruiting. It is clear pop-up approaches can find a targeted mix of participants. That mix might be an overall higher total volume of participants, a higher volume of certain types of participants, or a higher volume of sick participants [17].
Prior to COVID, a “novel, time-limited pop-up community HIV testing service was introduced in 2013” in Australia [18]. The pop-up was compared with similar testing offered by a traditional clinic. The pop-up was set in “a major thoroughfare for traffic and pedestrians” [18]. The pop-up testing service performed 7 tests per hour to the clinic’s 4 per hour (p<.01) [18]. More compelling was that, over the course of the experiment, the pop-up service performed 36.4 tests per day to the clinic’s 10.6 tests per day [18]. No recent comparative data from sSA was available, as none of the 292 randomized cross-sectional prevalence studies, coded from a recent scoping review’s data, used a pop-up collection model [8].
Third, in sSA, the “private” ethics review organizations, meaning local universities and hospitals, are shown to contribute to much faster study completion than national review organizations [8]. In a full coding of data in a recent scoping review, prevalence studies with a community approach to data collection benefited most from a “private” review: they completed 55% faster than when using “public” review [8]. Across all types of data collection, studies with “private” ethical review showed the fastest timespan between the start of data collection and results submission (74 days), and were 31% faster on average than studies with “public” ethics reviews [8]. In sSA, strong networks often exist amongst health professionals, academics, and researchers. It is often not a challenge for an investigator seeking to perform a prevalence study to associate or affiliate with a university or hospital, in hopes of accessing ethical review services. The definition of “private” ethics review organizations includes federally funded hospitals and universities; the “public” ethical review organization typically operates as a national entity.
Fourth, current and traditional cross-sectional prevalence studies are almost always randomized prior to study start, sometimes a lengthy process. This randomization can reduce bias, usually while honing a large population into a manageable one. However, it may be that randomization in cross-sectional prevalence studies, depending on their collection setting, is not necessary or “goes too far” [13]. PUPPE studies are ‘first come, first served’ and not individually nor cluster randomized in advance. In a PUPPE study, investigators do not apply strict inclusion/exclusion criteria at the point of measurement. More representative of PUPPE is the choice to randomize after the study. Once the data have been collected and exist in digital study management software, such as REDCap or OpenClinica, the data can be randomized in silico, with the resulting smaller random subset of the collected data used as the official dataset. The in silico approach is more appropriate in PUPPE, as applying a real-time randomization scheme in a bustling public environment, where participation might include prizes, incentives, or psychological or social positives, would be more difficult. Immediately after the study, raw or randomized data could be shared with sponsors for RCT design. A minimal sketch of this post-study step appears below.
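The in silico step amounts to a reproducible simple random sample drawn from the full exported dataset. A minimal Python sketch follows, assuming the records have been exported as a CSV file from the study management software; the file names, the subset size of 384, and the seed value are hypothetical.

```python
import pandas as pd

# Load the full pop-up dataset (hypothetical export, e.g., from REDCap)
records = pd.read_csv("puppe_collected.csv")

# Simple random sample without replacement; a fixed seed makes the
# draw reproducible and auditable for sponsors and reviewers.
official = records.sample(n=384, random_state=20240103)

official.to_csv("puppe_official_subset.csv", index=False)
print(f"Retained {len(official)} of {len(records)} collected records")
```

The fixed seed is a deliberate choice in this sketch: anyone holding the raw export can regenerate the identical official subset, which supports the audit trail a sponsoring RCT would expect.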

5. Benefits of PUPPE

A pop-up prevalence study with a ‘private’ ethics review is likely to be faster than any current sSA traditional design. This means a PUPPE design could be inserted into the design window of an RCT with no disruption. A wealth of benefits accrues from such insertion, starting with the recruitment advantage: RCT sites can be chosen realistically, avoiding undue pressure on those conducting the trial and recruiting participants. Other secondary benefits include, first, that an RCT with accurate prevalence estimates will yield a better estimate of when the trial will end. Second, the RCT design team can make improved decisions on the appropriate number of trial sites or clusters. Third, optimal planning for staff, intervention supplies, and other inventory leads to lower costs and less waste. Fourth, RCT designers can make a number of adjustments, related to pragmatism, inclusion/exclusion criteria, and analysis, based on the likely participant mix. Finally, PUPPE-informed RCTs are more likely to end informatively, treating participants’ investments in research appropriately.
"Pop-up research centers are a solution" to challenges associated with participant recruitment [14]. First and foremost, they can help access a large number of participants in a short time, including if prevalence is changing [16]. Pop-ups can be deployed to the type of participants the study wants to target. This becomes especially important if particular subsets of the general population are at risk, and those subsets are not frequent participants of testing. Whether in targeted situations or for the full community, PUPPE is geared toward volume. After fully coding data from a recent sSA scoping review, the studies using a method of data collection closest to pop-ups collected two times more samples per day, in situations where participants were most scarce, than the next most fruitful collection method—going house to house (1.87 vs .87 samples per day, n=83) [8].
PUPPE offers trial teams more flexibility. If circumstances change, the mobility of the site, materials, and staff means moving to a different location is low-effort. A pop-up can situate itself in a wide variety of places. If the disease is not communicable, a pop-up could be in or near "places of worship, retail stores, parks, libraries, food halls," or a town square [17].
PUPPE is likely to be more inclusive than existing variants of prevalence studies. A wider base of participants can avail themselves of the pop-up. In randomized community-based studies where health workers visit individual pre-identified households, only a limited number of participants are 'invited'. In a pop-up scenario, if the majority of a village wanted to participate, they could. The “pragmatic trial” sensibility in RCT design is exemplified in PUPPE.
Pop-ups offer an easier experience for participants. First, they can offer a “no-admin” experience: the collection station is presumably first-come, first-served and open long hours, so a participant arrives at their most convenient time. Participants avoid the additional work of scheduling, re-scheduling, reminders, and managing a calendar involved in an institution-based prevalence study [17]. Second, pop-ups that move or have multiple stations decrease the time any participant must travel to get to the test, "because the test is moving to them” [17].

6. Discussion and Conclusion

PUPPE creates opportunities beyond simply enabling RCTs to be more efficient and informative. This prospective prevalence design could act as a catalyst to raise the capability and funding of both private and public ethics boards in sSA. The number of entrepreneurial contract research organizations and academic investigators responding to a new stream of demand might increase. Additional users of the research outcomes might include policymakers, enabling a more precision public health approach. More young people would see public displays of health research, instilling comfort and confidence at an age that will see long-term rewards for science.
In addition to strengths, PUPPE may have a number of limitations, including:
  • Certain specific diseases or conditions, due to ethical or feasibility reasons, might be a poor fit for a PUPPE study.
  • People who appear in a public place and participate in PUPPE may differ from the populations accessible in more traditional data collection models. Individuals who are homebound or hospital inpatients could be ideal participants, yet vastly underrepresented in a PUPPE study.
  • COVID was unique, so any learnings from COVID, including those related to pop-up approaches, could be non-generalizable. While the pop-up aspect of PUPPE is underpinned by non-COVID examples of pop-up success, one might argue for discounting the COVID evidence.
  • The selective investment in non-public ethics boards might hinder public ethics boards’ opportunities to get stronger.
  • Any new design without a historical basis has risk due to novelty.
PUPPE is a new study design. Though informed by sSA data, it could be used in any low-resource setting for efficient, fast measurement of disease burden at a site. “There is now popular consensus that early work…confirming the size of the eligible population is paramount” [6]. PUPPE is empirically based and leverages progress in public health approaches and technology. Used in toto, and with other speed enhancers such as a master protocol or methods for pop-up data randomization, this design has the chance to make a significant qualitative leap forward and become a tool in the design arsenal of RCTs.

Author’s Contributions

SD: Conceptualization, Investigation, Formal Analysis, Writing - Original Draft Preparation, Writing - Review & Editing, Methodology, Project Administration, Funding Acquisition

Funding

Funding was provided by The Bill & Melinda Gates Foundation.

Data Availability

Data referred to in this Commentary are available upon reasonable request from the author.

Ethics and Consent for Publication

This commentary does not require ethical approval; it does not report on or involve the use of any animal or human data or tissue. Consent for publication is not applicable.

Declaration of Competing Interests

The author declares that, beyond the financial support provided by The Bill & Melinda Gates Foundation, he has no known competing financial interests, intellectual property interests, nor personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Hartman D, Heaton P, Cammack N, Hudson I, Dolley S, Netsi E, Norman T, Mundel T. Clinical trials in the pandemic age: What is fit for purpose?. Gates Open Research. 2020;4. [CrossRef]
  2. Hutchinson N, Moyer H, Zarin DA, Kimmelman J. The proportion of randomized controlled trials that inform clinical practice. Elife. 2022 Aug 17;11:e79491. [CrossRef]
  3. Joseph PD, Caldwell PH, Tong A, Hanson CS, Craig JC. Stakeholder views of clinical trials in low-and middle-income countries: a systematic review. Pediatrics. 2016 Feb 1;137(2). [CrossRef]
  4. Alemayehu C, Mitchell G, Nikles J. Barriers for conducting clinical trials in developing countries-a systematic review. International journal for equity in health. 2018 Dec;17:1-1. [CrossRef]
  5. Collette L, Bogaerts J, Paoletti X. Why Do Clinical Trials Fail?. Oncology Clinical Trials: Successful Design, Conduct, and Analysis. 2018 Mar 28:48.
  6. Bond C, Lancaster GA, Campbell M, Chan C, Eddy S, Hopewell S, Mellor K, Thabane L, Eldridge S. Pilot and feasibility studies: extending the conceptual framework. Pilot and Feasibility Studies. 2023 Feb 9;9(1):24. [CrossRef]
  7. Cooper CL, Whitehead A, Pottrill E, Julious SA, Walters SJ. Are pilot trials useful for predicting randomisation and attrition rates in definitive studies: a review of publicly funded trials. Clinical Trials. 2018 Apr;15(2):189-96. [CrossRef]
  8. Dolley S, Miller CJ, Quach P, Norman T. Recent cross-sectional prevalence studies in sub-Saharan Africa for communicable, maternal, neonatal, and nutritional diseases and conditions: a scoping review. MedRxiv. 2023 Dec 27. [CrossRef]
  9. Morgan B, Hejdenberg J, Hinrichs-Krapels S, Armstrong D. Do feasibility studies contribute to, or avoid, waste in research?. PloS one. 2018 Apr 23;13(4):e0195951. [CrossRef]
  10. Laursen DR, Paludan-Müller AS, Hróbjartsson A. Randomized clinical trials with run-in periods: frequency, characteristics and reporting. Clinical Epidemiology. 2019 Feb 11:169-84. [CrossRef]
  11. Dolley S. Big data’s role in precision public health. Frontiers in public health. 2018 Mar 7;6:68. [CrossRef]
  12. Olstad DL, McIntyre L. Reconceptualising precision public health. BMJ open. 2019;9(9). [CrossRef]
  13. Moore L, Hallingberg B, Wight D, Turley R, Segrott J, Craig P, Robling M, Murphy S, Simpson SA, Moore G. Exploratory studies to inform full-scale evaluations of complex public health interventions: the need for guidance. J Epidemiol Community Health. 2018 Oct 1;72(10):865-6. [CrossRef]
  14. Toomey RJ, McEntee MF, Rainford LA. The pop-up research centre–challenges and opportunities. Radiography. 2019 Oct 1;25:S19-24. [CrossRef]
  15. Jiménez J, Parra YJ, Murphy K, Chen AN, Cook A, Watkins J, Baker MD, Sung S, Kaur G, Kress M, Kurien SJ. Community-informed Mobile COVID-19 testing model to addressing health inequities. Journal of Public Health Management and Practice. 2022 Jan 1;28(1):S101-10. [CrossRef]
  16. Krause P, Fleming TR, Longini I, Henao-Restrepo AM, Peto R, Dean NE, Halloran ME, Huang Y, Fleming TR, Gilbert PB, DeGruttola V. COVID-19 vaccine trials should seek worthwhile efficacy. The Lancet. 2020 Sep 12;396(10253):741-3. [CrossRef]
  17. Della Vella D, Rayo MF. The Final Inch: What Pop-Up COVID Testing Tells Us about Community Engagement. In Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 2022 Sep (Vol. 11, No. 1, pp. 76-81). Sage CA: Los Angeles, CA: SAGE Publications. [CrossRef]
  18. Knight V, Gale M, Guy R, Parkhill N, Holden J, Leeman C, McNulty A, Keen P, Wand H. A novel time-limited pop-up HIV testing service for gay men in Sydney, Australia, attracts high-risk men. Sexual Health. 2014 Aug 28;11(4):345-50. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.