All posts by Jonathan McCall

PCORI Announces First PCORnet Demonstration Project: The ADAPTABLE Aspirin Study


The Patient-Centered Outcomes Research Institute (PCORI) has approved the first pragmatic clinical trial to be performed through the National Patient-Centered Clinical Research Network (PCORnet)—the ADAPTABLE study (Aspirin Dosing: A Patient-centric Trial Assessing Benefits and Long-term Effectiveness).

Over the course of the trial, 20,000 study participants with cardiovascular disease will be randomly assigned to receive one of two commonly used doses of aspirin—a low dose of 81 mg per day versus a higher dose of 325 mg per day—in order to determine which provides the optimal balance between protecting patients with cardiovascular disease from heart attack and stroke, and minimizing bleeding events associated with aspirin therapy. The trial will also employ a number of innovative methods, including electronic health record (EHR)-based data collection and a patient-centered, web-based enrollment model in partnership with the Health eHeart Alliance Patient-Powered Research Network (PPRN).

The ADAPTABLE trial, which includes six of PCORnet’s Clinical Data Research Networks (CDRNs), will be led and coordinated through the Duke Clinical Research Institute (DCRI).


Read more about the ADAPTABLE Aspirin Trial here:
Fact Sheet (PDF)
Infographic (PDF)
DCRI Coordinating Center Announcement

Study Examines Public Attitudes Toward Data-Sharing Networks


A new study examining public attitudes about the sharing of personal medical data through health information exchanges and distributed research networks finds a mixture of receptiveness and concerns about privacy and security. The study, conducted by researchers from the University of California, Davis, and the University of California, San Diego, and published online in the Journal of the American Medical Informatics Association (JAMIA), reports results from a telephone survey of 800 California residents. Participants were asked for their opinions about the importance of sharing personal health data for research purposes and their feelings about related issues of security and privacy, as well as the importance of notification and permission for such sharing.

The authors found that a majority of respondents felt that sharing health data would “greatly improve” the quality of medical care and research. Further, many either somewhat or strongly agreed that the potential benefits of sharing data for research and care improvement outweighed privacy considerations (50.8%) or the right to control the use of their personal information (69.8%), although study participants also indicated that transparency regarding the purpose of any data sharing and controlling access to data remained important considerations.

However, the study’s investigators also found evidence of widespread concern over privacy and security issues, with substantial proportions of respondents reporting a belief that data sharing would have negative effects on the security (42.5%) and privacy (40.3%) of their health data. The study also explored attitudes about the need to obtain permission for sharing health data, as well as whether attitudes toward sharing data differed according to the purpose (e.g., for research vs. care) and the groups or individuals among which the data were being shared.

The authors note that while data-sharing networks are increasingly viewed as a crucial tool for enabling research and improving care on a national scale, they ultimately rely upon trust and acceptance from patients. As such, the long-term success of efforts aimed at building effective data-sharing networks may depend on accurately understanding the views of patients and accommodating their concerns.


Read the full article here: 

Kim KK, Joseph JG, Ohno-Machado L. Comparison of consumers' views on electronic data sharing for healthcare and research. J Am Med Inform Assoc. 2015 Mar 30. pii: ocv014. doi: 10.1093/jamia/ocv014. [Epub ahead of print]

New Biostatistical Guidance Document Available – “Frailty Models in Cluster-Randomized Trials”


The NIH Collaboratory Biostatistics/Study Design Core has released a new guidance document concerning the use of frailty models in the setting of cluster-randomized trials (CRTs). This guidance, the fifth in a series from the Core, outlines considerations affecting power calculations in frailty models, as well as issues raised by the use of logistic regression models for time-to-event versus dichotomous outcomes in CRTs.
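As general statistical background for the power-calculation issues mentioned above: because patients within a cluster tend to be more alike than patients in different clusters, CRTs need more participants than individually randomized trials to achieve the same power. The standard variance-inflation ("design effect") adjustment can be sketched as follows. This sketch is not drawn from the Core's guidance document itself, and the cluster size and intraclass correlation (ICC) values are purely illustrative:

```python
import math

def design_effect(cluster_size: int, icc: float) -> float:
    """Inflation factor applied to an individually randomized sample size.

    icc is the intraclass correlation coefficient: the proportion of outcome
    variance attributable to differences between clusters.
    """
    return 1 + (cluster_size - 1) * icc

def crt_sample_size(n_individual: int, cluster_size: int, icc: float) -> int:
    """Total participants a CRT needs to match the power of an individually
    randomized trial requiring n_individual participants."""
    return math.ceil(n_individual * design_effect(cluster_size, icc))

# Illustrative example: a question answerable with 1,000 individually
# randomized participants, run instead in clusters of 17 with ICC = 0.0625:
print(design_effect(17, 0.0625))          # 2.0
print(crt_sample_size(1000, 17, 0.0625))  # 2000
```

Even a modest ICC can double the required sample size, which is one reason power calculations deserve careful attention in CRT design.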

The guidance document can be found under Biostatistical Guidance Documents on the Tools for Research page on the Living Textbook, or accessed directly here (PDF).


Researchers Find Poor Compliance with Clinical Trials Reporting Law

A new analysis of data from the ClinicalTrials.gov website shows that despite federal laws requiring the public reporting of results from clinical trials, most research sponsors fail to do so in a timely fashion—or, in many cases, at all. The study, published in the March 12, 2015 issue of the New England Journal of Medicine, was conducted by researchers at Duke University and supported by the NIH Collaboratory and the Clinical Trials Transformation Initiative (CTTI). The study’s authors examined trial results as reported to ClinicalTrials.gov and evaluated the degree to which research sponsors were complying with a federal law that requires public reporting of findings from clinical trials of medical products regulated by the U.S. Food and Drug Administration (FDA).

“We thought it would be a great idea to see how compliant investigators are with results reporting, as mandated by law,” said lead author Dr. Monique Anderson, a cardiologist and assistant professor of medicine at Duke University.

Monique L. Anderson, MD. Photo courtesy of Duke Medicine.

Using a publicly available database developed and maintained at Duke by CTTI, the authors were able to home in on trials registered with ClinicalTrials.gov that were highly likely to have been conducted within a 5-year study window and to be subject to the Food and Drug Administration Amendments Act (FDAAA). This federal law, which was enacted in 2007, includes provisions that obligate sponsors of non-phase 1 clinical trials testing medical products to report study results to ClinicalTrials.gov within 12 months of the trial’s end. It also describes allowable exceptions for failing to meet that timeline.

However, when the authors analyzed the data, they found that relatively few studies overall—just 13 percent—had reported results within the 12-month period prescribed by FDAAA, and less than 40 percent had reported results at any time between the enactment of FDAAA and the 5-year benchmark.

“We were really surprised at how untimely the reporting was—and that more than 66 percent hadn’t reported at all over the 5 years [of the study interval],” said Dr. Anderson, noting that although prior studies have explored the issue of results reporting, they have until now been confined to examinations of reporting rates at 1 year.

Another unexpected result was the finding that industry-sponsored studies were significantly more likely to have reported timely results than were trials sponsored by the National Institutes of Health (NIH) or by other academic or government funding sources. The authors noted that despite a seemingly widespread lack of compliance with both legal and ethical imperatives for reporting trial results, there has so far been no penalty for failing to meet reporting obligations, even though FDAAA spells out punishments that include fines of up to $10,000 per day and, in the case of NIH-sponsored trials, loss of future funding.

“Academia needs to be educated on FDAAA, because enforcement will happen at some point. There’s maybe a sense that ‘this law is for industry,’ but it applies to everyone,” said Anderson, who points out that this study is being published just as the U.S. Department of Health and Human Services and the NIH are in the process of crafting new rules that deal specifically with ensuring compliance with federal reporting laws.

According to Anderson, increased awareness of the law, coupled with stepped-up enforcement and infrastructure designed to inform researchers about their reporting obligations, has the potential to improve compliance with both the letter and the spirit of the regulations. “I think reporting rates will skyrocket after the rulemaking,” she says.

In the end, Anderson notes, reporting clinical trials results in order to contribute to scientific and medical knowledge is as much an ethical obligation for researchers as a legal one: “It’s something we really promise to every patient when they enroll on a trial.”


Read the full article here:

Anderson ML, Chiswell K, Peterson ED, Tasneem A, Topping J, Califf RM. Compliance with results reporting at ClinicalTrials.gov. N Engl J Med. 2015;372:1031-9. doi:10.1056/NEJMsa1409364.
Additional reading:

"Results of many clinical trials not being reported" (NPR)

"Clinical trial sponsors fail to publicly disclose report results, research shows" (Forbes.com)

Report from NIH Collaboratory Workshop Examines Ethical and Regulatory Challenges for Pragmatic Cluster Randomized Trials

A new article by researchers from the NIH Collaboratory, published online this week in the journal Clinical Trials, explores some of the challenges facing physicians, scientists, and patient groups who are working to develop innovative methods for performing clinical trials. In the article, authors Monique Anderson, MD, Robert Califf, MD, and Jeremy Sugarman, MD, MPH, MA, describe and summarize discussions from a Collaboratory workshop on ethical and regulatory issues relating to pragmatic cluster-randomized trials.


Pragmatic Cluster-Randomized Trials

Many of the clinical trials that evaluate the safety and effectiveness of new therapies do so by assigning individual volunteers to receive either an experimental treatment or a comparator, such as an existing alternative treatment, or a placebo. However, this process can be complex, expensive, and slow to yield results. Further, because these studies often take place in specialized research settings and involve patients who have been carefully screened, there are concerns that the results gathered from such trials may not be fully applicable to “real-world” patient populations.

For these reasons, some researchers, patients, and patient advocacy groups are interested in exploring different methods for conducting clinical trials, including designs known as pragmatic cluster-randomized trials, or CRTs. In a pragmatic CRT, groups of individuals (such as a clinic, hospital, or even an entire health system) are randomly assigned to receive one of two or more interventions being compared, with a focus on answering questions about therapies in the setting of actual clinical practice—the “pragmatic” part of “pragmatic CRT.”
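The essential difference from an individually randomized trial is that the unit of randomization is the site, not the patient. A minimal sketch of cluster-level assignment is shown below; the site names are hypothetical, and real CRTs typically use stratified or constrained randomization rather than a simple shuffle:

```python
import random

# Hypothetical participating sites; in a real CRT these might be clinics,
# hospitals, or entire health systems.
sites = ["Clinic A", "Clinic B", "Hospital C", "Hospital D", "System E", "System F"]

# Shuffle the sites reproducibly, then split them evenly between the two arms.
# Every patient treated at a site receives that site's assigned intervention.
rng = random.Random(2015)
shuffled = sites[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
arms = {
    "intervention": sorted(shuffled[:half]),
    "usual care": sorted(shuffled[half:]),
}
print(arms)
```

Because assignment happens at the site level, questions about consent and notification arise for every patient seen at an assigned site, which is precisely what motivates the ethical and regulatory discussion that follows.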

Pragmatic CRTs have the potential to answer important questions quickly and less expensively, especially in an era in which patient data can be accessed directly from electronic health records. Just as importantly, that knowledge can then be fed back to support a “learning healthcare system” that is constantly improving in its approach to patient care.  However, while cluster-randomized trials are not themselves new, their widespread use in patient-care settings raises a number of potential challenges.

For example: in a typical individually randomized clinical trial, patients are enrolled in a study only after first providing written informed consent. However, in a CRT, the entire hospital may be assigned to provide a given therapy. In such a situation, how should informed consent be handled? How should patients be notified that research is taking place, and that they may be part of it? Will they be able to “opt out” of the research? What will happen to the data collected during their treatment? And what do federal regulations governing clinical trials have to say about this? These are just a few of the questions raised by the use of pragmatic CRTs in patient-care settings.


The NIH Collaboratory Workshop on Pragmatic Cluster-Randomized Trials

The NIH Collaboratory Workshop on Pragmatic CRTs, held in Bethesda, Maryland, in July of 2013, convened a panel of experts in clinical trials, research ethics, and regulatory issues to outline the challenges associated with conducting pragmatic CRTs and to explore ways for better understanding and overcoming them. Over the course of the intensive 1-day workshop, conference participants identified key areas for focused attention. These included issues relating to informed consent, patient privacy, oversight of research activities, ensuring the integrity of data gathered during pragmatic CRTs, and special protections for vulnerable patient populations. The article by Anderson and colleagues provides a distillation of discussions that took place at the workshop, as well as noting possible directions for further work.

In the coming months and years, the NIH Collaboratory and its partners, including the National Patient-Centered Clinical Research Network (PCORnet), plan to build on this workshop experience. Together, they hope to explore these issues in greater detail and propose practical steps for moving forward with innovative clinical research methods, while at the same time maintaining robust protections for patients’ rights and well-being.


Jonathan McCall, MS, and Karen Staman, MS, contributed to this post.


Read the full text of the article here:

Anderson ML, Califf RM, Sugarman J. Ethical and regulatory issues of pragmatic cluster randomized trials in contemporary health systems. Clin Trials. 2015. doi:10.1177/1740774515571140. [Epub ahead of print]
For further reading:

Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA 2003;290(12):1624-32. PMID: 14506122; doi:10.1001/jama.290.12.1624.

The Ottawa Hospital Research Institute Ethical Issues in Cluster Randomized Trials Wiki.

Special Report: Ethical Oversight of Learning Health Systems. Hastings Center Report 2013;43(s1):S2–S44, Si–Sii.

Sugarman J, Califf RM. Ethics and regulatory complexities for pragmatic clinical trials. JAMA 2014;311(23):2381-2. PMID: 24810723; doi: 10.1001/jama.2014.4164.

PCORnet “Writing for Research” Webinars Available via Living Textbook


“Writing for the Clinical Research Setting,” a 4-part webinar series sponsored by the National Patient-Centered Clinical Research Network (PCORnet), is now available on Rethinking Clinical Trials. Originally given in the fall of 2014 as part of the PCORnet “Office Hours” series, the recorded sessions are presented by Living Textbook managing editor Jonathan McCall, MS, and can be accessed as streaming video under the Tools for Research tab, or directly here.

The series provides a basic introduction to various facets of writing for the clinical research environment. Individual sessions, each roughly 1 hour in length, include the following topics:

  • Writing Peer-Reviewed Research Articles
  • Organizing and Writing Your White Paper
  • Writing Guidance Documents
  • Managing the Process of Writing and Publication

National Institutes of Health Revises Clinical Trial Definition


The National Institutes of Health’s Office of Science Policy (OSP) has released a revised definition of the term “clinical trial.” According to the OSP, this change was made in order to:

…make the distinction between clinical trials and clinical research studies clearer and to enhance the precision of the information NIH collects, tracks, and reports on clinical trials. The change is not intended to expand the scope of the category of clinical trials. No changes have been made to the NIH definition of a “Phase III clinical trial.”

The revised definition, which is available in full here, along with detailed footnotes, now defines “clinical trial” as:

A research study in which one or more human subjects are prospectively assigned to one or more interventions (which may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes.


Collaboratory Phenotypes, Data Standards, and Data Quality Core Releases Data Quality Assessment White Paper


The NIH Collaboratory’s Phenotypes, Data Standards, and Data Quality Core has released a new white paper on data quality assessment in the setting of pragmatic research. The white paper, titled Assessing Data Quality for Healthcare Systems Data Used in Clinical Research (V1.0), provides guidance, based on the best available evidence and practice, for assessing data quality in pragmatic clinical trials (PCTs) conducted through the Collaboratory. Topics covered include an overview of data quality issues in clinical research settings, data quality assessment dimensions (completeness, accuracy, and consistency), and a series of recommendations for assessing data quality. Also included as appendices are a set of data quality definitions and review criteria, as well as a data quality assessment plan inventory.

The full text of the document can be accessed through the “Tools for Research” tab on the Living Textbook or can be downloaded directly here (PDF).


Collaboratory Biostatistics and Study Design Core Releases Guidance Documents


The NIH Collaboratory’s Biostatistics and Study Design Core has released the first in a series of guidance documents focusing on statistical design issues for pragmatic clinical trials. Each of the four guidance documents is intended to help researchers by providing a synthesis of current developments in the field, discussing possible future directions, and, where appropriate, making recommendations for application to pragmatic clinical research.

The guidance documents are available through the Living Textbook and can be accessed on the “Tools for Research” tab or directly here.


New Living Textbook Chapter – Learning Healthcare Systems

A new Living Textbook topic chapter, “Learning Healthcare Systems,” has just been published. The topic includes background information on the creation and evolution of the concept of the learning healthcare system and the key attributes that define such systems, as described by the Institute of Medicine:

A learning healthcare system is [one that] is designed to generate and apply the best evidence for the collaborative healthcare choices of each patient and provider; to drive the process of discovery as a natural outgrowth of patient care; and to ensure innovation, quality, safety, and value in health care [1].

Also included in the topic chapter are ethical and regulatory implications for learning healthcare systems, patient and public engagement, the application of electronic health records and other information technology, logistical and organizational challenges to building learning healthcare systems, and early examples of such systems in practice.


Reference

1. Institute of Medicine. The Learning Healthcare System: Workshop Summary. Olsen L, Aisner D, McGinnis JM, eds. Washington, DC: National Academies Press; 2007. Available at: http://www.iom.edu/Reports/2007/The-Learning-Healthcare-System-Workshop-Summary.aspx. Accessed April 4, 2014.