
Final Rule for Clinical Trial Data Reporting Published

On Friday of last week, the US Department of Health and Human Services (HHS) published a long-awaited final rule (PDF) governing the registration of clinical trials and the reporting of trial data with ClinicalTrials.gov. The final rule and a complementary policy issued by the National Institutes of Health (NIH) represent the formal codification and clarification of requirements first described in Section 801 of the 2007 Food and Drug Administration Amendments Act (FDAAA). These requirements oblige research sponsors or other responsible parties to register most kinds of clinical trials with an accepted, publicly available registry (such as ClinicalTrials.gov) and to report certain key data about the trial design, study population, and outcomes.

However, despite the enactment of FDAAA in 2007, compliance with many of its requirements has generally been poor, as both scholarly investigations and media reports have documented. Although trial registration has improved during this interval, possibly because many scientific journals now refuse to publish reports from unregistered studies, basic summary data (including information about adverse events) from many clinical trials have gone unreported in the ClinicalTrials.gov registry, with academic researchers among the worst offenders for late reporting or failure to report. In addition, although Section 801 of FDAAA includes penalties for not meeting reporting obligations, no enforcement actions have yet been taken.

The final rule, which goes into effect in January 2017, clarifies reporting requirements and responsibilities, provides checklists for research sponsors, establishes penalties for failing to fulfill reporting obligations in a timely fashion, and obligates sponsors to furnish the full research protocol to ClinicalTrials.gov. Importantly, the HHS rule and NIH policy also articulate new standards for gathering and reporting data about the race and ethnicity of trial participants—information that has often been lacking from many trial datasets.

For further details:

NIH news release summarizing new reporting requirements

National Public Radio web article and audio segment on the final rule (Francis Collins [NIH], Robert Califf [FDA], and Monique Anderson [Duke Clinical Research Institute] interviewed)

Summary of Final Rule in New England Journal of Medicine

ClinicalTrials.gov summary on Final Rule/NIH Policy

NIH Policy on Funding Opportunity Announcements for Clinical Trials

NIH Policy on Good Clinical Practice Training for NIH Awardees

 

New NIH Collaboratory resource for the transparent reporting of PCTs


The NIH Collaboratory has developed a tool to assist authors in the complete and transparent reporting of their pragmatic clinical trials (PCTs). In the PCT Reporting Template, users will find descriptions of reporting elements based on CONSORT guidance as well as on expertise from the NIH Collaboratory Demonstration Projects and Core working groups.

Particularly relevant to PCTs are recommendations on how to report the use of data from electronic health records. Other elements of importance to PCTs include reporting wider stakeholder engagement, monitoring for unanticipated changes in study arms, and specific approaches to human subjects protection. The template contains numerous links to online material in the Living Textbook, CONSORT, and the Pragmatic–Explanatory Continuum Indicator Summary tool known as PRECIS-2.

This resource is intended to assist authors in developing primary journal publications. It will be updated over time as new best practices emerge for the transparent reporting of PCTs.

Download the PCT Reporting Template.



This work was supported by a cooperative agreement (U54 AT007748) from the NIH Common Fund for the NIH Health Care Systems Research Collaboratory. The views presented in this document are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.


Originally published on September 1, 2016.



American Journal of Bioethics publishes special issue on ethics of research in usual care settings


The American Journal of Bioethics has published a special issue on the ethics of research in usual care settings. These publications were supported by a bioethics supplement awarded to the NIH Collaboratory’s Regulatory/Ethics Core by the NIH Office of the Director. The issue includes an introduction from Jeremy Sugarman, MD, MPH, MA, along with 5 additional articles.


Journal Editors Propose New Requirements for Data Sharing

On January 20, 2016, the International Committee of Medical Journal Editors (ICMJE) published an editorial in 14 major medical journals proposing that, as a condition of publication in one of their member journals, clinical researchers must agree to share the deidentified data set used to generate their results (including tables, figures, and appendices or supplementary material) no later than six months after publication. By changing the requirements for manuscripts they will consider for publication, the editors aim to ensure reproducibility (independent confirmation of results), foster data sharing, and enhance transparency. To meet the new requirements, authors will need to include a data sharing plan as a component of clinical trial registration, specifying where the data will be stored and the mechanism for sharing them.

Evolving Standards for Data Reporting and Sharing

As early as 2003, the National Institutes of Health published a data sharing policy for research funded through the agency, stipulating that “Data should be made as widely and freely available as possible while safeguarding the privacy of participants, and protecting confidential and proprietary data.” Under this policy, federally funded studies receiving over $500,000 per year were required to have a data sharing plan describing how data would be shared, to make shared data available in a usable form for an extended period of time, and to use the least restrictive method for sharing research data.

In 2007, Congress enacted the Food and Drug Administration Amendments Act (FDAAA). Section 801 of the Act requires study sponsors to report certain kinds of clinical trial data within a specified interval to the ClinicalTrials.gov registry, where they are made available to the public. Importantly, this requirement applies to any study classified as an “applicable clinical trial” (typically, an interventional clinical trial), regardless of whether it is funded by the NIH or another federal agency, by industry, or by academic sources. However, recent academic and journalistic investigations have demonstrated that overall compliance with FDAAA requirements is relatively poor.

In 2015, the Institute of Medicine (now the National Academy of Medicine) published a report advocating responsible sharing of clinical trial data to strengthen the evidence base, allow replication of findings, and enable additional analyses. These recommendations are complemented by ongoing initiatives aimed at widening access to clinical trial data and improving results reporting, including the Yale University Open Data Access project (YODA), the joint Duke Clinical Research Institute/Bristol-Myers Squibb Supporting Open Access to clinical trials data for Researchers initiative (SOAR), and the international AllTrials project.

Responses to the Draft ICMJE Policy

The ICMJE recommendations arrive amid a growing focus on the integrity of clinical research, including the reproducibility of results, transparent and timely reporting of trial results, and widespread data sharing. The release of the draft policy is amplifying ongoing national and international conversations taking place on social media and in prominent journals. Although many researchers and patient advocates have hailed the policy as timely and needed, others have expressed concerns, including questions about implementation and possible unforeseen consequences.

The ICMJE is welcoming feedback from the public regarding the draft policy at www.icmje.org and will continue to collect comments through April 18, 2016.

Resources

Journal editors publish editorial in 14 major medical journals stipulating that clinical researchers must agree to share a deidentified data set: Sharing clinical trial data: A proposal from the International Committee of Medical Journal Editors (Annals of Internal Medicine version). January 20, 2016.

A New England Journal of Medicine editorial in which deputy editor Dan Longo and editor-in-chief Jeffrey Drazen discuss details of the ICMJE proposal: Data sharing. January 21, 2016.

A follow-up editorial in the New England Journal of Medicine by Jeffrey Drazen: Data sharing and the Journal. January 25, 2016.

Editorial in the British Medical Journal: Researchers must share data to ensure publication in top journals. January 22, 2016.

Commentary in Nature from Stephan Lewandowsky and Dorothy Bishop: Research integrity: Don’t let transparency damage science. January 25, 2016.

National Public Radio interview with Harlan Krumholz on Morning Edition: Journal editors to researchers: Show everyone your clinical data. January 27, 2016.

Institute of Medicine (now the National Academy of Medicine) report advocating for responsible sharing of clinical trial data: Sharing clinical trial data: maximizing benefits, minimizing risk. National Academies Press, 2015.

Rethinking Clinical Trials Living Textbook Chapter, Acquiring and using electronic health record data, which describes the use of data collected in clinical practice for research and the complexities involved in sharing data. November 3, 2015.

NIH Health Care Systems Research Collaboratory data sharing policy. June 23, 2014.

Commentary from Richard Platt and Joakim Ramsberg in New England Journal of Medicine on challenges of data sharing from healthcare systems research. April 20, 2016.

List of International Committee of Medical Journal Editors (ICMJE) member journals.

Applying PRECIS Ratings to Collaboratory Pragmatic Trials

A new article published in the journal Trials provides a look at how the Pragmatic–Explanatory Continuum Indicator Summary, or PRECIS, rating system can be applied to clinical trial designs in order to examine where a given study sits on the spectrum of explanatory versus pragmatic clinical trials.

The PRECIS-2 criteria are used to rate study designs as more or less “pragmatic” according to multiple domains that include participant eligibility, recruitment methods, setting, organization, analysis methods, primary outcomes, and more. In this context, “pragmatic” refers to trials that are designed to study a therapy or intervention in a “real world” setting similar or identical to the one in which the therapy will actually be used. Pragmatic trials stand in contrast to explanatory trials, which are typically designed to demonstrate the safety and efficacy of an intervention under highly controlled conditions and in carefully selected groups of participants, but whose results may be difficult to generalize to larger or more varied populations.

PRECIS-2 Wheel, used to evaluate where a given trial design resides on the explanatory–pragmatic spectrum. Kirsty Loudon et al. BMJ 2015;350:h2147. Copyright 2015 by British Medical Journal Publishing Group. Used by permission.

Clinical trials are almost never wholly “explanatory” or wholly “pragmatic.” Instead, many studies exist somewhere on a spectrum between these two categories. However, understanding how these different attributes apply to trials can help researchers design studies that are optimally fit for purpose, whether that purpose is to describe a biological mechanism (as in an explanatory trial) or to show how effective an intervention is when used across a broad population of patients (as in a pragmatic trial).
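To illustrate how such ratings might be organized, the short Python sketch below scores a hypothetical trial on a 1 (very explanatory) to 5 (very pragmatic) scale across PRECIS-2-style domains and prints a simple summary. The domain names, scores, and the averaging step are illustrative only; they are not drawn from the PRECIS-2 tool’s official scoring guidance or from any Collaboratory trial.

```python
# Illustrative sketch: scoring a hypothetical trial on PRECIS-2-style domains.
# Scores run from 1 (very explanatory) to 5 (very pragmatic); the domain names
# and values below are examples only, not ratings of any real study.

from statistics import mean

precis_scores = {
    "eligibility": 5,            # broad, routine-care eligibility criteria
    "recruitment": 4,            # recruited through usual clinic visits
    "setting": 5,                # conducted in ordinary care settings
    "organization": 4,           # delivered with existing staff and resources
    "flexibility_delivery": 4,   # intervention delivery allowed to vary
    "flexibility_adherence": 5,  # no special measures to enforce adherence
    "follow_up": 4,              # follow-up via routinely collected data
    "primary_outcome": 5,        # outcome directly relevant to patients
    "primary_analysis": 5,       # intention-to-treat on all available data
}

def summarize(scores: dict[str, int]) -> None:
    """Print each domain score and a simple overall summary."""
    for domain, score in sorted(scores.items()):
        print(f"{domain:<22} {score}")
    print(f"\nmean score: {mean(scores.values()):.1f} (higher = more pragmatic)")
    least = min(scores, key=scores.get)
    print(f"least pragmatic domain: {least} ({scores[least]})")

if __name__ == "__main__":
    summarize(precis_scores)
```

In practice, PRECIS-2 ratings are displayed on the wheel shown above rather than averaged; the mean here is only a convenient one-number summary for the sketch.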

In their article in Trials, authors Karin Johnson, Gila Neta, and colleagues applied PRECIS-2 criteria to 5 pragmatic clinical trials (PCTs) being conducted through the NIH Collaboratory. Each trial was rated as “highly pragmatic” across the multiple PRECIS-2 domains, highlighting the tool’s potential usefulness in guiding decisions about study design, but also revealing a number of challenges in applying it and interpreting the results.

Study authors Johnson and Neta will be discussing their findings during the NIH Collaboratory’s Grand Rounds on Friday, January 22, 2016 (an archived version of the presentation will be available the following week).


Johnson KE, Neta G, Dember LM, Coronado GD, Suls J, Chambers DA, Rundell S, Smith DH, Liu B, Taplin S, Stoney CM, Farrell MM, Glasgow RE. Use of PRECIS ratings in the National Institutes of Health (NIH) Health Care Systems Research Collaboratory. Trials. 2016;17(1):32. doi: 10.1186/s13063-016-1158-y. PMID: 26772801. PMCID: PMC4715340.
You can read more about the NIH Collaboratory PCTs featured as part of this project at the following links:

ABATE (Active Bathing to Eliminate Infection)

LIRE (A pragmatic trial of Lumbar Image Reporting with Epidemiology)

PPACT (Collaborative Care for Chronic Pain in Primary Care)

STOP-CRC (Strategies & Opportunities to Stop Colon Cancer in Priority Populations)

TIME (Time to Reduce Mortality in End-Stage Renal Disease)

Additional Resources

An introductory slide set on PCTs (by study author Karin Johnson) is available from the Living Textbook:

Introduction to Pragmatic Clinical Trials

The University of Colorado Denver - Anschutz Medical Campus publishes an electronic textbook on pragmatic trials:

Pragmatic Trials: A Workshop Handbook

New White Paper from Collaboratory PRO Core on the Impact of Patient-Reported Outcomes on Clinical Practice

Patient-reported outcome (PRO) measures are often used in pragmatic clinical trials to assess endpoints that are meaningful to stakeholders. These measures may also support patient care, although the evidence is mixed about whether PROs (1) improve patient-provider communication, clinical decision-making, and patient satisfaction; (2) enhance patient outcomes; and (3) help ensure better quality of care from a healthcare systems perspective. A new white paper from the Collaboratory Patient-Reported Outcomes (PRO) Core examines the available evidence in the literature to determine when PROs have the potential to provide added value to patient care.

The full text of the white paper can be found here: Impact of Patient-Reported Outcomes on Clinical Practice_V1.0

Study Recommends Shared Decision Making for Research on Medical Practices

Research on medical practices (ROMP) includes medical record reviews, comparative effectiveness research, quality improvement interventions, and point-of-care randomization, and may improve the efficiency, quality, and cost-effectiveness of medical care.

In a study by Maureen Kelley and colleagues recently published in The American Journal of Bioethics, researchers found that patients may not fully understand the rationale for ROMP or the extent to which this type of research already exists. Patients care most about how risks and consent are managed and communicated within the physician-patient relationship, view research as separate from usual care, and place their trust in their physician, whom they rely on to identify and filter risks.

Because current approaches to oversight, risk assessment, and informed consent are poorly suited to ROMP, the authors suggest a model of Shared Decision Making (SDM) as an approach to disclosure, consent, randomization, and data sharing. With SDM, the physician engages the patient in the decision-making process and encourages conversations regarding the uncertainty of treatment options.

In a related commentary, Dr. Jeremy Sugarman urges consideration of whether this analytic frame is appropriate for ROMP, given the important differences between the primary aims of research and clinical care: in research, the primary goal is to generate information, while in clinical care, the primary goal is to benefit patients.

Reference: Kelley M, James C, Alessi Kraft S, et al. Patient perspectives on the learning health system: The importance of trust and shared decision making. Am J Bioeth. 2015;15:4–17. PMID: 26305741. doi: 10.1080/15265161.2015.1062163.
For more information on ROMP, see the Grand Rounds presentation from December 2014: A RoMP through the Empirical Ethics of Pragmatic Clinical Trials

ClinicalTrials.gov Analysis Dataset Available from CTTI

As part of a project that examined the degree to which sponsors of clinical research are complying with federal requirements for the reporting of clinical trial results, the Clinical Trials Transformation Initiative (CTTI) and the authors of the study are making the primary dataset used in the analysis available to the public. The full analysis dataset, study variables, and data definitions are available as Excel worksheets from the CTTI website and on the Living Textbook’s Tools for Research page.
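For readers who want to explore the dataset, a minimal sketch along the following lines (in Python with pandas) could be used to tabulate reporting rates by sponsor type. The file name and column names are placeholders, not the actual variable names; consult the data definitions worksheet from CTTI for the real ones.

```python
# Minimal sketch for exploring the CTTI results-reporting dataset.
# The file name and column names here are placeholders; check the
# data definitions worksheet distributed by CTTI for actual variables.

import pandas as pd

# Load the analysis dataset (assumes an Excel worksheet as distributed).
df = pd.read_excel("ctti_results_reporting_dataset.xlsx")

# Hypothetical columns: 'sponsor_type' (industry / NIH / other) and
# 'months_to_results' (months from trial completion to results posting,
# NaN if results were never posted).
df["reported_within_12mo"] = df["months_to_results"] <= 12
df["reported_ever"] = df["months_to_results"].notna()

# Percentage of trials reporting within 12 months and at any time,
# grouped by sponsor type.
summary = (
    df.groupby("sponsor_type")[["reported_within_12mo", "reported_ever"]]
      .mean()
      .mul(100)
      .round(1)
)
print(summary)
```

Run against the real worksheet with the documented variable names, this kind of grouping would support comparisons such as industry-sponsored versus NIH-sponsored reporting rates, as described in the study below.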


Researchers Find Poor Compliance with Clinical Trials Reporting Law

A new analysis of data from the ClinicalTrials.gov website shows that despite federal laws requiring the public reporting of results from clinical trials, most research sponsors fail to do so in a timely fashion—or, in many cases, at all. The study, published in the March 12, 2015 issue of the New England Journal of Medicine, was conducted by researchers at Duke University and supported by the NIH Collaboratory and the Clinical Trials Transformation Initiative (CTTI). The study’s authors examined trial results as reported to ClinicalTrials.gov and evaluated the degree to which research sponsors were complying with a federal law that requires public reporting of findings from clinical trials of medical products regulated by the U.S. Food and Drug Administration (FDA).

“We thought it would be a great idea to see how compliant investigators are with results reporting, as mandated by law,” said lead author Dr. Monique Anderson, a cardiologist and assistant professor of medicine at Duke University.

Monique L. Anderson, MD. Photo courtesy of Duke Medicine.

Using a publicly available database developed and maintained at Duke by CTTI, the authors were able to home in on trials registered with ClinicalTrials.gov that were highly likely to have been conducted within a 5-year study window and to be subject to the Food and Drug Administration Amendments Act (FDAAA). This federal law, which was enacted in 2007, includes provisions that obligate sponsors of non-phase 1 clinical trials testing medical products to report study results to ClinicalTrials.gov within 12 months of the trial’s end. It also describes allowable exceptions for failing to meet that timeline.

However, when the authors analyzed the data, they found that relatively few studies overall—just 13 percent—had reported results within the 12-month period prescribed by FDAAA, and less than 40 percent had reported results at any time between the enactment of FDAAA and the 5-year benchmark.

“We were really surprised at how untimely the reporting was—and that more than 66 percent hadn’t reported at all over the 5 years [of the study interval],” said Dr. Anderson, noting that although prior studies have explored the issue of results reporting, they have until now been confined to examinations of reporting rates at 1 year.

Another unexpected result was the finding that industry-sponsored studies were significantly more likely to have reported timely results than were trials sponsored by the National Institutes of Health (NIH) or by other academic or government funding sources. The authors noted that despite a seemingly widespread lack of compliance with both legal and ethical imperatives for reporting trial results, there has so far been no penalty for failing to meet reporting obligations, even though FDAAA spells out punishments that include fines of up to $10,000 per day and, in the case of NIH-sponsored trials, loss of future funding.

“Academia needs to be educated on FDAAA, because enforcement will happen at some point. There’s maybe a sense that ‘this law is for industry,’ but it applies to everyone,” said Anderson, who points out that this study is being published just as the U.S. Department of Health and Human Services and the NIH are in the process of crafting new rules that deal specifically with ensuring compliance with federal reporting laws.

According to Anderson, increased awareness of the law, coupled with stepped-up enforcement and infrastructure designed to inform researchers about their reporting obligations, have the potential to improve compliance with both the letter and the spirit of the regulations. “I think reporting rates will skyrocket after the rulemaking,” she says.

In the end, Anderson notes, reporting clinical trials results in order to contribute to scientific and medical knowledge is as much an ethical obligation for researchers as a legal one: “It’s something we really promise to every patient when they enroll on a trial.”


Read the full article here:

Anderson ML, Chiswell K, Peterson ED, Tasneem A, Topping J, Califf RM. Compliance with results reporting at ClinicalTrials.gov. N Engl J Med. 2015;372:1031-9. DOI: 10.1056/NEJMsa1409364.
Additional reading:

"Results of many clinical trials not being reported" (NPR)

"Clinical trial sponsors fail to publicly disclose report results, research shows" (Forbes.com)

Principles and Guidelines for Reporting Preclinical Research


In June 2014, the NIH held a joint workshop with the Nature Publishing Group and Science on the issue of reproducibility and rigor of research findings. The workshop’s goal was to strengthen approaches to support biomedical research that is reproducible, robust, and transparent. An editorial appears in the November 5, 2014, online edition of Nature.

Workshop participants included journal editors representing more than 30 basic/preclinical science journals in which NIH-funded investigators have most often published. Attendees reached consensus on a set of principles and guidelines to facilitate the interpretation and repetition of experiments as they have been conducted in published studies. The principles endorsed by the group cover five areas, which each journal is encouraged to delineate in its Information for Authors section or another public location:

  • Rigorous statistical analysis: Outline the journal’s policy for statistical analysis and have a method of checking the statistical accuracy of submissions
  • Transparency in reporting: Provide a checklist of reporting standards (replicates, statistics, randomization, blinding, sample-size estimation, inclusion/exclusion criteria) and require authors to state where this information is located in the manuscript
  • Data and material sharing: Stipulate that all datasets on which the conclusions of the paper rely must be made available upon request, where appropriate, during manuscript review and upon publication
  • Consideration of refutations: Include the journal’s policy for considering refutations of the paper, subject to its usual standards of quality
  • Best practices guidelines: Establish methods for dealing with image-based data and biological material (antibodies, cell lines, animals)

The existence of these guidelines does not preclude the need for replication or independent verification of research results, but should make it easier to perform such replication. Journals endorsing the proposed principles and guidelines are listed here.