Advancing the Use of Real-World Evidence with a Noteworthy Industry Collaborative Pilot Project

Published

August 2018

By

Amy Abernethy, MD, PhD, Nick Brown

On Tuesday, July 10th, Friends of Cancer Research held a meeting called The Future of Real-World Evidence. Friends is a key driver in D.C. behind collaborative efforts to move healthcare policies such as the 21st Century Cures Act forward. The meeting convened stakeholders in cancer real-world evidence (RWE), including Flatiron, many other cancer data organizations, industry partners, patient advocacy organizations and, importantly, the FDA.

This project was in the planning stages for quite some time; it began with several conversations with senior leadership at PCORI about how we might develop a general framework for generating and analyzing cancer-specific real-world endpoints, which could be applied and validated across multiple RWE datasets. The focus was on endpoints because of their critical role in evidence generation and frequent inconsistencies.

The ultimate goal of this project was to see how multiple datasets representing different data sources and populations could contribute more complete and reliable understanding to an important current question in cancer care. Friends led the effort and brought in a number of data providers, including Cota, IQVIA, PCORnet sites, Cancer Research Network / Kaiser Permanente, Mayo Clinic/OptumLabs, and Flatiron, as well as other stakeholders like FDA and NCI. We chose a common question, and agreed on common analytic plans and common outputs.

The project focused on immunotherapies in lung cancer, asking the following question: "What outcomes can be evaluated for aNSCLC patients treated with immune checkpoint inhibitors?"

Once the participants were convened and the topic outlined, things moved along surprisingly quickly. Participants had just two short months to generate and analyze the datasets in preparation for public review at the meeting planned for July 10.

At the meeting, the output from each vendor was displayed side-by-side (see here, page 8). One of the main areas of focus during the discussion was the remarkable similarity in demographic and clinical characteristics despite differences in data source (e.g., electronic health records vs. claims; academic medical centers vs. health systems vs. community oncology practices).

Public discussion about the results presented on July 10 was fascinating. First, there was a lot of enthusiasm about the collaboration and how quickly the work itself got done (in just a few months!), which, as we know, is virtually unheard of in the cancer research world. Second, attendees were particularly impressed by the data vendors' willingness to collaborate in a pre-competitive way and contribute their data to the effort. These observations were repeatedly referenced by the broad range of stakeholders who participated on the panels throughout the day.

Beyond answering the research question at hand, what did this project teach us?

As expected, differences in datasets lead to differences in results; it is important to know the characteristics of a dataset in order to put research findings into context.

- Different data sources have different advantages and limitations depending on the topic. For example, healthcare claims data may represent broader populations, whereas electronic health records may contain richer clinical information. Laying datasets side-by-side helps clarify which to choose for which purpose.
- Different cancer-focused endpoints help characterize a result. For example, overall survival is a relatively consistent "hard" endpoint (i.e., patients are either alive or deceased), but it may be difficult to ascertain depending on the availability of mortality data; other measures, such as time-to-treatment-discontinuation or real-world progression-free survival, may be more informative.
- How the endpoints are calculated matters a lot.

This is a major milestone: the industry has been working toward standardization and quality control of real-world evidence, and we believe this pilot is a critical step in surfacing and documenting, side by side, what needs to be standardized. At Flatiron, much as with open-source software, we believe that consistent standards are not only a public good but will also act as a catalyst to move the whole industry forward more quickly.

In addition, there was a lot of talk about the importance of a high-quality, national mortality dataset with enough recency and linkability to support analyses like these. This has been a big focus for us at Flatiron and we are delighted that our recent publication could contribute to this conversation: "Development and Validation of a High-Quality Composite Real-World Mortality Endpoint," published by Health Services Research and about which our colleague Melissa Curtis wrote on Flatiron's blog, here.

Most importantly, we're excited that this project has helped shine a light on the potential of RWE to be used in innovative and high-impact ways, ultimately accelerating the time it takes to get the most effective therapies into the hands of patients.
