In a previous podcast, we heard the first part of the story of how big data solutions are being applied to federal programs at the National Institutes of Health and the National Cancer Institute. This technical solution has a very human face, given the global interest in and reach of cancer research and the community of providers, patients, and families affected by it.
In the second half of our conversation, we dug into the details to learn more about the program that McCargo and his team support at NIH, and the sheer volume of data the team manages. “There are numerous documents that support a clinical trial for cancer treatment, or any medical treatment,” McCargo explained. The team at ARServices collects and archives the documents associated with protocols, such as letters of intent, amendments, and privacy agreements. Those documents, perhaps unremarkable on their own, collectively enable the National Cancer Institute to analyze data more quickly.
“Essentially, we are looking at thousands of records and instances. We analyze them, we aggregate them, and we present the aggregate of that finding for analysis to NIH scientists.” This aggregated data helps the NIH better evaluate the effectiveness of drugs being tested in clinical cancer trials.
Ultimately, big data shortens clinical trial development and approval timelines.
To hear more about how the National Cancer Institute uses big data to analyze clinical trials, listen below.