This week’s question: What is the most important thing that could happen in 2012 to ensure better utilization of big data—housed in EMRs or other platforms—for drug development?
There are massive amounts of data from the mapping of the human genome, the expansion of health IT, and the increased use of electronic health records. And yet "big data" that could advance the development of new therapies for disease remains largely untapped. We know the potential behind these vast troves of information. Earlier this week, McKinsey Quarterly outlined how various sectors are grappling with the best ways to manage "big data" to guide their decisions.
The Office of the National Coordinator (ONC) at HHS has overseen progress in doctors' and hospitals' use of health IT to enhance care; but how can those of us in the medical research advocacy community make sense of the sea of data and make it useful? The Medicare and Medicaid incentive programs for the "meaningful use" of certified EHR technology (that is, its use by providers to achieve significant improvements in care) are taking off, health information exchange standards continue to be defined, and patient engagement with health information is starting to increase through initiatives such as Blue Button. The opportunities to leverage big data for medical solutions are plentiful. And with the Healthcare Information and Management Systems Society (HIMSS) conference underway this week, it's as good a time as any to contemplate the possibilities.
But where do we start? And what matters most in the next year? We asked a few individuals on the frontlines of health technology and research innovation to weigh in:
Leslie Power, Practice Owner, Health IT, and Will FitzHugh, Chief Science Officer and Practice Owner, Research, 5AM Solutions
Straightforward Broad Consent Mechanism: The issue of consent is not about big data or technology; it is trust. The most efficient, most cost-effective way to further a national dialogue on the benefits of research and participation would be to broaden consent beyond a specific study. Many people (perhaps most?) would participate in research, but only with clarity on the benefits and risks, and control over who can use their data and for what purpose.
Rich Elmore, Query Health Initiative Coordinator
Standards for Distributed Population Health Queries: 2012 should lay the foundation for standards that enable big data analytics in a distributed environment, letting researchers send questions to the data while patient-level information stays secure behind each data source's firewall.
Kris Joshi, Global Vice President – Healthcare, Oracle
Prospective Informed Consent:
As technical and security barriers to extracting data from EMRs decrease and motivation to collaborate grows, one significant hurdle remains: patient consent. In 2012, institutions that want to participate in clinical R&D collaborations would do well to begin establishing patient consent and privacy guidelines that support R&D processes.
Since our inception, FasterCures has spotlighted the need to leverage health IT for research purposes. We've comprehensively analyzed the issue, highlighted solutions that are in play and yielding outcomes, and provided recommendations. To accelerate the process of turning ideas into therapies that benefit patients, we, the medical research advocacy community, must find tangible ways to make sure our health IT framework is built to improve care and advance cures. For more information, read our recent Still Thinking Research report.
Straightforward Broad Consent Mechanism
By Leslie Power and Will FitzHugh, 5AM Solutions
Electronic medical records (EMRs) can transform our healthcare system by making medical data accessible and available to everyone concerned with a patient's health. Significant progress has been made, particularly with regard to technology adoption, but it has been isolated. So suppose we wave a magic wand and create widespread EMR adoption: what value could we attain, who would benefit, and when?
Researchers who want to improve standards of care and identify predictive markers for treatment efficacy and adverse events are one set of cross-industry stakeholders who could benefit immediately. Yet significant barriers to value would still remain; the lack of a consistently applied, straightforward consent mechanism and the lack of support for storing and using molecular and genomic data are just two.
The issue of consent is not about big data or technology; it's trust. The solution is a national dialogue on the social, economic, and personal benefits of research and participation. Many people (perhaps most?) would participate in research, but only with clarity on the benefits and risks, and control over who can use their data and for what purpose. With expanded adoption of EMRs, this discussion becomes easier. If an EMR contains clinical and genomic data on a person, trips to a research center may never be required: people can consent to have their clinical and genomic data used for research purposes, which would largely be computational efforts in data mining and biomarker discovery. Whether we implement this with existing or novel technologies, the most efficient, most cost-effective way to further the dialogue would be to broaden consent beyond a specific study. This would address the patchwork quilt of consent requirements currently burdening biomedical research. Efforts like 'Consent To Research' (http://www.weconsent.us) target this issue.
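One way to picture such a broad consent mechanism is as a machine-readable record attached to the EMR, capturing the categories of research and classes of users a person has opted into, rather than a single study protocol. The sketch below is purely illustrative; the class, field, and category names are assumptions, not part of any existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class BroadConsent:
    """Hypothetical broad consent record: scoped by research purpose
    and user class, not tied to one specific study."""
    patient_id: str
    permitted_categories: set = field(default_factory=set)  # what the data may be used for
    permitted_users: set = field(default_factory=set)        # who may use it

    def allows(self, category: str, user_type: str) -> bool:
        # A proposed use is permitted only if both the purpose and the
        # class of user fall within what the patient consented to.
        return category in self.permitted_categories and user_type in self.permitted_users

# A patient consents once to computational research by academic groups,
# with no need to re-consent for each individual study.
consent = BroadConsent(
    patient_id="p-001",
    permitted_categories={"biomarker_discovery", "data_mining"},
    permitted_users={"academic"},
)

print(consent.allows("biomarker_discovery", "academic"))   # True
print(consent.allows("biomarker_discovery", "commercial")) # False
```

The point of the sketch is that "control over who can use their data for what purpose" becomes an enforceable check at query time, rather than a stack of study-specific paper forms.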
The issue of broader consent becomes even more important when molecular data is considered. This data has value across areas of research, such as identifying risk factors for rare and common diseases and finding predictive markers for efficacy and adverse reactions to pharmaceuticals. Requiring narrow consent for each specific use limits efficiency. In addition, current EMRs are not set up to store such data, and the fact that DNA sequence can be generated by different devices with different quality parameters renders cross-platform comparisons suspect. Modules to securely store and process DNA genotype data need to be integrated into EMR systems, and those modules should focus on creating a layer of interpretation above the raw genotype data.
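A rough way to see that "layer of interpretation above the raw genotype data" is that the EMR module stores platform-specific raw calls but exposes only normalized, quality-filtered interpretations, so downstream consumers never compare raw output across devices. Everything in this sketch, including the field names and the threshold value, is an illustrative assumption.

```python
# Raw calls carry the sequencing platform and a per-call quality score;
# because quality parameters differ across devices, raw calls from
# different platforms should not be compared directly.
raw_calls = [
    {"variant": "rs12345", "genotype": "A/G", "platform": "vendor_x", "quality": 0.99},
    {"variant": "rs67890", "genotype": "C/C", "platform": "vendor_y", "quality": 0.42},
]

QUALITY_THRESHOLD = 0.90  # illustrative cutoff; in practice this would be platform-calibrated

def interpret(calls):
    """Interpretation layer: keep only high-confidence calls and drop
    platform detail, so consumers see a normalized variant -> genotype map."""
    return {
        c["variant"]: c["genotype"]
        for c in calls
        if c["quality"] >= QUALITY_THRESHOLD
    }

print(interpret(raw_calls))  # {'rs12345': 'A/G'}
```

The design choice is that research queries run against the interpreted layer, which is comparable across institutions, while the raw platform-specific calls stay archived underneath it.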
Research focused on improving healthcare requires large sample sizes; the effects of genetic factors and other methods for personalizing medicine can be modest. Only by creating consented, large, nationwide sample sets can we reach the statistical power to identify these factors and build the predictive models to personalize healthcare and make more efficient use of our current crop of therapeutic, diagnostic, and preventative tools. It would take a magic wand to make all of this happen in 2012, but progress on these fronts is the fastest route to faster cures and better health.
Standards for Distributed Population Health Queries
By Rich Elmore, Query Health Initiative Coordinator
"Big data" is typically managed in large pooled data sets that combine data from many settings of care. There are terrific applications of pooled data, including registries and successful large research databases, but pooling also raises critical issues of policy and strategy.
From a policy perspective, pooled data approaches are problematic. Large pools of PHI are targets for attack by bad actors. Many PHI-holders also have their own consent agreements with their patients, and it is difficult to honor these differing agreements when PHI is pooled in one place. Additionally, HIPAA requires covered entities to control the flow of PHI, either directly or through agreements. When data is pooled, the party pooling the data must have a business associate agreement or data use agreement (in the case of research databases) with each covered entity that contributes data to the pool, with the same (or similar) terms. This can be impracticable for the third party or undesirable for the covered entities, which often must accept non-negotiable terms in order to pool their data.
From a strategic standpoint, pooled data is inflexible, quickly stale, and prone to inaccuracy. Pooled approaches are generally unsustainable: their benefits are too indirect to support the operational costs and complexity. Furthermore, healthcare organizations are unwilling to lose control of their information, not just for policy reasons but also for competitive ones.
Yet the absence of a standards-based alternative has given rise to pooled data approaches despite all of these substantial drawbacks.
2012 is the defining moment for new standards that will enable big data analytics in a distributed environment. Query Health, an ONC-sponsored open government initiative, is defining the standards and specifications for distributed population queries. Researchers will be able to leverage these standards to "send questions to the data." Questions can be sent to data sources including EHRs, HIEs, PHRs, payers' clinical records, or any other clinical record, while aggregate responses leave patient-level information secure behind the data source's firewall. Aggregate responses can support questions related to disease outbreaks, quality, comparative effectiveness research (CER), post-market surveillance, performance, utilization, public health, prevention, resource optimization, and many other topics.
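In spirit (not in the actual Query Health specification), "sending questions to the data" means each data source answers the same question against its own records and returns only an aggregate, so patient-level information never crosses its firewall. The record shapes and function names below are assumptions chosen for illustration.

```python
# Each site holds its own patient-level records; these never leave the site.
site_a_records = [{"age": 67, "dx": "diabetes"}, {"age": 34, "dx": "flu"}]
site_b_records = [{"age": 72, "dx": "diabetes"}, {"age": 70, "dx": "diabetes"}]

def local_count(records, question):
    """Run the distributed question locally; only the aggregate count
    is returned to the coordinator, never the records themselves."""
    return sum(1 for r in records if question(r))

# The "question" distributed to every site, e.g. for surveillance:
# how many patients aged 65+ have a diabetes diagnosis?
question = lambda r: r["dx"] == "diabetes" and r["age"] >= 65

# The coordinator combines the aggregate responses from each site.
aggregate = local_count(site_a_records, question) + local_count(site_b_records, question)
print(aggregate)  # 3
```

Contrast this with the pooled approach criticized above: here no business associate agreement for a central pool is needed for the raw data, because only counts move.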
These new standards will dramatically cut the cycle time for deploying new questions from years to days, making a learning health system possible.
The focus of 2012 should be laying the foundation for success: defining the standards and services for distributed population health queries. This is one high-impact way to leverage the potential of big data for research. For more information, visit QueryHealth.org.
Prospective Informed Consent
By Kris Joshi, Global Vice President – Healthcare, Oracle
Utilizing EMR data for secondary purposes, particularly to accelerate drug R&D, has been an industry aspiration for a long time. However, a lack of incentives for data sharing, technical hurdles, and security concerns kept that vision out of reach for many years. In the last couple of years, some leading institutions have finally made significant progress on multiple fronts to bring that long-held vision to life. The technology to extract data from EMRs has been around for a while, but it is now easier to deploy, cheaper, and more reliable thanks to investments in a new generation of analytic platforms. The motivation to share data has also grown significantly as health systems have become more confident in their ability to address security and privacy concerns and have begun to see the potential benefits of better research collaboration. With the technical and security issues addressed and the motivation to collaborate growing, one significant hurdle remains: patient consent.
Very few institutions today have a good system in place to gather prospective informed consent from patients for research use of clinical data, and putting such a mechanism in place with IRB oversight takes time. Hence, in 2012, institutions that want to participate in clinical R&D collaborations would do well to begin establishing patient consent and privacy guidelines that support R&D processes. Even with de-identified data, full use for research purposes, including the ability to re-contact a patient if needed, requires proper consent. It would be a shame if, after all the hard work to address the technical and business barriers, institutions discovered in the end that a lack of patient consent prevents them from moving forward. Now is the time to get moving!