  • FEB 20, 2020

    Patient Data in a Google World (Part 2)

    Enforme

    In the first installment of this post, I commented on the well-reported article by Rob Copeland, Dana Mattioli, and Melanie Evans in The Wall Street Journal entitled “Inside Google’s Quest for Millions of Patient Records”. This exceptional article offers a detailed look at Google’s remarkably clumsy efforts to collect and analyze vast amounts of patient data from some of America’s most prestigious medical centers and health systems. Despite the best intentions of the participating institutions, these initiatives have been met with false starts, backpedaling, patient outrage, and, in some cases, federal investigations.

    The WSJ article states that Google executives were “shocked” that patients objected to these large-scale clinical data collection initiatives. The fact that Google would find this shocking is the shocking part. Fundamentally, big tech has never taken the time to understand the culture of healthcare. Tech generally sees healthcare as a big GDP number and wants a piece of it. As long as that’s the case, these efforts are doomed to failure.

    Google Health has now brought in Dr. David Feinberg to lead its efforts. Dr. Feinberg has stated that he believes Google can be tremendously “helpful” to the healthcare community. I believe this to be true and would go further: it is essential that Google, and other major tech companies, establish a significant and lasting presence in healthcare. With their vast technical, analytical, and data science resources, these companies can make an enormous contribution to our nation’s health, and big tech holds the key to remarkably important insights in areas such as epidemiology, evidence-based practice, health economics, clinical trials, care disparities, and much more.

    The question, then, is not “Should Google be able to collect large amounts of patient data?” but “How best to do it?” What rules will govern data collection, storage, transmission, and access? What rights will patients have to refuse to participate or to access their own data?

    I believe that the professional associations that represent patients, health systems, and practitioners are the entities best positioned to take on this challenge. These organizations were each founded with a mission to educate, set quality standards, and advocate for improved patient care. In addition, many associations understand the fundamentals of proper clinical data governance through their own data registry and real-world evidence initiatives. I believe that a road map agreement between big tech, patient advocates, and medical societies—a non-traditional alliance for each of those stakeholders—is essential to the transparent, ethical, and clinically appropriate use of patient data.

    So how exactly could such a potentially unwieldy alliance be managed?

    I can envision a multidisciplinary consortium of key professional associations, perhaps a subset of the Council of Medical Specialty Societies, in partnership with organizations representing other members of the care team, such as the AAPA, along with the American Hospital Association (https://www.aha.org/) and the National Patient Advocate Foundation, meeting with tech leaders to form a new advisory body. Let’s call it the Clinical Data Advisory Council, or “CDAC” for short.

    CDAC could operate in a fashion similar to Facebook’s newly created “oversight board”. CDAC would play the role of ombudsman with a charge and scope that would, by necessity, be wide-ranging. It would need to deliberate on issues as far-flung as EHR integration standards, data harmonization, data encryption, de-identification and anonymization, HIPAA compliance, data access and analytic rights, data privacy, patient consent, and relations with industry, to name but a few. These deliberations, carried out in a public and transparent fashion, would then result in draft principles that would be made available for public comment, refinement, and redistribution.

    CDAC would essentially be a regulatory start-up. It would require its own independent staff, governance procedures, rotating slate of officers, and bylaws. Further, it would need a budget derived from a trust established by the tech companies that want a seat at the table. Establishing and maintaining this entity would no doubt be expensive, but the expenditure would be far more affordable than the costs already being incurred in the current climate of uncertainty and mistrust.

    Are there problems with health systems, professional associations, and advocacy groups serving in this role? Of course. The deliberate and consensus-based decision-making culture of organized medicine runs counter to the “move fast and break things” ethos of tech (a philosophy diametrically opposed to the guiding principles of healthcare). Each side would need to recognize the motivations and cultural norms of the other and find ways to accommodate their differences. But while debating these policies would take considerable time and effort, it would ultimately speed the appropriate participation of big tech in healthcare and create a foundation that all parties could build upon.

    What does success look like? Data from large patient populations, carefully curated in conformance with clearly established guidelines and paired with thoughtfully applied analytics, could yield one of the most significant advances in modern healthcare. A meaningful alliance and true partnership between big tech and medicine, one that respects the rights of patients, can have a profoundly beneficial effect on patient care across the globe.