Scott Bennett Discusses the Implications of Artificial Intelligence on Health Care in Email Alert to American Health Lawyers Association Members
Scott Bennett shared his expertise in health care law and artificial intelligence in an email alert to members of the American Health Lawyers Association (AHLA). In the alert, Scott discusses a recent report from JASON, an advisory group of independent scientists, that identified opportunities and drawbacks for the integration of artificial intelligence (AI) into health care.
Scott is a member of AHLA’s Health Information and Technology practice group and serves as the Vice Chair of Publications for the Digital Health affinity group.
Read the full alert below.
Implications of Artificial Intelligence in Health Care
JASON, an advisory group of independent scientists, has released a report that analyzes the potential and implications of Artificial Intelligence (AI) in health care. The report was commissioned by the Office of the National Coordinator for Health Information Technology (ONC) and the Agency for Healthcare Research and Quality (AHRQ), with support from the Robert Wood Johnson Foundation. According to an ONC blog post regarding the report, the ONC and AHRQ requested the study in order to “cut through the hype, assess the potential realistic implications of AI applications in health and healthcare, and understand the risks.” JASON’s specific mandate was to focus on the potential capabilities, limitations, and applications for AI in health care in the next 10 years.
AI refers to a bundle of different types of technologies that can perform tasks that normally require human intelligence. Examples of AI that are familiar to consumers include voice recognition (such as Amazon’s Alexa and Apple’s Siri), image recognition (such as the Facebook feature that recognizes the faces of friends), and machine learning (such as the programs that have defeated human champions in chess, the TV show Jeopardy, and the ancient strategy game Go). The JASON report focuses on one type of AI technology, which the report calls “computer-based decision procedures.” Those are programs that can assist human health care providers with making decisions about diagnosis and treatment. The programs can make recommendations based upon clinical guidelines, medical literature, and knowledge that the machine has gained through its training from real-life examples. The technology has advanced to the point where some of those programs can, in the right circumstances, match the performance of human experts. As the JASON report notes: “Two recent high profile research papers have demonstrated that AI can perform clinical diagnostics on medical images at levels equal to experienced clinicians, at least in very specific examples.”
The report notes that there have been previous periods of hype and enthusiasm around the possibilities for AI in health care. However, the report concludes that the circumstances are now right for AI “to play a growing role in transformative changes now underway in both health and health care, in and out of the clinical setting.” The report cites three specific factors that could accelerate the use of AI in health care:
- Widespread frustration about the cost and quality of care in the U.S. health care system, which has made people more open to new options.
- The explosion in the number of technologies that monitor health, including smart devices, apps, and websites.
- Public acceptance of AI technologies, because of consumers’ exposure to those technologies in their daily lives.
The report also identifies obstacles to the use of AI in health care. Those include the lack of the rigorous, peer-reviewed testing that would be necessary to integrate AI applications into clinical practice; gaps in the large amounts of high-quality training data needed to develop AI applications in health care; and concerns about the accessibility, privacy, and security of the data used in AI applications.
Finally, the report makes recommendations to address those obstacles and facilitate the use of AI in health care. JASON’s recommendations include:
- Create methods to test and validate AI programs using data different from the training set.
- Develop a data infrastructure to capture and integrate data from smart devices into AI programs.
- Require that the development of AI include measures to ensure transparency of data use and privacy of the data.
- Investigate potential methods to create incentives for sharing health data, as well as new paradigms for data ownership.
- Support creative ways to collect information about environmental exposures, such as through smart devices or wearable technology.
- Use crowdsourcing to generate new forms of AI.
- Support measures to educate clinicians and the public about the limitations of AI in health care.
Going forward, the ONC and AHRQ have stated their intention to work with other federal agencies to identify possible uses for AI in biomedical research, precision medicine, and the general improvement of medical care and outcomes. A copy of the full report is available here.
The American Health Lawyers Association is the publisher and editor of this Work and holds the exclusive license to the Work. Any further reproduction of the Work requires the advance written permission of American Health Lawyers Association.