July 08, 2019
Health Care Alert
Author(s): Sarah E. Swank
Artificial intelligence, or AI, is already emerging in several industries and has begun to penetrate the health care market. Images of AI include robots, self-driving cars and computers replacing human workers. What does it mean for health care, and is the future already here? This alert discusses seven common questions about AI in health care, including the current and future applications of AI in health care, AI’s benefits and the barriers to its advancement.
Artificial intelligence, or AI, refers to the development of computer algorithms that produce predictive models. Computers learn to “think” through computer programs that analyze and answer questions previously requiring human intervention. If successful, AI delivers results reportedly faster and more accurately than humans and improves over time. Software receives “training” through real-time feedback and demonstrates “adaptation” through improved performance. On April 9, 2019, the U.S. Food & Drug Administration (FDA) put forward a proposed framework for regulating medical devices that incorporate AI technology. Under this proposed framework, the FDA defined AI broadly as “the science and engineering of making intelligent machines, especially intelligent computer programs.” This can be done through models based on:
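The notions of “training” through feedback and “adaptation” through improved performance can be illustrated with a minimal sketch. The code below is a hypothetical, simplified example (a perceptron-style classifier written for illustration only, with made-up feature data); it is not any particular vendor’s or regulator’s method.

```python
# Minimal sketch of machine "training" and "adaptation": a simple
# classifier that adjusts its internal weights each time it receives
# feedback comparing its prediction with the true answer.
# All data, names and numbers here are hypothetical illustrations.

def train(examples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs via repeated feedback."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Predict, compare against the true label, then adapt.
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction  # feedback signal
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(model, features):
    """Apply the learned model to new input."""
    weights, bias = model
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Hypothetical data: two numeric features per case, binary outcome flag.
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.8], 0), ([0.2, 0.9], 0)]
model = train(data)
```

The point of the sketch is the loop structure: each round of feedback nudges the model, so performance improves with exposure to more examples, which is what the “training” and “adaptation” language above describes.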
AI is the latest in a long line of disruptors in the health care space. For example, the use of data and data analytics is growing in importance in health care, along with mobile health care and retail pharmacy or health plan spaces used for flu shots, dieticians and yoga. The use of data to guide health care is widespread and seen as critical to navigating patients. This data is integrated with other data (such as product purchases) to predict potential health care needs. Where AI differs from pure data analytics is that it uses “intelligent machines” and “machine learning” to review and analyze the data.
In short—yes, AI is new, although the movement toward machine learning and AI technologies has been happening for decades. You may remember the 1990s TiVo Suggestions, which tracked television recordings to provide suggestions on additional programs to watch based on past viewings, or the video games that tracked body movement. Today, trained surgeons use surgical robots to perform procedures (such as minimally invasive surgeries).
The first AI efforts in health care are underway, aimed at prevention and improved patient outcomes. Current applications are mostly part of studies and early in development, including:
Many of these market-disruptive uses of AI align with where the industry is heading, which includes getting accurate, real-time data into the hands of practitioners. CMS considered, as part of the Accountable Care Organization Medicare Shared Savings Program (ACO MSSP), the need for real-time data in the hands of physicians and practitioners to make the best, highest-quality decisions for patients. The idea is that this data would ensure that patients receive the right care in the right care setting. AI may be the solution to this problem. AI can be used across care settings, including in hospitals, physician offices and direct to consumer as digital applications.
As with other technological advances, the laws have not caught up with the technology. As discussed above, the FDA set out a proposed framework for AI and medical devices. Previously, the FDA promulgated regulations addressing when an app is considered a medical device that falls under FDA regulation. The FDA is trying to get ahead of AI’s use in health care with its discussion paper requesting submission of comments. The FDA acknowledged that the current medical device regulatory framework was not designed for adaptive AI and machine learning technologies. In short, the FDA expects transparency, real-world performance monitoring and periodic updates to the FDA on software changes.
In the current state of health care, data exchange is considered critical to successfully managing populations and individuals who can get lost in the system without proper navigation. At the same time, electronic health records and other digital systems are not always able to talk to each other. The preamble to the HIPAA Security Rule makes clear that in the early 2000s the Department of Health and Human Services (HHS) did not want to select one national medical record system, instead opting to promote competition among IT vendors to spur innovation. The outcome was disparate systems that are not connected.
As demonstrated by experiences in the Center for Medicare and Medicaid Services Innovation Center (CMMI) Pioneer ACO Program and the MSSP ACO program, the inability to exchange data can be a barrier to improving patient outcomes and reducing costs. An analogy can be drawn between AI and the Pioneer ACO program on the potential problem of data exchange. Under that program, CMMI sought organizations that were advanced in population health development and technology adoption to enter the program. Some Pioneer ACOs dropped out, citing the inability to exchange data as one of the main reasons for leaving the program.
To be successful, AI technology and patient information must be exchanged across providers and integrated into patient medical records.
Data integration is critical to caring for patients. As with other innovation and technology-driven changes in health care, the data is not always shared back to the patient’s care team (such as their primary care physician) or integrated into the patient’s medical records. For example, many early employer navigation and telehealth programs did not integrate data from a patient encounter or conversation with information in the patient’s medical record.
As with any technology using, disclosing, creating and storing protected health information (PHI), HIPAA and associated regulations apply. Some argue that data compliance will be easier with AI. That being said, those developing AI should ensure HIPAA-compliant technology. Providers should consider updates to current HIPAA policies, procedures, notices, consents and forms as AI technology continues to expand.
In addition, as with telehealth and digital health, there are legal barriers regarding reimbursement for services. In telehealth, we have seen slow adoption of reimbursement by private and public payors. It is uncertain whether similar changes will be seen with advances in AI. That being said, AI may assist with more predictable and accurate revenue cycles, something currently accomplished with routine and regulatory audits of electronic health records (EHRs).
As the use of AI becomes more widespread, traditional fraud and abuse laws (such as Stark, the Anti-Kickback Statute and the Civil Monetary Penalties (CMP) laws) must be a consideration in reviewing the financial relationships among hospitals, physicians, providers, patients and technology companies. The fraud and abuse laws are seen by some as the largest barrier to physicians and patients receiving new technology that advances patient care. For example, AI technology rolled out as part of a digital health solution (such as virtual physician visits) should be analyzed under the CMP law in the same way as digital health applications without AI. AI is generally a component of another technology and should be included in the fraud and abuse analysis under current laws. Careful consideration of the fraud and abuse laws must be part of the review and analysis of any AI arrangement.
At some point, it will need to be determined when the computer is supplementing or gathering information from patients or when the computer is “practicing medicine.” This analysis will be needed, for example, if AI is used to gather information regarding a patient’s current condition where each answer to a question then prompts additional questions depending on those answers.
These questions will likely be answered state by state under licensing laws, similar to the roll-out of telemedicine across state lines. Ultimately, though, a national answer will likely be needed, since the practice of medicine will be transformed by machine learning.
Once developed, AI technologies can be used across borders and in more countries than just the United States. This leaves the developers with a decision: what standards should be included as part of AI development? Different countries not only regulate technology differently but also maintain different medical standards and accreditations. In certain cases (e.g., security in Europe), standards and certain research protocols may be more stringent than in the United States. In other countries, there may be no laws or a limited standard of care for a particular medical discipline. Developers must decide whether the technology should be customized by customer, country or at all. Purchasers and consumers should question the premises going into machine learning and data analysis to ensure they meet standards in the United States.
Those developing or investing in AI initiatives or companies should be mindful of intellectual property (IP) laws. Preservation of ownership rights in processes, marks and other IP developed as part of AI is critical to the success of these companies. In addition, those contracting for the use of AI in their health care organizations should ensure IP protections are fully laid out in the contract. For example, health care organizations should include robust indemnification provisions that encompass IP in AI agreements.
It is easy to imagine a time when, even within the same community, certain physicians are practicing without electronic medical records while others are using AI technology as part of their practice. When EHRs, mandatory quality measure reporting and telehealth became more commonplace in health care, the questions became: will this change the standard of care, and will the standard of care remain local or become nationalized as these standardized care modalities move across the country? At the same time, creative plaintiff attorneys sometimes used this newfound data to question the care provided to their clients. This debate will likely arise again as machine learning evolves in health care.
Many are skeptical of AI. This is especially true after the fits and starts of self-driving cars and the errors found in early health care projects because of mismatched health care standards. Now that the application of AI is being discussed as a facet of digital health and direct patient care, some are asking whether AI is ready for the health care industry when patients’ health and safety are on the line. Which leads to the question—is AI safe in health care?
We do not currently have the answer to that question. It is important to note that the FDA-proposed review process would consider an initial premarket assurance of safety and effectiveness, as well as transparency and update reporting. The early failure and success of AI highlight the need for testing, refinement and transparency in AI development.
The FDA proposed a broad AI definition as it relates only to medical devices. It appears at first glance that AI’s application to health care could be endless. The definition of AI and machine learning is not set by law, nor is it consistent across laws. Examples of future uses include:
Once a computer can “learn,” it could make an accurate decision each and every time. Of course, this assumption ignores the requirement that physicians and other licensed professionals exercise professional judgment in health care.
When AI will be commonly used is a matter of debate. Many see it around the corner, others believe it is years or decades away from integration into our health care programming. Given the adoption of market-disrupting technology in health care thus far, the future of AI is already here.
AI in health care faces unique challenges beyond developing the technology, including legal considerations and patient safety concerns. It is critical that those developing, investing in and using AI in health care ensure that key stakeholders (such as the cross-disciplinary team, physicians and patients) are part of the development and selection process. In addition, quality committees, IT departments, boards of directors and governmental agencies (such as medical boards) should be informed and consulted as appropriate. The old data saying “garbage in, garbage out” applies to AI. Ensure that the data and assumptions going into AI technology are consistent with the standard of care and current best practices. “Question, test and test again.”
The foregoing has been prepared for the general information of clients and friends of the firm. It is not meant to provide legal advice with respect to any specific matter and should not be acted upon without professional counsel. If you have any questions or require any further information regarding these or other related matters, please contact your regular Nixon Peabody LLP representative. This material may be considered advertising under certain rules of professional conduct.