AI Issues for Physicians: Highlights from AMA Report

Health Highlights



“Physicians are at a crossroads: intrigued by the transformative potential of AI to enhance diagnostic accuracy, personalize treatments, reduce administrative burden, and accelerate advances in biomedical science, yet concerned about AI’s potential to exacerbate bias, increase privacy risks, introduce new liability concerns, and offer seemingly convincing yet ultimately incorrect conclusions or recommendations.”
– Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care

The American Medical Association, in collaboration with Manatt, recently published Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care. The foundational report serves as a resource for physicians navigating the massive influx of academic literature and other information regarding AI tools and services. The report explains fundamental terms and definitions, details opportunities and current use cases across specialties, outlines challenges and risks, and describes key considerations for physician leaders looking to implement AI in their practices.

Key takeaways from the report include:

  • We have entered the third era of AI. The first generation of AI, originating in the mid-20th century, was built on rules-based algorithms (e.g., “if-then” rules and decision trees). The second wave, in the early 2000s, saw the growth of prediction models and more sophisticated classification models (e.g., models that could identify characteristics within data, often images). Most recently, the third wave has seen the introduction of foundation models (e.g., “generative AI”) that are trained on enormous data sets and are capable of generating new information (text, video, sound, images) in response to a given prompt. Taken together, AI today is capable of several different functions: identifying characteristics within data, translating data inputs into other data types, summarizing data inputs into shorter and/or more accessible outputs, forecasting future events based on historical data and, in some instances, providing recommendations or advice.
  • There are already dozens—if not hundreds—of AI use cases in practice today. Physicians have expressed significant interest in implementing AI tools into practice: 65% of physician respondents to the AMA’s 2023 AI Physician Survey agree that there is some or definite advantage to implementing AI tools into clinical settings. All specialties are evaluating AI for real-time clinical documentation, for answering patient questions (e.g., chatbots and/or drafting responses to in-basket messages) and for predicting adverse clinical events based on vitals or other biomarkers. But, as one might expect, the volume and type of AI use vary dramatically by specialty. For example, radiology, ophthalmology and cardiology are leveraging AI’s imaging analysis capabilities—identifying malignant melanomas, screening for diabetic retinopathy through retinal imaging or analyzing cardiac images for anomalies. Medicine-based specialties such as internal medicine, intensive care, emergency medicine and primary care are relying on AI’s risk prediction potential, analyzing a wide range of data sources in near real time.
  • There are significant known challenges and risks. The potential benefits of AI are not without their risks—bias, liability, privacy and security, among others.
    • Bias. States, developers and deployers are concerned that prejudices or unconscious biases in training data sets may inform AI models’ outputs, which could lead to discriminatory access or outcomes. There is also risk of AI models being used in an inequitable and/or discriminatory way (e.g., over- or under-use for certain populations).
    • Liability. Who bears the burden of responsibility for an AI tool that recommends a clinical course of action that a physician follows and ultimately harms the patient in some way? Given the wide range of AI tools’ applicability in health care, the question of liability is nonobvious and highly situational.
    • Explainability and Transparency. Explainability is the ability of a model to explain how an AI output was generated from its inputs. Newer models (e.g., foundation models) have low degrees of explainability due to their large size and significant complexity. Transparency is the ability of an individual to access information about an AI model’s training data and model details—an important attribute for end users seeking to determine whether the model will work as expected and is appropriate for their population and intended use. Transparency is similarly difficult to achieve with increasingly complex and constantly evolving models.
    • Hallucinations/Confabulation. Hallucination describes when a generative AI model creates outputs that are either nonsensical or appear credible but are factually inaccurate. As AI models become increasingly complex, identifying when and why hallucinations occur becomes more difficult, yet minimizing them in a health care context becomes all the more critical.
    • Coding and Payment. Until recently, there was no common terminology to describe health care services or procedures delivered via AI. The AMA’s CPT Editorial Panel has established select codes to provide guidance, but developing common terminology for categorizing AI tools and services remains necessary for future utilization of AI tools across the industry.
    • Privacy and Security. AI development and training rely on access to large data sets, yet few, if any, technical controls are available to help end users specify (i) how systems are trained, (ii) how secure the data entered into AI systems is and/or (iii) how the data in the system is used or reused.
  • The validation of AI models—who, what, when, where and how—is evolving. The effective validation of health care AI tools—tools that have enormous training data sets and are constantly evolving—is an immense challenge. Organizations recognize the importance of clinician involvement, but the specific role different stakeholders—federal, state, private, public—will (or should) take is unclear. The FDA has expressed that the agency would need to double in size and gain new legal authority to conduct post-market monitoring and evaluation. Simultaneously, private organizations have formed, positioning themselves as neutral third-party arbiters. States themselves are exploring this role, some going so far as to introduce bills that would require AI developers to register each AI tool with the state and/or submit training data directly to state departments.
  • The federal and state policy landscape is evolving, and we expect significant activity throughout 2024. Although there is currently no federal law specifically governing AI, the White House, Congress, ONC, CMS, OCR, FDA, DOJ, FTC and others have begun to issue, or are expected to issue, regulations and legislation governing AI.1 Furthermore, states are actively introducing legislation to study the impacts of AI, regulate payer use of AI and establish transparency requirements between stakeholders.2

What is clear is that AI is here to stay. Health systems and physicians need to be thinking about where and how to deploy AI in their organizations—starting with the highest value and lowest risk use cases.

For more information, please see the AMA’s full Future of Health: The Emerging Landscape of Augmented Intelligence in Health Care report. In addition, Manatt Health is launching a quarterly Health AI Policy Tracker, detailing key policy trends at the state level and summarizing federal activity; the first edition will be published later this month. For more information on what the tracker will include and/or to ensure you are notified of its launch, please reach out to bjefferds@manatt.com.


1 For more information on ONC’s final rule on AI transparency, health IT certification and information blocking, see here; on HHS’s regulation of AI in health care, see here; and on FDA’s activities regulating AI, see here.

2 For more information on Utah’s recently signed law, which significantly impacts health care, see here; and on Georgia’s recently introduced bill to regulate physician and insurer use of AI, see here.


ATTORNEY ADVERTISING pursuant to New York DR 2-101(f)

© 2024 Manatt, Phelps & Phillips, LLP. All rights reserved.