The American Medical Association (AMA) this week called for requiring physician consent for some uses of artificial intelligence as it released its recommended principles for the use of AI in medicine.
“The AMA recognizes the immense potential of health care AI in enhancing diagnostic accuracy, treatment outcomes, and patient care,” said AMA President Jesse M. Ehrenfeld, MD, MPH, in a statement. “However, this transformative power comes with ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI.”
The AMA said health plans should seek to use AI to simplify administrative tasks and reduce workflow burdens. There’s potential for AI-enabled technologies to cut down on paperwork, but they may “not be designed or supervised effectively, creating access barriers for patients and limiting essential benefits.”
The AMA principles also noted that physicians could face increased liability risks when they rely on AI-enabled tools and systems sold with little transparency about the data and algorithms underpinning these products.
AMA said it will advocate to ensure that physician liability for the use of AI-enabled technologies is limited and adheres to current approaches to medical malpractice, even as legal theory about liability and accountability in this field evolves.
And the AMA’s principles called for greater transparency about insurers’ use of AI in automated decision-making systems that can “deny care more rapidly, often with little or no human review.” Recent lawsuits against insurers have highlighted new concerns over insurers’ use of AI in prior authorization to deny care.
Health plans using automated decision-making systems should be required to engage in regular audits to ensure these systems do not increase claims denials or coverage limitations, or otherwise decrease access to care, AMA said.
“In some instances, payors instantly reject claims on medical grounds without opening or reviewing the patient’s medical record,” the AMA said. “Rather than payors making determinations based on individualized patient care needs, reports show that decisions are based on algorithms developed using average or ‘similar patients’ pulled from a database.”
Patient privacy remains a concern as “data hungry” AI tools gain greater access to patient records via interoperable systems, AMA said. In many cases, companies developing AI-enabled products create legal arrangements that bring them under the Health Insurance Portability and Accountability Act (HIPAA) rules.
“Yet even HIPAA cannot protect patients from the ‘black box’ nature of AI, which makes the use of data opaque,” AMA said. “AI system outputs may also include inferences that reveal personal data or previously confidential details about individuals. This can result in a lack of accountability and trust and exacerbate data privacy concerns.”
“Lagging” Oversight
In the principles, the AMA also highlighted the current gaps in US government oversight and regulation of AI in medicine. There’s currently no national standard to guide the development and adoption of many applications of AI to medicine, AMA said.
Instead, there’s a patchwork.
For example, the Food and Drug Administration (FDA) regulates AI-enabled medical devices, but many other AI-enabled technologies fall outside the scope of the agency’s oversight. The Federal Trade Commission and the Health and Human Services Office for Civil Rights have oversight over some aspects of AI, but their authorities “are limited and [are] not adequate to ensure appropriate development and deployment of AI,” AMA said.
“With a lagging effort toward adoption of national governance policies or oversight of AI, it is critical that the physician community engage in development of policies to help inform physician and patient education, and guide engagement with these new technologies,” the AMA said.
Kerry Dooley Young is a freelance journalist based in Washington, DC.