The use of artificial intelligence (AI) in clinical health care has the potential to transform health care delivery, but it should not replace physician decision-making, says the American College of Physicians in a new policy paper published in Annals of Internal Medicine.
The paper offers recommendations on the ethical, scientific, and clinical components of AI use, and says that AI tools and systems should enhance human intelligence, not supplant it.
To navigate the risks and ensure best practices, ACP recommends that AI-enabled technology be limited to a supportive role in clinical decision-making. ACP notes that when used for clinical decision-making, the technology is more appropriately called "augmented" intelligence, since the tools should assist clinicians, not replace them.
The tools must be developed, tested, and used transparently, while prioritizing privacy, clinical safety, and effectiveness.
The technology should be used to reduce, not exacerbate, disparities, ensuring a fair and just health care system. To ensure accountability and oversight of AI-enabled medical tools, ACP recommends a coordinated federal strategy involving oversight of AI by governmental and non-governmental regulatory entities. The tools should be designed to reduce the burdens on physicians and other clinicians in support of patient care, guided by unwavering principles of medical ethics.
Additionally, to ensure that AI tools are used safely, ACP advises that training on AI in medicine be provided at all levels of medical education.
Physicians must be able to use the technology while remaining able to make appropriate clinical decisions independently should AI decision support become unavailable. Lastly, efforts to quantify the environmental impacts of AI must continue, and mitigation of those impacts should be considered.
Source: American College of Physicians