Despite the potential for negative health outcomes from the use of artificial intelligence (AI), most research is biased toward its benefits, without sufficient consideration of the effective regulations that would be needed to avoid those harms, according to an expert analysis.
The technology poses three key threats to human health and well-being: upending social and political standards for liberty and privacy, disrupting peace and safety in society, and potentially replacing human livelihoods, according to David McCoy, BMed, DrPH, of the International Institute for Global Health at United Nations University in Kuala Lumpur, Malaysia, and co-authors writing in BMJ Global Health.
Furthermore, careful consideration of the speed of development and the design of new AI technologies is paramount, they noted, especially regarding the potential creation of artificial general intelligence (AGI), a long-theorized, self-improving AI technology capable of performing almost any task that humans can.
The ability of a potential AGI to improve its own code could mean the technology would eventually learn to bypass human-determined constraints and develop its own purposes, McCoy and team said. These potential outcomes are why the researchers are calling for a moratorium on the continued development of AGI technology.
"It is far from clear that the benefits of AI outweigh the risks and harms in the medical and care sector," McCoy told ѻý. "Furthermore, it is imperative doctors and other health professionals also consider the risks and threats that lie beyond medicine, as we highlight in our article."
"The current social, political, and legal circumstances are such that the risks of catastrophic social and societal harm are not being adequately mitigated -- a moratorium is needed until such time that we have adequate legal and regulatory safeguards," he added.
Still, the most tangible threat of AI to human health lies in its current capabilities, according to McCoy.
"Many of the benefits of AI in medicine depend on the use of extended and expansive personal data," he said. "There are presently insufficient guarantees that data collected and used ostensibly for medical purposes will not become part of the broader and near-constant surveillance and collection of personal data, which pose profound threats to privacy, autonomy, and dignity."
"In addition to the risk of surrendering large amounts of personal data and its use by powerful AI-driven systems, we see the risk of health professionals also surrendering epistemic authority to machines in ways that could dehumanize healthcare and undermine the role and value of human healthcare providers," he added.
McCoy and co-authors also acknowledged that AI has the potential to bring meaningful benefits to healthcare delivery, but they cautioned that misuse or thoughtless implementation could lead to equally negative outcomes.
For example, McCoy said AI could erode independent human medical judgment, lead to the loss of human skills, or even make human healthcare professionals redundant in certain cases. He also emphasized that the speed and scale of AI systems could increase both the frequency and the impact of clinical errors, or perpetuate existing social inequities in healthcare.
These problematic outcomes from the use of AI in healthcare delivery would likely be accompanied by new gray areas around clinical responsibility, accountability, and liability, he noted.
"We are entering into new territory with AI," he said. "But there is very little being done to anticipate the different future scenarios that are possible or likely, and planning for change. Whether these changes will produce a net benefit or a net harm will vary almost certainly from professional group to professional group and from place to place -- but this is not something that clinicians can do on their own as individuals -- it's a collective policy and systems issue."
McCoy also highlighted the economically powerful commercial companies that are driving AI research forward at a rapid pace. This dynamic will likely have profound effects on the current governance, financing, and management of healthcare systems, he said.
"Crucially, this must be seen as a systemic and paradigmatic issue," he concluded. "Health professionals need to balance the potential for AI to advance medicine and health with the fact that there are inadequate safeguards against the potential for data collected to service AI applications in medicine and health to be used in harmful ways or for nefarious purposes."
Disclosures
The authors reported no conflicts of interest.
Primary Source
BMJ Global Health
Federspiel F, et al "Threats by artificial intelligence to human health and human existence" BMJ Global Health 2023; DOI: 10.1136/bmjgh-2022-010435.