
Regulate AI to Prevent Harm, Say Doctors and Public Health Experts

Doctors and public health experts called for a moratorium on the development of self-improving artificial intelligence (AI) until the sector is properly regulated.

Writing in the journal BMJ Global Health, the international group identified how AI could pose a threat to human health, ranging from simple errors that harm individual patients through to fundamental changes to society that undermine wellbeing in the general population.

In the analysis, authors Dr Frederik Federspiel of the London School of Hygiene and Tropical Medicine and Professor David McCoy of the United Nations University in Kuala Lumpur, together with colleagues, called on the medical and public health community to deepen its understanding of the emerging power and transformational potential of AI, and to participate in the debate over how to mitigate its dangers without losing its potential benefits.

The analysis differentiated between AI systems used to rapidly clean, organise, and analyse enormous sets of data, and self-improving artificial general intelligence (AGI).

Three Threats to Society

The authors argued that AI posed three potential threats to human health and wellbeing:

  • Threats to democracy, liberty, and privacy from marketing and misinformation campaigns and enhanced surveillance that could lead to social divisiveness and inequalities
  • Threats to peace and safety from autonomous weapons systems and enhanced lethal capacity
  • Threats to work and livelihoods in which people are replaced in the workforce, causing unemployment and associated adverse physical and mental health outcomes

According to the authors, "we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health".

In the field of medicine and healthcare, AI errors could cause patients harm, they said, citing the example of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia. More broadly, populations subject to discrimination were under-represented in the datasets underlying AI solutions, and so could be denied the full benefits of AI in healthcare.

'Existential Threat' from Self-Learning AI

Meanwhile, the development of AGI, in which the system can boost intelligence over time by designing and implementing improvements to itself, posed an "existential threat to humanity", according to the authors. They suggested that while there was "some acknowledgement of the risks and potential harms associated with the application of AI in medicine and healthcare", less attention was being paid to "the broader and more upstream social, political, economic and security-related threats" that AI posed.

"Effective regulation of the development and use of artificial intelligence is needed to avoid harm," the authors concluded. "Until such effective regulation is in place, a moratorium on the development of self-improving artificial general intelligence should be instituted."

The analysis comes as key figures in artificial intelligence called for the training of powerful AI systems to be suspended because of the risks to society and humanity. In an open letter published in March by the not-for-profit Future of Life Institute, leading lights including Elon Musk, CEO of SpaceX, Tesla, and Twitter, and Steve Wozniak, co-founder of Apple, called for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

GPT-4 was launched earlier this year by its creator, OpenAI, and was said to be the company's most powerful large language model to date for solving difficult problems and creating text.

