UN Security Council discusses AI risks, need for ethical regulation

Members of the UN Security Council met with two experts on Tuesday to discuss the potential risks and benefits of AI, highlighting the need for countries worldwide to coordinate efforts to regulate the technology.

In a meeting chaired by the UK’s Foreign Secretary James Cleverly, the 15-member council heard from Jack Clark, co-founder of leading AI company Anthropic, and Zeng Yi, co-director of the China-UK Research Center for AI Ethics and Governance.

The UN Security Council is made up of five permanent members — China, France, Russia, the UK, and the US — and 10 non-permanent members that are elected for a two-year term. Current non-permanent members include Albania, Brazil, Ghana, Japan, Malta, Switzerland, and the United Arab Emirates.

During the meeting, members stressed the need to establish an ethical, responsible framework for international AI governance. The UK and the US have already started to outline their positions on AI regulation, while at least one arrest has occurred in China this year after the Chinese government enforced new laws relating to the technology.

Malta is the only current non-permanent council member that is also an EU member state and would therefore be governed by the bloc’s AI Act, the draft of which was confirmed in a vote last month.

Although AI can bring huge benefits, it also poses threats to peace, security and global stability due to its potential for misuse and its unpredictability — two essential qualities of AI systems, Clark said in comments published by the council after the meeting.

“We cannot leave the development of artificial intelligence solely to private-sector actors,” he said, adding that without investment and regulation from governments, the international community runs the risk of handing over the future to a narrow set of private-sector players.

While the council discussed transformative opportunities presented by AI, from monitoring the climate crisis to breakthroughs in medical research, concerns about the technology’s potential to spread misinformation or fuel malicious cyberoperations were also debated, according to a readout issued by the council after the conclusion of the meeting.

Members additionally highlighted the importance of retaining human decision-making when it comes to the technology’s military applications, such as autonomous weapons systems.

Tackling AI-fuelled biases

The finance industry estimates that AI could contribute up to $15 trillion to the global economy by 2030, said UN Secretary-General António Guterres, noting that the significance of the technology was underscored by the fact that almost every government, large company, and organization in the world is working on an AI strategy. 

However, echoing comments made earlier this year by Margrethe Vestager, the European Commissioner for Competition, Guterres said that the High Commissioner for Human Rights has expressed alarm over evidence that AI can amplify bias, reinforce discrimination and enable new levels of authoritarian surveillance.

“[Generative AI] could be a defining moment for disinformation and hate speech,” he said, arguing that governments across the globe need to work together to support the development of AI that bridges social, digital and economic divides.

