SAN FRANCISCO: Four US leaders in artificial intelligence (AI) announced on Wednesday the formation of an industry group devoted to addressing risks that cutting-edge versions of the technology may pose.

Anthropic, Google, Microsoft, and ChatGPT-maker OpenAI said the newly created Frontier Model Forum will draw on the expertise of its members to minimise AI risks and support industry standards. The companies pledged to share best practices with each other, lawmakers and researchers.

“Frontier” models refer to nascent, large-scale machine-learning platforms that take AI to new levels of sophistication — and also have capabilities that could be dangerous.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Microsoft president Brad Smith said in a statement.

“This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

‘Responsible innovation’

US President Joe Biden evoked AI’s “enormous” risks and promises at a White House meeting last week with tech leaders who committed to guarding against everything from cyberattacks to fraud as the sector grows.

Standing alongside top representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, Biden said the companies had made commitments to “guide responsible innovation” as AI spreads ever deeper into personal and business life.

Ahead of the meeting, the seven AI giants committed to a series of self-regulated safeguards that the White House said would “underscore three principles that must be fundamental to the future of AI: safety, security and trust.”

Watermarks for AI-generated content were among topics EU commissioner Thierry Breton discussed with OpenAI chief executive Sam Altman during a visit to San Francisco last month.

“Looking forward to pursuing our discussions — notably on watermarking,” Breton wrote in a tweet that included a video snippet of him and Altman.

In the video clip Altman said he “would love to show” what OpenAI is doing with watermarks “very soon”.

The White House said it would also work with allies to establish an international framework to govern the development and use of AI.

In their pledge, the companies agreed to develop “robust technical mechanisms,” such as watermarking systems, to ensure users know when content is from AI and not humans.

Core objectives of the Frontier Model Forum include minimising risks and enabling independent safety evaluations of AI platforms, the companies involved said in a release.

The Forum will also support the development of applications intended to take on challenges such as climate change, cancer prevention and cyber threats, according to its creators.

Published in Dawn, July 27th, 2023
