Talk of AI dangers has ‘run ahead of the technology’, says Nick Clegg

Meta announced on Tuesday that it was opening access to its new open-source large language model, Llama 2.
Sir Nick Clegg has worked at Meta since 2018 (Stefan Rousseau/PA)
Harry Stedman | 19 July 2023

Talk of artificial intelligence (AI) models posing a threat to humanity has “run ahead of the technology”, according to Sir Nick Clegg.

The former Liberal Democrat leader and deputy prime minister said concerns around “open-source” models, which are made freely available and can be modified by the public, were exaggerated, and the technology could offer solutions to problems such as hate speech.

It comes after Facebook’s parent company Meta said on Tuesday that it was opening access to its new large language model, Llama 2, which will be free for research and commercial use.

Generative AI tools such as ChatGPT, a chatbot that can provide detailed prose responses and engage in human-like conversations, have become widely used in the public domain in the last year.


Speaking on BBC Radio 4’s Today programme on Wednesday, Sir Nick, president of global affairs at Meta, said: “My view is that the hype has somewhat run ahead of the technology.

“I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself.

“The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid.”

Sir Nick said a claim by Dame Wendy Hall, co-chair of the Government’s AI Review, that Meta’s model could not be regulated and was akin to “giving people a template to build a nuclear bomb” was “complete hyperbole”, adding: “It’s not as if we’re at a T-junction where firms can choose to open source or not. Models are being open-sourced all the time already.”

He said Meta had had 350 people “stress-testing” its models over several months to check for potential issues, and that Llama 2 was safer than any other large language model currently available on the internet.

Meta has previously faced questions around security and trust, with the company fined 1.2 billion euros (£1 billion) in May over the transfer of data from European users to US servers.
