Artificial intelligence warning over human extinction labelled ‘publicity stunt’

Professor Sandra Wachter said the risk raised in the letter that AI could wipe out humanity is ‘science fiction fantasy’.
Professor Sandra Wachter (Sandra Wachter/PA)
PA Media
Jordan Reynolds, 1 June 2023

The probability of a “Terminator scenario” caused by artificial intelligence is “close to zero”, a University of Oxford professor has said.

Sandra Wachter, professor of technology and regulation, called a letter released by the San Francisco-based Centre for AI Safety – which warned that the technology could wipe out humanity – a “publicity stunt”.

The letter, which warns that the risks should be treated with the same urgency as pandemics or nuclear war, was signed by dozens of experts including artificial intelligence (AI) pioneers.

Prime Minister Rishi Sunak retweeted the Centre for AI Safety’s statement on Wednesday, saying the Government is “looking very carefully” at it.

Professor Wachter said the risk raised in the letter is “science fiction fantasy”, and she compared it to the film The Terminator.

She added: “There are risks, there are serious risks, but it’s not the risks that are getting all of the attention at the moment.

“What we see with this new open letter is a science fiction fantasy that distracts from the issue right here right now. The issues around bias, discrimination and the environmental impact.

“The whole discourse is being put on something that may or may not happen in a couple of hundred years. You can’t do something meaningful about it as it’s so far in the future.

“But bias and discrimination I can measure, I can measure the environmental impact. It takes 360,000 gallons of water daily to cool a middle-sized data centre, that’s the price that we have to pay.

“It’s a publicity stunt. It will attract funding.


“Let’s focus on people’s jobs being replaced. These things are being completely sidelined by the Terminator scenario.

“What we know about technology now, the probability [of human extinction due to AI] is close to zero. People should worry about other things.”

AI apps have gone viral online, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to generate university-grade essays.

But AI can also perform life-saving tasks, such as algorithms analysing medical images like X-rays, scans and ultrasounds, helping doctors to identify and diagnose diseases such as cancer and heart conditions more accurately and quickly.

The statement was organised by the Centre for AI Safety, a non-profit which aims “to reduce societal-scale risks from AI”.

It says: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Senior bosses at companies such as Google DeepMind and Anthropic signed the letter, along with AI pioneer Geoffrey Hinton, who resigned from his job at Google earlier this month, saying that in the wrong hands AI could be used to harm people and spell the end of humanity.
