Never Mind Sci-Fi, Malicious AI is a Real Threat and ‘We’re Not Prepared’
21.02.2018
Artificial Intelligence, or AI, offers huge potential for improving people’s lives, but ever since robots became a staple of science fiction in the 1950s there have been fears of creating robotic Frankensteins that could overthrow mankind. A new report by some of the leading minds in the field has concluded that malicious use of AI is a genuine threat.
A new report says Artificial Intelligence could easily be exploited by rogue states, criminals and terrorists.
Drones could be turned into missiles, fake videos used to manipulate public opinion and industrial-scale hacking carried out by AI systems, according to the report, which was written by 26 authors from 14 institutions.
It says governments around the world must start considering new laws to stop AI from being hijacked by nefarious interests.
Among the technologies the authors fear could be susceptible to misuse is AlphaGo, developed by Google’s DeepMind, which enables AI to outwit human players of Go, a strategy game that dates back thousands of years and is now also played on mobiles and tablets.
The report’s authors warn that the techniques behind AlphaGo could be repurposed to find patterns in data and exploits in computer code.
Echoes of TV Show Black Mirror
Many of the scenarios in the report are reminiscent of scenes from Charlie Brooker’s popular TV series Black Mirror, which imagines a dystopian world in which technology has taken on a life of its own.
The report says malicious individuals could fit drones with facial recognition software so that they could target a specific person.
In this way someone could kill a spouse or a business rival while holding an airtight alibi, and without needing to hire a hitman.
The report says hackers could use speech synthesis to impersonate people in telephone calls, and AI-driven automatons could be used to fabricate embarrassing situations in order to manipulate politicians.
‘AI is a Game Changer’
“Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years,” said one of the co-authors, Dr Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk at the University of Cambridge.
“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems — because the risks are real. There are choices that we need to make now, and our report is a call to action for governments, institutions and individuals across the globe,” he said.
The report says AI creates new opportunities to enhance “fake news.”
“AI systems may simplify the production of high-quality fake video footage of, for example, politicians saying appalling (fake) things. Currently, the existence of high-quality recorded video or audio evidence is usually enough to settle a debate about what happened in a given dispute, and has been used to document war crimes in the Syrian civil war,” says the report.
“At present, recording and authentication technology still has an edge over forgery technology. A video of a crime being committed can serve as highly compelling evidence even when provided by an otherwise untrustworthy source. In the future, however, AI-enabled high quality forgeries may challenge the ‘seeing is believing’ aspect of video and audio evidence. They might also make it easier for people to deny allegations against them, given the ease with which the purported evidence might have been produced,” the report adds.