
2035’s biggest A.I. threat is already here


As if 2020 wasn’t going badly enough, a team of academics, policy experts, and private sector stakeholders warn there is trouble on the horizon. They’ve pinpointed the top 18 artificial intelligence threats we should be worried about in the next 15 years.

While science fiction and popular culture would have us believe that intelligent robot uprisings will be our undoing, a study published in Crime Science reveals that the top threat may actually have more to do with us than A.I. itself.

Rating threats on their potential harm, profitability, achievability, and defeatability, the group identified that deep fakes — a technology that already exists and is spreading — posed the highest level of threat.

Unlike a robot siege, which might damage property, the harm caused by deep fakes is the erosion of trust in people and in society itself.

The threat of A.I. may seem to be forever stuck in the future — after all, how can A.I. harm us when my Alexa can’t even correctly give a weather report? — but Shane Johnson, Director of the Dawes Centre for Future Crimes at UCL, which funded the study, explains that these threats will only continue to grow in sophistication and entanglement with our daily lives.

“We live in an ever-changing world which creates new opportunities – good and bad,” Johnson says. “As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur.”

While the authors concede that the judgments made in this study are inherently speculative and shaped by our current political and technical landscape, they argue that the future of these technologies cannot be separated from those environments either.

How did they do it — In order to make these futuristic judgments, the researchers gathered a team of 14 academics in related fields, seven experts from the private sector, and 10 experts from the public sector.

These 31 experts were divided into groups of four to six people and given a list of potential A.I. crimes, ranging from physical threats (like an autonomous drone attack) to digital threats (like phishing schemes). To make their judgments, the teams considered four main features of the attacks:

  • Harm
  • Profitability
  • Achievability
  • Defeatability

Harm, in this case, could refer to physical, mental, or social damage. The study authors further specify that these threats could cause harm either by defeating an A.I. (e.g. evading facial recognition) or by using an A.I. to commit a crime (e.g. blackmailing people using a deep fake video).
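To make the ranking idea concrete, here is a minimal sketch of how threats might be scored on the four criteria and ordered by an aggregate. The threat names, the 1-to-5 scores, and the simple summing rule are all invented for illustration; the study itself relied on expert group judgment rather than any fixed formula.

```python
# Hypothetical illustration of ranking A.I. crime threats on the four criteria
# described above. All names and scores are invented for demonstration and are
# not taken from the study.

from dataclasses import dataclass


@dataclass
class Threat:
    name: str
    harm: int           # physical, mental, or social damage (1-5)
    profitability: int  # criminal payoff (1-5)
    achievability: int  # how feasible the attack is today (1-5)
    defeatability: int  # how hard the attack is to stop (1-5)

    @property
    def overall(self) -> int:
        # Naive aggregate: sum the four dimensions. The real study used
        # expert deliberation, not a formula like this.
        return self.harm + self.profitability + self.achievability + self.defeatability


threats = [
    Threat("Deep fakes (audio/video impersonation)", 5, 4, 5, 5),
    Threat("Autonomous drone attack", 4, 2, 2, 3),
    Threat("A.I.-authored phishing", 3, 4, 4, 3),
]

# Print threats from highest to lowest aggregate score.
for t in sorted(threats, key=lambda t: t.overall, reverse=True):
    print(f"{t.overall:2d}  {t.name}")
```

Under this toy scoring, deep fakes land at the top for the same reason the experts ranked them highest: they rate badly on every dimension at once, being cheap to produce, profitable, already achievable, and hard to defeat.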
