
26 global AI experts sound the alarm: Dangers of artificial intelligence to humanity in the next 10 years revealed


To date, humanity has advanced the field of artificial intelligence (AI) a great deal, and perhaps far too quickly. Soon, humans could find themselves helpless in a fight against highly intelligent, well-connected, and possibly all-knowing machines.

That’s just one of many possible threats to humanity that could arise from the proliferation of advanced AI technology. To underscore how serious the threat is, 26 global AI experts came together to write a 100-page report detailing exactly what is wrong with current efforts to prepare for future AI problems, as well as how to fix them.

The report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” focuses on the potential for AI to be used maliciously by rogue states, criminals, and even terrorists. It foresees rapid growth in both cybercrime and the misuse of drones over the next 10 years, along with an unprecedented rise in bots that can be used to manipulate everything from the news to social media postings.

A post on the official University of Cambridge website describes the report as a “clarion call for governments and corporations worldwide” to address the elephant in the room that is AI development and all the problems that come with it.

Some examples of possible cyber attacks carried out with the help of AI are automated hacking, highly targeted spam email based on social media user data, speech synthesis that impersonates specific targets, and the simple exploitation of vulnerabilities in the AI programs themselves.

Meanwhile, cyber-physical systems such as autonomous vehicles and drones are seen as possible threats as well, given the possibility that they could be used to carry out physical attacks, such as making a self-driving car crash into specified targets or turning drones into target-seeking missiles. Any autonomous or AI-controlled vehicle or device could be turned into an autonomous weapon, and such weapons could be quite difficult to disarm, much less stop.

According to Dr. Seán Ó hÉigeartaigh, executive director of the University of Cambridge’s Centre for the Study of Existential Risk and a co-author of the report, there is no better time to face this issue than now. “We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real,” he said. “Artificial intelligence is a game changer, and this report has imagined what the world would look like in the next five to ten years.”

Miles Brundage, Research Fellow at Oxford University’s Future of Humanity Institute, shares similar thoughts. He said in a statement, “AI will alter the landscape of risk for citizens, organizations, and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling, and repression – the full range of impacts on security is vast.”

Brundage further added that AI doesn’t just match human abilities when it comes to hacking; it can far surpass them. “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification,” he said, “as well as AI capabilities that are subhuman but nevertheless much more scalable than human labor.”

As they say in the healthcare profession, an ounce of prevention is worth a pound of cure. The same principle applies here: it would be better to cut off as many avenues of attack as possible rather than wait for the inevitable to happen before taking any action. The report also includes possible interventions that could mitigate the threats once they do occur.

Read more about the future of automation at Robotics.news.

Sources include:

CAM.ac.uk

MaliciousReport.GodaddySites.com
