
Will Google’s ‘AI Fight Club’ Knock Out All The Malicious AI?


When it was announced that Google Brain is organising a competition on Adversarial Attacks and Defenses within the NIPS 2017 competition track, it came as no surprise. With the company announcing several developments around artificial intelligence and machine learning, the “AI fight club”, as the competition has come to be popularly known, is another attempt by Google to explore this rapidly evolving space.

The competition, which essentially aims to train machine learning systems to combat malicious AI, is drawing interesting reviews and promises to be of significant relevance to the cyber world. The fact that it relies on AI, widely lauded for its ability to analyse enormous amounts of data and match human intelligence, makes it a much-awaited “fight”.

Let’s try to understand what the AI fight club is and what it can achieve.

AI and Cyber-attacks:

Kaggle, the Google Cloud-owned platform where AI coders compete on projects, is all set to host the contest and prepare us against super-smart cyber attacks. The setup has been planned to pit AI against some of the deadliest attacks.

The world is growing smarter and so is the pace at which AI is catching up. But that speed is matched by cyber attackers, and it is becoming increasingly difficult to defend computer systems against fatal cyber attacks. It wasn’t long ago that MongoDB databases came under a major ransomware attack.

This contest aims to develop solutions that do away with unforeseen vulnerabilities in the cyber world, and what better way than to leverage AI? Artificial intelligence, machine learning and deep learning algorithms have become an integral part of all major domains. Be it healthcare, retail, media, banking or the wider financial sector, everyone is increasingly relying on AI, and many processes have an underlying framework to support it.

Researchers have been training systems to receive data and produce specified outcomes, and while they have achieved significant results, they are yet to produce a system that is as smart as humans. The competition by Google is one such step.

An Insight:

The competition intends to prepare AI systems for super-smart cyber attacks by pitting AI against AI. The contest, which could well shape the future of cybersecurity, will have offensive and defensive AI algorithms battling it out. Scheduled to run over a span of five months, it will see researchers pitting algorithms against one another in an attempt to confuse and trick each other. The whole idea is to put in place an AI that can combat the enemy and to have machine learning systems prepared for future attacks.

And of course, it’s not easy! Adversarial machine learning is believed to be more difficult to study than conventional machine learning, as it is as much about defending against attacks as about building models that work.

However, researchers foresee it as an excellent opportunity to understand both how easily deep neural networks can be fooled and how to make them hard to fool. The competition is designed around three adversarial challenges. The first is to build algorithms that can confuse a machine learning system so that it won’t work properly. The next is to train one AI to force another to classify data incorrectly. And the final challenge focuses on developing a robust defense system.
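To make the three tracks concrete, here is a minimal sketch in PyTorch built around the classic fast gradient sign method (FGSM). The model, the epsilon budget and the labels are illustrative placeholders, not the competition’s actual setup; real entries used far more sophisticated attacks and defences.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03, target=None):
    """One-step fast gradient sign attack on an image classifier.

    target=None  -> non-targeted track: nudge x so the model gets it wrong.
    target given -> targeted track: nudge x toward a specific wrong class.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    if target is None:
        loss = F.cross_entropy(logits, label)    # increase loss on the true label
        step = 1.0
    else:
        loss = F.cross_entropy(logits, target)   # decrease loss on the chosen label
        step = -1.0
    loss.backward()
    x_adv = x + step * epsilon * x.grad.sign()   # small, bounded perturbation
    return x_adv.clamp(0.0, 1.0).detach()        # keep pixels in a valid range

def adversarial_training_step(model, optimizer, x, label, epsilon=0.03):
    """Defense track, simplest form: train the model on attacked inputs."""
    x_adv = fgsm_attack(model, x, label, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The perturbations are deliberately kept small enough that a human would barely notice them, which is precisely what makes such attacks so hard to defend against.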

Can AI be a way?

While machine learning and deep learning have topped the popularity charts in most industries, they are not immune to being tricked. The technology, which involves feeding data into a special computer program and developing an algorithm to achieve an outcome, can often be tweaked to divert the results.

For long, spammers have been able to evade spam filters by cracking the code of which kinds of patterns a filter’s algorithm can identify. In the same way, hackers have been able to manipulate even the smartest of algorithms to produce skewed results. It is not surprising, then, that deep learning algorithms with near-human skills can also be fooled into producing undesired results, for profit or pure mischief. Hackers can also install malware to worsen the scenario.
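As a toy illustration of the spam-filter evasion described above, consider a simple bag-of-words classifier trained on a handful of made-up messages. Padding a spam message with words the model associates with legitimate mail can nudge it back across the decision boundary; the data and the outcome here are purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny, made-up training set: 0 = legitimate mail, 1 = spam.
ham  = ["meeting agenda attached", "lunch tomorrow?", "project status report"]
spam = ["win cash prize now", "cheap pills buy now", "claim your free prize"]

vec = CountVectorizer()
X = vec.fit_transform(ham + spam)
clf = LogisticRegression().fit(X, [0, 0, 0, 1, 1, 1])

original = "claim your cash prize now"
# Evasion attempt: pad the same message with 'legitimate-looking' words.
padded = original + " meeting agenda project status report"

print(clf.predict(vec.transform([original])))  # likely flagged as spam (1)
print(clf.predict(vec.transform([padded])))    # padding may tip it toward ham (0)
```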

In the past, IBM has ventured into the space of bringing AI into cybersecurity with its Watson, which helps users respond to threats across endpoints, networks, users and the cloud. Though it has been trained in the language of cybersecurity and has studied more than a million security documents, there are still voids to be filled.

Last word:

The results, which will be presented at a major AI conference later this year, could bring a major revolution in how the cyber industry deals with attacks that often end up destroying resources at large. Having said that, the unpredictability around AI is one thing we need to be careful about. After all, it wasn’t long ago that Elon Musk shared his views on having stricter policies and regulations for AI in place.


