
The Last Selfie on Earth: AI-generated selfie photographs show what the last day on Earth might look like. They depict some quite horrific visions of how it all may end, all thanks to DALL-E 2, an artificial intelligence image generator.

But what about AI in cyber security?

Artificial intelligence (AI) technologies, such as machine learning and natural language processing, deliver speedy insights that cut through the noise of daily cyber security alerts.

These tools compile threat intelligence from millions of academic papers, blogs, and news reports.

But there are signs that hackers are also using AI to further their own campaigns. Despite the many positive applications of artificial intelligence and machine learning, our adversaries have discovered ways to turn them to their own ends.

Cloud computing facilitates quick and easy AI experimentation and the development of robust learning models.

A few examples of how hackers are using AI include:

1. Putting malware to the test using artificial intelligence tools

There are several ways attackers might employ machine learning. The first and most accessible is to build their own machine learning environments and model their malware and attack techniques against them, learning what events and behaviours defenders look for.

Complex malware, for instance, may tamper with system libraries and components, launch processes in memory, and make network connections to domains controlled by the attacker. Taken as a whole, these actions constitute what are known as "tactics, techniques, and procedures" (TTPs). Machine learning models can observe these TTPs and use them to improve their detection skills.

By watching and anticipating how security teams recognize these TTPs, adversaries can quietly and regularly change their indicators and behaviours, circumventing defenders who rely on AI-based techniques to identify attacks.
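To make this concrete, here is a minimal Python sketch of how such a rehearsal could look: the attacker trains a stand-in "defender" classifier on synthetic TTP features, then tones observable behaviours down until the stand-in stops flagging them. The feature names, data, and threshold are all hypothetical illustrations, not any real product's detection logic.

```python
# Hypothetical sketch: an attacker rehearsing malware against a local
# stand-in for a defender's ML detector. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy TTP feature vectors: [library_tampering, in_memory_procs, c2_connections]
benign = rng.normal(loc=[0.1, 0.2, 0.1], scale=0.1, size=(500, 3))
malicious = rng.normal(loc=[0.8, 0.9, 0.7], scale=0.1, size=(500, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

# The attacker's local "defender model", trained to flag malicious TTPs.
surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Start from the attacker's real behaviour profile and tone every feature
# down until the surrogate stops flagging it -- a rehearsal for evasion.
sample = np.array([[0.8, 0.9, 0.7]])
while surrogate.predict_proba(sample)[0, 1] > 0.5:
    sample -= 0.05  # dampen each observable behaviour slightly
print("evasive TTP profile:", sample.round(2))
```

The point of the sketch is that none of this requires access to the defender's real model; a rough local imitation, trained on publicly known detection behaviours, is enough to rehearse against.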

2. Destroying AI with bad information

Beyond using machine learning and artificial intelligence themselves, attackers can poison the AI models defenders rely on by feeding them false data. Accurate, repeatable detection profiles in machine learning and AI models require properly labelled data samples. An attacker who introduces innocuous files that appear normal, or engineers activity patterns that turn out to be false positives, can fool an AI model into treating attack behaviours as benign.

Attackers can likewise poison AI models by introducing malicious files that the training process has been led to label as safe.
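A toy sketch of label-flipping poisoning makes the mechanism clear: a batch of malicious-looking samples is inserted into the training set labelled "safe", and the resulting detector misses real malware it would otherwise catch. All data here is synthetic and the model is a deliberately simple stand-in.

```python
# Hypothetical sketch of label-flipping poisoning: malicious-looking samples
# are inserted into training data labelled benign (0), degrading detection.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign = rng.normal(0.2, 0.1, size=(400, 4))
malicious = rng.normal(0.8, 0.1, size=(400, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 400 + [1] * 400)

clean_model = LogisticRegression().fit(X, y)

# Poison: 500 samples that look malicious but are deliberately labelled safe.
poison = rng.normal(0.8, 0.1, size=(500, 4))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(500, dtype=int)])
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# The poisoned detector now waves real malware through.
test_malware = rng.normal(0.8, 0.1, size=(200, 4))
print("clean model detects:   ", clean_model.predict(test_malware).mean())
print("poisoned model detects:", poisoned_model.predict(test_malware).mean())
```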

3. Mapping the AI models already in use

Intruders are always looking for new ways to map the AI models employed by cyber security companies and security operations teams. When attackers get insight into the inner workings of an AI model, they are better equipped to disrupt machine learning processes and models during their training and update cycles. This leaves the model more susceptible to manipulation, letting attackers subtly alter input data to thwart pattern-based detection and turn the model to their advantage.
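The sketch below illustrates one way such mapping could work in principle: the attacker treats the deployed detector as a black box, queries it with probe inputs, and fits a local surrogate that mimics its verdicts. The "victim" model, probe strategy, and data are all invented for illustration.

```python
# Hypothetical sketch of "mapping" a deployed model via black-box queries:
# probe the victim, record its verdicts, and fit a local imitation of it.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(1000, 3))
y = (X.sum(axis=1) > 1.5).astype(int)        # stand-in "malicious" rule
victim = GradientBoostingClassifier().fit(X, y)  # the defender's black box

def query_victim(samples):
    """The only access the attacker has: submit inputs, observe verdicts."""
    return victim.predict(samples)

# Probe with random inputs and train a surrogate on the observed verdicts.
probes = rng.uniform(0, 1, size=(2000, 3))
surrogate = DecisionTreeClassifier(max_depth=5).fit(probes, query_victim(probes))

# How faithfully does the attacker's map reproduce the victim's decisions?
holdout = rng.uniform(0, 1, size=(500, 3))
agreement = (surrogate.predict(holdout) == query_victim(holdout)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```

Once the surrogate agrees with the victim often enough, the attacker can study and evade it offline, without ever triggering an alert on the real system.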

How Can You Protect Against AI-Based Attacks?

It is incredibly tough to defend against AI-focused attacks. First, security professionals must ensure that the data labels used in learning models and pattern creation are correct. Insisting on precisely labelled data is likely to shrink the data sets available for training, which comes at some cost to AI efficiency.
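One practical way to audit label quality, sketched below under simplifying assumptions, is to score every sample with a model that never saw it during training and flag samples whose assigned label that model finds implausible. This is a simplified version of the "confident learning" idea; the data and the 0.3 threshold are purely illustrative.

```python
# Hypothetical sketch of a label-quality audit: flag samples whose
# out-of-fold predicted probability for their assigned label is low,
# so analysts can re-check them before training a production model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.2, 0.1, (300, 4)), rng.normal(0.8, 0.1, (300, 4))])
y = np.array([0] * 300 + [1] * 300)
y[rng.choice(600, 20, replace=False)] ^= 1  # simulate 20 mislabelled samples

# Out-of-fold probabilities: each sample is scored by a model that never saw it.
proba = cross_val_predict(LogisticRegression(), X, y, cv=5, method="predict_proba")
confidence_in_given_label = proba[np.arange(len(y)), y]

suspect = np.where(confidence_in_given_label < 0.3)[0]
print(f"{len(suspect)} samples flagged for manual label review")
```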

Second, for those constructing AI security detection models, introducing adversarial strategies and tactics during modelling can help align pattern recognition with the tactics encountered in the wild.
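As a rough illustration of the idea, the sketch below augments a synthetic training set with "evasive" variants of malicious samples, nudged toward the benign region the way an attacker quietly toning down behaviours would, and compares the plain detector against the hardened one. The perturbation is a toy stand-in for real adversarial tactics.

```python
# Hypothetical sketch of adversarial training: add perturbed copies of
# malicious samples that mimic evasion attempts, so the decision boundary
# is harder to skirt. All data and perturbations are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
benign = rng.normal(0.2, 0.1, (400, 4))
malicious = rng.normal(0.8, 0.1, (400, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 400 + [1] * 400)

# "Adversarial" variants: malicious samples nudged toward the benign region.
evasive = malicious - 0.3
X_adv = np.vstack([X, evasive])
y_adv = np.concatenate([y, np.ones(400, dtype=int)])

plain = LogisticRegression().fit(X, y)
hardened = LogisticRegression().fit(X_adv, y_adv)

# Probe with evasively toned-down malware sitting between the two clusters.
probe = rng.normal(0.5, 0.05, (200, 4))
print("plain model catches:   ", plain.predict(probe).mean())
print("hardened model catches:", hardened.predict(probe).mean())
```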

The TrojAI software framework (https://arxiv.org/pdf/2003.07233.pdf), created by Johns Hopkins University researchers, can aid here by generating trojaned AI models and data sets at scale, supporting research into detecting Trojan and other malware patterns.

Similarly, TextFooler (https://arxiv.org/pdf/1907.11932.pdf), a tool developed by MIT researchers, generates adversarial examples for natural language patterns and might be valuable in developing more robust AI models that detect concerns such as bank fraud.
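TextFooler itself has its own interface, but the core word-substitution idea behind it can be sketched in a few lines: swap words for near-synonyms until a text classifier flips its verdict. The classifier, synonym table, and message below are all toy inventions for illustration, not TextFooler's actual method or API.

```python
# Toy sketch of the word-substitution idea behind tools like TextFooler:
# replace words one by one until the classifier's verdict changes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["urgent wire transfer required now",
               "verify your account immediately",
               "meeting notes attached for today",
               "lunch tomorrow at noon"]
train_labels = [1, 1, 0, 0]  # 1 = fraud-like, 0 = benign
clf = make_pipeline(CountVectorizer(), LogisticRegression()).fit(train_texts, train_labels)

synonyms = {"urgent": "prompt", "wire": "bank", "transfer": "payment",
            "required": "needed", "now": "today"}

words = "urgent wire transfer required now".split()
for i, w in enumerate(words):
    if clf.predict([" ".join(words)])[0] == 0:
        break                        # verdict already flipped to benign
    words[i] = synonyms.get(w, w)    # otherwise try a synonym swap

adversarial = " ".join(words)
print("rewritten message: ", adversarial)
print("classifier verdict:", clf.predict([adversarial])[0])  # 0 = benign
```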

As AI becomes more critical, attackers will do their own research in an effort to outpace defenders. To protect against them, security personnel must stay up to speed on attacker strategies.

MANAGED CYBERSECURITY SOLUTIONS

Rhyno delivers a range of activities that combine to fully protect your infrastructure and data from cybercriminals, anywhere and everywhere, 24/7/365.


About Rhyno Cybersecurity Services

Rhyno Cybersecurity is a Canadian-based company focusing on 24/7 Managed Detection and Response, Penetration Testing, Enterprise Cloud, and Cybersecurity Solutions for small and midsize businesses.

Our products and services are robust, innovative, and cost-effective. Underpinned by our 24x7x365 Security Operations Centre (SOC), our experts ensure you have access to cybersecurity expertise when you need it the most.
