7 Ways AI and ML Are Helping and Hurting Cybersecurity

In the right hands, AI and ML can enrich our cyber defenses. In the wrong hands, they can do significant damage.

Artificial intelligence (AI) and machine learning (ML) are now part of our everyday lives, and cybersecurity is no different. In the right hands, AI/ML can expose vulnerabilities and reduce incident response time. But in the hands of cybercriminals, they can also cause significant damage.

Here are seven positive and seven negative impacts of AI/ML on cybersecurity.

7 Positive impacts of AI/ML on cybersecurity

Fraud and anomaly detection

Fraud detection is the most common application of AI in cybersecurity. Composite AI engines, which combine several detection techniques, show excellent results in uncovering complicated fraud patterns, and the advanced analytics dashboards of fraud detection systems provide comprehensive incident details. Fraud detection is one high-value instance of the broader field of anomaly detection.
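
Classical statistical tests remain the baseline such engines are measured against. As a minimal sketch of the idea (not any vendor's actual engine; the threshold and transaction data are invented for illustration), a robust modified z-score can flag an out-of-pattern transaction amount:

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Flag values whose modified z-score, based on the median and the
    median absolute deviation (MAD), exceeds `threshold`. Unlike a
    mean-based test, a single extreme value cannot distort the baseline."""
    median = statistics.median(amounts)
    mad = statistics.median(abs(a - median) for a in amounts)
    if mad == 0:
        return []
    return [a for a in amounts if 0.6745 * abs(a - median) / mad > threshold]

# Routine purchases plus one wildly out-of-pattern charge
transactions = [12.5, 9.9, 14.2, 11.0, 13.3, 10.8, 12.1, 9500.0]
print(flag_anomalies(transactions))  # -> [9500.0]
```

Real fraud engines score many features at once (merchant, geography, velocity), but the core question is the same: how far does this event sit from the established baseline?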

Email spam filters

Defensive rules filter out messages containing suspicious words to identify dangerous emails, and ML-trained classifiers learn which patterns predict spam. Spam filters thus protect email users and reduce the time needed to sift through unwanted correspondence.
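
A minimal sketch of such a rule-based filter (the terms, weights, and threshold below are invented for illustration; production filters learn these from labeled mail rather than hard-coding them):

```python
# Illustrative term weights; a real filter learns these from training data
SUSPICIOUS_TERMS = {"winner": 3, "urgent": 2, "click here": 2, "free": 1}

def spam_score(message, threshold=4):
    """Sum the weights of suspicious terms found in the message and
    report whether the total crosses the spam threshold."""
    text = message.lower()
    score = sum(w for term, w in SUSPICIOUS_TERMS.items() if term in text)
    return score, score >= threshold

print(spam_score("URGENT: you are a winner! Click here for a free prize"))  # -> (8, True)
print(spam_score("Minutes from Tuesday's meeting attached"))                # -> (0, False)
```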

Botnet detection

Supervised and unsupervised ML algorithms not only facilitate detection but also help prevent sophisticated bot attacks. By identifying patterns in user behavior, they can detect previously unseen attacks with an extremely low false positive rate.
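
One of the raw signals behind bot detection is simply per-client request rate. A hand-rolled sketch of that signal follows; an ML classifier would learn the thresholds from behavior data rather than hard-code them, and the window and limit here are purely illustrative:

```python
from collections import defaultdict

def find_bots(request_log, window=60.0, max_requests=100):
    """Flag any client IP that issues more than `max_requests` requests
    within any `window`-second span (a sliding-window rate check)."""
    by_ip = defaultdict(list)
    for ip, timestamp in request_log:
        by_ip[ip].append(timestamp)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Shrink the window from the left until it spans <= `window` seconds
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > max_requests:
                flagged.add(ip)
                break
    return flagged

# A scripted client hammering the server vs. a human browsing normally
log = [("10.0.0.5", i * 0.05) for i in range(150)]
log += [("192.168.1.2", i * 10.0) for i in range(5)]
print(find_bots(log))  # -> {'10.0.0.5'}
```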

Vulnerability management

Managing vulnerabilities, whether manually or with conventional tools, is difficult, but AI systems make it easier. AI tools hunt for potential vulnerabilities by analyzing baseline user behavior, endpoints, servers, and even dark web discussions to identify code flaws and predict attacks.

Anti-malware

AI helps antivirus software distinguish good files from bad, making it possible to identify new forms of malware even if they have never been seen before. Completely replacing traditional techniques with AI-based methods can speed up detection, but it also increases the number of false positives. Combining traditional signature-based methods with AI yields far better detection coverage than either approach alone.
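
A toy sketch of this hybrid approach, pairing a signature lookup with a byte-entropy heuristic as a crude stand-in for a trained model (the hash below is the SHA-256 of an empty file, used purely as a placeholder signature, and the threshold is illustrative):

```python
import hashlib
import math

# Placeholder signature database: this is the SHA-256 of an empty file,
# standing in for real malware signatures
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def entropy_score(data: bytes) -> float:
    """Shannon byte entropy normalized to 0..1; packed or encrypted
    payloads tend toward high entropy, so this acts as a rough heuristic."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    return entropy / 8.0

def classify(data: bytes, threshold: float = 0.9) -> str:
    """Signature check first, heuristic second: the hybrid approach."""
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
        return "malicious (signature match)"
    if entropy_score(data) > threshold:
        return "suspicious (heuristic)"
    return "clean"

print(classify(b""))                     # -> malicious (signature match)
print(classify(bytes(range(256)) * 16))  # -> suspicious (heuristic)
print(classify(b"plain readable text"))  # -> clean
```

The signature path catches known threats exactly; the heuristic path generalizes to unseen ones at the cost of occasional false positives, which mirrors the trade-off described above.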

Preventing data leakage

AI helps detect certain types of data in text and non-text documents. Trainable classifiers can be taught to recognize different sensitive information types. These AI approaches can search data in images, voice recordings, or videos using appropriate recognition algorithms.
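
For text documents, even a non-ML sketch shows the shape of the problem: pattern matching plus a checksum can catch credit card numbers while screening out look-alike digit runs. The regex and length limits here are simplified for illustration:

```python
import re

# 13-16 digits, optionally separated by spaces or hyphens (simplified)
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Return digit strings that look like card numbers AND pass the Luhn
    check, filtering out random digit runs that merely match the pattern."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

sample = "Card 4111 1111 1111 1111 on file; order ref 1234 5678 9012 3456."
print(find_card_numbers(sample))  # -> ['4111111111111111']
```

Trainable classifiers extend this idea to fuzzier categories (contracts, medical records, source code) where no simple regex or checksum exists.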

SIEM and SOAR

ML can leverage security information and event management (SIEM) and security orchestration, automation, and response (SOAR) tools to improve data automation and intelligence gathering, detect suspicious behavior patterns and automate response depending on the input.
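
A minimal sketch of that detect-then-respond loop (the event schema, the failure threshold, and the `lock_account_and_alert` action name are all invented for illustration; real SOAR playbooks are far richer):

```python
from collections import Counter

def triage(events, fail_threshold=5):
    """Count failed logins per user and map each offender to a
    SOAR-style playbook action."""
    fails = Counter(e["user"] for e in events if e["type"] == "login_failed")
    return {
        user: "lock_account_and_alert"
        for user, count in fails.items()
        if count >= fail_threshold
    }

events = (
    [{"user": "alice", "type": "login_failed"}] * 6
    + [{"user": "bob", "type": "login_failed"},
       {"user": "bob", "type": "login_ok"}]
)
print(triage(events))  # -> {'alice': 'lock_account_and_alert'}
```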

AI/ML is used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analysis, and most technology areas described in Gartner’s Impact Radar for Security. In fact, it’s hard to imagine a modern security tool without some kind of AI/ML magic in it.

7 Negative impacts of AI/ML on cybersecurity

Data collection

Cybercriminals use ML, combined with social engineering and other techniques, to build better profiles of their victims and then use that information to accelerate attacks. In 2018, for example, ML-driven botnets infected WordPress websites en masse, giving hackers access to users' personal data.

Ransomware

Ransomware is experiencing an unfortunate renaissance. Examples of criminal success stories abound; one of the worst incidents led to the six-day shutdown of Colonial Pipeline and the payment of $4.4 million in ransom.

Spam, phishing, and spear phishing

ML algorithms can generate fake messages that look like real ones and aim to steal users' credentials. In a Black Hat presentation, John Seymour and Philip Tully described how an ML algorithm produced tweets with phishing links that were four times more effective than human-created phishing messages.

Counterfeiting

In voice phishing, fraudsters use ML-generated deepfake audio technology to carry out more successful attacks. Advanced algorithms such as Baidu’s “Deep Voice” require only a few seconds of a person’s voice to reproduce their speech, accents, and tones.

Malware

ML can help malware evade detection: malicious code can track node and endpoint behavior and generate patterns that mimic legitimate traffic on the victim's network. ML can also build a self-destructing mechanism into malware that increases the speed of an attack. Algorithms trained to exfiltrate data faster than a human could make such attacks much harder to prevent.

Passwords and CAPTCHAs

Neural network-based software can crack CAPTCHA-style human verification systems with ease. ML also allows cybercriminals to analyze huge password datasets to guess passwords more effectively. PassGAN, for example, uses an ML algorithm to guess passwords more accurately than common password-cracking tools based on traditional techniques.

Attacks on AI/ML itself

Misuse of algorithms at the heart of healthcare, the military and other high-value sectors could lead to disaster. The Berryville Institute of Machine Learning’s Architectural Risk Analysis of Machine Learning Systems helps analyze taxonomies of known attacks on ML and performs an architectural risk analysis of ML algorithms. Security engineers need to learn how to secure ML algorithms at every stage of their lifecycle.

It’s easy to see why AI/ML is getting so much attention. The only way to combat devious cyberattacks is to harness the potential of AI to defend against them. The enterprise world needs to recognize how powerful ML can be when it comes to anomaly detection (e.g., traffic patterns or human error). With the right countermeasures, potential damage can be prevented or drastically reduced.

Overall, AI/ML is of great value in protecting against cyber threats. Some governments and companies are using or discussing the use of AI/ML to combat cybercriminals. While the privacy and ethics concerns surrounding AI/ML are valid, governments need to ensure that AI/ML regulations do not prevent companies from using AI/ML for protection. Because, as we all know, cybercriminals do not abide by regulations.