
How Cybercriminals Use AI in Their Attacks — and Why You Should Use It for Defense

Artificial intelligence is increasing the number and effectiveness of threats, but organizations can put this powerful technology to work in their security efforts as well.


Artificial intelligence is the hottest technology trend of this decade, and it’s appearing in a wide variety of applications. From approving mortgage applications to identifying potential customers, AI allows organizations to improve their effectiveness, increase their productivity and control costs. It’s no surprise that AI tools and techniques have also found their way into the cybersecurity space in the hands of both attackers and defenders.

In my work at CDW, I’ve seen our customers grow increasingly concerned about the risk posed by AI tools being used by their adversaries. These techniques can dramatically improve the scale and effectiveness of traditional social engineering attacks, such as spear phishing emails and voice impersonations.

Next-Level Spear Phishing

Historically, spear phishing has been a labor-intensive process for attackers, beginning with time-consuming research on the target: seeking out patterns of behavior, organizational roles and personal interests by scouring corporate websites, social media and other sources. After developing a strong profile of the intended target, the attacker carefully crafts an email designed to trick the target into revealing sensitive information, such as a password or Social Security number.

Spear phishers are now upping their game by bringing AI to bear on their work. Instead of taking days or weeks to gather information about a target, they can use automated algorithms to scour the web on their behalf, quickly building a profile of the target and then automatically determining the most appropriate type of message for getting the target to respond. AI brings mass personalization to spear phishing attacks, increasing the volume of attacks and the overall likelihood of success.

Voice Impersonation

The news media is full of reporting about deepfakes: contrived audio and video clips impersonating well-known politicians and other public figures. For example, comedian Jordan Peele created a widely publicized deepfake video of President Obama that was shockingly believable.

The applications of this technology to cybersecurity are potentially devastating, particularly when it comes to social engineering. Attackers may gather public samples of a target’s voice or even engage with them in a seemingly innocuous conversation, and then use those samples to create a model of the target’s speech. This model can then be used to engage in real-time conversation over the phone with a third party, convincing them to bypass security controls or take other actions that can advance an attack.

Defending Against AI-Based Attacks

Organizations seeking to defend against these attacks should begin by educating users: people must know that these types of attacks exist before they can avoid falling victim to them. For example, an administrative assistant who is unaware that deepfake voice impersonation is even possible would have no reason to suspect that a call from a senior executive might be fake.

In addition to education, cybersecurity professionals should use their own artificial intelligence tools to defend against AI-based attacks. Content screening mechanisms can scan incoming email and other content for signs of malicious activity. AI-based behavior monitoring tools can develop models of a user’s normal behavior and then detect deviations from those norms.
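
To make the behavior-monitoring idea concrete, here is a minimal sketch in Python using scikit-learn's Isolation Forest, an unsupervised model commonly applied to this kind of anomaly detection. The features (login hour, data volume, failed logins) and the contamination setting are illustrative assumptions, not the workings of any particular product:

```python
# Minimal sketch: learn a model of a user's normal behavior, then flag
# activity that deviates from that baseline. Feature choices here are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical history of one user: [login hour, MB downloaded, failed logins]
normal_activity = np.array([
    [9, 120, 0], [10, 95, 1], [14, 200, 0], [11, 150, 0],
    [9, 110, 0], [16, 180, 1], [13, 90, 0], [10, 130, 0],
])

# Fit an unsupervised model of "normal" for this user
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# Score new events: a 3 a.m. login with a bulk download should stand out
new_events = np.array([
    [10, 140, 0],   # typical workday activity
    [3, 5000, 6],   # off-hours bulk download with repeated failed logins
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY -- investigate" if label == -1 else "normal"
    print(event, status)
```

In practice, a security platform would train a separate baseline per user or peer group and feed anomalies into an analyst's triage queue rather than blocking outright, but the core pattern is the same.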

AI also comes into play in solutions that employ machine learning and sandboxing to protect against advanced threats, such as ransomware and zero-day attacks. AI provides an advanced layer of security, using intelligent decision-making to scan a file and determine whether it's malicious, rather than basing that decision solely on a static signature or a simple heuristic score.

The process is similar to how the human brain makes decisions when solving a problem: We do research, gather data and facts, then decide how to solve the problem based on what we've learned. AI can do that on a much faster and larger scale. Rather than having a human analyze large quantities of data to decide if a file is malicious, a computer model using AI techniques can work through the same data and make a decision much faster, and possibly more accurately, than a human can.
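
The sketch below shows the general shape of this approach in Python: a classifier trained on static file features instead of signatures. The features (file size, byte entropy, imported API count) and the tiny hand-labeled training set are toy assumptions for illustration; real detection engines use far richer feature sets and much larger corpora:

```python
# Minimal sketch: classify a file as malicious from static features rather
# than a signature match. Features and labels below are toy assumptions.
import math
from collections import Counter

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string; packed or encrypted payloads run high."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical training set: [file size in KB, byte entropy, imported API count]
X_train = np.array([
    [150, 4.2, 120], [300, 4.8, 95],   # benign-looking samples
    [80, 7.6, 4],    [45, 7.9, 2],     # small, high-entropy, few imports
])
y_train = np.array([0, 0, 1, 1])       # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Derive features for an unknown file and act on the model's probability
sample = bytes(range(256)) * 4          # stand-in for a suspicious file's bytes
unknown = np.array([[len(sample) / 1024, byte_entropy(sample), 3]])
p_malicious = clf.predict_proba(unknown)[0][1]
print(f"P(malicious) = {p_malicious:.2f}")
```

A sandboxing layer complements this by detonating the file in isolation and feeding its observed behavior back in as additional features.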

The speed and autonomy of AI are crucial to improving security. First, timing is everything when it comes to blocking an attack; the faster you can identify a threat, the sooner you can block it, which minimizes the damage it can do. Second, the shortage of qualified IT security talent leaves some organizations exposed to risk, but AI can help fill the gap at companies that lack the skills or manpower to protect against many of the advanced attacks that are out there.

Technology leaders should move quickly to ensure they’re putting AI to work for them before they find themselves the victim of an AI-based attack.

Want to learn more about how CDW solutions and services can help protect your organization from ever-changing cyberattacks? Visit CDW.com/Security

This blog post brought to you by:

Brian Hill

CDW Expert
Brian Hill is a cybersecurity technical specialist at CDW covering McAfee. He is responsible for architecting McAfee security solutions and growing the brand's revenue, including creating bills of materials, running demos and supporting sellers with technical enablement.