{"id":83463,"date":"2023-07-12T09:02:07","date_gmt":"2023-07-12T09:02:07","guid":{"rendered":"https:\/\/www.techopedia.com"},"modified":"2024-01-16T10:31:52","modified_gmt":"2024-01-16T10:31:52","slug":"the-state-of-ai-and-cybersecurity","status":"publish","type":"post","link":"https:\/\/www.techopedia.com\/the-state-of-ai-and-cybersecurity","title":{"rendered":"The State of AI and Cybersecurity in 2024"},"content":{"rendered":"

As the old expression goes, "speed kills," and the world of cybersecurity is no different. Artificial intelligence (AI) cyber attacks enable hackers to break into networks and find critical data assets before security analysts can spot the intrusion.

Unfortunately, AI-driven attacks aren't a science fiction invention but a reality that security teams face daily.

For instance, the widespread adoption of generative AI tools, like ChatGPT and Bard, appears to have led to a dramatic increase in phishing attacks. A report produced by cybersecurity vendor SlashNext found that there's been a 1,265% increase in malicious phishing emails since the launch of ChatGPT.

The State of AI in Cyber Attacks in 2024

For years, defenders have discussed how AI could be used in cyber attacks, and the rapid development of large language models (LLMs) has heightened concerns over the risks they present.

In March 2023, anxiety over automated attacks was high enough that Europol issued a warning about the criminal use of ChatGPT and other LLMs. Meanwhile, NSA cybersecurity director Rob Joyce warned companies to "buckle up" for the weaponization of generative AI.

Since then, threat activity has been on the rise. One study released by Deep Instinct surveyed over 650 senior security operations professionals in the U.S., including CISOs and CIOs, and found that 75% of respondents had witnessed an increase in attacks over the past 12 months.


Furthermore, 85% of respondents attributed this increase to bad actors using generative AI.

If we identify 2023 as the year that generative AI-led cyber attacks moved from a theoretical risk to an active one, then 2024 is the year organizations need to be prepared to counter them at scale. The first step toward that is understanding how hackers use these tools.

How Generative AI Can Be Used for Bad

There are several ways that threat actors can exploit LLMs, from generating phishing emails and social engineering scams to writing malicious code, malware, and ransomware.

Mir Kashifuddin, data risk and privacy leader at PwC US, told Techopedia:

"The accessibility of GenAI has lowered the barrier to entry for threat actors to leverage it for malicious purposes. According to PwC's latest Global Digital Trust Insights Survey, 52% of executives say they expect GenAI to lead to a catastrophic cyber attack in the next year.

"Not only does it allow them to rapidly identify and analyze the exploitability of their targets, but it also enables an increase in attack scaling and volume. For example, using GenAI to quickly mass triage a basic phishing attack is easy for adversaries to identify and entrap susceptible individuals."

Phishing attacks are popular with attackers because all they need to do is jailbreak a legitimate LLM or use a purpose-built dark LLM like WormGPT to generate an email convincing enough to trick an employee into visiting a compromised website or downloading a malware attachment.

Using AI for Good

As concerns over AI-generated threats rise, more organizations are looking to invest in automation to protect against the next generation of fast-moving attacks.

According to a study by the Security Industry Association (SIA), 93% of security leaders expected to see generative AI impact their business strategies within the next five years, with 89% having AI projects active in their research and development (R&D) pipelines.

AI is set to become an integral part of enterprise cybersecurity. Research from Zipdo finds that 69% of enterprises believe they cannot respond to critical threats without AI.

After all, if cybercriminals can create phishing scams at scale via language models, defenders need to scale up their ability to defend against them; relying on human users to spot every scam they encounter simply isn't sustainable in the long term.

At the same time, more organizations are investing in defensive AI because these solutions offer security teams a way to decrease the time it takes to identify and respond to data breaches while reducing the manual administration needed to keep a security operations center (SOC) functioning.
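To make that concrete, below is a minimal, illustrative sketch of how an off-the-shelf anomaly detector could pre-triage login events so analysts only review the most unusual ones. The features (login hour, data transferred, failed attempts), the synthetic data, and the queue size are assumptions chosen for the example, not a description of any vendor's product.

```python
# Minimal sketch: anomaly-based pre-triage of login events for a SOC queue.
# Uses synthetic features (hour of login, MB transferred, failed attempts);
# a real deployment would use richer telemetry and tuned thresholds.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" logins: daytime hours, modest transfer sizes, few failures.
normal = np.column_stack([
    rng.normal(13, 3, 500),   # hour of day
    rng.normal(20, 5, 500),   # MB transferred
    rng.poisson(0.2, 500),    # failed attempts before success
])

# A handful of suspicious events: off-hours, large transfers, many failures.
suspicious = np.array([
    [3.0, 450.0, 6],
    [2.5, 300.0, 9],
    [4.0, 600.0, 4],
])

events = np.vstack([normal, suspicious])

# Fit on the observed traffic and score every event; the lowest scores
# are the most anomalous and go to the top of the analyst queue.
detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
scores = detector.score_samples(events)

top_n = 5
queue = np.argsort(scores)[:top_n]
for idx in queue:
    hour, mb, fails = events[idx]
    print(f"review event {idx}: hour={hour:.1f}, MB={mb:.1f}, failed_attempts={int(fails)}")
```

In a real deployment, scores like these would feed a SIEM or SOAR workflow rather than a print loop, shortening triage without taking the analyst out of the decision.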

Organizations can't afford to monitor and analyze threat data in their environments manually, because doing so without automated tools is simply too slow, particularly given the shortfall of roughly 4 million workers in the cybersecurity workforce.
