Can Cybersecurity Teams Make AI an Ally, Not an Antagonist?

AI is an unprecedented cybersecurity threat — but it can also be a valuable ally. Tech leaders can use these AI strategies to protect their organizations today.

It’s a challenging time for cybersecurity professionals. Breaches continue to increase at an alarming rate, ranging from relatively straightforward vectors like phishing to more complex and coordinated attacks like ransomware or zero-day exploits. Phishing, email compromise, and stolen credential attacks, when successful, cost businesses an average of $5 million each, while organizations need an average of 73 days to respond to and contain a breach. These factors and more continue to give bad actors plenty of incentive to keep up the pressure.  

Once you add the exploding popularity and power of AI tools to the mix, the situation becomes even more dire — or does it? 

“The question of AI and cybersecurity is a chicken-or-the-egg situation,” said Kolby Kappes, leader for Eliassen Group's AI and Data Services Practice. “AI is making the bad guys more effective, but it’s also doing the same for the good guys. It’s opening new vectors of attack, but it’s also enabling security teams to deploy new AI tools to mitigate those attacks.” 

Kappes pointed out that with the right tools, AI can help security teams not only anticipate threats as they continue to evolve, but also: 

  • Free up security professionals to focus on more valuable tasks or strategic initiatives 
  • Assess and address vulnerabilities proactively, rather than reactively 
  • Leverage machine learning’s pattern recognition powers to identify anomalies  

With so much at stake, and with so many AI-powered dangers looming, how can cybersecurity leaders ensure their teams stay ahead of the curve?  

AI Security Threats Are Real (and Growing) 

Before exploring ways organizations can mitigate the threat from AI, it’s important to understand both the size of the threat and the size of the opportunity.

First, Kappes pointed out that the volume of attacks has increased by almost 1,300% since the arrival of accessible generative AI platforms, and technology leaders are broadly aware of what’s at stake: A recent survey found that 69% of organizations cite AI-powered data leaks as their top security concern in 2025. But despite the size and severity of the threat, almost half (47%) have no AI-specific security controls in place.  

“AI isn’t a threat that’s just over the horizon,” Kappes said. “It’s a threat that’s already here. Your organization is likely already being targeted by attackers using AI in one form or another.” 

The second point Kappes stressed is that AI isn’t just being used by bad actors. It’s fast becoming a standard offering from leading security providers. In fact, 17 of the top 32 providers are already deploying AI use cases, and that number is only expected to grow in the near term.

“AI is enabling bad actors to become faster and more sophisticated, but it’s doing the same for security professionals and their vendors,” he said. “Not only is security tooling getting more sophisticated, it’s getting more sophisticated at an exponentially faster rate than we’ve seen previously.” 


How to Keep Up in an AI-Powered Cybersecurity Arms Race  

If white hats and black hats alike have access to an increasingly advanced AI toolset, how can organizations ensure their cybersecurity functions are equipped to keep pace?  

Leverage Vendors’ AI Tooling to Craft Leaner, More Capable Security Teams

One thing Kappes stressed above all is that organizations should look to take advantage of the acceleration in AI-powered security tooling happening on the vendor side.  

“We’re now at the point where a CISO no longer needs an army of cybersecurity professionals, each with a long list of certifications,” he said. “The right SaaS offering can bolt right onto my ecosystem and, for the most part, keep my ecosystem safe.”  

Solutions like CrowdStrike Falcon, Microsoft Defender, and IBM QRadar already fulfill this promise, albeit in different ways. And while adopting solutions like these doesn’t mean security teams can take their eyes off the ball, it does mean they can shift resources away from large internal teams and toward external, vendor-driven options.

“A few very skilled security professionals who essentially manage a suite of AI-powered tools will likely become the more desirable — and more effective — option for enterprises in the very near future,” he added.  

Use AI to Automate Threat Detection and Empower More Proactive Cybersecurity Teams 

Security policy configuration, compliance monitoring, and vulnerability detection can already be automated with AI tools, or soon will be. AI can detect and prioritize security threats, implement and/or monitor behavioral analytics to detect unusual activities, and even analyze network traffic to identify patterns and anomalies.
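The core idea behind flagging anomalies in network traffic can be shown with a toy example. The sketch below is a hypothetical, standard-library-only illustration (not how any of the vendors mentioned here actually work): it scores each sample with a median-based modified z-score and flags outliers in per-minute traffic volumes.

```python
from statistics import median

def robust_anomalies(samples, threshold=3.5):
    """Toy anomaly detector: flag indices whose modified z-score
    (based on the median and median absolute deviation) exceeds threshold.
    Illustrative only -- real tools model far richer features."""
    med = median(samples)
    mad = median(abs(x - med) for x in samples)
    if mad == 0:
        return []  # no spread in the data, nothing stands out
    return [i for i, x in enumerate(samples)
            if 0.6745 * abs(x - med) / mad > threshold]

# Hypothetical bytes-per-minute counts for one host; the final spike
# is the kind of pattern that might indicate data exfiltration.
traffic = [52, 48, 55, 50, 49, 51, 53, 47, 50, 900]
print(robust_anomalies(traffic))  # → [9]
```

The median-based score is used instead of a mean/standard-deviation score because a single large spike inflates the standard deviation enough to partially mask itself; this robustness to outliers is one reason statistical baselining remains a building block of traffic analysis.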

But Kappes pointed out that AI can’t keep your ecosystem or enterprise safe on its own (at least not yet). CISOs and other technology leaders need to ensure that the data and insights generated by AI solutions can be leveraged by security teams to act faster and more decisively in pursuit of bad actors.

Recognize That Employees May Be Your Biggest Vulnerability 

Deepfakes, vishing, social engineering, and more account for an increasing number of attacks on companies in the U.S. and abroad, and as gen AI becomes more advanced, those malinformation attacks will only increase in frequency and sophistication. In fact, Gartner predicts that by 2028, enterprise organizations will spend $30 billion annually combating the multifront threat of malinformation campaigns.

To prepare, consider establishing clear governance, policies, and tactics to counter social engineering, including:

  • Attack surface reduction: In cybersecurity terms, surface reduction usually means limiting entry points and vulnerabilities. In a landscape driven by social engineering threats, however, the attack surface expands to include public information about employees, org charts, contact details, and anything else that can be used to craft convincing phishing attacks. Consider implementing rigorous social media guidelines, limits on the information shared in job postings, and strict information-sharing protocols to keep such details offline and away from bad actors.
  • Repeated and rigorous employee training: Gartner’s research found that “90% of employees who admitted undertaking a range of unsecure actions during work activities knew that their actions would increase risk to the organization but did so anyway.” A continual cadence of training that reinforces risks and potential consequences will likely become even more necessary than ever in the near term.  
  • AI-powered analytics: AI-driven attacks can often be thwarted by AI-powered solutions. Consider implementing behavioral analytics tools to identify anomalous communication patterns, alongside email and messaging solutions with advanced natural language processing to identify AI-generated content. 
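At its simplest, “identifying anomalous communication patterns” means comparing each message against a baseline of normal behavior. The hypothetical sketch below (a toy model, not a real behavioral-analytics product) baselines which recipient domains each sender normally contacts and flags first-time pairings, such as a sudden message to an attacker-controlled domain:

```python
from collections import defaultdict

class CommBaseline:
    """Toy behavioral baseline: flag a sender's first message to a
    previously unseen recipient domain. Real platforms weigh many
    more signals (timing, volume, content, peer groups)."""

    def __init__(self):
        # sender address -> set of recipient domains observed so far
        self.seen = defaultdict(set)

    def observe(self, sender, recipient):
        domain = recipient.rsplit("@", 1)[-1]
        is_new = domain not in self.seen[sender]
        self.seen[sender].add(domain)
        return is_new  # True = deviates from this sender's baseline

baseline = CommBaseline()
baseline.observe("alice@corp.example", "bob@corp.example")    # first sighting
baseline.observe("alice@corp.example", "carol@corp.example")  # domain already known
print(baseline.observe("alice@corp.example", "x@attacker.example"))  # → True
```

A first-contact flag like this is deliberately noisy on its own; in practice it would be one feature among many feeding a risk score, not a blocking rule.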

Takeaways for Tech Leaders 

AI may be giving bad actors a significant upgrade in terms of tools and capabilities, but it can also do the same for security teams. To get the most from AI and stay ahead of emerging threats, cybersecurity leaders should consider: 

  • Leveraging the power of AI to automate much of the threat detection and response process, and using the insights they capture to empower leaner, more effective teams  
  • Letting vendors’ AI solutions do the heavy lifting to mitigate risk and lower costs 
  • Shoring up vulnerabilities to social engineering attacks through attack surface reduction 

But while these strategies can form the foundation of an AI-ready cybersecurity function, Kappes stressed a final, crucial point:  

“Thanks to AI, the threat landscape is changing faster than ever. Leaders can’t just deploy tools and policies and think their organizations are secure as a result. Instead, they have to keep one eye over the horizon and never stop innovating and iterating to anticipate tomorrow’s threats today.”  

To get more expert insights like these on AI, cybersecurity, operational efficiency, technology talent, and more, visit our resources page today.