As artificial intelligence becomes more deeply embedded in cybersecurity operations, it’s reshaping not only how organizations defend themselves, but also how they’re attacked.
We spoke with Andrew Carr, Senior Director at Booz Allen’s commercial incident response team, about the dual role AI now plays in the threat landscape. From accelerating threat detection and incident response to enabling more convincing phishing campaigns and deepfake attacks, AI is proving to be both a powerful ally and a growing risk.
How is AI being used to help defend against cyberattacks and support incident response?
Security vendors are now integrating AI and machine learning into their toolsets. That’s helping not just with identifying incidents and anomalous behavior, but also with making the response more efficient and faster.
One of the big challenges we see in security operations centers is the escalation process. A tier-one analyst might review an alert and flag it as potentially malicious, then escalate it to the next tier for confirmation. That takes time, especially when analysts are juggling large caseloads.
AI helps reduce that time. It allows teams to work more efficiently by using pattern recognition and predictive analytics to speed up detection and response. In many of these incidents, time is not on your side. The faster you can address an issue, the more likely you are to limit its impact on the organization.
It also helps improve the quality of response. Tier-one analysts may not have the same experience as more senior team members, but AI can enrich their understanding of the data they’re seeing. That leads to better, faster decisions at every level.
Are you seeing threat actors use AI to enhance their attacks?
Yes, absolutely. The biggest change we’ve seen since the rise of tools like ChatGPT is the improvement in phishing emails. Before, phishing attempts were often easy to spot—spelling errors, awkward grammar, strange formatting. Now, with generative AI, attackers can craft emails that are much more convincing.
And phishing is just one piece. We’re also seeing deepfakes—fake voices or videos—being used to impersonate people. For example, someone could call a help desk using a cloned voice to request a password reset. That’s especially concerning for people with a lot of public content online, like podcast hosts or executives.
AI also makes reconnaissance much easier. What used to take hours of manual research can now be done in seconds. Threat actors can gather detailed information about individuals, relationships, and even what security tools an organization uses, just by scanning public sources. It amplifies what a single attacker or small group can do.
Are AI models themselves vulnerable to cyberattacks? What role does MITRE ATLAS play?
Yes, AI models can absolutely be attacked. Many people are familiar with the MITRE ATLAS framework, which outlines how traditional networks are targeted, from initial reconnaissance to data exfiltration. MITRE ATLAS is an extension of that, but focused specifically on AI systems.
There are several ways AI models can be manipulated, intentionally or unintentionally. One example is data poisoning, where someone puts misleading information online that an AI model might ingest and learn from. That data could be crafted to create a loophole, trigger a specific behavior, or even jailbreak the model so it operates outside its intended boundaries.
Another concern is prompt engineering, where attackers craft inputs designed to extract the model’s training data. If that data includes intellectual property or sensitive information, it could lead to a serious breach.
These tools are powerful, but they’re not immune to attack. That’s why it’s critical to monitor how AI models are being used actively. You can’t just set them up and walk away. They need to be continuously evaluated for malicious interactions and corrected when necessary.