AI and cybersecurity: guide for professionals
AI: the new frontier of cybersecurity
Artificial intelligence is simultaneously a defensive tool and an attack vector. In 2026, 87% of organizations consider AI-related vulnerabilities the fastest-growing risk category. The AI cybersecurity market has reached $29.64 billion.
The speed of AI adoption has dramatically outpaced security frameworks. Organizations deployed LLM-based applications, AI agents with tool access, and AI-assisted development workflows faster than they established security standards for these technologies. The result is a generation of AI deployments where security was an afterthought, creating attack surfaces that did not exist three years ago.
The two faces of AI in security
AI as a threat:
- Deepfakes for social engineering and CEO fraud
- AI-generated phishing (more convincing, more personalized at scale)
- Poisoning of enterprise AI models and RAG knowledge bases
- Exploitation of AI framework vulnerabilities (Langflow, LangChain)
AI as a defense:
- Behavioral anomaly detection (UEBA)
- Automated malware analysis at scale
- Vulnerability research (Sec-Gemini, Big Sleep)
- Incident response automation and triage
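To make the detection side concrete, behavioral anomaly detection (the core of UEBA) at its simplest flags activity that deviates sharply from a user's historical baseline. A minimal sketch using a z-score against a per-user baseline — the data and the 3-sigma threshold are illustrative assumptions, not a production detector:

```python
from statistics import mean, stdev

def anomaly_score(baseline: list[float], observed: float) -> float:
    """Z-score of an observed value against a user's historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(observed - mu) / sigma

# Hypothetical baseline: daily file-download counts for one user
baseline = [12, 9, 15, 11, 10, 13, 12]
score = anomaly_score(baseline, observed=240)  # sudden mass download
if score > 3.0:  # common "3-sigma" flagging threshold
    print(f"anomaly: z={score:.1f}")
```

Real UEBA systems model many signals jointly (time of day, geography, peer-group behavior), but the principle is the same: score deviation from a learned baseline rather than match static signatures.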
The emerging threat landscape
AI lowers the skill floor for attackers. Phishing campaigns that previously required native-language copywriters can now be generated automatically in any language. Malware can be customized on the fly to evade signature-based detection. Voice cloning enables convincing impersonation of executives.
At the same time, defenders gain real advantages: AI can process behavioral telemetry at a scale no human analyst team can match, identifying subtle attack patterns that would otherwise go undetected.
Deepfake social engineering has moved from theoretical concern to operational reality. In 2024, a finance employee at a multinational transferred $25 million after a video call in which the apparent CFO and other executives were all deepfaked. Voice cloning requires only a few seconds of audio; video deepfakes can now be generated in real time. Authenticating executive communications has become a genuine security requirement.
AI-powered vulnerability discovery has accelerated the exploitation window dramatically. AI systems can scan code for patterns associated with known vulnerability classes, analyzing codebases at a scale no human team could match. This capability is available to both security researchers and attackers, but attackers benefit disproportionately because they only need to find one exploitable vulnerability.
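At its crudest, pattern-based code scanning reduces to matching source lines against signatures of known vulnerability classes — AI systems generalize this far beyond fixed patterns, but the skeleton is the same. A deliberately simple sketch; the two patterns below are illustrative examples, not a real ruleset:

```python
import re

# Illustrative signatures for two common Python vulnerability classes
PATTERNS = {
    "code-injection": re.compile(r"\beval\s*\(|\bexec\s*\("),
    "command-injection": re.compile(r"os\.system\s*\(\s*f?[\"']"),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, vulnerability_class) for each matching line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for vuln_class, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, vuln_class))
    return hits

sample = "user = input()\nresult = eval(user)\n"
print(scan(sample))  # [(2, 'code-injection')]
```

What makes AI-assisted discovery different in degree is that a model can flag vulnerable logic that matches no fixed regex — but the asymmetry noted above holds either way: the defender must triage every finding, the attacker needs only one.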
Shadow AI: the unseen risk
Shadow AI — unsanctioned use of AI tools by employees — represents a significant and often underestimated risk. Employees using personal AI assistant accounts to process corporate data, pasting sensitive information into public LLM interfaces, or installing AI coding assistants that exfiltrate code to external servers create data leakage risks that traditional DLP controls are not designed to detect.
A governance framework for AI tool usage should address: approved AI tools and vendors, data classification policies defining what may be processed by external AI systems, and monitoring for unauthorized AI service usage in network traffic.
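The monitoring piece can start as simply as checking DNS or proxy logs against lists of sanctioned and known AI service domains. A sketch of that idea — the domain lists and log format are hypothetical examples, not a vetted inventory:

```python
# Hypothetical policy lists -- a real deployment would pull these from a
# maintained inventory, not hard-code them.
SANCTIONED = {"api.openai.com"}  # approved under an enterprise agreement
KNOWN_AI_SERVICES = {
    "api.openai.com",
    "chat.example-llm.ai",       # fictional consumer LLM frontend
    "api.example-codegen.dev",   # fictional AI coding assistant
}

def flag_shadow_ai(dns_log: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return (user, domain) pairs hitting unsanctioned AI services."""
    return [(user, domain) for user, domain in dns_log
            if domain in KNOWN_AI_SERVICES and domain not in SANCTIONED]

log = [("alice", "api.openai.com"),
       ("bob", "chat.example-llm.ai"),
       ("carol", "intranet.corp.local")]
print(flag_shadow_ai(log))  # [('bob', 'chat.example-llm.ai')]
```

Domain matching only catches the obvious cases — it misses AI features embedded in sanctioned SaaS products — which is why it belongs alongside, not instead of, the policy and data-classification controls above.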
The OWASP Top 10 for LLMs
The OWASP Top 10 for Large Language Model Applications provides a structured framework for LLM security risks:
- Prompt injection
- Sensitive information disclosure
- Supply chain vulnerabilities
- Data and model poisoning
- Improper output handling
- Excessive agency
- System prompt leakage
- Vector and embedding weaknesses
- Misinformation
- Unbounded consumption
Excessive agency — granting LLM agents permissions beyond what their function requires — is among the most practically dangerous risks for agentic applications. An agent with access to email, file systems, and external APIs is an attractive target for prompt injection and other manipulation techniques.
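One mitigation for excessive agency is to enforce a per-agent tool allowlist outside the model, so a prompt-injected request for an out-of-scope tool fails at the dispatch layer instead of relying on the model to refuse. A minimal sketch of that pattern — the agent names, tool names, and registry are hypothetical:

```python
# Hypothetical per-agent allowlists: each agent gets only the tools
# its function requires (least privilege).
AGENT_TOOLS = {
    "support-bot": {"search_kb", "create_ticket"},
    "report-bot": {"read_metrics"},
}

class ToolDenied(Exception):
    pass

def dispatch(agent: str, tool: str, args: dict):
    """Enforce the allowlist before any tool call executes."""
    if tool not in AGENT_TOOLS.get(agent, set()):
        raise ToolDenied(f"{agent} may not call {tool}")
    return TOOLS[tool](**args)

# Toy tool implementations standing in for real integrations
TOOLS = {
    "search_kb": lambda query: f"results for {query}",
    "create_ticket": lambda title: f"ticket: {title}",
    "read_metrics": lambda: "metrics",
}

print(dispatch("support-bot", "search_kb", {"query": "reset password"}))
# A prompt-injected attempt at an out-of-scope tool raises ToolDenied:
# dispatch("support-bot", "read_metrics", {})
```

The key design choice is that the check lives in deterministic code on the dispatch path, where an injected prompt cannot reach it — the model never holds permissions it could be talked into misusing.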
EU AI Act implications
The EU AI Act, in full application from August 2026, creates compliance obligations for AI system operators, particularly those deploying high-risk AI systems. Security functions (fraud detection, threat assessment, critical infrastructure monitoring) may qualify as high-risk, triggering requirements for technical documentation, human oversight, accuracy testing, and transparency.
Organizations deploying AI in security-relevant contexts should assess whether their systems fall under high-risk classifications and prepare the required compliance documentation.
In this guide
- Prompt injection: the number one threat to LLM applications
- Data poisoning: compromising training data