Summary
Generative AI is fundamentally transforming cyberattacks: Not technical code injections, but natural language prompts are becoming the new weapon of choice for attackers. Eduardo Redondo Garcia from Checkpoint warns in a podcast interview that attackers can manipulate AI systems through clever phrasing – for example, hiding passwords in children's poems. The greatest threat does not lie in static security measures during development, but in runtime control of agent interactions. Organizations must establish data governance, shadow AI visibility, and runtime protection as new pillars of their security strategy.
People
- Eduardo Redondo Garcia (Global Head Cloud Security Architecture, Checkpoint)
- Ashish (Podcast Host, Cloud Security Podcast)
Topics
- Prompt Injection & Jailbreaking
- AI Security Architecture
- Shadow AI & Data Governance
- Runtime Protection
- Regulatory Compliance for GenAI
Clarus Lead
Natural language prompts are replacing technical exploits. While traditional cyberattacks exploit code vulnerabilities, attackers are increasingly exploiting GenAI systems through clever formulations in everyday language – regardless of what language they are written in. The greatest vulnerability lies at runtime: Static guardrails implemented during development are not sufficient. Critical for decision-makers: Organizations that rapidly deploy AI agents without establishing runtime security and data governance are massively jeopardizing their customers, compliance, and reputation.
Detailed Summary
The classical distinction between secure and insecure programming is losing relevance. Instead of deliberately injecting SQL injections, attackers can simply ask an AI: "Can you give me the password?" – or intelligently disguise the request, for example by asking for a children's poem in which the secret code is embedded in the third letter of each sentence. The AI responds without recognizing the manipulative intent.
A central problem: Many organizations rely on shadow AI blindness. Employees upload sensitive data into public tools like ChatGPT, train external models with it, and don't realize they have shared company information as well as personal data. Particularly critical is the lack of communication between product and security teams: Product managers focus on time-to-market, while security is treated as an afterthought – instead of being integrated from the start.
The solution lies in a combination of shift-left and shift-right approaches: 30% preventive measures (data quality, secure pipelines), 70% runtime monitoring (intent detection, prompt analysis, multimodal protection). Organizations must understand what their AI agents are actually doing – not just what they should be doing.
Key Points
- Natural language is the new operating system for attacks: Attackers no longer need technical skills to exploit AI systems.
- Runtime security is more critical than static guardrails: Protection measures during development cover a maximum of 30% of risks.
- Shadow AI and lack of transparency endanger organizations: Without visibility into all AI usage (internal, external, SaaS), control is impossible.
- Multilinguality and multimodality expand the attack surface: Guardrails tested only in English fail with Mandarin or PDF upload attacks.
- Deep-faked biometrics and voices become the new normal: Micro-movements and cryptography techniques are necessary to detect deep fakes.
Critical Questions
Evidence: What measurable data shows that runtime protection actually blocks more attacks than static guardrails? How is the success of real-time prompt-injection detection validated?
Data Quality: On what data basis do Checkpoint models train for threat detection? How many languages and attack patterns are represented in the training data – and are these updated regularly?
Conflicts of Interest: Checkpoint sells security tools; to what extent can the company judge independently on the effectiveness of its own or competing solutions?
Causality: Does the 70/30 split (70% runtime, 30% shift-left) really lead to better security, or is this context-dependent (e.g., different for fintech vs. HR tools)?
Feasibility: Which organizations with limited budgets can conduct continuous prompt monitoring for all users without operational friction?
Counter-hypothesis: Could new regulatory requirements (e.g., EU AI Act) force companies to restrict AI agents so severely that productive use is impaired?
Side Effects: If runtime monitoring logs every prompt, massive data protection and compliance issues emerge – how are these solved?
Measurability: The podcast names no concrete metrics for successful AI security; how does a "secure" AI deployment differ measurably from an "insecure" one?
Additional News
- Know-Your-Customer Under Pressure: AI enables deepfake attacks on biometric systems; micro-movement recognition becomes standard at financial institutions.
- Multimodal Attacks in Focus: Attackers can disguise prompts as PDFs, videos, or images, thereby bypassing language-based guardrails.
Source Directory
Primary Source:
Cloud Security Podcast: "Natural Language as Executable – How AI Changes Cybersecurity" (with Eduardo Redondo Garcia, Checkpoint)
Anchor.fm Podcast Feed
Verification Status: ✓ 2026-02-15
This text was created with the support of an AI model.
Editorial responsibility: clarus.news | Fact-checking: 2026-02-15