Executive Summary
The security of AI workloads in the cloud requires a fundamental paradigm shift compared to traditional cloud security. Toni de la Fuente, founder of the open-source project Prowler, warns of a growing "Shared Responsibility Gap" in managed AI services such as AWS Bedrock or Google Vertex AI. The core problem: while cloud providers clearly define their areas of responsibility for classic services (such as S3 or EC2), it often remains unclear which configuration and security settings customers must manage themselves. The complexity grows exponentially when AI services act as an intermediary layer to OpenAI, Claude, or other models, creating fourth and fifth levels of responsibility.
People
- Toni de la Fuente (Prowler Founder, Cloud Security Expert)
Topics
- AI Security vs. Cloud Security
- Shared Responsibility Model
- Open-Source Security Tools
- Infrastructure-as-Code Security
- LLM Configuration and Data Protection
Clarus Lead
The shift from cloud to AI security is not merely a technical evolution but an architectural realignment. While roughly 90% of all software contains open-source components and AI models are increasingly deployed in production environments, a critical vacuum is emerging: vendor-documented security recommendations for services like AWS Bedrock have doubled within 18 months, a sign that responsibility is steadily shifting from providers to customers. For decision-makers, this means that traditional cloud governance models (networking, IAM, logging) are insufficient for AI services; additional security layers are needed at the model level, in tool integration (such as MCPs), and in data access.
Detailed Summary
The Core Problem of the Shared Responsibility Gap
In the classic cloud era, the division of responsibility was clear-cut: AWS manages physical infrastructure, hypervisors, and networking; customers configure security groups, IAM roles, and encryption. With AI services, this boundary blurs dramatically. An AWS Bedrock deployment offers hundreds of configuration options, from model access control to logging to data retention policies, whose security implications are unknown to many teams. The situation becomes more acute when Bedrock itself acts as a gateway to ChatGPT, Claude, or proprietary models. Suddenly multiple trust boundaries exist: AWS (infrastructure), the Bedrock configuration (governance), and the third-party LLM (service security). Customers often do not realize that managing this complexity falls to them.
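To make the gap concrete, the following sketch audits one such customer-side setting: whether Bedrock model invocation logging, which is off by default, has been configured. It uses boto3 and assumes credentials with the bedrock:GetModelInvocationLoggingConfiguration permission; the helper name and region default are illustrative, not from the episode.

```python
# Minimal sketch: audit one customer-side Bedrock setting with boto3.
# Assumes AWS credentials that allow
# bedrock:GetModelInvocationLoggingConfiguration.
import boto3

def bedrock_invocation_logging_enabled(region: str = "us-east-1") -> bool:
    """Return True if Bedrock model invocation logging is configured.

    Invocation logging is disabled by default; turning it on is the
    customer's job under the shared responsibility model.
    """
    bedrock = boto3.client("bedrock", region_name=region)
    config = bedrock.get_model_invocation_logging_configuration()
    logging_config = config.get("loggingConfig") or {}
    # Count logging as enabled only if a delivery destination exists.
    return bool(
        logging_config.get("cloudWatchConfig") or logging_config.get("s3Config")
    )

if __name__ == "__main__":
    status = "PASS" if bedrock_invocation_logging_enabled() else "FAIL"
    print(f"{status}: Bedrock model invocation logging configuration")
```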
Architecture Rather Than Configuration Alone
De la Fuente emphasizes a critical distinction: AI security is less a configuration problem than an architecture design problem. A common mistake is to connect Model Context Protocol (MCP) servers, the interfaces between AI models and data sources, directly to databases. Instead, strict role-based access control (RBAC) should sit below the MCP layer: the AI holds no direct database permissions; a dedicated API layer enforcing RBAC ensures the model only reaches the data it needs. This requires design decisions before the deployment phase, not after.
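A minimal sketch of that pattern, with entirely illustrative role names and tool operations: every MCP tool call passes through an RBAC gate before any data access, and the database is only ever reached through a scoped query function, never with the model's own credentials.

```python
# Illustrative RBAC gate between an MCP tool layer and the data layer.
from dataclasses import dataclass

# Role -> permitted data operations (illustrative policy, not a real schema).
ROLE_PERMISSIONS = {
    "support_agent": {"read:tickets"},
    "analyst": {"read:tickets", "read:metrics"},
}

@dataclass
class ToolRequest:
    role: str       # role bound to the AI session, not to the model itself
    operation: str  # e.g. "read:tickets"
    query: str

class PermissionDenied(Exception):
    pass

def handle_tool_call(request: ToolRequest) -> list[dict]:
    """Gate every MCP tool call through RBAC before touching any data."""
    allowed = ROLE_PERMISSIONS.get(request.role, set())
    if request.operation not in allowed:
        raise PermissionDenied(
            f"role '{request.role}' may not perform '{request.operation}'"
        )
    return run_scoped_query(request.operation, request.query)

def run_scoped_query(operation: str, query: str) -> list[dict]:
    # Placeholder: a real system would query the database with a service
    # account scoped to this one operation, never with broad credentials.
    return [{"operation": operation, "query": query, "rows": []}]
```

The design point is that authorization is decided per session role and per operation, outside the model: the model can ask for anything, but the gate decides what it gets.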
The Role of Open Source and Continuous Scanning
Prowler, an open-source tool with more than 300 contributors, has extended its focus from pure cloud security (S3 buckets, IAM policies) to AI workloads. The tool now uses OWASP AI mappings and integrates with frameworks such as "From" (an LLM assessment tool) to check both infrastructure and model security. The advantages of open source: transparency builds trust, iteration is faster, and AI extensions enable automatic detection generation. One example: Google Cloud Code would struggle with even a simple security check ("Are there publicly accessible S3 buckets?"); Prowler handles this by combining a rule-based engine with AI-generated detections.
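For illustration, here is what a deterministic, rule-based version of that check can look like with boto3. This is a simplified sketch, not Prowler's actual implementation: it only inspects bucket policies, whereas a production check would also cover ACLs and account-level public access blocks.

```python
# Simplified rule-based check: which S3 buckets are public via bucket policy?
import boto3
from botocore.exceptions import ClientError

def public_buckets() -> list[str]:
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            status = s3.get_bucket_policy_status(Bucket=name)
            if status["PolicyStatus"]["IsPublic"]:
                findings.append(name)
        except ClientError as err:
            # Buckets without any policy raise NoSuchBucketPolicy; they
            # cannot be public via policy, so skip them here.
            if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
                raise
    return findings

if __name__ == "__main__":
    for name in public_buckets():
        print(f"FAIL: bucket '{name}' is public via its bucket policy")
```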
Shadow AI and Data Protection Compliance
An often-overlooked risk is "Shadow AI": employees upload contracts with personal data or confidential business information to ChatGPT, Claude, or DeepSeek, without knowing that these services may use inputs for model training by default. Only explicit opt-outs (for example, ChatGPT Enterprise or API access with training disabled) prevent this. This is not primarily a technical problem but one of governance and awareness: it requires policies, tooling to control who may use which AI services, and monitoring of AI access to sensitive data.
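One small technical complement to such policies is a gateway that redacts obvious personal data before a prompt leaves the organization and logs what it found. The sketch below is illustrative only; the two regex patterns are far from a complete DLP ruleset.

```python
# Illustrative pre-flight redaction for prompts bound for external AI services.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders; return hits for audit logging."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, hits

safe_prompt, hits = redact("Summarize the contract for jane.doe@example.com")
print(safe_prompt)  # Summarize the contract for [EMAIL REDACTED]
print(hits)         # ['EMAIL'] -- worth logging for governance review
```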
Key Takeaways
- Shared Responsibility Gap is widening: Managed AI services shift more security responsibility to customers than classic cloud services; this shift is not transparently documented.
- Architecture before configuration: Security of AI workloads is a design problem (RBAC below MCPs, API gating), not just a configuration problem.
- Open-source enables faster adaptation: Tools like Prowler can address new security questions faster through AI integration (LLM assessments, automatic detection generation) than proprietary solutions.
- Continuous assessment is necessary: New AI models and services emerge monthly; static security checks are insufficient – continuous scanning across infrastructure, code, and runtime is required.
- Shadow AI is a governance problem: Control mechanisms for AI usage and data protection compliance must address the organizational level, not just technical layers.
Critical Questions
[Evidence/Data Quality] What sources support the claim that security recommendations for Bedrock doubled between launch and early 2026? Are AWS-internal audit reports or customer feedback available to track this shift?
[Data Quality/Validity] The claim that 90% of all software contains open-source components: is this figure based on a specific study (such as Black Duck or GitHub)? In which contexts (enterprise, startups) is this proportion reliable?
[Conflicts of Interest] De la Fuente is the founder of Prowler, a security scanning tool. Could his focus on configuration gaps and architectural deficits unconsciously overestimate the market size for his product?
[Causality/Alternatives] Is the Shared Responsibility Gap actually due to Bedrock's service design, or does it simply reflect the greater complexity of generative AI systems – a problem that even a simplified shared responsibility model could not solve?
[Counter-hypothesis] The proposal to insert an RBAC layer below MCPs reduces AI flexibility and could increase latency. Are performance trade-offs and use cases where such architectures don't scale documented?
[Feasibility] How can mid-sized organizations without dedicated security teams implement the proposed architecture patterns (RBAC layer, MCP isolation, continuous scanning) when such designs require significant cloud and AI expertise?
[Side Effects] Open-source tools like Prowler enable transparency but also increase the attack surface for threat actors who understand and can bypass the tool's logic. How is this risk addressed?
[Implementation Risks] Shadow AI controls (policies, monitoring, opt-outs) require organizational culture acceptance. Which organizations have successfully implemented such programs, and which have failed?
Further Reports
- DeepSeek Incident: An unnamed security incident involving DeepSeek is mentioned; details remain unclear but could indicate data protection or security deficiencies in new AI models.
- Extended Prowler Integrations: In addition to AWS, Google Cloud, and Azure, niche cloud providers and SaaS solutions (Microsoft 365, MongoDB Atlas) are now supported.
Bibliography
Primary Source: Cloud Security Podcast episode with Toni de la Fuente – https://anchor.fm/s/10fb9928/podcast/play/115712786/
Supplementary Sources:
- Prowler Open-Source Repository (GitHub)
- AWS Bedrock Documentation
- OWASP AI Security Framework
- From (LLM Assessment Tool) Documentation
Verification Status: ✓ 2026-02-20
This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Checking: 2026-02-20