Summary

Three tech corporations are backing the AI company Anthropic after the Pentagon classified its products as a security risk. Meanwhile, Anthropic's AI model Claude is demonstrating exceptional security-research capabilities in a collaboration with Mozilla. In parallel, concerns are growing over data privacy at Meta and the lack of copyright standards for AI training.

Topics

  • AI Safety and Regulation
  • Pentagon and Military AI Contracts
  • Copyright and AI Training Data
  • Data Privacy in Wearables
  • Multimodal AI Models
  • AI Agents in the Workplace

Clarus Lead

Amazon, Google, and Microsoft are publicly signaling support for Anthropic after the U.S. Department of Defense classified its tools as a security risk for military applications. The contract for AI services went instead to OpenAI, which sets less strict limits on autonomous weapons systems. The decision has fueled internal conflict: Caitlin Kalinowski, robotics chief at OpenAI, resigned in protest, citing insufficient attention to safety. At the same time, Anthropic's Claude Opus 4.6 is demonstrating its practical strengths – the model discovered over 100 Firefox security vulnerabilities in two weeks, including 14 critical ones.

Detailed Summary

The divide between Anthropic and OpenAI over AI safety is deepening. While the Pentagon chose OpenAI for military AI applications, three industry giants have confirmed that their customers may continue using Anthropic's tools – the Pentagon's judgment applies only to government applications. The ideological difference is clear: Anthropic rejects autonomous weapons systems without human control, OpenAI does not. Kalinowski's departure signals growing internal tension at OpenAI over ethical standards.

Technically, Claude is showing its strength in specialized tasks. In collaboration with the Mozilla Foundation, the model identified 101 security vulnerabilities in the Firefox browser within 14 days – including 14 of high severity. For comparison, Mozilla patches approximately 73 critical bugs per year. In practical attack scenarios, however, Claude performed considerably worse: the model produced working exploits in only two of many attempts, without breaking through the browser's security mechanisms.

Regulatory trouble looms for Meta over its use of video from Ray-Ban smart glasses for AI training. British and American authorities, as well as a civil rights law firm, are investigating whether Meta violated data protection laws – particularly because the company forced clickworkers in Kenya to annotate intimate footage. Meta points to terms of service that permit data sharing, but the British data protection authority criticizes the advertised promise of user control as misleading.

In the area of copyright and AI, EU parliamentarians are pushing for faster regulation. A report by CDU politician Axel Voss calls for binding standards for the so-called text-and-data-mining exception: currently, AI providers may train on copyrighted content as long as rights holders do not object in machine-readable form, yet the definition of "machine-readable" remains legally unclear. Parliamentarians are also calling for collective compensation agreements between AI providers and collecting societies, as media companies are losing users to AI-generated summaries.
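To illustrate what a machine-readable objection can look like in practice today, here is a sketch of two mechanisms currently in use: a robots.txt file disallowing known AI training crawlers (the GPTBot, ClaudeBot, and Google-Extended user agents are published by OpenAI, Anthropic, and Google respectively), and a per-page declaration under the draft TDM Reservation Protocol (TDMRep). Whether either form satisfies the EU's text-and-data-mining exception is precisely the open legal question the Voss report addresses.

```text
# robots.txt – opting out of known AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Alternative: a TDM Reservation Protocol declaration per HTML page
# <meta name="tdm-reservation" content="1">
```

Crawlers are not legally obliged to honor robots.txt, which is why the report pushes for binding standards rather than voluntary conventions.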

Key Statements

  • Security policy divides the industry: Anthropic rejects autonomous weapons systems, OpenAI does not – with consequences for talent development and partnerships.
  • Claude convinces in security research: 101 bugs in 14 days, but its weakness at practical exploits shows that AI performs best on narrowly specified, specialized tasks.
  • Data privacy risks in AI training: Meta comes under pressure over the use of smart glasses videos, Clickworkers report intimate content without sufficient consent.
  • Copyright regulations are missing: EU parliamentarians demand standards for text-and-data-mining, transparency about training data, and collective compensation.

Further News

  • Meta and Unlabeled Videos as Training Source: Research shows that multimodal models can be trained from scratch on unlabeled videos – the text and image modalities do not compete with each other.
  • Fake Sources in Scientific Literature: AI language models invent references; a new tool called CiteAudit with five specialized AI agents is designed to detect fabricated citations.
  • Netflix Buys AI Film Technology: Acquisition of Ben Affleck's startup InterPositive for post-production tools, without commercial marketing.
  • AI Agents Out of Step with the Real Workplace: Carnegie Mellon and Stanford researchers show that 92% of agent benchmarks focus on computer and math tasks, which account for only 7.6% of U.S. employment.
  • MIT Transforms Flat Plates into 3D Structures: Inspired by kirigami; a single cord pull folds connected tiles into arbitrary shapes – applications from space habitats to hard-to-reach regions.
  • Luma AI Integrates Video, Image, and Audio Generation: Web platform combines creative tools (Ray 3.14, Sora, Veo) for marketing teams and agencies.
  • SoundHound AI Brings AI Agents into Cars: A voice assistant with 20+ agents independently orders pizza and books services – series production starts in 2025 at Genesis, Hyundai, Kia, Lucid, and the Stellantis Group.

Critical Questions

  1. Evidence and Data Quality: How valid is the Pentagon's assessment that Anthropic tools pose a "security risk" when tech giants do not share this assessment – and what specific metrics underlie this decision?

  2. Conflicts of Interest: Are Amazon, Google, and Microsoft strategically using their public support for Anthropic to protect their own AI investments, or is this based on genuine safety considerations?

  3. Causality in Claude's Firefox Performance: Does the discovery of 101 bugs demonstrate fundamentally superior capabilities of Claude, or does the model benefit from targeted prompts and focus on a narrowly defined use case, which is not transferable to broader tasks?

  4. Feasibility of Copyright Opt-Out Rules: How can regulatory standards for "machine-readable" opt-outs be technically enforced without paralyzing or fundamentally restructuring AI training?

  5. Counter-hypotheses to the Meta Investigations: Could the use of video data have been covered by formal consent from Ray-Ban buyers who were nonetheless unaware of the training purposes – and where is the legal boundary between terms of service and transparency obligations?

  6. Risks of Autonomous AI Agents in Vehicles: What liability structures exist if SoundHound AI agents make erroneous purchases or payments, and who bears financial or legal responsibility?

  7. Alternatives to Pentagon Contracts: Why did the Pentagon choose OpenAI over Anthropic – for technical reasons, or because of governance expectations that clash with Anthropic's strict stance?

  8. Side Effects of Unified AI Creative Platforms: Does Luma AI's consolidation of video, image, and audio generation in one app endanger specialized creative professions or merely promote efficiency gains?


Source Directory

Primary Source: AI Update Compact: Support for Anthropic, Firefox Copyright, Netflix – heise.de

Verification Status: ✓ 2025


This text was created with the support of an AI model. Editorial Responsibility: clarus.news | Fact-Checking: 2025