AI in Government: The German List of Deficiencies and the Swiss Excellence
clarus.news | Analysis | May 11, 2026
Two audit offices, two reports, two worlds. In April 2025, the German Federal Audit Office picks apart the Federal Press Office's BPAvis information system on security and budgetary grounds – no IT security concept two years after go-live, an emergency system in the Microsoft Azure cloud, no economic feasibility study. In January 2026, the Swiss Federal Audit Office (SFAO) praises the Federal Court for its responsible use of AI – ethics charter, own servers, open-source strategy. The comparison reveals more than two different administrative cultures. It shows how differently Berlin and Lausanne interpret the question of digital sovereignty, even when they do not officially raise it.
Two Reports, One Question
On April 7, 2025, the German Federal Audit Office (BRH) concluded its audit of the "BPAvis" system – that system of the Federal Press Office (BPA) that supplies the Federal Chancellor, Federal President, and all ministries with agency reports around the clock. The BPA describes the system as an "essential component of crisis communication" for the federal government.
On January 23, 2026, the Swiss Federal Audit Office published its audit report on the "Use of Artificial Intelligence" at the Federal Court in Lausanne. Two applications were audited: AI-supported anonymization of judgments ("Anom", in use since 2021) and the internal ChatGPT alternative "ChatTF", which was gradually introduced in 2025.
Both reports deal with the use of AI and IT in central state institutions. Both reach diametrically opposed assessments. And both avoid – with different consequences – the central question: Who owns the infrastructure on which our state runs?
The German List of Deficiencies
The BRH report reads like a catalogue of administrative negligence. More than two years after BPAvis went into operational use on August 3, 2021, the BPA has no valid IT security concept – a clear violation of the Federal Implementation Plan 2017 (UP Bund), which requires such a concept "as a priority" for critical business processes. The BPA had noted internally that "the formal prerequisites for go-live would only be met with the acceptance of the security concept." It went live anyway.
The "audit log" of the underlying SAP HANA database – that function that would log security-critical events – had been deactivated by the BPA. Against the explicit recommendation of the Federal Office for Information Security (BSI) as well as the manufacturer SAP. Justification: data protection.
The "internal hotline" for a system meant to secure the federal government's crisis communication is reachable Mon–Fri from 9 am to 5 pm. The BRH calls this, tersely, "unacceptable." The BPA counters the recommendation to train shift workers in the situation center accordingly with a reference to their "collective bargaining classification" – in other words: too expensive.
There is no service level agreement for BPAvis. No emergency concept according to BSI standard 200-4. No economic feasibility study in the planning phase – a violation of § 7 BHO. No success monitoring. No call statistics. By October 2023, 6.4 million euros had been spent. For 2024 to 2027, another 4.4 million are planned – the budget-justifying documents "are essentially based on offers from external service providers."
Notably, the BPA operates a permanently available emergency system for BPAvis in the Microsoft Azure cloud. In an emergency – precisely when crisis infrastructure should fulfill its purpose – the federal government's "crisis infrastructure" runs on a US hyperscaler. The BRH mentions this in a footnote on page 8, with no political, legal, or sovereignty-related assessment.
The Swiss Excellence
The SFAO report on the Federal Court has a different tone. "The SFAO welcomes," "risks are addressed in a balanced manner," "important cornerstone for the responsible use of AI." Four recommendations, all accepted – two of them with the justification that the Federal Court is already implementing them anyway.
The Federal Court issued internal usage rules for AI in 2023, and in 2025 an ethics charter was developed that anchors central principles: autonomy (AI only supportive, decision always by legal professional), transparency, traceability. Employees must identify when work results were created with AI assistance. Mandatory training must be completed before access to ChatTF.
ChatTF runs on the Federal Court's own servers. The entered data does not leave the court's IT environment. As a base model, the Federal Court initially chose the open-source model "Llama 3.3" from Meta – with the medium-term goal of switching to "Apertus," the first large Swiss language model with disclosed training data, which emerged from the Swiss AI Initiative of the federal institutes of technology in September 2025.
The cost side is matter-of-fact: 170,000 CHF for AI integration in Anom, slightly more than half a million francs for ChatTF (around 6.5 percent of the annual IT budget of 7.8 million CHF). Half of this is investments in server hardware. The ongoing costs for external model queries are capped at a maximum of 10,000 CHF per year. The Federal Court expects to have "amortized the ChatTF investment after just one year through efficiency gains."
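The SFAO's figures can be retraced with back-of-envelope arithmetic. A minimal sketch – the per-lawyer figure at the end is our own illustrative derivation, not a number from the report:

```python
# Back-of-envelope check of the SFAO cost figures cited above.
# All inputs are taken from the report as quoted; the per-lawyer
# figure is an illustrative derivation of our own.

it_budget_chf = 7_800_000        # annual IT budget of the Federal Court
chattf_share = 0.065             # ChatTF: "around 6.5 percent" of that budget
chattf_cost = it_budget_chf * chattf_share   # ~507,000 CHF, i.e. "slightly more than half a million"

anom_cost_chf = 170_000          # AI integration in Anom
total_ai_cost = anom_cost_chf + chattf_cost  # ~677,000 CHF, the "~670,000" order of magnitude

lawyers = 217                    # user base named in the report
# One-year amortization implies annual efficiency gains of roughly the
# ChatTF investment - spread across the lawyers, per head that is:
gain_per_lawyer = chattf_cost / lawyers

print(f"ChatTF cost: {chattf_cost:,.0f} CHF")
print(f"Total AI spend: {total_ai_cost:,.0f} CHF")
print(f"Implied annual gain per lawyer: {gain_per_lawyer:,.0f} CHF")
```

On these assumptions, one-year amortization requires efficiency gains in the low four figures per lawyer per year – a plausible, if unaudited, order of magnitude.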
There is still criticism – just weighted differently: The risks have so far been assessed "for AI in general," not for specific use cases. Impact measurement must occur periodically. Synergies with the Federal Administrative Court and Federal Criminal Court have not yet been utilized. The 2018 IT strategy is overdue for revision. Recommendations that sound like improvement, not scandal.
Sovereignty: The Omitted Keyword
The decisive difference lies not in the lists of deficiencies, but in the architecture of dependency. This is where the comparison becomes political.
The BPA operates its "crisis infrastructure" on SAP HANA – a German, and thus European database product. But the emergency system lies in the Microsoft Azure Cloud. In a crisis – precisely when crisis infrastructure should fulfill its purpose – the distribution of politically sensitive situation reports runs via the servers of a US corporation subject to the CLOUD Act. This is exactly the constellation that the then Swiss Army Chief Thomas Süssli warned against in September 2025 for Microsoft 365 and that National Councilor Gerhard Andrey cited to clarus.news as justification for his 10-million coup in the Federal Parliament: "It is the army chief's required exit strategy."
The BRH does not address this. Sovereignty, CLOUD Act risk, geopolitical dependency – no section, no recommendation, no critical question. The audit office checks whether budget law was complied with. That the federal government – a constitutional organ – relies on American infrastructure in emergencies is apparently not audit-relevant.
The SFAO takes the opposite path. The term "digital sovereignty" does not appear in its report either – yet the praise is oriented entirely toward sovereignty criteria. Self-operation of servers. Open source. Apertus as the goal. Sensitizing employees to the fact that training data cannot be inspected with commercial models. The Federal Court's own statement is even more explicit: the goal is to "ensure the use of corresponding possibilities with a certain independence from external service providers in Switzerland."
Two audit offices, two evaluation frameworks. One checks budget discipline and overlooks the architecture. The other checks economic efficiency and rewards precisely those architectural decisions that enable independence. This is no coincidence, but an expression of different institutional priorities.
Can Europe Keep Up? Apertus, Llama and Reality
This is where it gets honest – and uncomfortable. The SFAO praises the Federal Court for choosing an open-source model. But "Llama 3.3," currently running on the Federal Court's servers, comes from Meta. A US corporation. Even though the model is open and operated locally, the training data is not disclosed – a point the SFAO explicitly names as a remaining risk. With Apertus from the Swiss AI Initiative, developed at ETH and EPFL, Switzerland wants to close this gap. It is "the first large Swiss language model where it is disclosed with which data it was trained."
The sober question is: Is that enough? Apertus is a beginning, not parity. The leading models – GPT-5, Claude Opus 4.7, Gemini 2.5 – come from the USA. The leading open-source competition – Llama, DeepSeek, Qwen – comes from the USA and China. European models such as the French Mistral are technologically competitive but commercially far smaller. With Apertus, Switzerland plays in the second league – and does so deliberately, with a goal other than commercial success: sovereignty, transparency, Swiss data stocks.
The German report on BPAvis shows the alternative in all its shabbiness. The BPA did not weigh whether it wanted to build a sovereign emergency system. It simply rented Microsoft Azure because it was quickly available. There is no success monitoring, no emergency concept for the cloud, not even a service level agreement of its own. Sovereignty was not given up – it was never seriously considered.
The Federal Court approach – with own servers, own model, own charter – is more expensive per capita, slower in implementation, more complex in training. But it keeps options open. The German variant closes them.
Costs and Benefits: Factor 16
A direct cost comparison is tricky – the use cases are different. BPAvis serves the entire federal government with real-time agency reports, 1.8 million reports per year. The Federal Court system serves 217 lawyers plus staff. Nevertheless, the comparison is worthwhile:
| | Germany (BPAvis) | Switzerland (Federal Court AI) |
|---|---|---|
| Costs so far | 6.4 million euros | ~670,000 CHF (Anom + ChatTF) |
| Planned costs | 4.4 million euros (2024–2027) | – |
| Total expenditure | 10.8 million euros | ~670,000 CHF |
| Economic feasibility study | None available | Use case projection, amortization expected in 1 year |
| Success monitoring | None available | Recommended, accepted, in preparation |
| Infrastructure sovereignty | Emergency system in Microsoft Azure | Own servers, open-source model, Apertus as goal |
| Security concept | Not available over 2 years after go-live | Risk-based usage rules + ethics charter |
| Acceptance of audit office recommendations | Partially refused (hotline) | All accepted |
The factor of 16 between 10.8 million euros and 670,000 CHF is not proof of efficiency. The applications are not comparable, the user groups differ, and so do the requirements. What can be compared, however, is the ratio of costs to documented benefit. Germany: 10.8 million euros for a system whose economic efficiency the audit office cannot assess because it was never investigated. Switzerland: 670,000 CHF for a system whose amortization is modeled at one year and whose impact will be evaluated periodically in the future.
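The ratio itself is easy to retrace. A minimal sketch, treating euros and Swiss francs as roughly at parity – our simplifying assumption, not one either report makes:

```python
# "Factor 16" back-of-envelope: BPAvis total spend vs. Federal Court AI spend.
# EUR and CHF are treated as roughly at parity here - a simplification.
bpavis_total_eur = 6_400_000 + 4_400_000   # spent through Oct 2023 + planned 2024-2027
federal_court_chf = 670_000                # Anom + ChatTF combined

factor = bpavis_total_eur / federal_court_chf
print(f"Cost ratio: roughly {factor:.0f}x")   # -> roughly 16x
```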
It's not that Switzerland is more efficient. It's that Switzerland operationalizes the word "efficiency" at all.
What Both Reports Conceal
The gaps in the reports are as revealing as the findings. The BRH completely conceals the sovereignty question, although the cloud architecture is a central strategic decision. It audits compliance with procedures, not procedural decisions. Thus it structurally reproduces what the SFAO audit report 23759 on Swiss digitalization governance also documented: a political gap between constitutional organs and their operational decisions.
The SFAO conceals the geopolitical fragility of its own praise. Llama 3.3 is Meta. DeepL – also integrated into ChatTF – is a German company, but a commercial one. Apertus exists, but its performance relative to GPT-5 and its peers remains an open question. If the USA imposes export restrictions on AI model weights tomorrow, or Meta changes its licensing terms, the Federal Court has a problem too. The SFAO mentions this in half a sentence ("the Federal Court wants to go in this direction building on Apertus in the future") – but draws no consequences from it.
Both reports share a blind spot: They audit institutions individually, in national frameworks, with national standards. The supranational architecture of digital dependency – cloud hyperscalers, model providers, licensing regimes – falls between the cracks. In Germany, because it is considered technical common sense. In Switzerland, because it is treated as a generational task that one approaches strategically without being able to solve it today.
Conclusion: Two Administrative Cultures, One Common Problem
The Federal Audit Office shows what happens when a federal office treats its "crisis infrastructure" like an ordinary IT project: no security concept, no economic feasibility study, no sovereignty consideration, but Microsoft Azure as emergency backbone. The Swiss Federal Audit Office shows what happens when an institution approaches AI with the care of a constitutional question: ethics charter, own servers, open source first, Apertus as the goal.
But both reports face the same wall. As long as the basic building blocks – models, cloud, chips – come from the USA and China, European sovereignty in AI is a target state, not a current state. Switzerland approaches this goal with Apertus, with the Swiss AI Initiative, with the Federal Court's consistent open-source approach. Germany approaches this goal with BPAvis 2.0 on Microsoft Azure.
What both would need is an audit office that checks not only budget discipline and economic efficiency, but also structural sovereignty: the question of whether the architecture of state IT will still be under its own control in ten years. The BRH does not ask this question. The SFAO asks it without naming it. It is time to ask it openly.
The "bigger box" that Gerhard Andrey described to clarus.news in December 2025 is not just a Swiss box. It stands in every European capital. And in Berlin it seems even more locked than in Bern.
Key Messages
- The German BRH documents fundamental violations of security guidelines and budget law in the BPAvis system – no IT security concept, no economic feasibility study, no success monitoring, and an emergency system in Microsoft Azure.
- The SFAO praises the Federal Court for its AI use: own servers, open-source model, ethics charter, mandatory training – four recommendations, all accepted.
- Both audit offices avoid the term "digital sovereignty" – the SFAO implicitly rewards sovereignty-oriented architectural decisions, the BRH completely overlooks the sovereignty dimension.
- Apertus from the Swiss AI Initiative is the first large Swiss language model with disclosed training data, intended to replace the currently used Meta model Llama 3.3 in the medium term.
- Cost ratio: 10.8 million euros (BPAvis total expenditure) against 670,000 CHF (Federal Court Anom + ChatTF) – a direct comparison is tricky, but the ratio of costs to documented benefit can be compared.
- Structural deficit of both reports: The supranational architecture of digital dependency – hyperscalers, model providers, licensing regimes – falls between the national audit frameworks.
Critical Questions
- Data quality: On what data basis does the BRH assess the economic efficiency of BPAvis when the BPA itself cannot provide success monitoring or call statistics?
- Methodology: What comparative standards does the SFAO apply when it calls the Federal Court "balanced" without auditing other courts – such as the Federal Administrative Court – to the same depth?
- Conflict of interest: What role do contractual relationships with Microsoft (BPA: Azure emergency) or Meta (Federal Court: Llama 3.3) play in the selection and evaluation of solutions – and are these relationships periodically questioned?
- Causality: Can the efficiency gain of ChatTF actually be attributed to AI, or does the Federal Court benefit from structural advantages (self-operation of IT, highly qualified personnel) that were already present?
- Alternatives: What open-source options (Nextcloud, OpenStack, on-premise inference) were seriously examined at the BPA for emergency infrastructure before Microsoft Azure was commissioned?
- Feasibility: If Apertus as a Swiss model alternative is not yet production-ready, how realistic is the exit from Llama 3.3 in a manageable timeframe – and with what performance losses?
- Risks: What would be the consequences of a US export regime for AI model weights (analogous to chip export restrictions) for the Federal Court and for the BPA – and is this scenario analysis addressed in the strategies?
- Governance: Why do both audit offices audit institutions individually instead of investigating structural dependencies at European or bilateral level – and which body would be responsible for this?
Bibliography
Primary sources:
- Federal Audit Office: Final communication to the Press and Information Office of the Federal Government on the audit "Use of artificial intelligence for informing the federal government, Part 3 – BPAvis", Ref. VII 3 – 0001818/3, April 7, 2025
- Swiss Federal Audit Office: Audit of the Use of Artificial Intelligence, Federal Court, SFAO-25732, January 23, 2026
Supplementary sources:
- Federal Act on the Use of Electronic Means for the Performance of Official Tasks (EMBAG), March 17, 2023
- Federal Implementation Plan 2017 – Guideline for Information Security in the Federal Administration
- BSI Standard 200-4: Business Continuity Management
- Niklaus et al. (2023): Automatic Anonymization of Swiss Federal Supreme Court Rulings
- Swiss AI Initiative: Apertus – Publication September 2025
- clarus.news / National Councilor Gerhard Andrey: Email exchange on digital