Policymakers need sovereign AI, not commercial gadgets: how we built it and what we learned
Governments are adopting AI faster than they are governing it. But what does sovereign, public-interest AI look like in practice?
Raffaele F Ciriello, Anne-Marie Thow and Angelina Chen

16 February 2026
Digital sovereignty means more than hosting data in Australia. It means retaining public oversight and democratic control over the infrastructure, algorithms and interfaces that shape policy decisions. If artificial intelligence (AI) is to serve the public, it must be accountable to the public. Outsourcing policy-relevant infrastructure to overseas commercial providers risks undermining that sovereignty.
The longstanding assumption that free markets and commercial services better serve the public than state institutions has defined Western societies, including Australia, since the 1980s. But the risks of relying on commercial providers in public services have long been evident: from the unlawful Robodebt scheme to the Medibank breach that exposed 10 million people’s health data and the repeated nationwide Optus outages that prevented thousands of Triple Zero calls. Globally, a major Amazon Web Services outage recently paralysed banks, airlines and hospitals, revealing how dependent public life has become on a handful of Big Tech corporations. Recent moves toward digital sovereignty, such as France mandating domestic videoconferencing tools for officials, remain partial and isolated.
These vulnerabilities deepen as commercial AI tools enter government. In New South Wales, a contractor uploaded sensitive information from more than 2,000 residents into ChatGPT, triggering a formal investigation. In Victoria, a child protection worker fed sexual offence details into ChatGPT while drafting a court report, prompting an immediate ban. The European Parliament now instructs staff never to enter personal data into commercial chatbots. The problem is systemic: commercial AI tools like ChatGPT were never designed for policy development. Public servants cannot safely enter confidential material and lack the prompt engineering expertise needed for reliable outputs, while even a minor hallucination or misreading of trade obligations can destabilise an entire policy framework.
Building a public-interest AI prototype
An interdisciplinary team of public health and information systems researchers at the University of Sydney developed a public-interest AI prototype for trade and nutrition policymaking with support from the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ; German Agency for International Cooperation). We co-designed the prototype with colleagues from national governments, the United Nations and NGOs. Their diverse perspectives on regulatory needs, national priorities and knowledge equity taught us that the real innovation lies not in the tool but in the institutional design around it. This pilot project provided rich insights for Australian policymakers.
Trade and nutrition provided a demanding test case where corporate influence and public health priorities often collide. While trade agreements can support sustainable development, they can also constrain governments’ autonomy to act on core public health issues. Designing AI for such contexts requires navigating complexity without oversimplifying it.
To support policymaking, we began with a curated evidence base rather than scraping the web. We compiled open-access, peer-reviewed research selected for geographic diversity, policy relevance and accessibility for low- and middle-income countries. Each source was manually indexed so the system could recognise key regulatory concepts. AI outputs are only as strong as the evidence they draw from.
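To make the indexing step concrete, the sketch below shows one way a hand-curated, concept-tagged evidence base can be represented: each source carries metadata reflecting the selection criteria, and the manual concept tags are inverted into a lookup index. The field names, concept tags and example entry are illustrative assumptions, not the prototype’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    region: str            # geographic diversity was a selection criterion
    url: str               # open-access link back to the original
    concepts: list[str]    # manually assigned regulatory concepts
    text: str

def build_concept_index(sources: list[Source]) -> dict[str, list[Source]]:
    """Invert the manual concept tags so a query for a regulatory
    concept retrieves every curated source indexed under it."""
    index: dict[str, list[Source]] = {}
    for source in sources:
        for concept in source.concepts:
            index.setdefault(concept.lower(), []).append(source)
    return index

corpus = [
    Source(
        title="Trade agreements and nutrition labelling",  # illustrative
        region="Pacific",
        url="https://example.org/paper-1",  # placeholder, not a real source
        concepts=["front-of-pack labelling", "TBT Agreement"],
        text="...",
    ),
]
index = build_concept_index(corpus)
print(index["tbt agreement"][0].title)
```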
We then built a retrieval-augmented generation (RAG) architecture. If the language model is the engine, RAG is the navigation system that keeps it on course. Every answer had to be grounded in the curated database, prioritising traceability over prescriptive advice to prevent hallucinations. The goal was to provide multiple perspectives with clear links to original sources, enabling policymakers to assess evidence and apply it in context. This was especially important for officials in the Global South, who often face evidence gaps and bias in mainstream AI systems.
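For readers who want to see the mechanics, the following sketch illustrates the grounding step of a RAG pipeline: retrieve the best-matching curated passages, then constrain the model to answer only from them, with source citations. The term-overlap scoring, passage format and source tags are simplified stand-ins (production systems typically use embedding-based retrieval); none of this is the prototype’s actual code.

```python
def retrieve(query: str, passages: list[dict], k: int = 3) -> list[dict]:
    """Rank curated passages by term overlap with the query. A real
    pipeline would use embedding-based retrieval; overlap keeps the
    sketch dependency-free."""
    terms = set(query.lower().split())
    return sorted(
        passages,
        key=lambda p: len(terms & set(p["text"].lower().split())),
        reverse=True,
    )[:k]

def grounded_prompt(query: str, passages: list[dict]) -> str:
    """Constrain the model to the retrieved excerpts, each tagged with
    its source so claims stay traceable."""
    evidence = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    return (
        "Answer using ONLY the excerpts below and cite sources in "
        "brackets. If the excerpts are insufficient, say so rather "
        "than guessing.\n\n"
        f"Excerpts:\n{evidence}\n\nQuestion: {query}"
    )

# Illustrative passages; the source tags and text are invented placeholders.
passages = [
    {"source": "WTO 2023", "text": "Tariff bindings limit unilateral tariff increases ..."},
    {"source": "WHO 2022", "text": "Front-of-pack labelling improves consumer understanding ..."},
]
query = "How do trade rules affect front-of-pack labelling?"
prompt = grounded_prompt(query, retrieve(query, passages))
print(prompt)  # this prompt would then go to a locally hosted model
```

Because every excerpt carries its source tag, an answer can be checked claim by claim against the original literature; this is what prioritising traceability means in practice.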
The prototype was developed through participatory design. In workshops, policy colleagues tested and critiqued the tool using realistic policy questions. Their feedback confirmed that trustworthy AI for policymaking is less a technical challenge than an institutional one, requiring expert-curated evidence, transparent reasoning and sustained dialogue with users.
To remain infrastructure-agnostic, we piloted the tool on two platforms, revealing familiar trade-offs: cheaper web-based systems offer easier access but weaker adaptability and data protection, while locally run systems come at higher cost and require closer cloud integration. Maintaining such a system is labour-intensive. Updating evidence, re-indexing content and monitoring bias demand ongoing stewardship, even if AI can help detect gaps. Inclusivity remains a challenge, as academic literature and model outputs still privilege English-language content. Ultimately, sustainable public-interest AI depends less on tools than on collective stewardship.
The lessons from our prototype are directly applicable to policymakers in Australia. A sovereign, public-interest AI could support policy drafting and evidence synthesis in sensitive areas, while maintaining institutional control over data, assumptions and outputs.
Why commercial AI tools fall short for policymaking
Public institutions cannot simply rent “intelligence” from providers whose incentives diverge from democratic values. The Australian Government actively encourages the use of AI for policymaking through initiatives like the Public Service AI Plan. Yet critical policy infrastructure is already embedded in foreign-owned cloud ecosystems, from whole-of-government Microsoft contracts to widespread AWS use across federal and state agencies. As AI is layered onto these platforms, policy analysis risks flowing through opaque models governed beyond Australian democratic control. Citizens cannot control how these systems are trained, whose interests they encode, or how they may shift under commercial pressure. The WEIRD (Western, educated, industrialised, rich and democratic) cultural biases of these models further risk overriding or excluding local perspectives. The question is: how do we design and govern AI in the public interest?
This question is sharpened by the release of the National AI Plan, which focuses on infrastructure, investment attraction and capability building. These are important steps, but sovereign AI could be strengthened by democratic deliberation, not just Australian data centres and procurement contracts. The government’s decision to abandon earlier commitments to mandatory guardrails in favour of “economic opportunity” – despite 75% of Australians wanting stronger regulation – makes public-interest AI a matter of national sovereignty.
So, what should governments do next?
The tool we built is only a prototype, but the lessons are immediate and actionable:
- Treat AI in core policy domains as public infrastructure, not as add-ons from commercial vendors.
- Anchor AI systems in expert-curated evidence, transparent reasoning and audit trails so policymakers and the public can see how claims are produced.
- Invest in AI literacy and build stewardship teams able to update evidence, monitor bias and maintain institutional control.
- Treat public-interest AI as a global project: collaboration with partners including in the Global South is essential to expand multilingual evidence bases, strengthen knowledge equity and avoid tools that default to WEIRD assumptions.
- Embed democratic deliberation into the AI development process, ensuring diverse Australian communities can shape and safeguard the legitimacy of public-interest AI.
AI is already entering the public sector. The question is whether governments let commercial providers dictate the terms, or whether they deliberately build sovereign AI that reflects democratic values and empowers policymakers to act in the public interest.
Raffaele F Ciriello is a Senior Lecturer in Business Information Systems at the University of Sydney Business School. His research focuses on compassionate digital innovation, particularly the ethical tensions in technology design, governance and use. He contributes frequently to academic and policy discourses related to digital sovereignty.
Anne-Marie Thow is a Professor of Public Policy and Health at the University of Sydney. Her applied policy research focuses on the interface between economic policy, food systems and public health. She leads global research collaborations and regularly provides expert advice for public policy.
Angelina Chen is a PhD candidate in Information Systems at the University of Sydney. Her research investigates how digital systems can nourish humanity’s desire for aesthetic experience and meaningful connection. She brings hands-on experience developing AI-assisted platforms for policy applications, and critically interrogates what it means to design technology for social good.
We gratefully acknowledge the contributions of Dr. Sabrina Chakori (University of Technology Sydney & University of Sydney) and Dr. Kelly Garton (University of Auckland) to the project.
Image credit: StudioProX/Adobe Stock