Bridging public trust in AI: why inclusive AI governance is essential public infrastructure
In the context of fragmented global governance and mixed outcomes for AI, governments should prioritise building public trust through proactive civic engagement and regulatory innovation.
Maxwell Scott

10 November 2025
Recent high-profile failures have significantly eroded public trust in artificial intelligence (AI), especially as the promised productivity gains of these systems are increasingly scrutinised. At the same time, the global regulatory environment for AI remains fragmented and complex, leaving the public sector with limited guidance and support.
According to OECD research, government entities are facing the following challenges as they implement AI:
- In defining their AI needs, organisations are struggling to translate their mission goals into technical requirements that anticipate AI risk.
- In developing AI governance programs to ensure safety, they face fragmented standards, unclear accountability and limited internal capacity.
- In deploying AI solutions, they need support in balancing public trust with operational consistency and the transparency required by law.
Aggressive sales tactics, the portrayal of AI as a panacea for every industry and a competitive race to innovate are undermining principled decision-making. To ensure safety and maintain public trust, governments must treat the adoption of this technology as a shared responsibility, grounded in empathy and civic engagement, so that AI can ultimately thrive in the communities it is meant to serve.
The perils of the “AI race” metaphor
The “AI race” has become a widespread and problematic notion in recent geopolitical discussions, reflected in initiatives such as the US government’s “Winning the Race: America’s AI Action Plan”. This framing suggests a clear finish line and divides participants into winners and losers, which misrepresents the true nature of modern AI innovation. Unlike the “space race”, which pursued specific, well-defined objectives, AI innovation is inherently iterative, diffuse and continuously evolving. Treating AI development as a race prioritises speed over quality and consideration, increasing the risk that AI systems cause harm to the communities they serve.
New York City provides a recent example of prioritising speed over quality. A government-commissioned chatbot – meant to reduce the administrative burden on contact centre operators – was launched without sufficient planning, testing or end user engagement. The chatbot dispensed incorrect legal and health advice and failed to support non-English speakers, ultimately triggering a public scandal. As a result, public confidence in the service declined and demand for human support increased, undermining the intended goals of the initiative.
Global fragmentation of AI governance
As AI innovation accelerates without a clear endpoint, global governance is becoming increasingly fragmented. The US, China and the EU are each advancing distinct regulatory and technological agendas.
The EU, building on its General Data Protection Regulation legacy, takes a risk-based approach through its AI Act and aims to set a global standard for safety, though its stringent requirements may struggle to keep pace with rapid technological change.
China’s state-led approach, rooted in “core socialist values”, combines centralised oversight with rapid industrial scaling through major firms such as Tencent, ByteDance and Alibaba. Despite US efforts to restrict Chinese progress via export controls, companies such as DeepSeek continue to innovate around these barriers.
Meanwhile, the US leads in frontier model development and private sector investment. However, its patchwork regulatory landscape, marked by sectoral gaps and state-level initiatives, such as California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, limits its ability to establish a unified vision for responsible AI.
The western-centric portrayal of AI as a binary contest between winners and losers fails to recognise the intricate cultural, social and economic factors influencing AI innovation across the globe. In response, governments throughout the Global South are asserting digital sovereignty and crafting governance frameworks that reflect local priorities and realities. For example, India and Brazil have introduced national AI strategies that champion pluralistic governance, focusing on sectors such as agriculture and healthcare to address regional needs.
Treating trust as public infrastructure
To move AI innovation from pilot projects to widespread practice, policymakers must rethink their approach and treat AI governance as they would any essential public infrastructure. Public trust is the foundational “bridge” that enables successful deployment in the public sector. Just as infrastructure built without community input or ongoing maintenance is prone to failure, AI systems developed in isolation risk collapse and, once trust is lost, it is difficult to restore.
Building public infrastructure requires robust engagement with the community to ensure solutions are fit for purpose and responsive to public concerns. Similarly, when people voice anxieties about AI, policymakers should not dismiss these worries, but instead approach them with empathy to ensure they are fostering trust in the AI systems we increasingly rely on.
Leveraging innovative engagement approaches
History offers valuable lessons for effective AI governance. The internet and the global positioning system (GPS) are prime examples of innovation born from global collaboration. These technologies emerged through a collective effort among people from diverse backgrounds, ensuring their benefits were widely shared.
The internet, initially a military project, evolved into a global communication network thanks to contributions from engineers, scientists and policymakers across continents. Its open architecture and decentralised design reflect the input of diverse voices, enabling it to become a platform for innovation, education and social connection.
Similarly, GPS, originally developed for military navigation, was transformed into a tool for global accessibility through international collaboration. By integrating insights from various fields, including geospatial science, engineering and user experience design, GPS became a cornerstone of modern life, supporting applications from transportation to disaster response.
To build a truly global ecosystem for AI, it is crucial to establish institutions that actively support participation from underserved regions and communities. This requires investment in digital connectivity, localised AI education and the creation of representative data sets that empower more people to innovate. Equally important is providing opportunities for individuals from all backgrounds and levels of digital literacy to safely explore AI applications, challenges and possibilities. This will ensure that new technologies reflect the lived realities of the communities they are meant to serve.
Tailored governance for higher-risk sectors
Effective AI governance requires frameworks that are carefully tailored to the specific risks and contexts in which AI is deployed. The performance of any AI system is fundamentally shaped by the data on which it is trained, making representative data essential for reliable outcomes.
For example, an AI model developed and trained for California residents may not perform effectively when applied in Cameroon, due to differences in local context and data. Similarly, an AI solution designed for taxi drivers may not be suitable for limousine services, highlighting the importance of industry-specific and culturally specific approaches.
To guide AI from promising pilot projects to trusted, large-scale deployments, a structured three-stage pipeline is essential:
- Piloting and defining needs. The first stage is about translating an organisation's mission into clear technical requirements and a realistic investment framework. This foundational step allows public sector bodies to define their specific use cases and needs upfront, empowering them to assess vendor claims and procure the right technology to solve their actual problems.
- Preparing the governance framework. The second stage involves building the structures necessary to ensure compliance and public legitimacy. This goes beyond a simple compliance checklist; it is about creating a governance framework that is specifically tailored to the context in which the AI will be used, addressing the unique needs and risks of the particular application.
- Practising ongoing AI safety. AI systems, like infrastructure, cannot be built and then ignored. This final, continuous stage is about maintenance. It requires actively measuring performance, mitigating risks over time and testing to ensure the model is not “drifting” from its original purpose, thereby ensuring the public can continue to use the service safely.
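For readers who want a sense of what the drift testing in the final stage can look like in practice, the short Python sketch below compares the data a deployed model currently sees against a baseline sample captured at launch, using the population stability index as a simple drift signal. The metric, the 0.2 alert threshold and the synthetic data are illustrative assumptions for this sketch, not recommendations drawn from the article.

import numpy as np


def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population stability index (PSI) between two samples of the same feature."""
    # Bin edges come from the baseline so both samples are compared on the same scale.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    baseline_counts, _ = np.histogram(baseline, bins=edges)
    recent_counts, _ = np.histogram(recent, bins=edges)
    # Convert counts to proportions, with a small floor to avoid division by zero.
    baseline_pct = np.clip(baseline_counts / baseline_counts.sum(), 1e-6, None)
    recent_pct = np.clip(recent_counts / recent_counts.sum(), 1e-6, None)
    return float(np.sum((recent_pct - baseline_pct) * np.log(recent_pct / baseline_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)  # data the model was validated on at launch
    recent = rng.normal(loc=0.4, scale=1.2, size=2_000)     # data the deployed model now sees
    psi = population_stability_index(baseline, recent)
    # Assumed rule of thumb: PSI above 0.2 suggests meaningful drift worth investigating.
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> stable")

In a real service the same idea would be applied to the model’s actual input features and outputs on a regular schedule, alongside the qualitative safety and performance reviews described above.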
Ultimately, a globally representative AI ecosystem is not just about redistribution – it is about redefining who gets to imagine, build and decide the future of intelligent systems. These are profound and unprecedented times. As we navigate the AI era, we must resist the temptation to frame progress as a zero-sum game or a metaphorical race to be decided by a select few.
Instead, we should embrace the diverse perspectives and talents of the Global South, whose contributions are vital to shaping a future that is equitable and sustainable. By fostering global inclusivity, we can ensure that AI becomes a tool for empowerment rather than division, unlocking its full potential to benefit all of humanity.
Maxwell Scott is a globally recognised leader on responsible AI and emerging technology governance. At Microsoft’s Office of Responsible AI, he led global teams in developing and deploying governance frameworks for sensitive AI technologies, having previously served as a technology policy advisor at the US Department of State.
APPI recently hosted Maxwell for a roundtable with senior NSW policymakers focused on the intersection of AI, governance and public sector delivery. This article provides an overview of Maxwell’s key insights.