Why Australia needs a healthier information ecosystem, not just fact-checks
Fact-checking alone cannot protect democratic debate. Australia’s misinformation policy should shift from policing individual falsehoods to ensuring digital platforms provide information environments that support informed public judgment.
Francesco Bailo, Rob Nicholls and Daniel Gozman

19 January 2026
In January 2025, Meta announced it was ending its fact-checking program in the US, joining a broader industry retreat from content moderation. Critics saw capitulation; supporters claimed victory for free speech. But this debate misses the broader point.
Fact-checking addresses one important aspect of a complex problem – verifying the accuracy of specific claims – but we must also focus on the consequences of users' cumulative exposure to content over time. The effect of information on citizens is contextual: it depends on what they have already seen, what they will see next, and how the pieces fit together. Policy should be designed to capture this contextual, organic quality of the information ecosystem.
Consider someone who wants to cast doubt on the fairness of an election. They need not spread outright falsehoods. Instead, they might flood the information environment with content that is technically accurate but selectively framed: isolated procedural irregularities, out-of-context statistics about mail-in ballots and ambiguous footage of election workers. While each item may pass a fact-check (even assuming fact-checkers can keep up with the volume and speed of content production), their cumulative effect can undermine public confidence in electoral institutions. Citizens may be left unsure whether an election was fair, even when no false information has circulated. The result is not misinformation in the traditional sense – it is the degradation of the conditions under which citizens can form reasonable judgments.
Generative AI will make producing such non-false but cumulatively confounding content far easier and more effective, posing regulatory challenges that current content-based moderation frameworks are not designed to address.
The problem intensifies during crises. When events unfold faster than verification can keep pace – a pandemic, a natural disaster, a contested election – 'epistemic gaps' emerge: periods when reliable knowledge is simply unavailable. These voids fill rapidly with speculation, rumour, and misleading content. Fact-checking cannot operate in real time because there is no 'truth' yet, and by the time corrections arrive, the damage is done.
What we should really be worried about, then, is not only our capacity to fact-check content but the degradation of the quality of our 'democratic publics' – the spaces where most public conversation now happens. We might call this degradation information disorder. Policy responses should therefore assess whether digital environments support informed public reasoning, not merely whether individual posts are accurate.
A new framework for policy
Australia's Code of Practice on Disinformation and Misinformation is currently under review, following the withdrawal of the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill in late 2024. The withdrawn bill illustrated the pitfalls of content-focused approaches. It was criticised from both sides: free speech advocates saw overreach, while others argued it was too weak and exempted mainstream media outlets that contribute to misinformation. The bill's reliance on distinguishing misinformation (accidental) from disinformation (deliberate) proved unworkable – intent is nearly impossible to verify as content gets reshared across platforms. It also failed to address the role of generative AI in producing and spreading harmful content. And its broad definitions risked capturing legitimate speech while missing the systemic problem entirely.
This is an opportunity to reorient policy around the quality of information environments and democratic publics. Drawing on recent scholarship, we propose that two ideas should anchor Australia's approach: epistemic rights and information disorder.
Epistemic rights, a concept developed by media scholar Hannu Nieminen, refer to people's ability to access the information they need to make informed decisions – and to have the capacity to navigate and make sense of what they encounter.
Information disorder, then, measures how difficult it is for users to exercise these rights. The more pronounced the disorder, the harder it becomes to form reasonable judgments. This framing shifts attention from policing individual falsehoods to assessing the systemic conditions that enable or constrain informed citizenship. It also offers regulators a way to evaluate platform performance beyond takedown statistics and compliance reports.
Why current approaches fall short
Current policy conflates two fundamentally different harms, leading to regulatory approaches that are effective in some contexts but inadequate in others. The first is individual harm: people influenced to adopt dangerous behaviours by specific content – unproven health treatments, financial scams, or dangerous products. This harm is direct, measurable, and amenable to content-level interventions. Platforms should be held responsible for reducing such content through moderation.
The second is collective harm: the cumulative degradation of information quality that erodes trust in institutions and undermines democratic deliberation. This harm is systemic and diffuse. No single platform controls public discourse. Users move across services with varying moderation standards. News media amplify narratives. Intermediaries redistribute content. Addressing collective harm requires measuring the health of the ecosystem, not just counting removed posts.
Measuring what matters: a persona-based monitoring system
To address collective harm, we propose a concrete policy innovation: a national monitoring system that measures information ecosystem quality in terms of users’ capacity to make informed decisions. Rather than counting removed content, this system would track what representative Australians actually encounter.
The approach would define 'personas of interest' reflecting different demographic profiles and vulnerabilities – an average Australian user, a female teenager, a man aged 30–45 without a university degree, an elderly person over 65. It would also define topics of interest, such as election campaigns, health information and financial products. Platforms would provide data on the content these personas encounter, both when actively seeking information and when stumbling upon it incidentally. This data would be made accessible to researchers, journalists and civil society in near-real time, enabling independent assessment of ecosystem health.
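To make the proposal concrete, the aggregation a regulator might run over such data can be sketched in a few lines. Everything below is illustrative: the record fields, the persona and topic labels, and the choice of source-diversity (Shannon entropy of sources encountered) as one crude proxy for ecosystem health are assumptions for the sake of the sketch, not part of any existing reporting scheme.

```python
from collections import Counter
from math import log

# Hypothetical exposure records: what a given persona encountered on a topic.
# Field names and values are illustrative assumptions, not a real schema.
records = [
    {"persona": "female_teenager", "topic": "health", "source": "news_outlet_a"},
    {"persona": "female_teenager", "topic": "health", "source": "news_outlet_a"},
    {"persona": "female_teenager", "topic": "health", "source": "influencer_b"},
    {"persona": "male_30_45", "topic": "elections", "source": "news_outlet_a"},
    {"persona": "male_30_45", "topic": "elections", "source": "forum_c"},
    {"persona": "male_30_45", "topic": "elections", "source": "news_outlet_d"},
]

def source_diversity(records, persona, topic):
    """Shannon entropy (bits) of the sources a persona saw on a topic.

    Low entropy means exposure concentrated in a few sources - one crude
    proxy, among many possible, for a narrow information diet.
    """
    counts = Counter(
        r["source"] for r in records
        if r["persona"] == persona and r["topic"] == topic
    )
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((n / total) * log(n / total, 2) for n in counts.values())

# Compare two personas: the second sees three distinct sources equally,
# so its diversity score is higher than the first's.
teen_health = source_diversity(records, "female_teenager", "health")
male_elections = source_diversity(records, "male_30_45", "elections")
```

The point of the sketch is the shape of the pipeline, not the metric: once platforms expose per-persona exposure data, independent parties can compute and compare whatever indicators of ecosystem health the review settles on.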
There are precedents for this kind of collaboration. The Social Science One partnership between academics and Facebook demonstrated that structured data sharing is possible. Australia's own Consumer Data Right mandates API-based access in banking, energy, and telecommunications to promote transparency and competition. Extending similar principles to the information environment would represent a significant but achievable policy innovation.
Beyond the false choice
Australia has an opportunity to move beyond a debate that has become politically gridlocked. Efforts to combat misinformation are routinely characterised as partisan operations, making meaningful reform difficult. The question is not whether platforms should remove more or less content. It is whether our information environment allows citizens to exercise their epistemic rights – to access what they need to make informed decisions, and to recognise when certainty is not available.
The 2025 review of the Code of Practice should adopt ecosystem-wide monitoring and require platforms to contribute to shared measurement infrastructure. In practice, this means platforms would provide anonymised data on what content representative user profiles actually encounter on key topics (elections, health, financial products, etc.), enabling independent researchers, policymakers and journalists to assess what information citizens are routinely exposed to and whether they can form reasonable judgments. Compliance would be assessed not by counting removed posts, but by whether platforms participate transparently in this monitoring system and respond to identified deficiencies.
Protecting the conditions for informed citizenship is not a constraint on free expression. It is its foundation.
The authors acknowledge the contribution of Professor Terry Flew to this article.
Dr Francesco Bailo is a Senior Lecturer in Data Analytics in the Social Sciences at the University of Sydney, where he also serves as Deputy Director of the Centre for AI, Trust and Governance and Director of the Computational Social Science Lab.
Dr Rob Nicholls is a senior research associate in media and communications at the University of Sydney.
Dr Daniel Gozman is an Associate Professor at the University of Sydney Business School, where he leads research and teaching at the nexus of emerging technology-led digital transformation, policy and governance.
Image credit: Canva