Making AI safe, responsible and productive for Australia

A balanced approach – acting on both the opportunities and potential harms of AI – is the best way to harness this technology’s productivity gains while mitigating the risks to people and our economy.

12 August 2025

As Australia prepares for the Commonwealth Government’s Economic Reform Roundtable, the Productivity Commission’s interim report on harnessing data and digital technology arrives with the right ambition – but needs greater urgency and realism.

While the report outlines a broad reform agenda aimed at boosting productivity through digital transformation, more decisive action is required on AI, data access and privacy regulation.

At a time when Treasurer Jim Chalmers rightly calls for a “sensible middle path”, Australia can adopt a more robust approach – one that harnesses the productivity gains of AI while prioritising safety and accountability, acting on both opportunity and harm.

The importance of mandatory guardrails

In the context of rapidly evolving AI capabilities and increasing deployment across critical domains, delaying safeguards could expose individuals and institutions to significant harm. The Commission’s recommendation to pause the implementation of mandatory guardrails for high-risk AI until gap analyses are complete falls short of what is needed now. While gap analyses are important, they should inform and refine guardrails, not postpone them. Proactive regulation is essential to ensure responsible innovation and public trust.

Frontier models – those with emergent capabilities, autonomy and global reach – pose systemic risks to markets, institutions and public safety. These models are already being deployed in areas such as finance, education and healthcare. The longer we delay, the more we expose Australians to risks we do not yet understand and cannot yet control.

The idea that existing laws can be stretched to cover these risks is optimistic. These models challenge legacy frameworks in ways that cannot be patched with minor amendments:

  • Emergent behaviours of AI such as deception, tool use and autonomous planning are not accounted for in consumer law or medical device regulation, for instance.
  • Given that many advanced AI systems operate as “black boxes”, it is often unclear how they arrive at decisions. This lack of transparency and difficulty in understanding their internal logic makes it nearly impossible to audit or explain their outputs – which in turn undermines accountability when things go wrong.
  • Autonomous decision-making complicates liability and enforcement, especially in high-stakes domains.
  • Dual-use risks, from misinformation to bioengineering, are not addressed by current cybersecurity laws.
  • Systemic concentration in a handful of firms creates infrastructure monopolies that competition law is ill-equipped to manage.

Pausing guardrails sends the wrong signal: that Australia is content to be a regulation-taker, not a leader. Worse, it undermines our ability to develop sovereign AI capability.

If we treat AI like we treated social media – as a benign innovation until proven harmful – we will once again find ourselves regulating in hindsight. This should be a turning point, not another missed opportunity.

Balancing productivity, safety and equity

We have been talking about AI safety for years. Meanwhile, models are evolving faster than our governance frameworks. Without clear safety standards and a dedicated AI Safety Institute, local developers face uncertainty, while global firms continue to shape our digital landscape unchecked.

Working across government and industry, I have seen how regulatory ambiguity stalls innovation – not due to unwillingness, but uncertainty around compliance.

The Commission’s projections of AI-driven productivity gains – up to 4.3 per cent labour productivity growth and $116 billion in GDP – are striking. But they are speculative, and rest on assumptions about infrastructure, skills and adoption that Australia currently lacks. More concerning is that they ignore the unmeasured costs of AI misuse, regulatory failure and social disruption.

But even if these gains materialise, who actually benefits? The report prioritises productivity over equity and responsible use, yet the focus should be on interrogating the distribution of those gains.

With most frontier AI models developed and hosted offshore, and with Australia’s domestic capability lagging, it is likely that a significant share of profits will flow to foreign tech giants, not Australian workers, SMEs or creators. This raises the question: if the economic upside of AI is so promising, why are we still outsourcing the infrastructure, models and profits?

More focus is needed on the risks of AI to job displacement, inequality and systemic disruption. If AI is to be the engine of productivity, we need more than projections. We need policy, infrastructure and institutions that can steer it safely and inclusively.

Protecting the value and sovereignty of data

The report’s cautious stance on introducing a text and data mining (TDM) exception is understandable – but the risks of such an exception are significant. It suggests that reforms are needed to “facilitate innovation” and “keep pace with AI”, while downplaying their potential to undermine Australia’s copyright system.

Weakening copyright under the guise of AI progress is risky. Once we start carving out exceptions, we risk normalising the idea that creators’ work can be freely extracted and monetised, often by offshore firms. The global tech industry has already shown a willingness to train models on copyrighted content without attribution, compensation or control.

A narrowly scoped TDM exception for non-commercial research may be defensible. But any broader move must be accompanied by robust licensing frameworks, enforcement mechanisms and protections for creators. Otherwise, Australia becomes a data mine for offshore AI – a concern that echoes recent proposals like Scott Farquhar’s call for “digital embassies”, whereby foreign countries could store and process data on Australian soil under their own legal regimes.

While such ideas are framed as economic opportunities, they risk outsourcing our digital sovereignty and reducing Australia to a compliant host for global tech interests.

When combined with efforts to weaken copyright protections and delay AI safety regulation, these proposals signal a broader trend. We are outsourcing our future to firms we cannot regulate, and trading long-term capability, cultural value and national interest for short-term infrastructure rents and regulatory convenience.

Consumer rights and privacy protections

We need a regulatory body with teeth, not just guidance – one with the power to audit, enforce and publish sectoral benchmarks. Otherwise, even well-designed rules become unenforceable principles.

The Commission’s proposal for tiered data access pathways, industry-led codes and standardised transfers is well-intentioned but structurally weak. Tiered access pathways refer to different levels of data availability depending on the sensitivity or sector – for example, basic consumer data versus health records. Industry-led codes are voluntary guidelines developed by businesses to govern how data is shared, while standardised transfers aim to ensure that data can move easily between systems using common technical formats.

Voluntary codes lack enforcement. Australia’s experience with the Consumer Data Right (CDR), a federal initiative aimed at giving consumers greater control over their data in sectors such as banking, is a case in point. Despite years of effort, CDR adoption remains low, bogged down by technical complexity, compliance costs and unclear incentives. The banking sector alone has spent over $1.5 billion on implementation, with minimal consumer uptake or innovation.

To avoid repeating the mistakes of CDR, a National Data Access Framework is urgently needed, with mandatory standards, sector accelerators and public-private intermediaries. Anything less risks perpetuating the same fragmentation and inertia.

The dual-track compliance model for privacy regulation – offering an outcomes-based pathway alongside prescriptive rules – has potential. But without enforcement, it risks under-compliance. The rejection of a GDPR-style “right to erasure” is another example of overcautiousness. Implementation is indeed complex, but the alternative – leaving individuals with no meaningful control over their data – is worse.

Mandating digital financial reporting for disclosing entities – which refers to publicly listed companies and other organisations that are legally required to disclose financial information – is a step forward. But limiting the scope to listed companies ignores the broader productivity gains available across superannuation, sustainability reporting and large proprietary firms.

The report acknowledges the benefits – improved transparency, reduced costs, better benchmarking – so a comprehensive rollout should still be on the table. Limiting the mandate to listed companies is a missed chance to modernise financial infrastructure and enable AI-driven insight.

A better approach to data and digital tech?

The Productivity Commission’s interim report is a valuable contribution, but it highlights Australia’s reluctance to confront the hard choices of digital governance. To act on these recommendations, the Commonwealth Government and the Productivity Commission should commit to a final report that includes concrete steps for implementation following the Economic Reform Roundtable.

This could include establishing a national AI Safety Institute to guide responsible development, mandating digital financial reporting across key sectors to unlock productivity gains and creating a National Data Access Framework with enforceable standards.

Additionally, the government should prioritise a whole-of-government AI strategy that balances innovation with public interest, and ensures that copyright, privacy and competition laws are modernised to reflect the realities of AI deployment.

The government must also invest in sovereign AI infrastructure and talent development – including funding for domestic model training, national compute capacity and AI research hubs.

Mandatory transparency and accountability mechanisms should be introduced for high-risk AI systems. This includes requiring public documentation of model capabilities, risk assessments and independent audits for systems deployed in sensitive domains such as healthcare, finance and education.

These actions would help Australia pursue productivity while safeguarding its digital sovereignty and societal wellbeing.


Dr Alex Antic is the Faculty Head of AI Strategy and an Adjunct Professor at UNSW Canberra. He is a recognised expert in responsible AI, data governance and public sector innovation. Alex advises government, industry, academia and start-ups on the safe and strategic development and deployment of emerging technologies.

Image credit: Be easy
