Supercharging our AI Safety Institute now could make all the difference

Parliament House, Canberra, Australia

A modest increase in funding for Australia’s AI Safety Institute could position the country as a global leader in a fast-growing industry, while managing risks and unlocking major economic gains.

There are not many opportunities in the federal budget to make a small, targeted investment that can shape an entire emerging industry. Strengthening the Australian Artificial Intelligence Safety Institute (AISI) is one of these rare opportunities. For no more than $100 million a year – a rounding error against a $735 billion budget – Australia could position itself as a world leader in one of the defining industries of this century. Few budget decisions combine such a strong business case with such clear economic benefit.

Artificial intelligence (AI) is already beginning to reshape economies, labour markets and the information environment. Its ultimate trajectory remains uncertain, but the scale of potential change is enormous. Decisions about how AI is governed will shape where that value is created and who captures it.

Recent announcements about Anthropic’s Mythos provide yet another warning of the scale of the risks presented by AI. This technology can undermine cyber security, supercharge scams and misinformation, and harm the mental health and development of children. The economic cost of these risks is vast.

Australia’s AISI will play a central role in how we respond to this emerging technology – both in managing the risks and capturing the opportunities. Announced in November 2025 and currently being established, the AISI will be a group of AI experts who work with the National AI Centre and international partners to help our government keep pace with the rapidly changing AI landscape: monitoring risks, advising policymakers and regulators, and coordinating with other countries.

The problem is that the AISI is underfunded. For such an important role in a potentially century-defining field, it will receive only $30 million over four years.

The benefits of a well-resourced AISI are enormous, and the UK’s equivalent demonstrates this. It receives 16 times more funding than our AISI. As a result, it is granted early access to new AI models to test for risks. It had the opportunity to test Anthropic’s new model, Mythos, which poses a major threat to the cyber security of Australian and international companies. Australia had no such opportunity, and it is unclear whether an underfunded AISI would be in a position to contribute meaningfully.

In conversations with frontier AI developers, I have seen how much respect they have for the UK AI Security Institute. It plays a valuable role in the AI ecosystem and it gives the UK enormous credibility and business opportunities in AI.

A well-resourced AISI would support Australia’s effort to capture the productivity benefits of AI. Some estimates suggest AI could contribute up to $115 billion a year to our economy by 2030.

In the absence of world-class expertise, heavy-handed regulation could limit AI’s potential productivity benefits in Australia. But if the AISI can provide a credible source of technical expertise to government, it will also play an important role in ensuring policy and regulation are effective and proportionate. Without this expertise, policymakers and regulators may err excessively on the side of caution and inadvertently crush a huge industry.

There is a clear link between capability and economic opportunity. High-value activities such as testing, validation and compliance tend to cluster around jurisdictions with strong institutions and credible oversight. With the right investment, Australia can compete for that activity and capture a greater share of the value created by AI.

A commitment of $100 million a year could position Australia’s AISI alongside the UK AI Security Institute as a world leader. It would allow Australia to build a concentrated centre of expertise, attract leading researchers and engineers, and engage directly with global AI developers on testing and standards.

There is a limited window for our AISI to establish a globally relevant role. Other countries are investing in their own AI safety and governance capabilities and building relationships with industry. Reputation in this space is built early and reinforced over time.

Global standards are being shaped now. Making a bold commitment to resourcing our AISI at a globally competitive level would mean Australia can help to shape those standards rather than adapting to them after the fact.

For a relatively modest annual investment, Australia can strengthen an institution that sits at the centre of both managing risk and capturing opportunity in AI.

In a budget defined by difficult choices, this is a practical and forward-looking investment. The business case is clear.