Today the Albanese government released its plans to ensure the safe and responsible use of AI across various sectors in Australia. The Australian Tech Council has welcomed the long-awaited report, but has some suggestions on how to work with existing regulations while ensuring innovation and growth.
The government’s approach is detailed in its interim response to the consultation on “Safe and Responsible AI in Australia”, which highlighted the necessity of balancing innovation with risk management in AI applications.
The consultation revealed that while there is recognition of AI’s potential to enhance wellbeing and economic growth, Australians are seeking stronger protections to address the risks associated with this technology. As a result, the government’s response primarily targets AI use in high-risk settings, where potential harms could be difficult to reverse.
However, the government simultaneously aims to allow the vast majority of low-risk AI applications to continue thriving with minimal restrictions.
“Many applications of AI do not present risks that require a regulatory response, and there is a need to ensure the use of low-risk AI is largely unimpeded. Our current regulatory framework does not sufficiently address risks presented by AI, particularly the high-risk applications of AI in legitimate settings, and frontier models,” the report reads.
The government is considering implementing mandatory guardrails for AI development and deployment in high-risk areas, possibly through amendments to existing laws or the creation of new, AI-specific legislation.
This consideration comes with a series of immediate actions, such as collaborating with the industry to develop a voluntary AI Safety Standard, exploring options for voluntary labelling and watermarking of AI-generated materials, and establishing an expert advisory group to assist in developing options for mandatory guardrails.
Key aspects of the proposed mandatory guardrails include ensuring the safety of AI products through rigorous testing, enhancing transparency in AI model design and data usage, and establishing clear accountability measures for developers and deployers of AI systems. These measures align with the global trend of countries like the EU, the US, and Canada, which are also actively working to address the challenges posed by AI technologies.
Minister for Industry and Science Ed Husic emphasised the Australian public’s desire for stronger safeguards in AI, stating, “Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled”.
He also highlighted the government’s commitment to building trust and transparency in AI, integrating safe and responsible practices from the early stages of AI design, development, and deployment.
The Australian Tech Council weighs in on the government’s AI plans
Kate Pounder, CEO of the Australian Tech Council, welcomed the government’s risk-based, proportionate approach, recognising the importance of differentiating between high and low-risk AI applications.
“We support the proposal to develop voluntary standards and the voluntary code and would be pleased to work with the government and other stakeholders, like the National AI Centre, on these,” Pounder told SmartCompany on Wednesday.
Pounder also emphasised the fact that rigorous regulations in sectors like healthcare and automotive — both used as examples in the government’s response — already exist, suggesting new laws might not be necessary if current systems are effective.
“When you’re considering the right testing and rules for medical products, you really want to be sure that health experts are playing a role in that and perhaps not just setting it through a general standard,” Pounder said.
The report does mention the potential to lean on existing regulation where appropriate, including in finance and health. Pounder agreed this approach would be a positive one.
“In some of those areas you’ve got decades of experience and regulators overseeing the introduction of a whole range of new technology… they’ve been dealing with these kinds of questions for a long time and they’re really good at it.
“So I think letting them continue to do their jobs is probably going to be the best way to have safe medical products even when they are using AI.”
The Tech Council’s CEO also stressed the need for industry expertise in the government’s proposed expert advisory group, to provide practical insights for AI regulation.
Balancing AI innovation and regulation
On the surface, the Australian government’s approach to AI regulation aims to strike a careful balance between fostering innovation and ensuring public safety.
In contrast to the European Union’s pending AI Act, which includes prohibitions on some high-risk uses, the Australian approach is more aligned with the lighter regulatory touch seen in the United States and the United Kingdom.
This balance could prove crucial to maintaining Australia’s competitive edge in the global AI landscape while addressing the potential risks associated with AI technologies.
Pounder echoed this sentiment, highlighting the importance of a risk-based and proportionate approach to AI regulation.
“We think it’s pretty sensible that using AI in accounting software is treated differently from using AI in a medical device or a car,” she said.
However, Pounder also emphasised the opportunities that AI presents for economic growth and innovation.
Drawing parallels with international approaches, Pounder pointed out that countries like the UK and the US have included more comprehensive plans to encourage AI innovation than Australia has.
“The US presidential Executive Order, for example, had measures that were devoted to spurring innovation as well as to managing the risks of AI,” Pounder said.
“They’ve said in the paper they’re considering an AI investment plan. We’d really encourage them to do that because we think there’s a huge opportunity here for Australia to build new companies, to come up with some really exciting new breakthrough products, to create new jobs.
“And we don’t want to fall behind in that global race.”