Welcome back to Neural Notes, a weekly column where I look at some of the most interesting AI news of the week. In this edition: Industry leaders react to the latest news regarding the government’s proposed AI guardrails.
Science and Industry Minister Ed Husic made the announcement on Thursday, outlining new steps to establish guardrails for the safe and responsible development of AI.
In response to concerns about the misuse of biometric data, social scoring and the criminal exploitation of AI, the government has released voluntary AI safety standards that include measures such as labelling and watermarking AI-generated content.
Minister Husic emphasised that while the standards are voluntary for now, the government is exploring legislation to mandate these safeguards in high-risk areas like healthcare, law enforcement and recruitment.
He also highlighted the need for human oversight of these systems, and for giving individuals affected by AI decisions an avenue to challenge them.
The consultation period for these proposed guardrails is now open, with the government seeking feedback on how to define high-risk AI and what specific regulations should be enforced.
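Before we get to the industry reactions, a quick aside on what "labelling" AI-generated content can actually look like in practice. The standard doesn't prescribe a particular mechanism, so treat this as a minimal sketch, assuming a Python image pipeline using Pillow; the metadata keys here are my own illustrative inventions, not anything from the standard:

```python
# Illustrative only: attach a machine-readable "AI-generated" label to a PNG.
# The metadata keys below are hypothetical, not taken from any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(in_path: str, out_path: str, model_name: str) -> None:
    """Re-save an image with a disclosure label embedded in its PNG text chunks."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical disclosure key
    metadata.add_text("ai_model", model_name)   # which model produced the image
    image.save(out_path, pnginfo=metadata)

def is_labelled_ai_generated(path: str) -> bool:
    """Check whether an image carries the (hypothetical) disclosure label."""
    return Image.open(path).info.get("ai_generated") == "true"
```

A label like this is trivial to strip, which is why labelling tends to be discussed alongside watermarking, where the signal is embedded in the content itself rather than sitting beside it in metadata.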
And some of that feedback is already public, with industry leaders weighing in on the proposed guardrails.
The Tech Council of Australia (TCA)
Unsurprisingly, the TCA weighed in on the news. Here is what its new CEO, Damian Kassabgi, had to say:
“The Tech Council supports the release of the voluntary standard, as well as the proposals paper for mandatory safeguards for high-risk AI systems.
We see this as a crucial step in building an appropriately balanced regulatory strategy for AI governance to support confidence and responsible adoption in AI technologies. We have been encouraged by the consultative approach the Federal Government has taken in developing a range of policy options with consideration across the AI lifecycle to date.
We support the consistency between the standards, which will be helpful for industry adoption.
It is crucial that our regulatory approach in Australia is internationally interoperable and sufficiently flexible to accommodate new and emerging techniques that enable AI governance and oversight.
We are supportive of government options that leverage the expertise of existing, well-established regulators and a clear regulatory regime that is well-coordinated and aligned.
It is important that any AI regulation in Australia does not stifle the productivity and economic opportunity AI presents, which could help create up to 200,000 jobs by 2030.
We strongly encourage the Australian government to continue investing in Australian capability in AI as a nationally significant critical technology, and look forward to engaging with the Government on the next phase of consultations.”
KomplyAi
KomplyAi operates in the AI governance, risk and compliance space. Its founder and CEO, Kristen Migliorini, had this to say to SmartCompany:
“As a seminal Australian start-up founder in the emerging AI Governance, Risk, and Compliance market, I’m thrilled to see the government’s announcement of Voluntary AI Safety Standards and proposed mandatory AI guardrails for high-risk settings. This marks a significant step towards responsible AI development in our country and closely aligns with international counterparts.
While these ‘Voluntary AI Safety Standards’ may seem straightforward and practical on paper, our experience working in the trenches with customers shows that implementing them will be no small feat for businesses.
Our observation of the current Australian market, and certainly this is reflected in the report, is that there is a lot to be done in AI risk and compliance. Some enterprises have been resistant to those changes because there is no ‘stick’.
Now there is a ‘stick’. We’ve been working tirelessly to make standards like these, and global regulatory requirements for AI, accessible through consumable, user-friendly technology. It’s not just about compliance for us; it’s about democratising AI innovation in this country.
The introduction of concepts in these Standards of end-user disclosures, real-time AI model monitoring capabilities, and social impact assessments, to name a few, are areas we have long been mapping given they closely align with international best practices and laws such as in Europe.
But they represent what I would say is a substantial shift in how we approach AI in Australia. In our experience, there is still a long way to go to upskill employees in this new territory of AI testing and validation. For example, static and manual processes of assessment, review and GRC for AI don’t align well with these new Standards.
As someone who started building supporting technology when there was no market in AI GRC, and phrases such as ‘ethical AI’ were still being used, mostly falling on deaf ears, it’s incredibly rewarding to see the Government pushing for the importance of safety in AI.
For businesses working with AI in Australia, these Standards, and the upcoming changes to laws, mark the beginning of a transformative journey. The landscape is evolving rapidly, and I’m excited to be part of this transformation, helping organisations navigate these new waters and contribute to a safer AI ecosystem in Australia.”
KPMG
KPMG has given the thumbs up to the proposed standards, with its chief digital officer, John Munnelly, saying:
“We welcome the Australian government’s new voluntary AI Safety Standard as an important step in building safe and ethical AI practices. The Standard will strengthen protections that promote safety in the deployment and use of AI whilst also promoting innovation. It is encouraging to see that the Standard is in alignment with international regulation and best practice.
We appreciate the practical nature of the Standard which is grounded in examples of how to apply them to AI use cases. This is something we have implemented with KPMG’s Trusted AI approach, one of three guiding principles we use to take a human-centred approach.
We also welcome the consultation for mandatory guardrails.
KPMG Australia is committed to evaluating how we will implement the Standard and the extent to which our existing systems and processes are already aligned.
We believe that standards are a critical step in Australia’s progress as a high-tech, innovation economy that will see safe, reliable AI technology developed to benefit Australians and exported to global markets.”
Pega
AI decisioning and workflow automation platform Pega still has some questions around loopholes, as well as ethical AI checks. Here’s its senior director of financial services and insurance, Jonathan Tanner:
“The key for me is to have in-built ethical AI checks, as well as humans in the loop, to ensure that any issues can be caught before harm is done. Government can take a role in this by ensuring there is a framework that makes sure there is governance in place as well as proper classification and understanding of the risks of particular use cases, and that there are penalties if organisations don’t follow the guidelines laid down.
I also think the piecemeal implementation of legislation by different governments has a risk that loopholes will be taken advantage of. This technology has no respect for different jurisdictions, so a combined effort needs to be employed to ensure consistency and to prevent bad actors from taking advantage of low regulation regimes.”
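Tanner’s “humans in the loop” point is the kind of thing that’s easier to see in code than in prose. Here’s a minimal sketch of the pattern he describes: an automated decision only goes through when the use case is low-risk and the model is confident, and everything else is escalated to a person. The risk categories and threshold are illustrative assumptions on my part, not anything from Pega or the proposals paper:

```python
# Illustrative human-in-the-loop gate; the risk tiers and threshold are
# assumptions, not drawn from the government's proposals paper.
from dataclasses import dataclass

HIGH_RISK_USE_CASES = {"recruitment", "healthcare", "law_enforcement"}
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Decision:
    use_case: str       # e.g. "recruitment"
    outcome: str        # the model's proposed decision
    confidence: float   # model-reported confidence, 0.0 to 1.0

def route(decision: Decision) -> str:
    """Return 'auto' only when it is safe to act without a person in the loop."""
    if decision.use_case in HIGH_RISK_USE_CASES:
        return "human_review"   # high-risk settings always get human oversight
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"   # low confidence: escalate rather than act
    return "auto"

# Even a high-confidence hiring decision is escalated, because recruitment
# is classified high-risk in this sketch.
print(route(Decision("recruitment", "reject", 0.97)))  # -> human_review
```

The point of a gate like this isn’t sophistication; it’s that the escalation rule is explicit, auditable and sits outside the model, which is exactly where a regulator would want to look.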
Other AI news this week
- Earlier in the week Nvidia lost a casual US$279 billion in market value overnight. This historic nosedive saw the stock price drop by 9.5% – but why did it happen? We have the full story here.
- Canva has also been copping flak from some long-time users after they found out the price of their Teams subscriptions would be jumping by up to 300%. What does that have to do with AI? Well, the Aussie unicorn pointed to its now-robust gen AI offerings as one of the value adds to the Teams tier, which hadn’t seen a price rise since being introduced in 2020.
- “Bill Gates has a good feeling about AI” — possibly my favourite headline of the week.
- Aussie startup Harrison.ai has a brand new model. And that’s all I’ll say for now — you’ll have to hear all the juicy details in next week’s Neural Notes!
- Anthropic now also has an enterprise plan for its Claude platform in an effort to eat OpenAI’s lunch.
- Apple is about to launch a ChatGPT-like version of Siri.