Welcome back to Neural Notes, a weekly column where I look at some of the most interesting artificial intelligence news of the week. In this edition: the government cracks down on AI use in all federal departments.
From September 1, all federal government departments and agencies will need to designate a role or individual responsible for AI use.
This push is being led by the Digital Transformation Agency (DTA), which has been rolled into the Department of Finance. The policy is said to be part of a broader strategy to manage the risks associated with AI technology and maintain transparency in public sector applications.
The policy was launched jointly by Minister for Finance Katy Gallagher and Minister for Industry and Science Ed Husic last week.
Under the new policy, each agency will have 90 days to assign an ‘accountable official’ who will be responsible for overseeing AI activities. They will also be the primary point of contact for AI-related issues, including notifying the DTA of any new high-risk AI use cases.
Agencies will also be required to publish a public statement within six months of the policy’s implementation, outlining their approach to AI adoption and use.
This statement must be reviewed and updated annually, providing ongoing transparency about AI’s role in government operations.
The policy also encourages a coordinated approach to AI across the government. Accountable officials will be expected to participate in cross-government forums and processes, ensuring that AI use is consistent and that best practices are shared across departments.
Why transparency is needed for AI use in government
This policy is a response to the growing need for a clear line of responsibility as AI becomes more integrated into government functions.
And considering all of the talk about AI from the federal government — including the release of its interim response to the safe and responsible AI consultation — it needed to get self-reflective at some point.
And the integration has been happening with seemingly little public oversight for a while now. This includes the government’s trial of Microsoft’s Copilot. Announced towards the end of 2023, the six-month trial involved 7,500 public servants using the tool, which integrates AI into existing Microsoft Office products.
In a hearing for the Select Committee on Adopting AI last week – which included a thorough grilling of Google reps – the DTA revealed the trial came at a discounted price and has been extended.
The trial was announced mere weeks after Microsoft pledged $5 billion to Australia’s tech future, including AI, cybersecurity and cloud capabilities.
This focus on AI accountability and governance also follows a November 2023 incident involving KPMG. The firm lodged a formal complaint after AI-generated material was used in a Senate inquiry to implicate it in fabricated scandals.
A submission to the inquiry, which was examining the Australian consultancy industry, included AI-generated case studies from Google Bard that falsely accused KPMG and other consultancy firms of misconduct.
The material was later found to be factually inaccurate, leading to a public apology from those involved. It also, unsurprisingly, raised serious concerns about the unchecked use of AI in official processes.
Who’s to blame, and should they be?
What will be interesting is what, if anything, this new transparency mandate actually achieves. If government departments do make mistakes with AI, will they truly be held accountable?
I also wonder about those designated to be responsible for the use of AI within departments. Who will be chosen and why? What experience or qualifications will they have to shoulder the responsibility?
We already know that the Australian workforce is in dire need of AI training. We also know that an alarming percentage of workers are using generative AI on the job without telling their employers.
Will these public servants be upskilled and supported as they take on these new responsibilities? Or will their heads simply be the ones on the chopping block when someone in their department screws something up?
Other AI news this week
Remember Clearview AI — the facial recognition app created by an Australian, which built its database from billions of images scraped, illegally, from social media platforms?
The one that was used by law enforcement, including the Australian Federal Police (AFP)?
Fun fact: I asked the AFP if it was using it back in 2020 and it denied it. An FOI request I lodged about its use of the app was also rejected. As it turns out, it was totally using it.
And as my colleague Cam Wilson discovered earlier this year, it was still using it after it said it had cut ties with the company.
But I digress.
The Office of the Australian Information Commissioner has announced that it is dropping its case against the company.
The privacy regulator originally took action against Clearview AI back in 2021, alleging that it broke Australian privacy laws by scraping billions of photographs from sites such as Facebook to train its platform.
It told the company to stop collecting images and to delete the ones it had, but there was no evidence that Clearview AI ever complied.
What an astounding blow to privacy.
In other news, realestate.com.au is getting in on the AI game to help people renovate their homes, Ben Lee has some thoughts on the use of AI in music production, and CBA gets an IT support chatbot.