Welcome back to Neural Notes, a weekly column where I look at some of the most interesting AI news of the week. In this edition, I chat with Aurélie Jacquet, director of Ethical AI Consulting and chair of Standards Australia’s AI Committee. We touch on the parts of AI implementation that aren’t talked about enough, including interoperability between systems, global standards and not leading with FOMO.
Over the past two years, the AI conversation has been constant. For those of us who are chronically across it, it can feel like the same points are being made again and again about its potential, concerns, responsible use, and regulatory frameworks.
It is rare to hear something new.
So it was refreshing to speak to a long-time expert such as Jacquet, who is actively involved in shaping AI standards both in Australia and globally.
I was curious to know what, amid the loud and constant AI conversation, is often missed.
Interoperability: The overlooked AI challenge
In the generative space, the last 18 months have seen an AI arms race from startups and big tech players alike. Everyone is clamouring for market share.
According to Jacquet, one of the lesser-discussed challenges with AI is interoperability. So much of the conversation and preparation centres on “which” AI system to choose, when the more likely future scenario is users and businesses relying on a variety of AI systems.
Jacquet suggests that as AI systems become more complex and widespread, organisations need to think beyond individual use cases and consider how these systems will function together.
She points to capital markets, where different arbitrageurs use algorithms to trade – and how one algorithm can push another, resulting in a domino effect.
“How do you manage the interaction between the different algorithms as they come out and look at composite AI systems?
“Everyone’s always thinking about having one model or one use case. They’re not thinking about how the use cases are playing together. So that’s the fascinating part, but it’s all a learning process. As we progress towards AI, I think that’s an important piece to manage.”
And as more organisations adopt AI, the interaction between different algorithms and systems could lead to unforeseen risks.
“I think we need to focus on the risks that are [happening] now in order to be prepared for whatever comes later.”
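To make that domino effect concrete, here is a minimal, purely illustrative Python sketch – not drawn from Jacquet’s work or any real trading system – in which two hypothetical momentum-style algorithms each sell once the price falls past their threshold, and each sale moves the price enough to trigger the next:

```python
# A toy, hypothetical simulation: two momentum-style trading algorithms
# watch the same price. Each sells once its threshold is breached, and
# each sale itself moves the price, potentially triggering the other.

def run_simulation(start_price: float = 100.0, shock: float = -1.5) -> None:
    price = start_price + shock  # a small external dip starts the chain
    algos = [
        {"name": "algo_a", "threshold": 99.0, "impact": -2.0, "sold": False},
        {"name": "algo_b", "threshold": 97.5, "impact": -3.0, "sold": False},
    ]
    triggered = True
    while triggered:  # keep looping while any algorithm reacts
        triggered = False
        for algo in algos:
            if not algo["sold"] and price < algo["threshold"]:
                algo["sold"] = True
                price += algo["impact"]  # the sale itself moves the market
                print(f"{algo['name']} sells, pushing price to {price:.2f}")
                triggered = True
    print(f"Final price: {price:.2f} (started at {start_price:.2f})")

if __name__ == "__main__":
    run_simulation()
```

In this toy setup, a 1.5-point external dip ends up as a 6.5-point fall, because each algorithm’s reaction becomes the next one’s trigger – exactly the kind of composite behaviour Jacquet argues organisations should be thinking about now, not later.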
The importance of putting the challenges first
Businesses are constantly being told by newly minted AI evangelists that if they don’t adopt AI, they’re going to be left behind.
And that may very well be true as AI becomes more tightly woven into the fabric of our daily lives.
But this FOMO-inducing thinking also encourages businesses to dive head-first into generative AI in particular, without taking the time to consider how – and whether – they should be using the technology.
From Jacquet’s perspective, it’s important to put business needs first.
“When I talk to clients about whether to use technologies such as AI, [I tell them] to think about the way you deliver services. Don’t think about the technology first. Think about your challenges and how and if that technology is best fitted to solve that challenge.”
It’s practical advice you don’t hear enough of amid the AI evangelism in the workplace.
But according to Jacquet, this mindset could be applied to any technology, not just AI.
“If you run for AI or any technology – like if you put a lot of computers in your organisation many years ago but didn’t know how to use them – that’s going to be a catastrophe,” Jacquet said.
“We are the ones responsible. It’s first and foremost a tool. So it’s our duty to learn how to use it and use it well.”
Fortunately, help is increasingly available.
“There’s a lot more courses to educate. It’s not just technologists, but also all the different employees who are interacting with AI, to understand how they can best use it – that’s what’s really going to help with the uptake,” Jacquet said.
Jacquet says the international work she leads on AI standards is developing a handbook to help SMEs adopt AI, which will form part of the international best practice it sets out.
With AI increasingly a plug-and-play proposition for businesses, it again comes back to the questions of how and why.
“If you’re using that tool – how is it changing your risk or amplifying your risk? This is very much something that a small business needs to consider,” Jacquet said.
The need for global AI standards
This year saw the Australian government deliver its interim response to the safe and responsible AI consultation. Jacquet is optimistic about it, and about the role that standards and adapted legal frameworks can play in guiding AI development.
“I think Australia has been extremely good in taking a considered approach to regulation,” Jacquet said, noting that the country’s efforts to involve various stakeholders such as SMEs in the process have been a model for balancing innovation with ethical considerations.
When it comes to laws, Jacquet is clear that new regulations should not be created unnecessarily. Instead, existing laws should be adapted – where possible – to address the specific challenges that AI presents.
“I’ve always been a big advocate that you don’t need to reinvent the wheel. You need to look at existing laws and see how they apply to AI,” Jacquet said.
As we move towards regulation, she stresses the need to think about AI and its standards globally.
“AI is not a local technology. It comes from different parts of the world,” Jacquet said.
“You need data from everywhere around the world. The system may be located here or somewhere else. So, it is very much an international piece.”
This is certainly important for business operations – from multinationals to SMEs and startups. Anyone doing business internationally will need to navigate differing AI standards and regulations between countries.
Jacquet highlights the role of international standards in ensuring AI systems can operate across borders without causing regulatory headaches.
“There’s a lot more initiatives now – you see an international initiative on AI flourishing every day. It’s good to keep the competition, but to avoid contradiction – because there’s so much fragmentation,” Jacquet said.
“There’s so much noise in the scene – effectively we all need to work together to provide the best solution.
“Competition is good. Contradiction is not.”
The challenge is maintaining a balance between fostering innovation and ensuring that AI can be safely and effectively integrated into various legal and regulatory frameworks around the world.
Jacquet points to the differing approaches in different parts of the world, such as the EU AI Act, which has opted for a mandatory approach. Non-compliance with the Act will attract a maximum financial penalty of EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
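For a sense of how that penalty cap works in practice, here is a quick, purely illustrative calculation – the figures come from the Act as quoted above, while the function name and example turnover are hypothetical:

```python
# Illustrative arithmetic only: the EU AI Act's maximum penalty is the
# higher of EUR 35 million or 7% of worldwide annual turnover.
def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 70 million,
# because 7% of turnover exceeds the EUR 35 million floor.
print(f"{max_penalty_eur(1_000_000_000):,.0f}")  # 70,000,000
```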
By comparison, Jacquet mentions Singapore, which leans more towards self-regulation.
“If there is too much contradiction across different countries it’s going to be a moot point because you’re not going to be able to scale,” Jacquet said.
“So that’s why the international conversation is absolutely key.”
Other AI news this week
- Earlier this morning Amazon was getting absolutely grilled about its use of AI by the Senate Select Committee on Adopting Artificial Intelligence. This included the use of AI in the US to monitor employees in fulfilment centres and charges filed by workers alleging the use of the technology for union busting. According to the company, Amazon isn’t using AI in Australia for worker monitoring.
- Google is also up in front of the Committee and has already been asked about its latest environmental report, which revealed the huge impact AI is having on the company’s sustainability goals – something we covered in a previous edition of Neural Notes.
- Microsoft will be up this afternoon and you can follow along live here.
- The latest model of Elon Musk’s AI chatbot, Grok, was released this week. It immediately stirred controversy, as users were able to generate images of a Nazi Mickey Mouse, as well as Taylor Swift, Kamala Harris and Alexandria Ocasio-Cortez in lingerie. Some users have also reportedly been able to get around Grok’s ‘guidelines’ by saying the prompts were for crime scene analysis.
- Australian public servants will now be held responsible for AI stuff-ups. All government agencies will need to name who is responsible for AI use and notify the Digital Transformation Agency (DTA) when things go wrong.
- The new suite of Google Pixel 9 phones has some new AI tools: Pixel Studio, an AI-powered image-generating app; Pixel Screenshots, which makes your screenshots searchable by analysing content locally; and Call Notes, which summarises conversations and saves them to the call log for easy reference.