Standards Australia and ANU join together for a new AI standards training course

When ChatGPT went public on November 30, 2022, everything changed for AI.
Standards Australia

Partner Content

Dr. Ian Oppermann, Associate Industry Professor at UTS and Director at Standards Australia. Source: Supplied.

When ChatGPT went public on November 30, 2022, everything changed for AI. The release of the generative AI chatbot marked a turning point in the way we think about AI, its capabilities, benefits and potential risks, with use cases moving beyond basic automation towards tasks previously reserved for humans.

“There are really interesting use cases like intelligent process automation, automated summary creation, and low-complexity decision making,” says Dr. Ian Oppermann, Associate Industry Professor at UTS and Director at Standards Australia. “Think about being able to ask questions along the lines of, ‘Tell me all the things related to…’. A question could be ‘I’m a pensioner in New South Wales, and I want to understand what services are available to me, or what support is available to help me live independently.’ Then there’s the ability to generate something, for example a summary or a synthesis. Imagine a scenario where you can say ‘here’s all the information about this topic, now give me a high-level summary, tell me what the most important issues are, tell me what the exceptions are, or tell me what the gotchas are.’”

Register today to attend Standards Australia and ANU’s training course on AS ISO/IEC 42001:2023

AS ISO/IEC 42001: Australia’s adoption of a new global AI standard

In light of AI’s increasing use and the tech’s benefits and risks — Oppermann points to the case of one US lawyer whose AI-assisted research resulted in fake case citations — businesses need to take care with how they wield these technologies. With its recent adoption of AS ISO/IEC 42001:2023, Standards Australia is laying out guidelines for the responsible, safe and effective use of modern AI, even as it continues to evolve.

“What we once thought of as AI has changed. After some time we said, ‘well, that’s just an algorithm that plays chess’,” Oppermann says. “The frontier of AI keeps moving, and it will continue to move, and we’ll continue to redefine what we mean by intelligence and what we mean by these different capabilities. So, the standard tries to frame some thinking around uses of AI which aligns with risks and ways to think about risk.”

That risk-based approach to AI use is at the heart of the standard, complementing established, broader management system standards such as ISO/IEC 27001 for information security. AS ISO/IEC 42001:2023 frames AI in relation to other tech concerns (like data governance and cybersecurity) and, as the first globally adopted standard of its kind, puts Australian businesses on the same page as international users. “If you adhere to the standard, you’ve all of a sudden entered an international market,” says Oppermann. “So, what we do in Australia is suddenly compatible with things that are happening in all other parts of the world which adhere to the standard.”

And while the standard covers complex AI issues like responsible use, transparency, risk management and efficiency, it’s important to remember that it’s not an AI instruction manual. “The standard doesn’t tell you how to use AI, but it certainly reframes the conversation in line with the capabilities of AI as they currently are. It helps people think through how to mitigate the risks of using data-driven tools that amplify, accelerate, or adapt,” says Oppermann.

Learn more about the new standard with ANU

To help businesses understand and implement the new standard, Standards Australia has announced a training partnership with the Australian National University (ANU). The partnership will give businesses the opportunity to delve into the details of the standard through a two-hour online course — Understanding AS ISO/IEC 42001:2023. The benefit of the training session, says Oppermann, is that an organisation can then apply the details of the standard to its own AI use cases.

“What the session will raise awareness of is the standard itself — peel off the lid and say, ‘This is what’s inside the box and these are the major elements of the standard,’” says Oppermann. “It’ll help you become familiar with ways of thinking about the standard and give you enough information so you can then go further yourself. The standard really is a tool for thinking about developing and deploying AI.”

Participants in the on-demand learning module will receive a detailed breakdown of the standard and a certificate on completion – and, importantly, organisations gain the chance to demonstrate a commitment to the principles of safe and effective AI use. “Over time companies can ultimately say, ‘We adhere to 42001,’” says Oppermann. “So, it’s a mark of trust.”