Cybersecurity experts have warned that the race to adopt artificial intelligence (AI) solutions in the corporate world is “fraught with moral and technical issues”.
A paper by researchers from the University of the Sunshine Coast (USC) has described the use of tools such as OpenAI’s ChatGPT and Google’s Bard (now Gemini) as a business blind spot.
Generative AI produces content that appears to be created by humans, drawing on patterns learned from large amounts of real-world data.
Paper co-author Dr Declan Humphreys said generative AI tools could leave companies exposed to deliberate attacks or accidental harm, including mass data breaches that expose third-party information, or business failures caused by manipulated or “poisoned” AI models.
“The research shows it’s not just tech firms rushing to integrate AI into their everyday work — there are call centres, supply chain operators, investment funds, companies in sales, new product development and human resource management,” Humphreys said.
“While there is a lot of talk around the threat of AI for jobs, or the risk of bias, few companies are considering the cybersecurity risks.”
In response to the concern, Humphreys and fellow computer science and artificial intelligence experts at USC developed a checklist to give businesses five ways to ethically implement these kinds of solutions.
For organisations looking to implement AI systems, the researchers said privacy and security should be top priorities, alongside:
- Secure and ethical AI model design
- Trusted and fair data-collection process
- Secure data storage
- Ethical AI model retraining and maintenance
- Upskilling, training and managing staff
The researchers stressed that companies were equally susceptible to hacking whether they built their own artificial intelligence models or relied on third-party providers.
“Hacking could involve accessing user data, which is put into the models, or even changing how the model responds to questions or the answers it gives,” Humphreys said.
“This could mean data leaks, or otherwise negatively affect business decisions.”
Humphreys noted that organisations moving to adopt artificial intelligence solutions should think carefully about how they adapt their governance frameworks, and that government regulation to protect workers, sensitive information and the public also needed to rise to meet the challenge.
The fact that legislation had not kept pace with data protection and generative AI issues only exacerbated the problem, he added.
“The rapid adoption of generative AI seems to be moving faster than the industry’s understanding of the technology and its inherent ethical and cyber security risks,” Humphreys said.
“A major risk is its adoption by workers without guidance or understanding of how various generative AI tools are produced or managed, or of the risks they pose.”
‘AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business’ was published in the Springer Nature journal AI and Ethics on Monday.
The USC research, also co-authored by Dr Dennis Desmond, Dr Abigail Koay and Dr Erica Mealy, was supported by Open Access funding enabled and organised by CAUL and its member institutions.
“This study recommends how organisations can ethically implement AI solutions by taking into consideration the cybersecurity risks,” Humphreys said.
This article was first published by The Mandarin.