It’s no secret employees have been covertly using generative AI at work ever since ChatGPT first launched. Data from last year showed two out of three Australian office workers were using AI tools without telling their bosses.
While more workplaces are formally adopting AI, complete with guardrails and company standards, there's still a gap between AI hype and implementation. A 2024 study from Slack found 60% of Aussie execs feel a high degree of urgency around using AI at work, but only 35% of employees have received guidance on how to do so.
So a significant portion of the Aussie white-collar workforce is left to its own devices. And without proper guardrails and standards, private information is being fed into generative AI platforms without businesses knowing.
A new report from SaaS security company Indusface has revealed the top five types of private, and sometimes confidential, company information workers are feeding into chatbots.
To prevent this from happening, Indusface recommends companies make it a priority to enforce strict AI usage policies and rely on secure, approved AI tools.
Work-related files and documents
AI tools have become popular for tasks such as analysing data and drafting reports. Indusface found over 80% of professionals at Fortune 500 companies rely on AI platforms like ChatGPT. Of the data they share, around 11% is strictly confidential, including business strategies.
Personal details
Personal details were also high on the list. Employees often share personal data such as names, addresses, and contact details with AI platforms.
According to the report, 30% of workers believe protecting this information isn't worth the effort. And while one in three employees now has access to cybersecurity training, many don't use it.
Client or employee information, including financials
Professionals frequently input sensitive client or employee data into AI platforms.
Similarly, the report found that confidential financial information — both personal and business-related — is being loaded into the likes of ChatGPT.
Passwords and access credentials
Despite an uptick in warnings and education around password hygiene, employees continue to input passwords into AI tools, exposing multiple accounts to potential breaches.
AI platforms aren't designed to store credentials securely. But that hasn't stopped people from pasting passwords into tools like ChatGPT.
Companies should enforce password management protocols, unique passwords, and two-factor authentication at a minimum.
Intellectual property and company codebases
Lastly, the report notes some employees are adding intellectual property or a company’s codebase (its core source code and software components) to chatbots.
This is a problem because when proprietary code is pasted into AI tools like ChatGPT, it could be stored or even used to train future models, exposing trade secrets.