Google recently bundled its Gemini AI features into all Workspace business plans, touting them as productivity boosters that come with a modest price bump. These tools work across the Google Workspace suite of apps and are now active by default in many organisations.
This approach puts the onus on users to disable the features themselves, and the quiet nature of the rollout has led to some confusion and frustration. In at least one case involving Google Meet, it has also raised privacy and security concerns.
Speaking to SmartCompany on the condition of anonymity, a source described a recent experience with Gemini’s transcription feature where it allegedly continued to gather and summarise sensitive meeting information after they said it had been turned off.
The incident highlights a pertinent issue as more AI is pushed into workflows – transparency and user control need to be non-negotiable.
Wondering whether Google Gemini’s AI is actually off
According to the source, a work colleague turned on the Gemini recording function at the beginning of a meeting. This was shortly after Gemini had been rolled into Workspace business accounts, so it was something they hadn’t seen before.
However, they switched Gemini off again around a minute into the meeting. After the meeting concluded, the organiser received an email from Gemini containing two attachments that have been seen by SmartCompany.
The first was the transcript, which confirmed it had stopped after 59 seconds. However, the second was a meeting summary that contained details of the entire meeting, including specific points discussed long after transcription had supposedly been disabled.
This summary was also already saved in the organiser’s Google Drive.
This was a problem, because sensitive information from the meeting had been summarised and stored in a place that could be accessed by others within the company – despite the team’s explicit action to turn off the feature.
One of the problems may be that Google Gemini’s AI has too many steps
It’s worth noting that SmartCompany was not able to replicate this issue. During a recent meeting, I too turned on the recording and transcription functionality, switching it off again after a minute.
My editor – who was the meeting organiser – and I were both sent the transcription for the time the recording was on. The meeting summary also only contained information for that minute, despite the meeting continuing for longer.
It’s possible that what happened to the source was a bug in Gemini, or perhaps an error due to how the AI’s features are configured for Meet in particular.
When users enable AI note-taking in Meet, a document is created in the organiser’s Google Drive. Additional options include “also transcribe the meeting” and “also record the meeting”. Disabling these functions requires navigating multiple steps during the session, and even then, the process can feel unclear.
Similarly, when you stop Gemini from taking notes, it doesn’t stop everything by default. You have to click a box asking it to also stop transcribing.
The source speculated that these complexities could easily lead to miscommunication. “It’s possible some of these steps were missed, but the outcome is still incredibly unsettling. You shouldn’t need to be a technical expert to understand whether an AI tool is off.”
“If I hit the stop button, I expect all the Gemini functionalities to stop. Instead, it felt like the AI kept listening in the background.”
Very real privacy concerns with Google Gemini
The implications of this incident go far beyond individual meetings. If Gemini continues to operate in the background when users are unaware, it could pose serious risks for organisations handling sensitive data.
In this case, the source’s team acted quickly, disabling Gemini entirely across their organisation. But not every team will catch these issues in time. This also has to be done at an admin level, though individual users can toggle off some personal AI settings.
The risks extend to scenarios where summaries or transcripts could be accidentally shared with the wrong person inside or outside an organisation, particularly in collaborative environments.
These documents are also really easy to find. In the case of my experimentation, the summary document was created twice – once in my work Google Drive and once in my editor’s. Each document, despite being identical, had a unique URL.
If a company’s overall Google Drive is open, meaning that anyone within it can search for documents, this has the potential to be a huge problem. It would be easy to search for meeting summaries or transcripts via a name or other obvious keywords to see what was discussed.
Beyond privacy concerns, this could also represent a major security threat, especially if the AI processes data related to intellectual property, NDAs, contracts, or other confidential information and a bad actor targeted a company’s Google Drive or an individual worker’s email.
Our take: Google’s lack of transparency in mass rolling out Gemini to businesses is alarming
Google’s vague response to these concerns isn’t helping. When approached for comment, the company didn’t directly answer any questions. Instead, a representative said that users can “adjust their smart features and personalisation settings” or use admin controls to manage AI access.
They also pointed to the announcement blog post (which most business workers were never going to see), as well as one providing more info on how to use Gemini with Google Meet.
But none of this addresses the core problem: Gemini’s functionality isn’t transparent or intuitive. And it went from being a pricey add-on to being quietly folded into Workspace business accounts last week without warning.
AI tools in the workplace need to be simple to understand, easy to control, and — most importantly — opt-in.
If Google and other big tech companies want businesses to embrace AI, they must first build trust.
Ironically, this is exactly what many of them have been touting during their generative AI arms race for the past two years.
But it needs to be more than lip service.
It means ensuring users have clear and granular control over when, how, and why these tools operate. Until then, incidents like this will only fuel scepticism about the role of AI in professional environments — and rightly so.