Repeat after me: Your AI agent is not a worker

Aubrey Blanche-Sarellano says some companies and leaders are anthropomorphising AI agents, calling them “AI workers” or “digital workers”. Blanche-Sarellano argues that, as leaders in people and culture, we should push back on this: loudly and urgently.
Aubrey Blanche-Sarellano. Source: SmartCompany

We’re seeing increasing interest in incorporating artificial intelligence (AI) agents in the flow of work. While things like conversational chatbots have been around for ages in technical terms (you know, the 2010s!), the rise of large language models (LLMs) has people using computing in entirely new and creative ways. Whether in our personal or professional lives, people are automating tasks, personalising every aspect of their experience, speeding up research, and bolstering their creativity. 

There are so many reasons to be optimistic about the potential positive impact of AI: it can free humans to do more strategic and creative tasks when deployed well. 

But there are also major drawbacks that often don’t get as much airtime: lowered overall quality of work output, embedded and exacerbated bias, and the threat of mass unemployment, to name a few.

Business leaders are increasingly pushing to adopt these technologies, but often lack the people, ethics, and legal expertise to fully consider all aspects of deployment. This means utility is sometimes prioritised over human impact. 

Even more concerningly, some companies and leaders are anthropomorphising AI agents, calling them “AI workers” or “digital workers”. 

I’m absolutely saying that, as leaders in people and culture, we should push back on this: loudly and urgently.

AI agents are not people

I’m going to say it simply: your AI agents are not people. 

I get that as humans, our brains aren’t great at separating digital, human-appearing entities from actual, breathing humans (Her came out in 2013 and has aged pretty well). Our brains develop parasocial relationships with people we only ever see in media, social or otherwise. 

But I believe we are equally capable of the emotional and cognitive work needed to separate what “sounds like” a human from what “is” one.

Too many questions, not enough answers

The fact is, people have complex stories, emotions, and rights. Treating an AI agent as a human, at minimum, comes with questions I haven’t seen leaders in this space acknowledge, let alone answer. Even to begin, there are huge role scope and workforce planning considerations. 

Lately, I’ve been asking:

  • In a world where compensation is determined based on a 38-40 hour workweek, how do we reward someone who can automate away enormous portions of their work?
  • Does “flexible work” include the right to outsource part of your job to technology?
  • If you do leverage AI to reduce your current workload, do you get that time back or are you expected to pick up additional work?

Ethically, before we elevate a computer to the level of a colleague, we should probably nail down answers to questions like:

  • Who is held accountable when a model’s inevitable bias manifests and harms someone?
  • When they hallucinate, do you initiate a wellness check (and are they eligible for your healthcare benefits)?
  • When they produce false information, is it a lie and a violation of your code of conduct?
  • What rights does the AI have, and how do we protect them?

As someone who has been writing about using AI in an HR context for quite a while, I still find these questions wildly murky. As for the many folks I see flooding LinkedIn with pontification about the “digital worker revolution”, I’m not sure they’ve thought about these questions, let alone tried to formulate cogent answers.

When AI goes bad

There is an astounding number of unknowns in this space, but we can be absolutely certain that AI will get a lot of things wrong. What we don’t know is how wrong, or what the harmful impact will be.

Within all sorts of math, there is something known as the “error rate”: basically, how often something gets it wrong. We know AI has an error rate, and the rate we’re willing to tolerate can and does vary widely between use cases. For example, the potential harm of getting a summary of 5,000 employee survey comments wrong isn’t zero, but it’s also not likely to result in the loss of someone’s livelihood. Giving an employee an incorrect performance rating, on the other hand, could ruin someone’s career.

And knowing that the AI can go wrong raises another important question: when do we fire the AI from a particular task, and when do we fire it altogether?

Use, don’t confuse, the technology

While I tend to fall a bit on the cynical side of technology (largely due to major questions about the ability of many of its creators to know or care about the social implications of their inventions), I see many wonderful use cases for AI.

But in order to achieve those benefits – and not harm actual people in the process – we need AI to stay where it belongs: in your stack. 

This doesn’t mean HR is irrelevant; quite the opposite! As people leaders, it’s imperative we’re in the room working with our stakeholders to deploy these technologies in a way that balances the benefits and risks.

And that starts with calling them by their name: agents.
