Artificial Intelligence tools have the potential to revolutionise the way we work.
In fact, they already are. When it comes to recruitment, for example, AI-powered platforms can save recruiters as much as 80% of their annual costs, reduce time-to-hire by up to 90%, and screen candidate résumés 70% faster than humans.
What those statistics don’t show is the potential cost organisations face when this incredible technology is deployed irresponsibly. Left unchecked, AI can amplify inequality, perpetuate bias and leave behind the most marginalised members of society.
Amazon’s hiring algorithm that favoured male candidates is a cautionary tale of what happens when machines are left to learn from biased historical data.
But it doesn’t stop at gender bias. Systems that identify more “employable” candidates by analysing facial movements, word choice, and speaking voice in video interviews can penalise candidates who speak English as a second language, people with anxiety, and people with disabilities.
The problem of de-biasing AI is not new – it’s something academics and tech experts have been grappling with for years, with little progress. Perhaps it’s time we acknowledged that the problem isn’t the technology; it’s the human touch.
When those designing and deploying this technology lack awareness of diversity and inclusion principles, those blind spots are built into the finished product. This inherently human problem requires a more holistic and robust solution.
A human-centred approach
CSIRO Data61’s Diversity and Inclusion in Artificial Intelligence (D&I in AI) team regards human-centred design as crucial to AI’s evolution.
Simply put, AI exists to enhance human lives, solving complex issues and improving everyday tasks. By focusing on human-centred AI, we ensure these systems align with our ethical standards and human values.
This approach isn’t just about being user-friendly – it’s about building AI that works for everyone, reflecting the diversity of its users in its design and function. It’s about creating technology that’s transparent and trustworthy, where people can feel confident in the AI that interacts with them.
A three-year study by Diversity Council Australia (DCA), Hudson RPO, and Monash University examining the impact of AI-supported recruitment tools on diversity and inclusion backs up this premise.
In the third and final stage of this research, DCA examined the state of play for AI tools in recruitment and found that algorithmic discrimination stems from a lack of diversity and lived experience in the tech and AI workforce, which is typically made up of white men under the age of 40.
That means AI is taught to make decisions based on incredibly limited information – information that lacks nuance and any real understanding of marginalisation or the systemic nature of discrimination.
Properly diversifying the tech industry would take decades of progress – progress that biased AI tools would likely hinder anyway – so DCA’s research suggests a more immediate solution: examining how we deploy this technology.
Concerningly, most organisations have not considered diversity in their exploration or deployment of AI – 74% of surveyed organisations have not taken key steps to reduce unintended bias, and 60% have not developed ethical AI policies.
Much of this seems to be due to a misguided view – held by both employers and job seekers – that machines are, by their nature, more objective and less biased than humans.
DCA’s newly released report, Inclusive AI at Work in Recruitment: How organisations can use AI to help rather than harm diversity, emphasises the need for businesses using AI tools in their recruitment process to consult with those who have lived experience of marginalisation.
And that means truly engaging with them, not just doing a tick-and-flick.
If your organisation’s D&I maturity is low to begin with, introducing these tools will likely only lead to more of the same, if not worse – and at scale. That’s why a holistic approach to diversity and inclusion is imperative before you even consider using this technology.
Using AI to help rather than hinder diversity
In recruitment, where AI is increasingly entrusted with finding the right candidate for the job, the imperative for inclusivity is clear.
To ensure AI tools do not perpetuate biases, we must first make sure the datasets they are trained on are diverse – representative datasets can help AI look past entrenched stereotypes and evaluate candidates on their merits.
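As a rough illustration of what checking a dataset’s diversity might look like in practice, the sketch below compares the demographic make-up of a hypothetical training set against labour-market benchmarks. It is not drawn from DCA’s research – the column names, benchmark figures, and 20% threshold are all assumptions made for the example.

```python
import pandas as pd

# Hypothetical training data for a resume-screening model;
# column names and values are invented for this sketch.
train = pd.DataFrame({
    "gender": ["female", "male", "male", "female", "male", "non-binary"],
    "shortlisted": [1, 0, 1, 1, 0, 1],
})

# Share of each group actually present in the training data.
observed = train["gender"].value_counts(normalize=True)

# Assumed labour-market benchmark shares (illustrative only).
benchmark = pd.Series({"female": 0.47, "male": 0.51, "non-binary": 0.02})

# Flag any group whose share falls more than 20% below its benchmark –
# an arbitrary threshold chosen just for this example.
report = pd.concat([observed, benchmark], axis=1,
                   keys=["observed", "benchmark"]).fillna(0)
report["under_represented"] = report["observed"] < 0.8 * report["benchmark"]
print(report)
```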
Secondly, transparency is crucial. Companies must be clear about how the AI makes its decisions and provide explanations when needed. This allows for accountability, contestability, and the opportunity to correct any inadvertent biases.
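What such an explanation looks like depends on the tool, but for simple, interpretable models it can be as direct as showing which inputs pushed a candidate’s score up or down. The sketch below assumes a logistic regression screener with invented feature names and data; it illustrates the idea, not any vendor’s actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical candidate features; names and values are invented.
feature_names = ["years_experience", "skills_match", "referred"]
X = np.array([[3.0, 0.8, 0], [7.0, 0.6, 1], [1.0, 0.9, 0], [5.0, 0.4, 1]])
y = np.array([1, 1, 0, 0])  # past shortlisting decisions

model = LogisticRegression().fit(X, y)

# For one candidate, each feature's contribution to the score is
# simply coefficient * feature value, which is easy to explain.
candidate = X[0]
for name, weight, value in zip(feature_names, model.coef_[0], candidate):
    print(f"{name}: {weight * value:+.3f}")
```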
Moreover, human oversight cannot be discounted. AI is a tool, not a replacement for human judgment, and recruiters must remain vigilant, ensuring that AI recommendations are just one part of a holistic hiring process.
Lastly, ongoing audits of AI-driven recruitment practices can help identify and rectify biases, ensuring these systems evolve to become more equitable over time.
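A common starting point for such an audit is the “four-fifths rule”: compare each group’s selection rate with that of the most-selected group, and treat a ratio below 0.8 as a red flag. The sketch below shows the calculation on invented data; a genuine audit would cover far more than this single metric.

```python
import pandas as pd

# Hypothetical screening outcomes from an AI recruitment tool;
# the groups and decisions are invented for this sketch.
outcomes = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "shortlisted": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Selection rate per group, and each rate relative to the highest.
rates = outcomes.groupby("group")["shortlisted"].mean()
impact_ratio = rates / rates.max()

# The four-fifths rule flags any ratio below 0.8 for human review.
flagged = impact_ratio[impact_ratio < 0.8]
print(impact_ratio)
print("Needs review:", list(flagged.index))
```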
Keeping humans as the central pillar in the AI ecosystem means AI-enabled technologies will serve as a partner in progress, amplifying human potential rather than replacing it. With the right approach, AI has the potential to not only find the best talent but also to help ensure the decision-making process is equitable and fair for all job seekers.
Professor Didar Zowghi is a senior principal research scientist at CSIRO’s Data61, and Dr Annika Kaabel is a research manager at Diversity Council Australia.