New AI laws are set to shake up the use of AI for talent decision-making in 2023. This is just the start of a global movement that organisations need to be aware of.
To date, there have been no consistent or universally applicable standards for the use of AI in employment decision-making.
That leaves vendors able to call their technology ‘ethical’ based on self-assessment, or by choosing their own ‘ethics committees’, without the input of legal, ethical, global, and, importantly, independent regulatory experts.
Ethical AI laws and regulations
But that’s set to change, with a number of ethical AI laws and regulations coming into effect as soon as January 1, 2023.
This rise in regulations is going to be global but will start in the United States with NYC Local Law 144. This law will mandate that employers can only use AI decision-making tools for employment or promotion decisions for individuals in New York City if that AI has been subject to an independent bias audit within the last 12 months.
Violations range from failing to make information about the audit publicly available to not giving candidates the option to opt out of the process, and penalties accrue daily, adding up to significant financial pain for organisations.
This law will be followed by similar regulations across the US, with privacy acts in California, Colorado, Virginia, and Connecticut set to be enacted, alongside the White House’s Blueprint for an AI Bill of Rights.
The EU Commission has already proposed multiple regulations including the EU AI Act, and Australia is following suit with the NSW government already adopting an assurance framework to assess AI tools, with more regulations no doubt set to follow across the country.
This is the start of a global move towards consistent standards that ensure AI vendors are developing their technology with ethics and bias reduction at the forefront.
This is a change we at Reejig welcome. But where does it leave organisational leaders who may not have heard of these regulations?
HR AI vendors
The focus now should be on reviewing your HR AI vendors, not just for their compliance with these new regulations but for their stance on ethics overall.
After all, it is you as the organisation who will be liable for non-compliance, and ethical AI regulation is only going to grow.
It’s important that, as an organisational leader, you understand where your vendors sit: ask questions, ask for the required documentation, and be prepared to look elsewhere if yours can’t come to the table.
Unbiased algorithms
We believe AI has the power to do good: transform workforces, inform business decision-making, and unlock opportunities for people who might otherwise be missed. But without a deliberate focus on keeping data and algorithms free of bias, it’s all too easy for the outcomes and recommendations that AI makes to be biased too.
Since Reejig’s inception, we’ve been committed to ensuring the recommendations our AI makes support good and fair decisions. That’s why, in partnership with the University of Technology Sydney, we developed the world’s first independently audited Ethical Talent AI.
We’re also continuing to lead in the ethical AI space, undergoing a new independent audit with a global panel of experts in line with the new regulations, so that the organisations who work with us receive employment decision-making support grounded in fairness, ethics, privacy, and transparency.
2023 will only be the beginning of ethical AI accountability, as lawmakers, organisations, and society look at how vendors are managing the risk of bias when it comes to applying decision-making support to people.
Getting on board now won’t just save you significant financial pain down the road; it will also give you confidence that your AI is helping make fair decisions that make people feel seen and heard, create opportunities, and avoid wasted potential in your workforce.
Siobhan Savage is co-founder of Reejig, an independently audited workplace intelligence platform.