Artificial intelligence’s (AI) rapid expansion into everyday life hasn’t left human resources behind. AI is often pitched as the great equalizer, making decisions from the facts and patterns it gathers through machine learning (ML).
Yet most of us still mistrust AI, unsure whether machines should be making life-changing decisions for us.
It’s no different in HR. New guidelines from the U.S. federal government are making a statement: don’t just rely on AI. Let’s break down the problems, opportunities, and future of AI by looking at how you can apply it to your HR practices without allowing bias to interfere with decisions.
McKinsey & Company defines AI as “a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem-solving, and even exercising creativity.” AI has worked behind the scenes for roughly 70 years, from early automation to robotic process automation (RPA), but today’s generative AI and machine learning have become a transformative force.
AI can efficiently and effectively filter through the applications received for a job opening in your company. It can sort resumes for specific skill sets that fit employer expectations. It can also support hiring managers in comparing applicants, pulling out small nuggets of information about each one and using them to inform decisions.
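To make the resume-sorting idea concrete, here is a minimal sketch of how a skills filter might work. The skill names, resumes, and scoring logic are invented for illustration, not a description of any real recruiting product.

```python
# Hypothetical sketch: a minimal keyword-based resume screen.
# Skills, resumes, and candidate names are invented for illustration.

REQUIRED_SKILLS = {"python", "sql", "data analysis"}

def skill_match_score(resume_text: str) -> float:
    """Return the fraction of required skills mentioned in the resume text."""
    text = resume_text.lower()
    found = {skill for skill in REQUIRED_SKILLS if skill in text}
    return len(found) / len(REQUIRED_SKILLS)

resumes = {
    "candidate_a": "Experienced in Python and SQL reporting.",
    "candidate_b": "Background in marketing and sales.",
}

# Rank candidates by how many required skills their resume mentions.
ranked = sorted(resumes, key=lambda name: skill_match_score(resumes[name]),
                reverse=True)
```

Even this toy version shows why training and configuration matter: the filter only “sees” whatever signals it was told to look for.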
AI isn’t something out of movies like The Matrix or Terminator, where robots take over the world. Instead, the problem is that it gets things wrong. A flaw in AI training data can easily translate into inaccurate output you shouldn’t rely on. Uncertainty about the accuracy of the information can make using AI worrisome.
AI seems to promise the same thing as blind hiring: withholding demographic information to limit bias in decisions. The job site Zippia reports that blind hiring:
That’s why many companies have turned to AI and blind hiring to combat bias.
In the 2023 Talent Index report from talent lifecycle management platform Beamery, 59% of polled job applicants said companies used AI during the recruitment process.
AI is a tool, like many others, that hiring managers and HR directors can use to inform decisions. However, AI has one big (and super important) limitation: it decides based on the information used to train it. The quality of AI training data matters.
In other words, “garbage in, garbage out.”
Training data can be any dataset you decide to use, from your history to the entire internet. If that training data contains any bias, the decisions made by your AI tools will contain that bias.
You may think that hiring managers have moved past allowing bias of any kind (based on race, age, disability, and other protected classes). Yet that’s not the case. Zippia reports that 85 to 97% of hiring managers rely on intuition (which could mean a gut feeling or an unrecognized bias) to make decisions.
AI often uses historical data to inform decisions. What happened in the past is the best representation of the future, right? Now, think about hiring 20 or more years ago. Could you safely say there was no bias then? If the historical data the AI tools use lacks diversity, then the AI tool is going to make hiring and promotional decisions that lack diversity as well.
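The “garbage in, garbage out” dynamic can be shown with a toy model: a system that simply learns the hire rate in historical records will reproduce whatever imbalance those records contain. The data and group labels below are invented for illustration.

```python
# Hypothetical sketch of "garbage in, garbage out": a toy model "trained" on
# historical hiring outcomes memorizes, and so reproduces, the historical
# pattern. Data and group labels are invented for illustration.

from collections import defaultdict

# Historical records: (group, hired). Group A was hired far more often.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

def train_hire_rates(records):
    """'Train' by memorizing the historical hire rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = train_hire_rates(history)
# Group A's 80% hire rate and Group B's 20% rate are now baked into the model.
```

A real model is far more complex, but the principle is the same: skewed history in, skewed recommendations out.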
There are various types of bias present in AI algorithms that could influence decision-making. HR Morning noted these four examples:
Look deeper than one person missing out on a job. AI has a powerful ripple effect within the employment world. If a bias sneaks into the AI’s programming or data, it gradually shapes who the company hires, who makes decisions for the company, and who represents the organization’s workforce over time.
Bernard Marr believes AI will affect society at all levels, including the economy, the law, politics, and regulation of all jobs in all industries.
It’s critical to consider how AI intersects with the Equal Employment Opportunity Commission (EEOC) rules all companies must follow. The EEOC released specific guidance in May 2023 that puts pressure on companies to ensure no bias exists in their use of AI tools during the hiring process.
How will HR directors make decisions about using AI in legal matters? Could a company face liability for the decisions AI bots make if they violate compliance or regulatory requirements?
There are ways to reduce AI bias in hiring and promotions that you can implement (no tech degree needed here!).
Just like training the new hires coming into your company, training AI with unbiased datasets is the critical first step in mitigating this problem. One way to support this is by using both in-house and third-party data resources, widening the pool of information available.
The other steps are impossible if the company’s bias and fairness standards are unclear. Ensure leadership clearly defines expectations in this area, both to eliminate or reduce the risk of bias and to tell hiring teams what is and is not acceptable.
Many companies are already using bias-free AI effectively. For example, Electrolux, the home appliance manufacturer, felt the pressure of an aging-out workforce and a lack of talent. They turned to AI to help overhaul their marketing and hiring efforts. Recruiters built automated nurturing campaigns that clarified job opportunities for applicants and matched candidates to roles based on their preferences, interests, and career objectives.
Stanford Health Care is another example. They created an AI chatbot that helped streamline their otherwise complicated hiring process. It enabled candidates to complete the application process over time on their cell phones and then offered relevant job matches.
AI doesn’t get to make all the decisions. It shouldn’t scare you either, especially since this is more of an adapt-and-overcome situation (AI isn’t going anywhere).
Organizations like the American Civil Liberties Union (ACLU) are working to ensure companies recognize the risks of AI bias in hiring and promotions and adjust for it. They recommend:
Most importantly, employees should clearly understand that companies must test their AI and ensure it is not violating the law. If you believe you have been harmed by a biased AI decision, it may be best to raise the issue with the hiring team or leadership.
The U.S. White House Guidance on Discrimination Protections encourages disparity testing and mitigation efforts whenever companies use automated systems built on datasets like these. It’s working on what it calls “a Blueprint for an AI Bill of Rights.”
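One long-standing, concrete form of disparity testing is the EEOC’s “four-fifths rule”: if a group’s selection rate is less than 80% of the highest group’s rate, the process may show adverse impact. Here is a minimal sketch of that check; the applicant numbers are invented for illustration.

```python
# Minimal sketch of the EEOC "four-fifths rule" disparity check.
# Applicant counts and group names are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the top group's rate."""
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

# Invented example: how often an AI screen advanced each applicant group.
rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(30, 100),  # 0.30
}
flags = four_fifths_check(rates)
# group_b's rate is 0.30 / 0.50 = 60% of group_a's, below the 80% threshold,
# so it gets flagged for possible adverse impact.
```

Running a check like this on an AI screening tool’s outputs is exactly the kind of disparity testing the guidance encourages.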
Artificial intelligence will continue to move into all aspects of hiring and promotions. Supply Chain Brain offers clarity, especially on how AI can support blue-collar companies and workers, saying that the best route forward is a “harmonious collaboration, where the strengths of both AI and human expertise unite to forge an optimal recruitment approach.”
Using AI in the initial stages of candidate identification and matching skills to needs is a solid place to incorporate AI’s ability to process data quickly. This, along with recruiters who are exceptionally skilled in using AI, can help to create better outcomes for companies.
Will that happen? It’s going to take some work, but it’s the goal.
Read about how you can lead your organization to well-governed artificial intelligence to ensure your success and compliance.
Download your free eBook:
Phenom eCloud is a comprehensive technology solutions provider committed to empowering businesses to overcome challenges, enhance their workforce capabilities, and achieve superior outcomes.