PhenomeCloud Insights

A Guide for HR on Managing Ethics in AI.

Written by PEC | Aug 26, 2024 1:56:57 PM

AI has been around for more than 70 years, but its impact on HR has never been more profound. Rule-based Robotic Process Automation (RPA) is still useful, but simple automation is giving way to generative AI.

This more powerful tool responds to requests to create new data at scale: text, speech, images, video, software code, a new automobile design, or a new miracle drug.

What enables modern AI is machine learning and deep learning. Machine learning allows systems to learn from data rather than from explicit rules, imitating how humans learn. Deep learning is a type of machine learning that relies on artificial neural networks to "learn and make decisions based on unstructured, unlabeled data."

You can apply this processing power for tangible benefits in recruitment, learning and development, performance management, and employee engagement.

The good news is that you don't need to invent a custom solution for your business. HR software vendors embed these capabilities in their tools. For example, Oracle AI services deliver trained models you can customize with your own data, and SAP plans to use Microsoft's Azure OpenAI Service to improve how customers attract, retain, and upskill their people.

AI can speed up decision-making by automating tasks and offering data insights. However, with great power comes great responsibility. Ensuring that your people use AI ethically is crucial to avoid adverse effects on employees and the organization. In this blog post, we'll discuss how to manage ethics for AI in HR.

Establishing Ethical Principles

The first task in ethical AI is to define ethical principles for your business that align with your values and culture. Ethics, also known as moral philosophy, is a set of universal moral principles that help us discern right and wrong. In HR, most practitioners adhere to the Code of Ethical and Professional Standards in Human Resource Management, published by SHRM.

That code is an excellent place to start when defining your company's principles. This is an area where strong leadership from HR is essential to success, beginning with keeping the "human" in AI practices. Humans must always be in charge of AI, exercising the principles of fairness, equity, privacy, and human dignity.

Removing Bias in AI

AI doesn't change the principles of ethics, but it adds another dimension and tremendous responsibilities. The problem with AI is that it inherits its biases from the data we use to train it.

AI already provides profound benefits to society and business in medicine, social connectivity, and how we work. However, we've seen how people and organizations can use it to amplify or suppress ideas, threaten human rights, and perpetuate bias in hiring decisions.

So, job #1 in HR is to ensure your AI models are trained on unbiased data. Bias can creep in through several routes: skewed or unrepresentative training data, a lack of diversity in the development team, or a failure to audit outcomes. Work with your AI developers and vendors to identify and address potential biases in the data, and to confirm that you operate from the same set of principles. By doing so, you can ensure transparency in AI decisions. A simple audit is sketched below.
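For example, one simple audit is to compare selection rates across groups in historical screening data and flag large gaps (the "four-fifths" disparate-impact check). This is a minimal sketch, not a vendor tool: the "gender" and "shortlisted" column names, the sample data, and the 0.8 threshold are assumptions for illustration, and real audits should follow your legal counsel's guidance.

```python
# Minimal bias-audit sketch (requires pandas). Column names and data are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of candidates in each group who received a positive outcome."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest."""
    return rates.min() / rates.max()

# Hypothetical screening history: 1 = shortlisted, 0 = rejected
history = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "shortlisted": [1, 0, 1, 1, 0, 1, 1, 1],
})

rates = selection_rates(history, "gender", "shortlisted")
print(rates)
print("Disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```

A ratio well below 0.8 in a check like this doesn't prove the model is biased, but it is a signal to dig into the training data and the decision process with your vendor before the tool goes any further.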

Understanding How AI Makes Decisions

If you can't explain how your AI makes decisions, the first casualty is trust. Maintaining that trust with everyone, from your newest recruit to your CEO, is essential to making AI work for you. Just imagine the mayhem if combat pilots flying at supersonic speed didn't trust their digital partners in the cockpit!

AI has a solution: Explainable AI

The answer to the problem is explainable AI (XAI), an approach advanced by the U.S. Defense Advanced Research Projects Agency (DARPA) through its XAI program. It's a set of techniques embodied in three primary methods:

1. Prediction accuracy

A widely used technique here is Local Interpretable Model-Agnostic Explanations (LIME). It addresses the "black box" problem in machine learning by approximating any classifier or regressor with a simple, interpretable model around a single prediction, so you can see which inputs drove that outcome. A brief sketch follows.
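For illustration only, here is a minimal sketch of applying LIME to a hypothetical candidate-screening classifier using the open-source lime and scikit-learn packages. The feature names, data, and model are invented for the example; in practice you would explain whichever model your vendor or team actually uses.

```python
# pip install lime scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical candidate features: years_experience, skills_score, assessment_score
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = (X_train[:, 1] + X_train[:, 2] > 1.0).astype(int)  # 1 = shortlist

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["years_experience", "skills_score", "assessment_score"],
    class_names=["reject", "shortlist"],
    mode="classification",
)

# Explain one candidate's prediction: which features pushed it toward "shortlist"?
candidate = X_train[0]
explanation = explainer.explain_instance(candidate, model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(feature, round(weight, 3))
```

The output is a per-feature weight for that one candidate, which is exactly the kind of artifact a recruiter or an auditor can read without understanding the underlying model.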

2. Traceability

This technique maintains a complete account of the provenance of data, processes, and artifacts involved in producing an AI model. There are many traceability systems available on the market.
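As a rough illustration of what a traceability record can capture, here is a minimal sketch of a provenance entry generated alongside a trained model. The model name, file path, and field names are assumptions for the example, not any particular product's format.

```python
# Minimal provenance-record sketch; paths and identifiers are hypothetical.
import hashlib
import json
import os
from datetime import datetime, timezone

def fingerprint(path: str) -> str | None:
    """SHA-256 of the training data file, so the exact dataset version is traceable."""
    if not os.path.exists(path):  # hypothetical path may not exist locally
        return None
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

DATA_PATH = "data/applicants_2024Q3.csv"  # hypothetical dataset location

record = {
    "model_name": "candidate_screening_v3",          # hypothetical model identifier
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "training_data": {"path": DATA_PATH, "sha256": fingerprint(DATA_PATH)},
    "code_version": "git:3f2a9c1",                    # commit that produced the model
    "reviewed_by": ["HR analytics", "IT governance"],
}

print(json.dumps(record, indent=2))
```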

3. Decision understanding

Unlike the previous two methods, which are technical, decision understanding is about the people: it's a design approach that shows how and why the AI model makes its decisions, which is the most critical aspect of explainable AI.

New models are coming online that can "explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future." This new way of thinking came from DARPA, which must create advanced systems people can trust in the midst of battle.


Decision understanding involves educating the organization, beginning with the teams that work with the AI, so they understand how and why it makes decisions. When people can follow how a model reaches its conclusions, you improve decision-making and increase trust in your organization.

Instituting Governance Throughout the Organization

AI should be part of your organization's rigorous data governance framework. You need clean, usable data to succeed with AI. So, start there and build on a firm foundation of data quality, accountability, accessibility, and transparency. Make your AI ethical principles the guiding light for your governance framework, and prepare for a consistent, never-ending campaign to make that framework part of your culture.

You can learn more about governance from the Data Governance Institute, the oldest and best-known source of in-depth Data Governance best practices and guidance. You can join the institute, get unlimited access to resources and on-demand training, and participate in virtual and in-person conferences and events.

Every organization has its own culture and structure, so tailor your framework to your business needs. Involve stakeholders from every part of the organization (including the board of directors) to get their insights, and create a framework that aligns with your goals and values.

Integrating the Framework into Your Operations

Ensure that training in principles and methods reaches every team. Create a culture of compliance and responsibility to make AI governance an integral part of the daily routine. Embed ethics in your culture.

Overcoming Governance Challenges

There will be obstacles to instituting AI governance, such as technical complexities and a lack of understanding. Provide clear communication, education, and support to your teams, seeking collaboration between HR and IT for a smooth implementation.

Conclusion

AI is changing HR in many ways, led by generative AI that can create new content on demand. Machine learning and deep learning drive modern AI, allowing businesses to move beyond their limitations.

It's up to us to live up to our ethical principles, aligned with our values and culture, to capitalize on the value this new technology can deliver.

Join us as we explore how to cope with the growing regulatory environment of AI and how we can keep our AI and its data secure.

Check out these additional resources that may interest you:

Discover how AI is Revolutionizing HR.

Phenom eCloud is a comprehensive technology solutions provider committed to empowering businesses to overcome challenges, enhance their workforce capabilities, and achieve superior outcomes.