Artificial Intelligence has been changing HR for at least 50 years, but today's advanced tools are doing it at breakneck speed.
Now, in generative AI's breakout year, AI has become many times more capable. What was the leading edge a year or two ago is becoming commoditized, and GPU-accelerated computing has given us a trillion-fold increase in computing power.
Meanwhile, there's a worldwide rush to regulate AI. Fear of AI has grown loud enough that we hear constant calls for centralized government control.
Let's face it—AI will be regulated. The question is: "By whom?"
The best defense against over-regulation is direct participation in rule-making and rigorous self-governance. By doing so, we can influence the balance of innovation and protection. When we're invited to take part in regulatory advisory groups, by all means, let's get active.
Overcoming Mistrust of AI
We have a long way to go to earn the trust of the people we serve. A study by Accenture found that only 35% of consumers trust how organizations are implementing AI, and most want regulators to hold companies accountable for how they use it.
Harvard Business Review cautions us about the inherent risks of AI:
"AI increases the potential scale of bias: Any flaw could affect millions of people, exposing companies to class-action lawsuits"
— "AI Regulation is Coming." François Candelon, et. al. HBR Sept.-Oct. 2020.
The way forward is the rigorous practice of AI governance: managing the risks and opportunities AI brings so that its use is ethical, transparent, and reliable.
AI governance is a framework that helps organizations manage the responsible development and deployment of AI. It ensures that AI applications align with your organization's values and principles, meet ethical and regulatory requirements, and reduce the risks of using AI. In this blog, we'll explore its benefits, risks, and the steps you can take to implement a sound governance program.
Defining AI Governance
AI governance is part of your corporate governance structure and should mesh with your data governance and IT governance.
There isn't a global consensus on the definition of AI governance, but most definitions are similar. TechTarget defines it this way:
"…the framework for ensuring AI and machine learning technologies are researched and developed with the goal of helping humanity navigate the adoption and use of these systems in ethical and responsible ways."
— "artificial intelligence (AI) governance." Nick Barney, Sarah Lewis. TechTarget.
How you define AI governance depends on your company's needs. Your principles and framework are likewise yours to create and adapt to your business, designed to mitigate the risks of bias, errors, loss of privacy, and unintended consequences.
Your definition should include these elements:
- the principles of responsible AI;
- the roles and responsibilities of stakeholders;
- ethical and legal principles;
- security policies; and
- compliance requirements.
AI principles
The OECD's Recommendation of the Council on Artificial Intelligence sets out five values-based principles for the responsible stewardship of trustworthy AI:
- inclusive growth, sustainable development, and well-being;
- human-centered values and fairness;
- transparency and explainability;
- robustness, security, and safety; and
- accountability.
You may want to keep your list of principles simple enough for everyone to remember so you can easily embed them in your culture. Most of the solutions we reviewed favored some variation of transparency, accountability, and fairness.
Here's an example from KPMG:
1. Integrity — algorithm integrity and data validity, including lineage and appropriateness of how data is used.
2. Explainability — transparency through understanding the algorithmic decision-making process in simple terms.
3. Fairness — ensuring AI systems are ethical, free from bias, free from prejudice and that protected attributes are not being used.
4. Resilience — technical robustness and compliance of your AI and its agility across platforms and resistance against bad actors.
— "The shape of AI governance to come."
KPMG International.
The most important thing to remember is to ensure your AI principles reflect the values and culture of your organization.
Understand the Risks of AI
AI's rapid growth exacerbates the risks of privacy violations, security threats, bias, decision errors, and more. Your program must mitigate these risks with policies that address each challenge you identify. The work is anticipating potential risks, preparing to mitigate or eliminate them, and building in enough resilience to absorb the risks you can't foresee.
Prepare for AI Governance Implementation
Creating an effective governance program takes three steps:
- First, engage with your stakeholders to understand the potential risks and benefits.
- Second, develop a clear understanding of the principles of responsible AI: transparency, accountability, and fairness.
- Third, establish a governance framework tailored specifically to your business needs.
Implementing AI Governance
Implementing AI governance requires a framework that aligns with the principles of responsible AI. A general framework for governance consists of five key components (a minimal sketch of how you might record them in practice follows the list):
- principles, policies, and standards;
- governance structure;
- roles and responsibilities;
- accountability; and
- oversight.
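To make these components concrete, here's a minimal sketch, assuming Python and entirely hypothetical names, of how a governance office might record them for each AI use case. Adapt the fields to your own framework and tooling.

```python
from dataclasses import dataclass

# Hypothetical record for illustration only; adapt the fields to your
# own framework and tooling.
@dataclass
class GovernanceEntry:
    """One AI use case tracked under the governance framework."""
    use_case: str                   # e.g., "resume screening"
    principles: list[str]           # principles, policies, and standards that apply
    governing_body: str             # governance structure overseeing this use case
    roles: dict[str, str]           # roles and responsibilities (role -> owner)
    accountable_owner: str          # single point of accountability
    oversight_cadence: str          # how often the use case is reviewed

# Example entry a steering committee might maintain:
entry = GovernanceEntry(
    use_case="resume screening",
    principles=["transparency", "accountability", "fairness"],
    governing_body="AI Governance Office",
    roles={"project owner": "VP, Talent Acquisition",
           "legal review": "Compliance Office"},
    accountable_owner="CHRO",
    oversight_cadence="quarterly",
)
print(entry.use_case, "->", entry.accountable_owner)
```

Even a lightweight record like this gives the steering committee a single place to see who owns each use case and how often it is reviewed.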
Adapt your framework to your needs and invest in AI governance training and education. Identify your AI use cases and work with your stakeholders to align your policies.
Assemble your team
Start by putting together the executive or steering committee you entrust to oversee the project, set the strategic direction, and make high-level decisions. Committee membership varies from one organization to another according to business needs.
Steering/Executive Committee
The steering committee provides oversight and direction, assists the project sponsor, and establishes performance and risk thresholds and measures. The committee will establish an AI Governance Office to carry out its directives and decisions and manage day-to-day governance operations.
Executive Sponsor
To lead the change, you will want to engage an executive sponsor: a C-suite executive with the leadership and motivational skills to bring the people in your organization to embrace AI governance. Your sponsor can be your project director but doesn't have to be. The sponsor is the lead evangelist in making governance a part of your company ethos.
You will want the sponsor to be responsible for securing the needed resources and funding and for ensuring your efforts align with the firm's values and goals.
Project Owner(s)
For each project, the project owner is the person accountable for the KPIs, OKRs, or deliverables for the business process being served.
Legal and Compliance Experts
You will want senior legal and risk management office representatives to help you enforce your AI principles and standards, meet statutory requirements, and manage risk mitigation and reporting.
Information Technology
A broadly knowledgeable IT leader may be sufficient, but you will want to include, as needed, a System Architect, Data Engineer, DevOps Engineer, Data Scientist, and Business Analyst.
Human Resources
If you're the CHRO, you will often be the project owner, but remember that any strategy's most critical component is the talent to execute it. If you're not the project owner, keep the CLO and Chief Recruiter close at hand so they can close skill gaps quickly.
Internal Marketing
You may need marketing assets to help you with change management. Your internal collaboration channels will be a vital part of the communication effort.
Define your Ethics, Standards, and Measures
You will want to work jointly with your stakeholders to build the framework for controlling and evaluating your AI use cases. Strive to make each of them feel that their contributions are valuable to your program, no matter how small.
Reid Blackman, author of Ethical Machines, proposes these steps to building your definitions:
- Identify and document your current infrastructure as the foundation for your data and AI ethics program.
- Create a framework for data and AI ethical risk adapted to your industry.
- Change your thinking about ethics: Take cues from others' successes to guide your thinking.
- Provide product managers with the necessary advice and tools to ensure ethical AI development.
- As part of your change management effort, create awareness about the importance of data and AI ethics.
- Incentivize employees, both formally and informally, to play their role in identifying ethical risks.
- Monitor the impact of your AI systems to measure your success (see the monitoring sketch after this list).
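On that last point, here's a minimal monitoring sketch, assuming Python with NumPy and simulated scores, that computes the population stability index (PSI), one common way to detect whether a model's score distribution has drifted since deployment:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index: a common drift metric comparing the
    score distribution at deployment time ('expected') with a later
    window ('actual'). Rough rule of thumb: PSI > 0.2 signals drift
    worth investigating."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf        # cover the full range
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual, cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)         # avoid divide-by-zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Simulated example: model scores at launch vs. six months later.
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 5000)
current = rng.normal(0.55, 0.12, 5000)         # distribution has shifted
print(f"PSI: {population_stability_index(baseline, current):.3f}")
```

A rising PSI doesn't prove harm by itself, but it is exactly the kind of signal that should trigger a review by your Governance Office.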
Define methods for transparency, fairness, and accountability
Transparency builds trust. Ensure that AI-driven decisions are explainable and understandable. The way forward is explainable AI (XAI), a family of techniques popularized by the U.S. Defense Advanced Research Projects Agency (DARPA). Put it into practice through three primary principles (a brief sketch follows the list):
- No "black box" algorithms that are difficult to interpret. Instead, opt for models that provide clear insights into the factors influencing decisions.
- Strive for fairness by regularly auditing your AI systems and addressing any biases or inequalities that arise.
- Establish clear lines of responsibility and ensure that individuals within your organization are held accountable for the actions and outcomes of the AI systems.
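As one illustration of the first two principles, here's a minimal sketch, assuming Python with scikit-learn and synthetic data standing in for a real screening dataset. It trains an interpretable model whose coefficients can be read directly, then runs a simple selection-rate audit across a hypothetical protected group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical screening data: three features and a binary "advance to
# interview" label. In practice these would come from your ATS/HRIS.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))            # years_experience, skills_score, tenure
y = (X @ np.array([1.0, 0.8, 0.1]) + rng.normal(scale=0.5, size=1000)) > 0
group = rng.integers(0, 2, size=1000)     # protected attribute (NOT a model input)

model = LogisticRegression().fit(X, y)

# Transparency: an interpretable model lets you read off which factors
# drive decisions directly from its coefficients.
for name, coef in zip(["years_experience", "skills_score", "tenure"],
                      model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")

# Fairness audit: compare selection rates across the protected groups
# (demographic parity difference). A large gap warrants investigation.
preds = model.predict(X)
rate_a = preds[group == 0].mean()
rate_b = preds[group == 1].mean()
print(f"selection rates: {rate_a:.2%} vs {rate_b:.2%} "
      f"(gap {abs(rate_a - rate_b):.2%})")
```

In a real audit, you would use your organization's actual decision data and the fairness measures your legal and compliance experts endorse.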
Begin Change Management
Prepare to show others in your organization what AI governance is and why it's necessary. The governance mechanism consists of the policies, procedures, and controls that make ethical and responsible use of AI attainable; they are the guardrails that protect against pitfalls and ensure alignment with your organization's values and goals.
Implementing AI is the start of a "forever" project of motivating, educating, and retraining people as your AI technology and workforce change.
Engage with your Stakeholders
Stakeholders include anyone who will direct, design, deploy, or use your initiative, as well as anyone affected by it or with an interest in it.
They will include executive committee members, IT, HR, process managers and users, policymakers, community members, researchers, and others with a vested interest in the outcomes.
You will want to ensure that each group with an interest in your AI practices has a representative on an advisory committee that meets regularly with the executive committee and communicates with the Governance Office.
Understand their needs and expectations. The best way forward is to co-create the framework with stakeholders to build trust and ownership and empower those with an active role. They will become a part of the governance structure. You'll also want to involve them in monitoring and evaluating your solution and work with them to improve it.
Build your framework
Document your organization's specific goals to help guide your governance framework. Every organization is unique, so it's crucial to tailor the framework to your specific needs. Identify the AI use cases within your HR processes and align the governance policies accordingly. Involve stakeholders from various departments to gain insights and create a framework that meets your organization's goals and values.
Conclusion
AI governance will protect you from potential risks and help ensure a robust, ethical, trustworthy AI implementation. As you build your governance framework, remember the importance of trust in the decisions you make. Use legal professionals and ethics advisors, and work through the questions with your executive team. Create detailed plans for responding to violations and communicate with stakeholders throughout the process. A structured, collaborative approach will help ensure your AI governance will be effective and reliable.
Phenom eCloud is a comprehensive technology solutions provider committed to empowering businesses to overcome challenges, enhance their workforce capabilities, and achieve superior outcomes.