Six Ways to Get Responsible AI Governance Right

The huge potential of Artificial Intelligence (AI) comes with the risk of unintended consequences.

AI is only as good as its underlying algorithms. And AI algorithms remain susceptible to bias, errors, and misinformation. AI systems can also create privacy risks and legal violations. The lack of transparency about how the underlying algorithms make decisions leads to a trust gap.

Such risks lead many enterprises to avoid AI, especially gen AI, at work.

Enter Responsible AI (RAI).

RAI offers a framework for the ethical use of AI. It brings transparency, explainability, and accountability into AI systems. CXOs can use RAI to understand and mitigate the unintended consequences of AI use.

RAI governance lays down policies and processes for developing and using AI systems. It promotes the integrity and trust of AI systems.  

But RAI governance is not easy. Multiple factors can derail even the most well-intentioned RAI governance systems. CXOs struggle to keep pace with the rapid evolution of the technology. RAI rests on ethical principles, yet there is no universal consensus on what those principles should be. Weak enforcement can undo the best of plans. And the complexity of AI systems means RAI governance may fail to hold up in real-world settings, further eroding trust in AI.

The following are the key elements of RAI governance. CXOs who apply them can overcome these challenges.

Use of High-Quality Data

The strength and integrity of any AI system depend on the underlying data. AI systems built with high-quality data deliver superior performance. But an AI system trained on flawed, biased, or incomplete data will inherit those defects and perpetuate them in its outputs.

RAI emphasises attention to data collection, cleaning, and preparation. CXOs who train the algorithms on representative real-world data can reduce discrimination and hallucinations.

The key considerations for CXOs include:

  • Making sure data is current and relevant.
  • Including data that represents different demographics, viewpoints, and experiences.
  • Avoiding underrepresentation or overrepresentation of any group in the data sets.
  • Ensuring data remains free of errors and inconsistencies, through cleansing.
  • Investing in data auditing to examine the data for stereotypes or imbalances, using techniques such as statistical analysis or qualitative review (a minimal audit sketch follows this list).
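
As a minimal sketch of what such a data audit could look like, the snippet below uses pandas to flag groups whose share of a training set falls outside an acceptable band. The column name and thresholds are illustrative assumptions, not a prescription:

    import pandas as pd

    # Illustrative representation audit: flag groups whose share of the data
    # falls outside an acceptable band. The "gender" column and the 10%-60%
    # band are assumptions made for this sketch.
    def audit_representation(df: pd.DataFrame, column: str,
                             low: float = 0.10, high: float = 0.60) -> pd.DataFrame:
        report = df[column].value_counts(normalize=True).to_frame(name="share")
        report["flag"] = report["share"].apply(
            lambda s: "under-represented" if s < low
            else "over-represented" if s > high
            else "ok")
        return report

    data = pd.DataFrame({"gender": ["F", "M", "M", "M", "M", "M", "M", "M", "F", "M"]})
    print(audit_representation(data, "gender"))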

Risk Mitigation

A key objective of RAI is to mitigate the risks associated with AI development. CXOs can achieve this by:

  • Conducting impact assessments to evaluate the ethical and legal implications of AI systems.
  • Using data balancing, fairness-aware algorithms, and adversarial debiasing to reduce discriminatory outcomes (a minimal balancing sketch follows this list).
  • Putting in place default capabilities that mitigate vulnerabilities related to sensitive data.
  • Undergoing independent third-party audits for ISO, SOC, and other industry-standard certifications.
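
The data-balancing item above often starts with something as simple as reweighting training examples so that no group dominates the loss. A minimal sketch of inverse-frequency weighting, with made-up group labels:

    from collections import Counter

    # Inverse-frequency sample weights: examples from under-represented groups
    # count more during training. The normalisation keeps the mean weight at 1.0.
    def inverse_frequency_weights(groups: list[str]) -> list[float]:
        counts = Counter(groups)
        n_groups = len(counts)
        return [len(groups) / (n_groups * counts[g]) for g in groups]

    print(inverse_frequency_weights(["A", "A", "A", "B"]))
    # [0.667, 0.667, 0.667, 2.0] -- the rare group "B" is upweighted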

A good example of risk mitigation in action is Workday’s RAI risk evaluation tool, applied at the ideation stage of any new AI project. Project managers get a series of questions to establish the context and characteristics of the project. The tool then recommends the risk mitigations to apply, depending on the intended use case.  
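
Workday has not published the internals of this tool, but the pattern of mapping questionnaire answers to recommended mitigations is straightforward to sketch. Every question key and mitigation below is a hypothetical illustration, not Workday's actual rule set:

    # Hypothetical questionnaire-to-mitigation mapping, loosely inspired by the
    # idea described above; none of these keys or recommendations are Workday's.
    MITIGATIONS = {
        "uses_personal_data": "Run a privacy impact assessment; minimise data collected.",
        "affects_employment_decisions": "Require human review of every AI recommendation.",
        "customer_facing": "Publish model documentation and a user-facing disclosure.",
    }

    def recommend_mitigations(answers: dict[str, bool]) -> list[str]:
        return [text for key, text in MITIGATIONS.items() if answers.get(key)]

    answers = {"uses_personal_data": True,
               "affects_employment_decisions": True,
               "customer_facing": False}
    for rec in recommend_mitigations(answers):
        print("-", rec)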

Factoring in Regulatory Frameworks

RAI governance ensures adherence to relevant laws and regulations. This includes copyright, privacy, data protection, anti-discrimination, and consumer protection laws. It also protects personal data from unauthorised access or misuse.  

For instance:

  • Training the algorithms to identify copyrighted content so that they provide attribution or avoid reproducing it.
  • Training the algorithms on regulations such as GDPR, CCPA, or the upcoming EU AI Act so that their outputs comply with these laws (a minimal guardrail sketch follows this list).
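
In practice, compliance principles like these become concrete guardrails in the serving path. As one hedged example, a pre-output filter that redacts obvious personal data (a small slice of what GDPR or CCPA compliance actually involves) might look like this; the patterns are illustrative and nowhere near exhaustive:

    import re

    # Illustrative guardrail: redact e-mail addresses and US-style SSNs from
    # model output before it reaches the user. Real GDPR/CCPA compliance goes
    # far beyond pattern matching; this only shows where such a check could sit.
    PII_PATTERNS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    ]

    def redact_pii(text: str) -> str:
        for pattern, placeholder in PII_PATTERNS:
            text = pattern.sub(placeholder, text)
        return text

    print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
    # Contact [EMAIL], SSN [SSN].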

RAI also helps enterprises anticipate regulations on the horizon, avoiding rework when they arrive.

Human-Centric Design and Control

RAI involves humans in the AI decision-making process. The RAI approach is for humans and AI to work as partners, each contributing their strengths. Humans retain ultimate control over AI systems and can intervene when necessary. At the same time, RAI emphasises accountability: RAI governance enforces a clear demarcation of responsibility for the impact caused by AI.

RAI governance ensures human review of AI-generated outputs. Users get complete visibility and control to turn individual AI features on or off at their discretion. They could, for instance, turn AI features off by location or job function (a minimal toggle sketch follows).
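
Turning AI features on or off by location or job function is, at bottom, a feature-flag check. A minimal sketch with made-up feature names and rules:

    # Hypothetical per-feature AI toggles keyed by location and job function.
    AI_FEATURE_RULES = {
        "resume_screening": {"disabled_locations": {"EU"}, "disabled_roles": set()},
        "draft_generation": {"disabled_locations": set(), "disabled_roles": {"legal"}},
    }

    def ai_feature_enabled(feature: str, location: str, role: str) -> bool:
        rule = AI_FEATURE_RULES.get(feature)
        if rule is None:
            return False  # unknown features stay off by default
        return (location not in rule["disabled_locations"]
                and role not in rule["disabled_roles"])

    print(ai_feature_enabled("resume_screening", "EU", "recruiter"))  # False
    print(ai_feature_enabled("draft_generation", "US", "engineer"))   # True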

A good example of user control and review in action is Workday's human-in-the-loop approach. The human operator can provide input or feedback to the AI system, or review and override its decisions (a minimal routing sketch follows).
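
One common way to implement human-in-the-loop review is to route low-confidence AI decisions to a person while keeping every automated decision open to override. A minimal sketch under that assumption; the threshold and data shapes are illustrative:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        label: str
        confidence: float

    REVIEW_THRESHOLD = 0.85  # illustrative cut-off, tuned per use case in practice

    def route(decision: Decision) -> str:
        # Low-confidence decisions go to a human queue; the rest auto-apply
        # but remain subject to reviewer override.
        if decision.confidence < REVIEW_THRESHOLD:
            return "human_review"
        return "auto_approved"

    print(route(Decision("approve", 0.92)))  # auto_approved
    print(route(Decision("reject", 0.61)))   # human_review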

Workday's internal processes also rely on RAI champions drawn from various departments. These champions assist teams with the technical, legal, and compliance details, while a cross-disciplinary RAI advisory board manages escalations.

Transparency

Another core RAI principle is transparency. RAI governance applies transparency by:

  • Maintaining comprehensive documentation that makes explicit the inner workings of AI solutions, including details of the training and testing the algorithms receive.
  • Publishing fact sheets that make explicit the development process of the AI models. These fact sheets may describe the safeguards that ensure fairness and mitigate bias, and offer details on the explainability and interpretability of the algorithms (a minimal machine-readable sketch follows this list).
  • Making clear to users where and how the system will use their data to train AI models, with the option to grant or revoke consent.
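
A fact sheet becomes most useful when it is machine-readable as well as human-readable. The sketch below shows one hypothetical shape such a document could take; the field names and values are illustrative and do not reflect Workday's published schema:

    import json

    # Hypothetical machine-readable fact sheet for one AI feature. Every field
    # name and value here is illustrative.
    fact_sheet = {
        "feature": "expense-anomaly-detection",
        "intended_use": "Flag unusual expense reports for human review.",
        "training_data": "Anonymised expense records; refreshed quarterly.",
        "fairness_safeguards": ["representation audit", "error rates compared by region"],
        "explainability": "Per-flag feature attributions shown to reviewers.",
        "human_oversight": "Every flag is reviewed by a finance analyst.",
    }

    print(json.dumps(fact_sheet, indent=2))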

Workday's AI Fact Sheets provide users with details on the build, test results, and more, for each feature. Users get the option to export data and perform additional tests locally for some sensitive use cases.

Explainable AI (XAI)

XAI techniques make explicit how the algorithm makes decisions. Users can understand the basis on which the algorithms make decisions.  

XAI works through techniques such as:

  • Feature Importance, which identifies the most influential variables in the AI's decision. For instance, a loan approval system may weigh the applicant's income and credit score most heavily (a minimal feature-importance sketch follows this list).
  • Local Interpretable Model-agnostic Explanations (LIME), which perturbs the input data to analyse how the prediction changes.
  • Partial Dependence Plots (PDPs), which show how predictions change as a single input feature varies.
  • SHapley Additive exPlanations (SHAP), which reveal the contribution of each feature to a prediction.
  • Counterfactual explanations, which make explicit how the input would have to change to produce a different outcome.
  • Extracting human-readable rules from the AI model, for example by fitting a surrogate decision tree to expose the decision logic. Many XAI techniques use visualisations to make the explanations clear.
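
As one concrete instance of the techniques above, the sketch below computes permutation feature importance with scikit-learn: shuffle one feature at a time and measure how much the model's score drops. The dataset and model are toy stand-ins for a real decision system:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy stand-in for a real decision system such as loan approval.
    X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: the features whose shuffling hurts accuracy most
    # are the most influential variables in the model's decisions.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {score:.3f}")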

It is early days for responsible AI, though. In a 2024 global study, “Closing the AI Trust Gap,” only 22% of employees said their company had shared guidelines on the responsible use of AI. RAI guidelines help enterprises deploy AI systems at scale with confidence and consistency. State-of-the-art platforms such as Workday help enterprises develop safe and compliant AI solutions.
