
How CFOs Can Close the AI Trust Gap

Artificial Intelligence (AI) is the next game-changer for businesses. But the success of AI-centric digital transformation initiatives depends on improving trust around AI.

Enterprises use AI to improve efficiency, drive innovation, and unlock competitive advantage. The latest GenAI solutions automate a wide range of tasks and open up new possibilities. But amidst the excitement, a significant trust gap stalls widespread AI adoption: the disconnect between what AI promises and what its implementation actually delivers.

Workday’s 2024 global study, “Closing the AI Trust Gap”, reveals that 62% of business leaders and 52% of employees welcome AI. But 23% of employees lack confidence in how their organisation implements AI, and 80% say their company has not shared guidelines on responsible AI use. Meanwhile, 70% of business leaders agree AI should be subject to human review and intervention. Such scepticism can hinder widespread adoption and limit the value AI can add.

The following are the main factors contributing to the AI trust gap:

  1. Lack of transparency: AI algorithms often operate as black boxes. Stakeholders cannot see the rationale behind the decisions. Such opacity breeds mistrust, especially when AI-driven outcomes conflict with human intuition or experience.
  2. Data privacy concerns: AI systems rely on data, and privacy issues surface when they process huge volumes of enterprise information. The potential misuse of personal information heightens scepticism about AI’s ethical implications, as do the risks of security breaches and of proprietary information leaking into the public domain.
  3. Fear of job losses and displacement: AI automates many tasks humans have been doing, from creating content to predictive maintenance, so fears of redundancies are real. Some C-suite executives also view AI as a threat to their domain. The perception that AI will eliminate roles erodes trust in AI initiatives, particularly among the employees most at risk. Even when no immediate job losses occur, AI disrupts the status quo, and employees distrust and resist such change.
  4. Bias and unfairness: AI algorithms are only as good as the training data fed into them. The output reflects the biases and prejudices inherent in that data, which often causes discriminatory outcomes and undermines trust in AI.
  5. Legal issues: AI’s hunger for data raises legal issues such as intellectual property infringement. The large language models behind GenAI, for instance, are trained on many copyrighted works without a licence. The defence of “fair use” may let GenAI applications off on technical grounds, but users view these applications with distrust.
  6. Leaders’ mistrust: Leaders’ mistrust of AI partly stems from the fear that workers may rely on it too heavily. Many employees treat AI recommendations as the gold standard without applying human judgement. AI tends to hallucinate, make errors, and offer only surface-level insights, so blind adoption degrades the quality of work.


Addressing the AI trust gap is a complex task, but it’s not insurmountable. CFOs can take the lead in this journey by implementing a multifaceted approach that includes transparency, awareness, and empowerment. 


Develop robust policies and frameworks 

Bridging the AI trust gap depends on developing sound internal policies. Initiatives to that end include:

  • Adopting a risk-based framework to assess use cases. Developing risk evaluation tools with safeguards helps employees identify sensitive use cases, and these tools guide employees on how to handle the risks (see the sketch after this list).
  • Monitoring AI systems for biases and discriminatory outcomes and refining the algorithms. Make sure the algorithms are fair, non-discriminatory, and free from bias.
  • Developing robust data governance frameworks prioritising privacy, security, and regulatory compliance.
  • Benchmarking and implementing industry best practices and standards to safeguard sensitive information. Workday’s Responsible AI (RAI) approach offers a good benchmark. A governance team ensures adherence to the governance framework in design, development, administration, and all other aspects of AI implementation. 
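
To make the first point concrete, here is a hypothetical sketch of a risk-based use-case assessment in Python. This is not Workday’s actual framework; the attributes, scores, and tier thresholds are illustrative assumptions only.

```python
# Hypothetical risk-tier assessment for proposed AI use cases.
# Attribute names and score thresholds are assumptions, not a real framework.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    uses_personal_data: bool
    affects_people_decisions: bool   # e.g. hiring, pay, or credit decisions
    output_reviewed_by_human: bool

def risk_tier(case: AIUseCase) -> str:
    """Classify a use case so employees know which safeguards apply."""
    score = 0
    score += 2 if case.uses_personal_data else 0
    score += 3 if case.affects_people_decisions else 0
    score -= 1 if case.output_reviewed_by_human else 0
    if score >= 4:
        return "high: requires governance-team sign-off"
    if score >= 2:
        return "medium: requires privacy review"
    return "low: standard monitoring"

print(risk_tier(AIUseCase("draft vendor emails", False, False, True)))   # low
print(risk_tier(AIUseCase("screen job applicants", True, True, False)))  # high
```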


Many enterprises are impatient with AI, and for good reason. In today’s fast-paced, competitive world, first movers enjoy a huge advantage. But speed comes at the cost of trust. Building trust requires starting slow and refining the frameworks before full-blown AI implementation.

Ensure transparency

The best way to increase trust in AI is to improve transparency on how it works. Explainable AI (XAI) surfaces the rationale behind the underlying algorithms: it describes how a model works, including its biases and potential impact, so users can understand how the system reached its conclusion.
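
As a simple illustration of the XAI idea, the sketch below uses scikit-learn’s model-agnostic permutation importance to show which inputs drive a model’s predictions. The invoice-approval scenario, feature names, and data are all hypothetical.

```python
# A minimal XAI sketch: rank which inputs drive a model's predictions.
# The "invoice approval" features and synthetic data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["invoice_amount", "vendor_tenure_yrs", "past_disputes"]

# Synthetic stand-in for historical finance data.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# accuracy drops -- a model-agnostic explanation of what the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```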

Apart from investing in XAI:

  • Cultivate an organisational culture that values transparency, open communication, and inclusivity. 
  • Ensure full disclosure. Create fact sheets that provide insight into how the technology was built (a minimal example follows this list). Document monitoring, testing, maintenance, and all other processes. Include data sources, model architectures, and validation methodologies. Such transparency builds credibility and fosters trust.
  • Spread awareness. Anxiety about an uncertain, unfamiliar technology like AI is normal among rank-and-file employees. Overcoming that anxiety requires helping people become comfortable with the change. AI implementation is a change initiative and requires all the usual change management tactics.
  • Encourage the sharing of AI best practices to promote familiarity. For instance, encourage employees to share how they use AI applications to improve work outcomes. Highlight how AI has freed up employees’ time for higher-value tasks.
  • Convince employees of the sound research behind AI implementation decisions.
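
To illustrate the fact-sheet idea, here is a minimal example expressed as structured data in Python. The fields mirror the disclosure items named above; the model name and all values are purely illustrative assumptions.

```python
# A hypothetical AI fact sheet as structured data. Every value below is
# illustrative; the model and processes are assumptions, not a real system.
fact_sheet = {
    "model_name": "expense-anomaly-detector",  # assumed name
    "intended_use": "flag unusual expense claims for human review",
    "data_sources": ["internal expense ledger 2019-2023 (anonymised)"],
    "model_architecture": "gradient-boosted trees",
    "validation_methodology": "5-fold cross-validation; quarterly back-testing",
    "known_limitations": ["lower precision on newly onboarded cost centres"],
    "monitoring": "monthly drift report reviewed by the governance team",
    "human_review": "all flags routed to a finance analyst before action",
}

# Publish the sheet in a readable form for stakeholders.
for field, value in fact_sheet.items():
    print(f"{field}: {value}")
```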


The more team members know about AI, the more they will trust and use it. 

Empower employees

Securing rank-and-file employees’ trust requires taking them into confidence.

Emphasise the role of AI in augmenting human capabilities rather than replacing jobs outright. For instance, position AI as a tool for enhancing productivity, creativity, and informed decision-making. Show employees how AI efficiencies can help them manage their time better.

  • Involve employees in AI initiatives from inception to implementation. Seek their feedback and address their concerns. Empower them with ownership of AI in their domains.
  • Launch reskilling and upskilling initiatives for employees to adapt to the evolving landscape. Training in data analysis, problem-solving, and AI literacy equips workers for AI-enabled roles.
  • Conduct training and workshop sessions to educate employees and other stakeholders. Such sessions can dispel misconceptions regarding AI.
  • Encourage and incentivise employees to experiment with AI tools.
  • Empower teams to ensure fair and bias-free AI implementations. Workday offers a good model: a diverse team of product experts, data scientists, and others, under a chief responsible AI officer, upholds RAI governance.


Workday is a good example of a company engendering trust with customers. Workday’s responsible AI approach puts people first, focusing on amplifying human potential. Its risk-based approach to responsible AI factors in the sensitivity level of new AI applications. A key component is human review of any output the AI technology generates. A governance team addresses the associated risks and unintended consequences throughout the lifecycle, and fact sheets make the AI development process explicit, embedding guidelines to ensure fairness.
