Can CIOs Protect their Organisation from Risks when Implementing AI?

AI push or pause: What’s the best path forward for CIOs?

Artificial Intelligence (AI) is becoming all-pervasive. A recent Gartner poll reveals that 70% of enterprises are exploring generative AI now, and 19% already have pilot projects underway. The attention around ChatGPT has led to 45% of enterprises increasing their AI investments. 

Several generative AI foundation models already serve different parts of the business. Jasper.ai, for instance, offers a foundation model for content creation, Midjourney.ai does the same for image generation, and Azure OpenAI handles text processing.

Good reasons to push AI

For most enterprises, AI can connect people in new ways, improve efficiency, and enhance customer experiences. It can also dramatically lower the barriers to innovation. Such benefits remain tempting even after weighing the collateral ill effects. In fact, advocates of an AI push dismiss the projected risks and ill effects of AI adoption as exaggerated and overblown.

The rise of generative AI will impact human jobs. But the impact will more likely fall on how work gets done than on wholesale job cuts. Workers across ranks and designations will have to adapt to the new technology or risk obsolescence.

Smart workers already use generative AI tools such as ChatGPT to boost productivity. Marketers and content creators, for instance, use ChatGPT to auto-generate a basic structure and then fill in the details, or they fact-check and refine the auto-generated content. Such approaches cut content-creation time manifold, improving both productivity and quality of output.
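
As a concrete illustration, here is a minimal sketch of that draft-then-refine workflow, assuming the OpenAI Python SDK; the model name and prompts are placeholders, and the output is only a starting point for human review.

```python
# A minimal sketch of the draft-then-refine workflow: the model produces
# a rough outline, and a human fact-checks and fills in the details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a content-marketing assistant."},
        {"role": "user", "content": "Outline a 1,000-word post on AI risks for CIOs."},
    ],
)

draft = response.choices[0].message.content
print(draft)  # the human then refines, fact-checks, and expands this draft
```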

Also, AI’s potential goes beyond automating processes or generating content.

Consider the healthcare industry. Clinical trials leverage AI's advanced computing power for genetic research, and breakthroughs promising cures for diseases such as diabetes have already emerged. Unlocking such insights was simply not feasible with even the best human expertise alone.

The full potential of generative AI is yet to be realised. For instance, AI will simplify analytics once users can securely merge private data sets with open-domain public data. As such use cases unlock, the benefits of the technology will grow.

Good reasons to pause AI

Artificial Intelligence is a double-edged sword. It unlocks great potential but is also risky and can do great harm. Every exciting new prospect also comes with several unintended consequences, and AI is no different. Many government and business leaders advocate pausing AI implementation until these systems mature.

The reliability and integrity of any AI system depend on its training data. If the training data correlate certain profiles with an outcome, the model amplifies that association: the algorithm keeps favouring or discriminating against those profiles, and the system becomes discriminatory or unjust. AI can even cost lives by recommending and amplifying wrong healthcare or safety information.
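
A toy sketch makes the amplification mechanism concrete: when a protected attribute correlates with historical outcomes, a model trained on that history learns to reuse the attribute. The data and features below are entirely synthetic.

```python
# Synthetic illustration of bias amplification: historical hiring labels
# favour one group, so the trained model learns to favour it too.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
group = rng.integers(0, 2, n)      # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)        # the factor that should matter
# Historical labels lean towards group 1 regardless of skill:
hired = ((skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # a large 'group' coefficient: the bias is learned and reused
```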

AI also fails badly at recognising or handling exceptions to the standard rule, yet the resilience of any enterprise or system depends on its ability to handle exactly those exceptions.

The advocates of an AI pause hold that AI is not yet ready for mass commercial use, citing ChatGPT as an example. This popular and powerful generative AI tool churns out content in minutes, saving hours of human work, but the generated content suffers from serious drawbacks. AI-generated content

  • comes with the risk of copyright violation
  • gives outright wrong or made-up answers
  • lacks human insight
  • churns out wordy content that lacks both depth and a human touch
  • has issues concerning data ownership


These shortcomings may resolve as the technology matures. But by current estimates, that maturity is still years away.

Another big danger is the possibility of weaponising AI. Cybercriminals have already launched sophisticated, AI-powered targeted cyber attacks, and conventional cyber security is largely helpless to thwart them. Security teams must invest in AI technologies of their own to fight fire with fire and stay afloat.
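
What "fighting fire with fire" can look like in practice is sketched below: an unsupervised anomaly detector trained on normal login telemetry flags an unusual session. The features, figures, and thresholds are assumptions for illustration, not a production design.

```python
# Hedged sketch of AI-assisted defence: flag anomalous sessions with an
# unsupervised detector trained only on normal telemetry (synthetic here).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: login hour, MB transferred, failed attempts (synthetic normal data)
normal = rng.normal([10, 50, 0.2], [2, 10, 0.5], size=(500, 3))
suspect = np.array([[3, 500, 8]])  # off-hours, bulk transfer, many failures

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspect))   # -1 marks the session as anomalous
```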


The middle ground

A total hold on a technology as powerful as AI is neither desirable nor practical. But CIOs need to understand the risks, rewards, and impacts of AI technologies, and they should use the technology themselves to learn its limitations and dangers first-hand. Chasing the hype and deploying AI left, right, and centre is a recipe for disaster.

One possible way to control the inherent risks of AI is due diligence by those who adopt the technology. But placing controls in the tech space is easier said than done. Just as it is impossible to stop an idea whose time has come, it is impossible to put the AI genie back in its bottle.

Regulations

A viable way is regulating the AI space.

Those who push AI hold that too much regulation could slow technological growth, stifle innovation, and quell competition. But most tech experts and leaders, including Sam Altman, the CEO of OpenAI, the company behind ChatGPT, advocate regulating AI. Deep learning models can cause great harm unless restrained by rules and regulations. Allowed to run loose, AI can spread false information, displace jobs, and make systems discriminatory. In this respect, AI's position in society resembles that of guns and knives: these tools have good uses, but they can cause great harm when their use goes unrestricted and unregulated.

Existing laws and guidelines on Artificial Intelligence are vague and mostly confined to data collection. 

Another big issue with Artificial Intelligence is its black-box way of functioning. Few stakeholders, including the workforce and customers, know how machine learning algorithms reach their decisions. When businesses do not understand how an algorithm decides, they risk overlooking biased output or wrong information.
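
One modest way to open the black box is to inspect which inputs drive a model's decisions. The sketch below uses scikit-learn's built-in feature importances on synthetic data; the feature names are hypothetical, and real audits need far more rigour.

```python
# Peeking inside a black box: rank the inputs a model actually relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))      # hypothetical: [income, tenure, postcode]
y = (X[:, 2] > 0).astype(int)      # the label secretly depends on postcode

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, score in zip(["income", "tenure", "postcode"], model.feature_importances_):
    print(f"{name}: {score:.2f}")  # postcode dominates: a red flag worth probing
```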

Internal guardrails

Many CIOs have developed rules and standards while the industry waits for regulations. Such guardrails mostly govern who may access the technology and how they may use it. These policies resemble the regulations governing intellectual property and Personally Identifiable Information (PII).

CIOs can also place technical restraints. For instance, intentional air-gapping limits the fallout if any model goes out of control.
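
One such restraint, sketched below, is a prompt filter that redacts obvious PII before a request leaves the enterprise. The patterns are illustrative only and nowhere near a complete PII taxonomy.

```python
# Minimal guardrail sketch: redact obvious PII from a prompt before it is
# sent to an external AI service. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace each PII match with a placeholder tag."""
    for tag, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{tag}]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +44 7700 900123."))
# -> Contact Jane at [EMAIL] or [PHONE].
```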

The lack of universal standards or best practices makes the effectiveness of such guardrails a grey area, though.

Wide collaboration

Adopting Artificial Intelligence should be a collaborative effort. Involve lawyers, universities, big tech, think tanks, and government research centres: each can contribute expertise towards best practices and guidelines.

At the implementation level, CIOs could form oversight committees, including external experts. For instance, lawyers with technical expertise could guide the enterprise through Artificial Intelligence implementation. 

CIOs could develop a risk-based model for Artificial Intelligence-based tools, clarifying the data at risk of exposure and other potential threats.
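
A minimal sketch of what such a risk-based model could look like follows; the tools, scores, and scoring rule are hypothetical placeholders that a real assessment would replace with legal and security input.

```python
# Hypothetical risk register for AI tools: score = sensitivity x exposure.
from dataclasses import dataclass

@dataclass
class AIToolRisk:
    name: str
    data_sensitivity: int     # 1 (public data) .. 5 (regulated PII)
    exposure_likelihood: int  # 1 (air-gapped) .. 5 (public SaaS)

    @property
    def score(self) -> int:
        return self.data_sensitivity * self.exposure_likelihood

tools = [
    AIToolRisk("public-chatbot", data_sensitivity=4, exposure_likelihood=5),
    AIToolRisk("on-prem-summariser", data_sensitivity=4, exposure_likelihood=1),
]
for tool in sorted(tools, key=lambda t: t.score, reverse=True):
    print(f"{tool.name}: risk score {tool.score}")  # triage highest first
```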

Push or pause: the verdict

There is no one-size-fits-all answer to whether to push or pause Artificial Intelligence adoption. The decision to push forward or pause depends on enterprise-specific factors and goals. 

The moral and ethical considerations notwithstanding, financial analysis can settle whether to push or pause Artificial Intelligence investments at the enterprise level. Enterprises would do well not to get carried away by the hype: implementing and integrating Artificial Intelligence systems into the enterprise workflow requires sizable investments. Evaluate the potential return on investment of such initiatives, considering direct cost savings, impact on revenue generation, and the benefits of faster processes, along with indirect benefits such as the positive effects of improved decision-making. The expected ROI will often justify the investment, but not always, and not for every company.
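
A back-of-the-envelope version of that calculation is sketched below. Every figure is a placeholder an enterprise would replace with its own estimates.

```python
# Toy ROI check following the factors above; all figures are placeholders.
implementation_cost = 500_000    # integration, licences, training
annual_cost_savings = 180_000    # direct savings from automation
annual_revenue_uplift = 120_000  # faster processes, better decisions

annual_benefit = annual_cost_savings + annual_revenue_uplift
payback_years = implementation_cost / annual_benefit
print(f"Payback period: {payback_years:.1f} years")  # ~1.7 years in this toy case
```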

When Artificial Intelligence implementation is viable, successful enterprises strike the right balance between strict regulations that stifle innovation and guidelines that ensure the responsible use of Artificial Intelligence. 

The onus is on the CIO to evaluate the organisation's readiness and maturity in terms of data infrastructure, talent, and Artificial Intelligence governance. The CIO must also consider the potential risks of implementation, such as data privacy, security, bias, and ethical concerns, and assess their impact on the enterprise. If the necessary foundations are in place, the enterprise should push forward; otherwise, pausing to build those capabilities may be the prudent option.
