Is AI in the enterprise ready for primetime? Not yet.

AI has the potential to raise productivity over the next decade, but we’re still some way from it transforming the enterprise in the short term.

Although bullish on the prospects for AI to automate many work activities, McKinsey acknowledges it’ll take several decades for this to happen at any scale. CIOs and other executive leaders should keep this in mind amid the hype and wild claims made by many vendors and consultants.

There are several reasons why meaningful AI deployments within the enterprise will take longer than many imagine.

Complexity of human work

It’s been estimated that the average person makes 2,000 decisions every hour. While many of these decisions are routine and require little thought, others are far more complex and nuanced. At work, we efficiently process multiple inputs in rapid succession, weighing safety, social norms, the needs of our colleagues and employer, as well as accuracy and strategic goals.

At the same time, we can communicate these decisions orally, in writing, and through gestures using multiple systems and workflows.

While computing technologies and enhanced access to data may have helped businesses make better routine, low-value decisions, anything more complex still requires human input and oversight. An organisation’s reputation lives or dies by the decisions made within it, and once lost it is difficult, often impossible, to regain.

While chatbots will take over many functions currently performed by human-powered call centers, these will operate within tightly defined parameters including their data inputs and the answers they can give.
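To illustrate what "tightly defined parameters" might look like in practice, here is a minimal Python sketch (all names and responses are hypothetical, invented for this example): the bot only answers intents it recognises, draws replies from a vetted set, and escalates everything else to a human agent.

```python
# Hypothetical example: a chatbot constrained to vetted intents and answers.
# Anything outside the approved set is escalated rather than improvised.

VETTED_RESPONSES = {
    "opening_hours": "We are open 9am-5pm, Monday to Friday.",
    "refund_policy": "Refunds are available within 30 days of purchase.",
}

INTENT_KEYWORDS = {
    "opening_hours": {"open", "hours", "closing"},
    "refund_policy": {"refund", "return", "exchange"},
}

def answer(query: str) -> str:
    """Return a vetted response, or hand off to a human agent."""
    words = set(query.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:  # any keyword match triggers the intent
            return VETTED_RESPONSES[intent]
    return "Let me connect you with a human agent."
```

The key design point is the final fallback line: the bot never generates free-form text, so its data inputs and possible answers are fully enumerable in advance.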

AI hallucinations

The problem of AI hallucinations, where a large language model (LLM) presents authentic-looking but made-up results, shouldn’t be underestimated for enterprise AI deployments. It’s been estimated the hallucination rate for ChatGPT is between 15 per cent and 20 per cent, an unacceptable figure for business-critical decision making.

Hallucinations can be reduced within enterprise deployments by fine-tuning LLMs through training them on private data that’s been verified. Further improvements can be made by restricting queries to proven prompts, as well as incorporating open source tools such as LangKit and Guardrails, or proprietary products like Galileo.

These tools and frameworks are still in the early stages of development, and users will need to experiment with multiple approaches and solutions. It’ll be several years at least before established and trusted methods for reducing hallucinations to acceptable levels are widely available.
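The two mitigations mentioned above, restricting queries to proven prompts and checking outputs against verified data, can be sketched in a few lines of plain Python. This is a conceptual illustration only, not the API of LangKit, Guardrails, or any other tool; the template names, SKU format, and validation rule are all invented for the example.

```python
# Hypothetical sketch of two hallucination mitigations:
# 1) only pre-approved prompt templates may reach the model;
# 2) answers citing product codes not in the verified catalogue are rejected.
import re

APPROVED_TEMPLATES = {
    "summarise_policy": "Summarise the following policy text: {text}",
    "lookup_product": "Using only the data provided, describe product {sku}.",
}

VERIFIED_SKUS = {"A-100", "B-200"}  # stand-in for a verified knowledge base

def build_prompt(template_id: str, **fields) -> str:
    """Refuse any prompt that isn't on the approved list."""
    if template_id not in APPROVED_TEMPLATES:
        raise ValueError(f"Prompt template {template_id!r} is not approved")
    return APPROVED_TEMPLATES[template_id].format(**fields)

def validate_answer(answer: str, allowed_skus=VERIFIED_SKUS) -> bool:
    """Accept an answer only if every SKU it cites is verified."""
    cited = set(re.findall(r"\b[A-Z]-\d{3}\b", answer))
    return cited <= allowed_skus
```

Real guardrail frameworks apply the same pattern, constrain what goes in and verify what comes out, but with far richer validators, which is why the article expects them to take years to mature.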

Changing habits and workflows

While the consumer adoption of new technologies such as smartphones and social media occurs rapidly, it’s usually much slower within the enterprise. Workflows, user training and technological path dependency act as brakes on the deployment of new hardware and software solutions.

Cloud computing, common data formats and APIs have lowered these barriers to an extent, but they remain significant. A recent Gartner survey revealed that 45 per cent of customer service reps (CSRs) have avoided adopting new technologies, choosing instead to rely on legacy systems and tools.

Cloud computing pioneer and Box CEO Aaron Levie recently expressed his doubts that AI will have a significant impact on digital transformation initiatives in the short term: “I think we’re so early on for any kind of operational task stream with any level of efficacy to be able to replace even 10 minutes of what a real person does,” he says.

What next for enterprise AI?

The initial hype and excitement over generative AI is starting to wane, and more realistic expectations are emerging. Traffic to the ChatGPT website fell by almost 10 per cent from May to June, with users spending 9 per cent less time on the site. It’s becoming apparent that making practical use of these new tools within the enterprise will require considerable customisation and investment.

The complexity and subtlety of much human work and the need for organisations to maintain consumer trust are behind this realisation. However, it’d be a foolish business that didn’t start on this journey because the potential rewards are significant.
