3 Ideas for Improving AI Adoption

Today, every team lead or division head is looking for ways to integrate AI into their operations. What many who've tried have realised is that adoption of AI solutions by their teams can be stubbornly hard.

In this article I discuss three common challenges in the adoption of AI solutions at large enterprises, and strategies for overcoming them. These insights are based on 30+ enterprise implementations of AI-powered master data management software.

Let’s dive in.

Idea #1: Put a dedicated AI ops team in place and engage your workforce

Shocker: AI solutions (still) require human effort and new skills. Plenty of cautionary tales have been written about AI replacing human jobs, and as a result many people seem to believe that once an AI tool has been implemented, the work is done. What happens instead is that human jobs change, and working with AI solutions requires a new set of skills. Many organisations are unprepared for this. As a result, solutions are half-implemented, under-maintained, and worst of all, not trusted. My guesstimate is that lack of trust is a top-three adoption killer.

Business leaders should think about how to embed the management of AI solutions into their operations. There is no single best way to do this; it depends on your organisation, its overall tech savviness, and the weight you give AI in your overall business strategy.

There are, however, some common pitfalls. 

One is a lack of education on this new technology whose reputation has been shaped by headlines in the media. That picture is distorted, and you want your entire organisation to have a more grounded understanding of what the technology is and what it isn’t. 

Another common pitfall is to make the management of AI solutions solely the responsibility of the IT department. What tends to happen is that AI solutions are managed just like any other piece of software, when in reality they have a different life cycle made up of a variety of complex components. At a minimum, your IT leaders should recognise this by updating existing policies and processes.

Wise business leaders do not stop at the minimum, however, and assign a dedicated team with the right set of skills. The skills you are looking for include data engineering (building the pipelines, fine-tuning the models, getting the data consumption-ready), data science (building the dashboards, doing the analytics), and data governance (data stewardship, data curation, data ownership and permissions).

Today, the majority of AI solutions used in the enterprise rely on machine learning (LLMs like ChatGPT are themselves built on neural networks), and a dedicated field exists for managing them: MLOps. If ML-driven solutions are central to your business strategy and operating model, this may be the team you need to build.

It is key that your dedicated team works closely with the data consumers, sometimes called data citizens, to ensure that the output meets user needs, and takes their feedback to iterate on the results when it doesn't. You would be surprised how often there is a total disconnect between the people managing the AI solutions and the people using them. A recipe for failure.

Idea #2: Establish a baseline and acceptance criteria upfront

An interesting insight is that people tend to hold AI decision-making models to a higher bar than human decision-makers. What I mean is that a mistake made by artificial intelligence receives a higher “penalty” than the same mistake made using human intelligence. In other words, people are less forgiving towards AI and set a higher bar.

This makes sense for a variety of reasons. To name a few:

  • People feel less sympathy towards a “machine” and are comfortable being tougher in their judgment
  • People can be held accountable for mistakes in ways AI models cannot, which specifically matters in highly regulated industries such as banking and healthcare
  • People may fear replacement by AI and express hostility towards solutions suggested by upper management

This often manifests as the deployment of AI solutions (also referred to as going into production, or productionisation) being delayed despite the solution demonstrating superior performance over existing processes. Where 90% accuracy is acceptable for a human-led process, the AI model has to be right 99% of the time.

Voltaire once said that “perfect is the enemy of good”, and striving for perfection right from the start can strangle AI adoption, even though the ability to learn and continuously improve over time is central to the power of AI.

In addressing this challenge, the goal is not to lower the bar for AI solutions. The goal is to set a bar that reflects (a) the situation today, and (b) an ambition level that corresponds with the cost of being wrong.

Companies that understand this and do it well take the following approach. They:

  • Establish the baseline (benchmark)
  • Agree on the cost of being wrong
  • Define acceptance criteria upfront, including the performance bar and other metrics

From here, they monitor, identify areas for improvement, and iterate.
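
To make that concrete, here is a minimal sketch of what acceptance criteria defined upfront can look like. All numbers, names, and cost weights below are invented for illustration; yours should come from the measured baseline and the cost of being wrong you agreed on.

```python
# Illustrative sketch: acceptance criteria agreed upfront, before deployment.
# All numbers below are examples, not recommendations.
BASELINE_ACCURACY = 0.90      # measured accuracy of the existing human-led process
COST_FALSE_POSITIVE = 1.0     # agreed relative cost of acting on a wrong "yes"
COST_FALSE_NEGATIVE = 5.0     # agreed relative cost of missing a true "yes"
MAX_COST_PER_DECISION = 0.25  # cost budget derived from the cost of being wrong

def meets_acceptance_criteria(accuracy: float, fp_rate: float, fn_rate: float) -> bool:
    """Deploy when the model beats the measured baseline and stays within
    the agreed cost budget -- not when it is perfect."""
    expected_cost = fp_rate * COST_FALSE_POSITIVE + fn_rate * COST_FALSE_NEGATIVE
    return accuracy >= BASELINE_ACCURACY and expected_cost <= MAX_COST_PER_DECISION

# A model at 93% accuracy with a 4% false-positive and 3% false-negative rate
# clears the 90% baseline and the cost budget (0.04 * 1.0 + 0.03 * 5.0 = 0.19).
print(meets_acceptance_criteria(0.93, 0.04, 0.03))  # True
```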

Idea #3: Recognise the importance of model explainability, but establish performance as the critical driver for deployment

Model explainability is a hard problem.

As many solutions shift from a rules-based model to a probabilistic model, explaining why a certain output or answer was given becomes a harder problem to solve.

Increasingly, LLMs provide sources for their answers, but with neural networks spanning billions of parameters all feeding into the final response, the answer to “why” is not easy to provide. Take a concrete example: even in a scenario where 100 different attributes in a customer profile from your CRM database (first name, last name, email, phone, etc.) were used to train an ML model to match records against sales leads, the answer to why two records were or weren’t matched isn’t always easy to explain.
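
As a hedged illustration of that scenario (the data and feature setup here are synthetic, invented purely for this example), consider a minimal probabilistic matcher. Even with the simplest possible model, a logistic regression, the match probability is a weighted sum over all 100 features, and the closest thing to a “why” is a ranking of per-feature contributions:

```python
# Minimal sketch of probabilistic record matching; data and features are
# synthetic, invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Each row holds similarity scores (0..1) between two records, one column
# per compared attribute (first name, last name, email, phone, ...).
n_pairs, n_features = 1000, 100
X = rng.random((n_pairs, n_features))
# Synthetic labels: call a pair a "match" when its average similarity is high.
y = (X.mean(axis=1) > 0.52).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

pair = X[:1]
print(f"match probability: {model.predict_proba(pair)[0, 1]:.2f}")

# Even in this simplest of models, the decision is a weighted sum over all
# 100 features; ranking contributions is only a partial "why".
contributions = model.coef_[0] * pair[0]
top5 = np.argsort(np.abs(contributions))[::-1][:5]
for i in top5:
    print(f"feature {i}: contribution {contributions[i]:+.3f}")
```

With a non-linear model such as a gradient-boosted tree ensemble or a neural network, even this simple decomposition is no longer directly available; the “why” gets harder precisely as the models get more powerful.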

It can be difficult to switch from an easy-to-understand rules-based solution to a better-performing probabilistic solution, mainly because the latter is harder to wrap your head around. I have seen this hold up go-live decisions, even though in most cases the benefits of holding out do not outweigh the costs (there are exceptions, for example where regulators require details on decisions for audit trails). Business leaders should have this on their radar and define priorities accordingly.

In the past five years we have seen model explainability come a long way but there’s still a lot to improve. Given the current speed of development, it is reasonable to expect this problem will be even more thoroughly addressed in the near future.

In the meantime, energy is better spent on monitoring performance, analysing mistakes, and feeding the results back to the model to drive continuous improvement. Data is continuously changing and models need to adapt. Most AI solutions used in the enterprise still rely on some form of supervised learning, where human subject matter experts (SMEs) provide the labelled examples the AI learns from.
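
As an illustration of where that energy goes, here is a minimal sketch of such a feedback loop. The threshold, window size, and function names are hypothetical; the acceptance bar is the one agreed upfront under Idea #2.

```python
from collections import deque

# Hypothetical monitoring loop: track rolling accuracy against SME-reviewed
# ground truth and flag when it drops below the bar agreed under Idea #2.
ACCEPTANCE_BAR = 0.90  # illustrative performance bar
WINDOW = 500           # number of recent reviewed predictions to track

recent = deque(maxlen=WINDOW)

def record_review(prediction, sme_label):
    """Log one SME-reviewed prediction and check rolling performance."""
    recent.append(prediction == sme_label)
    accuracy = sum(recent) / len(recent)
    if len(recent) == WINDOW and accuracy < ACCEPTANCE_BAR:
        # In practice: alert the AI ops team and queue the reviewed examples
        # as fresh training data for the next model iteration.
        print(f"rolling accuracy {accuracy:.2%} below bar; review and retrain")
    return accuracy
```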

While recognising the importance of model explainability, be confident that significant advancements are on the horizon. Let performance be the critical driver for deployment and adoption, and embrace continuous learning and improvement for both your human workforce and AI models.
