
The Business Case for AI: What Most Executives Get Wrong

The best AI investments do not start with the technology. They start with the most expensive problem in the building.

By Onil Gunawardana

Last year, I sat in a board meeting where the CEO turned to me and said, "We need an AI strategy. What is our AI strategy?" I had been running product for enterprise software companies for over a decade at that point, and I had shipped eight products that collectively drove over two billion dollars in revenue. I knew the answer he wanted. I also knew it was the wrong question.

He did not have a problem he wanted AI to solve. He had a technology he wanted to adopt because a competitor mentioned it on an earnings call. That distinction — between chasing a technology and solving a problem — is where most AI investments go wrong before they even start.

The Executive AI Trap

Here is a pattern I have seen play out at least a dozen times. A senior leader reads a consulting report about AI transforming their industry. They call a meeting. They ask their technology team to "explore AI opportunities." The team, eager to work on something interesting, spins up a proof of concept. Three months later, there is a demo that impresses the boardroom. Six months after that, the project is quietly shelved because nobody can explain what business outcome it actually improved.

I call this the executive AI trap. It starts with the technology and works backward toward a problem. The logic sounds reasonable: AI is powerful, our competitors are investing, therefore we should invest too. But "we should use AI" is not a strategy. It is a sentiment.

The executives I have seen succeed with AI do the opposite. They start with the most expensive, most painful, most data-rich problem in their business — and then ask whether machine learning is the right tool to address it. Sometimes it is. Sometimes a better spreadsheet would do the job. That willingness to let the problem dictate the solution is what separates a real AI strategy from an expensive science fair project.

A Concrete Example: InsightBoard

Let me make this tangible. Consider a fictional SaaS analytics company I will call InsightBoard. They sell dashboards to mid-market finance teams. Their data science team has six people. Their CEO wants to "add AI to the product."

The first instinct is to build a predictive analytics feature — a model that forecasts revenue trends based on historical data. It sounds impressive. The data science team is excited. Marketing can put "AI-powered" on the website. So they build it.

Eight months and roughly $1.2 million later, the feature launches. Adoption is dismal. It turns out that InsightBoard's customers — mid-market CFOs — do not trust a black-box forecast from a vendor dashboard. They already have their own forecasting models in Excel. They do not need another one. They need something else entirely.

Here is what InsightBoard should have done first. They should have looked at their support ticket data. If they had, they would have found that 40 percent of their Tier 1 support volume was customers asking some variation of the same question: "Why did this metric change?" Customers would see a spike or a dip on their dashboard and immediately file a ticket asking for an explanation.

The real AI opportunity was not prediction. It was automated anomaly explanation — a model that detects when a metric moves significantly and surfaces the most likely contributing factors. This is a problem that is high volume, repetitive, data-rich, and where human judgment is the bottleneck. It also directly reduces support costs, which means the business case writes itself.
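Since InsightBoard is fictional, there is no real system behind this, but a minimal sketch of what "anomaly explanation" could mean in practice might look like the following: flag a metric when it moves far outside its recent baseline, then rank which customer segments account for most of the change. The metric values, segment names, and z-score threshold below are all illustrative assumptions, not a prescription.

```python
from statistics import mean, stdev

def detect_anomaly(history, latest, z_threshold=3.0):
    """Flag the latest value if it deviates strongly from the recent baseline."""
    baseline_mean = mean(history)
    baseline_std = stdev(history)
    if baseline_std == 0:
        return False, 0.0
    z = (latest - baseline_mean) / baseline_std
    return abs(z) >= z_threshold, z

def rank_contributing_segments(previous_by_segment, current_by_segment, top_n=3):
    """Rank segments (e.g. region or plan tier) by their share of the total change."""
    total_change = sum(current_by_segment.values()) - sum(previous_by_segment.values())
    if total_change == 0:
        return []
    contributions = []
    for segment, current_value in current_by_segment.items():
        delta = current_value - previous_by_segment.get(segment, 0)
        contributions.append((segment, delta, delta / total_change))
    contributions.sort(key=lambda c: abs(c[1]), reverse=True)
    return contributions[:top_n]

# Illustrative scenario: weekly active users dipped sharply this week.
history = [10_250, 10_400, 10_310, 10_500, 10_450, 10_380, 10_420]
latest = 8_900
is_anomaly, z = detect_anomaly(history, latest)

if is_anomaly:
    previous = {"EMEA": 3_100, "Americas": 5_200, "APAC": 2_120}
    current = {"EMEA": 1_750, "Americas": 5_100, "APAC": 2_050}
    for segment, delta, share in rank_contributing_segments(previous, current):
        print(f"{segment}: {delta:+,} ({share:.0%} of the total change)")
```

A real version would need seasonality handling and a richer taxonomy of causes, but even this crude shape turns "why did this metric change?" from a support ticket into an automated answer.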

How to Find the Right Problem

After building products across multiple industries, I have developed a simple filter for identifying problems that are genuinely worth solving with machine learning. I look for four characteristics, and in my experience, the best AI use cases have all four.

High volume. The problem occurs frequently enough that even a small improvement per instance creates meaningful aggregate value. If the task happens ten times a day, automation savings are modest. If it happens ten thousand times a day, even a five-percent improvement is significant.

Repetitive and pattern-driven. The task follows recognizable patterns that a model can learn from historical data. If every instance is truly novel and requires creative judgment, machine learning is probably not the right tool. But if 80 percent of cases follow a handful of common patterns, a well-trained model can handle those and route the exceptions to humans.

Data-rich. There is sufficient historical data to train and validate a model. This is where many AI ambitions die quietly. The problem might be perfect for machine learning in theory, but if you have six months of messy, inconsistent data, you are not ready. What I have found is that you typically need at least two years of clean, labeled data to build something reliable for enterprise use.

Human judgment is the bottleneck. A person is currently making the decision, and they are either too slow, too expensive, or too inconsistent to scale. This is critical. If the bottleneck is data collection, infrastructure, or process design, AI will not help. AI is most valuable when the data is already there and a human is the constraint on turning it into action.

In the InsightBoard example, anomaly explanation hits all four: thousands of metric changes per day across their customer base, a recognizable taxonomy of root causes, years of historical metric data, and support engineers manually diagnosing every ticket.

Building the Business Case

Once you have identified the right problem, the business case needs to be concrete and measurable. In my experience, the AI business cases that actually get funded — and stay funded past the pilot — quantify three things.

Time saved. How many hours per week do humans currently spend on this task? What percentage of those hours can the model handle? Be conservative. If your support team spends 200 hours per week diagnosing metric changes and the model can handle 60 percent of cases accurately, that is 120 hours reclaimed. At fully loaded cost, that is a real number.

Error rate reduction. How often do humans get it wrong today, and what does each error cost? If support engineers misdiagnose the root cause 15 percent of the time and each misdiagnosis leads to an average of two additional ticket exchanges, that is quantifiable waste. A model that reduces the error rate to 8 percent has a measurable impact on customer satisfaction and support efficiency.

Revenue impact. This is harder to measure directly, but it matters. For InsightBoard, faster anomaly resolution means customers spend less time confused and more time acting on their data. That drives retention. If you can tie your AI feature to a two-point improvement in net retention rate, the revenue case becomes compelling very quickly.
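To see how the three dimensions combine into a single annual figure, here is a rough back-of-envelope sketch using the illustrative percentages above. Only those percentages come from the example; the hourly cost, ticket volume, cost per exchange, and ARR base are assumptions invented purely for illustration.

```python
# Back-of-envelope business case using the illustrative InsightBoard numbers.
# Inputs marked "assumed" are placeholders, not real figures.

WEEKS_PER_YEAR = 52

# Time saved: 200 hours/week, 60 percent handled by the model.
hours_per_week = 200
automation_rate = 0.60
loaded_cost_per_hour = 75          # assumed fully loaded cost per support hour
time_savings = hours_per_week * automation_rate * loaded_cost_per_hour * WEEKS_PER_YEAR

# Error rate reduction: 15% -> 8%, two extra exchanges per misdiagnosis.
tickets_per_week = 1_000           # assumed diagnostic ticket volume
error_rate_before, error_rate_after = 0.15, 0.08
extra_exchanges_per_error = 2
cost_per_exchange = 15             # assumed handling cost per ticket exchange
error_savings = (
    tickets_per_week
    * (error_rate_before - error_rate_after)
    * extra_exchanges_per_error
    * cost_per_exchange
    * WEEKS_PER_YEAR
)

# Revenue impact: a two-point net retention improvement on an assumed ARR base.
arr_base = 20_000_000              # assumed annual recurring revenue
nrr_lift = 0.02
retention_revenue = arr_base * nrr_lift

print(f"Time saved:        ${time_savings:,.0f}/year")
print(f"Errors avoided:    ${error_savings:,.0f}/year")
print(f"Retention revenue: ${retention_revenue:,.0f}/year")
```

The point is not the specific outputs; it is that every input is a number someone in finance can challenge, which is exactly what makes the case fundable.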

What I have found is that the most effective business cases present all three dimensions but lead with the one that matters most to the specific executive audience. CFOs respond to cost savings. Chief Revenue Officers respond to retention. Chief Product Officers respond to differentiation. Same feature, different framing.

The Pilot Trap

Even with the right problem and a solid business case, I have watched AI initiatives stall in the pilot phase. The pattern is predictable enough that I now warn teams about it before they start.

The pilot trap works like this. The team builds a model, runs it on a subset of data, and achieves promising accuracy metrics. Leadership sees 87 percent accuracy and green-lights a broader rollout. But the pilot was designed to demonstrate technical feasibility, not operational readiness. Nobody planned for how the model integrates with existing workflows. Nobody defined what happens when the model is wrong. Nobody built the feedback loop that lets the model improve over time.

In my experience, the pilots that successfully scale share three characteristics that failing pilots lack.

First, they define success in business terms, not model terms. "87 percent accuracy" is a model metric. "Reduce average ticket resolution time from 4.2 hours to 1.8 hours" is a business metric. The pilot should be measured against the business case, not against a confusion matrix.

Second, they plan for the human-in-the-loop from day one. What does the workflow look like when the model is confident? When it is uncertain? When it is wrong? The operational design is at least as important as the model design.

Third, they build the data feedback pipeline during the pilot, not after. Every time a human overrides or corrects the model, that signal should flow back into the training data. Without this, the model you deploy is the best it will ever be — and in a changing business environment, that means it starts degrading on day one.
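The feedback pipeline does not need to be elaborate during a pilot. Even an append-only log of model suggestions and the human corrections that overrode them, along the lines of the hypothetical sketch below, gives the team retraining signal from week one. The record fields and file format are assumptions, not a prescription.

```python
import csv
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    """One human correction of a model output, captured for future retraining."""
    ticket_id: str
    model_explanation: str
    model_confidence: float
    human_explanation: str
    accepted: bool
    timestamp: str

def log_feedback(record: FeedbackRecord, path: str = "feedback_log.csv") -> None:
    """Append the correction to a running log that retraining jobs can read."""
    row = asdict(record)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:          # write the header only once, for a new file
            writer.writeheader()
        writer.writerow(row)

# Example: a support engineer overrides the model's suggested root cause.
log_feedback(FeedbackRecord(
    ticket_id="T-1042",
    model_explanation="EMEA signups dropped after the pricing page change",
    model_confidence=0.62,
    human_explanation="Tracking script broke on the EMEA signup form",
    accepted=False,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```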

What I Tell Executives Now

When an executive asks me how to think about AI investment, I give them a version of the same advice I wish someone had given that CEO in the board meeting.

Do not start with AI. Start with the three most expensive problems in your business. Not the most interesting problems — the most expensive ones. The ones where you are burning headcount, losing customers, or leaving revenue on the table at scale.

Then ask three simple questions for each one: is there enough historical data? Is the task repetitive? Is a human currently the bottleneck? If the answer to all three is yes, you probably have a genuine AI use case. If not, you have a process improvement opportunity that does not require a data science team.

Build the business case in dollars, not in technical metrics. "We will save $1.4 million per year in support costs" gets funded. "We will achieve 90 percent F1 score" does not.

Design the pilot to test the business case, not just the model. If you cannot explain how you will measure business impact during the pilot, you are not ready to start.

And finally, plan for integration from the beginning. The model is maybe 20 percent of the work. The other 80 percent is workflow integration, edge case handling, feedback loops, and change management. In my experience, the teams that treat the model as the hard part are the teams whose pilots never scale.

AI is a genuinely powerful tool for the right problems. But the right problems are specific, measurable, and grounded in business reality — not in a desire to have an AI strategy because everyone else does.

What do you think? I would love to hear your perspective — feel free to reach out.

Onil Gunawardana

Founder, BusinessOfAI.com

Product management executive with 15+ years building enterprise software. Created 8 major products generating $2B+ in incremental revenue.