I see it all the time.

A brilliant and passionate team contacts me for advice or investment. They’ve conceptualized an amazing, fully automated AI system that will (eventually) solve a customer’s problem.

And it’s really hard to do and it will be expensive in both time and money, but the team is confident it can improve the relevant metric by 90%. Just getting started will likely take months, and then, if everything works as expected, the integration has to happen; none of that will be quick.

When the customer investigates its options, it turns out that a bunch of bright folks in India, provided with comparatively simple and inexpensive tools, can begin more or less immediately and get the customer to a 60% improvement within months. No massive new investment in tech, no huge upfront startup costs, no big change to processes, no long wait for an uncertain outcome.

How can you avoid this unfortunately widespread pattern amongst AI innovators?

As I introduced in the crucial questions overview post, when I talk to a company pitching AI to commercial customers, I expect them to have good answers that convince me their solution is still the best. These questions are designed to help companies that are prone to mischaracterizing the challenges of transferring technology out of the lab. You may be surprised to learn that there’s some critical non-technical due diligence required before you talk to prospective investors; they won’t answer these questions for you.

If you’ve been through the process of selling mission-critical technology solutions to enterprise clients, these questions will be second nature to you. Unfortunately, the kind of diligence required is still new to many AI companies. Excited about their solution and convinced that it addresses the problems prospective customers are struggling with, they overlook the basics.

One important category of such questions involves identifying alternatives and competitors to your solution. Many companies gloss over the following questions in favor of technical factors. And answering them is hard; I’ve met PhDs from the best schools in the world who struggle with them.

Return on investment includes hidden costs and is relative to alternatives

You’ll need to convincingly demonstrate that your solution will provide the return on investment (ROI) a customer is looking for, within a reasonable time frame and at reasonable risk. That ROI can be tricky to pin down, because there are “hidden” costs your customer will have to incur before they can use your solution. And ROI can’t be considered in isolation; it must be assessed against the alternative solutions available to the customer.

Specifically:

  • Assess hidden costs:
    • Do you know what each prospective customer will have to do to prepare to use your system?
    • How are they going to have to modify existing tech and business processes before you can even begin integrating your AI into their business?
  • Assess alternatives along the metrics that matter to your customers, such as speed, accuracy, and cost (a rough comparison sketch follows this list).
    • How does your solution compare to other AI systems for this problem? If the problem that your AI system is solving is too hard for everyone else to solve, why is it not too hard for you? What are you missing? What do you have that’s special?
    • Is there a legacy system that can be reconfigured or upgraded to solve the problem at a lower cost?
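
To make that comparison concrete, here is a minimal sketch of the arithmetic a customer is likely to run. The numbers and the net_roi helper are entirely hypothetical; the point is only that a technically superior system can still lose to a simpler alternative once hidden costs and time to value are counted.

```python
# A minimal sketch, with hypothetical numbers, of the comparison a customer runs:
# net return after hidden costs, relative to a simpler alternative, over the
# period they are willing to wait.

def net_roi(annual_benefit, solution_cost, hidden_costs, months_to_value, horizon_months=24):
    """Return (net value, benefit/cost multiple) over the evaluation horizon."""
    productive_months = max(horizon_months - months_to_value, 0)
    total_benefit = annual_benefit * productive_months / 12
    total_cost = solution_cost + hidden_costs
    return total_benefit - total_cost, total_benefit / total_cost

# Hypothetical: a $1M/year problem. An AI system that removes 90% of it but
# needs 12 months plus heavy integration, vs. an outsourced team that removes
# 60% and starts almost immediately.
ai = net_roi(annual_benefit=900_000, solution_cost=750_000,
             hidden_costs=250_000, months_to_value=12)
alt = net_roi(annual_benefit=600_000, solution_cost=300_000,
              hidden_costs=50_000, months_to_value=2)

print(f"AI system:   net ${ai[0]:,.0f}, {ai[1]:.1f}x return over 24 months")
print(f"Alternative: net ${alt[0]:,.0f}, {alt[1]:.1f}x return over 24 months")
```

With these made-up figures, the 90% solution comes out behind the 60% one over a two-year horizon, which is exactly the pattern in the story that opened this post.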

In my own work and in talking to new AI companies, I’ve seen repeatedly that we just aren’t creative enough in thinking about possible alternatives like these. When we don’t address questions about alternatives and competitors as part of our internal due diligence but still get hired, the result can be failed projects, millions of wasted dollars, and frustrated customers. That damages all AI companies’ reputations and hurts our ability to raise money and sell our AI even when we do have the best solution.

Painful lessons

I learned these lessons myself the hard way. Early on, before even the early commercial search engines, we developed a great technology for crawling the web and extracting information. We had the best team ever assembled to attack the problem. We knew our technology worked, but we thought we had to scale it to crawl the entire internet. We designed a system that could operate at that massive scale.

I, and the rest of the team, thought our competition was limited to other AI approaches to massive data acquisition and cataloguing (scripts, algorithms, or other related code) that might also be able to handle the scale of the problem as we saw it.

We were partly right. There was indeed a scale challenge, so looking at an automated solution did make sense. But it wasn’t the scale of the entire internet, and it didn’t need to be to create a huge amount of value.

As we eventually learned, human workers using relatively unsophisticated and well-understood tools could find and catalogue a huge amount of information. And those folks were already working and producing acceptable results.

To replicate their work in an AI system, we would have been starting pretty much from scratch just to get to a proof of concept, with robust human competition already in place. The lesson we learned:

An AI solution needs to be superior to obvious alternatives.

No one was going to pay us what we needed to fund our massive development plan. The investment might have paid off one day, but there was no near-term ROI possible. And what was already working was good enough.

Matching the solution to the problem

How can you do better? In addition to the hidden-cost and competitor questions above, you should address the following related issues:

  • What is the actual customer problem, not just the problem you are best positioned to solve?
  • What metrics does the customer use to assess your solution’s impact on their problem?
  • What is the customer’s acceptable level of quality as defined by those metrics?
  • How long can the customer wait to address their problem?
  • What return does the customer require for this investment?

As you work to answer these questions, you have a high bar to clear: an AI application should address a real customer need, actually work, and be much better than the alternative approaches. If not, then maybe it’s back to the drawing board.