The Three Questions That Determine Whether Your AI Project Lives or Dies

Author: Louis-Paul Baril
9/12/2025

I've watched $40 billion disappear into AI projects that never delivered a dollar of return.

The pattern is always the same. Someone gets excited about AI capabilities. They rush to implementation. Six months later, the project sits abandoned in a folder labeled "lessons learned."

More than 80% of AI projects fail, twice the failure rate of conventional IT projects. Last year alone, the share of companies abandoning most of their AI initiatives jumped from 17% to 42%.

The problem isn't the technology. It's what happens before anyone writes a single line of code.

Question One: What Problem Are You Actually Solving?

Most AI projects fail because organizations misunderstand—or miscommunicate—what problem needs solving.

I see this constantly. A team wants to "implement AI" or "leverage machine learning." When I ask what specific operational problem they're addressing, the room goes quiet.

You can't automate a process you haven't defined. You can't optimize a workflow you don't understand.

The organizations that succeed do something different. They redesign their end-to-end workflows before selecting any modeling techniques. They map the actual requirement, not the perceived desire.

This diagnostic phase feels slow. It feels like you're not making progress. But organizations that skip it fail at predictable rates.

Here's what the diagnostic phase reveals:

  • Where your current process breaks down
  • What manual steps actually create value versus what steps exist because "that's how we've always done it"
  • Which problems AI can solve and which problems need different solutions
  • Whether you're trying to digitize a broken process

McKinsey found that 70% of digital transformations fail primarily because organizations digitize existing processes without redesigning them first.

You're not looking for places to insert AI. You're looking for problems that AI happens to solve well.

Question Two: Do You Have the Infrastructure This Requires?

The second question separates real implementation from expensive experimentation.

AI doesn't run in isolation. It needs data infrastructure, integration points, and operational substrate to anchor into.

I've seen organizations invest millions in AI models while lacking the basic infrastructure to deploy them. The model works beautifully in testing. Then it sits unused because no one can integrate it into existing systems.

Winning AI programs invert the typical spending ratio. They earmark 50-70% of the timeline and budget for data readiness: extraction, normalization, governance metadata, quality dashboards, and retention controls.
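
The checks this phase relies on don't need to be elaborate. Here is a minimal sketch of an automated data-readiness check, assuming a tabular dataset loaded with pandas; the file name, key column, and thresholds are illustrative placeholders, not recommendations for any specific project.

```python
# Minimal data-readiness check: basic quality signals before any modeling work.
# File path, column names, and thresholds below are illustrative placeholders.
import pandas as pd

def readiness_report(df: pd.DataFrame, key_column: str, max_null_rate: float = 0.05) -> dict:
    """Summarize null rates and key duplication for a candidate training table."""
    null_rates = df.isna().mean()                              # share of missing values per column
    duplicate_keys = int(df[key_column].duplicated().sum())    # rows sharing a supposedly unique key
    return {
        "rows": len(df),
        "columns_over_null_threshold": null_rates[null_rates > max_null_rate].to_dict(),
        "duplicate_keys": duplicate_keys,
        "ready": bool(duplicate_keys == 0 and (null_rates <= max_null_rate).all()),
    }

if __name__ == "__main__":
    df = pd.read_csv("customer_records.csv")                   # placeholder dataset
    print(readiness_report(df, key_column="customer_id"))
```

A report like this, run on every source system before the project is approved, is what "quality dashboards" reduce to at their simplest.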

The teams that rush past this phase hit the same wall. Their data quality blocks model training. Their systems can't talk to each other. Their infrastructure can't handle the computational load.

Before you commit to any AI project, verify:

  • Your data exists in usable format
  • Your systems can integrate with new components
  • Your infrastructure can handle the processing requirements
  • Your team understands how to maintain what you're building

Organizations often lack adequate infrastructure to manage their data and deploy completed AI models. This infrastructure gap increases failure likelihood more than any technical limitation.

The constraint isn't what AI can do. The constraint is what your existing systems can support.

Question Three: Does Your Team Understand How This Actually Works?

This question makes people uncomfortable. It shouldn't.

The biggest predictor of AI project failure isn't technical capability. It's the learning gap between tools and organizations.

MIT research found that 95% of enterprise generative AI pilots fail to deliver measurable returns. The primary reason? Flawed enterprise integration stemming from comprehension gaps.

You can't safely operate systems you don't understand. You can't troubleshoot failures you can't diagnose. You can't optimize performance you can't measure.

I refuse to implement solutions without establishing foundational comprehension in the team that will own them. This isn't about making everyone a data scientist. It's about ensuring people understand:

  • What the system actually does versus what they think it does
  • Which configuration parameters affect output reliability
  • How to verify results instead of accepting them blindly (see the sketch after this list)
  • When the system will fail and what that failure looks like
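
For the verification point above, here is a minimal sketch of what "verify results instead of accepting them blindly" can look like in practice. It assumes the system returns a JSON object with "category" and "confidence" fields; the field names, allowed categories, threshold, and the call_model() stub are all hypothetical.

```python
# Verification gate for model output: nothing is acted on until it parses,
# falls inside the known categories, and clears a confidence threshold.
# All names and values here are hypothetical illustrations.
import json

ALLOWED_CATEGORIES = {"refund", "billing", "technical", "other"}
MIN_CONFIDENCE = 0.8

def call_model(ticket_text: str) -> str:
    """Placeholder for whatever model or service the team actually operates."""
    raise NotImplementedError

def classify_with_checks(ticket_text: str) -> dict:
    raw = call_model(ticket_text)
    try:
        result = json.loads(raw)                        # reject output that is not valid JSON
    except json.JSONDecodeError:
        return {"status": "needs_human_review", "reason": "unparseable output"}

    category = result.get("category")
    confidence = result.get("confidence", 0.0)

    if category not in ALLOWED_CATEGORIES:              # reject categories the workflow cannot route
        return {"status": "needs_human_review", "reason": f"unknown category: {category!r}"}
    if not isinstance(confidence, (int, float)) or confidence < MIN_CONFIDENCE:
        return {"status": "needs_human_review", "reason": "low or missing confidence"}

    return {"status": "accepted", "category": category, "confidence": confidence}
```

The point isn't this particular check. It's that the team owning the system can write one, because they know what a valid result looks like and what happens when the system produces something else.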

Organizations focus on using the latest technology instead of solving real problems for their intended users. They treat AI deployment like software installation—click through the setup wizard and start using it.

That approach works for consumer apps. It fails catastrophically for AI systems where configuration determines reliability and misunderstanding creates vulnerability.
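
To make "configuration determines reliability" concrete, here is an illustrative sketch of the kind of settings a team should be able to explain before deployment. The parameter names and defaults are assumptions for the sake of example, not settings from any particular product.

```python
# Illustrative deployment settings of the kind that shape reliability.
# Names and defaults are assumptions for this example only.
from dataclasses import dataclass

@dataclass
class DeploymentConfig:
    temperature: float = 0.2           # higher values increase output variability
    max_retries: int = 2               # retries for failed or malformed responses
    timeout_seconds: float = 30.0      # how long to wait before treating a call as failed
    log_raw_outputs: bool = True       # keep raw responses so failures can be diagnosed later
    require_human_review: bool = True  # route low-confidence results to a person

config = DeploymentConfig()
```

A team that can't say why each of these values was chosen, or what changes when one of them moves, hasn't finished the comprehension work.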

The comprehension requirement feels like it slows you down. It does the opposite. Teams that understand their systems adapt them successfully. Teams that don't understand their systems abandon them when the first unexpected behavior appears.

What Success Actually Looks Like

Only 5% of AI pilot programs achieve rapid revenue acceleration. The other 95% stall, delivering little measurable impact.

The successful 5% share common patterns. They invest in diagnosis before deployment. They build on existing infrastructure instead of introducing new complexity. They ensure comprehension before implementation.

They answer these three questions honestly before writing any code.

The answers might reveal you're not ready for AI implementation. That's valuable information. It saves you from joining the 80% failure rate.

Or the answers might reveal you have everything you need—just not in the configuration you expected.

Either way, you know before you invest resources in a project that won't deliver.

The field optimizes for deployment speed. I optimize for implementation success. The difference shows up in outcomes, not timelines.

These three questions feel basic. They are basic. That's why skipping them creates such predictable failure patterns.

Answer them first. Build second.

Sources

  1. RAND Corporation (2024) - "The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed"
    https://www.rand.org/pubs/research_reports/RRA2680-1.html
  2. CIO Dive / S&P Global (2025) - "AI project failure rates are on the rise: report"
    https://www.ciodive.com/news/AI-project-fail-data-SPGlobal/742590/
  3. McKinsey & Company (2018) - "Digital transformation failure rates"
    https://medium.com/@tomlinsonroland/digitally-transformed-and-still-broken-ee0b374a160a
  4. MIT NANDA Initiative / Fortune (2025) - "MIT report: 95% of generative AI pilots at companies are failing"
    https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/