
The AI Reality Check for Hospitality Leaders

The 95% figure

You've probably seen it. An estimated 95% of AI initiatives fail to deliver measurable revenue or productivity gains. It gets cited in conference presentations, vendor decks, and industry reports — usually followed by a pitch for whatever product the person citing it happens to be selling as the solution to the problem.

The figure is contested, as most headline statistics are. But the direction is right. The majority of hotel groups that have launched AI pilots in the last three years have little to show for the investment. Most projects stalled at the proof-of-concept stage. Adoption inside teams remained low. Leadership lost confidence quickly and moved on to the next thing.

This is not a technology problem. The tools work. Claude, GPT-4, Vanna, and a dozen other AI platforms are genuinely useful for the kinds of tasks hotels need help with — data analysis, content generation, customer communication, operational triage. The technology has outpaced the industry's ability to use it.

Why it fails

Four reasons the projects stall

After watching this pattern repeat across properties of different sizes and markets, the failure modes cluster into four causes. None of them is the technology; all of them are everything around it.

01
Unclear business objectives
The brief was "use AI" rather than "reduce the time it takes to generate a morning revenue briefing from 45 minutes to 5." Vague mandates produce vague implementations — and vague implementations don't survive the first budget review.
02
No process redesign
The AI tool was added on top of existing workflows rather than redesigning the workflows around it. This is the equivalent of giving someone a faster car but keeping the same congested roads. The bottleneck moves, it doesn't disappear.
03
No ownership
The project was handed to IT to "implement" while the commercial and operations team carried on as normal. AI implementation is a change management project with a technology component — not the other way around.
04
Vendor-led strategy
The technology vendor defined the use case and the implementation plan. This is like letting a real estate agent tell you which house to buy. They're not wrong that you should buy a house — but their incentives are not your incentives.
The patterns

What the failing projects have in common

The pattern is remarkably consistent. A hotel group's leadership attends a conference where AI is the dominant theme. They return with a mandate to "do something with AI." The IT or digital team is tasked with finding solutions. Vendor demos are booked. A pilot is approved for one property. Six months later, the pilot is still running. Twelve months later, it has been quietly shelved.

The brief was "use AI" rather than a specific business outcome. That's where it ends before it begins.

— Studio Oriente · AI Analysis

The pilot failed not because the tool didn't work, but because nobody redesigned the workflow around it. The revenue manager who was supposed to benefit from the AI briefing tool was still doing her morning analysis the old way — because the AI briefing wasn't integrated into how her day actually started, and because nobody had changed what she was accountable for producing.

The tool was added. The job wasn't changed. The tool became redundant.

What's different

The projects that actually work

The implementations that stick share a small number of characteristics that distinguish them from the ones that stall. None of them are about the technology choice.

They start with a named business outcome. Not "improve revenue management" — "reduce the time between morning data availability and the first pricing decision from 90 minutes to 20." The outcome is specific, measurable, and owned by someone whose performance is tied to it.

The workflow is redesigned before the tool is deployed. The process that the AI is meant to support is mapped, critiqued, and rebuilt with the AI as a first-class participant — not an add-on. This means someone's job changes. That's uncomfortable. It's also the only way it works.

Leadership stays involved throughout. Not just at the approval stage. Not just at the review stage. The CEO or commercial director who commissioned the project stays visible to the team implementing it: asking questions, reviewing outputs, and treating the new system as the way the company now does things.

The test

The one question to ask before starting

Before any AI initiative in hospitality, there is one question that predicts whether the project will stick or stall. Not "which tool should we use?" Not "what's our AI strategy?" Just this:

"What specific decision will be made faster, better, or more consistently because of this — and who is accountable for that decision today?"

If you can't answer both halves of that question in one sentence, the project isn't ready to start. Not because the technology isn't good enough — because the organizational conditions for success don't exist yet. The work to do before the pilot is to create those conditions, not to find a better vendor.

This is uncomfortable advice for an industry that has been sold the idea that AI is a product you buy rather than a capability you build. But the 95% failure rate is the cost of not following it.

Studio Oriente's AI Readiness Audit exists precisely for this moment — before the pilot, before the vendor selection, before the budget commitment. If that's where you are, talk to us.
