Most AI Pilots Fail — And That’s Because Companies Skipped Fixing the Work They Wanted to Automate

This article was written by the Augury Times

Why Lukas Egger says most pilots are doomed

On a recent podcast, Lukas Egger of SAP Signavio (SAP) made a blunt claim: roughly 95% of enterprise AI pilots never become useful, lasting projects. That number grabbed attention because it came from someone whose team spends its days mapping how real work actually gets done inside big companies.

Egger’s point isn’t about model quality. It’s about what companies try to wrap models around. He argues many firms treat AI like a silver bullet — drop in a model, point it at data, and watch efficiency appear. In reality, he says, the companies that win start by fixing the processes first. SAP Signavio’s view matters because it sits at the junction of process mapping, change management and enterprise software. For CIOs and tech buyers, that perspective flips the usual pilot playbook: the work comes before the automation, not after.

Anatomy of a failed pilot: where most projects break down

Egger lays out a pattern that will sound familiar to anyone who’s watched an expensive pilot fizzle. It usually starts with a tactical ask — automate a task, make a model that flags exceptions, or reduce time on a manual step. Teams then grab a dataset, spin up an algorithm, run the pilot in a controlled slice of the business, and declare technical success. But when the pilot tries to scale, things fall apart. Here are the common causes he highlights.

First, problem framing is wrong. Teams often aim at symptoms instead of root causes. For example, a finance team might ask for an AI model to predict invoice errors without asking why errors appear in the first place. If the underlying process routes invoices through multiple legacy systems and manual rekeying, the model only learns the mess. It can flag suspicious invoices, but it can’t fix the process that creates the noise.

Second, data and process are misaligned. AI needs stable, well-understood inputs. Many pilots feed models data pulled from ad hoc exports, spreadsheets and shadow systems. That data has gaps, timing mismatches and inconsistent labels. The model can seem to work on the pilot sample but will fail when the business changes or the volume grows.

Third, organizations assume a plug-and-play vendor model. They expect a vendor to hand over a prebuilt model and an SLA and assume integration is an engineering detail. In truth, integration is a business design question. Where does the model sit within end-to-end flows? Who sees the output? What action does it trigger? If these things aren’t decided before you build a model, you’ll get alerts no one acts on.

Fourth, measurement failures hide true value. Pilots are often judged on narrow technical metrics — model accuracy, precision, recall — rather than business metrics like cycle time, error rate, cash flow or customer satisfaction. A model can be statistically impressive and still produce zero business benefit when it isn't tied to outcomes that matter; the sketch after this list shows how the two views can diverge.

Finally, human and change barriers are underestimated. Pilots that require staff to change habits need champions, training and clear governance. Without those, a small group’s success won’t spread beyond the pilot team.
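
To make that measurement gap concrete, here is a minimal, hypothetical sketch in Python. The Invoice fields, the records and the numbers are invented for illustration; the point is only that the same pilot can score near-perfectly on model metrics while the business metric barely moves.

```python
# Hypothetical illustration: the same invoice-flagging pilot scored two ways.
# All records and numbers below are fabricated for the example.

from dataclasses import dataclass

@dataclass
class Invoice:
    flagged_by_model: bool   # did the model flag this invoice?
    actually_bad: bool       # ground truth: did it contain an error?
    cycle_time_hours: float  # end-to-end processing time

def model_metrics(invoices):
    """Narrow technical view: precision and recall of the flagging model."""
    tp = sum(i.flagged_by_model and i.actually_bad for i in invoices)
    fp = sum(i.flagged_by_model and not i.actually_bad for i in invoices)
    fn = sum(not i.flagged_by_model and i.actually_bad for i in invoices)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def avg_cycle_time(invoices):
    """Business view: how long does an invoice actually take, end to end?"""
    return sum(i.cycle_time_hours for i in invoices) / len(invoices)

before = [Invoice(False, b, 48.0) for b in (True, False, False, True)]
after = [
    Invoice(True, True, 46.0),    # correctly flagged, but still slow:
    Invoice(False, False, 47.0),  # the downstream rework queue is unchanged
    Invoice(False, False, 49.0),
    Invoice(True, True, 45.0),
]

precision, recall = model_metrics(after)
print(f"precision={precision:.2f} recall={recall:.2f}")  # looks perfect
print(f"cycle time: {avg_cycle_time(before):.1f}h -> {avg_cycle_time(after):.1f}h")
```

In a real pilot, the business metric would come from the process owner's own reporting, not from the model team's evaluation set.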

Re-engineer before you automate: a playbook for enterprise AI

Egger’s prescription is straightforward: treat process design as the primary task and AI as an augmenting tool. That sounds obvious, but it requires shifting budgets, timelines and vendor expectations. Below is a practical playbook for CIOs who want pilots that scale.

1) Start with a readable map. Use process mapping to document the current state end-to-end — not a single form or screen, but the full journey a piece of work takes. That reveals handoffs, delays and duplicate steps you can't see in raw logs. Signavio's core strength is making those flows visible, but any disciplined mapping approach works.

2) Define outcome metrics first. Pick a small set of business outcomes to move — for example, invoice processing time, first-time-right rate, or time to resolution. These metrics determine whether a change is worth scaling. Make them measurable from the start and tie them to incentives.

3) Patch the process defects. Before you layer AI on, remove obvious waste: duplicate data entry, unnecessary approvals, unclear ownership. Many times a simple rule or a small workflow redesign delivers most of the gain. Save models for the parts where pattern recognition or prediction truly adds value.

4) Design integration and decision flows. Decide how model outputs will enter human work. Will a model auto-apply a change, or will it create a work item for a human to review? Define SLAs, escalation paths and who owns mistakes. These are change-management choices, not plug-and-play tech settings; a sketch of such a routing layer follows this list.

5) Run pilots as experiments with business owners on the hook. Treat pilots like clinical trials: pre-specify success criteria, guardrails, and how you’ll measure impact. Include people who will rely on the new flow in the design and the evaluation. If frontline workers mistrust a model, the pilot will not scale.

6) Use modular architecture. Aim for reusable data pipelines, clear APIs and observability so you can rerun experiments and monitor drift. That reduces the cost of scaling a successful pilot into production.
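
As referenced in step 4, below is a minimal sketch of what such a routing layer could look like, written in Python under assumptions of our own: the confidence score, the thresholds, the queue owners and the amount cutoff are hypothetical placeholders, each of which would in practice be settled with the business owner during process design.

```python
# A hypothetical routing layer between a model and the people who act on it.
# Thresholds, owners and the escalation path are illustrative choices that
# would be decided during process design, not tuned after deployment.

from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_APPLY = "auto_apply"      # model acts directly, humans audit later
    HUMAN_REVIEW = "human_review"  # model creates a work item with an SLA
    ESCALATE = "escalate"          # ambiguous or high-stakes: senior owner

@dataclass
class Decision:
    route: Route
    owner: str       # who is accountable if this goes wrong
    sla_hours: int   # how long the work item may sit in the queue

AUTO_APPLY_THRESHOLD = 0.97  # illustrative; set with the business owner
REVIEW_THRESHOLD = 0.80

def route_model_output(confidence: float, amount_eur: float) -> Decision:
    """Decide how a model's suggestion enters human work."""
    # High-value items always get a human, regardless of model confidence.
    if amount_eur > 50_000:
        return Decision(Route.ESCALATE, owner="finance-lead", sla_hours=4)
    if confidence >= AUTO_APPLY_THRESHOLD:
        return Decision(Route.AUTO_APPLY, owner="ap-team", sla_hours=24)
    if confidence >= REVIEW_THRESHOLD:
        return Decision(Route.HUMAN_REVIEW, owner="ap-team", sla_hours=8)
    return Decision(Route.ESCALATE, owner="process-owner", sla_hours=4)

print(route_model_output(confidence=0.99, amount_eur=1_200))
print(route_model_output(confidence=0.85, amount_eur=1_200))
print(route_model_output(confidence=0.99, amount_eur=80_000))
```

The design choice worth noting is that accountability (the owner field) and the SLA travel with the work item, so the model never produces an alert without someone on the hook to act on it.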

What CIOs and procurement must change immediately

For CIOs, Egger’s advice means shifting the RFP and vendor evaluation process. Procurement should stop buying isolated AI features and start buying capabilities around process improvement, integration and change enablement. That alters the vendor shortlist: vendors that sell only models or prebuilt connectors may struggle unless they partner with organizations that know processes.

CIOs should also demand pilot contracts that include outcome-based milestones, not just delivery milestones. Insist on proof that data lineage is solid and that the vendor can support observability and retraining. Internally, budget some time and money for process rework before automating — that line item will save far more than the model license fee.

Finally, recognize which pilots should never scale. If a pilot succeeds only under near-perfect conditions, or if it depends on manual cleanups that no one will sustain, codify it as a learning exercise, not a productization candidate.

Investor takeaways: which vendors benefit (and which don’t)

This reality has clear market implications. Vendors that combine process mapping, integration tooling and change management stand to gain. That helps companies like SAP (SAP), which now offers Signavio as part of its process portfolio, and system integrators that wrap models with process change services — firms such as Accenture (ACN) and IBM (IBM). Cloud vendors with robust integration platforms, like Microsoft (MSFT), also benefit because they can host the end-to-end stack.

Pure-play model vendors and narrow point-solution sellers face a harder path. Firms that sell only APIs or prepackaged models will increasingly face buyers who demand process guarantees and measurable outcomes. Watch sales cycles and deal structures: longer cycles that bundle process work and outcome-based contracting hint that the market expects more than a model. Investors should track recurring revenue quality, the share of deals that include integration and process work, and partnership lists that show who can deliver end-to-end outcomes.

In short, the winners will be those who admit that AI is a tool, not a product — and who can show how that tool changes the work it augments.
