Key Takeaways: Not every AI opportunity is worth pursuing — and recognizing the bad ones early is more valuable than building fast. Vanity metrics, inaccessible data, moving ROI goalposts, compliance gaps, and solutions without real problems are the five patterns that reliably kill AI projects. For each, the red flag shows up before the first sprint, not at the demo. Saying no early is not a failure of ambition; it’s the discipline that makes the projects you do take on actually ship.
The hardest advice to give a client is: this project isn’t ready. Not yet. Maybe not ever in its current form.
It’s hard because there’s genuine enthusiasm on the other side of the conversation. There’s budget. There’s a slide deck with a compelling vision. And turning that down feels like leaving money on the table.
But some AI projects are structured to fail from the first conversation. The signal is usually subtle — a question that gets a vague answer, a data source that “should be accessible,” a success metric that shifts when you push on it. After enough of these conversations, the patterns become recognizable.
Here are the five we’ve learned to treat as genuine stop signs.
Pattern 1: Vanity Metrics With No Real Business Outcome
The red flag is an AI project whose success is measured in AI terms: model accuracy, inference speed, number of predictions made. Stakeholders talk about “achieving 90% accuracy” or “processing 10,000 documents per day” as if those are outcomes in themselves.
They’re not. They’re internal metrics. The business outcome — reduced processing costs, fewer manual errors, faster cycle times — is conspicuously absent from the success criteria.
This matters because a model with 85% accuracy that saves 3,000 hours of manual work per year is a good AI project. A model with 94% accuracy that replaces a process that wasn’t a real bottleneck is not, regardless of what the benchmark says. When the success metrics discussion stays inside the AI layer and never touches the business layer, there’s no accountability structure. Nobody asks the right question at the end: did this change anything?
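The arithmetic behind that comparison is worth making explicit. Below is a back-of-envelope sketch; every number in it (annual volume, handling time, hourly cost) is invented for illustration:

```python
# Hypothetical back-of-envelope: translate model accuracy into hours
# and cost saved. Every number here is invented for illustration.

DOCS_PER_YEAR = 35_000         # assumed annual document volume
MINUTES_PER_DOC_MANUAL = 6     # assumed manual handling time per document
HOURLY_COST = 35               # assumed fully loaded labor cost, USD

def annual_savings(accuracy: float) -> tuple[float, float]:
    """Hours and cost saved per year, assuming correctly handled
    documents need no manual touch and the rest fall back to the
    manual process."""
    hours = DOCS_PER_YEAR * accuracy * MINUTES_PER_DOC_MANUAL / 60
    return hours, hours * HOURLY_COST

for acc in (0.85, 0.94):
    hours, cost = annual_savings(acc)
    print(f"{acc:.0%} accuracy -> {hours:,.0f} h/year saved, ${cost:,.0f}/year")

# In this scenario the 9-point accuracy gap is worth roughly $11k/year,
# which is small next to the question of whether the process was a
# bottleneck at all.
```

The point of the exercise isn’t the specific numbers; it’s that the calculation forces the conversation out of the AI layer and into hours and dollars.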
What to do instead: Before scoping the AI work, ask the client to name the business outcome and the person who owns it. If neither exists, or neither is convincing, the project isn’t ready. The AI requirements should follow from the business case, not the other way around.
Pattern 2: Data That Technically Exists but Isn’t Actually Accessible
“We have all the data” is one of the most reliably misleading phrases in an AI discovery call.
The data exists. But accessing it requires a request to a department that reports to a different executive. Or it lives in a system whose vendor contract prohibits extraction. Or it’s spread across spreadsheets in twelve regional offices with no consistent schema. Or the legal team hasn’t reviewed what can be processed by an external model.
Data access problems rarely surface in the initial conversation. They surface three weeks into the project, when someone finally pulls up the actual database and realizes the training data is locked down, inconsistent, or largely missing.
At that point, projects don’t get paused — they get restructured around whatever data is available. Which usually means building something that answers a different question than the one that justified the budget.
What to do instead: Make data access a gate, not an assumption. Before any technical scoping, ask to see a sample of the actual data: how it’s stored, who owns it, what extraction looks like end-to-end. If that request is difficult to fulfill, the project has a data access problem, not an AI problem. Fix that first.
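One way to make that gate concrete is a short check run before scoping, not three weeks after kickoff. A minimal sketch, assuming the client can hand over CSV sample exports; the paths and required columns below are placeholders:

```python
# Hypothetical pre-scoping gate: can we read a sample from every
# source, and do the schemas agree? Paths and column names are
# placeholders for whatever the client's exports actually contain.

from pathlib import Path
import pandas as pd

SOURCE_FILES = sorted(Path("samples").glob("office_*.csv"))  # assumed exports
REQUIRED_COLUMNS = {"invoice_id", "amount", "issued_at"}     # assumed schema

schemas = {}
for path in SOURCE_FILES:
    try:
        sample = pd.read_csv(path, nrows=100)  # a small sample is enough here
    except Exception as exc:
        print(f"BLOCKER {path.name}: cannot read sample ({exc})")
        continue
    missing = REQUIRED_COLUMNS - set(sample.columns)
    if missing:
        print(f"BLOCKER {path.name}: missing columns {sorted(missing)}")
    schemas[path.name] = tuple(sorted(sample.columns))

# Twelve offices with twelve schemas is a data project, not an AI project.
if len(set(schemas.values())) > 1:
    print(f"WARNING: {len(set(schemas.values()))} distinct schemas across sources")
```

If even producing those sample files takes weeks of negotiation, that delay is itself the answer: the data access problem comes first.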
Pattern 3: ROI That Moves Every Time You Push on It
Some AI projects have ROI defined loosely on purpose. The initial case is built on optimistic estimates (“this could save up to…”), and when you push on specifics — hours saved per person, current process cost, expected error rate reduction — the numbers turn out to be estimates built on estimates.
That’s not always fatal. Rough estimates are normal in early-stage scoping. The real problem is when those estimates shift every time someone asks a harder question. Projected savings go up in stakeholder presentations and down in technical discussions. Scope expands to cover adjacent problems whenever the original case looks thin. Nobody will write down a specific number and own it.
Moving goalposts tell you the project doesn’t have a real problem owner. Someone is championing AI as an initiative — for visibility, for budget, for genuine but diffuse enthusiasm — without a specific operational outcome they’re accountable for. These projects tend to produce demos that look polished and results that are impossible to measure.
What to do instead: Ask the client to define ROI in a way that could be falsified. Not “this should improve efficiency” but “we expect invoice processing time to drop from 4 hours to 45 minutes per batch, and we’ll measure this for the first 90 days post-launch.” If they can’t write that down, or won’t, you have your answer.
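The falsifiable version fits in a few lines of structured data. A sketch, using the invoice-processing numbers above as placeholder values:

```python
# A falsifiable ROI commitment as data: every field filled in and
# owned before technical scoping starts. Values are the hypothetical
# invoice-processing example from the paragraph above.

from dataclasses import dataclass

@dataclass(frozen=True)
class RoiCommitment:
    metric: str                   # what gets measured
    baseline: float               # current value, measured, not guessed
    target: float                 # the number someone will own
    unit: str
    measurement_window_days: int  # how long we measure post-launch
    owner: str                    # an accountable person, by name

invoice_roi = RoiCommitment(
    metric="invoice processing time per batch",
    baseline=240.0,               # 4 hours, in minutes
    target=45.0,
    unit="minutes",
    measurement_window_days=90,
    owner="<named operations lead>",  # placeholder; must be a real person
)

# After launch the test is binary: the measured value hits the target,
# or it doesn't. If the fields can't be filled in, the ROI isn't real.
```

The format matters less than the constraint: no empty fields, no ranges that move, and an owner named before the first line of technical work.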
Pattern 4: Compliance Gaps That AI Can’t Paper Over
The compliance version of this conversation sounds like: “We know there are some data privacy concerns, but we think AI could actually help us address those.”
It can’t.
AI doesn’t resolve compliance gaps — it adds new ones. A model trained on data that shouldn’t have been processed creates liability. An automated decision system operating in a regulated domain (credit, hiring, medical, financial) has its own compliance requirements on top of the underlying data rules. Running AI over personal data stored in ways that don’t meet local regulations doesn’t clean up the underlying situation; it extends the problem into a new system.
This isn’t about excessive caution. There are plenty of AI applications in regulated industries that work well, precisely because compliance was addressed upfront. The problem is when compliance gets treated as something to handle after the system is built, once everyone can see that it works.
By that point, untangling compliance issues from the AI architecture is usually more expensive than starting from scratch.
What to do instead: Compliance review happens before technical design, not after. If the relevant team — legal, DPO, compliance officer — can’t commit to a review timeline that fits the project schedule, the project doesn’t have a realistic start date. That has to be part of the scoping conversation, not an afterthought.
Pattern 5: A Solution That Arrived Before the Problem
This is the most flattering trap to fall into, because it starts with genuine interest in AI rather than a manufactured business case.
The conversation sounds like: “We want to use AI in our operations. We’re not sure exactly where yet, but we have leadership support and budget.” This isn’t bad faith — it’s often sincere enthusiasm from companies that have watched competitors announce AI initiatives and want to move.
But projects that begin with “we want AI” rather than “we have this specific problem” follow a recognizable path. The team picks something that demos well, not something with material operational impact. The first project ships and looks impressive in a slide deck. Nobody measures what changed, because the goal was to do AI, and that goal was achieved. The second project is harder to justify. The third doesn’t get approved.
The result is an AI capability that exists without a use case that earns its place.
What to do instead: Redirect the conversation from “where can we use AI” to “what are the three biggest operational bottlenecks in your business right now.” Then evaluate whether AI is the right tool for any of them — or whether simpler automation, a process change, or a better dashboard would solve the problem faster. The right AI project is almost always downstream of an honest bottleneck audit, not an AI strategy document.
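The audit itself doesn’t need tooling, but even a spreadsheet-level ranking keeps the conversation honest. A sketch with made-up entries:

```python
# Hypothetical bottleneck audit: rank problems by cost before asking
# what kind of fix each one needs. All entries are invented.

bottlenecks = [
    # (problem, hours lost per month, plausible fix)
    ("manual invoice matching",           320, "AI document extraction?"),
    ("duplicate data entry, CRM to ERP",  210, "plain integration, no AI"),
    ("weekly report assembly",             90, "a dashboard, no AI"),
]

for problem, hours, fix in sorted(bottlenecks, key=lambda b: -b[1]):
    print(f"{hours:>4} h/month  {problem:<34} -> {fix}")

# AI earns a place only where it tops a list like this and a simpler
# fix wouldn't do.
```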
The Common Thread
Each of these five patterns points to the same underlying issue: the AI work is being proposed before the non-AI work is done.
The business outcome isn’t defined. The data isn’t accessible. The ROI isn’t owned. The compliance isn’t reviewed. The problem hasn’t been found yet. AI can’t substitute for that foundation. A model trained on bad data produces bad outputs more efficiently. A high-accuracy system measuring the wrong thing is still measuring the wrong thing.
The discipline isn’t in building fast. It’s in knowing which projects deserve the effort.
At Trobz, we run a structured discovery session before any AI engagement — partly to scope the work, partly to check whether the work should happen at all. If you’re evaluating an AI initiative and want a second opinion on whether the foundation is solid, reach out; we’re happy to talk it through.