I open a new Statement of Work, and within 60 seconds, I know the project is doomed.
Not risky. Not challenging. Doomed.
The timeline says six months. The scope section runs 40 pages. Three developers are proposed for 25 distinct features. I don’t need to read the requirements. The math already told me everything.
This isn’t a project plan. It’s a contractual trap that was engineered before anyone wrote a single line of code.
The Mathematical Impossibility Test
Here’s what I do first: I count discrete features in the deliverables list. Not bullet points. Actual functional capabilities that need their own development cycle.
Let’s say the SOW lists 25 features. The proposed timeline is six months—roughly 26 weeks. I subtract 20% for project overhead, meetings, and status updates. That’s realistic even in efficient shops. You’re down to 20 working weeks.
Now each feature needs requirements clarification, design, development, internal testing, client review, feedback incorporation, and final acceptance. That’s a minimum seven-stage cycle.
The math, assuming each stage averages about one developer-week per feature: 25 features × 7 stages = 175 developer-weeks of effort. You have 20 weeks of calendar time.
Divide 175 by 20. You need 8.75 full-time developers working in perfect parallel with zero dependencies.
The SOW proposes three developers.
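The arithmetic above is simple enough to script. Here is a minimal sketch of the same back-of-envelope test; the seven-stage cycle, the one-developer-week-per-stage assumption, and the 20% overhead figure are the illustrative defaults from this example, not industry constants.

```python
def required_developers(features, timeline_weeks, proposed_team,
                        stages_per_feature=7, weeks_per_stage=1.0,
                        overhead_fraction=0.20):
    """Back-of-envelope SOW feasibility test.

    Assumes each stage costs roughly one developer-week per feature and
    that overhead (meetings, status updates) consumes a fixed fraction
    of calendar time. Defaults are illustrative, not benchmarks.
    """
    # Round down: the fractional week of slack rarely materializes.
    working_weeks = int(timeline_weeks * (1 - overhead_fraction))
    effort_weeks = features * stages_per_feature * weeks_per_stage
    needed = effort_weeks / working_weeks
    return {
        "working_weeks": working_weeks,
        "effort_weeks": effort_weeks,
        "developers_needed": needed,
        "shortfall": needed - proposed_team,
        "feasible": needed <= proposed_team,
    }

result = required_developers(features=25, timeline_weeks=26, proposed_team=3)
# 175 developer-weeks of effort against 20 working weeks: 8.75 developers needed.
```

Tune the stage count and overhead to your own shop's history if you have it; the point is that the check takes seconds, and the vendor's proposal either survives it or it doesn't.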
The math doesn’t work. It’s not even close. According to McKinsey research, large IT projects run 45% over budget and deliver 56% less value than predicted. That’s $66 billion in total cost overruns—more than Luxembourg’s entire GDP.
This isn’t estimation error. This is intentional lowballing designed to win contracts with planned renegotiation built in.
Why Procurement Teams Miss This Every Time
Procurement teams don’t catch the mathematical impossibility because they’re not engineers. They evaluate price competitiveness and contract terms, but they don’t decompose work into effort units.
They see “six months” and think it sounds reasonable because another vendor said eight months. They’re comparing vendor promises to each other instead of comparing promises to physical reality.
That’s the vulnerability vendors exploit. Procurement has no mechanism to validate feasibility—only to compare relative pricing.
When I show clients this math, their first reaction is usually to defend the vendor. “Well, they’re a reputable firm, they must know what they’re doing.” Or: “Maybe they have proprietary tools that make them faster.”
They’ve already emotionally committed to the relationship, sometimes before they even send me the contract. They want the math to be wrong because acknowledging it means restarting procurement.
How Vendors Respond When Cornered
Vendors never dispute the math directly. That would require them to show their actual resource plan, which they won’t do.
Instead, they pivot to methodology.
“We use agile sprints.”
“Our developers are 3x more productive than industry average.”
My personal favorite: “We’ll leverage offshore resources that aren’t listed in the SOW.”
That last one is particularly telling. It admits the proposed team is insufficient while simultaneously introducing new risks around coordination and quality.
When I force the conversation by asking “Show me your sprint plan that delivers 25 features with three developers in 26 weeks,” they usually respond with: “We’ll develop a detailed project plan after contract signature.”
That’s the moment clients should walk away.
But most don’t. They’ve already told their board the project is happening. They’ve allocated budget. Backing out feels like failure. The sunk cost fallacy kicks in before a single dollar is spent—just from the time invested in procurement.
The Success Definition Vacuum
The second thing I check after timeline feasibility: whether acceptance criteria exist at all.
If I see phrases like “to client’s satisfaction” or “mutually agreed upon standards” without explicit definitions, I know the vendor has already transferred all interpretation power to themselves.
The client thinks they’re protected by that language. They’ve actually just handed over a blank check.
Research shows that 48% of developers point to changing or poorly documented requirements as the leading reason for software project failure. Projects without explicitly defined acceptance criteria lose client leverage progressively throughout delivery.
When success isn’t clearly defined, subjective vendor interpretations become contractual reality. The absence of clear metrics transfers control from client to vendor.
Here’s what really tells me a project is doomed: impossible timeline plus vague success criteria.
This combination means the vendor plans to declare victory on their schedule regardless of what actually works. I’ve seen this pattern so many times I can spot it in under 60 seconds.
Change Orders Are Revenue Engineering
The few times I’ve seen vendors actually defend their timeline with specifics, it always involves assumptions that aren’t in the contract.
“We assumed you’d provide the API documentation.”
“We’re counting on your internal team handling data migration.”
These dependencies aren’t listed as client responsibilities in the SOW. When they become blockers, they’ll generate change orders.
This isn’t oversight. It’s intentional design.
Vendors structure proposals to minimize initial cost perception while embedding change-order triggers that activate predictably during execution. According to international anti-corruption guidance, change orders can be manipulated through intentionally low bids, with contractors knowing officials will approve change orders later to recover profits.
Contractual gaps that convert directly to billable work represent deliberate revenue engineering. Buyers discover that the initial contract value was only the starting point, with the true expense revealed through a growing list of scope modifications.
I call this “change order archaeology”—excavating the specific contractual gaps that predictably convert into six-figure scope modifications.
Common Change-Order Triggers
- Undefined integration points: “System will integrate with existing infrastructure” without specifying APIs, data formats, or authentication methods
- Ambiguous data migration scope: “Historical data will be transferred” without volume limits, transformation rules, or quality thresholds
- Missing performance criteria: “System will support concurrent users” without specific load requirements or response time standards
- Vague testing responsibilities: “Comprehensive testing will be performed” without test case counts, environments, or acceptance thresholds
Each of these gaps transforms into billable work the moment execution begins.
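You can put a rough number on that exposure before signing. The sketch below sums a low/high band per gap category found in an SOW; the dollar ranges are placeholder assumptions for illustration, not benchmarks, and should be replaced with figures from your own change-order history.

```python
# Rough exposure band per gap category, in dollars.
# Placeholder assumptions for illustration, not benchmarks.
EXPOSURE_RANGES = {
    "undefined_integration_points": (50_000, 200_000),
    "ambiguous_data_migration": (40_000, 150_000),
    "missing_performance_criteria": (25_000, 100_000),
    "vague_testing_responsibilities": (20_000, 80_000),
}

def change_order_exposure(gaps_found):
    """Sum the low and high exposure for the gaps present in an SOW."""
    low = sum(EXPOSURE_RANGES[g][0] for g in gaps_found)
    high = sum(EXPOSURE_RANGES[g][1] for g in gaps_found)
    return low, high

low, high = change_order_exposure([
    "undefined_integration_points",
    "ambiguous_data_migration",
])
# Even two of the four gaps already imply a six-figure exposure band.
```

The exact numbers matter less than the exercise: once the band is on paper, "fix the gap now or price it into the base contract" becomes a negotiation item instead of a surprise.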
The Leverage Decay Curve
Client power erodes exponentially once projects begin.
Before signature, you hold all the leverage. You can walk away. You can demand changes. You can force vendors to price hidden work into base contracts.
After signature, sunk costs, organizational commitment, and information asymmetry shift leverage dramatically toward vendors.
Once you’ve told your board the project is happening, once you’ve allocated budget, once you’ve committed internal resources—backing out becomes politically impossible regardless of what the contract says.
Vendors know this. They structure contracts to be “fixable later” because they understand that later means never. The pressure to continue failing initiatives overwhelms rational cost-benefit analysis.
Pre-establishing exit ramps and approval checkpoints creates structural mechanisms that preserve client optionality.
Phase gates with explicit sign-off requirements. Acceptance criteria with measurable thresholds. Early termination signals tied to objective milestones.
These controls prevent sunk-cost commitment escalation. They preserve your ability to pause or stop projects when evidence indicates failure—before financial exposure becomes catastrophic.
Why Traditional Risk Assessment Fails
Traditional risk assessment approaches that average risk across categories obscure critical failure points.
A project with moderate average risk but extreme concentration in one area—such as governance or vendor dependency—carries fundamentally different exposure than a project with evenly distributed moderate risk.
Academic research published in the Journal of Management Information Systems found that the average cost overrun for IT projects “does not exist” due to power-law distribution with infinite variance. Disastrous IT projects aren’t outliers. They’re extreme values following highly regular and predictable patterns.
Heat map methodologies that highlight concentration patterns provide superior predictive value. They prevent single-category excellence from masking catastrophic weakness in another area.
I evaluate five critical dimensions independently:
- Scope risk: Clarity, boundaries, and enforceability
- Cost risk: Pricing structure, assumptions, and exposure
- Schedule risk: Timeline realism and dependencies
- Governance risk: Decision rights, escalation, and acceptance
- Vendor dependency risk: Staffing, assumptions, and control
A project might score well on cost structure but catastrophically on governance. Averaging those scores produces a comfortable “medium risk” rating that completely misses the governance failure that will kill the project.
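The difference between the two methods fits in a few lines. A minimal sketch, with hypothetical scores on a 1-to-5 scale: averaging the five dimensions yields a comfortable rating, while rating by the worst dimension surfaces the concentration.

```python
# Risk scores per dimension, 1 (low) to 5 (critical).
# The scores below are hypothetical, chosen to show the failure mode.
scores = {
    "scope": 2,
    "cost": 2,
    "schedule": 2,
    "governance": 5,   # catastrophic concentration in one dimension
    "vendor_dependency": 2,
}

average = sum(scores.values()) / len(scores)
worst_dim, worst = max(scores.items(), key=lambda kv: kv[1])

# Averaging dilutes the spike; the worst-dimension view keeps it.
average_rating = "medium" if average < 3.5 else "high"
heat_map_rating = "critical" if worst >= 5 else "acceptable"
```

Here the average lands at 2.6, a "medium" that would sail through a steering committee, while the worst-dimension view correctly flags governance as critical.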
Follow the Incentives
The existence of specialized pre-signature IT project risk assessment services signals something important: traditional procurement processes systematically fail to protect clients in software development engagements.
Standard legal review focuses on liability and compliance rather than delivery feasibility and cost structure analysis. This capability gap creates an entire market category for technical contract analysis that bridges legal, financial, and engineering domains.
Systems integrators, consultancies, and software vendors benefit financially from scope ambiguity, timeline optimism, and change-order-friendly contract structures.
Truly protective due diligence requires separation from parties who profit from project execution.
That’s why vendor-agnostic assessment exists as a market category. The inherent conflicts of interest in traditional engagement models mean that protective analysis can’t come from vendors or their partners.
Research shows that only 35% of projects worldwide finish successfully, meeting all goals and timelines. That means 65% fail or deliver mixed results. When the baseline failure rate is that high, the problem isn’t execution—it’s structural.
The 10-Page Constraint Philosophy
Every IT Project Risk Report I produce is 10-15 pages. Not 50 pages. Not 100 pages. Ten to fifteen.
This isn’t arbitrary. It’s deliberate design for executive consumption rather than comprehensive technical documentation.
Decision-maker attention is the scarcest resource in enterprise purchasing. Reports must deliver actionable intelligence within realistic reading time constraints. They must prioritize decision support over exhaustive analysis.
The structure progresses from executive decision-making to technical detail:
- Immediate action recommendation (Sign/Fix/Walk Away)
- Overall risk rating
- Top five project red flags
- Risk heat map across five dimensions
- Contract and SOW risk findings
- Delivery feasibility analysis
- Change-order exposure quantification
- Pre-signature negotiation fix list
This pyramid structure allows different stakeholders to extract relevant information at appropriate depth levels. Executives get the recommendation in 30 seconds. Legal teams get specific clause revisions. Technical leaders get feasibility analysis.
The report concludes with a concise risk-reduction checklist: specific contract clauses to add or revise, questions that force scope and assumptions into the open, redlines intended to protect the client before signing.
This isn’t legal advice. It’s pre-signature risk mitigation guidance that converts assessment into leverage.
What This Means for You
If you’re evaluating a software development contract right now, do this:
Count the discrete features in the deliverables list. Look at the proposed timeline and team size. Do the math.
If the numbers don’t work, the project won’t work. The execution problems you’ll face six months from now are already embedded in the contract you’re about to sign.
Search your SOW for acceptance criteria. If you see phrases like “to client’s satisfaction” or “mutually agreed upon standards” without explicit definitions, you’ve already transferred control to the vendor.
Look for the change-order triggers: undefined integration points, ambiguous data migration scope, missing performance criteria, vague testing responsibilities.
These gaps will convert into billable work the moment execution begins.
The project isn’t just risky at that point. It’s already failed. You just don’t know it yet.
The question is whether you’ll discover that before or after you sign.