Technical Discovery: How to De-Risk a Software Project Before It Starts

Reading time: 6 minutes

[Diagram: technical discovery risk reduction]

Most software projects don’t fail at month 8. The failure happens at month 1, when someone writes a scope document, attaches a timeline, and everyone agrees it sounds reasonable. Month 8 is just when the consequences arrive.

The reason is structural: estimates are made before the unknowns are known. Integrations are assumed to be simpler than they are. The “can’t we just…” assumptions about existing systems turn out to be wrong. And the thing that was supposed to take 3 months quietly becomes 6 before anyone wants to say it out loud.

Technical discovery exists to surface those problems before they become schedule overruns.

What Technical Discovery Actually Is

Discovery is not a requirements document. A requirements document is a list of things to build. Discovery is a risk identification process.

The goal isn’t to specify every feature in detail. It’s to find the things that could cause the project to go wrong — the technical uncertainties, the integration dependencies, the assumptions baked into the scope that haven’t been verified — and to resolve or explicitly acknowledge them before development starts.

A discovery phase done properly produces a different kind of confidence. Not “we know exactly what we’re building” — that’s false confidence. But “we know where the unknowns are, we’ve sized the risks, and we’ve structured the scope to test the riskiest assumptions first.” That’s real confidence.

The 5 Questions Discovery Must Answer

1. What are we actually building?

Not “a platform for X” — a concrete description of the user flows, the data model, and the system boundaries. What does the user do? What does the system do in response? Where does the data come from and where does it go? The goal is to get specific enough that an engineer can estimate it, not so specific that it takes three months to write.

2. Where are the integration points?

Third-party APIs, legacy systems, external data sources, payment processors, identity providers — every integration is a risk. The questions to answer: Does the API you’re relying on actually support what you need? What are the rate limits? Who maintains the authentication? What happens when it goes down? Integration assumptions that turn out to be wrong are among the most common causes of project delays.

3. What are the unknowns?

Every project has things the team doesn’t know yet. The difference between a well-run project and a poorly run one is whether those unknowns are written down. An honest list of “we don’t know how this works yet” is more useful than a scope that pretends everything is understood. Known unknowns can be planned for. Unknown unknowns blow up timelines.

4. What is the riskiest assumption?

There’s always one thing in the scope that, if it turns out to be wrong, invalidates a large chunk of the plan. Maybe it’s “the existing API supports bulk operations.” Maybe it’s “we can use the current database schema.” Maybe it’s “the client’s third-party vendor will cooperate with our integration.” Whatever that assumption is, it needs to be identified, named, and tested — with a prototype, a technical spike, or a direct conversation — before Phase 1 starts.
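A spike for an assumption like “the API supports bulk operations” can be a few dozen lines, not a project. Here is a minimal sketch of that idea — the batch-size limit, payload shape, and `send_batch` stub are all hypothetical stand-ins so the sketch runs offline; in a real spike you would replace the stub with an actual call to the vendor’s sandbox API:

```python
# Spike: does the vendor API accept bulk writes, and what is the practical
# batch-size ceiling? All limits here are hypothetical placeholders.

def send_batch(records):
    """Stand-in for the real vendor call (e.g. a POST to a bulk endpoint).
    Replace with an actual HTTP request when running the spike for real."""
    MAX_BATCH = 100  # the hypothetical vendor limit we want to discover
    if len(records) > MAX_BATCH:
        return {"status": 413, "accepted": 0}
    return {"status": 200, "accepted": len(records)}

def find_batch_ceiling(candidate_sizes):
    """Probe increasing batch sizes and report the largest that succeeds."""
    ceiling = 0
    for size in candidate_sizes:
        result = send_batch([{"id": i} for i in range(size)])
        if result["status"] == 200:
            ceiling = size
        else:
            break  # first rejection: we've found the boundary
    return ceiling

if __name__ == "__main__":
    print(find_batch_ceiling([10, 50, 100, 500]))
```

The point is not the code — it’s that after an afternoon of this, “the API supports bulk operations” is a verified fact with a known ceiling, not an assumption in a scope document.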

5. What does “done” mean?

Agreement on completion criteria before development starts prevents the most common cause of late-stage friction: the client and the developer having different mental models of what “shipped” looks like. Performance requirements, browser support, accessibility standards, test-coverage expectations, deployment environment — all of this should be explicit, not assumed.

Discovery Outputs

A discovery phase should produce five artefacts, not fifty pages:

Architecture sketch. A diagram showing the major system components, their relationships, and the data flows between them. Not a full technical specification — a shared map of what the system looks like that both technical and non-technical stakeholders can read.

Integration map. Every external dependency listed with: what it’s needed for, who owns it, what assumptions we’re making about it, and what we need to verify before we can build against it.

Risk register. A table of identified risks, rated by likelihood and impact, with a mitigation or contingency plan for each. The most important column is “what happens if this goes wrong.”
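The register can live in a spreadsheet, but even a few lines of structured data enforce the discipline of scoring every risk and answering the “what happens if this goes wrong” question. A sketch of the shape, with illustrative risks and 1–5 scores (not recommendations):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int      # 1 (rare) .. 5 (near-certain)
    impact: int          # 1 (minor) .. 5 (project-threatening)
    if_it_happens: str   # the "what happens if this goes wrong" column
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood x impact rating used to order the review.
        return self.likelihood * self.impact

register = [
    Risk("Vendor API lacks bulk operations", 3, 5,
         "Import pipeline must be redesigned around single-record calls",
         "Run a spike against the sandbox API in week 1"),
    Risk("Legacy schema can't be reused as-is", 2, 4,
         "Phase 1 grows an unplanned data-migration work stream",
         "Review the schema with the client's DBA before estimating"),
]

# Review risks highest-severity first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.description} -> {risk.mitigation}")
```

Forcing every entry to fill in `if_it_happens` and `mitigation` is the whole value: a risk nobody can score or mitigate is a risk nobody has actually thought about.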

Phased scope. The work broken into phases, each delivering standalone value, with the riskiest or most-unknown work front-loaded. This is not a Gantt chart — it’s a sequence of bets ordered by what we need to learn.

Definition of done. Clear, agreed-upon completion criteria for Phase 1, written in terms that can be verified, not interpreted.
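“Verified, not interpreted” means each criterion is a measurement paired with a threshold. A sketch of what that looks like in practice — the metric names and thresholds below are examples, not recommendations:

```python
# Each Definition-of-Done item pairs a metric with a pass/fail threshold,
# so "done" becomes a measurement, not an opinion. Values are illustrative.
DONE_CRITERIA = {
    "p95_page_load_ms":             ("<=", 2000),
    "line_test_coverage_pct":       (">=", 80),
    "axe_accessibility_violations": ("==", 0),
}

def check(measured: dict) -> list:
    """Return the names of criteria that fail for a set of measured values."""
    ops = {"<=": lambda a, b: a <= b,
           ">=": lambda a, b: a >= b,
           "==": lambda a, b: a == b}
    return [name for name, (op, threshold) in DONE_CRITERIA.items()
            if not ops[op](measured[name], threshold)]

failures = check({"p95_page_load_ms": 2400,
                  "line_test_coverage_pct": 83,
                  "axe_accessibility_violations": 0})
```

Whether this lives in code, a CI gate, or a one-page checklist matters less than the property it demonstrates: anyone can run the check and get the same answer about whether Phase 1 is done.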

How Long Discovery Should Take

For most projects: one to three weeks.

One week for a well-understood scope with few integrations and a technically experienced team that’s worked together before. Three weeks for a project with significant integration complexity, a legacy system that needs to be understood, or a domain that requires research.

Discovery should not take three months. If it’s taking three months, it’s not discovery — it’s a paid design phase that’s avoiding the hard conversation about what the project actually is. The goal is speed and focus: surface the risks, structure the scope, start building.

Discovery vs. a Paid Design Phase

Discovery is distinct from design (UX, visual design, detailed specifications). A discovery sprint is engineering-led and risk-focused. A design phase is design-led and deliverable-focused.

Both have their place, but they answer different questions. Discovery answers: is what we’re proposing buildable, and what are the risks? Design answers: what should it look like and how should it behave?

For a project with significant uncertainty, you run discovery first. Once you know the shape of the system and the location of the risks, you can design intelligently. Running design before discovery produces beautiful wireframes for a system that turns out to be architected incorrectly.

The Most Common Discovery Failure

Skipping the risk register.

Teams go through the motions — they write down the scope, sketch an architecture, list the integrations — but they don’t write down what could go wrong. This feels like negative thinking. It isn’t. It’s the only part of discovery that directly protects you from the failure modes that actually materialise.

A risk register is not a document that predicts the future. It’s a forcing function that makes people articulate their assumptions, assign probability to failure modes, and think through mitigations before they’re urgent. The act of writing it down surfaces disagreements and blind spots that wouldn’t emerge otherwise.

Red Flags in a Discovery Process

  • No engineers involved. Discovery run entirely by project managers or account managers will miss technical risks by definition. Engineers who will do the work need to be in the room.
  • No prototype of the risky parts. If there’s an integration that’s never been tested, or a technical approach that’s untested in your context, discovery should produce a spike — a minimal proof that the approach works — not just an assumption that it will.
  • No challenge to scope. If the discovery process produces the same scope the client came in with, unchanged, something is wrong. Discovery should produce a refined scope that reflects what was learned, not a rubber stamp on the original brief.
  • Timelines produced before risks are resolved. If estimates are committed before the risk register exists, you’ve skipped the most important step.

Starting a new project and want to make sure you’re going into development with real confidence rather than a false sense of certainty? Write to us at hello@cimpleo.com — a short discovery engagement at the start is the cheapest insurance you can buy.
