There’s a sentence we hear often from founders, Clinical Affairs and Regulatory teams:
“We have a strong device, but we’re afraid of losing time.”
Not time spent working.
Time lost to rework, the kind that appears halfway through a study, when you realize the study started… but wasn’t solid enough.
Studies don’t fail because there’s a lack of effort.
They crack when the hard part begins and you discover that:
- the endpoint doesn’t truly support the claim
- the population isn’t defined with enough precision
- the data collected doesn’t answer the clinical question
- key decisions lack a clear, traceable rationale
At that point, teams work harder, but progress slows.
Not because they’re unskilled, but because there’s no shared compass.
Why this matters even more in 2026
Not because everything will suddenly change, but because:
- the system is maturing
- there is more focus on the device lifecycle
- expectations for robust, explainable evidence are higher
- there is less tolerance for studies that “work on paper” but collapse under scrutiny
The way out isn’t mysterious, it has a name: a defensible evidence strategy.
Innovation isn’t what the device does, it’s what it proves
Many projects start from features. Decision-makers start from a different question:
what changes for the patient or the clinical decision?
When the clinical promise is vague:
- endpoints become convenient
- data becomes accumulation
- studies turn into technical exercises
When it’s clear, the opposite happens: the scope narrows and what remains becomes strong.
A measurable endpoint isn’t necessarily a useful one
This is one of the most expensive mistakes: endpoints are chosen because they’re easy to collect, because the technology generates them effortlessly.
Then reality hits: yes, it’s measurable, but it doesn’t prove what matters.
The mature approach is the opposite:
- define the clinical question
- define what “success” really means
- choose the metric that truly represents it
Simple, but rarely done well.
Speed comes from reducing uncertainty, not ignoring it
Every innovative device carries uncertainty.
That’s not a flaw, it’s the nature of innovation.
A solid evidence strategy doesn’t promise perfection, it does something more valuable:
- makes key uncertainties explicit
- defines how to reduce them
- clarifies what happens if data doesn’t behave as expected
Trust is built on the certainty that “if things change, we know how to move.”
The page that creates clarity: the evidence map
To understand whether a project is ready, you often need a single page.
Not to impress, but to remove ambiguity (the real source of rework).
A solid evidence map includes:
- a single-sentence clinical claim
- a clearly bounded population
- the real-world use scenario
- the comparator or standard of care
- primary and secondary endpoints chosen for relevance
- the 3–5 risks that can truly derail the study
- a plan to reduce uncertainty across the lifecycle
- clear decision points: who decides, how, and where it’s documented
When this page is clear, conversations get shorter and studies become defensible.
Two mistakes that cost months (even with great teams)
Mistake #1: confusing available data with useful data
Especially in digital devices: plenty of outputs, little clinical meaning.
Mistake #2: leaving decision forks implicit
When deviations, outliers or context changes appear, every choice becomes a new meeting.
It’s not competence that was missing: alignment was.
A final check
If you want a fast reality check, ask:
- Can everyone state the claim in one sentence, and all say the same thing?
- Can we explain success without showing a chart?
- Which endpoint, if it fails, breaks everything?
- Which risk actually costs us time in real studies?
- Can we reconstruct decisions months later without starting over?
A study becomes calm not when nothing happens but when, if something happens, you don’t lose the thread.
In 2026, the difference will come down to who has the clearer method for building evidence that holds, even when the context evolves.