Improving activation by cutting onboarding from 21 steps to one

Research and a 30-minute A/B test revealed that Intruder's carefully built onboarding flow was actively harming conversion - and established experimentation as a new norm in the process.

Role Lead designer & project lead
Collaborators Product squad, Leadership, Data, Engineering
When Q1 2025
Intruder onboarding - target type selection and scan progress UI

Context

There was internal consensus that the free trial experience could work harder, but nobody had looked closely at the data. Following a review of the marketing site's conversion performance, I took it upon myself to investigate.

Intruder had a 21-step onboarding flow covering everything from adding colleagues to reviewing scan results. After investigating drop-off data, I found that fewer than 1% of trials were completing the whole thing. More than 50% weren't completing even the second step.

Marketing site review covering structural, operational and pricing page observations
The marketing site review that preceded this work - covering conversion observations, structural opportunities, and pricing page analysis.

The problem

The 21-step flow walked users through adding colleagues, adding targets, starting a scan, configuring integrations, reviewing results, fixing vulnerabilities, and running a confirmation scan. The steps had been chosen based on a combination of correlation with activation and logical sequencing - well-intentioned, but built around assumed behaviour rather than observed behaviour.

The old Intruder Getting Started onboarding flow showing the adding targets step
The existing onboarding flow - here on the "Adding targets" step, one of 21. The sidebar progress tracker shows how much remained.
Complete onboarding flow diagram showing all 21 steps and decision points
The full onboarding flow mapped out - 21 steps with branching logic. Less than 1% of trial users completed it.

Session recordings revealed that many users were skipping around the flow or ignoring it entirely. Interviews with recently activated customers surfaced three consistent patterns:

  • Preference to roam - users consistently explored the platform themselves, consciously avoiding onboarding
  • Inbuilt inertia - onboarding actually slowed customers down from getting to their job-to-be-done
  • JTBD undervalued - the flow focused on features rather than the task customers were hiring Intruder to achieve

Research and insight

A competitive review of seven similar products found an average of ~9 steps, with a large minority having no formal onboarding flow at all. Both approaches prioritised momentum over instruction.

The 'Aha moment' - when trial customers reliably understood the value of Intruder - was when a vulnerability scan completed and returned issues. The biggest unavoidable obstacle was scan duration: on average ~71 minutes. A long time to keep someone engaged with a flow that was already losing half of them by step two.

Current state journey map showing user goals, emotional state, pain points and opportunities across the trial flow
Current state journey map - charting user goals, emotional state, pain points, and opportunities across the full trial experience from sign-up to conversion.
Onboarding analysis FigJam board covering problem definition, peace of mind framing, and experiment hypothesis
The analysis board used to frame the problem and build the case for removing onboarding - covering the peace of mind insight, drop-off data, and experiment rationale.

Strategic reframe

Internally, the preference was to condense the existing flow - seen as the safest option and aligned with what competitors were doing. I advocated for removing it entirely and testing that hypothesis before committing to a direction.

Intruder didn't have infrastructure for redirect tests, and the culture wasn't yet comfortable with experimentation at this scale. I spent considerable time making the case - reframing the question not as "self-service vs. onboarding" but as: would removing the flow reduce conversion by more than 1%?
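That decision rule can be sketched as a simple guardrail check. This is illustrative Python only - the function names, thresholds, and sample numbers are assumptions, not Intruder's actual tooling - but it captures the logic: accept the variant unless the confidence interval for the conversion difference plausibly includes a drop of more than one percentage point.

```python
import math


def conversion_delta_ci(conv_a: int, n_a: int, conv_b: int, n_b: int,
                        z: float = 1.96) -> tuple:
    """95% normal-approximation CI for (variant rate - control rate)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    delta = p_b - p_a
    return delta - z * se, delta + z * se


def passes_guardrail(conv_control: int, n_control: int,
                     conv_variant: int, n_variant: int,
                     floor: float = -0.01) -> bool:
    """Accept the variant unless it plausibly costs >1pt of conversion."""
    lower, _ = conversion_delta_ci(conv_control, n_control,
                                   conv_variant, n_variant)
    return lower >= floor
```

With invented numbers: at 10,000 trials per arm, a variant converting at 10.5% against a 10% control clears the guardrail, while one converting at 7% does not.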

Document showing A/B test rationale, addressing criticism and potential future directions
The rationale document used to gain approval - addressing likely objections, defining the decision rule, and framing what the experiment would teach us regardless of outcome.

The test

Once approved, the test took under 30 minutes to implement. For 50% of new trial customers, we redirected sign-in directly to the empty state of the dashboard - no onboarding flow, no changes to the empty state itself. The deliberately minimal intervention isolated the effect of removing onboarding entirely.
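The mechanics of a split like this are genuinely small - which is why it took under 30 minutes. A hypothetical Python sketch (path names and experiment key are invented for illustration); the important property is deterministic bucketing, so a returning user never flips variants mid-trial:

```python
import hashlib

ONBOARDING_PATH = "/getting-started"  # the existing 21-step flow
DASHBOARD_PATH = "/dashboard"         # the unchanged empty state


def bucket_for(user_id: str, experiment: str) -> str:
    """Hash-based 50/50 assignment; the same user always gets the same bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "variant" if digest[0] % 2 else "control"


def post_sign_in_redirect(user_id: str) -> str:
    """Variant users skip onboarding entirely and land on the dashboard."""
    if bucket_for(user_id, "remove-onboarding") == "variant":
        return DASHBOARD_PATH
    return ONBOARDING_PATH
```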

Old Intruder dashboard empty state showing 'Nothing to see here' message with Add target CTA
The empty state that 50% of test users landed on after sign-in - unchanged from production. No onboarding, no guidance. Just the product.
-78% Reduction in time to activate
+33% Increase in monthly recurring revenue from target customers
+30% Customer activation

The results were unambiguous: the existing flow was actively harming conversion, and reducing friction while preserving customer momentum drove meaningfully better outcomes.

What came next

With a validated baseline, I mapped an idealised future state and scoped improvements in tiers by effort and impact. A new empty state gave users immediate, intent-based choices for how to start - cloud assets, external infrastructure, web application, or a demo target.

New Intruder empty state showing Add and scan targets with four target type options
The new empty state - replacing "Nothing to see here" with immediate, intent-based choices. Cloud first, with a demo target for those not yet ready.

Early findings surfaced during the scan itself, reducing the ~71-minute wait for a result. A redesigned scan progress view showed live findings as they emerged, giving users something meaningful to engage with before the scan completed.

New scan in progress view showing real-time findings including SQL Injection and other vulnerabilities
The new scan progress view - surfacing early findings mid-scan, so users don't wait ~71 minutes for their first signal of value.
Scan setup modal showing recurring scan configuration with weekly, monthly and quarterly options
The scan setup modal - defaulting to recurring scans so new users were automatically positioned for ongoing monitoring from day one.
Possible future state journey map showing intent capture, quick setup, value preview and first win stages
The possible future state - a journey built around intent capture, fast time-to-value, and contextual deepening after the first win rather than before it.

Outcomes

With the direction validated, the redesigned empty state, early scan findings, and recurring scan defaults drove further gains in the months after release:

+150% Increase in external targets added in trials
+53% Increase in licenses bought at activation
+37% Increase in customer activation
+25% First targets added

Success for this project mainly came from customers adding more targets during their trial and then licensing them when they signed up - driven by the emphasis on external targets, and specifically targets sourced from cloud platforms, which we made more prominent as part of this work.

With a median customer ARR of roughly $3,100, even modest improvements in activation rate compounded quickly into meaningful revenue impact.
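To make that compounding concrete, a back-of-envelope calculation - the trial volume and activation rates below are invented purely for illustration; only the ~$3,100 median ARR figure comes from this project:

```python
MEDIAN_ARR = 3_100  # roughly the median customer ARR (USD)


def annual_revenue_lift(trials_per_month: int,
                        baseline_activation: float,
                        relative_uplift: float) -> float:
    """Extra ARR per year from a relative improvement in activation rate."""
    extra_customers = trials_per_month * 12 * baseline_activation * relative_uplift
    return extra_customers * MEDIAN_ARR


# Hypothetical: 300 trials/month, 10% baseline activation, +30% relative uplift
# -> 300 * 12 * 0.10 * 0.30 = 108 extra customers -> ~$334,800 in new ARR
```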

Reflections

The hardest part of this project was the advocacy, not the design. I knew from previous experience with A/B testing and growth teams that the experiment was worth running - but I was asking an organisation without a testing culture to bet on removing something they'd invested significant effort in building.

Getting that approval set a precedent that mattered beyond this project. The willingness to question a previous investment on the basis of evidence - rather than defending it - became a norm we could build on.

The time it took to run a scan remained the biggest structural constraint in the trial experience; as a result of this project, scan duration became an internally monitored and actively managed metric. Issue previews and scan milestone updates let us work around the constraint, giving customers meaningful signals of value before the full scan completed rather than asking them to wait.