Lurnet - Detailed Roadmap

Phase-by-phase plan with deliverables, success criteria, dependencies, and gating decisions. References concept.md for vision, planning.md for architecture and workstreams, and spec/product.md for personas and requirements.

Roadmap at a glance

| Phase | Focus | Approx. duration | Exit gate |
|---|---|---|---|
| Phase 0 | Discovery + Concierge Validation | 4-8 weeks | Discovery and concierge gates met; design partner committed |
| Phase 1 | Narrative Validation | 8-10 weeks | One paying design partner using the product weekly; artifact approval rate >=60% |
| Phase 2 | Demand Capture | 8-12 weeks | Three paying customers; qualified leads routed with product context |
| Phase 3 | Assisted Outbound | 12-16 weeks | Approved outbound assets contribute measurable pipeline for at least two customers |
| Phase 4 | Learning Loop | 12-16 weeks | Product-level narrative recommendations improve conversion for active customers |
| Phase 5 | Scale | 6+ months | Retention, expansion, self-serve, and channel signals justify scaling |

Estimates assume a small founding team. Phase durations slip if validation slips; do not start the next phase until the previous phase's exit gate is met.

Phase 0: Discovery and Concierge Validation

Goal: Validate the product thesis with real multi-product B2B SaaS teams before investing in the productized workflow.

Activities:

  • Run 5-10 customer discovery calls with PMM, demand gen, and RevOps leaders.
  • Recruit one design partner willing to share catalog data and run a manual concierge test.
  • Manually generate 10-20 product GTM kits using LLM tools and a structured prompt template.
  • Mock or publish landing pages for 5 products with lightweight capture forms.
  • Route leads or sample submissions into HubSpot or a generic webhook with product context.
  • Run for 2-4 weeks and measure the concierge gates from planning.md.
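Routing "with product context" in the concierge test can be as simple as attaching catalog fields to each lead before posting it to HubSpot or a webhook. A minimal sketch, assuming illustrative field names (not a Lurnet or HubSpot schema):

```python
import json

def build_lead_payload(lead: dict, product: dict) -> str:
    """Attach product context to a captured lead before routing it.

    All field names here are illustrative assumptions.
    """
    payload = {
        "lead": {
            "email": lead["email"],
            "source_page": lead.get("source_page"),
        },
        "product_context": {
            "product_name": product["name"],
            "catalog_entry_id": product["id"],
            "narrative_variant": product.get("variant"),
        },
    }
    return json.dumps(payload)

# Example: a form fill on a product page, routed with catalog context.
body = build_lead_payload(
    {"email": "jane@example.com", "source_page": "/p/acme-widgets"},
    {"name": "Acme Widgets", "id": "cat-001", "variant": "v2"},
)
```

In Phase 0 this payload can be assembled by hand per lead; what matters is that the design partner sees the product context land in their CRM.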

Deliverables:

  • Discovery call notes.
  • Concierge prompt template and scoring rubric.
  • Concierge test report: gate-by-gate results, lessons, and design-partner quotes.
  • Decision: proceed to Phase 1, pivot, or kill.

Success criteria:

  • Catalog activation: 20+ products imported or mapped.
  • Artifact approval: >=60% approved or lightly edited.
  • Time to first page: under one business day from input to approved page.
  • Demand capture: at least one qualified form fill, reply, or sales conversation tied to a Lurnet-generated asset.
  • Routing value: design partner agrees product context in CRM is useful.
  • Repeat usage: design partner returns weekly.
  • Commercial signal: design partner accepts catalog-entry pricing as plausible.

Dependencies:

  • Founder time for sales and discovery.
  • Network introductions to multi-product B2B SaaS teams.
  • LLM API access and a small experiment budget.

Gating decisions at exit:

  • Are the gates met? If not, pivot the buyer, beachhead vertical, or value proposition.
  • Is one design partner ready to convert to a paid Phase 1 pilot?
  • Is the catalog data shape consistent enough to design ingestion against?

Resourcing: founder-led; one engineer part-time on prompt/templates; no durable infrastructure yet.

Phase 1: Narrative Validation

Goal: Productize the concierge narrative workflow. One paying design partner imports, reviews, edits, approves, and exports product-level GTM kits weekly.

Workstreams:

| # | Workstream | Deliverable | Success criteria |
|---|---|---|---|
| W1 | Product domain + persistence | Tenant, CatalogEntry, GtmArtifact, ArtifactVersion, AuditLog; basic CRUD APIs | MVP entities tested; tenant isolation enforced; audit log captures approval and edits |
| W2 | Catalog import | CSV upload, schema mapping UI, validation, import history, change detection | 1,000-row import in <60s; row-level validation errors; retry without duplicate rows |
| W3 | Narrative engine v1 | Multi-step pipeline, provider abstraction, OpenAI adapter, per-section regeneration, cost reporting | <90s per-product full generation; cost tracked; risky claims surfaced |
| W4 | Review + approval workspace | Per-product workspace, per-section edit/regenerate/approve/reject, version history, diff view | 5 approval cycles per kit without blocking workflow; version history navigable |
| W5 | Manual export and dogfood | Export approved copy; internal dogfood catalog with 7-10 Lurnet positioning variants | Dogfood produces real marketing assets and exposes workflow friction |
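The W1 entities and the audit requirement can be sketched as a data model. This is an illustrative shape only; field names, types, and the status vocabulary are assumptions, not the actual Lurnet schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Tenant:
    id: str
    name: str

@dataclass
class CatalogEntry:
    id: str
    tenant_id: str          # every entity carries tenant_id so isolation
    name: str               # can be enforced at the query layer

@dataclass
class ArtifactVersion:
    number: int
    body: str
    created_at: datetime

@dataclass
class GtmArtifact:
    id: str
    tenant_id: str
    catalog_entry_id: str
    status: str = "draft"   # draft -> approved / rejected
    versions: list = field(default_factory=list)

@dataclass
class AuditLogEntry:
    tenant_id: str
    artifact_id: str
    action: str             # e.g. "edit", "approve", "reject"
    at: datetime

audit_log: list = []

def approve(artifact: GtmArtifact) -> None:
    """Approvals are always written to the audit log, per W1."""
    artifact.status = "approved"
    audit_log.append(AuditLogEntry(
        tenant_id=artifact.tenant_id,
        artifact_id=artifact.id,
        action="approve",
        at=datetime.now(timezone.utc),
    ))

kit = GtmArtifact(id="a1", tenant_id="t1", catalog_entry_id="c1")
approve(kit)
```

Putting `tenant_id` on every row from day one is what makes the Phase 2 and Phase 4 per-product attribution cheap later.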

Deliverables:

  • Lurnet web app for one assisted tenant.
  • CSV catalog import and product workspace.
  • GTM kit generation, editing, approval, and export.
  • Onboarding runbook for concierge-assisted tenants.
  • Internal dogfood of Lurnet's own positioning variants.
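The W2 import contract (row-level validation errors, retry without duplicate rows) implies an idempotent importer keyed on a stable identifier. A minimal sketch, assuming a `sku` column as the natural key (column names are assumptions):

```python
import csv
import io

def import_catalog(csv_text: str, existing: dict) -> tuple:
    """Import rows; return (imported_keys, errors).

    `existing` maps sku -> row, so re-running the same file is a no-op
    rather than creating duplicates.
    """
    imported, errors = [], []
    # start=2: line 1 of the file is the header row
    for lineno, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        sku = (row.get("sku") or "").strip()
        if not sku:
            errors.append((lineno, "missing sku"))  # row-level error; import continues
            continue
        if sku in existing:
            continue                                # idempotent retry: skip duplicates
        existing[sku] = row
        imported.append(sku)
    return imported, errors

catalog: dict = {}
data = "sku,name\nW-1,Widget\n,Nameless\nW-1,Widget again\n"
first, errs = import_catalog(data, catalog)
second, _ = import_catalog(data, catalog)  # retry is a no-op
```

Collecting errors per row instead of aborting the whole file is what lets a 1,000-row import surface every bad row in one pass.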

Success criteria:

  • One paying design partner uses Lurnet weekly for at least 4 weeks.
  • >=60% artifact approval rate with light edits.
  • 10-20 approved GTM kits.
  • First manual export or activation through the customer's existing tools.
  • Design partner agrees the workflow saves meaningful PMM/growth time.

Dependencies:

  • Phase 0 exit gates met.
  • Design partner catalog data is structurally consistent.
  • LLM cost budget for one tenant.
  • Simple hosted application and managed database; no Kubernetes requirement.

Gating decisions at exit:

  • Is the design partner willing to continue into demand capture?
  • What is the activation friction for tenant #2?
  • Has artifact quality stabilized, or does the narrative engine need a quality push?
  • Does HubSpot-first still match design-partner needs?

Resourcing: two engineers and the founder full-time on product, with part-time design support.

Phase 2: Demand Capture

Goal: Turn approved narratives into published pages, captured leads, CRM routing, and measurable demand signal.

Workstreams:

| # | Workstream | Deliverable | Success criteria |
|---|---|---|---|
| W6 | Landing + capture | Hosted page renderer, slug routing, capture form, dedupe, page metrics | Page publish in <2 min; dedupe configurable; visits and form fills recorded |
| W7 | Routing | HubSpot webhook, generic webhook fallback, signing, retry, delivery dashboard | Product context lands in CRM reliably; failures are visible and retryable |
| W8 | Analytics v1 | Per-product metrics: imports, approvals, publishes, visits, fills, routes; CSV export | Time-range filters work; metrics reconcile to raw events |
| W9 | Pilot hardening | Error states, seed data, onboarding polish, basic auth/RBAC hardening | Three customers can use the product without founder-only intervention |
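The W7 signing-and-retry contract can be sketched as follows: each delivery carries an HMAC signature the receiver can verify, and failed sends are retried a bounded number of times. Header name, secret handling, and the transport are assumptions, not a specified Lurnet interface:

```python
import hashlib
import hmac
import json

SECRET = b"shared-webhook-secret"  # placeholder; a real secret is per-tenant

def sign(body: bytes) -> str:
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def deliver(payload: dict, send, max_attempts: int = 3):
    """Attempt delivery up to max_attempts; return (ok, attempts_used)."""
    body = json.dumps(payload).encode()
    headers = {"X-Signature": sign(body)}
    for attempt in range(1, max_attempts + 1):
        if send(body, headers):  # send() is the transport, e.g. an HTTP POST
            return True, attempt
        # a real implementation would sleep with backoff between attempts
    return False, max_attempts

# Simulated endpoint: fails once, then verifies the signature and accepts.
calls = []
def flaky_endpoint(body, headers):
    calls.append(body)
    if len(calls) == 1:
        return False
    return hmac.compare_digest(headers["X-Signature"], sign(body))

ok, attempts = deliver(
    {"lead": "jane@example.com", "product": "cat-001"}, flaky_endpoint
)
```

Recording the attempt count per delivery is what makes failures "visible and retryable" on a delivery dashboard rather than silently dropped.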

Deliverables:

  • Hosted landing pages.
  • HubSpot webhook and generic webhook fallback.
  • Basic per-product analytics.
  • Customer onboarding runbook for repeated pilots.

Success criteria:

  • Three paying customers.
  • Average 10+ published product pages per active customer.
  • Qualified leads routed with product context.
  • Customers return weekly to review, publish, or inspect results.
  • At least one customer referral or expansion request.

Dependencies:

  • Phase 1 exit gate met.
  • CRM access for design partners.
  • Clear definition of qualified lead for each pilot.

Gating decisions at exit:

  • Does product-level routing create visible sales value?
  • Are customers activating enough pages to justify active-entry pricing?
  • Should Salesforce be added before broader sales, based on actual prospects?

Key risks:

  • Landing pages do not get enough traffic to prove value. Mitigation: help customers activate pages through owned channels and approved outbound exports.
  • Manual onboarding drags. Mitigation: automate only repeated steps observed across customers.

Phase 3: Assisted Outbound

Goal: Lurnet-generated outbound contributes meaningfully to customer pipeline without owning send infrastructure.

Deliverables:

  • Approved sequence drafts for 4-7 step cadences per product.
  • Export to customer tools such as HubSpot Sequences, Outreach, Salesloft, or Smartlead.
  • Optional mailbox/ESP integration where the customer grants access.
  • Reply tracking where permitted.
  • Suppression list support.
  • Send throttling and unsubscribe handling if Lurnet initiates sends.
  • Sequence-performance analytics.

Explicitly deferred: owning SMTP/IPs and autonomous large-volume cold sending.

Success criteria:

  • Lurnet-assisted outbound contributes measurable pipeline for at least two customers.
  • Reply and meeting rates are competitive with each customer's existing outbound baseline.
  • Compliance and suppression issues remain below agreed customer thresholds.

Dependencies:

  • Phase 2 exit gate met.
  • Customer demand for outbound validated in Phase 2 conversations.
  • Legal review of outbound compliance per region.

Gating decisions at exit:

  • Is owning send infrastructure still unnecessary?
  • Which outbound outcomes should feed the learning loop: replies, meetings, objections, or opportunities?
  • Should LinkedIn assist be added before deeper send automation?

Phase 4: Learning Loop

Goal: Lurnet's product-level memory layer becomes a measurable advantage. Narrative recommendations improve conversion over baseline.

Deliverables:

  • Per-product narrative variant tracking with conversion attribution.
  • A/B testing UI for landing-page headlines, hero copy, and outbound snippets.
  • Narrative recommendations such as "Variant B converts better; promote as default?"
  • Cross-product portfolio insights.
  • Cohort comparisons.
  • Narrative health scoring: claim risk, vagueness, and freshness.

Success criteria:

  • At least two customers show measurable conversion improvement from recommendations.
  • Customers cite the learning loop as a renewal or expansion driver.
  • Learning-loop features are used monthly by most active customers.

Dependencies:

  • Phase 3 exit gate met.
  • Enough traffic and lead volume for useful comparisons.
  • Per-product attribution works reliably.

Gating decisions at exit:

  • Which low-risk autonomous capabilities are worth introducing?
  • Should benchmark data become a paid product?

Key risks:

  • Low-volume products do not reach statistical significance. Mitigation: focus on high-volume products first and aggregate patterns across the portfolio.
  • Recommendations feel weak if evidence is thin. Mitigation: only show recommendations when sample size and confidence are sufficient.
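The "only show recommendations when sample size and confidence are sufficient" rule can be made concrete with a two-proportion z-test gate. Thresholds below are illustrative, not tuned values:

```python
from math import sqrt

def recommend_variant(conv_a, n_a, conv_b, n_b, min_n=200, z_crit=1.96):
    """Return 'A', 'B', or None when the evidence is insufficient.

    min_n and z_crit (~95% two-sided) are illustrative defaults.
    """
    if n_a < min_n or n_b < min_n:
        return None                        # too little traffic to call it
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)    # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    if se == 0:
        return None
    z = (p_b - p_a) / se
    if abs(z) < z_crit:
        return None                        # difference not significant
    return "B" if z > 0 else "A"

# A clear winner with enough volume vs. a thin, inconclusive sample.
strong = recommend_variant(30, 1000, 60, 1000)
thin = recommend_variant(3, 50, 6, 50)
```

Returning `None` for thin samples is the product behavior that keeps recommendations from feeling weak: Lurnet stays silent rather than over-claiming.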

Phase 5: Scale

Goal: Lurnet becomes a repeatable commercial platform with self-serve onboarding, channel leverage, vertical playbooks, and enterprise readiness.

Deliverables:

  • Multi-tenant self-serve onboarding.
  • Marketplace and PIM connectors where customers already maintain catalog data.
  • Vertical playbooks for DevTools, AI platforms, security, and vertical SaaS.
  • Channel partner program for agencies and consultants.
  • Cross-product portfolio analytics for executive buyers.
  • Bidirectional CRM sync.
  • SCIM provisioning, advanced RBAC, and SOC 2 Type 2.
  • Scoped per-product autonomous agents for low-risk classes such as ad-copy variants and ICP refinement.
  • Multilingual generation.

Success criteria:

  • Retention and expansion justify a larger GTM team.
  • At least one channel partner sources real pipeline.
  • Self-serve onboarding works for simple catalogs.
  • Enterprise security posture supports larger contracts.

Dependencies:

  • Phase 4 exit gate met.
  • Revenue, marketing, product, and operations leadership in place.
  • Funding or revenue to support team expansion.

Cross-phase principles

  • Validation gates are sacred. Do not start the next phase without meeting the current phase's exit gate. Slippage is preferable to building on unvalidated ground.
  • Cut scope, not quality. Each phase has must-have and nice-to-have features; cut nice-to-have first when timelines compress.
  • Dogfood from day one. Every Lurnet feature should be testable on Lurnet's own positioning variants before customer release.
  • Approval gates are non-negotiable. Every customer-facing claim Lurnet generates ships through human review until low-risk autonomy is backed by evidence.
  • Per-product attribution is foundational. Build the data model right in Phase 1 and Phase 2; the learning loop depends on it.