# Lurnet - Detailed Roadmap

Phase-by-phase plan with deliverables, success criteria, dependencies, and gating decisions. References `concept.md` for vision, `planning.md` for architecture and workstreams, and `spec/product.md` for personas and requirements.
## Roadmap at a glance
| Phase | Focus | Approx. duration | Exit gate |
|---|---|---|---|
| Phase 0 | Discovery + Concierge Validation | 4-8 weeks | Discovery and concierge gates met; design partner committed |
| Phase 1 | Narrative Validation | 8-10 weeks | One paying design partner using the product weekly; artifact approval rate >=60% |
| Phase 2 | Demand Capture | 8-12 weeks | Three paying customers; qualified leads routed with product context |
| Phase 3 | Assisted Outbound | 12-16 weeks | Approved outbound assets contribute measurable pipeline for at least two customers |
| Phase 4 | Learning Loop | 12-16 weeks | Product-level narrative recommendations improve conversion for active customers |
| Phase 5 | Scale | 6+ months | Retention, expansion, self-serve, and channel signals justify scaling |
Estimates assume a small founding team. Phase durations slip if validation slips; do not start the next phase until the previous phase's exit gate is met.
## Phase 0: Discovery and Concierge Validation
Goal: Validate the product thesis with real multi-product B2B SaaS teams before investing in the productized workflow.
Activities:
- Run 5-10 customer discovery calls with PMM, demand gen, and RevOps leaders.
- Recruit one design partner willing to share catalog data and run a manual concierge test.
- Manually generate 10-20 product GTM kits using LLM tools and a structured prompt template.
- Mock or publish landing pages for 5 products with lightweight capture forms.
- Route leads or sample submissions into HubSpot or a generic webhook with product context.
- Run for 2-4 weeks and measure the concierge gates from `planning.md`.
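The "structured prompt template" in the activities above can be as simple as a function that renders one concierge prompt per catalog row. A minimal sketch, assuming hypothetical field names (`name`, `category`, `icp`, `proof_points`) rather than a committed schema:

```python
def build_kit_prompt(product: dict) -> str:
    """Render one concierge GTM-kit prompt from a catalog row.

    Field names here are illustrative, not a committed schema.
    """
    proof = "; ".join(product.get("proof_points", [])) or "none provided"
    return (
        f"Product: {product['name']} ({product['category']})\n"
        f"Target buyer: {product['icp']}\n"
        f"Evidence you may cite: {proof}\n"
        "Write a GTM kit: positioning statement, three value props, "
        "a landing-page hero, and a 3-step outbound opener. "
        "Flag any claim not backed by the evidence above as RISKY."
    )
```

Keeping the template in one place makes the concierge test reproducible and gives the scoring rubric a stable input to score against.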
Deliverables:
- Discovery call notes.
- Concierge prompt template and scoring rubric.
- Concierge test report: gate-by-gate results, lessons, and design-partner quotes.
- Decision: proceed to Phase 1, pivot, or kill.
Success criteria:
- Catalog activation: 20+ products imported or mapped.
- Artifact approval: >=60% approved or lightly edited.
- Time to first page: under one business day from input to approved page.
- Demand capture: at least one qualified form fill, reply, or sales conversation tied to a Lurnet-generated asset.
- Routing value: design partner agrees product context in CRM is useful.
- Repeat usage: design partner returns weekly.
- Commercial signal: design partner accepts catalog-entry pricing as plausible.
Dependencies:
- Founder time for sales and discovery.
- Network introductions to multi-product B2B SaaS teams.
- LLM API access and a small experiment budget.
Gating decisions at exit:
- Are the gates met? If not, pivot the buyer, beachhead vertical, or value proposition.
- Is one design partner ready to convert to a paid Phase 1 pilot?
- Is the catalog data shape consistent enough to design ingestion against?
Resourcing: founder-led; one engineer part-time on prompt/templates; no durable infrastructure yet.
## Phase 1: Narrative Validation
Goal: Productize the concierge narrative workflow. One paying design partner imports, reviews, edits, approves, and exports product-level GTM kits weekly.
Workstreams:
| # | Workstream | Deliverable | Success criteria |
|---|---|---|---|
| W1 | Product domain + persistence | Tenant, CatalogEntry, GtmArtifact, ArtifactVersion, AuditLog; basic CRUD APIs | MVP entities tested; tenant isolation enforced; audit log captures approval and edits |
| W2 | Catalog import | CSV upload, schema mapping UI, validation, import history, change detection | 1,000-row import in <60s; row-level validation errors; retry without duplicate rows |
| W3 | Narrative engine v1 | Multi-step pipeline, provider abstraction, OpenAI adapter, per-section regeneration, cost reporting | <90s per-product full generation; cost tracked; risky claims surfaced |
| W4 | Review + approval workspace | Per-product workspace, per-section edit/regenerate/approve/reject, version history, diff view | 5 approval cycles per kit without blocking workflow; version history navigable |
| W5 | Manual export and dogfood | Export approved copy; internal dogfood catalog with 7-10 Lurnet positioning variants | Dogfood produces real marketing assets and exposes workflow friction |
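W2's "retry without duplicate rows" criterion implies the import must be idempotent: re-uploading the same file after a partial failure should skip rows already ingested and report row-level validation errors. A minimal sketch, assuming a hypothetical `sku`-keyed schema:

```python
import csv
import io

REQUIRED = ("sku", "name", "category")  # illustrative required columns

def import_catalog(csv_text: str, existing_skus: set[str]):
    """Validate rows and skip already-imported SKUs so a retry never
    double-imports. Returns (imported_rows, row_level_errors)."""
    imported, errors = [], []
    # start=2: row 1 is the header, so errors point at the file's real line
    for line_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        missing = [f for f in REQUIRED if not (row.get(f) or "").strip()]
        if missing:
            errors.append((line_no, f"missing {', '.join(missing)}"))
            continue
        if row["sku"] in existing_skus:
            continue  # already imported: a retry is a no-op for this row
        existing_skus.add(row["sku"])
        imported.append(row)
    return imported, errors
```

In the real system `existing_skus` would be a database lookup per tenant; the point is that the dedupe key is decided at import time, not patched in later.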
Deliverables:
- Lurnet web app for one assisted tenant.
- CSV catalog import and product workspace.
- GTM kit generation, editing, approval, and export.
- Onboarding runbook for concierge-assisted tenants.
- Internal dogfood of Lurnet's own positioning variants.
Success criteria:
- One paying design partner uses Lurnet weekly for at least 4 weeks.
- >=60% artifact approval rate with light edits.
- 10-20 approved GTM kits.
- First manual export or activation through the customer's existing tools.
- Design partner agrees the workflow saves meaningful PMM/growth time.
Dependencies:
- Phase 0 exit gates met.
- Design partner catalog data is structurally consistent.
- LLM cost budget for one tenant.
- Simple hosted application and managed database; no Kubernetes requirement.
Gating decisions at exit:
- Is the design partner willing to continue into demand capture?
- What is the activation friction for tenant #2?
- Has artifact quality stabilized, or does the narrative engine need a quality push?
- Does HubSpot-first still match design-partner needs?
Resourcing: two engineers and the founder full-time on product, plus part-time design.
## Phase 2: Demand Capture
Goal: Turn approved narratives into published pages, captured leads, CRM routing, and measurable demand signal.
Workstreams:
| # | Workstream | Deliverable | Success criteria |
|---|---|---|---|
| W6 | Landing + capture | Hosted page renderer, slug routing, capture form, dedupe, page metrics | Page publish in <2 min; dedupe configurable; visits and form fills recorded |
| W7 | Routing | HubSpot webhook, generic webhook fallback, signing, retry, delivery dashboard | Product context lands in CRM reliably; failures are visible and retryable |
| W8 | Analytics v1 | Per-product metrics: imports, approvals, publishes, visits, fills, routes; CSV export | Time-range filters work; metrics reconcile to raw events |
| W9 | Pilot hardening | Error states, seed data, onboarding polish, basic auth/RBAC hardening | Three customers can use the product without founder-only intervention |
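W7's "signing, retry, delivery dashboard" reduces to two small mechanisms: an HMAC signature the receiver can recompute, and delivery attempts that retry with backoff instead of dropping silently. A minimal sketch, assuming a hypothetical `X-Lurnet-Signature` header name and a caller-supplied `post()` HTTP client:

```python
import hashlib
import hmac
import json
import time

def sign(payload: bytes, secret: bytes) -> str:
    """HMAC-SHA256 hex digest; the receiver recomputes it to verify origin."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def deliver(payload: dict, secret: bytes, post, retries: int = 3,
            backoff: float = 1.0) -> bool:
    """POST a signed payload with exponential backoff between attempts.

    post(body, headers) is the caller's HTTP client, returning True on 2xx.
    A False return is what the delivery dashboard surfaces as retryable.
    """
    body = json.dumps(payload, sort_keys=True).encode()
    headers = {"X-Lurnet-Signature": sign(body, secret)}
    for attempt in range(retries):
        if post(body, headers):
            return True
        time.sleep(backoff * 2 ** attempt)  # 1s, 2s, 4s, ...
    return False
```

Signing over a canonical (sorted-key) serialization matters: the signature must match the exact bytes sent, and the same pattern works for both the HubSpot webhook and the generic fallback.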
Deliverables:
- Hosted landing pages.
- HubSpot webhook and generic webhook fallback.
- Basic per-product analytics.
- Customer onboarding runbook for repeated pilots.
Success criteria:
- Three paying customers.
- Average 10+ published product pages per active customer.
- Qualified leads routed with product context.
- Customers return weekly to review, publish, or inspect results.
- At least one customer referral or expansion request.
Dependencies:
- Phase 1 exit gate met.
- CRM access for design partners.
- Clear definition of qualified lead for each pilot.
Gating decisions at exit:
- Does product-level routing create visible sales value?
- Are customers activating enough pages to justify active-entry pricing?
- Should Salesforce be added before broader sales, based on actual prospects?
Key risks:
- Landing pages do not get enough traffic to prove value. Mitigation: help customers activate pages through owned channels and approved outbound exports.
- Manual onboarding drags. Mitigation: automate only repeated steps observed across customers.
## Phase 3: Assisted Outbound
Goal: Lurnet-generated outbound contributes meaningfully to customer pipeline without owning send infrastructure.
Deliverables:
- Approved sequence drafts for 4-7 step cadences per product.
- Export to customer tools such as HubSpot Sequences, Outreach, Salesloft, or Smartlead.
- Optional mailbox/ESP integration where the customer grants access.
- Reply tracking where permitted.
- Suppression list support.
- Send throttling and unsubscribe handling if Lurnet initiates sends.
- Sequence-performance analytics.
Explicitly deferred: owning SMTP/IPs and autonomous large-volume cold sending.
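Suppression support is small in code but load-bearing for the compliance criterion below: every outbound export or send should pass through one check that covers both exact addresses and whole domains. A minimal sketch, assuming the suppression list mixes both forms:

```python
def is_suppressed(email: str, suppressed: set[str]) -> bool:
    """True if the address or its whole domain is on the suppression list.

    Normalizes case and whitespace so "Jane@Corp.com " matches an entry
    stored as "jane@corp.com".
    """
    addr = email.strip().lower()
    domain = addr.split("@", 1)[-1]
    return addr in suppressed or domain in suppressed
```

Running this check at export time (not just send time) keeps suppressed contacts out of sequences pushed to customer tools like Outreach or Salesloft, where Lurnet no longer controls delivery.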
Success criteria:
- Lurnet-assisted outbound contributes measurable pipeline for at least two customers.
- Reply and meeting rates are competitive with each customer's existing outbound baseline.
- Compliance and suppression issues remain below agreed customer thresholds.
Dependencies:
- Phase 2 exit gate met.
- Customer demand for outbound validated in Phase 2 conversations.
- Legal review of outbound compliance per region.
Gating decisions at exit:
- Is owning send infrastructure still unnecessary?
- Which outbound outcomes should feed the learning loop: replies, meetings, objections, or opportunities?
- Should LinkedIn assist be added before deeper send automation?
## Phase 4: Learning Loop
Goal: Lurnet's product-level memory layer becomes a measurable advantage. Narrative recommendations improve conversion over baseline.
Deliverables:
- Per-product narrative variant tracking with conversion attribution.
- A/B testing UI for landing-page headlines, hero copy, and outbound snippets.
- Narrative recommendations such as "Variant B converts better; promote as default?"
- Cross-product portfolio insights.
- Cohort comparisons.
- Narrative health scoring: claim risk, vagueness, and freshness.
Success criteria:
- At least two customers show measurable conversion improvement from recommendations.
- Customers cite the learning loop as a renewal or expansion driver.
- Learning-loop features are used monthly by most active customers.
Dependencies:
- Phase 3 exit gate met.
- Enough traffic and lead volume for useful comparisons.
- Per-product attribution works reliably.
Gating decisions at exit:
- Which low-risk autonomous capabilities are worth introducing?
- Should benchmark data become a paid product?
Key risks:
- Low-volume products do not reach statistical significance. Mitigation: focus on high-volume products first and aggregate patterns across the portfolio.
- Recommendations feel weak if evidence is thin. Mitigation: only show recommendations when sample size and confidence are sufficient.
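The "only show recommendations when sample size and confidence are sufficient" mitigation can be made concrete with a gate that stays silent unless both a minimum-sample floor and a significance test pass. A minimal sketch using a pooled two-proportion z-test; the thresholds (`min_n=200`, `z_crit=1.96` for ~95% confidence) are illustrative defaults, not tuned values:

```python
import math

def recommend_variant(conv_a: int, n_a: int, conv_b: int, n_b: int,
                      min_n: int = 200, z_crit: float = 1.96):
    """Return 'A' or 'B' only when the evidence clears both gates; else None.

    None means "show no recommendation" rather than a weak one.
    """
    if min(n_a, n_b) < min_n:
        return None  # too little traffic on a low-volume product: stay silent
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    if se == 0:
        return None  # all-converted or none-converted: no usable comparison
    z = (p_b - p_a) / se
    if abs(z) < z_crit:
        return None  # difference not statistically significant
    return "B" if z > 0 else "A"
```

The asymmetry is deliberate: a missing recommendation costs little, while a confidently wrong "promote Variant B as default" erodes trust in the whole learning loop.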
## Phase 5: Scale
Goal: Lurnet becomes a repeatable commercial platform with self-serve onboarding, channel leverage, vertical playbooks, and enterprise readiness.
Deliverables:
- Multi-tenant self-serve onboarding.
- Marketplace and PIM connectors where customers already maintain catalog data.
- Vertical playbooks for DevTools, AI platforms, security, and vertical SaaS.
- Channel partner program for agencies and consultants.
- Cross-product portfolio analytics for executive buyers.
- Bidirectional CRM sync.
- SCIM provisioning, advanced RBAC, and SOC 2 Type 2.
- Scoped per-product autonomous agents for low-risk classes such as ad-copy variants and ICP refinement.
- Multilingual generation.
Success criteria:
- Retention and expansion justify a larger GTM team.
- At least one channel partner sources real pipeline.
- Self-serve onboarding works for simple catalogs.
- Enterprise security posture supports larger contracts.
Dependencies:
- Phase 4 exit gate met.
- Revenue, marketing, product, and operations leadership in place.
- Funding or revenue to support team expansion.
## Cross-phase principles
- Validation gates are sacred. Do not start the next phase without meeting the current phase's exit gate. Slippage is preferable to building on unvalidated ground.
- Cut scope, not quality. Each phase has must-have and nice-to-have features; cut nice-to-have first when timelines compress.
- Dogfood from day one. Every Lurnet feature should be testable on Lurnet's own positioning variants before customer release.
- Approval gates are non-negotiable. Every customer-facing claim Lurnet generates ships through human review until low-risk autonomy is backed by evidence.
- Per-product attribution is foundational. Build the data model right in Phase 1 and Phase 2; the learning loop depends on it.