Lurnet - Go-to-Market Strategy

Pricing detail, GTM motion by phase, buyer journey, success metrics, risk register, and competitive positioning. References concept.md (vision, market wedge), spec/product.md (personas), and spec/roadmap.md (phase goals).

1. Pricing and Packaging

Pricing is a hypothesis until discovery and design-partner work show which budget Lurnet belongs to: product marketing, demand capture, or broader GTM automation.

1.1 Packaging hypotheses

Possible packages:

  • Pilot: concierge-assisted activation for one product line or a fixed number of active catalog entries.
  • Growth: broader catalog activation with hosted pages, CRM routing, analytics, and assisted outbound exports.
  • Enterprise: governance, SSO, advanced routing, security review, dedicated support, and expanded integrations.

"Active entry" should mean a catalog entry with at least one approved artifact plus either a published page or an activated channel asset in the billing period. Inactive entries should not count toward usage.
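Under this definition, the billing check is a simple predicate. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """Hypothetical billing view of one catalog entry for one billing period."""
    approved_artifacts: int = 0
    page_published: bool = False
    channel_asset_activated: bool = False

def is_active(entry: CatalogEntry) -> bool:
    # Active = at least one approved artifact AND either a published page
    # or an activated channel asset in the billing period.
    return entry.approved_artifacts >= 1 and (
        entry.page_published or entry.channel_asset_activated
    )

def billable_entries(entries: list[CatalogEntry]) -> int:
    # Inactive entries do not count toward usage.
    return sum(is_active(e) for e in entries)
```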

1.2 Packaging principles

  • Platform fee covers core workflow: catalog, workspace, narrative engine, approval, hosted pages, basic routing.
  • Active-entry pricing aligns price with value: every active product represents a delivered unit of GTM motion.
  • Design-partner pilots should reduce buying friction, but exact discounts should be set after discovery.
  • Annual contracts should wait until pilot value is visible.

1.3 Expansion levers

Customers expand by:

  1. Activating more products.
  2. Adding workspaces for separate brands or business units.
  3. Adding assisted outbound after the core workflow proves value.
  4. Adding learning-loop / experimentation capabilities after enough traffic exists.
  5. Adding governance, custom domains, SSO, or advanced CRM routing as the account matures.

Net retention target by Phase 5: at least 120%, once there are enough customers for the metric to mean something.
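For reference, net revenue retention is the standard cohort ratio; a sketch with illustrative figures (not Lurnet forecasts):

```python
def net_retention(starting_mrr: float, expansion: float,
                  contraction: float, churn: float) -> float:
    """Standard NRR: revenue from the starting cohort at period end,
    divided by that cohort's revenue at period start."""
    return (starting_mrr + expansion - contraction - churn) / starting_mrr

# A cohort starting at $100k MRR that expands $30k, contracts $2k,
# and churns $6k ends the period at 1.22, i.e. 122% NRR.
nrr = net_retention(100_000, 30_000, 2_000, 6_000)
```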

1.4 What we won't do

  • Per-seat pricing (would penalize teams that scale workflow adoption).
  • Per-API-call pricing (would create cost anxiety in narrative review).
  • Free tier in MVP (signal-to-noise too poor early; revisit in Phase 5).

2. GTM Motion by Phase

2.1 Phase 0: Founder-led discovery and concierge

  • Outbound: founder personal LinkedIn + email outreach to network and targeted introductions.
  • Inbound: founder voice on LinkedIn/X and direct conversations only.
  • Conversion: discovery call → catalog review → concierge test → committed design partner.
  • Volume target: one committed or paying design partner ready for Phase 1.
  • Cost per acquisition: founder time only; track hours per qualified design partner.

2.2 Phase 1: Productized narrative validation

  • Outbound: founder-led outreach to PMM, demand gen, and RevOps leaders using real catalog examples from the concierge test.
  • Inbound: dogfood Lurnet's own product pages and publish proof from the manual test.
  • Conversion: discovery → catalog-based demo → narrative-validation pilot.
  • Volume target: one paying design partner using the product weekly.
  • Buyer: PMM lead (B1) or Demand Gen lead (B2) primary; RevOps (B3) validates routing.

2.3 Phase 2: Demand capture commercialization

  • Outbound: founder + part-time operator once repeatable messaging exists.
  • Inbound: long-tail content around product marketing for multi-product SaaS, catalog-native GTM, and product-level demand capture.
  • Conversion: standardized pilot with success criteria: approved kits, published pages, routed leads, and rep validation.
  • Volume target: three paying customers with active pages and routed leads.
  • CAC target: track payback, but do not optimize paid acquisition until retention is visible.

2.4 Phase 3: Assisted outbound expansion

  • Outbound: dogfood approved sequence drafts and export workflows; do not own send infrastructure.
  • Inbound: case studies showing product-page and routed-lead outcomes.
  • Customer marketing: expansion conversations triggered by active catalog-entry usage.
  • Volume target: outbound assist contributes measurable pipeline for at least two customers.

2.5 Phase 4: Learning-loop proof

  • Inbound: customer-led webinars and evidence around narrative improvement.
  • Customer marketing: quarterly reviews focused on conversion lift, narrative iteration, and portfolio insights.
  • Volume target: learning-loop features become a renewal driver for active customers.

2.6 Phase 5: Scale and verticalization

  • Channel: agency partner program for agencies running GTM ops for SaaS.
  • Marketplace listings: AppExchange, HubSpot App Marketplace, AWS Marketplace when integrations justify them.
  • Vertical playbooks: Lurnet for DevTools, AI platforms, security, and vertical SaaS, each with packaged templates and integrations.
  • Volume target: scale only after Phase 4 shows retention and expansion pull.

3. Buyer Journey

3.1 Awareness

  • Founder content on LinkedIn / X (Phases 0-2).
  • Outbound: targeted outreach to PMM / Demand Gen leads at companies that fit the catalog-shape filter.
  • Inbound: SEO around "marketing automation for multi-product SaaS," "long-tail product marketing," "AI product page generation."
  • Word of mouth: design-partner referrals (often a PMM moves jobs and brings Lurnet along).

3.2 Consideration

Buyer evaluates against:

  • HubSpot / Marketo (incumbent automation; can't they do this?). Counter: catalog-native data model.
  • Tofu HQ, Mutiny (AI-native peers). Counter: SKU-as-unit vs persona/account-as-unit.
  • Internal build (some PMM teams will try Notion + ChatGPT). Counter: workflow + approval + routing > raw generation.
  • Doing nothing. Counter: cost of long-tail neglect modeled in pipeline-contribution calculation.

Sales tools needed:

  • Demo with their actual catalog (this is the moment that converts; insist on it).
  • Comparison sheets vs HubSpot, Tofu, Mutiny.
  • ROI calculator: cost-per-pipeline-dollar with vs without product-level GTM.
  • Case studies (post-Phase 2).
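The ROI calculator reduces to one ratio compared across two scenarios; a sketch with hypothetical input figures (illustrative, not benchmarks):

```python
def cost_per_pipeline_dollar(gtm_cost: float, pipeline: float) -> float:
    """Dollars of GTM spend per dollar of pipeline generated."""
    return gtm_cost / pipeline

# Without product-level GTM: long-tail entries generate no pipeline.
baseline = cost_per_pipeline_dollar(gtm_cost=20_000, pipeline=100_000)     # 0.20
# With product-level GTM: added tool cost, but long-tail pipeline unlocked.
with_lurnet = cost_per_pipeline_dollar(gtm_cost=24_000, pipeline=160_000)  # 0.15
```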

3.3 Trial / pilot

  • Fixed-scope pilot with bounded active entries, concierge onboarding, and weekly check-in.
  • Success criteria documented at kickoff: 5+ approved kits, 5+ published pages, 1+ qualified lead, 1+ rep validation.
  • Conversion gate: hit success criteria → propose annual contract.

3.4 Conversion

  • Pricing presented after pilot, not before (anchors against value, not a feature checklist).
  • Design-partner commercial consideration for the first customers in exchange for weekly feedback and case-study rights.
  • 7-14 day procurement; founder available for legal/security questions.

3.5 Expansion

  • Quarterly account review with PMM lead + customer success.
  • Trigger: customer hits 80% of included entries → expansion conversation.
  • Trigger: customer launches new product line → workspace expansion conversation.
  • Trigger: customer asks about outbound or experimentation → upgrade to Growth + add-on module.
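These triggers are mechanical enough to automate; a sketch, with hypothetical field and play names:

```python
def expansion_triggers(active_entries: int, included_entries: int,
                       new_product_line: bool,
                       asked_about: set[str]) -> list[str]:
    """Map account signals to the expansion plays listed above."""
    plays = []
    # 80% utilization of included entries starts the expansion conversation.
    if active_entries >= 0.8 * included_entries:
        plays.append("entry-expansion conversation")
    if new_product_line:
        plays.append("workspace expansion conversation")
    if asked_about & {"outbound", "experimentation"}:
        plays.append("upgrade to Growth + add-on module")
    return plays
```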

3.6 Renewal

  • 60-day pre-renewal review: usage, pipeline contribution, satisfaction.
  • Renewal lift target: ≥10% on average via tier upgrade or entry expansion.
  • Save plays: discount, scope reduction, executive-sponsor escalation.

4. Metrics Framework

4.1 North Star Metric

Active catalog entries with weekly review activity, across all paying tenants.

This metric captures both adoption (more entries) and engagement (weekly review = workflow stickiness). Doubling this number is what makes Lurnet matter.
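A sketch of how the NSM could be computed, assuming hypothetical per-entry fields:

```python
from datetime import date, timedelta

def north_star(entries: list[dict], today: date) -> int:
    """Active catalog entries with review activity in the trailing 7 days,
    counted across paying tenants only."""
    week_ago = today - timedelta(days=7)
    return sum(
        1 for e in entries
        if e["tenant_is_paying"]
        and e["is_active"]
        and e["last_review_at"] >= week_ago
    )
```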

4.2 Phase-level metrics

  • Phase 0 - Leading: discovery calls completed; concierge gates passed. Lagging: design partner committed.
  • Phase 1 - Leading: artifact approval rate; weekly active workspace usage. Lagging: paying design partner; approved GTM kits.
  • Phase 2 - Leading: published pages per tenant; routing events. Lagging: qualified leads routed; repeat usage.
  • Phase 3 - Leading: outbound assets generated and exported. Lagging: outbound-sourced pipeline; reply rate.
  • Phase 4 - Leading: variant tests run per tenant. Lagging: conversion lift attributable to optimization.
  • Phase 5 - Leading: self-serve activations; channel-sourced deals. Lagging: ARR; NRR; SOC 2 status.

4.3 Customer-level metrics (per tenant)

  • Active entries (entries with approved + published artifacts in trailing 30 days).
  • Approval acceptance rate (approved / generated).
  • Time to first published page (from import).
  • Pipeline contribution per active entry (Phase 2+).
  • Renewal probability score (composite of usage, support, expansion conversations).
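Two of these metrics are simple ratios; a sketch (function names are hypothetical):

```python
def approval_acceptance_rate(approved: int, generated: int) -> float:
    """Approved artifacts divided by generated artifacts."""
    return approved / generated if generated else 0.0

def pipeline_per_active_entry(pipeline: float, active_entries: int) -> float:
    """Pipeline contribution attributed per active entry (Phase 2+)."""
    return pipeline / active_entries if active_entries else 0.0
```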

4.4 Operational metrics

  • Generation success rate (target ≥98%).
  • Generation latency p95 (target ≤90s for full kit).
  • Webhook delivery rate (target ≥99%).
  • Page p95 latency (target ≤500ms global).
  • AI cost per kit (target ≤$0.50 amortized).
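These targets translate directly into alert thresholds; a sketch with hypothetical metric names:

```python
# Operational targets from the list above, as alertable bounds.
TARGETS = {
    "generation_success_rate": ("min", 0.98),
    "generation_latency_p95_s": ("max", 90),
    "webhook_delivery_rate": ("min", 0.99),
    "page_latency_p95_ms": ("max", 500),
    "ai_cost_per_kit_usd": ("max", 0.50),
}

def breaches(observed: dict) -> list[str]:
    """Return the names of metrics that miss their target."""
    out = []
    for name, (kind, bound) in TARGETS.items():
        value = observed[name]
        if (kind == "min" and value < bound) or (kind == "max" and value > bound):
            out.append(name)
    return out
```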

4.5 Reporting cadence

  • Weekly: customer-level adoption + operational health.
  • Monthly: revenue, pipeline, expansion, churn.
  • Quarterly: NSM trend, ARR, NRR, board reporting.

5. Risk Register

Probability: L (low, <25%), M (medium, 25-60%), H (high, >60%). Impact: L (recoverable), M (delays a phase), H (existential or major pivot).

  • R-01 (probability M, impact H): HubSpot ships catalog-native marketing in 12-24 months. Mitigation: race to enterprise / multi-product where their data model lags; lock PIM connectors; outpace on narrative depth.
  • R-02 (M, M): Tofu HQ pivots to catalog-as-input. Mitigation: lock PIM partnerships; build narrative critique depth; price differently.
  • R-03 (M, M): LLM cost spikes >2x. Mitigation: provider abstraction in place; per-step model selection; aggressive caching; per-tenant cost dashboards.
  • R-04 (M, H): generation quality plateaus below acceptance threshold. Mitigation: critic step; risky-claim detection; concierge fallback for stuck customers; invest in eval harness.
  • R-05 (L, M): cold-IP deliverability damages brand reputation. Mitigation: customer-owned send avoids this in MVP; if a later phase explores owned send, require a dedicated reputation plan.
  • R-06 (M, M): first design partner doesn't convert to paid. Mitigation: multiple candidates in flight; defined success criteria for pilot.
  • R-07 (M, M): buyer persona is wrong (PMM doesn't have budget; CMO does). Mitigation: test all three buyer personas in discovery; adjust the sales motion accordingly.
  • R-08 (H, M): engineering capacity insufficient to ship productized validation MVP in 10 weeks. Mitigation: cut polish, not quality; hire engineer #2 early.
  • R-09 (M, M): design-partner catalog data is messier than expected. Mitigation: minimum-viable input fields defined; concierge cleanup as paid service if needed.
  • R-10 (L, H): compliance / legal exposure from AI-generated claims. Mitigation: approval gates; risky-claim detection; T&Cs disclaim AI output; SOC 2 path.
  • R-11 (L, H): net retention <100% from churn. Mitigation: quarterly account reviews; customer success investment; usage-based pricing aligns incentives.
  • R-12 (L, H): beachhead vertical is too narrow to scale (multi-product B2B SaaS smaller than estimated). Mitigation: vertical playbooks expand TAM in Phase 5; adjacent B2B marketplace/catalog workflows are a fallback.
  • R-13 (M, M): investor pitch falls flat without "agentic" framing. Mitigation: investor arc explicitly frames assisted-now to autonomous-later; emphasize data flywheel + workflow memory.
  • R-14 (M, L): customer's existing CRM is Salesforce, not HubSpot. Mitigation: Salesforce integration after demand-capture validation; generic webhook fallback in MVP.
  • R-15 (M, M): privacy regulations (GDPR, CCPA) tighten on AI-generated marketing. Mitigation: audit log; data-handling policy; SOC 2 path; consent-based capture.
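For triage ordering, the letter grades can be mapped to a numeric severity score; a simple sketch (the 1-3 scale is an assumption, not part of the register):

```python
SCALE = {"L": 1, "M": 2, "H": 3}

def severity(probability: str, impact: str) -> int:
    """Probability x impact score (1-9) for ordering the register."""
    return SCALE[probability] * SCALE[impact]

# R-01 (M probability, H impact) scores 6 and outranks R-05 (L, M), which scores 2.
```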

6. Competitive Positioning

For each named competitor: how we differ, when they win, when we win.

6.1 vs HubSpot Marketing Hub

  • How we differ: HubSpot's data model is contact- and campaign-centric. Catalog is bolted on (Commerce Hub) and not integrated with marketing AI. Lurnet treats the catalog entry as a primary object.
  • When HubSpot wins: customer wants one tool for everything (CRM, email, blog, forms). Customer is single-product. Customer's marketing motion is campaign-centric.
  • When Lurnet wins: customer has a catalog (20+ products) and is frustrated by long-tail neglect. Customer values per-product analytics, narrative memory, and approval workflow.
  • Coexistence story: Lurnet integrates with HubSpot; we route to HubSpot. Not a CRM replacement.

6.2 vs Marketo / Pardot

  • How we differ: Marketo and Pardot are enterprise B2B automation - powerful for complex nurture, but with no catalog model, no AI-native generation, and slow delivery of per-product variants.
  • When they win: large enterprise with deep Salesforce integration, complex multi-touch nurture programs, dedicated marketing-ops team.
  • When we win: mid-market with catalog scale, fewer marketing-ops resources, AI-native expectations.

6.3 vs Tofu HQ

  • How we differ: Tofu generates campaign assets from brand + segments. Persona × account is the unit. Lurnet's unit is the catalog entry - a structurally different data model.
  • When Tofu wins: customer has one or two products, complex segmentation needs, prefers persona-driven motion.
  • When we win: customer has 20+ products and the campaign framing fails them. Catalog-native workflow is faster than persona-driven generation per product.
  • Risk: Tofu pivots toward catalog. Counter: depth of catalog ingestion plus PIM partnerships raises switching costs.

6.4 vs Mutiny

  • How we differ: Mutiny personalizes existing pages per visitor account. Lurnet generates new pages per product. Different operations entirely.
  • When Mutiny wins: customer has flagship product, wants per-account personalization for ABM.
  • When we win: customer needs per-product creation, not per-account adaptation.
  • Coexistence: plausible - Mutiny personalizes Lurnet-published pages per visitor.

6.5 vs 11x.ai / Artisan / Apollo (AI SDR class)

  • How we differ: AI SDRs operate at prospect × single-product granularity. Lurnet operates at catalog × prospect. Outbound is one channel for us, the entire product for them.
  • When AI SDRs win: customer wants outbound automation only and has a single offer.
  • When we win: customer has multiple products and needs each represented coherently across pages, outbound, and routing.

6.6 vs Clay + Leadpages MCP (DIY combo)

  • How we differ: Clay is a programmable signal layer; Leadpages MCP generates pages. The DIY combo requires GTM-engineer skill to stitch and maintain. Lurnet is opinionated, integrated, and managed.
  • When DIY wins: customer has a strong GTM engineer and prefers to compose.
  • When we win: customer has a PMM lead who wants outcome, not infrastructure.

6.7 vs Salesforce + Qualified

  • How we differ: the Salesforce-Qualified bundle is enterprise B2B conversational marketing plus ABM personalization. No catalog model.
  • When Salesforce wins: large enterprise, Salesforce-centric stack, RFP-driven procurement.
  • When we win: mid-market with catalog complexity that Salesforce treats as a flat field on the contact record.

6.8 vs Akeneo / Salsify / Productsup (PIM)

  • How we differ: PIMs manage product data and syndicate to retail. They stop at distribution. Lurnet picks up where PIMs end - using product data for GTM execution.
  • When PIMs win: customer's primary need is multi-channel retail data syndication.
  • When we win: customer needs lead-gen and pipeline generation off the catalog, not retail listing.
  • Coexistence: strong. Lurnet ingests from PIMs; PIM partnerships are a Phase 5 channel.

6.9 Positioning summary

When asked "what is Lurnet?", the canonical answer:

Lurnet is the catalog-native GTM workflow for multi-product B2B SaaS. We turn every product in your catalog into a reviewed, measurable demand-generation motion - generated by AI, approved by humans, captured and routed back to your CRM with product context. Today we're assisted; tomorrow some workflows can become autonomous; the catalog memory layer is what compounds.