Lurnet - Planning

Status: revised planning draft, 2026-04-27. This plan assumes the beachhead is multi-product B2B SaaS and prioritizes validation before infrastructure-heavy automation.

Current Decisions

| Area | Decision | Rationale |
| --- | --- | --- |
| Beachhead | Multi-product B2B SaaS | Matches the product-catalog GTM problem without drifting into ecommerce |
| First buyer hypothesis | Product marketing, growth, or RevOps leader | These teams feel the catalog coverage and routing pain |
| First product promise | Turn a product catalog into reviewed GTM kits, product pages, routed leads, and product-level learning | Validates value before autonomous outbound |
| Outbound approach | Assisted first; customer-owned sending | Avoids deliverability and compliance drag before product-market signal |
| CRM handoff | HubSpot first, generic webhook fallback | Covers common design-partner need while keeping scope bounded |
| Pricing thesis | Platform fee + active catalog entries | Aligns with product-level value; revisit after design partners |
| Architecture posture | Concierge test first, modular monolith after validation, actor model later if load requires it | Keeps early build simpler while preserving boundaries |

Product Boundaries

Lurnet should be planned as a catalog-native GTM workflow, not as an AI SDR, generic copywriter, or outbound-infrastructure company.

The first product must make product-level routing and measurement central:

  • Every catalog entry has a workspace and GTM memory.
  • Every generated asset has human approval before publish.
  • Every captured lead keeps product and narrative context.
  • Every outcome rolls up by product, segment, and channel.
  • Outbound remains assisted until the core workflow proves value.

Success Gates Before Full Planning

Planning should work backward from validation gates, not from the full platform vision.

Customer Discovery Gates

Before implementation planning, run 5-10 calls with multi-product B2B SaaS teams and confirm:

  • They can name neglected products, modules, integrations, or packages.
  • A real owner exists: product marketing, growth, or RevOps.
  • The current workaround is painful enough to discuss budget.
  • Product-level context changes lead routing, sales follow-up, or campaign decisions.
  • They will share enough catalog data for a concierge test.

Concierge Test Gates

Use one real catalog before automating the workflow.

| Gate | Measurement | Why it matters |
| --- | --- | --- |
| Catalog activation | 20+ products imported and mapped by a design partner | Proves the input exists and is usable |
| Artifact approval | 60%+ of generated GTM kits approved or lightly edited | Proves AI output is useful |
| Time to first page | Under one business day from import to approved page | Proves speed advantage |
| Demand capture | Qualified form fill, reply, meeting, or sales conversation tied to a product | Proves this is GTM, not just content generation |
| Routing value | CRM handoff includes useful product-level context | Proves workflow integration |
| Repeat usage | Weekly return to review, publish, or inspect product performance | Proves operational pull |
| Commercial signal | Design partner accepts catalog-entry pricing as plausible | Proves packaging direction |

Do not proceed to autonomous outbound until catalog activation, artifact approval, routing value, and repeat usage are visible.

MVP Product Surface

The MVP should have six user-facing surfaces:

  1. Catalog Import. CSV upload, schema mapping, validation, product list.
  2. Product Workspace. Product profile, source fields, generated GTM kit, approval state, version history.
  3. Narrative Review. ICP, positioning, proof points, competitive angles, landing-page copy, outbound snippets.
  4. Approval Workflow. Edit, approve, reject, regenerate, publish gate, and claim-risk notes.
  5. Landing & Capture. Hosted page template, form, lead dedupe, HubSpot/generic webhook.
  6. Analytics. Imported products, approved artifacts, published pages, visits, form fills, routed leads.
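The publish gate implied by the Approval Workflow surface can be sketched as a small state machine. This is a TypeScript sketch with illustrative state names, not a committed schema; the actual transitions should be settled with the first design partner.

```typescript
// Approval states for a generated artifact. "published" is terminal here;
// the only path to it is through an explicit approval.
type ApprovalState = "draft" | "in_review" | "approved" | "rejected" | "published";

// Allowed transitions. The publish gate only opens from "approved";
// editing an approved artifact sends it back to review.
const transitions: Record<ApprovalState, ApprovalState[]> = {
  draft: ["in_review"],
  in_review: ["approved", "rejected"],
  rejected: ["in_review"], // regenerate or edit, then re-review
  approved: ["published", "in_review"],
  published: [],
};

function canTransition(from: ApprovalState, to: ApprovalState): boolean {
  return transitions[from].includes(to);
}
```

Modeling the gate as an explicit transition table keeps "every generated asset has human approval before publish" enforceable in one place rather than scattered across UI checks.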

MVP Workstreams

| # | Workstream | Scope | Estimate | Depends on |
| --- | --- | --- | --- | --- |
| W1 | Product domain and persistence | Tenant, catalog entry, artifact, approval, page, lead | 1-2 wks | - |
| W2 | Catalog import | CSV upload, schema mapping, validation, import history | 1 wk | W1 |
| W3 | Narrative engine v1 | Prompt pipeline, model abstraction, artifact persistence, regeneration | 2 wks | W1 |
| W4 | Review and approval workspace | Product list, artifact review, edit/approve/reject, version display, publish gates | 2 wks | W1, W3 |
| W5 | Landing and capture | Page renderer, form, lead storage, dedupe, HubSpot/generic webhook | 1-2 wks | W1, W3 |
| W6 | Analytics v1 | Basic product and page metrics | 1 wk | W5 |
| W7 | Pilot hardening | Audit logs, error states, seed data, design-partner polish | 1 wk | all |

Expected timeline: the validation MVP takes 8-10 weeks for a small team after the concierge test, provided outbound automation and owned SMTP/IP infrastructure stay out of scope.

Architecture Direction

Use clear module boundaries from the start, but avoid over-committing to distributed infrastructure before product signal.

Recommended initial shape:

  • API / App host - ASP.NET Core.
  • Domain - tenants, catalog entries, GTM artifacts, approvals, pages, leads, routing events.
  • Catalog module - import, schema mapping, validation.
  • Narrative module - AI provider abstraction, prompt pipeline, artifact generation, cache keys.
  • Publishing module - landing-page rendering, public page routing, form capture.
  • Routing module - HubSpot webhook and generic webhook.
  • Web app - React product workspace, review flow, publishing controls, analytics.
  • Data stores - PostgreSQL first; Redis only if needed for cache/queues.

Orleans can remain a later option for long-running agents, per-product state machines, or high-volume outbound sequencing. Do not make Orleans load-bearing until the product needs autonomous agents or thousands of active workflows.
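The narrative module's AI provider abstraction can be sketched as a single interface that the prompt pipeline depends on, with OpenAI (and later Anthropic) adapters behind it. This TypeScript sketch uses hypothetical names (`LlmProvider`, `CompletionRequest`, the `"cheap"`/`"premium"` tiers) to illustrate per-step model selection; the real boundary would live as a C# interface in the ASP.NET Core backend.

```typescript
// Requests name a cost tier, not a concrete model, so per-step selection
// (cheaper models for critique/classify, premium for draft/refine) is a
// provider-side mapping decision. All identifiers here are illustrative.
interface CompletionRequest {
  model: "cheap" | "premium";
  system: string;
  prompt: string;
}

interface LlmProvider {
  complete(req: CompletionRequest): Promise<string>;
}

// A stub provider keeps the pipeline testable without network calls.
class StubProvider implements LlmProvider {
  async complete(req: CompletionRequest): Promise<string> {
    return `[${req.model}] ${req.prompt}`;
  }
}
```

Keeping the pipeline coded against the interface is what makes "Anthropic mixable later" a configuration change rather than a rewrite.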

Tech Stack

| Layer | Choice | Notes |
| --- | --- | --- |
| Backend runtime | .NET (ASP.NET Core, current LTS) | Minimal API or controllers; pick at scaffold time |
| Primary store | PostgreSQL | Single managed instance for MVP |
| Cache / queues | Redis | Add when needed; not load-bearing in MVP |
| Background work | In-process hosted services + Postgres-backed job queue | Defer Orleans until autonomous agents or thousands of active workflows are real |
| LLM | OpenAI first, behind a provider-agnostic abstraction | Anthropic mixable later; per-step model selection (cheaper for critique/classify, premium for draft/refine) |
| Frontend framework | React + TypeScript, built with Vite | |
| State management | Redux Toolkit + redux-saga | redux-saga handles async flows in review/approval UI |
| Styling | Tailwind CSS + shadcn/ui | shadcn/ui for accessible primitives without lock-in |
| Email send | Customer-owned ESP/mailbox first; Lurnet-managed sending deferred | No owned SMTP/IPs in MVP |
| CRM integration | HubSpot first; generic webhook fallback | Salesforce after design-partner validation |

Hosting and Infrastructure

| Concern | Choice | Notes |
| --- | --- | --- |
| Cluster | Cloud-managed Kubernetes for MVP velocity | Candidates: DigitalOcean LKE, Linode LKE, Hetzner managed. Migrate to fully self-hosted (Talos / k0s / k3s) post-MVP if cost or control demands it |
| Postgres | External managed VM | Separate from cluster; pgBackRest or WAL-based backups. Avoid Postgres-on-k8s complexity until scale demands it |
| Redis | In-cluster (Bitnami chart) when introduced | Not part of MVP unless caching or queueing forces it |
| Object storage | S3-compatible (cluster provider's offering) | Artifact versions, page assets, CSV imports |
| Secrets | Sealed Secrets or external secret manager | Decide at scaffold time |
| CI/CD | GitHub Actions → Helm chart → cluster | Single environment to start; staging when needed |
| Observability | Structured logs + basic metrics first | Full APM deferred until traffic warrants it |

Narrative Engine V1

Inputs:

  • Catalog fields: product name, category, description, audience, use cases, existing URL, owner.
  • Optional context: competitors, proof points, customer segments, pricing tier, CRM routing hints.

Outputs:

  • ICP hypothesis.
  • Positioning statement.
  • Messaging pillars.
  • Landing-page sections.
  • Outbound snippets.
  • Competitive angles.
  • Risks or missing inputs.

Pipeline:

  1. Normalize product inputs.
  2. Generate draft GTM kit.
  3. Critique for vagueness, unsupported claims, weak audience, and missing proof.
  4. Refine into review-ready artifacts.
  5. Persist artifact versions with source-input hashes.
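The five steps above can be sketched end to end. This is a minimal TypeScript sketch: `draft` and `critiqueAndRefine` are stand-ins for the LLM calls (hypothetical signatures, not a real provider API), and the field set is illustrative.

```typescript
import { createHash } from "node:crypto";

interface CatalogInput { name: string; category: string; description: string; }
interface ArtifactVersion { body: string; sourceHash: string; }

// Step 1: normalize product inputs so the same catalog data always
// produces the same hash, regardless of whitespace or casing noise.
function normalize(input: CatalogInput): CatalogInput {
  return {
    name: input.name.trim(),
    category: input.category.trim().toLowerCase(),
    description: input.description.trim(),
  };
}

// Step 5 support: hash the normalized inputs so a persisted artifact
// version is traceable to the exact catalog data it was built from.
function sourceHash(input: CatalogInput): string {
  return createHash("sha256").update(JSON.stringify(input)).digest("hex");
}

function runPipeline(
  raw: CatalogInput,
  draft: (i: CatalogInput) => string,       // step 2: LLM draft call
  critiqueAndRefine: (d: string) => string, // steps 3-4: critique + refine
): ArtifactVersion {
  const input = normalize(raw);                     // step 1
  const refined = critiqueAndRefine(draft(input));  // steps 2-4
  return { body: refined, sourceHash: sourceHash(input) }; // step 5
}
```

Storing the source-input hash alongside each version is what later lets regeneration skip unchanged products.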

Cost controls:

  • Cache by catalog-entry hash and generation settings.
  • Regenerate only changed artifacts by default.
  • Use cheaper models for critique/classification if quality is acceptable.
  • Track generation cost per tenant and product.
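The first two controls can be sketched as a cache keyed by catalog-entry content plus generation settings, so an unchanged product with unchanged settings never triggers a paid model call. A TypeScript sketch with illustrative field names; it assumes stable key order in the serialized payload, which a production version would canonicalize.

```typescript
import { createHash } from "node:crypto";

// Cache key = hash of entry fields + generation settings (assumes the
// caller passes fields in a stable order; canonicalize in production).
function cacheKey(
  entryFields: Record<string, string>,
  settings: Record<string, string>,
): string {
  const payload = JSON.stringify({ entryFields, settings });
  return createHash("sha256").update(payload).digest("hex");
}

function generateWithCache(
  cache: Map<string, string>,
  entryFields: Record<string, string>,
  settings: Record<string, string>,
  generate: () => string, // stand-in for the paid model call
): { artifact: string; cached: boolean } {
  const key = cacheKey(entryFields, settings);
  const hit = cache.get(key);
  if (hit !== undefined) return { artifact: hit, cached: true };
  const artifact = generate();
  cache.set(key, artifact);
  return { artifact, cached: false };
}
```

Changing either the catalog entry or the generation settings changes the key, which is exactly the "regenerate only changed artifacts" default.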

Outbound Scope

For MVP, Lurnet should generate outbound assets but not send automatically.

Phase 2 can add assisted outbound with:

  • Sequence draft builder.
  • Approval gates.
  • Export to CSV or customer tools.
  • Optional mailbox/ESP integration.
  • Reply tracking where the customer grants access.
  • Suppression list support.
  • Send throttling and unsubscribe handling if Lurnet initiates sends.

Explicitly deferred:

  • Owning SMTP servers.
  • Owning dedicated IP pools.
  • Cold-IP warmup as a core product function.
  • Autonomous large-volume sending.

This keeps the moat in product-level narrative and routing rather than deliverability operations.

Data Model Sketch

Core entities:

  • Tenant - customer workspace and integration settings.
  • CatalogEntry - marketable product/module/add-on/integration.
  • CatalogImport - import file, mapping, validation report.
  • GtmArtifact - generated ICP, positioning, page copy, outbound copy, competitive angle.
  • ArtifactVersion - versioned artifact body, model metadata, source hash, approval state.
  • PublishedPage - public slug, template, approved artifact version, publish state.
  • Lead - captured form submission and dedupe key.
  • RoutingEvent - CRM/webhook delivery status.
  • MetricEvent - visit, form fill, approval, publish, route, reply or meeting when available.
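A few of the entities above, sketched as TypeScript interfaces. Field names are illustrative (the real schema would live as C# records and Postgres tables), and the small guard at the end encodes the "published pages only use approved artifact versions" rule from the guardrails section.

```typescript
type ApprovalState = "draft" | "in_review" | "approved" | "rejected";

interface CatalogEntry {
  id: string;
  tenantId: string;
  name: string;
  kind: "product" | "module" | "add-on" | "integration";
}

interface ArtifactVersion {
  id: string;
  artifactId: string;
  body: string;
  modelMetadata: string; // which model/settings produced this version
  sourceHash: string;    // hash of the catalog inputs it was built from
  approvalState: ApprovalState;
}

interface PublishedPage {
  slug: string;
  templateId: string;
  approvedVersionId: string;
  published: boolean;
}

// Guardrail: a page may only ever bind to an approved version.
function canPublish(version: ArtifactVersion): boolean {
  return version.approvalState === "approved";
}
```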

Error Handling and Guardrails

  • Imports should produce row-level validation errors and allow partial import only when required fields are present.
  • AI generation should mark missing inputs explicitly instead of fabricating proof points.
  • Published pages should only use approved artifact versions.
  • Webhook failures should retry and expose delivery status.
  • Leads should dedupe by tenant, email/domain, product, and time window.
  • Every generated customer-facing claim should be editable before publishing.
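The lead-dedupe rule can be sketched as a composite key: tenant, normalized email, product, and a time bucket, so repeat submissions inside the window collapse to one lead. The one-day window is an assumption, not a committed value, and a domain-level variant (keying on the email's domain) is a straightforward extension.

```typescript
// Dedupe key = tenant | lowercased email | product | time bucket.
// Submissions in the same bucket produce the same key and merge.
function dedupeKey(
  tenantId: string,
  email: string,
  catalogEntryId: string,
  submittedAt: Date,
  windowMs: number = 24 * 60 * 60 * 1000, // assumed one-day window
): string {
  const bucket = Math.floor(submittedAt.getTime() / windowMs);
  return [tenantId, email.trim().toLowerCase(), catalogEntryId, bucket].join("|");
}
```

Storing this key on the Lead entity makes dedupe a unique-constraint check rather than a query-time scan.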

Design-Partner and Dogfood Plan

Two parallel tracks. External design-partner work validates ICP, pricing, and demand-capture value with unbiased eyes; internal dogfood lets engineering iterate on the workflow without external blockers. Dogfood does not substitute for external feedback.

External design partner

Profile:

  • 20+ products/modules/integrations.
  • HubSpot available or generic webhook accepted.
  • Willing to share catalog data and review generated artifacts weekly.
  • Can identify product owners or routing rules.
  • Has at least one channel to activate approved assets.

Pilot steps:

  1. Import catalog.
  2. Select 10-20 products for first activation.
  3. Generate and review GTM kits.
  4. Publish landing pages for approved products.
  5. Route captured leads to CRM.
  6. Review metrics weekly and decide whether to expand.

Internal dogfood catalog

Treat 7-10 Lurnet positioning variants as a synthetic multi-product catalog:

  • Lurnet for MarTech/AdTech vendors.
  • Lurnet for AI-platform companies.
  • Lurnet for vertical SaaS suites.
  • Lurnet for DevTools companies.
  • Lurnet for security platform vendors.
  • Lurnet for sales-engineering teams.
  • Lurnet for marketing operations teams.

Each variant gets its own ICP, positioning, landing-page copy, and outbound snippets. The dogfood track:

  • Provides an always-available test catalog so engineering can iterate without depending on a design partner's pace.
  • Generates real Lurnet marketing assets we can publish.
  • Surfaces honest first-person feedback on the workflow before customer feedback rolls in.

Run dogfood and external design-partner work in parallel through Phase 0 and Phase 1.

Revised Roadmap

Phase 0 - Discovery and Concierge Validation

Interview 5-10 multi-product SaaS teams. Run one concierge catalog test before building the productized workflow.

Phase 1 - Narrative Validation

Build catalog import, product workspace, narrative generation, artifact review, approval workflow, and manual export.

Phase 2 - Demand Capture

Add hosted product pages, forms, HubSpot/generic webhook, dedupe, and basic analytics.

Phase 3 - Assisted Outbound

Add approved sequence drafts, exports, optional mailbox/ESP integration, reply tracking, and compliance controls.

Phase 4 - Learning Loop

Recommend narrative changes from visits, form fills, replies, meetings, and CRM outcomes.

Phase 5 - Scale

Add multi-tenant onboarding, richer CRM integrations, LinkedIn assist, paid ad creative, marketplace/catalog connectors, vertical playbooks, and portfolio analytics.

Open Decisions

  • Which design partner vertical first: MarTech/AdTech, AI platform, DevTools, security, or vertical SaaS suite?
  • Is the first buyer product marketing, growth, or RevOps?
  • Which 5-10 discovery targets can provide the strongest signal before implementation planning?
  • Which design partner can provide a real catalog for a concierge test?
  • What minimum catalog fields are required for useful generation?
  • Should Lurnet host landing pages on customer subdomains in MVP or use Lurnet-hosted preview URLs first?
  • Salesforce fallback: required for any specific design partner, or HubSpot-first holds?
  • What counts as a qualified lead for the first pilot?
  • What approval workflow is required before publishing customer-facing content?