Lurnet - Planning
Status: revised planning draft, 2026-04-27. This plan assumes the beachhead is multi-product B2B SaaS and prioritizes validation before infrastructure-heavy automation.
Current Decisions
| Area | Decision | Rationale |
|---|---|---|
| Beachhead | Multi-product B2B SaaS | Matches the product-catalog GTM problem without drifting into ecommerce |
| First buyer hypothesis | Product marketing, growth, or RevOps leader | These teams feel the catalog coverage and routing pain |
| First product promise | Turn a product catalog into reviewed GTM kits, product pages, routed leads, and product-level learning | Validates value before autonomous outbound |
| Outbound approach | Assisted first; customer-owned sending | Avoids deliverability and compliance drag before product-market signal |
| CRM handoff | HubSpot first, generic webhook fallback | Covers common design-partner need while keeping scope bounded |
| Pricing thesis | Platform fee + active catalog entries | Aligns with product-level value; revisit after design partners |
| Architecture posture | Concierge test first, modular monolith after validation, actor model later if load requires it | Keeps early build simpler while preserving boundaries |
Product Boundaries
Lurnet should be planned as a catalog-native GTM workflow, not as an AI SDR, generic copywriter, or outbound-infrastructure company.
The first product must make product-level routing and measurement central:
- Every catalog entry has a workspace and GTM memory.
- Every generated asset has human approval before publish.
- Every captured lead keeps product and narrative context.
- Every outcome rolls up by product, segment, and channel.
- Outbound remains assisted until the core workflow proves value.
Success Gates Before Full Planning
Planning should work backward from validation gates, not from the full platform vision.
Customer Discovery Gates
Before implementation planning, run 5-10 calls with multi-product B2B SaaS teams and confirm:
- They can name neglected products, modules, integrations, or packages.
- A real owner exists: product marketing, growth, or RevOps.
- The current workaround is painful enough to discuss budget.
- Product-level context changes lead routing, sales follow-up, or campaign decisions.
- They will share enough catalog data for a concierge test.
Concierge Test Gates
Use one real catalog before automating the workflow.
| Gate | Measurement | Why it matters |
|---|---|---|
| Catalog activation | 20+ products imported and mapped by a design partner | Proves the input exists and is usable |
| Artifact approval | 60%+ of generated GTM kits approved or lightly edited | Proves AI output is useful |
| Time to first page | Under one business day from import to approved page | Proves speed advantage |
| Demand capture | Qualified form fill, reply, meeting, or sales conversation tied to a product | Proves this is GTM, not just content generation |
| Routing value | CRM handoff includes useful product-level context | Proves workflow integration |
| Repeat usage | Weekly return to review, publish, or inspect product performance | Proves operational pull |
| Commercial signal | Design partner accepts catalog-entry pricing as plausible | Proves packaging direction |
Do not proceed to autonomous outbound until catalog activation, artifact approval, routing value, and repeat usage are visible.
MVP Product Surface
The MVP should have six user-facing surfaces:
- Catalog Import. CSV upload, schema mapping, validation, product list.
- Product Workspace. Product profile, source fields, generated GTM kit, approval state, version history.
- Narrative Review. ICP, positioning, proof points, competitive angles, landing-page copy, outbound snippets.
- Approval Workflow. Edit, approve, reject, regenerate, publish gate, and claim-risk notes.
- Landing & Capture. Hosted page template, form, lead dedupe, HubSpot/generic webhook.
- Analytics. Imported products, approved artifacts, published pages, visits, form fills, routed leads.
MVP Workstreams
| # | Workstream | Scope | Estimate | Depends on |
|---|---|---|---|---|
| W1 | Product domain and persistence | Tenant, catalog entry, artifact, approval, page, lead | 1-2 wks | - |
| W2 | Catalog import | CSV upload, schema mapping, validation, import history | 1 wk | W1 |
| W3 | Narrative engine v1 | Prompt pipeline, model abstraction, artifact persistence, regeneration | 2 wks | W1 |
| W4 | Review and approval workspace | Product list, artifact review, edit/approve/reject, version display, publish gates | 2 wks | W1, W3 |
| W5 | Landing and capture | Page renderer, form, lead storage, dedupe, HubSpot/generic webhook | 1-2 wks | W1, W3 |
| W6 | Analytics v1 | Basic product and page metrics | 1 wk | W5 |
| W7 | Pilot hardening | Audit logs, error states, seed data, design-partner polish | 1 wk | all |
Expected timeline for the validation MVP: 8-10 weeks for a small team after the concierge test, provided outbound automation and owned SMTP/IP infrastructure stay out of scope.
Architecture Direction
Use clear module boundaries from the start, but avoid over-committing to distributed infrastructure before product signal.
Recommended initial shape:
- API / App host - ASP.NET Core.
- Domain - tenants, catalog entries, GTM artifacts, approvals, pages, leads, routing events.
- Catalog module - import, schema mapping, validation.
- Narrative module - AI provider abstraction, prompt pipeline, artifact generation, cache keys.
- Publishing module - landing-page rendering, public page routing, form capture.
- Routing module - HubSpot webhook and generic webhook.
- Web app - React product workspace, review flow, publishing controls, analytics.
- Data stores - PostgreSQL first; Redis only if needed for cache/queues.
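The routing module's core loop can be sketched as follows. This is written in TypeScript for brevity (the backend is ASP.NET Core); the `RoutingEvent` shape, the injected `send` function, and the retry count are illustrative assumptions, not final design, but they show how HubSpot and the generic webhook can share one retry path with an exposed delivery status.

```typescript
// Sketch of the routing module's delivery loop. RoutingEvent fields,
// the injected sender, and maxAttempts are assumptions for illustration.
type DeliveryStatus = "pending" | "delivered" | "failed";

interface RoutingEvent {
  leadId: string;
  target: "hubspot" | "webhook";
  attempts: number;
  status: DeliveryStatus;
}

// send is injected so HubSpot and the generic webhook share one retry path.
function deliver(
  event: RoutingEvent,
  send: (e: RoutingEvent) => boolean,
  maxAttempts = 3,
): RoutingEvent {
  let current = { ...event };
  while (current.attempts < maxAttempts) {
    current = { ...current, attempts: current.attempts + 1 };
    if (send(current)) {
      return { ...current, status: "delivered" };
    }
  }
  // Retries exhausted: record the failure so the UI can expose delivery status.
  return { ...current, status: "failed" };
}
```

The same loop serves both targets; only the injected sender differs, which keeps the HubSpot-first / webhook-fallback decision out of the retry logic.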
Orleans can remain a later option for long-running agents, per-product state machines, or high-volume outbound sequencing. Do not make Orleans load-bearing until the product needs autonomous agents or thousands of active workflows.
Tech Stack
| Layer | Choice | Notes |
|---|---|---|
| Backend runtime | .NET (ASP.NET Core, current LTS) | Minimal API or controllers; pick at scaffold time |
| Primary store | PostgreSQL | Single managed instance for MVP |
| Cache / queues | Redis | Add when needed; not load-bearing in MVP |
| Background work | In-process hosted services + Postgres-backed job queue | Defer Orleans until autonomous agents or thousands of active workflows are real |
| LLM | OpenAI first, behind a provider-agnostic abstraction | Anthropic mixable later; per-step model selection (cheaper for critique/classify, premium for draft/refine) |
| Frontend framework | React + TypeScript, built with Vite | |
| State management | Redux Toolkit + redux-saga | redux-saga handles async flows in review/approval UI |
| Styling | Tailwind CSS + shadcn/ui | shadcn/ui for accessible primitives without lock-in |
| Email send | Customer-owned ESP/mailbox first; Lurnet-managed sending deferred | No owned SMTP/IPs in MVP |
| CRM integration | HubSpot first; generic webhook fallback | Salesforce after design-partner validation |
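The per-step model selection from the LLM row can be made concrete with a small sketch. The provider interface, model names, and step list below are placeholders, not committed choices; the point is that the mapping from pipeline step to model lives behind the abstraction, so swapping providers or rebalancing cost does not touch pipeline code.

```typescript
// Sketch of the provider-agnostic LLM layer with per-step model selection.
// Provider interface, model ids, and step names are illustrative assumptions.
type Step = "draft" | "refine" | "critique" | "classify";

interface LlmProvider {
  complete(model: string, prompt: string): Promise<string>;
}

// Cheaper models for critique/classify, premium for draft/refine,
// per the pricing note in the stack table.
const modelForStep: Record<Step, string> = {
  draft: "premium-model",
  refine: "premium-model",
  critique: "cheap-model",
  classify: "cheap-model",
};

async function runStep(provider: LlmProvider, step: Step, prompt: string): Promise<string> {
  return provider.complete(modelForStep[step], prompt);
}
```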
Hosting and Infrastructure
| Concern | Choice | Notes |
|---|---|---|
| Cluster | Cloud-managed Kubernetes for MVP velocity | Candidates: DigitalOcean Kubernetes (DOKS), Linode Kubernetes Engine (LKE), or Hetzner with a third-party managed layer. Migrate to fully self-hosted (Talos / k0s / k3s) post-MVP if cost or control demands it |
| Postgres | External managed VM | Runs outside the cluster; back up with pgBackRest or another WAL-archiving tool. Avoid Postgres-on-k8s complexity until scale demands it |
| Redis | In-cluster (Bitnami chart) when introduced | Not part of MVP unless caching or queueing forces it |
| Object storage | S3-compatible (cluster provider's offering) | Artifact versions, page assets, CSV imports |
| Secrets | Sealed Secrets or external secret manager | Decide at scaffold time |
| CI/CD | GitHub Actions → Helm chart → cluster | Single environment to start; staging when needed |
| Observability | Structured logs + basic metrics first | Full APM deferred until traffic warrants |
Narrative Engine V1
Inputs:
- Catalog fields: product name, category, description, audience, use cases, existing URL, owner.
- Optional context: competitors, proof points, customer segments, pricing tier, CRM routing hints.
Outputs:
- ICP hypothesis.
- Positioning statement.
- Messaging pillars.
- Landing-page sections.
- Outbound snippets.
- Competitive angles.
- Risks or missing inputs.
Pipeline:
- Normalize product inputs.
- Generate draft GTM kit.
- Critique for vagueness, unsupported claims, weak audience, and missing proof.
- Refine into review-ready artifacts.
- Persist artifact versions with source-input hashes.
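The pipeline stages above can be sketched end to end. Draft, critique, and refine are stubbed as injected functions here (the real calls would go through the provider abstraction), and the field names are illustrative; the sketch shows the stage order and the source-input hash that gets persisted with each version.

```typescript
// Minimal sketch of the v1 pipeline: normalize -> draft -> critique ->
// refine -> persist with a source-input hash. Stage functions are stubs.
import { createHash } from "node:crypto";

interface CatalogInput { name: string; category: string; description: string; }
interface ArtifactVersion { body: string; sourceHash: string; }

function normalize(input: CatalogInput): CatalogInput {
  return {
    name: input.name.trim(),
    category: input.category.trim().toLowerCase(),
    description: input.description.trim(),
  };
}

function sourceHash(input: CatalogInput): string {
  // Hash normalized inputs so regeneration can detect unchanged sources.
  return createHash("sha256").update(JSON.stringify(input)).digest("hex");
}

function runPipeline(
  input: CatalogInput,
  draft: (i: CatalogInput) => string,
  critique: (d: string) => string[],
  refine: (d: string, issues: string[]) => string,
): ArtifactVersion {
  const normalized = normalize(input);
  const d = draft(normalized);
  const issues = critique(d); // vagueness, unsupported claims, missing proof
  const body = refine(d, issues);
  return { body, sourceHash: sourceHash(normalized) };
}
```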
Cost controls:
- Cache by catalog-entry hash and generation settings.
- Regenerate only changed artifacts by default.
- Use cheaper models for critique/classification if quality is acceptable.
- Track generation cost per tenant and product.
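The first cost control can be sketched as a cache-key function: the catalog-entry hash plus a hash of the generation settings, so a settings change invalidates the cache as cleanly as a content change. The settings fields shown are assumptions.

```typescript
// Sketch of the generation cache key: catalog-entry hash + settings hash.
// GenerationSettings fields are illustrative assumptions.
import { createHash } from "node:crypto";

interface GenerationSettings { model: string; tone: string; }

function cacheKey(entryHash: string, settings: GenerationSettings): string {
  const settingsPart = createHash("sha256")
    .update(JSON.stringify(settings))
    .digest("hex")
    .slice(0, 12); // short suffix keeps keys readable in logs
  return `${entryHash}:${settingsPart}`;
}
```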
Outbound Scope
For MVP, Lurnet should generate outbound assets but not send automatically.
Phase 2 can add assisted outbound with:
- Sequence draft builder.
- Approval gates.
- Export to CSV or customer tools.
- Optional mailbox/ESP integration.
- Reply tracking where the customer grants access.
- Suppression list support.
- Send throttling and unsubscribe handling if Lurnet initiates sends.
Explicitly deferred:
- Owning SMTP servers.
- Owning dedicated IP pools.
- Cold-IP warmup as a core product function.
- Autonomous large-volume sending.
This keeps the moat in product-level narrative and routing rather than deliverability operations.
Data Model Sketch
Core entities:
- Tenant - customer workspace and integration settings.
- CatalogEntry - marketable product/module/add-on/integration.
- CatalogImport - import file, mapping, validation report.
- GtmArtifact - generated ICP, positioning, page copy, outbound copy, competitive angle.
- ArtifactVersion - versioned artifact body, model metadata, source hash, approval state.
- PublishedPage - public slug, template, approved artifact version, publish state.
- Lead - captured form submission and dedupe key.
- RoutingEvent - CRM/webhook delivery status.
- MetricEvent - visit, form fill, approval, publish, route, reply or meeting when available.
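As a type-level sketch, the central entities might look like the following. Field lists are illustrative, not exhaustive; the `canPublish` guard encodes the rule from the error-handling section that published pages only use approved artifact versions.

```typescript
// Type-level sketch of the core entities. Fields are illustrative assumptions.
type ApprovalState = "draft" | "approved" | "rejected";
type ArtifactKind = "icp" | "positioning" | "pageCopy" | "outboundCopy" | "competitiveAngle";

interface Tenant { id: string; name: string; integrationSettings: Record<string, string>; }
interface CatalogEntry { id: string; tenantId: string; name: string; category: string; }
interface GtmArtifact { id: string; catalogEntryId: string; kind: ArtifactKind; }
interface ArtifactVersion {
  id: string;
  artifactId: string;
  body: string;
  sourceHash: string;
  approvalState: ApprovalState;
}
interface PublishedPage { slug: string; templateId: string; artifactVersionId: string; published: boolean; }
interface Lead { id: string; tenantId: string; catalogEntryId: string; email: string; dedupeKey: string; }

// Publish gate: a page may only reference an approved version.
function canPublish(version: ArtifactVersion): boolean {
  return version.approvalState === "approved";
}
```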
Error Handling and Guardrails
- Imports should produce row-level validation errors and allow partial import only when required fields are present.
- AI generation should mark missing inputs explicitly instead of fabricating proof points.
- Published pages should only use approved artifact versions.
- Webhook failures should retry and expose delivery status.
- Leads should dedupe by tenant, email/domain, product, and time window.
- Every generated customer-facing claim should be editable before publishing.
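The dedupe rule above can be sketched as a key function over tenant, lowercased email, product, and a coarse time bucket. The one-week window is an assumption to be tuned per pilot.

```typescript
// Sketch of the lead dedupe key: tenant + email + product + time bucket.
// The one-week window is an assumption, not a validated choice.
const WINDOW_MS = 7 * 24 * 60 * 60 * 1000;

function dedupeKey(tenantId: string, email: string, productId: string, at: Date): string {
  const bucket = Math.floor(at.getTime() / WINDOW_MS);
  return [tenantId, email.toLowerCase(), productId, bucket].join("|");
}
```

Bucketing by a fixed window keeps the key deterministic and index-friendly; a sliding window would catch edge cases near bucket boundaries at the cost of a range query.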
Design-Partner and Dogfood Plan
Two parallel tracks. External design-partner work validates ICP, pricing, and demand-capture value with unbiased eyes; internal dogfood lets engineering iterate on the workflow without external blockers. Dogfood does not substitute for external feedback.
External design partner
Profile:
- 20+ products/modules/integrations.
- HubSpot available or generic webhook accepted.
- Willing to share catalog data and review generated artifacts weekly.
- Can identify product owners or routing rules.
- Has at least one channel to activate approved assets.
Pilot steps:
- Import catalog.
- Select 10-20 products for first activation.
- Generate and review GTM kits.
- Publish landing pages for approved products.
- Route captured leads to CRM.
- Review metrics weekly and decide whether to expand.
Internal dogfood catalog
Treat 7-10 Lurnet positioning variants as a synthetic multi-product catalog:
- Lurnet for MarTech/AdTech vendors.
- Lurnet for AI-platform companies.
- Lurnet for vertical SaaS suites.
- Lurnet for DevTools companies.
- Lurnet for security platform vendors.
- Lurnet for sales-engineering teams.
- Lurnet for marketing operations teams.
Each variant gets its own ICP, positioning, landing-page copy, and outbound snippets. The dogfood track:
- Provides an always-available test catalog so engineering can iterate without depending on a design partner's pace.
- Generates real Lurnet marketing assets we can publish.
- Surfaces honest first-person feedback on the workflow before customer feedback rolls in.
Run dogfood and external design-partner work in parallel through Phase 0 and Phase 1.
Revised Roadmap
Phase 0 - Discovery and Concierge Validation
Interview 5-10 multi-product SaaS teams. Run one concierge catalog test before building the productized workflow.
Phase 1 - Narrative Validation
Build catalog import, product workspace, narrative generation, artifact review, approval workflow, and manual export.
Phase 2 - Demand Capture
Add hosted product pages, forms, HubSpot/generic webhook, dedupe, and basic analytics.
Phase 3 - Assisted Outbound
Add approved sequence drafts, exports, optional mailbox/ESP integration, reply tracking, and compliance controls.
Phase 4 - Learning Loop
Recommend narrative changes from visits, form fills, replies, meetings, and CRM outcomes.
Phase 5 - Scale
Add multi-tenant onboarding, richer CRM integrations, LinkedIn assist, paid ad creative, marketplace/catalog connectors, vertical playbooks, and portfolio analytics.
Open Decisions
- Which design partner vertical first: MarTech/AdTech, AI platform, DevTools, security, or vertical SaaS suite?
- Is the first buyer product marketing, growth, or RevOps?
- Which 5-10 discovery targets can provide the strongest signal before implementation planning?
- Which design partner can provide a real catalog for a concierge test?
- What minimum catalog fields are required for useful generation?
- Should Lurnet host landing pages on customer subdomains in MVP or use Lurnet-hosted preview URLs first?
- Salesforce fallback: required for any specific design partner, or HubSpot-first holds?
- What counts as a qualified lead for the first pilot?
- What approval workflow is required before publishing customer-facing content?