Case Study

Swarm-Lite — Project Studio

AI persona swarms for market intelligence

Next.js · Payload CMS · Claude API · LangGraph · Postgres · Vercel

Demo complete — active development. Live demo — deploy pending.
The dashboard surfaces the three core entities — Markets, Personas, and Projects — as independent but composable. A persona exists once and can be used across any number of projects. A market can be monitored independently of any active project. This separation is an architectural decision, not a layout choice.

The problem

Product teams are reactive by default. They gather user feedback late, expensively, and at the wrong moment.

When a new feature is proposed internally, the question “what would our users think?” triggers a research process that takes weeks to complete — by which time the feature has already been scoped, estimated, and committed to a sprint. When a positioning change is debated in a leadership meeting, no one in the room represents the voice of the customer. When a competitor ships something unexpected, the response is driven by instinct rather than evidence.

The problem is not a shortage of interest in user feedback. It is the structural cost of obtaining it. A user interview takes days to arrange and an hour to conduct. A focus group takes weeks to organise. A survey takes weeks to design, distribute, and analyse — and it answers the questions you thought to ask, not the ones that matter.

Swarm-Lite collapses that cost: it answers the question in minutes, before the decision is made.

The concept

A Project Manifest defines your product, your target market, and the archetypes of the users you are building for. Swarm-Lite generates a swarm of AI personas — each with a defined role, seniority level, domain expertise, and known pain points — and allows you to run Huddle sessions: structured conversations where the swarm evaluates a feature, a positioning decision, or a strategic change.

Each persona is a typed entity with a role, seniority, archetype label, and domain. Personas are not tied to a specific project at creation — they exist in a shared library and can be assigned to any project. Sarah Chen (Product Manager), Marcus Webb (CTO / Enterprise Buyer), Aisha Patel (Data Analyst): these are reusable research assets, not one-off prompt inputs.
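A minimal sketch of the "persona as typed entity" idea. The field names and union values here are illustrative assumptions, not the real Payload schema; the one deliberate detail is the absence of any project reference, since personas live in a shared library.

```typescript
// Illustrative sketch — field names are assumptions, not the actual schema.
interface Persona {
  id: string;
  name: string;
  role: string;          // e.g. "Product Manager"
  seniority: "junior" | "mid" | "senior" | "executive";
  archetype: string;     // e.g. "Enterprise Buyer"
  domain: string;        // e.g. "B2B SaaS Analytics"
  painPoints: string[];
  // Deliberately no projectId: personas belong to a shared library,
  // and projects compose them in by reference.
}

const marcus: Persona = {
  id: "p-002",
  name: "Marcus Webb",
  role: "CTO",
  seniority: "executive",
  archetype: "Enterprise Buyer",
  domain: "B2B SaaS Analytics",
  painPoints: ["security audit trail", "third-party data access"],
};
```

The type is the contract: a persona is a durable research asset, not a string interpolated into a prompt.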

The output is a Huddle Summary: per-persona sentiment, specific objections and conditions, a composite recommendation, and a set of Pending Actions for the user to approve or dismiss. The swarm does not make the decision. It surfaces the reasoning behind likely reactions. The human owns the conclusion.

The Huddle Summary is the core product output — a document, not a conversation. Sarah Chen: Strongly Positive. Marcus Webb: Conditional Positive (concerns around security audit trail and third-party data access). Aisha Patel: Strongly Positive. James Torres: Mixed (pricing concerns). The per-persona breakdown is deliberate: an averaged score of 3.8/5 would hide the signal that a startup founder has reservations about pricing that could kill adoption in that segment.

Architectural decisions

Markets, Personas, and Projects as three distinct entities

Markets are defined independently of projects. A market definition — B2B SaaS Analytics, with tracked competitors (Salespal, LogicPilot, Stackbase) and market conditions — is a reusable asset. The same market can inform multiple projects without being duplicated.

Not a flat feed. Not a single ‘project’ object that contains everything. Three separate entities that can be composed: a persona exists independently of any market or project, and a CTO archetype created for one product is immediately available for another.

The alternative — embedding personas inside projects — is simpler to build and produces a dead end: every new project requires rebuilding personas from scratch, and there is no way to ask “how would our Enterprise Buyer archetype respond to this, across all our products?”
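The composition model can be sketched as references rather than embedded copies. Again, entity and field names here are assumptions for illustration; the point is that two projects can share a market and overlapping personas without duplicating either.

```typescript
// Illustrative sketch — the real schema may differ.
interface Market {
  id: string;
  name: string;                 // e.g. "B2B SaaS Analytics"
  trackedCompetitors: string[];
}

interface Project {
  id: string;
  name: string;
  marketId: string;     // a reference, not an embedded copy
  personaIds: string[]; // personas composed in from the shared library
}

const analytics: Market = {
  id: "m-1",
  name: "B2B SaaS Analytics",
  trackedCompetitors: ["Salespal", "LogicPilot", "Stackbase"],
};

// Both projects reference the same market and share a persona.
const projectA: Project = {
  id: "pr-1", name: "Feature launch", marketId: "m-1",
  personaIds: ["p-001", "p-002"],
};
const projectB: Project = {
  id: "pr-2", name: "Pricing change", marketId: "m-1",
  personaIds: ["p-002", "p-003"],
};
```

The id-reference shape is what makes the cross-project question — "how would our Enterprise Buyer archetype respond, across all our products?" — a simple query rather than an impossibility.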

Session branching — non-destructive thread forking

A Session Branch isolates a new line of inquiry without contaminating the main session record. The branch can target a subset of personas without affecting the primary Huddle. The branching model treats the research record as immutable — you can always see what was asked and in what context.

Different stakeholders want to interrogate the same scenario from different angles. Creating a Session Branch allows any of these questions to be asked as an isolated thread. The original Huddle remains unchanged. The branch produces its own summary. Both are saved, both are auditable, and neither overwrites the other.

This is the same principle that makes git branching powerful: non-destructive exploration preserves the record of how you arrived at a conclusion.
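The non-destructive fork can be sketched in a few lines. The shapes and naming are assumptions; what matters is that forking returns a new session pointing at its parent, and the parent's record is never mutated.

```typescript
// Minimal sketch of non-destructive session branching — assumed shapes.
interface Session {
  id: string;
  parentId?: string;       // set on branches; root sessions have none
  question: string;
  personaIds: string[];
  readonly record: readonly string[]; // immutable record of what was asked
}

let branchCounter = 0;

// Forking returns a NEW session; the parent is never touched.
function branchSession(
  parent: Session,
  question: string,
  personaSubset: string[],
): Session {
  return {
    id: `${parent.id}-b${++branchCounter}`,
    parentId: parent.id,
    question,
    personaIds: personaSubset,
    record: [], // the branch accumulates its own record
  };
}

const root: Session = {
  id: "s-1",
  question: "Should we ship the audit-log feature?",
  personaIds: ["p-001", "p-002", "p-003"],
  record: ["initial huddle transcript"],
};

// Target a subset of personas without affecting the primary Huddle.
const pricingBranch = branchSession(root, "What if it ships as a paid add-on?", ["p-002"]);
```

Both `root` and `pricingBranch` persist independently, which is what keeps the research record auditable.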

HITL approval on Pending Actions

Pending Actions surface decisions that require human judgement — a pricing recommendation, a feature scope decision, a positioning change. Each action is held in a pending state until explicitly approved or dismissed. The APPROVED status shown here is not a notification — it is a record that a human evaluated the agent's reasoning and chose to act on it.

Same pattern as TX-1 and SS-1. The agent proposes. The human approves. Pending Actions are the interface between agent reasoning and human decision-making. When the swarm reaches a conclusion that implies a concrete next step, that step surfaces as a Pending Action. It is held until reviewed. The approval is logged with a timestamp.

An agentic system that acts without asking is noise. An agentic system that asks for approval on every trivial step is friction. The Pending Actions model is the product design decision that determines where the line sits.
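A sketch of the approval gate as a tiny state machine. Status names and fields are assumptions; the invariant it illustrates is the one the text describes: an action is held in `pending` until a human decides, the decision is timestamped, and a decided action cannot be re-decided.

```typescript
// Sketch of the HITL approval gate — names are assumptions, not the real API.
type ActionStatus = "pending" | "approved" | "dismissed";

interface PendingAction {
  id: string;
  description: string;
  status: ActionStatus;
  decidedAt?: string; // ISO timestamp, logged at the moment of decision
  decidedBy?: string;
}

// The only legal transitions: pending -> approved, pending -> dismissed.
function decide(
  action: PendingAction,
  verdict: "approved" | "dismissed",
  userId: string,
): PendingAction {
  if (action.status !== "pending") {
    throw new Error(`action ${action.id} has already been decided`);
  }
  return {
    ...action,
    status: verdict,
    decidedAt: new Date().toISOString(),
    decidedBy: userId,
  };
}
```

Returning a new record rather than mutating in place keeps the audit trail intact: the approval is a logged event, not an overwritten flag.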

Per-persona sentiment over aggregate score

The Huddle Summary shows each persona’s response individually. An averaged sentiment score of 3.8/5 looks like “broadly positive.” But the underlying data is: two strongly positive, one conditional, one mixed, one not yet evaluated. The conditional and mixed responses contain the most valuable signal — the specific objections, the pricing concerns, the security requirements — and they are invisible in an aggregate.

Individual voice matters more than smoothed data. The persona who says “Conditional Positive — I need confirmation that data doesn’t leave the tenant boundary” is telling you what to build. The average is telling you nothing.
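The averaging problem is easy to make concrete. This toy example uses an assumed 1–5 numeric mapping over the four evaluated personas (the figures are illustrative, not the product's actual scoring):

```typescript
// Toy illustration: the 1–5 mapping is an assumption for this example.
const scores: Record<string, number> = {
  "Sarah Chen": 5,   // strongly positive
  "Marcus Webb": 4,  // conditional positive — security requirements
  "Aisha Patel": 5,  // strongly positive
  "James Torres": 3, // mixed — pricing concerns
};

const values = Object.values(scores);
const mean = values.reduce((a, b) => a + b, 0) / values.length; // 4.25
const lowest = Math.min(...values);                             // 3
```

The mean reads as "broadly positive," while the lowest score — the pricing objection that could kill adoption in the startup segment — is exactly the number the aggregate smooths away.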

The Huddle Summary as an artefact

The output of a Swarm-Lite session is a saved, versioned document — not a conversation thread. The value of a research session is the record it produces, not the exchange that produced it. A Huddle Summary is a document with a title, a date, a composite recommendation, per-persona sections, and a list of Pending Actions. It can be shared with a stakeholder who was not in the session. It can be retrieved six months later when a similar question arises.

Artefact-first design is the architectural choice that makes Swarm-Lite a research tool rather than a chat interface.

The connection to TX-1 and SS-1

TX-1 acts on failures inside your systems. SS-1 monitors for changes outside them. Swarm-Lite simulates how your users will react before you make the change.

Together they complete the full decision loop: detect what is broken, monitor what is changing, simulate how users will respond, then act. All three systems share one interaction model — something changes, an agent reasons about it, a human approves the response — because the underlying problem is the same problem viewed from three different angles.

The interaction pattern is not a coincidence. It is a design standard: every intelligent action in these systems passes through a human decision point before it becomes real. That standard is what makes them trustworthy in enterprise contexts where the cost of a wrong decision is measured in money, pipeline, or reputation.

What this demonstrates

A multi-agent orchestration system where personas are typed entities, not prompt fragments. An artefact-first research model where the output is a document, not a transcript. A HITL approval pattern applied to strategic decisions, not just operational ones. And a third data point confirming that the TX-1 and SS-1 design conventions are a reusable system — not a project-specific solution.

Swarm-Lite is also the project that demonstrates the stack most directly relevant to the portfolio site itself: Next.js, Payload CMS, and a polymorphic content model. The portfolio site is built on the same architectural decisions as the product it presents.

Status and direction

The Swarm-Lite demo is complete. Active development continues on Huddle orchestration depth, the persona auto-generation pipeline, and the integration layer that connects Swarm-Lite simulation output to SS-1 market monitoring, so that a significant competitive signal can trigger a pre-configured Huddle and return a user-reaction brief alongside the intelligence brief.