Spec a feature, get an agent-ready brief
“Plan a multiplayer lobby on Firebase: schemas, latency budget, phased rollout.”
Xenonflare Studio queues work from a short brief. When generation completes, you review files, charts, and tables in the dashboard — everything organized per workspace, ready for Cursor, Copilot, or any agentic flow.
Workspace · ProductSpec
Artifacts
One brief in. A workspace of artifacts out.
The studio is good at structured outputs: things you would normally split into a doc, a chart, and a spreadsheet — produced together.
“Plan a multiplayer lobby on Firebase: schemas, latency budget, phased rollout.”
“Compare 6 vector DBs for a 50M-doc workload. Score on cost, latency, ops burden.”
“Quarterly roadmap for a B2B analytics app. Show effort vs. impact and a checklist per phase.”
“Write an on-call runbook for our auth service: incidents, dashboards, escalation tree.”
One results view: prompts, visuals, grids, and account controls.
Charts, files, tables, billing — plus an open-source runner you can self-host. Details live in the docs.
Bar, pie, line, area, scatter, and stacked-bar charts ride alongside prose — quick sanity checks before you ship specs to an agent.
Structured tables (datasets) for scores, comparisons, and checklists — easy to scan, easy to copy.
| Phase | Tokens | Status |
| --- | --- | --- |
| Plan | 1.2k | OK |
| Build | 8.4k | OK |
| Review | 2.1k | … |
| Ship | 4.6k | OK |
Each completion is a small library of markdown — Frontend, Backend, agents, and more. Copy one file or the whole set.
Upgrade with Stripe, open the customer portal for invoices, and keep terms-of-service acceptance in sync with checkout.
Three steps. No credits spreadsheet.
Paste a product idea, stack hints, and constraints — we queue a structured job tied to a workspace thread.
Runners pick up the job in order. Use the shared pool on Free or self-host with your own API key for full throughput.
Open results: skim charts and tables, copy per-file prompts, and paste into Cursor or any agentic workflow.
Cloud queues. Your hardware generates.
Daily token credits per tier; compare on the pricing page, then subscribe in Settings → Billing.
Model calls run on hardware you control. The cloud only queues work and stores outputs for review.
Work is processed in order with clear states from queued through complete — no mystery inboxes.
Run more capacity on infrastructure you control so queued work finishes faster — no shared credentials across hosts.
Run the open-source worker on your own box.
The cloud queues work and stores results. The model call happens in a small Node worker you control — your API key never leaves the host. Spin up more processes to drain the queue faster.
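That loop is simple enough to sketch. The function names and job shape below (`leaseJob`, `callModel`, `reportResult`, `job.prompt`) are illustrative stand-ins, not the open-source runner's actual API; the point is the shape of the flow, where the model call and the key behind it never leave your host:

```javascript
// Sketch of a worker's lease loop (names are illustrative, not the
// runner's real API). The cloud only hands out queued jobs and stores
// results; the provider call happens here, with your key.
async function runWorker({ leaseJob, callModel, reportResult }) {
  for (;;) {
    const job = await leaseJob();               // cloud: lease the next queued job
    if (!job) return;                           // queue drained (a real worker would poll)
    const output = await callModel(job.prompt); // local: model call with your API key
    await reportResult(job.id, output);         // cloud: store artifacts for review
  }
}
```

Draining the queue faster is then just starting more processes, each running its own loop.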
$ git clone Xenon-Flare/runner
$ export RUNNER_TOKEN=…
$ export OPENAI_API_KEY=…
$ npm start
[runner-7] connected
[runner-7] leased ws_4Q9a · 12.4k tok
[runner-7] complete · 4 files · 2 charts

Passwordless login, dedicated workspaces, and a results surface built for builders — not slide decks. Queue a run, skim charts and tables, then iterate until the agent output feels right.