Studio · Workspaces & run artifacts

Ship specs your coding agent can execute.

Xenonflare Studio queues work from a short brief. When generation completes, you review files, charts, and tables in the dashboard — everything organized per workspace, ready for Cursor, Copilot, or any agentic flow.

  • Passwordless magic-link sign-in
  • Per-workspace chat threads
  • Charts + tables alongside files
  • Self-host with one Docker image
studio.xenonflare.com · Live

Workspace · ProductSpec

Draft a real-time multiplayer lobby on Firebase. Include schemas, latency budget chart, and a phased rollout table.
Queued. Running on runner-7. Producing 4 files, 2 charts, 1 table
generating…

Artifacts

Frontend.md
Backend.md
Schemas.md
6 chart types
bar · pie · line · area · scatter · stacked
Multi-file
Markdown bundles per run
Parallel
Self-host extra runners
Stripe
Plans + customer portal

What you can build

One brief in. A workspace of artifacts out.

The studio is good at structured outputs: things you would normally split into a doc, a chart, and a spreadsheet — produced together.

App scaffolds · brief → artifacts

Spec a feature, get an agent-ready brief

“Plan a multiplayer lobby on Firebase: schemas, latency budget, phased rollout.”

4 markdown files · Line chart · Phase table
Research · brief → artifacts

Distill noisy inputs into a clean summary

“Compare 6 vector DBs for a 50M-doc workload. Score on cost, latency, ops burden.”

Comparison table · Scoring chart · Recommendation file
Roadmaps · brief → artifacts

Turn ideas into a phased plan

“Quarterly roadmap for a B2B analytics app. Show effort vs. impact and a checklist per phase.”

Scatter chart · Effort table · Checklist artifacts
Internal docs · brief → artifacts

Bootstrap onboarding & runbooks

“Write an on-call runbook for our auth service: incidents, dashboards, escalation tree.”

Multi-file docs · Pie chart · Escalation tree (SVG)

Product surface

One results view: prompts, visuals, grids, and account controls.

Charts, files, tables, billing — plus an open-source runner you can self-host. Details live in the docs.

Charts

Charts that explain the run

Bar, pie, line, area, scatter, and stacked-bar charts ride alongside prose — quick sanity checks before you ship specs to an agent.
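One way to picture a chart artifact is as a small tagged record. This is a hypothetical sketch only: the six kinds match the chart types listed above, but the field names and shape are assumptions, not the Studio's actual artifact format.

```typescript
// Hypothetical shape for a chart artifact. The six kinds match the
// chart types the Studio lists; field names here are assumptions.
type ChartKind = "bar" | "pie" | "line" | "area" | "scatter" | "stacked";

interface ChartArtifact {
  kind: ChartKind;
  title: string;
  series: { label: string; points: number[] }[];
}

// Example: the latency-budget line chart from the lobby brief.
const latencyBudget: ChartArtifact = {
  kind: "line",
  title: "Latency budget by phase",
  series: [{ label: "p95 ms", points: [120, 90, 60] }],
};
```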

Tables

Dataset tables

Structured tables (datasets) for scores, comparisons, and checklists — easy to scan, easy to copy.

Phase    Tokens   Status
Plan     1.2k     OK
Build    8.4k     OK
Review   2.1k
Ship     4.6k     OK
Files

Multi-file outputs

Each completion is a small library of markdown — Frontend, Backend, Agents, and more. Copy one file or the whole set.

Frontend.md
Backend.md
Agents.md
Roadmap.md
Billing

Billing & plans

Upgrade with Stripe, open the customer portal for invoices, and keep terms acceptance in sync with checkout.

Pro · monthly · Stripe
Daily tokens · 62% · Resets 00:00 UTC

Flow

Three steps. No credits spreadsheet.

  1. Describe the build

    Paste a product idea, stack hints, and constraints — we queue a structured job tied to a workspace thread.

  2. Generate on your hardware

    Runners pick up the job in order. Use the shared pool on Free or self-host with your own API key for full throughput.

  3. Review in the studio

    Open results: skim charts and tables, copy per-file prompts, and paste into Cursor or any agentic workflow.
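The three steps above amount to a small run lifecycle. The sketch below models it as a state machine; the state names echo the "queued through complete" states the Studio surfaces, but the exact fields and transitions are assumptions, not the published API.

```typescript
// Hypothetical run lifecycle, assuming states like those shown in
// the dashboard. Names and fields are illustrative, not the API.
type RunState = "queued" | "running" | "complete" | "failed";

interface Run {
  id: string;
  workspace: string;
  brief: string;
  state: RunState;
}

// Allowed transitions: queued → running → complete | failed.
const next: Record<RunState, RunState[]> = {
  queued: ["running"],
  running: ["complete", "failed"],
  complete: [],
  failed: [],
};

function advance(run: Run, to: RunState): Run {
  if (!next[run.state].includes(to)) {
    throw new Error(`invalid transition ${run.state} -> ${to}`);
  }
  return { ...run, state: to };
}
```

Modeling runs this way is what makes the queue's "clear states, no mystery inboxes" promise checkable: a run can only ever move forward along one of the listed edges.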

Why this shape

Cloud queues. Your hardware generates.

Create an account in one tap →
Tiers

Free, Plus, and Pro

Daily token credits per tier; compare on the pricing page, then subscribe in Settings → Billing.

Privacy

Keys stay local

Model calls run on hardware you control. The cloud only queues work and stores outputs for review.

Queue

Fair, transparent ordering

Work is processed in order with clear states from queued through complete — no mystery inboxes.

Scale

Bring your own throughput

Run more capacity on infrastructure you control so queued work finishes faster — no shared credentials across hosts.

Self-host

Run the open-source worker on your own box.

The cloud queues work and stores results. The model call happens in a small Node worker you control — your API key never leaves the host. Spin up more processes to drain the queue faster.

API keys stay on your hardware
Same lease/heartbeat endpoints
Scale by adding processes
One Docker image, env-driven
~/runner
$ git clone Xenon-Flare/runner
$ export RUNNER_TOKEN=…
$ export OPENAI_API_KEY=…
$ npm start

[runner-7] connected
[runner-7] leased ws_4Q9a · 12.4k tok
[runner-7] complete · 4 files · 2 charts
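The session above suggests a simple lease → generate → complete loop inside the Node worker. Here is a minimal sketch of that loop, with the cloud queue stubbed in-memory; the real endpoint paths, payloads, and auth are assumptions, not the published runner API.

```typescript
// Minimal sketch of a runner's lease -> work -> complete loop.
// The cloud queue is stubbed in-memory; real endpoints, payloads,
// and auth are assumptions, not the published API.
interface Job {
  id: string;
  brief: string;
}

// Stand-in for the cloud queue's lease/complete endpoints.
class QueueStub {
  private jobs: Job[] = [{ id: "ws_4Q9a", brief: "multiplayer lobby spec" }];
  done: string[] = [];

  lease(): Job | undefined {
    return this.jobs.shift(); // jobs are handed out in order
  }
  complete(id: string): void {
    this.done.push(id);
  }
}

// The model call stays on this host: `generate` is where the local
// API key is used, and only the finished artifact goes back up.
async function runOnce(
  queue: QueueStub,
  generate: (brief: string) => Promise<string>
): Promise<boolean> {
  const job = queue.lease();
  if (!job) return false; // queue drained
  await generate(job.brief);
  queue.complete(job.id);
  return true;
}
```

Scaling out is then just running more of these loops against the same queue — each extra process leases and drains jobs independently.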

Your next repo starts as one good brief.

Passwordless login, dedicated workspaces, and a results surface built for builders — not slide decks. Queue a run, skim charts and tables, then iterate until the agent output feels right.