Gigabox Apps · ComfyUI Live

AI image generation on your own subdomain.

Get a managed ComfyUI instance with cloud-powered inference via fal.ai. Build node-based workflows for image and video generation — no GPU, no Docker, no infrastructure to manage.

comfyui.gigabox.ai

What you get

A full ComfyUI instance on your own subdomain — cloud inference, persistent storage, and real-time previews out of the box.

Node-Based Workflow Editor

Build complex image and video generation pipelines by connecting nodes. Chain models, controlnets, upscalers, and post-processing steps visually.
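
Under the hood a workflow is just a graph serialized as JSON: every node has a class type and a set of inputs, and an input can point at an output slot of another node. A minimal text-to-image graph in ComfyUI's API format, written here as a Python dict, might look like the sketch below — node ids, model filename, and prompts are illustrative, and on a Gigabox instance the heavy steps would typically go through fal.ai API nodes rather than a local sampler:

    # A ComfyUI workflow in API format: node id -> {class_type, inputs}.
    # An input like ["1", 1] means "output slot 1 of node 1".
    workflow = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
        "2": {"class_type": "CLIPTextEncode",              # positive prompt
              "inputs": {"text": "a lighthouse at dusk", "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",              # negative prompt
              "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                         "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "gigabox"}},
    }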

fal.ai Cloud Inference

Run Flux, SDXL, Kling, and more through fal.ai API nodes. No local GPU needed — inference runs on cloud hardware and results stream back to your workspace.
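
Conceptually, an API node does the same thing as calling fal.ai's Python client yourself. A rough sketch of that direct call, assuming the fal-client package and a FAL_KEY in your environment — the endpoint id and arguments are illustrative:

    import fal_client  # pip install fal-client; expects FAL_KEY in the environment

    # Submit a text-to-image request to a hosted Flux endpoint and wait for the result.
    result = fal_client.subscribe(
        "fal-ai/flux/dev",
        arguments={
            "prompt": "a lighthouse at dusk, volumetric light",
            "image_size": "landscape_4_3",
        },
    )

    # The response points at images hosted on fal.ai's storage.
    for image in result["images"]:
        print(image["url"])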

No GPU Required

ComfyUI runs in CPU-only mode on the server. All heavy computation is offloaded to fal.ai, so your instance stays lightweight and responsive.

Persistent Workflows

Your workflows, outputs, and uploaded inputs are saved to your own data directory. Everything persists across restarts and sessions.
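
Because the data directory sits behind your instance's HTTP API, you can also move files in and out programmatically. A minimal sketch using ComfyUI's upload and view endpoints, with placeholder credentials and filenames:

    import requests

    BASE = "https://comfyui.gigabox.ai"   # your instance's subdomain
    AUTH = ("studio", "s3cret")           # per-instance basic-auth credentials (placeholder)

    # Upload an input image; it is stored in the instance's persistent input directory.
    with open("reference.png", "rb") as f:
        resp = requests.post(f"{BASE}/upload/image", files={"image": f}, auth=AUTH)
    resp.raise_for_status()

    # Later (even after a restart), fetch a generated output by filename.
    resp = requests.get(
        f"{BASE}/view",
        params={"filename": "gigabox_00001_.png", "type": "output"},
        auth=AUTH,
    )
    resp.raise_for_status()
    with open("gigabox_00001_.png", "wb") as out:
        out.write(resp.content)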

Subdomain Isolation

Each instance runs on its own subdomain with wildcard SSL, a separate systemd process, and its own data directory. Fully isolated from other tenants.

Real-Time Progress

WebSocket support streams generation progress, queue status, and previews directly to the editor. Watch your images render in real time.
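
The same stream is available outside the editor: the server exposes a WebSocket that emits queue, executing, and progress events as JSON. A small listener sketch using the websocket-client package — credentials are placeholders, and binary preview frames are skipped:

    import base64
    import json
    import uuid
    import websocket  # pip install websocket-client

    client_id = str(uuid.uuid4())
    token = base64.b64encode(b"studio:s3cret").decode()   # basic-auth credentials (placeholder)

    ws = websocket.WebSocket()
    ws.connect(
        f"wss://comfyui.gigabox.ai/ws?clientId={client_id}",
        header=[f"Authorization: Basic {token}"],
    )

    while True:
        frame = ws.recv()
        if isinstance(frame, bytes):
            continue                        # binary frames carry preview images
        msg = json.loads(frame)
        if msg["type"] == "progress":
            print(f"step {msg['data']['value']}/{msg['data']['max']}")
        elif msg["type"] == "executing" and msg["data"].get("node") is None:
            print("generation finished")
            break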

How it's built

Each instance is a ComfyUI process running in CPU-only mode, fronted by nginx with wildcard SSL. All inference is offloaded to fal.ai cloud.

Runtime      ComfyUI · Python 3.12 · PyTorch (CPU)
Inference    fal.ai cloud (Flux, SDXL, Kling, etc.)
Auth         nginx basic_auth (per-instance credentials)
Proxy        nginx · wildcard SSL · WebSocket
Isolation    systemd · 512MB MemoryMax · per-process
Infra        GCE · acme.sh DNS-01 · ZeroSSL
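
Because nginx only terminates SSL and basic auth, the standard ComfyUI HTTP API is reachable on your subdomain, so a workflow exported from the editor in API format can be queued without opening the browser. A minimal sketch with placeholder credentials:

    import json
    import requests

    BASE = "https://comfyui.gigabox.ai"
    AUTH = ("studio", "s3cret")          # per-instance basic-auth credentials (placeholder)

    # A graph exported from the editor in API format.
    with open("workflow_api.json") as f:
        workflow = json.load(f)

    # Queue the graph; the server returns an id you can use to track it.
    resp = requests.post(f"{BASE}/prompt", json={"prompt": workflow}, auth=AUTH)
    resp.raise_for_status()
    prompt_id = resp.json()["prompt_id"]

    # Once it finishes, the outputs for that prompt show up in the history endpoint.
    history = requests.get(f"{BASE}/history/{prompt_id}", auth=AUTH).json()
    print(json.dumps(history.get(prompt_id, {}), indent=2))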

Why managed ComfyUI

Running ComfyUI locally means a beefy GPU, Python dependency management, model downloads, and port forwarding. Running it in the cloud means Docker, GPU instances, and billing complexity. Most people just want to build workflows.

With Gigabox ComfyUI, you get a dedicated instance on your own subdomain in minutes. Inference runs on fal.ai cloud hardware, so your instance needs no GPU at all. We handle the proxy, the SSL, the process management, and the custom node installation. You open your browser and start creating.

Each instance is fully isolated — its own systemd process with memory limits, its own data directory for workflows and outputs, its own credentials. The platform was built and deployed by AI, from the provisioning scripts to the fleet management CLI.