Image and Video Generation

Local-first media generation, now with remote Cloudflare activation.

The repo already had strong local generation infrastructure. This layer documents the current engines and the new Cloudflare Workers AI provider path for remote image generation and, in future, video-capable model execution.

Local Engines

What already exists

  • ComfyUI: queue, history polling, direct output retrieval, template-driven workflows
  • Blender: headless scene rendering and premium environment scenes
  • FFmpeg: sequence and encode images into video outputs
  • Export Profiles: web, social, print, billboard, original
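The FFmpeg step above can be sketched as a small wrapper that encodes a numbered frame sequence into a video. The frame pattern, codec, and flags are illustrative defaults, not the repo's actual export profiles:

```python
import subprocess

def build_ffmpeg_cmd(frame_pattern: str, out_path: str, fps: int = 24) -> list:
    """Assemble an ffmpeg command that encodes a numbered image sequence to H.264."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", frame_pattern,      # e.g. "frames/frame_%04d.png"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",    # widest player compatibility
        out_path,
    ]

def frames_to_video(frame_pattern: str, out_path: str, fps: int = 24) -> None:
    """Run the encode; raises CalledProcessError if ffmpeg exits non-zero."""
    subprocess.run(build_ffmpeg_cmd(frame_pattern, out_path, fps), check=True)
```

An export profile would map to a different flag set (bitrate, resolution, container) passed into the same builder.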

Remote Provider

Cloudflare Workers AI path

  • Status route: /v1/providers/cloudflare/status
  • Run route: /v1/providers/cloudflare/run
  • Security model: env vars only, no tokens committed
  • Binary outputs: saved to output/providers/cloudflare/
  • JSON outputs: returned directly for async or provider-managed flows
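The env-vars-only security model above can be verified with a small preflight check. The variable names here are assumptions for illustration; the setup script defines the actual ones:

```python
import os

# Assumed credential variable names; check the setup script for the real ones.
REQUIRED_VARS = ("CLOUDFLARE_ACCOUNT_ID", "CLOUDFLARE_API_TOKEN")

def missing_vars(env=os.environ) -> list:
    """Return which required Cloudflare credentials are absent from the environment."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

A status route would typically run a check like this and report readiness without ever echoing the token values.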

Activation

Secure setup sequence

powershell -ExecutionPolicy Bypass -File .\scripts\setup-cloudflare-workers-ai.ps1 `
  -AccountId <your-cloudflare-account-id>

# then start the API
python -m uvicorn apps.orchestrator.main:app --port 8400 --reload

# verify configuration
GET /v1/providers/cloudflare/status

# run a remote model
POST /v1/providers/cloudflare/run
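The GET/POST steps above can be exercised from Python with the standard library. The request body's key names (`model`, `inputs`) are assumptions, not the route's documented schema, so check the orchestrator's actual contract:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8400"  # uvicorn port from the activation step above

def build_run_payload(model: str, inputs: dict) -> bytes:
    """Serialize the run request body; key names are illustrative assumptions."""
    return json.dumps({"model": model, "inputs": inputs}).encode("utf-8")

def call_run(model: str, inputs: dict):
    """POST to the run route; return (body, content_type) so callers can branch on binary vs JSON."""
    req = urllib.request.Request(
        f"{BASE}/v1/providers/cloudflare/run",
        data=build_run_payload(model, inputs),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read(), resp.headers.get("Content-Type", "")
```

Returning the content type alongside the body lets the caller decide whether the response is a saved binary artifact or a JSON job descriptor.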

Important Constraint

What is activated versus what is guaranteed

The repo now supports a generic Workers AI provider adapter. That means remote media generation can be called through the orchestrator using any valid model slug you provide. Binary responses are saved locally. JSON responses are returned directly so async or job-based models can be handled without guessing the provider contract.
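The binary-versus-JSON split described above can be sketched as a small dispatch helper. The function names are illustrative, not the repo's actual adapter code; only the output directory matches the documented path:

```python
import json
from pathlib import Path

OUTPUT_DIR = Path("output/providers/cloudflare")  # documented output location

def is_binary(content_type: str) -> bool:
    """Treat anything that is not JSON as binary model output."""
    return "application/json" not in content_type.lower()

def handle_response(content_type: str, body: bytes, run_id: str):
    """Save binary outputs to disk; pass JSON through for async or job-based flows."""
    if is_binary(content_type):
        OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
        path = OUTPUT_DIR / f"{run_id}.bin"
        path.write_bytes(body)
        return {"saved_to": str(path)}
    return json.loads(body)
```

Dispatching on the response content type is what lets the adapter stay generic: it never has to know in advance whether a given model slug returns pixels or a job handle.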

For image generation, this is immediately useful. For video generation, the route is enabled for any Workers AI model that returns direct binary output or documented JSON. The local video stack remains the most reliable execution path until the exact production video model contract is finalized on your Cloudflare account.