AI Workflow Generation

kindling generate scans your repository and uses an LLM to produce a complete GitHub Actions workflow — build steps, deploy steps, dependencies, secrets, ingress routes — tailored to what's actually in your code.


Quick start

kindling generate -k <openai-api-key> -r /path/to/your-app

This writes .github/workflows/dev-deploy.yml into your repo. Push it and your app builds and deploys automatically.

# Preview without writing
kindling generate -k sk-... -r . --dry-run

# Use Anthropic instead of OpenAI
kindling generate -k sk-ant-... -r . --ai-provider anthropic

Dashboard

You can also generate workflows from the dashboard: Setup → Analyze & Generate (step 2), or press ⌘K and type "generate". The AI output streams in real time. See Dashboard for details.


What it detects

The scanner reads your repo structure and extracts:

  • Services — each directory with a Dockerfile becomes a build + deploy step
  • Languages — Go, TypeScript, Python, Java, Rust, Ruby, PHP, C#, Elixir
  • Ports — from Dockerfile EXPOSE, framework defaults, and config files
  • Health check endpoints — /healthz, /health, /ready, and framework conventions
  • Dependencies — Postgres, Redis, MongoDB, RabbitMQ, etc. from docker-compose, env vars, and import analysis
  • External credentials — *_API_KEY, *_SECRET, *_TOKEN, *_DSN patterns → suggests kindling secrets set for each
  • OAuth/OIDC — Auth0, Okta, Firebase Auth, NextAuth, Passport.js patterns → suggests kindling expose
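
The credential detection above keys off variable-name patterns. A minimal sketch of that idea with standard tools (the scanner's actual implementation is internal to kindling; the file contents here are made up for illustration):

```shell
# Scan a sample env file for credential-style variable names
# matching the *_API_KEY / *_SECRET / *_TOKEN / *_DSN patterns.
tmpdir=$(mktemp -d)
cat > "$tmpdir/.env.example" <<'EOF'
STRIPE_API_KEY=
SESSION_SECRET=
GITHUB_TOKEN=
SENTRY_DSN=
PORT=3000
EOF
# Print only the matching names; PORT is config, not a credential,
# so it is not reported.
grep -hoE '^[A-Z0-9_]+_(API_KEY|SECRET|TOKEN|DSN)' "$tmpdir/.env.example"
```

Each reported name is a candidate for a `kindling secrets set` suggestion in the generated workflow.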

Smart scanning

docker-compose.yml

If present, docker-compose is used as the primary source of truth: build contexts, depends_on for dependency types, and environment sections for env var mappings across services.
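
A sketch of the kind of facts a compose file carries for the scanner — build contexts, depends_on, environment — using a made-up file and standard tools (the real parsing is internal to kindling):

```shell
# A sample compose file with one app service and two dependencies.
tmpdir=$(mktemp -d)
cat > "$tmpdir/docker-compose.yml" <<'EOF'
services:
  api:
    build: ./api
    environment:
      - DATABASE_URL=postgres://db:5432/app
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
  cache:
    image: redis:7
EOF
# List the images backing the dependency services; these are the
# signals that map to dependency types like Postgres and Redis.
grep -E '^[[:space:]]+image:' "$tmpdir/docker-compose.yml" | awk '{print $2}'
```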

Helm charts

Detects Chart.yaml, runs helm template to render manifests, and passes them to the AI as authoritative context. Falls back gracefully if helm is not installed.

Kustomize overlays

Detects kustomization.yaml, runs kustomize build for rendered context. Falls back gracefully if kustomize is not installed.
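
Both the Helm and Kustomize render steps follow the same shape: use the tool if it is on PATH, otherwise fall back. A minimal sketch of that pattern (the actual fallback behavior is internal to kindling):

```shell
# Render manifests for AI context if a renderer is available,
# otherwise report the fallback.
render_context() {
  if [ -f Chart.yaml ] && command -v helm >/dev/null 2>&1; then
    helm template .
  elif [ -f kustomization.yaml ] && command -v kustomize >/dev/null 2>&1; then
    kustomize build .
  else
    echo "renderer unavailable; using raw manifests" >&2
  fi
}

# Demonstrate the fallback path in an empty directory
# (no Chart.yaml, no kustomization.yaml).
workdir=$(mktemp -d)
cd "$workdir" && render_context
```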

.env template files

Scans .env.sample, .env.example, .env.development, and .env.template for required configuration variables.
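
Template files only need to name the variables, not set them. A sketch of pulling the required keys out of one such file (made-up contents, standard tools only):

```shell
# Extract variable names from an env template; values may be
# empty or defaults — only the keys matter to the scanner.
envdir=$(mktemp -d)
cat > "$envdir/.env.example" <<'EOF'
# required configuration
DATABASE_URL=
REDIS_URL=
LOG_LEVEL=info
EOF
grep -E '^[A-Za-z_][A-Za-z0-9_]*=' "$envdir/.env.example" | cut -d= -f1
```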

Ingress heuristics

Only user-facing services (frontends, SSR apps, API gateways) get ingress routes by default. Use --ingress-all to override.


Models

Provider    Model                       Notes
OpenAI      o3 (default)                Reasoning model — uses developer role and extended thinking
OpenAI      o3-mini                     Faster and cheaper reasoning
OpenAI      gpt-4o                      Standard chat model
Anthropic   claude-sonnet-4-20250514    Default for --ai-provider anthropic

# Use a specific model
kindling generate -k sk-... -r . --model o3-mini

Examples

# Default (OpenAI o3)
kindling generate -k sk-... -r /path/to/my-app

# Anthropic
kindling generate -k sk-ant-... -r . --ai-provider anthropic

# Custom output path
kindling generate -k sk-... -r . -o ./my-workflow.yml

# Wire every service with ingress
kindling generate -k sk-... -r . --ingress-all

# Skip Helm/Kustomize rendering
kindling generate -k sk-... -r . --no-helm

Flags

Flag            Short   Default                                   Description
--api-key       -k      — (required)                              GenAI API key
--repo-path     -r      .                                         Path to the repository to analyze
--ai-provider           openai                                    AI provider: openai or anthropic
--model                 auto                                      Model name
--output        -o      <repo>/.github/workflows/dev-deploy.yml   Output path
--dry-run               false                                     Print to stdout instead of writing
--ingress-all           false                                     Give every service an ingress route
--no-helm               false                                     Skip Helm/Kustomize rendering