# ifhost — Impossible Hosting

> CLI + API for deploying websites and backend services to Fly.io.
> API base URL: https://impossible-api.fly.dev
>
> Docs:
> CLI Guide: https://impossible-api.fly.dev/docs
> Advanced: https://impossible-api.fly.dev/docs/advanced
> llm.txt: https://impossible-api.fly.dev/llm.txt

## Install CLI

```sh
curl -fsSL https://impossible-api.fly.dev/install | sh
```

## Auth

```sh
ifhost login                   # GitHub device flow (interactive)
ifhost login --token           # Direct token (CI/agents)
IMPOSSIBLE_API_TOKEN=imp_xxx   # Env var (overrides credentials file)
```

Credentials are stored at `~/.impossible/credentials.json`.
Token format: `imp_` + 64 hex chars. The server stores only a SHA-256 hash.

## Deploy (the core flow)

Every deploy starts with `ifhost init`, which generates `impossible.toml` with machine specs (app name, port, memory, cpus, autostop, min-machines, storage). `ifhost deploy` errors if there is no `impossible.toml` in the current directory. Agents must pick specs explicitly, not rely on silent defaults.

```sh
ifhost init --app my-app --port 3000 --memory 512
ifhost deploy            # reads impossible.toml
```

Agent decision tree:

- Dockerfile in repo (web app, API, worker) → `ifhost deploy`
  - If a Docker daemon is running locally: builds and pushes the image.
  - Else: uploads a source tarball for a remote build (30 MB source limit).
- No Dockerfile but a `curl | sh` / pip / npm installer → `ifhost deploy --runner`
  - Boots a generic shell VM. Drive the install via `machines console` (see the "Runner + console" section below). The ONLY path for projects without a Dockerfile.
- Neither of the above → write a Dockerfile, or ask the user what they want. Never invent stack detection.

Required init flags (agents: pass all of these, no prompts):
`--app`, `--port`, `--memory`, `--cpus`, `--autostop`, `--min-machines`, `--storage`

Optional: `--cpu-kind` (shared | performance), `--cmd` (startup override)

The CLI auto-creates the app on first deploy.
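The decision tree above can be sketched as a small helper. This is illustrative only, not part of the CLI; `choose_deploy_path` and its `has_installer_script` parameter are names invented for this sketch:

```python
from pathlib import Path

def choose_deploy_path(project_dir: str, has_installer_script: bool) -> str:
    """Mirror the agent decision tree: Dockerfile -> plain deploy,
    installer-driven project -> --runner, otherwise ask the user."""
    root = Path(project_dir)
    if (root / "Dockerfile").exists():
        return "ifhost deploy"            # image build (local daemon or remote tarball)
    if has_installer_script:
        return "ifhost deploy --runner"   # generic shell VM + machines console
    return "ask-user"                     # never invent stack detection
```

The third branch deliberately returns a sentinel rather than guessing a stack, matching the "never invent stack detection" rule.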
Rolling update: when an app has multiple machines (autoscaled, scaled manually, or standby), deploy updates the primary first, then propagates the new image to every remaining machine. A deploy is only "done" once all machines run the new image. Use `ifhost machines --app <app>` to list each machine's state, and `ifhost machines exec --machine <id>` to verify a specific replica.

## CLI Commands

### Account (no --app needed)

```sh
ifhost login                  # GitHub device flow (interactive)
ifhost login --token          # Direct token (CI/agents)
ifhost login --switch         # Switch between stored accounts
ifhost logout                 # Remove active account
ifhost status                 # Login email, plan, all apps, CLI version
ifhost init --app <name> --port N --memory N --cpus N --autostop=BOOL --min-machines N --storage MODE
                              # Create impossible.toml — REQUIRED before first deploy
ifhost deploy                 # Build and deploy (reads impossible.toml)
ifhost deploy --runner        # Generic shell VM, no Dockerfile
ifhost regions                # List available Fly.io regions
ifhost version                # Check CLI version + update availability
ifhost update                 # Self-update to latest version
ifhost billing plan           # Show current plan
ifhost billing usage          # Show resource usage
```

### Machine management (requires --app)

```sh
ifhost machines --app <app>                         # List machines
ifhost machines start --app <app>                   # Start all machines
ifhost machines stop --app <app>                    # Stop all (no cost while stopped)
ifhost machines restart --app <app>                 # Restart with fresh env/secrets
ifhost machines scale N --app <app>                 # Manual scale to N machines
ifhost machines autoscale set --min 1 --max 5 --target 25 --app <app>   # Auto-scaling
ifhost machines autoscale off --app <app>           # Disable auto-scaling
ifhost machines logs --app <app>                    # Live tail (runs until Ctrl+C)
ifhost machines logs --app <app> --since 1h         # Last hour, then exit
ifhost machines logs --app <app> --lines 50         # Last 50 lines, then exit
ifhost machines logs --app <app> --grep "ERROR"     # Filter (works in both modes)
ifhost machines logs --app <app> --level error --lines 100   # Error-level only, historical
ifhost machines logs --app <app> --json             # Structured JSON per line
ifhost machines env set KEY=VALUE --app <app>       # Set env var (auto-restarts)
ifhost machines env list --app <app>                # List env vars
ifhost machines secrets set KEY=VALUE --app <app>   # Set encrypted secret (auto-restarts)
ifhost machines secrets list --app <app>            # List secret keys (values hidden)
ifhost machines volumes create --mount /data --size 5 --app <app>   # Create volume
ifhost machines volumes list --app <app>            # List volumes
ifhost machines volumes rm <vol> --app <app>        # Delete volume
ifhost machines domains add example.com --app <app> # Add custom domain + TLS
ifhost machines domains list --app <app>            # List domains
ifhost machines exec --app <app> -- <cmd> [args...] # Run command (picks first running machine)
ifhost machines exec --app <app> --machine <id> -- <cmd>   # Target a specific machine
ifhost machines destroy --app <app> --yes           # Delete app and all resources
```

Global flags: `--json` (structured output on all commands)

## Status (overview across all your apps)

`ifhost status` groups your account's apps by project and lists each project's URL, status, region, and current machine IDs (running + standby). Agents should run `status` first — the returned machine IDs are inputs for `machines exec --machine <id>` and `machines console start --app <app>`.

```sh
ifhost status          # human-readable list
ifhost status --json   # structured — pipe to jq for scripts
```

## Exec (run commands inside the running container)

```sh
ifhost machines exec --app my-app -- ls /
ifhost machines exec --app my-app -- env
ifhost machines exec --app my-app -- cat /app/config.json
ifhost machines exec --app my-app -- python manage.py migrate
ifhost machines exec --app my-app -- sh -c "df -h && free -m"
```

Multi-machine apps (autoscaled, standby, or manually scaled):

```sh
ifhost machines --app my-app                             # list machine IDs with name/state
ifhost machines exec --app my-app --machine 32d41... --  # pin exec to one machine
```

Without `--machine`, exec picks the first running machine. For tests that need to verify a specific replica (e.g., "did the rolling deploy reach every machine?"), target each machine ID in turn.
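That per-replica check can be scripted against `ifhost machines --app <app> --json`. The field names below (`id`, `state`, `image`) are assumptions about the `--json` schema, not documented fields; adjust them to the real output:

```python
import json

def unconverged_machines(machines_json: str, new_image: str) -> list[str]:
    """Return IDs of machines that are not yet running the new image.
    Input is the text of `ifhost machines --app <app> --json`; the
    id/state/image field names are assumed, not documented."""
    machines = json.loads(machines_json)
    return [
        m["id"]
        for m in machines
        if m.get("state") != "started" or m.get("image") != new_image
    ]
```

An agent can loop until this returns an empty list, then probe each remaining ID with `ifhost machines exec --machine <id>`.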
Exec runs inside the Docker container. Available tools depend on the base image:

- node:20 → has node, npm, sh, ls, cat, curl
- python:3.12 → has python, pip, sh, ls, cat, curl
- alpine → has sh, ls, cat (no bash, no curl unless installed)
- nginx:alpine → has sh, ls, cat, nginx
- distroless/scratch → NO shell, exec will fail

Limitations:

- 30-second timeout per command
- No interactivity (no vim, htop, tmux) — output returned as text
- No stdin — cannot pipe input
- Machine must be running (start with: `ifhost machines start --app <app>`)
- Runs as the container's default user (usually root unless the Dockerfile sets USER)

Use cases for agents:

- Debug: check env vars, inspect files, test connectivity
- Migrate: run database migrations after deploy
- Config: read/verify runtime configuration
- Health: check disk usage, memory, running processes

## Runner + console (for projects with multi-step / interactive setup)

Use when the upstream project's install is too complex or interactive to bake into a Dockerfile up front (`hermes setup`-style wizards, multi-step CLIs, custom auth flows, projects without a published image, agent-managed setup).

Flow:

1. `ifhost init --app <name> --memory 1024 --min-machines 1 --storage local`
   — Create impossible.toml. storage=local gives a persistent /data; min-machines=1 keeps the daemon up (autostop=true would kill idle bots).
2. `ifhost machines secrets set K=V K2=V2 --app <name>` — Inject creds the project needs
3. `ifhost deploy --runner --app <name>` — Boots a generic shell VM running /usr/bin/sleep inf
4. `ifhost deploy --runner --app <name>` — ONE more time after secrets — re-applies init config
5. `ifhost machines console start --app <name> -- bash` — Returns { session_id, machine_id }
6. Drive the install through console input/output (see below)

The runner is a generic shell environment, not a project-specific image. tmux is auto-installed on first `console start` if missing — works on Debian/Ubuntu (apt) and Alpine (apk) bases.
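For reference, step 1 of the flow corresponds roughly to an `impossible.toml` like the one below. This is a hedged sketch built only from the keys shown in the "Config File" section; the exact file `ifhost init` generates may differ:

```toml
app = "my-runner"
region = "iad"

[service]
internal_port = 8080
autostop = false    # keep the VM up; autostop would kill idle daemons
min_machines = 1

[resources]
cpu_kind = "shared"
cpus = 1
memory_mb = 1024

# storage = local: a persistent volume mounted at /data
[[volumes]]
name = "data"
size_gb = 3
mount_path = "/data"
```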
`--runner` is mutually exclusive with `--cmd`, `--local`, and `--remote`. It cannot be used for production HTTP services — for those, write a Dockerfile and use plain `ifhost deploy`.

### Driving an install through the console

```sh
SESSION="ifhost-01..."   # from `console start`
ifhost machines console input --app <app> "$SESSION" "<text>"
ifhost machines console input --app <app> "$SESSION" --key Enter
ifhost --json machines console output --app <app> "$SESSION" --lines 80
```

Echo a unique marker after each long command and poll `console output` until the marker (with rc) appears, so you know success/failure WITHOUT timing out:

```sh
cmd='apt-get install -y --no-install-recommends curl git python3 ...; echo __PHASE1_DONE__rc=$?'
# send cmd, send Enter
# poll until output matches: __PHASE1_DONE__rc=(\d+)
```

For background daemons (gateways, message-bus loops), launch inside a NAMED tmux session inside the container so the process survives `console` disconnect:

```sh
tmux new-session -d -s app "cd /data/src && PYTHONUNBUFFERED=1 exec ./bin/run 2>&1 | tee /data/app.log"
```

Always set `PYTHONUNBUFFERED=1` (or equivalent) for piped Python output — otherwise log files stay empty for minutes due to block buffering.

### Common gotchas

- `ifhost machines exec` has a ~60s edge-proxy timeout. Anything longer (apt installs, npm, pip with native compiles) needs the console + polling pattern above. The synchronous exec returns "unexpected EOF" on timeout.
- `shared-cpu-1x` is throttled. A "kitchen sink" apt install (build-essential + ffmpeg + nodejs) takes 5-10 min. Slim down to ONLY the packages you actually need — for a Telegram bot: `curl ca-certificates git python3 python3-venv python3-pip tmux` (~50 MB, ~1 min). Skip ffmpeg unless you need TTS, skip build-essential unless pip needs to compile, skip nodejs unless the project uses it. For genuinely heavy installs, consider bumping `[resources] cpus = 2` or `cpu_kind = "performance"`.
- Setting secrets restarts the machine but may drop the runner's init config. Always run `ifhost deploy --runner` ONE more time after the secrets are set, then start the console. Verify with `ifhost machines exec --app <app> -- env` that the secret keys are present in the container env.
- `ifhost deploy --runner` after `ifhost machines destroy` works on a fresh app — the previous machine + volume are gone, and a fresh `/data` is auto-created from `storage = "local"`. Same toml; no extra flags.
- For projects that ship optional dependency extras (`pip install .[all]`): install only the extras you NEED (e.g. `pip install python-telegram-bot`, not the whole `[messaging]` extra). Saves ~5 min per redeploy. Look at the project's pyproject.toml `[project.optional-dependencies]` to find minimal extra names.
- If the project has a `gateway`/daemon command that writes a PID file: pass its `--replace` flag on restart, or run its `stop --all` command first; otherwise the new instance refuses to start while the old PID is alive.
- For projects driven by `OPENROUTER_API_KEY` / model providers: set `model.default` and `model.provider` explicitly via the project's config CLI before the first message. An unset model often fails with confusing errors (e.g. an OpenRouter 404 "No endpoints found" error). Don't rely on "auto-detect" defaults.

### Verifying the bot/daemon is alive

```sh
ifhost machines exec --app <app> -- ps -ef | grep <process>
ifhost machines exec --app <app> -- tail -50 /data/<name>.log
ifhost machines exec --app <app> -- cat /data/logs/agent.log   # if the project has its own log dir
```

For Telegram-style bots, the canonical "it's working" signal in logs:

```
INFO ✓ telegram connected
INFO Gateway running with 1 platform(s)
```

If logs are empty or show just a banner: the daemon may be alive but Python output is buffered. Re-launch with `PYTHONUNBUFFERED=1` and check `/data/logs/*.log` (most projects also write structured logs there).
### When NOT to use --runner

- Single-process stateless web service with a Dockerfile → `ifhost deploy`
- You have a published image (Docker Hub, GHCR) that just needs a custom CMD → write a 3-line `FROM upstream:tag` wrapper Dockerfile and use `ifhost deploy --local` (Docker required). The local build path also avoids the 30 MB source-tarball remote-build limit.

## Config File (optional)

File: `impossible.toml` (in project root)

```toml
app = "my-app"
region = "iad"

[build]
dockerfile = "Dockerfile"

[service]
internal_port = 8080
autostop = true
min_machines = 0

[resources]
cpu_kind = "shared"   # "shared" or "performance"
cpus = 1              # 1, 2, 4, 8, 16
memory_mb = 256       # 256, 512, 1024, 2048, ...

[autoscale]
min = 0
max = 1
concurrency_target = 25

[[volumes]]
name = "data"
size_gb = 5
mount_path = "/data"

[env]
NODE_ENV = "production"
```

## REST API

All endpoints require:

```
Authorization: Bearer imp_xxx
Content-Type: application/json
```

### Auth

```
POST   /auth/login          { "email": "..." }        → { token, email }
POST   /auth/github/device  (none)                    → { device_code, user_code, verification_uri }
POST   /auth/github/token   { "device_code": "..." }  → { token, email, username } | 202 (pending)
GET    /auth/me                                       → { id, email }
POST   /auth/token          { "name": "..." }         → { token, name }
GET    /auth/tokens                                   → { tokens: [...] }
DELETE /auth/tokens/{id}                              → { revoked: id }
```

### Apps

```
GET    /apps                                         → { apps: [...] }
POST   /apps      { "name": "x", "region": "iad" }   → { id, name, fly_app_name, url }
GET    /apps/{name}                                  → { id, name, status, url, cpu_kind, cpus, memory_mb, ... }
PATCH  /apps/{name}  { "memory_mb": 512, ... }       → { updated: [...] }
DELETE /apps/{name}                                  → { deleted: name }
```

### Deploy

```
POST /apps/{name}/deploy         { "image": "reg/x:tag" }  → { id, status, url, machine_id, machines, autoscale }
POST /apps/{name}/source-deploy  multipart: source.tar.gz  → { id, status, url, machine_id }
GET  /apps/{name}/deployments                              → { deployments: [...] }
POST /apps/{name}/registry-token                           → { token, registry, image }
```

### Machines

```
POST /apps/{name}/stop                                → { stopped: N }
POST /apps/{name}/start                               → { started: N }
POST /apps/{name}/restart                             → { restarted: N }
POST /apps/{name}/scale   { "count": 3 }              → { machines: 3 }
GET  /apps/{name}/machines                            → { machines: [...] }
POST /apps/{name}/exec    { "cmd": ["ls", "/data"] }  → { stdout, stderr, exit_code }
```

### Env & Secrets

```
GET /apps/{name}/env                                  → { vars: { K: V, ... } }
PUT /apps/{name}/env      { "vars": { "K": "V" } }    → { set: N, restarted: bool }
GET /apps/{name}/secrets                              → { keys: ["K1", "K2"] }
PUT /apps/{name}/secrets  { "secrets": { "K": "V" } } → { set: N, restarted: bool }
```

### Volumes

```
GET    /apps/{name}/volumes                           → { volumes: [...] }
POST   /apps/{name}/volumes  { "name": "data", "size_gb": 5, "mount_path": "/data" } → { ... }
DELETE /apps/{name}/volumes/{vol}                     → { removed: vol }
```

### Domains

```
GET    /apps/{name}/domains                                → { domains: [...] }
POST   /apps/{name}/domains  { "hostname": "example.com" } → { hostname, tls_status, cname_target }
DELETE /apps/{name}/domains/{h}                            → { removed: hostname }
```

### Logs

```
GET /apps/{name}/logs  ?format=json&since=1h&no-follow=true  → text/event-stream
```

### Health

```
GET /apps/{name}/health → { app, status, healthy, machines_total, machines_running, uptime_seconds,
                            http: { total_requests, error_rate, avg_response_ms, p99_response_ms },
                            machines: [...] }
```

### CLI Version (no auth)

```
GET /cli/version → { build_id: "20260416-024938" }
```

### Billing

```
GET    /billing/plan                          → { plan_tier, limits, usage (incl. estimated_cost_cents) }
GET    /billing/usage                         → { compute_seconds, bandwidth_bytes, storage_bytes }
GET    /apps/{name}/usage                     → { ... per-app usage }
PUT    /billing/alert  { "max_cents": 2000 }  → { max_cents, display }
GET    /billing/alert                         → { max_cents, current_cents, exceeded, display }
DELETE /billing/alert                         → { status: "removed" }
```

## Storage modes (--storage flag on deploy)

```sh
ifhost deploy --app my-app                  # stateless (default — no persistent storage)
ifhost deploy --app my-app --storage local  # local /data volume, single machine
```

Two modes:

### empty (default) — stateless

- No persistent disk. The container filesystem is ephemeral; it survives restarts but resets on redeploy.
- Use this for HTTP services, workers, or anything that keeps state in a managed DB (Supabase, Neon, Upstash, Turso) instead of locally.

### local — for embedded state (SQLite, file caches)

- 3 GB volume mounted at /data, persists across redeploys
- ⚠ DISABLES AUTO-SCALING — pinned to 1 machine
- ⚠ Region-locked at first deploy
- Use only when the app must write to disk and you accept single-machine operation. For Postgres/MySQL/Redis at scale, use a managed DB instead.

Object/cloud storage (auto-provisioned S3 buckets) was REMOVED. If your app needs blob storage, sign up directly with Tigris/S3/R2/B2 and pass credentials via `--secret`.

## Database recommendations (DO NOT use volumes for databases beyond SQLite)

For databases, use a MANAGED service:

- Supabase — managed Postgres, free tier, dashboard included
- Neon — serverless Postgres, scales to zero, free tier
- Turso — managed SQLite at the edge, distributed reads
- PlanetScale — managed MySQL, branching workflow

Connect via the DATABASE_URL env var:

```sh
ifhost deploy --secret DATABASE_URL=postgres://user:pass@host/db
```

These scale independently of your app, so your app stays stateless and auto-scalable. This is the recommended architecture for ifhost apps.

### Subscription

```
GET  /subscription                                               → { plan_tier, status, ... }
POST /subscription/checkout  { "plan": "pro", "method": "card" } → { checkout_url }
POST /subscription/cancel                                        → { cancelled: true }
GET  /subscription/invoices                                      → { invoices: [...] }
POST /subscription/crypto/{plan}  (x402 payment header)          → { upgraded: plan }
```

## Architecture

```
ifhost CLI (Go binary) → API Server (Go, Fly.io) → Fly.io Machines API
                               │
                         SQLite + WAL (on Fly Volume)
```

- CLI: Go + cobra. Single binary, no runtime deps.
- API: Go + chi. Handles auth, deploy orchestration, secrets encryption.
- DB: SQLite WAL. Secrets encrypted AES-256-GCM with per-app HKDF keys.
- Infra: Fly.io Machines — per-second billing, scale-to-zero, 30+ regions.

## Env var handling

- Non-sensitive (NODE_ENV, PORT): `ifhost machines env set` → injected into container env
- Sensitive (DB_URL, API_KEY): `ifhost machines secrets set` → values never returned by the API, only keys listable
- Both can also be set inline during deploy: `ifhost deploy --env K=V --secret K=V`
- Setting env/secrets auto-restarts the app so changes take effect immediately
- Never put secrets in impossible.toml `[env]` (committed to git) or Dockerfile ENV (baked into the image)

## Pricing tiers

- Free: $0/mo — 1 project, 256MB, 1 shared CPU, 10GB bandwidth
- Hobby: $10/mo — 5 projects, 1GB, 2 shared CPUs, 100GB bandwidth
- Pro: $25/mo — 20 projects, 4GB, 2 perf CPUs, 500GB bandwidth
- Team: $50/mo — 50 projects, 8GB, 4 perf CPUs, 2TB bandwidth