Open-source developer infrastructure for internal tools (APIs, background jobs, workflows and UIs). Self-hostable alternative to Retool, Pipedream, Superblocks and a simplified Temporal with autogenerated UIs and custom UIs to trigger workflows and scripts as internal apps.
Scripts are turned into shareable UIs automatically, and can be composed together into flows or used in richer apps built with low-code. Supported script languages are: Python, TypeScript, Go, Bash, SQL, and GraphQL.
Try it - Docs - Discord - Hub - Contributor's guide
Windmill is fully open-source (AGPLv3), and Windmill Labs offers dedicated instances, commercial support and licenses.
Windmill - Developer platform for APIs, background jobs, workflows and UIs

- Define a minimal and generic script in Python, TypeScript, Go or Bash that solves a specific task. The code can be defined in the provided Web IDE or synchronized with your own GitHub repo (e.g. through the VS Code extension).
- Your script's parameters are automatically parsed and generate a frontend.
- Make it flow! You can chain your scripts or scripts made by the community shared on WindmillHub.
- Build complex UIs on top of your scripts and flows.
Scripts and flows can also be triggered by a cron schedule (e.g. '*/5 * * * *') or through webhooks.
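For illustration, a deployed script can be triggered over HTTP with a user token. This is only a sketch: the workspace, script path, token and payload below are placeholders, and the exact URL shapes are described in the webhooks docs.

# hypothetical example: run a deployed script and wait for its result
# (replace the workspace, script path and token with your own values)
curl -X POST "https://app.windmill.dev/api/w/<workspace>/jobs/run_wait_result/p/u/user/my_script" \
  -H "Authorization: Bearer $WM_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"a": 1}'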
You can build your entire infra on top of Windmill!
// import any dependency from npm
import * as wmill from "windmill-client";
import * as cowsay from "cowsay@1.5.0";

// fill the type, or use the +Resource type to get a type-safe reference to a resource
type Postgresql = {
  host: string;
  port: number;
  user: string;
  dbname: string;
  sslmode: string;
  password: string;
};

export async function main(
  a: number,
  b: "my" | "enum",
  c: Postgresql,
  d = "inferred type string from default arg",
  e = { nested: "object" }
  // f: wmill.Base64
) {
  const email = process.env["WM_EMAIL"];
  // variables are permissioned and by path
  let variable = await wmill.getVariable("f/company-folder/my_secret");
  const lastTimeRun = await wmill.getState();
  // logs are printed and always inspectable
  console.log(cowsay.say({ text: "hello " + email + " " + lastTimeRun }));
  await wmill.setState(Date.now());
  // return is serialized as JSON
  return { foo: d, variable };
}
We have a powerful CLI to interact with the Windmill platform: sync your scripts from local files or GitHub repos, and run scripts and flows on the instance from local commands. See more details.
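A typical loop with the CLI looks roughly like the following. This is a sketch: the workspace name, id and URL are placeholders, and the authoritative list of subcommands and flags is in the CLI docs.

# point the CLI at your instance and workspace (all values below are placeholders)
wmill workspace add my_workspace my_workspace_id https://app.windmill.dev/
# pull the workspace content into local files, edit locally, then push back
wmill sync pull
wmill sync push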
You can easily run your script locally; you simply need to pass the right environment variables for the wmill client library to fetch resources and variables from your instance when necessary. See more: https://www.windmill.dev/docs/advanced/local_development.
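As a sketch, assuming the environment variable names read by the current client library (the local development docs are the authoritative reference), a local run could look like:

# hypothetical local setup: point the wmill client at your instance
export BASE_INTERNAL_URL="https://app.windmill.dev"   # your instance URL
export WM_TOKEN="<your_user_token>"                   # token created in your user settings
export WM_WORKSPACE="<your_workspace_id>"
# then execute the script with your usual runtime (e.g. deno, bun or python3)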
To develop & test scripts & flows locally, we recommend using the Windmill VS Code extension: https://www.windmill.dev/docs/cli_local_dev/vscode-extension.
- Postgres as the database.
- Backend in Rust with the following highly-available and horizontally scalable architecture:
  - Stateless API backend.
  - Workers that pull jobs from a queue in Postgres (and later, Kafka or Redis. Upvote #173 if interested).
- Frontend in Svelte.
- Script executions are sandboxed using Google's nsjail.
- The JavaScript runtime is the deno_core Rust library (which itself uses rusty_v8 and hence V8 underneath).
- The TypeScript runtimes are Bun and Deno.
- The Python runtime is python3.
- The Go runtime is 1.19.1.
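This separation means API servers and workers can be scaled independently against the same Postgres database. A rough sketch with plain docker, assuming the published image tag and a placeholder connection string (the provided docker-compose.yml is the reference setup):

# one stateless API server exposed on port 8000 (image tag and DATABASE_URL are placeholders)
docker run -d --name windmill-server \
  -e DATABASE_URL="postgres://user:pass@db:5432/windmill" \
  -e MODE=server -p 8000:8000 ghcr.io/windmill-labs/windmill:main
# and as many workers as needed, each pulling jobs from the Postgres queue
docker run -d --name windmill-worker-1 \
  -e DATABASE_URL="postgres://user:pass@db:5432/windmill" \
  -e MODE=worker ghcr.io/windmill-labs/windmill:main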
We have compared Windmill to other self-hostable workflow engines (Airflow, Prefect & Temporal) and Windmill is the most performant solution for both benchmarks: one flow composed of 40 lightweight tasks & one flow composed of 10 long-running tasks.
All methodology & results on our Benchmarks page.
Windmill can use nsjail. It is production-grade secure for multi-tenant use. Do not take our word for it; take fly.io's.
There is one encryption key per workspace to encrypt the credentials and secrets stored in Windmill's K/V store.
In addition, we strongly recommend that you encrypt the whole Postgres database. That is what we do at https://app.windmill.dev.
Once a job has started, there is no overhead compared to running the same script on the node with its corresponding runner (Deno/Go/Python/Bash). The added latency from a job being pulled from the queue, started, and then having its result sent back to the database is ~50ms. A typical lightweight Deno job will take around 100ms total.
We only provide the docker-compose setup here. For more advanced setups, such as compiling from source or running without a Postgres superuser, see the Self-Host documentation.
Windmill can be deployed with a single command using 3 files: docker-compose.yml, Caddyfile and a .env.
Make sure Docker is started, and run:
curl https://raw.githubusercontent.com/windmill-labs/windmill/main/docker-compose.yml -o docker-compose.yml
curl https://raw.githubusercontent.com/windmill-labs/windmill/main/Caddyfile -o Caddyfile
curl https://raw.githubusercontent.com/windmill-labs/windmill/main/.env -o .env
docker compose up -d
Go to http://localhost et voilà :)
The default super-admin user is: admin@windmill.dev / changeme.
From there, you can follow the setup app and create other users.
More details in the Self-Host Documentation.
We publish helm charts at: https://github.com/windmill-labs/windmill-helm-charts.
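As an illustrative sketch only (the repository URL, chart and release names below are assumptions; refer to that repository's README for the authoritative instructions):

# add the Helm repository and install the chart (names and URL are assumptions)
helm repo add windmill https://windmill-labs.github.io/windmill-helm-charts/
helm repo update
helm install windmill windmill/windmill --namespace windmill --create-namespace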
Each release includes the corresponding binaries for x86_64. You can simply
download the latest windmill
binary using the following set of bash commands.
BINARY_NAME='windmill-amd64' # or windmill-ee-amd64 for the enterprise edition
LATEST_RELEASE=$(curl -L -s -H 'Accept: application/json' https://github.com/windmill-labs/windmill/releases/latest)
LATEST_VERSION=$(echo $LATEST_RELEASE | sed -e 's/.*"tag_name":"\([^"]*\)".*/\1/')
ARTIFACT_URL="https://github.com/windmill-labs/windmill/releases/download/$LATEST_VERSION/$BINARY_NAME"
wget "$ARTIFACT_URL" -O windmill
Windmill Community Edition allows you to configure OAuth and SSO (including Google Workspace SSO, Microsoft/Azure and Okta) directly from the UI in the superadmin settings. Do note that there is a limit of 10 SSO users on the Community Edition.
To self-host Windmill, you must respect the terms of the AGPLv3 license, which you do not need to worry about for personal use. For business use, you should be fine if you do not re-expose Windmill in any way to your users and are comfortable with AGPLv3.
To re-expose any Windmill parts to your users as a feature of your product, or to build a feature on top of Windmill, your product must be AGPLv3 to comply with the license, or you must get a commercial license. Contact us at ruben@windmill.dev if you have any doubts.
In addition, a commercial license grants you a dedicated engineer to transition your current infrastructure to Windmill, support with a tight SLA, and our global cache synchronization for high performance and no dependency cache misses on clusters from 10+ to 200+ nodes.
In Windmill, integrations are referred to as resources and resource types. Each Resource has a Resource Type that defines the schema that the resource needs to implement.
On self-hosted instances, you might want to import all the approved resource types from WindmillHub. A setup script will prompt you to have them synced automatically every day.
| Environment Variable name | Default | Description | Api Server/Worker/All |
| --- | --- | --- | --- |
| DATABASE_URL | | The Postgres database url. | All |
| WORKER_GROUP | default | The worker group the worker belongs to and gets its configuration pulled from | Worker |
| MODE | standalone | The mode of the binary. Possible values: standalone, worker, server, agent | All |
| METRICS_ADDR | None | (ee only) The socket addr at which to expose Prometheus metrics at the /metrics path. Set to "true" to expose it on port 8001 | All |
| JSON_FMT | false | Output the logs in json format instead of logfmt | All |
| BASE_URL | http://localhost:8000 | The base url that is exposed publicly to access your instance. Is overridden by the instance settings, if any. | Server |
| ZOMBIE_JOB_TIMEOUT | 30 | The timeout after which a job is considered to be a zombie if the worker did not send pings about processing the job (every server checks for zombie jobs every 30s) | Server |
| RESTART_ZOMBIE_JOBS | true | If true, a zombie job is restarted (in-place with the same uuid and some logs); if false, the zombie job is failed | Server |
| SLEEP_QUEUE | 50 | The number of ms to sleep in between the last check for new jobs in the DB. It is multiplied by NUM_WORKERS such that on average, for one worker instance, there is one pull every SLEEP_QUEUE ms. | Worker |
| KEEP_JOB_DIR | false | Keep the job directory after the job is done. Useful for debugging. | Worker |
| LICENSE_KEY (EE only) | None | License key checked at startup for the Enterprise Edition of Windmill | Worker |
| SLACK_SIGNING_SECRET | None | The signing secret of your Slack app. See Slack documentation | Server |
| COOKIE_DOMAIN | None | The domain of the cookie. If not set, the cookie will be set by the browser based on the full origin | Server |
| DENO_PATH | /usr/bin/deno | The path to the deno binary. | Worker |
| PYTHON_PATH | /usr/local/bin/python3 | The path to the python binary. | Worker |
| GO_PATH | /usr/bin/go | The path to the go binary. | Worker |
| GOPRIVATE | | The GOPRIVATE env variable to use private Go modules | Worker |
| GOPROXY | | The GOPROXY env variable to use | Worker |
| NETRC | | The netrc content to use a private Go registry | Worker |
| PATH | None | The path environment variable, usually inherited | Worker |
| HOME | None | The home directory to use for Go and Bash, usually inherited | Worker |
| DATABASE_CONNECTIONS | 50 (Server) / 3 (Worker) | The max number of connections in the database connection pool | All |
| SUPERADMIN_SECRET | None | A token that lets the caller act as a virtual superadmin superadmin@windmill.dev | Server |
| TIMEOUT_WAIT_RESULT | 20 | The number of seconds to wait before timeout on the 'run_wait_result' endpoint | Worker |
| QUEUE_LIMIT_WAIT_RESULT | None | The max number of jobs in the queue before immediately rejecting requests to the 'run_wait_result' endpoint. Takes precedence over the query arg. If none is specified, there is no limit. | Worker |
| DENO_AUTH_TOKENS | None | Custom DENO_AUTH_TOKENS to pass to the worker to allow the use of private modules | Worker |
| DISABLE_RESPONSE_LOGS | false | Disable response logs | Server |
| CREATE_WORKSPACE_REQUIRE_SUPERADMIN | true | If true, only superadmins can create new workspaces | Server |
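For illustration, a dedicated worker instance could be configured with a handful of these variables; all values below are placeholders:

# example worker configuration built from the table above (placeholder values)
export DATABASE_URL="postgres://user:password@db:5432/windmill"
export MODE=worker
export WORKER_GROUP=default   # pull configuration for the 'default' worker group
export KEEP_JOB_DIR=true      # keep job directories around for debugging
export JSON_FMT=true          # emit json logs instead of logfmt
./windmill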
See the ./frontend/README_DEV.md file for all running options.
Using Nix.
This will use the backend of https://app.windmill.dev but your own frontend with hot-code reloading. Note that you will need to use a username / password login due to CSRF checks against a different auth provider.
In the frontend/ directory:

- Install the dependencies with npm install (or pnpm install or yarn).
- Generate the windmill client:

  npm run generate-backend-client
  ## on mac use
  npm run generate-backend-client-mac

- Run your dev server with npm run dev.
- Et voilà, windmill should be available at http://localhost/.
See the ./frontend/README_DEV.md file for all running options.
- Create a Postgres database for Windmill and create an admin role inside your Postgres setup. The easiest way to get a working db is to run

  cargo install sqlx-cli
  env DATABASE_URL=<YOUR_DATABASE_URL> sqlx migrate run

  This will also avoid compile-time issues with sqlx's query! macro.
- Install nsjail and have it accessible in your PATH.
- Install deno and python3, with the binaries at /usr/bin/deno and /usr/local/bin/python3.
- Install caddy.
- Install the lld linker.
- Go to frontend/:
  - npm install, npm run generate-backend-client, then npm run dev
  - You might need to set some extra heap space for the node runtime: export NODE_OPTIONS="--max-old-space-size=4096"
  - In another shell, run npm run build, otherwise the backend will not find the frontend/build folder and will not compile.
  - In another shell, run sudo caddy run --config Caddyfile
- Go to backend/ and run env DATABASE_URL=<DATABASE_URL_TO_YOUR_WINDMILL_DB> RUST_LOG=info cargo run
- Et voilà, windmill should be available at http://localhost/.
Windmill Labs, Inc 2023