Call it from code (API)

Hit a URL, your project runs. Good for cron jobs, scripts, integrations, anything that is not a human clicking a button.

Drop an API Endpoint (POST) trigger into your project. Weft generates a public URL. POST JSON at it, the project runs, and the JSON fields show up as typed output ports the rest of your graph can read. Done.

This is the main way to make a Weft project part of a bigger system. Zapier, a cron job on your server, a button in another app, a webhook from Stripe, a model serving layer, any of those can POST at your project and get an execution started.

A tiny example

Here is a project that accepts a topic in the POST body and has an LLM write a poem about it.
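As a rough textual sketch (node names past ApiPost are illustrative, not exact Weft syntax):

```
ApiPost -> (topic: String)
topic -> LLM "Write a short poem about {topic}" -> poem
```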

The key part is ApiPost -> (topic: String). You declare the expected fields of the incoming JSON body as output ports on the trigger node. Weft typechecks the downstream wiring against that schema.

How the URL is generated

The URL is generated the first time you activate the trigger. Flip to the runner view of your project, click the trigger's Start button in the ActionBar above the Run area, and the ApiPost node in the graph starts showing an Endpoint URL line with a copy button right inside the node.

On cloud, the host is your Weft cloud URL. On local dev, it is localhost on the weft-api port, which defaults to 3000 (override with the PORT env var). The <trigger-id> is stable: once the trigger is registered, the URL keeps accepting calls until you unregister the trigger (which deletes its row from the database). Deactivating the trigger in the UI only stops in-memory listeners for polling-style triggers; the webhook endpoint itself stays live, because the handler just looks up the row and fires an execution from the stored weft code.

Screenshot needed
An ApiPost node in the graph with its Endpoint URL display item visible, showing the generated URL and the copy button next to it.

Calling it

curl:
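(The URL below is a placeholder; copy the real one from the ApiPost node.)

```shell
curl -X POST "http://localhost:3000/<trigger-id>" \
  -H "Content-Type: application/json" \
  -d '{"topic": "autumn rain"}'
```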

Python:
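A stdlib-only sketch; the URL is a placeholder you copy from the ApiPost node, and the api_key argument is only needed once you set a key on the trigger:

```python
import json
import urllib.request

def build_request(url, payload, api_key=None):
    """Build a POST request for a Weft API Endpoint trigger."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["x-api-key"] = api_key
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(url, data=data, headers=headers, method="POST")

def trigger(url, payload, api_key=None):
    """Send the request; returns the scheduling acknowledgement, not the result."""
    with urllib.request.urlopen(build_request(url, payload, api_key)) as resp:
        return json.loads(resp.read())

# Usage (URL is a placeholder -- copy yours from the ApiPost node):
# trigger("http://localhost:3000/<trigger-id>", {"topic": "autumn rain"})
```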

JavaScript / Node:
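A sketch for Node 18+, which ships fetch built in; again, the URL is a placeholder:

```javascript
// Trigger a Weft project over its API Endpoint URL.
async function triggerWeft(url, payload, apiKey) {
  const headers = { "Content-Type": "application/json" };
  if (apiKey) headers["x-api-key"] = apiKey;
  const res = await fetch(url, {
    method: "POST",
    headers,
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`trigger failed: ${res.status}`);
  return res.json(); // the scheduling acknowledgement, not the final result
}

// Usage (URL is a placeholder -- copy yours from the ApiPost node):
// triggerWeft("http://localhost:3000/<trigger-id>", { topic: "autumn rain" })
//   .then(console.log);
```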

Fire and forget

The endpoint is fire-and-forget. The POST returns as soon as Weft has scheduled the execution, not when the execution finishes. The response body looks like this:
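The exact field names may differ in your version; the shape is an acknowledgement that carries the execution ID you can poll for later, roughly:

```json
{
  "executionId": "…",
  "status": "scheduled"
}
```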

This is intentional. Executions can take seconds, minutes, or days (when a human is in the loop). Keeping the HTTP connection open that long does not make sense. If you need the result, poll the executions view for the execution ID, or wire the project to push the result somewhere the caller can read (a Postgres row, a Slack message, another webhook).

If you need a real request/response loop where the HTTP caller waits for the result, that is on the roadmap but not shipped yet. For now, structure your API-triggered projects as "receive, process, write the result somewhere" rather than "receive, return".

Mapping the body to ports

The trigger node expects a JSON object in the request body. Each top-level key in the JSON becomes an output port on the trigger node, provided you declared that port. Ports you did not declare are ignored.

For the poem example, this body:
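(values here are made up)

```json
{
  "topic": "autumn rain",
  "style": "haiku"
}
```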

...fills the topic output port because you declared it. The style key is silently dropped because the trigger does not have a matching port. If you want style, add it to the signature:
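Using the same port notation, with ? marking style as optional:

```
ApiPost -> (topic: String, style: String?)
```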

Mark optional fields with ?. Required fields that are missing from the body become null and stop the execution via null propagation, which is usually what you want.

The trigger node also exposes a receivedAt output automatically, with an RFC 3339 timestamp of when the request hit the server.

Authentication

By default the endpoint is open. Anyone who knows the URL can trigger the project. Fine for local dev and for URLs you keep secret, not great for production.

Set an API key on the trigger node to require authentication:
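The exact config field name is a guess here; conceptually it is a single value on the node's config, alongside the HMAC settings mentioned further down:

```
apiKey: "a-long-random-string"
```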

Then callers must send it in the x-api-key header:
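(URL and key below are placeholders)

```shell
curl -X POST "http://localhost:3000/<trigger-id>" \
  -H "Content-Type: application/json" \
  -H "x-api-key: a-long-random-string" \
  -d '{"topic": "autumn rain"}'
```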

Requests without the header (or with the wrong key) get 401. The comparison is constant-time, so you do not leak the key through timing.
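The idea behind a constant-time check, sketched with Python's hmac.compare_digest (an illustration of the technique, not Weft's actual code):

```python
import hmac

STORED_KEY = b"s3cret-example-key"

def key_matches(candidate: bytes) -> bool:
    # Compares every byte regardless of where the first mismatch is,
    # so response time does not reveal how much of the key was right.
    return hmac.compare_digest(STORED_KEY, candidate)
```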

For webhooks from services that sign their payloads (GitHub, Stripe, etc.), the trigger node also supports HMAC signature validation. Set secret, signatureHeader, and signaturePrefix on the node's config. Ask Tangle to set this up for a specific provider; it knows the format each service expects.
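For reference, GitHub-style signing computes an HMAC-SHA256 over the raw request body and prefixes the hex digest; the prefix is what signaturePrefix matches. A sketch of what the sender computes (not Weft's implementation):

```python
import hashlib
import hmac

def github_style_signature(secret: str, raw_body: bytes) -> str:
    """Value GitHub sends in its signature header, e.g. X-Hub-Signature-256."""
    digest = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return "sha256=" + digest
```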

Watching runs

Every API-triggered execution is recorded just like a manual run. Open the Executions page from the dashboard to see the list, with the body each webhook received, the nodes that ran, the outputs, and any errors. This is where you debug production traffic.

Response codes

  • 200: execution scheduled.
  • 401: wrong API key or bad signature.
  • 404: trigger ID not found in the database (the trigger was never registered, or it was unregistered).
  • 400: the project failed to re-compile, or the stored trigger is not actually a Webhook type. The re-compile path usually means you re-published the deployment and broke typing.
  • 500: backend error (database, executor, compilation). Check the server logs.

What's next