Getting Started

Build your first project: an LLM that writes poems.

Live playground. Code on the left, graph on the right. Click on nodes to inspect them.

Where do nodes come from?

Every node type (Text, LlmInference, Debug, etc.) comes from the node catalog. The catalog is the library of all available building blocks. Some are built-in (text, numbers, code, LLMs, debug). Others handle specific services (Slack, WhatsApp, email, databases). You can also build your own in Python or any language.

For the AI builder (Tangle), the catalog is in its memory. Tangle knows the built-in nodes by heart, and for less common ones, it searches the catalog at build time so it doesn't overload its context.

For you, there's a search bar: press Ctrl+P in the graph view to find any node by name. Each node in the catalog tells you what config it needs, what inputs and outputs it has, and what types they carry.

Breaking it down

Every project starts with a name and description:
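As a sketch (the exact keywords and values here are assumptions, not the tool's confirmed syntax), that might look like:

```
name: "Poem Writer"
description: "An LLM that writes poems about a given topic."
```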

Then you declare nodes. A node has an ID, a type, and config inside curly braces:
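A sketch of this project's Text node, following that shape (the label and value strings are illustrative, and the exact field layout is an assumption):

```
topic = Text {
  label: "Topic"
  value: "autumn rain"
}
```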

topic is the ID, used to reference this node in connections. Text is the node type. The config sets the label (what you see on the graph) and the value.

The LLM needs to know which model to use, what instructions to follow, and how creative to be. That comes from an LlmConfig node:
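A sketch with hypothetical field names, one per concern mentioned above: the model, the instructions, and a temperature for creativity (all values are placeholders):

```
config = LlmConfig {
  model: "your-model-name"
  instructions: "Write a short poem about the given topic."
  temperature: 0.9
}
```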

The LLM node itself declares what type of output it produces:
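A sketch of that declaration. The `-> (response: String)` port signature is described in this section; attaching it directly after the config block is an assumption:

```
poet = LlmInference {
  label: "Poet"
} -> (response: String)
```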

The -> (response: String) part is a port signature. It says: "this node has an output called response, and it carries a String." The LLM node doesn't know its output type ahead of time (the node catalog marks it as MustOverride), so you tell it what to expect.

In the graph view, a MustOverride port shows up as red. That's the compiler telling you: "I need you to decide the type here." You can either write it in the code (like above), or right-click on the red port in the graph and choose "Type" to set it directly. Every action in the code has a counterpart in the graph.

Connections wire nodes together. Each line reads like an assignment:
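For example, assuming ports are addressed as `node.port`, the connection from the topic to the LLM's prompt could read:

```
poet.prompt = topic.value
```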

"The poet's prompt gets its value from topic's value output." Data flows right to left.

Finally, a Debug node displays the poem:
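A sketch of the Debug node and its connection (the `input` port name is an assumption):

```
debug = Debug {
  label: "Poem"
}

debug.input = poet.response
```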

That's the whole project: four nodes, four connections. The LLM receives a topic and a config, writes a poem, and the Debug node shows it.

What's next