Live Flows
Live flows are execution sequences of machine learning models and regular integration tools, collectively referred to as "steps". What makes these flows "live" is that they are executed on demand (for example, via a webhook), unlike regular workflows, which are executed manually.
By creatively combining these different steps, we are able to use live flows to automate complex processes, such as company research, lead qualification, and content moderation.
Each time you run a live flow (either manually, via a webhook or from a batch flow), a new session is created. This session processes the input data in isolation from other sessions.
Steps
Live flows are made up of steps that are linked together in a custom sequence. There are a variety of different step types, each described below.
Steps can have multiple upstream steps but only one downstream step. While individual step types have different attributes, the common attributes are:
- prompt: this is the input to the step
- result: if the step runs successfully, this is the output
Linking steps
The result of one step can be used in any downstream step by referencing it with variable syntax.
For example, to feed the output of a previous node into an LLM node, the following syntax is used:
${node_name.result}
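For instance, the prompt of an LLM step could embed the output of a hypothetical upstream step named company_research:
Summarise the following research into three bullet points: ${company_research.result}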
Let's explore the different step types.
The Start step
Live flows start with a "Start" step. Each step will have at least two attributes:
- prompt: this is the input to the step
- result: this is the output of the step, if successful
In the case of a Start step, the prompt and result are empty.
The LLM step
The LLM step takes a prompt and applies an LLM (OpenAI's GPT-4o) to generate a response. The result of the LLM step is stored in the result attribute of the step (e.g. ${llm_call.result}).
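For example, a hypothetical LLM step named llm_call might use a prompt like the one below (user_message is simply an illustrative name for an upstream step); any downstream step can then reference ${llm_call.result}:
Classify the tone of the following message as positive, neutral or negative: ${user_message.result}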
The Input Step
Input steps are opportunities for users to provide input to a live flow. User inputs are handled via the live flow UI or via the TrueState REST API.
The message the user is presented with is the prompt, and their response is the result. If you include an input step in your live flow, each time the input step runs, the session will pause and wait for the user's input to be returned.
For this reason, we recommend including this step only in advanced applications.
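As an illustration, an input step named confirm_summary (a hypothetical name) could present a prompt such as:
Please review the generated summary and reply with either "approve" or "reject".
The user's reply is then available to downstream steps as ${confirm_summary.result}.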
The Decision Step
Decision steps control the routing of a live flow session by leveraging a special class of LLM called an NLI (natural language inference) model. These models, also known as universal classifiers, assess whether a piece of text meets some criteria (known as the hypothesis).
NLI models output a number between 0 and 1. The closer the number is to 1, the more the model "thinks" that the hypothesis is true.
We use NLI models here because when building flow routing like this, it's important that the result has a consistent structure. While text-generating LLMs (e.g. GPT models) are powerful, they lack this structural guarantee and, as such, are not the right models for this task.
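To make this concrete, here is an illustrative sketch of what the NLI model evaluates; the text, hypothesis and routing behaviour are examples only:
Text to assess: "Acme Corp manufactures surgical robots for hospitals."
Hypothesis: "This company operates in the healthcare industry."
Output: a score close to 1, so the session would be routed down the matching branch.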
The Search Step
Sometimes you need to return the most relevant records from a text database. The search step uses natural language search to return the most relevant records from a searchable dataset. Searchable datasets are created in batch workflows by running an embedding model on a dataset with a text field.
Searchable datasets can be selected from the dropdown in the step tab. Search steps return the top 10 most relevant records from the searchable dataset, with all fields included in JSON format.
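For example, a hypothetical search step named case_study_search might use the prompt below as its search query; a later LLM step could then reference ${case_study_search.result} to reason over the returned JSON records:
Case studies about data migration in the banking sector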
The REST API Step
Sometimes you'll need to get data from or trigger actions on an external service. In situations like this, the REST API step can be helpful.
The REST API step can be used to make calls to other services via REST APIs.
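As a sketch, a REST API step could perform the equivalent of the request below against a hypothetical CRM endpoint (the URL and body are purely illustrative); if the step is named rest_call, its response is then available to downstream steps as ${rest_call.result}:
curl -X POST \
  "https://api.example-crm.com/v1/leads" \
  -H "Content-Type: application/json" \
  -d '{"lead_name": "Acme Corp", "status": "qualified"}'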
The Web Scraping Step
Often you will want to extract information from a webpage. The web scraping step scrapes an individual URL from an IP address located in a nominated country.
If successful, the web scraping step returns the raw page data from the specified URL. This can then easily be passed to an LLM to interpret.
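For example, a hypothetical web scraping step named web_scrape could feed an LLM step whose prompt is:
Summarise the company's main products and services from the page content below: ${web_scrape.result}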
Executing a live flow
Live flows can be executed in three different ways:
Manually via the live flow UI
Here you can run the flow with or without a body of data passed to the flow. To run without a body, simply hit run; otherwise, click the dropdown icon and select "Run with body".
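For example, the body is a JSON object like the following (the field names are illustrative):
{"company_name": "Acme Corp", "website": "https://acme.example.com"}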
On demand via the URL
External services and applications are able to trigger live flows by making a POST request to the TrueState API. Ensure that you have the relevant organisation id and a valid API key when executing this command.
These should be included in the headers of the request as Current-Org-Id and Api-key. Some services don't allow for headers to be specified in a request, in which case you can specify them as querystring parameters. Below is an example of how to achieve this.
curl -X POST \
"https://api.truestate.io/live-flows/{live-flow-id}/sessions/?api_key={api-key}¤t_org_id={organisation_id}" \
-d '{"key":"value"}'
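If the calling service does support headers, the same call can be made with the credentials supplied as headers instead, using the header names above:
curl -X POST \
  "https://api.truestate.io/live-flows/{live-flow-id}/sessions/" \
  -H "Current-Org-Id: {organisation_id}" \
  -H "Api-key: {api-key}" \
  -d '{"key":"value"}'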
Using Live Flow Variables
We've already seen how you can use the result of one node in another like so:
${node_name.result}
This is one type of variable, but there are other types you can utilise to achieve the desired result.
The 'INPUTS' Variable
The JSON body that you can provide when running the live flow is stored in the 'INPUTS' variable.
This variable can be accessed within your nodes as ${INPUTS['field_name']}.
Here is a basic example to demonstrate how to use the 'INPUTS' variable: