Configure how your agent thinks, responds, and decides when to use its skills.
The Agent is the brain of your project. When a message comes in, the Agent reads it, decides what to do, and orchestrates your playbooks, workflows, and system tools to handle the conversation. It might reply directly, route to a playbook, kick off a workflow, or use a system tool like knowledge base search. Open the Agent tab in the sidebar to configure it.
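For example, if you've built an order-tracking workflow, a message like "Where's my order?" might be routed there, while "What are your opening hours?" could be answered directly from your knowledge base. The specifics depend on the playbooks and workflows you've added to your project.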
The Agent uses two prompts, and understanding the difference is key to getting good results.

The Global prompt applies both to your Agent and to your playbooks, including those used in workflows via the Playbook step. Every response your Agent generates, whether from the Agent itself or from a playbook, includes this prompt. Use it for tone, persona, formatting rules, or constraints that should always apply. To get started, we recommend enabling our default guidelines, as these were written with best practices in mind.

Instructions control the Agent's own behavior. This is where you tell it how to handle requests, when to route to playbooks or workflows, and when to reply directly. Instructions don't apply inside playbooks, so keep routing logic here and universal rules in the Global prompt.

Click the lightning bolt icon to generate either prompt using AI, then refine it for your use case.
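As a purely illustrative example, a Global prompt might read "You are a friendly support assistant for an online store. Keep replies short and never promise refunds," while the Instructions might read "If the user asks about an order, route to the order-status workflow; for product questions, search the knowledge base; otherwise reply directly." The store, workflow, and policies here are placeholders; the point is where each kind of rule belongs.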
The right panel shows what the Agent can access. Add and remove Playbooks and Workflows to control which ones your Agent can route to automatically.

Enable System tools like knowledge base, web search, and end to let your Agent perform those actions, or cards, carousels, and buttons to let it send visual responses. Note that any playbooks automatically have access to the system tools enabled at the Agent level.

Click any playbook or workflow to add an LLM description. This helps the Agent decide when to use it.
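For example, a hypothetical returns playbook might carry the description "Use when the customer wants to return or exchange an item they've already purchased," which gives the Agent a clear signal for when to route there.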
At the top of the Agent tab, you can see and change the LLM (large language model) that powers your Agent. This model is also used as the default LLM for all playbooks in your project. On phone projects and web chat projects with voice mode enabled, you can also change your text-to-speech and speech-to-text models.

Each model comes with tradeoffs: a more powerful LLM, for example, will likely also introduce more latency. Building a powerful AI agent requires you to decide how to prioritize power, cost, and latency, and there is no one-size-fits-all approach.
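For example, a voice agent on a phone project may favor a smaller, faster model so the conversation feels natural, while a text-based agent handling complex, multi-step requests may justify the extra latency and cost of a more powerful one.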