Tools & APIs in Parlant

Parlant provides a guided approach to tool usage, tightly integrated with its guidance system.

Parlant's tool-calling mechanism is built from the ground up to enable a deep, seamless integration with its guidance-based behavior control. As a result, it's more comprehensive than the tool-calling mechanisms you may be familiar with from most LLM APIs (including MCP).

Getting Started with Tool Services

The best way to get started with custom tool services is to use the code generation command that comes with parlant-server.

parlant-server module create my_tool_service --template tool-service

This will create a scaffold module and add it to ./parlant.toml so that your server loads it automatically when it boots up.

Understanding Tool Usage

In Parlant, tools are always associated with specific guidelines. A tool only executes when its associated guideline is matched to the conversation. This design creates a clear chain of intent: guidelines determine when and why tools are used, rather than leaving it to the LLM's judgment.

In Parlant, business logic (encoded in tools) is kept separate from presentation (or user-interface) concerns, i.e. the conversational behavior controlled by the guidelines themselves. This lets developers work out the logic in code, with full control, offering these tools in the "tool shed" of a Parlant server for business experts to then utilize in their guideline definitions as needed.

As an analogy, you can think of Guidelines and Tools like Widgets and Event Handlers in Graphical UI frameworks. A GUI Button has an onClick event which we can associate with some API function to say, "When this button is clicked, run this function." In the same way, in Parlant (which is essentially a Conversational UI framework, if you think about it), the Guideline is like the Button, the Tool is like the API function, and the association connects the two (like registering an event handler) to say, "When this guideline is applied, run this tool."

Here's a concrete example to illustrate these concepts:

  • Condition: The user asks about service features
  • Action: Understand their intent and consult the docs to answer
  • Tool Associations: [query_docs(user_query)]

Here, the documentation query tool only runs once this guideline is matched; that is, when the interaction actually calls for consulting the docs.
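
To make this concrete, the associated tool could be an ordinary Parlant tool, along the lines of this minimal sketch (it uses the @tool / ToolContext / ToolResult pattern shown later on this page; search_docs is a hypothetical helper in your own codebase):

@tool
async def query_docs(context: ToolContext, user_query: str) -> ToolResult:
    # Retrieve the documentation passages most relevant to the user's question.
    matching_passages = await search_docs(user_query)
    return ToolResult(matching_passages)

The next section shows how to associate a tool like this with a guideline.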

Associating a Guideline with a Tool

To allow a guideline to run a tool, here's what you do:

parlant guideline tool-enable --id GUIDELINE_ID --service SERVICE_NAME --tool TOOL_NAME

Dynamic Behavior Based on Tool Results

When you enable multiple engine iterations, tool results can potentially influence which guidelines become relevant, creating dynamic, data-driven behavior that remains under guideline control.

Engine Iterations

To learn more about enabling multiple engine iterations, please review the optimization page.

Here's how this works in practice. Consider a banking agent handling transfers. When a user requests a transfer, a guideline with the condition "the user wants to make a transfer" activates the get_user_account_balance() tool to check available funds. This tool returns the current balance, which can then trigger additional guideline matches based on the returned value.

For instance, if the balance is below $500, we might have a low-balance guideline activate, instructing the agent to say something like: "I see your current balance is low. Are you sure you want to proceed with this transfer? This transaction might put you at risk of overdraft fees."
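
In tool code, the balance check itself might look something like this (a minimal sketch; DB.get_balance is a hypothetical data-access helper):

@tool
async def get_user_account_balance(context: ToolContext) -> ToolResult:
    # Look up the balance for the customer attached to this session.
    balance = await DB.get_balance(context.customer_id)
    return ToolResult({"balance": balance})

With multiple engine iterations enabled, a second guideline whose condition refers to the returned balance (say, "the customer's balance is below $500") can then match on this result and produce the overdraft warning described above.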

Best Practices

A Note on Natural Language Programming

While LLMs excel at conversational tasks, they struggle with complex logical operations and multi-step planning. Recent research in LLM architectures shows that even advanced models have difficulty with consistent logical reasoning and sequential decision-making. The "planning problem" in LLMs—breaking down complex tasks into ordered steps and synthesizing conclusions—remains largely unsolved when consistency at scale is required.

Given these limitations, Parlant takes a pragmatic approach: Separate logic in code from conversation modeling. Instead of embedding business logic in guidelines, Parlant encourages a clean separation between conversational behavior and underlying business operations.

Consider your tools as your place for deterministic, programmatic business logic, and guidelines as your conversational interface design. This separation creates cleaner, more maintainable, and more reliable systems.

Examples

1. E-commerce Product Recommendations:

DON'T Complex business logic in guideline, over-relying on the LLM

  • Guideline Action: If user mentions sports, check their purchase history. If they bought running gear, recommend premium shoes. If they're new, suggest starter kit.
  • Tool Associations: [get_product_catalog()]

DO Logic goes in the coded recommendation engine, where it belongs

  • Guideline Action: Offer personalized recommendations
  • Tool Associations: [get_personalized_recommendations(user_context)]
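
For example, the "DO" version above could be backed by a tool that keeps all of the branching in deterministic code (a rough sketch; get_purchase_history and the product-lookup helpers are hypothetical):

@tool
async def get_personalized_recommendations(
    context: ToolContext,
    user_context: str,
) -> ToolResult:
    # The recommendation logic lives in code, not in the guideline.
    history = await get_purchase_history(context.customer_id)
    if not history:
        recommendations = await get_starter_kits()
    elif any(item.category == "running" for item in history):
        recommendations = await get_premium_running_shoes()
    else:
        recommendations = await get_general_recommendations(user_context)
    return ToolResult(recommendations)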

2. Financial Advisory:

DON'T Financial analysis logic relies on unreliable LLM numeric comprehension

  • Guideline Action: Check account balance and recent transactions. If spending exceeds 80% of usual pattern, suggest budget review. If investment returns are down, recommend portfolio adjustment.
  • Tool Associations: [get_account_data()]

DO Financial analysis logic is handled reliably in code

  • Guideline Action: Get personalized financial insights
  • Tool Associations: [get_financial_insights(account_id)]
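
Here too, the analysis described in the "DON'T" version can be implemented deterministically inside the tool (a rough sketch; load_account_summary is a hypothetical data-access helper):

@tool
async def get_financial_insights(
    context: ToolContext,
    account_id: str,
) -> ToolResult:
    # Numeric analysis happens in testable code rather than in the guideline.
    summary = await load_account_summary(account_id)
    insights = []
    if summary.recent_spending > 0.8 * summary.usual_spending:
        insights.append("Spending exceeds 80% of the usual pattern; suggest a budget review.")
    if summary.investment_return < 0:
        insights.append("Investment returns are down; suggest a portfolio adjustment.")
    return ToolResult(insights)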

Tool Insights

Because Parlant's architecture is radically modular, components like guideline matching, tool calling, and message composition operate independently. While this non-monolithic approach offers many advantages in managing Parlant's complex semantic logic, it also means that contextual awareness has to be managed and communicated explicitly across these components.

Tool Insights is a bridging component between tool calling and message composition, ensuring the composition component is informed when a tool couldn't be called for some reason—for example, due to missing required parameters.

This allows the agent to respond more intelligently. For example, if it had no knowledge that an appropriate tool couldn't be called, it might generate a misleading response. With Tool Insights, the agent can recognize missing information and communicate it accordingly or, if needed, prompt the customer for the required arguments.

Tool Parameter Options

To allow you to control and enhance the baseline behavior of Tool Insights, we've introduced ToolParameterOptions, a special parameter annotation which adds more control over how tool parameters are handled and communicated.

While Tool Insights helps the agent recognize when and why a tool call fails, ToolParameterOptions goes a step further by guiding the agent on when and how to explain specific missing parameters.

The ToolParameterOptions consists of several optional arguments, each refining the agent’s understanding and application of the parameter:

  • hidden If set to True, this parameter will not be exposed to message composition. This means the agent won't notify the customer if it’s missing. It's commonly used for internal parameters like opaque product IDs or any other information that should remain behind the scenes.

  • precedence When a tool has multiple required parameters, the tool insights communicated to the customer can be overwhelming (e.g., asking for 5 different items in a single message). Precedence lets you group parameters (those sharing the same precedence value) so that the customer only learns about one group at a time, in the order you choose.

  • source Defines the source of the argument. Should the agent request the value directly from the customer ("customer"), or should it be inferred from the surrounding context ("context")? If not specified, the default is "any", meaning the agent can retrieve it from anywhere.

  • description This helps the agent interpret the parameter correctly when extracting its argument from the context. Fill this in if the parameter name is ambiguous or unclear.

  • significance A customer-facing description of why this parameter is required. This helps customers understand and relate to what information they need to provide and why.

  • examples A list of sample values illustrating how the argument should be extracted. This is useful for enforcing formats (e.g., a date format like "YYYY-MM-DD").

  • adapter A function that converts the inferred value into the correct type before passing it to the tool. If provided, the agent will run the extracted argument through this function to ensure it matches the expected format. Use when the parameter type is a custom type in your codebase.

  • choice_provider A function that provides valid choices for the parameter's argument. Use this to constrain the agent to dynamically choose a value from a specific set returned by this function.

Example

Let's look at a use case where we need to fetch the details of a specific transaction:

@tool
def get_transaction_details(
    context: ToolContext,
    transaction_id: str,
    preferred_currency: str,
) -> ToolResult:
    # ...
    return ToolResult(transaction_details)

You can define the same tool in a better-structured, more guided way using ToolParameterOptions. The new tool looks like this:

@tool
def get_transaction_details(
    context: ToolContext,
    transaction_id: Annotated[str, ToolParameterOptions(
        # Must be provided explicitly by the customer
        source="customer",
        # Helps the customer understand why we're asking for this
        significance="Needed to identify the specific transaction",
    )],
    preferred_currency: Annotated[Optional[str], ToolParameterOptions(hidden=True)],
) -> ToolResult:
    # your code here
    return ToolResult({...})
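
The example above covers source, significance, and hidden. The remaining options follow the same pattern; here's a rough sketch of precedence, description, examples, and adapter on a hypothetical booking tool (AppointmentDate and parse_appointment_date stand in for a custom type and converter in your own codebase):

@tool
def book_appointment(
    context: ToolContext,
    branch: Annotated[str, ToolParameterOptions(
        # Grouped separately from the date, so the customer isn't asked for everything at once
        precedence=1,
        significance="Needed to check availability at the right location",
    )],
    appointment_date: Annotated[AppointmentDate, ToolParameterOptions(
        precedence=2,
        description="The desired date for the appointment",
        examples=["2025-01-31"],  # hints at the expected YYYY-MM-DD format
        adapter=parse_appointment_date,  # converts the extracted value into an AppointmentDate
    )],
) -> ToolResult:
    # your code here
    return ToolResult({...})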

Parameter Value Constraints

In cases where you need a tool's argument to fall into a specific set of choices, Parlant can help you ensure that the tool-call is parameterized according to those choices. There are two ways to go about it:

  1. Use enums when you are able to provide hard-coded choices
  2. Use choice_provider when the choices are dynamic (e.g., customer-specific)

Enum Parameters

Specify a fixed set of choices that are known ahead of time, using an Enum class.

class ProductCategory(enum.Enum):
    LAPTOPS = "laptops"
    PERIPHERALS = "peripherals"
    MONITORS = "monitors"

@tool
def get_products(
    context: ToolContext,
    category: ProductCategory,
) -> ToolResult:
    # your code here
    return ToolResult(returned_data)

Choice Provider

Dynamically offer a set of choices based on the current execution context.

async def get_last_order_ids(context: ToolContext) -> list[str]:
    return await load_last_order_ids_from_db(customer_id=context.customer_id)

@tool
def load_order(
    context: ToolContext,
    order_id: Annotated[Optional[str], ToolParameterOptions(
        choice_provider=get_last_order_ids,
    )],
) -> ToolResult:
    # your code here
    return ToolResult({...})

Secure Data Access with Tool Context

Suppose you need to build a tool that retrieves or displays data specific to different customers.

A naive approach would be to ask customers to identify themselves and use that as an access token into the right data. But this approach is highly insecure, as it relies on the LLM to identify the user. The LLM can get it wrong or, worse yet, be manipulated by malicious users.

A better and more reliable way to do this is to register your customers with Parlant and use the information available programmatically, which is contained in the ToolContext parameter of your tool.

Here’s how that would look in practice:

@tool
async def get_transactions(context: ToolContext) -> ToolResult:
    transactions = await DB.get_transactions(context.customer_id)
    return ToolResult(transactions)