The conversation modeling engine for building better, deliberate Agentic UX.

Build controlled, compliant, and consistent generative AI chat agents.



Tame the chaos of generation.

LLMs are surprisingly poor at following more than a few instructions at a time, a limitation caused by attention drift. In production, this quickly becomes a showstopper.

Parlant solves this "cognitive overload" with dynamic guideline matching: your instructions (including tool calls) are fed into the language model only when the relevant contextual conditions apply, eliminating inconsistency even in complex conversations.

Parlant's filtering system keeps track of the conversation. It understands when an instruction has already been carried out. It knows when the context has shifted and a previously applied action needs to be re-applied. It distinguishes between the different parts of a compound action and tracks them separately. It even understands when a condition depends on the agent's next intention, which it actively predicts. It's a context-management powerhouse.

And it does all of this while maintaining low response latency: extensive parallelism and perceived-performance techniques deliver snappy conversation experiences while maximizing the LLM's accuracy.

from textwrap import dedent

import parlant.sdk as p


async def start_conversation_server():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )

        sell_car_journey = await agent.create_journey(
            title="Sell car",
            conditions=[
                "The customer wants to buy a new car",
                "The customer expressed general interest in new cars",
            ],
            description=dedent("""\
                Proactively help the customer decide what new car to get.

                1. First establish rapport and clarify their situation and needs.
                2. Ask about their current car, what they like or don't like about it.
                3. Ask about budget and preferences.
                4. Once needs are clarified, recommend relevant categories
                   or specific models for consideration.
                5. Once a choice is made, ask them to confirm the order."""),
        )

        # Add journey-scoped guidelines to ensure behavior remains
        # guided and consistent even outside a journey's "happy path"
        await sell_car_journey.create_guideline(
            condition="the customer doesn't know what car they want",
            action="tell them that the current on-sale car is popular now",
            tools=[get_on_sale_car],  # Tool defined in the snippet below
        )
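
To actually launch the server, a standard asyncio entry point is all you need. Here's a minimal sketch (plain Python, nothing Parlant-specific), assuming the Server context keeps serving until the process is shut down:

import asyncio

if __name__ == "__main__":
    # Run the coroutine above; the server serves until interrupted
    asyncio.run(start_conversation_server())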

Control the interaction.

Instead of relying on the LLM to guess how you want the conversation to go, Parlant lets you dictate and enforce the exact behavior you want.

Define semantic relationships between guidelines, like prioritization, dependency, entailment, and more. This is useful when you want to ensure that certain guidelines are always followed first, that some guidelines are only triggered when specific other ones are active, or to ensure that certain instructions are mutually exclusive.

@p.tool
async def human_handoff_to_sales(context: p.ToolContext) -> p.ToolResult:
    # Notify a human operator (implementation not shown) and hand over
    await notify_sales(context.customer_id, context.session_id)

    return p.ToolResult(
        data="Session handed off to sales team",
        control={"mode": "manual"},
    )


offer_on_sale_car = await journey.create_guideline(
    condition="the customer indicates they're on a budget",
    action="offer them a car that is on sale",
    tools=[get_on_sale_cars],
)

transfer_to_sales = await journey.create_guideline(
    condition="the customer clearly stated they wish to buy a specific car",
    action="transfer them to the sales team",
    tools=[human_handoff_to_sales],
)

# When both guidelines match, the handoff takes precedence
await transfer_to_sales.prioritize_over(offer_on_sale_car)
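
The other relationship types follow the same pattern. As an illustrative sketch only (the depends_on method name is an assumption here, not confirmed SDK API; check the Parlant docs for the actual relationship methods), a dependency could ensure the on-sale offer only activates after budget has been discussed:

# Hypothetical sketch: depends_on is an assumed method name,
# illustrating the dependency relationship described above
clarify_budget = await journey.create_guideline(
    condition="the customer hasn't mentioned their budget yet",
    action="ask what price range they have in mind",
)

# The on-sale offer would only trigger once the budget
# guideline has been applied
await offer_on_sale_car.depends_on(clarify_budget)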

Sanitize your outputs.

The outputs of LLMs are adaptive and fluid, but often impossible to predict and subtly inaccurate. This makes them unfit for message generation in high-stakes, customer-facing use cases.

But service conversations are repetitive, and you can exploit this repetitiveness to eliminate even subtle output hallucinations: skip free-form generated messages altogether, or at most confine generation to specific, controlled parts of the message.

Parlant supports utterance templates. Here's how it works: the LLM first drafts a fluid message, then Parlant matches the draft against your utterance bank and renders the closest approved utterance. Templates are written in Jinja2 and are fed with both contextual and tool-driven data.

This means your customer sees a message that is approved, correct, and appropriate. Want to get closer to the true fluidity of the underlying LLM? Iteratively grow your utterance bank with time and experimentation.

@p.tool
async def get_on_sale_car(context: p.ToolContext) -> p.ToolResult:
    car_model = "Tesla Model Y"

    return p.ToolResult(
        data=car_model,  # Feed the fluid draft
        utterance_fields={"car_on_sale": car_model},  # Feed the template
    )

await agent.create_utterance(
    # Use tool-driven field data to fill in the template
    template=dedent("""\
        The {{car_on_sale}} is currently on sale!

        Want to hear more about it?"""),
)

await agent.create_utterance(
    # Access built-in contextual fields using the std. prefix
    template=dedent("""\
        Hey {{std.customer.name}}!

        What can I help you with today?"""),
)

await agent.create_utterance(
    # Use generative fields to infer details
    # from context in a controlled manner
    template="Sorry to hear about {{generative.customer_issue}}.",
)


Who Uses Parlant?

Parlant is used to deliver complex conversational agents that reliably follow your protocols in use cases such as:

Regulated financial services

Legal assistance

Brand-sensitive customer service

Healthcare communications

Government and civil services

Personal advocacy and representation



Have Questions? Let's Talk!

Whether you're exploring or already building, we're happy to chat about how Parlant can help your use case.

Contact Team


Contributing

Join the open-source movement to get conversational GenAI agents under control!

Explore