Installation

Getting Started with Parlant​

Quick Tip
Join our Discord channel for community support in getting your first agent up and running.

Parlant is an open-source context engineering framework for building scalable conversational AI agents.

It gives you clean, structured control over what your agent sees and says, when, and why, so you can build customer-facing agents whose interactions are consistent, traceable, and safe to deploy.

Here's what that means in practice:

  • Observations: The core primitive. Observations monitor the conversation and fire when a condition is met, allowing you to hook into conversational states and control exactly what enters the agent's context and when—guidelines, tools, retrieved information, journeys, and more. This keeps the agent focused and consistent even with hundreds of context elements.

  • Guidelines: Observations with actions attached. Define granular behavioral rules that tell the agent what to do when a specific condition is detected.

  • Tools: Attach external APIs and backend services to observations. Tools only enter the agent's context when their associated observation fires, eliminating false-positive tool calls.

  • Journeys: Define multi-turn Standard Operating Procedures (SOPs). These aren't rigid scripts: your agent adapts to the user's interaction patterns while keeping the bigger picture in mind.

  • Canned Responses: In high-stakes or otherwise sensitive moments, eliminate hallucinations by selectively bypassing output generation with pre-approved response templates.

  • Domain Understanding: Teach your agent domain-specific terminology and ground it in your business vocabulary. Every observation, guideline, and tool takes the glossary definitions into account.

  • Traceability: Inspect exactly which observations and guidelines fired, why, and what they produced—at every turn.

How It Works​

When your agent receives a message, Parlant dynamically assembles the right context and generates a controlled response:

Only the observations, guidelines, and tools relevant to the current conversation turn are loaded into context—keeping the model focused, consistent, and traceable even with hundreds of behavioral rules.
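Conceptually, this per-turn filtering can be pictured with the following simplified sketch. This is an illustration only, not Parlant's actual implementation: in Parlant, condition matching is performed by the model, not by exact string comparison, and the `Guideline` class and `assemble_context` function below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Guideline:
    condition: str   # when this observation should fire
    action: str      # what the agent should do when it does

# A hypothetical registry of many behavioral rules
guidelines = [
    Guideline("the customer greets you", "offer a refreshing drink"),
    Guideline("the customer asks about financing", "explain the loan options"),
    Guideline("the customer wants a test drive", "collect a preferred time"),
]

def assemble_context(fired_conditions: set[str]) -> list[Guideline]:
    # Only guidelines whose observation fired enter the model's context;
    # the rest are kept out, no matter how many rules are registered.
    return [g for g in guidelines if g.condition in fired_conditions]

# On a turn where only the greeting observation fired:
active = assemble_context({"the customer greets you"})
print([g.action for g in active])
```

Because irrelevant rules never reach the model, adding more guidelines doesn't dilute the instructions the model actually sees on a given turn.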

Installation​

Parlant is available on both GitHub and PyPI and works on multiple platforms (Windows, Mac, and Linux).

Please note that Python 3.10 or later is required for Parlant to run properly.

pip install parlant
Unstable Development Branch

If you're feeling adventurous and want to try out new features, you can install the latest development version from GitHub.

pip install git+https://github.com/emcie-co/parlant@develop

Creating Your First Agent​

Once installed, you can use the following code to spin up an initial agent. You'll flesh out its behavior later.

# main.py

import asyncio
import parlant.sdk as p

async def main():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )

asyncio.run(main())
New to async/await?

You'll notice Parlant follows the asynchronous programming paradigm with async and await. This is a powerful feature of modern Python that lets you write code that handles many tasks at once, allowing your agent to serve more concurrent requests in production with fewer compute resources.

If you're new to async programming, check out the official Python documentation for a quick introduction.
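To see why this matters, here is a generic asyncio example (not Parlant-specific) in which three simulated slow operations run concurrently instead of one after another:

```python
import asyncio
import time

async def handle_request(name: str, delay: float) -> str:
    # Simulate a slow operation (e.g., an LLM call) without blocking the event loop
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    start = time.perf_counter()
    # Run three "requests" concurrently: total time is ~0.1s, not 0.3s
    results = await asyncio.gather(
        handle_request("a", 0.1),
        handle_request("b", 0.1),
        handle_request("c", 0.1),
    )
    elapsed = time.perf_counter() - start
    assert elapsed < 0.3  # concurrent, not sequential
    return results

print(asyncio.run(main()))
```

A synchronous server handling the same three requests would take three times as long; this is the property that lets an async agent server do more with the same hardware.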

NLP Service Configuration​

The official and recommended NLP service for Parlant is Emcie, since the models are specifically optimized for Parlant's engine (guideline matching, message generation, etc.), resulting in lower costs and reduced latency compared to general-purpose model providers. Learn more about Emcie's inference platform.

To get started, set your API key and run the program:

export EMCIE_API_KEY="<YOUR_API_KEY>"
python main.py  # Run the server
Using Other LLM Providers

Parlant supports multiple LLM providers via the p.NLPServices class.

To use an alternative provider, specify it when creating the server:

async with p.Server(nlp_service=p.NLPServices.openai) as server:
    ...
Supported providers

| Provider         | Extra Package      | Environment Variable           |
|------------------|--------------------|--------------------------------|
| Emcie (default)  | Built-in           | EMCIE_API_KEY                  |
| OpenAI           | Built-in           | OPENAI_API_KEY                 |
| Anthropic        | parlant[anthropic] | ANTHROPIC_API_KEY              |
| Google Gemini    | parlant[gemini]    | GOOGLE_API_KEY                 |
| Google Vertex AI | parlant[vertex]    | Google Cloud credentials       |
| AWS Bedrock      | parlant[aws]       | AWS credentials                |
| Azure OpenAI     | parlant[azure]     | AZURE_OPENAI_API_KEY           |
| Cerebras         | parlant[cerebras]  | CEREBRAS_API_KEY               |
| Together AI      | parlant[together]  | TOGETHER_API_KEY               |
| DeepSeek         | parlant[deepseek]  | DEEPSEEK_API_KEY               |
| Ollama           | parlant[ollama]    | — (local)                      |
| LiteLLM          | parlant[litellm]   | Varies by underlying provider  |

For the full, up-to-date list of supported providers, see pyproject.toml and the p.NLPServices factory class.

For providers that require an extra package, make sure to install it with pip:

pip install "parlant[anthropic]"

You can also implement your own provider via the p.NLPService interface—see Custom NLP Models for details.

Testing Your Agent​

To test your installation, head over to http://localhost:8800 and start a new session with the agent.

(Demo: testing the agent after installation)

Controlling Your Agent​

In Parlant, you control your agent by creating observations and guidelines. An observation monitors the conversation and fires when a condition is met. A guideline is an observation with an action attached—it tells the agent what to do when that condition is detected.

Parlant loads only the relevant observations and guidelines per turn, so you can add as many as you need without worrying about context overload or degraded instruction-following.

# main.py

import asyncio
import parlant.sdk as p

async def main():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )

        ##############################
        ##   Add the following:     ##
        ##############################
        await agent.create_guideline(
            # This observation condition detects when the guideline should fire
            condition="the customer greets you",
            # This action tells the agent what to do
            action="offer a refreshing drink",
        )

asyncio.run(main())

Now re-run the program:

python main.py

Refresh http://localhost:8800, start a new session, and greet the agent. You should be offered a drink!

Using the Official React Widget​

If your frontend project is built with React, the fastest and easiest way to start is to use the official Parlant React widget to integrate with the server.

Here's a basic code example to get started:

import React from 'react';
import ParlantChatbox from 'parlant-chat-react';

function App() {
  return (
    <div>
      <h1>My Application</h1>
      <ParlantChatbox server="PARLANT_SERVER_URL" agentId="AGENT_ID" />
    </div>
  );
}

export default App;

Install the widget from npm:

npm install parlant-chat-react

For more documentation and customization, see the GitHub repo: https://github.com/emcie-co/parlant-chat-react.

Installing Client SDK(s)​

To create a custom frontend app that interacts with the Parlant server, we recommend installing our native client SDKs. We currently support Python and TypeScript (also works with JavaScript).

Python​

pip install parlant-client

TypeScript/JavaScript​

npm install parlant-client
Tip

You can review our tutorial on integrating a custom frontend here: Custom Frontend Integration.

Support for other languages is coming soon! Meanwhile, you can use the REST API directly.
