Framework Integrations
Parlant handles conversational control: deciding what the agent says, when, and why. But real-world agents need other capabilities too: knowledge retrieval, multi-step workflows, web search, and more. This page shows how to integrate external AI frameworks like LangGraph, Claude Agent SDK, Agno, and LlamaIndex through Parlant's tool and retriever interfaces.
Where Parlant Sits in Your Stack
Parlant is the layer your customers talk to. It controls the conversation: which guidelines fire, what tools run, and how the response is composed. Other frameworks plug in as tools (on-demand actions) or retrievers (passive knowledge grounding). Parlant decides when they run and how their results shape the agent's response.
The Integration Pattern
External frameworks connect to Parlant through two interfaces: tools and retrievers. Here's a quick overview of each (see the full docs for details).
Tools
Tools are on-demand actions that run when a relevant guideline fires. Wrap any external framework call inside a @p.tool function:
import parlant.sdk as p

@p.tool
async def my_external_tool(context: p.ToolContext, query: str) -> p.ToolResult:
    """Description of what this tool does."""
    result = await some_external_framework.run(query)
    return p.ToolResult(data=result)
Attach the tool to a guideline so Parlant knows when to call it:
await agent.create_guideline(
    condition="The customer asks about ...",
    action="Look up the answer using ...",
    tools=[my_external_tool],
)
Retrievers
Retrievers passively ground the agent's knowledge on every turn. They run in parallel with guideline matching, adding context without extra latency:
async def my_external_retriever(context: p.RetrieverContext) -> p.RetrieverResult:
    if last_message := context.interaction.last_customer_message:
        results = await some_external_framework.search(last_message.content)
        return p.RetrieverResult(data=results)
    return p.RetrieverResult(data=None)

await agent.attach_retriever(my_external_retriever)
Use tools for on-demand actions triggered by specific situations (e.g., "when the customer asks about X, run Y"). Use retrievers for passive knowledge grounding that should inform every response (e.g., FAQ lookups, product catalogs).
For full details, see the Tools and Retrievers documentation.
LangGraph
LangGraph models stateful agents as directed graphs, where each node performs a step and edges define transitions. It excels at multi-step backend workflows like retrieve-rerank-synthesize RAG pipelines, data processing chains, and multi-service orchestration.
As a Parlant Tool
Here's a LangGraph RAG pipeline (retrieve, rerank, synthesize) wrapped as a Parlant tool and attached to a guideline about company policy questions:
import parlant.sdk as p
from langgraph.graph import StateGraph, START
from typing import TypedDict

# -- LangGraph RAG pipeline --
class RAGState(TypedDict):
    query: str
    documents: list
    answer: str

def retrieve(state: RAGState) -> RAGState:
    docs = vector_store.search(state["query"], k=20)
    return {"documents": docs}

def rerank(state: RAGState) -> RAGState:
    reranked = reranker.rerank(state["query"], state["documents"])
    return {"documents": reranked[:5]}

def synthesize(state: RAGState) -> RAGState:
    answer = llm.generate(state["query"], state["documents"])
    return {"answer": answer}

rag_pipeline = (
    StateGraph(RAGState)
    .add_node("retrieve", retrieve)
    .add_node("rerank", rerank)
    .add_node("synthesize", synthesize)
    .add_edge(START, "retrieve")
    .add_edge("retrieve", "rerank")
    .add_edge("rerank", "synthesize")
    .compile()
)

# -- Parlant tool wrapping the pipeline --
@p.tool
async def search_company_policies(
    context: p.ToolContext,
    query: str,
) -> p.ToolResult:
    """Search company policy documentation to answer policy-related questions."""
    result = await rag_pipeline.ainvoke({"query": query})
    return p.ToolResult(
        data=result["answer"],
        metadata={
            "sources": [
                {"title": doc.metadata["title"], "url": doc.metadata["url"]}
                for doc in result["documents"]
            ]
        },
    )
The LangGraph pipeline handles the multi-step retrieval logic internally. Parlant doesn't need to know about the individual nodes. It just calls the tool when the guideline fires and uses the result to compose the response. The metadata field passes source URLs through to your frontend without cluttering the agent's context.
# Attach to a guideline
await agent.create_guideline(
    condition="The customer asks about company policies, terms, or coverage",
    action="Search the policy documentation and provide an accurate answer",
    tools=[search_company_policies],
)
Claude Agent SDK
Claude Agent SDK gives you the same autonomous agent loop that powers Claude Code: built-in tools for file operations, web search, and shell commands, plus the ability to spawn subagents. It excels at multi-step reasoning tasks where the agent needs to autonomously explore, analyze, and synthesize information.
As a Parlant Tool
Here's a Claude Agent SDK agent that performs deep research on financial news for stock recommendations, wrapped as a Parlant tool:
import parlant.sdk as p
from claude_agent_sdk import query, ClaudeAgentOptions

# -- Parlant tool wrapping a Claude Agent SDK agent --
@p.tool
async def research_stock(
    context: p.ToolContext,
    ticker: str,
) -> p.ToolResult:
    """Research recent financial news and analyst sentiment for a given stock."""
    messages = []
    async for message in query(
        prompt=(
            f"Research the stock {ticker}. Search the web for recent financial news, "
            "earnings reports, and analyst ratings. Summarize the bull and bear cases, "
            "and highlight any significant recent developments."
        ),
        options=ClaudeAgentOptions(
            allowed_tools=["WebSearch", "WebFetch"],
            max_turns=15,
        ),
    ):
        messages.append(message)

    # Extract the final text response
    summary = next(
        (m.content for m in reversed(messages) if m.type == "text"),
        "No summary generated.",
    )
    return p.ToolResult(data=summary)
The Claude Agent SDK agent autonomously searches the web, follows links, and synthesizes findings across multiple sources, while Parlant controls when this research runs and how the results are presented in conversation.
# Attach to a guideline
await agent.create_guideline(
    condition="The customer asks to research a specific stock",
    action="Research the stock and present a balanced summary",
    tools=[research_stock],
)
Agno
Agno provides self-contained agents with built-in tool integrations: web search, code execution, file processing, and more. It's a fast way to add capabilities like live web search or code interpretation to your Parlant agent.
As a Parlant Tool
Here's an Agno agent with DuckDuckGo web search, wrapped as a Parlant tool for product comparison research:
import asyncio

import parlant.sdk as p
from agno.agent import Agent as AgnoAgent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

# -- Agno web research agent --
research_agent = AgnoAgent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],
    instructions=[
        "You are a product research assistant.",
        "Search the web for current pricing and feature comparisons.",
        "Return structured, factual summaries with sources.",
    ],
    markdown=False,
)

# -- Parlant tool wrapping the Agno agent --
@p.tool
async def research_product_comparison(
    context: p.ToolContext,
    query: str,
) -> p.ToolResult:
    """Research and compare products using live web search."""
    # Run the synchronous Agno agent in a worker thread so it
    # doesn't block Parlant's event loop
    response = await asyncio.to_thread(research_agent.run, query)
    return p.ToolResult(data=response.content)
The Agno agent handles web search orchestration (issuing queries, parsing results, synthesizing findings), while Parlant controls when it runs and how the results are presented to the customer. This keeps live web data out of your agent's context until it's actually relevant.
# Attach to a guideline
await agent.create_guideline(
    condition="The customer asks to compare products or wants current market information",
    action="Research the comparison using live web data and present a clear summary",
    tools=[research_product_comparison],
)
LlamaIndex
LlamaIndex is a data framework for indexing, retrieval, and querying over documents. It provides high-level abstractions for vector stores, query engines, and document processing, making it straightforward to build RAG systems over your own data.
As a Parlant Tool
Wrap a LlamaIndex query engine as a Parlant tool when you want on-demand document lookup with source attribution:
import parlant.sdk as p
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# -- LlamaIndex setup --
documents = SimpleDirectoryReader("./knowledge_base").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)

# -- Parlant tool wrapping the query engine --
@p.tool
async def query_knowledge_base(
    context: p.ToolContext,
    question: str,
) -> p.ToolResult:
    """Query the knowledge base to answer customer questions with source references."""
    response = await query_engine.aquery(question)
    sources = [
        {"file": node.metadata.get("file_name", "unknown"), "score": node.score}
        for node in response.source_nodes
    ]
    return p.ToolResult(
        data=str(response),
        metadata={"sources": sources},
    )
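As with the other frameworks, attach the tool to a guideline so Parlant knows when to query the index (the condition and action wording below are illustrative):

# Attach to a guideline
await agent.create_guideline(
    condition="The customer asks a product-specific question",
    action="Look up the answer in the knowledge base",
    tools=[query_knowledge_base],
)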
As a Parlant Retriever
Alternatively, use a LlamaIndex retriever for passive grounding. Relevant documents are fetched on every turn and fed into the agent's context automatically:
import parlant.sdk as p
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# -- LlamaIndex setup --
documents = SimpleDirectoryReader("./knowledge_base").load_data()
index = VectorStoreIndex.from_documents(documents)
retriever = index.as_retriever(similarity_top_k=5)

# -- Parlant retriever wrapping LlamaIndex --
async def knowledge_base_retriever(context: p.RetrieverContext) -> p.RetrieverResult:
    if last_message := context.interaction.last_customer_message:
        nodes = await retriever.aretrieve(last_message.content)
        # Filter by relevance score
        relevant = [
            {"content": node.text, "source": node.metadata.get("file_name", "unknown")}
            for node in nodes
            if node.score and node.score > 0.7
        ]
        return p.RetrieverResult(data=relevant if relevant else None)
    return p.RetrieverResult(data=None)

# Attach at the agent level
await agent.attach_retriever(knowledge_base_retriever)
The tool pattern gives you on-demand lookup with source attribution passed through metadata; it's ideal when you want the agent to explicitly search for answers. The retriever pattern gives you passive grounding: relevant documents are fetched on every turn, in parallel with guideline matching, keeping the agent informed without adding latency.
If the customer is asking for specific information (e.g., "What's your return policy?"), use a tool attached to a guideline. If the agent should always be aware of relevant context (e.g., a product catalog or FAQ), use a retriever.
Combining Frameworks
These integrations are composable: a single Parlant agent can use tools from multiple frameworks alongside retrievers. Each framework handles what it does best while Parlant orchestrates when and how their results reach the conversation:
import parlant.sdk as p
import asyncio

async def main() -> None:
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Support Agent",
            description="A helpful customer support agent.",
        )

        # LangGraph for complex RAG workflows
        await agent.create_guideline(
            condition="The customer asks about company policies or terms",
            action="Search the policy docs and provide an accurate answer",
            tools=[search_company_policies],
        )

        # Claude Agent SDK for deep financial research
        await agent.create_guideline(
            condition="The customer asks about a specific stock or investment opportunity",
            action="Research the stock and present a balanced summary",
            tools=[research_stock],
        )

        # Agno for live web research
        await agent.create_guideline(
            condition="The customer asks to compare products or wants market info",
            action="Research using live web data and present findings",
            tools=[research_product_comparison],
        )

        # LlamaIndex for on-demand knowledge base queries
        await agent.create_guideline(
            condition="The customer asks a product-specific question",
            action="Look up the answer in the knowledge base",
            tools=[query_knowledge_base],
        )

        # LlamaIndex retriever for passive grounding
        await agent.attach_retriever(knowledge_base_retriever)

if __name__ == "__main__":
    asyncio.run(main())
Next Steps
- Learn more about Tools and Retrievers
- Walk through a complete agent build in the Healthcare Example
- Deploy your agent to production