This guide walks you through the three pillars of Meibel — Context, Agents, and Confidence — in a single end-to-end workflow. By the end, you will have parsed a document, built a knowledge base, created an agent, and chatted with it.
Next, move to the Agents pillar. An agent combines a system prompt with one or more datasources to answer questions grounded in your content.
```python
from meibel.models import CreateAgentDefinitionRequest, PublishAgentDefinitionRequest

# Create the agent
agent = client.agents.create_agent(
    body=CreateAgentDefinitionRequest(
        display_name="Product Assistant",
        description="Answers questions about product documentation",
        instructions=(
            "You are a helpful product assistant. Answer questions using only "
            "the provided knowledge base. If you don't know the answer, say so."
        ),
    )
)
agent_id = agent.id
print(f"Created agent: {agent_id}")

# Publish it so it can accept chat sessions
client.agents.publish_agent(
    agent_id=agent_id,
    body=PublishAgentDefinitionRequest(commit_message="Initial release"),
)
print("Agent published")
```
Start a session and send a message. Each session maintains its own conversation history.
```python
from meibel.models import ChatMessageRequest

# Create a session
session = client.agents.create_session(agent_id=agent_id)
session_id = session.session_id
print(f"Session: {session_id}")

# Send a message
response = client.sessions.send_chat_message(
    session_id=session_id,
    body=ChatMessageRequest(user_message="What does the product do?"),
)
print(response.message)
```
For a real-time experience, stream the agent’s reply token-by-token. This is ideal for chat interfaces where you want to show output as it is generated.
```python
stream = client.sessions.send_chat_message_stream(
    session_id=session_id,
    body=ChatMessageRequest(user_message="Summarize the key features"),
)
for event in stream:
    print(event.delta, end="", flush=True)
print()  # newline after stream completes
```
The Confidence pillar is built into every response. When an agent answers a question, the response includes confidence metadata that tells you how well-grounded the answer is in your datasource content.
```python
response = client.sessions.send_chat_message(
    session_id=session_id,
    body=ChatMessageRequest(user_message="What is the pricing model?"),
)
print(f"Answer: {response.message}")
print(f"Confidence: {response.confidence.score}")
print(f"Sources: {len(response.confidence.citations)} citation(s)")
for citation in response.confidence.citations:
    print(f"  - {citation.source}: {citation.text[:80]}...")
```
Confidence scores let you build guardrails — for example, flagging low-confidence answers for human review or requiring a minimum score before showing a response to end users.
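One such guardrail can be sketched as a small routing function. This is an illustrative pattern, not part of the Meibel SDK: the 0.7 threshold and the `needs_review` status label are assumptions you would tune for your own application.

```python
def route_response(message: str, score: float, threshold: float = 0.7) -> dict:
    """Decide how the UI should treat an answer based on its confidence score.

    Above the threshold, the answer is shown as-is; below it, the original
    answer is held back for human review. Threshold and labels are
    illustrative choices, not Meibel API values.
    """
    if score >= threshold:
        return {"status": "ok", "message": message}
    return {
        "status": "needs_review",
        "message": "This answer has low confidence and is pending review.",
        "original": message,
    }
```

Wired into the example above, you would call `route_response(response.message, response.confidence.score)` and render only `status == "ok"` results directly to end users.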