Creating Your First Agent¶
This guide will walk you through creating your first Sikka Agent. We'll build a simple chat agent that can respond to queries and use a search tool.
Basic Chat Agent¶
Let's start with a simple chat agent that can respond to text queries:
from sikkaagent.agents.chat_agent import ChatAgent
from sikkaagent.models.model_configure import ModelConfigure
from sikkaagent.utils.enums import ModelPlatformType
# Initialize the model with Ollama
model = ModelConfigure(
    model="llama3.1:8b",  # Use a model you have pulled locally
    model_platform=ModelPlatformType.OLLAMA,
    url="http://localhost:11434/v1"  # Ensure Ollama server is running
)
# Create a basic chat agent
agent = ChatAgent(model=model)
# Chat with the agent
response = agent.step("What is artificial intelligence?")
print(response.msgs[0].content)
# Continue the conversation with context from previous messages
response = agent.step("Give me some examples of AI applications")
print(response.msgs[0].content)
This example shows how to create a simple chat agent using a local Ollama model. The code:
- Configures a model using ModelConfigure, specifying:
  - Model name (llama3.1:8b)
  - Model platform (ModelPlatformType.OLLAMA)
  - Ollama server URL (http://localhost:11434/v1)
- Creates a ChatAgent instance with this model
- Uses the step() method to send messages and get responses
Reference: This example uses the ChatAgent class from sikkaagent/agents/chat_agent.py, the model configuration from sikkaagent/models/model_configure.py, and the ModelPlatformType.OLLAMA enum from sikkaagent/utils/enums.py.
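The same pattern works for any model tag you have pulled into Ollama. As a quick sketch (mistral:7b is just an illustrative tag, not one this guide requires; substitute any entry from ollama list):

# Any locally pulled Ollama model tag can be used the same way
alt_model = ModelConfigure(
    model="mistral:7b",  # Illustrative tag; use a model you have actually pulled
    model_platform=ModelPlatformType.OLLAMA,
    url="http://localhost:11434/v1"
)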
Adding Memory¶
Let's enhance our agent with memory capabilities:
from sikkaagent.agents.chat_agent import ChatAgent
from sikkaagent.models.model_configure import ModelConfigure
from sikkaagent.utils.enums import ModelPlatformType
from sikkaagent.storages.in_memory import InMemoryStorage
# Initialize model using Ollama
model = ModelConfigure(
    model="llama3.1:8b",  # Use a model you have pulled locally
    model_platform=ModelPlatformType.OLLAMA,
    url="http://localhost:11434/v1"  # Ensure Ollama server is running
)
# Create a chat agent with memory
agent = ChatAgent(
    model=model,
    memory=InMemoryStorage()
)
# The agent will now remember previous interactions
response = agent.step("My name is Alice")
print(response.msgs[0].content)
response = agent.step("What did I tell you my name was?")
print(response.msgs[0].content) # Should recall the name "Alice"
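Because each step() call sees the stored history, you can wrap the agent in a simple interactive loop. This is a minimal sketch that uses only the step() API shown above:

# Minimal chat loop; type "quit" to exit
while True:
    user_input = input("You: ")
    if user_input.strip().lower() == "quit":
        break
    reply = agent.step(user_input)
    print("Agent:", reply.msgs[0].content)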
Adding Tools¶
Now let's add a search tool to allow our agent to look up information:
from sikkaagent.agents.chat_agent import ChatAgent
from sikkaagent.models.model_configure import ModelConfigure
from sikkaagent.utils.enums import ModelPlatformType
from sikkaagent.storages.in_memory import InMemoryStorage
from sikkaagent.tools.search_toolkit import SearchToolkit
# Initialize components
model = ModelConfigure(
    model="llama3.1:8b",  # Use a model you have pulled locally
    model_platform=ModelPlatformType.OLLAMA,
    url="http://localhost:11434/v1"  # Ensure Ollama server is running
)
# Create agent with memory and tools
agent = ChatAgent(
    model=model,
    memory=InMemoryStorage(),
    tools=[*SearchToolkit().get_tools()]
)
# The agent can now use the search tool to answer questions
response = agent.step("What are the latest advancements in fusion energy?")
print(response.msgs[0].content)
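Since tools is an ordinary list, you can combine toolkits or add your own entries. Whether plain Python callables are accepted as tools is an assumption here, not something this guide establishes, so verify it against the Tools documentation before relying on it:

# Hypothetical custom tool; assumes ChatAgent accepts plain callables
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

agent = ChatAgent(
    model=model,
    memory=InMemoryStorage(),
    tools=[*SearchToolkit().get_tools(), word_count]  # Assumption: callables allowed
)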
Customizing Agent Behavior¶
You can customize the agent's behavior with different parameters:
agent = ChatAgent(
    model=ModelConfigure(
        model="llama3.1:8b",  # Use a model you have pulled locally
        model_platform=ModelPlatformType.OLLAMA,
        url="http://localhost:11434/v1",  # Ensure Ollama server is running
        config={
            "temperature": 0.7,  # Controls randomness (0.0 = deterministic, 1.0 = creative)
            "max_tokens": 1000   # Maximum length of responses
        }
    ),
    memory=InMemoryStorage(),
    tools=[*SearchToolkit().get_tools()],
    system_prompt="You are a helpful AI assistant specializing in scientific research. Provide detailed, accurate information with citations when possible."
)
# Ask a question; the system prompt and config shape the response
response = agent.step("Explain quantum computing in simple terms")
print(response.msgs[0].content)
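To see what temperature does in practice, you can build two agents that differ only in that setting. A short sketch reusing the configuration pattern above:

# Compare a deterministic agent with a more creative one
for temp in (0.0, 1.0):
    probe = ChatAgent(
        model=ModelConfigure(
            model="llama3.1:8b",
            model_platform=ModelPlatformType.OLLAMA,
            url="http://localhost:11434/v1",
            config={"temperature": temp, "max_tokens": 200}
        )
    )
    reply = probe.step("Suggest a name for a science podcast")
    print(f"temperature={temp}: {reply.msgs[0].content}")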
Creating a Task Agent¶
For specialized tasks, you can use task agents such as TaskSpecifyAgent:
from sikkaagent.agents.task_agents import TaskSpecifyAgent
from sikkaagent.models.model_configure import ModelConfigure
from sikkaagent.utils.enums import ModelPlatformType
# Initialize model
model = ModelConfigure(
    model="llama3.1:8b",  # Use a model you have pulled locally
    model_platform=ModelPlatformType.OLLAMA,
    url="http://localhost:11434/v1"  # Ensure Ollama server is running
)
# Define a task
task_description = """
Analyze the following text and extract the main topics,
key points, and any action items.
"""
# Create a task agent
task_agent = TaskSpecifyAgent(
    model=model,
    task_specify_prompt=task_description
)
# Execute the task
text_to_analyze = """
In our meeting on April 5, we discussed the Q2 roadmap.
The team agreed to prioritize the following features:
1. User authentication improvements
2. Performance optimization for mobile devices
3. New reporting dashboard
John will lead the authentication work, Sarah will handle
mobile optimization, and Alex will design the new dashboard.
All features should be completed by June 15.
"""
result = task_agent.step(text_to_analyze)
print(result.msgs[0].content)
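Once configured, the same task agent can be reused across documents; each call to step() runs the task prompt against a new input:

# Reuse one task agent across several inputs
meeting_notes = [text_to_analyze]  # Extend this list with further documents
for note in meeting_notes:
    result = task_agent.step(note)
    print(result.msgs[0].content)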
Next Steps¶
Now that you've created your first agent, you can:
- Explore the Core Modules to understand the framework's components
- Check out the Cookbooks for more complex examples
- Learn about Workflows for multi-agent setups
- Dive into the Tools documentation to add more capabilities
Remember to set up your environment variables with necessary API keys as described in the Configuration guide.
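For example, if a tool's backend reads its key from an environment variable, you can set it in your shell or at the top of your script before constructing the toolkit. The variable name below is hypothetical; the Configuration guide lists the actual keys:

import os

# Hypothetical key name; substitute the variable your search backend expects
os.environ["SEARCH_API_KEY"] = "your-api-key"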