Core Modules

SikkaAgent is built on a modular architecture where each component serves a specific purpose in the agent framework.

Overview

The core modules of SikkaAgent work together to create a flexible, powerful platform for building AI agent systems:

Module     | Description                                        | Key Features
-----------|----------------------------------------------------|---------------------------------------------------------
Agents     | Agent implementations with different capabilities  | Conversation, task execution, knowledge integration
Models     | Interface to various AI models                     | Unified model API, provider abstraction
Memories   | Contextual memory systems                          | Conversation history, prioritization strategies
Tools      | Specialized capabilities for agents                | Web browsing, search, file manipulation, media analysis
Retrievers | Information retrieval components                   | Vector search, semantic querying
Storages   | Data persistence solutions                         | Memory storage, vector databases
Workflows  | Multi-agent orchestration                          | DAG-based pipelines, role-playing simulations
Prompts    | Prompt management and templating                   | System prompts, dynamic templates
Knowledge  | Knowledge base management                          | Create, manage, and query knowledge bases
Documents  | Document processing                                | Document processing and handling

Module Relationships

The core modules interact in a layered architecture:

┌─────────────────────────────────────┐
│              Agents                 │
└───────────────┬─────────────────────┘
┌───────────────┼─────────────────────┐
│   Models      │       Workflows     │
└───────────────┼─────────────────────┘
┌───────────────┼─────────────────────┐
│  Memories     │     Retrievers      │
└───────────────┼─────────────────────┘
┌───────────────┼─────────────────────┐
│   Tools       │      Storages       │
└───────────────┼─────────────────────┘
┌───────────────┼─────────────────────┐
│   Knowledge   │      Documents      │
└───────────────┼─────────────────────┘
┌───────────────┴─────────────────────┐
│             Prompts                 │
└─────────────────────────────────────┘
  • Agents combine various modules to provide complete conversational or task-focused interfaces
  • Models power the reasoning capabilities of agents through LLM integration
  • Memories maintain context across interactions using various storage backends
  • Tools extend agent capabilities with specialized functions
  • Retrievers enable intelligent information access from various sources
  • Storages handle data persistence and retrieval for various components
  • Workflows orchestrate collaboration between multiple agents
  • Prompts underpin everything by structuring how agents communicate with models
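
In its smallest form, this stack reduces to a prompt, a model, and an agent on top of both. The sketch below is illustrative only: it reuses the ModelConfigure, ChatAgent, and step() interfaces shown in the Integration Patterns examples that follow, with a hypothetical system prompt and question.

from sikkaagent.agents import ChatAgent
from sikkaagent.models import ModelConfigure
from sikkaagent.utils.enums import ModelPlatformType

# Model backend (same configuration used in the examples below)
model = ModelConfigure(
    model="llama3.1:8b",
    model_platform=ModelPlatformType.OLLAMA,
    url="http://localhost:11434/v1"
)

# An agent layered on the model, steered by a system prompt
agent = ChatAgent(model=model, system_prompt="You are a concise assistant.")
response = agent.step("Summarize what SikkaAgent's core modules do.")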

Integration Patterns

SikkaAgent modules are designed to be used together in various combinations:

Agent + Memory + Tools

from sikkaagent.agents import ChatAgent
from sikkaagent.storages import InMemoryStorage
from sikkaagent.tools import SearchToolkit
from sikkaagent.models import ModelConfigure
from sikkaagent.utils.enums import ModelPlatformType

# Initialize model
model = ModelConfigure(
    model="llama3.1:8b",
    model_platform=ModelPlatformType.OLLAMA,
    url="http://localhost:11434/v1"
)

# Create agent with memory and tools
agent = ChatAgent(
    model=model,
    memory=InMemoryStorage(),
    tools=[*SearchToolkit().get_tools()]
)

# Interact with the agent
response = agent.step("What were the major AI developments in 2023?")
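
Because the memory backend and the toolkit are plain constructor arguments, either can be swapped independently: a different storage class changes how conversation history is kept, and appending another toolkit's get_tools() output to the tools list broadens what the agent can do, without touching the rest of the setup.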

Retriever + Knowledge + Agent

from sikkaagent.agents import ChatAgent
from sikkaagent.knowledge import AgentKnowledge
from sikkaagent.retrievers.sentence_transformers_embeddings import SentenceTransformerEmbeddings
from sikkaagent.storages.qdrant import Qdrant

# Create embedder and storage
embedder = SentenceTransformerEmbeddings()
vector_db = Qdrant(collection="knowledge_base", embedder=embedder)

# Create knowledge with vector database
knowledge = AgentKnowledge(vector_db=vector_db)

# Add knowledge to an agent (reuses the model configured in the previous example)
agent = ChatAgent(
    model=model,
    knowledge=knowledge,
    add_references=True
)
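
With add_references=True, the agent can ground its answers in passages retrieved from the knowledge base. Querying works exactly as in the previous example; the step() call below simply reuses that interface, and the question text is a placeholder.

# Ask a question that should be answered from the knowledge base
response = agent.step("Summarize the key points stored in the knowledge base.")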

Workflow + Multiple Agents

from sikkaagent.workflows.dag import StateGraph, State, START, END
from sikkaagent.agents import ChatAgent

# Create specialized agents (reuses the model configured in the first example)
research_agent = ChatAgent(model=model, system_prompt="You research facts...")
writing_agent = ChatAgent(model=model, system_prompt="You write content...")

# Create workflow
graph = StateGraph(State)
graph.add_node("research", research_agent)
graph.add_node("writing", writing_agent)
graph.add_edge(START, "research")
graph.add_edge("research", "writing")
graph.add_edge("writing", END)

# Run workflow
workflow = graph.compile()
result = workflow.invoke({"query": "Write about quantum computing"})
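
The edges define a strict order, START to research to writing to END, so the research agent's output flows into the writing agent before the workflow terminates.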