Core Responsibilities
The server handles a wide range of critical functions:

Authentication
Manages user sessions and JWT-based authentication, and supports multi-user environments.
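As a rough illustration of the JWT side, the sketch below signs and verifies an HS256 token using Node's built-in crypto module. The function names (`signJwtHS256`, `verifyJwtHS256`) are hypothetical; the real server may rely on a JWT library rather than hand-rolled helpers.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign a claims object as a compact HS256 JWT: base64url(header).base64url(payload).base64url(sig)
function signJwtHS256(claims: Record<string, unknown>, secret: string): string {
  const enc = (obj: unknown) => Buffer.from(JSON.stringify(obj)).toString("base64url");
  const body = `${enc({ alg: "HS256", typ: "JWT" })}.${enc(claims)}`;
  const sig = createHmac("sha256", secret).update(body).digest("base64url");
  return `${body}.${sig}`;
}

// Return the decoded claims on success, or null if the token is malformed
// or its signature does not match the expected HMAC.
function verifyJwtHS256(token: string, secret: string): Record<string, unknown> | null {
  const parts = token.split(".");
  if (parts.length !== 3) return null;
  const [header, payload, signature] = parts;
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // Constant-time comparison avoids leaking signature bytes via timing.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```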
LLM Orchestration
Provides a unified interface for interacting with various LLM providers (OpenAI, Anthropic, Gemini, etc.).
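A unified interface of this kind typically means each vendor SDK is wrapped in an adapter behind one shared contract. The interface and the `EchoProvider` stub below are illustrative, not the project's actual API.

```typescript
// Shared message shape across providers.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// One contract every provider adapter (OpenAI, Anthropic, Gemini, ...) implements.
interface LLMProvider {
  readonly name: string;
  chat(messages: ChatMessage[]): Promise<string>;
}

// A stub standing in for a real vendor adapter.
class EchoProvider implements LLMProvider {
  readonly name = "echo";
  async chat(messages: ChatMessage[]): Promise<string> {
    const last = messages[messages.length - 1];
    return `echo: ${last?.content ?? ""}`;
  }
}

// Call sites route by provider name and never touch vendor-specific SDKs.
const providers = new Map<string, LLMProvider>([["echo", new EchoProvider()]]);
```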
RAG Pipeline
Handles document ingestion, embedding generation, and semantic search for Retrieval-Augmented Generation.
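The retrieval step of such a pipeline can be sketched as ranking stored chunks by cosine similarity to a query embedding. Embedding generation itself would be delegated to a provider; the vectors and function names here are illustrative.

```typescript
// Cosine similarity between two equal-length vectors; the `|| 1` guards
// against division by zero for all-zero vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

interface Chunk {
  text: string;
  embedding: number[];
}

// Return the k chunks most similar to the query embedding.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

The top-k chunks are then prepended to the prompt as context for the LLM.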
Workspace Management
Organizes chats, documents, and settings into logical, isolated workspaces.
Real-time Events
Supports WebSocket and Server-Sent Events (SSE) for streaming AI responses and real-time updates.
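For the SSE path, each streamed token is written as a `data:` line followed by a blank line on a `text/event-stream` response. The helper names below are illustrative, and the `[DONE]` sentinel is a common convention rather than something the source specifies.

```typescript
// Format one token as a single SSE event frame.
function sseFrame(token: string): string {
  return `data: ${JSON.stringify({ token })}\n\n`;
}

// Yield one frame per generated token, then a conventional end-of-stream sentinel.
function* streamTokens(tokens: string[]): Generator<string> {
  for (const t of tokens) yield sseFrame(t);
  yield "data: [DONE]\n\n";
}
```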
Protocol Server
Acts as a bridge for the Model Context Protocol (MCP), enabling external tool integration.
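MCP messages are JSON-RPC 2.0; a bridge like this builds requests such as `tools/call` on behalf of the model. The sketch below simplifies the parameter shapes and should be read as an assumption about the wire format, not this server's implementation.

```typescript
// Minimal JSON-RPC 2.0 request envelope.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

let nextId = 0;

// Build a `tools/call` request asking an external MCP server to run a tool.
function buildToolCall(tool: string, args: Record<string, unknown>): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id: ++nextId,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}
```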
Project Structure
The project follows a clean, modular architecture designed for scalability and maintainability.

Getting Started
To dive deeper into the server, check out the following resources:

Architecture
Understand the data flow and technical stack.
Components
Detailed breakdown of key services and providers.