Enterprise memory and knowledge management platform. AI-native, fully independent, owned by you. Seamless integration, uncompromising security.
Lore is a long-term memory platform designed to be the central repository for critical knowledge, architecture, and organizational memory. It bridges the gap between human knowledge management (Confluence-style wiki) and AI platform integration via MCP (Model Context Protocol) standard servers.
Unlike ephemeral AI context windows, Lore provides persistent, searchable, ranked memory that multiple AI agents can reliably store information in and retrieve from. It's fully independent: you own it, you control it, and it works seamlessly with any AI platform.
How information enters Lore, how it is organized, and how it is retrieved
Web-based wiki for human knowledge management + MCP server for AI agents. Both access the same unified memory store.
Wiki items are ranked as a higher-accuracy source of truth. Semantic search with confidence scoring ensures the most relevant memories are retrieved first.
Track who wrote each memory. Classify data as PHI, PII, sensitive, or secret. Fine-grained access control per classification.
OIDC SSO (Authentik), OAuth, API bearer tokens. Multi-tenant isolation. Your data, fully encrypted, under your control.
Standard MCP server for seamless AI integration. Use with Claude, other AI platforms, or custom agents without proprietary APIs.
Container-native (ARM64). Embedded models included. Scale to external embeddings or LLM for large deployments.
Memory enters via human input (wiki), AI platforms (MCP), or APIs. Content is embedded into vector space using containerized embedding models (with the option to scale out to external services).
Dual storage: vector store for semantic search + document store for full content. Metadata indexed for author, classification, tags, timestamps. Multi-tenant isolation enforced.
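As a sketch of what the dual-store record might look like, here is a hypothetical memory record carrying the indexed metadata described above. The field names are illustrative assumptions, not Lore's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record shape -- field names are illustrative,
# not Lore's actual schema.
@dataclass
class MemoryRecord:
    id: str
    content: str                # full text, kept in the document store
    embedding: list[float]      # vector, kept in the vector store
    author: str                 # provenance: who wrote this memory
    classification: str         # e.g. "pii", "phi", "sensitive", "secret"
    tags: list[str] = field(default_factory=list)
    tenant_id: str = "default"  # enforces multi-tenant isolation
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = MemoryRecord(
    id="mem-001",
    content="We authenticate users via Authentik OIDC.",
    embedding=[0.12, -0.03, 0.88],
    author="alice",
    classification="sensitive",
    tags=["auth", "architecture"],
)
print(record.tenant_id)
```

Keeping the embedding and the full content in separate stores lets the vector database stay small and fast while the document store holds everything needed to reconstruct a result.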
Semantic search with ranking (wiki items prioritized). Fine-grained access control based on data classification. MCP server for AI, web UI for humans, API for integrations.
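The ranking step above (semantic similarity, with wiki items prioritized) can be sketched as cosine similarity multiplied by a source-type boost. The boost values and function names here are assumptions for illustration, not Lore's actual scoring:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical boost: wiki items outrank plain memories
# at comparable similarity.
SOURCE_BOOST = {"wiki": 1.2, "memory": 1.0}

def rank(query_vec: list[float], items: list[tuple]) -> list[str]:
    """items: (id, source_type, embedding) tuples; returns ids best-first."""
    scored = [
        (cosine(query_vec, vec) * SOURCE_BOOST[src], item_id)
        for item_id, src, vec in items
    ]
    return [item_id for _, item_id in sorted(scored, reverse=True)]

items = [
    ("memory-7", "memory", [1.0, 0.0]),
    ("wiki-3", "wiki", [0.9, 0.1]),
]
print(rank([1.0, 0.0], items))  # wiki item wins despite lower raw similarity
```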
Each memory can be tagged with a classification level that controls who can access it:
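A minimal sketch of classification-based access control, assuming a simple ordered clearance model. The ordering of the levels is an assumption for illustration; the class names follow the PHI/PII/sensitive/secret set mentioned earlier:

```python
from enum import IntEnum

# Illustrative ordering -- the relative ranking of these levels
# is an assumption, not Lore's actual policy.
class Classification(IntEnum):
    PUBLIC = 0
    SENSITIVE = 1
    PII = 2
    PHI = 3
    SECRET = 4

def can_access(clearance: Classification, data: Classification) -> bool:
    """A caller may read any memory at or below its clearance level."""
    return clearance >= data

print(can_access(Classification.PII, Classification.SENSITIVE))   # True
print(can_access(Classification.SENSITIVE, Classification.SECRET))  # False
```

In practice, fine-grained control is usually richer than a single ladder (per-tenant rules, per-agent grants), but a total ordering is the simplest mental model.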
Lore runs entirely in Docker containers, optimized for ARM64 architecture:
# Core Services
lore-api: GraphQL/REST API + MCP Server
lore-embedder: Embedding model (containerized)
lore-web: Web UI (Confluence-style wiki)
lore-vector-db: Vector search (Qdrant/Weaviate)
lore-postgres: Document + metadata store
lore-auth: OIDC/OAuth integration
# Optional External Services
external-embedder: For large-scale deployments
external-llm: For advanced semantic search
Central source of truth for system design, decision records, and runbooks. AI agents retrieve context automatically.
"How do we authenticate users?" retrieves both wiki articles and related memories, ranked by accuracy and relevance.
Multiple AI agents share knowledge. One agent's findings become knowledge for another. No duplicate learning.
Safely store database passwords, API keys, and sensitive configuration. Only authorized AI agents can retrieve them.
Track who wrote what, when. Data classifications ensure compliance with HIPAA, GDPR, and internal policies.
Separate knowledge bases per organization. OIDC SSO for unified authentication. Complete data isolation.
Lore includes a full MCP (Model Context Protocol) server, allowing AI platforms to interact with your memory as seamlessly as any other resource:
# AI Platform can call MCP tools:
- lore:search(query) → semantic search, returns ranked results
- lore:store(memory, tags, classification) → save new memory
- lore:retrieve(id) → fetch full memory with metadata
- lore:query_by_author(name) → find memories by creator
- lore:list_classifications() → available data classes
# Result: AI agents treat Lore as a reliable, persistent extension
# of their reasoning and knowledge base.
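On the wire, MCP tool calls are JSON-RPC 2.0 messages. A minimal sketch of the `tools/call` request an AI platform would send for the `lore:search` tool listed above (the message framing follows the MCP specification; the tool name and argument shape are taken from the listing, and the helper function is hypothetical):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(1, "lore:search",
                     {"query": "How do we authenticate users?"})
print(msg)
```

Because the protocol is standard, any MCP-capable client can issue this call; no Lore-specific SDK is required.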
Lore gives you enterprise-grade memory infrastructure owned by you, integrated seamlessly with any AI platform.