Lore

Enterprise memory and knowledge management platform. AI-native, fully independent, owned by you. Seamless integration, uncompromising security.

What is Lore?

Lore is a long-term memory platform designed to be the central repository for critical knowledge, architecture, and organizational memory. It bridges the gap between human knowledge management (a Confluence-style wiki) and AI platform integration via standard MCP (Model Context Protocol) servers.

Unlike ephemeral AI context windows, Lore provides persistent, searchable, ranked memory that multiple AI agents can reliably retrieve from and store into. It's fully independent: you own it, you control it, and it works seamlessly with any AI platform.

Memory Flow

How information enters Lore, how it is organized, and how it is retrieved

🤖
AI Platforms
MCP Server
→
📥
Ingestion
Embedding & Index
→
💾
Storage
Vector + Metadata
→
🔍
Retrieval
Ranked Search

Core Capabilities

🧠
Dual Interface

Web-based wiki for human knowledge management + MCP server for AI agents. Both access the same unified memory store.

📊
Intelligent Ranking

Wiki items are ranked as a higher-accuracy source of truth. Semantic search with confidence scoring ensures the most relevant memories are retrieved first.

🏷️
Author & Classification

Track who wrote each memory. Classify data as PHI, PII, sensitive, or secret. Fine-grained access control per classification.

🔐
Enterprise Security

OIDC SSO (Authentik), OAuth, API bearer tokens. Multi-tenant isolation. Your data, fully encrypted, under your control.

⚡
MCP Compatible

Standard MCP server for seamless AI integration. Use with Claude, other AI platforms, or custom agents without proprietary APIs.

🚀
Scalable Architecture

Container-native (ARM64). Embedded models included. Scales out to external embedding or LLM services for large deployments.

Architecture & Deployment

Three-Layer System

1
Ingestion & Embedding Layer

Memory enters via human input (wiki), AI platforms (MCP), or APIs. Content is embedded into vector space using containerized embedding models, with the option to scale out to external services.

2
Storage & Organization Layer

Dual storage: vector store for semantic search + document store for full content. Metadata indexed for author, classification, tags, timestamps. Multi-tenant isolation enforced.

3
Retrieval & Access Layer

Semantic search with ranking (wiki items prioritized). Fine-grained access control based on data classification. MCP server for AI, web UI for humans, API for integrations.
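The three layers above can be sketched end to end in a few dozen lines. Everything here is illustrative: the trigram-hash embedding stands in for a real embedding model, and the in-memory list stands in for the vector and metadata stores. Only the ranking rule (wiki items boosted above other sources) comes from the design described above.

```python
import hashlib
import math

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy stand-in for the embedding layer: hashes character
    trigrams into a fixed-size unit vector. Illustrative only."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class MemoryStore:
    """Layers 2 and 3 in miniature: vector + metadata storage
    with ranked retrieval (wiki entries prioritized)."""
    def __init__(self):
        self.items = []

    def ingest(self, content, author, classification, source="mcp"):
        # Layer 1: memory enters and is embedded on the way in.
        self.items.append({
            "content": content,
            "vector": embed(content),
            "author": author,
            "classification": classification,
            "source": source,
        })

    def search(self, query, top_k=3):
        # Layer 3: cosine similarity, with wiki items ranked higher.
        qv = embed(query)
        def score(item):
            cosine = sum(a * b for a, b in zip(qv, item["vector"]))
            return cosine * (1.5 if item["source"] == "wiki" else 1.0)
        return sorted(self.items, key=score, reverse=True)[:top_k]
```

In a real deployment the embedding runs in `lore-embedder`, vectors live in the vector database, and metadata lives in Postgres; the flow of ingest, store, and ranked search is the same.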

Data Classifications

Each memory can be tagged with a classification level that controls who can access it:

Public · Internal · PHI (Protected Health Information) · PII (Personally Identifiable Information) · Sensitive · Secret/Password
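A minimal sketch of how classification-based access control could work. The clearance levels and their ordering are hypothetical, not Lore's actual policy model; the point is that each memory's classification maps to a required clearance, checked on every read.

```python
# Required clearance per classification (ordering is illustrative;
# the actual policy model may differ).
CLEARANCE = {
    "Public": 0,
    "Internal": 1,
    "PII": 2,
    "PHI": 2,
    "Sensitive": 3,
    "Secret/Password": 4,
}

def can_access(user_clearance: int, classification: str) -> bool:
    """Fine-grained access check: a caller may read a memory only
    if their clearance meets the level its classification requires."""
    return user_clearance >= CLEARANCE[classification]
```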

Deployment Model

Lore runs entirely in Docker containers, optimized for ARM64 architecture:

# Core Services
lore-api: GraphQL/REST API + MCP Server
lore-embedder: Embedding model (containerized)
lore-web: Web UI (Confluence-style wiki)
lore-vector-db: Vector search (Qdrant/Weaviate)
lore-postgres: Document + metadata store
lore-auth: OIDC/OAuth integration

# Optional External Services
external-embedder: For large-scale deployments
external-llm: For advanced semantic search
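As a concrete starting point, the service list above could map to a Compose file like the hypothetical sketch below. The `lore/*` image names and ports are placeholders, not published artifacts; only the stock Qdrant and Postgres images are real.

```yaml
# Hypothetical docker-compose sketch mirroring the service list above.
services:
  lore-api:
    image: lore/api:latest        # GraphQL/REST API + MCP server (placeholder image)
    ports: ["8080:8080"]
    depends_on: [lore-postgres, lore-vector-db, lore-embedder]
  lore-web:
    image: lore/web:latest        # Confluence-style wiki UI (placeholder image)
    depends_on: [lore-api]
  lore-embedder:
    image: lore/embedder:latest   # containerized embedding model (placeholder image)
  lore-vector-db:
    image: qdrant/qdrant:latest   # or Weaviate
  lore-postgres:
    image: postgres:16            # document + metadata store
    environment:
      POSTGRES_DB: lore
  lore-auth:
    image: lore/auth:latest       # OIDC/OAuth integration (placeholder image)
```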

Common Use Cases

πŸ—οΈ
Architecture Documentation

Central source of truth for system design, decision records, and runbooks. AI agents retrieve context automatically.

🔍
Intelligent Search

"How do we authenticate users?" retrieves both wiki articles and related memories, ranked by accuracy and relevance.

🤝
Multi-Agent Coordination

Multiple AI agents share knowledge. One agent's findings become knowledge for another. No duplicate learning.

🔑
Secret & Credential Management

Safely store database passwords, API keys, and sensitive config. Only authorized AI agents can retrieve them.

📋
Compliance & Audit Trail

Track who wrote what, when. Data classifications ensure compliance with HIPAA, GDPR, and internal policies.

🌍
Multi-Tenant Enterprise

Separate knowledge bases per organization. OIDC SSO for unified authentication. Complete data isolation.

MCP Server Integration

Lore includes a full MCP (Model Context Protocol) server, allowing AI platforms to interact with your memory as seamlessly as any other resource:

# AI Platform can call MCP tools:

- lore:search(query) β†’ semantic search, returns ranked results
- lore:store(memory, tags, classification) β†’ save new memory
- lore:retrieve(id) β†’ fetch full memory with metadata
- lore:query_by_author(name) β†’ find memories by creator
- lore:list_classifications() β†’ available data classes

# Result: AI agents treat Lore as a reliable, persistent extension
# of their reasoning and knowledge base.
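An agent-side wrapper over these tools might look like the sketch below. `call_tool`, its canned responses, and the `remember`/`recall` helper names are all hypothetical; a real agent would route each call through its platform's MCP client rather than this stub.

```python
def call_tool(name: str, args: dict) -> dict:
    """Stand-in for the MCP client an AI platform provides;
    stubbed with canned responses so the sketch is runnable."""
    canned = {
        "lore:store": {"id": "mem-001"},
        "lore:search": {"results": [{"id": "mem-001", "score": 0.92}]},
        "lore:retrieve": {
            "id": "mem-001",
            "content": "Users authenticate via OIDC SSO.",
            "author": "alice",
            "classification": "Internal",
        },
    }
    return canned[name]

def remember(content: str, tags: list, classification: str = "Internal") -> str:
    """Persist a finding so other agents can retrieve it later."""
    result = call_tool("lore:store", {
        "memory": content, "tags": tags, "classification": classification,
    })
    return result["id"]

def recall(query: str) -> list:
    """Ranked semantic search, then fetch each full memory with metadata."""
    hits = call_tool("lore:search", {"query": query})["results"]
    return [call_tool("lore:retrieve", {"id": hit["id"]}) for hit in hits]
```

One agent calling `remember` makes its finding available to every other agent's `recall`, which is the multi-agent coordination pattern described under Common Use Cases.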

Build Your Memory Foundation

Lore gives you enterprise-grade memory infrastructure owned by you, integrated seamlessly with any AI platform.