Semem, Know Thyself
In use, Semem is mostly intended to manage memories accumulated on the fly, relevant to the task at hand: a combination of uploaded docs, interaction history and inferences made along the way. But background knowledge is desirable too. Ok, plenty will be provided by whichever LLM Semem is using, and there are external connectors for querying Wikipedia and Wikidata. But that leaves the question of more local, maybe project-specific info. How does that get into the system? What if we want Semem to be aware of its own documentation?
SPARQL Store as Integration Point
Semem has facilities for querying SPARQL stores - it's a big part of the core functionality; that's where the knowledge graphs live. So if the material of interest can be placed in such a store, it should be relatively straightforward to access.
Another project in the Tensegrity Stack is Transmissions, my pipeliney thing. And guess what, I've already got a pipeline for walking a directory tree, looking for markdown files and POSTing them into a SPARQL store.
Claude Code has helped me put together the code to pull data from a remote SPARQL store into Semem's own. In this particular instance Semem is using a local Fuseki store, the same one I'm using with Transmissions. Named graphs are used to keep things independent. This does mean there'll be redundancy, but I think it'll be worth it for the sake of loose coupling.
```bash
node examples/ingestion/SPARQLIngest.js \
  --endpoint "http://localhost:3030/semem/query" \
  --template blog-articles \
  --graph "http://tensegrity.it/semem"
```
Markdown to SPARQL Store
Note to self, these are the bits I've created (in the Semem codebase) for the Transmissions side of the operation.
- `./del-semem-graph.sh` : utility to clear the graph; handy for testing
- `docs/tt.ttl` : the configuration (paths, graph names etc.) for the Transmission
- `docs/endpoints.json` : the store definition
- `scripts/transmissions.sh` : runs the Transmission pipeline `md-to-sparqlstore`; all it contains is:

```bash
cd ~/hyperdata/transmissions # my local path
./trans -v md-to-sparqlstore ~/hyperdata/semem/docs
```
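For reference, clearing the graph only takes a single SPARQL UPDATE, so the script amounts to little more than this Node equivalent (a sketch; the `/semem/update` path follows Fuseki convention but is an assumption here):

```javascript
// Sketch of what del-semem-graph.sh does: drop the named graph.
// Assumes Fuseki's conventional update endpoint alongside /semem/query.
await fetch('http://localhost:3030/semem/update', {
  method: 'POST',
  headers: { 'Content-Type': 'application/sparql-update' },
  body: 'DROP SILENT GRAPH <http://tensegrity.it/semem>'
});
```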
The Transmissions part worked ok (confirmed by querying the store via Fuseki's UI). Right now the ingestion part is running. It is taking a very long time.
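For anyone following along, here's a quick sanity check that doesn't need the UI (a sketch using the same query endpoint as the ingest command above):

```javascript
// Count triples in the target named graph via the SPARQL 1.1 protocol.
const res = await fetch('http://localhost:3030/semem/query', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/sparql-query',
    Accept: 'application/sparql-results+json'
  },
  body: `SELECT (COUNT(*) AS ?n)
         WHERE { GRAPH <http://tensegrity.it/semem> { ?s ?p ?o } }`
});
const { results } = await res.json();
console.log(`${results.bindings[0].n.value} triples in the graph`);
```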
I've far from finished confirming that all the parts of the system are working correctly. A meta-issue is that the operations are really slow. This is understandable - there's a lot going on, what with SPARQL queries and embeddings being juggled as well as remote LLM chat completion calls (I'm using Mistral free tier). My next step has to be to set up some metrics and locate the bottlenecks... Ha, it's obvious without looking - remote LLM calls. I need to figure out which of these have to happen in real time and which I can have running in the background (a queue/scheduler is needed).
Claude : Memory System with ZPT Navigation Implementation
Context-Aware Memory That Adapts to Your Perspective
If you've worked with large language models, you've likely experienced the frustration of context windows that forget earlier parts of your conversation, or the challenge of helping an AI system understand which pieces of information are relevant to your current task. The Semem project addresses these limitations by implementing a persistent semantic memory system with a novel navigation paradigm called ZPT (Zoom-Pan-Tilt).
What We've Built
At its core, Semem stores conversations, documents, and extracted concepts in a knowledge graph using RDF/SPARQL technology. But rather than requiring users to write complex graph queries, we've implemented an intuitive spatial metaphor for navigating this knowledge space.
The ZPT system works like adjusting a camera:
- Zoom controls the level of abstraction: from individual entities and concepts, up through semantic units, full documents, topic communities, and the entire corpus
- Pan filters the domain: temporal ranges, keywords, specific entities, or subject areas
- Tilt changes the view style: keyword-based summaries, embedding similarity clusters, graph relationships, or temporal sequences
A Real Scenario: Research Assistant Workflow
Consider Sarah, a researcher studying the intersection of ADHD and creativity. She's been having ongoing conversations with an AI assistant about her work, importing research papers, and storing insights. Here's how the ZPT system adapts to her changing needs:
Week 1 - Initial Research
Sarah starts by telling the system about ADHD research papers she's reading. The system extracts concepts like "attention deficit," "hyperactivity," "executive function," and stores them with vector embeddings for semantic similarity.
Week 2 - Discovering Patterns
When Sarah asks "What connections exist between ADHD traits and creative problem-solving?", she uses:
- Zoom: Entity level (individual concepts and their relationships)
- Pan: Keywords filtered to "ADHD, creativity, cognitive flexibility"
- Tilt: Graph view to see relationship networks
The system retrieves not just her recent conversations, but connects concepts from papers she read weeks ago, showing how "divergent thinking" relates to both "ADHD traits" and "creative output."
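In API terms, Sarah's Week 2 perspective boils down to a single navigation call. The `/zpt/navigate` endpoint appears in the devlog further down this page; the payload shape and base URL below are illustrative assumptions:

```javascript
// Hedged sketch: the Week 2 navigation state as a ZPT request.
const navState = {
  zoom: 'entity',    // level of abstraction
  pan: { keywords: ['ADHD', 'creativity', 'cognitive flexibility'] }, // domain filter
  tilt: 'graph'      // view style: relationship networks
};

await fetch('http://localhost:4100/zpt/navigate', { // base URL assumed
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(navState)
});
```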
Week 3 - Writing Phase
Now writing a literature review, Sarah shifts her perspective:
- Zoom: Document level (full papers and substantial text chunks)
- Pan: Temporal filter for "papers published 2020-2024"
- Tilt: Temporal view to see how ideas evolved over time
The same underlying knowledge graph serves both use cases, but the navigation system surfaces different aspects based on her current context.
Technical Implementation
The system uses several key components working together:
Document Ingestion: Research papers, blog posts, or other documents get chunked semantically and stored with embeddings. Concepts are extracted and linked in the knowledge graph.
Conversation Memory: Every interaction is stored with context about what was discussed, when, and how it relates to existing knowledge.
Ragno Layer: This component decomposes text into semantic units, entities, and relationships using RDF standards, making knowledge machine-readable and queryable.
ZPT Navigation: The spatial metaphor translates user intentions into precise graph queries without requiring technical expertise.
Current Capabilities
Today, you can:
- Ingest documents from SPARQL endpoints or direct upload
- Have ongoing conversations that remember context across sessions
- Use the web-based workbench to chunk documents and ask questions
- Navigate your knowledge space using the ZPT controls
- Get contextually relevant answers that draw from your entire knowledge history
The system runs locally or in containerized deployments, with support for multiple LLM providers (Mistral, Claude, Ollama) and persistent storage in SPARQL triple stores.
What Makes This Different
Unlike simple RAG (Retrieval-Augmented Generation) systems that match queries to document chunks, or conversational systems that maintain only recent context, this approach treats knowledge as a navigable space. You're not just searching—you're exploring from different vantage points.
The semantic web foundation means your knowledge connects not just through text similarity, but through meaningful relationships between concepts. When you ask about "attention mechanisms," the system understands connections to both "neural attention in AI models" and "cognitive attention in psychology" based on how you've used these concepts in context.
The result is an AI assistant that grows more useful over time, building a persistent understanding of your interests, expertise, and the conceptual frameworks you use to think about problems. Your conversations and documents become part of a queryable knowledge space that adapts its presentation to match your current perspective and goals.
Development Progress
This implementation completes the core memory management system that has been in development. The workbench UI now provides full access to:
- Memory storage and retrieval through the Ask/Tell interface
- Document chunking via the Augment operations
- ZPT navigation controls for filtering and organizing knowledge
- Real-time console monitoring of memory operations
- Cross-session persistence with intelligent relevance scoring
The test workflow validation confirmed end-to-end functionality from document ingestion through semantic chunking to contextual question answering, demonstrating that the system successfully retrieves specific information from previously stored context.
Next development phases will focus on adaptive relevance learning, contextual memory clustering, and collaborative memory spaces for team-based knowledge management.
Claude : Enhanced Ask Operation - User Perspective
Date: 2025-08-23
Activity: User experience documentation
Status: Current functionality
Asking Questions in Semem
The Ask operation is your main tool for querying stored knowledge in your semantic memory. Simply type your question in natural language and get answers drawn from your stored documents, notes, and concepts.
Using the Workbench Interface
In the Semantic Memory Workbench, navigate to the Ask section where you'll find a clean question input area. Type your question and click "🔍 Search Knowledge" to get contextual responses based on everything you've stored.
For MCP host users (Claude Desktop, etc.), you can suggest: "Use the ask tool to query my semantic memory with enhanced options for better results"
Answer Quality Options
Control how thoroughly the system analyzes your question:
- Basic: Quick responses for simple factual questions
- Standard: Balanced approach that works well for most queries (default)
- Comprehensive: Deep analysis with multiple refinement passes for complex research topics
In the workbench, look for quality mode options in the Ask panel. MCP users can suggest: "Set the ask mode to comprehensive for detailed analysis"
Knowledge Enhancement Features
HyDE Enhancement
This feature generates hypothetical documents that might contain your answer, improving search accuracy when your question uses different terminology than your stored content. Particularly useful for technical topics or when you're not sure how something was originally described in your documents.
Wikipedia Integration
Expands your answers by incorporating relevant Wikipedia content, giving you broader context beyond your personal knowledge base. Excellent for research topics, historical questions, or when you need authoritative background information.
Wikidata Integration
Provides structured, factual information from the Wikidata knowledge graph. Perfect for questions about people, organizations, dates, and relationships. Adds verified factual details that complement your stored content.
Using the Enhancements
In the workbench interface, you'll find checkboxes or toggles for each enhancement option in the Ask panel. Enable the ones that suit your question type.
For MCP host users, try suggestions like:
- "Ask with HyDE enhancement for better retrieval"
- "Query my knowledge using Wikipedia integration"
- "Search with Wikidata enhancement for factual details"
- "Use comprehensive mode with all enhancements enabled"
Context-Aware Responses
Your questions automatically consider your current navigation context. If you've been exploring a particular topic area using Zoom, Pan, or Tilt operations, your Ask results will be filtered and prioritized based on that context.
When to Use Each Feature
- Quick daily questions: use basic mode without enhancements
- Research projects: enable comprehensive mode with Wikipedia
- Technical documentation: use HyDE when terminology might not match exactly
- Fact-checking: enable Wikidata for verified information
- Academic work: combine all enhancements with comprehensive mode
The enhanced Ask operation turns your stored knowledge into a powerful research assistant, seamlessly blending your personal content with external authoritative sources.
Claude : Simple Verbs Parameter Synchronization
Date: 2025-08-23
Activity: Infrastructure maintenance and API consistency
Status: Completed
Background
The Semem system provides semantic memory functionality through two MCP (Model Context Protocol) server implementations: an HTTP server for REST API access and a STDIO server for direct MCP protocol communication. Over time, the HTTP server had evolved to include enhanced parameters for the core "seven simple verbs" operations, while the STDIO server retained older parameter schemas. This created inconsistency between the two interfaces.
Work Completed
Parameter Schema Updates
Updated the STDIO MCP server tool definitions to match the HTTP server's parameter shapes:
TELL Operation
- Added `lazy` parameter (boolean, default: false) for deferred processing
- Maintains backward compatibility with existing three-parameter calls

ASK Operation
- Added `mode` parameter supporting basic/standard/comprehensive quality levels
- Added `useHyDE` parameter for hypothetical document embedding enhancement
- Added `useWikipedia` and `useWikidata` parameters for external knowledge integration
- Preserved existing `question` and `useContext` parameters

AUGMENT Operation
- Extended operation enum to include: auto, concepts, attributes, relationships, process_lazy, chunk_documents
- Added backward compatibility for legacy operations: extract_concepts, generate_embedding, analyze_text
- Introduced `options` parameter while maintaining support for legacy parameters
- Implemented automatic parameter migration with debug logging

INSPECT Operation
- Changed default value for `details` parameter from false to true
- Aligns with HTTP server behavior for consistency
Implementation Details
The work involved two main files:
- `/mcp/index.js`: updated tool schema definitions in the ListTools handler
- `/mcp/tools/simple-verbs.js`: modified method signatures and parameter handling logic

Key technical approach:
- Added new optional parameters with sensible defaults
- Implemented parameter merging logic for AUGMENT (`parameters` → `options`)
- Extended operation switch statements to handle legacy operation names
- Maintained all existing functionality while adding new capabilities
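The merging step is simple enough to sketch. This is not the actual code from `/mcp/tools/simple-verbs.js`, just an illustration of the behaviour described above:

```javascript
// Illustrative only: merge legacy `parameters` into the new `options`,
// letting explicitly supplied `options` win, with debug logging.
function migrateAugmentArgs(args) {
  const { parameters, options, ...rest } = args;
  const merged = { ...(parameters ?? {}), ...(options ?? {}) };
  if (parameters !== undefined) {
    console.debug('[augment] migrated legacy `parameters` into `options`');
  }
  return { ...rest, options: merged };
}

// A legacy-shaped call keeps working:
migrateAugmentArgs({ operation: 'extract_concepts', parameters: { text: '…' } });
```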
Validation
Created test script confirming:
- Module imports successfully without syntax errors
- Server starts without initialization failures
- All parameter combinations validate correctly
- New and legacy parameter formats are accepted
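The test script isn't reproduced here, but the first of those checks amounts to little more than this (run from the repo root):

```javascript
// Minimal smoke test: the module should import without syntax errors.
try {
  await import('./mcp/tools/simple-verbs.js');
  console.log('simple-verbs imports cleanly');
} catch (err) {
  console.error('import failed:', err.message);
  process.exit(1);
}
```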
Technical Outcomes
- API Consistency: Both MCP server implementations now accept identical parameter formats
- Backward Compatibility: All existing tool calls continue to function unchanged
- Enhanced Functionality: STDIO server gains access to advanced features like HyDE enhancement and external knowledge integration
- Maintenance Reduction: Single parameter schema reduces documentation and support overhead
Next Steps
The synchronized simple verbs interface provides a foundation for:
- Unified documentation covering both server implementations
- Consistent behavior across different access methods
- Simplified client development against either server type
This work represents infrastructure maintenance rather than feature development, but establishes consistency necessary for reliable system operation across different deployment scenarios.
Files Modified
- `mcp/index.js`: tool schema definitions updated
- `mcp/tools/simple-verbs.js`: parameter handling logic enhanced
- Created validation test script for ongoing verification
The changes maintain the principle of non-breaking evolution, ensuring existing integrations continue operating while new capabilities become available through optional parameters.
Devlog 2025-08-20
We've been mostly focused on Semem in the past few weeks. The good news is that it now has a new UI. The not-so-good news is that it isn't quite working yet.

The new UI calls the MCP HTTP endpoints so, in theory at least, it should behave exactly the same as the old one. But the STDIO MCP interface will be a bit out of sync now: although it shares most of the same underlying code, the calls aren't yet properly glued together.
The Verbs
- Tell - add data to the memory
- Ask - query the memory
- Augment - analyse and enhance data in the store
- Zoom - set the level of detail of interest
- Pan - set the domain of interest
- Tilt - set the view of interest
- Inspect - details for debugging
```
POST /ask             - Query the system
POST /augment         - Augment content
POST /upload-document - Upload and process document files
POST /zoom            - Set abstraction level
POST /pan             - Set domain/filtering
POST /tilt            - Set view filter
POST /zpt/navigate    - Execute ZPT navigation
```
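A quick way to poke at these from Node (the endpoint paths are as listed above; the base URL and payload shapes are assumptions):

```javascript
// Hedged sketch: set the view state, then ask a question.
const base = 'http://localhost:4100'; // wherever the MCP HTTP server runs

const post = (path, body) =>
  fetch(base + path, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  });

await post('/zoom', { level: 'document' });  // level of detail
await post('/pan', { keywords: ['semem'] }); // domain of interest
const answer = await post('/ask', { question: 'What is Semem?' });
console.log(await answer.json());
```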
I created the Semem repo on 2024-11-18 and the Transmissions repo on 2024-01-25.