Tensegrity Suite

Main components

I gave myself a deadline of 2025-06-21 to get this bunch of things to some kind of minimum viable pseudo-products. If I'm happy with what I've got to show at that point, I'll be doing announcements (haranguing the lists, socials and maybe even ex-girlfriends) on 2025-07-03. I'm more confident than usual that I'll have at least something to show; I've got the lion's share of the hard parts done already. We'll see.

My dev process is to cycle through the subprojects, spending a day or two on each at a time. It's not a regular cycle : cross-dependencies often force a different direction; things always take longer than expected; and there are distractions (often dev spikes, especially now with vibes).

Goals

Have fun. Hopefully produce useful things as a side effect.

Philosophy

This comes from wondering about cognition : what can computers and AI achieve? What can I achieve? I'll use problem solving as a catch-all for intentional state change.

  1. Systems can be described in terms of concepts and relationships between those concepts. The human way.
  2. Problem solving is mostly about pattern matching. Fitting unknown systems to ones where their behaviour is known.
  3. All known agents, animal or artificial, at any given moment in time, have a finite amount of working memory. This acts as a boundary on what they can process.
  4. Concepts can often be decomposed into smaller units.
  5. Groups of associated concepts can be abstracted into meta-concepts (see the sketch after this list).
  6. Scale-free thinking for the win!
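
To make 1, 4 and 5 a bit more concrete, a toy sketch in JS - the names are purely illustrative :

```javascript
// Toy sketch of points 1, 4 and 5 : concepts plus relationships, decomposition
// into smaller units, and abstraction into a meta-concept. Names illustrative.
const kb = { concepts: new Map(), relations: [] };

const addConcept = (id, label) => kb.concepts.set(id, { id, label });
const relate = (from, type, to) => kb.relations.push({ from, type, to });

addConcept("tensegrity", "Tensegrity structure");
addConcept("strut", "Rigid strut");
addConcept("cable", "Tensioned cable");

// 4. Decomposition : a concept described in terms of smaller units.
relate("tensegrity", "hasPart", "strut");
relate("tensegrity", "hasPart", "cable");

// 5. Abstraction : associated concepts rolled up into a meta-concept.
addConcept("self-supporting", "Self-supporting structures");
relate("tensegrity", "broader", "self-supporting");

// The same operations apply at every level : scale-free.
console.log(kb.relations);
```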

Filthy Lucre

I can see commercial potential in this stuff (nb. a ground rule is that it's all open source), but the motivation is more Everest-ian, a because-it's-there challenge : my wanting to get my head around these things and, yeah, better tools for problem-solving would be nice to have. Whatever, this lot is maybe best considered an art/school project.

Bag of Stuff

I have been building these things as quasi-independent projects, each with their own repo. But to start seeing the utility I'm after, they need joining up, which makes a big dependency graph : a tensegrity structure. However, many of the dependencies are soft (X helps Y) rather than critical (X needs Y) - cf. non-functional requirements. Some parts I've been working on for a long time (25 years is a long time in tech, right..?), but the arrival of Deep Learning, especially that around LLMs, forced a reboot. And obviously everything else is continually evolving. Sitting still is not an option.

It may seem like there's a lot of wheel reinvention, but that's mostly because I'm fed up with (other people's) frameworks. My own very limited context window needs to be used on the tasty bits, not conventions. I'm favouring ES module style Node.js and/or vanilla browser JS etc. so I don't have to flip languages so often.

All this stuff is riddled with RDF. I'll write a Why RDF? note (again) some time. But the tl;dr is twofold : the Web is the best graph knowledgebase in human history; some level of formal ontology definition helps considerably when describing things (especially knowledge).

Sites

Hyperdata is an arbitrary umbrella name I'm using for all this stuff. I've got the domain hyperdata.it and associated hosting for any quasi-reliable service fixtures, docs etc. For experiments, maybe even useful things, I have strandz.it.

danny.ayers.name is the dumping ground for everything else.

TIA

I've many ideas, most in fuzzy forms which require a lot of hand-waving to describe. The main high-level target is to create communities of intelligent agents which improve on the state of the art in solving arbitrary problems. This aspect I've labeled TIA Intelligence Agency (it has a repo but there's nothing to see yet). Some example problems below.

For pragmatic reasons, and to keep things conceptually simple, the environment will be multi-user chat over XMPP. A containerized environment will enable repeatable experimentation :

tbox

A docker-compose setup with :

Status 2025-05-04 : it's largely in place, but needs a lot of tidying up and hardening. Only just started with telemetry.
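
For a flavour of the agent environment, a minimal sketch of a client joining a multi-user chat room using the xmpp.js library - the addresses and credentials here are made up, not tbox's actual configuration :

```javascript
// Minimal XMPP MUC (XEP-0045) participant using xmpp.js. The service address,
// room and credentials are hypothetical, not tbox's real config.
import { client, xml } from "@xmpp/client";

const xmpp = client({
  service: "xmpp://localhost:5222",
  username: "agent1",
  password: "secret",
});

// Log anything said in the room.
xmpp.on("stanza", (stanza) => {
  if (stanza.is("message") && stanza.getChildText("body")) {
    console.log("room :", stanza.getChildText("body"));
  }
});

await xmpp.start();

// Join the room by sending directed presence with the MUC namespace.
await xmpp.send(
  xml("presence", { to: "agents@conference.localhost/agent1" },
    xml("x", { xmlns: "http://jabber.org/protocol/muc" })),
);

// Say hello to whoever (or whatever) is listening.
await xmpp.send(
  xml("message", { type: "groupchat", to: "agents@conference.localhost" },
    xml("body", {}, "agent1 online")),
);
```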

The subproject with the most novelty is :

Semem

Semantic memory for agents. I want this mostly defined in terms of interfaces (see Vocabs) to keep it flexible. The running code is essentially helpers, just lowish-level things like concept extraction etc.

Core components :

The first 4 components are fairly self-explanatory, kind of off the shelf. What sets it apart is an ontology-backed approach to composition & choreography - see Vocabs...
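
To show the interface-first idea, a toy in-memory version - the names are illustrative, not Semem's actual API, and the real thing would persist to a SPARQL store rather than an array :

```javascript
// Toy semantic memory : store text with an embedding, retrieve by cosine
// similarity. Illustrative only - not Semem's actual API or storage.
const cosine = (a, b) => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

export class ToyMemory {
  #items = [];
  constructor(embed) {
    this.embed = embed; // injected : e.g. a call out to an embeddings service
  }
  async store(text) {
    this.#items.push({ text, vector: await this.embed(text) });
  }
  async retrieve(query, limit = 5) {
    const qv = await this.embed(query);
    return this.#items
      .map((item) => ({ ...item, score: cosine(qv, item.vector) }))
      .sort((x, y) => y.score - x.score)
      .slice(0, limit);
  }
}
```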

The background to this is probably worth noting. A while back I had a play with Graph RAG using SPARQL, working with LlamaIndex. Crude proof of concept. I wanted more, but not dependent on the whims of a framework. I stumbled on memoripy, which had most of the skeleton of what I wanted. So my starting point for #:semem was a kind of rewrite of that using JS & SPARQL. At that point I shifted my energies back to #:transmissions, meanwhile working on the vocabulary support - very significant, see Vocabs...

Status 2025-05-04 : the basics are pretty much in place, although they need a lot of tweaking and #:transmissions hooking in. Dependencies around here :

Transmissions

This is a pipeliney thing. In fancy speak, a workflow compositor.
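
The general shape, as a sketch - illustrative only, not Transmissions' actual API :

```javascript
// A pipeline as a composition of async processors, each taking a message
// and returning a (possibly transformed) message. Sketch only.
const pipeline = (...processors) => async (input) =>
  processors.reduce(async (acc, p) => p(await acc), input);

// Two hypothetical processors.
const extractText = async (msg) => ({ ...msg, text: msg.raw.trim() });
const tagConcepts = async (msg) => ({ ...msg, concepts: msg.text.split(/\s+/) });

const run = pipeline(extractText, tagConcepts);
console.log(await run({ raw: "  scale-free thinking  " }));
// -> { raw: '...', text: 'scale-free thinking', concepts: ['scale-free', 'thinking'] }
```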

Postcraft

TIA Information Agency

Vocabs

These are key. I've determined the main things I need are : a knowledge representation language (Ragno), a knowledge navigation language (ZPT), a knowledge communication language (Lingue) and a methodology language (UM).

The primary vocab links are placeholders; they'll hopefully resolve to something useful by 2025-06-21.

Ragno

The Ragno ontology is used to describe a knowledgebase. General-purpose terms are pretty well covered by SKOS already; what was demanded here were terms for specialisms that would fit well with RAG applications. I'd got a lot sketched out when I stumbled on the paper NodeRAG: Structuring Graph-based RAG with Heterogeneous Nodes. The information model in that has nice intersections with what I was after, so I merged the ideas in. A significant bonus is that the paper describes algorithms for knowledge augmentation and retrieval that are a very good fit for what I was planning with Semem.
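
For flavour, the kind of description this enables - the ragno: namespace and terms below are hypothetical stand-ins, not the published ontology :

```javascript
// Turtle sketch : SKOS does the generic concept work, hypothetical ragno:
// terms add the RAG-oriented specialisms (node typing, provenance hooks).
import { Parser } from "n3";

const ttl = `
@prefix skos:  <http://www.w3.org/2004/02/skos/core#> .
@prefix ragno: <http://example.org/ragno#> .

<#semem> a ragno:Entity ;              # hypothetical RAG-flavoured node type
    skos:prefLabel "Semem"@en ;
    skos:broader <#tensegrity-suite> ; # plain SKOS still carries the generic links
    ragno:extractedFrom <#this-post> . # provenance, for augmentation/retrieval
`;

const quads = new Parser({ baseIRI: "http://example.org/doc" }).parse(ttl);
console.log(quads.length, "triples");
```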

Zoom, Pan, Tilt (ZPT)

The ZPT ontology is used to describe a view of (part of) a knowledgebase. The primary purpose is to navigate the knowledge Universe and scope the current context of interest. The core terms are lifted from the world of cinema, with increasing levels of analogy-stretching :
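
As a purely hypothetical sketch of a view descriptor - the readings given to the cinema terms are my guesses at the intent, not the ontology's definitions :

```javascript
// Hypothetical ZPT view descriptor - the zoom/pan/tilt readings are guesses
// for illustration, not the ontology's actual definitions.
const view = {
  zoom: "unit",                      // level of detail : whole corpus down to one unit
  pan: { topic: "semantic memory" }, // which region of the knowledge Universe is in frame
  tilt: "embedding-similarity",      // the angle of analysis used to rank what's visible
};
```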

Lingue

The Lingue ontology is designed to facilitate communication between agents in a way that is flexible with regard to agent capabilities. It subsumes much of MCP and A2A. A key notion is negotiation, such that an optimal local lingue franca may be determined. Agents will have self-descriptions (? in MCP, cards in A2A) that may be compared at a fairly mechanical level. Any commonalities can allow promotion to more sophisticated or domain-specific language(s).
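
A toy sketch of the negotiation step - the capability names and ranking are made up for illustration :

```javascript
// Mechanical comparison of self-descriptions, then promotion to the most
// sophisticated shared language. Names and ranking are illustrative only.
const LANGUAGES = ["plain-text", "json-rpc", "sparql", "domain-dsl"]; // crude sophistication order

const negotiate = (a, b) => {
  const shared = a.capabilities.filter((c) => b.capabilities.includes(c));
  // Promote to the richest common option; fall back to plain text.
  return LANGUAGES.filter((l) => shared.includes(l)).pop() ?? "plain-text";
};

const alice = { capabilities: ["plain-text", "json-rpc", "sparql"] };
const bob = { capabilities: ["plain-text", "sparql"] };
console.log(negotiate(alice, bob)); // "sparql" - the local lingue franca
```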

Avalanche Development Methodology

It seems clear that recent developments around the use of AI in dev call for a new paradigm in the whole process. I've been trying to capture ideas on what works and what doesn't : avalanche methodology.

Btw, I have a sandpile avalanche simulator.
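
The rule underneath such a simulator is tiny. Here's the classic Bak-Tang-Wiesenfeld version as a toy sketch - not the simulator's actual code :

```javascript
// Bak-Tang-Wiesenfeld sandpile : a cell holding 4+ grains topples, shedding
// one grain to each neighbour; topples cascade - an avalanche. Toy sketch.
const SIZE = 9;
const grid = Array.from({ length: SIZE }, () => new Array(SIZE).fill(0));

function drop(x, y) {
  grid[y][x]++;
  const unstable = [[x, y]];
  while (unstable.length) {
    const [cx, cy] = unstable.pop();
    while (grid[cy][cx] >= 4) {
      grid[cy][cx] -= 4; // grains shed at the boundary just fall off the grid
      for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const nx = cx + dx, ny = cy + dy;
        if (nx >= 0 && nx < SIZE && ny >= 0 && ny < SIZE) {
          grid[ny][nx]++;
          if (grid[ny][nx] === 4) unstable.push([nx, ny]);
        }
      }
    }
  }
}

// Keep dropping grains on the centre cell; avalanche sizes end up power-law
// distributed - self-organised criticality.
for (let i = 0; i < 1000; i++) drop((SIZE - 1) / 2, (SIZE - 1) / 2);
```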

Client Stuff

Event Bus

I've been hopping between Node.js and vanilla browser JS (with npm, webpack, vitest). A mechanism that seemed to make sense is an event bus I could use in either. So I made a quick & dirty lib - "evb" [4]. Even though there's very little to it, I've published it as an npm package to try and keep some consistency across the things I make.
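
For illustration, a minimal bus of the same general shape - a sketch of the pattern, not evb's actual API :

```javascript
// Minimal publish/subscribe event bus, dependency-free so it runs unchanged
// in Node.js or the browser. A sketch of the pattern, not evb's actual API.
export class EventBus {
  #handlers = new Map(); // topic -> Set of handler functions

  on(topic, fn) {
    const set = this.#handlers.get(topic) ?? new Set();
    set.add(fn);
    this.#handlers.set(topic, set);
    return () => set.delete(fn); // returns an unsubscribe function
  }

  emit(topic, payload) {
    for (const fn of this.#handlers.get(topic) ?? []) fn(payload);
  }
}

// Usage :
const bus = new EventBus();
const off = bus.on("message", (m) => console.log("got", m.text));
bus.emit("message", { text: "hello" });
off(); // unsubscribe
```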

farelo postcraft sheltopusik squirt trestle wstore foaf-retro mcp-o trans-apps a2a-o elfquake hyperdata thiki uve atuin evb hyperdata-clients node-flow tbox tia viz