Cognogin · Full Measures Foundation · April 2026
Every module.
Every connection.
A complete catalog of the Cognogin platform's technical capabilities — what each module does, who it serves, why it was built, and how it reinforces every other module in the system.
47 Modules · 8 Layers · 5 Patents Filed · 3 Verticals
Architectural Framework
A Recurrent Neural Architecture — with a Human in the Loop

The Cognogin platform is organized as a three-layer architecture analogous to a neural network — but with two critical additions that make it fundamentally different from any standard AI system.

The Input Layer collects and processes raw signal: emotional state detection, audio analysis, metacognition results, polling data, and Sage's real-time perception of the member. These modules observe without acting.

The Hidden Layer transforms that signal: the DI Engine (the Differential Improvement Engine), the SOAP Pipeline, and the SCO/Redis layer combine and weight the inputs into meaningful representations — the member's DI trajectory, behavioral patterns, and clinical context — without ever touching the member directly.

The Output Layer is where computation becomes human experience: Sage interactions, BTG sessions, STEP education, the governed close, the Thread Model follow-through.
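The three layers above can be sketched as a minimal pipeline. This is an illustration only: every class name, field, and weight below is an assumption drawn from the description, not Cognogin's actual API.

```python
from dataclasses import dataclass

@dataclass
class InputSignal:
    """Input Layer: raw observation of the member; observes without acting."""
    emotional_state: float      # hypothetical detected valence in [-1, 1]
    metacognition_score: float  # hypothetical metacognition result
    sage_perception: float      # Sage's real-time perception of the member

@dataclass
class HiddenRepresentation:
    """Hidden Layer: combined, weighted signal; never touches the member."""
    di_trajectory: float

def hidden_layer(s: InputSignal) -> HiddenRepresentation:
    # Illustrative weighting; per the diagram, Sage perception carries
    # the highest weight among the inputs.
    di = (0.2 * s.emotional_state
          + 0.3 * s.metacognition_score
          + 0.5 * s.sage_perception)
    return HiddenRepresentation(di_trajectory=di)

def output_layer(rep: HiddenRepresentation) -> str:
    """Output Layer: computation becomes human experience."""
    return "BTG session" if rep.di_trajectory < 0.5 else "governed close"
```

The point of the sketch is the separation of concerns: the input layer only observes, the hidden layer only represents, and only the output layer produces a member-facing experience.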

What distinguishes Cognogin from a standard feedforward neural net is twofold:

First: the feedback loop. The output layer feeds back into the input layer. A member's response to a Sage interaction becomes a new signal. The Thread Model's follow-through becomes a DI input. The governed close generates a behavioral commitment that is measured next session. This is a recurrent architecture — the system has memory across time steps, and past outputs directly influence future inputs.
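The feedback loop can be made concrete as a minimal recurrence, where each session's output is folded back into the next session's input. The 0.5 weights and the state shape are arbitrary assumptions for illustration.

```python
def session(state: dict, new_signal: float) -> dict:
    """One time step: the previous output is itself an input to this one."""
    combined = 0.5 * state["last_output"] + 0.5 * new_signal
    return {"last_output": combined,
            "history": state["history"] + [combined]}

# Three sessions' worth of fresh signal. Each result carries memory of
# every earlier output, which is what makes the architecture recurrent.
state = {"last_output": 0.0, "history": []}
for signal in [1.0, 0.5, 0.75]:
    state = session(state, signal)
```

After the loop, the final value depends on all three signals, not just the last one: the system has memory across time steps.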

Second, and critically: a recurrent architecture compounds. Each iteration builds on the last. If the objective function is wrong — if the system optimizes for engagement rather than flourishing — the compounding makes it worse with every cycle. A feedforward net runs once and stops. A recurrent net keeps going. This is precisely why the Overseer and the Planter's Karma are not optional features. They are structural necessities of the recurrent design. The Overseer sits between the hidden layer and the output layer — a human validation gate that no standard neural net possesses. The Planter's Karma ensures the objective function remains oriented toward genuine growth, not iterative manipulation. Without both, a recurrent clinical AI is not safer than a feedforward one. It is more dangerous.
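The Overseer's structural position can be sketched as a gate function. The names `run_cycle`, `overseer_approves`, and `deliver` are hypothetical; the point is only that the human gate sits between the hidden layer's representation and any member-facing output.

```python
from typing import Callable

def run_cycle(hidden_rep: dict,
              overseer_approves: Callable[[dict], bool],
              deliver: Callable[[dict], None]) -> bool:
    """One output cycle: nothing reaches the member without approval."""
    proposed = {"action": "governed close", "basis": hidden_rep}
    if not overseer_approves(proposed):   # the human validation gate
        return False                      # blocked: no member-facing output
    deliver(proposed)                     # computation becomes experience
    return True
```

Because the gate is in the call path rather than alongside it, a rejected cycle produces no output at all — and therefore feeds nothing back into the next iteration's input.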

[Diagram: three architectures compared, each shown as input → hidden → output.]
Feedforward Neural Net: INPUT → HIDDEN → OUTPUT. Static · one pass · no memory.
Generative AI (LLM): tokens → transformer (self-attention, N layers, ~GPT/Claude) → output token loop. RLHF-trained · engagement-optimized · context window = entire memory · no cross-session memory · no DI · no overseer.
Cognogin RCCA: INPUT (emotional, metacognition, behavioral, time-mastery, Sage perception at highest weight) → HIDDEN (DI/DIE 2nd order, SOAP pipeline, SCO Redis) → OVERSEER (human gate) → OUTPUT (BTG · Sage, STEP · Learn, Close · Thread), with two feedback paths: ① DI session feedback (output → input · longitudinal · unbounded) and ② Sage perception (hidden → input · qualitative). Recurrent · persistent · human-gated · growth-optimized.
Why the Overseer is a structural necessity, not a feature
A feedforward net runs once and stops — it cannot compound. A recurrent architecture with a feedback loop iterates. Each cycle builds on the last. If the objective function drifts toward engagement optimization, the compounding makes it exponentially worse. The Overseer is the circuit breaker between the hidden layer and the output layer — the human gate that prevents iterative manipulation from becoming entrenched. The Planter's Karma is the guarantee that the objective function never drifts in the first place. Both are required. Neither is optional.
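The compounding claim can be put in toy numbers. The 1% initial error and 5% per-cycle growth below are arbitrary assumptions chosen only to show the shape of the dynamics.

```python
def misalignment(cycles: int, rate: float = 1.05, gated: bool = False) -> float:
    """Objective-function error after n cycles of a recurrent loop."""
    error = 0.01                    # small initial drift toward engagement
    for _ in range(cycles):
        error *= rate               # each cycle builds on the last
        if gated and error > 0.01:  # circuit breaker: the human gate
            error = 0.01            # rejects drifted output every cycle
    return error

# Ungated, a 1% error grows geometrically over 50 cycles;
# gated, it stays bounded at its starting level.
```

A feedforward net corresponds to `cycles=1`: the error never gets a chance to multiply, which is why the gate matters specifically for the recurrent case.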
User ↔ AI Reinforcement Loop
How modules interact to create positive, measurable growth
[Diagram: reinforcement loop. Nodes: USER (member), SAGE (mentor · guide), DI ENGINE (2nd-order feedback), METACOGNITION (pattern detection), OVERSEER (human in loop), THREAD MODEL (commitment + follow-through), GOVERNED CLOSE (next step · always). Edge labels: signals · perception · escalate · follow-through → positive DI delta · validates. Legend: Signal/Input · Reinforcement Loop · Oversight/Validation.]