The Cognogin platform is organized as a three-layer architecture analogous to a neural network — but with two critical additions that make it fundamentally different from any standard AI system.
The Input Layer collects and processes raw signal: emotional state detection, audio analysis, metacognition results, polling data, and Sage's real-time perception of the member. These modules observe without acting.
The Hidden Layer transforms that signal: the DI (Differential Improvement) Engine, the SOAP Pipeline, and the SCO/Redis layer combine and weight the inputs into meaningful representations (the member's DI trajectory, behavioral patterns, and clinical context) without ever touching the member directly.
The Output Layer is where computation becomes human experience: Sage interactions, BTG sessions, STEP education, the governed close, the Thread Model follow-through.
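Read as code, the three layers form a simple pipeline: observe, represent, act. The sketch below is purely illustrative; every class name, field, weight, and function here is an assumption invented for this example, not an identifier from the platform itself.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """Input Layer: raw observations (these modules observe without acting)."""
    emotional_state: float  # e.g. valence in [-1, 1] -- assumed encoding
    audio_energy: float
    poll_score: float

@dataclass
class Representation:
    """Hidden Layer: combined, weighted member context."""
    di_trajectory: float
    behavioral_pattern: str

def input_layer(raw: dict) -> Signal:
    # Collect and normalize raw signal; no member-facing action here.
    return Signal(raw["valence"], raw["audio"], raw["poll"])

def hidden_layer(s: Signal) -> Representation:
    # Weight and combine inputs into a meaningful representation,
    # without ever touching the member directly. Weights are illustrative.
    di = 0.5 * s.emotional_state + 0.3 * s.poll_score + 0.2 * s.audio_energy
    pattern = "improving" if di > 0 else "flat"
    return Representation(di_trajectory=di, behavioral_pattern=pattern)

def output_layer(rep: Representation) -> str:
    # Computation becomes human experience: a placeholder standing in
    # for a Sage interaction or BTG session.
    return f"sage_interaction(pattern={rep.behavioral_pattern})"

result = output_layer(hidden_layer(input_layer(
    {"valence": 0.4, "audio": 0.1, "poll": 0.6})))
```

The design choice the analogy highlights is separation of concerns: the input layer never acts, and the hidden layer never reaches the member; only the output layer crosses into human experience.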
What distinguishes Cognogin from a standard feedforward neural net is twofold:
First: the feedback loop. The output layer feeds back into the input layer. A member's response to a Sage interaction becomes a new signal. The Thread Model's follow-through becomes a DI input. The governed close generates a behavioral commitment that is measured next session. This is a recurrent architecture — the system has memory across time steps, and past outputs directly influence future inputs.
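That recurrence can be sketched as a loop in which each session's output is measured and folded back into the next session's input. Again, every name and number below is an assumption for illustration only; the point is the shape of the loop, not its contents.

```python
# Sketch of the recurrence: the governed close generates a commitment,
# the commitment is measured next session, and the measurement becomes
# a new DI input. State (the DI value) persists across time steps.

def cycle(di_input: float, commitment_kept: bool) -> float:
    """One session: fold the previous output's measured result
    (did the member keep the commitment?) back into the signal."""
    feedback = 0.1 if commitment_kept else -0.1  # illustrative weights
    return di_input + feedback

di = 0.0
history = [di]
# Three sessions: past outputs directly influence future inputs.
for kept in (True, True, False):
    di = cycle(di, kept)
    history.append(di)
```

Unlike a feedforward pass, `history` here is exactly the "memory across time steps" the paragraph describes: the trajectory exists only because outputs re-enter the loop.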
Second, and critically: a recurrent architecture compounds. Each iteration builds on the last. If the objective function is wrong — if the system optimizes for engagement rather than flourishing — the compounding makes it worse with every cycle. A feedforward net runs once and stops. A recurrent net keeps going.

This is precisely why the Overseer and the Planter's Karma are not optional features. They are structural necessities of the recurrent design. The Overseer sits between the hidden layer and the output layer — a human validation gate that no standard neural net possesses. The Planter's Karma ensures the objective function remains oriented toward genuine growth, not iterative manipulation. Without both, a recurrent clinical AI is not safer than a feedforward one. It is more dangerous.
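The compounding argument can be made concrete with a toy model. Assume a wrong objective that amplifies some error ("drift") by a fixed factor each cycle, and a validation gate, standing in for the Overseer, that sits between the hidden and output layers and refuses to release an output whose drift exceeds a limit. The rate and limit are invented numbers; the sketch shows only the structural difference.

```python
# Illustrative sketch: why a recurrent loop needs a gate. A wrong
# objective compounds multiplicatively across cycles, while a human
# validation gate (standing in for the Overseer) bounds the drift.
# All names and constants are assumptions for this example.

GROWTH_RATE = 1.5   # wrong objective: each cycle amplifies the error
GATE_LIMIT = 1.0    # gate rejects any output whose drift exceeds this

def ungated(drift: float, cycles: int) -> float:
    for _ in range(cycles):
        drift *= GROWTH_RATE  # a feedforward net would stop; recurrence compounds
    return drift

def gated(drift: float, cycles: int) -> float:
    for _ in range(cycles):
        proposed = drift * GROWTH_RATE
        # Gate between hidden and output layers: a rejected output
        # never reaches the member, so it never feeds back into the
        # next cycle's input.
        drift = proposed if proposed <= GATE_LIMIT else drift
    return drift

unsafe = ungated(0.1, 10)  # error grows every cycle
safe = gated(0.1, 10)      # error is bounded by the gate
```

After ten cycles the ungated drift has grown several-fold while the gated drift is held below the limit, which is the structural sense in which the gate is a necessity of the recurrent design rather than a feature bolted onto it.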