Chapter 13

The Anatomy of a System That Doesn't Forget

Part Three: The Architecture


Everything in Part 2 was about building individual components: mods, frameworks, techniques, sequences. Each one powerful on its own. Each one demonstrably better than working without it.

But here's the problem that remains even when you have good mods: where do they live? How do you store them so they're actually usable? How do you load them reliably without hunting through notes or pasting from a dozen different documents? How do you keep them current as your work evolves without the whole system becoming a maintenance burden?

The answer is a context layer architecture — a structured hierarchy that organizes everything you've built into a system with clear levels, clear relationships, and a clear format for export and loading. This is the anatomy of a system that doesn't forget, because the knowledge isn't trapped in your head or scattered across files. It's organized, accessible, and designed to load cleanly into any AI session.

The Five Levels

The architecture has five levels, each nested inside the one above it. The cascade runs from the most general to the most specific: Base to Plugs to Packs to Mods to Prompts.


Figure 13.1 — The context layer hierarchy: Base to Plugs to Packs to Mods to Prompts.

Level 1: The Base

The Base is the outermost container — the unified system that holds everything else. It's not a folder of files or a collection of notes. It's a single, coherent document with global instructions that govern how the entire system behaves, regardless of which mod is active or which task is being done.

Think of the Base as a gravitational field. Every component inside it orbits the same set of global instructions: how the system thinks, how it maintains tone, what the non-negotiables are, how every part connects. These aren't prompts or metadata — they're behavioral laws that ensure coherence across the whole system.

In practice, the Base is typically a single document in your notes tool of choice: a Craft document, a Notion page, a Google Doc, a markdown file. The format matters less than the principle: everything lives inside one container, and that container has a clear global instruction set at the top.

Your Base already exists, partially — it's your Cognitive OS document. Part 3 is about giving it the right structure so it functions as a real Base rather than just a collection of notes.

Level 2: Plugs

Inside the Base, knowledge organizes into five high-level domains called Plugs. Each Plug is a gravitational zone — a distinct type of intelligence that serves a different function in the system.

Principles holds the values, ethics, and behavioral constants. The things that don't flex. This is where your Charter Mod lives, along with any other standing principles that govern the system's behavior. When something must remain true across every context and every mode, it belongs in Principles.

Archetypes holds the templates of identity — the defined modes and personas the system can inhabit. This is where your Persona Mods live: Meta Mode, your primary writing persona, your editorial mode, your strategic thinking mode. Each archetype is a distinct behavioral identity the system can activate.

Constructs holds the usable processes — the protocols, workflows, and structured procedures the system can execute. This is where your Protocol Mods live: the Ingestion Mod, the Draft Protocol, your research workflow, your review process. Constructs are what the system does, as distinct from who it is.

Narratives holds the connective tissue — the explanations, context, and story frames that give the system coherence and meaning. Documentation about why the system is built the way it is. Notes on your domain expertise. The background knowledge that informs how the system interprets ambiguous situations.

Surfaces holds the output formats — the templates, deliverable structures, and interaction patterns the system uses to present its work. What a finished draft looks like. How a research summary should be structured. The format for a client brief. Surfaces translate internal context into visible, usable outputs.

THE FIVE PLUGS

- Principles: Values, ethics, and behavioral constants. Where your Charter lives.
- Archetypes: Identity templates and personas. Where your Persona Mods live.
- Constructs: Processes, protocols, and workflows. Where your Protocol Mods live.
- Narratives: Connective tissue, domain expertise, context and meaning.
- Surfaces: Output formats, templates, and deliverable structures.

Every mod, every piece of knowledge, every context unit belongs in exactly one Plug.

Levels 3, 4, and 5: Packs, Mods, and Prompts

Inside each Plug, related mods cluster into Packs. A Pack is simply a grouping of mods that share a theme or workflow. A Writing Pack contains your writing-related Protocol and Persona Mods. A Client Pack contains the context and templates specific to client work. A Research Pack contains your research protocols and frameworks.

Packs aren't a rigid requirement — they're an organizational tool. When your mod library is small, you may not need them. When it grows to twenty or thirty mods, Packs are what prevent the system from becoming unwieldy. They let you load a relevant cluster of context with one move rather than hunting for individual mods.

Inside each Pack are the Mods themselves — the atomic context units you've been building throughout Part 2. Each mod is complete: purpose, structure, parameters, examples, activation cue. Each one is independently loadable.

At the bottom of the hierarchy are Prompts — the raw instructions you send in a given moment. Prompts are the smallest unit, and they're the only level that isn't persistent. Prompts are generated in the session, drawing from the mods and context above them. A well-built architecture means your prompts can be short, because most of the context is already loaded.
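
The nesting described above can be sketched in code. This is a minimal illustration, not an implementation from the book — all of the type and function names here (`Mod`, `Pack`, `Plug`, `Base`, `build_prompt`) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Mod:
    name: str
    body: str  # purpose, structure, parameters, examples, activation cue

@dataclass
class Pack:
    name: str
    mods: list[Mod] = field(default_factory=list)

@dataclass
class Plug:
    name: str  # Principles, Archetypes, Constructs, Narratives, or Surfaces
    packs: list[Pack] = field(default_factory=list)

@dataclass
class Base:
    global_instructions: str
    plugs: list[Plug] = field(default_factory=list)

# Prompts are the only non-persistent level: composed at session time from
# the global instructions plus whichever mods are currently loaded.
def build_prompt(base: Base, loaded: list[Mod], instruction: str) -> str:
    return "\n\n".join(
        [base.global_instructions] + [m.body for m in loaded] + [instruction]
    )
```

Note what the sketch makes visible: the four upper levels are stored data, while a Prompt is just a function of them — which is why prompts can stay short once the rest of the context is loaded.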

Why the Hierarchy Matters

The five-level hierarchy isn't bureaucracy. It's what makes the system navigable as it grows.

Without structure, a mod library becomes a pile. You have a dozen mods but can't remember which one to load, or you load them inconsistently and get inconsistent results. The knowledge exists but isn't accessible in the moment you need it.

With the hierarchy, every piece of knowledge has a clear location. When you need a process, you go to Constructs. When you need an identity, you go to Archetypes. When you need a value to hold, you go to Principles. Navigation becomes automatic rather than effortful.

The hierarchy also makes the system portable. Because everything is organized in a single Base document with a consistent structure, you can export the whole thing — or any subset of it — and load it into any AI session on any platform. The format travels with the knowledge.
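
Portability can be sketched the same way. Assuming the Base is held as plain nested dicts (plug name to pack name to mod name to mod body — a representation chosen for this illustration, not prescribed by the book), exporting the whole system or a subset is a single flattening pass:

```python
def export_base(global_instructions, plugs, only=None):
    """Flatten the Base, or a subset of its Plugs, into one text payload."""
    lines = ["# Global Instructions", global_instructions]
    for plug_name, packs in plugs.items():
        if only is not None and plug_name not in only:
            continue  # export only the requested subset of Plugs
        lines.append(f"# {plug_name}")
        for pack_name, mods in packs.items():
            lines.append(f"## {pack_name}")
            for mod_name, body in mods.items():
                lines.append(f"### {mod_name}")
                lines.append(body)
    return "\n\n".join(lines)
```

Because the output is one flat text document, the same payload loads into any AI session on any platform, exactly as the paragraph above describes.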

Building Your Base

Your Cognitive OS document already has the seeds of a Base. The sections you've been building throughout Part 2 — your mods, your RIPE defaults, your layering sequences — are the raw material. What they need now is the hierarchical structure that makes them a system.

STRUCTURE YOUR COGNITIVE OS DOCUMENT AS A BASE

1. Open your Cognitive OS document. Create a new section at the very top titled Global Instructions. Write three to five sentences that describe the overall purpose of this system, who it serves, and the two or three most important behavioral rules that always apply. This is the Base layer.
2. Create five top-level sections below Global Instructions, one for each Plug: Principles, Archetypes, Constructs, Narratives, Surfaces.
3. Move your Charter Mod into Principles. Move your Persona Mods into Archetypes. Move your Protocol Mods into Constructs. Move your domain knowledge notes and Narratives section into Narratives. Move your output templates and layering sequences into Surfaces.
4. Inside each Plug, create a Pack for each cluster of related content. If you have two Protocol Mods for writing workflows, group them in a Writing Pack under Constructs. If you have client-specific context, group it in a Client Pack under Narratives.
5. Review the full document. Does each mod have a clear home? Is anything missing a Plug? Is anything in the wrong place? Reorganize until the structure feels coherent.
6. Add an index at the top of the document: a short list of what's in each Plug and how to find it. This becomes your navigation layer, especially useful when loading subsets of the system.
7. Export the full document as a single file (PDF, markdown, or plain text, depending on your platform). This export is your Base payload: the context package you'll load into AI sessions in Chapter 17.
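
The end state of the exercise can be sketched as a document skeleton. The section names follow the exercise; the pack names under each Plug are placeholder examples, not prescribed contents:

```python
# A minimal skeleton of the restructured Base document. Pack names are
# illustrative; use whatever clusters your own mods actually form.
BASE_SKELETON = """\
# Global Instructions
(three to five sentences: purpose, audience, always-on rules)

# Index
(one line per Plug: what lives there and how to find it)

# Principles
## Charter Mod

# Archetypes
## Persona Mods

# Constructs
## Writing Pack

# Narratives
## Client Pack

# Surfaces
## Templates and Layering Sequences
"""

PLUGS = ["Principles", "Archetypes", "Constructs", "Narratives", "Surfaces"]
# Sanity check: every Plug has a top-level section in the skeleton.
assert all(f"# {p}\n" in BASE_SKELETON for p in PLUGS)
```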

The document you've just structured is your first real knowledge base. Not a collection of notes, not a pile of mods — a coherent, navigable, exportable system with clear levels and clear relationships between them.

In the next chapter, we talk about what happens when that system starts to degrade — because no system maintains its coherence forever without attention. Context drift is the enemy of every well-built architecture, and understanding it is what lets you prevent it.

Reflect
Look at your Cognitive OS document before doing this chapter's exercise. Which Plug would be fullest right now — and which would be nearly empty? The empty ones aren't failures, they're signals about...

Apply
Do the seven-step exercise in this chapter. Restructure your Cognitive OS document into the five-Plug hierarchy. Don't wait until you have more content — structure what you have now. An organized...

Build
After restructuring, export your Base document as a single file. Save it somewhere accessible. Label it with today's date — this is Version 1 of your Base. Every major revision gets a new version....