There’s a viral Reddit post that went up a few days ago in the Obsidian community. The title: “I’m trying to build an AI Second Brain and I’m losing mine doing it.”

The person had a solid setup — Obsidian for notes, Claude Pro for thinking, NotebookLM for their documents, Gamma for presentations. Good tools, all of them. But here’s the thing they kept running into:

“Claude remembers convos. NotebookLM knows my uploads. My actual brain lives somewhere else entirely. Classic fragmentation.”

They had six half-finished projects that required a full mental reboot every single time they returned to them. Knowledge wasn’t disappearing — continuity was. And they were stuck somewhere between a junk drawer and a second job.

Sound familiar? If you’ve been trying to build an AI-assisted knowledge system in the last year or two, I’d bet good money you’ve hit the same wall. Everyone’s talking about AI second brains. Very few people are actually building ones that work.

Here’s why most AI second brains fail — and more importantly, what to do about it.

The Fragmentation Problem Nobody Warns You About

The AI tool market has exploded. That’s mostly a good thing. But it’s created a specific trap for knowledge workers: we’ve accumulated ecosystems of AI tools that don’t share context.

Your Claude session knows what you talked about yesterday. Your Obsidian vault knows what you’ve written. Your NotebookLM notebooks know what PDFs you’ve uploaded. Your Notion AI knows your project database. None of them know about each other.

That’s not four tools — that’s four separate brains, each with partial information, none of them the whole picture.

The irony is brutal. You set out to extend your thinking capacity, and you’ve ended up doing extra coordination work just to keep all your AI tools vaguely up to date with your life. You’ve added cognitive overhead to a system designed to reduce cognitive overhead.

This is the fragmentation problem. And it’s not a tool failure — it’s an architecture failure. Most people (understandably) adopt AI tools one at a time, as they discover them, without a coherent system design underneath. The result is a pile of useful things that don’t add up to a useful system.

The fix isn’t more tools. It’s better architecture.

The Single Source of Truth Principle

Every functioning second brain — AI-powered or not — needs a single source of truth. One place where your actual knowledge, decisions, and context live. Everything else is an interface on top of that.

Most people skip this step. They have notes in Obsidian, context in Claude, documents in Google Drive, and ideas in their phone’s default notes app. There’s no single source. There’s just noise distributed across twelve different places.

Pick one. Seriously. For most people in 2026, the best options are:

Obsidian — If you want full ownership, local files, and maximum flexibility. It’s a local-first markdown note app with a graph view and a thriving plugin ecosystem, and it’s become the default choice for serious knowledge workers. Your vault is just a folder of .md files — which means it’s trivially easy to give any AI tool access to it.

Notion — If you prefer a more structured, database-driven approach and don’t mind a cloud-dependent setup. Notion’s AI features have matured significantly, and the database views are genuinely powerful for tracking projects. The downside: your data lives on their servers, and the AI context features are still fairly surface-level compared to what you can build with Obsidian.

Google Drive (Docs/text files) — Underrated for people already deep in the Google ecosystem. It’s not sexy, but it’s universally accessible and integrates well with tools like NotebookLM and Gemini.

The tool matters less than the commitment. Pick one, move your actual thinking there, and stop splitting it across multiple apps.

The Memory File: Giving AI Tools Persistent Context

Here’s the practical technique that changed everything for me — and for a growing number of people building serious AI workflows.

Create what I call a memory file. It’s a plain markdown file (I call mine MEMORY.md) that lives in your main knowledge store. It’s not a journal and it’s not a project file. It’s a curated, living document that answers one question: what does an AI assistant need to know to pick up where we left off?

It includes things like:

  - Who you are and what you’re working on, in a sentence or two
  - Active projects and their current status
  - Key decisions you’ve made, and the reasoning behind them
  - Preferences the AI should respect: tone, formats, conventions
  - Anything an assistant repeatedly gets wrong unless told

Every time you start a new Claude session (or any AI session), you reference this file. You can paste it in, point the AI at your vault, or — if you’re using Claude Code or a similar tool with file access — have it read the file automatically at the start of each session.

The result: your AI isn’t starting from scratch every time. It’s starting with context. That’s the difference between talking to an intern who needs everything explained and a colleague who already knows the background.
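For concreteness, here’s one hypothetical shape such a file might take — the headings and details are illustrative, not a prescribed format:

```markdown
# MEMORY.md
_Last updated: 2026-01-15_

## Who I am
Freelance technical writer; main tools are Obsidian and Claude.

## Active projects
- Newsletter relaunch: drafting issue 3 (see Projects/newsletter/HANDOFF.md)
- Client docs site: blocked on style-guide sign-off

## Standing decisions
- British spelling throughout
- Drafts live in /Drafts, never in the inbox

## Preferences
- Be direct; skip the preamble
- Ask before restructuring my notes
```

The exact sections matter far less than the discipline of keeping them current.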

A few practical tips on memory files:

  - Keep it under a page. Past that, it stops being context and starts being a second journal.
  - Update it at the end of a session, while the context is fresh, rather than at the start of the next one.
  - Date your updates so you (and the AI) can tell current context from stale context.
  - Prune ruthlessly. Outdated context is worse than no context.

Solving the Continuity Problem: Project Handoff Docs

That Reddit user hit on something important: hyperfocus → drop → mental reboot → repeat. This cycle destroys productivity because every return to a project costs you a recovery tax — time and energy spent re-orienting yourself to where you were.

The AI-native solution is a project handoff document. Think of it as a letter to your future self (and your AI assistant).

Every active project gets a file. At minimum it contains:

  - Where you left off, in a sentence or two
  - The next concrete action
  - Open questions and unresolved decisions
  - Links to the key files, drafts, and resources

When you return to a project after time away, you open this file and share it with your AI. Suddenly you’re not starting from a cold engine — you’re picking up a warm thread.

Tools like NotebookLM are actually brilliant for this pattern. Feed your project handoff doc into a NotebookLM notebook alongside your research materials, and you’ve got a context-aware research assistant for that specific project. The limitation is that NotebookLM is read-only — you query it but can’t update it. That’s fine. Your Obsidian vault (or Drive folder) is the writable source of truth. NotebookLM is just a smart lens on top of it.

The Minimum Viable Capture Rule

One of the most common failure modes in personal knowledge management is over-engineering the capture process. People build elaborate systems for filing and tagging information — and then get exhausted by the maintenance and abandon the whole thing.

The AI-powered approach is different. AI is genuinely good at retrieval and synthesis, which means you don’t need to be as precise about organisation. You just need to capture.

Adopt what I call the minimum viable capture rule:

  1. Anything worth keeping goes in one inbox. A single folder, a single note, a single file. Don’t decide where it belongs yet — just get it in.
  2. Once a week, AI helps you sort it. Paste your inbox into Claude (or point it at the folder) and ask it to help you file things, identify patterns, and suggest what’s worth keeping. A 10-minute weekly review beats a 30-minute daily filing ritual.
  3. If you can’t find it, ask AI to find it. Rather than spending 15 minutes hunting for a note you half-remember writing, describe it to your AI and let it search. With Obsidian’s full-text search and Claude’s context window, this is surprisingly effective.
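The capture-then-retrieve loop above is simple enough to sketch in a few lines of Python. The inbox filename and vault layout are assumptions, and `find` is a crude stand-in for Obsidian’s search or an AI retrieval pass:

```python
import datetime
from pathlib import Path

def capture(vault: Path, text: str) -> None:
    """Append a timestamped line to a single inbox note.
    No filing decision at capture time: just get it in."""
    inbox = vault / "Inbox.md"  # hypothetical single-inbox note
    stamp = datetime.date.today().isoformat()
    with inbox.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp}: {text}\n")

def find(vault: Path, *keywords: str) -> list[Path]:
    """Naive full-text search: return notes containing every keyword."""
    return [
        note for note in sorted(vault.rglob("*.md"))
        if all(k.lower() in note.read_text(encoding="utf-8").lower()
               for k in keywords)
    ]
```

The point of the sketch: capture is a one-line append, and retrieval is a search over plain files — nothing here requires the elaborate tagging systems people burn out on.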

The goal is to reduce the friction of capture to near zero, and offload the sorting and retrieval work to AI. That’s what AI is actually good at. Stop making it a filing assistant and start making it a retrieval engine.

The Tools Worth Using in 2026

Let me be honest about the current landscape.

Obsidian is still the best foundation for a serious AI second brain. The addition of Obsidian Bases (structured data queries from your notes) and the growing Claude MCP (Model Context Protocol) integration means your vault can now be a genuine context layer for AI conversations — not just a place you paste things from. If you’re serious about this, Obsidian is where you want to be.

Claude (particularly Claude Pro or the API) remains the best thinking partner for complex reasoning. The extended context window means you can hand it genuinely large chunks of your knowledge base and get coherent responses. It’s excellent at the “think with me about this” use case. The memory between sessions is still limited unless you build it explicitly — which is exactly what the memory file approach solves.

NotebookLM has matured into a genuinely useful research assistant. Upload your PDFs, research papers, YouTube transcripts, and relevant docs for a project, and it becomes a project-specific knowledge base you can query in natural language. Great for research-heavy work. Not great as your primary system because it’s passive — you feed it, you query it, but it doesn’t evolve with your thinking.

Mem (mem.ai) is worth a look if you want something more automated. It’s built specifically as an AI-first knowledge base — everything you capture gets semantically indexed, and the AI surfaces relevant context automatically as you write. Less control than Obsidian, but far less manual work. Good middle ground for people who find Obsidian’s flexibility overwhelming.

Lindy and Zapier Agents are worth experimenting with if you want to automate the connective tissue — the workflows that move information between your tools. Lindy in particular has some clever automations for knowledge workers: capturing emails into your knowledge base, triggering research on topics you’re following, surfacing relevant notes based on what you’re working on.

What Actually Works: A Practical System

Here’s the system I’d recommend for someone starting fresh today:

Layer 1 — Capture: A single inbox. Could be an Obsidian note called “Inbox”, a dedicated folder in Drive, or even a simple voice memo app. One place. Low friction.

Layer 2 — Knowledge Store: Obsidian vault (or Notion, if you prefer). All your actual thinking, notes, project files, and documentation live here. This is your single source of truth.

Layer 3 — Context Files: MEMORY.md (overall context) + project handoff docs for each active project. These are what you share with AI tools to give them continuity.

Layer 4 — AI Interface: Claude for thinking and writing, NotebookLM for research-heavy projects, Mem if you want something more automated. These are interfaces on your knowledge store, not replacements for it.

Layer 5 — Automation (optional): Zapier or Lindy to handle the connective tissue — routing things to the right place, triggering reviews, sending you reminders.
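One hypothetical way those five layers can map onto a single Obsidian vault (all names illustrative):

```text
vault/
├── Inbox.md                  ← Layer 1: capture, one low-friction note
├── MEMORY.md                 ← Layer 3: overall context file
├── Projects/
│   └── newsletter/
│       ├── HANDOFF.md        ← Layer 3: project handoff doc
│       └── notes.md          ← Layer 2: the actual thinking
└── Archive/                  ← Layer 2: finished or parked material
```

Layers 4 and 5 aren’t folders at all — they’re the tools (Claude, NotebookLM, Zapier) that read from and write to this structure.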

The whole thing takes an afternoon to set up. The key is starting simple and adding complexity only when you hit a real friction point — not because it seems like a good idea in theory.

The Real Problem Was Never the Tools

Here’s the honest takeaway. The reason most AI second brains fail isn’t because the tools are bad. It’s because we’ve been treating AI as a magic layer that will somehow organise our chaos automatically.

It won’t. Not yet, anyway.

What AI is genuinely good at is working with organised information — helping you think through it, surface connections, write from it, and make sense of it. But the architecture still needs to be human-designed. The single source of truth, the memory files, the project handoff docs — these are structures you create. AI just makes them dramatically more useful.

Build the architecture first. The AI amplifies whatever you give it. Give it a junk drawer, you’ll get junk output. Give it a well-structured knowledge system, and you’ve got something that actually functions like an extended mind.

The Reddit user losing their mind over their fragmented AI setup isn’t failing because they’re using bad tools. They’re failing because they haven’t designed a system — they’ve accumulated gadgets.

Fix the architecture. The gadgets are fine.


Hungry for more practical frameworks for building with AI? The Augmented Mind goes deep on exactly this — how to build genuine AI-assisted thinking systems that actually improve your cognition over time, not just your output. Worth a read.
