Former OpenAI researcher Andrej Karpathy recently described how he builds personal knowledge bases with an LLM («LLM Knowledge Bases»): collecting raw data, compiling it into a Markdown wiki via AI, then querying and expanding it in chat. LinkedIn reacted with instant enthusiasm. My first response was more mixed: interest, plus scepticism. Karpathy’s approach sounds like an elegant fix for the tedious parts of knowledge work, but it also hits a nerve. Is this another case of technologists misunderstanding what knowledge work actually is?
In his LinkedIn post, Chris Bühler distilled the debate into three questions that stuck with me. Here are my answers, based on my experience.
1. What tasks can AI handle when maintaining a "Zettelkasten" without undermining its unique benefits?
It depends on your system, and on your AI literacy.
The prerequisite is a functioning knowledge management system. A card index, a “second brain” in an Obsidian vault, or a Notion setup: the medium is secondary. What matters is that your thinking happens within a structure.
Once that structure exists, AI can extend it, as a tool. No more, no less. It can help with administration: tagging notes, suggesting links between cards, creating summaries, finding dead links. These are operational tasks. They cost time, but they require little to no thinking.
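To make the operational category tangible, here is a minimal sketch of one such task: finding dead links. It assumes an Obsidian-style vault (a folder of .md files with [[wiki-link]] syntax); the vault path is a placeholder, and real link resolution is more nuanced than this.

```python
# find_dead_links.py - a minimal sketch: flag [[wiki-links]] whose target note
# does not exist. Assumes an Obsidian-style vault; the path is a placeholder.
import re
from pathlib import Path

VAULT = Path("~/vault").expanduser()  # hypothetical vault location

# Index every note by its filename stem, which is what a wiki-link resolves to.
notes = {p.stem for p in VAULT.rglob("*.md")}

# Capture the link target, stopping before any |alias or #heading suffix.
link_pattern = re.compile(r"\[\[([^\]|#]+)")

for path in VAULT.rglob("*.md"):
    for target in link_pattern.findall(path.read_text(encoding="utf-8")):
        if target.strip() not in notes:
            print(f"{path.name}: dead link -> [[{target.strip()}]]")
```

Nothing in that script thinks. That is precisely what makes it a safe candidate for automation, with or without an LLM behind it.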
Andy Matuschak is right to warn against the fallacy that you become “intellectual” by copying a clever person’s notebook. The visible part of knowledge work (taking notes, organising, managing) is not what constitutes it; the thinking behind it is. And that is where the boundary sits. AI should not take over the decision of why a note matters. It should not create new thoughts by linking two ideas on your behalf. It should not be the one to notice the question hidden inside an observation.
This step, using AI as a tool inside an existing system, is coming anyway. Whether you use Obsidian, Notion, or Evernote, AI integration is already a reality. There is no reason not to benefit from it, as long as you can tell the difference between managing and thinking.
2. To what extent can the recording and processing of material be outsourced without losing the personal learning effect?
Richard Feynman was once asked whether his notebooks were an “incredible record of his thinking”. His reply: “No, no. They are not a record of the thought process. The notebooks are my thought process.”
Niklas Luhmann put it just as sharply: “Without writing, one cannot think; at least not in a sophisticated, coherent way.”
That is the crux. And it is why outsourcing the whole process to AI would be problematic. If writing is thinking, delegating the writing means losing the thinking.
But here is the key shift: writing no longer has to be a solitary notebook.
In my own practice, the thought process has moved from monologue to dialogue. When I work on a topic, I talk with the LLM. I bring my context: prior knowledge, questions, perspective. The AI brings information, connections, counterarguments. In that back-and-forth, something can emerge that neither of us would produce alone.
One example: I recently explored whether hermeneutic interpretation can be supported by AI. In the dialogue, the LLM introduced three philosophical perspectives I would not have put side by side. But the synthesis, the insight that the act of interpretation is understanding and cannot be delegated, was mine. Without the dialogue, I would have arrived there more slowly. Without my prior knowledge, the dialogue would have stayed superficial.
This is not outsourcing. It is expanded thinking. Or, as I described it elsewhere: context architecture. You build the frame. The AI mirrors you statistically and linguistically. It works with probabilities, not understanding. You decide what remains.
At the end of the dialogue, I still need to formulate the thought. The AI provides raw material. The synthesis stays with me. And when I close the chat and summarise the key insights in my own words, the notes reflect my thinking, regardless of whether an LLM was involved.
3. What happens when your own thoughts and AI-generated notes get mixed up? Does it water down the system?
My answer is no, at least not if you work consciously.
What matters is not who produced the text, but whether you have thought it through. The difference comes down to two scenarios.
Scenario A: You ask the AI a question, copy the answer, and file it as a note. That is information gathering, not thinking. And yes, it dilutes your card index, because you are storing someone else’s thoughts as your own.
Scenario B: You engage in dialogue. You contribute prior knowledge, question the AI’s answers, combine them with your own observations, and finally write a note that reflects your perspective. In that case, your thinking is woven into the note, even if parts of it emerged in conversation with an AI.
I do not think you can reliably separate “your own” thoughts from “AI-assisted” thoughts over the long run. You do not separate which ideas came from a book, which from a conversation, and which from the shower. The source matters less than the quality of the processing.
So the real question is not “did anything get mixed up?”, but “did you really grasp the thought?” If you did, it is yours, regardless of how it came about.
Objections and responses
“Karpathy’s approach does work. He queries his wiki and gets good answers.”
Yes, for retrieval: finding and compiling information. For that, the system is excellent. But a card index is more than a database. It is a thinking tool. Its value lies less in querying than in linking. Karpathy’s wiki is a strong knowledge base. It is not (yet) a card index in the Luhmannian sense.
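To make the distinction concrete: retrieval is, mechanically, something like the following sketch. It assumes the official OpenAI Python client and a folder of Markdown notes; the model name, paths, and keyword match are placeholders, the retrieval is deliberately naive, and this is not Karpathy’s actual setup.

```python
# query_wiki.py - a minimal sketch of the retrieval pattern: collect relevant
# notes, put them in the prompt, ask a question. Paths, keyword, and model
# name are placeholders; real systems use embeddings rather than keywords.
from pathlib import Path
from openai import OpenAI  # assumes the official OpenAI Python client

VAULT = Path("~/wiki").expanduser()  # hypothetical wiki location
question = "What have I noted about hermeneutic interpretation?"

# Naive retrieval: include every note that mentions the keyword.
relevant = []
for p in VAULT.rglob("*.md"):
    text = p.read_text(encoding="utf-8")
    if "interpretation" in text.lower():
        relevant.append(text)

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer strictly from these notes:\n\n" + "\n\n".join(relevant)},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Everything in that sketch is lookup and compilation. The step a card index adds, linking two notes into a thought neither contained, has no script.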
“Not everyone has the time for conscious thinking with AI.”
True. But the time saved does not come from skipping the thinking; it comes from accelerating it. In my experience, anyone who knows their own context needs 15 minutes instead of 2 hours of manual research to have a productive AI dialogue.
“That’s a niche discussion for PKM nerds.”
Perhaps. But the question “How do I work effectively with AI?” concerns every knowledge worker. The card index is simply a magnifying glass that makes this question especially sharp.
Conclusion
Karpathy’s approach shows what is technically possible, and it is impressive. But the more interesting question is not “Can AI maintain my wiki?” but “Where does administration end and thinking begin?”
Your knowledge work needs you, not as an administrator, but as a thinker. AI can make administration more efficient. Thinking remains your job.
Sources & References
- Andrej Karpathy (2026): X post «LLM Knowledge Bases»
- Chris Bühler (2026): LinkedIn discussion on the topic
- Andy Matuschak, quoted in: «Writing as a structure of thinking»
- Richard Feynman, quoted in: «Writing as a structure of thinking»
- Niklas Luhmann, quoted in: «Writing as a structure of thinking»
- Georg Gusewski: «Become a Context Architect»