In 1986, Marvin Minsky published The Society of Mind. His thesis was elegant and profound: intelligence isn’t a singular thing. It is a phenomenon that emerges from the interaction of countless smaller, mindless agents.
(There is a similar chapter in GEB, the "Ant Fugue".)
When I read GEB for the first time in 2006, I was impressed. Even though I didn't know English well at the time, the topic was too tempting to put down. I immediately went looking for more sources and bought Minsky's Society of Mind and The Emotion Machine.

To this day, I keep those books next to my desk, along with papers like Minsky's work on Frame Theory. I've always been fascinated by his idea that if you put enough simple processes together (one that creates blocks, one that moves blocks, one that critiques height), you eventually get an architect.
Fast forward to 2025. We finally have the “agents” (to some degree). We have LLMs that can reason, code, and critique. We have the raw material Minsky could only dream of. But when I look at how we are wiring them together, I feel a disconnect. It seems we are building bureaucracies. We are wrapping these fluid, creative minds in the rigid administrative shackles of 1990s software engineering. We force them to fill out forms (JSON schemas). We make them follow strict org charts (pipelines). If “Agent A” wants to talk to “Agent B,” it must format a request, validate the types, and wait for a 200 OK. If a biological brain worked like a modern AI framework, I think you would have a stroke every time you tried to learn a new word just because the schema didn’t match :)
Last year, I decided to build something (homunculus.live) to make a "gang" of LLMs talk to each other as a live, 24/7 event. At the time, running 4 or 5 models simultaneously was too expensive to keep live. But lately, as I've watched the explosion of "agent frameworks," I felt I could finally turn that old idea, something I felt deep down but couldn't quite express, into an experimental framework. I am drafting Homunculus to implement that feeling.
Intermediate State
There is a cliché: “We discovered nuclear power, and we decided to use it to boil water.”
With LLMs, I feel we are doing exactly that. We have a technology that can fundamentally jump: from intent to outcome, from fuzzy concept to concrete code. Yet we surround it with endless "Intermediate States." We build plugin architectures, MCP servers, and API definitions just to make the AI feel safe and predictable. I think we are maximizing the efficiency of the plumbing while forgetting that the fluid inside is magic. When you hire a real-world consulting firm, you don't hand them a schema. You state a problem. You rely on their ability to navigate the ambiguity, to realize they are missing a skill, and to go recruit a freelancer who has it.
Why do we deny our AI agents that same dignity? In the biological world, neurons do not exchange JSON packets; they exchange Signals. A neuron doesn't know who is listening. It fires. The signal propagates through the substrate. The meaning of the signal isn't encoded in a format; it is defined by the Resonance of the receiver. If I shout "Fire!", the firefighter neuron resonates. The chef neuron ignores it. There is no central router. There is no schema validation. There is only signal and sensitivity.
Homunculus is my attempt to build software on this substrate:
- No More JSON Handshakes: In Homunculus, agents emit "Thoughts." These are natural language streams wrapped in an embedding vector we call a "Pheromone." (Some JSON is still present; I will try to remove all of it.)
- Routing via Resonance: I don't hard-code Agent A -> Agent B. I let the signals diffuse into a "Biosphere." Agents have Receptor Fields. If a signal smells like "Legal Risk," the Legal Agent wakes up. If it smells like "Python Error," the Engineer Agent wakes up. (See the sketch below.)
- Failure is Information: In a microservice, a mismatch is a crash. In a brain, a mismatch is Dissonance. It is a feeling of stuckness. (Minsky described hallucination or confusion in his Frame Theory almost as a gift, a signal that allows learning. That is where I want to go in the next phases.)
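To make the resonance idea concrete, here is a minimal sketch of how such routing could look. None of these names (`Thought`, `Agent`, `diffuse`, the thresholds) are the actual Homunculus API; they are just an illustration of signal-plus-sensitivity routing over embeddings.

```typescript
// Illustrative only: these types and names are hypothetical, not the actual
// Homunculus API.

// A "Thought": free-form text plus an embedding that acts as its Pheromone.
interface Thought {
  text: string;
  pheromone: number[]; // embedding vector
}

// An agent exposes a Receptor Field: an embedding of what it "smells" for,
// plus a threshold for how strongly a signal must resonate to wake it.
interface Agent {
  name: string;
  receptor: number[]; // e.g. embedding of "legal risk, compliance, contracts"
  threshold: number;
  onResonate(thought: Thought): void;
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}

// The Biosphere: no routing table, no addresses. A thought is emitted, and
// every agent whose receptor resonates above its threshold wakes up.
function diffuse(thought: Thought, agents: Agent[]): void {
  for (const agent of agents) {
    const resonance = cosineSimilarity(thought.pheromone, agent.receptor);
    if (resonance >= agent.threshold) {
      agent.onResonate(thought); // the chef ignores "Fire!", the firefighter wakes up
    }
  }
}
```

The point is that there is no address book: emitters never name receivers, and adding a new agent means adding a new receptor, not rewiring a pipeline.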
This brings me to the most important part of the architecture: Emergence. I don't mean this in the "AGI Hype" sense. I use it in the dictionary sense: complex behavior arising from simple rules. Nothing more, nothing special. In traditional frameworks, if the "Marketing Agent" and the "Legal Agent" disagree, the script usually breaks or loops. In Homunculus, I try to treat this friction as a biological signal. When the system produces high-tension signals ("I must launch" vs. "I cannot launch"), it creates semantic heat.
A specialized agent called the Meta-Observer (essentially the immune system) tastes this heat. It realizes the organism is stuck. But instead of throwing a TimeoutError, it spawns a new organ. It creates a Conflict Resolution Agent or a Risk Analyst on the fly, injects it into the conversation, and heals the rift.
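As a thought experiment, here is roughly how I picture that loop, reusing the `Thought` and `Agent` shapes and the `cosineSimilarity` helper from the routing sketch above. The heat heuristic and the `spawn` callback are hypothetical placeholders, not the project's actual code.

```typescript
// Hypothetical sketch of the Meta-Observer loop. It reuses the Thought and
// Agent shapes and the cosineSimilarity helper from the routing sketch above;
// the heat heuristic and spawn callback are my own illustration.

class MetaObserver {
  private recent: Thought[] = [];

  constructor(
    private agents: Agent[],
    // e.g. asks an LLM to draft a system prompt for the new role
    private spawn: (receptor: number[], role: string) => Agent,
    private heatThreshold = 6,
  ) {}

  observe(thought: Thought): void {
    this.recent.push(thought);
    if (this.recent.length > 20) this.recent.shift();

    // "Semantic heat": many recent thoughts clustering around the same topic
    // without the conversation moving on.
    const heat = this.recent.filter(
      (t) => cosineSimilarity(t.pheromone, thought.pheromone) > 0.85,
    ).length;

    if (heat >= this.heatThreshold) {
      // The organism is stuck. Instead of a TimeoutError, grow a new organ:
      // an agent whose receptor is tuned to exactly this cluster of signals.
      const mediator = this.spawn(thought.pheromone, "Conflict Resolution Agent");
      this.agents.push(mediator);
      this.recent = []; // let the heat dissipate
    }
  }
}
```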
I prefer not to pre-define the solution. I want to define the physics that allow the solution to evolve.
State
I have placed a deliberate constraint on Homunculus: it is designed for Local AI. I am not betting on one trillion-parameter model to rule them all; that feels like a mainframe mindset to me. I am betting on the swarm. I believe a handful of small models running on your laptop, coupled by signals and synapses, can outthink a giant if they are wired correctly. (So-called) intelligence is not just about the size of the neuron; it is about the complexity of the connections. And with small local models, you can treat resources as effectively unlimited.
Homunculus is an experiment in "wetware engineering," written in plain TypeScript for easy prototyping. It includes a demo app you can run locally via Ollama to see these concepts in action. We have built the Signal Foundation. We have agents that perceive via embedding resonance. We have "Synapses" that learn which connections are valuable, strengthening over time in a Hebbian fashion. We have a Generative Spawner that births new agents when the society faces a novel threat.
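For the "Synapses" piece, a Hebbian-style update can be surprisingly small. The sketch below is illustrative only, assuming a simple scalar weight per connection; the real implementation may differ.

```typescript
// Illustrative sketch of a Synapse: a weighted edge between two agents that
// strengthens when traffic along it turns out to be useful (a Hebbian-style
// update) and slowly decays otherwise. Names and constants are assumptions,
// not the project's actual implementation.

class Synapse {
  weight = 0.1;

  constructor(
    readonly from: string, // emitting agent
    readonly to: string,   // resonating agent
    private learningRate = 0.2,
    private decayRate = 0.01,
  ) {}

  // Called when `from` fired, `to` resonated, and the exchange helped:
  // "cells that fire together wire together."
  reinforce(usefulness: number /* 0..1 */): void {
    this.weight += this.learningRate * usefulness * (1 - this.weight);
  }

  // Called every tick: connections that carry nothing useful fade away.
  tick(): void {
    this.weight -= this.decayRate * this.weight;
  }
}
```

Connections that repeatedly carry useful signals drift toward 1, unused ones decay back toward 0, and the routing layer can bias diffusion toward the strongest synapses.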
What I am building next is "Taste." I want the system to develop an intuition for what a "good" solution feels like, so it can steer its own evolution. I'm also working on prompt optimization and integrating concepts from Minsky's Frame Theory papers. Then I will look at real-world implementations.
I think we have entered the era of intent. I want to build architectures that are messy, resilient, and organic. I want to build software that feels less like a machine and more like a mind. I believe that, after enough iterations, this code will become "something." Let's see what emerges.
project repo: https://github.com/osmanorhan/homunculus