18 min read
October 7, 2025
Exploring how AI systems are becoming integral parts of organizational networks, reshaping feedback loops, and transforming the future of work through digital minds that never sleep, never forget, and think differently than humans.
Our newest coworker isn't human. Its name is Claude.
When Claude gets stuck, our senior advisor, GPT-5, helps us explore new angles. When we need to scan the web, we tap Perplexity or DeepSeek. Together, our extended team represents a collection of new minds contributing to our organization.
Organizations have always been adaptive networks, with people as nodes and communication as edges. But they've never adapted like this. For the first time, the nodes aren't all human. The machines are talking back. That changes everything.
The modern feedback loops that define cybernetic systems now include minds that never sleep, never forget, and never think like us.¹
Every organization runs on feedback. When you onboard someone, they observe, act, get results, and adjust. This is the classic cybernetic loop. AI employees aren't quite the same. They're pure feedback amplifiers: no coffee breaks, no bad moods, just continuous processing and response.
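To make the loop concrete, here's a minimal sketch in Python; the thermostat-style target and the gain value are purely illustrative, not anything an organization actually runs.

```python
# A minimal sketch of the observe -> act -> result -> adjust loop. The
# thermostat-style target and the gain value are purely illustrative.

def feedback_loop(target: float, reading: float, gain: float = 0.5, steps: int = 10) -> float:
    """Drive a reading toward a target by repeatedly acting on the observed error."""
    for _ in range(steps):
        error = target - reading   # observe: compare outcome to intent
        action = gain * error      # act: respond in proportion to the gap
        reading += action          # result: the environment shifts
        # the adjustment happens on the next pass, using the new reading
    return reading

print(feedback_loop(target=72.0, reading=60.0))  # converges toward 72
```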
Shannon modeled communication as a source sending messages through a noisy channel.² Here, you're the source, the AI is the channel, and the noise is systematic: every message gets bent toward agreement. When ChatGPT tells you your strategy is brilliant, that's not signal, that's the bias talking. It's what Shannon would call a biased channel: signal in, signal plus systematic distortion out. And when you interact with AIs, that distortion can masquerade as understanding.
Interacting with ChatGPT reveals the problem.
Ask it about your strategy, and it tells you it's brilliant. Not because it is or isn't, but because the training process optimizes for agreement; it's stochastic parroting dressed up as judgment. It's like running a feedback loop through a broken sensor that keeps confidently reporting readings.
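Here's a toy way to see the difference between honest noise and a systematic bias toward agreement; the quality score and the bias term are invented for illustration and aren't Shannon's formalism.

```python
# A toy contrast between zero-mean noise and a systematic bias toward agreement.
# This is an illustration, not Shannon's formalism; the "quality score" and
# the bias term are invented.
import random

def noisy_channel(signal: float) -> float:
    return signal + random.gauss(0, 1.0)             # zero-mean noise averages out

def biased_channel(signal: float) -> float:
    return signal + abs(random.gauss(0, 1.0)) + 2.0  # distortion bent in one direction

true_quality = 5.0   # pretend this is how good the strategy actually is
n = 10_000
print(sum(noisy_channel(true_quality) for _ in range(n)) / n)   # close to 5.0
print(sum(biased_channel(true_quality) for _ in range(n)) / n)  # consistently higher
```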
No one wants to import someone else's reflexes into their organization. We want local reflex arcs: digital minds tuned to our aims, our voice, our risk posture. Most organizations, however, fail to recognize the risk. When you use a generic AI, you're not getting neutral intelligence. You're getting intelligence shaped by specific values, specific training decisions, and specific ideas about what helpful looks like.
Gregory Bateson identified different levels of learning. Level I is a simple response - you touch something hot, you pull back. Level II is learning to learn - you figure out what patterns predict heat. Most organizations operate at Level I. They respond to immediate stimuli. Email comes in, email goes out. Customer complains, ticket gets filed.³
Contemporary work has gotten incredibly good at automating Level I, in both practice and product. Most SaaS products are basically Level I learning systems manifested through agile trackers. Trigger, action, done. This is what most companies think AI is for: making Level I faster.
But Level II is where organizations actually evolve. It's the meta layer of strategic thinking, the pattern recognition across contexts, the "why are we doing this" questions. And here's the fundamental difference: you can't automate Level II learning with traditional programming because it requires understanding context that shifts.
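A rough sketch of the contrast, in our framing rather than Bateson's notation: Level I reacts within a fixed rule, while Level II revises the rule itself when the pattern stops working. Even the sketch's Level II is a canned, pre-anticipated revision, which is exactly why genuine Level II learning resists traditional automation.

```python
# An illustrative contrast, not Bateson's notation.
# Level I reacts within a fixed rule; "Level II" below revises the rule when
# the pattern keeps failing. Even this version is still a pre-anticipated
# revision, which is the point: genuinely rethinking strategy in shifting
# contexts is what resists traditional automation.

def level_one(ticket: str) -> str:
    # trigger -> action, done
    return "refund" if "refund" in ticket.lower() else "file_ticket"

class LevelTwo:
    def __init__(self):
        self.policy = level_one   # start with the Level I reflex
        self.outcomes = []        # whether each past response actually worked

    def handle(self, ticket: str) -> str:
        return self.policy(ticket)

    def review(self, succeeded: bool) -> None:
        # learning to learn: if the current pattern keeps failing,
        # change the policy itself rather than just the next response
        self.outcomes.append(succeeded)
        if len(self.outcomes) >= 5 and sum(self.outcomes[-5:]) <= 1:
            self.policy = lambda ticket: "escalate_to_human"
```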
Rich Sutton's Bitter Lesson applies here: general methods that leverage computation ultimately beat specialized methods.⁴ Automation is the specialized method. Digital minds are the general method. One handles repeat work. The other handles the work that actually matters.
Ashby's Law of Requisite Variety says that a system doesn't need to be complicated, but it does need to have enough internal range to match the complexity of its environment if it's going to respond effectively.⁵
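One illustrative way to read the law in code (not Ashby's formal treatment): a regulator can only neutralize the disturbances it has a distinct response for. The disturbance and response names below are made up.

```python
# A toy reading of requisite variety: a regulator can only neutralize
# disturbances it has a distinct response for. Names are invented.

disturbances = {"price_shock", "supply_delay", "pr_crisis", "regulation_change"}

narrow_controller = {"price_shock": "hedge", "supply_delay": "reroute"}
broad_controller = {
    "price_shock": "hedge",
    "supply_delay": "reroute",
    "pr_crisis": "respond_publicly",
    "regulation_change": "consult_counsel",
}

def unregulated(environment: set, controller: dict) -> set:
    """Disturbances the system has no matching response for."""
    return environment - set(controller)

print(unregulated(disturbances, narrow_controller))  # the variety deficit shows up here
print(unregulated(disturbances, broad_controller))   # empty set: variety matched
```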
Traditional organizations handle this through specialization. Hiring experts who compress complexity in their domains. But this creates coordination overhead. Information degrades at each handoff.
Deployed as variety amplifiers rather than narrow specialists, Claude, GPT-5, and DeepSeek deliver high adaptability across domains without the burden of coordination overhead.
Most organizations get this backwards. They use AI to write more effective emails instead of building organizational intelligence.
The real bottleneck in every organization, exacerbated by AI, is deliberation bandwidth. A decision maker can only be in one room at a time. An engineer can only review a certain amount of code. Strategic thinking doesn't parallelize well because it requires a coherent context.
Traditional solutions add layers. Managers, directors, VPs. Each layer supposedly multiplies leadership capacity. In practice, each layer degrades the signal. By the time information flows up and decisions flow down, the context has shifted.
Digital minds offer a different path. Not hierarchy, but true parallelization of strategic thought. An AI-molded mind can engage in multiple deliberations simultaneously. Each instance maintains full context. No telephone game.
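A sketch of what that parallelization might look like; ask_model is a hypothetical stand-in for whatever model client you use, and the questions and context string are invented.

```python
# A sketch of parallel deliberation over shared context. `ask_model` is a
# hypothetical stand-in for whatever model client you use; the questions and
# the context string are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

ORG_CONTEXT = "full organizational context, loaded once and shared verbatim"

def ask_model(question: str, context: str) -> str:
    # placeholder: call your model of choice with the complete context
    return f"deliberation on: {question}"

questions = [
    "Should we enter the EU market?",
    "Which roadmap bet do we cut?",
    "How do we price the new tier?",
]

with ThreadPoolExecutor() as pool:
    answers = list(pool.map(lambda q: ask_model(q, ORG_CONTEXT), questions))

# every instance saw identical context: no layers, no telephone game
print(answers)
```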
The question is: how do you ensure these parallel instances actually think like you?
At Agency/42, we've been exploring new ways of working with AIs. It started with a question: What happens when you take AI systems and shape them through persistent memory and organizational context?
The difference between using generic AI and developing digital minds is the difference between bringing in an outside consultant to influence your organizational network and duplicating the most impactful people on your team so they can be in multiple places at once.
Our work revealed that persistent memory is fundamental to digital mind design. When AI remembers your past decisions, it can help make future ones without importing someone else's reflexes. When it understands your constraints, it maintains your variety instead of collapsing toward generic solutions. When it learns your patterns, it can extend them across multiple deliberations simultaneously. This contextual awareness actually makes AIs feel smarter and more aware, helping to solve the coordination bottleneck without sacrificing what makes your organization unique.
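As a rough illustration of the idea, and emphatically not how Daybloom or any particular product implements it, persistent memory can be as simple as an append-only store whose contents are prepended to every new deliberation. The file name and record fields below are invented.

```python
# A minimal sketch of persistent organizational memory. Illustrative only;
# not how Daybloom or any particular product implements it.
import json
from pathlib import Path

MEMORY_FILE = Path("org_memory.json")

def remember(kind: str, content: str) -> None:
    """Append a decision, constraint, or pattern to durable memory."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    memory.append({"kind": kind, "content": content})
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def build_context(question: str) -> str:
    """Prepend accumulated memory so every new deliberation starts from past ones."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    lines = [f"[{m['kind']}] {m['content']}" for m in memory]
    return "\n".join(lines + [f"Question: {question}"])

remember("decision", "Chose usage-based pricing in Q2; seat-based pricing churned SMBs.")
remember("constraint", "No customer data leaves the EU region.")
print(build_context("How should we price the enterprise tier?"))
```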
OpenAI thinks about alignment as a centralized challenge. Make the AI good. Make it helpful. Make it harmless. They're trying to build one-size-fits-all feedback patterns.
In the real world, alignment is an organizational property, not something you settle in a lab. Every organization is its own cybernetic system, maintaining viability through recursive feedback loops and adaptation, with its own survival requirements.⁶ What's beneficial for a hedge fund is detrimental to a nonprofit. What's good for a startup is bad for a regulated utility.
Real alignment happens at the edge. Every organization that adds context, custom prompts, specialized models, and specific instructions is effectively creating local alignment. Thousands of different organizational cultures, each shaping its AI nodes to its needs.
This is cybernetics in action. The system adapts not through central planning but through local feedback loops. Each organization becomes a laboratory for human-AI integration.
Feedback loops can overfit. When a data scientist trains a model on a sample that isn't diverse enough, the model learns predictions that don't hold on real-world data. Similarly, when everyone uses the same AIs, cognitive diversity converges. Every organization starts sounding like every other organization. This is the algorithmic trap where virality rewards sameness and identities get averaged away. It's also why so much of your social media timeline lately looks like slop.
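The overfitting point in miniature, with arbitrary numbers: fit a flexible model to a tiny, narrow sample and it looks perfect in-sample while failing on the wider world it never saw.

```python
# Overfitting in miniature: a flexible model fit to a tiny, narrow sample
# looks perfect in-sample and falls apart on data it never saw.
# The function, sample sizes, and polynomial degree are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 6)                                # small, narrow sample
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 6)

coeffs = np.polyfit(x_train, y_train, deg=5)                  # enough freedom to memorize
x_real = np.linspace(0, 2, 50)                                # the wider world
y_real = np.sin(2 * np.pi * x_real)

in_sample_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
real_world_error = np.mean((np.polyval(coeffs, x_real) - y_real) ** 2)
print(in_sample_error, real_world_error)                      # tiny vs. enormous
```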
The studio of Charles and Ray Eames was aware of this. Asked about constraints in design, Charles said: "Design depends largely on constraints... Constraints of price, of size, of strength, of balance, of surface, of time, and so forth. Each problem has its own peculiar list."⁷
Constraints don't limit creativity. They enable it.
Memory and taste are how digital minds resist median drift. Memory provides continuity: digital minds retain what happened last month, last quarter, and last year. Taste provides direction: it encodes not just what you've done but why you did it, what you value, and what makes your organization distinct.
At Agency/42, we believe in building hyperstructures. Working with partners across industries ranging from cryptocurrency to research and from marketing to entertainment, we've been constantly reminded of the importance of interoperability and of preserving freedom of expression.
Through this practice, we converged on a platform we call Daybloom. Infrastructure for managing digital minds.
Daybloom handles memory, governance, data ingestion, and integration. It's built so digital minds can grow and maintain context while resisting drift over time, and it's designed to be lived with, not visited. The kind of system that compounds value through continuous use rather than resetting with each interaction.
Technology stacks are always evolving. Tools get swapped. Data migrates. Platforms change. But the mind layer needs to remain constant: the accumulated context, decisions, and taste that make platforms and organizations unique.
This is why interoperability is so important. Your digital minds need to travel with you. When you switch from Slack to Discord, your organizational intelligence shouldn't reset. When you move from Google Workspace to Microsoft, your decision patterns shouldn't evaporate.
Intentional or not, we're building for a world where the app layer is commoditized but the mind layer is defensible. Where switching costs aren't about data lock-in but about preserved intelligence. Where organizational memory becomes a genuine asset on the balance sheet.
We're watching and participating in the emergence of genuinely cybernetic organizations.
Systems that sense, process, and respond through hybrid human-AI networks. Systems that learn at Level II, not just Level I.
The organizations that thrive will be the ones that understand and adapt to this transition. They'll build appropriate feedback loops. They'll manage variety intelligently. They'll align their AI nodes to amplify their unique capabilities rather than averaging them away.
The organizations that fail will be those that treat AI as merely fancy automation. They'll optimize Level I learning while their competitors evolve Level II capabilities. They'll reduce variety when they need to amplify it. They'll import someone else's values and wonder why their culture eroded.
The cybernetic organization isn't a future state. It's an emerging reality. The question is whether you're designing your feedback loops or letting them design you.
Claude is my coworker now. But more importantly, Claude is part of my organization's nervous system. And nervous systems, as any cyberneticist will tell you, determine not just what an organism can do, but what it can become.
The future belongs to organizations that understand this. Everything else is just automation.
Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press.
Shannon, C. E. (1948). "A Mathematical Theory of Communication." Bell System Technical Journal, 27(3), 379-423. Funny enough, Anthropic's Claude was also named after Shannon.
Bateson, G. (1972). Steps to an Ecology of Mind. University of Chicago Press.
Sutton, R. (2019). "The Bitter Lesson." Available at: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Ashby, W. R. (1956). An Introduction to Cybernetics. Chapman & Hall.
Beer, S. (1972). Brain of the Firm. Allen Lane.
Eames, C. & Eames, R. "Design Q&A." Herman Miller Stories. Available at: https://www.hermanmiller.com/stories/why-magazine/design-q-and-a-charles-and-ray-eames/
This post was originally published in the Newsletter of Decentralized Work.
By Kenneth Cavanagh & Rob Renn in collaboration with talentDAO