Saturday, April 4, 2026

Cybernetic AI Agents for Robots - Stafford Beer's VSM (#2)


Now let’s talk about Cybernetics

The word cyber has been stretched in every possible sci-fi direction. It gives us cybermen, cyborgs, cyberpunk, and every variety of futuristic machine mythology. In popular culture, it usually means something digital, machine-enhanced, or vaguely threatening. But cybernetics is not a science-fiction aesthetic; it is a scientific and intellectual discipline concerned with control and communication in systems, whether biological, mechanical, or social.

Cybernetics began with Norbert Wiener, who established it as a formal field around control, communication, and feedback. From there, the field broadened through the work of thinkers such as W. Ross Ashby, Roger Conant, Stafford Beer, William T. Powers, and, in robotics, Rodney Brooks. Each contributed a different piece to the question of how systems persist, adapt, and remain effective in a changing environment.

Stafford Beer, the management theorist and philosopher, is especially important here. Beer’s major contribution was not about machine control, but the application of cybernetics to management and organizations. He was concerned with how complex organizations—companies, factories, governments, and institutions—could remain functional, adaptive, and coherent under changing conditions. His work developed into what is commonly known as the Viable System Model (VSM).

Beer’s key question was what makes a system viable. In this context, viability means more than simply functional or productive. It means the ability of a system to continue functioning as itself: maintaining coherence, adapting to change, absorbing disruption, and preserving its purpose despite internal complexity and external pressures. For organizations, this is essential. An organization that cannot coordinate itself, regulate itself, adapt to its environment, and remain aligned with its purpose will eventually drift, fracture, or fail. Beer’s model was an attempt to describe the minimum functions required for a system to remain viable over time.

In Beer’s model, viability depends on a set of interacting systems:

  • System 1 – Operations: This is the level where the actual work gets done. These are the operational units carrying out the primary activities of the system.
  • System 2 – Coordination: This system reduces conflict and oscillation between operational units. It helps ensure that the parts of the system do not work against one another and can function together in a stable way.
  • System 3 – Control / Internal Regulation: This level provides internal oversight of the operational systems. It allocates resources, enforces constraints, and ensures that the organization is functioning as a coherent whole.
  • System 3* – Audit: This is the auditing or monitoring function. It checks what is actually happening at the operational level, rather than relying only on reported summaries. It provides a direct channel for verification.
  • System 4 – Intelligence / Environmental Scanning: This system looks outward and forward. It monitors the environment, detects change, evaluates threats and opportunities, and considers how the system must adapt in order to remain viable.
  • System 5 – Policy / Identity / Purpose: This is the highest-order level. It defines the identity of the system, its governing policy, and its reason for existing. It is what ultimately anchors the rest of the organization.

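To make Beer's layering concrete, here is a minimal sketch of the five systems as cooperating parts of one structure. This is an illustration, not an implementation of the VSM; every class, method, and task name below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ViableSystem:
    """Illustrative sketch of Beer's VSM layers (all names are made up)."""
    purpose: str                                     # System 5: identity / policy
    env_signals: list = field(default_factory=list)  # System 4: what we've seen outside
    audit_log: list = field(default_factory=list)    # System 3*: ground-truth records

    def operate(self, task):
        """System 1: do the actual work."""
        result = f"done:{task}"
        self.audit_log.append(result)  # System 3* records what really happened
        return result

    def coordinate(self, tasks):
        """System 2: damp conflict between units, e.g. dedupe competing requests."""
        return sorted(set(tasks))

    def regulate(self, results):
        """System 3: internal oversight, checked against the audit trail."""
        return all(r in self.audit_log for r in results)

    def scan_environment(self, signal):
        """System 4: look outward and propose an adaptation."""
        self.env_signals.append(signal)
        return f"adapt-to:{signal}"

    def check_policy(self, proposal):
        """System 5: accept only changes consistent with identity and purpose."""
        return not proposal.startswith("abandon")

vsm = ViableSystem(purpose="map-the-warehouse")
tasks = vsm.coordinate(["scan-aisle-1", "scan-aisle-1", "scan-aisle-2"])
results = [vsm.operate(t) for t in tasks]
```

The point of the sketch is the shape, not the code: the lower methods can run on their own, but the audit channel and the policy check are what keep their activity tied back to the system's purpose.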
The importance of this division of responsibilities is that it establishes a relationship between immediate action, internal regulation, adaptation, and purpose. Lower systems can perform their responsibilities while being guided by the overall identity and purpose of the system. A viable organization can change its operations, coordination, and even its strategic posture, but it does so without losing itself.

Autonomy, Resilience, and Agents

That point connects to a broader cybernetic theme. An Agent is not meaningfully autonomous simply because it can perform tasks or optimize outputs on its own. What matters is whether it can regulate itself, adapt to change, and remain coherent while pursuing its purpose. In all cases, the environment will change, and that will inevitably require internal adaptation; cybernetics is best applied here to ensure that the adaptation keeps the system aligned to its purpose.

This is why cybernetics matters for the future of AI agents. Agents have become more capable: they can modify workflows, generate code, restructure processes, and adapt behavior. The problem is no longer repeatable execution. The real problem is how these kinds of adaptations remain organized, how they remain bounded, how they avoid drift, and how they continue to serve the original purpose of the system.

Application to Autonomous Robotics

This is where my interest is heading. My goal is a truly autonomous robot: a viable system that can operate, regulate itself, maintain its own health, adapt to change, and evolve without losing its purpose. AI agent technology makes this kind of autonomy far more achievable. But capability alone is not enough. For autonomy to be complete, it needs governance. It needs internal regulation. It needs a structure that allows adaptation without collapse, change without drift, and evolution without loss of identity. That is why Beer’s cybernetic model matters.


Next, I want to look at a modern adaptation of these ideas, and how to build a much better Totally Not Evil Robot Army.

Friday, April 3, 2026

Cybernetic AI Agents for Robots (#1)


 "The Future's So Bright, I Gotta Wear Shades"

[editor's note: I have returned after a hiatus due to life, jobs, and other distractions (Factorio!! This is why we don't have my robots everywhere!). AI is everywhere now, but for my blog/writings I will still produce the thought content myself. I may use AI to shape it up and improve the clarity, but the thoughts written are mine.]

A new world of AI Agents

A great deal has changed in the world since 2025. AI has advanced significantly, and agent technology has taken a major step forward. It is becoming evident that AI systems are moving toward a higher level of intelligent capability, with important implications for robotics.

The core concept is straightforward. Instead of robots being purely deterministic systems—fixed sets of code that either follow predefined instructions or simply receive commands about what to do next—a robot could have the ability to analyze how to achieve a given result on its own. In other words, the system would be given a purpose or goal, but it would be allowed to determine its own approach. It could assess options, simulate possibilities, decide what to do, and then take action accordingly.
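That assess-simulate-decide-act cycle can be sketched in a few lines. This is a toy, not a real planner; the function names, the candidate routes, and the scoring stub are all invented for illustration.

```python
def assess_options(goal, state):
    """Enumerate candidate approaches to the goal (hypothetical routes)."""
    return [f"{goal}-via-{route}" for route in ("left", "right", "wait")]

def simulate(option, state):
    """Score a candidate by simulating its outcome (here: a trivial stub)."""
    return len(option) + state.get("battery", 0)

def decide(goal, state):
    """Given a purpose, let the system determine its own approach."""
    options = assess_options(goal, state)   # assess options
    best = max(options, key=lambda o: simulate(o, state))  # simulate & compare
    return best                             # decide; acting would follow

choice = decide("reach-dock", {"battery": 80})
```

The contrast with a deterministic robot is in `decide`: the goal is fixed from outside, but the approach is selected by the system itself from whatever candidates it can generate and evaluate.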

A key part of this shift is that AI agents today are beginning to show the ability to write quality code, rewrite code, adjust code, and modify their operating processes to better fit the goals they are trying to achieve. This is more than just using an Agent Planner with a set of Skills. It is the ability to build the Skills needed to accomplish an objective.

Oddly enough, much of this is still very linguistic and centered around the LLM. Large language models do not directly drive real-time operations well, but they do have the ability to write the code that can drive those real-time operations. That is where I think this begins to apply to robotics.

You could imagine a robotic system that has the ability to adapt to its sensors and reconfigure how those sensors are used. It could adjust how motion is implemented, or even how control systems are tuned, in order to achieve the result it is aiming for. This higher-level function would have to operate on a completely different layer than the real-time operating system, motion control, or sensor loop.

That difference in timescale is important. This higher level of operation would run on a different sequence of events than real-time sensing, actuation, and low-level control. But that does not mean it cannot exist. It simply means that the architecture must separate high-level adaptive reasoning from low-level real-time execution.
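One way to sketch that separation of timescales: a fast control loop runs at a fixed rate and never blocks, while the slow adaptive layer (where an LLM might live) pushes parameter updates through a queue rather than touching the loop directly. All names and numbers here are illustrative assumptions.

```python
import queue
import threading
import time

updates = queue.Queue()      # handoff channel: slow layer -> fast loop
params = {"gain": 1.0}       # live control parameters used each cycle
ticks = []                   # record of what the loop actually applied

def control_loop(cycles=5, period=0.01):
    """Fast loop: apply any pending retune, then 'actuate'. Never blocks."""
    for _ in range(cycles):
        try:
            params.update(updates.get_nowait())  # non-blocking pickup
        except queue.Empty:
            pass                                 # no update this cycle
        ticks.append(params["gain"])             # stand-in for actuation
        time.sleep(period)

def adaptive_layer():
    """Slow layer (e.g. an LLM planner): propose a retune asynchronously."""
    updates.put({"gain": 2.0})

t = threading.Thread(target=control_loop)
t.start()
adaptive_layer()   # runs on its own timescale, only queues a suggestion
t.join()
```

The design point is the queue: the adaptive layer can take seconds to reason, but the control loop's timing is never hostage to it, which is exactly the architectural separation the paragraph above calls for.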

Such a system would likely require some connection to a large language model. Today, we have powerful models in the cloud, but AI and LLM technologies are rapidly being compressed into smaller footprints. Capabilities are increasingly moving toward deployment on smaller devices, to the point where something like this could eventually be embedded directly within a robotic platform.

This may also change how humans interact with machines. This is how we may begin to talk to robots. It is how you might talk to your car and ask what is wrong with it. It is how you might tell your dishwasher what you want it to do and have it respond conversationally, including explaining the likely impact of your command.

But that is only the first step. Talking to machines is one thing. The deeper shift is that the robot itself could have the ability to rewrite the code it uses to perform the functions it needs to do. That is a much more significant capability.

A robot like this could evolve over time. It could even reach the point where it can work with modular sensors or drive mechanisms, and when presented with new hardware, it could update its own operating parameters and implement new functionality on its own. In that sense, it would not merely execute control. It would adapt its own control.


And that leads naturally into Cybernetics...