Robot Projects

Friday, April 3, 2026

Cybernetic AI Agents for Robots (#1)


 "The Future's So Bright, I Gotta Wear Shades"

[editor's note:  I have returned after a hiatus due to life, jobs, and other distractions (Factorio!! this is why we don't have my robots everywhere!). AI is everywhere now, but for this blog I will still do the work of producing my own thoughts for you. I may use AI to shape them up and improve their clarity, but the thoughts written here are mine.]

A new world of AI Agents

A great deal has changed in the world since 2025. AI has advanced significantly, and agent technology has taken a major step forward. It is becoming evident that AI systems are moving toward a higher level of intelligent capability, with important implications for robotics.

The core concept is straightforward. Instead of robots being purely deterministic systems—fixed sets of code that either follow predefined instructions or simply receive commands about what to do next—a robot could have the ability to analyze how to achieve a given result on its own. In other words, the system would be given a purpose or goal, but it would be allowed to determine its own approach. It could assess options, simulate possibilities, decide what to do, and then take action accordingly.
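The loop described above can be sketched in a few lines. This is a minimal illustration, not a real planner: every function name here (candidate_plans, simulate, execute) is an assumption invented for the example, and the scoring is a stand-in for whatever predicted-outcome measure a real system would use.

```python
def candidate_plans(goal):
    """Hypothetical planner: propose several ways to pursue the goal."""
    return [f"{goal} via approach {i}" for i in range(3)]

def simulate(plan):
    """Hypothetical simulator: score a plan before committing to it."""
    return len(plan) % 7  # stand-in for a real predicted-outcome score

def execute(plan):
    """Hypothetical actuation step."""
    return f"executing: {plan}"

def pursue(goal):
    plans = candidate_plans(goal)               # assess options
    scored = [(simulate(p), p) for p in plans]  # simulate possibilities
    _, best = max(scored)                       # decide what to do
    return execute(best)                        # take action

print(pursue("dock at charging station"))
```

The point of the sketch is the shape of the loop: the goal is the only input, and the system chooses its own path to it.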

A key part of this shift is that AI agents today are beginning to show the ability to write quality code, rewrite code, adjust code, and modify their operating processes to better fit the goals they are trying to achieve. This is more than just using an Agent Planner with a fixed set of Skills. It is the ability to build the Skills needed to accomplish an objective.
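The distinction between using skills and building them can be made concrete. In this sketch the SKILLS registry and build_skill helper are invented names, and the "generated" source is a hand-written string; in a real system that source would come from an LLM, with all the sandboxing and validation that implies.

```python
SKILLS = {
    "greet": lambda name: f"hello, {name}",  # a pre-built skill
}

def build_skill(name, source):
    """Compile generated source into a callable and register it as a skill."""
    namespace = {}
    exec(source, namespace)          # run the generated code (unsandboxed here!)
    SKILLS[name] = namespace[name]   # the new skill is now available to the planner

# The planner discovers it lacks a "square" skill, so it writes one:
build_skill("square", "def square(x):\n    return x * x")

print(SKILLS["square"](7))  # → 49
```

A planner limited to the initial SKILLS dictionary can only recombine what it was given; one with something like build_skill can grow the dictionary to fit the objective.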

Oddly enough, much of this is still very linguistic and centered around the LLM. Large language models do not directly drive real-time operations well, but they do have the ability to write the code that can drive those real-time operations. That is where I think this begins to apply to robotics.

You could imagine a robotic system that has the ability to adapt to its sensors and reconfigure how those sensors are used. It could adjust how motion is implemented, or even how control systems are tuned, in order to achieve the result it is aiming for. This higher-level function would have to operate on a completely different layer than the real-time operating system, motion control, or sensor loop.

That difference in timescale is important. This higher level of operation would run on a different sequence of events than real-time sensing, actuation, and low-level control. But that does not mean it cannot exist. It simply means that the architecture must separate high-level adaptive reasoning from low-level real-time execution.

Such a system would likely require some connection to a large language model. Today, we have powerful models in the cloud, but AI and LLM technologies are rapidly being compressed into smaller footprints. Capabilities are increasingly moving toward deployment on smaller devices, to the point where something like this could eventually be embedded directly within a robotic platform.

This may also change how humans interact with machines. This is how we may begin to talk to robots. It is how you might talk to your car and ask what is wrong with it. It is how you might tell your dishwasher what you want it to do and have it respond conversationally, including explaining the likely impact of your command.

But that is only the first step. Talking to machines is one thing. The deeper shift is that the robot itself could have the ability to rewrite the code it uses to perform the functions it needs to do. That is a much more significant capability.

A robot like this could evolve over time. It could even reach the point where it can work with modular sensors or drive mechanisms, and when presented with new hardware, it could update its own operating parameters and implement new functionality on its own. In that sense, it would not merely execute control. It would adapt its own control.
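One way to picture the modular-hardware case: drivers are looked up by a device ID, and when an unknown module appears, the adaptive layer installs a new driver for it. All names here are invented for illustration, and the "generated" driver is hand-written; the idea is that an agent could produce it on demand for hardware it has never seen.

```python
DRIVERS = {
    "imu-v1": lambda raw: {"accel": raw[0], "gyro": raw[1]},  # known module
}

def adapt_to(device_id):
    """Stand-in for agent-written code that handles a new module."""
    return lambda raw: {"channels": list(raw), "source": device_id}

def read_sensor(device_id, raw):
    if device_id not in DRIVERS:
        DRIVERS[device_id] = adapt_to(device_id)  # self-extension step
    return DRIVERS[device_id](raw)

# A module the robot has never seen before:
print(read_sensor("lidar-v2", (1.2, 3.4)))
```

Whether the generated driver is trustworthy is exactly the kind of question the cybernetics discussion below has to grapple with.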


And that leads naturally into Cybernetics...
