On action selection and animats


More than a decade ago, while reading the “Computational Cognitive Neuroscience” book, I came across the idea of action selection. The premise immediately raised questions.

I kept pulling threads from different sources. One of them:

González, F., Prescott, T. J., Gurney, K., Humphries, M., & Redgrave, P. (2000). An embodied model of action selection mechanisms in the vertebrate brain. doi:10.7551/mitpress/3120.003.0018.

That paper, among a few others about animats, made me think about how much real organisms rely on embodiment, and how much that demands: reasoning, associating, remembering (revising one’s view of things already known), all in real time, with many different ways of handling data.

Take, for example:

Meyer, J.-A., Guillot, A., Girard, B., Khamassi, M., Pirim, P., & Berthoz, A. (2005). The Psikharpax project: Towards building an artificial rat. Robotics and Autonomous Systems, 50, 211–223. doi:10.1016/j.robot.2004.09.018.

To quote the introduction:

several researchers consider that it is quite premature trying to understand and reproduce human intelligence – whatever this expression really means – and that one should first try to understand and reproduce the probable roots of this intelligence, i.e., the basic adaptive capacities of animals. In other words, before attempting to reproduce unique capacities that characterize man, like logical reasoning or natural language understanding, it might be wise to concentrate first on simpler abilities that human beings share with other animals, like navigating, seeking food and avoiding dangers.

My questions from above received partial answers, but each answer led to more questions.

At this point, I am rewriting the whole text for yet another time, going on a tangent about how many things it links to and how much there is to discover. It’s easy to get too focused on the “essentially contested concept” nature of the word “intelligence”: our understandings differ in principle, and people nod in agreement at the start only to argue about its definition, criteria, and scope for weeks on end. For this note, I think it’s best to stop myself here and touch on each specific topic in subsequent notes.

The papers above motivated me to keep building small environments, several of which I return to when I want to think about embodied agents. Below is a simplified, 2D version controlled by an HTN game AI, illustrating the principle.

The robot’s task is deliberately simple: remove the cardboard boxes from the arena.
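To give a flavor of how such a controller works, here is a minimal HTN (hierarchical task network) planner sketch for the box-clearing task. Everything in it is my own illustration, not the actual demo code: the task names (`clear_arena`, `goto`, `push`), the flat grid-coordinate world state, and the “pick the nearest box” heuristic are all assumptions made for the example.

```python
# Minimal HTN planner sketch (illustrative, not the demo's real code).
# Compound tasks decompose via methods into subtasks; primitive tasks
# are applied to the world state by operators.

def plan(state, tasks, methods, operators):
    """Return an ordered list of primitive actions, or None on failure."""
    if not tasks:
        return []
    task, *rest = tasks
    name = task[0]
    if name in operators:                      # primitive task: simulate it
        new_state = operators[name](dict(state), *task[1:])
        if new_state is None:
            return None
        tail = plan(new_state, rest, methods, operators)
        return None if tail is None else [task] + tail
    for method in methods.get(name, []):       # compound task: try each method
        subtasks = method(state, *task[1:])
        if subtasks is not None:
            result = plan(state, subtasks + rest, methods, operators)
            if result is not None:
                return result
    return None

def op_goto(state, pos):                       # move the robot to a grid cell
    state["robot"] = pos
    return state

def op_push(state, box):                       # push a box out of the arena
    state["boxes"] = [b for b in state["boxes"] if b != box]
    state["robot"] = box
    return state

def m_clear_arena(state):
    if not state["boxes"]:
        return []                              # arena is clear: done
    box = min(state["boxes"])                  # toy "nearest box" heuristic
    return [("goto", box), ("push", box), ("clear_arena",)]

operators = {"goto": op_goto, "push": op_push}
methods = {"clear_arena": [m_clear_arena]}

state = {"robot": (0, 0), "boxes": [(2, 3), (5, 1)]}
actions = plan(state, [("clear_arena",)], methods, operators)
print(actions)
# [('goto', (2, 3)), ('push', (2, 3)), ('goto', (5, 1)), ('push', (5, 1))]
```

The recursive `clear_arena` method is what makes the plan adapt to how many boxes remain; in an embodied setting this planning step would be re-run whenever perception reveals the world has drifted from the plan’s assumptions.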

What this toy example makes clear is that even in a trivial 2D world, action selection is not just about choosing the next action. It’s about continuously reconciling planning, perception, and physical constraints. The agent’s failures, such as getting stuck, oscillating, or committing to suboptimal pushes, aren’t bugs so much as evidence that something is missing from the model. That gap makes such environments useful as thinking tools.