12 Comments
Jan 4 · Liked by Machine Learning Street Talk

"Many representationalist AI researchers do think that agents have internal representation of the world and do explicit planning." -- most people I know who do cognitive modelling (who are more or less aligned with neurosymbolic AI) would not have a problem with the world model / generative model / representation being somewhat external or distributed, as long as it can be *interrogated*. It must be able to be used as a model, to run simulations, otherwise what is the point of having it at all?

Other points of feedback if you are planning to use this as a springboard for further work: it might be worth having a good, lay, watertight definition or explanation of what entropy is somewhere. Also, the wall-of-fire para is pretty speculative and vague. It could be a really nice idea if developed a bit more.

author

Hello Steph, that's really interesting. I am touching on explicit planning and goals here, though, not saying that agents couldn't have a world model; FEP theory has always been clear that they can. The agents are performing explicit rollouts/planning in most AI implementations. I wonder, though, cynically, whether I am being too puritanical. There is the "pure free energy principle", which is self-organizing and emergentist, and then there are the "practical implementations", which make a bunch of trade-offs such as explicit agents, explicit planning and goals to make the problem tractable. Are we left with something still significantly better than reinforcement learning?

The wall-of-fire thing is speculative - a "galaxy brain" thought! When I refer to entropy I am referring to the "information content" of something, i.e. high entropy == high information content. For example, I pick up a spoon, a remote control, a hair dryer and a pocket calculator from my desk and juggle them together. I have just created such a "high entropy" event that it has probably never occurred before, and no machine learning model will have been trained on it. So the wall of fire represents the tendency of machine learning models to kill all forms of specificity without human supervision. All the specificity or "information" comes from humans, and they draw it from their immediate physical and social worlds.
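To make the entropy-as-information-content point concrete, here is a minimal Python sketch. The probabilities are made up for illustration, and `surprisal_bits` / `entropy_bits` are just helper names, not anything from the post: a rarely seen, highly specific event carries many bits of Shannon surprisal, while a common one carries almost none.

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal (self-information) of an event with probability p, in bits."""
    return -math.log2(p)

def entropy_bits(dist) -> float:
    """Shannon entropy of a distribution: the average surprisal over its events."""
    return sum(p * surprisal_bits(p) for p in dist.values() if p > 0)

# Made-up probabilities, purely for illustration.
p_common = 0.1      # e.g. "someone picks up a spoon"
p_juggle = 1e-12    # e.g. "someone juggles a spoon, remote, hair dryer and calculator"

print(surprisal_bits(p_common))  # ~3.3 bits: unremarkable, low information
print(surprisal_bits(p_juggle))  # ~39.9 bits: essentially never seen, high information

# A distribution concentrated on common events has low entropy;
# spreading mass over many rare, specific events raises it.
print(entropy_bits({"a": 0.97, "b": 0.01, "c": 0.01, "d": 0.01}))  # ~0.24 bits
print(entropy_bits({"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}))  # 2.0 bits
```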


Well, it seemed to me you were drawing quite a significant dichotomy between a) having a world model, which you put at one extreme associated with symbolism, internal representations, static knowledge, GOFAI, etc., and b) being all diffuse, distributed and external, maan, with no planning, just vibes and intuition, maan. I am exaggerating for effect, of course, but isn't that what your para about objections to distributed cognition of 'derision or deliberate obfuscation and avoidance' boils down to? I.e. that models have to be used otherwise they are useless, and what else can they be used for if not planning? Unless... perhaps there is a big difference between the kind of fully planned, get-the-goal planning you are talking about, and the kind of interrogation of a model I advocate: just-in-time, next-step-only, speculative, counterfactual, what-would-happen-if, muddling-along simulation, which I think is likely to be possible in a modular, subcomponent way below the level of consciousness.


I guess further clarifying questions could be:

- is all planning goal-directed? Or can planning ever be just a step at a time into the unknown?

- is all model-interrogation planning? It seems to me that a componential, distributed, non-conscious system could hypothesise or simulate a situation using a model, and then act on scrappy info, maybe according to heuristics or in a binary Buddhist way (do I turn away or towards?), with only the vaguest impersonal ideas of goals (I know when something is painful, for example, and I want to avoid it).
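Purely as an illustration of what such next-step-only model interrogation could look like (everything here, including `simulate_one_step` and the discomfort dynamics, is invented for the sketch, not a proposed cognitive model): each candidate action is simulated one step ahead, and the choice falls out of a binary avoid-pain heuristic rather than an explicit goal or long-horizon plan.

```python
import random

def simulate_one_step(state: float, action: str) -> float:
    """A stand-in generative model: predict the 'discomfort' level after one action."""
    noise = random.gauss(0.0, 0.1)
    if action == "toward":
        return state + 0.5 + noise   # approaching the stimulus raises discomfort
    return state - 0.3 + noise       # turning away lowers it

def choose(state: float) -> str:
    """Interrogate the model once per action, then act on the heuristic 'avoid pain'."""
    predicted = {a: simulate_one_step(state, a) for a in ("toward", "away")}
    return min(predicted, key=predicted.get)  # no explicit goal, just local avoidance

state = 1.0
for _ in range(5):
    action = choose(state)
    state = simulate_one_step(state, action)  # here the world happens to match the model
    print(action, round(state, 2))
```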

author

I don't think symbolism, GOFAI or explicit knowledge is relevant here, i.e. the comments would apply to, say, RL. It's more of an internalist vs externalist thing and, more importantly, not about how you think the world is, but rather how you model it as an AI researcher. FEP folks think the world is diffuse, but model it in an integrated / half-way-house way.

I added a section to expand on the planning part, 'I thought the Free Energy Principle was “planning as inference” or “implicit planning”?', i.e. explicit planning is always goal-directed, implicit planning is as-if goal-directed, but the goals are diffuse.

"is all model-interrogation planning?" - well this is the interesting thing, which I think we have been teasing out on our discussions on discord too. It is "planning as inference" if the effective plan is in your model, because, as an agent you have been effectively sharing information in your environment.

Jan 4 · Liked by Machine Learning Street Talk

I admit I do mix up 'how I think the world is' and 'how to model it as an AI researcher'. That's because I am trying to model the human mind; I'm trying to use models to get at how we actually work, what we actually are. So for me they are very similar, if not the same. Maybe I could benefit from dissociating them a bit.


Are you going to keep up the Newsletter?

author

Sorry for the hiatus, Michael - I've been insanely busy on the main YT channel and scaling the production team over there. We do have an article coming out here soon (on creativity in AI).


Thanks Tim, excited to see what you have planned next.


Love your mister microwave opening. And I know what you mean about how the debt must always be paid. I was working on something today, in a great flow. I paused for what I thought would be just a second to grab a synonym for some terminology, and the next thing I knew I'm going back and forth, back and forth with the prompt master, haggling. Lost energy, little gained, an illusion of productivity.


https://www.linkedin.com/pulse/intelligent-agents-agi-active-inference-free-energy-principle-ludik-z3oof

This article has been inspired and triggered by (1) some insightful questions and opinions about when things become agents and what agency is by Tim Scarfe in his excellent Substack blog on "Agentialism and the Free Energy Principle", as well as the corresponding MLST podcast episode "Does AI have Agency?", and (2) my recent engagements with the Active Inference and Free Energy Principle (FEP) community as well as the VERSES team (the likes of Gabriel René, Dan Mapes, Jason Fox, Philippe Sayegh, etc.) as they are getting out of their "stealth mode" phase into the AI revolution limelight! A special shoutout also to Denise Holt, with whom I have had many discussions, such as the one on her Active Inference AI & Spatial Web AI Podcast, "Navigating the AI Revolution with Dr. Jacques Ludik: Insights on Active Inference, Ethics, and AI's Societal Impact", as well as her superb communications and curation of relevant content such as "The Ultimate Resource Guide for Active Inference AI | 2024 Q1", "Unlocking the Future of AI: Active Inference vs. LLMs: The World vs. Words", etc. See also Charel van Hoof's 5-part series Learn by Example - Active Inference in the Brain on Kaggle.

In his Substack blog, Tim Scarfe makes some key points, or at least articulates opinions and elephants in the room (some of which will be controversial in some circles), that need to be explored further within the broader AI community to ensure significant AI progress. As Active Inference, underpinned by the Free Energy Principle, is also specifically under discussion here and promoted by VERSES as a promising direction for human-centric autonomous intelligent agents, it would be of great interest to get the perspectives of the Active Inference & FEP research community as well as the VERSES folks that are directly involved in the practical implementations of Active Inference. I would also be particularly keen to hear the viewpoints of people like Karl Friston, Yann LeCun, Joscha Bach, and others (including folks from OpenAI, Google DeepMind, etc.).


FEP is basically the Butterfly Effect. Inference will replace transformers for this year's AI/ML hype cycle. Mathematics is reductionism, but also a poor attempt to replace software and xNNs (i.e., FEP does nothing for AI/AGI research).
