9 Comments
Jan 4 · Liked by Machine Learning Street Talk

"Many representationalist AI researchers do think that agents have internal representation of the world and do explicit planning." -- most people I know who do cognitive modelling (who are more or less aligned with neurosymbolic AI) would not have a problem with the world model / generative model / representation being somewhat external or distributed, as long as it can be *interrogated*. It must be able to be used as a model, to run simulations, otherwise what is the point of having it at all?

Other points of feedback if you are planning to use this as a springboard for further work: it might be worth having a good, lay, watertight definition or explanation of what entropy is somewhere. Also, the wall-of-fire paragraph is pretty speculative and vague. It could be a really nice idea if developed a bit.


Love your mister microwave opening. And I know what you mean about how the debt must always be paid. I was working on something today, in a great flow. I paused for what I thought would be just a second to grab some synonym terminology, and the next thing I knew I was going back and forth, back and forth with the prompt master, haggling. Lost energy, little gained, an illusion of productivity.


https://www.linkedin.com/pulse/intelligent-agents-agi-active-inference-free-energy-principle-ludik-z3oof

This article has been inspired and triggered by (1) some insightful questions and opinions about when things become agents and what agency is by Tim Scarfe in his excellent Substack blog on "Agentialism and the Free Energy Principle" as well as the corresponding MLST podcast episode "Does AI have Agency?" and (2) my recent engagements with the Active Inference and Free Energy Principle (FEP) community as well as the VERSES team (the likes of Gabriel René, Dan Mapes, Jason Fox, Philippe Sayegh, etc.) as they are getting out of their "stealth mode" phase into the AI revolution limelight! A special shoutout also to Denise Holt with whom I also had many discussions such as the one on her Active Inference AI & Spatial Web AI Podcast "Navigating the AI Revolution with Dr. Jacques Ludik: Insights on Active Inference, Ethics, and AI's Societal Impact" as well as her superb communications and curation of relevant content such as "The Ultimate Resource Guide for Active Inference AI | 2024 Q1", "Unlocking the Future of AI: Active Inference vs. LLMs: The World vs. Words", etc. See also Charel van Hoof's 5-part series Learn by Example - Active Inference in the Brain on Kaggle.

In his Substack blog, Tim Scarfe makes some key points, or at least articulates opinions and elephants in the room (some of which will be controversial in some circles), that need to be explored further within the broader AI community to ensure significant AI progress. As Active Inference, underpinned by the Free Energy Principle, is also specifically under discussion here and promoted by VERSES as a promising direction for human-centric autonomous intelligent agents, it would be of great interest to get the perspectives of the Active Inference & FEP research community as well as the VERSES folks who are directly involved in the practical implementations of Active Inference. I would also be particularly keen to hear the viewpoints of people like Karl Friston, Yann LeCun, Joscha Bach, and others (including folks from OpenAI, Google DeepMind, etc.).


FEP is basically the Butterfly Effect. Inference will replace transformers for this year's AI/ML hype cycle. Mathematics is reductionism, but also a poor attempt to replace software and xNNs (i.e., FEP does nothing for AI/AGI research).
