Discussion about this post

Steph:

"Many representationalist AI researchers do think that agents have internal representation of the world and do explicit planning." -- most people I know who do cognitive modelling (who are more or less aligned with neurosymbolic AI) would not have a problem with the world model / generative model / representation being somewhat external or distributed, as long as it can be *interrogated*. It must be able to be used as a model, to run simulations, otherwise what is the point of having it at all?

Other points of feedback if you are planning to use this as a springboard for further work: it might be worth having a good, lay, watertight definition or explanation of what entropy is somewhere. Also, the wall-of-fire paragraph is pretty speculative and vague. Could be a really nice idea if developed a bit?

The Singularity Project:

The Cultural Intelligence of humanity, developed over a roughly 100,000-year span, and in particular its Language component, is the basis for what we call Artificial Intelligence today. Humans stopped evolving biologically, and evolution continued first through Cultural Intelligence and now through Technological Intelligence.
