Janet Axelrud on the Soul
See her Substack piece Rice On A Flat Plate - A Metaphor for Trust.
Subscribe to Curious Wandering Souls.
Here's the audiobook Janet was listening to:
YouTube demonetized him. You can support him at https://buymeacoffee.com/nolanreads.
Subscribe to Nolan Reads.
Her balloon and the sun metaphor reminded me of L'homme est le seul oiseau qui porte sa cage ("Man is the only bird that carries its cage") by Claude Weiss:
Subscribe to Dust.
On the importance of words: again this made me think of that talk and the paper "Emerging dynamic regimes and tipping points from finite empirical principles" by Sergio Cobo-López, Matthew Witt, Forest L. Rohwer and Antoni Luque. Verbal communication seems to set the scale of the participants' observations, so that their reality is shared. Without that common web of verbal communication, each person lives in their own private world of experience, which can come to be like a cage.
Think about the ancient rites associated with the seasons and the scales at which temperature could be empirically observed. Then think about the ways we imagine people lived their lives in those days. We know more about the large-scale societal structures of great civilizations than we do about the smaller groups, who left fewer archaeological traces of their beliefs.
Then think about the different sorts of concerns of people in the public sphere. I can only talk about the public sphere as I see it, which is my bubble, or my cage if you like. I see some people concerned with the state of the computer gaming industry, others with the erosion of consumer rights, others with the state of the software industry, some with civil rights like privacy and fair representation before the law, others with the state of the national economy, others with the global economy, others with the state of natural systems like droughts, fires and flooding, and others with the dynamics of global weather systems and ocean currents.

Each of these concerns presents people (who are all observers) with information that informs their decisions about what to do in their lives. The information they receive causes them to act in certain ways, and what they do inevitably affects the world, which changes what they observe. But their actions also inevitably change what others observe. So when we try to understand the whole earth with all the systems it supports, we find they are constantly evolving in ways that are very hard to predict, and it is principally the actions of intelligent observers that are the source of the variety in the data. That is what the emergence of dynamic regimes and tipping points from finite empirical principles means to me. It is like trying to learn a language without a teacher while we are actively inventing that language, much as children are believed to do when they create creoles together in multilingual groups.
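The feedback loop above can be caricatured in a few lines of code. This is my own toy sketch, not anything from the Cobo-López et al. paper: observers act on a shared state, their actions feed back into that state, and past a critical coupling strength the collective tips the system into a new regime. Every parameter here is invented for the illustration.

```python
# Toy sketch (not from the cited paper): n_agents observers share a state x
# in [0, 1]. Each acts on what it observes, and the actions feed back into
# the state. Past a critical coupling the feedback tips x to a high regime.

def simulate(coupling, steps=200, n_agents=10, x0=0.1):
    x = x0
    for _ in range(steps):
        # each agent's action is a small nudge based on its observation of x
        per_agent_action = coupling * x * (1 - x) / n_agents
        # collective feedback from all agents, minus a slow natural decay
        x = x + n_agents * per_agent_action - 0.02 * x
        x = max(0.0, min(1.0, x))  # keep the state in bounds
    return x

print(simulate(0.01))  # weak coupling: decay wins, state stays near zero
print(simulate(0.10))  # strong coupling: feedback tips x to a high regime
```

The point of the sketch is only that the qualitative outcome (which regime the system settles into) depends discontinuously on the strength of the coupling between observers, which is the kind of tipping behaviour the paper's title gestures at.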
To get back to the question about responsible use of "AI" that started this thread: suppose you use AI to abstract that language, and thereby gain a means to speak to the system and control it in some ways. It may seem successful at first, and you will be led to wonder about the source of the intelligence and the power you have tapped into. But once you start to exercise that control, you will find that the system has changed: new things become relevant which cannot be expressed in the language your AI has abstracted, because these newly relevant dynamical pathways are a consequence of abstracting the system's dynamics by that finite empirical process. See Jenann Ismael - Cracks in the Edifice of Determinism and Computerphile - Aric Floyd Testing AI LLMs with Newcomb's Problem.
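That failure mode can also be put in numbers. The following is my own construction, not anything from Ismael's or Floyd's talks: a controller is fitted to an abstraction of a system that omits one pathway, and the act of controlling is precisely what excites the omitted pathway. All variable names and coefficients are assumptions made up for the sketch.

```python
# Toy sketch (my illustration): the controller only knows about x, the
# variable its abstracted model captures. Exerting control feeds a hidden
# pathway the model omits, and that pathway eventually re-enters x.

def run(control_on, steps=100):
    x, hidden = 1.0, 0.0
    errors = []
    for _ in range(steps):
        # control law derived from the abstracted model (it sees only x)
        u = -0.5 * x if control_on else 0.0
        # control effort excites an unmodelled dynamical pathway
        hidden += 0.05 * abs(u)
        # true dynamics: mildly unstable, plus control, plus hidden feedback
        x = 1.02 * x + u + 0.2 * hidden
        errors.append(abs(x))
    return errors

with_control = run(True)
without_control = run(False)
# at first the control suppresses the error far below the uncontrolled
# case; later the hidden pathway, excited by the control itself, makes
# the error grow again
```

The design choice that matters is the `hidden += 0.05 * abs(u)` line: the unmodelled pathway is fed by the control effort itself, so the better the controller "works", the more it undermines the abstraction it was built from.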
See also James Corbett on Cybernetics and John Harte - Disturbance and Recovery Dynamics in Complex Systems.
Olive Badger on an AI that apparently thinks words are the only reality:
My comment:
Why are you shadow-banned?! Are people scared of knowing what's really going on? That was a really great analysis. I can't help but wonder whether this whole AI hype cycle isn't aimed at building up "ecosystem" dependencies to such an extent that in a year or so they will be able to get a government bailout, just like the banks did. Because so many companies drank the Kool-Aid, it all became "too big to fail", even though it was never profitable, given the energy costs of the infrastructure.
Subscribe to Olive Badger.
Maybe someone should try asking some of these LLMs whether there are indications that the future is one where intelligence is limited by available energy and materials, or whether intelligence might one day be liberated from any such constraints.
See Irai Regmi's Substack https://radicaltransitions.substack.com/.
Subscribe to FREE.
Charlotte Moser on models and prediction:
Subscribe to Charlotte Moser.
Stafford Beer on Cybernetics ("the science of control and communication in the animal and the machine", 48:25).
50:57 on economic models. See Nima Alkhorshid talking with Richard Wolff and Michael Hudson.
55:51 Models explained in terms of Category theory, more or less.
Subscribe to Javier Livas Cantu.