The Physics of Free Will
A dialogue with Chris Fields
“If anything has free will, everything does.” — Chris Fields
False Certainty
I used to think that physics was more or less set in stone. It certainly seemed that way in school. From lecture to lecture, the attitude my teachers imparted to me was that our understanding of the physical world was like bedrock. All the truly awe-inspiring scientists, theories, and discoveries listed in my textbook were centuries old. If ever something from the last three or four decades was mentioned, it read like little more than an epilogue to a field that had already reached its conclusions.
My teachers at that time summarised it this way: the universe is a deterministic machine, made up of unconscious atoms and particles “bouncing around” in the fabric of spacetime—a bit like billiard balls on a pool table—dutifully following the laws of physics on the path written in the initial conditions of the Big Bang. One consequence of this is that there is really no free will. After all, we humans are made of the same deterministic material as everything else. It further meant that our rich and complex inner worlds are either illusory or at most unimportant. Our subjective experience makes no difference to anything, because the laws of physics are “observer-independent”—they don’t care who, what, or where you are. In fact, the whole universe may in reality be one static, timeless block. The work that remains for curious minds is to tidy up this picture. Physics is essentially “complete”.
At the time, I couldn’t help but feel that there was something deeply unsatisfying about all this. My teachers’ claims were at once vague and self-certain. My questions about the mind, life, and our sense of free will were met with hand-waving answers. Why do we feel like we have free will if everything was pre-determined at the Big Bang? Well, my teacher would say, that’s because our brains create the illusion of free will! Why would a deterministic universe create brains that create illusions? There is no “why” to the universe, I was told, it just happened like that. Okay, so how do unconscious particles create beings with brains capable of experiencing illusions? This must have been an irritating question, because I never got an answer.
I would only learn later that my teachers’ claims were based almost entirely on classical physics. What they failed to tell me was that our certainty in the classical worldview had largely collapsed with the introduction of quantum theory, information theory, and various developments in mathematics like the famous Gödel theorems. The 20th century had brought a cascade of strangeness to contend with. There were wave-particle dualities, quantum teleportation, the elusive nature of time, and the measurement problem. Observers had some unknown significance, spacetime might not be fundamental, and this thing called “information” was no longer a parochial aspect of human life, but potentially a ubiquitous feature of reality. The worldview I had been taught in school was really a hangover from a physics that was more than a century out of date. Physics was not static at all; its dynamism and activity had simply been obscured by the school curriculum!
That hangover has sustained itself in part because the emerging worldview is far stranger and offers less conceptual convenience than the classical one, and in part because no one can quite agree on it. Quantum theory still has no firmly accepted “interpretation”, and many physicists still insist that it must be harbouring “hidden variables” that would render the universe wholly deterministic again if only we could discover them. And so, perhaps it is not surprising that a science teacher at a small school would stick to the classical worldview rather than let impressionable young minds be muddled by the chaos.
What a shame it is to kill the curiosity of children by lecturing them with false certainty! For my part, I was so dissatisfied with my physics education that I ended up opting for psychology and neuroscience when I reached university. But my old questions about physics and how it could give rise to the living and mental worlds remained.
It is some of these same questions that appear to have animated Chris Fields, a physicist and former molecular biologist affiliated with the Allen Discovery Center at Tufts University in Boston. I was put in touch with Fields after speaking with the neuroscientist Karl Friston and the biologist Michael Levin, who both belong to a loose coalition of scientists around the world who, in my opinion, are doing the most fascinating scientific work of our time. Although I cannot claim to speak for them, I view their work as an attempt to bridge physics with biology and the cognitive sciences, and to ask seriously the kinds of questions which my own teachers found intolerable.
What I found in speaking to Chris Fields was indeed a view of physics radically different from what my teachers had given me.
New Information
GUNNAR: I’d like to start us off with the most foundational aspect of your thinking. As I understand it, you see physics as being about information processing. I have to admit that “information” is not really a concept that ever came up in my own physics education, so I’m curious to understand how it fits into your view.
CHRIS FIELDS: So this link from physics to information, I think one can trace back to the 1870s with Ludwig Boltzmann, who invented statistical mechanics. And what Boltzmann did was reinterpret the notion of entropy, which had been invented as a concept only three decades earlier by Rudolf Clausius, as a relationship between energy and temperature. What Boltzmann did, which I think was very fundamental, was to relate the notion of entropy to a notion of uncertainty.
He defined entropy in terms of the number of states a system can have that an observer can’t distinguish. And he was thinking about ideal gases. So imagine a bunch of gas in a box, and the observer is on the outside measuring temperature and pressure. The observer can’t see the individual velocities or positions of the atoms inside the box. So there’s a huge number of possible states that the gas can be in that the observer can’t distinguish by measuring temperature and pressure on the outside. And so Boltzmann related entropy to the log of that number of states that can’t be distinguished, and introduced Boltzmann’s constant as a conversion factor to get the units right.
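SIDE NOTE: In modern notation, Boltzmann’s definition is usually written as follows, with W the number of microstates the observer cannot distinguish and k_B Boltzmann’s constant:

```latex
% Boltzmann entropy: W = number of indistinguishable microstates
S = k_B \ln W, \qquad k_B \approx 1.38 \times 10^{-23}\ \mathrm{J/K}
% A system with exactly two indistinguishable states carries one bit of uncertainty:
S_{\text{one bit}} = k_B \ln 2
```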

So what this did was represent entropy in terms of ignorance or uncertainty. One could then, for the first time, ask the question: Suppose I’m an observer, and I want to find something out. I want to do an experiment, ask a question. So let’s say I have a particular system that I am investigating. By characterizing the system in terms of uncertainty, I can now find out how much energy I have to spend to ask a question of the system, and thus to decrease my uncertainty by the value of one state. And if you think in terms of systems with two states, then that’s decreasing it by one bit.
With this way of thinking, one can actually get from Boltzmann’s ideas to Landauer’s principle, which describes the relationship between energy and information, with just one line of algebra, basically. More specifically, Landauer’s principle tells you how much energy you have to spend to erase one bit of information, which is equivalent to resolving your uncertainty in an irreversible way. And that’s quite incredible, because Landauer’s principle wasn’t actually enunciated until the 1960s. So the real consequences of what Boltzmann figured out all the way back in the 1870s weren’t realized until the 1960s, even though information theory per se was invented by Claude Shannon in the 40s.
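SIDE NOTE: The “one line of algebra” goes roughly like this: resolving one bit halves the number of indistinguishable states, so the entropy assigned to the system drops by k_B ln 2, and at temperature T the energy that must be dissipated is at least T times that entropy drop:

```latex
% Halving W removes one bit of uncertainty:
\Delta S = k_B \ln W - k_B \ln \tfrac{W}{2} = k_B \ln 2
% Landauer's bound on the cost of irreversibly erasing one bit:
E_{\min} = T \, \Delta S = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J} \quad (T = 300\ \mathrm{K})
```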
So the math was done kind of slowly, and the physics took a long time to catch up with it in terms of thinking about this issue of reducing uncertainty by making measurements. By the mid-1920s, there was a full-fledged debate about what quantum theory meant, and quantum theory kind of set aside this question of how scientists actually make measurements. That turned into the measurement problem, which became a philosophical problem that generated all sorts of metaphysical speculation.
But, in a sense, Boltzmann had answered that question already in the 1870s by saying: What experimenters do is use energy to extract information from a system. Now, because we’re finite systems, if we’re going to use energy, we have to have some time to use it. And time × energy is action in physics, and Planck’s constant (written as ℏ or “h-bar”, strictly the reduced Planck constant) is actually the quantum of action. It’s the basic thing that’s quantized in quantum theory. One can think of Planck’s constant as the minimal action required under the very best of circumstances to answer one yes/no question, and thus acquire one bit of information.
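SIDE NOTE: A back-of-the-envelope way to combine these two constants (a heuristic estimate, not a rigorous bound): if acquiring a bit at temperature T costs at least k_B T ln 2 of energy, and the minimal action per bit is on the order of ℏ, then the shortest time in which one bit can be acquired at room temperature is roughly

```latex
E \cdot t \sim \hbar
\;\Longrightarrow\;
t_{\min} \sim \frac{\hbar}{k_B T \ln 2}
\approx \frac{1.05 \times 10^{-34}\ \mathrm{J\,s}}{2.9 \times 10^{-21}\ \mathrm{J}}
\approx 4 \times 10^{-14}\ \mathrm{s}
\quad (T = 300\ \mathrm{K})
```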
GUNNAR: And acquiring one bit of information then means reducing uncertainty by one bit?
CHRIS FIELDS: That’s right, and not just reducing uncertainty in the abstract, but reducing my uncertainty, since I’m the observer, I’m the one doing the experiment.
GUNNAR: And is reducing uncertainty in this case the same as reducing entropy? In which case, is my entropy decreasing, or is the entropy of the system I am observing decreasing?
CHRIS FIELDS: It’s decreasing the entropy of the system I am observing for me, since I am the one expending energy and receiving information about the system. That is really the crucial aspect.
The next big name to adopt this way of thinking was John Wheeler in the early 1970s. He essentially revived general relativity and started to connect it to quantum theory, so in many ways, he’s the founder of quantum gravity research. He was a student of both Einstein and Bohr, and he was the one who came up with the notion that the only thing that is fundamental in physics is information. So he really started this physics as information business in the 70s and 80s. From there, one starts to get a whole movement in quantum computing, quantum information, and quantum gravity that has to do with conceiving of physics in terms of information. And today, that’s a huge community and a very big deal because of quantum computing being a potentially practical resource.

Shut Up and Calculate!
GUNNAR: Would you say that this whole ‘information’ business is still a contentious issue in the physics community? Is the mainstream still conceiving of physics as being fundamentally not about information, but about physical stuff influencing each other mechanistically?
CHRIS FIELDS: I don’t know for sure. I would have to go back to undergraduate school and see what’s actually being taught. But probably the physical “meaning” of information in physics is still somewhat contentious. When I was in school, it was still the “Shut up and calculate!” period. So you were not encouraged to even think about this sort of stuff. But I think that’s beginning to wane.
There’s a wonderful paper from 2018 called “Making Better Sense of Quantum Mechanics” by David Mermin, who is a solid state physicist [Mermin was also the person who originally coined the phrase “Shut up and calculate!” to express his dismay with the ‘Copenhagen interpretation’ of quantum theory in an article from 1989 titled “What’s Wrong With This Pillow?”]. Mermin is a very respected figure when it comes to the foundations of quantum theory, and in this paper, he basically took the side of Christopher Fuchs and the other people who had developed an interpretation of quantum theory as a personal, experience-based exercise in Bayesian reasoning (their interpretation is known as QBism). And the point of Mermin’s paper was to say: Look, physics community, what we’re doing here is really observing things, and what we’re describing is really our experience as experimentalists. So if I make an observation and I characterize something as having a particular quantum state, then it has that state for me, because I made the observation, and you may make a different observation, and it’s gonna have a different state for you. So forget about this business of objective quantum states. They don’t exist. It’s all personal. It’s all observational.
One can wonder whether that was what Bohr or Heisenberg actually thought. It’s not quite clear what they thought. They were trying to move a scientific revolution along, not do philosophy. But at least there are now very senior, well-respected voices in the physics community who are thinking along these lines, which would have been regarded as hopelessly radical a few decades ago.

GUNNAR: Are you arguing that this informational view of physics actually describes the universe we live in, or is it just a consequence of the way humans observe and sample the world? In other words, is the informational view only relevant because we are bounded human beings, forced to contend with all the limitations on knowledge which our condition entails, or does it purport to describe reality as it really is?
CHRIS FIELDS: If you take the physics seriously, then it really is a statement about what our universe is like. Keep in mind that this framing of observers is not limited to human beings. An observer is just a physical system, any system, that’s interacting with an environment. Observation is about sending information back and forth between that system and its environment.
GUNNAR: So, any physical system that exists is an observer?
CHRIS FIELDS: That’s right, if it exists in a particular way. Specifically, if it exists in a way that is—and here I’ll use Karl Friston’s language—“conditionally statistically independent” of its environment. In other words, if it has a Markov blanket. Or in quantum information theoretic language: if it exists in a way that it is separable from, so not entangled with, its environment. Entanglement is the absence of conditional statistical independence.
SIDE NOTE: Karl Friston is the author of the Free Energy Principle (FEP), which began as a theory of the brain but has since been expanded to explain the dynamics of systems at every scale, including particles, cells, and so on. Chris Fields is responsible for putting forward a quantum theoretical formulation of the principle. In its strongest form, the FEP says that any ‘thing’ that persists through time must maintain the integrity of its boundary with its environment by minimising free energy, which is essentially synonymous with reducing uncertainty. Any system is said to have internal states (the states contained within the boundary of the system) and external states (the states of the system’s environment). The boundary that separates these states is the so-called Markov blanket, which can be further decomposed into sensory states and active states. The internal states of any system are then said to encode a generative model which contains beliefs and makes predictions about the system’s environment. Long story short, the FEP implies that every ‘thing’ that exists, be they people or particles, is a kind of cognitive agent that exchanges information with its world and attempts to maximise the “evidence” for its own existence in order to persist through time.
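To make the side note’s key condition explicit: in the FEP literature, writing μ for the internal states, η for the external states, and b for the blanket (sensory plus active) states, the Markov blanket condition says that internal and external states are statistically independent once the blanket states are given:

```latex
p(\mu, \eta \mid b) = p(\mu \mid b)\, p(\eta \mid b)
```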

GUNNAR: Does this mean that when we speak about an entangled system, that such a system is “boundaryless”?
CHRIS FIELDS: Yes, but with a caveat. So if we want to be strict, we can specify some quantum system, and then we can specify some boundary in that system, some decomposition of that system into two components. So call the system U and call the components A and B. Now, I can write down the state of U as the joint state of A and B, and then I can ask: Does that state factor? Does the joint state of A and B equal the state of A times the state of B? And if it factors like that, then they are conditionally statistically independent and separable. If it doesn’t factor, they’re entangled. So if the joint state AB doesn’t factor, then U is in an entangled state with respect to that boundary that I specified, and with respect to the description I used to characterize the states of A and B.
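SIDE NOTE: For pure states of two finite systems, this factorization test is easy to run numerically. Here is a minimal sketch (my own illustration, not code from Fields): reshape the joint state vector of U into a matrix whose rows are indexed by A’s basis states and whose columns by B’s, then count its nonzero singular values (the Schmidt rank). Rank 1 means the joint state factors (separable); rank greater than 1 means entangled.

```python
import numpy as np

def schmidt_rank(joint_state, dim_a, dim_b, tol=1e-12):
    """Schmidt rank of a pure state of U = A (x) B.

    Rank 1  <=> the state factors as |a>|b> (A and B are separable).
    Rank >1 <=> the state is entangled across this A/B boundary.
    """
    matrix = np.asarray(joint_state).reshape(dim_a, dim_b)
    singular_values = np.linalg.svd(matrix, compute_uv=False)
    return int(np.sum(singular_values > tol))

# Product state |0>(|0>+|1>)/sqrt(2) of two qubits: factors, so rank 1.
product = np.array([1, 1, 0, 0]) / np.sqrt(2)

# Bell state (|00>+|11>)/sqrt(2): does not factor, so rank 2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

print(schmidt_rank(product, 2, 2))  # 1 -> separable
print(schmidt_rank(bell, 2, 2))     # 2 -> entangled
```

The answer depends on the decomposition of U into A and B that you specify, which is exactly the observer-relativity Fields describes next.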
So once again, we get this subjectivity, and this is now a 20-year-old result. But again, it is one of those results that sinks in slowly. First, it sinks in formally, and then people start to think about what it means. Again, you know, “shut up and calculate”…
Our understanding of this subjectivity is due initially, I think, to a paper by Paolo Zanardi back in 2001, where he showed that if you are investigating a particular system, then whether or not it is entangled depends on how you describe it. So entanglement is not ontological, it’s observer-relative. What we’re seeing here is a kind of deconstruction from the bottom up of this notion of objectivity and the replacement of this notion with one of observer-relativity. There are many people involved in this. Carlo Rovelli introduced what he called the relational interpretation of quantum mechanics, which pointed out, in a way slightly different from the way that Christopher Fuchs did, that if I want to talk about the quantum state of B, I have to say what the state of A is, and basically how A is describing B.
Free Will
GUNNAR: It seems like this view strongly contradicts the way we’re usually taught to think about physics—at least the way I was taught to think about it in school. Usually we’re asked to imagine these definite physical objects with objective properties, and then we should think of these objects sort of “bouncing around” like billiard balls according to the laws of physics. So you get this linear cause-and-effect relationship between “stuff out there”. But in this relational picture, it sounds almost like we should think of physical systems less as “stuff bouncing around” and more like minimal agents of some kind making “decisions” based on the information they possess. In other words, we are not just talking about things affecting each other mechanistically; rather, we are talking about agents interacting, and through that interaction, they share some information, which then results in them taking some kind of action based on that information.
CHRIS FIELDS: Yes, and again, there’s a beautiful, 20-year-old paper by Conway and Kochen, actually a series of two papers, called the Free Will Theorem papers that are relevant here. The first paper is somewhat heuristic, where they basically set up a thought experiment in which you can imagine a physicist interacting with a particle in an experimental setting. What they showed is that if you have this situation where two systems are interacting and one of those systems, presumably the physicist, can be regarded as not being controlled by the causal forces local to it (which means everything in its past light cone or everything that could causally impact it), then the other system can’t be regarded as being causally determined either. So this first theorem basically says: If anything has free will, everything does.
The second paper is a little bit more formal, and it cashes out this notion of observation or interaction a little more carefully. And it basically says that under any physically reasonable notion of what observation means, everything has free will. So in our universe, no thing’s behavior is completely determined by the events in its past light cone. At the end of the first paper, they are answering questions from critics, and the last rhetorical question is: “So are you guys saying that this really applies to electrons?” And Conway and Kochen’s answer is “Yes”! So they were very open-minded and honest in this paper; they basically said: Look, this is what the formalism says, and we’re sticking with it.

GUNNAR: OK, so from this perspective, all physical systems—even elementary particles—can be thought of as observers and perhaps even as “agents” with some minimal free will. But I assume there must still be crucial differences between, say, a particle and a person, or a particle and a cell. For example, I was just listening to a conversation you had with Michael Levin and Joscha Bach, and I may have misunderstood what he was saying, but I believe Bach was making some argument that the cell is the smallest structure that we might say has some kind of agency or cognition. I might have that wrong, but he was at least saying that something very significant changes once you get to a system like a cell.
CHRIS FIELDS: If I remember right, he was talking about having the resources to implement interesting computations across time, and in a sense being a kind of computationally closed system, or what Maturana and Varela would have called an “autopoietic” system—meaning a system that keeps itself going through time.
Now, the physics I’m talking about does raise a number of questions about that, some of which have to do with scale. As the energy scale goes up, the behavior of even elementary particles starts to become very interesting. The Feynman diagrams you need to describe what’s going on get bigger and more complicated as the energy scale goes up. So one way of thinking about it may be that there’s a whole lot more going on than we know about, but the cell is the smallest thing that we know of so far that seems like an autopoietic system, that actually keeps itself going through time.
But then that raises the question: how do we know what counts as a system? And that question starts to get into the topic which I’m centrally interested in, which is: How do systems go about characterizing their environments? And that question is very difficult, in a sense, because these systems are bounded by Markov blankets that they can’t see through. So, if I’m a system, what I’m really seeing is a play of bits on my boundary. And I have to come up with a theory of what’s going on in my environment based on what those bits are doing. That’s the “generative model”, in Karl’s language.
GUNNAR: Yes, and I suppose then the problem is how to understand what kind of world really exists beyond the boundary. Donald Hoffman, I believe, talks about this idea of us having something like a dashboard interface where we’re receiving all this information about the world, but that information is not the world itself. The metaphor he gives is that of a pilot flying a plane, and they have a dashboard with all these dials reporting measurements about the sky outside the cockpit, thus allowing the pilot to effectively navigate, even when they can’t actually see their environment clearly. His argument is that our perceptions are akin to that, and that our sensory-perception system has evolved to help us survive, but not necessarily to show us the true nature of reality.
CHRIS FIELDS: Yeah, Hoffman’s notion of an interface is essentially the same as Karl’s notion of a Markov blanket, or the notion of a holographic screen from physics. This is another deep principle from the 90s, first enunciated by Gerard ’t Hooft. But again, as an idea it can be traced back into the 19th century, not in precisely the same form, but in a similar form. And it’s the idea that if you have a system, and it has a boundary, then the most you can know about that system is the information that can be encoded on its boundary. What the holographic principle adds to that general idea is that the amount of information that can be encoded on the boundary is finite. It adds a combination of ideas from relativity and quantum theory that basically say: if some information is flowing out of a boundary, it’s got to have a physical carrier, and that physical carrier has to get through the boundary. So it essentially asks: What’s the smallest dimension on the boundary that one photon can get through? And the smallest dimension is the Planck area, which is ludicrously small, something like 10 to the minus 70th meter squared. But that’s still a finite number. So the holographic principle says not only that the information available about a system is just the information encoded on its boundary, but that this information is finite. So what it adds to the Markov blanket notion, or to Hoffman’s interface theory, is that those interfaces have finite coding capacity.
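SIDE NOTE: The Planck area is ℏG/c³. A quick numerical check of the “10 to the minus 70th” figure, plus a rough illustration of the finite coding capacity Fields mentions, assuming the Bekenstein–Hawking form of the holographic bound (at most one bit per 4 ln 2 Planck areas):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_area = hbar * G / c**3
print(f"Planck area: {planck_area:.2e} m^2")  # ~2.61e-70 m^2

# Bekenstein-Hawking bound: at most one bit per 4*ln(2) Planck areas.
area = 1.0  # a one-square-metre patch of boundary
max_bits = area / (4 * math.log(2) * planck_area)
print(f"Max bits on 1 m^2 of boundary: {max_bits:.2e}")  # ~1.38e69 bits
```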

GUNNAR: I wonder if this is related to something Karl Friston told me about the Free Energy Principle. He explained that the reason why systems can’t share all of their information across boundaries is because, if they did, then this would be equivalent to those systems dissolving into each other and therefore “dying”, so to speak.
CHRIS FIELDS: Yeah, in the Free Energy Principle, a system can only continue to exist if it has internal states. So it has to have states that its environment can’t interact with directly. We can translate that into quantum theoretic language: A system can only exist as an entity if it’s not entangled with its environment. You can’t have the environment reaching in and messing with your internal states. And from a heuristic point of view, the system is using those internal states to do computations, and to figure out what its environment is doing and to run its generative model. If the environment is reaching in and messing with those states, then they’re not available as a computational resource to the system. So all of these ideas hold together, even though they’ve been stated in these many different ways, they’re intertranslatable.
GUNNAR: And is this kind of computation that you’re talking about the same as what Joscha Bach was saying is special about the autopoietic capacity of cells?
CHRIS FIELDS: Not quite. I mean, certainly systems that are smaller than cells are doing computations of this kind. Molecules are doing computations, even though they can’t rebuild themselves out of their own components. They need other parts of the cell to do that. My laptop does lots of computations, but it is not an autopoietic system. So those two are distinct ideas.
To back up a little bit, the question becomes: how do we identify autopoietic systems? Which is part of the question of how we identify systems at all and then characterize them. What are the components of the generative models that we use, or that potentially any system could use, to look at the finite bit strings on its boundary, factor them into descriptions of distinct systems, and then construct theories that render some of those systems autopoietic? So here we’re essentially asking: How does any system do science, and what kind of science can different systems do? So what can an E. coli understand about its environment? What can a paramecium understand about its environment, etc, etc?
Scaling Upwards
GUNNAR: That brings me to a question I’ve been increasingly curious about, which is: What about systems that we humans are embedded in, like say societies or ecosystems? Because under the traditional view that I’m familiar with, higher-scale systems like societies and ecosystems are not “real” in the same sense that organisms, molecules, and particles are. They are less like real, physical systems, and more like useful abstract summaries of the behaviours of lots of individual “real” systems. Under this view, a society wouldn’t have any kind of information processing capability or agency. But it seems to me that under the kind of physics you are endorsing, that it is actually reasonable to consider these higher-scale systems as “real things” that follow some of the same dynamics as their components.
CHRIS FIELDS: Right, so the nice thing about the Free Energy Principle and about much of this physics is that it’s entirely scale free. It doesn’t make any assumptions about how complicated or how big the system that one’s talking about is. So why not talk about ecosystems or something like that and ask: What can an ecosystem know about its environment, which is other ecosystems, plus the rest of the planet, plus the rest of the universe? And what are its computational capabilities? And how are those implemented by whatever subcomponents it has, which consist of organisms? This way of thinking actually allows us to ask those questions, whereas they wouldn’t necessarily make much sense under more traditional physics.
GUNNAR: Do you have any thoughts on how the scale of a system is or isn’t tied to its cognitive or agential capability? Because when I was speaking to Michael Levin, he was talking about how just because a system is a scale higher than another, let’s say a society of human beings, that doesn’t mean that the society as a whole will be “more intelligent” than the individuals within it.

CHRIS FIELDS: Well, as you were just saying, it’s not clear that it is. We really don’t know. So when we look at an organism like me, for example, and compare me to my cells, there are certainly things that I can do that my cells can’t do, but there are lots of things my cells can do that I can’t do, right? So in a sense, any individual cell would be really dumb in my environment, but I’d be really dumb in its environment. So in talking about these things, I think it’s crucial to keep in mind: What is the environment of the system we’re talking about, and what kind of information is that environment sending to the system in question? And what kind of information is the system sending back to its environment? So if we think about the environment of a cell, it doesn’t bear any resemblance to my environment. My cells are seeing other cells, and they’re exchanging chemical messages and electrical messages and mechanical messages of various kinds. But their senses are completely different from mine. Their environment is asking a completely different kind of thing of them than my environment asks of me.
Nonetheless, we can still use the kind of agential or psychological language that we use for ourselves to talk about cells. So we can talk about ‘communication’, we can talk about ‘coercion’ or ‘deceptive signaling’, and other such things. And we do talk that way in biology. When discussing cancer or various other aspects of developmental biology, we talk about cells cooperating and competing. We even use economic language, like resource sharing, etc.
So when we ask about a society or an ecosystem, it’s hard even to describe what its environment is like: not just the computational capabilities of such a system, but its sensory and action capabilities, and what actions it is taking with respect to other systems at its scale. So we can think about the climate system constraining what we can do, and we can think about what we do affecting the climate system. But it’s harder for us to think about how the climate system interacts with other entities at its scale.
GUNNAR: It feels like we are only really beginning to find useful ways of asking these questions. Until now, disciplines like sociology have usually been seen as “soft sciences” because they concern themselves with higher-scale systems that are really difficult to characterize and “measure” in the way we can with lower-scale systems. And there seems to be an inherent difficulty there. For example, it seems inconceivable that an individual cell in my body could attain any conception that its behaviour is actually part of this larger being that is me. When I decide to reach out to grab this cup of tea and drink from it, no individual cell in my body has any idea that that’s going on, even though its actions make an inextricable contribution to my action. And so I wonder whether human beings have reached some sort of privileged position where we can have a rigorous science of higher-scale systems that we are ourselves embedded in, or is there some kind of “Gödel limit” to what we can know?
CHRIS FIELDS: I think the answer to that is, in a sense, “yes” and “yes”. I mean, we do have this amazing ability to imagine things and come up with theories. But all of those theories do have this kind of “Gödel limit”, right? There are lots and lots of undecidable questions, and the more complicated and sweeping the theory is, the more undecidable questions there are in it. So Gödel does come back to bite us over and over again and remind us that we can formulate questions that we can’t answer.
But we do have a very interesting capability to formulate questions. And that’s really what we’re talking about here. Can we construct imaginative theories of what it’s like to be the climate system or something like that? And the answer is “yes”. I mean, we can even write poetry about that. The question is, can we test those theories? Do we have any idea of how to actually make them empirical? And I think that’s an open question. It’s not clear that we do, it’s not clear that we don’t. It’s not clear that we’ve tried!
When we think about cognitive capacity, one of the things we’re thinking about is memory and communication bandwidth, and so on. And we’re certainly capable of constructing artifacts that have much better memory than we do, and much better communication bandwidth than we do. I mean, we’re using one of the latter right now! So it may well be that we can construct computational infrastructure that we can interact with in a way that produces theories or models or even tests that we couldn’t otherwise produce. Groups of humans need these sorts of extra resources to be able to do lots of interesting computational things. We got a big jump when we invented language, for example, and another big jump when we invented writing. I believe Einstein said something like: “My pencil and I are more clever than I.” So we have been able to create these external resources that we can use, in addition to what we have inherently, as it were. And I think that’s what’s happening with AI. I think that AI is actually a transition on the order of the transition to having language. It may change things about as much as language changed us, which was rather a lot.
We’re still in the early days of this way of thinking, but I do very much think that the tools and methods for investigating these kinds of questions more rigorously are coming. And I think what we’re seeing now is that the language that was invented for these “higher-scale” disciplines like economics and sociology is increasingly being used in biology, neuroscience, and physics to talk about smaller-scale systems. So we have a notion of agency in fundamental physics, and a notion of agency from biology, and then we have these notions of social agency that come out of disciplines like economics. And all those notions really are being merged now. I see them being merged in many, many places.
So one of the early effects of this may be a kind of top-down effect, where one starts to view the body, for example, as a society and a kind of economy. I mean, it clearly is an ecosystem when you think of it from a microbiome or holobiotic perspective. These are large-scale concepts being applied to medium-sized systems. So the story here, I think, is of an accelerating conceptual exchange, up and down these scale boundaries and what used to be disciplinary boundaries that hopefully are rapidly dissolving.
Chris Fields and his colleague James Glazebrook have published a new book: Distributed Information and Computation in Generic Quantum Systems. The book offers a detailed account of the kind of physics discussed here, and how it applies to topics like artificial intelligence, biology, and neuroscience.
You can read more about Fields’ work on his website. He also has a whole course on ‘Physics as Information Processing’ available for free on YouTube here.