
Emergent Causation

Created/Modified: 2019-02-12/2019-02-13

The idea of emergent causation is not new, but defining a precise measure of it is.

Parts of the critique of reductionism are overblown. Reductionism is a bet, not a dogma, as in: "I bet I can explain all this in terms of the bits!" There may be a few people who treat it dogmatically, but neither dogmas nor the people who hold them are very interesting.

I'm a reductionist--which means I take the reductionist bet whenever I'm faced with a new problem--but I've never been convinced by the argument that because atoms behave deterministically (they don't), humans can't have free will (they do.) For a start, this strikes me as the fallacy of composition, which is the claim that because the constituents of a thing have a property, that thing must have that property.

For example: people are made of atoms, which are tiny, therefore people are tiny.

More interestingly: electronic logic components like AND and OR gates have outputs that instantaneously reflect their inputs, but if we wire them together appropriately we get a circuit that has memory (this was pointed out to me many years ago by Adam V. Reed, an electrical engineer, who generously commented it was "simple enough for a physicist to understand").

You can study NAND gates until you're purple, and you won't find any evidence of memory-like behaviour. Memory emerges from a particular configuration.
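
A minimal sketch in Python of the standard circuit that does this trick: an SR latch made of two cross-coupled NAND gates. (I'm assuming a latch is the sort of circuit Reed had in mind; any feedback arrangement would make the point.) Each gate is a pure, memoryless function of its inputs; the memory lives entirely in the wiring:

```python
# An SR latch built from two cross-coupled NAND gates. Each gate is
# memoryless: its output is a pure function of its current inputs.
# The memory emerges from the feedback wiring alone.

def nand(a, b):
    """A single NAND gate: no state anywhere in here."""
    return 0 if (a and b) else 1

def settle(s_bar, r_bar, q, q_bar):
    """Propagate signals around the feedback loop until stable."""
    while True:
        new_q, new_q_bar = nand(s_bar, q_bar), nand(r_bar, q)
        if (new_q, new_q_bar) == (q, q_bar):
            return q, q_bar
        q, q_bar = new_q, new_q_bar

q, q_bar = 0, 1                     # some initial state
q, q_bar = settle(0, 1, q, q_bar)   # pulse Set (active low): Q -> 1
q, q_bar = settle(1, 1, q, q_bar)   # inputs idle: the latch *remembers*
print(q, q_bar)                     # (1, 0)
q, q_bar = settle(1, 0, q, q_bar)   # pulse Reset: Q -> 0
q, q_bar = settle(1, 1, q, q_bar)   # idle again: remembers the reset
print(q, q_bar)                     # (0, 1)
```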

So the notion of emergent causation is not new.

On the other hand, the notion of a measure of emergent causation--the idea of "effective information"--is new, and potentially interesting.

The idea is disarmingly simple, if I'm understanding it correctly: there is a scale at which the ratio of (predictive power)/(information required) is maximized. We could predict the behaviour of a duck, for example, by knowing the positions and momenta of all the particles that make it up. We would write down the Hamiltonian and evolve it forward, and lo and behold we would know what the duck would be doing some time later. Or we could look at a much larger scale and notice the duck was floating adjacent to a tasty bit of duckweed, and using that vastly smaller amount of information predict what the duck would be doing some time later, albeit with considerably less accuracy.

At an intermediate scale--the scale at which consciousness emerges, supposedly--we could look at the activation potentials of all the neurons in the duck's brain and make a prediction that was both very accurate in terms of the duck's future physical disposition and required much less information than knowing the whereabouts of every atom in the duck. That's the scale at which "effective information" is maximized.
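
If I have the measure right, a minimal sketch of the computation looks like this: intervene on the system with a uniform distribution over its states, and ask how much information the transitions then carry. The toy transition matrices below are my own invention, chosen so the coarse-grained description carries more effective information than the fine-grained one it is built from:

```python
import numpy as np

def effective_information(tpm):
    """Effective information of a transition probability matrix (row i =
    distribution over next states given current state i): intervene with
    a uniform distribution over current states, and measure the average
    KL divergence (in bits) of each row from the mean row."""
    mean_row = tpm.mean(axis=0)
    kl = [sum(p * np.log2(p / q) for p, q in zip(row, mean_row) if p > 0)
          for row in tpm]
    return float(np.mean(kl))

# Micro scale: states A, B, C shuffle randomly among themselves; D is fixed.
micro = np.array([[1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])

# Macro scale: lump {A, B, C} into one state; the dynamics become
# deterministic.
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bits: more at the coarser scale
```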

There are a couple of things I have problems with, though.

Inferring ontology from causality is a tricky business, and a highly contingent one.

Entities are parts of reality a knowing subject--me or you, say--has drawn an edge around by an act of selective attention. They are useful tools that a being like us can use to understand the world. This does not make them ontologically significant: epistemic utility is all they need to have. Nor do they need an especially crisp definition, because they are useful, not "really real" in that special way that a certain type of ontologist wants to ascribe to some parts of reality.

When it comes to the question of "what is real?", my own position is the antithesis of John Lennon's, who famously said, "Nothing is real." He was wrong.

Everything is real.

But some aspects of reality, and some ways of drawing edges around parts of reality, are more useful--not more real--than others.

Furthermore, predictivity does not imply causality. Geocentric epicycles were a perfectly adequate predictive mechanism, but they completely missed the boat on causality. The entities one would impute on the basis of epicycles didn't exist, which is to say, there was no aspect of reality that could be used by anyone not aware of epicycle theory to draw lines around parts of reality that epicycle theory labelled "celestial spheres".

To validate the causal imputation of a theory one must find an independent way of drawing an edge around the entity that is supposed to be doing the causing. Call this process "independent edgification". Otherwise the entity is purely an artifact of the theory, not of independent reality.

If you took a space ship out and crashed it into a celestial sphere, that would justify imputing causation: "The motions of the wandering stars are caused by the celestial spheres to which they are attached, and into which I have just crashed my spaceship. Please send help." Absent the possibility of independent edgification, ontological claims are speculative at best, unwarranted--and probably wrong--at worst.

This is why we keep looking when we impute an entity that causes an observed phenomenon. It isn't enough for us to say, "Baryonic dark matter causes the anomalous rotation curves of spiral galaxies", or "Non-baryonic dark matter causes the anomalous motions of galaxy clusters." In each case we have an obligation to go out and hunt the snark we are claiming as a cause, and to bring back pictures, or a carcass, or casts of footprints, or something else: some independent evidence that it exists. Otherwise we're engaged in the time-honoured but epistemically sterile business of making up just-so stories, and three hundred years after Newton we'd look pretty foolish claiming the stories we make up are especially likely to approximate reality.

So it is interesting that effective information is--like epicycles--a predictive convenience. Does that mean it has causal influence?

The Law of Causality

The law of causality, as I mean it, says, "What a thing now is causes what it does now." The temporal aspect is important when dealing with non-local phenomena, but not very interesting in the present case.

What matters is that when we see something happen--the swirl of a snowflake through the air, for example--we say something like, "The snowflake has mass so it falls, and its area and surface properties are such that the currents of air that it passes through exert enough force to deflect its simple downward motion into a complex dance."

The mass of the snowflake and the forces it feels are the cause of its motion.
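
A toy two-dimensional model makes the causal claim concrete: what the flake is (its mass, and its coupling to the air through area and roughness) fixes the forces, and the forces fix the motion. All the numbers here are invented for illustration, not calibrated to real snow:

```python
import random

# A snowflake falling through gusty air. What the flake *is* (mass, area,
# roughness--summarized here in one drag-per-unit-mass constant) determines
# the forces it feels; the forces determine the dance.
g = 9.8        # gravity, m/s^2
beta = 5.0     # drag per unit mass; stands in for area and roughness, 1/s
dt = 0.01      # time step, s
x, y, vx, vy = 0.0, 10.0, 0.0, 0.0

while y > 0.0:
    wind = random.gauss(0.0, 0.5)    # random horizontal gust, m/s
    ax = beta * (wind - vx)          # drag pulls the flake toward the air's motion
    ay = -g - beta * vy              # gravity, plus vertical drag
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"landed at x = {x:+.2f} m, deflected from a simple vertical fall")
```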

Because of Newton's third law--the law of reaction, which says that for every action there is an equal and opposite reaction--we know that if an entity acts, it must act upon something. Otherwise, there would be no equal and opposite reaction, which would violate Newton's third law.

This is one of the great engines of inference in Newtonian physics: every time we see something act, we know it must have acted upon something else. When we find that "something else" we have found another entity with its own nature (its own "is-ness") and its own actions ("does-ness").

What a thing is causes what it does.
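
Here's a minimal numerical sketch of that inference engine at work, with arbitrary made-up masses: compute the gravitational force on each of two bodies independently, and the forces come out equal and opposite at every step, so total momentum is conserved:

```python
# Two bodies interacting gravitationally in one dimension. The force on
# each is computed independently, yet at every step the two forces are
# equal and opposite--Newton's third law--so total momentum is conserved.
G = 6.674e-11                        # gravitational constant, N m^2/kg^2

def grav_force(m_on, x_on, m_from, x_from):
    """1-D gravitational force on the first body from the second."""
    r = x_from - x_on
    return (G * m_on * m_from / r**2) * (1 if r > 0 else -1)

m1, x1, v1 = 5.0, 0.0, 0.0           # arbitrary masses and positions
m2, x2, v2 = 3.0, 10.0, 0.0
dt = 1.0

for _ in range(1000):
    f1 = grav_force(m1, x1, m2, x2)
    f2 = grav_force(m2, x2, m1, x1)
    assert abs(f1 + f2) < 1e-20      # equal and opposite, every single step
    v1, x1 = v1 + f1 / m1 * dt, x1 + v1 * dt
    v2, x2 = v2 + f2 / m2 * dt, x2 + v2 * dt

print(m1 * v1 + m2 * v2)             # total momentum: still (numerically) zero
```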

All this breaks down when we talk about quantum mechanics, where atoms decay, for example, without (re-)acting on anything. That's a topic for another time.

What matters for now is that imputing causation is much easier when something like the law of reaction is in play.

What does it mean to say "The effective information of this bundle of neurons causes the person's hand to rise"?

Is there a way we can put a point on that question that focuses on causality rather than explanation?

The Law of Explanation

No one ever talks about the law of explanation, but it is the complement of the law of causality: what a thing is explains what it does.

The difference is one of perspective: causes are concrete, explanations are abstract.

The mass, area, and surface roughness of that particular snowflake cause it to move as it does in the wind.

Newton's second law, gravity, and fluid mechanics explain the motion of any given snowflake.

Causes are concrete instances of explanations, acting in specific circumstances.

Explanations are abstractions of causes into general claims about the properties of things (what they are) and how those things relate to actions (what they do.)

Aside: the problem of induction in this view reduces to the problem of coming up with definitions of is-ness that can be used to explain does-ness in a non-contradictory way, where "non-contradictory" covers both internal (self-consistent) and external (consistent with other known facts) domains. We search for general properties that maximize explanatory power, which might have something to do with effective information, or not.

In any case, causes are specific and concrete; explanations are general and abstract:

"The law of universal gravitation" is the explanation of a the motion of the Earth about the Sun.

"The sun's gravitational mass" is the cause of the Earth's motion about the Sun.

The explanatory statement is justified by the causal one. We get to the explanatory statement by understanding the causes, first. Going backward doesn't work so well, as epicycles (and phlogiston, and for all we know "dark matter") demonstrate.

Newton saw an apple fall and wondered why it accelerated toward the centre of the Earth instead of taking off sideways, or in the direction of Toronto. He came to realize that if the mass of the apple was attracted by the mass of the Earth, that would cause the motion of the apple. And if masses attracted each other, that would cause all kinds of motions, from the tides to the orbits of the planets and the other celestial bodies, like comets. Explanation followed.

The account around effective information is going in the opposite direction: it notices we can explain something, and wonders if there is a cause underlying that explanation.

Ontology

Macrostates can't exist without microstates.

In thermodynamics, macrostates are lumps of matter. They have properties like mass and temperature and heat content.

Microstates are the positions and velocities of all the particles that make up the lump of matter. Each particle has a mass, but no particle has a temperature, because a temperature is a distribution of velocities, not a velocity. Particles have kinetic energy, but not heat energy, because heat is a disordered (information-free) collection of particles with a particular distribution of kinetic energies.

In a deterministic system, the specific microstate at any moment determines the microstate at any future moment. And if we know the full description of the microstate we know the full description of the macrostate.

The converse is not true: a lump of matter at a given temperature can have any number of possible microstates, and we could swap one for another without noticing, so long as we didn't look.

A macrostate is ontologically reducible to both a specific microstate (the one it is actually in) and a large but finite number of possible microstates (the ones it could be in without violating the macroscopic description.)
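
A toy demonstration of that many-to-one relationship, assuming an ideal gas of non-interacting particles: draw several completely different microstates from the Maxwell-Boltzmann distribution at the same temperature, and the macrostate comes out the same every time:

```python
import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
m = 4.65e-26           # mass of a nitrogen molecule, kg
T, N = 300.0, 100_000  # temperature and particle count

rng = np.random.default_rng(0)
sigma = np.sqrt(k_B * T / m)   # Maxwell-Boltzmann: each velocity component
                               # is Gaussian with this standard deviation

for trial in range(3):
    v = rng.normal(0.0, sigma, size=(N, 3))   # a fresh microstate: N velocities
    # The macrostate implied by this microstate: temperature from the mean
    # kinetic energy. Different numbers every trial, same temperature.
    T_est = m * np.mean(np.sum(v**2, axis=1)) / (3 * k_B)
    print(f"microstate {trial}: T = {T_est:.1f} K")
```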

Ontological purists will likely not be comfortable with that claim: they will say, "The macrostate must have a specific ontology, a single microstate."

How come?

We impute causality to macrostates: we say, "The high temperature of my coffee caused its heat to flow out into the surrounding room."

That is a causal claim.

There's no mention of microstates or the statistics of Newtonian collisions integrated over Maxwell-Boltzmann distributions.

We say the macrostate caused the way it changed, in perfectly ordinary causal language.

No matter what microstate the macrostate happens to be in at the start, it will be in the same macrostate at the end: cold.
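
And the macro-level story is computationally self-contained. Here is Newton's law of cooling integrated forward with nothing but macrostates in sight (the rate constant is invented for illustration):

```python
# Newton's law of cooling, dT/dt = -k (T - T_room), integrated forward.
# Macrostates in, macrostates out: no microstate appears anywhere.
T_room = 20.0                  # deg C
k, dt = 0.05, 1.0              # invented rate constant (1/min), time step (min)

for T0 in (90.0, 70.0, 50.0):  # coffees starting in different macrostates
    T = T0
    for _ in range(200):       # two hundred minutes later...
        T += -k * (T - T_room) * dt
    print(f"start {T0:.0f} C -> end {T:.1f} C")   # ...all end cold, near 20 C
```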

"Temperature" in physics is treated as having the same ontology as "velocity". Simply because it's an emergent property doesn't make it any less (or more) real.

Does considering the scale at which effective information is maximized add anything to this?

It isn't obvious to me how it does, but I've got an inkling it might.

One further observation: there are microstates accessible via one kind of macrostate that are inaccessible to the same macrostate considered as a thing of a different kind.

For example, there are any number of thermodynamic microstates that fulfil the temperature constraint on a duck, but many of them do not fulfil the constraints imposed by the duck's physiology. Thermodynamics treats the duck as a lump of moderately complex matter: an "ugly bag of mostly water" according to some Star Trek TNG creature. Physiology treats the duck as a product of evolution. These put quite different constraints on the future state of the duck, and a thermodynamicist would be at a loss to explain why a large number of potential microstates are never observed, given the temperature of the duck at time t=0.

Supervening laws--emergent laws, biological laws, psychological laws, moral laws--violate the ergodic hypothesis, which says that, given enough time, thermodynamic systems explore all the microstates consistent with a given macrostate.
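
A toy counting exercise makes the point; both constraints below are invented for illustration. Fix a total energy (the thermodynamic macrostate), enumerate the microstates consistent with it, then impose an extra "physiological" constraint and count what survives:

```python
from itertools import product

# Toy system: 8 sites, each holding 0 or 1 units of energy, total fixed at 4.
# The thermodynamic macrostate (total energy) allows every arrangement.
N, E_total = 8, 4
thermo = [s for s in product((0, 1), repeat=N) if sum(s) == E_total]

# An invented "physiological" constraint--no two adjacent excited sites--
# standing in for the organization evolution imposes and thermodynamics
# knows nothing about.
alive = [s for s in thermo
         if not any(a == b == 1 for a, b in zip(s, s[1:]))]

print(len(thermo), "microstates consistent with the macrostate")   # 70
print(len(alive), "of them also satisfy the physiological law")    # 5
```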

This might be an interesting avenue of exploration for anyone curious about free will, which in fact I am.
