My current blog is epistem.ink. This one is here just for archival purposes.

# Determining determinism is indeterminable

I enjoy listening to a few philosophy podcasts, browsing through the Stanford Encyclopedia, and I even have a few favorite contemporary philosophers whose work I enjoy reading.

My discomfort with philosophy usually arises when scientific knowledge ends up being the turning point of a discussion, since said knowledge often seems misinterpreted. However, this usually ends up being unproblematic, and I'm often not an expert in the subject matter either, so I let it slide.

However, there is one subject that makes me grunt and pull my hair whenever I hear it discussed: determinism. I strongly believe that determinism would be viewed as a childish concept not worth discussing if most people had a summary understanding of skepticism, Turing machines (or computing machines in general), and complexity theory.

So my goal with this article is to convince you, dear reader, that you should never again use the word "determinism" or "deterministic" outside of narrow engineering or scientific discussions. You should certainly never use it when talking about a human brain or the universe itself.

## I - What is determinism

To figure out that event `X` happening at time `t` was deterministic, we must be able to tell, with data collected at a time before `t`, that `X` will happen.

Ideally, in order not to fall into a data contamination or overfitting trap, this prediction itself should happen before time `t`. But in a laxer world, it's fine if we infer that `X` will happen post-factum, as long as we are only using the knowledge we had at a time before `t`.

If this is possible, then `X` is deterministic: its happening at time `t` the way it did was dictated by prior causes.
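To make this concrete, here is a toy sketch in Python of that operational reading (the function names and the mod-7 "event" are my own illustrative inventions, not anything standard): an event counts as "determined" only if a model fed nothing but pre-`t` data reproduces it.

```python
import random

def is_determined(model, prior_data, observed_x):
    # X counts as "determined" if a model, given only data
    # collected before time t, correctly predicts X.
    return model(prior_data) == observed_x

# A toy lawful process: the event is a pure function of prior causes.
def lawful_event(data):
    return sum(data) % 7

# A toy lawless process: the event ignores prior causes entirely,
# so no model built from prior data can reliably pin it down.
def lawless_event(data):
    return random.randrange(7)

prior = [3, 1, 4, 1, 5, 9]
print(is_determined(lawful_event, prior, lawful_event(prior)))  # True
```

The lawless case is the point: until someone exhibits a model that passes this check, we simply don't know which kind of process we are looking at.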

If this is not possible, we have no way of knowing that `X` happened because of prior causes. We can assume `X` happened due to prior causes because it's a useful model of the world that has brought us many scientific advancements, but we don't have a guarantee.

Maybe `X` is just something that randomly happens now and then, completely unrelated to the rest of the universe. Maybe some incomprehensible entity that we can't observe, for which the concept of causality or correlation makes no sense, brought about `X`. Or, to put it more plainly, maybe events like `X` are a special class of events that completely escape our current framework of thinking about the world.

The only way to be sure this is not the case is to determine that `X` is going to happen by examining what we think are its prior causes. This usually involves building a mathematical model that predicts `X`, often one running on a computer.

Now, determinism is obviously a question of scale. If we go down to the smallest level postulated to be worth talking about, that of the particles of the Standard Model, determinism is arguably undefined.

But I don't understand physics well enough to bring it into this, so let's stay at the level where causality makes sense.

## II - Most systems are not proven to be deterministic

I often hear people refer to things like human behavior as being deterministic, and all I can think of is:

When the hell did you travel 200 years into the future!?

Most things, especially complex systems, are not deterministic. I can, for example, be fairly certain that something like human behavior is a system way too complex to be proven deterministic anytime soon, if ever.

For one, the data gathering required to make inferences about its behavior is in the realm of science fiction. We would need to know the state of every single neuronal body, axon, microglia, and nm^3 of fluid in the brain at a given moment.

Furthermore, we would need to record exactly what that particular human is hearing, seeing, smelling, and touching at that given moment. And we need a perfect model of their whole body in order to infer the signals it will send to their brain.

This data is arguably impossible to gather without killing the very human we want to determine the behavior of (thus never being able to confirm that our predictions about their behavior are true, thus leaving it undetermined).

But this argument involves a huge amount of skepticism, and it could be applied to even smaller systems, to the point where it can prove determinism is a faulty concept. So let's say that 99.9% accuracy on a prediction of behavior is the same as "determining" someone's behavior; let's allow causality to have some error built in. Now we might be able to predict the human's behavior with only a fraction of the above data, since we are allowing for a margin of error on the prediction... could we?

I don't know, but I think it's worthwhile thinking about systems that we can't predict the behavior of:

* Given exact information about the shape and composition of a fluid being poured into a cup at moment `t` (during the pouring), we can't, even provided immense computing power, determine the shape of that fluid at moment `t+1` with high precision (say, < 1nm positional error for every 5nm^2 "patch" on its surface). Fluid dynamics is difficult.
* Given 10 balls in motion in a 0-G setting, all of their masses at most a few orders of magnitude different from one another, it's impossible to determine the movement of all the balls outside of a few specific edge cases (see the n-body problem). This is a trivial problem: we aren't talking about countless trillions of data points, as is the case with predicting a brain's behavior. We are talking about 70 data points (initial position, velocity, and mass for each body).
* Given the most common household activities you did today, if you isolate a part of the activity and try to model it (using knowledge about the external forces), you'd likely fall flat trying to predict: the flicker of a tea candle, the way toilet paper swirls in the air and breaks when you pull on it, the way your shirt folds upon your body when you slip into it, the forces your foot's motion exerts upon the fabric of your sock.

It "feels" like all of the above systems ought to be deterministic. But other than prior experience with a few systems that we spent a lot of time observing being deterministic, we have no evidence of that. We certainly don't have 100% certainty. But... maybe we have 99% certainty? 99.99%?
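The n-body case needs a physics integrator to demonstrate, but the same phenomenon, sensitive dependence on initial conditions, shows up in a one-line system: the logistic map at `r = 4`, a standard chaotic toy model (this sketch is my own illustration, not a simulation of any system from the list above).

```python
def logistic(x, r=4.0):
    # One step of the logistic map, a textbook chaotic system.
    return r * x * (1.0 - x)

# Two initial conditions differing by one part in ten billion.
a, b = 0.2, 0.2 + 1e-10
max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The error roughly doubles every step, so within a few dozen steps
# the two trajectories have typically diverged to a macroscopic distance.
print(max_gap)
```

Knowing the initial state to ten decimal places still buys only a few dozen steps of prediction; exact-looking data for the 10 balls runs into the same wall.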

However, this certainty surely must decrease as we advance to ever more complex systems. We can talk about the way a cup breaks on a fall as being deterministic despite *not* knowing how to make predictions about it. It's similar enough to things we have concluded with certainty are deterministic (by being able to "determine" them).

But can we talk about systems that are potentially trillions of times more complex than the ones above as being deterministic? I think not; it doesn't seem like a reasonable logical leap. Surely we must prove some systems of equal complexity are deterministic, or at least "kind of deterministic", before making that assumption. Otherwise, we are in the realm of "faith" and "belief", not of rational inference and observation.

## III - The universe is certainly *not* deterministic

Leaving aside the issues with making sense of determinism in a world where general relativity applies, we can still intuit that the universe is almost certainly not deterministic, even in a world where time is the same everywhere.

Think of it this way: we've got all the information needed to describe the totality of matter at time `t`, and we want to determine what this information will be at `t+1`.

Part of this matter must then be turned into a computing machine that can determine the information describing all matter at `t+1`. Now, there are a few cases here, and all of them make no sense:

#### 1. Determining future events, where the difference between t and t+1 is the smallest quantum of time possible

In this case, the problem is moot, since by the time the computation is done, we'd already be at time `t+1`. The smallest possible time difference between `t` and `t+1` is required for time to have passed at all, and thus for a computation to have been made.

#### 2. Determining future events, where t+1 is far away in the future compared to t.

In this case, the problem requires some basic understanding of computational theory to "get", but this is easy enough to explain on an intuitive level.

Think about a computer trying to "determine" its own state at a future time `t+n`, starting at time `t`, where `t+n` is `n` states ahead of `t`. A "state" being conveniently defined to last the smallest unit of time, which is also the time it takes for our magical computer to execute a single instruction (thus potentially changing its internal state).

The only generic way to do it is for the computer to "model" its own behavior in the future.

Thus, we end up with the computer modeling step `t+1`, then, based on that, modeling `t+2`... and so on, until it ends up modeling `t+n`. But by the time the computer "models" step `t+n`, it has actually reached the point in time where `t+n` happens, thus collapsing things to case 1).
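A minimal sketch of this in Python (entirely my own construction, with an arbitrary toy update rule): the only generic way to learn the machine's state `n` steps ahead is to apply its step function `n` times, so the prediction costs exactly as many steps as simply waiting for the future to arrive.

```python
def step(state):
    # One "instruction" of our toy machine: an arbitrary update rule.
    return (state * 31 + 7) % 1000

def predict(state, n):
    # Generic prediction: simulate every intermediate state.
    # The work counter shows the predictor performs exactly n
    # step-evaluations - it finishes no sooner than the machine itself.
    work = 0
    for _ in range(n):
        state = step(state)
        work += 1
    return state, work

final_state, cost = predict(42, 1000)
print(cost)  # 1000 evaluations to "foresee" 1000 steps
```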

There are edge cases, in that there are programs with some periodicity built into them, where for a large enough `n` we could predict the state at `t+n` before we reach that point.

For example, imagine that after every instruction the computer has a second instruction to "wait in the same state for the amount of time needed to execute one instruction". Then we could reasonably concoct a way to predict the computer's behavior at `t+n` that only takes ~`n/2` time.
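A toy version of that edge case (again my own construction): a machine that alternates real instructions with wait-instructions can be out-raced by a simulator that simply skips the waits, so foreseeing machine-time `t+n` costs only ~`n/2` predictor steps.

```python
def compute(state):
    # The machine's real instruction (an arbitrary toy update rule).
    return (state * 31 + 7) % 1000

def run_machine(state, n):
    # The actual machine: every odd tick is "wait in the same state",
    # so only every other tick performs real work.
    for tick in range(n):
        if tick % 2 == 0:
            state = compute(state)
    return state

def shortcut_predict(state, n):
    # A predictor exploiting the periodicity: it skips the waits,
    # needing only ceil(n/2) evaluations to know the state at t+n.
    work = 0
    for _ in range((n + 1) // 2):
        state = compute(state)
        work += 1
    return state, work

n = 1000
predicted, cost = shortcut_predict(7, n)
print(predicted == run_machine(7, n), cost)  # True 500
```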

But there is nothing that would make us expect that the universe is so kind to us as to model itself using rules that place its behavior in such an edge case.

Thus, even if every single thing in the universe were part of an entity (computer) meant to determine its own state at time `t+n`, there is no suggestion it would be able to do so before `n` time passes.

More broadly, if we don't want to turn the whole universe into a computer, the problem becomes even harder. We not only have to solve an extra-hard case of the halting problem (i.e., what I described above), but we also have to do so with enough computational resources left over to predict the behavior of all the things outside of our computer.

#### 3. Determining past events

But assume an even weaker version, where we want to determine the "state" of the universe at a time `t'`, which has already passed us by, using information from time `t''`, a time before `t'`. We end up with two problems:

We must store the information from time `t''` somewhere to run our model, and we must store the information from time `t'` somewhere to validate our predictions. Thus, within the universe, we must store information about at least two previous states of the universe. There is no intuition telling us that this is possible.

It might be that we can "compress" the state of the universe by some factor, but it might be that this is impossible, or that it is possible but the factor is smaller than 2.
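The storage problem is essentially the pigeonhole principle; a toy counting sketch (my own illustration, using a 4-bit "universe"): there are far more pairs of states than single states, so no encoding can squeeze two arbitrary full states into one state's worth of storage.

```python
from itertools import product

n = 4  # a miniature "universe" whose full state fits in n bits
states = [''.join(bits) for bits in product('01', repeat=n)]

# To validate a retrodiction we must hold TWO states (t'' and t'),
# i.e. 2n bits of information, inside our n-bit universe.
pairs = [(a, b) for a in states for b in states]
print(len(pairs), len(states))  # 256 16

# Any scheme mapping 256 distinct pairs onto 16 distinct records
# must, by pigeonhole, confuse at least two pairs - some history
# is necessarily lost unless states compress by a factor of 2.
```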

Furthermore, even if we can do that, the time it would take for our model to compute state `t'` from state `t''` would automatically have to be longer than the smallest possible quantum of time; otherwise, we are back at problem 1).

The only situation where a deterministic universe might make sense is one where the amount of "stuff" we can work with is constantly expanding. That is to say, we can store information and make predictions about times `t-n` and `t-n+1` at time `t`, because the universe consists of just `x` datapoints at `t-n`, but has `3*x` datapoints at `t-n+1` and `3^n * x` datapoints at `t`.

But this would only apply to case 3), which is the "weakest" possible way to prove determinism. For cases 1) and 2), you're still stuck with having to postulate that "predicting" the universe is not an NP-hard problem, even though most problems related to predicting the future internal state of a computer on that same computer are.

Essentially, by having to prove that the universe is deterministic within the universe itself, we almost certainly lack the resources to do so... since we must build computational machines that are part of the universe, by definition.

## IV - Intuitions about determinism

In other words, based on our current understanding of computation, it's very likely that the universe is *not* deterministic.

Going down further, we have no proof that most systems are deterministic. Keep in mind that the above problems of computation and data gathering apply to even smaller systems.

There is no way to "prove" that the universe would accommodate a computer powerful enough to predict even the most trivial of events, such as the way a waterfall flows. There is no way to "disprove" it either; that can only be done when talking about predicting the whole state of the universe.

But since no proof or disproof of determinism for almost any given system exists, I think that even with a degree of skepticism as tiny as that of a Kantian, one must conclude that we can't determine if most systems are deterministic.

Saying "the universe is deterministic" is arguably provably wrong, or at least very improbable; it requires some, dare I say, "miraculous" coincidences for it to be the case. So it bothers me when people use this as a premise; I think it shows a very poor grasp of the concept of determinism.

Saying "system X is deterministic" is not provably wrong, but it should be taken as a thought experiment and nothing more. One need not postulate gods or ghosts to have a universe that is not deterministic. The only thing we need to postulate is fundamental limitations to the human condition, which make certain systems too complex to understand for the most advanced tools we might ever hope to build.

*Note 1: This is more of a 30-minute stream-of-consciousness rant than a carefully pondered article, so the syllogisms used might have some obvious flaws I didn't notice. Feel free to point them out and I will try to fill in the gaps (or retract the article if my chain of logic just doesn't hold up to scrutiny).*

*Note 2: I'm kind of talking around general relativity, particle-level physics, and quantum computing. In part because I don't understand them well enough, and in part because, to the degree that I do understand them, bringing them into the discussion would even more easily invalidate the concept of determinism. But again, people who have an intuitive understanding of those concepts are probably not my target audience here. If you think that by ignoring them, or some other physics-related observations or generally accepted models, I'm missing an important "counter-argument" to my view, please point it out to me.*

*Note 3: About half of this article can be summarized as "the halting problem" and the other half as "the things David Hume said about causality". But I'm trying to distill them down a bit, since my target audience consists mainly of people who probably don't know much about either (or at least about one of them).*

If you enjoyed this article you might also consider reading Causality and its harms, "Gödel, Turing, and Friends", or the works of various skeptic philosophers (I'd recommend starting with summaries of what Hume wrote).

Published on: 2020-11-28