SCIENCE EATS ITS YOUNG
Let's start by talking about scientific literacy. I'm going to use a weak definition of scientific literacy, one that simply requires familiarity with the Baconian method of inquiry.
I don't want to place an exact number on this issue, but I'd wager the vast majority of the population of "educated" countries is scientifically illiterate.
I - The gravity of the issue
I first got a hint that this could be a real issue when I randomly started asking people about the theory of gravity. I find gravity to be interesting because it's not at all obvious. I don't think any of us would have been able to come up with the concept in Newton's shoes. Yet it is taught to people fairly early in school.
Interestingly enough, I found that most people were not only unaware of how Newton came up with the idea of gravity, but not even in the right ballpark. I think I can classify the mistakes made into three categories, which I'll illustrate with an answer each:
- The Science as Religion mistake: Something something, he saw apples falling towards earth, and then he wrote down the formula for gravity (?)
- The Aristotelian Science mistake: Well, he observed that objects of different mass fell towards Earth with the same speed, and from that he derived that objects attract each other. Ahm, wait, hmmm.
- The Lack of information mistake: Well, he observed something about the motion of the planets and the moon... and, presumably he estimated the mass of some, or, hmmm, no that can't be right, maybe he just assumed mass_sun >> mass_planet >> mass_moon and somehow he found that his formula accounted for the motion of the planets.
I should caveat this by saying I don't count mistake nr 3 as scientific illiteracy; most of us fall into that category most of the time. Ask me how gravity can be derived in principle and I might be able to make an educated guess, and maybe (once the observations are in) I could even derive it. But the chances of that are small: I probably wouldn't know exactly which information, measurable with 17th-century devices, I'd need, and I most certainly don't have it readily sitting in my brain.
It's mainly failure modes 1 and 2 that I'm interested in here.
II - And Science said: Let there be truth
I think failure mode 1 is best illustrated by the first YouTube result if you search for "how newton discovered gravity". This failure mode includes two mistakes:
- Not understanding the basis of the actual theory (in this case 'gravity' is presented as "objects fall towards Earth", rather than objects attracting each other in proportion to their masses and inversely with the square of the distance between them).
- Not understanding the idea of evidence as a generator of theory.
In this failure mode, science works more or less like religion. There's a clergy (researchers, teachers, engineers) and there are various holy texts (school manuals, papers, specialized books).
I think a good indication of this failure mode is that people stuck here don't seem to treat "what other humans in authority positions are saying" and "what we observe in the world" as having fundamentally different epistemic weight.
Good examples here are young-earth creationists, people who believe the Earth was created ~6000 years ago. Most of these people are obviously not scientists, but some are; a quick Google search brings up Duane Gish (Berkeley PhD) and Kurt Wise (professor at a no-name university in Georgia).
However, young-earth creationism is not the only unscientific belief system people hold; there are insane conspiracy theories aplenty, from vaccines being brainwashing mechanisms to 5G causing viral infections.
This kind of insanity is usually not represented among people affiliated with scientific or engineering institutions, but I'm unsure that's for the right reasons.
That is to say, assume you think of science as a religion. Your epistemology is based on what other people tell you; you weigh that by their social rank and thus derive what you hold as "truth".
Assume you are a doctor who falls into this category and 70% of your friends tell you "5G towers cause covid-19". Well, then, you could probably start believing that yourself. But keep in mind, it's not only the number of people that matters; status matters too. If the priest tells you about the word of God, that counts 100x as much as the village idiot telling you about the word of God.
Even with this context, if our good doctor's boss tells him "covid-19 infection is caused by an airborne coronavirus that passes from human to human via various bodily fluids dispersed in the air and on objects", then whatever the boss told him would carry enough status to make him settle his opinion on the more scientifically correct explanation.
The problem here is that our good doctor would be unable to come up with this explanation on his own, even as a hypothetical; he lacks the foundational epistemology required to understand how such answers can be derived.
Even worse, our doctor's boss could share his epistemology; all that would be needed is for her own boss to have told her the same thing, and she would have believed it in an instant.
This Science as a Religion worldview is likely sprinkled throughout the ranks of engineers and scientists. The reason we don't see it is that, for it to become obvious, the believer needs to start believing an obviously insane thing (e.g. young-earth creationism), and the chance of that happening is fairly low, since it would require his peers to also believe insane things.
As long as "correct" ideas are what he observes throughout his professional environment, then, unless he is socially inept, he will only hold correct ideas.
You would need to look at his research or question him on the scientific method or on his epistemology more broadly in order to spot this mistake. Sadly enough, I've yet to find a university that has "scientific epistemology" as a subject on the entrance exam or even as a graduation or employment requirement.
I won't speculate as to how many people who are called scientists and engineers fall into this failure mode. I think there's a gradient between this and failure mode nr 2.
However, it should be noted that this failure mode is unobvious until a new idea comes along. Then, the real scientists will assume it's probably false but judge it on its merit. The religious scientists will assume it's false because their peers haven't said it's right yet.
This is both an issue with new ideas proliferating and an issue with scientific consensus. Scientific consensus is valuable if you assume everyone polled reasoned their way through theory, independent research, and primary-source dissection to reach their conclusion.
In a world where 90% of scientists just assume that science works like a religion, a 96%-4% consensus is not a good indicator for implementing policy; it's an indicator that the few real scientists are almost evenly split on the correct solution.
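To spell out the arithmetic behind that claim, assuming for the sake of illustration that the 90% who treat science as religion all land on the majority side by default: out of every 100 scientists, 90 of the 96 endorsing the consensus are just echoing their peers, which leaves roughly 6 independent evaluations in favour against the 4 opposed, closer to a coin flip than to a settled question.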
This is bleak stuff. If most scientists understood science as a religion, the whole institution would be compromised. Not only would academia have to be thrown in the bin, but all evidence and theory produced over the last half-century would have to be carefully curated and replicated before it could be considered scientifically true.
Surface-level intuition wants me to think there's a significant probability this is the case in certain sub-fields. But my theory of mind, and the fact that science seems to keep progressing, tells me this is unlikely to be the case in relevant areas.
III - If there's a fit there's a way
In short, these are the people who don't understand why fitting a regression on all the data is different from using the same regression to determine correlation strength via cross-validation.
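As a minimal sketch of that distinction, here's a toy example in Python with NumPy and scikit-learn (my choice of tools, not anything from the article): the same linear regression appears to explain pure noise when scored on the data it was fit on, and the apparent relationship vanishes under cross-validation.

```python
# Fit a regression on all the data vs. score it via cross-validation.
# The data here is synthetic noise, purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))   # few samples, many irrelevant features
y = rng.normal(size=30)         # target with no real relationship to X

model = LinearRegression()

# In-sample R^2: the model partly "explains" the very noise it was fit on.
in_sample_r2 = model.fit(X, y).score(X, y)

# Cross-validated R^2: scored on held-out folds, the apparent fit disappears.
cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

print(f"R^2 on the training data: {in_sample_r2:.2f}")  # typically well above 0
print(f"Cross-validated R^2:      {cv_r2:.2f}")          # typically near or below 0
```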
I think most people and most scientists probably fall under the second failure mode: they are not Baconians or Popperians, but rather Aristotelians.
Aristotle understood the idea that we can observe the world and we can come up with theories about how it works based on observation.
What he lacked was a rigorous understanding of how observations should be undertaken. He was probably unaware of the idea of shared standards for experimental error and replication as the rules by which the validity of data can be compared.
He lacked an understanding of the language of probability that would have allowed him to formulate such experimental standards.
He lacked an understanding of falsifiability and Occam's razor; he didn't have a rigorous system for comparing competing theories.
In an Aristotelian framework, dropping 3 very heavy and well-lacquered balls towards Earth and seeing that they fall with a constant and equal acceleration, barring any wind, is enough to say F_G = G * m1 * m2 / r^2 is a true scientific theory. If things like the constant G, the mass of the ball, and the radius of the Earth are already known, then the Aristotelian has no issue declaring the theory correct. He needn't ask:
- Why do you assume this holds for all objects? After all, the only thing we have observed is three objects falling towards Earth. What's more, the balls are too light for us to observe this effect between them.
- Why can this equation not be simpler? I could simplify it to a single term if what you wished to describe is just the fall of objects towards the Earth, which is the only thing your experiment shows anyway (see the short derivation after this list).
- Why is dropping 3 balls enough to derive anything? Why are 2 not enough, why aren't 100 needed? Also, why is weight the property in question here and not some other property of the ball? Maybe it works for lead balls but not for copper balls?
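To make the second question above concrete, here is a short derivation, using nothing beyond the formula already quoted and Newton's second law. For a ball of mass m1 near the Earth's surface, combining F_G = G * m1 * m2 / r^2 with F = m1 * a gives a = G * m2 / r^2, where m2 is the Earth's mass and r its radius. The ball's own mass cancels, so the drop experiment only ever measures the single constant a ≈ 9.8 m/s^2; on its own it says nothing about whether the balls attract each other, or about how the force varies with distance.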
I will grant I might be straw-manning Aristotle here; he would have been able to ask some of those questions, he just didn't have a rigorous framework from which to derive them. He was working from Aristotelian logic and intuition.
This seems to be the kind of failure that most people fall into, and why wouldn't they? It's an intuitive spot to be in.
To exemplify the sentiment, let me quote a former director of the Yale-Griffin Prevention Research Center, an organization I picked only because I happened upon a pseudo-scientific article written by him:
But science was never intended to question the reliable answers we already had. Science can and should certainly invite us to question answers, too, but not all answers are subject to doubt.
The organization in question seems perfectly respectable; their research is no worse than any other medical research (which is not high praise, I just want to say it's not an outlier).
This is the core of the Aristotelian mistake: the assumption that we shouldn't question everything, the assumption that the way the world works is mostly obvious, that we should leave it alone and just look at it non-judgmentally rather than nitpick the various edge cases in our understanding.
This is a good enough point of view from which to do engineering, but obviously not so for science. The very purpose of science is to take "obvious" things and see where they stop being "obvious", then try to come up with better theories that explain those edge cases... ad infinitum.
- In Galileo's time, it was obvious that the "nature" of an object dictated the speed with which it fell.
- In Newton's time, it was not at all obvious that the same laws of motion applied both on Earth and in "the heavens".
- When de Morveau was born, phlogiston caused fire.
- When Max Planck rose to prominence, the universe was obviously continuous and deterministic.
- Space and time were obviously separate and necessary entities to do physics when Einstein was beginning to operate.
- Nuclei were obviously indivisible until the 30s, ten years later they were divisible enough to be the basis of a weapon that could destroy humanity.
For an engineer, questioning the obvious is usually a waste of time, for a scientist, it's the only good use of time.
But, why is the Aristotelian mistake seemingly so common nowadays? Why do most "scientists" and virtually all people lack the understanding of how to reduce the world to rigorous predictive theories?
Because...
IV - Science eats its young
Imagine you are a computer scientist in the 50s. You can write programs in the form of binary punch cards and get some primordial von Neumann machines to execute them... sometimes. It's really hard, there are loads of bugs and loads of hardware restrictions.
Your program risks breaking the computer, returning a seemingly correct but actually erroneous result, or working just part of the time because of a physical error (e.g. an actual dead bug) in the room-sized monstrosity it's running on.
So obviously your work will require becoming a decent digital hardware engineer. You certainly know precisely how your computer functions, from the high-level components down to the fabrication method for the transistors or switches inside. That's because assuming computers "just work" is skipping over the biggest hurdle: the fact that computers are usually really bad at "just working", and the issue often lies in the hardware.
But skip forward to today and most programmers couldn't even describe how a CPU works in principle. And why would they? That's the magic of modern computers, the fact that you don't have to understand how a CPU works to write a website. But this would become problematic if some programmers suddenly had to work on computer hardware.
This is more or less the problem with science. We spend the first 20+ years of people's lives teaching them "obvious" things that just work: theories that are well defined and have never failed, theories they could never derive nor judge the merits of. Granted, we believe those theories are mostly correct, so we aren't teaching them anything wrong.
Maybe they are taught how to run experiments, but if their experiments contradict the "expected results" we just write it off as an error and tell them to try again; we don't pore over the setup until we discover where the error lies. Replicative lab work in college means proving that existing theories and observations are true, even though real replication should be focused on the exact opposite.
When people ask why something is true they are given the Aristotelian explanation: Well, look at case x,y,z, it works in all of those cases, so it's true. Because most teachers don't have the required epistemology to say anything else, they are Aristotelians. Why would they be otherwise?
By the time people have the "required context" to look at theories that are under the lens of examination and "kind of work but not really", they are in their mid-20s. But these are the only theories that matter, the only theories for which we still need science.
After 20+ years of teaching people that experiments are wrong if they generate unexpected results and that the universe is a series of theories that work because they work on some particular examples... we suddenly expect them to generate theories and experiment using a whole different epistemology.
On the other hand, a 14-year-old is probably not capable of scientific discovery; he would just be rediscovering obvious things people already know. So we see it as pointless to tell him "go out and do science the right way" if all the information produced is already known. I harp on about this more in Training our humans on the wrong dataset... so I won't restate that entire point; suffice it to say, I think this is a horrible mistake.
The only way to teach people how to do science, to teach them how science works, and to get new and interesting discoveries that break out of the current zeitgeist... is to have them do it. Ideally have them do so starting at age 10, not at age 30. Ideally have 100% of the population doing it, even if just for the sake of understanding the process. Otherwise you end up with people that are rightfully confused as to what the difference between science and religion is.
But I think the issue goes even further:
V - Epistemic swamps and divine theories
A problem I kind of address in If Van der Waals was a neural network is that of missing information in science.
For some reason, presumably the lack of hard drives and search engines, people of the past were much more likely to record theories and discard experiments.
This seems to me to be one of the many artifacts the scientific establishment unwittingly carried over from times past. In the current world, we have enough space for storing as much experimental data as we want. From the results obtained at CERN down to every single high school with a laboratory.
But theory in itself is useless for the purpose of science. At most, it's a good mental crutch or starting point, since you'd rather not start from zero. Maybe, if the inductive process by which it was derived is re-discovered, it can serve as an example or inspiration, but in itself it has little value.
Indeed, I think theory can be rather harmful. Theory is a map of the world: a good starting point if one wants to extend the map, but a horrible starting point if one wants to correct it, since so many things are interlinked that it's hard to correct something without changing everything. It has built-in biases and mistakes that are hard to observe, especially if the original data and experimental setup are unavailable to us.
Finally, I don't wish to say that the "religious" failure mode and the "Aristotelian" failure modes are all bad.
The fact that most people don't have any basis for their ethics system and just learn it "religiously" from their peer group is a feature, not a bug. If people were convinced that going around killing others is fine until they could understand and formulate a reasonable ethical system that discourages murder, society couldn't exist.
In case you haven't noticed, this article and most of the stuff you read is "Aristotelian" in nature. I am not using all the evidence that I could be using, I am not providing ways to falsify my viewpoint, I am basing my arguments on pleasant rhetoric and a few key examples to illustrate them, examples for which I don't even have the exact data or an exact set of questions to replicate them.
If we couldn't start with "Aristotelian" thinking we would forever be in decision paralysis. Unable to come up with new ideas or say anything new about the world. The purpose of the scientific method is to bring extreme rigor to the things which are widespread and useful enough to require it. A fun chat about politics over a glass of wine is perfectly acceptable without hard evidence, implementing a policy that affects the lives of millions of people isn't.