This whole piece is a fantastic discussion of contemporary issues in separating science from pseudoscience, and why we should all have a little more ‘epistemic humility’ about what we consider true and factual.
It’s All So Fuzzy
Let’s take the question of muscle and what makes it grow. We can answer this question fairly well with some simple observations. Namely, you can go to any gym and you’ll notice that the people with well-developed muscles all tend to lift weights. That’s an anecdotal observation, but fortunately various research studies, both observational studies watching athletes and more direct interventions in the lab, have validated this “no kidding” conclusion.
We’ve got a lot of data that tells us yes, lifting weights makes your muscles grow. That’s an objective fact, in as much as we can ever define objective facts about exercise. If you want to get bigger, you pick up a barbell, or a dumbbell, or at least go to a cable station. You wouldn’t want to start running 10 miles a day. Extremes like this are easy to pick out — and they often tell us nothing interesting.
We turn to science in hopes of more detail. Case in point: it’s more or less true that all the people with big muscles lift weights, so we have that, but notice that little Jimmy also lifts weights and he doesn’t have big muscles. There’s obviously a relationship between “big muscles” and “lifts weights”, but the converse — that all people who lift weights get big muscles — doesn’t hold true. We’re missing something.
[If you’re just finding this post, I’d suggest reading Two Minds and a Flame War for the first part of this series.]
Defending the Hive
Some of you will have undoubtedly noticed that I like to be contradictory and maybe even inflammatory at times. There’s a reason for this.
I’m finding myself increasingly bothered by the high-fiving and cheerleading that happens within groups, far more than I am by the differences in beliefs or ideas between groups. I feel compelled to play the contrarian just to shake up the paradigm. Complacency and mindless agreement are never to our benefit, and if we don’t challenge our assumptions we can fall into orbit around ideas which, however good they might be, nevertheless represent an intellectual rut (what you might call a “local minimum”).
Look at how programs come and go in fads. A few years ago, it was Westside and the conjugate method. The time before that, when Cyberpump ruled the primitive internet, it was HIT and Hardgainer programs. Bryan Haycock’s HST might have been in there somewhere.
Of course hardly any of the crowd going on about the fads-of-the-moment — Starting Strength, perhaps the “Madcow 5×5” or that counterfeit knock-off that’s more a testament to the viral nature of ideas than effective training practices — has any awareness of that. Let me be clear: the former two aren’t bad programs. Reading Bill Starr’s source material, along with the comments Glenn Pendlay has made on that methodology, is one of the best things you can do when it comes to learning how to get stronger.
Taken for what it is, I don’t find this much of a problem. At least people are exercising, and exercising according to a cluster of rules which I largely agree with (such as: use a barbell, train reasonably heavy, progress in strength over time).
What I react to is the way that the ritual, the climb onto the bandwagon, is presented as the one and only way to success. There is no grasp of the underlying principles, no understanding of the history of strength-training; only obsession with tokens and totems and a missing of the deeper point.
The ritual of fad programs points back to the self-justifying brain. Groupthink, by definition, shuts off critical thought and transforms you into a mouthpiece. It’s too easy to get sucked into a game of telephone, in which ideas that were once good and fixed on solid ground are warped into garble. Obviously religion and politics are the two main offenders, but step into any social network online or off and you’ll find it.
Groupthink is always dangerous, no matter what view is being expressed. It becomes most noxious when our brain, convinced that it’s Right, interacts with other brains that hold different views. The result is a flame war.
Are you objectively right, as most undoubtedly think, or is your brain just doing a marvelous job of convincing you that you are?
How We Can Know Things
When mulling over the above, keep in mind that I’m talking in relative terms. I believe that there are views that are noxious in themselves, just as I think there are some things that we can accept as objectively good, and I have no problems in making value judgments in those instances.
I’m not going to get into any depth about morality and value judgments because I doubt most of you care, and because it’s a whole pallet of canned worms, but it’s a nice segue into what is important for fitness- and nutrition-related arguments:
How do we make those judgments of right and wrong?
This is not a trivial question. The extremes I mentioned above, things which I consider categorically “good” or “bad”, hint at the problem. I don’t care much for religiously-motivated violence, for example, just as I don’t like cruelty or needless violence towards animals. These are actions which, to me, are wrong and no explanation is likely to change that.
Things become less clear when dealing with matters that lie in between the poles. Determining “right” and “wrong” in the natural sciences should be easy enough. We have a clear standard in science itself: we simply need to create a description of the natural world, of phenomena with well-defined and, presumably, easily describable causes and effects.
It should be simple to say “right” or “wrong”, and to science’s immense credit, it works for a great many things. Drop a book and it will fall to the ground. That single observation kicks off a cycle of inference and deduction that can, eventually, lead you to all of Newton’s laws and perhaps beyond. From that falling book you can infer a more general principle: “when you drop something, it falls to the ground”. That’s your theory, which you can use to make predictions, like “when I drop this pen, it will fall to the ground”.
So you spend the next few decades dropping things in different places and times, objects of different weights and colors and densities, and each time your theory is validated: drop an object and it will fall towards the ground. Each observation reinforces the theory. After a while, you’re probably justified in writing this up as a rule or law.
But this hides a clever trick of thought, and one that will trip you up if you aren’t careful: your theory is only as good as the extent and quality of your observations. What if, by some perverse coincidence, you’ve found the one region on your planet where gravity points down, and if you’d walked a little further you’d discover the place where things actually do fall up? You could fix that with a few more observations and some geometry, of course, as the Greeks did when they figured out the Earth was round and thus objects falling towards its center were likely to do so wherever you stood.
What if, as we found in the 20th century, when you go really fast, or when you zoom down to the smallest levels of reality, “dropping things” doesn’t work as you expect?
Or, in a conclusion that might keep you awake for the next few nights, what if David Hume was right and cause and effect are only psychological illusions rather than a fundamental principle of nature? You might wake up one day and find that, regardless of how well-tested your theory and how many times you’ve observed the same phenomena, nothing works as you thought.
Science rests on a set of assumptions in order to work (assumptions like “cause and effect are necessary for the universe to make sense”), and all of our knowledge builds on the idea that the universe works like this. We assume that cause and effect aren’t going away, and we assume that our descriptions of nature, which follow from that, are equally true. These are not unreasonable assumptions, nor are they particularly troubling, but they do point out why we have to be careful in getting too ambitious with what we “know”: any scientific finding is tenuous, its correctness predicated on the acknowledgment that it very well might be wrong.
Right (and wrong) come down to likelihoods and probabilities. There are no Lists of Truths in science; only lists of good ideas which we can rank according to how likely it is that they’re good ideas. As far as you can know based on your collection of observations, dropping objects makes them fall. In every test, over decades of time, you’ve never found a single instance where dropping an object leads to it floating in mid-air, or doing anything but falling toward the ground. At this point it’s reasonable for you to consider this a natural law.
What you can’t do is rule out ever seeing the exception. There may well be certain places or objects or sets of conditions where you’ll drop an object and see it float, or fly up into the sky. All you can do is say that, according to what you’ve seen, this is how things work. You can say that the odds of discovering otherwise, based on your observations and the observations of all your friends who charted the way stars move in distant parts of the universe, are shockingly low. In our actual universe the case is stronger still, since our set of observations is quite a bit more robust, but before we made those findings the possibility was a live one.
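The probabilistic picture above can be made concrete with a toy Bayesian sketch. The numbers here are made up purely for illustration: a 50/50 prior, and an alternative hypothesis under which objects merely fall *almost* always. The point is that each confirming drop nudges confidence upward, yet no finite number of observations ever reaches certainty.

```python
# Toy Bayesian updating: each confirming observation ("I dropped it
# and it fell") raises belief in the rule "dropped objects always
# fall", but belief never reaches 1. All numbers are arbitrary,
# chosen only to illustrate the argument.

def update(prior, p_obs_given_h, p_obs_given_not_h):
    """One application of Bayes' rule: returns P(H | observation)."""
    evidence = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    return prior * p_obs_given_h / evidence

# H:  "dropped objects always fall"        -> P(falls | H)  = 1.00
# ~H: "objects fall only almost always"    -> P(falls | ~H) = 0.99
belief = 0.5  # start undecided
for drop in range(1000):
    belief = update(belief, 1.0, 0.99)

print(f"Belief after 1000 drops: {belief:.6f}")  # close to 1, never 1
```

Note the asymmetry this sketch captures: a thousand confirmations leave a sliver of doubt, while a single floating object would demolish the hypothesis outright.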
That’s all just fine, and science has a wonderful track record to prove that probabilistic law-making works. In physics, and in “close-enough” fields like chemistry, certainty of this type is fairly easy to establish. In applied sciences, like biology, it’s harder to nail things down; the interesting things hide away in tangled mazes of causes and effects, and it isn’t always easy to figure out heads from tails.
Exercise physiology and nutrition are even further removed from the foundations, being particular specializations of biology, and that adds an extra layer of trouble.
[More to come shortly, with a look at exercise research and how “certain” our knowledge really is.]
I often find myself describing science as a limiting factor. The typical lay view, reinforced by Hollywood, stereotypes scientists as mysterious figures in labcoats handing down edicts the same way a king would hand down laws to his peasants. But it doesn’t quite work that way.
Most fields relevant to us — falling under the considerable umbrella of biology — are descriptive sciences: variable X causes event A, under circumstance Y. We watch it, write it up, and try to figure out what’s going on based on what we already know.
Rayleigh scattering causes the sky to appear blue on cloudless days. That’s the process of descriptive science. Watch a thing happen, and then explain the immediate causes and the circumstances in which it happened. Descriptive science leads to an ever-greater level of detail as causes and effects are established, leading us down the rabbit hole as more questions arise from each answer.
In these fields, published research establishes boundaries. Very rarely do you run into any kind of prescriptive knowledge, the What To Do, step-by-step user-manual kind of knowledge that seems expected by a considerable fraction of gym-goers. You can imagine how these conflicting views create friction between science and practice.
In the softer domains of personal training and S&C coaching, you run into real and very valid criticisms of exercise science research. While there are good points to make regarding validity and generalization — points I often agree with — dismissing research without consideration isn’t helping anyone. I find that to be as unhelpful as the crowd that can’t make any decisions without a Pubmed abstract.
I was recently reminded of one of my favorite articles.
It’s by Isaac Asimov, one of my favorite authors. Mr. Asimov died back in 1992, but occasionally one of his gems will resurface and I’m reminded again why I enjoy his work so much.
This article, titled The Relativity of Wrong, was written to demonstrate a crucial, but still poorly understood, facet of science: the idea that a statement or idea can be less wrong than another. What, you might ask, does this have to do with strength training?
As it turns out, it has plenty to do with it. More specifically, it has plenty to do with the volumes of information (and misinformation) that pervade the industry, and the poor (if any) reasoning that tends to accompany it. Since my schtick in this game involves using principles of logic and critical thinking to tear down idiocy, it’s very relevant.
Mr. Asimov’s frustration and subsequent rebuttal are in many ways parallel to what goes on in the fitness industry.
It’s unfortunate that the mindset that he, and others of his kind, so actively try to discourage is so rampant. It’s not just in the fitness industry; you see this all over. When you can’t even teach science in schools because of superstitious traditions, you’ve got a problem.
With the levels of bro-science and general anti-intellectualism at all-time highs, I feel the need to occasionally interject things such as this in order to help chip away at some of the ignorant thinking.