This whole piece is a fantastic discussion of contemporary issues in separating science from pseudoscience, and why we should all have a little more ‘epistemic humility’ about what we consider true and factual.
Earlier today I was pointed to an article by Nassim Taleb on volatility and uncertainty. Regulars will recall that randomness, uncertainty, and variability have been topics of fascination for me lately. Taleb has been key in making these tumblers fall into place, largely thanks to The Black Swan.
Taleb’s discussion of uncertainty, of his ‘negative epistemology’, resonated with me, not least because it unmasked the appearance of certainty and control that pervades our comfy first-world lives. As I’ve related lately, I think this illusion extends to fitness communities on a deep level. The resulting obsession with analysis creates a mess: dichotomies between “bro” and “science”, overwhelming neuroses about squat form and diet macros, and who knows what else that I don’t see anymore since I quit reading forums and Reddit.
The Problem with “Bro-Science”
To watch most internet discussions, you’d think that science was a contest to see who could fish the most abstracts out of PubMed. In areas as fuzzy as exercise and nutrition, there just isn’t enough research, and what is there doesn’t cover a sufficient breadth, to be the final answer to all our questions. If you eliminate any evidence without a DOI number, you’ve crippled your knowledge base. That’s as shameful as any puffed-up Curl-Bro ranking knowledge by weight class.
The internet has taken to calling “Bro science” on any sort of trial-and-error gym-observations, with unfortunate consequences. It isn’t “Bro science” when someone discovers, through trial and error, what works for their circumstances. If trial and error counts as “Bro science”, then I have bad news for you: so is nearly everything we consider foundational in exercise science. If someone has trained with a particular program or a particular style of training, and they’ve genuinely gotten results with it, then that is the end of the argument.
I will add a necessary caveat here: we have to distinguish genuine results from what I call “gym delusions”. A gym delusion happens when someone mistakes, say, feeling winded, or puking, or having sore muscles or a case of rhabdomyolysis for actual long-term results. A gym delusion means that the thing being done isn’t actually leading to measurable results like larger muscles, bigger lifts, or lower body-fat. Thanks to a hyper-active System 1, the immediate feeling is substituting for measurable results.
It’s All So Fuzzy
Let’s take the question of muscle and what makes it grow. We can answer this question fairly well with some simple observations. Namely, you can go to any gym and you’ll notice that the people with well-developed muscles all tend to lift weights. That’s an anecdotal observation, but fortunately various research studies, both observational studies watching athletes and more direct interventions in the lab, have validated this “no kidding” conclusion.
We’ve got a lot of data that tells us yes, lifting weights makes your muscles grow. That’s an objective fact, in as much as we can ever define objective facts about exercise. If you want to get bigger, you pick up a barbell, or a dumbbell, or at least go to a cable station. You wouldn’t want to start running 10 miles a day. Extremes like this are easy to pick out — and they often tell us nothing interesting.
We turn to science in hopes of more detail. Case in point: it’s more or less true that all the people with big muscles lift weights, so we have that, but notice that little Jimmy also lifts weights and he doesn’t have big muscles. There’s obviously a relationship between “big muscles” and “lifts weights”, but the opposite — that all people who lift weights get big muscles — doesn’t hold true. We’re missing something.
[If you’re just finding this post, I’d suggest reading Two Minds and a Flame War for the first part of this series.]
Defending the Hive
Some of you will have undoubtedly noticed that I like to be contradictory and maybe even inflammatory at times. There’s a reason for this.
I’m finding myself increasingly bothered by the high-fiving and cheerleading that happens within groups, far more than I am by the differences in beliefs or ideas between groups. I feel compelled to play the contrarian just to shake up the paradigm. Complacence and mindless agreement are never for our benefit, and if we don’t challenge our assumptions we can fall into orbit around ideas which, however good they might be, nevertheless represent an intellectual rut (what you might call a “local minimum”).
Look at how programs come and go in fads. A few years ago, it was Westside and the conjugate method. The time before that, when Cyberpump ruled the primitive internet, it was HIT and Hardgainer programs. Bryan Haycock’s HST might have been in there somewhere.
Of course hardly any of the crowd going on about the fads-of-the-moment — Starting Strength, perhaps the “Madcow 5×5” or that counterfeit knock-off that’s more a testament to the viral nature of ideas than effective training practices — has any awareness of that. Let me be clear: the former two aren’t bad programs. Reading Bill Starr’s source material, along with the comments Glenn Pendlay has made on that methodology, is one of the best things you can do if you want to learn how to get stronger.
Taken for what it is, I don’t find this much of a problem. At least people are exercising, and exercising according to a cluster of rules which I largely agree with (such as: use a barbell, train reasonably heavy, progress in strength over time).
What I react to is the way that the ritual, the climb onto the bandwagon, is presented as the one and only way to success. There is no grasp of the underlying principles, no understanding of the history of strength-training; only obsession with tokens and totems and a missing of the deeper point.
The ritual of fad programs points back to the self-justifying brain. Groupthink, by definition, shuts off critical thought and transforms you into a mouthpiece. It’s too easy to get sucked into a game of telephone, in which ideas that were once good and fixed on solid ground are warped into garble. Obviously religion and politics are the two main offenders, but step into any social network online or off and you’ll find it.
Groupthink is always dangerous, no matter what view is being expressed. It becomes most noxious when our brain, convinced that it’s Right, interacts with other brains that hold different views. The result is a flame war.
Are you objectively right, as most undoubtedly think, or is your brain just doing a marvelous job of convincing you that you are?
How We Can Know Things
When mulling over the above, keep in mind that I’m talking in relative terms. I believe that there are views that are noxious in themselves, just as I think there are some things that we can accept as objectively good, and I have no problems in making value judgments in those instances.
I’m not going to get into any depth about morality and value judgments because I doubt most of you care, and because it’s a whole pallet of canned worms, but it’s a nice segue into what is important for fitness- and nutrition-related arguments:
How do we make those judgments of right and wrong?
This is not a trivial question. The extremes I mentioned above, things which I consider categorically “good” or “bad”, hint at the problem. I don’t care much for religiously-motivated violence, for example, just as I don’t like cruelty or needless violence towards animals. These are actions which, to me, are wrong and no explanation is likely to change that.
Things become less clear when dealing with matters that lie in between the poles. Determining “right” and “wrong” in the natural sciences should be easy enough. We have a clear standard in the scientific method: we simply need to create a description of the natural world, of phenomena with well-defined and, presumably, easily describable causes and effects.
It should be simple to say “right” or “wrong”, and to science’s immense credit, it works for a great many things. Drop a book and it will fall to the ground. That single observation kicks off a cycle of inference and deduction that can, eventually, lead you to all of Newton’s laws and perhaps beyond. From that falling book you can infer a more general principle: “when you drop something, it falls to the ground”. That’s your theory, which you can use to make predictions, like “when I drop this pen, it will fall to the ground”.
So you spend the next few decades dropping things in different places and times, objects of different weights and colors and densities, and each time your theory is validated: drop an object and it will fall towards the ground. Each observation reinforces the theory. After a while, you’re probably justified in writing this up as a rule or law.
But this hides a clever trick of thought, and one that will trip you up if you aren’t careful: your theory is only as good as the extent and quality of your observations. What if, by some perverse coincidence, you’ve found the one region on your planet where gravity points down, and if you’d walked a little further you’d discover the place where things actually do fall up? You could fix that with a few more observations and some geometry, of course, as the Greeks did when they figured out the Earth was round and thus objects falling towards its center were likely to do so wherever you stood.
What if, as we found in the 20th century, when you go really fast, or when you zoom down to the smallest levels of reality, “dropping things” doesn’t work as you expect?
Or, in a conclusion that might keep you awake for the next few nights, what if David Hume was right and cause and effect are only psychological illusions rather than a fundamental principle of nature? You might wake up one day and find that, regardless of how well-tested your theory and how many times you’ve observed the same phenomena, nothing works as you thought.
Science presumes a set of assumptions in order to work (assumptions like “cause and effect are necessary for the universe to make sense”), and all of our knowledge builds on the idea that the universe works like this. We assume that cause and effect aren’t going away, and we assume that our descriptions of nature, which follow from that, are equally true. These are not unreasonable assumptions, nor are they particularly troubling, but they do point out why we have to be careful in getting too ambitious with what we “know”: any scientific finding is tenuous, its correctness predicated on the fact that it very well might be wrong.
Right (and wrong) come down to likelihoods and probabilities. There are no Lists of Truths in science; only lists of good ideas, which we can rank according to how likely they are to hold up. As far as you can know based on your collection of observations, dropping objects makes them fall. In every test, over decades of time, you’ve never found a single instance where dropping an object leads to it floating in mid-air, or doing anything but falling toward the ground. At this point it’s reasonable for you to consider this a natural law.
What you can’t do is rule out ever seeing an exception. There may well be certain places or objects or sets of conditions where you’ll drop an object and see it float, or fly up into the sky. All you can do is say that, according to what you’ve seen, this is how things work. You can say that the odds of discovering otherwise, based on your observations and those of all your friends who worked out how stars move in distant parts of the universe, are shockingly low. Obviously the odds look different in our actual universe, where the set of observations is far more robust, but before those findings were made, the possibility was live.
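You can even put rough numbers on that intuition. Here’s a small sketch (my own illustration, not anything from the original argument) using Laplace’s rule of succession: after n drops that all fell, with zero exceptions, the estimated probability that the next drop also falls is (n+1)/(n+2). The estimate creeps toward certainty but never reaches it, which is exactly the point — the exception is never ruled out, only made increasingly unlikely.

```python
# Laplace's rule of succession: given n_falls prior observations of
# dropped objects falling, and zero exceptions, estimate the chance
# that the NEXT dropped object also falls (assuming a uniform prior).
def p_next_falls(n_falls: int) -> float:
    return (n_falls + 1) / (n_falls + 2)

# More observations push the estimate toward 1, but never to 1:
for n in (0, 10, 1_000, 1_000_000):
    print(f"after {n:>9,} drops: p(next falls) = {p_next_falls(n):.9f}")
```

With no observations at all the estimate is an agnostic 0.5; after a million uniform observations it is overwhelmingly close to 1, yet the residual sliver is what keeps a “law” a well-tested idea rather than a guaranteed truth.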
That’s all just fine, and science has a wonderful track record to prove that probabilistic law-making works. In physics, and in “close-enough” fields like chemistry, certainty of this type is fairly easy to establish. In applied sciences, like biology, it’s harder to nail things down; the interesting things hide away in tangled mazes of causes and effects, and it isn’t always easy to figure out heads from tails.
Exercise physiology and nutrition are even further removed from the foundations, being particular specializations of biology, and that adds an extra layer of trouble.
[More to come shortly, with a look at exercise research and how “certain” our knowledge really is.]
I’m fascinated by the human mind. The mind, insofar as there is a thing we can refer to with that word, is where all the interesting things about humankind go on. It’s also poorly understood, even by the legions of bright people who have studied and reflected upon it for thousands of years.
Vagueness aside, you’ll notice that we’ve got a Pretty Good intuitive grasp of thoughts and sensations, such that we can communicate reasonably well most of the time. The fact that you can read my words and (hopefully) understand what I want to convey attests to that. Sometimes, Pretty Good is good enough.
Knowing how people operate is a crucial skill in any instance that involves other people. Case in point: fitness training and nutrition. These fields are applied science, and on paper at least we should be able to craft perfect workout programs and diets — at least, you’d think so according to much of the internet.
Often, though — more often than not, likely — these perfection-seeking schemes fail. Why can’t people just do what we know is right? Why do all these pig-headed people disagree with my perfectly-designed workout? Why do people not eat according to these scientifically-derived principles that ensure success?