[If you’re just finding this post, I’d suggest reading Two Minds and a Flame War for the first part of this series.]
Defending the Hive
Some of you will have undoubtedly noticed that I like to be contradictory and maybe even inflammatory at times. There’s a reason for this.
I’m finding myself increasingly bothered by the high-fiving and cheerleading that happens within groups, far more than I am by the differences in beliefs or ideas between groups. I feel compelled to play the contrarian just to shake up the paradigm. Complacency and mindless agreement are never for our benefit, and if we don’t challenge our assumptions we can fall into orbit around ideas which, however good they might be, nevertheless represent an intellectual rut (what you might call a “local minimum”).
Look at how programs come and go in fads. A few years ago, it was Westside and the conjugate method. The time before that, when Cyberpump ruled the primitive internet, it was HIT and Hardgainer programs. Bryan Haycock’s HST might have been in there somewhere.
Of course hardly any of the crowd going on about the fads-of-the-moment — Starting Strength, perhaps the “Madcow 5×5” or that counterfeit knock-off that’s more a testament to the viral nature of ideas than to effective training practices — has any awareness of that. Let me be clear: the former two aren’t bad programs. Reading Bill Starr’s source material, along with the comments Glenn Pendlay has made on that methodology, is one of the best things you can do to learn how to get stronger.
Taken for what it is, I don’t find this much of a problem. At least people are exercising, and exercising according to a cluster of rules which I largely agree with (such as: use a barbell, train reasonably heavy, progress in strength over time).
What I react to is the way that the ritual, the climb onto the bandwagon, is presented as the one and only way to success. There is no grasp of the underlying principles, no understanding of the history of strength-training; only obsession with tokens and totems and a missing of the deeper point.
The ritual of fad programs points back to the self-justifying brain. Groupthink, by definition, shuts off critical thought and transforms you into a mouthpiece. It’s too easy to get sucked into a game of telephone, in which ideas that were once good and fixed on solid ground are warped into garble. Obviously religion and politics are the two main offenders, but step into any social network online or off and you’ll find it.
Groupthink is always dangerous, no matter what view is being expressed. It becomes most noxious when our brain, convinced that it’s Right, interacts with other brains that hold different views. The result is a flame war.
Are you objectively right, as most undoubtedly think, or is your brain just doing a marvelous job of convincing you that you are?
How We Can Know Things
When mulling over the above, keep in mind that I’m talking in relative terms. I believe that there are views that are noxious in themselves, just as I think there are some things that we can accept as objectively good, and I have no problems in making value judgments in those instances.
I’m not going to get into any depth about morality and value judgments because I doubt most of you care, and because it’s a whole pallet of canned worms, but it’s a nice segue into what is important for fitness- and nutrition-related arguments:
How do we make those judgments of right and wrong?
This is not a trivial question. The extremes I mentioned above, things which I consider categorically “good” or “bad”, hint at the problem. I don’t care much for religiously-motivated violence, for example, just as I don’t like cruelty or needless violence towards animals. These are actions which, to me, are wrong and no explanation is likely to change that.
Things become less clear when dealing with matters that lie between the poles. Determining “right” and “wrong” in the natural sciences should be easy enough. We have a clear standard in the scientific method: we simply need to create a description of the natural world, of phenomena with well-defined and, presumably, easily-describable causes and effects.
It should be simple to say “right” or “wrong”, and to science’s immense credit, it works for a great many things. Drop a book and it will fall to the ground. That single observation kicks off a cycle of inference and deduction that can, eventually, lead you to all of Newton’s laws and perhaps beyond. From that falling book you can infer a more general principle: “when you drop something, it falls to the ground”. That’s your theory, which you can use to make predictions, like “when I drop this pen, it will fall to the ground”.
So you spend the next few decades dropping things in different places and times, objects of different weights and colors and densities, and each time your theory is validated: drop an object and it will fall towards the ground. Each observation reinforces the theory. After a while, you’re probably justified in writing this up as a rule or law.
But this hides a clever trick of thought, and one that will trip you if you aren’t careful: your theory is only as good as the extent and quality of your observations. What if, by some perverse coincidence, you’ve found the one region on your planet where gravity points down, and if you’d walked a little further you’d discover the place where things actually do fall up? You could fix that with a few more observations and some geometry, of course, as the Greeks did when they figured out the Earth was round and thus objects falling towards its center were likely to do so wherever you stood.
What if, as we found in the 20th century, when you go really fast, or when you zoom down to the smallest levels of reality, “dropping things” doesn’t work as you expect?
Or, in a conclusion that might keep you awake for the next few nights, what if David Hume was right and cause and effect are only psychological illusions rather than a fundamental principle of nature? You might wake up one day and find that, regardless of how well-tested your theory and how many times you’ve observed the same phenomena, nothing works as you thought.
Science rests on a set of assumptions in order to work (assumptions like “cause and effect are necessary for the universe to make sense”), and all of our knowledge builds on the idea that the universe works like this. We assume that cause and effect aren’t going away, and we assume that our descriptions of nature, which follow from that, are equally true. These are not unreasonable assumptions, nor are they particularly troubling, but they do point out why we have to be careful in getting too ambitious with what we “know”: any scientific finding is tenuous, its correctness always contingent on the admission that it might well be wrong.
Right (and wrong) come down to likelihoods and probabilities. There are no Lists of Truths in science; only lists of good ideas which we can rank according to how likely it is that they’re good ideas. As far as you can know based on your collection of observations, dropping objects makes them fall. In every test, over decades of time, you’ve never found a single instance where dropping an object leads to it floating in mid-air, or doing anything but falling toward the ground. At this point it’s reasonable for you to consider this a natural law.
What you can’t do is rule out ever seeing the exception. There may well be certain places or objects or sets of conditions where you’ll drop an object and see it float, or fly up into the sky. All you can do is say that, according to what you’ve seen, this is how things work. You can say that the odds of discovering otherwise, based on your observations and the observations of all your friends who charted how stars move in distant parts of the universe, are shockingly low. Obviously our actual universe isn’t like this, as our set of observations is quite a bit more robust, but before those findings were made, the possibility was open.
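The idea that each confirming observation nudges a theory toward certainty without ever quite reaching it can be sketched numerically. The snippet below is my own toy illustration, not anything from the post: it applies Bayes’ rule to the hypothesis “dropped objects fall”, with an assumed 50% chance of seeing a fall even if the hypothesis were false.

```python
def update(prior, p_obs_if_true=1.0, p_obs_if_false=0.5):
    """Bayes' rule: belief in the 'law' after one confirming observation.

    p_obs_if_false is the (assumed, made-up) chance of seeing the object
    fall anyway, even if the law didn't hold.
    """
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

belief = 0.5  # start out agnostic
for trial in range(50):  # fifty dropped objects, every one falls
    belief = update(belief)

# Belief creeps arbitrarily close to 1, but no finite run of
# confirmations ever pushes it all the way to certainty.
print(belief)
```

The point of the sketch isn’t the numbers; it’s that the machinery only ever outputs “very probably”, never “certainly”, no matter how many drops you log.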
That’s all just fine, and science has a wonderful track record to prove that probabilistic law-making works. In physics, and in “close-enough” fields like chemistry, certainty of this type is fairly easy to establish. In applied sciences, like biology, it’s harder to nail things down; the interesting things hide away in tangled mazes of causes and effects, and it isn’t always easy to figure out heads from tails.
Exercise physiology and nutrition are even further removed from the foundations, being particular specializations of biology, and that adds an extra layer of trouble.
[More to come shortly, with a look at exercise research and how “certain” our knowledge really is.]