The title here is clear enough; how often have you heard someone say that you can find research to support any position?
In this post, I want to touch on why that’s not true.
The Abstract Isn’t The Study
It’s all too tempting to do a quick search on PubMed, dig up some abstracts that seem to agree with you, and use them as support in an argument.
Problem is, the abstract may or may not contain enough information to draw conclusions. An abstract is, by definition, a brief summary of an article that gives you the rundown. The abstract should tell you all the basics: what was being studied, how it was studied, the results of the experiment, and what conclusions can be drawn.
Depending on the scope and complexity of the research, an abstract may cover most of the relevant points, but it could just as easily hit only the high points. For this reason, having the full paper on hand is necessary if you really want to see what went on.
Why is this important? Because knowing who or what was being studied, the methods used for the experiment, how data was collected and analyzed, and what conclusions were drawn (and why) are all important when you want to understand research.
All too often, somebody will read only the conclusion of an abstract and assume that it backs him up, only to find out later that the study actually says the exact opposite. If he’d bothered to look at the methods used and what the authors said about their conclusions, he’d have realized that relying on the conclusion alone was hasty.
Point being, you can’t jump to conclusions simply based on a few lines in an abstract. You have to evaluate the entire study, even understanding its limitations, before you can take any meaning from a piece of research.
Limits of Research Methodology
This would demand an entire article in itself, so I’m going to give a very quick summary.
Research can be very narrow. Almost by definition, scientific experimentation focuses on tightly-defined questions. You might be looking at the action of Molecule X in response to Compound Y in a specific breed of rats with genetic ID 5443.34.
What should you be looking for when trying to interpret a study?
Who were the subjects? The who is very important, because a good deal of research into exercise physiology, nutrition, and pharmacology (drugs) is done in either rats or in special populations of humans. By special populations, I mean people that have certain diseases, like diabetes, cancer, etc., or people that fit certain conditions, like the elderly or obese. In comparison, it’s somewhat rare to see healthy, trained humans in the research.
This might not seem all that important, but it’s critical – even special populations of humans can have altered responses to exercise, diet, and/or drugs, simply due to their physiological status. For example, a drug that alters insulin sensitivity might show fantastic effects on glucose control in a diabetic, but we can’t assume it will have magical properties in a healthy, exercising person. There’s a very real difference between correcting a deficiency and creating above-normal levels; assuming otherwise would be a hasty generalization. One set of conditions does not equate to another set of conditions.
How many subjects were there? For reasons of statistical analysis, a common rule of thumb is that a study should include roughly 30 subjects. The details are beyond this article, but around 30 is where the distribution of sample averages starts to approximate a bell curve (this is the central limit theorem at work), which makes standard statistical tests more trustworthy. A larger sample doesn’t guarantee that it’s a random one, but it does make it more likely that the results will apply to the population at large.
A drawback to a lot of human studies is small sample size. If you’ve got only eight people in a trial, it’s hard to tell whether your results are meaningful or just random noise. As a rule, the more subjects involved, the more confidence you can place in the findings.
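The sample-size point is easy to demonstrate for yourself with a quick simulation. The numbers below (a true effect of 5 kg with 10 kg of individual variation) are made up purely for illustration; the point is how much the *conclusion* of a small study bounces around compared to a larger one.

```python
import random
import statistics

random.seed(42)

# Hypothetical scenario: a training program truly adds 5 kg to everyone's
# lift, but individual responses vary widely (std dev of 10 kg).
TRUE_EFFECT = 5.0
NOISE_SD = 10.0

def run_trial(n):
    """Simulate one study with n subjects; return the mean observed effect."""
    return statistics.mean(random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n))

def spread_of_results(n, trials=1000):
    """Run many studies of size n and measure how much their conclusions vary."""
    results = [run_trial(n) for _ in range(trials)]
    return statistics.stdev(results)

for n in (8, 30, 100):
    print(f"n={n:3d}: study-to-study spread of the estimated effect "
          f"= {spread_of_results(n):.2f} kg")
```

With eight subjects, two identical studies of the same true effect can easily disagree by several kilograms just from noise; with a hundred subjects, the estimates cluster much more tightly around the real value.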
How was the experiment performed? Sometimes, for the sake of having a measurable effect, researchers will perform an experiment that only loosely matches real-world conditions. This is not a problem in research, but it is a problem if you assume that the study will map to the real world. A lot of research into strength training uses very outlandish protocols – things like 8 sets of 10 maximal eccentric contractions. This will create an effect, sure, but nobody actually trains like that. It’s a matter of applicability: while that study might have useful conclusions, it would be a mistake to assume those findings transfer directly to a normal workout.
How was data collected and analyzed? Data collection and statistical analysis can introduce their own problems, depending on the experimental methods used and how meaningful the sample size is. As mentioned, a study with five subjects is generally going to be far weaker than a study with 50 subjects.
What conclusions did the authors draw? In most studies, the authors will discuss how their results fit into the body of data – whether they agree with and confirm other results, or whether they disagree. If there are any disagreements, they’ll often offer ideas on why, and on how their results might reconcile with everything else. Often they’ll acknowledge any particular limitations of their work and caution against hasty conclusions.
It’s About Context
At the end of the day, it comes down to context. The best way to interpret any one study is to be aware of what all the other related research says.
It’s very unlikely that any single study is going to truly break the mold. More likely than not, there’s some quirk or detail that makes the superficial conclusion misleading. The problem lies with the reader misinterpreting the study, not with the study itself.
While revolutionary discoveries do happen from time to time, most research simply expands on existing knowledge. Despite how pop culture and the media make it appear, science is not divided or in conflict. While there are some disagreements, science agrees on far, far more than it disagrees on.
It’s a matter of context. If you’re aware of how the puzzle looks, you can be pretty sure of what the missing pieces will look like. If you only have one puzzle piece, you’ll have a hard time figuring out what the final product will be – and any guess you make will probably be wrong.
The False Dilemma of Science
Getting back on topic, we can see now why it’s not so simple as “you can find research to support anything”. Maybe you can if you don’t actually understand the big picture, and you certainly can make it seem that way if you have no problem being dishonest.
The idea that science is useless boils down to a logical fallacy: since you can (allegedly) find studies to back up any claim, any claim based on research can be thrown out.
This fallacy makes the mistake of taking two unequal viewpoints and making them equal; it completely rules out the idea of quality or competence. If a person is well-read in a field, understands all the research, and then proceeds to make a claim, you cannot equate that with a person having no prior training in the field and fishing through Pubmed for one abstract that, at first glance, says something to the contrary.
Superficially, it might appear that both sides are “backed up by research”. But really? One person is considering the big picture and knows what the puzzle looks like. The other has one puzzle piece and tries to describe the finished product. There’s just no comparison. Those two viewpoints are not equal.
In fact, this is as good a time as any to invoke Asimov’s classic article on ‘wrongness’.
In general, there will be very clear trends and very clear (if incomplete) conclusions. There’s good odds that any single study that appears to say the opposite can be reconciled with the overall trend. In short, the problem is not with the research; it’s with the people that aren’t qualified to interpret it.
Frankly, anyone that feels “you can find research to support anything” is just showing a naive understanding of what research is and how it’s interpreted. In the worst cases, it boils down to intellectual dishonesty. The idea that all positions “based on research” are equal, without any consideration of quality, is even more ridiculous.