Monday, March 26, 2018

Why you should never trust the "because science" argument


A while ago it became public that Coca-Cola was the primary (if not sole) funder of the "Global Energy Balance Network," an institute that funded "research" into obesity. They were a huge funding source for CU's medical research facility. I know that CU ended up giving the money back once it turned into a public relations nightmare. (If you like reality TV, check out the fat-shaming show Extreme Weight Loss, which was filmed at CU's Anschutz campus.)

The official message from big food is "everything in moderation" and "obesity is from sloth." Anything else would make them culpable for the intentionally designed, hyper-palatable "food" they sell. So they need to fund studies showing that you can eat any kind of "low-fat" diet and lose weight by exercising, which puts the onus for obesity on the consumer. (Or they can run reality TV that shows people losing tons of weight by exercising too much.)

So how is research in the pseudosciences done? You pick a hypothesis ("exercise is the way to reverse obesity") and a null hypothesis ("exercise has no effect on obesity"). Then you design an experiment that tries to reject the null hypothesis.

Since no experiment can control for everything, and there are always errors in measurement, you can never be 100% sure whether the null hypothesis should be rejected, so you talk about probabilities instead.

The most common probability is called the p-value: the probability of getting results at least as extreme as yours purely by random chance, assuming the thing you care about actually has no effect.

So if someone says, "our study showed that exercise reduced fat mass by xx pounds," you also have to look for the statement "p < 0.05," which means that if exercise really did nothing, there would be less than a 5% chance of seeing a result that large by chance alone. That gets loosely interpreted as being 95% confident the result is real and not simply noise.
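To make that concrete, here is a minimal sketch of where a p-value comes from. The group sizes and fat-loss numbers are made up, and it assumes Python with NumPy and SciPy; it's an illustration, not anyone's actual study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fat-mass change (pounds) for a control group and an "exercise" group,
# both drawn from the same distribution, i.e. exercise truly does nothing.
control = rng.normal(loc=0.0, scale=5.0, size=30)
exercise = rng.normal(loc=0.0, scale=5.0, size=30)

# Two-sample t-test: how surprising is a difference this big if there is no real effect?
t_stat, p_value = stats.ttest_ind(exercise, control)
print(f"difference = {exercise.mean() - control.mean():.2f} lb, p = {p_value:.3f}")

# If p < 0.05, this study would be called "statistically significant,"
# even though we built it so exercise has no effect at all.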

In most "sciences" the 5% threshold is the commonly accepted threshold for what is called statistical significance. On the surface, that sounds pretty good. Rarely are you 95% sure about any decision. (Though don't get me started on fat tails and bad prediction of extreme events.)

But what if you were a big research funder (say, the GEBN) that funded 907 researchers but only officially acknowledged 42 of them? Quick, what's 42 divided by 907?

4.6%

So what happens if I'm looking for "science" that produces a specific result supporting my business? I could just fund a whole bunch of studies, and about 5% of the time (using the commonly accepted threshold) I'll get the result I want purely by chance. Then all I have to do is not publicize the 95% of studies that didn't show what I wanted, point at the ones that did confirm my hypothesis, and claim that the science is on my side.
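Here is a rough simulation, again with made-up numbers in Python, of what you'd expect if all 907 studies were chasing an effect that doesn't exist:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_studies = 907   # same count as the GEBN example above
significant = 0

for _ in range(n_studies):
    # Both groups come from identical distributions: the true effect is zero.
    control = rng.normal(0.0, 5.0, size=30)
    exercise = rng.normal(0.0, 5.0, size=30)
    _, p = stats.ttest_ind(exercise, control)
    if p < 0.05:
        significant += 1

print(f"{significant} of {n_studies} studies come back 'significant' "
      f"({significant / n_studies:.1%}), all by pure chance")

# Expect roughly 5%, about 45 studies, which is in the same ballpark as 42/907 = 4.6%.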

This looks pretty suspicious to me.

It's even worse for pharmaceutical testing. You only have to publish the positive results (my drug did something), not the negative ones (my drug had no effect, or did bad things). Since we only ever see the positives, we don't know the ratio of positive to negative results, and we have no way to tell whether the published results are anything more than chance.
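To see why hiding the negatives matters, here's one more hedged sketch: a hypothetical drug with zero real effect, where only the favorable trials get "published." All the trial counts and numbers are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

published = []
for _ in range(1000):                        # 1000 hypothetical trials
    placebo = rng.normal(0.0, 5.0, size=30)
    drug = rng.normal(0.0, 5.0, size=30)     # same distribution: the drug does nothing
    _, p = stats.ttest_ind(drug, placebo)
    effect = drug.mean() - placebo.mean()
    if p < 0.05 and effect > 0:              # only "good news" gets published
        published.append(effect)

print(f"published trials: {len(published)} of 1000")
print(f"average published 'benefit': {np.mean(published):.1f}")

# The published record shows a consistent benefit, even though the drug does nothing,
# because the unpublished null and negative trials are invisible.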



Today's Workout

bench press 8-5-3x5
6 renegade rows between sets

then

superset:
next step in pull-up progression and
deadlift 35%x5, 65%x5, 85%x5, 70%x5

then

AGT day:
1 arm swing, bell size +1, 5x(10R+10L)/1:00
TGU, bell size +1, 5x(1R+1L)/2:00