Tuesday, April 4, 2017

The state of science is GRIM

Josh said I nerd sniped him with yesterday's math problem - he's the only one who responded to my trick question. credit: www.xkcd.com
So yesterday I posed a question (a trick question with no answer):
Let's say that I report in my paper that the mean of my results is 4.13 (nothing fancy, a simple arithmetic mean). That mean came from three observations, and those observations were counts of things (that means non-negative integers). Given that information, tell me what my three observations could have been.
The trick is that the mean of n integers can only land on multiples of 1/n. If we're averaging 3 integers, the sum is itself an integer, so the mean can only end in .000, .333..., or .667 - there is no way to get 4.13. Even for the nerd gym, this is a little off the wall to be talking about...or is it?
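That granularity check is the whole idea behind the GRIM (Granularity-Related Inconsistency of Means) test. Here's a minimal sketch of it in Python - the function name and the second example are my own, not the published GRIM tool:

```python
import math

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """True if some sum of n integer counts rounds to the reported mean."""
    target = f"{reported_mean:.{decimals}f}"
    # The only integer totals that could possibly produce the reported mean
    candidates = {math.floor(reported_mean * n), math.ceil(reported_mean * n)}
    return any(f"{total / n:.{decimals}f}" == target for total in candidates)

print(grim_consistent(4.13, 3))  # False: no three counts average to 4.13
print(grim_consistent(4.33, 3))  # True: e.g. 4 + 4 + 5 = 13, and 13/3 rounds to 4.33
```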

We've done a great job teaching everyone how to get strong. You guys really understand how to do things correctly and safely. My next mission is to get everyone to understand nutrition because good food is even more important than exercise. But I'm up against a huge wall - the misinformation from academia and the government.

What would you say if the Executive Director of the USDA's Center for Nutrition Policy and Promotion, appointed by Obama, who has been cited almost 21,000 times and wrote the 2010 USDA nutrition guidelines, didn't even bother making up his data - he just made up the averages...and those averages were mathematically impossible?

It's possible that an author can make a mistake, but how about 150 mistakes? Or what about the oversight that having 21,000 citations is supposed to provide?

Even if the math were correct, have you heard about "p-hacking"? Again, a bit of detail: when a paper reports that something is "significant," the authors don't use the word the way we do in conversation, to mean important or relevant. They mean "statistically significant," or statistically reliable, defined as the probability (p) of the results happening by random chance being less than some agreed-on threshold, typically 5% (p < 0.05).

What's 5%? One chance in 20. What if you have 5 conditions you're studying and you want to see if one of them has an effect on another? You set up pair-wise tests: does A affect B, A-C, A-D, and so on. With 5 conditions, that's 20 pair-wise combinations - each condition against the other 4, the open cells in the grid below - and with your threshold at 5%, on average one of them will show up as statistically significant just by chance. Then you get to write a paper about it and how you're so smart that you were able to tease that connection out of the data (there's a quick simulation of this right after the grid). What happens when you look at hundreds of variables? You get lots of false positives you can write about, which gets you 21,000 citations and gets you appointed to write the nutrition guidelines for the government.


    A  B  C  D  E
A   X
B      X
C         X
D            X
E               X
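To see how easy it is to "find" something in pure noise, here's a quick simulation of that grid - a sketch with my own numbers and variable names, nothing from the paper in question: 20 pair-wise tests per study at p < 0.05, run on data with no real effects at all.

```python
import random

random.seed(4)
studies = 10_000
tests_per_study = 20   # the 20 open cells in the grid above
alpha = 0.05           # the usual p < 0.05 cutoff

total_false_positives = 0
studies_with_a_finding = 0
for _ in range(studies):
    # On pure noise, each test's p-value is effectively uniform on [0, 1]
    hits = sum(random.random() < alpha for _ in range(tests_per_study))
    total_false_positives += hits
    studies_with_a_finding += hits > 0

print(total_false_positives / studies)   # about 1.0 false positive per study
print(studies_with_a_finding / studies)  # about 0.64: most noise-only studies "find" something
```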

I knew about p-hacking, and it's even discussed in academia now, but I'm so glad that the GRIM test exists and that there are freelance investigators out there checking papers. At least folks will now have to make up their data and report self-consistent means instead of just making up the means.

Here's the whole exposé about the rogue researchers finding fraud in the highest places of academia.



Warm-up

row 500 / run 400
crawling lunge
10 KB swings or snatches
double KB overhead lunge
10 TGUs or windmills
10 goblet squats
5 pull-ups or push-ups or dips

Strength

snatch pull 5-3-2-3x2

Accessory/Skill

2 windmills between strength sets

Group Workout

3 rounds:
row 1,000m
rest 4:00, complete 5 heavy power cleans during your rest


