TOO MUCH, TOO LITTLE SLEEP TIED TO ILL HEALTH IN CDC STUDY

Study: Long-Term Breast-Feeding Will Raise Child's IQ

WOMEN, WANT A HEALTHY MARRIAGE? MARRY MAN UGLIER THAN YOU, STUDY SAYS

STUDY: FOOD IN MCDONALD'S WRAPPER TASTES BETTER TO KIDS

Study: 1 in 50 U.S. babies abused, neglected in 2006

And naturally we’re all aware of the competing studies that exist too. One study shows that eggs are bad for you; another that they’re good for you. One study shows that margarine is a healthier alternative to butter; another that butter is better for you. With so many competing studies, you can find scientific backing for just about any position you want to take (especially in health matters).
The existence of so many studies helps to emphasize a point about statistical analysis. Statistics is a powerful tool, but if you do not set up the guidelines and restrictions for your samples properly, any statistics you observe won’t amount to a hill of beans. And we’re not even talking about the inherent fluctuations that require error bars (the part that says +/- 3%, for example). Nor are we addressing the political manipulation of statistics in the form of pollaganda. Instead, I’m talking about something at the heart of statistics itself; it is universal.
To demonstrate what it is, let us first ask a simple question: when we do a statistical analysis of some observation, why are we doing it? As you can see in the headlines above, most of the time studies are done to find a causal link between some object or action and some result. Thus, the first headline above says that too much or too little sleep (the cause) is “tied” to “ill health” (the effect). We also see that women should marry uglier men for a healthy marriage (in a study obviously written by an ugly man).
Now let us assume that there is a correlation that all these studies found. Let us assume that it is the case that people who sleep less than six hours a night weigh more than those who sleep eight hours a night, and that women who married uglier men (however that is defined) are in healthier (however that is defined) marriages. The fact of the matter is that when you compare any subset of a group, however you wish to define that subset, with the rest of the group as a whole, you will find things that the small group has in common at a statistically higher rate than the group as a whole. This happens automatically and does not mean that it is relevant in a causative sense!
To give a simple example, let’s examine hockey (since I like hockey). There are 30 teams in the NHL. Of those 30 teams, 7 are named after animals (the Penguins, Bruins, Thrashers, Panthers, Ducks, Coyotes, and Sharks) and 7 are named after people-groups (the Islanders, Rangers, Canadiens, Senators, Blackhawks, Oilers, and Kings). Each group of 7 constitutes 23% of the teams in the League.
There have been 80 Stanley Cups awarded since 1926. During that time, teams named after animals have won 8 of them, or 10%. Teams named after people-groups, however, have won 39, or 49%. Clearly, having a team named after a people-group rather than an animal provides a statistical advantage to a hockey team…
Perhaps someone could argue that the statistical data isn’t fair. After all, the Thrashers (1999), Panthers (1993), Ducks (1993), Coyotes (1996), and Sharks (1991) are all teams that did not exist before the 1990s! On the other hand, the Rangers, Canadiens, Senators, and Blackhawks all existed in 1926 (the start of this survey). Furthermore, the Kings were founded in 1967, the Oilers in 1971 and the Islanders in 1972. Of the animal teams, only the Bruins were around in 1926 (the Penguins were founded in 1967). Thus, using 1926 as the baseline (since before that there were other teams besides just NHL teams that could play for the Cup), the average year of founding for animal teams is 1981 and for people-group teams it’s 1945.
However, we can adjust for that. Dividing each group’s average years of existence by the Cups it has won, animal teams have won a Cup once every 3.25 years of existence, while people-group teams have won one every 1.59 years. Clearly, it still remains better to have a team named after a people-group than an animal. (And I’m not biased, since I cheer for the Avalanche, which is neither a people-group nor an animal…)
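As a sanity check, the figures above can be reproduced in a few lines of Python. The team data comes straight from the post; the 2007 reference year is my assumption, inferred from the 3.25 and 1.59 figures:

```python
# Team data as given in the post; the reference year is an assumption.
REFERENCE_YEAR = 2007

# Founding years (1926 baseline), in the order the post lists the teams.
animal_founded = [1926, 1967, 1999, 1993, 1993, 1996, 1991]  # Bruins, Penguins, Thrashers, Panthers, Ducks, Coyotes, Sharks
people_founded = [1926, 1926, 1926, 1926, 1967, 1971, 1972]  # Rangers, Canadiens, Senators, Blackhawks, Kings, Oilers, Islanders

total_cups, animal_cups, people_cups = 80, 8, 39

print(round(100 * 7 / 30, 2))                    # 23.33 -> each group is ~23% of the 30 teams
print(round(100 * animal_cups / total_cups, 2))  # 10.0  -> animal teams' share of Cups
print(round(100 * people_cups / total_cups, 2))  # 48.75 -> people-group share (~49%)

avg_animal = round(sum(animal_founded) / len(animal_founded))  # 1981
avg_people = round(sum(people_founded) / len(people_founded))  # 1945

# "Years per Cup": average years of existence divided by Cups won.
print(round((REFERENCE_YEAR - avg_animal) / animal_cups, 2))   # 3.25
print(round((REFERENCE_YEAR - avg_people) / people_cups, 2))   # 1.59
```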
Now here’s the thing. The statistical data I’ve given here is all correct (assuming I didn’t make any typos or anything of that nature), but every rational person immediately recognizes that the type of name a sports team has makes no difference to the performance of that team. This attribute is linked statistically, but the linkage is accidental rather than causative.
Every time we do these surveys and examine the numbers, we have to realize that some number of the things discovered in common will be accidental correlations. The problem is that we ignore most of these connections. And when I say we ignore them, I don’t mean that we test the data and conclude, “This isn’t relevant”; rather, we never look for them in the first place. After all, were it not for the fact that I was looking for an example for this blog entry, I would never have cared what percentage of teams named after animals won the Stanley Cup. That correlation would have been excluded a priori as irrelevant.
But these irrelevant correlations are important to statistical analysis! Why? Because if a certain percentage of linkages are accidental, we have to account for them in our conclusions. In other words, we need some way of determining whether the link we discover is causative or merely the kind of statistical fluke you get when examining hockey mascots. And that means, in principle, examining all possible connections and discarding the accidental ones in order to know whether the link we found is one of the real ones.
That, however, is impractical to the point of impossibility. After all, it is relatively easy to come up with statistical correlations between things. For instance, with my hockey example it took me all of 15 minutes to come up with that correlation. The longest part was pulling up the Wiki sheets on the number of Stanley Cup wins various teams had had. Indeed, based on my experience I would argue that it is so easy to come up with meaningless links between data that it will always remain more likely that a correlation is accidental than causative. That is, for every one true causative link between a subset of a group and the average of the entire group, I would argue there are several accidental links. And these accidental links are not always as obviously accidental as the examples I’ve given. (For a less obvious example, think of the correlation between diabetes and obesity. Does one cause the other? Or is it just a statistical fluke, similar to the names of hockey teams?)
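How easy is it, really? A quick simulation makes the point: assign teams a purely random “champion” label, invent a pile of equally random attributes, and count how many of those attributes nonetheless show a sizable gap. (This is my own toy illustration, not data from the post; all numbers and thresholds here are made up.)

```python
import random

random.seed(1)

n_teams = 30
# A randomly assigned "champion" label: roughly a quarter of teams get it.
champions = [random.random() < 0.25 for _ in range(n_teams)]

n_attributes = 200  # e.g. name type, jersey color, founding decade, ...
spurious = 0
for _ in range(n_attributes):
    # Each attribute is pure coin-flip noise, unrelated to the label.
    attr = [random.random() < 0.5 for _ in range(n_teams)]
    with_attr = [c for c, a in zip(champions, attr) if a]
    without_attr = [c for c, a in zip(champions, attr) if not a]
    if with_attr and without_attr:
        gap = sum(with_attr) / len(with_attr) - sum(without_attr) / len(without_attr)
        if abs(gap) > 0.20:  # a "notable" difference in champion rate
            spurious += 1

# Dozens of the 200 meaningless attributes show a headline-worthy gap.
print(f"{spurious} of {n_attributes} random attributes show a gap over 20 points")
```

None of these attributes has anything to do with being a champion, yet a healthy fraction of them would look like a finding if you went hunting for one.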
If accidental links make it so difficult to prove our position statistically, then what good is it to come up with a statistical correlation in the first place? For most studies that you read about in the media, the answer is: “None.” However, for scientists there remains one thing that a truly causative link can do that an accidental link cannot, and it saves the field: a truly causative link will enable you to make a prediction that you can test and verify. If something is causative, it will continue to cause the effect at the same rate; if it is accidental, it is a random linkage, and random linkages break down under further testing. For instance, the fact that people-group teams have won more Stanley Cups than animal teams does not help us predict who will win the Cup this year or next year or the year after that; therefore, it is an accidental link rather than a causative one. However, if further testing shows that the percentage of obese people who get diabetes remains constant, then we can have more confidence that it is a truly causative link rather than simply a statistical accident.
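That persistence test can itself be sketched in code. Below, a hypothetical attribute genuinely doubles the chance of an outcome; the gap it produces shows up in a “discovery” sample and holds up at about the same rate in a fresh “replication” sample, which is exactly what an accidental link would fail to do. (The probabilities and sample sizes are illustrative assumptions of mine.)

```python
import random

random.seed(7)

def outcome(has_attribute):
    # The causal attribute genuinely doubles the chance of the outcome.
    return random.random() < (0.4 if has_attribute else 0.2)

def rate(pairs, flag):
    hits = [o for f, o in pairs if f == flag]
    return sum(hits) / len(hits)

# A "discovery" sample and a later, independent "replication" sample.
discovery = [(f, outcome(f)) for f in [random.random() < 0.5 for _ in range(5000)]]
replication = [(f, outcome(f)) for f in [random.random() < 0.5 for _ in range(5000)]]

gap_then = rate(discovery, True) - rate(discovery, False)
gap_now = rate(replication, True) - rate(replication, False)
print(round(gap_then, 2), round(gap_now, 2))  # both near 0.2: the link persists
```

Run the spurious-attribute hunt from the earlier sketch on a second batch of data and the gaps evaporate; run this causal one and the gap stays. That difference is the prediction test in miniature.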
So there are some ways to salvage statistics. But they require that we be able to conduct further tests with our predictions in place in order to sort out whether we have a meaningful causative link or a meaningless accidental one. If we cannot conduct those further tests, then any causative links will be lost in the noise of the countless accidental ones. They may be true, but it will be impossible to verify them.