Wednesday, July 6, 2011

When We See What We Want

Head Case: Jonah Lehrer

In 1981, Harvard paleontologist Stephen Jay Gould published "The Mismeasure of Man," a fierce critique of various scientific attempts to measure human intelligence. Mr. Gould began the book with a takedown of "craniometry," a popular 19th-century technique that attempted to find correlations between skull volume and intellect.

Even scientific measurements can be thrown off by our preconceived notions.

His harshest criticisms were directed at Samuel Morton, an American physician who became famous for reporting, in 1839, that different human races had different average skull sizes. This led many of Morton's contemporaries to conclude that intelligence was a racial trait and that some races were inherently smarter than others.

Needless to say, Mr. Gould, who died in 2002, despised Morton's racist ideology. But he went further, delving into Morton's raw data to argue that Morton's beliefs had warped his science. Because Morton knew what he wanted to find—that whites had the biggest heads—he ended up mismeasuring the skulls of his subjects.

Before long, Morton became a case study in scientific bias, a warning to researchers that their preconceived notions can dramatically influence what they discover. Although Morton considered himself objective, he was a shoddy observer, blinded by his own beliefs.

Or so we thought. A new study by a team of anthropologists led by Jason Lewis of Stanford reanalyzed Morton's data, remeasuring more than 300 of the skulls used in the original research. To their surprise, the anthropologists discovered that the overwhelming majority of Morton's skull data was accurate. Although they strongly criticize Morton's racial theories and note that variations in skull size are largely determined by climate (not by genetics or innate intelligence), they conclude that he did not fudge the facts.

How, then, did Mr. Gould come to his harsh conclusion? According to the anthropologists, Mr. Gould was guilty of the very same flaw he saw in Morton. By reanalyzing Mr. Gould's own analysis, they demonstrate that he cherry-picked data sets, misused statistics and ignored inconvenient samples. As the scientists note, "Ironically, Gould's own analysis of Morton is likely the stronger example of a bias influencing results."

The larger lesson of the Gould-Morton affair is that bias is everywhere, that many of our studies are shot through with unconscious errors and subtle prejudices. To paraphrase Paul Simon, we see what we want to see and disregard the rest.

In recent years, it's become clearer that these psychological shortcomings are a serious societal problem. Because we believe we're impervious to bias—we're blind to our own blind spots—we assume that our judgment isn't affected by financial incentives or personal opinions. But we're wrong.

This problem has been most convincingly demonstrated in clinical trials. A 2005 study of psychiatric drug trials found that when academic researchers were funded by a drug company, they were nearly five times as likely to report that the treatment was effective. (A similar pattern was found with oncology drugs.) What makes this result so disturbing is that all of these studies were randomized, double-blind trials, which are typically regarded as the gold standard of medical evidence. And yet the financial incentives seemed to influence the data decisively.

Sometimes, even small amounts of money can have big consequences, shaping our views of the evidence. A 1994 study of physicians who requested that drugs be added to the list of approved hospital medications showed that they were far more likely than other physicians to have accepted free meals or travel funds from drug makers. Other studies have found that the rate of drug prescriptions spikes after doctors meet with a pharmaceutical sales representative, especially when the representative comes bearing gifts.

Such biases don't just influence scientists and doctors. A 2003 study by economists at Carnegie Mellon and Harvard looked at how "independent" auditors are biased by their relationships with clients. (In most instances, auditors are hired and fired by the firms they are supposed to investigate.) The economists found that professional auditors were significantly more likely to approve questionable accounting practices when those practices came from the firm paying their bills.

What this depressing research demonstrates is that the only way to get objective data is to have institutions that assume objectivity doesn't exist. It's not enough to force scientists and doctors to declare conflicts of interest, because our biases seep in anyway. Rather, we need to do a better job of funding truly independent studies and of treating those that are not with extra skepticism. We should also encourage researchers to make their raw data public, as Samuel Morton did, so that others can check it. As Stephen Jay Gould proved all too well, men are inveterate mismeasurers.