Studies The American Heart Association Doesn’t Want You To Read

From the Fat Head Blog

In my last post, I compared the organizations that have been promoting arterycloggingsaturatedfat! hysteria for the past 40 years to prosecutors who refuse to believe they put an innocent man in prison — even when new evidence says that’s exactly what happened.  Never mind that pesky new DNA test, they insist.  You have to look at the totality of the evidence.  Someone who either works for or believes in the American Heart Association even left a comment to that effect (along with a couple of links):

You don’t take fringe studies from 50 years ago that contradict the vast majority of lipid research over the last half century and make a conclusion. You look at the body of evidence.

Well, the Sydney Diet Heart Study wasn’t exactly a “fringe” study.  It was a controlled clinical trial that ran for seven years and included 458 subjects.  Unless human biology has changed in the past 50 years, the results are still relevant.

I’ll examine that “vast majority of lipid research” the commenter linked to in a moment.  First I’d like to deal with the argument that we must consider the totality of the evidence.

No, we don’t.  Good scientists don’t consider a hypothesis to be validated unless the evidence supporting it is consistent and repeatable.  As the philosopher of science Karl Popper explained, if your hypothesis is that all swans are white, then as soon as I start finding black swans, your hypothesis is wrong.  It’s been falsified.  If you find 100 white swans and I only find three black ones, you might insist that the “totality of the evidence” is in your favor, and it is – but your hypothesis is still wrong.
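If you like, here’s the swan logic as a toy bit of Python.  The counts are just the ones from the example above, nothing more:

```python
# Toy illustration of falsification: no number of white swans proves
# "all swans are white," but a single black swan disproves it.
# The counts (100 white, 3 black) come from the example in the text.
observations = ["white"] * 100 + ["black"] * 3

hypothesis_survives = all(swan == "white" for swan in observations)

print(f"White swans: {observations.count('white')}")              # 100
print(f"Black swans: {observations.count('black')}")              # 3
print(f"'All swans are white' survives: {hypothesis_survives}")   # False
```

One hundred confirmations, three refutations, and the hypothesis is still dead.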

Another reason I don’t buy the “totality of the evidence” argument is that selection bias and publication bias are both rampant in nutrition science.  I’ve read papers where the conclusions simply weren’t supported by the actual data.  Studies that don’t produce the results the investigators wanted are often buried.  If your academic paper supports conventional wisdom, it’s far more likely to be published.  As Dr. Uffe Ravnskov can tell you from personal experience, papers that challenge conventional wisdom are often rejected over and over, with little or no explanation … unless you consider “this just has to be wrong” an explanation.  So when researchers decide to do a meta-analysis of published studies, there’s a good chance they’re analyzing a stacked deck.
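To see what a stacked deck does to a meta-analysis, here’s a toy simulation (mine, not anything from the actual literature): the true effect of an intervention is zero, but only the trials that happen to lean the “right” way get published, and then someone averages the published record.

```python
import random
import statistics

random.seed(42)

# Toy simulation of publication bias: the true effect is zero, but if
# journals mostly print trials that happen to show a "benefit," a
# meta-analysis of the published record sees an effect anyway.
true_effect = 0.0
trials = [random.gauss(true_effect, 1.0) for _ in range(200)]

# Extreme assumption, purely for illustration: only trials whose
# observed effect points the "right" way make it into print.
published = [t for t in trials if t > 0]

print(f"Mean effect, all {len(trials)} trials:       {statistics.mean(trials):+.3f}")
print(f"Mean effect, {len(published)} published trials: {statistics.mean(published):+.3f}")
```

With a true effect of exactly zero, the published-only average still comes out solidly positive.  That’s the stacked deck.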

With that in mind, let’s start by looking at some of the “totality of the evidence” offered by the arterycloggingsaturatedfat! crowd, then move on to a few black swans.

The first link from our “body of evidence” commenter was to this study, a meta-analysis of eight studies.  And how were those studies selected?  Did the investigators go out and examine the entire body of evidence?  Hardly.  Here’s a quote from the study:

Of 346 identified articles, 290 were excluded based upon review of the title and abstract. Full texts of the remaining 54 manuscripts were independently assessed in duplicate by two investigators to determine inclusion/exclusion. Forty-six studies were excluded because they did not meet inclusion and exclusion criteria.

Most of the “body of evidence” was excluded merely by reading titles and abstracts.  I’m not claiming the investigators rejected studies that didn’t support conventional wisdom, but the potential for cherry-picking is certainly there.  Out of 346 studies they identified, they ran their analysis on just eight.
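Just to put that funnel in plain arithmetic, using only the numbers quoted above (the paper reports 54 full texts remaining, which I take at face value):

```python
# The selection funnel from the quoted passage, as plain arithmetic.
identified = 346
excluded_on_title_abstract = 290
full_texts_reviewed = 54          # as reported in the paper
excluded_after_full_review = 46
analyzed = full_texts_reviewed - excluded_after_full_review

print(f"Studies analyzed: {analyzed}")                               # 8
print(f"Share of identified studies: {analyzed / identified:.1%}")   # 2.3%
```

Roughly 2 percent of the “body of evidence” they identified actually made it into the analysis.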

To their credit, the researchers discussed the weaknesses of the eight studies they selected:

Many of the identified randomized trials in our meta-analysis had important design limitations.  For example, some trials provided all or most meals, increasing compliance but perhaps limiting generalizability to effects of dietary recommendations alone; whereas other trials relied only on dietary advice, increasing generalizability to dietary recommendations but likely underestimating efficacy due to noncompliance. Several of these trials were not double-blind, raising the possibility of differential classification of endpoints by the investigators that could overestimate benefits of the intervention. One trial used a cluster-randomization cross-over design that intervened on sites rather than individuals; and two trials used open enrollment that allowed participants to both drop-in and drop-out during the trial. The methods for estimating and reporting PUFA and SFA consumption in each trial varied, which could cause errors in our estimation of the quantitative benefit per %E replacement. One of the trials also provided, in addition to the main advice to consume soybean oil, sardines to the intervention group, so that observed benefits may be at least partly related to marine omega-3 PUFA rather than total PUFA consumption.

Enough said about that one.

Read the rest of the article at the Fat Head Blog: Studies The American Heart Association Doesn’t Want You To Read
