Many people accept that we should eat less red and processed meat for a variety of reasons, one of which is our health. This is unfortunate, because there is no sound scientific evidence to support such a recommendation. Certainly, many nutritional studies have been published on this issue; however, the evidence they provide is circumstantial and inconclusive. It would be tedious to address those publications and their limitations directly – better to provide an overview of nutritional studies in general, so as to distinguish the good from the bad. I will start with a series of questions to have in mind when reading a nutritional study. I don’t recommend reading press articles about nutritional studies – as wearisome as it will be, if the truth matters it will be necessary to track down the study itself.
Q1. Was the study a Randomised Controlled Trial (RCT) or was it an Association Study (AS)?
This is the most fundamental question to ask. The point of difference is that an RCT follows scientific principles, whereas an AS does not. The majority of nutritional studies will be an AS.
For a study to be recognised as science, it should:
1. Propose a testable hypothesis, possibly based on preliminary observation.
2. Design experiments to test that hypothesis.
3. Conduct the experiments and analyse the results.
4. Accept, reject or modify the hypothesis accordingly.
Let’s say we have formed an hypothesis that consuming red-meat increases mortality. To address this scientifically, we should carry out an RCT. One way to do that would be to: (1) recruit a large number of people, representative of the population as a whole (age, gender, ethnicity, health etc); (2) divide them randomly into two matching groups (the ‘R’ in RCT); (3) make some change to red-meat consumption in one group (e.g. reduce it and replace with white-meat) while leaving the other group alone (known as the control, the ‘C’); (4) follow both groups for a number of years (5 is often enough for mortality studies); (5) determine whether there was a significant difference in mortality between the two groups over the trial period (the ‘T’); (6) accept, reject or modify the hypothesis.
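The six steps above can be sketched in code. This is a minimal illustration only – the participants, the group labels and the outcome function are hypothetical placeholders, not part of any real trial protocol:

```python
import random

def run_rct_sketch(participants, follow_up_outcome):
    """Hypothetical RCT sketch: randomly allocate participants to two
    groups, change the diet in one, and compare mortality rates."""
    pool = list(participants)
    random.shuffle(pool)                       # the 'R': random allocation
    half = len(pool) // 2
    intervention, control = pool[:half], pool[half:2 * half]
    # In a real trial these outcomes are measured over years of follow-up;
    # follow_up_outcome(p, arm) returns 1 if participant p died, else 0.
    deaths_i = sum(follow_up_outcome(p, "reduced red meat") for p in intervention)
    deaths_c = sum(follow_up_outcome(p, "usual diet") for p in control)
    # The final step - accept, reject or modify - rests on whether the
    # difference between these two rates is statistically significant.
    return deaths_i / half, deaths_c / half
```

The crucial point is the shuffle: because allocation is random, any confounder (smoking, age, wealth) should be spread evenly across both groups, so a difference in outcome can be attributed to the intervention itself.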
An RCT is a gold standard for testing a nutritional hypothesis. You won’t come across many of them though. An exception seems to be low-carbohydrate diet studies, which challenge the nutritional establishment with RCTs.
This is uncommon in general though. For example, an RCT has never been undertaken to test whether eating red-meat increases mortality in the general public. Nor is one likely to be. Something to ponder when health authorities tell you to eat less red meat.
You may also be surprised to know that an RCT, in healthy men and women with a range of ages, has never been carried out to test what is known as the diet-heart hypothesis (that saturated fat increases blood cholesterol and causes heart disease). This is the cornerstone of current dietary guidelines and was first proposed over 40 years ago, yet it has not been subjected to an RCT in the general public. The RCTs that do exist have been undertaken in select sub-populations (e.g. men with a history of heart disease, or inmates of mental hospitals), and none have confirmed the hypothesis. That’s why, after all this time, it is still called an hypothesis (see Zoe Harcombe’s review).
The type of nutritional study that you are likely to come across most often will be an AS. An AS looks for an association (correlation) between two variables (for example, red-meat consumption and mortality) in a pre-existing data set. For example, an AS might look at a survey of a group of people and notice that those who ate more red meat had higher mortality. That’s the association. The temptation is to infer that eating red meat increases mortality. But that’s an hypothesis about causality. An AS differs from an RCT in that it does not involve experiments that can test an hypothesis – that is why an AS is not science. An AS is a descriptive statistic, from which causality cannot be inferred.
I’ll illustrate this point with three examples. (1) We notice that more umbrellas are out when it is raining than when it is fine. Further observation shows that umbrellas and rain associate rather strongly – too strongly to be down to chance. That’s the association part, and where an AS should stop. The problem arises when the AS tries to infer or imply causality from that association, because, in the absence of further observation or experiments, we could infer that umbrellas cause rain. (2) Firefighters associate with fires in buildings – does that mean firefighters cause fires in buildings? (3) More seriously – cholesterol is associated with plaques in blood vessels, but that doesn’t necessarily mean that cholesterol causes plaques – the cholesterol may have been sent there to manage the plaque (one of the roles of cholesterol is anti-inflammatory). If this latter scenario holds, then lowering cholesterol (e.g. with statins) would be ineffective in reducing plaque, just as taking peoples’ umbrellas away won’t stop it raining, and sending firefighters home won’t put out the fire. Causality matters.
A further serious drawback for an AS is that it cannot determine whether an association is a direct one in either direction: ‘A’ might be associated with ‘B’, however, A may not cause B and B not cause A. They might just both associate with a cofactor, ‘C’.
An example: Shortly, I will critique an AS (on red and processed meat consumption and mortality) to illustrate some of the points I am making in this post. In that study, the authors report a strong association between red meat consumption and death from respiratory disease. This was one of the strongest associations of those that they tested for. Does that mean that eating meat increases the risk of death from respiratory disease? Of course not, there was a ‘C’ factor – smoking (itself the most common risk-factor for respiratory disease). There were over twice as many smokers in the high red-meat group compared to the low red-meat group. All the authors were showing was that smoking is a risk for respiratory disease. Red meat consumption associates with smoking, and respiratory disease associates with smoking; therefore red meat consumption associated with respiratory disease in their analysis. The authors did not acknowledge or discuss this glaring confound.
Is there a place for AS in science? Yes – to help develop a testable hypothesis.
If ‘A’ associates with ‘B’, then it would be legitimate to form the hypothesis that ‘A’ causes ‘B’ (or ‘B’ causes ‘A’), but an experiment needs to be designed (such as an RCT) to test that hypothesis. An AS, or a series of ASs, should not inform our dietary decisions.
There is a third type of nutritional study that you may come across – a Meta Analysis (MA). This is where the results of multiple studies are combined in a grand analysis to reach an overall conclusion. If the MA is an analysis of a series of RCTs, its conclusion will be of importance. If it is an analysis of a series of ASs, its conclusion will carry no more weight than the ASs themselves.
Q2. What is the population being studied?
Was it human? Many nutritional conclusions get drawn from rodent studies and other laboratory work. Be particularly suspicious of high meat/fat studies in mice, as mice are herbivores and not biologically-suited to such a diet. We are.
If it was a human study, were participants drawn from a representative cross-section of the population? Or, was it carried out in a select sub-population (e.g. inmates of mental hospitals, overweight men, or retired middle-class Americans – see later).
Are the findings likely to be relevant to you? Dietary guidelines are directed at everyone, so they must be confirmed by studying a general population.
Q3. How was the outcome reported – relative or actual change?
First, was the outcome clinical or a biomarker? A study using clinical outcomes, such as actual cardiovascular disease, carries more weight than one using biomarkers such as cardiovascular risk factors (cholesterol, blood pressure, etc).
There are two common ways that a difference in outcome measure between groups can be reported – relative (exaggerated) or actual (real).
It is crucial to understand what this means, and to look out for it, because it is a common ploy. I will explain with an hypothetical example for simplicity.
Let’s say that the risk of colorectal cancer in a healthy population is 3%. Now, let’s say that a nutritional study (e.g. of red-meat) finds that in people who eat meat the risk increases to 4%.
But a study that suggests we should eat less red-meat, because it is associated with a 1 percentage-point increase in the risk of colorectal cancer, is not going to get widespread attention or a press article.
There’s a craftier way – quote what is known as the relative difference. The risk started out at 3%, and increased by 1 percentage point – in other words, it increased by a third (1/3). That’s a 33% increase in risk. So, using the same data but expressed differently, it is possible to associate red-meat consumption with a 33% increase in the risk of colorectal cancer. A headline is assured with a number like that.
This is common and accepted practice. The relative difference can be given prominence in the Abstract and Conclusions, with the actual difference buried somewhere else (such as in a Table). Press reports are almost guaranteed to state only the relative difference.
Both numbers are mathematically correct of course, it is only the public perception that is being manipulated.
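The two framings can be computed side by side. The numbers here are the hypothetical ones from the example above (3% baseline risk rising to 4%), not from any real study:

```python
def risk_differences(baseline_pct, exposed_pct):
    """Return the actual and relative differences for the same data."""
    actual = exposed_pct - baseline_pct                           # percentage points
    relative = (exposed_pct - baseline_pct) / baseline_pct * 100  # per cent
    return actual, relative

actual, relative = risk_differences(3, 4)
# actual = 1 (percentage point, buried in a table)
# relative ≈ 33.3 (per cent, the headline number)
```

Both calculations are arithmetically correct; only the choice of which one to headline differs.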
Q4. What is a Hazard Ratio (HR)?
At times, you may see the results presented as a HR, or related calculations like odds ratio (OR), relative risk (RR), etc. These mean subtly different things, but essentially they are a ratio of outcome measures between two groups. They should be presented as a range (confidence interval), and if that range includes the number 1, then the ratio does not indicate a significant difference between groups (a ratio of 1 indicates the outcome was the same in both groups).
In nutritional studies, the HR tends to be abused. For a HR to express a meaningful association, it should have a value of at least 2. Even then, there would be great uncertainty as to the truth of the association, and it would need to be followed up with an RCT. I have never seen a HR of 2 or more reported in any nutritional study. I have even seen the value of 1.01 used to indicate an unequivocal association (see later).
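The confidence-interval check described above is mechanical, and can be written down directly. This is a simple sketch of the rule, not a function from any statistics library:

```python
def hr_significant(ci_low, ci_high):
    """A hazard-ratio confidence interval that includes 1 does not
    indicate a significant difference between the groups (a ratio of
    1 means the outcome was the same in both groups)."""
    return not (ci_low <= 1 <= ci_high)

# An HR reported as 1.10 (95% CI 0.95-1.30) straddles 1:
hr_significant(0.95, 1.30)   # not significant
# An HR of 1.25 (95% CI 1.05-1.45) excludes 1, but by the rule of
# thumb above (HR >= 2) it would still be too marginal to trust:
hr_significant(1.05, 1.45)   # statistically significant, yet weak
```

Note that passing this check is only the first hurdle; a ratio can exclude 1 and still be far too close to it to express a meaningful association.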
Q5. What version of ‘significant’ applies?
Headlines frequently state that something significantly increases the risk of something else, however, scientists have a very explicit definition of the word ‘significant’. It is a statistical definition, and it means this: A difference is significant if the likelihood that it could have occurred purely by chance is less than 1 in 20 (i.e. 5%). It is expressed as a p-value (probability-value), so you will see p<0.05 (which is 5% expressed as a decimal) in the scientific paper itself. If the difference is stronger statistically, you may see p<0.01 or p<0.001. The significance is not an expression of the magnitude of the difference, rather it is an expression of the confidence that the difference is real and not due to chance.
Common usage of the word ‘significant’ is quite different, it means something like: remarkable, outstanding or important. Common usage is about the magnitude of the difference, not the statistical confidence in that difference.
Which means that a scientist may say that a change (of 1%) is significant, whereas the magnitude of the change is not significant in the common sense of the word.
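The gap between the two meanings of ‘significant’ can be shown numerically. The sketch below uses a standard two-proportion z-test (|z| > 1.96 corresponds roughly to p < 0.05, two-sided) with made-up numbers: the same tiny 1-percentage-point difference is statistically significant with 100,000 people per group, but not with 1,000.

```python
import math

def z_test_two_proportions(k1, n1, k2, n2):
    """Two-proportion z-test; returns the z statistic.
    |z| > 1.96 corresponds roughly to p < 0.05 (two-sided)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (p1 - p2) / se

# 4% vs 3% in huge groups: overwhelmingly 'significant' (|z| ~ 12)
z_big = z_test_two_proportions(4000, 100_000, 3000, 100_000)
# The identical 4% vs 3% difference in small groups: not significant
z_small = z_test_two_proportions(40, 1_000, 30, 1_000)
```

Statistical significance is a statement about sample size and chance, not about whether the difference is large enough to matter.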
Q6. What does ‘risk’ mean?
There is a lot said about increasing or reducing the risk of something. The ‘risk of all-cause mortality’ is a common one. But, bear in mind that the risk of all-cause mortality is exactly 100% across the whole population. We are all mortal.
So ‘risk’ is usually taken to mean that lifespan is shortened – that more people die within a given study window. It says nothing about how much lifespan is shortened on average. There are ways to estimate this, and typically it will be a few days to a few weeks over the course of a lifetime.
Also, risk is typically presented in the negative. In the hypothetical example I used earlier, the risk of colorectal cancer with red-meat consumption was 4% (up from 3%). That means that 4 out of 100 people eating red-meat might get colorectal cancer, whereas 3 people out of 100 would get it anyway, even without red-meat consumption.
However, the results could also be expressed in the positive. We can look at how many people don’t get colorectal cancer: We can point out that 96 out of 100 people eating red-meat will probably not get colorectal cancer, whereas 97 people would not get it anyway. A relative improvement of 1/96 = 1.04%.
It is easy (and common) to play these games (and others), depending on the desired message.
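The two framings follow directly from the same hypothetical 3%-to-4% numbers used throughout this section:

```python
def framings(baseline_pct, exposed_pct):
    """Negative framing: relative increase in disease risk.
    Positive framing: relative reduction in the chance of
    staying disease-free (per 100 people)."""
    negative = (exposed_pct - baseline_pct) / baseline_pct * 100
    free_exposed = 100 - exposed_pct    # e.g. 96 of 100 stay disease-free
    free_baseline = 100 - baseline_pct  # e.g. 97 of 100 would anyway
    positive = (free_baseline - free_exposed) / free_exposed * 100
    return negative, positive

neg, pos = framings(3, 4)
# neg ≈ 33.3% - the alarming headline
# pos ≈ 1.04% - the same data, barely worth mentioning
```

Identical data, two honest calculations, two very different impressions.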
Q7. Conflicts of interest
Scientific journals are getting better at demanding conflict-of-interest statements and increased transparency. However, they are almost always financial (did industry fund the study, has the scientist previously received industry funding, does the scientist sit on a company board or advise one, etc).
However, there are multiple other hidden conflicts of interest. To understand nutritional studies, you might need to consult an investigative journalist.
Some questions to think about: Do the senior scientists on the publication sit on influential committees that set health guidelines? Are they publishing studies to prop up those guidelines? Do the scientists have an ideology? For example, are the scientists vegetarians and publishing a negative paper on meat consumption? Does the Editor of the journal, who decides on publication, have their own interests? Editors can send submitted manuscripts to reviewers who may be on-message. Scientists can choose to send manuscripts to friendly editors. Some scientific journals insist that the authors themselves submit a list of potential reviewers, further enabling bias in the peer-review process.
I’ve gone on long enough.
I will now give a case example to show how some of these issues can come together.
An Illustrative paper
“Mortality from different causes associated with meat, heme iron, nitrates, and nitrites in the NIH-AARP Diet and Health Study: population based cohort study”
This paper was published last year (2017) in the prestigious British Medical Journal (BMJ). It reported on over 500,000 participants, had a 16-year follow-up and, on a casual glance, appeared to be a substantial and important contribution to the literature. In press articles, the thousands of participants and long follow-up were emphasised to give gravitas to the conclusions. These were the conclusions:
“The results show increased risks of all cause mortality and death due to nine different causes associated with both processed and unprocessed red meat, accounted for, in part, by heme iron and nitrate/nitrite from processed meat. They also show reduced risks associated with substituting white meat, particularly unprocessed white meat.”
You can’t get more on-message than that. Let’s take it apart.
The first question as always – was it an RCT or an AS?
It was an AS – the word ‘associated’ can be seen in two places in the conclusion above. While the conclusion reports association, the most likely take-home message is causation: that red-meat increases the risk of all-cause mortality, including 9 specified causes. That was the message in the popular press too, and the message that can be expected to diffuse into the public perception. Strictly, the authors do not say that, but wording the conclusion this way is disingenuous – they would well know how it would be interpreted.
What was the population studied?
The study describes itself as a “population based cohort study”. However, it was not representative of the US population as a whole – the study population came from a pre-existing NIH-AARP Diet and Health Study. The AARP (American Association of Retired Persons) is a US not-for-profit that provides services to “enhance the quality of life for all as they age”. Thus, it is a self-selecting, aged population who chose to pay an annual fee to be a member of AARP. The demographics are predominantly white (93%), university educated, presumably privileged middle-class individuals, retired and over the age of 50 (most frequently 65-69). The US dietary guidelines are directed at everyone over 2 years of age irrespective of anything else. Members of AARP are not representative of that demographic. What can they say about the metabolic needs of a 3-year-old child, a rapidly-growing male adolescent, or an active construction worker?
Furthermore, the AARP members were not sampled randomly across the US, they came from chosen locales. This meant that the demographics did not include substantial numbers of African Americans (4.5%) or Hispanics (2%), or groups of low socio-economic status. Such people may be less likely to eat red meat (because it is expensive) but have increased mortality (because poverty associates strongly and negatively with health). The inclusion of these groups could have reversed the findings of the study.
The NIH-AARP study was a one-off survey carried out in 1995/6. The study founders mailed questionnaires out to 3.5 million members of AARP, and about 500,000 filled them in and returned them (a 14% response rate). Already, we have a subset whose characteristics will not be representative even of AARP members. The 124-item questionnaire asked responders to estimate their consumption of food, drink and portion sizes over the preceding year, and asked a series of medical, lifestyle and demographic questions.
That was it. The responders were not contacted again and no further data was collected. The veracity of their food recall was not tested. Everything hinges on this one food recall questionnaire, and on the assumption that participants made no further changes to their diets, or lifestyle, for the next 16 years.
The authors of the paper consulted the US Social Security Administration Death Master File to determine mortality up until 2011 (16 years on), and the National Death Index to determine the cause of death. They then looked for associations between mortality, cause of mortality, and self-reported consumption of red and processed meats 16 years earlier. The choice of 2011 as the cut off year is curious, given that their paper was published in 2017.
What did they compare?
The authors divided participants into 5 groups according to meat consumption. They then studied the highest group (one fifth of the participants) compared to the lowest group (another one fifth). Anyone between these limits (which was the majority – three fifths, or 60%) did not feature in the primary analysis. Hence, the study can say nothing about the health implications of low-moderate, moderate or moderate-high meat consumption.
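This extreme-fifths design is easy to sketch. The toy data below is illustrative only – it stands in for any ordered measure of meat consumption:

```python
def extreme_fifths(consumption):
    """Sort participants by consumption and keep only the bottom and
    top fifths; the middle three fifths (60% of participants) drop
    out of the primary comparison entirely."""
    ordered = sorted(consumption)
    n = len(ordered) // 5
    return ordered[:n], ordered[-n:]

low, high = extreme_fifths(range(10))
# low = [0, 1], high = [8, 9]; the values 2-7 are never compared
```

Whatever happens at low-moderate to moderate-high consumption – where most people actually sit – is invisible to an analysis structured this way.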
Some ‘C’ factors (confounds)
The authors acknowledge that participants in the highest fifth of red-meat consumption were more likely to be male, have a higher body mass index (BMI), be current smokers, be diabetic and to estimate their health status as poor to fair. I have already shown how smoking was a likely confound for respiratory disease – add in diabetes, overweight and generally poor health and they have some serious confounds.
Therefore, it should come as no surprise that this group turned out to have the higher mortality, which may have nothing to do with red meat. If two groups are to be compared, they should be matched for everything except the variable of interest (meat consumption). The authors were not able to do this because it was an AS based on a survey, it was not an RCT based on experimental data.
The authors dismissed these confounds and claimed that their analysis was adjusted for “sex, age at entry to study, marital status, ethnicity, education, fifths of composite deprivation index, perceived health at baseline, history of heart disease, stroke, diabetes, and cancer at baseline, smoking history, body mass index, vigorous physical activity, usual activity throughout day, alcohol consumption, fruit and vegetable intakes, total energy intake, and total meat intake”. Claiming to reliably adjust for this many important confounds, while looking for just one association (red-meat), is absurd, and the authors make no effort to be convincing (or even to explain the adjustment process). In plain-speak, ‘adjusting’ under these circumstances should be read as ‘making it up’.
The study used hazard ratios (HRs) to express its results. Recall that a number statistically greater than 1 is considered to be a positive association. In general, the average HR was around 1.25. A HR of this size is trivial, and would not normally be considered (except in nutritional science) to indicate a genuine association. Nonetheless, the authors come to unequivocal conclusions from these data.
The use of marginal HRs (or other related statistics) to reach an unequivocal conclusion is the norm for nutritional studies. Once a conclusion is reached, the marginality of the data doesn’t matter anymore. For example, this sentence comes from the Introduction to the paper:
“High intakes of heme iron have been shown to be associated with cancer and cardiovascular disease (ref)”.
Looking up that reference, it turns out that the range for relative risk (it’s like a HR) was 1.01 at the lower end. That is so close to 1 that it is unequivocally meaningless. Nevertheless, those authors drew the conclusion: “Higher dietary intake of heme iron is associated with an increased risk of cardiovascular disease.” (my underline).
The present paper refers to that conclusion, and the 1.01 gets lost to history. This is how tenuous data is used to form strong conclusions that propagate through the literature and amplify the public perception. It also shows how wary a reader of nutritional studies needs to be.
I will leave this paper here. There is nothing exceptional about it, it is common for nutritional studies to be ASs, to draw strong conclusions from marginal statistics, to be unrepresentative of the population, to have limitations in design that stem from not being designed prospectively, and to present their conclusions in a way that is technically correct but open to misinterpretation.
– – –
It is commonly accepted that there are health concerns surrounding processed meat too, and nitrites and nitrates used in preserving that meat usually get the blame (they do in the above paper too). Again, there are a number of issues to bear in mind, the first of which may surprise:
1. In 2010, Food Standards Australia New Zealand conducted a survey to estimate Australians’ dietary exposure to nitrates and nitrites. They reported that: “Most of our dietary exposure to nitrates and nitrites is through fruit and vegetables”
Their data for nitrites: “Vegetables (44-57%) and fruits (including juices) (20-38%) were the major contributors to estimated dietary nitrite exposure across the population groups. Nitrite exposure from processed meats accounts for only a relatively small amount of total dietary nitrite exposure (5-7%).”
The data for nitrates was similar.
The vegetables with the highest concentration of nitrites and nitrates were leafy green vegetables such as spinach or silverbeet. This makes sense, because nitrogen is absorbed from the soil (nitrogen is an important natural fertiliser) for making chlorophyll, and oxygen is expired through leaves during photosynthesis. Hence nitrogen and oxygen co-exist in the parts of plants where chlorophyll is concentrated and photosynthesis active – the leaves. Nitrite is a simple chemical combination of 1 nitrogen atom and 2 oxygen atoms, while nitrate combines 1 nitrogen with 3 oxygens. Thus, the raw materials, and their juxtaposition, favour the formation of nitrite and nitrate in dark green leaves with high levels of photosynthesis.
2. If you look at the ingredients list of processed pork products, you will see that the top ingredient is pork. It doesn’t say ‘pork meat’, just pork. That tells you something already. The economic reality is that many processed pork products probably contain parts of the pig that can’t be used for any other purpose. This may include parts and organs of the animal that you do not normally consider to be food. This is more likely for mass-produced pork products with indeterminate texture. How does this impact on health, irrespective of nitrites or nitrates?
3. Under what circumstances was the processed meat eaten? Was the pepperoni on a pizza with a beer? Was the ham or polony in a sandwich with margarine? Was the bacon in a club sandwich? Was the hot-dog in a white-bread bun with a sugary tomato sauce? In other words, is the food (mostly carbohydrate) that went with the processed meat a risk factor?
Likewise with red (unprocessed) meat. Was the beef consumed in the form of a hamburger with a side of fries and a large Coke? If the beef was taken out and replaced with tofu, would that make the meal healthier? Did the gastro-pub grilled steak come with a bunch of beer-battered chips?
In real-world situations these complexities matter, but they are seldom acknowledged or accounted for in an AS (they weren’t in the paper I have reviewed just now).
4. You may read that it is the nitrosamines (a combination of nitrites/nitrates and amino acids from protein) that are of concern. It is worth noting that amino acids are in vegetable sources of protein and, in the cooking pan or in the acidic environment of our stomach, nitrosamines have the potential to form in conjunction with nitrites/nitrates from vegetables and fruits.
5. Still, a consideration with processed meat is to recognise that it is processed. In general, it is prudent to minimise processed food in our diet. This will depend on quality though – e.g. chicken nuggets and pancetta are available on the same planet, but they occupy different worlds. Be aware of the provenance.
Summary: My overall conclusion regarding processed meat is the same as for other nutritional claims – it is much more complicated than we are told, or expected to believe, and processed meat cannot be considered in isolation from a host of other lifestyle factors. Just as is the case for red meat, an RCT has never been conducted to determine whether processed meat is harmful to our health.
– – –
The elephant in the room
The elephant in the room is that red meat consumption has been in decline for decades, despite worsening public health. The following chart, based on US Department of Agriculture data and published here (Figure 2, reproduced by Fortune magazine), shows red and white meat consumption since 1909. Red meat consumption was on the rise until the late 1970s, when the Dietary Guidelines for Americans were first published. These guidelines, without a scientific basis, urged people to limit saturated fat and thus red meat. The population responded, and red meat consumption has fallen ever since, whereas white meat consumption has continued to rise in compensation – we are doing exactly what we were told. Note the more precipitous decrease in meat consumption in the last few years – are anti-meat campaigns gaining ground?
I think that this is a telling chart. During this period of decline in red meat consumption, we have had an increasing epidemic of modern diseases (obesity, type 2 diabetes, Alzheimer’s disease, heart disease, cancer etc), all of which were rare in the early 1900s. Yet, our red meat consumption (red line) has now returned to about the same level that it was in ~1909. Reducing red meat consumption has not improved our health. We’ve conducted an experiment on ourselves, and it has failed. It would be foolish to persist with this message.
– – –
Where to from here?
With so much wrong with nutritional studies of health, on what should we base our diets? I think we should look to the science of our biology, and not to nutritional science. We need to understand our physiology (metabolism, endocrinology, neurology, etc) and take guidance from our evolutionary biology, if we want to know how and what we should eat. The science of our biology is sound. It is on this basis that I follow a ketogenic diet. I do not follow the messages of health authorities, because I know they don’t have the evidence, to my satisfaction, to back them up.