
How to Read a Nutritional Study Without Being Misled

Every week a new study lands in the headlines. Eggs are either life-saving or poisonous. Coffee is either a superfood or slowly killing you. The science never seems to settle. But here's what nobody tells you: most of the confusion isn't because the science is wrong. It's because you're being shown the wrong type of science.

Organised
6 min read · Updated 21 Aug 2025

The nutrition world is drowning in conflicting headlines. And yet the vast majority of people reading those headlines have no framework for understanding which studies matter, which ones are noise, and which are actively designed to mislead. If you're going to navigate modern nutrition, you need to learn to read the research.

The two study types that matter

Not all studies are created equal. In fact, the two dominant types of nutrition research sit at opposite ends of the credibility spectrum.

Observational studies watch what people do and track what happens. A researcher might follow ten thousand people, measure their coffee consumption, and note how many develop heart disease over five years. This is cheap, easy, and fast. It's also almost useless for proving causation.

Why? Because people who drink coffee often differ from people who don't in ways nobody measured.1 Coffee drinkers might exercise more, earn more, sleep less, work longer hours, or eat differently. Any of those unmeasured factors could be driving the disease outcome, not the coffee itself. The researchers call this confounding. You call it a mess.

A newspaper headline screaming "Coffee consumption linked to heart disease" is almost certainly an observational study. These studies generate headlines because they're easy to publish and cheap to run. They're also terrible at proving causation.

Most nutrition headlines come from observational studies. They can show correlation. They cannot prove causation. This distinction will save you from panic.

Randomised controlled trials (RCTs) are different. The researcher randomly assigns people to either the treatment group or the control group.1 One group drinks coffee, the other doesn't. Everything else is controlled. Over the same five years, the researcher measures outcomes in both groups.

This is expensive, time-consuming, and rigorous. And it's the gold standard. When an RCT shows that coffee reduces heart disease, you can actually trust it, because the only systematic difference between groups is coffee consumption itself.

The problem? RCTs for nutrition are rare. They're expensive. Long-term RCTs are even rarer and even more expensive. So most of what you read is observational work, dressed up to sound more conclusive than it actually is.

Relative risk versus absolute risk

This is where most health journalism becomes outright deceptive, whether intentionally or not.

Imagine a study finds that people who eat ultra-processed food have a 40% increased risk of heart disease compared to people who don't. The headline writes itself: "Ultra-processed food increases heart disease risk by 40%." Sounds terrifying.

But here's the trick. That 40% figure is a relative risk increase. If your baseline risk of heart disease is 5%, and it increases by 40%, your new risk is 7%. The absolute risk increase is 2 percentage points. That's very different from what a 40% increase sounds like.

Relative risk makes small, clinically meaningless changes sound catastrophic. If a rare outcome becomes slightly less rare, the relative risk number rockets.2 But in real terms, almost nobody is affected.

A 40% relative risk increase on a 1% baseline takes you to an absolute risk of just 1.4%. Headlines built on relative risk hide this. Always ask: what's the actual number of people affected?

Drug companies and health journalists love relative risk for this reason. It sells. It panics. It drives clicks and prescriptions. But if you care about what actually happens to your body, you need absolute numbers.

When you read a study claiming a massive risk increase, the first question to ask is: risk increased from what? If the baseline risk is tiny, the increase probably matters less than the headline suggests.
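The arithmetic above is worth making concrete. Here's a minimal sketch in Python; the `risk_summary` helper and the figures are illustrative, taken from the worked example rather than from any real study:

```python
# Relative vs absolute risk: the same result, two very different numbers.
# Figures match the worked example in the text, not any real study.

def risk_summary(baseline_risk, relative_increase):
    """Return (new_risk, absolute_increase) for a baseline risk and a
    relative risk increase (e.g. 0.40 for '40% higher risk')."""
    new_risk = baseline_risk * (1 + relative_increase)
    return new_risk, new_risk - baseline_risk

# "40% increased risk" on a 5% baseline:
new_risk, abs_increase = risk_summary(0.05, 0.40)
print(f"Headline: risk up 40%")
print(f"Reality:  5% -> {new_risk:.0%}, an extra {abs_increase:.0%} "
      f"of people affected in absolute terms")
```

The same 40% headline on a 1% baseline gives `risk_summary(0.01, 0.40)`, an absolute risk of just 1.4%.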

Hidden confounding variables

People are complicated. They don't exist in controlled environments. So when a study tracks a population over time, dozens of variables are moving simultaneously.

A study might find that people who take vitamin D supplements have fewer respiratory infections. But vitamin D takers are wealthier, better nourished, sleep more, exercise more, and live in less crowded housing. Any of those could be doing the work. The vitamin D itself might be irrelevant.

The researchers try to account for confounders statistically, adjusting the numbers to subtract out factors like age, sex, and income. But they can't adjust for what they didn't measure. They didn't measure sleep quality, stress, whether people use public transport, or how often they hug their family. Those invisible confounders could swamp the effect they're looking at.

Every observational study carries this burden. And every headline built on an observational study assumes the confounders have been handled. They haven't.
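To see how an unmeasured confounder can manufacture an effect out of nothing, here's a toy simulation, with all the numbers invented purely for illustration: wealth drives both vitamin D use and infection risk, the supplement itself does nothing, and yet the naive comparison still flatters it.

```python
import random

# Toy confounding simulation (illustrative numbers, not real data).
# Wealth raises the chance of both taking vitamin D and avoiding
# infection; the supplement does nothing, yet users look healthier.
random.seed(0)

people = []
for _ in range(100_000):
    wealthy = random.random() < 0.5
    takes_vit_d = random.random() < (0.7 if wealthy else 0.2)
    # Infection risk depends ONLY on wealth, never on vitamin D.
    infected = random.random() < (0.10 if wealthy else 0.30)
    people.append((takes_vit_d, infected))

def infection_rate(group):
    return sum(infected for _, infected in group) / len(group)

users = [p for p in people if p[0]]
non_users = [p for p in people if not p[0]]
print(f"vitamin D users:  {infection_rate(users):.1%} infected")
print(f"non-users:        {infection_rate(non_users):.1%} infected")
```

The users come out markedly healthier even though the supplement has zero effect by construction; that entire gap is the confounder doing the work.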

Why funding matters more than you think

Who pays for the research? This matters more than most people realise.

Studies funded by the dairy industry find milk is healthy. Studies funded by seed oil companies find seed oils are protective. This isn't usually deliberate fraud. It's subtler. Researchers choose which outcomes to measure, which analyses to run, which results to highlight. Those choices compound. By the time the paper is published, the funding bias has quietly shifted the conclusion.

This is called HARKing: Hypothesising After Results are Known.3 A researcher runs the analysis, sees what the data says, then reports that analysis as if it was planned beforehand. If you run enough analyses, some will look positive by chance alone. The researcher reports the positive one and buries the rest.
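The odds of a chance "finding" grow quickly with the number of analyses. A quick back-of-envelope calculation, assuming independent analyses each with the conventional 5% false-positive threshold:

```python
# If a researcher runs many independent analyses on data with no real
# effect, the chance that at least one looks "significant" at p < 0.05
# grows fast. This computes that false-positive probability.

def chance_of_spurious_hit(n_analyses, alpha=0.05):
    """Probability that at least one of n null analyses crosses alpha."""
    return 1 - (1 - alpha) ** n_analyses

for n in (1, 5, 20):
    print(f"{n:2d} analyses -> {chance_of_spurious_hit(n):.0%} "
          f"chance of at least one 'finding'")
```

Twenty analyses on pure noise give roughly a two-in-three chance of something "significant" to report. Real analyses are rarely fully independent, so the exact figure varies, but the direction of the problem doesn't.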

The remedy is simple: look at the conflict of interest statement. If the study is funded by an industry with a financial stake in the outcome, hold the conclusion lightly. Not because the researchers are dishonest, but because subtle bias is woven through the entire research process.

The healthy user bias trap

Here's a bias that catches everyone. People who take supplements or follow health advice tend to be wealthier, more educated, and more health-conscious overall. Those traits predict better health regardless of whether the supplement works.

A study might find that supplement users have lower disease rates. But supplement users also exercise more, eat better quality food, see doctors regularly, and take their own health seriously. The supplement might not be doing anything. The healthy user bias is.

This is why supplement studies almost always show positive effects in observational data but fall apart in RCTs. Randomisation strips away healthy user bias: the groups differ, on average, only in whether they take the supplement. Suddenly the effect disappears.

Healthy users differ from non-users in dozens of ways. Most of those differences drive health outcomes, not the intervention being studied. Observational data cannot separate these effects.

If you're tempted by a supplement study showing miraculous results, ask: is this from an observational study tracking healthy users, or a randomised trial? The answer changes everything.
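The contrast between the two designs can be sketched in a few lines. In this toy simulation (every number invented for illustration), the supplement does nothing, but health-conscious people are both more likely to take it and healthier for unrelated reasons; randomisation alone makes the phantom effect vanish.

```python
import random

# Toy contrast: observational vs randomised design (illustrative only).
# Health-conscious people take the supplement more AND are healthier;
# the supplement itself has zero effect by construction.
random.seed(1)

def gets_disease(health_conscious):
    # Risk driven entirely by lifestyle, never by the supplement.
    return random.random() < (0.05 if health_conscious else 0.20)

observational, rct = [], []
for _ in range(50_000):
    hc = random.random() < 0.5
    # Observational: supplement use tracks health-consciousness.
    observational.append((random.random() < (0.8 if hc else 0.1),
                          gets_disease(hc)))
    # RCT: supplement assigned by coin flip, independent of lifestyle.
    rct.append((random.random() < 0.5, gets_disease(hc)))

def disease_rate(data, takes_supplement):
    rows = [d for t, d in data if t == takes_supplement]
    return sum(rows) / len(rows)

obs_gap = disease_rate(observational, False) - disease_rate(observational, True)
rct_gap = disease_rate(rct, False) - disease_rate(rct, True)
print(f"observational 'benefit': {obs_gap:+.1%}")
print(f"RCT 'benefit':           {rct_gap:+.1%}")
```

The observational comparison shows a large apparent benefit; the randomised one shows essentially none, which is exactly the pattern the text describes.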

Sample size: when numbers lie

A study of fifty people can easily show random noise disguised as a pattern. A study of five thousand people will show a real effect if one exists, because the noise averages away.

Most nutrition studies are small. Fifty participants here, a hundred there. With small samples, the role of pure chance explodes, and a spurious finding can easily emerge from the noise.

Researchers use statistics to ask: how likely is this result if there's no real effect? If the answer is less than 5%, they call it statistically significant. But here's the trap. With a small sample, you need a large effect to reach significance. With a large sample, even tiny effects become significant.

A study of five thousand people showing a 1% improvement is far more trustworthy than a study of fifty people showing a 30% improvement, even if the fifty-person study reaches statistical significance. Sample size matters, and it matters hugely.
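The noise shrinks with the square root of the sample size. A small sketch using the standard error of an observed event rate; the 10% rate is just an illustrative figure:

```python
import math

# How much random noise does a sample of a given size carry?
# The standard error of an observed event rate shrinks with the
# square root of n, so a 50-person study is swimming in noise
# that a 5,000-person study averages away.

def standard_error(rate, n):
    """Standard error of an observed event rate in a sample of n people."""
    return math.sqrt(rate * (1 - rate) / n)

for n in (50, 5_000):
    se = standard_error(0.10, n)  # a true 10% event rate
    print(f"n={n}: observed rate is roughly 10% +/- {2 * se:.1%}")
```

With fifty people, a true 10% rate can plausibly be measured anywhere from about 2% to 18%, which is why a dramatic result from a tiny study deserves suspicion rather than headlines.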

The bottom line

When a nutrition headline arrives, ask four questions. Is this an observational study or an RCT? Is the reported number a relative risk increase or an absolute one? Could confounding variables explain the result? And who funded it?

Most nutrition stories fail at least two of those tests. The ones that pass all four are rare. But when you find them, you can actually trust them. You've moved from reacting to headlines to reading the science. That's the game changer.

References

  1. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine. 2000;342(25):1887-1892. https://pubmed.ncbi.nlm.nih.gov/10861325/
  2. Stegenga J. Down with the hierarchies. Topoi. 2014;33(2):313-322. See also Schwartz LM, Woloshin S, Welch HG. Misunderstandings about the effects of race and sex on physicians' referrals for cardiac catheterization. New England Journal of Medicine. 1999;341(4):279-283. https://pubmed.ncbi.nlm.nih.gov/10413741/
  3. Kerr NL. HARKing: hypothesizing after the results are known. Personality and Social Psychology Review. 1998;2(3):196-217. https://pubmed.ncbi.nlm.nih.gov/15647155/