An Interview With Hanna Saltzman
Hanna Saltzman is the author of No White Coat Necessary: The Science of Everyday Health and has worked as a public health organizer and advocate, a health researcher, and a health journalist. She is currently a medical student at the University of Michigan Medical School. She has contributed to health-related research published in journals including the Journal of Women’s Health, The American Journal of Obstetrics & Gynecology, Contraception, The Journal of Global Health, Foot and Ankle International, The Journal of Midwifery & Women’s Health, and Clinical Orthopaedics and Related Research. She also happens to be a friend of mine from college.
Susannah Emerson: How did you approach your book, No White Coat Necessary: The Science of Everyday Health, and all of the research that went into it?
Hanna Saltzman: My goal with this book was to write easy-to-understand, engaging explanations of the scientific concepts underlying basic questions we often have about our health, aimed primarily at non-science readers. It’s my hope that through better understanding the basic science, readers will be empowered to understand advice given to them by health professionals, to wade through health information they encounter on the news or online, and to feel inspired to take healthy actions.
My research for this book focused on getting the best information possible about basic health science concepts that have been well established, with a consensus among health professionals. I primarily drew on systematic review articles, such as those published by the “gold-standard” independent health research organization called the Cochrane Collaboration; guidelines published by the Centers for Disease Control and Prevention and the National Institutes of Health; and a medical database called UpToDate. I also had several experts in medicine review my drafts with a focus on fact-checking.
One of my favorite sections was the last one, where you offer guidance for how to "wade through" studies and wellness articles in the news. Why did you want to delve into the topic of misinformation and misleading information?
In our internet era, information about health abounds, which I think is a double-edged sword. It’s wonderful that people have more resources than ever to take ownership of their own health and find the information they need to empower them to stay healthy. However, the inundation of health information can also be overwhelming and deceptive. Overwhelming because studies contradict each other, making it challenging to come to a meaningful conclusion about how to apply information from health studies to our own lives; and deceptive because both poor study design and journalistic sensationalism can lead us to believe “facts” about our own health that aren’t actually true.
I wanted to [give readers] a better sense about the extent to which articles that you read in the news are relevant to your own health. We have more resources at our fingertips than ever before regarding health information. That means that it’s up to us to figure out what to trust.
In your opinion, why are there so many conflicting articles on the internet?
I think there are two main reasons: first, health is incredibly challenging to study and well-intentioned studies with similar research questions can end up with widely contradictory information based on factors such as their study design and the demographics of their study population; and second, journalists and bloggers often misinterpret study results (usually unintentionally but perhaps sometimes for sensationalism).
How do the same studies yield such different headlines?
I’d bet this is a combination of misinterpretations of study findings and a desire to get more views through a snazzy headline. In my experience as a health journalist, ... I had some frustrating moments when editors wanted to use headlines to get as many clicks as possible with disregard for the accuracy of the headline. I understand that it sounds better to have a headline saying, “Your soap is giving you cancer!” than “A specific chemical found in some soaps was associated with a slight increase in colon cancer in a small number of rats, and we don’t yet know how this applies to humans.” Nonetheless, it frustrates me that health headlines are often so misleading (even when the articles themselves portray studies accurately).
Do you remember the most sensationalist headline you came across in your research? (Or not in your research.)
Here are a few...
Sugar as Addictive as Cocaine, Heroin!
Cancer patients should avoid chocolate: Ingredient Makes Tumors Spread Around The Body!
Bras Shown to Cause Cancer!
(That last one is from the UK’s Daily Mail—always a good place to go for a good laugh about health headlines!)
What's an example of conflicting or misinterpreted information that comes from the same findings?
An example I mention in my book is the idea that red wine is better for our health than white wine or other types of alcohol. It may be, but the science is far from conclusive. Articles that tell you to choose red wine as your drink of choice are usually focusing on a compound called resveratrol, which is found in grape skins. The hype started when an experimental study found that resveratrol extended the lifespan of mice. Aside from the fact that humans and mice are different, there’s a big problem here: the mice received straight resveratrol. Turns out that red wine only contains a teensy amount of resveratrol, so little that humans would have to drink 1000 liters of wine every single day to get the same concentration as was in the study! Which means you’d die from alcohol poisoning way before getting anywhere close to the concentration of resveratrol investigated in the study.
Are there any sources that you recommend for consistently clear information?
The Cochrane Collaboration and UpToDate are top-notch. Both offer summaries free of charge, but unfortunately the full text can often be accessed only through a university account or by paying.
Do you have checkpoints that you recommend to readers who are trying to determine the relevance or validity of a study? How can we figure out what a study is actually telling us?
In my book, I focus on three main questions that readers can ask to better understand the extent to which a given health study does (or does not) relate to their own lives.
- Question 1: What type of study are you reading about? (Experimental or observational? One study or multiple studies?)
- Question 2: Whom are you reading about? (Humans or rats? If humans, what age are they and where do they live? How many?)
- Question 3: Doubling what, exactly? (E.g., does “doubling cancer risk” mean going from a 0.05% chance to a 0.10% chance, or does it mean going from 25% to 50%? The second doubling is a lot more meaningful than the first.)
Hanna expands upon these actionable steps that we can take when we don't know what to make of a miraculous or disastrous headline in her book, No White Coat Necessary: The Science of Everyday Health, and she has very generously provided us with an excerpt from it. Read on below, people!
This interview has been condensed and edited for clarity.
How can I tell whether a health study relates to my own life?
By Hanna Saltzman
Health news is fickle. In any given week, we’re likely to hear that something might (a) kill us, (b) cure us, and (c) cure us of one thing but kill us with another in the process. Take alcohol, for example. Relaxing with a glass of wine can feel good, but whether it’s healthy in the long run is another question—and a highly contested question at that. One day the news tells us that a drink a day keeps the doctor away. But then we’re told that’s only the case if our drink of choice is red wine, which has some unpronounceable compound that supposedly makes us live longer. The next day, though, the news informs us that alcohol causes cancer, but then we hear that it might actually prevent cancer so we’ve got to hedge our bets…
This sort of ambiguity abounds in the world of health science. Media sensationalism is certainly part of the problem, but beyond that, studying the human body is challenging. In health science, we don’t get to our destination by speeding along a highway; we get to it by winding along dirt roads without much of a map. Mistakes, false leads, and dead ends abound… but eventually we head in the right direction.
To discover that right direction, we rely on scientific studies. Although all studies have flaws, not all are created equal. In the words of Nobel Prize-winning economist Robert Solow: “The fact that there is no such thing as a perfect [method of sanitation] does not mean that one might as well do brain surgery in a sewer.” When applying this idea to health research, the fact that there’s no such thing as a perfect medical study doesn’t mean that we might as well give up on science and throw the better studies out with the worse ones.
Distinguishing the better health studies from the worse ones can be tricky, but there are a few key steps to keep in mind. Here, we are going to focus on three steps to take while reading a health news article to get a better sense of whether its conclusions apply to your life.
Step 1: What type of study are you reading about?
Health studies can be broken into two main groups: experimental and observational. In an experimental health study, a researcher introduces a change (such as a treatment) and then records how that treatment affects participants’ health by comparing health outcomes between people who did and did not receive the treatment. The big advantage here is that by controlling what’s changed, researchers can show that something (e.g. the treatment) causes something else (e.g. improved health).
But experimental studies can’t be done for every health topic. Sometimes, this is for ethical reasons—it’s unethical to assign someone to a treatment that you know might hurt them, just to check and see if it actually does. Other times, experimental studies could hypothetically be done without being unethical, but in reality they’re too hard to control. In these situations, the next best option is an “observational study.” This means that a researcher studies participants without administering a treatment or otherwise intervening.
In addition to figuring out whether you’re reading about an experimental or an observational study, it’s also helpful to see whether you’re reading about a single study or a group of studies. Even the best study in the world has limited value if it’s the only study that has tried to answer a specific question, because it doesn’t tell us whether the results would be the same if the study were repeated. When several studies have looked into a similar question, then healthcare researchers can compile those studies together and identify trends. These trends then get published either in a “systematic review article” or in medical guidelines put out by healthcare organizations.
Step 2: Whom are you reading about?
For both experimental and observational studies, it’s important to consider who is being studied. First, is it humans or animals? And if it’s humans, does the study include many people, from diverse demographic backgrounds, who are followed for a long period of time? For example, an observational study with, say, 10 men ages 88-92 living in rural Montana surveyed at a single point in time is less widely applicable than a study with, say, 10,000 men and women, ages 24-92, living across the United States, surveyed every six months for 10 years. Similarly, if a study found that people with a specific rare disease shouldn’t drink milk, it doesn’t tell you anything about whether milk is problematic for people who don’t have that rare disease.
Step 3: Doubling what, exactly?
After you have a rough sense of the type of study that you’re reading about and who’s being studied, the next thing to look for is what the study actually found—and this is where the media often overinflates or gets it plain wrong. Let’s say you read a headline that says, “Kryptonite doubles your risk of being too weak to save the world!” That headline alone doesn’t tell you what “doubling risk” actually means.
If the initial risk of becoming too weak to save the world was 1 in 3, then kryptonite’s ability to double that risk brings it up to 2 in 3. That’s a big change: without kryptonite, you’ll probably keep your world-saving superhero powers, but with it, you’ll probably lose those powers. But if the initial risk of becoming too weak to save the world was 1 in 3000, then “doubling risk” by being near kryptonite just brings it up to 2 in 3000. This, too, is technically a doubling of risk—but the actual change isn’t much at all, as the risk is still tiny. News articles love to use the “double/triple/quadruple” factor in headlines, but you can often outsmart them by figuring out what it really means! And then, you can decide for yourself whether the actual risk bothers you—or not.
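For readers who like to check a headline's arithmetic themselves, the distinction between relative and absolute risk can be worked out in a few lines. This is only an illustrative sketch using the hypothetical kryptonite numbers above, not real data or a tool from the book:

```python
def absolute_change(baseline_risk, relative_risk):
    """Given a baseline risk (as a fraction, e.g. 1/3) and a relative-risk
    multiplier from a headline (e.g. 2 for 'doubles'), return the new risk
    and the absolute increase in risk."""
    new_risk = baseline_risk * relative_risk
    return new_risk, new_risk - baseline_risk

# "Doubling" a common risk: 1 in 3 becomes 2 in 3 -- a big absolute jump.
new, increase = absolute_change(1 / 3, 2)
print(f"New risk: {new:.1%}, absolute increase: {increase:.1%}")

# "Doubling" a rare risk: 1 in 3000 becomes 2 in 3000 -- still tiny.
new, increase = absolute_change(1 / 3000, 2)
print(f"New risk: {new:.2%}, absolute increase: {increase:.2%}")
```

The same "doubling" headline describes both cases; only the absolute increase (33 percentage points versus about 0.03) tells you whether the change matters to your life.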
Learn more about putting these key steps into action, and other evidence-based ways to empower your health, in No White Coat Necessary: The Science of Everyday Health.