This week I opened up Science Daily and found an article saying there is no evidence to support the use of B-vitamins for reducing the risk of heart attack, stroke or death associated with cardiovascular disease. Seems pretty straightforward, doesn't it? But the same day I got another article saying that B-vitamins were effective in reducing cardiovascular disease in celiac patients, by precisely the same mechanism as in the first study. Was it because the patients were allergic to gluten? I doubt it; no plausible mechanism comes to mind. So I thought I'd write about why studies may bring contradictory information.
The first study was a meta-study. That means it looked at past studies and basically averaged the results. That sounds like a great idea. But what about the individual studies they looked at? If those studies were not well done, their data is useless and biases the average. I have a link to an excellent article by Jonathan Treasure called "Medline and the Mainstream Manufacture of Misinformation" which addresses this well, along with the way recaps of meta-studies tend to reinforce bad information.
Studies can come out with bad data for a variety of reasons:
- The studies used in a meta-study aren't the same. Some use one dose of a product and others use another, which may be too low to show an effect. Some use only men, others men and women, who have different physiology. The samples may be drawn from different ethnic or economic populations. Studies may or may not filter out people with what are called comorbidities, associated diseases like cardiovascular disease with diabetes. But in the real world people often have combinations of related diseases.
- The sample size is too small. I saw a Japanese study this week claiming that Lion's Mane caused liver damage based on only 3 patients. All 3 were being treated for cancer, and in the fine print two of them had only just started on the mushroom supplement.
- The patients have something else going on that could affect the results, like the cancer treatment in the above example. The ephedra ban was based on the death of a ballplayer who was taking medication that could hurt his kidneys, was exercising in the hot sun and was dehydrated; but the ephedra was blamed. Similarly, aristolochia was banned based on a European weight-loss pill that used the wrong species, combined it with toxic pharmaceuticals, and was taken for longer than in traditional medicine, so no one knows what actually caused the problem.
- The herb or supplement is not assayed. I subscribe to Consumer's Guide and you would be amazed at how many products do not contain the amount of a substance they claim, even from good companies. Supplements aren't unregulated; mislabeling is illegal, but it is common. Herbs are frequently not identified by species and not assayed for strength, so the results of the study are suspect. Many scientific studies just buy supplements off the shelf and accept that they are what they say, which is pretty shoddy.
- The placebo is not inert. I see this in herbal studies all the time: the researchers figure that people will know a sugar pill is not the colored or aromatic herb in question, so they give a different colored or aromatic herb that has some overlapping effect. I also see this in acupuncture studies where the "sham" points are really active or the sham techniques stimulate the points. See my article on Toothpick Acupuncture. The problem is often that they don't have real herbalists, acupuncturists or nutritionists designing the studies of their own modalities.
- Studies that are done in vitro, in test tubes or petri dishes, have no value whatsoever except for topical applications. You don't apply goldenseal to the lungs directly; it goes through your digestion and bloodstream, engages immune factors in your blood, and is greatly diluted along the way. A lot of supplement-industry research depends on this kind of misleading study.
- The way the results are expressed may exaggerate the difference. You can find a huge percentage difference in the change for a drug while the number of people actually affected is so tiny that it doesn't justify putting everyone on it, at expense and with side effects. Mammograms in young women tend to produce more unnecessary surgeries, with their associated pain and suffering, than are justified by the accurate ones, and uncountable other women are exposed to radiation. A good discussion with the math is shown here.
- The levels of significance are set too high or too low. The analysis of the Atkins Diet compared to a low-calorie diet concluded that while the Atkins diet was more effective immediately, it wasn't significantly more effective over the long term. But the persistent weight loss was 76.4% higher with Atkins (9.7 vs. 5.5 lbs.), a difference that was certainly significant to the losers.
- The study doesn’t look at a broad enough picture. In the study above, the Atkins participants had lower triglycerides, better lipid profiles and lower blood pressure, but only the weight loss was considered.
- The study rests upon assumptions that may not be valid. The research on cholesterol assumes that low cholesterol means fewer deaths from heart disease, so statin drugs are reported as preventing heart disease, even though that relationship is suspect, as in the article below.
- The summaries don't say what the data shows. In the Framingham Study, women with a higher-fat diet had fewer cancers overall, but a few unusual cancers had higher incidences. This was in the heat of the "fat causes cancer" era, so to retain funding the researchers emphasized the unusual cancers in the summaries, and that is what the press picked up. In a study where Pima Indian students were given comprehensive diet and exercise advice in school and the food service was changed, but no reduction in obesity or diabetes was found, the summary talked about the success in implementing dietary change in the schools.
- The wrong part of the plant may be used. After a European echinacea study that used the stems found no effect, summaries and headlines trumpeted that echinacea doesn't work. But the parts that work are the roots and seed heads; the study was probably done to see whether there was value in waste products that could be salvaged.
- The plant or supplement may be tried for something it doesn't treat. St. John's wort was decried as "ineffective" for ADHD, but that is not a traditional or even reasonable use for this nerve-pain and injury herb. When it was used for major depression, as opposed to minor depression, headlines reporting the study said it was ineffective. And when neither Prozac, Zoloft nor St. John's wort outperformed a placebo for depression, only the herb, which did better than the drugs, was cited as ineffective. (Other studies have shown it more effective than Prozac or a placebo. There is a reason St. John's wort is prescribed more than all SSRIs combined in Europe, where doctors study botanical medicine.)
- An inactive form of the supplement or herb may be used. In the St. John's wort/ADHD study above, they also used an extract in which the most important component was oxidized. I see this all the time.
- The researchers may have an agenda. In that same study, the lead researcher has many ties to the pharmaceutical industry and depends upon it for grants. This is perhaps the most egregious problem in medical research, and even the New England Journal of Medicine has editorialized against it.
- Bad science. Treasure cites the case of a study implicating ginseng in insomnia and other side effects when given with the prescription medication phenelzine. It was based upon a single case of an elderly woman who took an undisclosed dose of a Natrol product for an undisclosed period of time. The report included no medical history or background. The Natrol supplement contained many herbs and vitamins, one of which was Eleutherococcus senticosus, then called "Siberian ginseng," which is not a ginseng at all. The researchers didn't contact Natrol to verify the ingredients and did not assay them. But in Medline searches for side effects of Panax ginseng this case still comes up, and it has been cited as pertaining to Panax ginseng both by conventional medical sources like Stockley's Drug Interactions and by a number of botanical monographs, including one by the WHO in 1999. With Medline and the internet, this can be promulgated indefinitely.
- Bad translation can be a problem, even from a common language like German. It was long believed that Echinacea purpurea should only be given for 10 days, after which it becomes ineffective. This was based on a mistranslated graphic from a German study (below) in which the herb was given for only five days and its effect persisted for ten before dropping off. Given that some of the most promising herbal studies come out of India, China, Japan and Iran, translation can be a real problem.
- Literature searches using Medline are common and can lead to erroneous conclusions. They are more egregious than meta-studies because they often pick up unauthenticated correspondence and speculative connections, and there is less interest in winnowing out poor data.
- Bad inferences based upon literature searches and a misunderstanding of pharmacodynamics. The badly translated information on echinacea above was picked up by pharmacist Lucinda Miller in a literature search, and she explained in an oft-quoted 1998 Archives of Internal Medicine article that "Echinacea purpurea is hepatotoxic after 8 weeks due to the presence of toxic pyrrolizidine alkaloids." But the trace amounts of pyrrolizidine alkaloids in echinacea are not the toxic unsaturated type and cannot cause liver toxicity, and neither can any other constituent. Yet Miller's statement has made it into books.
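The relative-versus-absolute trap in the points above about exaggerated percentages and significance can be shown with a little arithmetic. This is a minimal sketch: the 2% and 1% event rates are made up for illustration, and only the 9.7 vs. 5.5 lb figures come from the Atkins comparison quoted above.

```python
# How the same result can sound dramatic or trivial depending on how it is expressed.

def relative_change(treated, control):
    """Percent difference relative to the control value."""
    return (treated - control) / control * 100

# Hypothetical drug trial: event rate drops from 2% to 1%.
control_rate, drug_rate = 0.02, 0.01
rrr = relative_change(drug_rate, control_rate)   # relative risk reduction: about -50%
arr = (control_rate - drug_rate) * 100           # absolute reduction: about 1 percentage point
nnt = 1 / (control_rate - drug_rate)             # about 100 people treated to prevent one event

# The Atkins comparison from the text: 9.7 vs. 5.5 lbs of persistent weight loss.
atkins_advantage = relative_change(9.7, 5.5)     # about 76.4% higher

print(f"Relative reduction: {rrr:.0f}%")
print(f"Absolute reduction: {arr:.1f} percentage points (NNT about {nnt:.0f})")
print(f"Atkins weight-loss advantage: {atkins_advantage:.1f}%")
```

A headline can honestly report "a 50% risk reduction" while the absolute benefit is one person in a hundred; conversely, a 76% difference in weight lost can be dismissed as "not significant." Both framings come from the same numbers.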
Finally, we are all biologically distinct, and studies do not differentiate between those who might respond well and those for whom a medicine may be useless or even harmful. In Chinese medicine and Ayurveda, people are grouped into constitutional types or by the patterns of illness they express, and the same named disease in different people is treated differently depending on the pattern. In Western medicine, DNA testing may soon be used for the same purpose. So even if the majority of people tested do not respond to a medicine, it might work for you.
The first important thing with a study is to see whether the researchers have vested interests in the results: whether the research was paid for by pharmaceutical or supplement companies and whether it appears free from bias. The next thing is to see whether the study is in vitro or in vivo. Unless the herb or supplement will touch the affected organ directly, as with skin or throat, in vitro results tell you nothing useful. Third, if the study is in vivo, look at whether it was done on animals or people; we differ, and so do our bodies' reactions. Then look at the sample size and how the sample was set up. If it was large enough and random, was it drawn from a population with a lifestyle, diet and physiology similar to yours? Finally, see whether the study can be replicated, because you can get fluke results, especially if the material tested is not assayed or correctly identified.
Scientific studies can be very useful and I read them all the time. But unfortunately they are not perfect and it takes discernment to tell what is good.
Herbalist David Winston tells of receiving a call from a long-time patient who had been taking echinacea successfully for a decade, telling him that because of research "proving" it didn't work, she would not be taking it anymore. He was dumbfounded, and asked her whether ten years of experience didn't trump an article about a study. What is important is whether a substance works for you. Even if your response is in the minority, if something works, don't stop using it.
Klaus Linde, et al. Cochrane Review of Hypericum (St. John's Wort) for Major Depression
Jonathan Treasure. Bastyr Hypericum Study and the Misinformation Machine
Jonathan Treasure. Medline and the Mainstream Manufacture of Misinformation
Jurcic K, Melchart D, Holsmann M, Martin P, et al. "Zwei Probandenstudien zur Stimulierung der Granulozytenphagozytose durch Echinacea-extrakt-haltige Präparate" [Two subject studies on the stimulation of granulocyte phagocytosis by preparations containing echinacea extract]. Zeitschrift für Phytotherapie 1989;10:67-70
Shane Foley. Echinacea Studies Revisited
Lucinda Miller. Herbal Medicinals: Selected Clinical Considerations. Arch Intern Med. 1998;158(20):2200-2211