The Canadian Guitar Forum


·
Premium Member
Joined
·
17,622 Posts
Discussion Starter #1
Study says four cups of coffee per day can lower risk of death

(Image: a hot cup of coffee from a French press. Robert Ingelhart/Getty Images/iStockphoto)
DAN GUNDERMAN
NEW YORK DAILY NEWS
Monday, August 28, 2017, 3:01 PM

How’s that fourth cup of joe?

A new study from Spanish researchers suggests that higher coffee consumption is linked to a lower risk of death, USA Today reports.

The study was presented at the European Society of Cardiology Congress in Barcelona, Spain.

The findings, as reported by researchers at the Hospital de Navarra in Pamplona, Spain, suggest that those who drank at least four cups of coffee per day had a 64% lower risk of death than those who consumed little or no coffee, USA Today writes.


(Image: women enjoying some coffee. Rawpixel Ltd/Getty Images/iStockphoto)
The study tracked the health of 20,000 participants over the course of about 10 years.

Findings even suggest that those 45 and older had a 30% lower chance of death if they drank two additional cups of coffee per day.

Dr. Adela Navarro, a cardiologist at the hospital that conducted the study, said the findings suggest four cups a day could be a staple of a healthy diet.

This conclusion falls in line with two additional studies, published this year, suggesting that coffee has positive effects on the body. One of them linked coffee to a lower risk of death from heart disease, cancer, stroke, diabetes and kidney disease, USA Today writes.
 

·
Premium Member
Joined
·
17,196 Posts
Without reading the study, I'm going to suggest that the coffee is being consumed black with no sugar or dairy.
Which is fine, if you buy good coffee.

I buy Timmies...
 

·
Registered
Joined
·
24,719 Posts
A lot of folks, and apparently a high percentage of science reporters, or reporters assigned to "the science beat", don't understand the difference between an observational study and a true experiment.

The overwhelming majority of studies that get trotted out as somehow establishing a causal link between dietary component X and health outcome Y are nothing of the sort. They are observational studies and not experiments. A true experiment randomly assigns individuals to a treatment condition. The underlying assumption is that the individuals did not somehow self-select, so that any individual differences at the outset are randomly distributed across all treatment and control conditions. When one conducts an animal study, or simply an in vitro cell-culture study, the animals or cell cultures are randomly assigned to treatment conditions, such that one can reliably attribute whatever is observed to the treatment applied (assuming it also meets criteria for statistical reliability).

Unfortunately, when we study humans, and especially their consumption habits, the researcher doesn't necessarily get to pick who opts into the study, nor do they get to randomly assign who receives the treatment and who doesn't. On top of that, such studies are sometimes retrospective, or otherwise rely on what the participant tells you they have consumed. The investigator can help things along by gathering as much background on the participants in the different treatment conditions as they can, in search of other possible explanations for what they observed. But often that is not enough for such observational studies to draw inferences as strong as those of true experiments.
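
If it helps make that concrete, here's a quick toy simulation (my own invented numbers, nothing to do with the coffee study) of how self-selection alone can manufacture an "effect" that random assignment would not show:

import random

random.seed(1)

# Hypothetical sketch: a hidden "health-consciousness" trait drives BOTH
# who opts into a habit and the outcome. The habit itself does nothing.
N = 100_000
health = [random.gauss(0, 1) for _ in range(N)]           # unmeasured trait
habit_obs = [h + random.gauss(0, 1) > 0 for h in health]  # self-selected "treatment"
habit_rct = [random.random() < 0.5 for _ in range(N)]     # randomly assigned
outcome = [h + random.gauss(0, 1) for h in health]        # driven by the trait only

def gap(habit):
    # Difference in mean outcome between "treated" and "untreated"
    treated = [y for y, t in zip(outcome, habit) if t]
    control = [y for y, t in zip(outcome, habit) if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"observational gap: {gap(habit_obs):+.2f}")  # sizeable, despite no effect
print(f"randomized gap:    {gap(habit_rct):+.2f}")  # near zero, as it should be

The habit does literally nothing in that sketch; the observational gap is pure self-selection.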

The epitome of such things for me was a Scottish study I read in The Lancet years back. I'm sure many of you have heard something to the effect that a little bit of alcohol consumption has a sort of "protective effect" on the heart. These researchers studied a large group of men and looked at both alcohol consumption levels and indices of heart health. What they observed was the usual pattern: heavy alcohol consumption was associated with poorer heart health, and as typical self-reported consumption decreased, heart health improved, until they got to zero consumption and found an increase in heart problems. Evidence for "protective" effects? Nah. When they looked at the teetotallers more closely, they found a significant proportion were actually former alcoholics who had since sworn off the stuff. It wasn't that a wee dram protects. Rather, the heart damage had already been done, and the zero-intake participants reflected the effects of past heavy intake, not the effects of avoiding a wee dram.
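
You can re-create that artifact with a few invented numbers (mine, not The Lancet's):

import random

random.seed(3)

# Toy "sick quitter" sketch: heart risk rises with both current and past
# intake, but the zero-intake group mixes lifelong abstainers with former
# heavy drinkers, so "zero" ends up looking worse than "light".
def risk(current, past):
    return 0.05 + 0.02 * current + 0.02 * past + random.gauss(0, 0.01)

abstainers   = [risk(0, 0) for _ in range(500)]
former_heavy = [risk(0, 5) for _ in range(500)]
light        = [risk(1, 1) for _ in range(1000)]
heavy        = [risk(5, 5) for _ in range(1000)]

pooled_zero = abstainers + former_heavy  # how the raw data would appear
for label, g in [("zero (pooled)", pooled_zero), ("light", light), ("heavy", heavy)]:
    print(f"{label:14s} mean risk: {sum(g) / len(g):.3f}")

The pooled zero-intake group comes out riskier than the light drinkers even though nothing in the model gives a wee dram any protective power.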

Again, because one often cannot randomly assign humans to the treatment conditions of interest, you have to be cautious about the miraculous results that get trumpeted in the press.

Same goes for more social-behaviour things. We cannot randomly assign people to conditions like we're dealing out a deck of cards: "Every third person is going to be placed in an emotionally abusive single-parent home, while the first and second persons will be placed in a psychologically healthy two-parent or single-parent home, taking the names of all those children out of a hat." So when we ask questions about the effects of this or that, researchers are stuck with the results of observational studies, which may provide very suggestive cues, but rarely any smoking gun that A causes B.

Long story short, if the news item shows a very clear causal link, with known mechanisms at play, I can be persuaded. But when all it says is "We saw a greater/lesser incidence of X in people who engaged in more Y", there is really nothing to conclude. It is simply one piece in a 5000-piece jigsaw that will require a lot more evidence before strong inferences can be drawn.
 

·
Registered
Joined
·
10,520 Posts
What about confounding variables?
 

·
Registered
Joined
·
24,719 Posts
What about confounding variables?
Correct. That is the fundamental problem with observational studies. There may be other causes for the observed relationships that one didn't measure; covariates or what some folks refer to as "the third variable problem".

There is lots of research on the putative "effects" of being in daycare. One of my former profs was involved with the New York Longitudinal Study of children, and they examined things from a slightly different angle. The question they asked was "Who gets put in daycare?". In other words, daycare is not just randomly imposed on children. Parents make decisions to return to work and put their kid in daycare. And sure enough, they found that parents' ratings of the toddler's temperament and "easiness" were quite predictive of whether the parent chose to remain at home or put the kid into daycare. So right off the bat, when comparing the "effects" of daycare by comparing kids who were or weren't placed in daycare, you're looking at kids who were actually different from the get-go. So are the observed outcomes a consequence of daycare, or something else? When people self-select whether to be in the "treatment group" or not, often you can't tell. You can get hints that add up over studies, but nothing especially conclusive.

Of course, confounds can occur even with stuff you did measure, as well as with stuff you didn't. And the reason we value true experiments more than observational studies is their ability to rule out confounds... assuming they are well-designed, well-measured, and analyzed properly.

Those with a statistical background will likely note that "chance" is also a potential confound that can create the impression of a causal relationship within the context of what is ostensibly a true experiment. My wife works in food safety now, and was telling me over dinner about a study she read today that used 8 rats per treatment level. The investigators had observed an effect in an older study with different rats, but after changing rat strain and storing the substance of interest differently, they failed to find any relationship the second and third times out.
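
For anyone who wants to see how easily chance does that at 8 per group, here's a little simulation (my own toy setup, not the rat study's actual design):

import random
from statistics import mean, stdev

random.seed(42)

def t_stat(a, b):
    # Two-sample t statistic (equal n, so Welch and pooled forms agree)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

runs, false_alarms = 10_000, 0
for _ in range(runs):
    g1 = [random.gauss(0, 1) for _ in range(8)]  # both groups drawn from
    g2 = [random.gauss(0, 1) for _ in range(8)]  # the SAME distribution
    if abs(t_stat(g1, g2)) > 2.145:              # two-tailed .05 cutoff, 14 df
        false_alarms += 1

print(f"'significant' differences with no real effect: {false_alarms / runs:.1%}")

Roughly one run in twenty "finds" an effect that isn't there, and with only 8 rats per group a real but modest effect is just as easily missed.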

I used to do an exercise in class where I'd hand out two envelopes to students in the front row, ask them to draw out 4 slips of paper without looking, and read me what was written on them. Then draw 4 more, and so on. At each step, I'd write the numbers on the board and ask the class: if those numbers were data for two groups of something - could be anything - did they think the two groups were "different"? Initially, they would say yes, but as more slips of paper were removed from the envelopes, it became apparent that the two envelopes contained the exact same thing: the numbers 1 through 24, written on slips of paper. Examine the entire "population" within each envelope, and it was clear they were no different. Examine only a sample, however, and you can get fooled by the magic of chance.
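
The whole exercise is easy to re-create, for the curious (a rough sketch of my classroom version):

import random

random.seed(7)

# Both "envelopes" hold the numbers 1 through 24 on slips of paper.
env_a, env_b = list(range(1, 25)), list(range(1, 25))
random.shuffle(env_a)
random.shuffle(env_b)

for draw in range(1, 7):  # six draws of 4 slips empties each envelope
    a = [env_a.pop() for _ in range(4)]
    b = [env_b.pop() for _ in range(4)]
    print(f"draw {draw}: A={sorted(a)} (mean {sum(a) / 4:5.2f})  "
          f"B={sorted(b)} (mean {sum(b) / 4:5.2f})")

# Early draws can show group means several points apart; by the last draw
# each envelope has given up the same 24 numbers, which total 300.

Early handfuls routinely look like two different groups; only the full contents reveal there was never any difference to find.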

Chance is also a confounding factor, even when you DO engage in random assignment.
 

·
Registered
Joined
·
10,520 Posts
It's ok man, I know about conducting statistical research.
 

·
Premium Member
Joined
·
17,196 Posts
You what? Like...on purpose?
Yup. Don't need it to taste good, just need to stay awake. There's one close to my job and one close to my place. I know the prices and what to expect.

I don't have time to be a coffee snob; I'd rather sleep than make a pot...
 

·
Registered
Joined
·
18,563 Posts
I drink a pot a day most days just because I like the taste!
 

·
Premium Member
Joined
·
13,091 Posts
What are the exclusions? Was this a controlled study? How large was the study? Etc., ad nauseam.
 

·
Registered
Joined
·
3,634 Posts
Was the study just on black coffee?

I suspect all the sugary, creamy, milkshakey coffee concoctions most people drink at coffee shops would probably not be very good to drink 4x a day

And what about polyps? I thought drinking coffee gave you those, which are linked to ass-cancer.
 

·
Registered
Joined
·
4,998 Posts
Yup. Dont need it to taste good, just need to stay awake.
That's what Rooster pills are for, and you spend less time on 'bladder breaks'. ;)
Actually, chocolate-covered espresso beans are even better, because we know from that other 'study' how great chocolate is for you.
 