PRACTICAL POINTERS FOR PRIMARY CARE
ABSTRACTED MONTHLY FROM THE JOURNALS
AUGUST 2004

SUGAR-SWEETENED BEVERAGES, WEIGHT GAIN, AND INCIDENCE OF TYPE 2 DIABETES
SUGAR-SWEETENED SOFT DRINKS, OBESITY, AND TYPE 2 DIABETES
RELATIONSHIPS BETWEEN 2-HOUR POSTCHALLENGE PLASMA GLUCOSE AND HBA1C
EFFECTS OF GLYCAEMIC INDEX ON ADIPOSITY, GLUCOSE HOMEOSTASIS, AND LIPIDS
PREVALENCE, CARE, AND OUTCOMES WITH DIET-CONTROLLED DIABETES
PRIMARY PREVENTION OF CVD WITH ATORVASTATIN IN TYPE 2 DIABETES
CLINICAL USE OF MENOPAUSAL HORMONE THERAPY FOR SYMPTOM RELIEF
APPROPRIATE USE OF OPIOIDS FOR PERSISTENT NON-CANCER PAIN
TERMINAL SEDATION: AN ACCEPTABLE EXIT STRATEGY?
REDUCING USE OF FEEDING TUBES IN PATIENTS WITH ADVANCED DEMENTIA

JAMA, NEJM, BMJ, LANCET, ARCHIVES INTERNAL MEDICINE, ANNALS INTERNAL MEDICINE

PUBLISHED BY PRACTICAL POINTERS, INC.
EDITED BY RICHARD T. JAMES JR. MD
400 AVINGER LANE, SUITE 203
DAVIDSON NC 28036 USA
Rjames6556@aol.com
www.practicalpointers.org
HIGHLIGHTS AND EDITORIAL COMMENTS AUGUST 2004
8-1
SUGAR-SWEETENED BEVERAGES, WEIGHT GAIN, AND INCIDENCE OF TYPE 2 DIABETES
IN YOUNG AND MIDDLE-AGED WOMEN
This study examined the relationships between
sugar-sweetened beverage consumption (especially soft drinks), weight gain, and
risk of diabetes in a large cohort of young and middle-aged women.
Over the entire 10 year period, women who increased their sugar-sweetened soft
drink (S-SSD) intake from low to
high had larger increases in weight compared with women who maintained a low
intake, or substantially reduced their intake.
In contrast, women who decreased their S-SSD intake reduced their total energy consumption by 319
kcal/d. Women who decreased their
intake during the first 5 years and maintained a low level gained less weight
than those who increased their intake (2.8 kg vs 4.4 kg).
Participants whose consumption of diet soft drinks increased
from one drink or less per week to more than one drink per day gained
significantly less weight (1.6 kg)
than women who decreased their intake
from one or more drinks daily to 1 drink or less per week (4.2 kg). [Ie, consumption of calorie-free drinks
apparently blunts, to some extent, ingestion of calorie-containing foods.]
Greater S-SSD consumption was strongly associated with
progressively higher risk of type 2 diabetes.
(RR
= 1.9 in women consuming one or more drinks per day vs those consuming less than one per month.)
Sugar-sweetened fruit punch was also associated with
increased risk of diabetes. (RR = 2.0)
“Pure” fruit juice was not associated with risk of diabetes.
Over 8 years, there were positive associations between
sugar-sweetened beverage consumption and both greater weight gain and risk of
type 2 diabetes, independent of other known risk factors.
Energy provided by sugar-sweetened beverages does not
affect subsequent food and energy intake. (Ie, little or no compensation by
reduction in intake of other foods.) Weight gain and obesity result from the
positive energy balance.
Fruit juice was not
associated with diabetes risk in this study. This suggests that naturally
occurring sugars in beverages may have different metabolic effects than added
sugars.
This is an
important life-style consideration. It convinces me to ask a screening
question, especially for overweight patients and patients with type 2
diabetes—“How many Cokes and how many Diet Cokes do you drink every week?”
Grocery
stores offer a wide range of fruit flavored drinks (fruit punches). Some
contain high amounts of fructose and sucrose. Some contain an artificial
sweetener. Look at the “Nutrition Facts” label.
A 12-oz can
of Coke contains 42 gm of sugar (high fructose corn syrup or sucrose).
A 12-oz can
of Diet Coke contains zero calories (aspartame).
My “pure” orange juice contains 36 grams of sugar per 12 oz, almost as much as 12 oz of Coke. What is the metabolic difference?
Note also that many other foods (especially breakfast
cereals) contain a high concentration of sugar. Do these foods, in contrast to
S-SSD, have a higher satiety value? Moral: “Give your pancreas a break”. RTJ
8-2 SUGAR-SWEETENED
SOFT DRINKS, OBESITY, AND TYPE 2 DIABETES.
When individuals increase liquid carbohydrate
consumption, they do not
reduce their solid food consumption. An increase in liquid carbohydrates leads,
perversely, to even greater caloric consumption of other foods.
“A better mechanism for weight gain could not have
developed than introducing a liquid carbohydrate with calories that are not
fully compensated for by increasing satiety.”
Conversely, intake of diet (non-sugar containing) sodas is associated with a lowering of risk of childhood obesity.
“Reducing sugar-sweetened beverage consumption may be
the best single opportunity to curb the obesity epidemic.”
This convinces
me to ask a routine screening question, especially for overweight patients and
patients with type 2 diabetes—“How many Cokes and how many Diet Cokes do you
drink every week?” RTJ
8-3 DIAGNOSTIC
AND THERAPEUTIC IMPLICATIONS OF RELATIONSHIPS BETWEEN 2-HOUR POSTCHALLENGE
PLASMA GLUCOSE, AND HEMOGLOBIN A1C VALUES
Hemoglobin A1c is the preferred method to monitor
long-term glycemic control. Lower levels are associated with reduced risks for
both micro- and macro-vascular diabetic complications. Current recommendations
vary: targets of 7%, 6.5%, or 6% and below.
This study (of a large cohort of apparently healthy
individuals) determined the relationships between fasting plasma glucose (fasting PG), 2-hour post-challenge
plasma glucose (2-h PG), and HbA1c
levels. All had HbA1c levels under 7%.
HbA1c vs
fasting PG:
As HbA1c increased, fasting PG gradually increased. When HbA1c reached about 6.5%, the mean fasting PG was abnormal (110 mg/dL and over).
HbA1c vs 2-h
PG:
As the HbA1c increased, the 2-h PG increased—much more rapidly than the fasting PG. The top 4 deciles of HbA1c were associated with impaired glucose tolerance (2-h PG over 140 mg/dL). This included many individuals with HbA1c levels under 6%. The 2-h PG is a more sensitive marker of abnormal glucose metabolism than fasting PG.
HbA1c vs
diabetes:
DM2 was diagnosed in a few individuals with HbA1c levels as low as 5%. Prevalence of DM2 was higher in individuals with HbA1c levels 5.3% to 6%. About half of those with HbA1c over 6.33% had DM2.
Subdivided by HbA1c levels:
HbA1c level        Impaired glucose tolerance (%)    Diabetes mellitus (%)
Below 5%                         16                            1
5% to 5.4%                       37                            5
5.5% to 6.9%                     53                           24
Persons with fasting PG and HbA1c levels in the upper range of “normal” may be at increased risk of cardiovascular disease due to increased post-meal PG levels.
An appreciable number of individuals with a normal
fasting glucose will have an abnormal 2-hour PG.
Their
condition may be undiagnosed and untreated.
As HbA1c rises, 2-h PG increases at a much greater rate than fasting PG
levels, and contributes more to the increase in HbA1c levels. 2-h PG is a more
reliable indicator of abnormal glucose metabolism than either fasting PG or
HbA1c, and a sensitive indicator of risk of developing diabetes.
The diagnostic use of fasting PG levels is
suboptimal. The upper limits for HbA1c and fasting PG (at 110 mg/dL) are set too high.
Only 17% of subjects in this study who had impaired GT had an abnormal fasting
PG level. This supports the recommendations of the WHO that the oral glucose
tolerance test be the main diagnostic procedure.
Most individuals with HbA1c levels between 6% and 7%
have normal fasting PG levels but abnormal 2-h post challenge PG levels.
Attempts to lower HbA1c will require treatment preferentially directed at
lowering postprandial glucose levels.
This
relatively short article presented a great deal of data.
The message
is—clinicians should depend more on the 2-h PG than on either the fasting PG or
the HbA1c to determine abnormal glucose metabolism. HbA1c is not a reliable
method for diagnosing impaired glucose tolerance or DM2. Fasting PG (even at
the new standard of 100) is not as indicative of abnormal glucose metabolism as
the postprandial PG.
As a general
life-style measure, post-meal glucose levels should be maintained at low levels
by a low glycemic load diet.
I believe
primary care clinicians, when screening for DM2, can ask their patients to come
to the office about 2 hours after a meal to have their blood glucose checked. A
GTT can be requested if the postprandial PG is high.
But, beware
of formally labeling a patient as being “diabetic” or having “abnormal glucose
tolerance” if the 2-h PG levels are not unduly high. The levels may change over
time. When diagnosed by blood tests, “Once a diabetic, always a diabetic” is
not true. Levels may change according to weight loss, and diet. Labeling may
adversely affect insurance and employment and cause anxiety.
Measuring
2-h PG levels can lead to earlier preventive therapeutic interventions than
either HbA1c or fasting PG. RTJ
8-4 EFFECTS
OF DIETARY GLYCAEMIC INDEX ON ADIPOSITY, GLUCOSE HOMEOSTASIS, AND PLASMA LIPIDS
IN ANIMALS.
Several trials have reported that persons consuming a
low glycemic index (GI) diet weigh
less than those consuming a high GI diet. However, clinical outcomes in humans
cannot be attributed solely to GI because interventions designed to modify GI
unavoidably also influence intake of fiber, and alter palatability and energy
density.
This study, in rats and mice, examined the effects of
GI on adiposity and other endpoints.
Despite maintenance of similar body weight, rats given
high GI food gained more body fat, and had less lean body mass compared with
the low-GI animals. The GI had an independent effect on body composition. Rats
in the high GI group required less food to gain the same weight. This suggests
that they had become more metabolically efficient.
The high-GI group had greater increases over time in
the areas under the curve for blood glucose and plasma insulin, a lower plasma
adiponectin, much higher plasma
triglyceride concentrations, and severe disruption of islet-cell architecture.
“We speculate that the striking chronic primary
peripheral hyperinsulinemia induced by the high-GI diet alters nutrient
partitioning in favor of fat deposition, shunting metabolic fuels from
oxidation in muscle to storage in fat.”
This study
fits in well with the preceding articles.
I rarely
abstract studies based on animal research. We cannot confidently extrapolate
the results to humans.
As these
investigators suggest, long-term studies on effects of GI and GL in humans
would be extraordinarily difficult.
If you access
“Glycemic index” on Google, you will receive much more information on
individual foods than you may wish to know. For those seeking low GI diets,
implementation is simpler than the vast tables of numerical values of
individual foods suggest: avoid “sugar”
(sucrose), use breakfast cereals based on oats, barley and bran (and without
added sugar); use grainy breads made of whole seeds; reduce the amount of
potatoes; enjoy all types of nuts, fruits, and vegetables (except potatoes),
and eat plenty of salad vegetables with an oil dressing. Most refined foods (bread,
refined cereal, potato, and glucose and sucrose) have high GI. Most non-starchy
vegetables, fruits, legumes, and nuts have low GI.
Breakfast
cereals based on refined grains and with added sugar present a “double whammy”.
The food industry is gradually increasing the number of available healthy foods.
Although
controversy still exists as to the clinical importance of GI and GL, it makes
common sense to me. (But, beware of “common sense”. RTJ)
8-5 PREVALENCE,
CARE, AND OUTCOMES WITH DIET-CONTROLLED DIABETES IN GENERAL PRACTICE
By tradition, a substantial number of people with type
2 diabetes (DM2) have been managed
without medication. They are usually offered dietary advice. Irrespective of
whether patients remember or follow the advice, they are referred to as being
managed by diet alone.
This study aimed to establish the proportion of
patients with DM2 in general practice treated by diet alone. And to determine
levels of complications and quality of care received as compared with patients
on medication.
Overall, about 1/3 of patients were managed with diet
alone. But there was great variation between practices (16% to 73%).
Diet-alone patients were much less likely to have received monitoring for
HbA1c, BP, lipids, smoking, microalbuminuria, and foot pulses.
Patients on diet alone were more likely to have high
BP, and were less likely to receive antihypertensive drugs. They were 45% more
likely to have a high cholesterol, and less likely to receive lipid-controlling
drugs.
Although some individuals might be effectively managed
by diet alone, there is a case for better surveillance and for more intensive
therapy.
I believe
that in the U.S. many patients with known diabetes are not treated and followed
as carefully as they should be. And many more who have diabetes and do not know
it are not treated at all. RTJ
8-6 PRIMARY
PREVENTION OF CARDIOVASCULAR DISEASE WITH ATORVASTATIN IN TYPE 2 DIABETES
Current prescription rates for lipid lowering drugs in
patients with DM2 remain low, even in patients with established cardiovascular
disease (CVD).
This study assessed the effectiveness of a 10 mg dose
of atorvastatin (Lipitor) vs placebo in primary prevention of CVD in patients with DM2. None had high
concentrations of LDL-c. The trial was stopped 2 years early because of
demonstration of significant benefit.
None had documented history of CVD. All had at least
one risk factor: retinopathy, macro- or
micro-albuminuria, current smoking, or hypertension. The risk of a major
cardiovascular event in these patients was 10% over 4 years.
Incidence of major cardiovascular events was 25 per
1000 person-years at risk in the placebo group vs 15 per 1000 person-years at
risk in the atorvastatin group. Therefore, allocation of 1000 patients to
atorvastatin would avoid 37 first major events over a 4-year follow-up. 27
patients would need to be treated for 4 years to prevent one event. [NNT (for
4 years to benefit one) = 27]
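The NNT arithmetic above can be sketched in a few lines (a hypothetical check using only the incidence figures quoted; the trial's reported 37 events avoided comes from its actual survival data, so the crude rate-times-years product slightly overshoots it):

```python
# Approximate events per 1000 patients over the follow-up period
# from an incidence rate expressed per 1000 person-years.
def events_per_1000(rate_per_1000_py: float, years: float) -> float:
    return rate_per_1000_py * years

placebo = events_per_1000(25, 4)   # ~100 events per 1000 patients
statin = events_per_1000(15, 4)    # ~60 events per 1000 patients
avoided = placebo - statin         # crude estimate ~40; trial reported 37

# NNT = patients treated per event prevented, using the trial's figure.
nnt = round(1000 / 37)
print(avoided, nnt)                # prints: 40.0 27
```

The small gap between the crude 40 and the reported 37 reflects censoring and varying follow-up in the real trial data.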
“The debate about whether all patients with DM2 warrant statin therapy should now focus on
whether any patients can reliably be identified as being at sufficiently low
risk for this safe and effective treatment to be withheld.”
These data challenge the use of a particular threshold
level of LDL-c as the sole arbiter of which patients with DM2 should receive
statin therapy (as in the case of most current guidelines). Target levels of
LDL-c (100 mg/dL) could be lowered.
An editorialist comments that the conclusions of the study “seem too far-fetched in view of
the available clinical trials and epidemiological data”. He cites 4 large
studies of lipid control which contained many patients with DM2. Two of the
four did not report a statistically significant reduction in coronary disease.
Two did.
Clinical trials enroll carefully selected patients.
The results cannot necessarily be extrapolated to primary care practice. Many
patients may be at low risk and the benefit/ harm-cost ratio may be too low to
warrant long-term treatment. Some may be at higher risk of adverse effects from
statins. As always, individualization is required.
I believe the
majority of patients with DM2 will benefit from statin therapy for primary
prevention. Most will have one or more additional risk factors. There would be
no question regarding secondary prevention.
Authors and
publishers persist in presenting relative benefits (rather than absolute
differences). Thus, they reiterate that treatment with atorvastatin was related
to a 37% reduction in major coronary events; a 31% reduction in coronary
revascularizations; a 48% reduction in stroke; and a 27% reduction in deaths.
This can be
very misleading. I believe statements of relative benefits should be eliminated
from published reports.
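The gap between relative and absolute framing is easy to illustrate with the trial's own incidence figures (a hypothetical sketch; the percentages are derived only from the 25 vs 15 per 1000 person-year rates quoted above):

```python
# Major cardiovascular events per person-year, from the trial's figures.
control_rate = 25 / 1000   # placebo group
treated_rate = 15 / 1000   # atorvastatin group

# Relative risk reduction: large-sounding percentage.
relative_reduction = (control_rate - treated_rate) / control_rate

# Absolute risk reduction: the difference a patient actually experiences.
absolute_reduction = control_rate - treated_rate

print(f"relative: {relative_reduction:.0%}")            # relative: 40%
print(f"absolute: {absolute_reduction:.1%} per year")   # absolute: 1.0% per year
```

A "40% reduction" and a "1 percentage point per year reduction" describe the same data; only the absolute figure conveys how many patients must be treated to benefit one.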
8-7
SHORT-TERM MENOPAUSAL HORMONE THERAPY FOR SYMPTOM RELIEF
Hormone therapy (HT)
provides the most effective relief of menopausal symptoms. Recently caution has
been recommended because of an associated increase in risks of coronary heart
disease (CHD), stroke, breast cancer
(BC), and pulmonary embolism (PE). However, the average risk is very
low—7 additional cases of CHD events, 8 more strokes, 8 more PEs, and 8 more
BCs per 10 000 women each year. (About 3
per 1000 per year.)
HT is associated with small losses in life expectancy.
It is inappropriate for primary
prevention of chronic disease. There is no place for its use in
asymptomatic women.
This study identified women who would benefit from
short-term HT by exploring the trade-off between relief of menopausal symptoms
and risks of inducing disease.
Among women at low risk for CHD (no risk factors), HT
extended quality-adjusted life expectancy (QALE) even if menopausal symptoms were mild
(reducing quality of life by as little as 4%). Among women at high CHD risk (3
risk factors), HT extended QALE if symptoms lowered quality of life by 12% or
more.
“Whether short-term [HT] is beneficial or harmful depends
primarily on a woman’s treatment goals, the severity of her estrogen-responsive
symptoms, and her cardiovascular disease risk.” Each woman’s values must be
incorporated into the HT decision. Is
she willing to accept a very small risk of breast cancer; coronary events;
stroke; in order to be free of menopausal symptoms?
If the goal is to maximize QALE, HT can be beneficial,
especially among women at low CVD risk, even when menopausal symptoms are mild.
The decision to use HT depends on its efficacy in
alleviating symptoms. Not all symptoms during perimenopause are due to
declining estrogen levels. Not all respond to HT. There is a large placebo
effect. Clinical trials examining the impact of HT on symptoms suggest that a
1-month trial is sufficient to determine response. If response is not
satisfactory, HT may be withdrawn.
I have found
results of simulating “models” unhelpful in primary care practice. They cannot
readily be applied to individuals. However, they may tilt clinicians toward or
against using certain interventions, especially if there are other studies
indicating a balance of benefit to harm.
A 12%
reduction in quality of life seems to me to be a relatively small reduction. I
believe if HT improved symptoms and QOL in women with such a moderate reduction
of QOL, it would be much more strongly indicated in women with severe symptoms
(eg, a 50% reduction in QOL.)
Treatment-associated
risks increase with duration of HT.
The American College of Obstetricians and Gynecologists is considering
advising short-term HT (less than 5 years) for control of menopausal symptoms.
Estrogen-alone
therapy (in hysterectomized women) is much safer than combined
estrogen/progestin.
I believe
most primary care clinicians are generous in prescribing HT. The dose of HT
should be kept as low as possible to relieve symptoms, and continued for as
short a time as is necessary.
I would be
more hesitant to prescribe HT for a woman who smokes, has high BP, diabetes or
glucose intolerance, uncontrolled lipids, a strong family history of BC, is a
carrier of the breast cancer gene, or has a past history of BC or CVD. RTJ
8-8 APPROPRIATE
USE OF OPIOIDS FOR PERSISTENT NON-CANCER PAIN
Primary care clinicians have been torn between
opposing perspectives on use of opioids for severe non-cancer pain. Risks and
benefits continue to be debated.
Beginning in the mid-1980s, the consensus view of pain
specialists rapidly shifted toward less restrictive use. By 1997, opioid therapy for chronic pain was
described as an “extension of the basic principles of good medical practice”.
Data highlighted the variability of the population with chronic pain and showed
highly favorable outcomes in some patients who received opioid therapy. Many
patients with chronic pain who were treated with opioids for months or years
had constant pain relief, no significant toxicity, and stable or improved
function. Data confirmed that patients could handle these drugs responsibly,
without the problematic behaviors of abuse and addiction.
Unfortunately, increased medical use was associated
with heightened abuse. The burgeoning non-medical use of oxycodone and other drugs,
combined with media stories of abuse-related tragedies, seemed to threaten
medical practice with a regulatory backlash. Pain specialists perceived that
championing this therapy could not continue without a clear focus on the risks
of abuse, addiction in predisposed persons, and diversion to an illicit
market. This evolution brought the
concept of balance to the level of the individual practitioner. Safe and
effective prescribing requires skills in pharmacotherapy and in risk assessment
and management.
Current recommendations for appropriate use for
persistent non-cancer pain aim for a balance. The primary goal is comfort.
Opioid therapy is just one tool among many to manage pain.
I believe
most primary care clinicians would refer these patients to a pain center, if
available. Occasionally the burden of prescription will fall on the generalist.
I would suggest that the single main consideration for long-term opioid use
would be—“Know the patient well”. Know his personal history as well as his
medical history. If the sufferer lives
in the community and he and his family have been known to you for years, the
risk of abuse would be minimized. RTJ
See the
abstract for web sites. RTJ
8-9 TERMINAL
SEDATION: An Acceptable Exit Strategy?
Terminal sedation is used by the physician to sedate a
terminally ill patient until coma develops in order to alleviate intolerable
suffering refractory to conventional palliative measures. It is controversial.
(Physician-assisted suicide, in contrast, is legal only in Oregon.)
It has been condemned by some as euthanasia in
disguise. Others, such as the U.S.
Supreme Court Justice O’Connor, have endorsed the practice arguing that “a
patient who is suffering from a terminal illness and who is experiencing great
pain has no legal barrier to obtaining medication from qualified physicians to
alleviate that suffering, even to the point of causing unconsciousness and
hastening death”.
One of the major objections to terminal sedation is
that its intent may be to kill the patient in order to alleviate suffering. The
intent of palliative care, by contrast, is to relieve suffering, even if the
treatment, such as opioids, shortens life. “Intent matters”—in law and in
ethics. The rule of “double effect” states that foreseeable adverse consequences
of treatment (ie, side-effects) are
acceptable only if they are not
intended. A second objection is that
terminal sedation could take place without the patient’s consent, a process
indistinguishable from involuntary euthanasia.
In America, we worry that patients who lack access to
care, or whose values differ from those of their physicians, might be
euthanized without their consent. And that the rate of terminal sedation
might be high in the U.S. because physician-assisted suicide is largely unavailable.
“We need to control the use of terminal sedation by
developing and implementing practice guidelines.” We must confirm the diagnosis, consider alternative approaches,
and obtain informed consent.
Is the
patient conscious and competent to make his own decisions when nearing death?
Then possible approaches to end-of-life care would include:
1) Request
palliative, comfort care until death.
2) Request
withdrawal of life-sustaining treatment.
3) Decide to
cease intake of food and fluids.
4) Request
induction of a coma, or near-coma, to be sustained until death.
5) Request a
prescription of barbiturates to have on hand if the patient wishes at some time
to take them.
(Although
this is not legal in almost all states, I believe physicians invoke the
practice in many cases by subterfuge.)
If the
patient is not competent to make his own decisions, or if his directions before
loss of competency were not clearly stated, the problem becomes more difficult.
A surrogate must then decide what is in the best interest of the patient.
Options would include only 1), 2) and 3). RTJ
8-10 USE OF
RAPID-CYCLE QUALITY IMPROVEMENT METHODOLOGY TO REDUCE FEEDING TUBES IN PATIENTS
WITH ADVANCED DEMENTIA
Feeding tubes are often placed in patients with
dementia who are hospitalized for an acute illness; often contrary to the
wishes of patients and families.
A growing body of research over the past decade has
questioned the utility of placing feeding tubes in patients with advanced
dementia. There is no evidence that
feeding tubes in this population prevents aspiration, prolongs life, improves
overall function, or reduces pressure sores. Feeding tubes may adversely affect
quality-of-life. Patients may require wrist restraints to prevent pulling on
the tube. They may develop cellulitis at the site, and be deprived of the
social actions and pleasure surrounding meals.
The aim of this study was to describe the effect of
efforts of an interdisciplinary team focusing on an educational program to
change the staff’s approach to use of feeding tubes. After completion of the
program, use of feeding tubes declined dramatically. This prevented patients
from receiving futile treatment.
I omitted
details of the study, and concentrated only on a reminder that feeding tubes
may be harmful and overused in primary care practice. I do believe, however,
that use is declining. RTJ
ABSTRACTS
AUGUST 2004
Ask—“How
Many Cokes, And How Many Diet Cokes Do You Drink Every Week?”
8-1
SUGAR-SWEETENED BEVERAGES, WEIGHT GAIN, AND INCIDENCE OF TYPE 2 DIABETES
IN YOUNG AND MIDDLE-AGED WOMEN
The prevalence of type 2 diabetes (DM2) has increased rapidly during the
last decades in parallel with the obesity epidemic.
Soft drink consumption increased by 61% in adults, and
almost doubled in children and adolescents from 1977 to 1997. Evidence suggests
an association between intake of sugar-sweetened soft drinks (S-SSD) and risk of obesity in
children. Data among adults are limited. In addition to contributing to obesity, S-SSD might increase risk of diabetes
In addition to contributing to obesity, S-SSD might increase risk of diabetes
because they contain large amounts of high-fructose corn syrup, which raises
blood glucose similarly to sucrose.
Soft drinks are the leading source of readily
absorbable sugars in the US diet. Each serving represents a considerable amount
of glycemic load which may increase risk of DM2. (One 12 ounce can of S-SSD
contains 40 to 50 g of sugar.)
This study examined the relationships between
sugar-sweetened beverage consumption (especially soft drinks), weight gain, and
risk of diabetes in a large cohort of young and middle-aged women.
Conclusion:
Higher consumption of sugar-sweetened beverages was associated with
weight gain and increased risk of DM2.
STUDY
1.
A prospective weight-change analysis conducted from 1991 to 1999
included over 51 000 women. (A subset of the Nurses’ Health Study). All were
free of diabetes and other major chronic diseases at baseline.
2. Complete dietary information and body
weight were ascertained periodically. Determined intake of S-SSD and
sugar-fortified fruit punches. Related
changes in intake (higher or lower) during the 10 year period to weight change
and incidence of type 2 diabetes.
3.
Main outcome measures = weight gain and incidence of DM2.
RESULTS
1. Women with a higher intake of S-SSD
tended to be less physically active, to smoke more, and to have a higher
intake of total energy, and lower intakes of protein, alcohol, and cereal fiber.
(Ie, a generally poorer lifestyle.)
2. Their intakes of total carbohydrates,
sucrose, and fructose, as well as the overall glycemic index, were also higher. Their starch intake was lower.
3.
Effect of increasing S-SSD intake
during the observation period:
A. Some women (n = 1007) increased their
S-SSD consumption from 1 per week or less to 1 per day or more. This was
associated with an increase in total energy intake of 358 kcal/d.
B. Over the entire 10 year period, women
who increased their S-SSD intake from low to high had larger increase in weight
(4.4 kg) and increase in BMI (1.6) compared with women who maintained a low
intake, or substantially reduced their intake. Women who increased their S-SSD during the first 5 years, and maintained a
high intake over the next 5 years, gained on average 8 kg over the 10 years.
C. Women who increased consumption of S-S fruit
punch from one drink or less per week to 1 drink or more per day gained
more weight compared with women who decreased consumption.
4.
Effect of decreasing S-SSD intake:
A. Women who decreased their S-SSD intake from 1 drink per day to 1 or fewer drinks per
week (n = 1020) reduced their total energy consumption by 319 kcal/d.
B. Women who decreased their intake during the first 5 years and maintained a
low level gained 2.8 kg (vs 4.4 kg in
those who increased their intake).
5.
What about diet soft drinks? (artificial
sweetener):
A. Participants who increased diet soft drinks from one drink or less per week to more
than one drink per day gained significantly less
weight (1.6 kg) than women who decreased
their intake from one or more drinks daily to 1 drink or less per week (4.2
kg). (Ie, higher intake related to less
weight gain.)
6.
Risk of diabetes:
A. During 10 years (over 716 000
person-years), 741 new cases of type 2 diabetes were documented. Greater S-SSD
consumption was strongly associated with progressively higher risk. (RR = 1.9 in women consuming one or more drinks per day vs those consuming
less than one per month.)
B. Sugar-sweetened fruit punch was also associated
with increased risk (RR = 2.0)
C. Fruit juice was not
associated with risk of diabetes.
DISCUSSION
1. Over 8 years, there were positive
associations between sugar-sweetened beverage consumption and both greater
weight gain and risk of type 2 diabetes, independent of known risk factors.
2. Sugar-sweetened soft drinks may
contribute to weight gain because of the low satiety of liquid foods. Energy
provided by sugar-sweetened beverages does not affect subsequent food and energy intake. (Ie, little or no
compensation by reductions in intake of other foods.) Weight gain and obesity
result from the positive energy balance.
3. Other studies have reported that S-SSD
added to an ad libitum diet over 3 to 10 weeks significantly increased caloric
intake in overweight persons and resulted in an increase in body weight.
“Consumption of sugar sweetened soft drinks has also been associated with
greater risk of obesity in children, while consumption of diet soft drinks has not.”
4. The lower weight gain associated with
reduction in intake of S-SSD compared with stable intakes suggests that women
do benefit from decreasing consumption.
5. S-SSD, in addition to causing weight
gain may increase risk of diabetes because of their high amount of rapidly
absorbed carbohydrates. Their high fructose content has the same effects on
blood glucose as sucrose. A fast and
dramatic increase occurs in insulin levels as well as glucose. S-SSD
contributes to a high glycemic index of the overall diet, a risk factor for
diabetes in this study population and other cohort studies.
6. Fruit juice was not associated with diabetes risk in this study. This suggests that
naturally occurring sugars in beverages may have different metabolic effects
than added sugars. Fruit juices generally have a lower glycemic index than
sugar-sweetened soft drinks and sugar-sweetened fruit punches. Fruit juices can be recommended over fruit
punch or S-SSD as the least of 3 evils. Fruit juices are not completely safe if
the extra energy associated with their consumption is not associated with
lowering of intake of other foods. Some fruit punches contain only a small
proportion of fruit juice, but large amounts of added high-fructose corn syrup,
therefore providing little nutritional value compared with pure fruit juices.
7. The authors, as usual, state that
observational studies cannot prove causality.
CONCLUSION
Higher consumption of sugar-sweetened beverages was
associated with a greater magnitude of weight gain, and an increased risk of
development of diabetes, possibly by providing excessive calories and large
amounts of rapidly absorbed sugars. Higher intake of diet soft drinks (artificially sweetened) was associated with less
weight gain.
JAMA August 25, 2004; 292: 927-34 Original investigation, first author
Matthias B Schultze, Harvard School of Public Health, Boston Mass.
============================================================================
“A
Quick Screening Test for Increased Risk of Obesity and Type 2 Diabetes.”
8-2
SUGAR-SWEETENED SOFT DRINKS, OBESITY, AND TYPE 2 DIABETES.
(This editorial comments and expands on the preceding
study.)
“Sugar-sweetened soft drinks contribute 7.1% of total
energy intake, and represent the largest single food source of calories in the
US diet.” The rise of obesity and type 2 diabetes parallels the increase in
sugar-sweetened soft drink consumption. One can of S-SSD added to the caloric
intake each day could lead to a 15 lb weight gain in one year.
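The arithmetic behind that estimate can be sketched as follows (a rough energy-balance check; the ~150 kcal per can and the ~3500 kcal-per-pound-of-fat rule of thumb are my assumptions, not figures from the editorial):

```python
# Rough check of the "one can a day -> ~15 lb/year" claim.
# Assumptions (not from the article): ~150 kcal per 12-oz sugar-sweetened can,
# ~3500 kcal of surplus energy per pound of body fat gained.
KCAL_PER_CAN = 150
KCAL_PER_POUND_FAT = 3500
DAYS_PER_YEAR = 365

surplus_kcal_per_year = KCAL_PER_CAN * DAYS_PER_YEAR        # 54,750 kcal
pounds_gained = surplus_kcal_per_year / KCAL_PER_POUND_FAT  # ~15.6 lb

print(round(pounds_gained, 1))
```

The result, roughly 15.6 lb, is consistent with the 15 lb figure quoted, assuming none of the extra liquid calories are compensated for elsewhere in the diet.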
Conversely, intake of diet (non-sugar containing) sodas is associated with a lowering of risk of childhood obesity.
When individuals increase liquid carbohydrate
consumption, they do not reduce their
solid food consumption. An increase in liquid carbohydrates leads, perversely,
to even greater caloric consumption of other foods.
“A better mechanism for weight gain could not have
developed than introducing a liquid carbohydrate with calories that are not
fully compensated for by increasing satiety.”
“Sugar-sweetened beverage consumption as a marker for
an unhealthy lifestyle has the potential of being a quick screening test for
increased risk of obesity and type 2 diabetes.”
“Reducing sugar-sweetened beverage consumption may be
the best single opportunity to curb the obesity epidemic.”
JAMA August 25, 2004; 292: 978-79 Editorial by Caroline M Apovian, Boston
University School of Medicine, Boston, Mass
============================================================================
2-H
PG Levels Can Lead To Earlier Preventive Interventions Than Either HbA1c Or
Fasting PG
8-3
DIAGNOSTIC AND THERAPEUTIC IMPLICATIONS OF RELATIONSHIPS BETWEEN 2-HOUR
POSTCHALLENGE PLASMA GLUCOSE, AND HEMOGLOBIN A1C VALUES
Hemoglobin A1c is the preferred method to monitor
long-term glycemic control. Lower levels are associated with reduced risks for both
micro- and macro-vascular diabetic complications. Current recommendations vary:
a target of 7%; 6.5%; 6% and below.
This study determined the relationships between
fasting plasma glucose (fasting PG),
2-hour post-challenge plasma glucose (2-h
PG), and HbA1c levels.
Conclusion:
The “normal” fasting PG (up to 110 mg/dL in this study) and the
recommended HbA1c levels are set too high. Most patients with HbA1c between 6%
and 7% had an abnormal 2-h PG.
STUDY
1. Analyzed data on 457 healthy
individuals (mean age 52). All had HbA1c values less than 7%. All underwent oral GTT screening as
potential research volunteers, or for diagnostic purposes.
2. Although not a population-based survey,
the authors believe the results are representative of the general population.
RESULTS
1. 243 of 457 (53%) had a normal glucose
tolerance. The authors calculated the normal ranges of HbA1c, FPG, and 2-h PG
from this group.
A. Their mean HbA1c level was 5.05% plus
or minus a standard deviation of 0.47%. The upper limit of normal was therefore 5.99% (mean + 2 SD).
B. Their mean FPG was 86 mg/dL plus or
minus a standard deviation of 8. This yields an upper normal limit of 102 mg/dL (mean + 2 SD).
C. Their mean 2-h PG was 103 plus or minus
a standard deviation of 18.6. This yields an upper normal limit of 140 (mean + 2 SD).
(I remember
well when the top normal value for total cholesterol was 300 mg/dL. This was
based on values obtained from a large group of “normal, healthy” individuals.
The mean was calculated and the top normal set at 2 SD above the mean. The
problem, as we all know now, was that many of these individuals were not normal
or healthy. Many were at increased risk for cardiovascular disease.
Subsequently the top normal levels have been periodically reduced. I suspect
that the top normal levels for HbA1c, fasting PG, and 2-h PG will be lowered.
Indeed, the ADA has already lowered the recommended fasting PG to 100 mg/dL.
RTJ)
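The "mean + 2 SD" convention used to derive these upper limits of normal is easy to verify; a minimal sketch using the means and standard deviations the study reported for its normal-glucose-tolerance group:

```python
# Upper limit of normal = mean + 2 standard deviations,
# applied to the values reported for the normal-GT group.
def upper_limit(mean, sd):
    return mean + 2 * sd

print(upper_limit(5.05, 0.47))  # HbA1c: ~5.99 %
print(upper_limit(86, 8))       # fasting PG: 102 mg/dL
print(upper_limit(103, 18.6))   # 2-h PG: ~140.2, reported as 140 mg/dL
```

As the RTJ comment notes, this statistical definition of "normal" is only as good as the reference population it is computed from.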
2.
Relation between HbA1c (under 7%) and fasting PG in apparently healthy
individuals:
The authors divided HbA1c levels into deciles.
HbA1c (mean) / Fasting PG (mean)
Decile #1 (lowest): 4.33% / 83 mg/dL
Decile #9: 5.93% / 98
Decile #10 (highest): 6.46% / 114
(Fasting PG gradually rose as HbA1c increased. When
HbA1c reached about 6.5%, the mean fasting PG was abnormal.)
3. Relation between HbA1c and 2-h PG: As the HbA1c rose, the 2-h PG rose much more
rapidly than the fasting PG to abnormal levels.
HbA1c (mean) / 2-h PG (mean)
Decile #1: 4.33% / 83
Decile #7: 5.48% / 150
Decile #8: 5.65% / 158
Decile #9: 5.93% / 160
Decile #10: 6.46% / 208
(The top 4 deciles of HbA1c were
associated with an impaired glucose tolerance. This included many individuals
with levels under 6%.)
4.
Relation between fasting PG and 2-h PG:
As the HbA1c increased, the 2-h PG increased much more
rapidly to abnormal levels than the fasting PG. The 2-h PG is a more
sensitive marker of abnormal glucose metabolism than the fasting PG.
5.
Relation between HbA1c and diabetes (DM2; 2-h PG over 200):
DM2 was diagnosed in a few individuals with HbA1c
levels as low as 4.93%-5.01%.
Prevalence of DM2 was higher in
individuals with HbA1c levels 5.3% to 6%. About half of those with HbA1c over
6.33% had DM2.
6.
When subjects were subdivided by HbA1c levels:
Impaired glucose tolerance (%) / Diabetes mellitus (%)
HbA1c below 5%: 16 / 1
HbA1c 5% to 5.4%: 37 / 5
HbA1c 5.5% to 6.9%: 53 / 24
DISCUSSION
1.
Hyperglycemia has been identified as an independent and continuous risk factor
for cardiovascular disease.
2. Persons with fasting PG and HbA1c
levels in the upper range of “normal” may be at increased risk of
cardiovascular disease due to increased post-meal PG levels.
3. An appreciable number of individuals
with a normal fasting glucose will have impaired glucose tolerance (abnormal
2-hour PG). Their condition may be undiagnosed and untreated. 2-h PG increases at a much greater rate than
fasting PG levels and contributes more to the increase in HbA1c levels.
4. The 2-h PG is a more reliable indicator
of abnormal glucose metabolism than either fasting PG or HbA1c, and a more
sensitive indicator of risk of developing diabetes.
5. The diagnostic use of fasting PG
levels, as currently recommended by the ADA, is suboptimal. Only 17% of
subjects in this study who had impaired GT had an abnormal fasting PG level.
This supports the recommendations of the WHO that the oral glucose tolerance
test be the main diagnostic procedure.
6. This study supports the view that the
recommended “normal” fasting PG levels may be set too high. (The mean in this
study was 86 mg/dL.) The top normal would be 102, less than the current
standard of 109. (Indeed, the ADA recently reduced the top normal FPG to 100.)
7. Macro-vascular
complications account for most of the morbidity and mortality in people with
DM2. Setting lower 2-h PG and HbA1c
levels may be needed to prevent these complications. Achievement of a lower
HbA1c would depend on reducing postprandial hyperglycemia.
CONCLUSION
Most individuals with HbA1c levels between 6% and 7%
have normal fasting PG levels but impaired glucose tolerance (high 2-h post-challenge
PG levels). The upper limits of normal for HbA1c and fasting PG (at
110) are set too high. As a general life-style measure, post-meal glucose
levels should be maintained at low levels by a low glycemic load diet.
Attempts to lower HbA1c will require treatment
preferentially directed at lowering postprandial glucose levels.
Archives Int Med August 9/23 2004; 164:
1627-32 Original investigation,
first author Hans J Woerle, University of Rochester, Rochester, NY.
Definitions: Plasma glucose (PG) / Whole blood glucose a
Normal fasting (old): 109 or less / 100
The new ADA fasting normal: 100 or less / 90
Normal 2-h post challenge: Less than 140 / Less than 126
Impaired fasting: 110-126 / 91-113
Impaired glucose tolerance (2-h post glucose challenge): 140-199 / 126-180
Diabetes, fasting: 126 and above / 113 and above
Diabetes, 2-h post glucose challenge: 200 and above / 180 and above
a My calculation. Remember that the whole
blood glucose (as often done by the patient or in the office by finger stick)
is 10% to 15% lower than the concomitant plasma glucose. Thus the normal
fasting and 2-h blood glucose is considerably lower than the plasma levels
stated in the article. The plasma levels are usually determined by a reference
laboratory. RTJ
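The plasma-to-whole-blood conversion RTJ describes can be sketched as below. The 0.90 factor is my assumed midpoint of his stated 10% to 15% range, not a calibration constant from the article; actual meter calibration varies, and the article's table values are the author's own rounded calculations.

```python
# Approximate conversion from plasma glucose to whole-blood glucose.
# Whole-blood glucose runs roughly 10-15% lower than plasma;
# 0.90 is an assumed midpoint factor, not a standard constant.
PLASMA_TO_WHOLE_BLOOD = 0.90

def whole_blood(plasma_mg_dl):
    return plasma_mg_dl * PLASMA_TO_WHOLE_BLOOD

print(whole_blood(200))  # diabetes cutoff: 200 plasma -> ~180 whole blood
print(whole_blood(140))  # 2-h normal cutoff: 140 plasma -> ~126 whole blood
```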
=============================================================
GI
As An Independent Factor Can Cause Obesity And Increase Risk Of Diabetes In
Animals.
8-4
EFFECTS OF DIETARY GLYCAEMIC INDEX ON ADIPOSITY, GLUCOSE HOMEOSTASIS,
AND PLASMA LIPIDS IN ANIMALS.
The glycemic index (GI) of foods has been related to risk of obesity and diabetes on
experimental and theoretical grounds. Habitual consumption of high-GI foods in
meals (causing a large postprandial rise in blood glucose and insulin) could
initiate a sequence of metabolic events that stimulate hunger, promote fat
deposition, and place pancreatic beta-cells under increased stress.
Several trials have reported that persons consuming a
low GI diet weigh less than those consuming a high GI diet. However, clinical
outcomes in humans cannot be attributed solely to GI because interventions
designed to modify GI unavoidably also influence intake of fiber, and alter
palatability and energy density.
This study, in rats and mice, examined the effects of
GI on adiposity and other endpoints by strict control of the GI of their diets.
Conclusion: GI
as an independent factor can cause obesity and increase risk of diabetes in
animals.
STUDY
1. Rats and mice were given diets with
identical nutrients, except for the type of starch (high GI starch and low GI
starch). Energy intake was controlled so they gained the same body weight. (See the text for details of the experiment
which was continued over 18 weeks.)
RESULTS
1. Despite maintenance of similar body
weight, rats given high GI food gained more body fat, and had less lean body
mass compared with the low-GI animals.
2. The high-GI group had greater increases
over time in the areas under the curve for blood glucose and plasma insulin, a
lower plasma adiponectin a,
much higher plasma triglyceride concentrations, and severe disruption of
islet-cell architecture.
DISCUSSION
1. In this study, GI had an independent
effect on body composition. Rats in the high-GI group required less food to
gain the same weight. This suggests that they had become more metabolically
efficient.
2. Mice on the high-GI diet had almost
twice the body fat and less lean body mass, although the mean body weight did
not differ between groups.
3. “We speculate that the striking chronic
. . . hyperinsulinemia induced by the high GI diet alters nutrient partitioning in favor of fat
deposition, shunting metabolic fuels from oxidation
in muscle to storage in fat.”
4. A high-GI diet has been linked to
increased risk of type 2 diabetes. Hyperglycemia appears to lower beta-cell
insulin content partly through the effects of chronic overstimulation.
CONCLUSION
In this study, a high-GI diet per se adversely
affected body composition and risk factors for type 2 diabetes in rats and
mice.
Lancet August 28, 2004; 364: 778-85 Original investigation, “Mechanisms of
Disease”, first author Dorota B Pawlak, Harvard Medical School, Boston, Mass.
Comment:
a
I believe we will be hearing more about adiponectin. It is a protein
hormone secreted exclusively by fat cells [adipocytes]. It is a “good guy”,
regulating the metabolism of lipids and glucose. It influences the response to
insulin, and also has anti-inflammatory effects on endothelial cells. High levels
are associated with a reduced risk of myocardial infarction. Low levels are
found in people with obesity, insulin resistance, type 2 diabetes, and coronary
artery disease. RTJ
An editorial in this issue of Lancet (pp 736-37) by
Mark E Daly, Royal Devon and Exeter Hospital, Exeter UK, comments and expands:
The term glycemic index (GI) describes how a food, meal or diet affects blood glucose
during the postprandial period. It is defined as the capacity of a food
containing 50 grams of available carbohydrate to raise blood glucose as
compared with 50 grams of glucose in normal, glucose-tolerant persons.
Differences between foods are likely to be exaggerated in individuals with any
degree of impairment of glucose tolerance.
A diet with a low GI can still be high in total
carbohydrates.
The “glycemic load” (GL) of a food is essentially the product of the glycemic index and
the amount of available carbohydrate in the food consumed. The GL is an attempt to give a quantitative
means of comparing meals, foods or diets for their glycemic effects. A diet
with a lower GL can be achieved by choosing foods with a low GI, or by reducing
the quantity of carbohydrate consumed, or both.
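Daly's definition of glycemic load reduces to simple arithmetic; a minimal sketch (the example GI values and serving sizes are illustrative assumptions, not figures from the editorial):

```python
# Glycemic load = (glycemic index / 100) * grams of available carbohydrate.
def glycemic_load(gi, carb_grams):
    return gi / 100 * carb_grams

# Same 30 g carbohydrate serving, high-GI vs low-GI food
# (GI values below are illustrative, not from the editorial):
print(glycemic_load(85, 30))  # high-GI food -> GL ~25.5
print(glycemic_load(40, 30))  # low-GI food  -> GL ~12.0
```

This makes Daly's point concrete: GL can be lowered either by choosing lower-GI foods (first argument) or by eating less carbohydrate (second argument), or both.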
Diets with low GI (and GL) have been associated with
improvements in insulin sensitivity. The GI of diets on a population level
inversely correlates with HDL cholesterol. The glycemic load of a diet predicts
both ischemic heart disease and type 2 diabetes in situations where major
contributors to the glycemic load have been breakfast cereals and potatoes.
There have been lingering doubts about the validity of
GI and GL studies in humans. Tightly controlled nutrition studies in humans
are notoriously difficult to conduct.
For those seeking low GI diets, implementation is
simpler than the vast tables of numerical values of individual foods suggest:
avoid “sugar” (sucrose and fructose), use breakfast cereals based on oats,
barley and bran (and without added sugar); use grainy breads made of whole
seeds; reduce the amount of potatoes; enjoy all types of nuts, fruits, and
vegetables (except potatoes), and eat plenty of salad vegetables with an oil
dressing.
Most refined foods (bread, refined cereal, potato, and
glucose and sucrose) have high GI. Most non-starchy vegetables, fruits,
legumes, and nuts have low GI.
===============================================================================
Significant
Rates Of Complications And Less Likely To Be Adequately Monitored.
8-5
PREVALENCE, CARE, AND OUTCOMES WITH DIET-CONTROLLED DIABETES IN GENERAL
PRACTICE
By tradition, a substantial number of people with type
2 diabetes (DM2) have been managed
without medication. They are usually offered dietary advice. Irrespective of
whether patients remember or follow the advice, they are referred to as being
managed by diet alone. Not using medication originated in the era when the aim
of treatment was to maintain short-term freedom from symptoms of hyperglycemia.
The use of diet-alone was supported by results of early studies that failed to
find a convincing association between glycemic control and complications.
Anecdotal evidence suggests that there is a continuing
belief in the existence of “mild diabetes”—a group of people at low risk of
complications for whom active therapeutic management is neither indicated nor
cost effective. Studies have shown, however, that diet alone does not result in
adequate glycemic control. It is likely that many such patients need
hypoglycemic medication.
It is now known that tight glycemic control reduces
microvascular complications.
This study aimed to establish the proportion of
patients with DM2 in general practice treated by diet alone. And to determine
levels of complications and quality of care received compared with patients on
medication.
Conclusion:
Patients treated by diet alone had significant rates of complications.
They were less likely to be adequately monitored.
STUDY
1. A cross-sectional study selected over
7800 patients with DM2 from 42 general practices
2. Determined the proportion of patients
treated with diet alone, the use of medication (other than antihyperglycemic),
and the rate of complications compared with patients on hypoglycemic drugs.
3. Measured quality of care by identifying
cut points for: HbA1c (7.4%); body mass index (30); BP over 145/85; cholesterol
(over 192 mg/dL); creatinine (more than 1.3 mg/dL).
4. Also retrieved data on peripheral
pulses, microalbuminuria, neuropathy, retinopathy, smoking, and smoking
cessation advice.
RESULTS
1. Overall, about 1/3 of patients were
managed with diet alone. But there was great variation between practices
(16% to 73%).
2. Diet-alone patients were much less
likely to have received monitoring for HbA1c, BP, lipids, smoking,
microalbuminuria, and foot pulses.
3. Patients on diet alone were more likely
to have high BP, and were less likely to receive antihypertension drugs. They
were 45% more likely to have a high cholesterol, and less likely to receive
lipid-controlling drugs.
4. Of those treated by diet-alone, 60% had
vascular complications; 20% had
diabetes-related eye disorders; 9% neuropathy; 9% renal complications. “These
rates are much higher than in the population without diabetes.”
DISCUSSION
1. Overall, almost 1/3 of DM2 patients in
general practice were being managed with diet alone. Anecdotally, many patients
on diet-alone do not take their diet seriously.
2. Routine monitoring of these patients was
much lower compared with patients receiving antihyperglycemic drugs. However,
there was a 4-fold inter-practice variation. Clinical decision-making is
inconsistent in this area. Clearly, there is considerable scope for
improvement.
3. “Our findings suggest that the
management strategy of using diet-only in type 2 diabetes is still very common
and varies substantially between practices.”
4. Although some individuals might be
effectively managed by diet alone, there is a case for better surveillance and
for more intensive therapy.
CONCLUSION
Patients with type 2 diabetes treated by diet-alone
had significant rates of complications and were less likely than those on
medication to be adequately monitored.
Lancet July 31, 2004; 364: 423-28 Original investigation, first author J
Hippisley-Cox, Division of General
Practice, University Park, Nottingham, UK
=========================================================================
Safely
And Effectively Reduced Risk Of First Cardiovascular Events In Patients With
DM2.
8-6
PRIMARY PREVENTION OF CARDIOVASCULAR DISEASE WITH ATORVASTATIN IN TYPE 2
DIABETES
Type 2 diabetes (DM2)
is associated with a 2- to 4-fold increase in the risk of coronary heart
disease (CHD) and stroke. Primary
prevention is essential.
Observational studies suggest that lipid control
should have an important place in primary prevention of cardiovascular disease (CVD). Although the LDL-cholesterol is
not usually greatly increased in patients with DM2, it is at least as strong a
predictor of CHD as in the general population—a 1.6-fold increase in risk of
CHD for each 1 mmol/L increment in LDL. LDL levels also predict risk of stroke.
Trials of patients with DM2 and established CHD have reported that cholesterol lowering
substantially reduces risk of subsequent events (secondary prevention). Studies of primary prevention of CHD by lipid lowering in patients with DM2
(who had no history of CHD) reported a 16% to 33% reduction in risk of events.
Current prescription rates for lipid lowering drugs in
patients with DM2 remain low, even in patients with established cardiovascular
disease (CVD).
This study assessed the effectiveness of a 10 mg dose
of atorvastatin (Lipitor) vs placebo in primary prevention of CVD
in patients with DM2. None had high concentrations of LDL-c. The trial was
stopped 2 years early because of demonstration of significant benefit.
Conclusion:
Atorvastatin 10 mg/d safely and effectively reduced risk of first
cardiovascular events in patients with DM2.
STUDY
1. Multicenter study randomized over 2800
patients with DM2 (mean age 62) to: 1) atorvastatin 10 mg daily, or 2) placebo.
2. None had documented history of CVD. All
had at least one risk factor:
retinopathy, macro- or micro-albuminuria, current smoking, or
hypertension. The estimated risk of a major cardiovascular event in these
patients was 10% over 4 years.
3. Mean baseline total cholesterol = 205 mg/dL; LDL-c 117; HDL-c
54; triglycerides 150. (Ie, baseline
levels were not high.)
4. Primary endpoint = time to a first
occurrence of: acute CHD event or stroke. Median duration of follow-up = 4
years.
RESULTS
1. Total cholesterol, triglycerides, and
LDL-c all were substantially reduced in the atorvastatin group. HDL was not significantly increased. Throughout the study the median LDL-c in the
atorvastatin group was typically about 77 mg/dL. At least 75% of the atorvastatin
group had an LDL-c of less than 95, and at least 25% had a concentration below
64.
2. At least one
major cardiovascular event occurred in 83 (of 1428) patients allocated to
atorvastatin, and in 127 (of 1409) in the placebo group (1.54 vs 2.46 per 100
patient-years). In absolute terms, fewer than one patient in 100 would benefit
over 1 year, and about 4 in 100 would benefit over 4 years. [NNT (for 4 years to benefit one) = 27]
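The bracketed NNT can be reproduced from the event rates; a sketch of the arithmetic, assuming the rates stay constant over the 4-year median follow-up:

```python
# Number needed to treat (NNT) from event rates per 100 patient-years,
# assuming constant rates over the 4-year median follow-up.
placebo_rate = 2.46        # events per 100 patient-years
atorvastatin_rate = 1.54   # events per 100 patient-years
years = 4

# Absolute risk reduction over 4 years, as a proportion:
arr = (placebo_rate - atorvastatin_rate) * years / 100   # 0.0368
nnt = 1 / arr

print(round(nnt))  # -> 27, matching the figure quoted in the abstract
```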
3. No excess of adverse events was noted.
There were no occurrences of rhabdomyolysis. About equal numbers in each group had
myalgia. Increase in alanine transferase occurred in 1% in both groups.
DISCUSSION
1. Over 4 years, atorvastatin led to
reductions in major cardiovascular events in patients with DM2 who had no
history of cardiovascular events and in whom LDL-c levels were not high
(primary prevention).
2. The benefit did not vary by
pretreatment cholesterol levels.
3. On-treatment LDL-c concentrations were
substantially lower than current target amounts in most treatment guidelines.
4. No safety concerns were raised.
5. Lipids should receive at least as much
attention as glycemia and blood pressure.
6. Most patients with DM2 have at least
one other risk factor for cardiovascular disease. Therefore, the results of
this trial can be generalized to most patients with DM2.
7. “The debate about whether all patients with DM2 warrant statin
therapy should now focus on whether any patients can reliably be identified as
being at sufficiently low risk for this safe and effective treatment to be
withheld.”
CONCLUSION
Atorvastatin 10 mg daily safely reduced risk of a
first major cardiovascular event, including stroke, in patients with DM2 whose
baseline LDL-c levels were not considered high. [NNT (for 4 years to
benefit one) = 27]
These data challenge the use of a particular threshold
level of LDL-c as the sole arbiter of which patients with DM2 should receive
statin therapy (as in the case of most current guidelines). Target levels of
LDL-c (100 mg/dL) could be lowered.
Lancet August 21, 2004; 364: 685-96 Original investigation, by the Collaborative
Atorvastatin Diabetes Study (CARDS), first author Helen M Colhoun, Royal Free
and University College Medical School, London.
Comment:
One drug store quotes a price of $2.60 for one 10 mg
Lipitor—over $900.00 a year.
An editorial in this issue of Lancet (pp 641-42) by
Abhimanyu Garg, University of Texas Southwestern Medical Center, Dallas issues
a cautionary note.
The conclusions of the study “seem too far-fetched in
view of the available clinical trials and epidemiological data”. He cites 4
large studies of lipid control which contained many patients with DM2. Two of
the four did not report a statistically significant reduction in coronary
disease. Two did.
Clinical trials enroll carefully selected patients.
The results cannot necessarily be extrapolated to primary care practice. Many
patients may be at low risk and the benefit/harm-cost ratio may be too low to
warrant long-term treatment. Some may be at higher risk of adverse effects from
statins. As always, individualization is required.
“For patients with type 2 diabetes at moderate to low
risk of coronary heart disease, maximal lowering of lipids with diet, exercise,
weight loss, and rigorous glycemic control must be attempted before considering
lipid-lowering drugs.” a
a The problem
is that these measures are rarely successful, especially long-term. RTJ
====================================================================
Benefit
In Reducing Symptoms May Be Great; Risks Very Small.
8-7
SHORT-TERM MENOPAUSAL HORMONE THERAPY FOR SYMPTOM RELIEF
Hormone therapy (HT)
provides the most effective relief of menopausal symptoms. Recently caution has
been recommended because of an associated increase in risks of coronary heart
disease (CHD), stroke, breast cancer
(BC), and pulmonary embolism (PE). However, the average risk is very
low—7 additional cases of CHD events, 8 more strokes, 8 more PEs, and 8 more
BCs per 10 000 women per year. (About 3 chances per 1000/y. Chances of death would be
much smaller.)
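The "3 chances per 1000 per year" figure is just the sum of the excess events listed; a sketch of the arithmetic:

```python
# Excess events per 10,000 women per year attributed to hormone therapy,
# as listed in the text: CHD 7, stroke 8, PE 8, breast cancer 8.
excess_per_10000 = 7 + 8 + 8 + 8       # 31 events per 10,000 women per year
excess_per_1000 = excess_per_10000 / 10

print(excess_per_1000)  # -> 3.1, i.e. about 3 chances per 1000 per year
```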
HT is associated with very small losses in life
expectancy. It is not appropriate for primary prevention of chronic disease. In
asymptomatic women, there is no place for its use.
The balance of benefits and harms will vary according
to baseline risk.
This study’s objective was to identify which women
benefit from short-term HT by exploring the trade-off between relief of
menopausal symptoms and risks of inducing disease.
Conclusion: In
women with mild-to-moderate menopausal symptoms, HT is associated with very
small losses in survival, but large gains in quality-adjusted life expectancy (QALE).
STUDY
1. Created a model simulating the effect
of short-term (2 years) of estrogen + progestin on life expectancy and QALE
among 50 year-old menopausal women with intact uteri. Findings were based on
the Women’s Health Initiative. QOL scores were derived from the literature.
2. It assumed that HT affected QOL only
during the perimenopause and reduced symptoms by 80%.
RESULTS
1. Women with mild or severe symptoms
gained 3 to 4 months and 7 to 8 months of QALE respectively.
2. Among women at low risk for CHD (no
risk factors), HT extended QALE even
if menopausal symptoms were mild (reducing quality of life by as little as 4%).
3. Among women at high CHD risk (3 risk
factors), HT extended QALE only if symptoms lowered quality of life by 12% or
more.
DISCUSSION
1. “Whether short-term [HT] is beneficial or
harmful depends primarily on a woman’s treatment goals, the severity of her
estrogen-responsive symptoms, and her cardiovascular disease risk.”
2. If the goal is to maximize QALE, HT can
be beneficial, especially among women at low CVD risk, even when menopausal
symptoms are mild. (But, only 15% of US women are at low risk—ie, no risk
factors.)
3. Women at high CVD risk can benefit if
they consider their symptom burden is high enough to justify the small risks of
treatment.
4. Each woman’s values must be
incorporated into the HT decision. Is she willing to accept a very small risk
of breast cancer; coronary event; stroke; pulmonary embolism, and even death in
order to be free of menopausal symptoms?
5. The decision to use HT depends on its
efficacy in alleviating symptoms. Not all symptoms during perimenopause are due
to declining estrogen levels. Not all respond to HT. There is a large placebo
effect. Clinical trials examining the impact of HT on symptoms suggest that a
1-month trial is sufficient to determine response. If response is not
satisfactory, HT may be withdrawn.
CONCLUSION
In women with menopausal symptoms, HT is associated
with very small chances of adverse effects but large gains in QALE. Women who
may benefit from HT can be identified by balancing the severity of their
symptoms with their risk of adverse effects.
Archives Int Med August 9/23, 2004; 164:
1634-40 Original investigation,
first author Nananda R Col, Brown Medical School, Providence, R.I.
===========================================================================
Appropriate
Use Requires A Balance
8-8
APPROPRIATE USE OF OPIOIDS FOR PERSISTENT NON-CANCER PAIN
Primary care clinicians have been torn between
opposing perspectives on use of opioids. Risks and benefits continue to be
debated. The debate is sometimes skewed by the rhetoric of groups with
antagonistic concerns. One group bemoans our culture’s “opiophobia” and promotes expanded access to
this therapy—the other condemns the use of long-term opioid treatment by
primary care clinicians who lack specific training in pain management or
addiction medicine.
The problem is compounded by state and federal
regulators whose objectives often seem uncoordinated and uninformed, if not
capricious.
More than 2 decades ago, the prevailing view of opioid
therapy was extremely negative. There was general fear of drug abuse, and the medical
community feared regulatory authorities.
Beginning in the mid-1980s, the consensus view of pain
specialists rapidly shifted. By 1997, opioid therapy for chronic pain was
described as an “extension of the basic principles of good medical practice”. Data
highlighted the variability of the population with chronic pain and showed
highly favorable outcomes in some patients who received opioid therapy. Many
patients with chronic pain who were treated with opioids for months or years
had constant pain relief, no significant toxicity, and stable or improved
function. Data confirmed that patients could handle these drugs responsibly,
without the problematic behaviors of abuse and addiction.
In 1995, a 12-hour controlled-release oxycodone (Oxycontin) was approved. Many
primary-care physicians adopted this treatment. “For those who interpreted a
rise in prescribing as a necessary step in the broader effort to address the
problem of unrelieved pain, the growth of opioid prescribing was a gratifying
trend.”
Unfortunately, increased medical use was associated
with heightened abuse. The burgeoning non-medical use of oxycodone and other
drugs, combined with media stories of abuse-related tragedies, seemed to
threaten medical practice with a regulatory backlash. Pain specialists
perceived that championing this therapy could not continue without a clear
focus on the risks of abuse, addiction in predisposed persons, and diversion to
an illicit market. This evolution
brought the concept of balance to the level of the individual practitioner.
Safe and effective prescribing requires skills in pharmacotherapy and in risk
assessment and management.
New guidelines have been presented by the WHO and by
pain societies, which despite limitations, contribute to an ongoing international
effort to promote appropriate use of opioid therapy of chronic pain. Current recommendations for appropriate use
for persistent non-cancer pain aim for a balance. The primary goal is comfort.
Opioid therapy is just one tool among many to manage pain.
The recommendations properly emphasize that tolerance
seems to be a minor problem, and that long-term use is safe enough to be
compatible with driving in most cases.
Lancet August 28, 2004; 364: 739-40 “Comment” by Russell K Portenoy, Beth Israel
Medical Center, New York
Comment:
A large volume of information can be accessed on the
internet:
www.painedu.org
Presents the Screener and Opioid
Assessment for Patients with Pain (SOAPP)
This is a tool to help clinicians determine how much
monitoring a patient on long-term opioid therapy might require.
www.painsociety.org
www.medsch.wisc.edu/painpolicy
www.painmed.org
www.britishpainsociety.org
================================================================================
8-9
TERMINAL SEDATION: An Acceptable Exit Strategy?
“Dying well” remains an elusive goal in America.
Comprehensive palliative care is gradually gaining acceptance as the standard
for alleviating suffering and promoting patient autonomy near the end of life.
At one end of the ethical spectrum, end-of-life care
includes withdrawal of life-sustaining treatment (now legally and ethically
accepted in the U.S.). This precedes up to 84% of hospital deaths. At the other extreme is voluntary euthanasia
(in which the physician ends a patient’s life at the patient’s request) a
practice illegal in the U.S., but not in the Netherlands.
In between is physician-assisted suicide (PAS). The
physician provides the means, usually a prescription for a barbiturate, for a competent, terminally ill patient to
take if he wishes to end his life.a Some states have legalized PAS (Oregon); some
have banned it (New York and Washington).
Another strategy is for the patient to exercise
control over dying by refusing food and drink. During the process, the
physician is expected to withhold artificial nutrition, and to alleviate the
resulting discomfort.
Finally, there is terminal sedation, in which the
physician sedates the patient until coma develops in order to alleviate intolerable
suffering refractory to conventional palliative measures. Artificial nutrition
and hydration are withheld.
Each of these raises ethical concerns. None is more controversial than terminal
sedation. It has been condemned by some as euthanasia in disguise.b Others, such
as the U.S. Supreme Court Justice O’Connor, have endorsed the practice arguing
that “a patient who is suffering from a terminal illness and who is
experiencing great pain has no legal barrier to obtaining medication from
qualified physicians to alleviate that suffering, even to the point of causing
unconsciousness and hastening death”.
A study in this issue of Annals from the Netherlands (pp 178-185; first author Judith A C
Rietjens, Erasmus MC, Rotterdam, Netherlands) reports that, of over 400
physicians interviewed, 52% had carried out terminal sedation, usually with a
benzodiazepine. Sedation was used for pain in only 50% of cases c; the
others were for agitation, dyspnea, and anxiety. In 59% of cases physicians obtained
consent from the patient, and in 93% involved families in the decision
making. Most of the patients who did not
participate directly were delirious or demented. This “leaves readers unable to
judge whether any patients received terminal sedation without either patient or
family consent”. The Dutch study made it clear that many physicians who used
terminal sedation were consciously seeking to end their patients’
lives—inducing coma and withholding life-sustaining treatments as a form of
“slow euthanasia” which they find more acceptable than direct euthanasia. Many Dutch physicians indicated that, in addition to
alleviating symptoms, they intended to hasten death. Most of the patients were
actively dying. In 40% of patients the physician estimated that the terminal
sedation did not appreciably shorten life, and one third believed it shortened
life by less than one week.
One of the major objections to terminal sedation is
that its intent may be to kill the patient in order to alleviate suffering. The
intent of palliative care, by contrast, is to relieve suffering, even if the
treatment, such as opioids, shortens life. “Intent matters”—in law and in
ethics. The rule of “double effect” states that foreseeable adverse
consequences of treatment (ie,
side-effects) are acceptable only if they are not intended.d A second objection is that terminal sedation
could take place without the patient’s consent, a process indistinguishable
from involuntary euthanasia.
In America, we worry that patients who lack access to
care, or whose values differ from those of their physicians, might be
euthanized without their consent. We also worry
that the rate of terminal sedation might be high in the U.S. because
physician-assisted suicide is largely unavailable.
“We need to control the use of terminal sedation by
developing and implementing practice guidelines.”e We must
confirm the diagnosis, consider alternative approaches, and obtain informed
consent.
Annals Internal Medicine August 3, 2004; 141:
236-37 Editorial by Muriel R
Gillick, Harvard/Pilgrim/Harvard Medical School, Boston, Mass.
Comment:
a My understanding is that the majority of
patients receiving a prescription for barbiturates never use it. They die naturally. But, in the process of
dying, the patient and the family are comforted that the option is available if
they so choose. b The defining difference is the length of time
the patient takes to die. I believe it is a form of euthanasia. (Although
beneficial at times.) It is possible, I believe, to sedate a patient enough to
relieve suffering without inducing deep coma, thus allowing the patient to
maintain, at times, some degree of communication with the family. c In those in whom pain was the reason, one could
substantially control pain in 90% of cases by state-of-the-art palliative
care. d
My problem with this is—how does one gauge intent? Who gauges intent? e Practice
guidelines may be very difficult to apply to an individual patient. Individual
judgment is still necessary. Each
physician must rely on his own convictions and conscience. RTJ
=============================================================================
“There
Is No Evidence That Enteral Feeding Tubes Benefit Patients With Dementia.”
8-10
USE OF RAPID-CYCLE QUALITY IMPROVEMENT METHODOLOGY TO REDUCE FEEDING
TUBES IN PATIENTS WITH ADVANCED DEMENTIA
There is no evidence that enteral feeding tubes
benefit patients with dementia. They carry risks of medical complications, and
compromise quality-of-life.
Feeding tubes are often placed in patients with
dementia who are hospitalized for an acute illness, often contrary to the
wishes of patients and families. A small, but important, number of patients
with late-stage dementia receive feeding tubes despite their explicit advance
directives stating their wish to forgo such treatment if they were to become
irreversibly ill and unable to make their own decisions.
A growing body of research over the past decade has
questioned the utility of placing feeding tubes in patients with advanced
dementia. There is no evidence that
feeding tubes in this population prevent aspiration, prolong life, improve
overall function, or reduce pressure sores. Feeding tubes may adversely affect
quality-of-life. Patients may require wrist restraints to prevent pulling on
the tube. They may develop cellulitis at the site, and be deprived of the
social interactions and pleasure surrounding meals.
In a large acute-care hospital in New York, physicians
were concerned about improving medical care for patients with dementia. They
undertook a quality improvement project to address the issue of feeding tube
placement in these patients. (Specifically, percutaneous endoscopic gastrostomy
and jejunostomy.) They aimed to identify the extent and nature of feeding tubes
in this population and to change medical practice so that patients with
dementia would receive medically appropriate treatment, consistent with their
wishes.
A chart review determined that feeding tubes were
placed in many patients with dementia. There was minimal documentation of
patients’ wishes regarding artificial nutrition and hydration, and few
instances of formal assessment of patients’ capacity for decision making.
Reversible causes for failure to eat were rarely addressed. The doctor’s
rationale for tube placement was documented infrequently.
The aim of this study was to describe the effect of
efforts of an interdisciplinary team focusing on an educational program to
change the staff’s approach to use of feeding tubes. After completion of the
program, use of feeding tubes declined dramatically. This prevented patients
from receiving futile treatment.
BMJ August 28, 2004; 329: 491-94 Original investigation, first author Carol
Monteleoni, Lenox Hill Hospital, New York.