The Myth of the Placebo Effect

In 1955 Dr. Henry Beecher, an anesthesiologist at Harvard Medical School, published a landmark paper in the Journal of the American Medical Association, “The Powerful Placebo.” The article is remarkable for the claims it made, for its wide influence, and for its profound flaws. Beecher had been studying placebos — pharmacologically inert treatments, such as pills with no active ingredient — and reviewed evidence from fifteen clinical trials in which the effectiveness of real treatments for subjective, patient-reported complaints, for instance pain, nausea, and anxiety, was tested by comparing them to placebos. Beecher concluded that, overall, in 35 percent of cases the condition was “satisfactorily relieved by a placebo,” which he took to be evidence of therapeutic effectiveness. He also discussed a few studies finding objective effects of placebos, such as the production of gastric acid and increased adrenal cortical activity. Because the effect seemed to occur more or less equally in a variety of conditions, Beecher inferred that “a fundamental mechanism in common is operating.”

It is difficult to overstate the impact Beecher’s paper has had. It has been cited close to a thousand times in scientific journals alone, and among researchers, physicians, and the general public it legitimized the idea that placebos are widely effective for therapy. This notion went largely unchallenged for forty years, and though in the last two decades there has been growing recognition that much of the evidence advanced for the placebo effect was tainted by errors and misunderstanding, the grip of the idea on the popular imagination seems unshakeable. A 2011 article on placebos in The New Yorker was tellingly titled “The Power of Nothing.”

Indeed, the paradoxical nature of the notion that an inert treatment could produce a therapeutic effect may help to explain its curious appeal. Since placebos are physiologically inert, any effect they might have would be through the patient’s mind. In the case of patient-reported outcomes, psychological explanations for placebo effects — for instance, that the experience of receiving treatment helps produce a sense of well-being, or that the expectation of improvement can encourage it — are indeed plausible. But the placebo effect has also often been touted as applying to objective outcomes, and interpretations of its mechanism have tended to focus on exotic notions of the mind’s ability to heal the body. There is something intriguing and comforting about the idea that the mere belief in the effectiveness of a treatment could make it effective. And the mystical aura surrounding the placebo effect may have indirectly contributed to the popularity of alternative medicines and therapies, which are often promoted as tapping into the body’s hidden potential to heal itself. But there is also something morally troubling about the use of placebos in therapeutic settings — after all, much of their effectiveness would seem to depend on physicians withholding information from their patients, or even lying to them.

The popular and technical literature about the placebo effect remains littered with errors and confusions, and the very volume of that literature seems strange since there is so little solid evidence demonstrating the effectiveness of placebos. In the wake of research showing that placebo effects are neither as large nor as widespread as previously believed, clearer thinking about placebos is long overdue.

Mind Over Matter

Medical treatment has always been about more than the attempt to cure illness. According to an old aphorism, often attributed to sixteenth-century French surgeon Ambroise Paré, medicine should aspire to “guérir parfois, soulager souvent, consoler toujours” (heal sometimes, relieve often, console always). Perhaps the notion of the placebo effect is popular in part because of the desire of both healers and patients faced with an intractable illness to at least do something. In an 1807 letter, Thomas Jefferson wrote that “if the appearance of doing something be necessary to keep alive the hope & spirits of the patient, it should be of the most innocent character. One of the most successful physicians I have ever known, has assured me that he used more of bread pills, drops of coloured water, & powders of hiccory ashes, than of all other medecines put together.” An 1811 medical dictionary defined placebo as “an epithet given to any medicine adapted more to please than to benefit the patient.”

The word placebo comes from the Latin for “I will please” and appears for instance in Jerome’s fourth-century Latin translation of Psalm 116:9 — “placebo Domino in regione vivorum” (“I will please the Lord in the land of the living”), a phrase that in the Middle Ages became part of a funeral rite. The earliest recorded medical use of the term may have been in 1772 by British physician William Cullen, who wrote of a particular treatment, “I own that I did not trust much to it, but I gave it because it is necessary to give a medicine, and as what I call a placebo.”

One possible explanation for why doctors through the ages have used placebos to treat patients is that, although placebos have no specific beneficial physiological effects, the patients’ belief in their efficacy may induce nonspecific effects through some kind of interaction between the mind and body, for instance a positive attitude that may help to strengthen the body’s resilience. The idea that the mind can induce physical effects has a long history. Michel de Montaigne in one of his 1580 Essays provides a vivid account of an apparent placebo effect:

A woman fancying she had swallowed a pin in a piece of bread, cried and lamented as though she had an intolerable pain in her throat, where she thought she felt it stick; but an ingenious fellow that was brought to her, seeing no outward tumour nor alteration, supposing it to be only a conceit taken at some crust of bread that had hurt her as it went down, caused her to vomit, and, unseen, threw a crooked pin into the basin, which the woman no sooner saw, but believing she had cast it up, she presently found herself eased of her pain.

It may well be that made-up treatments are the best cure for made-up afflictions, but Montaigne goes on to write that “all this may be attributed to the close affinity and relation betwixt the soul and the body intercommunicating their fortunes.”

The idea that the mind has powers to heal or harm the body has been prominent in various religious and philosophical traditions and today still has a widespread following, surely contributing to the ongoing popularity of the placebo effect. Ordinary life experience would seem to support this notion, providing ample evidence that our mental states can influence our bodies and sensations. Our emotions are generally accompanied by physical manifestations, such as flushed cheeks when angry, sweating when nervous, and trembling when scared. Under the right conditions, whether by accident or through trickery, people can have experiences that turn out to be illusory. Psychosomatic aspects of illness including the effects of stress have long been a focus of investigation. It is not surprising that efforts have been made to exploit such factors therapeutically, for instance through psychotherapy or techniques such as distraction to lessen pain.

It is well known that psychological phenomena like expectancy and classical conditioning can have physiological effects. In recent years, studies have investigated the neurological basis of such effects. In experimental settings, receiving a placebo has been linked with endogenous opioid production, and a recent study published in The Journal of Physiology found evidence that a placebo can induce changes in brain cells of patients with Parkinson’s disease who had previously received a real Parkinson’s drug. Although these studies hint at neurological mechanisms that may underlie psychological effects, they do not demonstrate clinical effectiveness of placebo, much less a more general placebo effect.

Many reports have suggested that there is an association between people’s attitudes and beliefs and their health outcomes. In her book Bright-sided (2009), Barbara Ehrenreich describes her experiences after being diagnosed with breast cancer and encountering the “implacably optimistic breast cancer culture.” She notes that

A positive outlook cannot cure cancer, but in the case of more common complaints, we tend to suspect that people who are melancholy, who complain a lot, or who ruminate obsessively about every fleeting symptom may in fact be making themselves sick.

In support of this suspicion, people often appeal to studies they believe demonstrate the mind’s power over the body. As Ehrenreich explains,

In contrast to the flimsy research linking attitude to cancer survival, there are scores of studies showing that happy or optimistic people are likely to be healthier than those who are sour-tempered and pessimistic. Most of these studies, however, only establish correlations and tell us nothing about causality: Are people healthy because they’re happy or happy because they’re healthy?

Ehrenreich does not consider another possibility, known as “confounding”: maybe people are happy and healthy due to a third factor. For example, it might be that exercise makes you happy and it also makes you healthy. If this were the case, then happy people who don’t exercise may not be particularly healthy. In any case, Ehrenreich’s point stands: associations between health outcomes and attitudes or beliefs do not establish causation. Nevertheless, the connection strikes many people as intuitive.
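
The logic of confounding is easy to demonstrate with a toy simulation, a minimal sketch in Python using invented numbers rather than any real data, in which exercise independently boosts both happiness and health while neither one has any direct effect on the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical model (made-up numbers): exercise independently boosts both
# mood and health, but neither one directly affects the other.
exercise = rng.normal(size=n)
happiness = 0.6 * exercise + rng.normal(size=n)
health = 0.6 * exercise + rng.normal(size=n)

# Happiness and health are correlated across the whole simulated population...
print(np.corrcoef(happiness, health)[0, 1])  # typically around 0.26

# ...but among people with roughly the same (near-average) amount of exercise,
# the association all but disappears.
near_average = np.abs(exercise) < 0.1
print(np.corrcoef(happiness[near_average], health[near_average])[0, 1])  # near 0
```

In this made-up population the two traits are clearly correlated, yet holding exercise roughly fixed makes the association all but vanish, which is exactly the possibility that correlational studies of attitude and health cannot rule out.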

Another type of mind-body interaction is simply the product of behavioral change. It has been suggested that the whole context of patients’ therapy — including their sense that they are taking action to improve their health, their relationships with medical practitioners, visits to clinics or hospitals, and therapeutic rituals — may encourage them to make other changes. Patients’ experiences can influence their behavior, which in turn may affect their medical outcomes. This type of mind-body interaction is quite lacking in mystery. The question of whether it manifests as a significant placebo effect in particular contexts is another matter. But if behavioral change can affect health outcomes, the focus should be on identifying these behaviors, not on placebos.

Credulity and Deceptions
[Image: Mesmer (The New York Public Library Digital Collections)]

Whether or not placebos affect the body through the mind, they are an essential tool for evaluating medical treatments; a placebo can provide a comparison case for a therapy being tested. Perhaps some of the earliest placebo-controlled experiments were performed in late-eighteenth-century France in order to evaluate mesmerism — Franz Anton Mesmer’s controversial “magnetic” therapies. Mesmer was said to have great success treating a wide variety of ailments using therapies based on his theory that a kind of magnetic fluid connected the planets, including Earth, and all living things, such that their motions influenced one another and could be manipulated with magnetic objects. Iron rods and even water and trees could be “magnetized” and then in turn be used to magnetize people, bringing about convulsions, fainting, and cures. After running afoul of the medical establishment in Vienna, Mesmer moved to Paris in 1778 and built a lucrative practice applying his treatments. As the popularity of mesmerism grew, so did the controversy surrounding it. In 1784, King Louis XVI ordered a commission to investigate the scientific validity of Mesmer’s practice. Among others, the commission included the great chemist Antoine Lavoisier and the renowned scientist and American ambassador Benjamin Franklin.

The commission conducted a number of experiments with blindfolded patients, some involving placebos. In one memorable case, placebo trees were used: the effect of non-magnetized trees was compared to that of a magnetized apricot tree in Ben Franklin’s garden. After several other experiments, using techniques of ritualized suggestion and expectation to induce in the subjects perceptions of physical sensations similar to those in mesmerism, the commission concluded in its report “that the imagination is the true cause of the effects attributed to the magnetism.” Summarizing their findings, the commissioners wrote that they had “demonstrated by decisive experiments, that the imagination without the magnetism produces convulsions, and that the magnetism without the imagination produces nothing,” so that “the existence of the fluid is absolutely destitute of proof.”

Mesmer’s technique was discredited, and in a letter to his grandson Franklin said of the commission’s report that “Some think it will put an End to Mesmerism. But there is a wonderful deal of Credulity in the World, and Deceptions as absurd, have supported themselves for Ages.”

But what to make of the apparent cures brought about by Mesmer’s techniques? The report remarked that it is nature that “cures the diseased” but that “sometimes she encounters obstacles,” which the physician, as “the minister of nature,” helps her to overcome. Franklin, in a different letter in which he commented on these investigations, wrote that there are “so many Disorders which cure themselves, and such a Disposition in Mankind to deceive themselves and one another on these Occasions.” This same point would be made again in the middle of the twentieth century when Beecher’s paper was published: the effect attributed to a placebo may really be nothing more than a change in the disease’s natural course.

Cause and (Placebo) Effect

It was no coincidence that Beecher’s article was published at the dawn of the randomized controlled trial (RCT), the study design now widely used in which participants are split up randomly into a group that receives the treatment being tested and a control group that does not, so that a fair comparison of their outcomes can be made. The development of the RCT, arising from key advances in scientific and statistical methodology, was a landmark achievement in medical research, and it has played a fundamental role in evaluating the effectiveness of medical treatments ever since. It also led to a flood of results from placebo-controlled trials. As in the investigations of mesmerism, the placebo plays a central role in ensuring that patients and physicians are kept blind to whether or not the active intervention is administered in addition to care that all participants in the study receive. Beecher wrote:

Preservation of sound judgment both in the laboratory and in the clinic requires the use of the “double blind” technique, where neither the subject nor the observer is aware of what agent was used or indeed when it was used. This latter requirement is made possible by the insertion of a placebo, also as an unknown, into the plan of study.

These were and still are standard elements of clinical trial methodology. But Beecher went further than advocating the use of placebos as comparators in randomized trials. He endorsed the “remarkable therapeutic power” of the placebo and even suggested that placebos could also have toxic effects, both subjective and objective. (This was later termed “nocebo,” which is not, as it may sound, a bit of wordplay, but rather comes from the Latin for “I shall harm.”)

The important place of placebos in RCTs seemed to give credibility to the idea of a placebo effect, and perhaps because of this scientific respectability, the evidence that Beecher offered for the effect was for a long time not thoroughly scrutinized. Instead of focusing on whether the effect existed, researchers focused largely on investigating the mechanisms by which it operated and the ways one might harness its power. One prominent line of inquiry was based on the observation that some people appear to respond to a placebo whereas others do not. While this could have been taken as evidence that the placebo effect is not a general phenomenon, it was instead suggested that people could be divided into placebo “responders” and “nonresponders,” and hence the task at hand was to determine how and why these groups differed. The investigators into Mesmer’s therapy techniques had already observed that the effects of the treatment sometimes coincided with certain features of the people receiving it. For instance, in one case the investigators reported being “astonished that three subjects of the lower class should be the only ones who felt any thing from the operation, while those of a more elevated rank, of more enlightened understandings, and better qualified to describe their sensations, have felt nothing.”

For decades following the appearance of Beecher’s paper, the question of how and on whom placebos are effective mostly supplanted the question of whether the effect exists, although the two questions often get conflated: answers about the supposed mechanism of the placebo effect are taken as evidence for its existence. An April 2015 article in The Atlantic explained that “the first real, physical proof of the placebo effect came in 1978,” referring to a study published in The Lancet, “The Mechanism of Placebo Analgesia.” But this study did not provide evidence for the existence of a placebo effect. Rather, it examined a possible mechanism — the role of endorphins — in people who had received a placebo after oral surgery and reported a reduction in pain. Taking for granted that the placebo itself was effective, the study concluded that “the analgesic effect of placebo is based on the action of endorphins.”

In an extensive 1962 review of research on the placebo effect published in the Journal of Chronic Diseases, Robert Liberman did note some important limitations of the research. In discussing Beecher’s paper, he pointed out that “Natural remissions of pain also occur and should not be confused with drug or placebo effects.” He also made the important observation that

no experiments on the placebo response itself have included control groups that receive no treatment whatever. This is a serious flaw in current medical research design and should be corrected if future results are to be validly interpretable.

Liberman believed that the placebo effect itself was real, and his paper endorsed the idea, including the claim that “Placebos can ‘produce’ objective physical changes also.” But he was aware of some of the shortcomings of research on the placebo effect. Unfortunately, his words of caution were largely ignored.

“Gross Exaggerations”

One source of evidence seems directly to support the placebo effect: in many randomized placebo-controlled clinical trials, placebo recipients tend to experience improvements in their conditions. Sometimes the improvement is substantial. At first sight, this might suggest that the placebo is responsible. But this is not necessarily so.

Consider osteoarthritis of the knee. Until recently, one common treatment for this painful and debilitating condition was arthroscopic surgery. However, a study published in the New England Journal of Medicine in 2002 showed that patients who received either of two different versions of this surgery showed no greater improvement than patients who received “sham surgery,” a type of placebo. The sham surgery involved incisions being made but without removing debris or smoothing joint surfaces. The study authors noted that these findings raise the question of whether “the billions of dollars spent on such procedures annually might be put to better use.” Indeed, a subsequent study in 2008 showed similar findings, and a 2009 recommendation paper stated that “For most patients with osteoarthritis of the knee, arthroscopic surgery offers little benefit.” This is an important result: a widely used treatment has been shown to be ineffective.

But an article in Scientific American expressed the result of the 2002 study in a different way: “Surprisingly, sham surgery seems to alleviate painful symptoms just as effectively as the real operation does.” This is a very subtle misinterpretation of the study’s findings. The study does not provide any information on how effectively sham surgery alleviates painful symptoms. The sham surgery was only used as a comparator so that the effectiveness of the real operation could be determined. What the Scientific American article was presumably referring to was the fact that patients generally reported modest improvement in pain following surgery, whether real or sham. This is an interesting observation, but it is not necessarily evidence of a placebo effect. The study authors commented that they had demonstrated “the great potential for a placebo effect with surgery, although it is unclear whether this effect is due solely to the natural history of the condition or whether there is some independent effect.” In clinical trials the placebo is used to examine whether treatments are actually effective, yet ironically when no effect is found — when treatment and placebo yield similar results — people will sometimes conclude that the placebo itself is effective.

As the authors of the study noted — echoing the earlier comments by Liberman and, nearly two centuries before him, Benjamin Franklin — one reason patients in the placebo group may improve is simply that, on average, many conditions show some degree of improvement over time.

In the 1997 paper “The Powerful Placebo Effect: Fact or Fiction?,” two German researchers, Gunver Kienle and Helmut Kiene, provided a detailed list of “factors that can create false impressions of placebo effects,” with particular attention to Beecher’s paper. One of these factors was spontaneous improvement, which they identified as a “major factor” in ten of Beecher’s fifteen trials. For example,

In a placebo-controlled drug trial on acute common cold, described as mild and of short duration, 35% of the patients receiving placebos felt better within 6 days (2 days after the onset of placebo administration). Beecher interpreted these improvements as an effect of the placebo administration. However, he did not consider that many patients with a mild common cold improve spontaneously within 6 days.
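
The kind of mistake Kienle and Kiene describe is easy to reproduce in a toy simulation, a minimal sketch in Python with an invented natural-recovery rate standing in for the actual trial data, in which a mild cold clears up on its own in about a third of patients no matter what they take:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical natural history (invented rate): a mild cold clears up within
# six days in about 35% of patients, no matter what they take.
recovers_naturally = rng.random(n) < 0.35

# Every simulated patient receives a placebo, and by construction it does
# nothing: the only "improvement" is spontaneous recovery.
improved_on_placebo = recovers_naturally

print(f"Improved while taking placebo: {improved_on_placebo.mean():.0%}")
# Without a no-treatment group to compare against, this figure is easily
# misread as a therapeutic effect of the placebo itself.
```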

Erroneous conclusions about the presence of a placebo effect are often due to a confusion between correlation and causation. Placebo use may be associated with improvement even if the placebo is not the cause, but attributing any improvement in the placebo group to a placebo effect is to fall into the trap known as post hoc ergo propter hoc (“after this therefore because of this”). Spontaneous improvement is just one of several situations where such errors are easily made. (For more on this subject, see my article “Correlation, Causation, and Confusion” in the Summer/Fall 2014 issue of this journal.)

Another situation listed by Kienle and Kiene is fluctuation of symptoms, common in chronic diseases. In studies of such conditions, the rate of deterioration is sometimes ignored; instead the rate of improvement is reported and identified as a placebo effect. This was the case in Beecher’s reporting of several trials. As Kienle and Kiene wrote:

This is a very common mistake also in other literature about placebos: A 20% placebo effect is claimed for a placebo-controlled drug trial on patients with angina pectoris. However, in the same trial, 72% of the placebo-treated patients deteriorated.

A subtle but significant phenomenon known as “regression to the mean” can also produce improvements that are often mistaken for a placebo effect. Consider how patients are recruited into clinical trials. To be eligible to participate, patients must have a certain severity of illness. In the 2002 osteoarthritis study, for example, patients had to have moderate knee pain or worse — at least a 4 on a scale from 0 to 10. When pain is assessed a second time, there tends to be a regression toward the mean, which is to say a less extreme measurement. Why does this happen? One reason is that, as already noted, many chronic illnesses show fluctuations in severity. At the time of recruitment into a study, a patient’s illness may be near its worst; when measured again, it is likely to have diminished. Another reason relates to the fact that no measurement is entirely free of error. (And self-reported pain scales, being wholly subjective, may raise particular problems in this regard.) Some patients may meet the eligibility criteria because of a spuriously elevated initial measurement; subsequent measurements are likely to be lower. This point was clearly articulated and demonstrated in a 1983 paper published by Clement McDonald and coauthors in the journal Statistics in Medicine, arguing that “most improvements attributed to the placebo effect are actually instances of statistical regression.” But the paper received little attention, perhaps in part because regression to the mean is notoriously difficult to understand.
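
A toy simulation makes the mechanism concrete. The following minimal sketch in Python uses invented numbers, not data from any actual trial: each simulated patient has a stable underlying pain level, every 0-to-10 rating fluctuates around it, and only patients whose first rating is at least 4 are enrolled.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical chronic-pain population (invented numbers): each patient has a
# stable underlying pain level, and every 0-to-10 rating fluctuates around it
# because of symptom swings and measurement error.
true_pain = np.clip(rng.normal(3.0, 1.5, n), 0, 10)
rating_at_screening = np.clip(true_pain + rng.normal(0, 1.5, n), 0, 10)
rating_at_followup = np.clip(true_pain + rng.normal(0, 1.5, n), 0, 10)

# Enroll only patients whose screening rating meets the eligibility threshold
# of at least 4 out of 10; no treatment of any kind is given.
enrolled = rating_at_screening >= 4

print(f"Mean pain at enrollment:     {rating_at_screening[enrolled].mean():.2f}")
print(f"Mean pain at re-measurement: {rating_at_followup[enrolled].mean():.2f}")
```

Even though nothing at all is done to these simulated patients, their average score falls between enrollment and re-measurement, the pattern that McDonald and his coauthors argued is so often mislabeled a placebo effect.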

Kienle and Kiene noted that in some studies patients in the placebo group also received other treatments, which could plausibly explain their improvement. For example, the placebo group in an angina study listed in Beecher’s original paper also received nitrates.

In all, Kienle and Kiene listed a total of ten factors that may give a false impression of a placebo effect, including misquotation and uncritical reporting of anecdotes, both of which are remarkably common problems in this literature. Of the fifteen trials cited by Beecher, fourteen provided sufficient information for Kienle and Kiene to review. They concluded that “in all of these trials the reported outcome in the placebo groups can be fully, plausibly, and easily explained without presuming any therapeutic placebo effect.” They also reviewed an additional 800 articles on placebos and reported that they could not find “any reliable demonstration of the existence of placebo effects,” concluding that “the extent and frequency of placebo effects as published in most of the literature are gross exaggerations.”

“Conceptual and Methodological Confusion”

What kind of study could produce reliable evidence of a placebo effect? Echoing Liberman’s comment, McDonald and his coauthors in their 1983 paper noted that “conclusive proof of a causal role of placebo treatment requires a controlled trial comparing placebo-treated with non-treated patients.” Attempting to draw conclusions about placebo effects without an untreated control group is what Danish researcher Asbjørn Hróbjartsson calls “the classic methodological error” in this field.

In fact, a number of randomized trials with a placebo group and a no-treatment control group have been carried out, and in 2001 Hróbjartsson together with another Danish researcher, Peter Gøtzsche, published a review of 114 such studies, covering a wide range of clinical conditions. Some of the studies measured objective outcomes, such as laboratory data, while other studies measured subjective patient-reported outcomes, such as pain. The authors did not find a statistically significant difference between the placebo and no-treatment groups, except in studies of pain treatment and in other studies of subjective outcomes measured on a continuous scale, such as anxiety and nausea.

The authors have since published two updated analyses, with similar results. This is what they concluded in their most recent review, the 2010 article “Placebo interventions for all clinical conditions”:

We did not find that placebo interventions have important clinical effects in general. However, in certain settings placebo interventions can influence patient-reported outcomes, especially pain and nausea, though it is difficult to distinguish patient-reported effects of placebo from biased reporting.

The biased reporting they refer to stems from a fundamental limitation in these randomized trials: they cannot be double-blind. Patients receiving no treatment will necessarily be aware that they are receiving no treatment. For self-reported outcomes this increases the risk of systematic error, or bias. Pain is an especially problematic outcome in this regard because it is strongly influenced by factors such as emotional state and anxiety. Patients receiving no treatment may feel disheartened and report less improvement. And in some situations patients receiving no treatment may seek out alternative treatments, thereby biasing results. Patients who know they are receiving treatment, whether or not it is in fact a placebo, may feel encouraged and report greater improvement. They may even do so in a conscious or subconscious attempt to please investigators. Also, if patients are not blind to their treatment group, it may be difficult to maintain blinding of the personnel responsible for assessing outcomes, introducing a further risk of bias. For these reasons, even with an appropriately designed study, rigorous estimation of placebo effects is challenging. In a 2011 paper, Hróbjartsson and coauthors commented that “randomization to placebo and no-treatment is the best research design we have in estimating effects of placebo” but that this “remains an approximate and fairly crude method.”

Some observers have taken the fact that a mild placebo effect has been demonstrated using this method, in which patients cannot be kept blind, to mean that placebo administration need not be deceptive. But it is somewhat difficult to see how this could be put into therapeutic practice. In a 2010 study called “Placebos without deception,” whose lead author Ted J. Kaptchuk of Harvard Medical School is a prominent supporter of the idea of placebo effects, patients were told that

1) the placebo effect is powerful, 2) the body can automatically respond to taking placebo pills like Pavlov’s dogs who salivated when they heard a bell, 3) a positive attitude helps but is not necessary, and 4) taking the pills faithfully is critical.

Even in this “non-deceptive” administration of placebo, it seems that patients have been given some questionable information.

Studying the placebo effect in randomized trials with a no-treatment group also ought to help us to be more precise in defining it and to reject the confusing definitions that often complicate the discussion. According to many definitions, whether explicit or implicit, the placebo effect is the change following receipt of a placebo, which is also, and somewhat more informatively, referred to as “placebo response.” (Even this term invites the false idea that whatever change follows receipt of placebo is a response to it.) The term “placebo effect” has been used so vaguely and variously that in their 2010 review Hróbjartsson and Gøtzsche remark that

This term does not only imply the effect of a placebo intervention as compared with a no-treatment group, but is also used to describe various other aspects of the patient-provider interaction, such as psychologically-mediated effects in general, the effect of the patient-provider interaction, the effect of suggestion, the effect of expectancies, and the effect of patients’ experience of meaning.

In a 2002 article on the challenges of estimating placebo effects, Hróbjartsson writes that

Generally the conceptual and methodological confusion in the field of placebo is of such a magnitude that references to placebo effects are incomprehensible without further clarification. It might be time to stop using the term placebo effect and instead specify which kind of intervention one is referring to, and how its effect was measured.

We Want to Believe

There is a remarkable gap between the slim scientific evidence for a general placebo effect and conventional wisdom on the subject. Belief in a ubiquitous and powerful placebo effect seems unshakeable, reinforced by hundreds of scientific papers that have been written about the topic and a steady stream of credulous media reports, such as a 2009 Wired article claiming that “dummy capsules can kick-start the body’s recovery engine.”

Among non-specialists, the placebo effect confirms the intuition that state of mind can influence physical well-being, that the world is more mysterious than we know, and that the knowledge of experts is not as complete as they might have us believe. Among the scientifically literate, including science journalists and scientists themselves, the idea of the placebo effect suggests intriguing neurological-physiological mechanisms. For the popular media, it is a perennial favorite that combines the prestige and authority of science with a suggestion of mysterious forces.

Ironically, while the placebo lies at the heart of rigorous scientific evaluation of the efficacy of treatments, science journalists are often surprisingly willing to accept anecdotes about the placebo effect at face value and tend to make simple errors in selecting and interpreting evidence. New studies that apparently support the placebo effect are often accepted uncritically, even by scientists.

Uncritical acceptance of the placebo effect may be harmful in several ways. It may encourage magical thinking and make people more susceptible to quack therapies. It may also distract attention from the refinement of effective therapies and the development of novel ones. And if health care providers make clinical use of placebos, they may find themselves engaging in deceptive practices, possibly damaging their relationships with patients.

Hype about the “amazing” placebo effect says more about the cultural appeal of the idea than it does about solid evidence supporting it. This is a troubling sign that an idea that resonates with experience and cultural meaning may be alluring enough to evade scrutiny, even among scientists. The best evidence indicates that the placebo effect is not a general phenomenon. But at some level it seems that evidence is beside the point; we simply want to believe. Perhaps belief in the placebo effect is itself the ultimate placebo effect.

Nick Barrowman, “The Myth of the Placebo Effect,” The New Atlantis, Number 48, Winter 2016, pp. 46–59.