What Can Psychology Tell Us About The Controversy Over Prostate Cancer Screenings?

Image of prostate cells from a tissue culture under a microscope

Last year, the U.S. Preventive Services Task Force came out against the prostate-specific antigen (PSA) test. This test has been widely used for years as a way of screening male patients for prostate cancer, the most commonly diagnosed cancer in men. The Task Force’s decision stoked a huge controversy and generated outrage among the many doctors and cancer survivors who firmly believe that the test works. However, if the Task Force’s decision was based on sound science, why did it create such a political firestorm? According to a new paper published in the journal Psychological Science, the answer lies in human psychology [1].

First, research has consistently found that people are more persuaded by anecdotes than they are by statistics. Thus, when people hear a personal story about how a medical procedure benefited a specific individual, they become convinced of that procedure’s effectiveness, even in cases where the statistical data suggest a different conclusion. Similarly, research on the identifiable-victim effect has found that people are willing to spend more money to save the life of a victim who is identified with a name or photo than one who remains unidentified [2]. Taken together, these findings suggest that the Task Force’s decision was so controversial in part because many people personally know someone who has undergone a PSA test, received treatment, and survived, and they trust that personal experience more than the actual data.

In addition, the Task Force’s report was full of numeric data, which general audiences often have a hard time evaluating and understanding. For instance, when people look at medical data, what they usually pay most attention to is how many people are still alive after undergoing a certain treatment or procedure. While this information is certainly important, it cannot be the only thing considered; we also need to look at people who did not receive the treatment and see how many of them are still alive. In other words, we need to compare the treatment group to a control group. Let me explain: if the only thing people see is that the vast majority of patients who had a PSA test lived (which is true), they might mistakenly conclude that the test is highly effective and saves lives. In reality, what the Task Force found was that the number of men who survived was the same regardless of whether they had the PSA test or not! In other words, the data show no benefit at all from the screening.

The Task Force also found that among men who underwent PSA screenings, there were a large number of false-positive results that led to many unnecessary biopsies, treatments, and side effects. This is a whole other issue: not only is the PSA test ineffective at reducing deaths, it may actually harm men’s health through overtreatment. As you can see, all of the data need to be taken into account before you can draw conclusions about any medical test.
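To make the reasoning concrete, here is a small back-of-the-envelope sketch in Python. The numbers are entirely made up for illustration (they are not the actual trial figures from the Task Force’s report); the point is only to show why looking at survival in the screened group alone is misleading until you compare it to the control group.

```python
# Illustrative (made-up) numbers: two groups of 10,000 men,
# one screened with the PSA test and one not screened.
screened_total = 10_000
control_total = 10_000

# Suppose prostate-cancer deaths are the same in both groups,
# as the Task Force's review of the data indicated.
screened_deaths = 30
control_deaths = 30

screened_survival = (screened_total - screened_deaths) / screened_total
control_survival = (control_total - control_deaths) / control_total

# Looking only at the screened group, survival seems impressive...
print(f"Screened survival: {screened_survival:.1%}")

# ...but the comparison that actually matters shows no benefit.
print(f"Control survival:  {control_survival:.1%}")
print(f"Deaths prevented by screening: {control_deaths - screened_deaths}")
```

A 99.7% survival rate among screened men sounds like the test is working, but since the unscreened group shows the same rate, the screening itself prevented zero deaths in this toy example.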

The authors of this paper speculate that similar factors might also help to explain the controversy that ensued when the same Task Force recently revised their recommendations for how often women should get mammograms. So can anything be done to improve communication about medical data in the future? The authors suggest that one potential solution is for reports to emphasize pictorial displays instead of numeric data because people have a much easier time understanding illustrations than they do endless tables of numbers. In this case, a picture really is worth a thousand words.
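As a toy illustration of the kind of pictorial display the authors have in mind, here is a minimal text-based "icon array" in Python. The risk figure (3 deaths per 100 men) is a made-up number chosen purely for the sketch, not a statistic from the paper; the idea is simply that a grid of symbols, where each icon stands for one person, is easier to grasp at a glance than a table of percentages.

```python
# Toy icon array with a made-up risk: 3 deaths per 100 men.
deaths_per_100 = 3

# One symbol per man: "X" = died, "o" = survived.
icons = ["X" if i < deaths_per_100 else "o" for i in range(100)]

# Print as a 10 x 10 grid, the typical icon-array layout.
for row in range(10):
    print(" ".join(icons[row * 10:(row + 1) * 10]))
```

Real icon arrays in risk-communication research use graphical person icons rather than letters, but even this crude version conveys "3 out of 100" more intuitively than the number 3% sitting in a dense table.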

Want to learn more about Sex and Psychology? Click here for previous articles or follow the blog on Facebook (facebook.com/psychologyofsex), Twitter (@JustinLehmiller), or Reddit (reddit.com/r/psychologyofsex) to receive updates. 

[1] Arkes, H. R., & Gaissmaier, W. (2012). Psychological research and the prostate-cancer screening controversy. Psychological Science, 23, 547-553.

[2] Jenni, K. E., & Loewenstein, G. (1997). Explaining the identifiable victim effect. Journal of Risk and Uncertainty, 14, 235-257.

Image Source: 123rf.com