Cancer screening breakthroughs can sound like miracles, but bold headlines sometimes hide the most worrying details. A test can look “exciting” on paper and still fail many of the people who trust it.
The Galleri blood test, created by US company Grail, has generated a wave of optimism because it aims to detect many different cancers from a single blood draw, potentially catching disease earlier, when treatment is more likely to work. Researchers hope that, if it eventually becomes part of routine health checks, it could save lives by spotting cancers before symptoms appear. At first glance, it sounds like the kind of medical innovation everyone has been waiting for.
The test has attracted global attention after early trial data led some researchers to describe the results as “exciting,” a word that naturally sticks in headlines and public memory. Galleri is currently being trialed by the UK’s National Health Service (NHS), and promotional material highlights its ability to pick up signals from 50 different types of cancer, a claim that understandably fuels public hope and media buzz. But those big numbers do not tell the full story about how well the test actually works in practice.
According to a widely cited press statement, the test can detect signs associated with 50 cancers and, among people who receive a positive result, correctly identifies cancer in 62 percent of cases. In simple terms, if you test positive, there is roughly a six-in-ten chance that you truly have cancer, based on this trial. That 62 percent figure is known as the “positive predictive value” (PPV), and it has been heavily promoted as evidence that the test performs well.
There is another number that sounds very impressive at first: among people who did not have cancer in the study, the test correctly reported a negative result 99.6 percent of the time. This metric reflects “specificity,” which describes how good a test is at avoiding false alarms in people who are actually healthy. On the surface, PPV of 62 percent and specificity of 99.6 percent look like a major leap forward for cancer screening technology.
Headline statistics, however, can create a sense of security that does not hold up under closer inspection. The trial that produced these figures, called Pathfinder 2, involved 23,161 adults over the age of 50 from the US and Canada, none of whom had a prior cancer diagnosis at the time of enrollment. Out of all these participants, 216 received a positive Galleri result, and 133 of those were later confirmed to have cancer.
Those 133 true cancer cases among 216 positive tests are what produce the 62 percent PPV that has been so widely discussed. Put another way, around 38 percent of people who got a positive result did not actually have cancer, meaning more than a third of positives were false positives. For an individual, that could mean weeks or months of anxiety, extra scans, invasive procedures, and emotional stress—all triggered by a test that ultimately turns out to be wrong. Is that an acceptable trade-off if the test also finds cancers earlier for some people?
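To make that arithmetic concrete, here is a minimal sketch in Python that reproduces the headline figures from the published counts; the variable names are illustrative, not anything taken from the trial itself.

```python
# Positive predictive value (PPV) from the published Pathfinder 2
# counts: 216 positive results, 133 of them confirmed as cancer.
positive_results = 216
true_positives = 133
false_positives = positive_results - true_positives  # 83 people

ppv = true_positives / positive_results
print(f"PPV: {ppv:.1%}")  # 61.6%, reported as 62%

false_positive_share = false_positives / positive_results
print(f"Positives without cancer: {false_positive_share:.1%}")  # 38.4%
```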
Specificity, while high, also has consequences when scaled to a population level. A 99.6 percent specificity means that for every 1,000 people without cancer, about four would still receive a false positive result. That sounds tiny in isolation, but if everyone aged over 50 in the UK—more than 26 million people—were screened, that small percentage would translate into more than 100,000 people being told they might have cancer when they do not. Imagine the strain this could put on healthcare systems, from extra imaging and biopsies to the psychological burden on patients and families.
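As a back-of-the-envelope check, the population-scale arithmetic looks like this; the 26 million figure is the rough size of the UK’s over-50 population quoted above, and treating all of them as cancer-free and screened once is a simplifying assumption.

```python
# Scaling a 99.6% specificity to a one-off population-wide screen.
specificity = 0.996
uk_over_50 = 26_000_000  # rough figure quoted above; assumed all cancer-free

false_positive_rate = 1 - specificity  # 0.4%, i.e. about 4 per 1,000
expected_false_positives = false_positive_rate * uk_over_50
print(f"Expected false positives: {expected_false_positives:,.0f}")  # ~104,000
```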
Yet the number that has received far less attention—and arguably should be front and center—is sensitivity, which measures how often the test successfully catches cancer when it is actually present. For the Galleri test in this trial, sensitivity was 40.4 percent. That means the test missed close to three out of every five cancers that appeared during the following year. In plain language: a negative result from this test does not reliably mean you are cancer-free.
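The same back-of-the-envelope style shows what a 40.4 percent sensitivity implies; the totals below are a rough inference from the published numbers, not figures reported by the trial.

```python
# Sensitivity: how often the test flags cancer when cancer is present.
sensitivity = 0.404
miss_rate = 1 - sensitivity
print(f"Share of cancers missed: {miss_rate:.1%}")  # 59.6%, close to 3 in 5

# Rough inference only: 133 cancers were caught, so the published
# sensitivity implies roughly 133 / 0.404 = ~329 cancers in total,
# about 196 of which the test missed.
implied_total = round(133 / sensitivity)
print(f"Implied total: ~{implied_total}, missed: ~{implied_total - 133}")
```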
For people hoping Galleri might act as a universal early-warning system, that sensitivity figure is likely to feel disappointing. A test that misses most cancers raises a serious concern: patients could be falsely reassured, treating a negative result as the “all clear” and delaying the symptom reporting, follow-up checks, or regular screening they would otherwise pursue. A test that looks “good enough” in aggregate can still be dangerously misleading for individuals if its limitations are misunderstood.
Statisticians and trial experts emphasize another important nuance: numbers like PPV, specificity, and sensitivity are not fixed, unchanging properties of a test. They are estimates based on data, and they come with uncertainty and confidence intervals. In addition, performance in tightly managed clinical trials often looks better than in everyday medical practice, where patient populations, conditions, and implementation are more variable. In real-world use, Galleri’s accuracy could be lower than what the trial suggests.
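To illustrate what that uncertainty looks like, here is a rough 95 percent Wilson score interval for the 62 percent PPV, computed from the published counts. This is a sketch of the general statistical point, not an interval reported by the trial, whose formal analysis may differ.

```python
import math

# Approximate 95% Wilson score interval for the PPV estimate
# (133 true positives out of 216 positives). Illustrative only.
successes, n = 133, 216
p = successes / n
z = 1.96  # critical value for ~95% confidence

denom = 1 + z**2 / n
center = (p + z**2 / (2 * n)) / denom
half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))

low, high = center - half_width, center + half_width
print(f"PPV {p:.1%}, 95% CI roughly {low:.1%} to {high:.1%}")
# Roughly 55% to 68%: the headline 62% is an estimate, not a fixed
# property of the test.
```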
So where does all this leave the Galleri blood test in the bigger picture of cancer screening? A balanced view is that it could become a helpful tool in certain screening strategies, especially as part of a broader system that includes established tests, imaging, and clinical judgment. However, it would be risky for doctors or patients to treat a negative Galleri result as a definitive “no cancer here” verdict, given its relatively low sensitivity.
There are other practical challenges as well. Even in its current form, the test is expensive—priced at US$949 (about £723) in the United States—which raises questions about who can access it, how health services could afford to offer it at scale, and whether that investment is justified. Most importantly, there is still no solid evidence that rolling out this test widely actually reduces deaths from cancer, which is the ultimate measure that any screening program needs to meet.
So yes, the early data is encouraging in some respects, and the underlying technology is genuinely innovative. But perhaps the hype needs to be dialed down, and expectations reset: this test may represent progress, yet it is not a magic bullet that will, on its own, solve the global cancer burden. Here’s a controversial question to end on: should health systems and individuals embrace a costly test that misses the majority of cancers it aims to detect, simply because it feels like a step into the future?
What do you think: does the potential benefit of detecting some cancers earlier justify the anxiety, cost, and risk of false reassurance that come with a test like this? Should we move quickly to adopt such technologies, or wait until the evidence clearly shows they save lives rather than just generating headlines? And where do you stand: is the excitement around this test inspiring, premature, or somewhere in between?