By Wayne Myrvold

Big news of the day is the press conference that CERN held this morning, announcing strong evidence of something that has some of the right properties to be the Higgs boson.

Is today the day that will forever mark the “discovery” of the Higgs boson? We like nice, neat milestones like that, but things aren’t so simple. Finding the Higgs boson isn’t like looking under the couch to see whether your car keys are there. The data presented at today’s press conference involved analysis of approximately one quadrillion, that is, one million billion, or 1,000,000,000,000,000, proton collisions. The reason for this is that data from a small number of collisions could, as a statistical fluke, produce apparent patterns that don’t indicate anything interesting going on. If the pattern persists in a huge number of events, then we can be highly confident that it’s not a fluke. The strategy is akin to compensating for inaccuracy of measurements by repeating a measurement many times and averaging the results. Even if there is a lot of error in each individual measurement, if these errors are distributed symmetrically around the true value of the quantity to be measured, then the average of a large number of measurements will very probably be close to the true value.
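The averaging strategy can be illustrated with a short simulation. The "true value" and noise level below are invented for the illustration; nothing here resembles an actual CERN analysis, which is far more involved.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable

true_value = 125.0  # the quantity we want to measure (a made-up number)
noise = 5.0         # each individual measurement is quite inaccurate

def measure():
    # Error distributed symmetrically (here, normally) around the true value
    return random.gauss(true_value, noise)

one_shot = measure()
averaged = statistics.mean(measure() for _ in range(10_000))

print(abs(one_shot - true_value))  # a single measurement can be off by several units
print(abs(averaged - true_value))  # the average of many is very probably much closer
```

With 10,000 measurements, the standard error of the average shrinks by a factor of 100 relative to a single measurement, which is why piling up events washes out flukes.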

So, questions about the moment at which the Higgs boson gets discovered are misleading. Here’s a better picture: as data accumulates, we gradually become more certain that we’re seeing something with the properties that we expect a Higgs boson to have. Eventually, the evidence becomes good enough to convince even the most die-hard skeptics, provided that they’re willing to adjust their beliefs in light of the evidence. But initial skepticism comes in degrees, so there isn’t a precise point at which this happens.

We should get used to this way of thinking about science. Evidence comes in degrees, as does strength of belief. As evidence accumulates, reasonable doubt dissipates, but we never reach a moment at which we’re absolutely certain.

However, if you’re going to make an announcement, you have to choose when to make it and what you’re going to say, and so you have to set a threshold of certainty for telling the world that you’ve made a discovery. Given the gradual nature of the accumulation of evidence, there will have to be some arbitrariness about this. The convention adopted by particle physicists is a 5-sigma signal, meaning that the chance, on the supposition that there’s nothing there and we’re just seeing random fluctuation of the data, of getting indications at least this strong is about one in three and a half million. So, we’ve got very strong evidence that there’s a particle, decaying in the way we expect a Higgs boson to decay, in the mass range indicated by the experiments.
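The 5-sigma figure comes from a one-line calculation: the probability that a standard normal fluctuation lands at least 5 standard deviations above its mean (the one-sided tail, which is the particle physicists' convention) follows from the Gaussian error function.

```python
from math import erfc, sqrt

# One-sided tail probability of a standard normal distribution at 5 sigma:
# P(Z >= 5) = (1/2) * erfc(5 / sqrt(2))
p = 0.5 * erfc(5 / sqrt(2))
print(p)  # about 2.9e-7, i.e. roughly one chance in 3.5 million
```

This is the chance that pure background noise would mimic a signal at least this strong, not the chance that the Higgs boson doesn't exist; the two are often conflated in popular reporting.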

There’s an interesting upshot to all of this. We’re thinking about the choice of whether to announce a discovery as a decision. Those who study decision theory have long emphasized that, in coming to a decision, you ought to consider the consequences of your actions, and make evaluations of those consequences.

In this case, the CERN researchers have a choice: announce a discovery, or wait. They do this in a state of incomplete certainty about whether there’s actually a Higgs boson. So, in evaluating their choices, they have to consider the consequences of that choice if there is a Higgs boson, and if there isn’t. If they announce a discovery, and there is no Higgs boson, they’ve made a premature announcement (embarrassing). If there is one, they get credit for discovering it. If you’ve chosen a career in particle physics, this is a highly desirable outcome! If, on the other hand, they don’t announce a discovery, then they don’t run the risk of announcing a discovery of something that doesn’t exist, but they miss out on the chance of getting credit for discovery.

On this analysis, announcing a discovery is the right choice when you’re sure enough that the advantages of being right outweigh the risk of being wrong. This means that the threshold of certainty depends on what is at stake, and how you evaluate the outcomes.

There’s a framework for analyzing decisions like this. We represent your belief state by numerical degrees of belief in various states of the world, which we treat using the mathematics of probability. You attach values to various outcomes, indicating your relative preferences between the outcomes (these are called utilities, an ugly word, but we’re stuck with it). You evaluate each of your candidate actions by listing the possible consequences and taking a weighted average of the utilities of these consequences, where the weightings are your degrees of belief in those consequences. That is, you list the consequences, multiply the utility of each consequence by your degree of belief in this consequence, and add all these up. You choose the action that has the highest value for this weighted average. This is called maximizing expected utility (MEU).
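As a toy illustration of the MEU recipe applied to the announce-or-wait decision, here is a sketch in which every number is invented for the example; the degrees of belief and utilities are not anyone's actual assessments.

```python
# States of the world and a (made-up) degree of belief in each
belief = {"higgs_exists": 0.9999997, "no_higgs": 0.0000003}

# Made-up utilities for each (action, state) outcome
utility = {
    ("announce", "higgs_exists"): 100,      # credit for the discovery
    ("announce", "no_higgs"):     -10_000,  # embarrassing premature announcement
    ("wait",     "higgs_exists"): 0,        # miss out on credit, for now
    ("wait",     "no_higgs"):     0,        # no risk run, no reward gained
}

def expected_utility(action):
    # Weighted average: utility of each outcome, weighted by degree of belief
    return sum(belief[state] * utility[(action, state)] for state in belief)

best = max(["announce", "wait"], key=expected_utility)
print(best)  # prints "announce"
```

Notice how the threshold emerges: make the embarrassment penalty large enough, or the degree of belief small enough, and "wait" wins instead, which is exactly the sense in which the certainty threshold depends on what is at stake.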

This means that evaluation of outcomes plays a role in any decision about what to report. But how something is valued can vary from person to person. Does this inextricable role of values in decision-making pose a threat to the objectivity of science?

No. The traditional ideal of objectivity is: don’t let what you want to be true affect your judgment of how strongly the evidence supports a claim, or how credible you take that claim to be. Note that there are two ingredients in the MEU account of decisions: your degrees of belief in the consequences of your decisions, and your valuations of possible outcomes. The traditional ideal of objectivity can be seen as the injunction not to let your valuations of the consequences of a proposition being true or false affect your degree of belief in that proposition. To believe something because you want it to be true is known as the fallacy of wishful thinking, and it is, indeed, a fallacy.

These matters have been discussed at length by Heather Douglas in her book, Science, Policy, and the Value-Free Ideal. In it, Douglas distinguishes between direct and indirect roles for values in science. In a direct role, values would act as a surrogate for evidence, providing warrant or reasons to accept a claim. Douglas regards this role as illegitimate, and I do, too. An indirect role of values lies in deciding how certain one has to be that a claim is true in order to act as if it is true. This is a perfectly legitimate role, when making decisions regarding actions, such as what to announce to the public. Another legitimate role is, of course, in deciding what research to undertake. Research involves considerable investment of time on the part of the researchers, and, in some cases, of which the Large Hadron Collider is a prime example, considerable investment of money on the part of governments or other bodies funding the research. At the beginning of the research, one is uncertain about what the outcome will be. In my view, thinking of this as an exercise in maximizing expected utility can be a good way to analyze such decisions. But, if we do so, it is essential that there be a role for purely cognitive values, that is, valuing increased understanding of nature whether or not it has practical payoff. We learn something deep about the way the world works when we learn about the Higgs boson, whether or not there is any practical use for this knowledge. Cognitive values can, indeed, be incorporated within the MEU framework. For those interested, I’ve discussed this in my paper, “Epistemic Values and the Value of Learning,” forthcoming in the journal Synthese.