Science is more of an art than a science.
Creating knowledge is hard. It's about as hard as creating art. One difference is that we rarely see artists' work in its incredibly messy stages of development, and we rarely expect artists' work to inform public policy.
T. S. Eliot's The Waste Land was initially entitled He Do the Police in Different Voices--a reference to a line from a character in Dickens' Our Mutual Friend who reads the newspaper to his wife and does the police in different voices, as well as an allusion to the different narrative voices that populate the poem--and the early drafts were nearly twice as long as what we got after Ezra Pound had his way with it. I don't have much use for Pound as a poet, but as an editor we owe him a great note of thanks, because a lot of what Eliot wrote... wasn't very good.
Now imagine if Eliot had done that in public, rushing the first draft out to press and then publishing a new draft after every edit. People who never wrote a line of poetry in their lives would be scoffing at how Eliot wasn't much of an artist. He changes his mind every few weeks, moves whole sections of the poem around, cuts out large swathes of it, changes the rhyme scheme in other parts... it's as if he's figuring it out as he goes along!
We have a tendency as a species to over-emphasize ends relative to means. This is understandable: we want to read the poem, see the painting, watch the movie, read the book, have the answers on wearing masks, and we want them to be definite, finished, complete and canonical things, NOW!
We can't have that.
Knowledge, like art, is never quite done. It can't be. That is its nature. A work of art is an exploration of a theme, an idea, an image, a feeling, a moment, a life, a death, a place, a time, a something... and yet the one concrete work we have is just a single possibility among a more-or-less narrow but pretty much infinite distribution of possibilities. Would changing a single brush stroke change the Mona Lisa, or the Night Watch? How about two? Take one of Bruegel the Elder's rich scenes of outdoor life and add a branch to a tree, a minor character in the background... it's still a work the artist could have created when they made the one we have, even though it's not the same painting.
Within this universe of exploration and open possibilities there remains the opportunity to change, to grow, to learn. With artistic "perfection" there is none of that.
Once upon a time ideas of artistic perfection were a big deal. Beauty was Truth and all that. Fortunately they don't have so much currency these days, maybe because of the relative democratization of artistic production in the 20th century: it's hard to sell ideas of artistic absolutism when you're talking to a public that includes a fair sprinkling of serious amateur artists and a growing number of professionals, all of whom know their own work, and their own methods of working, are simply not reflected in notions of a final perfection.
Likewise, for thousands of years, philosophers thought that "certainty" was the most important feature of knowledge.
In the last decade of the 20th century it became apparent that this is precisely wrong: knowledge is by nature uncertain, and understanding and managing uncertainty is the goal of anyone who creates knowledge. This is sometimes called the "Bayesian Revolution", although not everyone who was part of it would necessarily describe it the way I do.
Before the Bayesian Revolution we were taught that through some as-yet-unknown alchemy we could transmute uncertain observations to certain facts and ideal theories. The quantum and relativistic revolutions of the early 20th century upset that model a bit, but after a little mid-century angst things settled back down pretty nicely.
After the Bayesian Revolution we understood that knowledge was created by updating uncertain beliefs in the face of new evidence to create new beliefs. Sometimes they're less uncertain. Sometimes they're more uncertain. Knowledge is like that: sometimes when we learn new facts we become a lot less sure about what we believe than we were before.
Let's say we have three ideas: masks do nothing, masks slow the spread of the disease, masks increase the spread of the disease. We start with the idea that masks do nothing is the most plausible, masks slow the spread is next, and masks increase the spread is a fairly distant third. We can give each degree of belief a number between 0 and 1 (exclusive... as we'll see, neither 0 nor 1, which represent certainty, is allowed) and because we have an exhaustive set of beliefs about masks the numbers have to add up to 1, because one or another of those ideas has to be the case.
So let's say "masks do nothing much" gets 0.6, "masks slow things down" gets 0.3, and "masks make things worse" gets 0.1. These are what are called our "prior" beliefs: what we know going into the game. Notice that none of these are anything like "100% certain for sure", which is what a lot of people seem to want from experts. Sorry: you can't have it. Doesn't exist. Ask for something easy, like an honest politician or a generous capitalist.
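The setup above can be sketched in a few lines of code (a sketch in Python; the hypothesis names are just labels for the three ideas, and the numbers are the ones from the example):

```python
# Three mutually exclusive, exhaustive hypotheses about masks,
# each with a prior degree of belief strictly between 0 and 1.
priors = {
    "masks do nothing": 0.6,
    "masks slow the spread": 0.3,
    "masks increase the spread": 0.1,
}

# Because the hypotheses are exhaustive, the beliefs must sum to 1.
assert abs(sum(priors.values()) - 1.0) < 1e-9

# And because certainty is off the table, no belief is exactly 0 or 1.
assert all(0 < p < 1 for p in priors.values())
```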
Not everyone is going to have the same beliefs to start out with, which is one of the fun things about being a Bayesian: disagreement is no big deal, because Bayes' rule, which is what we use to update our beliefs, is mathematically provable to be the only way of updating that ensures we'll eventually converge on very similar beliefs even if we start with quite different ones and come across new evidence at different times in a different order. This might be wrong, of course, and a Bayesian would be open to evidence that it was, but so far I've not seen anything along those lines other than people who think they are clever saying, "How can you be 100% certain for sure Bayes' rule is all that?" as if literally everyone who has thought at all about this stuff hadn't had that thought already.
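Here's a toy illustration of that convergence (a sketch, not a proof; the priors and likelihoods are made-up numbers): two people start with quite different beliefs, see the same stream of evidence, and end up in nearly the same place.

```python
def update(beliefs, likelihoods):
    """One step of Bayes' rule: multiply each belief by how likely
    the new evidence is under that hypothesis, then renormalize."""
    posterior = [b * l for b, l in zip(beliefs, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypotheses: [masks do nothing, masks help, masks make things worse]
alice = [0.6, 0.3, 0.1]   # skeptical that masks matter
bob   = [0.2, 0.2, 0.6]   # suspects masks make things worse

# Each observation (lower transmission where masks are common) is
# most likely if masks help, less likely under the other two ideas.
likelihoods = [0.5, 0.9, 0.3]

for _ in range(20):
    alice = update(alice, likelihoods)
    bob = update(bob, likelihoods)

# After the same stream of evidence, their beliefs have
# nearly converged on "masks help".
assert abs(alice[1] - bob[1]) < 0.001
assert alice[1] > 0.99 and bob[1] > 0.99
```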
Furthermore, Bayesianism actually shows the power of diversity. In a group of people some are going to have better priors, and that will get them to the more plausible ideas earlier than others, so if we maintain a dialogue with people we disagree with, we have the chance to learn from them (and they from us) which creates the possibility of everyone being able to learn at the fastest possible rate instead of being stuck at the rate our own priors would have given us. A group of people communicating like this is sometimes called "the scientific community", but there's no particular reason it has to be restricted to what we traditionally think of as "science".
So far as we can tell, Bayesianism is the only way of knowing, and as such it works everywhere--I've read excellent Bayesian work in history and English, where a degree of formal inference replaces the airy authorial "it seems obvious from this...", which is why some people aren't wild about it. Anti-Bayesian philosophers are apt to call this attitude of Bayesian universalism "scientism", but they're wrong.
So we were talking about masks. Notice that although I've given "masks do nothing" the lion's share of the plausibility at 0.6, it still isn't that close to 1... but we still have to decide just one thing: wear masks or not? What will it be?
Given those plausibilities, "not" would be the way I'd go, and this was the direction most public health officials took up to a month or so ago. But look at what they've done: they've taken a distribution of beliefs, "0.3 better, 0.6 nothing, 0.1 worse" and turned it into a single, definite, policy statement: don't wear masks.
That is what we pay experts for. Creating knowledge is hard. Creating policy from knowledge is even harder, because we have to move from an uncertain distribution of plausibilities to one definite, crisp, simple, actionable "do this, don't do that" directive.
A lot of my professional work has been acting as an interface between scientists and engineers, and this is the difference between them: engineers need single numbers, scientists work with distributions. Someone on the engineering side of my team once came to me and said, "I've talked to one of the science guys for a couple of days now, and I know a whole lot more about this problem than I did, but I still don't know what this number should be."
The scientist had the knowledge, the engineer had the need, but in between there lies judgement. I went and talked to the scientist and came back and told the engineer, "Set the value to three." Why? Because based on what the scientist said it was a value that probably wasn't horribly wrong, which is often a better approach than trying to find a value that's right.
Principles like this--the physician's "first, do no harm" and the like--are what we pay experts to apply to uncertain, complex, knowledge so they can generate actionable, relatively simple, guidance. Because while knowledge is by nature uncertain, action is by nature definite: you do what you do, and nothing else. You can't 90% wear a mask and 10% not wear it at the same time and in the same respect. You're either "wearing it" within the bounds of whatever the technical definition of "wearing" is, or not.
So a month ago, based on the best data we had and applying principles of public health policy that have been refined over a century, the best advice was "wearing masks is not indicated."
But evidence accumulates, and when evidence accumulates Bayesians update their beliefs. This is awesome, btw. The alternative is to not update our beliefs in the face of new evidence. We know what that looks like: the Middle Ages. Nazi Germany. The Soviet Union. North Korea. Early modern New England, where my ancestors felt that their gut feeling that Goody Proctor was a witch was more than enough to set fire to her.
And in the modern world, religious conservatives are still violently insisting that what they believed all along is certainly true, while they are dying in droves from a preventable disease because they refused to update their belief that this is a plague sent by their scriptural god to punish the gays, or the Jews, or the weirdos.
Far from complaining that expert advice is changing, we should be deeply grateful to live in a time where knowledge is being continually created by a process of Bayesian updating of beliefs. To believe otherwise is to say "Eliot should have published the first draft of 'The Waste Land'... err... 'He Do the Police in Different Voices'... and never changed a jot of it."
Did we really need a poem that contained a long section in iambic pentameter with such gems as:
Leaving the bubbling beverage to cool,
Fresca slips softly to the needful stool,
Where the pathetic tale of Richardson
Eases her labour till the deed is done
But what is this "Bayesian updating" I keep going on about?
It's all about how likely the new evidence is if one or another idea is true. If masks work, it's pretty likely that societies where masks are common have lower rates of transmission. If masks make things worse it's really unlikely. If masks do nothing it's a little unlikely, but there may be other social factors that are causing the difference. So when we see that societies where masks are common appear to have lower transmission rates, we multiply "masks work" by some number a bit bigger than 1 (pretty likely!), "masks do nothing" by a number smaller than 1 (a little unlikely), and "masks make things worse" by a very small number (not very likely at all).
The precise value of those numbers should be the ratio of how likely the data is given the idea is true divided by how likely the data is just by chance, but we usually don't know the latter, and outside of the world of professional data analysis we don't much care. What we're trying to do as Informal Bayesians is update our beliefs in a more-or-less sensible way. As Aristotle said, we shouldn't expect super-high precision when the subject is basically wobbly. But we should do what we can within the bounds of that wobbliness: a little uncertainty is no excuse to just throw good sense to the winds and do something stupid like trust our gut, which always--always--ends up with marginal people being burned at the stake.
Knowledge begins when we as knowing subjects begin to acknowledge, understand, and manage uncertainty.
In the case of our mask data, we might end up after updating with "wearing masks does nothing" at 0.5, "wearing masks helps" at 0.45, and "wearing masks makes things worse" at 0.05.
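That update step can be written out directly (the likelihood numbers here are hypothetical, chosen for illustration; the priors are the ones from earlier in the essay):

```python
# Priors from earlier: do nothing 0.6, helps 0.3, worse 0.1.
priors = {"nothing": 0.6, "helps": 0.5 + 0.1 - 0.3, "worse": 0.1}
priors["helps"] = 0.3  # keep the original prior; line above was illustrative

# How likely is the observation "mask-wearing societies show lower
# transmission" under each hypothesis? (Illustrative numbers, not data.)
likelihood = {"nothing": 0.5, "helps": 0.9, "worse": 0.3}

# Multiply each prior by its likelihood, then renormalize so the
# beliefs again sum to 1.
unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in priors}

# posterior is now roughly nothing 0.5, helps 0.45, worse 0.05.
```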
So things have changed, but not very conclusively.
One of the many things non-scientists don't appreciate is how little we can conclude from any one set of data in most cases. There are almost no "crucial experiments". Mostly it's a matter of slow accumulation.
At this point, we're 95% sure wearing masks doesn't make things worse, so we start telling people, "Wear 'em if you got 'em." Not a strong recommendation, but a definite change from where we were.
And then we learn more about asymptomatic transmission, and realize that masks can really help stop asymptomatic people from spreading the disease even if they don't help stop the disease on the receiving end, and our plausibilities shift again, this time to "masks do nothing" at 0.2, "masks help" at ~0.8, and "masks make things worse" at ~0.01, where the fuzziness in ~0.8 and ~0.01 means the two together account for the remaining 0.8, so the total is still exactly 1, because something has to be true, even though we'll never know what it is.
So expert advice changes again: airline passengers should wear masks, masks should be worn in close quarters situations, etc.
Isn't that just wonderful? Think how bad it would be if "experts" simply stopped learning at some point, and were still giving advice based on the notion that "bad air" caused disease, or "appeasing god's wrath" was the way to prevent it.
When people complain about experts changing their advice based on new evidence, that's the world they are asking for: one where the scientific revolution never happened, where supposedly unchangeable (but radically re-interpretable) scripture is the basis for all "knowledge", and half the world's population is wiped out by a plague every thousand years or so.
But here's the thing about Bayesian updating: because the process is multiplicative, we are always multiplying two non-zero numbers together--our prior belief about how plausible an idea is, and how likely the evidence would be if the idea were true--so no matter how small the plausibility of any idea gets, it can never reach zero. Which means, on the other end of the scale, no alternative idea can ever reach 1, which is certainty.
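That "never quite zero, never quite one" property is easy to check mechanically. This sketch uses exact rational arithmetic so floating-point underflow can't muddy the point; the particular priors and likelihoods are arbitrary:

```python
from fractions import Fraction

# Arbitrary starting beliefs over three hypotheses.
beliefs = [Fraction(6, 10), Fraction(3, 10), Fraction(1, 10)]

# Wildly lopsided evidence, seen over and over: the middle
# hypothesis is heavily favored every single time.
likelihoods = [Fraction(1, 100), Fraction(99, 100), Fraction(1, 100)]

for _ in range(1000):
    beliefs = [b * l for b, l in zip(beliefs, likelihoods)]
    total = sum(beliefs)
    beliefs = [b / total for b in beliefs]
    # Even after a thousand one-sided updates, no belief is
    # exactly 0 and no belief is exactly 1.
    assert all(0 < b < 1 for b in beliefs)
```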
Certainty is not knowledge. It's an idea that will never change, that CAN never change, no matter what the evidence. We even have a word for ideas like that: faith. And faith is an error, a mistake, an impossible-to-reach point on the axis of knowledge that, when we land there, we should feel a bit like a dog that has managed to bite its own tail. Fun to chase, not so fun when your teeth dig in.
Most of the public, and a good many within the scientific community, are still catching up on all this. Philosophers are likewise largely confused, and true to form there are many of them who still preach the gospel of certainty.
This is because the Bayesian Revolution is a true Kuhnian paradigm shift. It isn't just a change in the answers we have to various questions we have about the creation of knowledge, it's a change in the kinds of questions we can meaningfully ask, and the kind of things we accept as a valid answer.
Before the Bayesian Revolution scientists still sought certainty. We were taught there was one definite answer and our work consisted of moving up the staircase that led to it. This was already pretty clearly a bad model for most science, but there were strong social and psychological forces that helped keep it afloat: a lot of non-physicists looked to physics as the ideal of science (a phenomenon known as "physics envy") and physicists... well, we were pretty happy to bask in the glow of that envy.
But fundamental physics kind of stalled out after 1983, after the discovery of the W and Z bosons at CERN: we had what looked like a complete description--called the Standard Model--of the four forces of nature with just a few missing bits to fill in and no very obvious way forward, and that is still the situation we're in today, almost forty years later, although we've filled in two of the missing bits with the discovery of neutrino mass and the Higgs boson.
At the same time, biology--a much better model for how science is actually done in the general case--got a huge technological boost with new gene sequencing and gene expression measurement systems. Suddenly the most important science was not that axiomatic, mathematical, prim, proper, and perfect discipline, but a loose and sloppy collection of half-assed approximations and selectionist fairy-tales, where precise prediction was a fantasy but knowledge was pouring in over the landscape like a tsunami.
Almost everything in biology is not just uncertain but flagrantly so.
And yet most people alive today learned what little they did about how science works back when physics was still queen of the skies, and they are--not entirely unreasonably--expecting that what their high school teachers told them about science being "certain" is correct.
Unfortunately it isn't, any more than artworks are "finished".
So what we're getting today is a view into how knowledge is made. Like Eliot's poem, it staggers across a landscape in a bit of a drunkard's walk, never quite sure where it is, sometimes changing directions quite suddenly, now and then falling over or getting chased by sheep.
This is a good thing: certainty is neither possible nor desirable, and if we insist on it we freeze knowledge into an ossified fossil of itself, dead and unchanging.
I'll take living, vital, inherently uncertain knowledge over that any day of the year.