Stagnating Science Or Sign Of Success?

Patrick Collison and Michael Nielsen have an article in The Atlantic with the attention-grabbing headline Science Is Getting Less Bang For Its Buck, which prompted a fair amount of discussion on social media. It’s a clear improvement over a lot of lost-Golden-Age narratives in that they make an effort to quantify the fall from the past, but I still found it unconvincing.

Their most original contribution to the genre is a survey that attempts to quantify the decline in the importance of science using Nobel prizes as a proxy. They surveyed a large number of scientists, asking them to rate the relative importance of Nobel-winning discoveries from different decades, and found a slight tendency to rank work from the early part of the 20th century more highly than more recent discoveries. This, they argue, is a sign that we’re not making discoveries with the same fundamental importance today as we were back in the day.

There are two problems with this, the first being that there’s really only a plausible downward trend for physics (the importance-vs-time graphs for the Chemistry and Medicine prizes are pretty flat), and even that trend wouldn’t be all that impressive by social-science standards. More importantly, though, the arguable peak in importance comes in the 1920’s and 1930’s, during the development of quantum mechanics. I don’t think the revolutionary progress of that era is something we could reasonably expect to be sustainable– that was a sui generis moment in physics, and using it as a starting point skews things in a way that’s not really appropriate. It’s sort of forced on them because the Nobel Prizes don’t go back all that far, but using that measure means they’re inadvertently pulling a classic “How-to-Lie-with-Statistics” trick.

They have a couple of other arguments that I found kind of weak as well, including a reference to the increase in co-authorship of papers:

[S]cientific collaborations now often involve far more people than they did a century ago. When Ernest Rutherford discovered the nucleus of the atom in 1911, he published it in a paper with just a single author: himself. By contrast, the two 2012 papers announcing the discovery of the Higgs particle had roughly a thousand authors each.

Again this is using a major outlier, this time on the modern end– the LHC papers have outsized author lists because the LHC is an enormous undertaking, and really the only game in town for work at the high-energy frontier.

Even a more reasonable measure– the median number of authors for papers posted to the arxiv, or some such– would run afoul of changing norms, though. Rutherford’s papers were technically single-author works because that was the standard at the time. I’ve looked at a lot of early-20th-century physics papers over the last several years, and when you dig into these older experiments with one or two authors, they usually turn out to have had a lot more people involved– they have postscripts or footnotes thanking a number of technicians and assistants. In the intervening century, we’ve decided that the contributions of those people deserve recognition, so if modern standards were applied, most of those papers would have several co-authors. The historical standard under-counts the number of people actually involved in the work, in a way that makes the expansion look worse than it is.
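To see why the median is the more reasonable statistic here, consider a quick sketch. The author counts below are invented purely for illustration (real numbers would come from arXiv metadata), but they show how a single LHC-scale outlier drags the mean while barely touching the median:

```python
from statistics import median

# Hypothetical author counts for a sample of 2012 papers -- illustrative
# numbers only, including one LHC-scale collaboration as an outlier.
papers_2012 = [3, 5, 4, 2, 2960]

# The median ignores the magnitude of the outlier entirely...
print(median(papers_2012))  # -> 4

# ...while the mean is dragged far from the typical paper.
print(sum(papers_2012) / len(papers_2012))  # -> 594.8
```

Even this robust measure, though, would still inherit the changing-norms problem described above: the 1911-era counts it would be compared against systematically omit the technicians and assistants who would be co-authors today.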

My primary complaint with the article, though, is that the reduction in “bang for the buck” in physics in particular seems to me to be less an indicator of a troubling stagnation than a sign of success. That is, a slowing in the rate of discoveries of fundamental importance, and an increase in the cost of those discoveries, is exactly what we ought to expect from science functioning as it should.

Obviously, this pre-supposes a particular model of the proper functioning of science. What I have in mind when I say that is the idea of science as a process converging on an ever-more-accurate representation of reality. Initial discoveries are relatively easy to make and contribute to our understanding on a relatively coarse scale, while each successive generation fills in finer details but at greater cost.

We can see this sort of progression toward finer detail and higher prices in the long history of physics: the field was launched by discoveries in classical mechanics involving big objects and large forces– the motion of the planets, the behavior of objects in free fall. These phenomena are a common part of everyday experience, so the cost of studying them is relatively small, and the increase in our knowledge is rapid and dramatic.

As mechanics became well developed, physicists began to turn to less common, less significant interactions: electric and magnetic forces. These are incredibly important on a fundamental level, but much harder to see in ordinary circumstances– creating electrostatic forces and magnets in a way that allows controlled investigations of their interactions was not trivial, particularly in the eighteenth and early nineteenth centuries as this stuff started to take off. The increase in our knowledge is still large, but the cost is considerably greater.

As time goes on, you get to quantum mechanics, which involves even tinier and more exotic physics, requiring even greater sophistication to carry out the necessary measurements. Quantum effects are essential for understanding our world– I’ve got a whole book on this– but you could be forgiven for not knowing that, because they’re only apparent after a bit of digging. And that digging costs money– by the early 1900’s, physics had moved almost entirely into formal, institutional settings as the resources needed to make progress started to exceed the capabilities of even idle aristocrats.

This continues on through the dawn of nuclear physics, and then into the era of ever-larger accelerators. The effects being studied become more and more subtle, and the experiments needed to study them become more and more expensive. At each step of the process, we’re filling in finer and finer details of our picture of the universe, and it takes more money and effort to nail down those details.

Now, this is not to say that these details aren’t important. From the perspective of technology and economics, the electromagnetic and quantum revolutions are vastly more important than the Newtonian ones, because most of our modern technology uses electric current to power transistor-based processors that rely on the quantum nature of semiconductors. If you think about it in terms of an improving approximation of reality, though, the scale at which we’re working is getting smaller and less obvious all the time, and that’s a good thing. If we were still making cheap and easy discoveries at the everyday scale of Newtonian physics, something would be horribly wrong with our model.

Now, there is a problem here from the bang-for-buck perspective, in that the remaining fundamental mysteries in physics– beyond-Standard-Model particles and quantum gravity– seem highly unlikely to provide the basis for new technologies in the way that electromagnetism and quantum mechanics did. I guess there’s a possibility that an experimental solution to the problem of dark matter might lead to some trillion-dollar method for extracting energy from the dark sector, but that seems pretty remote. I don’t think that represents any failure on the part of science as a whole, though. Instead, it’s an indication of just how well we’ve succeeded.

And, it should be noted, a slowing in the rate of fundamental breakthroughs does not necessarily portend the end of practical progress. There’s still plenty of room to push the limits of science that we understand pretty well already– new advances in materials and technologies based on the quantum ideas discovered in the 1930’s. And there’s arguably a lot more room for breakthroughs on the life-science side of things. We’re not going to run out of science to do anytime soon, even if it becomes harder and costlier to make new breakthroughs at the most finely detailed scales.
