
The Crisis In Physics: Are We Missing 17 Layers of Reality?
Season 11 Episode 6 | 18m 19s
Big things are made of smaller things, and those smaller things are made of smaller things still. That’s reductionism in a nutshell, and digging our way to the smallest layer has been one of the primary goals of physics forever. But what if, just before we reach the bottom, we find out that reductionism fails?
Big things are made of smaller things, and those smaller things are made of smaller things still.
That’s reductionism in a nutshell, and digging our way to the smallest layer has been one of the primary goals of physics forever.
But what if, just before we reach the bottom, we find out that reductionism fails?
You are made of cells, and your behavior can be thought of as the collective behavior of those cells—whether it’s the collective throbbing of cardiac cells behind your heartbeat or the coordinated firing of neurons behind your subjective experience.
We don’t need cell biology to describe your pulse rate—we just need a number of beats per minute, and we don’t need the neurophysiology of ion channels and action potentials to understand your fear of clowns—psychology and maybe psychiatry are more useful there.
If we go deeper, those heart and brain cells are made of molecules and then atoms, whose interactions can be understood by chemistry without much attention paid to the quantum mechanics at work even deeper down.
The universe has layers, roughly sorted according to size, and each layer can be described by its own set of rules or “dynamics” that we can use without having to know the rules of the layer below.
It’s pretty convenient that we don’t have to do quantum mechanics to understand how people work.
And we take this peculiar feature of our universe for granted, but perhaps we shouldn’t.
It’s also convenient that to understand nature, we only need to look to smaller and smaller scales.
Understand the smallest, then in principle we’ve understood the largest.
This notion, broadly described as reductionism, has been the cornerstone of much of science and certainly physics since the ancient Greeks.
But this too may be an assumption that needs to be questioned.
In a recent episode, we talked about the hierarchy problem—the fact that the Higgs particle’s small mass doesn’t feel natural in the context of this reductionist worldview.
Some physicists suggest that this is a strike against reductionism.
Others, like Sabine Hossenfelder, who responded to our video with one of her own, contend that it’s this notion of “naturalness” that is misused. But reductionism has been a foundational tenet of physics forever, and naturalness has long been considered an important tool.
This video is going to lay down some groundwork to give you a solid sense of how these concepts are fundamental to much of physics, which will help us understand why we should be surprised that these approaches seem to fall apart at the bottom.
Let’s start by defining some concepts.
The idea that big things can be understood in terms of small things is called methodological reductionism.
We also have the related concept of theory reductionism that I’ll come back to.
A few other terms: a level of new dynamics arising from a deeper, smaller layer emerges from, or is emergent from, that deeper layer.
The dynamics—the rules describing the layer—arise from the potentially very different dynamics of its many parts.
And those parts don’t have the emergent dynamics specifically coded in them.
If the emergent dynamics can be described without any reference to the underlying dynamics, we say the emergent system has dynamical independence.
Perhaps the simplest example is in thermodynamics, where the positions and momenta of countless particles—literally 10^27 independent-ish numerical properties, or degrees of freedom, in the air in a typical room—reduce to a few tightly related variables: temperature, pressure, density and volume for an ideal gas in equilibrium.
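To make that coarse-graining concrete, here is a minimal sketch (my illustration, not from the episode): simulate a huge number of molecular velocities, none of which individually knows anything about temperature, and watch that single emergent variable fall out of their statistics.

import numpy as np

# Coarse-graining sketch: reduce a million per-particle velocities (stand-ins
# for the ~10^27 in a real room) to one emergent variable: temperature.
# Illustrative values only.
k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26         # mass of a nitrogen molecule, kg
T_true = 300.0       # the temperature we secretly sample from, K

rng = np.random.default_rng(42)
# Maxwell-Boltzmann: each velocity component is Gaussian with variance k_B*T/m
v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(1_000_000, 3))

# Equipartition: <(1/2) m v^2> = (3/2) k_B T, so T = m <v^2> / (3 k_B)
T_emergent = m * np.mean(np.sum(v**2, axis=1)) / (3 * k_B)
print(f"Temperature recovered from particle statistics: {T_emergent:.2f} K")

Any single velocity is random, but the ensemble average is rock steady, which is exactly the dynamical independence described above.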
Or take fluid flow.
The Navier-Stokes equations predict the time-dependent behavior of fluids with incredible accuracy, relating the density, momentum, various stresses, and so on of the fluid with no reference to the nature of the individual particles making up the flow.
Their properties are subsumed in the global parameters of the fluid.
For example, the maddeningly complex attractive interactions between fluid particles just become a single number: the viscosity.
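For reference (the episode doesn’t write it out), the incompressible Navier-Stokes momentum equation shows where that number lives: with ρ the density, v the velocity field, p the pressure and f any external force, all of the molecular messiness is compressed into the single viscosity coefficient μ.

\rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \mu \nabla^{2} \mathbf{v} + \mathbf{f}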
There are plenty of other examples where the constituent particles of a system have no knowledge of the emergent dynamics that come from them—from the patterns within a murmuration of starlings to quasi-particles formed by electron holes in superconductors or by skyrmion knots in magnetic dipole arrays.
In physics, a theory that works in a particular narrow range of some parameter space is called an effective theory.
Newtonian gravity is an effective theory of the more general theory of general relativity, with Newton’s version only working in cases of relatively weak gravity.
The most prominent type of effective theory in physics is the effective field theory, which is what it sounds like—an effective theory that’s also a field theory, meaning it describes properties that extend through space, like in fluid dynamics or general relativity or quantum field theories.
In an effective field theory, the restriction on the validity of the theory is size scale.
The component parts of an EFT—be they particles of air or quantum elements of the fabric of space—are small compared to the emergent field. Zoom in close enough and our EFT fails—things like temperature and viscosity become poorly defined and then meaningless as we enter the realm of individual elements.
But zoomed out far enough, the underlying elements are so numerous and so small compared to the EFT scale that their behavior can be averaged over; their statistics are highly predictive, even if the individual elements are not.
We say that an EFT integrates over the many degrees of freedom of its underlying elements.
We also call this process coarse-graining, in that our pixel scale is coarse enough that we can’t resolve the underlying elements.
At this point, we see ensemble parameters, and find relationships between those parameters. Sometimes those coarse-grained relationships are exquisitely reliable and robust, and the erratic behavior of the underlying parts is no longer needed to predict the system’s behavior.
This is the dynamical independence I mentioned.
But an effective theory breaks when we move beyond its range of validity.
An effective field theory breaks down below a certain size, when the graining is too coarse to capture the increasingly important behavior of the constituent elements of the emergent field.
A key requirement for an effective field theory to work is a separation of scales—it has to be big enough compared to its parts that it’s not overly sensitive to the random messiness of those parts.
Here’s an example of pushing an effective theory too far: at the end of the 19th century, physicists were trying to calculate the spectrum produced by hot objects—the blackbody spectrum—the light radiated by particles colliding or vibrating with thermal energy.
Based on the assumptions of classical physics, physicists were able to correctly calculate the spectrum produced by hot objects, but only for the low energy, long wavelength end of that spectrum—the infrared, or IR, in the case of the Sun’s spectrum.
When they tried to calculate the short wavelength ultraviolet, or UV, side, they found that high energy particles contributed far too much intensity, causing the spectrum to brighten indefinitely in that direction.
The calculations didn’t match reality.
This was the so-called ultraviolet catastrophe.
It was solved when Max Planck realised that classical physics didn’t give a good description of how particles vibrate, and that description got worse the higher the energy.
His solution was the birth of quantum mechanics.
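Here is a quick numerical sketch of that catastrophe (my numbers, not the episode’s), comparing the classical Rayleigh-Jeans law with Planck’s law at a roughly solar temperature:

import numpy as np

# Classical (Rayleigh-Jeans) vs quantum (Planck) blackbody spectral radiance.
# The classical effective theory diverges as wavelength -> 0: the UV catastrophe.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
k_B = 1.381e-23  # Boltzmann constant, J/K
T = 5800.0       # roughly the Sun's surface temperature, K

def rayleigh_jeans(lam):
    # Classical prediction: grows without bound as lam -> 0
    return 2 * c * k_B * T / lam**4

def planck(lam):
    # Quantum prediction: high-energy modes are exponentially suppressed
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

for lam in (10e-6, 1e-6, 0.1e-6):  # 10 um (infrared) down to 100 nm (ultraviolet)
    print(f"{lam * 1e9:7.0f} nm   classical: {rayleigh_jeans(lam):10.3e}   quantum: {planck(lam):10.3e}")

The two laws roughly agree in the infrared, but by 100 nm the classical formula over-predicts the radiance by around nine orders of magnitude: an effective theory pushed past its UV cutoff.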
We’ve talked about it.
The ultraviolet catastrophe resulted from pushing classical mechanics beyond its limit of applicability.
By analogy, we often call the limit of an effective theory the “UV cutoff”—that’s the energy where it stops working and we need to transition to the more general, more true description.
In this case, it was quantum mechanics.
By the way, once we’re in the quantum realm, the UV cutoff corresponds both to a maximum energy of validity and to a minimum size, for reasons we’ve discussed.
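That correspondence is just the quantum relation between energy and wavelength: for light, a mode of energy E has wavelength λ = hc/E (de Broglie gives the analogous relation for matter), so a maximum energy of validity is automatically a minimum resolvable size:

E = \frac{hc}{\lambda} \quad \Longrightarrow \quad \lambda_{\min} \sim \frac{hc}{E_{\text{cutoff}}}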
Let’s step back a second.
We can describe the world in terms of effective theories that are emergent from deeper theories. Those deeper theories are, in principle, more fundamental, but in general are, themselves, also effective theories with their own UV cutoffs.
They are coarse-grainings of something even deeper.
This process of zooming in to smaller and more and more fundamental layers of reality has led us through classical physics to quantum mechanics with an ultimate eye to a single, most fundamental theory.
Via this methodological reductionism we hope to also achieve the other type of reductionism: theory reductionism, which proposes that all effective theories are just different manifestations, or special cases, of a single, underlying master theory—a theory of everything.
At least that was the hope.
But we saw hints that this program might be in trouble when we looked at the hierarchy problem recently.
Let me refresh your memory.
Let’s go all the way down to the smallest scale at which we have a well-tested field theory.
That’s the Standard Model of particle physics, which describes the elementary particles like electrons and quarks in terms of oscillations in quantum fields.
If we accept the premise of theory reductionism, then all of nature should be explainable in the context of a single master theory.
The Standard Model is not that theory because it doesn’t include gravity.
That makes it an effective field theory—a coarse-graining of some deeper theory.
But where is the UV cutoff of the Standard Model?
We could argue that the deepest layer of all could be at the size-scale of the Planck length, at around 10^-35 m.
That’s around 17 orders of magnitude smaller than the scale of the heavier of the Standard Model particles.
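As a back-of-the-envelope check on that figure (illustrative values, not from the episode), compare the Planck length with the rough Compton wavelength of a Higgs-mass particle:

import math

# Order-of-magnitude check on the gap between the Higgs scale and the Planck scale.
planck_length = 1.6e-35  # meters
higgs_scale = 1.6e-18    # meters, rough Compton wavelength of a ~125 GeV particle

gap = math.log10(higgs_scale / planck_length)
print(f"Separation: about {gap:.0f} orders of magnitude")  # prints ~17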
This is where we know that general relativity and quantum mechanics come into hopeless conflict, and so both of them fail as effective field theories and we need a theory of quantum gravity.
So the quantum gravity scale could be our UV cutoff.
But many physicists argue that the cutoff is much closer in energy to the Standard Model.
That makes sense if we take our lead from the ultraviolet catastrophe.
Max Planck solved it when he showed that contributions to the blackbody spectrum by high energy particle oscillations are suppressed in the more “true” quantum theory.
This suppression kicks in at the same energy where the classical theory stopped making good predictions.
So where does the Standard Model stop being predictive?
Some would argue that it’s when we try to predict particle masses. Particle masses are free parameters of the Standard Model—quantities measured in labs rather than predicted by the theory.
But, as good reductionists, we expect that the mechanism explaining these masses is hidden in the deeper layer, beneath the Standard Model’s UV cutoff.
It’s a bit confusing, because the mechanics of the Standard Model—quantum field theory—does actually explain mass, at least in principle.
Mass arises from a type of interaction with quantum fields.
But, in a manner analogous to the ultraviolet catastrophe, particle mass can become very large—even infinite—if you naively add the contributions from high energy oscillations in those quantum fields.
The main culprit is the Higgs boson, which in turn grants mass to other elementary particles.
Rewatch the last video for the nitty gritty.
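Schematically (this shorthand is mine, not the episode’s), the measured Higgs mass-squared is a bare term plus quantum corrections that grow with the square of the cutoff energy Λ:

m_H^2 = m_0^2 + \delta m_H^2, \qquad \delta m_H^2 \propto \Lambda^2

If Λ sits near the Planck energy, around 10^19 GeV, the bare term must cancel the correction to roughly one part in 10^34 to leave the observed 125 GeV. That is the fine-tuning at the heart of the hierarchy problem.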
The point here is that the power of the Standard Model to explain nature appears to fail at around the mass of the Higgs boson. Taking their cue from, among other things, the ultraviolet catastrophe, many physicists expected to find new physics close to this mass—physics that would suppress or cancel high energy contributions.
They did not find that physics.
At least, the Large Hadron Collider hasn’t yet, and it’s probed quite a bit deeper than where this physics was supposed to appear in the form of new particles.
So could the mechanics that define the Standard Model’s parameters lie much deeper than is accessible to the LHC, or perhaps even to any plausible future accelerator?
Could this so-called UV theory only emerge many orders of magnitude larger in energy and smaller in size than the scale of the Standard Model, perhaps even at the Planck scale?
Many would say that this feels “unnatural”, claiming it requires very precise tuning of the parameters of that UV theory.
In an upcoming episode I’ll address the fine-tuning argument more rigorously, including whether we should expect the Standard Model parameters, and the UV parameters from which they emerge, to be anything other than what they are. Some, like Sabine, think we should not.
For now, I want to end with an illustration of the strangeness of the situation, in terms of size, not energy.
What is the typical separation of scales between familiar dynamically independent levels?
Think starling flocks to individual birds to bird cells to molecular machinery to atoms to the elementary particles of the Standard Model.
We zoom in 2-4 orders of magnitude each time, and then we have this chasm that could be up to 17 orders of magnitude between these elementary particles and the deepest layer.
There’s as much difference between the Planck scale and the scale of the Higgs as there is between the atom and the blue whale.
Imagine we tried to find out what blue whales are made of and we found no supporting structure for 17 orders of magnitude—no organs, no cells, no molecules.
Just atoms of blue whale that somehow managed to get their act together to make a cetacean.
This is a problem because it gives us a sense that the atoms knew what they were trying to build, which contradicts the whole idea of our methodological reductionism.
The parts should not know about the whole—emergence goes from small to big.
This requirement is a bit more nuanced than that.
There IS feedback between the generating and emergent layers.
For example, a cell can’t live without its host organism, molecular machinery requires the stable environment of the cell, and even the lowly quark can only exist in composite particles.
There’s a feedback between scales—small generates large, large stabilizes small.
But this feedback typically acts locally: each scale directly influences the layer above and stabilizes the layer below—similar to how the dynamics within each layer are spatially local.
Throughout most of nature, this interplay between emergence and stabilizing feedback works when the separation of scales is not too small but also not too large.
It’s hard to imagine a mechanism for that feedback over the 17 orders of magnitude from the Higgs to the quantum gravity scale.
If this is true, it seems like there are really only two broad possibilities. Option 1) Perhaps the UV theory that underlies and defines the Standard Model just is the way it is.
It’s doing its thing, oblivious to the fact that incredible richness arises many orders of magnitude away in scale.
That does seem a bit lucky, or finely tuned. We could also invoke the anthropic principle here, in that only universes with such a quality end up getting observed—by blue whales. There’s a solid case to be made for this, but you do need a ridiculous number of universes to make it work. We’ll come back to that some time.
Or option 2) Perhaps a type of downward causation or large-to-small feedback is stronger than the standard reductionist paradigm generally permits.
Perhaps the Higgs does tell the lower levels how to get their act together over vast scales to stabilize its mass, or perhaps the Higgs mass is stabilized not from below but rather from above.
The general term for this kind of anti-reductionist reversal of influence between the small and large, the ultraviolet and infrared, is UV-IR mixing.
And there’s some strong motivation to think that this might be an answer—we already see UV-IR mixing in one very familiar case—that of gravity.
But that’s also for another time.
In either case, something weird and cool is going on at the bottom of reality, and the clue is in the separation of reality’s emergent layers, from blue whales down to the quantum fabric of spacetime.