
Physics progresses by breaking our intuitions, but we’re now at a point where further progress may require us to do away with the most intuitive and seemingly fundamental concepts of all—space and time.

---

Physics came into its modern form as a description of how objects move through space and time. They are the stage on which physics plays out. But that stage begins to fall apart on the tiniest scales and at the highest energies, and physics falls apart with it. Many believe that the only way to make physics whole again is to break what may be our most powerful intuition yet. In our minds, space and time seem pretty fundamental, but that primacy may not extend beyond our minds. In many of the new theories pushing the edge of physics, spacetime at its most elementary level is not what we think it is.

We’re going to explore the “realness” of space and time over a few upcoming episodes. We’ll ask: Do our minds hold a faithful representation of something real out there, and if not, why do we think about space and time the way we do? And if space and time aren’t fundamental, what IS? What do space and time emerge from?

But today we’re taking the first step by exploring how the notion of absolute space and time in physics came about in the first place, and how that notion is beginning to fall apart.

We have this sense of space as an extended emptiness - a volume waiting to be filled with matter - a regular, continuous, mappable … space, in which everything that exists is embedded. Meanwhile time is the continuous rolling of future into past through the present, all governed by the same unstoppable clock. But this idea of space and time as having an existence “out there”, independent of their contents, became cemented in popular intuition relatively recently, at the same time that it became cemented in physics. However, humans have been arguing over the reality, or the fundamentality, of the dimensions for millennia.

We can summarize the two main conceptions of spacetime as either relational—space as a network of positional relationships between objects—or absolute—a real entity that exists independently of objects and, rather, contains them. The latter seems to have emerged only relatively recently.

Let’s start with the ancients. They certainly thought a lot about space—after all, they had maps and they invented geometry. But the geometries of Euclid and Pythagoras and others didn’t need the notion of space as an absolute entity—they were relational. For example, a triangle is defined by the relative lengths of its sides and its internal angles. You don’t need a coordinate grid to define a triangle—which is good, because the ancient Greeks didn’t have one. Sure, their maps had longitude and latitude, but they didn’t have our modern mathematical habit of gridding up empty space with x, y, and z axes. As such, they didn’t tend to think of empty space as having its own independent existence.

The idea of the coordinate grid came much, much later. Perhaps you’ve heard of the Cartesian coordinate system: x, y, and z axes, each at 90 degrees to the others and gridded up so that any point in space can be defined with three numbers - the value of the closest grid mark on each of the axes. This idea feels pretty intuitive to many of us, but it wasn’t commonly used until after 1637, when the French mathematician and philosopher René Descartes made it cool. With the coordinate system, it became possible to represent abstract numerical concepts in spatial terms—for example, by graphing an algebraic function. But it also gave us a tool for describing arbitrarily large and imaginary physical spaces—and this application would soon revolutionize all of physics. Regarding the actual nature of space, Descartes was firmly in the camp of philosophers like Plato, who didn’t believe in empty space. Descartes said that space is only real as far as it defines the extension of objects and matter. But the invention of the first true mathematical coordinate system opened the door for a very, very different conception of space.

And that new conception was almost entirely due to Isaac Newton. He gave us a set of equations that could, apparently, completely describe the motion of objects and how those motions change through the forces of their interactions. Newtonian mechanics is built on Descartes’ coordinates and assumes a universal clock. It proved wildly successful—revolutionary, really. So much so that many, including Newton, began to see the foundational building blocks of the mechanics—the coordinates of space and time—as in some way physically real.

Newton himself insisted that space is absolute; it exists completely independently of any objects within it. The empty volume implied by the Cartesian grid is a thing in itself. And time is also absolute. From Aristotle to Descartes, “time” was mostly understood as a counting of events. But in Newton’s view, there’s a single universal clock that keeps the same time for all observers—time passes “by itself”, even in the absence of any change. Newton also believed that there was an absolute notion of stillness. Like, a master frame of reference whose x, y, and z axes are unmoving, and if your position was fixed relative to those axes then you were truly still.

This is contrary to the ideas of Galileo decades earlier, who showed us that velocity is relative—the speed you measure for another traveller depends on your own speed. The laws of physics are the same in any non-accelerating, or inertial, frame, and so all such frames are equal. While Newton accepted the mathematical consequences of Galilean relativity, he thought that the impossibility of defining a preferred inertial frame was a limitation of the human mind, not of the universe.
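To make that relativity concrete, here’s the textbook Galilean velocity rule (not spelled out in the episode, just the standard relation): if you move at velocity v relative to me, and I measure a traveller moving at velocity u, then you measure

```latex
u' = u - v
```

Every inertial observer gets a different u', and nothing in the mechanics singles out one of them as the “true” velocity.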

The success of Newtonian mechanics elevated the notion of the realness of space and time in everyone’s minds. But there was one prominent naysayer. Newton had a nemesis. Or maybe it was Newton who was the nemesis to this guy. OK, he shared a mutually nemetical relationship with the German mathematician Gottfried Wilhelm Leibniz. Their most famous rivalry was over the discovery of calculus, which they figured out independently—with Leibniz publishing first. Newton accused him of plagiarism and, being by far the most powerful scientist of his day, secured the credit for himself.

But another point of contention was on the nature of space and time. Leibniz did not accept Newton’s assertion that these dimensions were in some sense real and independent of anything in them. Instead, he thought that both space and time were relational.

What does that even mean? Well, it means that objects exist, but they don’t live in a 3- or any other dimensional space. Rather, what we think of as spatial separation is a quality of the objects themselves—or rather of the connection between them. Exactly why Leibniz thought this and rejected Newton … is a whole thing, which we don’t have time to dive into right now. Instead, let me try to give you a sense of what it could mean for space to be encoded in objects or in their relationships, rather than existing independently of those objects.

Let’s start by imagining only one dimension of space, represented as a line. This is a Newtonian space, where every point represents an absolute position in a 1-D universe. We can put some particles in the universe. The position of each in space is defined by - well, its position in space: whatever grid mark it’s next to if we add a coordinate system. The particles might have intrinsic or internal properties—say, mass, electric charge, and so on—but their position isn’t a quantity that’s intrinsic to the particle.

In Leibniz’s view there is no space, so we get rid of the line. The particles still exist, but they aren’t anywhere. They’re sort of just bundles of properties with no size or location. Space doesn’t exist so maybe we should place these particles on top of each other, but then again if location is meaningless we might as well separate them so we can see them.

Let’s add a new property to each particle that we’ll call X. X is what we call a degree of freedom—something about the particle that can take on different values, and that can change. Other degrees of freedom could be energy and phase and spin and so on. X behaves in a particular way: it can change freely, and if it’s changing, it keeps changing at the same rate and in the same direction.

Now these particles have no idea about each other's existence, except in a special circumstance. If two particles have values of X that are close to each other, then those X values influence each other, changing the rate at which each one evolves. Maybe they try to become more similar, or maybe they try to become more different. If we were to represent these X values with position on a number line - an x-axis - then the behaviour of the particles looks just like particles moving around in space and attracting or repelling each other only when they’re close. We can’t tell the difference between particles moving in space versus space-like behaviour emerging from a degree of freedom within the particles.
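If you like seeing ideas like this in code, here’s a minimal sketch of the thought experiment—not Leibniz’s actual model, just an illustrative toy with made-up constants. Each particle carries an internal value X and a rate of change; X values only influence each other when they’re close, and here the nudge pushes them apart:

```python
import random

# Toy illustration: particles with NO position in space, only an internal
# degree of freedom X that drifts at a steady rate. When two X values are
# close, they nudge each other's rates (a made-up short-range "repulsion").

INTERACTION_RANGE = 1.0  # how close two X values must be to interact
STRENGTH = 0.05          # how strongly nearby X values push apart
DT = 0.1                 # one update step

class Particle:
    def __init__(self, x, rate):
        self.x = x        # internal property, not a location
        self.rate = rate  # X keeps changing at this rate if undisturbed

def step(particles):
    # Interaction: only pairs with nearby X values affect each other.
    for a in particles:
        for b in particles:
            if a is not b and abs(a.x - b.x) < INTERACTION_RANGE:
                a.rate += STRENGTH if a.x > b.x else -STRENGTH
    # Free evolution: each X keeps changing at its current rate.
    for p in particles:
        p.x += p.rate * DT

particles = [Particle(random.uniform(0, 5), random.uniform(-0.5, 0.5))
             for _ in range(4)]
for _ in range(100):
    step(particles)

print([round(p.x, 2) for p in particles])
```

Plot each X on a number line over time and it looks exactly like particles moving through 1-D space and repelling at short range—even though nothing in the code is “in” space.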

This thought experiment isn’t explicitly what Leibniz described, or how things would really have to be to explain a universe like our own. For one thing, we need 3 spatial dimensions, not one—each particle would need three such properties, X, Y, and Z, and all three would have to be close for particles to interact. Also, Leibniz thought that position was encoded in the relationship between pairs of objects, not in the objects themselves. He gave his elementary particles a name - monads - which among other things had rudimentary consciousness, and he thought that space emerged from their first-person perspectives of each other. But we don’t actually need those extra qualities—the idea of particles with interacting, internal degrees of freedom illustrates how space can emerge from the relationships between elements that are themselves not in space.

That’s Leibniz on space. He disagreed with Newton on time in a similar way, believing it to be a measure of the change intrinsic to each element, rather than a cosmic clock that kept the universe in sync. Of course, Newton was the undisputed boss of science back then, and so his preference for absolute space and time won over the physicists, and ultimately found its way into the popular imagination. But who was really right? Are objects in space and moving through time, or are space and time somehow in objects and their connections? Are the dimensions absolute or relational?

The next big development seemed to support Newton. Over the 19th century, our understanding of the phenomena of electricity and magnetism converged, revealing the existence of something called the electromagnetic field.

A field is just some property that can take on a numerical value at all points in space. For example, temperature is a field defined in the air around you. It’s emergent from the properties of the air particles.
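As a quick illustration of that definition—using a hypothetical temperature profile, not real atmospheric data—a field is just a function that returns a value for any point you ask about:

```python
# A field assigns a number to every point in space. Here, a made-up
# temperature field: warm near the origin, cooling with distance.
def temperature(x, y, z):
    return 20.0 + 5.0 / (1.0 + x**2 + y**2 + z**2)

print(temperature(0, 0, 0))  # 25.0 at the origin
print(temperature(3, 0, 0))  # cooler three units away
```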

But the electromagnetic field doesn’t need particles. For the first time, it seemed that a field could be a property of space itself. So, surely if space can have properties, then space must objectively exist.

And more intrinsic properties emerged with the development of quantum mechanics—for example, space was shown to have a sort of energy even in the absence of particles—so-called vacuum energy. However, if we really want to decide whether space and time are real—to judge between Leibniz and Newton—we need the ultimate arbiter. We need the greatest expert on space and time that ever lived—and that’s Albert Einstein.

We’ve talked about Einstein’s special and general theories of relativity many times before. Let’s just go over what those theories changed about our notions of the dimensions. With special relativity, the separation of 3-D space and 1-D time ended. They became 4-D spacetime. Einstein showed that our motion through space and our motion through time are linked. A clock moving relative to you ticks slower from your perspective.
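For the record, that slowdown is quantified by the standard time-dilation relation (textbook special relativity, not derived in the episode): a clock moving at speed v ticks off a proper time Δτ between events, while you measure the longer interval

```latex
\Delta t = \frac{\Delta \tau}{\sqrt{1 - v^2/c^2}}
```

where c is the speed of light; the closer v gets to c, the slower the moving clock appears to run.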

And then with general relativity we see that the presence of mass and energy stretches and warps both space and time. This causes the straight-line trajectories that we expect on a Cartesian grid to become curved, and the apparent change in an object’s path in the presence of mass is Einstein’s explanation of gravity.

Relativity overturned some of Newton’s notions about absolute space and time: that they are independent entities, that there’s a universal clock for time, and that there’s some sort of ultimate, rigid coordinate system for space. But what does all this mean for the central question of this episode: the realness of space and time?

Actually, spacetime in Einstein’s universe kind of feels even more substantial than before. It’s like a fabric that can be warped. It can hold energy. It can even propagate waves—gravitational waves. Einstein showed that empty space has properties, so it must be real, right? Well, maybe - but Einstein’s view is really a radical departure from Newton’s—to the extent that Einstein even called himself a Leibnizian. Newton believed in space as an underlying stage on which the particles and the fields danced. But Einstein insisted that no such background existed—and that’s because to him, space and the gravitational field are the same thing. This field is not painted on top of a coordinate system; rather, it IS the coordinate system. Absent this field there is nothing.

So all of this landed Einstein somewhere between Leibniz and Newton. He believed that there is an extended structure “out there” that can hold objects and on which distances and durations can be defined, but it’s not absolute and fundamental in the way that Newton thought. According to Einstein, Descartes was right, and so was Plato: there’s no such thing as empty space.

So is Einstein the last word on the matter? Far from it. We know that general relativity breaks down on very small scales—smaller than around 10^-35 meters, which is the Planck length. There it comes into hopeless conflict with quantum mechanics, and it becomes impossible to meaningfully define shorter distances, just as it’s meaningless to define durations shorter than the Planck time. This conflict between Einstein’s theory and quantum mechanics is one of the major challenges and inspirations for progressing to the next level of physics. And essentially all of the possible paths forward force us to rethink our understanding of the dimensions—whether by multiplying their number, as in string theory, or by having them emerge from elements that, themselves, do not exist within space—as in loop quantum gravity, which we’ve discussed, the cellular automata of Wolfram’s physics project, the entanglements between elements on a holographic horizon, or Arkani-Hamed’s amplituhedron, among others.
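For reference, the Planck length and Planck time quoted here are the standard combinations of the fundamental constants (ħ the reduced Planck constant, G Newton’s gravitational constant, c the speed of light):

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m},
\qquad
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\ \mathrm{s}
```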

If any of the latter are true, then Leibniz may have been onto something: space exists in the relationships between some sort of elementary… something, not as an absolute and physically real fabric. Leibniz also had another controversial idea: he thought that space was in our minds. This isn’t the same as saying that reality is in our minds—it’s not even the same as saying that space doesn’t exist. Rather, Leibniz felt that whatever it is that’s out there that behaves like space only gains the subjective feeling of depth, breadth, height, and distance when our brains try to organise objects that are separated by an altogether more abstract property. Kind of like how the subjective experience of red only exists when brains interpret a frequency of light.

It’s incredibly difficult to imagine a universe without space or time. The dimensions seem hardwired into our brains. Perhaps we need to break this preconception to move forward in physics. If so, we need to explore how and why our brains build our very convincingly spatial and temporal inner worlds. And we’ll do that in an episode very soon, and perhaps get closer to figuring out whether we live in an absolute or a relational spacetime.

Comments

Anonymous

Focusing on your end comment: Yes, space and time absolutely _are_ hardwired into our brains via grid and place cells, and the details of how are delightfully weird and non-intuitive. Bio-inspired cognition was always my favorite research review area, and I don't think any of us could have imagined a more unexpected result on that point.

Anonymous

Also, more simply: Space as a "full state slice" _cannot_ exist experimentally due to the profoundly fundamental SR restrictions against faster-than-light full-loop transfers of either energy or information. ["Loop" is safer since, as Einstein first noticed, one-way light speed is immeasurable. I would add that this must be true to allow SR to do its thing, what I call the principle of asynchronous equivalence.] Foliated space from block universe thinking is stupid since blocks literally solve nothing and instead create needless paradoxes. There _is_ only one universe, a meaningful state, but its irreversibly historical bits cannot be described as "space" separated without creating noise and paradoxes. They are "spacetime" separated. It's not the same thing.

Anonymous

And hey, relational spacetime is fun because it makes gravity into a finite-resolution fabric of entanglements. That's testable at finite cost if someone can figure out how to detect gravity transition states between extremely lightweight objects. There's no Planck foam in that version, however, just mundane entanglements of the same type already seen in quantum experiments.

Anonymous

Riveting! While the show wrapped, I started putting away the dishes. My kid put them away last, and she placed the bowls and plates in the wrong positions on the shelf (despite extensive training). She has a mathematical universe. I have a mathematical universe. Will someone tell her that my mathematical universe is the correct one?

Anonymous

And if we all canned Leibniz and supported Newton, case closed. Please don't call me a one-dimensional monad.

Anonymous

MOD: "A field is just some property that can take on a numerical value at all points in space." Here's a deliciously simple paradox: Did you know Faraday is most responsible for the vacuum density problem? Uh... say again? It's because the moment folks accept as a modeling principle that _every_ point in "empty" space (heh!) can support even an infinitely precise state vector, you've just created a black hole due to the energy cost of that infinitely precise data. Perhaps it's my background, but I have difficulty understanding why most physicists never notice that infinite information is "baked in" to the Faraday model. So, of course, it pops up _everywhere._ The solution? Multi-scale smoothness. Space in galactic voids doesn't have the same info content as space near a black hole. Why should it? Alas, the details of such a simple statement require _very_ different maths, all of which index back to actual mass-energy content: a relational model.

Iain McClatchie

Sorry to barge in, Patreon isn't letting me post a new message. As I understand the holographic principle, the surface area of a volume limits the entropy within that volume. And the second law of thermodynamics says the entropy of a closed system always increases. Does this mean that a closed system must always increase its surface area? Would this force an expansion of space, analogous to how the Pauli exclusion principle forces electrons out of the lowest energy level? Would it only force an expansion of space for maximum entropy density objects (black holes)? The universe might have a volume (which is expanding), but it doesn't have a surface area, right? Are we pretty sure the universe is not a black hole in some other universe? Because if it was, it might have a surface area, even if that surface was outside our observable part of the universe. But I can't see how a maximum entropy thing like a black hole can have something which appears to be not maximum entropy inside.

Hfil66

Although interesting, it seems incomplete. It breaks any symmetry between space and time, since it allows things to move through time (by changing their X attribute over time) but regards (X,Y,Z) as mere attributes that can be changed over time but cannot be moved through, because they have no external reality. I would say that a more complete approach would be to say that (X,Y,Z,T) are all just attributes of an entity, and it is not that X moves in relation to T but that only certain permutations of (X,Y,Z,T) are allowable and others are disallowed. Thus an entity perceived as moving along the X axis at 10 m/s allows states of (0,0,0,0), (10,0,0,1), and (100,0,0,10), but not (300,0,0,5). This then leads to the question of how forces are applied in such a system. In simple classical physics (all of this could get more complex in the smeared physics of quantum mechanics), a force will apply a filter to the allowed states that disallows some states that would have been allowed in the absence of that force, while allowing other states that were disallowed in the absence of that perceived force.

Anonymous

The announced episodes look to be as fascinating as the ones about the block universe. What if time and space are statistical in nature? Could quanta have internal clocks in addition to spatial and momentum wave functions, and could more frequent interactions cause their clocks to slow down, creating spacetime curvature?

Anonymous (edited)


16:11 “... general relativity breaks down on very small scales ... [so] it’s meaningless to define durations shorter than the Planck time.” The biology equivalent is: “protein synthesis breaks down in large nuclear explosions, so it’s meaningless to define DNA transcription below the level of the forces that bind protons and neutrons in nuclei.” Both assertions are true, but does either one mean anything?

As proposed in Einstein’s 1905 papers, time was a gentle beast: the ticking of a clock. If one accepts, as Einstein did in 1905, that non-abstract time is _defined_ by such clock-tick cycles, then the shortest time scales available to most of the universe have nothing to do with the Planck scale and everything to do with quantum mechanics since it is the latter that determines a system’s shortest experimentally detectable cycle time. At present, the shortest quantum cycle time ever observed was set by the oscillation frequency required to explain double-slit interference in molecules with over 2000 carbon atoms worth of mass.

For most phenomena, that level of clock precision is unlikely and unnecessary. Quantum mechanics, in combination with bonding, defines the highest clock frequencies needed for everyday physics. That frequency varies by the situation but is typically vastly lower than in the complex molecule interference experiment. The lack of visible quantum interference in most classical systems demonstrates the irrelevance of extreme clock rates. One could define the boundary between the quantum and classical worlds by this casual brushing-off of the need for exceptional clock frequencies.

One can, of course, argue that only the most extreme limits of physics can tell us more about the deep definition of time in the universe. However, to take that approach, one must ask how a fellow named Einstein gained such remarkable insights by fretting about nothing more than the mechanical clocks of his era. Just lucky, perhaps?

QFT describes the universe as an exquisitely balanced pairing of nearly infinite negative and positive energies that happen, somehow and someway, cryptically and inexplicably, to cancel each other out just enough to give our almost-flat space. With such horrifyingly high energies in constant conflict, the Planck scale _must_ be where the real game plays out. QED!

Even if one accepts the notoriously absurd QFT vacuum density prediction as anything more than a subtle math error, there’s another problem: QFT is frame dependent and thus aether-like, making it irrelevant to understanding the utterly featureless vacuum of Einstein’s 1905 special relativity.

Think about a Feynman diagram, which provides an excellent visual model of the QED subset of QFT. It begins with an assumed set of particles and potentials and ends with a probability prediction for use in a detector. Where did those beginning states and ending detectors come from? They are certainly not part of Einstein’s featureless 1905 vacuum. They come from the laboratory of an _assumed_ observer, that is, some specific frame of reference.

Recognizing that QED and QFT maths always begin and end with finite, frame-dependent collections of matter and energy puts a very different spin on the nearly infinite number of quantum vibration modes that energy-indifferent maths say could occur in between. If even one mode in one location requires more energy than exists going in or coming out, _that mode does not exist_ and thus should not enter into the calculation.

Similarly, if you partition the entire QFT space into regions, the vibration modes “funded” in these regions must still add up to the starting and ending energies. The variety of modes possible in any one region stays vast. The difference is that a zero-sum game now limits the contribution of each mode to each region, with smaller, more intense peaks in one region resulting in lower energy budgets for other regions. Despite QFT’s current lack of explanation for why the universe does not explode or collapse, observable reality keeps the total mass and energy within a QFT region finite and entirely determined by how much mass and energy goes in and out of that region. This concept, better known as the conservation of mass and energy, somehow got lost in the lattice math.

My point is this: While it’s encouraging that physicists [1] are returning to Einstein’s 1905 focus on experimentally meaningful definitions of space and time, this new path will, in time, veer back to the same pointless peat-bog of experimentally unencumbered paper production if it does not also explicitly discard the relation-incinerating impacts of Planck overshoot thinking. Recognizing that ordinary classical physics fully demonstrates the emergence of space and time is the only path that restarts Einstein’s 1905 insights and leads theoretical physics out of half a century of math-first, physics-never wandering. The first step in re-analyzing everyday spacetime is to eliminate reflexive xyzt thinking, which is inherently hyperclassical and leads to all sorts of math noises, e.g., Planck foam, that have no more to do with how ordinary time works than nuclear explosions have to do with mRNA transcription.

-----

[1] Feynman also merits inclusion in relationist circles since his fields emerged only by summing all possible histories of _direct_ particle interactions.

[The above is a repeat of my YouTube comment: https://youtu.be/SN8nTQiWOYY&lc=Ugzj404SErQEUdaJQWR4AaABAg Posted on: 2023-02-26.00:18 EST Sun]

[A PDF copy of the above is available at sarxiv dot org slash apa.]

Anonymous

"Could quanta have internal clocks?" That's a great question. It's related to another extraordinarily uncomfortable dangling physics thread, which is the nature of (and for many, the existence of) quantum wave collapse. If a quantum has an internal clock, then in some sense, it is _observing_ itself, even if only in a trivial, thumb-twiddling way. Its scale of length and duration could remain private until it encounters other similarly self-observing bundles, forcing them to agree to a shared scale. One might even rescale dramatically relative to the other. Wheeler popularized the idea of top-down observation, amping it up to almost theological levels. However, the opposite strategy of bottom-up observation scales better and generates fewer paradoxes. There's no need to get exotic since simple Newtonian action-reaction pairs are, if you think about it, almost unavoidably also a form of mutual observation. Spin and binding manipulate momentum and thus "locate" bits and pieces of quanta relative to each other.