
Several of her first-or-sole-author minimal length quantum gravity phenomenology papers have more than a hundred citations:

https://scholar.google.com/citations?user=NaQZcyYAAAAJ&hl=en

and if nothing else, that's strong evidence that she has made a contribution to academic dialogue in that area.

Hossenfelder et al. 2003, in particular, is quite striking for an early-career researcher: <https://scholar.google.com/citations?view_op=view_citation&h...>. Also noteworthy are several early publications on either side of her 2003 doctoral thesis on microscopic black holes in large extra dimensions. In that period, numerous co-authors, reviewers, and editors supplied indirect evidence against your claim that her papers "were pretty bad".

Quite a lot of strong constraints on large extra dimensions came out of the LHC work eight to twelve years after these publications. Her old blog (many of its links have since rotted) captures some of that: <https://backreaction.blogspot.com/2011/06/extra-dimensions-a...>, for instance.

There is an enormous difference between being wrong and publishing nonsense.

> at least those I read

You could have usefully supplied a short annotated bibliography. It would certainly make your final sentence

> She is pure show

less likely to be seen as nonsense and more likely to be seen as wrong.

Whatever she has become in the past couple of years, she was certainly not pure show in the first eight or so years after her doctorate.


> Space becomes timelike. There is only forward ...

No. It's a fanciful analogy tied to a particular family of coordinate charts, particularly systems of coordinates which do not cross the horizon smoothly/regularly. The black hole interior is still part of a Lorentzian manifold: the proper orthochronous Lorentz group SO+(1,3) remains the symmetry at every point (other than spacetime points on the singularity). One can certainly draw worldlines on a variety of coordinate charts and add light cones to them, and observe that the cones interior to the horizon all have their null surfaces intercept the singularity. However, there's lots of volume inside the interior light cones (and on the null surfaces), and nothing really constrains an arbitrary infaller's worldline, especially a timelike infaller, to a Schwarzschild-chart radial line (just as nothing requires arbitrary infallers to be confined to geodesic motion).

The interior segment of a Schwarzschild worldline in general can't backtrack in the r direction, but there are of course infinitely many non-radial trajectories which don't backtrack in r either. (That is to say that all orbits across the horizon are plunging orbits; but one can also say that of large families of orbits that cross the ISCO, which is outside the horizon.)

A black hole with horizon angular momentum and general charges offers up different possibilities, as does the presence of any matter near (including interior to) the horizon (all of these also split the ISCO radius, move the apparent horizon, and may split the apparent and event horizons). The Schwarzschild solution, of course, is a non-spinning, chargeless, everywhere-vacuum solution, is maximally symmetric, and is usually probed with a test particle. An astrophysical system like a magnetized black hole passing through a jet from a companion pulsar, for example, does not neatly admit the Schwarzschild chart (and has no known exact analytical solution to the field equations). At least one such astrophysical binary is known (in NGC 1851, from TRAPUM/MeerKAT) (and if you don't immediately run away from A. Loeb papers like you should, he added his name to one that argues there are thousands of such systems in the galactic centre near Sgr A*, which itself is now known to have strong magnetic fields, thanks to EHT's study of the polarized ring).


You really laid the text on thick here to end up exactly conceding the point.


No, not really. To boil it down to thinner text, and to focus on your "Space becomes timelike", I think you are stuck on (a) a particular system of coordinates that (b) are not regular across the horizon and (c) thinking that either of these does anything physical to a freely falling test particle.

The huge flashing red warning sign on (a) & (c) is that you drop in the words "'upward' direction", "{toward, closer to, away from} the singularity" and most especially "slower": you are clearly implicitly slicing spacetime into space and time.

If you can handle thicker text, Unruh has a nice discussion of regular systems of coordinates at http://theory.physics.ubc.ca/530-21/bh-coords2.pdf Additionally, Martel & Poisson 2001 <https://pubs.aip.org/aapt/ajp/article-abstract/69/4/476/1055...> (arXiv version <https://arxiv.org/abs/gr-qc/0001069>) is a nice discussion of PG coordinates.

More visually, one can compare the light cone structure on a KS diagram like at <https://tikz.net/relativity_kruskal_diagram/> (just before the "Edit and compile if you like") and a randomly chosen but very typical diagram in Schwarzschild coordinates <https://www.researchgate.net/profile/Ward-Vleeshouwers/publi...> or (in German) <https://yukterez.net/f/einstein.equations/files/schwarzschil...> (hovering over a diagram displays some light cones). Which cone appears to topple over in their respective coordinate charts is pretty obvious, and should give you plenty of shaded grey to think about the coordinate-dependence of "Space becomes timelike".
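If it helps to see numbers, here is a small sketch (my own, not from the sources above) of the radial null-ray slopes in Painlevé-Gullstrand coordinates, where dr/dt = ±1 − sqrt(2M/r) with G = c = 1. The chart stays regular at the horizon, and inside it both slopes are merely negative rather than pathological:

```python
import math

def pg_null_slopes(r, M=1.0):
    """Radial null-ray slopes dr/dt in Painleve-Gullstrand coordinates
    for a Schwarzschild black hole (G = c = 1): dr/dt = +/-1 - sqrt(2M/r).
    Unlike the Schwarzschild chart, this one is regular at r = 2M."""
    beta = math.sqrt(2.0 * M / r)
    return (1.0 - beta, -1.0 - beta)  # (outgoing, ingoing)

for r in (8.0, 2.0, 0.5):  # outside, at, and inside the horizon for M = 1
    out_slope, in_slope = pg_null_slopes(r)
    print(f"r = {r}: outgoing {out_slope:+.3f}, ingoing {in_slope:+.3f}")
```

The light cones tilt smoothly inward as r shrinks; nothing discontinuous happens at the horizon, which is the point about coordinate-dependence.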


The relevant quantities are the curvature scalars near the horizon, and for a sizable black hole they are small there. As an example, consider the Kretschmann scalar (KS), the full contraction of the Riemann curvature tensor with itself (loosely, a sum of squares of its components). In Schwarzschild spacetime, KS is R_{\mu\nu\lambda\rho}R^{\mu\nu\lambda\rho} = (48 G^2 M^2)/(c^4 r^6), where R_{\mu\nu\lambda\rho} is the Riemann curvature tensor, and we can safely set G=1 and c=1 so KS = (48 M^2)/r^6. In this setting, KS is a coordinate-independent measure of the spacetime curvature. At r = 2M, the Schwarzschild radius, it reduces to 3/(4M^4), which becomes very small as we increase M, the black hole's mass. However, for any M, the Kretschmann scalar diverges at r = 0.

For a large-M black hole, there is "no drama" for a free-faller crossing the event horizon, as the KS gradient is tiny.
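To put numbers on the formula above, a small sketch in geometric units (G = c = 1, masses in solar masses; the Sgr A* figure is a round number I'm assuming):

```python
def kretschmann(M, r):
    """Kretschmann scalar K = 48 M^2 / r^6 for the Schwarzschild
    solution, in geometric units (G = c = 1)."""
    return 48.0 * M**2 / r**6

def k_at_horizon(M):
    """At r = 2M this reduces to K = 3 / (4 M^4): curvature at the
    horizon falls off steeply as the hole gets heavier."""
    return kretschmann(M, 2.0 * M)

print(k_at_horizon(1.0))     # stellar-ish hole, M = 1
print(k_at_horizon(4.3e6))   # roughly Sgr A*'s mass in solar masses
```

The second number is smaller by a factor of (4.3e6)^4, which is why a free-faller at a supermassive hole's horizon notices nothing locally.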

Since the crosser is in "no drama" free-fall he can raise his hands, toss a ball between his hands, throw things upwards above his head, and so forth. The important thing though is that all these motions are most easily thought of in his own local self-centred freely-falling frame of reference, and not against the global Schwarzschild coordinates. His local frame of coordinates is inexorably falling inwards. Objects moving outwards in his local frame are still moving inwards against the Schwarzschild coordinates.

You might compare with a non-freely-falling frame of reference. Your local East-North-Up (ENU) coordinates let you throw things upwards or eastwards, but in less-local coordinates your ENU frame of reference is on a spinning planet in free-fall through the solar system (and the solar system is in free-fall through the Milky Way, and the galaxy is in free-fall through the local group). That your local ENU is not a freely-falling set of coordinates does not change that the planet is in free-fall, and your local patch of coordinates is along for the ride.

A comparison here would be a long-running rocket engine imparting a ~10 m s^-2 acceleration to a plate you stand on. In space far from the black hole, you and the rocket engine would tend to move away from the black hole, but you'd be able to do things like juggle or jump up and down, and it'd feel like doing it on Earth's surface. This is a manifestation of the equivalence principle. Inside the horizon the rocket would still be accelerating the plate and you at ~10 m s^-2, but you, the plate, and the rocket would all be falling inwards.


The "river model" you mean isn't very general, as one eventually becomes interested in gravitating systems where there isn't a suitable congruence, e.g. in close binary compact objects. In such systems, one has to add terms analogous to turbulence, frustrating calculability (and the development of relativistic intuition). It also doesn't deal well with tides: for example, Schwarzschild infaller worldlines (even on a body like the moon, where there is no horizon) on widely separated radial trajectories converge in a way that is unlike the confluences of rivers and their tributaries. These models really only assist in understanding a single (spatial) radial line with possibly multiple successive "rafts" of matter bound to it (at different times), and in a set of PG-like coordinates useful for a particular distant observer. From there one symmetrizes: all observers and all radial lines are identical (spherical symmetry) and successive "rafts" all take the same radial line (static spacetime). Without this symmetrization, a black hole is an infinite number of slightly different rivers, and then you might as well solve the equations of motion in the standard way.

For understanding a handful of highly symmetrical systems, it might help a student understand some intuitions about what Killing vector fields and congruences (notably those made by choosing the velocity vector field of a set of geodesics) are, and tends to lead into an investigation of what the shift vector in a 3+1 decomposition represents.

For calculating things like the spherical orbits around, or the photon surface of, a real black hole like our galaxy's central Sgr A*, the river model seems outright unhelpful. For example, how does a river model help to understand https://duetosymmetry.com/tool/kerr-circular-photon-orbits/ ?

> time moving at a constant rate

This is another way of saying that you have sliced a Lorentzian (4d) spacetime into non-overlapping spaces organized along an arbitrarily chosen future-directed non-spacelike worldline. That is, this is a 3+1 slicing. We can slice along your worldline, or on that of a neutral hydrogen atom floating in intergalactic space, or on that of a high-energy cosmic ray, or on that of a CMB photon. It's arbitrary, and each can give markedly different spatial slices through the same spacetime (in particular, particle counts on slices will differ where the chosen time axes are anywhere accelerated with respect to one another).

When we decompose in this way, and take an <https://en.wikipedia.org/wiki/ADM_formalism> approach, we will tend to think of the shift vector as how we associate a point on one slice (everywhere in space at a coordinate instant in the spacetime) with its successor slice (everywhere in space at the next coordinate instant in the spacetime), which is helpful when spacetimes expand or contract in one or more spatial directions along the arbitrarily chosen time axis.

Braeck & Gron 2012 have a good bit of pedagogy about the river analogy and a fine set of references <https://arxiv.org/abs/1204.0419>, and of course point to Hamilton & Lisle 2008 as the originators of the analogy <https://arxiv.org/abs/gr-qc/0411060>.



If everything must be constrained to the lattice points, yes. However, empty space has high Boltzmann entropy: you can cut a patch of empty space from here and swap it for the same volume of empty space from there, and the two coarse-grained macrostates will be indistinguishable.

Expanding de Sitter quasi-vacuum has tremendous growth in entropy. Gibbons & Hawking give this (for 3+1d de Sitter) as a quarter of the horizon area: S_H = \frac{Area_H}{4} \sim H^{-2}, with the "quasi-" giving us increasing growth in the horizon area as DoFs exit the horizon, compared to classical pure de Sitter vacuum.
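As a rough numeric sketch (my own; taking today's Hubble rate, roughly 2.2e-18 s^-1, as an assumed stand-in for a de Sitter H), the quarter-area formula lands near the oft-quoted ~10^122 in units of k_B:

```python
import math

c = 2.998e8      # speed of light, m/s
lP = 1.616e-35   # Planck length, m
H0 = 2.2e-18     # assumed Hubble rate, 1/s

rH = c / H0                       # de Sitter horizon radius, m
area = 4.0 * math.pi * rH**2      # horizon area, m^2
S_over_kB = area / (4.0 * lP**2)  # Gibbons-Hawking S = Area / (4 lP^2)
print(f"S/kB ~ {S_over_kB:.1e}")
```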

I'm not sure how confining some species of matter to an expanding lattice is different from quasi-vacuum in the limit where the lattice spacing is large. I guess you have to abolish continuum spacetime in favour of a taxicab geometry with an analogue of dark energy? Otherwise, how does it differ from an isotropic homogeneous FLRW dust?


Oh, and by the way, entropy increase is about the evolution of a system given the forces in it. So yes, a universe with only repulsive forces would at first have low entropy (like ours did), and then, as the particles get forced into an ever emptier lattice, the universe would have more entropy. It's the forces, attractive or repulsive, that make possible the system evolutions that lead to higher entropy over time.


If you consider a universe with only protons in it, initially they would clearly all be forced into a low-entropy lattice and space would expand. Eventually the space between them would grow to be sufficiently large and empty that its own entropy would be enormous. But that doesn't deny that gravitational collapse is an entropy-increasing mechanism.


The (Newtonian) Shell Theorem is fairly sensitive to spherical symmetry. In General Relativity one can write down a metric wherein inside any boundary surface there is flat spacetime. It's easiest to do this for a spherical boundary, but one can work out a metric which is axisymmetric (e.g. oblate and spinning, or prolate and tidally deformed) and probably all sorts of other weird shapes following ideas from Gauss's Law for Gravitation. Writing down a metric for that is hard though -- really hard if the idea is to make it time-independent, and really really hard if the idea is to make it time-dependent but static (as in, a complex Gaussian surface doesn't relax into a more spherical shell). For example, bumps raised on each other by binary black holes will vanish after merger (or if they fly away on hyperbolic trajectories, having "grazed" each other), leaving you with a spherical horizon (if non-spinning) or an oblate one (if spinning).

Essentially to break spherical symmetry (or axisymmetry where there's spin) and keep it broken you have to introduce something like a dark energy. One can do that outside (retaining flat space inside) or inside (leading to the equivalent direction-dependent attraction of outside objects).


The bit of math is the Shell Theorem <https://en.wikipedia.org/wiki/Shell_theorem>.
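A quick Newtonian numeric illustration of that theorem (my own sketch): a Monte Carlo sum of inverse-square pulls from a uniform spherical shell on an interior test point cancels to zero, up to sampling noise.

```python
import math, random

def shell_force(test_point, n=200_000, R=1.0, seed=1):
    """Monte Carlo check of the Newtonian shell theorem: the net
    1/r^2 force from a uniform spherical shell of radius R on an
    interior test point should vanish (per unit G*m)."""
    random.seed(seed)
    fx = fy = fz = 0.0
    for _ in range(n):
        # area-uniform point on the sphere
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        sx, sy, sz = R * s * math.cos(phi), R * s * math.sin(phi), R * z
        dx, dy, dz = sx - test_point[0], sy - test_point[1], sz - test_point[2]
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        fx += dx / d3; fy += dy / d3; fz += dz / d3
    return (fx / n, fy / n, fz / n)

f = shell_force((0.4, 0.1, -0.2))
print(f)  # each component ~ 0, within Monte Carlo noise
```

Move the test point outside the shell and the cancellation disappears, which is the sensitivity to spherical symmetry mentioned above.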


No, here "entropic" is as in the entropic force that returns a stretched rubber band to its unstretched condition, which (as it tends to be scrunched a bit) is at a higher entropy.

https://en.wikipedia.org/wiki/Rubber_band_experiment

"The stretching of the rubber band is an isobaric expansion (A → B) that increases the energy but reduces the entropy"

[apologies for any reversed signs below, I think I caught them all]

In Verlinde's entropic gravity, there is a gravitational interaction that "unstretches" the connection between a pair of masses. When they are closer together they are at higher entropy than when they are further apart. There is a sort of tension that drags separated objects together. In Carney et al.'s approach there is a "pressure mediated by a microscopic system which is driven towards extremization of its free energy", which means that when objects are far apart there is a lower-entropy condition than when they are closer together, and this entropy arises from a gas whose pressure is lower when objects are closer together than when they are further apart. Pressure is just the negative of tension, so at a high enough level, in both entropic gravity theories, you just have a universal law -- comparable to Newton's -- where objects are driven (whether "pulled" or "pushed") together by an entropic force.

This entropic force is not fundamental: it arises from the statistical behaviour of quantum (or otherwise microscopic) degrees of freedom in a holographic setting (i.e., with more dimensions than 3+1). It's a very string-theory idea.
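For concreteness, here's a sketch (mine) of Verlinde's original heuristic chain -- bits on a holographic screen, equipartition, an entropy gradient for a displaced mass -- using CODATA-ish constants. It reproduces Newton's 1/r^2 law by construction, which is the "high enough level" agreement described above:

```python
import math

# CODATA-ish constants (SI), treated as given
G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23

def entropic_force(M, m, r):
    """Verlinde's heuristic: a holographic screen of radius r carries
    N = A c^3 / (G hbar) bits; equipartition E = M c^2 = N kB T / 2
    fixes the screen temperature T; moving the test mass m by dx
    changes the entropy by dS = 2 pi kB m c dx / hbar.  The entropic
    force is then F = T dS/dx."""
    N = 4.0 * math.pi * r**2 * c**3 / (G * hbar)  # bits on the screen
    T = 2.0 * M * c**2 / (N * kB)                 # equipartition temperature
    dS_dx = 2.0 * math.pi * kB * m * c / hbar     # entropy gradient
    return T * dS_dx

M_sun, M_earth, au = 1.989e30, 5.972e24, 1.496e11
F_ent = entropic_force(M_sun, M_earth, au)
F_newton = G * M_sun * M_earth / au**2
print(F_ent, F_newton)  # algebraically identical: the constants cancel
```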

The approach is very hard to make work unless the entropic force is strictly radial, and so it's hard to see how General Relativity (in the regime where it has been very well tested) can emerge.


The local-theory part of the Carney et al. paper (preprint <https://arxiv.org/abs/2502.17575>) is interesting in that it isn't obviously related to string theory / holographic entropic gravity. Instead, masses induce a spin polarization near them, which is a lower-entropy state. Two masses with two polarized spin-clouds will attract each other as the system tries to thermalize to a higher-entropy state. With careful choices of parameters, they can generate any central force, and they explore a particular choice which corresponds to Newton's 1/r^2 mutual attraction.

The paper cannot deal with fast-moving masses at all: it's not just the relativistic regime (where speeds are significant fractions of c) that is excluded; the masses must move more slowly than the thermalization proceeds. This is hugely restrictive.

Finally, comparing themselves to the traditional approach of quantizing perturbations (e.g. turning classical (General Relativity) gravitational waves into lots of spin-2 gravitons) the authors write:

  The gravitational interactions we observe at accessible
  length scales could in principle emerge in many ways from
  physics at the Planck scale ρ ∼ m_Pl/ℓ_Pl^3 ∼ 10^104 J/cm^3.
  Perhaps the simplest is that gravitational perturbations
  are quantized as gravitons, i.e., as another quantum field
  theory like the gauge bosons of the other fundamental
  forces in nature. This is a perfectly good effective
  quantum field theory; nothing in principle forces us to
  abandon this picture until energies near the Planck scale.
They also say that while their starting point was very different from the holographic picture:

  we find that the models have a range of free parameters,
  and in some parameter regimes become indistinguishable
  from standard virtual graviton exchange
Some of this will necessarily be driven by the need to be compatible with General Relativity in the weak-field limit. They are not compatible with strong gravity in General Relativity at present.

So while the idea is kinda interesting, I think they are putting the cart before the horse in asking what their model says about things like the interaction between gravitation and entanglement. That's simply unmeasurable by experiment right now whereas the very-well-understood relativistic precession of Mercury's perihelion is completely out of scope for this initial paper.


In 1+1 dimensions one can analyse the gravitational behaviour of an infinite line of ...-wire-resistor-wire-resistor-... with an adaptation of Bell's spaceship. Throwing away two dimensions eliminates shear and rotation (and all sorts of interesting matter-matter interactions) so we can take a Raychaudhuri approach.

We impose initial conditions so that there is a congruence of motion of the connected resistors, so that we have a flavour of Born rigidity. Unlike in the special-relativistic Bell's spaceship model (in which the inertial motion of each spaceship is identical save for a spatial translation), in our general-relativistic approach none of the line-of-connected-resistors elements' worldlines is inertial, and each worldline's proper acceleration points in a different direction but with the same magnitude. This gives us enough symmetry to grind out an expansion scalar similar to Raychaudhuri's, Θ = ∂_a v^a (<https://en.wikipedia.org/wiki/Raychaudhuri_equation#Mathemat...>). As an aid to understanding, we can rewrite this as (1/v) dv/dτ, and again in terms of a Hubble-like constant, 3H_0.
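As a sanity check on the Hubble-like reading of Θ (my own sketch, with an arbitrary assumed H0): for a velocity field v = H0 x in three spatial dimensions, the divergence Θ = ∂_a v^a comes out to 3H0, which a finite-difference estimate confirms.

```python
# Finite-difference check that div(v) = 3*H0 for the Hubble-like
# flow v(x) = H0 * x in 3 spatial dimensions.  H0 is arbitrary.
H0 = 0.07

def v(x, y, z):
    return (H0 * x, H0 * y, H0 * z)

def divergence(f, p, h=1e-6):
    """Central-difference estimate of the divergence of f at point p."""
    x, y, z = p
    dvx = (f(x + h, y, z)[0] - f(x - h, y, z)[0]) / (2 * h)
    dvy = (f(x, y + h, z)[1] - f(x, y - h, z)[1]) / (2 * h)
    dvz = (f(x, y, z + h)[2] - f(x, y, z - h)[2]) / (2 * h)
    return dvx + dvy + dvz

theta = divergence(v, (1.0, 2.0, 3.0))
print(theta)  # ~ 3 * H0 = 0.21, independent of the point chosen
```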

We can then understand Θ as a dark energy, and with Θ > 0 the infinitely connected line of ...-wire-resistor-wire-resistor-... is forced to expand and will eventually fragment. If Θ < 0, the line will collapse gravitationally.

> no nucleation sites

If Θ = 0 initially, we have a Jeans instability problem to solve. Any small perturbation will either break the infinite ...wire-resistor-wire-..., leading to an evolution comparable to Bell's spaceship: the fragments will grow more and more separated; or it will drive the gravitational collapse of the line. The only way around this is through excruciatingly finely balanced initial conditions that capture all the matter-matter interactions that give rise to fluctuations in density or internal pressure. It is those fluctuations which break the initial worldline congruence.

This is essentially the part of cosmology Einstein struggled with when trying to preserve a static universe.

In higher dimensions (2+1d, 3+1d) the evolution of rotation and shear (instead of just pressure and density) becomes important (indeed, we need an expansion tensor and take its trace, rather than use the expansion scalar above). A different sort of fragmentation becomes available, where some parts of an infinite plane or infinite volume of connected resistors can undergo an Oppenheimer-Snyder type of collapse (probably igniting nuclear fusion, so getting metal-rich stars in the process) and other parts separate; the Lemaître-Tolman-Bondi metric becomes interesting, although the formation of very heavy binaries early on probably militates against a Swiss-cheese cosmological model: too much gravitational radiation. The issue is that the chemistry is very different from the neutral-hydrogen domination at recombination during the formation of our own cosmic microwave background, but grossly a cosmos full of luminous filaments of quasi-galaxies and dim voids is a plausible outcome. (It'd be a fun cosmology to try to simulate numerically -- I guess it'd be bound to end up being highly multidisciplinary.)

