
It’s vibrating at a microscopic level. You won’t be able to feel it at all.


What are your thoughts on how Bevy's developing UI toolkit compares (in terms of goals and use cases) to some of the other Rust efforts in the space (egui, xilem, iced, etc.)? Do you expect it will be specialized/limited to scene development for games?


About 7% of people who have ever lived are alive today. Still pretty lucky, but not quite winning the lottery.


Much luckier if you consider everyone who ever will live, assuming we don’t destroy ourselves.


To kill a mosquito, you need "a few tens of millijoules, delivered within a few milliseconds" [0], so let's say 10 W. To destroy the Earth (so that it turns into scattered dust and never reforms), you need about 10^32 J [1]; if we assume this is applied over maybe 100 s, the laser would be 10^30 W.

So the log10 scale goes from 1 to 30, where mosquitos die at 1 and the Earth dies at 30. The 2 PW laser in the article comes in at about 15.3. The Vulcan 20-20 project (set to complete in 2029) will reach about 20 PW, or a 16.3 on the mosquito-Death Star scale [2].

So on a log scale, we're over halfway to building the Death Star.

[0]: https://spectrum.ieee.org/backyard-star-wars

[1]: https://www.scientificamerican.com/article/how-much-energy-w...

[2]: https://news.sky.com/story/worlds-most-powerful-laser-to-be-...
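
Sketching the arithmetic (a quick illustrative program; the constants are the rough figures above):

    // Mosquito-Death Star (MDS) scale: log10 of laser power in watts.
    fn mds(power_watts: f64) -> f64 {
        power_watts.log10()
    }

    fn main() {
        println!("mosquito:     {:.1}", mds(10.0)); // 1.0
        println!("this laser:   {:.1}", mds(2e15)); // 15.3
        println!("Vulcan 20-20: {:.1}", mds(2e16)); // 16.3
        println!("Death Star:   {:.1}", mds(1e30)); // 30.0
    }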


If we're comparing lasers that fire for seconds against lasers that fire for femtoseconds, I think measuring watts is misleading.

Measured in plain joules, the mosquito is 0.04, the Earth is 10^32, and this laser is 50.

If we make a joules version of the 1–30 scale, the laser in the article would only score about a 4.


For sure; using total energy delivered makes a lot more sense. But then I think it would be better to use whatever tool humanity has that delivers the max total energy; let’s say Tsar Bomba.

Let’s say the mosquito is 1 again, so the Death Star is about 34. Tsar Bomba (roughly 2×10^17 J) comes in around 19.7. Over halfway again!
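
Sketched the same way, pinning the mosquito at 1 and taking Tsar Bomba as roughly 50 Mt ≈ 2.1×10^17 J:

    // Joules version of the scale, shifted so the mosquito lands on 1.
    fn mds_joules(energy_j: f64) -> f64 {
        let mosquito_j = 0.04_f64;
        energy_j.log10() - mosquito_j.log10() + 1.0
    }

    fn main() {
        println!("mosquito:   {:.1}", mds_joules(0.04));   // 1.0
        println!("this laser: {:.1}", mds_joules(50.0));   // 4.1
        println!("Tsar Bomba: {:.1}", mds_joules(2.1e17)); // 19.7
        println!("Death Star: {:.1}", mds_joules(1e32));   // 34.4
    }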

It’s kind of surprising that our max power output and max energy output are about the same on these scales.


So I could kill all the mosquitoes in my yard in one pulse with this laser.


If they get close enough together for that pulse to hit them all.


Just send me out there and aim the pulse at the one spot in the middle of my back I just can't quite reach.


> if we assume this is applied over maybe 100 s

I think this is the crux of the assumption right here. It sounds like this laser is applied for well under a nanosecond.

I think we're closer to "maybe killing a mosquito" than to "halfway to building a Death Star on a log scale" (which, I guess, is already much closer to a mosquito than to a planet).


Every step is a doubling of the previous step, no?

So true "half way to a death star" is step 29/30?


They used log10, so each step is 10x the previous; in a linear sense, the value doubles going from about 29.7 to 30, since log10(2) ≈ 0.3. But humans tend to improve tech at exponential rates: for things that are actually in a developmental stage, improvements keep stacking up.
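
Concretely:

    fn main() {
        // One "step" on a log10 scale is 10x; a doubling is only
        // log10(2) ~= 0.3 of a step.
        let a = 10_f64.powf(29.7);
        let b = 10_f64.powf(30.0);
        println!("{:.2}", b / a); // ~= 2.00
    }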

Say your "endstage" goal is a GPU with 200 billion transistors. On a linear scale, the current biggest GPU is only halfway there, and it took all of human civilization to get this far, so you might expect it to take another civilization's worth of effort to reach 200B. In reality, we'll have that in a couple of years with our current civilization.


> humans tend to improve tech at exponential rates

A hypothetical "death star" project like this would require improvements in energy generation, storage capacity, etc., which haven't improved in anything like the way transistor production has (and which are also much more limited by physical realities, such as the specific heat, enthalpy of combustion, etc. of materials).


Yes, lasers with extremely high sustained power still have a hard time competing with hypersonic projectiles in energy delivered. The difference is being able to throw nuclei at the problem.


This is awesome! Thank you!!

I’m still not sure what a 15.3 on the MDS scale can destroy, but I am sure the Emperor will be pleased to hear that we are halfway to building the Death Star.


The "on a log scale" is doing a fair amount of lifting here, I think.


Next question: on what planet/moon should the mosquito be to be _just_ safe from the laser?


Well, mass scales as the cube of radius, and we have about 15 orders of magnitude to work with, so it should be an object on the order of tens to hundreds of meters in radius. But as noted, the duration of firing matters as well. Given https://news.ycombinator.com/item?id=44054239, the actual laser can only vaporize much smaller things.
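
Back-of-envelope, using the roughly 15 orders of magnitude of mass available:

    fn main() {
        // Mass scales as radius cubed, so 15 orders of magnitude of
        // mass is 5 orders of magnitude of radius.
        let earth_radius_m = 6.4e6_f64;
        let radius_m = earth_radius_m / 10_f64.powf(15.0 / 3.0);
        println!("{:.0} m", radius_m); // ~64 m
    }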


Thank you for this, I will be using this for the rest of my life!


I don’t like log scales, they’re not intuitive.


A company called InventWood (based on research out of UMD) is creating “densified” wood that solves a lot of these problems. They have a process that collapses the cell walls in wood and compresses it to a quarter of its original thickness, which gives something like a 10x increase in tensile strength, making it stronger than some commonly used structural steels by both volume and weight. It’s also significantly harder than natural wood (nearly as hard as the carbon steel people use for knives), doesn’t warp, and is resilient to impacts.

My intuition is that trees need wood to serve purposes beyond structural integrity: it has to transport water and nutrients. But for building, we don’t care about those channels, and it’s better to collapse them to encourage stronger hydrogen bonding between cellulose chains.

It sounds like a lot of the benefits of “old growth” wood can be manufactured now. This is probably a good thing for preserving nature: there’s greater demand for wood with these properties than there is supply of old trees. Better to leave the great old trees intact and do cool engineering on cheap trees that grow quickly.

Recent Hacker News discussion:

https://news.ycombinator.com/item?id=44020832


> It sounds like a lot of the benefits of “old growth” wood can be manufactured now.

Yes, at greatly increased costs, both economic and ecological.

Fast-growth timber farms may produce an inferior product, but we've already compensated for that in design. Using 112% of a material that provides 90% of the "goodness" is a viable path; so is buying a Ford* every 5 years instead of a Mercedes every 10**. (* Ford haters: :%s/Ford/Chevy) (** MB haters: shaddup, it's just an analogy.)

Until the overhead is lower than growing yellow pine, this is a niche product.


Modern mini-LED monitors are very good. The “local” dimming is so local that there isn’t much light bleed even in worst-case situations (a cursor over a black background is where bleed is most apparent).

The advantage of mini-LED is brightness. For example, compare two modern Asus ProArt displays: the mini-LED (PA32UCXR) at 1600 nits and the OLED (PA32DC) at 300ish nits. The OLED is 20% more expensive, and the two monitors have otherwise comparable specs. Brightness matters a lot for HDR because if you’re in a bright room, the monitor’s peak brightness needs to overpower the room.

Plus, for color-managed work, I think LED-backlit monitors are supposed to retain their calibration well; OLEDs have to be recalibrated frequently.

And so-called micro-LEDs are coming soon, which promise to make the “local” dimming zones so small they’re imperceptible. I think the near-term future of displays is really good LEDs.


You can definitely get around a lot of the pain points by using owned types like String as much as possible instead of borrowed types like &str. This is even generally recommended; there’s often no benefit to using the more advanced features of the language.

Usually the advanced features come in when you’re looking for better performance. Reference (borrowed) types help a lot there: they eliminate the deep copies (and allocations) that come from calling .clone() in a loop, for example.

Library authors usually don’t have the luxury of knowing how their code will be used downstream, so diligent authors try to make the code reasonably performant and use these advanced language features to do so. You never know if the consumer of your library will use your function in a hot loop.
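
A minimal sketch of the tradeoff (the function names are hypothetical, purely for illustration):

    // Owned parameter: simple, but callers who keep their value must
    // clone it at each call site.
    fn shout_owned(s: String) -> String {
        s.to_uppercase()
    }

    // Borrowed parameter: callers pass a reference; no clone needed.
    fn shout_borrowed(s: &str) -> String {
        s.to_uppercase()
    }

    fn main() {
        let name = String::from("ferris");

        for _ in 0..1000 {
            // A fresh allocation and copy on every iteration.
            let _ = shout_owned(name.clone());
        }

        for _ in 0..1000 {
            // Just a pointer; no per-iteration allocation.
            let _ = shout_borrowed(&name);
        }
    }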


A simple extension of the Bloom filter called “block Bloom filters” fixes this. The idea is that the first hash is the index of a small constant-size block of your array, and the rest of the indices are within that block.

So a single query to the filter should only have one or two cache misses, depending on the size of your block. Or even if your block is larger than a cache line, you can probably issue all the loads at once and only pay for the latency of one memory access.

The downside of doing this is slightly more space usage relative to simple Bloom filters. I’d almost always reach for block Bloom filters, though, once the filter becomes a significant fraction of cache size.

I implemented block Bloom filters over fairly large (~GB) bit arrays and saw roughly 35 ns per query. They’re excellent data structures, pretty much as fast as you can get for approximate membership tests (though other filters have better space-time tradeoffs).
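
For the curious, a minimal sketch of the block idea, assuming 64-byte (cache-line-sized) blocks and a caller-supplied 64-bit hash. This is illustrative, not the implementation mentioned above:

    struct BlockBloom {
        blocks: Vec<[u64; 8]>, // each block is 64 bytes: one cache line
        k: u32,                // probe bits per key
    }

    impl BlockBloom {
        fn new(num_blocks: usize, k: u32) -> Self {
            Self { blocks: vec![[0u64; 8]; num_blocks], k }
        }

        // The hash's first use picks the block; cheap remixes of the
        // same hash then pick bits inside that one block.
        fn insert(&mut self, hash: u64) {
            let len = self.blocks.len();
            let block = &mut self.blocks[(hash as usize) % len];
            let mut h = hash;
            for _ in 0..self.k {
                h = h.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(31);
                let bit = (h % 512) as usize; // 512 bits per block
                block[bit / 64] |= 1u64 << (bit % 64);
            }
        }

        fn contains(&self, hash: u64) -> bool {
            let block = &self.blocks[(hash as usize) % self.blocks.len()];
            let mut h = hash;
            (0..self.k).all(|_| {
                h = h.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(31);
                let bit = (h % 512) as usize;
                (block[bit / 64] & (1u64 << (bit % 64))) != 0
            })
        }
    }

    fn main() {
        // 2^16 blocks of 64 bytes = a 4 MiB filter with 8 probes per key.
        let mut filter = BlockBloom::new(1 << 16, 8);
        // Real code would hash its keys; an integer stands in here.
        filter.insert(0xDEAD_BEEF);
        assert!(filter.contains(0xDEAD_BEEF));
    }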


The Underway app is basically an interactive version of this (NYC-specific). It’s just the transit map, but you can click on stations to see current arrival/departure times (“3min”) and MTA notices for lines going through that station.


I think that regardless of what references you have, Rust frees values at the end of their lexical “scope”.

For example, in the linked code below, x is clearly unused past the first line, but its “Drop” implementation executes after the print statement at the end of the function.

The takeaway is that if you want a value to drop early, just explicitly `drop` it. The borrow checker will make sure you don't have any dangling references.

https://play.rust-lang.org/?version=stable&mode=debug&editio...
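
The playground link is truncated above, but a minimal reconstruction of the idea (with a hypothetical Noisy type) looks like:

    struct Noisy;

    impl Drop for Noisy {
        fn drop(&mut self) {
            println!("dropping x");
        }
    }

    fn main() {
        let x = Noisy; // x is never used after this line
        println!("end of function");
        // Prints "end of function" first, then "dropping x": x lives
        // to the end of its lexical scope, not to its last use. To
        // free it earlier, call drop(x) before the print.
    }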

In general, I think "lifetimes" only exist in the context of the borrow checker and have no influence on the semantics of Rust code. The language was designed so that the borrow checker pass could be omitted and everything would compile and run identically.


This is correct.

