I'm getting the impression that this worked because the LLM had hoovered up all the previous research on this topic and found a reasonable string of words that could pass as a hypothesis?
I think we are starting to get to the root of the utility of LLMs as a technology. They are the next generation of search engines.
But it makes me wonder whether, if we had thrown more resources at "traditional" search techniques on scientific papers, we could have gotten here without gigawatts of GPU work spent on it, and a few years earlier?
Maybe the best way to have the best of both worlds is to ensure well-established areas of research are open to “outsider art” submissions on the topic?
This plane doesn't look like it was made to produce a low boom. It has a very distinct von Kármán ogive [1] fuselage and typical delta wings. I would guess that its shape is primarily optimized for fuel efficiency at Mach 1.5 or above.
If you take a look at NASA's low boom demonstrator [2], you can see that it's much skinnier and the nose is crazy elongated. This is intended to break up the bow shock into multiple parts, thereby decreasing the amount of energy each one has.
Silly question, but would it be feasible to just equip a plane with a telescoping nose, deployed prior to supersonic flight, merely for this effect?
I don't think it implies damage, but it does imply that the exhaust pressure is lower than the outside air pressure (which you would not want for perfect efficiency). That's normal, though, because the first stage operates across a wide range of ambient pressures, and the nozzle can only be matched to one exit pressure. So Mach diamonds at liftoff should be expected.
This is most likely due to the absolute positioning. position: absolute uses the top-left corner of the closest "positioned" ancestor as the origin for its layout [1]. If you want that origin to be the top-left corner of the viewport, use position: fixed.
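A minimal sketch of the fixed variant (the .overlay class name is just for illustration):

  /* position: fixed uses the viewport as the containing block
     (ignoring positioned ancestors), so top/left here are
     offsets from the viewport's top-left corner */
  .overlay {
    position: fixed;
    top: 0;
    left: 0;
  }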
In addition to `position: fixed`, shouldn't it be top, left, height, width instead of top, left, bottom, right? In the second case, wouldn't it follow the top and left instructions and then take the necessary amount of space, ignoring right and bottom?
Can you talk about why separation of concerns is useful? I like it too, but I have a hard time articulating why I prefer it over keeping everything in the same file. I've started working on a project that uses React and Tailwind, and I've gotten pretty comfortable, but (ideological purity aside) it just isn't very enjoyable to use.
It's the contextual clarity for me. If I want to change the content, I can keep a specific syntax in mind and focus only on that. Same for JavaScript and CSS. I don't have to think about all of these things at once, only at the boundaries where they interact with each other. Every component has a separate responsibility, and they're composed in a specific way, but keeping them separate lets you reason about each one more easily. This is similar to the Single Responsibility Principle.
Specialization is another aspect. Take this example from the mizu docs:
<div %http="https://example.com">
That is supposed to make a `fetch()` call. OK, great, so how do I make a POST request? Oh, with a `.post` "modifier". OK, great, so how do I specify the body? Oh, with a `%body` directive. OK, great, but what if I want to use a different encoding than the ones provided by the library? How about binary data? How about sending a specific header based on a condition, or interpreting the response in a specific way?
There are thousands of these questions and possible limitations and pitfalls that just wouldn't exist if the library didn't try to reinvent JavaScript in HTML. Just use the right tool for the job that is already specialized for what you're trying to do, and your task will be much easier.
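For comparison, here's roughly what that request looks like with plain fetch(); the URL, the login check, and the token variable are made up, but every one of those questions has an obvious answer:

  // POST with a binary body and a conditional header, no library needed
  const headers = { "Content-Type": "application/octet-stream" };
  if (userIsLoggedIn) {                             // hypothetical condition
    headers["Authorization"] = `Bearer ${token}`;   // hypothetical token
  }
  const response = await fetch("https://example.com/upload", {
    method: "POST",
    headers,
    body: new Uint8Array([0xde, 0xad, 0xbe, 0xef]), // raw binary data
  });
  const result = await response.json();             // or .text(), .arrayBuffer(), ...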
BTW, I don't mind having components that contain JS, CSS and HTML in the same file. Svelte does this and I enjoyed using it. My problem is when these are mixed, or when CSS is entirely abandoned in favor of thousands of micro utility classes like Tailwind does.
It would be super cool if they eventually made this part of the phone OS, so all you would need to do is buy a headset and plug it in over USB-C. Same idea as Dex, different display form factor, but the same computer.
Then with Android Auto, Dex, and XR, you would just need a single computer you can carry with you.
Seems like the end state for personal computing. Instead of buying separate computers, you buy human interface devices and plug them in over USB-C.
Cloud sessions for everything, one unified OS for your phone, VR, PC, TV, etc.
Built from the ground up, it runs on both a $30 phone and a $6k computer. Do it on RISC-V or another open-source architecture.
Then I came back to earth and realized this would cost hundreds of billions to build and market.
Android is close. But ultimately you can't run any PC apps on it (although Dex + Remote Desktop to a Microsoft Cloud PC can fake it).
In my dream we don't even need USB-C; you're just limited to whatever device you're currently using. For example, your TV could probably play The Sims, or use cloud gaming. Your PC could also play The Sims, as well as AAA games.
We'd have to build a new OS (probably a Linux distro) which is heavily dependent on cloud services.
I'd be hyper-aggressive with the marketing. A $50 mini RISC-V PC gets you started.
What about wireless? Wireless earbuds are popular. People might find it a UX downgrade to need a cable running from their glasses to the phone in their pocket as they walk down the street (the demo shows AR navigation as someone walks down the street).
WiFi 6 does OK for VR. The current limitation, IMHO, is the hardware on glasses/headsets in terms of compute and power. Not too dissimilar to how wireless earbuds just weren't practical until, what, five years ago?
One feature of i3 and friends that I really relied on when I was using a laptop as my main computer was the tab mode. Being able to tab between windows on half of your screen while keeping your browser open in the other half was extremely useful.
I know BeOS had tabbed windows in the 90s in a floating window manager; it makes me wonder why this idea didn't catch on in the early 2000s.
Windows has started to add tabs to individual programs incrementally as part of their rewrites of core applications in the new GUI frameworks. Notepad got tabs and so did Explorer. So they see the utility.
Why hasn't tabbing been included as a core feature of the window manager outside of these niche tiling window managers for Linux?
This might be true on the user-facing side of things, but moving chip design in-house and transitioning their entire hardware lineup to the ARM ISA is a big innovation. M-line Macs can't be beat in efficiency or performance any time soon.
M-line Macs have been beaten in pure-performance terms since the day they were released, and that goes for everything from the M1 to the M3 Max: https://browser.geekbench.com/opencl-benchmarks
It's the single-core scores that are really impressive, but as time goes on, the performance gap is closing and not widening. It's been said many times before, but it was Apple's TSMC alliance that mattered more than their core design or ISA choice at the time.
> M-line Macs have been beaten in pure-performance terms since the day they were released
I don't think anyone would ever argue this based on performance alone, which is why I think your Geekbench link is irrelevant. A fair reading of the parent's comment about "efficiency and performance" would take those two aspects together, not independently.
That is, nobody is surprised a beefy server chip has more computing power than an M3, and nobody is considering an Nvidia H100 a competitor to an M3 in any sense.
The problem is, we can correct for wattage and it's still a blowout.
Compare the M2 Ultra's 5nm GPU performance-per-watt to a product like Nvidia's RTX A5000 laptop GPU on Samsung 8nm. The M2 Ultra is rated to pull over 290W at load, compared to the A5000's maximum TDP of 165W. Apple's desktop-grade iGPUs are less power-efficient than Nvidia's last-generation laptop GPUs, even when they have a definitive node advantage and higher TDPs.
You don't have to just look at the server chips. Compare like-for-like and you'll quickly find that Apple's GPU designs are pretty lacking across the board. The M3 Max barely swaps spit with Nvidia's 2000-series in performance-per-watt. When you pull out the 4000-series hardware and compare the efficiency of those cards, it's just plainly unfair. The $300 RTX 4060 outperforms Apple's $3000 desktop GPUs in performance and efficiency.
Why not? Apple's SoCs are in the 200-300W range that desktop systems occupy; it's only fair to compare them with their contemporaries. Nvidia is said to be developing desktop SoCs too, so we'll soon have systems for comparison, but I don't see how laptop dGPUs are an unfair comparison against desktop SoCs in any case. It's basically the same hardware with similar thermal constraints.
Plus, say we did include all of the dGPUs that Apple officially supports: not even the W6800X gets more performance-per-watt than an Nvidia laptop chip. It only highlights the fact that Apple goes out of its way to prevent Nvidia from providing GPU support on Mac.
CSS: https://developer.mozilla.org/en-US/docs/Learn_web_developme...
JS: https://developer.mozilla.org/en-US/docs/Learn_web_developme...
They even have courses for frameworks.
React: https://developer.mozilla.org/en-US/docs/Learn_web_developme...
Svelte: https://developer.mozilla.org/en-US/docs/Learn_web_developme...
Vue: https://developer.mozilla.org/en-US/docs/Learn_web_developme...