Hacker News | jasoneckert's comments

As someone who has used the Snapdragon X Elite (12-core Oryon) Dev Kit as a daily driver for the past year, I find this exciting. The X Elite's performance still blows my mind today - so the new X2 Elite with 18 cores will likely be even more impressive!

I can't speak to the battery life, however, since it is dismal on my Dev Kit ;-)


Unless they added low-power cores to it, it's probably not great. The chip design was originally for datacenters.

Didn't laptops with Snapdragon X Elite CPUs have pretty good battery life?

https://www.pcworld.com/article/2375677/surface-laptop-2024-...

The X2 Elite shouldn't be that different, I think.


they do but not extraordinary either.

ive a x elite and a bunch of other laptops

i like the mba 13 (but barely) and the zbook 395+

the x elite is just a bit slow, incompatible and newer x86 battery life isnt far off


Looks like the shift key isn't too reliable either.

If you read anything online, you'll realize that the battery life 'is' great. For example, LTT: https://www.youtube.com/watch?v=zFMTJm3vmh0

Reading youtube videos are ya?

They were joking. The dev kit didn’t have a battery.

They did add E-cores in X2.

Wait, you got one of those Dev kits? How? I thought they were all cancelled.

Edit: apparently they did end up shipping.


They got cancelled after they started shipping, and even people who received the hardware got refunded.

How's the compatibility? Are there any apps that don't work that are critical?

Surface Pro 11 owner here. SQL Server won't install on ARM without hacks. Hyper-V does not support nested virtualization on ARM. Most games are broken, with unplayable graphical glitches from the Qualcomm video drivers, though fortunately not all. Most Windows recovery tools do not support ARM: no Media Creation Tool, no Installation Assistant, and recovery drives created on x64 machines aren't compatible [EDIT: see reply, I might be mistaken on this]. Creation of a recovery drive for a Snapdragon-based Surface (which you have to do from a working Snapdragon-based Surface) requires typing your serial code into a Microsoft website, then downloading a .zip of drivers that you manually overwrite onto the recovery media that Windows 11 creates for you.

Day-to-day, it's all fine, but I may be returning to x64 next time around. I'm not sure that I'm receiving an offsetting benefit for these downsides. Battery life isn't something that matters for me.


You ABSOLUTELY do not have to create a recovery drive from a Snapdragon-based device. I've done it multiple times from x64 Windows for both an SPX and a Pro 11.

Hmm, thank you, that's good to know. Did you just apply the Snapdragon driver zip over the x64 recovery drive? It didn't work for me when my OS killed itself but I could easily have done something wrong in my panic over the machine not working. Since I only have the one Snapdragon device, I was making the assumption that it would have worked if I had a second one, but I didn't actually know that.

Yes, just copy the zip over like the instructions say.

Thanks again for this. Honestly, it may sway my choice on returning to x64 vs. sticking with ARM64 next time. The other issues are relatively minor and can be dealt with, but I didn't like thinking that I was one OS failure away from a bricked machine that I couldn't recover.

That's brutal... I wonder why the Apple Silicon transition seemed so much smoother in comparison.

One reason is that Apple sold subsidized devkits to developers starting around 6 months before Apple Silicon launched, while the X Elite devkit was not subsidized, came with Windows 11 Home (meaning that you had to pay another $100 to upgrade to Pro if you were an actual professional developer who needed to join the computer to your work domain), and didn't ship until months after X Elite laptops started shipping. As a result, when the X Elite launched, basically everything had to run under emulation.

I think another reason is Apple's control over the platform vs Microsoft's. Apple has the ability to say "we're not going to make any more x86 computers, you're gonna have to port your software to ARM", while Microsoft doesn't have that ability. This means that Snapdragon has to compete against Intel/AMD on its own merits. A couple months after X Elite launched, Intel started shipping laptops with the Lunar Lake architecture. This low-power x86 architecture managed to beat X Elite on battery life and thermals without having to deal with x86 emulation or poor driver support. Of course it didn't solve Intel's problems (especially since it's fabricated at TSMC rather than by Intel), but it demonstrated that you could get comparable battery life without having to switch architectures, which took a lot of wind out of X Elite's sails.


Apple had a great translation layer (Rosetta) that lets you run x64 code, and it's very fast. However, Apple being Apple, they are going to discontinue this feature in 2026; that's when we'll see some Apple users really struggling to go fully ARM, or just ditching their MacBooks. I know that if Apple does follow through with killing Rosetta, I'll do the latter.

It's a transpiler that takes the x86-64 binary assembly and spits out the aarch64 assembly only on the first run AFAIK. This is then cached on storage for consecutive runs.

Apple silicon also has special hardware support for x86-64's "TSO" memory order (important for multithreaded code) and half-carry status flag.

BTW, a more common term for what Rosetta does is "binary translation". A "transpiler" typically compiles from one high-level language to another, never touching machine code.
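
To make the translate-once-then-cache idea above concrete, here's a toy in-memory C++ sketch. This is not Rosetta's actual design: GuestAddr, HostCode, and translate_x86_block() are made-up names for illustration, and real Rosetta persists its output to disk rather than just memoizing in RAM.

    // Toy sketch of translate-once-then-cache; all names are hypothetical.
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using GuestAddr = std::uint64_t;              // address of an x86-64 block
    using HostCode  = std::vector<std::uint8_t>;  // stand-in for emitted aarch64

    std::unordered_map<GuestAddr, HostCode> translation_cache;

    HostCode translate_x86_block(GuestAddr) {
        // Stub: pretend we decoded x86-64 and emitted an aarch64 "ret"
        // (bytes C0 03 5F D6, the little-endian encoding of 0xD65F03C0).
        return {0xC0, 0x03, 0x5F, 0xD6};
    }

    const HostCode& get_translation(GuestAddr pc) {
        if (auto it = translation_cache.find(pc); it != translation_cache.end())
            return it->second;  // hit: reuse the earlier translation
        return translation_cache.emplace(pc, translate_x86_block(pc)).first->second;
    }

    int main() {
        get_translation(0x1000);  // first run: translate and cache
        get_translation(0x1000);  // later runs: served from the cache
    }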


Apple also implemented x86 memory semantics for aarch64 to allow for simpler translation and faster execution.

In HW?


Not OP, but I don’t think so. Rosetta inserts ARM barrier instructions in its generated code to emulate x86 memory ordering.
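
A standard message-passing litmus test shows what's at stake here. This is a minimal C++ sketch (not Rosetta code): under x86's TSO the two stores below, and the two loads, can never be reordered with each other, while a weakly ordered ARM core is allowed to reorder them unless barrier instructions or a hardware TSO mode intervene.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    // Message-passing litmus test. Under x86 TSO, seeing flag == 1
    // guarantees data == 1. With relaxed operations on a weakly ordered
    // ARM core, r == 0 is a legal outcome - which is why translated x86
    // code needs barriers or a hardware TSO switch. (The C++ compiler may
    // also reorder relaxed ops; the point here is the hardware model.)
    std::atomic<int> data{0};
    std::atomic<int> flag{0};

    void writer() {
        data.store(1, std::memory_order_relaxed);   // 1st store
        flag.store(1, std::memory_order_relaxed);   // 2nd store
    }

    void reader() {
        while (flag.load(std::memory_order_relaxed) == 0) {}   // 1st load
        int r = data.load(std::memory_order_relaxed);          // 2nd load
        std::printf("r = %d\n", r);  // x86/TSO: always 1; weak ARM: can be 0
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
    }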

Did it? From that list: SQL Server doesn't work on Mac and there's no Apple equivalent; virtualisation is built into the system, so that kind of worked but with restrictions; games barely exist on Mac, so the few that cared did the ports, but it's still minimal. There's basically no installation media for Macs in the same way as Windows in general.

What I'm trying to say is: the scope is very different / smaller there. There's a tonne of things that didn't work on Macs both before and after, and the migration was not that perfect either.


Out of the gate, Apple silicon lacked nested virtualization, too. They added it in the M3 chip and macOS 15. Macs have different needs than Windows though; I think it's less of a big deal there. On Windows we need it for running WSL2 inside a VM.

I'd guess the M3 features aren't required for nested virtualization, and it was more of a software design decision to only add the support when some helpful hardware features were shipped too. E.g. here's nested virtualization support for ARM on Linux in 2017: https://lwn.net/Articles/728193/

Nested virt does need hardware support to implement efficiently and securely. The Apple chips added that over time, eg M2 actually had somewhat workable support but still incomplete and hacky https://lwn.net/Articles/928426/ - the GIC (interrupt controller) was a mess to virtualise in older versions, which is different from the instruction set of the CPU.

On Windows, nested virtualization already existed before WSL: all the kernel and device-driver security features introduced in Windows 10, and made always-enabled in Windows 11, require running Hyper-V, which is a type 1 hypervisor.

So it is rather easy to end up dealing with nested virtualization, even for those of us who seldom use WSL.


Yes, nested virtualization has existed for a long time... on Intel. On Windows, it is not supported on ARM. For a long time it wasn't even supported on AMD! They added AMD nested virtualization support in Windows Server 2022!

Note that when the Windows host is invisibly running under Hyper-V, your other Hyper-V VMs are its "siblings" and not nested children. You're not using nested virtualization in that situation. It's only when running a Hyper-V VM inside another Hyper-V VM. WSL2 is a Hyper-V VM, so if you want to run WSL2 inside a Windows Hyper-V VM which is inside your Windows host, it ends up needing to nest.


Nested virtualization is not required for WSL2 or Hyper-V VMs. It's only required if you want to run VMs from within WSL2 (Windows 11 only) or Hyper-V VMs within Hyper-V VMs.

Yeah, I understand this and said it correctly in my post. We need nested virtualization to run WSL2 inside a VM: this is a Linux VM inside a Windows VM inside a Windows host. WSL2 is already a VM, so if you want to run that inside a VM, it requires nested virtualization. Nested virtualization is one of those features that people don't know about unless they need it, and they find out for the first time when they get an error message from Hyper-V. If you have a development VM on a system without nested virtualization, you're stuck with WSL1 inside that VM, or using a "sibling" Linux VM that you set up manually (the latter was my actual solution to this issue).
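
For reference, on x64 hosts that do support nesting, exposing virtualization extensions to a guest is a documented one-liner run from the host (the VM name here is a placeholder); on ARM64 hosts, this is exactly where you hit the wall described above:

    # Run on the Hyper-V host while the VM is powered off; "DevVM" is a placeholder.
    Set-VMProcessor -VMName "DevVM" -ExposeVirtualizationExtensions $true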

For one thing, Apple dropped 32-bit support before they transitioned to ARM, while Windows compatibility goes back 30 years.

Actually, the Mach-O file format was multi-arch by design (on Windows we're still stuck with Program Files (x86)).

Anyway, before dropping 32-bit, they dropped PowerPC.

Another consideration: Apple is the king of dylibs - you're usually dynamically linking against the OS frameworks/libs, so they can actually plan their glue smarter so the frameworks still work in the native arch. (That was really important with PPC->Intel, where you also had big-endian issues...)


You also get "Program Files (ARM)" (including a complementary "SysArm32") on older arm64 systems too.

Apple already went through this before with PowerPC -> x86. They had universal binaries, Rosetta, etc. to build off of. And they got to do it with their own hardware, which includes some special instructions intended to help with emulation.

> Apple already went through this before with PowerPC -> x86

Not to mention 68K -> PowerPC.

Rhapsody supported x86, and I think during the PowerPC era Apple kept creating x86 builds of OS X just in case. This may have helped to keep things like byte order dependencies from creeping in.


Because it was handled by the only tech company left that actually cares about the end user. Not exactly a mystery.

Having a narrow product line helped Apple a lot. Similarly being able to deprecate things faster than business-oriented Microsoft. Apple also controls silicon implementation. So they could design hardware features that enabled low to zero overhead x86 emulation. All in all Rosetta 2 was a pretty good implementation.

Microsoft is trying to retain binary compatibility across architectures with the ARM64EC stuff, which is intriguing and horrifying. They, however, didn't put any effort into ensuring Qualcomm is implementing the hardware side well. Unlike Apple, Qualcomm has no experience in making good desktop systems, and it shows.


> Apple also controls silicon implementation.

People sometimes say that as if it came without foresight or cost or other complexities in their business.

No, in the end they are hyper strategic and it pays off.


I didn't say otherwise. They probably realized they could pull off a complete desktop CPU design at the latest with the iPad, probably earlier. They were probably not happy using Intel chips, and their business strategy has always been controlling and limiting HW capabilities as much as possible.

Given how Apple makes its hardware maintenance-hostile and secures it against their own end customers, no.

Because Apple controls everything vs. the Windows/Linux world where hundreds (thousands?) of OEMs create things?

I agree with you on the Windows side.

Linux is different. Decades of being tied to x86 made the OS way more coupled with the processor family than one might think.

Decades of bugfixes, optimizations and workarounds were made assuming a standard BIOS and the ACPI standards.

Especially on the desktop side.

That, and the fact that SoC vendors are decades behind on driver quality. They remind me of the NDiswrapper era.

Also, a personal theory I have is that people have unfair expectations of ARM Linux. Back then, when x86 Linux had similar compatibility problems, there was nothing to compare it with, so people just accepted that Linux was going to be a pain and that was it.

Now the bar is higher. People expect Linux to work the way it does on x86, in 2025.

And manpower in FOSS is always limited.


Linux runs perfectly on MIPS, Power, SPARC, obviously ARM - cue the millions of phones running Linux today - RISC-V, and at least a dozen other architectures with little to no users. It's absolutely not tied to x86.

> Decades of being tied to x86

This doesn't pass the smell test when Linux powers so many smart or integrated devices and IoT on architectures like ARM, MIPS, Xtensa, and has done so for decades.

I didn't even count Android here, which uses the Linux kernel as a first-class citizen on billions of mostly ARM-based phones.


You are talking out of your ass here. If you make bold statements like this you need to provide evidence. Linux works fine on many platforms...

My Asahi Linux M1 MacBook Air would disagree with you.

Every Mac transitioned to ARM; only a very small number of Windows PCs are running ARM. So right now there's not a large user base to incentivise software to be written for it.

You are right that Windows on ARM cannot be called a success. But if you make Windows/macOS cross platform software then your software needs to be written for ARM anyway.

So if you support macOS/x86, macOS/ARM, and Windows/x86, then the additional work to add Windows/ARM is rather small, unless you do low-level stuff (I remember the Fortnite WoA port taking almost a year from announcement to release due to anticheat).


The first few months were a little tricky depending on what software you needed, but it did smooth out pretty quickly.

>Creation of a recovery drive for a Snapdragon-based Surface (which you have to do from a working Snapdragon-based Surface) requires typing your serial code into a Microsoft website, then downloading a .zip of drivers that you manually overwrite onto the recovery media that Windows 11 creates for you.

That's just creation of a recovery drive for anything that Microsoft itself makes. It's the same process for the Intel Surface devices too.

>no Media Creation Tool

Why would anyone care about that? Most people actively avoid Microsoft's Media Creation Tool and use Rufus instead.


Does Remote Desktop into the Surface work well?

When I'm home, I often just remote desktop into my laptop.

I'm wondering if remoting into ARM Windows is as good?


Yes, everything in user space works as expected. Note that NT has supported non-x86 processors since 1992.

According to some accounts, the name NT was even a reference to the Intel i860, which was the original target processor.

I have a similar Windows Arm64 machine (Lenovo "IdeaPad 5 Slim"), RDP into it works OK.

There is one issue I ran into that I haven't on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled, and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.

I had to use a trick to "cache" the password on the "server" end first, see https://superuser.com/questions/1715525/how-to-login-windows...


On the bright side, there's a good chance that Windows on ARM is not well supported by malware. There's a situation where you benefit from things being broken.

Most apps for dev work actually work:

- RStudio
- VS Code
- WSL2
- Fusion 360
- Docker

The only major exception is Android Studio's Emulator (although the IDE does work).


Yeah, I too was surprised to find the dev experience very good: all JetBrains IDEs work well, Visual Studio appears to work fine, and most language toolchains seem well supported.

JetBrains stuff (love it!) is built on Java, so I’m not terribly surprised. I don’t know how much native code there is though.

Plus they’ve been through the Apple Silicon change, so it’s not the first time they’ve been on non-x86 either.


Have I had any app compatibility issues? To quote Hamlet, Act 3, Scene 3, Line 87: "No."

The Prism binary emulation for x86 apps that don't have an ARM equivalent has been stellar with near-native performance (better than Rosetta in macOS). And I've tried some really obscure stuff!


For me it is too slow to run Age of Empires 2: DE multiplayer. Laptops with Intel chips that are more than ten years old are faster there.

I suspect that's due to the GPU and not due to Prism, because they basically just took a mobile GPU and stuffed it into a laptop chip. Generally, performance seems to be on par with whatever a typical flagship Android device can do.

Desktop games that have mobile ports generally seem to run well; emulation is pretty solid too (e.g. Dolphin). Warcraft III runs OK-ish.


The GPUs don't go toe-to-toe with current-gen desktop GPUs, but they should be significantly better than the GTX 650, the mid-range desktop GPU from 2012 that the game (2019) lists as recommended. It does sound like something odd is going on beyond just a lack of hardware.

https://www.videocardbenchmark.net/gpu.php?gpu=Snapdragon+X+...

https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+6...


That something odd is called GPU drivers. Even Intel struggled to get games running on their iGPUs (they recently announced that they are dropping driver development for all GPUs older than Alchemist).

There are also some architectural differences between mobile & desktop GPUs which may impact games that are not optimized for the platform: https://chipsandcheese.com/p/the-snapdragon-x-elites-adreno-...

That's certainly not what the reviews say.

Adobe apps that ran fine on Rosetta didn't work at all on Prism.

https://www.pcmag.com/articles/how-well-does-windows-on-arms...


Same here. I've not had any issues with my Surface Pro 11.

Ironically, the app I've had the most trouble with is Visual Studio 2022. Since it has a native ARM64 build and installation of the x64 version is blocked, there are a bunch of IDE extensions that are unavailable.

X Elite does not have AVX instructions (they are emulated instead)
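
Whether an emulated CPU "has" AVX ultimately comes down to which CPUID feature bits the emulation layer advertises to the app. As a hedged illustration, a GCC/Clang-style runtime probe (x86-64 builds only) looks like this:

    #include <cstdio>

    int main() {
        // __builtin_cpu_supports is a GCC/Clang builtin for x86 targets;
        // under emulation it reports whatever feature bits the emulator exposes.
        if (__builtin_cpu_supports("avx2"))
            std::puts("using the AVX2 code path");
        else
            std::puts("falling back to SSE");  // safest default under emulation
    }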

The same points made about battery life with Apple Silicon in this blog post equally apply to Snapdragon X Elite laptops: https://www.youtube.com/watch?v=zFMTJm3vmh0

SGI hardware was the sexiest hardware of the 1990s, and IRIX was the sexiest UNIX OS of the same period. They were the desire of nearly every UNIX nerd back then.

Remember "Erwin", the SGI O2 in the userfriendly.org comic? https://photos1.blogger.com/blogger/1065/737/1600/user%20fri...

The comic is now dead, but it had a long and amazing run.

I still keep an SGI O2, Octane, and Fuel around for nostalgia hits nowadays, and they never disappoint:

https://triosdevelopers.com/jason.eckert/trios/SGI_Fuel_Blen...

https://triosdevelopers.com/jason.eckert/trios/SGI_Fuel.jpg


I credit a lab full of SGIs at Ohio State with my transition from a career in mechanical engineering towards technology and information security. I just got transfixed by all of the things I could do and would spend my time in class distributing povray renders across the whole lab instead of paying attention to what I was supposed to be doing.

We also had a gorgeous lab of NeXT computers in a basement computer lab. Between just doing what seemed like magic in Mathematica and seeing how small I could make fonts on the laser printer, I also got scolded directly by Steve Jobs by replying to his default email in the mail client with 'read receipt' enabled. He said it was a violation of his privacy lol.

Edit: This was the email - https://blog.hayman.net/2025/05/06/from-steve-jobs-great-ide...


I too have an O2 along with some SGI flat panel screens, which was amazing tech in the world of CRT displays of yesteryear.

I've been trying to donate this stuff to local museums for a while but sadly, none seem interested. The O2 still boots without any issues, and at least one of the screens works. It would be a shame to just throw it all away.


Craigslist or fb marketplace for $200 and it will be gone. Don't make it free. Compromise to $100 for the right buyer.


I'm probably just nostalgic, but to me this hardware is a piece of history that's mostly forgotten or overlooked by anyone who wasn't working in IT at the time. That's why I've been trying to get museums to take it, because my hope would be they'd do something educational with the hardware. Alas, selling is indeed probably my only recourse if I don't want these things to end up on the heap (right away.)

I was there, wishing I could afford those amazing machines. My point with the pricing is to hopefully find someone who values the machine as more than the sum of its parts. Speaking of which, it reminds me of the YouTube channel Save It For Parts. Saving old tech is not very popular. We lost Living Computers: Museum + Labs after Paul Allen died.

> https://triosdevelopers.com/jason.eckert/trios/SGI_Fuel.jpg

Thanks for posting this; I'd never seen that particular model before. It looks like one of those kitschy, pod-supplied coffee or soda makers. I find the urge, very prevalent from the mid-90s to the mid-Aughts, to entomb even professional-grade electronics in cases that look like cheap toys or other household appliances both bizarre and fascinating, analogous to the 70s trend of styling (especially home) computing devices as faux-wooden furniture pieces, et cetera.


I think the irony of this is that the MIPS processors weren't that good for very long at all. (Famously, Toy Story was rendered on SPARCstations, even if the animation was prepared on SGIs.) The PA-RISC ones seemed to have the most staying power, but many people don't view the HP 9000 as sexy in the way SGI was.


The PA-RISC processors are really cool - the C8900 has a 64MB external shared L2, and a 768KB I & D cache per core. That's more I & D cache than any modern processor I am aware of, and more last-level cache than basically anything but an AMD X3D or the last 5-10 years of x86 server chips. It's much slower, of course, than any modern cache.

The older 8500 has an article available with a die shot: https://ardent-tool.com/CPU/docs/MPR/19971117/111505.pdf It's like 75% cache even back then. (Fast SRAM With Integrated CPU is extremely accurate, lol).


> The PA-RISC processors are really cool

I'm glad someone else thinks so!

There's a very interesting vid about the design of the ISA here: https://www.youtube.com/watch?v=C53tGHzp1PI and I think it's pretty clear they learned from early MIPS/SPARC. It's a shame it got abandoned in the Itanium push.

The Alpha was also a performance king in that era, but tbh I don't have the same nostalgia for it, although clearly it was executed very well.


(author) You can also count me in that list - working on a PA-RISC system was my first job out of college. I found the ISA very clean and they were strong performers. How HP got the wrong idea about VLIW, I'll never understand.


That's cool you worked on a PA-RISC system as a job! The ISA seems clean and the later superscalar designs were very advanced for their time.

I think an updated PA-RISC design would be awesome for modern workloads: huge caches with prefetch, a good branch predictor plus an 8-10-wide dispatch, and some sort of vector extension. A mix of AMD Zen+X3D & Apple ARM. To be fair, ISA doesn't matter as much these days; any core with similar features would probably perform well.

There's always someone who thinks VLIW or something similar is a good idea. So far that's been a bit tricky for a general-purpose CPU, or even some parallel designs.

* 100% personal opinion, I've never actually worked on HW design directly *


> 768KB I & D cache per core

PA-RISC has mostly always had large L1 caches (they used to be off-chip), and usually no L2 cache.

I know this bit of trivia, but I don't know the technical reasons/trade-offs for it.


On the other hand, it turns out both the PlayStation (1) and the Nintendo 64 did quite well for quite a long time.


PS2, as well. But games and game hardware tend to have very different CPU requirements than general purpose workstations.

The N64/PS1/PS2 (and others) weren't exceptional for very long, if ever, in terms of CPU power. They relied on dedicated graphics hardware, low price, ease-of-use, a business model that allowed for selling the base hardware at a loss, and devs optimizing for a fixed platform to stay competitive for 5-10 years as PC hardware improved.


I seem to recall the CPU in the N64 was specced to be something like 75% of the performance of a Pentium 90 but for 20% of the price. The PS1 doesn't even have floating point. When the PS2 was released it felt like x86 was advancing faster than ever, so whatever impressive performance edge it had lasted for about five minutes.

In all cases it's hard to argue that MIPS devices were sold on the strength of their CPUs from the mid 90s onwards.


The N64 was effectively an Indy, but priced as a game console. Its devkit was even an Indy (at first).


> The N64 was effectively an Indy

Not really, no. The memory system and graphics systems are completely different, and the CPU is a different MIPS processor from the one in any SGI desktop. Some devkits did involve having whole subsystems on expansion boards in Indys, though.

It should say something that when the Indy was announced the quip was "It's an Indigo without the go" so even had the N64 been Indy based it would not have been noted for CPU performance.


> I think the irony of this is the MIPS processors weren't that good for very long at all.

AFAIK SPECint and SPECfp would like to politely disagree. /s


Didn't PA-RISC lead MIPS, and everyone else, on those? When the already-obsolete machine being discussed in the OP was finally marketed in mid-1996, its int/fp scores of either 8/10 or 9/11, depending on the clock speed, were not great compared to the available PA-8000 workstations that scored 12 on SPECint95 and 17 on SPECfp95. People bought SGIs because there was some critical app they needed that was only available on IRIX, and they tolerated the bad CPU performance because they didn't have a choice.


IRIX remains the best-looking OS imo.


I too keep an O2 (actually more than one), Octane, and a Fuel around. Plus an Indy or two.


> SGI hardware was the sexiest hardware of the 1990s, and IRIX was the sexiest UNIX OS of the same period.

Back in the 90s my best friend's stepfather happened to be the top dog of SGI Benelux (Belgium / The Netherlands / Luxembourg) so... my best friend had an SGI Indy (Indy, not Indigo) at home.

After school we'd go back to his place and have fun with the Indy. That thing already had a webcam (in, what, 1993?).

I remember a 3D demo of oil pipelines, some crazy music video with little balls bouncing on top of sound waves and... the terminal. I learned my very first Un*x command on that thing.

On one hand we had the Commodore Amiga with games nearly as good as the arcade, and on the other hand we had the Indy.

We of course also had our mediocre beige PC with an even more mediocre OS: DOS and then Windows.

Thankfully, a few years later Linux came out and saved us from mediocrity.


A note of caution: everything is relative, and details are important.

If you love what you do (artist, self-employed, etc.), a 996 culture can be considered a good thing, as a certain amount of "good" stress allows us to feel self-actualized.

The same goes for a 996 culture that provides for work-life balance. For example, working from home with flex time for 12 hours, where you get to take long breaks whenever you feel like it to run, walk the dog, eat, get coffee, etc., is quite enjoyable as well. Who cares if you're still replying to emails at 7pm if you can do this, right?

Added note: I find it very interesting that this was immediately downvoted. I'm interested in understanding why for those who wish to share their rationale and perspective.


If you want to work 996 and that is what makes you feel self-actualized - by all means, go for it, nobody is stopping you. May even allow you to get ahead of the pack (or maybe the quality of your work will suffer in your overworked state - big gamble!).

For me, the big problem in your post is the "996 culture". That means the expectation is that everyone is pushing forward with a similar intensity. Now, perhaps you were talking specifically about individual efforts given your examples of artist and self-employed, but when I think about culture, I think about groups of people, and in that context 996 is problematic.

It only provides work-life balance if there is not much of a "life" to balance, where taking a break once in a while is fulfilling enough. Maaaaaybe this can work in your early 20s, but it basically removes anyone with kids, hobbies, outside interests and responsibilities, and really, anyone with life experience out of the equation. It is a highly exploitative culture, sold under the guise of camaraderie, when anyone who has gone through one or more hype cycles can tell that the majority of these startups will fold with nothing to show for them other than overworked, cynical individuals and another level of normalization of exploitative practices.


Thanks for the reply - I see now how I missed the distinction between individual choice and systemic expectation. I was speaking more to personal situations (like artists or self-employed folks), but I see how referencing "996 culture" more broadly brings in serious issues of exploitation and exclusion. Your points about how this affects people at different life stages, and about the long-term costs, give me more to think about.


> I'm interested in understanding why for those who wish to share their rationale and perspective.

Because it overlooks the dynamics of power distribution.

When there’s a big discrepancy in power, the needs of one party feel justified, and the needs of the other feel like a whim.

Flexibility favors the employee, if and only if it is added on top of explicit office hours. Otherwise, it’s just vagueness that benefits whoever makes the decision of how you should fill them (i.e. your boss).


There are certainly people who'd allocate that kind of time to a particular interest if they had the opportunity, me included.


Likely at least two reasons:

- People simply disagree with you, especially this line: “Who cares if you’re still replying to emails at 7pm if you can do this, right?”

That might work for you but I imagine it left a sour note for some because emailing involves entangling other people into your personal hustle. This can perpetuate “work for show” (especially if you have any power or influence). If you want to silently code into the night and save all the evidence of this for the next morning, that’s one thing. Visible evidence of constant work can be very stressful and draining to others, however.

- HN leans left, weekend HN even more so. This whole thing can feel like “shit you do because we live in a ruthless society that only cares about money”. I don’t agree with the modern left on many things, but I’m definitely coming around to this one. It was - though perhaps in a slightly different context - the original Leftist-owned meaning of “woke”. It’s the idea that you suddenly wake up to the shitty sewer water you’ve been swimming in all your life and look around astonished at everyone else, who all seem to think it’s a perfectly clean and clear place to swim. I suspect some of your downvotes are because of this.

So, in short: you’re entitled to your opinion but it’s phrased as a bit of a lightning rod for those whose values deeply conflict with your own.


These are sage points that make sense to me - I can see how it might have come across as tone-deaf or misaligned with deeper values some people hold. Likewise, I've also noticed the same regarding left-leaning views here on HN (and when it comes to tech, Apple-leaning too). I'm a generally left-leaning person myself, but my flexible 996 job (over which I have complete control) affords me a view on it that others won't appreciate. Thanks for sharing!


My personal website breaks this each time (jasoneckert.github.io), which is both a letdown and a pleasant surprise.


So much traffic... looks like it's working now! https://www.fuckupmysite.com/?url=https%3A%2F%2Fjasoneckert....


Ah, yet another brave soul seeks to gamify the ancient art of Vim. Admirable.

But Vim is not merely a tool... it's a discipline. A lifestyle. It is not learned in an afternoon, nor in a weekend sprint of neon-highlighted tutorials. No, Vim is best learned like one reads a long, weathered tome: slowly, reverently, one page at a time.

It begins humbly: i to whisper your intentions. <Esc> :wq to seal your work with a sacred rite. Then, perhaps days or weeks later, a revelation: "Wait… I can delete four lines with 4dd?!"

You do not master Vim. You grow into it. Each new keystroke discovered is like finding a hidden passage in a familiar castle. What begins as a cryptic incantation eventually becomes second nature... muscle memory and magic intertwined.

So yes, make it a game. But know that Vim is not beaten. It is befriended over years, not minutes.


Anyone can learn :q to quit vim

But can anyone truly quit vim? No


This is an incredible achievement... not just for the technical depth, but for what it represents. Alyssa's work is nothing short of inspiring. The way she combined deep technical insight with years of dedication has not only brought open-source graphics to Apple Silicon, but also lit a fire under reverse engineers and open-source developers.

She has shown a whole new generation that curiosity and persistence can break barriers. I thoroughly enjoyed watching the developments these past several years. Massive respect to her and everyone who made this possible, and kudos on her new position at Intel.


[flagged]


What?


Just random internet bigotry - Alyssa Anne Rosenzweig is transgender. [1]

1: https://web.archive.org/web/20250520182445/https://rosenzwei...


[flagged]


The evidence of my eyes is two anonymous troll accounts that were just created for this thread. HN mods should really do something about it, it's way too easy to sign up and ban evade.


When someone tells you they prefer going by a nickname, do you also complain that they're forcing you to ignore the evidence on their paperwork and refuse?


"One of the things it means to be Canadian is to honour the rights and wishes of other people. That’s part of what makes it a wonderful place to live: most people genuinely believe in equality and respect for others, including people who don’t look like them."

In my opinion, this drives the narrative in this article, and is at the root of why there is little stigma in Canada surrounding MAID.


There is little stigma because Canadians are terrified of making a moral stand on absolutely anything that has been condoned by the state


I think the biggest takeaway from this blog post is that developers and other professionals should take more note of the tiling window managers available on Linux like Sway and Hyprland - they are insanely fast and customizable to exactly what we need to be more productive.

I'm a Sway user (ironically on Fedora Asahi Remix on a Mac) and I won't have it any other sway... er... I mean way.


I feel like I completely missed the boat on tiling WMs; I was a Linux user from '97 to 2002, and then for the most part Mac/Windows since then. I just deal with the chaotic madness of floating windows. I did try to spend a year with Linux on my work laptop using sway, and while tiling was just okay, I felt less productive compared to my colleagues who were wizzes with it. Also, using Ghidra with sway was incredibly painful, mostly due to the whole Java AWT thing, but it was a necessary part of my job.

I'm sure there are certain things I do that are inefficient with floating windows - even with macOS now having a bit of tiling management - but it's just easier to deal with on the brain. I'm not really organised or methodical for the most part, and I kind of like it that way.


I switched from macOS (from a 12-year-old first-generation retina MBP) to Arch and started out with Hyprland. It was really nice initially, while I mostly used terminals, a browser, or launched Steam. But when I needed to do some paperwork (taxes, stuff involving wide spreadsheets) I often ran into trouble, e.g. when I needed to read some numbers off a PDF quickly. Rearranging the tiling to have everything at an appropriate size was rather slow. I often use overlapping windows in such cases, where I only need to see parts of a document, and the floating tiles in Hyprland just didn't work for me (not as easy to arrange, so it felt clumsy). I moved on to KDE and that has been working great for half a year now. Maybe I'm missing some functionality or just didn't take the time to get used to it - stuff needed to get done ;)


I got a laptop recently where I installed Arch / Hyprland (not Omarchy) but I know what you mean about overlapping windows. I do this all the time on Windows where I overlap windows and then toggle some of them as "always on top" to optimize whatever workflow I'm doing at the time.

The good news is Hyprland supports this quite nicely. I don't know when you last tried it but it's easy to float windows as needed in a dynamic way. You can assign a keybinding to toggle floating on a specific window and then you can move and resize it while holding either mouse button.

It also has a feature called "pin" to make something always on top, which you can assign to a keybinding to toggle as needed. Floating windows already sit on top of tiled windows by default, so you only need to deal with this when you have 2+ overlapping floating windows.

Combining floating and pin together lets you overlap things in whatever way works best for you in a config-less way.

Optionally you can also pre-assign specific apps to always float or be pinned in your config file and toggle them with keybinds too.
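
For anyone wanting to replicate this, here's a minimal hyprland.conf sketch of the bindings described above (the key choices and the mpv window rule are placeholders, not recommendations):

    # Toggle floating on the active window; pin keeps a floater on top.
    bind = SUPER, V, togglefloating
    bind = SUPER, P, pin

    # Optionally pre-assign a specific app (matched by class regex) to always float.
    windowrulev2 = float, class:^(mpv)$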


If you want tiling but don't like windows being automatically resized, or having to do any resizing at all, try niri. It's a scrolling tiling window manager based on PaperWM. It is in the Arch repository, and a KDE plugin called Karousel also exists, built on the same PaperWM paradigm.


To me it’s not even the tiling. It’s the ability to switch focus to windows in a directional manner. Like super hjkl or whatever.

I have no idea how people are still using alt tab in 2025.
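
In sway, for instance, directional focus is a few lines of config; this sketch assumes the usual $mod variable and the Super+hjkl bindings mentioned above:

    # ~/.config/sway/config
    set $mod Mod4
    bindsym $mod+h focus left
    bindsym $mod+j focus down
    bindsym $mod+k focus up
    bindsym $mod+l focus right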


> I have no idea how people are still using alt tab in 2025.

Everything is full screen almost always. In a week I need windows tiled for maybe 2h.


This workflow is even easier on a Tiling WM.

These days I use niri which, at its core, is just Alt-Tab blown up into your actual desktop.


Seconding niri; it's easier to configure compared to other tiling WMs and has good out-of-the-box defaults.

Though you'll have to fiddle a bit with stuff like waybar, fuzzel and xwayland-satellite. But once you've configured that stuff you won't have to fiddle with it non-stop.

I'm currently running it on Fedora, to be clear.


The point is I don’t need any wm for this workflow. It just works on any box regardless of OS.


This sounds like madness to me. At the least, browser + editor to view hot reloading output or docs at the same time. Terminal for tailing output.


There's a second monitor for those times when it's an unquestionable benefit. Most of the time, having multiple windows open is an inefficient use of screen real estate. (I either have two or three panes in the IDE - the terminal is also there - a browser with the console open, some DB query tool with wide tables, or corp chat, which I explicitly do not want to see when I'm working on anything of substance.)


Komorebi on Windows is the exact same thing.


I think the reason why so many of us look up to the Woz in the tech world is that he is genuine, in an industry where we see so much of the opposite regularly - and we want to be the same.


There's an interesting job interview question: do you want to be more like Woz or Steve Jobs? Elon Musk's management style is very Jobs-like: motivates via manipulation and the wow factor of the cutting edge, has grand visions, yet knows what factories and the market can and can't handle, tries odd drugs, etc. However, Jobs rarely stuck his nose into politics; Jobs mostly just trolled about tech.


I'm definitely a Woz. I appreciate both Jobs and Musk despite their blemishes, but I must say where Musk has Jobs beat is vision. Jobs was content to give everyone a computer. Musk is trying to bring the optimistic future to life on all fronts.


Jobs's vision was seeing that hardware and software vertically integrated had a strong position in the market, despite being told to sell off the hardware division and focus on OS work. His vision was to never give out dividends or buy back stock: always plough the money right back into the company.

Musk has unbound optimism about his ability to succeed. He's good at selling that optimism to others. His decision making is often heavily swayed by public opinion and the market.

Also the whole fascism thing.

Of the two, Jobs actually accomplished what he set out to do. Musk's batting record is more spotty. He's got some wins (Falcon 9) but boy howdy does he have a lot of losses, and he's just gearing up to take increasingly more L's.


I really do wonder if this is still the case.

As a younger millennial, I am somewhat familiar with the legends of yore. But not as familiar as someone older who was around when the tech world was much smaller and more intimate, when people casually met a wild Stallman at random conferences.

Given how much bigger the software and tech world has gotten, how much time has passed, and how much things have changed, I wonder if people still see Wozniak as a tech hero and as part of casual tech culture knowledge.


Dude. Yes we still see him that way 1000%. It's just that there are a lot of tech people on this planet and they all have different ages and experiences, so they don't all think the same.

