Having worked professionally with both C++ and Objective-C[0], I greatly prefer the latter. I'm not in love with either of them, but Objective-C feels so clean and well-thought-out compared to the insanity of C++.
That's ok, C++23 is going to add another group of features that will be half-adopted at best in legacy codebases that will totally fix everything this time for real.
[0] in the same codebase via the unholy chimera that is Objective-C++
> Also a 19,000 line C++ program (this is tiny) does not take 45 minutes unless something is seriously broken
Agreed, 45 minutes is insane. In my experience, and this does depend on a lot of variables, 1 million lines of C++ ends up taking about 20 minutes. If we assume this scales linearly (I don't think it does, but let's imagine), 19k lines should take about 20 seconds. Maybe a little more with overhead, or a little less because of less burden on the linker.
There are a lot of assumptions in that back-of-the-envelope math, but if they're in the right ballpark it does mean that Jai has an order of magnitude faster builds.
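For what it's worth, here's the whole estimate as a few lines of Python, in case anyone wants to poke at it. The 20-minutes-per-million-lines figure is just my anecdote, and linear scaling is the shaky assumption:

    # Back-of-the-envelope only: assumes build time scales linearly with line count
    # and uses my anecdotal ~20 minutes per 1M lines of C++.
    MINUTES_PER_MILLION_LINES = 20

    def estimated_build_seconds(lines: int) -> float:
        return lines / 1_000_000 * MINUTES_PER_MILLION_LINES * 60

    print(estimated_build_seconds(19_000))   # ~22.8 seconds for a 19k-line project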
I'm sure the big win is having a legit module system instead of plaintext header #include
> The net effect of this is that the software you’re running on your computer is effectively wiping out the last 10-20 years of hardware evolution; in some extreme cases, more like 30 years.
As an industry we need to worry about this more. I get that in business, if you can be less efficient in order to put out more features faster, your dps[0] is higher. But as both a programmer and an end user, I care deeply about efficiency. Bad enough when just one application is sucking up resources unnecessarily, but now it's nearly every application, up to and including the OS itself if you are lucky enough to be a Microsoft customer.
The hardware I have sitting on my desk is vastly more powerful than what I was rocking 10-20 years ago, but the user experience seems about the same. No new features have really revolutionized how I use the computer, so from my perspective all we have done is make everything slower in lockstep with hardware advances.
> The hardware I have sitting on my desk is vastly more powerful than what I was rocking 10-20 years ago, but the user experience seems about the same.
Not even.
It used to be that when you clicked a button, things happened immediately, instead of a few seconds later as everything freezes up. Text could be entered into fields without inputs getting dropped or playing catch-up. A mysterious unkillable service wouldn't randomly decide to peg your core several times a day. This was all the case even as late as Windows 7.
At the same time, it was also the case that you typed 9 characters into an 8-character field and you p0wn3d the application.
>Text could be entered into fields without inputs getting dropped or playing catch-up
This has been a complaint since the DOS days, and in my experience it has always been around. I'm pretty sure it's been industry standard since its inception that most large software providers make the software just fast enough that users don't give up, and that's it.
Take something like Notepad opening files. Large files take forever. Yet I can pop open Notepad++, from some random small team, and it opens the same file quickly.
I understand the attitude but I think it misses a few aspects.
We have far more isolation between software, we have cryptography that would have been impractical to compute decades ago, and it’s used at rest and on the wire. All that comes at significant cost. It might only be a few percent of performance on modern systems, and therefore easy to justify, but it would have been a higher percentage a few decades ago.
Another thing that’s not considered is the scale of data. Yes software is slower, but it’s processing more data. A video file now might be 4K, where decades ago it may have been 240p. It’s probably also far more compressed today to ensure that the file size growth wasn’t entirely linear. The simple act of replaying a video takes far more processing than it did before.
Lastly, the focus on dynamic languages is often either misinformed or purposefully misleading. LLM training is often done in Python, and it's some of the most performance-sensitive work being done at the moment. Of course, that's because the actual training isn't executing in a Python VM. The same is true for so much "dynamic language" code, though: the heavy lifting is done elsewhere, and the actual performance benefit of rewriting the Python bit in C++ or something would often be minimal. This does vary, of course, but it's not something I see acknowledged in these overly simplified arguments.
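As a toy illustration of the "heavy lifting is done elsewhere" point (NumPy here standing in for whatever compiled backend the dynamic-language code actually leans on; obviously not a benchmark of real training workloads):

    import time
    import numpy as np

    n = 10_000_000
    arr = np.arange(n, dtype=np.float64)

    t0 = time.perf_counter()
    slow = sum(x * x for x in range(n))   # the "slow dynamic language" part: a pure-Python loop
    t1 = time.perf_counter()
    fast = float(arr @ arr)               # same sum of squares, delegated to compiled code
    t2 = time.perf_counter()

    print(f"pure Python: {t1 - t0:.2f}s   NumPy: {t2 - t1:.3f}s")

The interesting part isn't the exact numbers, it's that the Python-level code is a thin layer over the part that actually burns the cycles.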
Requirements have changed, software has to do far more, and we’re kidding ourselves if we think it’s comparable. That’s not to say we shouldn’t reduce wastage, we should! But to dismiss modern software engineering because of dynamic languages etc is naive.
I feel like requiring software "engineers" to be actual capital E Engineers would fix a lot of problems in our industry. You can't build even a small bridge without a PE, because what if a handful of people get hurt? But on the other hand your software that could cause harm to millions by leaking their private info, sure, whatever, some dork fresh out of college is good enough for that.
And in the current economic climate, even principled and diligent SEs might be knowingly putting out broken software because the bossman said the deadline is the end of the month, and if they object, he'll find someone who won't. But if SEs were PEs, they would suddenly have standing, and indeed an obligation, to push back on insecure software and practices.
While requiring SEs to be PEs would fix some problems, I'm sure it would also cause some new ones. But to me, a world where engineers have the teeth to push back against unreasonable or bad requirements sounds fairly utopian.
I agree completely with you, in principle. The problem is that Engineers don't struggle with a mountain appearing in the middle of the river partway through construction.
It is a significantly broader problem. Processes are nearly always to blame for failure, not disciplines or people. For example, the sales team would need to come on board (don't sell anything that isn't planned or - better - completed), product would have to commit to features well in advance, the c-suite would need to learn how to say "no."
With all of that you would lose the ability to pivot. Software projects would take years before any results could be shown. Just how things used to be. Maybe this can be done without that trade-off, but I'm not aware of any means.
I'm a (relatively new) math teacher. I realized I don't like writing on the whiteboard, so I bought myself a cheap Wacom tablet off eBay. But then I couldn't find any existing Wacom-compatible software designed for my use case (teaching in front of a live class of ten-year-olds), so last weekend I "vibe-coded" an app for myself. I just used the app for the first time while teaching today, and it was great.
This codebase is probably terrible, because it was mostly written by AI. I manually edited certain bits, but there are large sections of the codebase I literally haven't looked at.
Is this a problem? The app works well for me!
My point here is, I'd really hate to gatekeep software development to a small group of "licensed" engineers. In fact, I want the opposite: to empower more people to make software for themselves, so they can control their own computers instead of being at the whims of tech giants. (This is also why I dislike iOS so much.)
I do also take your point about safety, but I think we need to acknowledge that not all software is security critical and it doesn't need to be treated in the same way!
> My point here is, I'd really hate to gatekeep software development to a small group of "licensed" engineers. In fact, I want the opposite: to empower more people to make software for themselves, so they can control their own computers instead of being at the whims of tech giants. (This is also why I dislike iOS so much.)
I 100% agree. I wouldn't want to gatekeep software development in general. I would only put the PE requirement on companies that are running a service connected to the internet that collects user data.
Want to make an application that never phones home at all? Go nuts. Want to run a service that never collects any sensitive data? Sure thing! Want to run a service that needs sensitive data to function? Names, addresses, credit card info? Yeah, you're going to need a PE to sign off on that.
Side note, I was a math teacher in a previous life. Congrats on the relatively new career, and thanks for your service.
> Want to make an application that never phones home at all? Go nuts. Want to run a service that never collects any sensitive data? Sure thing! Want to run a service that needs sensitive data to function? Names, addresses, credit card info? Yeah, you're going to need a PE to sign off on that.
Agreed, but I do think a tool like curl makes this a little complicated. To my knowledge, curl itself does not phone home or collect user data, but it's obviously security critical.
...or maybe it's not, now that I think about it. Curl is not end-user software. Maybe when other software uses curl, that software gets a PE sign off. But now this is starting to feel to me like another dumb compliance checkbox system. Is it?
I think end-users should always be empowered to be cavalier with their own cybersecurity. Organizations managing the data of others, however, should be held to a higher standard. If this means that an organization is using curl, they should have a PE responsible for auditing curl for security flaws.
What's the plan for when one of your vibecoded app's vulnerabilities is exploited and a stranger's penis appears in front of your class of ten-year-olds? Is "AI did it" going to save your job / keep you off the sex offender registry?
This app doesn't use the internet. I'm sure it could be used as part of some complex exploit chain, but now we're talking about a highly sophisticated attack.
I think part of the problem with that is that for physical engineering, there are clear, well-understood, deterministic, and enumerable requirements: as long as you, the engineer, understand them and take them properly into account, your bridges and buildings won't fall down.
With software engineering, yes, there are best practices you can follow, and we can certainly do much better than we've been doing...but the actual dangers of programming aren't based on physical laws that remain the same everywhere; they're based on the code that you personally write, and how it interacts with every other system out there. The requirements and pitfalls are not (guaranteed to be) knowable and enumerable ahead of time.
Frankly, what would make a much greater difference, IMNSHO, would be an actual industry-wide push for ethics and codes of conduct. I know that such a thing would be pretty unpopular in a place like Y Combinator (and thus HackerNews), because it would, fundamentally, be saying "put these principles ahead of making the most money the fastest"—but if we could start a movement to actually require this, and some sort of certification for people who join in, which can then be revoked from those who violate it...
If we could get such a cultural shift to take place, it would (eventually) make it much harder for unscrupulous managers and executives to say "you'll ship with these security holes (or without doing proper QA), because if you don't we make less money" and actually have it stick.
I think we're basically describing the same thing. Asking a software engineering process to be the same as a physical engineering process is not realistic. A PE for SEs would look more like a code of ethics and conduct than a PE for say civil engineering.
The key thing to borrow from physical engineering is the concept of a sign off. A PE would have to sign off on a piece of software, declaring that it follows best practices and has no known security holes. More importantly, a PE would have the authority and indeed obligation to refuse to sign off on bad software.
But expecting software to have clear, well-understood, deterministic requirements and follow a physical engineering requirements-based process? Nah. Maybe someday, I doubt in my lifetime.
I think about this a lot and I tend to agree. There's so much misinformation and ghost-in-the-machine thinking these days. I wish SWEs went out and sought the truth more. I'm not saying it doesn't happen; I just wish we had more engineering in this field.
This article ignores the fact that aside from being barred from manufacturing unlicensed NES games, Atari also failed to compete with any of its subsequent consoles after the VCS (although it did have some success with its PCs). The consoles were all flawed in some way. They were underpowered, didn't offer much over the previous iteration, or simply didn't have a strong enough library of games to compete. Atari was famously slow to realize that maybe people want more out of a game console than home ports of decade-old arcade games. On top of that, their original games that weren't home ports were mostly lackluster or were just outside of what gamers of the time were demanding.
Hard to say that Nintendo putting the kibosh on one arm of Atari's business "bled them to death" when all their other arms were bleeding from self-inflicted wounds.
EDIT: As pointed out below, I have mixed up Atari Corporation and Atari Games, so not all my criticism stands. Atari Games, publishing as Tengen, still largely put out ports of arcade games, but they were at least contemporary arcade games.
You seem to be confused (which is fair, this is a little confusing). In 1984, Warner Communications sold Atari's home and computer game division to Jack Tramiel, which became Atari Corporation. Atari Corporation was the company that made all the future Atari consoles (7800, Jaguar, etc) and computers (ST line). Atari Games, Atari's arcade game division, remained with Warner. This article is entirely about Atari Games, who had nothing to do with anything sold for the home market with the Atari name. They were entirely separate companies. The reason why they did business as Tengen was that as part of the split, Atari Games wasn't allowed to sell games to the home market using the Atari name.
I will say that the article is a bit inaccurate at the end. Atari Games kept using the Tengen name for several years after the lawsuit for publishing games on the Genesis. They only stopped in 1994 when Warner consolidated all of its game related brands under the "Time Warner Interactive" name.
Prior to the Warner / Tramiel sale, though, Atari management showed a stunning lack of foresight re: the lifecycle of their console platforms. If I recall properly, I've heard Al Alcorn (and / or perhaps Joe Decuir) talk about how the technical people pitched VCS as a short-lived platform, but management kept the product going far beyond its intended lifetime.
The 5200 was released in 1982, built on 1979 technology. The Famicom was released in Japan in 1983 but didn't make it to the United States until 1986. If Atari had made better controller decisions with the 5200, and perhaps included 2600 compatibility, I think Nintendo would have had a much harder row to hoe when they came to the US.
Then again, if Atari had taken Nintendo's offer to distribute the NES in the US...
(Some people write speculative fiction about world wars having different outcomes. My "The Man in the High Castle" is to wonder about what the world would have been like if Jack Tramiel hadn't been forced out of Commodore, if the Amiga went to Atari, etc.)
atari marketing was pretty f---ing terrible. objectively so
i had one of the home computer division marketing types come to my office one day, and was asked:
"can you print out all possible 8x8 bitmaps? we'd like to submit them to the copyright office so no one else can use them"
a stunning lack of knowledge of copyright law and basic exponential math. i didn't bother to point out that he really wanted all possible 8x8 _color_ bitmaps (there aren't enough atoms in the universe for this, by many orders of magnitude)
they didn't make very good decisions about consoles or computers, either
Atari made a lot of bad decisions, but what you were asked is not something you should expect someone in marketing to understand in general. There is only so much someone can get good at in their lifetime, so eventually you have to give up on understanding everything - and then you look like an idiot when you ask for something that is obviously unreasonable to someone who does know.
What was asked for is a reasonable ask. It just isn't possible to create.
No it isn't. You don't get any copyright protection on a volume of data produced by rules, such as "every possible 8x8 bitmap". Furthermore, you also don't get copyright protection against "copies" that were developed without reference to your work, as would always be the case for this idea. So there is no theoretical benefit from attempting it.
You are thinking as a lawyer, who for sure should have jumped in (if it got that far - it appears to have gone to engineering first, who shut it down for engineering reasons). Someone in marketing should not be expected to know or think of those details of the law. Maybe they will, but it isn't their job.
Specialization is a good thing. However, it means you will often have ideas that are bad because of something you don't know, even though within your lane they look good.
I'm shocked at how "few" pages printing all 8x8 bitmaps would actually require. Assuming full page coverage of an 8.5 x 11 sheet at 600 dpi, I'm only coming up with a touch over 548 billion pages. I expected it to be more. Legal-size paper drops that to about 430.5 billion pages.
I think your math is a little off (or maybe mine is).
I'll take a shortcut and imagine that you have an 8x8-inch square with no margins (68% of a borderless 8.5x11), so you have a grid of 600x600 bitmaps, which is 3.6e5 per page. If each pixel is only black or white, then you have 1.8e19 possible bitmaps (64-bit); divide the two and you have 5e13, or about 50 trillion pages. Fix it to use full coverage of the 8.5x11 sheet, and you get a grid of about 5.3e5 per page, for roughly 35 trillion pages instead of 50.
However, bring that up to 24-bit color or more (even 8-bit greyscale is ~1e154 bitmaps), and the exponential nature of the problem goes back to what the OP described.
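If anyone wants to sanity-check it, the whole estimate fits in a few lines of Python (assuming monochrome 8x8 bitmaps, full coverage of an 8.5x11 sheet at 600 dpi, and no margins between bitmaps):

    DPI = 600
    page_w_px, page_h_px = int(8.5 * DPI), int(11 * DPI)       # 5100 x 6600 printable pixels
    bitmaps_per_page = (page_w_px // 8) * (page_h_px // 8)     # 637 * 825 = 525,525 per page
    total_bitmaps = 2 ** 64                                    # every black/white 8x8 bitmap

    print(f"{total_bitmaps / bitmaps_per_page:.2e} pages")     # ~3.51e13, about 35 trillion

    # Add color and it's hopeless: even 8-bit greyscale is 256**64 bitmaps.
    print(f"{256 ** 64:.2e}")                                  # ~1.34e154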
I got an Atari 5200 when I was a kid, and the disappointment was immense, considering the marketing and hype that went into it. The controller made playing games very difficult. And the games were pretty bad as well. Later, I got a Commodore 64 and then also an NES, which just revolutionized home gaming in general.
Yeah, Atari really "imprinted" on a style of game in the 2600 era and could never move on from it.
Interestingly, despite the fact that the Atari of today is completely disconnected in personnel several times over from the Atari of yesteryear, it still is imprinted on that style of game. YouTube popped this tour of an Atari booth from 10 days ago that shows what the modern Atari is up to: https://www.youtube.com/watch?v=_6u65VTqPSc (It's a five minute video, and you can pop it on 2x and just get the vibe of what I'm talking about even faster than an article could convey.)
And they're still making games that basically are Atari 2600 games with better graphics. If you really, really like that, they've got you.
Nintendo could easily have gone the same route. The NES is vastly more powerful than a 2600 by the standards of the time, but looking back in hindsight a modern child might find them somewhat hard to distinguish. Nintendo also made a ton of money with platformers like Super Mario 3 and could easily have also imprinted.
Instead, they definitely invested in pushing the frontier outward. Super Mario World was release-day for the SNES, and was definitely "an NES game, but better", but Pilot Wings was also release-day for the SNES, and that's not an NES game at all. F-Zero, also a release title, is a racing title, but definitely not "an NES racing game but better". The year after that you get Super Mario Kart, which essentially defined the entire genre for the next 33 years and still counting, then Star Fox in 1993. Donkey Kong Country was a platformer, but definitely not a "rest on our laurels" platformer. I'm not mentioning some other games that could be debated. And then on the Nintendo 64, for all its faults, Super Mario 64 was again a genre-definer... not the very, very first game of its kind, but the genre-definer. And so forth.
Nintendo never fell into the trap of doing exactly what they did last time, only with slightly better graphics. Which is in some ways a weird thing to say about a company that also has some very, very well-defined lines of games like Mario Kart and Super Mario... but even then in those lines you get things like Super Mario Galaxy, which is neither "genre-defining" nor the first of its kind, but is also definitely not just "like what came before only prettier". It shows effort.
The gaming industry moved on... Atari never did. Still hasn't.
A child can certainly tell the difference between the best of the best 2600 games and Super Mario Brothers. The latter is recognizably a modern game. Many 2600 games are completely unplayable unless you read the manual.
“Never moved on” isn’t entirely fair to the modern incarnation of Atari, which is a relatively new company intentionally producing/licensing retro games, emulation, T-shirts, etc. It’s not that they haven’t moved on, it’s that this is what the new, youngish IP owners are doing with the brand. It’s a choice, not inertia.
It's not a literal point, it's an observation of how far we've come. A single texture blows away 2600 and NES games in size quite handily. The emulation effort for either is a sneeze compared to what we pour into a single frame nowadays. Compared to modern stuff they're both just primitive beyond primitive as far as a modern kid is concerned.
And as for your second paragraph, it has that thing I don't understand that so many people seem to have in their brains that if you explain why a thing is true, it is no longer true. I do not understand it. Explaining why they haven't moved on does not suddenly make it so they have moved on. They haven't moved on. Best of luck to them but I doubt it's going to work very well as a strategy in 2025 any more than it did in the 1980s.
"And as for your second paragraph, it has that thing I don't understand that so many people seem to have in their brains that if you explain why a thing is true, it is no longer true. I do not understand it."
This is an interesting observation. I've seen the same thing.
I think the clue is in the "it is a choice"... perhaps they are perceiving some sort of judgement of Atari implicit in your argument?
In other words, it can be true at the same time that (1) they are not moving on and (2) it is a choice.
Dude. There is no way in hell they probably even could move on. They probably simply do not have the organizational structure to develop modern games. They are like one of those companies making retro-style record players. That is their niche. Not trying to go toe to toe with Nintendo or PlayStation. Just a completely different business model.
Star Fox was made mainly by Argonaut Software, including the development of the Super FX chip. Only the scenario and characters were from Nintendo.
Donkey Kong Country was all Rare, except for use of the Donkey Kong character. If you look carefully at the DK sprite, you can even see design elements from Battletoads in there.
I agree with you up to a point. Epyx made the Lynx for Atari and it was by far better than the Game Boy for the gaming of its time. It had hardware-based sprite scaling. It could've done a Mario Kart type of game very well if someone had had the foresight to. But Atari didn't have Mario or any cutesy ideas that kids wanted. Nintendo was very smart in that they made kids the main target audience. Nintendo also knew parents would only spend a certain amount of money, so the Game Boy had the price advantage.
Man, I remember learning that the VCS/2600 had successors well after their time and was like "gee, I wonder how powerful those were". The difference between a 2600 and 5200 is a small step up, and the 5200 to 7800 is damn near imperceptible.
The 5200 was essentially the 8-bit Atari computer hardware on the inside. The controllers were different and there was no keyboard, but it was almost exactly the same (IIRC one graphics mode was different in its GPU [not related to the GPUs of today]). The 8-bit XEGS of 1987 was the same hardware as the computer.
They did have some interesting handhelds in later years, but didn't have enough good ideas to make them catch on.
Even as a young kid I noticed that split. The NES included some posters and flyers listing the original lineup of games, with the same visual design (even the cartridge stickers), and they were all simple arcade-like games. It already felt vintage even though this wasn't my generation, and then the feel of games changed rapidly and radically; it also merged with the current culture, with TV shows and movies of the 80s.
> And they're still making games that basically are Atari 2600 games with better graphics.
FWIW, various Atari incarnations did try to move on to newer stuff, but they all ended up with various levels of fail. The current Atari incarnation is probably the most (relatively) successful this side of the 2000s - though it's probably also (relatively) the smallest one.
I think they were close to closing shop before deciding to focus on the retro and indie gaming stuff.
I remember growing up Atari was always Atari. The games you knew on an Atari were the same years later / system to system. You knew what you were going to get and it was pretty stagnant tech wise.
Nintendo came along, and even across the lifespan of the NES games looked / got better year to year.
Plenty of late 2600 games look tons better than early games. If you look at Combat vs late life Activision games like Pitfall! or Keystone Kapers, it's a huge difference in visual quality.
It's still nothing compared to early NES games, of course. And late NES games certainly got a lot nicer looking.
It's not about visual quality so much as the complete inability of Atari to understand that people's taste in games had moved on. In 1986, Super Mario Bros was still the hottest game in the world, with over a million sold in the US alone. Platformers were in, big time. And the Atari 7800 launched with... Centipede.
Part of the problem is that the 7800 was a decent/good system in terms of tech when it was designed in ‘84, other than the sound, which I think was identical to the 2600.
But it was shelved for years because of the crash until the NES took off and suddenly it popped up again in ‘86 as “We’re Atari! Remember us! We’re alive! Buy us!” to try to cash in. Would that have been Tramiel?
However a couple of years in the 80s was an eternity in terms of tech. The games they had to sell were from the original launch plan, so they all felt a few years out of date in terms of mechanics too.
In ‘86 and ‘87 they had Joust, Asteroids, Food Fight, and Pole Position 2. All ‘81-‘83 Arcade games.
By then US kids had played Mario, Golf, Baseball, Duck Hunt, Excitebike, Ghosts and Goblins, Gradius, Castlevania, Kid Icarus, Metroid, and more.
The games on the 7800 were a full generation or two behind in terms of mechanics and complexity. There was no competing with what Nintendo and its 3rd parties had.
The joystick being famously bad wasn’t going to help anything. And 2600 compatibility probably wasn’t important by then when even a new 2600 was cheap.
So it didn’t do well at all.
Jeremy Parish has covered this saga and its games on his YouTube channel, comparing them to what else was available at the time of its actual launch.
Warner Atari had left an enormous amount of inventory behind. (Beyond what they infamously put into a landfill.) They also had screwed over the major chain stores, who wouldn't touch anything Atari.
Tramiel was cash-poor and resurrected the 7800/2600jr/XEGS/etc. just as a way to keep the lights on by selling old stuff while they launched the ST computer line. It wasn't really intended to be competitive, and it was sold cheaply through second-tier outlets.
(There was actually still tons of classic inventory when Tramiel Atari went under.)
That doesn’t surprise me. I know he was a “screw over anything if it will make the computers 0.05% more popular” guy. That was all that ever mattered in his mind.
I have a self hosted calendar solution. It was $15 at Staples, and it hangs in my kitchen. It wasn't a complete out of the box solution, though, I had to do a little work to customize it. I placed a pen cup with a few pens in it on the counter near the calendar to ensure it is always easy to modify.
>As someone who travels a lot, it’s also one of those things where statistically speaking, the chances of me being on a plane whenever some newsworthy event happens is higher than for the average person. I want my wife, friends, coworkers to know what flights I’m on and what cities I’m in. I’ve survived one terror attack, nearly dodged two others and a mass shooting. It’s one of those things where I want to make sure people who care about me can check in easily to see where I am.
It seems like the author has very different needs than you.
My wife uses this solution. When I am at work and someone wants to know if I can do a team dinner, I have to call her if she's at home, or tell them I'll get back to them. I never know if I'm free and finding out is inefficient at best.
Nearly the same thing here. We're scheduling for our daughter who, as she's getting older, has increasingly more scheduled events, too. If we're out of the house and my wife hasn't brought the paper calendar with her we simply can't commit to any plans. It's excruciating.
The cobbler's children go barefoot, so I haven't come up with a good solution for us... >sigh< It almost makes me want to hitch my wagon to a hosted product/service. Almost.
I used to do this with my wife, and it drove me crazy. Now we use a shared Google calendar, which works way better than prior solutions. Our unspoken rule: if there is an open time slot available, the first to enter it in the shared calendar wins. We're both responsible for entering all family-related appointments in the calendar as soon as they come up. There have been conflicts when either of us forgets to enter something into the calendar, but we just resolve the conflicts as usual. This was a game-changer from my point of view.
How does it handle notifications? How do you access it remotely? How do you share it for common events with others?
Comparing a paper calendar to a digital one is like comparing a Nokia 3310 to a modern smartphone. Yes, technically both are "a phone" but that's roughly where the similarities end.
> Share the webcam link or ring them up on the telephone if we are (jokingly) going traditional.
Why not make it more convenient for them and pipe the webcam stream through a script that OCRs it on change (or on a timer) and makes it available as an .ical file that others can import via a link?
Also, it's 2025; you can do it fast and robustly by piping it through an LLM instead, with a prompt like "turn this calendar image into an .ical file pretty please".
/s, but only a little :). I've honestly thought of doing that for real, except with a feed of my laptop screen, to sync availability (busy/free) info from my work calendar to my family one, because it's way easier than trying to argue the point with corporate IT.
(I've changed jobs since then, and now I'm using a lazy hack on top of some random Fastmail-friendly cloud app.)
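For what it's worth, a half-serious sketch of what that pipeline could look like in Python. pytesseract and icalendar are real libraries; the snapshot path and parse_events() are made-up placeholders, and parse_events() is exactly the bit you'd hand to an LLM with the "pretty please" prompt:

    # Half-joking sketch: webcam snapshot of the paper calendar -> .ics feed.
    # Assumes a snapshot is already saved to disk by something else.
    from datetime import date
    from PIL import Image
    import pytesseract
    from icalendar import Calendar, Event

    def parse_events(text: str) -> list[tuple[str, date]]:
        # Hypothetical: turn the OCR'd text into ("Dentist", date(2025, 6, 3))-style
        # pairs. In practice this is the part you'd replace with an LLM call.
        return []

    def calendar_photo_to_ics(snapshot_path: str) -> bytes:
        text = pytesseract.image_to_string(Image.open(snapshot_path))
        cal = Calendar()
        cal.add("prodid", "-//kitchen-calendar-cam//example//")
        cal.add("version", "2.0")
        for summary, day in parse_events(text):
            event = Event()
            event.add("summary", summary)
            event.add("dtstart", day)
            cal.add_component(event)
        return cal.to_ical()

    # open("family.ics", "wb").write(calendar_photo_to_ics("kitchen-cam.jpg"))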
The max size I want for a gaming device is probably that of the 3DS XL. Anything bigger than that definitely won't fit in my pocket. So for screen size, what I really want is as big a screen as you can fit on a device that size without making it uncomfortable.
Nintendo handhelds from GB to New 3DS XL were mostly very pocketable (except for 2DS.) I am still a fan of the 3DS XL form factor, and the dual screen clamshell design. DS and GBA SP also had great clamshell form factors. 3DS was also fun to carry around for streetpasses.
But I really like the Switch in handheld mode, and I actually want a bigger screen since for me it's more of a baggable rather than a pocketable, and many games have tiny text or UI elements sized for docked/TV mode rather than handheld mode.
I don't mind the existence of baggable handhelds. I own both a Switch and a Steam Deck, and the Steam Deck won't even fit in the oversized pockets of cargo shorts. But I wish there were room in the market for both baggables and pocketables[0]. I'd 100% buy a scaled-down Steam Deck that is roughly 3DS XL-sized.[1]
[0] For a while Nintendo was claiming that the Switch wasn't their new portable console, it was just a console that happened to be portable. There would still be a successor to the 3DS at some point. I guess you could argue that's the Switch Lite, but I still feel a little betrayed to not get a smaller next-gen handheld.
[1] I'm aware there's a flourishing market of mostly Chinese arm64 devices running Android and loaded with emulators for every system you could imagine. But I want an x86 device to play my existing GoG and Steam games.