
Download this: https://www.whitehouse.gov/wp-content/uploads/2024/03/hist01...

Outlays jumped over $2T in 2020 (3rd column, 127th row). We all know why.

Outlays have not come down to anything close to pre-COVID levels. We all know why.

Cutting $2T from a $6.7T budget would not even return to 2019's pre-COVID outlays: $6.7T minus $2T is $4.7T, versus roughly $4.4T in FY2019.

Seems doable to me.


> Outlays have not come down to anything close to pre-COVID levels.

Federal Net Outlays as Percent of GDP have been around 20% since ~1980:

* https://fred.stlouisfed.org/series/FYONGDA188S

They jumped to 30% in 2020, and are on a downward trend, with 2023 being at 22%.

Technically true that it's not back down to 2019's ~20%, but not as horrendous as many people think.


You have to inflation adjust the 2019 numbers to get the comparables for 2025. And not using the CPI (which is aimed at individuals), but using an index based on what the government actually spends the money on :)


Can you explain why you think spending jumped in 2020?



A couple of times in my career, I've had the extreme pleasure of working as a dedicated support/backstop engineer within a small-ish (~20 person) development group buried deep inside a giant multinational corp. (I'm one of those seemingly rare weirdos who strongly prefers troubleshooting, bug-fixing, etc.)

My priorities were inverted: customer escalations first, and if you have time left over, then go work on bug backlogs that no one else wants to address. Everyone else sprinted while I bumped along at my own pace -- subject to escalation priorities, of course.

Those were dream jobs for me, and the rest of the teams seemed to appreciate having someone -- anyone, just not them! -- dedicated to the support role.

Is it just my imagination that this kind of (I would say "enlightened") management/organization is rare in the industry at large? Or do lots of dev teams do this sort of thing? And where can I find them? :-)


A small assortment of HP RPN calculators. Cold, dead hands, etc.


Everyone has different obsessions.

Within one's obsession, design/variety/change represents progress. It's great!

Outside of one's obsession, design/variety/change is maddeningly useless churn. It's awful!

The fact that I couldn't design a pretty UI even if I wanted to? Well, there's that, too. :-)


It was great. Full stop.

A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-)

Starting in 1986 I worked on bespoke firmware (burned into EPROMs) that ran on bespoke embedded hardware.

Some systems were written entirely in assembly language (8085, 6805) and other systems were written mostly in C (68HC11, 68000). Self taught and written entirely by one person (me).

In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

Bugs in production were exceedingly rare. The relative simplicity of the systems was a huge factor, to be sure, but knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

Schedules were no less stringent than today; there was constant pressure to finish a product that would make or break the company's revenue for the next quarter, or so the company president/CEO repeatedly told me. :-) Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).


> In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

This was it. Even into the 90s you could reasonably "fully understand" what the machine was doing, even with something like Windows 95 and the early internet. That started to fall apart around that time, and now there are so many abstraction layers you have to choose what you specialize in.

And the fact that you couldn't just shit another software update into the update server to be slurped up by all your customers meant you had to actually test things - and you could easily explain to the bosses why testing had to be done, and done right, because the failure would cost millions in new disks being shipped around, etc. Now it's entirely expected to ship software that has significant known or unknown bugs because auto-update will fix it later.


It isn't right to consider that time as a golden age of software reliability. Software wasn't less buggy back then. My clear recollection is that it was all unbelievably buggy by today's standards. However things we take for granted now like crash reporting, emailed bug reports, etc just didn't exist, so a lot of devs just never found out they'd written buggy code and couldn't do anything even if they did. Maybe it felt like the results were reliable but really you were often just in the dark about whether people were experiencing bugs at all. This is the origin of war stories like how Windows 95 would detect and effectively hot-patch SimCity to work around memory corruption bugs in it that didn't show up in Windows 3.1.

Manual testing was no replacement for automated testing even if you had huge QA teams. They could do a good job of finding new bugs and usability issues compared to the devs-only unit testing mentality we tend to have today, but they were often quite poor at preventing regressions because repeating the same things over and over was very boring, and by the time they found the issue you may have been running out of time anyway.

I did some Windows 95 programming and Win3.1 too. Maybe you could fully understand what it was doing if you worked at Microsoft. For the rest of us, these were massive black boxes with essentially zero debugging support. If anything went wrong you got either a crash, or an HRESULT error code which might be in the headers if you're lucky, but luxuries like log files, exceptions, sanity checkers, static analysis tools, useful diagnostic messages etc were just totally absent. Windows programming was (and largely still is) essentially an exercise in constantly guessing why the code you just wrote wasn't working or was just drawing the wrong thing with no visibility into the source code. HTML can be frustratingly similar in some ways - if you do something wrong you just silently get the wrong results a lot of the time. But compared to something more modern like JavaFX/Jetpack Compose it was the dark ages.


I'm reminded of the Windows 95 uptime bug https://news.ycombinator.com/item?id=28340101 that nobody found for years because you simply couldn't keep a Windows system up that long. Something would just crash on you and bluescreen the whole thing, or you needed to touch a mandatory-reboot setting or install some software.
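
(The magic number comes from a 32-bit millisecond tick counter, which is reportedly what the faulting timer code used: 2^32 ms is about 49.7 days. A little C sketch of the arithmetic, and of why naive tick comparisons go wrong at the wrap -- the actual driver code isn't public, so this is purely illustrative:)

  /* Why the Windows 95 hang happened at ~49.7 days: a 32-bit counter of
     milliseconds wraps after 2^32 ms.  This just shows the arithmetic and
     how a naive elapsed-time check misbehaves at the wraparound. */
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      double days = 4294967296.0 / 1000.0 / 60.0 / 60.0 / 24.0;
      printf("2^32 ms = %.1f days\n", days);            /* ~49.7 */

      /* A naive "deadline" comparison breaks across the wrap... */
      uint32_t now = 0xFFFFFF00u;          /* just before wraparound */
      uint32_t deadline = now + 1000u;     /* wraps to a small number */
      printf("naive: now < deadline? %s\n", now < deadline ? "yes" : "no");

      /* ...whereas unsigned subtraction handles the wrap correctly. */
      printf("robust: time left = %u ms\n", (uint32_t)(deadline - now));
      return 0;
  }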


Running FF on a Windows 11 flagship HP OMEN gaming laptop right now and this bitch crashes at LEAST once a day.


I get forced restarts on Windows 10 due to .NET updates only. They tend to mean that applications which ran on the previous CLR can't run until the shutdown-time update finishes rebuilding everything, and that work isn't done online.


Reliable and buggy can go together - after all, nobody left their computer running for days on end back then, so you almost always had a "fresh slate" when starting up. And since programs would crash, you were more trained to save things.

The other major aspect was that pre-internet, security was simply not an issue at all; since each machine was local and self-contained, there weren't "elite hax0rs" breaking into your box; the most you had to worry about were floppy-copied viruses.


> a lot of devs just never found out they'd written buggy code and couldn't do anything even if they did.

This is undoubtedly true. No doubt there are countless quietly-malfunctioning embedded systems all around the world.

There also exist highly visible embedded systems such as on-air telephone systems used by high-profile talents in major radio markets around the country. In that environment malfunctions rarely go unnoticed. We'd hear about them literally the day of discovery. It's not that there were zero bugs back then, just nothing remotely like the jira-backlog-filling quantities of bugs that seem to be the norm today.


This was what passed for an "AAA" game in 1980

https://en.wikipedia.org/wiki/Ultima_I:_The_First_Age_of_Dar...

it was coded up in about a year by two people who threw in just about every idea they got.


1980 wasn't about 30 years ago, though. 30 years ago is 1992 which is Wolfenstein 3D, Civilization, and Final Fantasy type games. It's on the cusp of games like Warcraft, C&C, Ultima Online, Quake, Diablo, and Everquest. Games that are, more or less, like what we have now but with much much worse graphics.


In 1992 (and pretty much for a good part of the 90s) it was still possible and practical to build a small dev team (under 10) and push out incredible titles. IIRC both id Software and the team that worked on Diablo were relatively small.

Nowadays we have to look to indie studios.


You're not going to believe this, but I'm still working on Ultima IV. I come back to it every 3-4 years and spin up a new player and start over. I love it, but never can seem to commit enough time to build up all my virtues.


It was also very unforgiving - one mistake and you were back to rebuilding that virtue.


I recall that on one of my play-throughs as a kid, I got everything done except my characters needed to be level 8. So I un-virtuously tweaked the save file. (I think that was Ultima IV, but it's been a while.)

I also tweaked a later Ultima to not need the floppy disk in the drive. The budget copy protection had added itself to the executable and stored the real start address in a bad sector on disk, so I just patched the real start address back into the EXE header.
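
(For the curious: the "start address" in a DOS MZ executable is just the initial CS:IP pair sitting at fixed offsets in the header, so the patch amounts to rewriting two 16-bit words. A rough read-only sketch in C -- little-endian host assumed, field names per the usual MZ conventions, not the actual tool I used back then:)

  #include <stdio.h>
  #include <stdint.h>

  #pragma pack(push, 1)
  struct mz_header {
      uint16_t e_magic;    /* "MZ" (0x5A4D) */
      uint16_t e_cblp;     /* bytes on last page */
      uint16_t e_cp;       /* pages in file */
      uint16_t e_crlc;     /* relocation entries */
      uint16_t e_cparhdr;  /* header size in paragraphs */
      uint16_t e_minalloc;
      uint16_t e_maxalloc;
      uint16_t e_ss;       /* initial SS (relative) */
      uint16_t e_sp;       /* initial SP */
      uint16_t e_csum;
      uint16_t e_ip;       /* initial IP  <-- the "start address"... */
      uint16_t e_cs;       /* initial CS  <-- ...lives in these two words */
      uint16_t e_lfarlc;   /* offset of relocation table */
      uint16_t e_ovno;     /* overlay number */
  };
  #pragma pack(pop)

  int main(int argc, char **argv)
  {
      if (argc != 2) { fprintf(stderr, "usage: %s game.exe\n", argv[0]); return 1; }
      FILE *f = fopen(argv[1], "rb");
      if (!f) { perror("fopen"); return 1; }

      struct mz_header h;
      if (fread(&h, sizeof h, 1, f) != 1 || h.e_magic != 0x5A4D) {
          fprintf(stderr, "not an MZ executable\n");
          fclose(f);
          return 1;
      }
      printf("entry point CS:IP = %04X:%04X\n", h.e_cs, h.e_ip);
      fclose(f);
      return 0;
  }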


I read somewhere that Ultima V is the last Ultima where Lord British did the majority of the programming work. For Ultima VI he was convinced that he needed a full team to get it done.

I still think it should be rather doable (and should be done by any aspiring game programmer) for a one-man team to complete an Ultima V spin-off (same graphics, same complexity, but on modern platforms) nowadays. Modern computers, languages and game engines abstract away a lot of the difficulties.


Completely agreed. The tile-based graphics that pushed the limits of a mid-80s computer (and in some cases required special hardware) can now be done off the cuff with totally naive code in a couple of hours:

https://github.com/mschaef/waka-waka-land

There's lots more room these days for developing the story, etc. if that's the level of production values you wish to achieve.
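
To make the point concrete, here's roughly what "totally naive" tile drawing looks like today -- a sketch using SDL2, which is an assumption on my part (the repo above takes its own approach). Every frame it just loops over a toy map and fills one rectangle per tile, the kind of brute force a mid-80s machine could never afford:

  #include <SDL.h>

  enum { TILE = 32, MAP_W = 20, MAP_H = 15 };

  int main(void)
  {
      if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
      SDL_Window *win = SDL_CreateWindow("tiles",
          SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
          MAP_W * TILE, MAP_H * TILE, 0);
      SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);

      int running = 1;
      while (running) {
          SDL_Event e;
          while (SDL_PollEvent(&e))
              if (e.type == SDL_QUIT) running = 0;

          /* Redraw the whole map from scratch every frame -- no dirty
             rectangles, no scrolling tricks, nothing clever. */
          for (int y = 0; y < MAP_H; y++) {
              for (int x = 0; x < MAP_W; x++) {
                  int water = ((x * x + y * 3) % 7) < 2;   /* toy terrain */
                  if (water) SDL_SetRenderDrawColor(ren, 30, 60, 160, 255);
                  else       SDL_SetRenderDrawColor(ren, 40, 140, 60, 255);
                  SDL_Rect r = { x * TILE, y * TILE, TILE, TILE };
                  SDL_RenderFillRect(ren, &r);
              }
          }
          SDL_RenderPresent(ren);
          SDL_Delay(16);   /* ~60 fps, give or take */
      }

      SDL_DestroyRenderer(ren);
      SDL_DestroyWindow(win);
      SDL_Quit();
      return 0;
  }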


I actually got turned down by Chuckles for my first 'real' programming job ... They wanted someone that actually played the games :-) It was a neat experience interviewing though - I was really surprised they had offices in New Hampshire.


This guy? Chuck "Chuckles" Bueche?

They interview him on the "Apple Time Warp" series. He wrote "Caverns of Callisto" at Origin. The optimizations they came up with to make things work were crazy. (I think they were drawing every third line to speed things up and had self-modifying code.)

https://appletimewarp.libsyn.com/

or youtube

https://www.youtube.com/channel/UC0o94loqgK3CMz7VEkDiIgA/vid...

Good resource on Apple programming 40ish years ago.


I grew up in Manchester, it was years after I left that I realized Origin games had a development office right by the airport.


And now we have AAA games that take 6+ years to make and still ship unfinished and broken. What a weird time we live in.


Ultima I was a phenomenal game, but also incredibly hard by today's standards


Throwing up another Ultima memory...

I've got great memories of Ultima II on C64 and some Apple II as well. It was far more expansive than I, but still relatively fast. I remember when III came out, it was just... comparatively slow as molasses. It was more involved, but to the point where it became a multi day event, and it was too easy to lose interest waiting for things to load. II was a great combination of size/breadth/complexity and speed.

Then I got Bard's Tale... :)


Speaking of Ultima: www.uooutlands.com/

Been playing this recently. It is Ultima Online, but with all the right choices rather than all the wrong choices the development studio made after 1999. Anyone who enjoys online RPGs should certainly give it a try. The graphics don't do the game justice at all, but the quality of the game makes up for this and then some.


This is why I drifted towards game development for most of my career. Consoles, until the penultimate (antepenultimate?) generation, ran software bare or nearly bare on the host machine.

I also spent time in integrated display controller development and such; it was all very similar.

Nowadays it feels like everything rides on top some ugly and opaque stack.


For context on what this guy is saying: the modern Xboxes (after the 360) actually run VMs for each game. This is part of why, despite the hardware being technically (marginally) superior, the Xbox tends to have lower graphical fidelity.


The 360 has a Type 1 (bare metal) hypervisor. So there's not much, if any, performance impact to having it since the software runs natively on the hardware.

Microsoft used a hypervisor primarily for security. They wanted to ensure that only signed code could be executed and wanted to prevent an exploit in a game from allowing the execution of unsigned code with kernel privileges on the system itself.

Every ounce of performance lost to the hypervisor is money Microsoft wasted in hardware costs. So they had an incentive to make the hypervisor as performant as possible.


the 360 has no hypervisor.

The CPU had an emulator that you could run on x86 Windows, but it was not itself a hypervisor.

The hypervisor in the XB1 served a more important purpose: to provide developers a way of shipping the custom SDK to clients, and not forcing them to update it. This was quite important for software stability and in fact we made a few patches to MS's XB1 SDK (Durango) to optimise it for our games.

VMs are VMs; there are performance trade-offs.

I know this because I've worked on AAA games in this area before. Do you also work in games, or are you repeating something you think you heard?


I don't work in the industry. But just because you "worked on AAA games" before doesn't make you correct.

This detailed architectural overview of the 360 discusses the hypervisor:

https://www.copetti.org/writings/consoles/xbox-360/

This YouTuber, who is an industry vet and has done several Xbox ports, claims the XB360 has a hypervisor:

https://www.youtube.com/watch?v=Vq1lxeg_gNs

And there are entries in the CVE database for the XB360 which describe the ability to run code in "hypervisor mode":

https://www.cvedetails.com/cve/CVE-2007-1221/

This detailed article on the above exploit goes into detail on how the memory model works on the XB360, including how main memory addressing works differently in hypervisor mode than in real mode:

https://www.360-hq.com/article1435.html

That's a whole lot of really smart people discussing a topic that you claim doesn't exist.


Appreciate the detailed reply!

> This detailed architectural overview of the 360 discusses the hypervisor:

> https://www.copetti.org/writings/consoles/xbox-360/

Yes, the 128KB of key storage and W^X. That's not a hypervisor in the sense that the XB1/Hyper-V or VMware have a hypervisor; they shouldn't even share a name, it's not the same thing at all.

It's like saying the JVM is a virtual machine in the same way QEMU is.

The 360 "Hypervisor" is more akin to a software T2 chip than anything that actually virtualises.


I don’t think you are showing respect when you simplistically repeat your assertion without effort, after two people expended their precious time to tell you in detail that you are wrong with examples. I don’t know anything, but a few minutes following the provided links and I find https://cxsecurity.com/issue/WLB-2007030065 which says:

  The Xbox 360 security system is designed around a hypervisor concept. All games and other applications, which must be cryptographically signed with Microsoft's private key, run in non-privileged mode, while only a small hypervisor runs in privileged ("hypervisor") mode. The hypervisor controls access to memory and provides encryption and decryption services.

  The policy implemented in the hypervisor forces all executable code to be read-only and encrypted. Therefore, unprivileged code cannot change executable code. A physical memory attack could modify code; however, code memory is encrypted with a unique per-session key, making meaningful modification of code memory in a broadly distributable fashion difficult. In addition, the stack and heap are always marked as non-executable, and therefore data loaded there can never be jumped to by unpriviledged code.

  Unprivileged code interacts with the hypervisor via the "sc" ("syscall") instruction, which causes the machine to enter hypervisor mode.
You can argue your own definition of what a hypervisor is, but I suspect you won’t get any respect for doing so.


The 360 does indeed use a hypervisor [0], but only for security, to make app signature verification run at a higher privilege level.

Windows on PCs also runs under hypervisor if you enable some security features (e.g. VBS/HVCI which are on by default since Windows 11 2022 update, or Windows Sandbox, or WDAG) or enable Hyper-V itself (e.g. to use WSL2/Docker).

The performance losses are indeed there, but by purely running the hypervisor you lose just around 1% [1], because the only overhead is added latency due to accessing memory through SLAT and accessing devices through IOMMU...

I'd imagine XB1 is running with all the security stuff enabled though, which demands additional performance losses [2]..

[0]: https://www.engadget.com/2005-11-29-the-hypervisor-and-its-i...

[1]: https://linustechtips.com/topic/1022616-the-real-world-impac...

[2]: https://www.tomshardware.com/news/windows-11-gaming-benchmar...


There's no reason a pass-through GPU configuration in a VM would have lower graphical fidelity.


There is a reason, but it would only harm particularly poorly written games, and even then by a single-digit percentage.

To exercise that you need a lot of separate memory transfers. Tiny ones. Games tend to run bulky transfers of many megabytes instead.

Memory bandwidth and command pipe should not see an effect even with the minimally increased latency, on any HVM.


Again with the caveat that this is specific to dom0 virtualization taking advantage of full hardware acceleration VT-d/VT-x, etc, what you say isn’t even necessarily the case.

With modern virtualization tech, the hypervisor mainly sets things up then steps out of the way. It doesn’t have to involve itself at all in the servicing of memory requests (or mapped memory requests) because the cpu does the mapping and knows what accesses are allowed or aren’t. The overhead you’re talking about is basically traversing one extra level in a page table noticeable only on micro benchmarks when filling the tlb or similar - this is a change in performance you might encounter a micro-regression in (without any virtualization to speak of) even when going from one generation of cpu architecture to the next.

Theoretically, the only time you’ll have any overhead is on faults (and even then, not all of them).

Of course I guess you could design a game to fault on every memory request or whatever, but that would be a very intentionally contrived scenario (vs just plain “bad” code).


Hello ComputerGuru,

As you may understand: there's more to graphical fidelity than just the GPU itself.

CPU<->GPU bandwidth (and GPU memory bandwidth) are also important.

There is a small but not insignificant overhead to these things with virtualisation: VMs don't come for free.


"Pass through GPU configuration" means that GPU memory is mapped directly into guest address space in hardware.

Bandwidth from a VM partition should be identical to that from the root partition.


I don’t understand what you’re trying to imply here.

Are you seriously suggesting that I chose to downgrade the graphics on the XB1 because I felt like it, and that dozens of other AAA game studios did the same thing?

Our engine was Microsoft-native; by all rights it should have performed much better than on the PS4.

If you’re going to argue you’ll have to do a lot better than that since I have many years of lived experience with these platforms.


OK, you have a technical disagreement. No need to take it personally.

You may be right - you probably have more experience with this particular thing than I do.

I can't answer for the performance of the XB1, but I am curious what % reduction in GPU memory bandwidth you observed due to virtualization.

Did you have a non-virtualized environment on the same hardware to use for comparison?


I didn't take it personally, I just think you're presenting ignorance as fact and it's frustrating.

Especially when seemingly it comes from nowhere and people keep echoing the same thing which I know not to be true.

Look, I know people really love virtualisation (I love it too) but it comes with trade-offs; spreading misinformation only serves to misinform people for.. what, exactly?

I understood the parent's perspective: GPU passthrough (i.e., VT-d & AMD-Vi) does pass PCIe lanes from the CPU to the VM at essentially the same performance. My comment was directly stating that graphical fidelity does not solely depend on the GPU; there are other components at play, such as textures being sent to the GPU driver. Those textures don't just appear out of thin air; they're taken from disk by the CPU and passed to the GPU. (There's more to it, but usually I/O involves the CPU on older generations.)

The problem with VMs is that normal memory accesses take on average a 5% hit, and I/O takes the heaviest hit, at about 15% for disk access and about 8% for network throughput (ballpark numbers, but in line with publicly available information).

It doesn't even matter what the exact precise numbers are; it should be telling to some degree that the PS4 was native and the XB1 was virtualised, and the XB1 performed worse with a more optimised and gamedev-friendly API (Durango speaks DX11) and with better hardware.

It couldn't be more clear from the outside that the hypervisor was eating some of the performance.


I guess I should clarify that my point was purely in abstract and not specific to the XBox situation.

Of course in reality it depends on the hypervisor and the deployed configuration. Running a database under an ESXi VM with SSDs connected to a passed-through PCIe controller (under x86_64 with hardware-assisted CPU and IO virtualization enabled and correctly activated, interrupts working correctly, etc) gives me performance numbers within the statistical error margin when compared to the same configuration without ESXi in the picture.

I haven’t quantified the GPU performance similarly but others have and the performance hit (again, under different hypervisors) is definitely not what you make it out to be.

My point was that if there’s a specific performance hit, it would be pedantically incorrect to say “virtualizing the GPU is the problem” as compared to saying “the way MS virtualized GPU access caused a noticeable drop in achievable graphics.”


Sorry, I don't think I implied virtualising the GPU is the problem.

I said "the fact that it's a VM has caused performance degradation enough that graphical fidelity was diminished" - this is an important distinction.

To clarify further: the GPU and CPU are a unified package and the request pipeline is shared, so working overtime to send things to RAM will affect GPU bandwidth; the overhead of non-GPU memory allocations will still affect the GPU because that limited bandwidth is being used.

I never checked whether the GPU bandwidth was constrained by the hypervisor, to be fair, because such a thing was not possible to test; the only point of comparison is the PS4, which we didn't optimise as much as we did for DX and which ran on slightly less performant hardware.


I always figured texture loading from disk was mostly done speculatively and during the loading screen, but what do I know.

Anyway, a 5% memory bandwidth hit does not sound to me like a huge deal.


Lower graphical fidelity than what? PlayStation?


> Marginally superior to what? PlayStation?

Precisely

Both the original Xbox One and the Xbox One S have a custom, 1.75GHz AMD 8-core CPU, while the Xbox One X bumps that up to a 2.3GHz 8-core chip. The base PS4 CPU remained clocked at 1.6GHz and contains a similar custom AMD 8-core CPU with x86-based architecture, while the PS4 Pro bumps that clock speed up to 2.13GHz.

EDIT: you’ve edited your comment, but also yes.


The CPU isn't particularly relevant is it (although the CPUs in the PS4/XBone generation were exceptionally terrible compared to what was standard on PCs at the time)? Graphical fidelity is going to depend much more on the GPU (although the CPU is going to bottleneck framerate if it's not powerful enough).

In the current generation the Series X has a more powerful GPU than the PS5, which tends to mean a lot of games run at higher resolutions on the system, although there's some games that run slightly better on PS5 (I think the Call of Duty games might be in that category?). And a lot (most?) are basically the same across both systems - probably because devs aren't bothering to tweak cross platform games separately for the two different consoles.


>This was it, even into the 90s you could reasonable "fully understand" what the machine was doing, even with something like Windows 95 and the early internet. That started to fall apart around that time and now there are so many abstraction layers you have to choose what you specialize in.

This doesn't really track. 30 years ago computers were, more or less, the same as they are now. The only major addition has been graphics cards. Other than that we've swapped some peripherals. Don't really see how someone could "fully understand" the modem, video drivers, USB controllers, motherboard firmware, processor instruction sets, and the half dozen or so more things that went into a desktop.


This is why you fail. Thirty years ago I could make a wire-wrapped 68000 board that did nothing but play music. CE/CS was different back then. I'd cut pins and solder in chips to twiddle the filters on audio output. You could know the entire process from power-on to running of your computer, and it was easy to change bits even down to the hardware level, like adding a "no place to put it unless you build it yourself" CPU/MPU/RAM upgrade and making it work. Adjust your NTSC video output? Just cut that resistor in lieu of replacing it with something really high resistance; it'll be better. Let's build our own new high-speed serial port for MIDI. How about a graphics co-processor that only does Mandelbrot calculations? Let's build three of them. Only a few of the younger generation comprehend the old ways. And the machines have changed to fewer chips and turned into systems on a chip. It's a bit of a shame.


Where did one acquire your kind of knowledge outside of a university? Were there any books or USENET groups that you visited to attain it?


You would build a wire wrapped 68000 board in 1992? Isn’t that a tiny bit late to expend that much effort on a 68000?


Not at all. I was still building embedded hardware around 68k 10 years later. There are undoubtedly new products being built around 68k today.

If all you want to do is synthesize music the 68k is perfect.

If you’re taking issue with wire wrap, there just weren’t general purpose dev boards available back then. You were expected to be able to roll your own.


Wire wrap is the most reliable form of construction, used by NASA for many years for this reason - the wrapping of the wire around the square pegs creates a small cold weld at every corner.

Plus, when multilayer boards were not really a thing, wire wrap gave you all the layers you wanted, more or less.


30 years ago was the era of DOS computers - USB certainly wasn't widespread even if it was out, and many of the video drivers at the time were "load palette here, copy memory there" type things.


As mentioned, we didn't have USB controllers until 1996. But even if you included that, which was an order of magnitude more complex than the parallel port, the USB 1.0 spec was only about 100 pages long. And yes, you could reasonably understand what was going on there.


The crazy thing to me is just how many different workflows/UI/UX you need to learn across so many platforms today. AWS, GCP, Azure - you need to learn each of them so deeply in order to be "marketable", and the only way you'll learn all of them is if you happen to work at a company that happens to rely on said platform.

Then there is the low-level iLO bullshit that I've done weeks of training on for HPE, and I have been building and dealing with HPE servers since before they bought Compaq....

And don't even get me started on Sun and SGI... how much brain power was put into understanding those two extinct critters... fuck, even Cray.

there is so much knowledge that has to evaporate in the name of progress....


Yeah, it's definitely great but also terrible that bugs can be patched so easily now.


Just so it's documented: may you plz ELI5 how easy a bug is to patch today? Thanks


When DOOM was released in 1993 it spread like wildfire across bulletin boards and early FTP services. But the vast majority of players got it from a floppy copy at their local computer store - it was shareware, and so copying the floppy was fine. They even encouraged stores to charge for the shareware game, they wanted it distributed as widely as possible.

And if you paid the $40 for the full game, you got floppies mailed to you.

There was no easy way for the company to let you know there was an update available (the early versions had some well-known bugs) so the user would have to go searching for it, or hear a rumor at the store. If you called id, they'd have to mail you a disk with the updated executable on it. This was all confusing, time consuming, and was only for a game.

Things were much worse with operating systems and programs.

Now almost every piece of software is either distributed via an App Store of some sort that has built-in updates, or has a "Check for updates" button in the app itself. Post an updated build and within days a huge percentage of your users will be running the latest software.


This makes me saddest in the game market with day-one patching. I'm old enough to remember bringing a game home, plugging in the cart and playing it, but if I was to do that now with a disc or cart, there is likely a download in my future. At least some of the game publishers will pre-download so I can play when the game is made available, but I miss the days (not the bugs) of instant playing once the game was acquired.


> if I was to do that now with a disc or cart, there is likely a download in my future.

The newest Call of Duty (released this past Friday) came out on disc as well as in the various digital forms. Apparently, the disc only contained ~72MB of data for a game that easily clears 100GB when fully installed.


I miss the days when bugs were extra content. B)


For history: we're at a time where SaaS is ascendant and CI/CD tools are everywhere. This means that to patch a bug, you fix the code, commit it, and then it magically makes its way to production, within like an hour, sometimes less. Customers are interacting with your product via a web browser, so they refresh the page and receive the new version of the software. Compared to times of old, with physical media and software that needed installing, it's ridiculously easier.


Grats on this reply... even though the ELI5 still requires some domain knowledge... this was a great response, thanks.


It's fascinating to think about a historian from the future coming along and reading about the day-to-day lives of a 2020s developer, and them just being enamored and asking about the most mundane details of our lives that we never considered or documented at all. What colorschemes did they use in VSCode? What kinds of keyboards did they use? What's a webpack?


In 2014 it was written that Sergey Brin (Google founder) had a gene predisposing him to a future cancer and that he was funding pharma research around it....

So I posted this to Reddit on May 2, 2014....

===

Fri, May 2, 2014, 4:49 PM

to me

In the year 2010, scientists perfected suspended animation through the use of cryogenics for the purpose of surgery. After more than a decade of study and refinement, long term suspended animation became a reality, yet a privilege reserved for only the most wealthy and influential.

The thinking at the time was that only those who showed a global and fundamental contribution to society (while still viewed through the ridiculously tinted lenses of the global elite of the era) were worthy of entering into such state.

The process was both incredibly complex and costly. As each Transport, as they were known, required their own stand alone facility to be built around them. Significant resources were put into the development of each facility as they required complete autonomous support systems to accommodate whatever duration was selected by the Transport.

Standalone, yet fully redundant, power, security and life support systems were essential to the longevity of each facility.

Additionally, it was recognized that monetary resources would be subject to change over time, especially fiat-currency based resources. Thus there was a need to place physical holders of value that would be perceived to not deplete/dilute over time into the facilities for use by the Transport when they resuscitate.

These resources are the most sought after treasure of the new world.

After hundreds of years of human progress, civilization could no longer sustain itself in an organized self-supporting system. Through utter corruption of what some call the human soul, the world has fallen dark. There are very few outposts of safety in the current Trial of Life, as its now known.

Many Transporters have been found, resuscitated and exploited already. There are believed to be many many more, but their locations are both secret and secure. Akin to your life relying on the discovery of an undisturbed Tomb of a Pharaoh - even though every consciousness on the planet is also seeking the same tomb.

They are the last bastion of hope for they alone have the reserves of precious materials needed to sustain life for the current generation.

Metals, technology (however outdated), medicines, seeds, weapons and minerals are all a part of each Transport 'Crop'.

One find can support a group or community for years alone based on the barter and renewable resource potentials in each Crop.

One transport, found in 2465, that of a long dead nanotech pioneer - who was purportedly responsible for much of the cybernetic medical capabilities of the 21st century, which he sought to cure his genetic predisposition for a certain disease, was so vast that the still powerful city-state in the western province of North America was able to be founded.

The resources of this individual were extraordinary, but his resuscitation, as they all are, was rather gruesome and cold.

The security systems in each Transport Facility are biometric and very complex. They can only be accessed by a living, calm and (relatively) healthy Transport.

If the system, and its control mechanism AI, detect signs of duress, stress or serious injury to the Transport - they go into fail-safe. Which is to say they self detonate. Taking with them all resources, the Transport and the Seekers as well.

There have been many instances of this, such that the art of successful Resuscitation has become an extremely profitable business.

The most active and successful Resuscitation Team (RT) have been the ironically named, Live Well Group.

The most conniving, well practiced and profitable con in the history of mankind.

LWG alone has been responsible for the resuscitation of more than 370 Transports. Their group is currently the most powerful in the world. With their own city-state, established after the Brin case mentioned, they have a cast of thousands of cons all working to ensure the Transport believes they have been Awakened to a new, advanced, safe world and that they would be allowed to stake part in a significant way now that they have been Transported.

They are fooled into releasing their resources, then brutally tortured for information about any other Transports or any other knowledge they may possess, which invariably is less than nothing.

It is a hard world out there now, and the LWG's ruthless drive to locate the thousands of other Transport Facilities is both the worst aspect of our modern struggle - yet ironically will serve as the basis of the ongoing endeavor of the species.

There is rumor of a vast facility of resources and Transports in an underground 'CITY' of the most elite Transports ever. A facility supposedly comprised of the 13 most powerful and rich bloodlines of people to have ever existed.

It is not known which continent this facility is on, but I believe it is in Antarctica - fully automated and with the ability to auto-resuscitate at a given time.

This is my mission, this is my life's work. To find and own this facility and crush any and all other groups that oppose me.


Today you can use the internet to patch a bug on a user's computer, and users expect this, and even allow you to patch bugs automatically.

Previously, patching bugs meant paying money for physical media and postage.


I've been trying to teach my young teenage kids about how things work, like, washing machines, cars, etc. One of the things I've learned is that it's a looooot easier to explain 20th century technology than 21st century technology.

Let me give you an example. My father was recently repairing his furnace in his camper, which is still a 20th century technology. He traced the problem to the switch that detects whether or not air is flowing, because furnaces have a safety feature such that if the air isn't flowing, it shuts the furnace off so it doesn't catch on fire. How does this switch work? Does it electronically count revolutions on a fan? Does it have two temperature sensors and then compute whether or not air is flowing by whether their delta is coming down or staying roughly the same temperature? Is it some other magical black box with integrated circuits and sensors and complexity greater than the computer I grew up with?

No. It's really simple. It's a big metal plate that sticks out into the airflow and if the air is moving, closes a switch. Have a look: https://www.walmart.com/ip/Dometic-31094-RV-Furnace-Heater-S... You can look at that thing, and as long as you have a basic understanding of electronics, and the basic understanding of physics one gets from simply living in the real world for a few years, you can see how that works.

I'm not saying this is better than what we have now. 21st century technology exists for a reason. Sometimes it is done well, sometimes it is done poorly, sometimes it is misused and abused, it's complicated. That fan switch has some fundamental issues in its design. It's nice that they are also easy to fix, since it's so simple, but I wouldn't guarantee it's the "best" solution. All I'm saying here is that this 20th century technology is easier to understand.

My car is festooned with complicated sensors and not just one black box, but a large number of black boxes with wires hooked in doing I have no idea what. For the most part, those sensors and black boxes have made cars that drive better, last longer, are net cheaper, and generally better, despite some specific complaints we may have about them, e.g., lacking physical controls. But they are certainly harder to understand than a 20th century car.

Computers are the same way. There is a profound sense in which computers today really aren't that different than a Commodore 64, they just run much faster. There are also profound senses in which that is not true; don't overinterpret that. But ultimately these things accept inputs, turn them into numbers, add and subtract them really quickly in complicated ways, then use those numbers to make pictures so we can interpret them. But I can almost explain to my teens how that worked in the 20th century down to the electronics level. My 21st century explanation involves a lot of handwaving, and I'm pretty sure I could spend literally a full work day giving a spontaneous, off-the-cuff presentation of that classic interview question "what happens when you load a page in the web browser" as it is!


> This was it, even into the 90s you could reasonable "fully understand" what the machine was doing

That was always an illusion, only possible if you made yourself blind to the hardware side of your system.

https://news.ycombinator.com/item?id=27988103

https://news.ycombinator.com/item?id=21003535


Your habit of citing yourself with the appropriate references has led me from taking your stance as an extremely literal one (“understanding all of the layers; literally”) to actually viewing your point as…very comprehensive and respectful to the history of technology while simultaneously rendering the common trope that you are addressing as just that, a trope.

Thanks, teddyh.


Fully understand and “completely able/worth my time to fix” are not identical. I can understand how an alternator works and still throw it away when it dies rather than rebuild it.


In that case, what is the value proposition of investing the time to learn how an alternator works? It surely has some value, but is it worth the time it takes to know it?

To bring it back to our topic, is it worth it to know, on an electrical level, what your motherboard and CPU is doing? It surely has some value, but is it worth the time to learn it?


You’re just saying you didn’t go to school for science or engineering. Plenty of people program and also understand the physics of a vacuum tube or capacitor. Sometimes we really had to know, when troubleshooting an issue with timing or noise in a particular signal on a PC board or cable.


It was a mix of great and awful.

I wrote tons of assembly and C, burned EPROMs, wrote documentation (nroff, natch), visited technical bookstores every week or two to see what was new (I still miss the Computer Literacy bookstore). You got printouts from a 133 column lineprinter, just like college. Some divisions had email, corporation-wide email was not yet a thing.

No source code control (the one we had at Atari was called "Mike", or you handed your floppy disk of source code to "Rob" if "Mike" was on vacation). Networking was your serial connection to the Vax down in the machine room (it had an autodial modem, usually pegged for usenet traffic and mail).

No multi-monitor systems, frankly anything bigger than 80x25 and you were dreaming. You used Emacs if you were lucky, EDT if you weren't. The I/O system on your computer was a 5Mhz or 10Mhz bus, if you were one of those fortunate enough to have a personal hard drive. People still smoked inside buildings (ugh).

It got better. AppleTalk wasn't too bad (unless you broke the ring, in which case you were buying your group lunch that day). Laserprinters became common. Source control systems started to become usable. ANSI C and CFront happened, and we had compilers with more than 30 characters of significance in identifiers.

I've built a few nostalgia machines, old PDP-11s and such, and can't spend more than an hour or so in those old environments. I can't imagine writing code under those conditions again, we have it good today.


> No source code control

30 years ago is 1992; we certainly had source control long before that!

In fact in 1992 Sun Teamware was introduced, so we even had distributed source control, more than a decade before "git invented it".

CVS is from 1986, RCS from 1982, and SCCS is from 1972. I used all four of those at various points in history.

> No multi-monitor systems, frankly anything bigger than 80x25 and you were dreaming.

In 1993 (or might've been early 1994) I had two large monitors on my SPARCstation, probably at 1280×1024.


That's like saying that "we" had computers in 1951. The future is already here – it's just not evenly distributed.

Something existing is different from something being in widespread use.

When I was a kid in the 90s, I had a computer in my room that was entirely for my personal use. There was a pretty long stretch of time where most kids I encountered didn't even have access to a shared family PC - and it was much longer before they had a computer of their own.


Had a Kaypro back in '82 that I used to create a neat umpire for a board war game. It had a markup language and could run things that let me get on ARPANET and run Kermit. Lots of time has passed, and programs used to be way more "efficient". And the workstations and mini-supers that followed shortly had great graphics; it just wasn't a card so much as a whole system. SGIs and specialized graphics hardware such as Adage and the stuff from E&S. Lots happened before PCs.


I'm certainly not saying that nothing happened before PCs, only that when talking about the past, one cannot say "we had X" based simply on whether X existed somewhere in the world, but one must consider also how widespread the usage of X was at the time.


There were gobs of Suns and SGIs in the '80s, just not at home. A whole lot of Unix work was done before that on PDP-11s and VAXen. Had to dial in or stay late to hack :-).


Indeed, I'm not disputing that.

However, you still need to mind the context. For instance, there existed computers in 1952. Saying that "we had computers in 1952" is technically right, but very few institutions had access to one. Most people learning to program a computer in 1952 wouldn't actually have regular access to one; they'd do it on paper, and SOME of their programs might actually get shipped off to be run. So it'd even be entirely unreasonable to say "We, the people learning to program in 1952, had computers"; one might say "We, who were learning to program in 1952, had occasional opportunity to have our programs run on a computer".

Yes, there was lots of nice hardware in the 80s, and LOTS of people working professionally in the field would be using something cheaper and/or older. In the context of my original post, I took issue with the OP writing that "we had version control"; sure, version control existed, but it was not so widely used throughout the industry that it's reasonable to say that we had it - some lucky few did.


The topic of the thread was software developer experience as a professional career, not at home.

Sure, in the early 90s I didn't have multi-CPU multi-monitor workstations at home, that was a $20K+ setup at work.

But for work at work, that was very common.


Maybe GP was in a less developed or wealthy area than you.

Often when talking to Americans about the 90s they're surprised, partly because tech was available here later and partly because my family just didn't have enough money.


Dude is literally talking about Atari. It’s surprising they didn’t have better source control by 1992; Apple certainly had centralized source control and source databases by that point. But Atari was basically out of gas by then.


>(I still miss the Computer Literacy bookstore)

I used to drive over Highway 17 from Santa Cruz just to visit the Computer Literacy store on N. First Street, near the San Jose airport. (The one on the Apple campus in Cupertino was good, too.)

Now, all of them—CL, Stacy's Books, Digital Guru—gone. Thanks, everyone who browsed in stores, then bought on Amazon to save a few bucks.


Won’t defend Amazon generally, but you need to blame private equity and skyrocketing real estate prices for the end of brick and mortar bookstores. And, for completeness, if we’re saying goodbye to great Silicon Valley bookstores of yore: A Clean Well Lighted Place For Books in Cupertino and the BookBuyers in Mountain View were also around 30 years ago, although they were more general. And strip mall ones like Waldenbooks still existed too.


Agree with the poster. Much better IMHO and more enjoyable back then.

Because of the software distribution model back then, there was a real effort to produce a quality product. These days, not so much. Users are more like beta testers now. Apps get deployed with a keystroke. The constant UI changes for apps (Zoom comes to mind) are difficult for users to keep up with.

The complexity is way, way higher today. It wasn't difficult to have a complete handle on the entire system back then.

Software developers were valued more highly. The machines lacked speed and resources - it took more skill/effort to get performance from them. Not so much of an issue today.

Still a good job, but I would likely seek something different if I were starting out today.


> Still a good job but I would like seek something different if I was starting out today

I'm only 6 years in, and I am starting to feel this.

I went into computer science because, at some level, I knew it was something I always wanted to do. I've always been fascinated with technology ever since I was a child -- how things work, why things work, etc.

While studying computer science at my average state school, I met a few others who were a lot like me. We'd always talk about some cool new technology, work on things together, etc. There was a real passion for the craft, in a sense. It's similar to something I felt during my time studying music with my peers.

Perhaps, in some naive way, I thought the work world would be a lot like that too. And of course, this is only my experience so far, but I have found my peers to be significantly different.

People I work with do not seem to care about technology, programming, etc. They care about dollar signs, promotions, and getting things done as quickly as possible (faster != better quality). Sure, those three things are important to varying degrees, but it's not why I chose computer science, and I struggle to connect with those people. I've basically lost my passion for programming because of it (though that is not the entire reason -- burnout and whatnot have contributed significantly.)

I'm by no means a savant nor would I even consider myself that talented, but I used to have a passion for programming and that made all the "trips" and "falls" while learning worth it in the end.

I tell people I feel like I deeply studied many of the ins and outs of photography only to take school pictures all day.


Don't get me wrong there are still incredible opportunities out there. IoT is starting to pick up steam. Individuals that really like knowing what the metal is doing have many green fields to settle. You can get prototype boards designed and delivered for prices that an individual can afford. That was not possible 30 years ago. If you can find areas that cross disciplines things get more interesting.

WebStuff is dead IMHO. It is primarily advertising and eyeballs - yawn. If I see one more JS framework I'll puke. We have so many different programming languages it is difficult to get a team to agree on which one to use:) Don't get me started on databases. I have apps that use 3 or 4 just because the engineers like learning new things. It is a mess.


Better workplaces exist! Don't settle for one that saps your will to live.


> A sense of mastery and adventure permeated everything I did.

How much of that is a function of age? It is hard to separate that from the current environment.

Personally, I don't feel as inspired by the raw elements of computing like I once did, but it is probably more about me wanting a new domain to explore than something systemic. Or at least, it is healthier to believe that.

> knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

The notion of Internet Time, where you're continuously shipping, has certainly changed how we view the development process. I'd argue it is mostly harmful, even.

> perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software.

I think this is the crux of it: more responsibility, more ownership, fewer software commoditization forces (frameworks), less emphasis on putting as many devs on one project as possible because all the incentives tilt toward more headcount.


Yes, indeed - it could be the Dunning-Kruger effect.


There wasn't HN, so there was no distraction to digress to every now and then.

I second this - systems were small and most people could wrap their brains around them. Constant pressure existed and there wasn’t “google” & “so” & other blogs to search for solutions. You had to discover by yourself. Language and API manuals weighed quite a bit. Just moving them around the office was somewhat decent exercise.

There wasn't as much build-vs-buy discussion. If it was simple enough you just built it. I spent my days & evenings coding and my nights partying. WFH didn't exist, so if you were on call you were at work. When you were done you went home.

My experience from 25 years ago.


I actually used to do 'on call' by having a vt100 at the head of my bed and I would roll over every couple hours and check on things over a 9600 baud encrypted modem that cost several thousand dollars.

The only time I ever had to get up in the middle of the night and walk to the lab was the Morris worm. I remember being so grateful that someone brought me coffee at 7.


I have one word for you: "Usenet".


That was there. However, where I started 25 years back, they reserved Internet access for only privileged senior and staff engineers. I was a lowly code worm: no Internet, no Usenet.


A lot of our modern software practices have introduced layers of complexity onto systems that are very simple at a fundamental level. When you peel back the buzzword technologies you will find text streams, databases, and REST at the bottom layer.

It's a self fulfilling cycle. Increased complexity reduces reliability and requires more headcount. Increasing headcount advances careers. More headcount and lower reliability justifies the investment in more layers of complicated technologies to 'solve' the 'legacy tech' problems.


> A sense of mastery and adventure permeated everything I did.

My experience too. I did embedded systems that I wrote the whole software stack for: OS, networking, device drivers, application software, etc.

> Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything.

These days programming is more trying to understand the badly-written documentation of the libraries you're using.


I'm younger than you, but one of my hobbies is messing around with old video game systems and arcade hardware.

You're absolutely right - there's something almost magical in the elegant simplicity of those old computing systems.


> A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-)

Are you me? ;) I feel like this all the time now. I also started in embedded dev around '86.

> Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).

I wouldn't want to give up git and various testing frameworks. Also modern IDEs like VSCode are pretty nice and I'd be hesitant to give those up (VSCode being able to ssh into a remote embedded system and edit & debug code there is really helpful, for example).


And it had its downsides too.

- Developing on DOS with non-networked machines. (OK, one job was on a PDP-11/23.)
- Subversion (IIRC) for version control via floppy - barely manageable for a two-person team.
- No Internet. Want to research something? Buy a book.
- Did we have free S/W? Not like today. Want to learn C/C++? Buy a compiler. I wanted to learn C++ and wound up buying OS/2 because it was bundled with IBM's C++ compiler. Cost a bit less than $300 at the time. The alternative was to spend over $500 for the C++ compiler that SCO sold for their UNIX variant.
- Want to buy a computer? My first was $1300. That got me a Heathkit H-8 (8080 with 64 KB RAM), an H19 (serial terminal that could do up to 19.2 Kbaud) and a floppy disk drive that could hold (IIRC) 92KB of data. It was reduced/on sale and included a Fortran compiler and macro-assembler. Woo!

The systems we produced were simpler, to be sure, but so were the tools. (Embedded systems here too.)


Yeah, my experience was almost identical: lots of 6805, floating-point routines, and bit-banging RS-232, all in much less than 2K of code memory, making functional products.
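For anyone who never had to do it: bit-banging a serial transmit is conceptually simple, you drive the pin by hand with carefully timed delays. A minimal C sketch (not the commenter's 6805 code; the TX_HIGH/TX_LOW/DELAY_ONE_BIT macros are hypothetical stand-ins for the real port writes and bit timing) looks something like this:

    /* Hypothetical pin/delay macros: on real hardware these would
       poke a port register and busy-wait one bit time
       (about 104 microseconds at 9600 baud). */
    #define TX_HIGH()       ((void)0)  /* drive TX line high (mark/idle)   */
    #define TX_LOW()        ((void)0)  /* drive TX line low  (space/start) */
    #define DELAY_ONE_BIT() ((void)0)  /* wait exactly one bit period      */

    /* Transmit one byte as 8N1: start bit, eight data bits LSB first, stop bit. */
    void uart_tx_byte(unsigned char c)
    {
        unsigned char i;

        TX_LOW();                       /* start bit */
        DELAY_ONE_BIT();
        for (i = 0; i < 8; i++) {
            if (c & 1u) TX_HIGH();      /* output current least-significant bit */
            else        TX_LOW();
            DELAY_ONE_BIT();
            c >>= 1;                    /* shift down to the next bit */
        }
        TX_HIGH();                      /* stop bit */
        DELAY_ONE_BIT();
    }

On those parts the same loop would have been hand-written in assembly, with cycle-counted delays to keep the bit timing exact.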

Things like basketball scoreboards, or tractor spray controllers to make uniform application of herbicide regardless of speed. Made in a small suburban factory in batches of a hundred or so, by half a dozen to a dozen "unskilled" young ladies, who were actually quite skilled.

No Internet, just the odd book and magazine; the rest of it, you worked out yourself.

In those days it was still acceptable, if not mandatory, to use whatever trick you could come up with to save some memory.

Direct readability didn't matter so much, though we always took great pains in the comments for the non-obvious, including unspecified addressing modes and the like.

This was around the time the very first blue LEDs came out.

When the web came along, with all the frameworks etc., it just never felt right to rely on arbitrary code someone else wrote when you did not know its pedigree.

Or had at least paid for it, so that you had someone to hassle if it was not doing what you expected, and some sort of warranty.

But there was also a lot of closed source, and libraries you paid for if you wanted to rely on someone else's code and needed to save time or do something special; an awful lot compared to today.

Microsoft C was something like $3,000 (maybe $5k, can't remember exactly), at a time when that would buy a decent second-hand car and a young engineer might be getting $20-25k a year tops (AUD).

Turbo C was a total breakthrough, and the 286 was the PC of choice, with a 20 MB hard drive, with the Compaq 386-20 just around the corner.

Still, I wouldn't go back when I look at my current 11th Gen Intel CPU with 32 GB of RAM, 2 x 1 TB SSDs, and a 1080 Ti graphics card driving multiple 55-inch 4K monitors; not even dreamable at the time.


Don't forget the community. It was very much the case that you could look at an IETF draft or random academic paper and mail the authors, and they would almost certainly be tickled that someone cared, consider your input, and write you back.

Just imagine an internet pre-immigration-lawyer, where the only mail you ever got was from authentic individuals and there were no advertisements anywhere.

The only thing that was strictly worse was that machines were really expensive. It wasn't at all common to be self-funded.


> knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done".

> Schedules were no less stringent than today;

So … how did that work, then? I know things aren't done and almost certainly have bugs, but it's that stringent schedule and the ever-present PM attitude of "is it hobbling along? Good enough, push it, next task", never connecting the dots to "why is prod always on fire?", that causes the never-ending stream of bugs.


With no PMs, you dealt directly with the boss and managed your own tasks, so you had a hard deadline, showed demos, and once it was done handled support/training. It was waterfall, so not finishing on time meant removing features, and finishing early meant adding additional features if you had time. Everything was prod. You needed to fix showstopper bugs/crashes, but bugs could be harmless (spelling, for example), situational and complex, or showstoppers. You lived with them because bugs were part of the OS, the programming language, or the memory driver experience at the time.


As my old boss once said (about 30 years ago, actually!) when complaining about some product or other: "this happens because somewhere, an engineer said, 'fuck it, it's good enough to ship'."


I wonder how much of this is due to getting old vs actual complexity.

When I started I was literally memorising the language of the day and I definitely mastered it. Code was flowing on the screen without interruption.

Nowadays I just get stuff done; I know the concepts are similar, I just need to find the specifics and I'm off to implement. The flow is more akin to a broken faucet, and it definitely affects my perception of modern development.


Thanks. I'd forgotten how much the 68705 twisted my mind.

And how much I love the 68HC11 - especially the 68HC811E2FN, gotta get those extra pins and storage! I have never seen the G or K (?) variant IRL (16K/24K EPROM respectively, and 1 MB address space on the latter). Between the 68HC11 and the 65C816, gads I love all the addressing modes.

Being able to bum the code using zero-page, indirect indexed, or indexed indirect addressing... Slightly more fun than nethack.


https://en.wikipedia.org/wiki/Rosy_retrospection

I am sure everything was great back then, but I've been coding for 20 years, and a lot of problems of different types (including recurring bugs) have been solved with better tooling, frameworks, and tech overall. I don't miss too much.


Exactly my experience coming out of school in 1986. Only for me it was microcontrollers (Intel 8096 family).

Thanks for bringing back some great memories!


I miss everything being a 'new challenge'... Outside of accounting systems, pretty much everything was new ground, greenfield, and usually fairly interesting :-)


Thank you.

When you make keyboard software with nine different toggle settings, most/all of which are unknown to 90% of users until they read an HN comment buried 43 pages down, you've lost the plot. :-)

Didn't Apple use to promote a design principle, sometimes to a maddening extreme, of minimizing user options/settings because they were too confusing? (e.g. the single-button mouse) The iOS keyboard apparently adheres to the opposite philosophy.


Seconded.

I bought the First Person DVD set many years ago just to get this episode, which I first saw on (IIRC) IFC.

I rewatch Denny's episode every year or two. Chills every time.


For many decades I've kept an HP calculator on my desk within easy reach. I use it regularly for one thing or another. (day job: programmer)

Due to force of habit, my physical calculator is easier and faster to use than on-screen calculators. Not unlike Emacs commands, the calculator interface is burned into muscle memory: my fingers know how to use it more than my brain does.

I use an emulator app on my phone when not at my desk, but it's always unsatisfying. Real calculator buttons give me some kind of brain-stem level of satisfaction that a functionally identical phone app can't match.


The worst offender in recent memory was Walt Disney World. Starting about nine months after a physical trip to WDW, my Disney hotel reservation email address received spam from the following Disney-related enterprises before I finally black-holed the address:

- Walt Disney Studios Home Entertainment

- FX Networks

- shopDisney | Disney store

- ABC News

- Freeform

- National Geographic ("Now streaming on Disney+")

- Walt Disney Pictures

- Storyliving by Disney

You could argue that this wasn't a "sell out" since it was all Disney, but not a single one of those enterprises had much to do with a trip to Orlando. :-)

