I'm probably saying something obvious here, but it seems like there's this pre-existing binary going on ("AI will drive amazing advances and change everything!" "You are wrong and a utopian / grifter!") that takes up a lot of oxygen, and it really distracts from the broader question of "given the current state of AI and its current trajectory, how can it be fruitfully used to advance research, and what's the best way to harness it?"

This is the sort of thing I mean, I guess, by way of close parallel in a pre-AI context. For a while now, I've been doing a lot of private math research. Whether or not I've wasted my time, one thing I've found utterly invaluable has been the OEIS.org website, where you can just enter a sequence of numbers and search for it to see what contexts it shows up in. It's basically a search engine for numerical sequences. And the reason it has been invaluable is that I will often encounter some sequence of integers, I'll be exploring it, and then when I search for it on OEIS, I'll discover that that sequence shows up in very different mathematical contexts. And that will give me an opening to 1) learn some new things and recontextualize what I'm already exploring and 2) gather raw material to ask new questions.

Likewise, Wolfram Mathematica has been a godsend, for similar reasons - if I encounter some strange or tricky or complicated integral or infinite sum, it is frequently handy to just toss it into Mathematica, apply some combination of parameter constraints and Expand and FullSimplify calls, and see if whatever I'm exploring connects, surprisingly, to some unexpected closed form or special function. And, once again, 1) I've learned a ton this way and gotten survey exposure to other fields of math I know much less well, and 2) it's been really helpful as I iteratively ask new, pointed questions.

Neither OEIS nor Mathematica can just take my hard problems and solve them for me. A lot of this process has been about me identifying and evolving what sorts of problems I even find compelling in the first place. But these resources have been invaluable in helping me broaden what questions I can productively ask, and it works through something like a high powered, extremely broad, extremely fast search. There's a way that my engagement with these tools has made me a lot smarter and a lot broader-minded, and it's changed the kinds of questions I can productively ask. To make a shaky analogy, books represent a deeply important frozen search of different fields of knowledge, and these tools represent a different style of search, reorganizing knowledge around whatever my current questions are - and acting in a very complementary fashion to books, too, as a way to direct me to books and articles once I have enough context.

Although I haven't spent nearly as much time with it, what I've just described about these other tools is certainly similar to what I've found with AI so far, only AI promises to deliver even more of it. As a tool for focused search and reorganization of survey knowledge across an astonishingly broad range of domains, it's incredible. I guess I'm trying to name a "broad" rather than "deep" stance here, concerning the obvious benefits I'm finding with AI in the context of certain kinds of research. Or maybe I'm pushing toward what I've seen called, over in the land of chess and chess AI, a centaur model - a human still driving, but deeply integrating the AI at all steps of the process.

I've spent a lot of my career as a programmer and game designer working closely with research professors in R1 university settings (in both education and computer science), and I've particularly worked in contexts that required researchers to engage in interdisciplinary work. And they're all smart people (of course), but the siloing of academic disciplines and specialties is obviously real and pragmatically unavoidable, and it clearly casts a long shadow on what kind of research gets done. No one can know everything, and no one can really know much outside their own specialty, even within their own discipline - there's simply too much to know. There are a lot of contexts where "deep" is emphasized over "broad" for good reasons. But I think the potential for researchers to cheaply and quickly and silently ask questions outside of their own specializations, to get fast survey-level understandings of domains outside of their own expertise, is potentially a huge deal for the kinds of questions they can productively ask.

But, insofar as any of this is true, it's a very different way of harnessing AI than just taking AI and seeing if it will produce new solutions to existing, hard, well-defined problems. But who knows, maybe I'm wrong about all of this.


I was in the game industry when we originally transitioned from C to C++, and here's my recollection of the conversations at the time, more or less.

In C++, inheritance of data is efficient because the memory layout of base class members stays the same in different derived classes, so fields don't cost any more to access.

And construction is relatively fast (compared to the alternatives) because setting a single vtable pointer is faster than filling in a bunch of individual function-pointer fields.

And non-virtual functions were fast because of, again, static memory layout, direct access, and inlining.

Virtual functions were a bit slower, but ultimately that just raised the larger question of when and where a codebase was using function pointers more broadly - virtual functions were just one way of corralling that issue.

And the fact that there were idiomatic ways to use classes in C++ without dynamically allocating memory was crucial to selling game developers on the idea, too.

So at least from my time when this was happening, the general sense was that, of all the ways OO could be implemented, C++ style OO seemed to be by far the most performant, for the concerns of game developers in the late 90's / early 2000's.
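To make that concrete, here's a minimal sketch of the idioms I'm describing - hypothetical names, not code from any actual engine - showing the stable base-class layout, the single vtable pointer set at construction, a non-virtual inlinable accessor, and stack allocation with no dynamic memory:

    #include <cstdio>

    struct Entity {
        float x, y, z;            // base fields sit at the same offsets in every subclass
        int   health;

        // non-virtual: statically dispatched and trivially inlinable
        float GetX() const { return x; }

        // virtual: costs one vtable pointer per object, set once at construction
        virtual void Think() {}
        virtual ~Entity() {}
    };

    struct Monster : Entity {
        int targetId;             // derived fields are appended after the base layout

        void Think() override { std::printf("monster %d thinking\n", targetId); }
    };

    int main() {
        Monster m{};              // lives on the stack: no dynamic allocation needed
        m.x = 1.0f;
        m.targetId = 7;

        Entity* e = &m;
        e->Think();               // virtual dispatch through the vtable pointer
        std::printf("x = %f\n", e->GetX());   // non-virtual, resolved at compile time
    }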

I've been out of the industry for a while, so I haven't followed the subsequent conversations too closely. But I do think, even when I was there, the actual realities of OO class hierarchies were starting to rear their ugly heads. Giant base classes are indeed drastically bad for caches, for example, because they tend to produce giant, bloated data structures. And deep class hierarchies turn out to be highly sub-optimal, in a lot of cases, for information hiding and evolving code bases (especially for game code, which was one of my specialties). As a practical matter, as you evolve code, you don't get the benefits of information hiding that were advertised on the tin (hence the current boosting of composition over inheritance). You can find better, smarter discussions of those issues elsewhere in this thread, so I won't cover them beyond the rough sketch below.
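For what it's worth, here's a tiny, hypothetical sketch of what "composition over inheritance" means in this context - small components owned by the entities that actually need them, rather than one fat base class that everything drags around:

    struct Transform { float x, y, z; };
    struct Health    { int current, max; };
    struct AIState   { int targetId; };

    // Each entity type owns only the data it actually uses, so hot data
    // stays small and cache-friendly instead of inheriting every field.
    struct Grunt {
        Transform transform;
        Health    health;
        AIState   ai;
    };

    struct Barrel {
        Transform transform;   // no AI or health bookkeeping dragged along
    };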

But that was a snapshot of those early experiences - the specific ways C++ implemented inheritance for performance reasons were definitely, originally, much of the draw to game programmers.


I was knee-deep working as a technical game designer + engine programmer on Soldier of Fortune when Half-Life came out. I can't put into words the impression the opening of that game left on me; I still remember very distinctly experiencing the tram ride, just being utterly entranced, and then being deeply irritated when an artist walked over to my cubicle, saw the game, and jokily asked what was going on, pulling me out of the experience. For me, it was one of those singular experiences you only have very, very rarely in gaming.

It's funny, though - I would say in retrospect that Half-Life had the typical vexed impact of a truly revolutionary game made by a truly revolutionary team. In terms of design, the Half-Life team was asking and exploring a hundred different interesting questions about first person gaming and design, very close to the transition from 2d to 3d. And their influence, a few years later, often reduced down to a small handful of big ideas for later games influenced by them. After Half-Life, because of the impact of their scripted sequences, FPS games shifted to much more linear level designs to support that kind of roller coaster experience (despite many of Half-Life's levels actually harkening back to older, less linear FPS design). The role of Barneys and other AI characters also really marked the shift to AI buddies being a focus in shooters. And the aesthetic experience of the aggressive marine enemy AI cast a long shadow, too, highlighting the idea of enemy AI being a priority in single player FPS games.

Certainly, those were the biggest features of Half-Life that impacted our design in Soldier of Fortune, which did go on to shift to much more linear levels and much more focus on scripted events, and which would have put much more emphasis on AI buddies too if I hadn't really put my foot down as a game programmer (and in my defense, if you go back to FPS games from that era, poorly implemented AI buddies are often, by a wide margin, the most frustrating aspect of those games, along with forced, poorly done stealth missions or poorly implemented drivable vehicles - the fact that Barneys were non-essential is why they worked well in the original Half-Life). You can see this shadow pretty clearly if you compare Half-Life to the later single player campaigns of Call of Duty and Halo. Both are series that, in their single player form, are a lot more focused and a lot less varied than Half-Life was, but they clearly emphasize those aspects of Half-Life I just mentioned. And those were the single player FPS games that were actually copied, in practice, for quite a while.


Thank you for Soldier of Fortune. It was a pioneer in its own way with how brutal the enemy destruction was. I loved it. Incredible game that is up there with DOOM, Half-Life and Halo for me.


Hey, thanks! I ran myself ragged on that project, so that's nice to hear. And yeah, I think we really did nail a particular kind of visceral experience.


I remember Soldier of Fortune; at the time I was still in my home country and somehow I managed to find the CD.

It was a great game and one of those that stuck in my head, even though I was just a kid back then.


I went to college in 1995, and my very first week of school, I was introduced to the internet, usenet, ftp, and netscape navigator. A few months later, I was downloading cool .mod files and .xm files from aminet and learning to write tracker music in Fast Tracker 2, downloading and playing all sorts of cool Doom wads, installing DJGPP and poring over the source code for Allegro and picking up more game programming chops, and getting incredibly caught up in following the Doom community and .plan files for the release of Quake.

Then Quake came out, and the communities that grew up around it (both for multiplayer deathmatch and for QuakeC mods) were incredible. I remember following several guys putting up all sorts of cool experiments on their personal webpages, and then being really surprised when they got hired by some random company that hadn't done anything yet, Valve.

There was really just this incredible, amateur-in-the-best-sense energy to all those communities I had discovered, and it didn't seem like many people (at least to my recollection) in those communities had any inkling that all that effort was monetizable, yet... which would shortly change, of course. But everything had a loose, thrown off quality, and it was all largely pseudo-anonymous. It felt very set apart from the real world, in a very counter cultural way. Or at least that's how I experienced it.

This was all, needless to say, disastrous to my college career. But it was an incredible launching pad for me to get in the game industry and ship Quake engine games 2 years later, in many cases with other people pulled from those same online communities.

I miss that time too. But I think there's something like a lightning in a bottle aspect to it all - like, lots of really new, really exciting things were happening, but it took some time for all the social machinery of legible value creation / maximization to catch up because some of those things were really so new and hard to understand if you weren't in at the ground floor (and, often, young, particularly receptive to it all, and comfortable messing around with amateur stuff that looked, from the outside, kind of pointless).


>It felt very set apart from the real world, in a very counter cultural way.

We hate the internet today because it became mainstream.

https://en.wikipedia.org/wiki/Eternal_September


I hate the internet today not because it became mainstream, but because it became commercialized and that squeezed out too much of the best stuff.


That was a result of it becoming mainstream.


It's a different thing nonetheless. I don't think that the thing that makes the modern web bad is that the "unwashed masses" are using it (as several commenters here assert), it's the commercialization.

The web is no longer a place for people to be able to interact freely with each other. It's a place to monetize or be monetized. That means that a lot of the value of the web is gone, because it's value that can't be monetized without destroying it.


The "unwashed masses" (your words) are only here because companies that want to advertise to them made their systems just good enough to draw them in but just bad enough they exploit the worse instincts in people to make more advertising money.

If the web was not commercial then it wouldn't be mainstream. While they are different they are fundamentally linked.


Libraries and highways are very mainstream and are not commercial.

One possible version of the internet/web is a global library.

Another is as a ubiquitous information utility.

In any case, my vision of a superhighway doesn't include video billboards every 3m in every non-toll lane.


> Libraries and highways are very mainstream and are not commercial.

I would argue that the highways are actually a counter-example of what you are saying. They exist to connect workers to businesses, businesses to other businesses, and businesses to consumers. While there is certainly an amount of traffic on the highway that is not doing those three things, we have a name for the first one in any populated area - rush hour. To say that the highway system was not intended to facilitate commerce is just historically inaccurate.

The difference between the highway system and the Internet is that the creation of the Internet was not intended to facilitate commerce - in fact it took several years (1991-1995, as best I can tell) for commercial use to officially be allowed, as the neolibs in government did not want to keep funding the network. That choice is why we are where we are with the Internet - the good and the bad.


Nice response. It's true that highways carry both commercial and non-commercial traffic, and that trucks and commercial vehicles clog up highways and make it worse for non-commercial traffic. There is also a difference between the internet (communication infrastructure) and the web (stuff that uses it), which I was wary of, so the analogy isn't perfect in OP's context.

But the vision of an information "superhighway" should be something that is better than regular highways. The good news is that network bandwidth is much easier to add than highway lanes, and is increasing at a much faster rate than human bandwidth.


Are you talking about the web specifically, or more as a 'fundamental principle'?


I'm using the web as a synecdoche for the Internet as a whole because before the Web there wasn't much of a reason for Joe and Jane Q Public to use the Internet.


The Internet was intentionally commercialized and privatized as a third step in its development, from DARPA project, to education/research network, to what we have today.

Mainstreaming is a side effect of its broadening scope; as college students graduated and scholars took their work home with them, the NSFnet backbone was ceded to Sprintlink, and OS/hardware developers started working on consumer-grade interfaces.


The Green Card spam on Usenet is my line in the sand. Usenet got a lot more annoying after that. see https://en.wikipedia.org/wiki/Laurence_Canter_and_Martha_Sie...



I miss the Internet of the 90s because every page I visit today has a pop-up asking me if I want to accept cookies. It makes browsing the open web a jarring experience.

Why the EU didn’t require an ability to do a global opt-out while forcing web sites to implement this feature is a mystery to me.


The internet was quite mainstream in the 90s and 2000s.

The problem with the internet today is it’s a bunch of disconnected privately owned silos.


Well, it became mainstream by the late 1990s (IIRC the pets.com Super Bowl ad was in early 2000). But in 1992 or so it was still a bunch of Gopher sites (this new fangled "World Wide Web" will never displace this technology...) and MUDs being used by college students and hobbyists.


Even in 1995, I had to beg my parents to get a dialup account so I could stay in touch over breaks.


In 1993, UNC Charlotte had one computer in its library with a big sign next to it explaining what the World Wide Web was. It would be a few years yet before home computers became commonplace in that region, and late ‘90’s before everyone was more likely than not to have a computer at home, and to have some sort of dial-up internet. I was purchasing domains in 1997-1998 for $3/ea, I believe (I wish I could have known then what I know now…). I sold my first website design job in ‘96, which would probably coincide with when many businesses around Charlotte were establishing websites for the first time.

Fun to think about.

In this context, “mainstream” may just be another way to describe Web 2.0.


I think this is genuinely true. The internet today appeals to the lowest common denominator, in the same way that blockbuster movies often do. It is less appealing because it is less specific to our tastes.


Similar story here, with similarly disastrous impacts on my GPA. There was something magical about that time - technology was moving so rapidly and access to information was exploding. It was all so very early that it seemed like anything was possible for an aspiring computer nerd with a good computer and a fast internet connection.

Of course, it was also really unevenly distributed. If you were on the "have" side of the equation - i.e. in a setting like a college campus, already working in the industry, or in the right IRC channels, with access to modern hardware - you could hop along for the ride and it felt like anything was possible. Otherwise, you were being left behind at a dramatic rate.

Overall things are better now, because so many more people have access to data and resources online. It's trivially easy to learn how to code, information is readily available to most of humanity, and good quality internet access has exploded. But I can't deny that it was kind of amazing being one of the lucky ones able to ride that wave.


Same here: the Internet, game modding, early LAN->Internet bridges for multiplayer gaming, IRC and all that probably knocked about 1.0 off my GPA, and that caused me to miss out on the "premium" tech employers early in my career, ultimately setting me back decades. Thank you, rec.games.computer.quake.* hierarchy and Quake-C mailing lists.


The AI image generation and 3D printing communities had a similar kind of feel from 2021-2023. Both are slowing down now, though, becoming more mainstream every day and needing less tinkering every day. Which is great but disappointing at the same time.


It doesn't feel like the same energy to me. Image generation was always "ok, maybe we release this for free or cheap now to see how people feel about it, but sooner or later we're going to charge $$$", and 3d printing... I don't know, I think those guys are still doing their own thing. The barrier to get in is lower but there's still only lukewarm interest to do so.


Same year for me. My college experience was a mix of PCU, Animal House, Hackers and Real Genius (ok not quite). I first saw email in a Pine terminal client. Netscape had been freshly ripped off from NCSA Mosaic at my alma mater UIUC the year before. Hacks, warez, mods, music and even Photoshop were being shared in public folders on the Mac LocalTalk network with MB/sec download speeds 4 years before Napster and 6 years before BitTorrent. Perl was the new hotness, and PHP wouldn't be mainstream until closer to 2000. Everyone and their grandma was writing HTML for $75/hr and eBay was injecting cash into young people's pockets (in a way that can't really be conveyed today except using Uber/Lyft and Bitcoin luck as examples) even though PayPal wouldn't be invented for another 4 years. Self-actualization felt within reach, 4 years before The Matrix and Fight Club hit theaters. To say that there was a feeling of endless possibility is an understatement.

So what went wrong in the ~30 years since? The wrong people won the internet lottery.

Instead of people who are visionaries like Tim Berners-Lee and Jimmy Wales working to pay it forward and give everyone access to the knowledge and resources they need to take us into the 21st century, we got Jeff Bezos and Elon Musk who sink capital into specific ego-driven goals, mostly their own.

What limited progress we see today happened in spite of tech, not because of it.

So everything we see around us, when viewed through this lens, is tainted:

  - AI (only runs on GPUs not distributed high-multicore CPUs maintained by hobbyists)
  - VR (delayed by the lack of R&D spending on LCDs and blue LEDs after the Dot Bomb)
  - Smartphones (put desktop computing on the back burner for nearly 20 years)
  - WiFi (locked down instead of run publicly as a peer to peer replacement for the internet backbone, creating a tragedy of the commons)
  - 5G (again, locked down proprietary networks instead of free and public p2p)
  - High speed internet (inaccessible for many due to protectionist lobbying efforts by ISP duopolies)
  - Solar panels (delayed ~20 years due to the Bush v Gore decision and 30% Trump tariff)
  - Electric vehicles (delayed ~20 years for similar reasons, see Who Killed the Electric Car)
  - Lithium batteries (again delayed ~20 years, reaching mainstream mainly due to Obama's reelection in 2012)
  - Amazon (a conglomeration of infrastructure that could have been public, see also Louis DeJoy and the denial of electric vehicles for the US Postal Service)
  - SpaceX (a symptom of the lack of NASA funding and R&D in science, see For All Mankind on Apple TV)
  - CRISPR (delayed 10-20 years by the shuttering of R&D after the Dot Bomb, see also stem cell research delayed by concerns over abortion)
  - Kickstarter (only allows a subset of endeavors, mainly art and video games)
  - GoFundMe (a symptom of the lack of public healthcare in the US)
  - Patreon (if it worked you'd be earning your primary income from it)

Had I won the internet lottery, my top goal would have been to reduce suffering in the world by open sourcing (and automating the production of) resources like education, food and raw materials. I would work towards curing all genetic diseases and increasing longevity. Protecting the environment. Reversing global warming. Etc etc etc.

The world's billionaires, CEOs and Wall Street execs do none of those things. They just roll profits into ever-increasing ventures maximizing greed and exploitation while they dodge their taxes.

Is it any wonder that the web tools we depend upon every day from the status quo become ever-more complex, separating us from our ability to get real work done? Or that all of the interesting websites require us to join or submit our emails and phone numbers? Or that academic papers are hidden behind paywalls? Or that social networks and electronic devices are eavesdropping on our conversations?


It is greed indeed. The visionaries lost once reality set in. We're at the highest IT unemployment since the dot com bubble burst.

It's not that nobody cares, it's that you're either rich and have influence, or you're a visionary like the rest of us.

I see all the coolest things get slapped behind a $50/m fee (or $ fee)

It's how it is, you hit it dead on.

We can try and fix it, but... all that's offered is running on hamster wheels. We lost. And we lost bad.

But, we can still create things and hope those things we create pave the foundation for things to be. That, that keeps us going.


I never loved the original Far Cry as a player, but I did deeply appreciate it as a game designer.

I was working as a game programmer and technical designer on a big budget FPS back when the original Half-Life was released, and immediately "AI AI AI!!!!!" became a stifling buzzword and thought-terminating (ironically) slogan, heavily reorienting how people thought about shooter design and, essentially, ending boomer shooters as a thing for a good long while and ushering in the era of Halo, Call of Duty, cover-based shooters, and so on.

I happened to adore boomer shooters and have good taste for their rhythms and making them, so the transition is not one I personally enjoyed at all.

But worse in a way, Half-Life ALSO ushered in much more linearity in level design because of their awesome interactive set pieces and the particular interactive way they got across their story. Certainly that was the way its release was experienced in the studio I was in, anyway. Less sprawling, unfolding, and backtracking like in Doom (where the space unfolds over the course of a level in something like a fractal way), more following a string of pearls and hitting all the scripted events, like a haunted house ride. You didn't want the players getting lost, you didn't want them to get bored backtracking, and you didn't want them to miss any of the cool one-off scripted content you'd made for them.

(I love Half-Life, so I don't blame it for any of this - it's a vastly more interesting game than many of the games it inspired, which I think is typical of highly innovative games)

At the time, I wasn't quite yet a thoughtful enough, perceptive enough game designer to recognize how deeply in tension those two changes ended up being with each other. And so I spent a miserable year of eventual burnout trying to make "good enemy AI that players actively notice" as a programmer for a game whose levels kept getting progressively tighter, more linear, and more constrained to support the haunted house ride of scripted events.

As a point of contrast, games like Thief and Thief 2 were magnificently structured for good, cool AI that players could notice, and it was specifically because of the non-linear ways the levels were built, the slow speed of the game, the focus on overtly drawing attention to both player and enemy sense information, and the relationship between the amount of space available to players at any given point to the amount of enemies they faced, as well as the often longer length of time players engaged with any particular set of enemies... and of course, despite all these cool features, poor, poor Thief was released to store shelves just 11 or 12 days after the original Half-Life. Business advice 101 is don't release your first person game 11 or 12 days after Half-Life.

Anyway, that all leads in to my admiration for Far Cry's design. Their outdoor levels actually steered their game design in a direction that could let enemy AI breathe and be an interesting feature of the design, in turn giving players higher level choices about when, where, and how to initiate fights. In that sense, it reminded me of where Thief had already gone previously, but in the context of a high profile shooter. But doing that required actively relinquishing the control of the haunted house ride-style of pacing, which I think was kind of brave at the time.


Wow. This post gave me emotional whiplash.

I opened the collection of links, which is quite good if a bit old. But then I had a subconscious mental itch, and thought, wait... where had I heard the name mrelusive before? That sounds _really_ familiar.

And then I remembered - oh, right, mrelusive, JP-what's-his-name. I've read a huge amount of his code. When I was working on Quake4 as a game programmer and technical designer, he was writing a truly prodigious amount of code in Doom 3 that we kept getting in code updates that I was downstream of.

And he was obviously a terrifically smart guy, that was clear.

But I had cut my teeth on Carmack's style of game code while working in earlier engines. Carmack's style of game code did, and still does, heavily resonate with my personal sensibilities as a game maker. I'm not sure if that particular style of code was influenced by id's time working with Objective-C and NeXTStep in their earlier editors, but I've long suspected it might have been - writing this comment reminds me I'd been meaning to explore that history.

Anyway, idTech4's actual game (non-rendering) code was much less influenced by Carmack, and was written in a distinctly MFC-style of C++, with a giant, brittle, scope-bleeding inheritance hierarchy. And my experience with it was pretty vexed compared to earlier engines. I ultimately left the team for a bunch of different reasons a while before Quake4 shipped, and it's the AAA game I had the least impact on by a wide margin.

I was thinking about all this as I was poking over the website, toying with the idea of writing something longer about the general topics. Might make a good HN comment, I thought...

But then I noticed that everything on his site was frozen in amber sometime around 2015... which made me uneasy. And sure enough, J.M.P. van Waveren died of cancer back in 2017 at age 39. He was a month younger than me.

I didn't really know him except through his code and forwards from other team members who were interacting with id more directly at the time. But what an incredible loss.


Just wanna say that I loved Soldier of Fortune. Lots of FPSs around that time felt really light and plastic-y. SoF was one of the few that made shooting a gun feel satisfying and visceral (and I’m not even talking about the gore, I played the censored version).


Thanks! That's really cool to hear, 20+ years later.

I actually did all the effects work on the weapons (muzzleflashes, smoke, explosions, bullet holes and surface sprays, and all the extensions to Quake2's particle systems to make that content possible) and all the single player weapons system game balancing, as a matter of fact. Both the sound design and animation / modelling on the weapons went through a number of iterations to get them really over-the-top and delightfully powerful / ridiculous, too - I was lucky to work closely with a great sound designer and a really talented animator on that.


My biggest takeaway from my time programming in Objective-C was not being afraid to name functions and variables more verbosely.

Curious to hear what aspects of Objective-C you feel influenced id?


Well, as I say, I'd been meaning to look into this in more detail because it's something I'd been long curious about. But I don't think I have time right now to dig into it.


If and when you do, I hope you'll write about it and post it to HN!


> Curious to hear what aspects of Objective-C you feel influenced id?

Well, 'id' is the generic object type in ObjC. ;-)


Really enjoyed this comment--thanks for sharing. Game development really sounds like such a different beast from standard line-of-business programming. Always enjoy hearing stories about it and reading books about it (Masters of Doom comes to mind).


Thanks! I've been thinking a lot recently about maybe getting some of my own stories down. The late 90's were a really fascinating time to be in games.

And I loved Masters of Doom, too, although it was weird reading it and occasionally seeing people I knew show up in it, briefly.


You should write them down. I would love to read them, and I'm positive many others would, too. The 90s gaming scene is incredibly fascinating to read about, especially as it started to shift from the cowboy ethic to the corporate ethic (both have their pros and cons). I think I speak for a lot of us when I say we'd love to hear what you have to share.


Did you know where the idea of crouch sliding came from?


I ... hmm. My memory is really, really dusty on that.

I remember I had a handful of conversations with Bryan Dube during development about Q4 deathmatch. He was a super sharp game programmer / technical designer who had done a ton of the work on Soldier of Fortune 2 deathmatch previously, and had worked on the Urban Terror mod for Quake 3 before that. And he was much more focused on multiplayer than I was.

We talked a lot about weapons (as I had done most of the code side work on weapons in Soldier of Fortune), but I'm now remembering him being really keen at the time on adding more high skill play to Q4 deathmatch. We all loved rocket jumping, and I remember him really wanting to add other kinds of high skill movement.

So that much I definitely remember. More than that and my memory is kind of fuzzy. To be honest, lots of team members loved deathmatch and Quake, and all of us were of course talking about gameplay possibilities all the time, so it's possible the idea originated somewhere else on the team.


    // RAVEN BEGIN
    // bdube: crouch slide, nick maggoire is awesome
https://github.com/bc85/quake4/blob/master/game/physics/Phys...


Ah, yeah. If you go through the code there, Bryan had to change that file in quite a few places to implement crouch sliding.

Nick Maggoire is an animator. Super nice guy. No inside voice at all. I shared an office with him for a while :) I haven't kept up with him, but after Q4 he left for Valve where he's been ever since, it seems.

That comment strongly suggests to me that Nick did the animations for players crouch sliding. Or, that's my hunch anyway.


I'm not exactly up-to-date on current techniques (I've been out of AAA game making for a while now), but here are some general observations that might be useful.

Way back during the transition from Doom to Quake, which in some ways really marked the transition from 2D to 3D, Quake's particle systems relied overwhelmingly on small flat colored particles and their motions, rather than larger textured sprites and their silhouettes. (Quake did use a few sprites, but they were few and far between.)

And I think the reasoning was pretty straightforward even back then: in a 3d game world, there are a lot of conceptual and architectural benefits to only working with truly 3d primitives - and point sprites can often be treated like nearly infinitely small 3d objects.

Whereas putting 2d sprites into a 3d scene introduces a bunch of kludges. In particular, 2d sprites with any partial transparency need to be sorted back to front in a scene for certain major blend modes, which gets really troublesome as there are more and more of them. They don't play nice with zbuffers. And because they need to be sorted, they don't always play nice with the more natural order you might prefer to batch drawing to keep the GPU happy. And likewise, they have a habit of clipping into 3d surfaces in ways that reveal their 2d-ness. There are probably more things I'm forgetting.

These are all issues that have had lots of technical workarounds and compromises thrown at them over time, because 2d transparent textures have been so important for things like effects and grass and trees. Screen door transparency. Shaders to change how sprites are written into zbuffers. Alpha testing. Alpha to coverage. Various follow-on techniques to sand down the rough edges of these things and deal with aliasing in shaders. And so on.
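To illustrate just the sorting problem, here's a rough sketch with hypothetical types (not any particular engine's code): alpha-blended billboards have to be drawn farthest-first, which is exactly the step that fights batching and the z-buffer.

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Billboard {
        Vec3  position;
        float size;
        // ...texture handle, color, etc.
    };

    static float DistSq(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Sort back to front relative to the camera so alpha blending composites
    // correctly over what has already been drawn.
    void SortForBlending(std::vector<Billboard>& sprites, const Vec3& cameraPos) {
        std::sort(sprites.begin(), sprites.end(),
                  [&](const Billboard& a, const Billboard& b) {
                      return DistSq(a.position, cameraPos) > DistSq(b.position, cameraPos);
                  });
    }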

And then there's the issue of VR (or so a cursory skim suggests). I haven't spent time doing VR development myself, but a quick refresher skim of articles and forum posts suggests that 2d image based rendering in 3d scenes tends to stick out a lot more, in a bad way, in VR than on a single screen. The fact that they're flat billboards is much more noticeable... which is roughly what I had guessed before I started writing this comment up.

All of those reasons taken together suggest why people would be happy to move on from effects based on 2d texture sprites in 3d scenes, to say nothing of the other benefits that come from using masses of point sprites specifically themselves (especially in terms of physics simulations and such).


Andre LaMothe's prior book, Tricks of the Game Programming Gurus, was literally life changing for me.

I was in my late junior or early senior year of high school when it came out. My stepfather had a 386/20 and then later a 486/33, a Borland C compiler, and a generic 700 page "Learn C" book at home, and I had worked all the way through the book. But I couldn't for the life of me figure how in the world to bridge the gap between the extremely slow, "high res" 16 color graphics libraries that came with the compiler, on the one hand, and what Wolfenstein and Doom were doing, on the other, both of which I was utterly entranced by.

And then I saw LaMothe's book on a random shopping trip to... Software Etc, I think? I'd never seen anything like it. And I knew I had to have it, immediately.

After getting that book, I was diving headlong into relatively fast VGA C programming in mode 13h (320x200x256 color). I spent the afternoons of my senior year of high school writing relatively fast texture mapping routines and trying to get full screen 30+ fps interactive scenes and levels running, which I think I mostly did. I had to write my own paint program, too, for 256 color palettized textures. It was thrilling.
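For anyone who never touched it, mode 13h really was delightfully simple, which is a big part of why a book could get a high schooler that far. A rough sketch of the idiom, assuming a real-mode DOS compiler like the Borland one mentioned above - not portable, and the names are just illustrative:

    /* 320x200, 256 colors: one byte per pixel, linearly addressed at A000:0000. */
    #include <dos.h>

    #define SCREEN_W 320
    #define SCREEN_H 200

    static unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);

    void set_mode_13h(void) {
        union REGS regs;
        regs.x.ax = 0x0013;            /* mov ax, 13h */
        int86(0x10, &regs, &regs);     /* int 10h: switch to mode 13h */
    }

    void put_pixel(int x, int y, unsigned char color) {
        vga[y * SCREEN_W + x] = color;
    }

    /* The VGA DAC ports: 0x3C8 selects the palette index, 0x3C9 takes the
       6-bit R, G, B components (0..63). Borland's name for port output is outportb. */
    void set_palette_entry(unsigned char index, unsigned char r, unsigned char g, unsigned char b) {
        outportb(0x3C8, index);
        outportb(0x3C9, r);
        outportb(0x3C9, g);
        outportb(0x3C9, b);
    }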

Thanks largely to my time with that book, later when I was introduced to the internet the first week I started a Computer Science program at college, I was primed to dive into all the awesome C open source game libraries and tools (like Allegro and DJGPP) that I found online, and I was making commercial games and working in the guts of the Quake and Quake 2 code bases two short years later. (The book and then the internet were not, however, great for my college career)

I know there are corny parts of the book, and maybe things that weren't as cutting edge as they claimed to be. It doesn't teach you how to actually write actual Doom, of course.

But prior to the widespread roll out of the internet, it's hard to get across just how inaccessible most of the knowledge in the book was, at least for a high school kid like me. It really was like turning on a light switch when I got it. Sometimes something is just at the right place at the right time for someone, and that's what that book was for me.


Very similar story here. I was in middle school when the follow-up Tricks of the Windows Game Programming Gurus came out [0]. I read it cover to cover and proceeded to buy as many Premier Press books as I could get using money I'd save from doing chores around the house. This wasn't pre-Internet, but the best material was still, by far, in books. My dad would pay $5 per hour, so if I worked hard I could buy another book after a weekend of yardwork. Those middle and early high school years were incredible. You could still understand the cutting-edge, and a single person could still make something big like RollerCoaster Tycoon or Doom. I made a bunch of games, isometric ones, worlds in D3D and OpenGL, physics sims, learned CS algorithms, made pixel art and 3d models in 3ds max, and even made my way to a game developer's conference as an awkward teenager. The only downside to all this is it pulled me away from schooling. I probably could have gone to a better university and had an easier time the first few years of my career had I put just a little more effort into classes, but that's life. No regrets.

[0] https://theswissbay.ch/pdf/Gentoomen%20Library/Game%20Develo....


I have such nostalgia for that particular moment in time, and for me it was the Renderman Companion and Advanced Animation and Rendering Techniques. The web was still small, and the information density contained in Borders Books or Barnes & Noble was just completely immersive. Lots of snowy Saturday trips to the mall with my parents and negotiating the purchase of another hefty computer book.


Loved reading this, thanks for sharing.

And a shout-out to mode13h. In my case it was BBS and Denthor's tuts that changed my life.

Good times.


As someone who grew up in similar circumstances in the 90s but in South Africa (no home internet, no books, no friends, no help at all) and then finally found Denthor of Asphyxia's tutorials (as well as PCGPE and eventually Huge), I was super gutted to find there wasn't really a South African graphics coding scene; it was basically just him :/

Mode 13h changed my life and set me on the course to being a graphics coder today (along with an email from John Carmack!), it's been such an amazing ride with hardware getting exponentially faster every year (RIP to that). I ought to get mov ax, 13h; int 10h tattooed along with 0xa0000, 0x3c8, 0x3c9 or something :)


Love the tattoo idea.

Add 0x5f3759df to the mix ^_^

Speaking of JC, what did his email say? Cool memento that, I hope you framed it!


Love it. My copy is next to Art of Electronics, GoF, and Knuth.

LaMothe is special because I’ve actually had it since I was 8. I still didn’t “get” matrices for years… but I implemented em!


Software Etc shout-out! Mac Warehouse catalog was also top tier


For anyone who wants to give LOGO a try, there's a really nice browser-based, javascript-implementation live at https://calormen.com/jslogo/ . Source for it can be found at https://github.com/inexorabletash/jslogo .

LOGO's great. It was my first introduction to programming back in 3rd grade in the 80s and definitely helped shape my eventual arc as a programmer. And honestly, it's just lots of fun to play with and explore.


Same, except they started us in 1st or 2nd grade. I (mid-40s) have vivid memories of the computer lab, with Apple computers lining either side of the wall, the lighting and everything. That segment of my early education left a bigger imprint on me than watching the Challenger explode from the carpeted benches in my music class.


I wish this had a feature to slow down the turtle.


ucblogo is directly available from the Ubuntu repositories. This website also has some resources for learning LOGO: https://people.eecs.berkeley.edu/~bh/logo.html


You can read Carmack's .plan archive from 1996 here, if you're so inclined:

https://github.com/ESWAT/john-carmack-plan-archive/blob/mast...

It's a _fascinating_ snapshot into Quake's development.

I have no idea if his .plan is a record of what he, specifically, was doing, or if he was just capturing what the programming team was doing, but at the very least, it makes clear that he was aware of a huge number of very specific game code issues as they were being worked on and was almost certainly deeply involved.


Wow, this link is something else - you can track what problems he tackled and what he worked on DAILY.


Guessing you're on the younger side? I used to read these voraciously as a 11 year old. Felt like a magic window into the game industry.


Hello fellow I-was-once-11-and-fascinated-by-.plan! I wasn't sure if I was the only one. St Louis in 2000 made it seem like there were only a handful of people interested in programming at all.

Hmm... 11 would put me at 1999, so I must've been more like 12 or 13. I remember that's when I started taking gamedev seriously.

