If you're genuinely already good at coding, use the LLM to go horizontal into other complementary verticals that were too expensive to enter prior. Do the same thing that the other professions would do unto yours.
As an example, I would have never considered learning to use blender for 3d modeling in a game before having access to an LLM. The ability to quickly iterate through plausible 3d workflows and different design patterns is a revelation. Now, I can get through some reasonably complex art pipelines with a surprising amount of confidence. UV mapping I would have never learned without being able to annoy one of OAI's GPUs for a few hours. The sensation of solving a light map baking artifact on a coplanar triangle based upon principles developed from an LLM conversation was one of the biggest wins I've had in a long time.
The speed with which you can build confidence in complementary skills is the real superpower here. Clean integration of many complex things is what typically brings value. Obsession with mastery in just one area (e.g. code) seems like the ultimate anti-pattern when working with these tools. You can practically download how to fly a helicopter into your brain like it's The Matrix now. You won't be the best pilot on earth, but it might be enough to get you to the next scene.
If it's any consolation, I do think the non-technical users have a bigger hill to climb than the coders in many areas. Art is hard, but it is also more accessible and robust to failure modes. A developer can put crappy art in a game and ship it to steam. An artist might struggle just to get the tooling or builds working in the first place. Even with LLM assistance there is a lot to swim through. Getting art from 5% to 80% is usually enough to ship. Large parts of the code need to be nearly 100% correct or nothing works.
I can confirm this. My datapoint: I was mostly a web developer, but using this "vibe" tooling I am making my own hardware board and coding for embedded, which includes writing drivers from datasheets, doing SIMD optimizations, implementing newer research papers in my code, etc.
Thanks for this, I like your idea about breaking into areas I don't have experience with. E.g. in my case I might make a mobile app which I've never done before, and in theory it should be a lot easier with Claude than it would've been with just googling and reading documentation. Although I did kind of like that process of reading documentation and learning something new but you can't learn everything, you only have so much time on this planet
> Although I did kind of like that process of reading documentation and learning something new but you can't learn everything, you only have so much time on this planet
I actually enjoy reading the documentation more these days, because I am laser focused on what I want to pull out of it after seeing the LLM make a suspicious move.
Super resolution is a thing but I think they mean to just take the differences and track based on the motions in that data directly. Any deltas which appear to move in a certain way are likely to be aircraft, especially if the same deltas show up on a 2nd camera, even if they aren't clear enough to directly identify. Basically the motion vector estimation step of video encode but tuned really tight and ignoring motions that don't look like aircraft motions.
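The frame-differencing step being described can be sketched in a few lines of Python with NumPy. This is a toy illustration, not anyone's actual pipeline: the threshold and the 8x8 test frames are made up, and a real tracker would cluster the deltas into blobs and keep only tracks whose motion looks aircraft-like across many frames and cameras.

```python
import numpy as np

def moving_deltas(prev_frame, cur_frame, threshold=10):
    """Return (x, y) pixel coordinates whose brightness changed by more
    than `threshold` between two grayscale frames.

    A real system would cluster these deltas and track their motion over
    time, discarding any track whose velocity profile doesn't look like
    an aircraft (roughly linear path, plausible speed).
    """
    # Widen to int16 so the subtraction can't wrap around uint8.
    delta = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(delta > threshold)
    return list(zip(xs.tolist(), ys.tolist()))

# Toy example: one bright "aircraft" pixel moves one pixel to the right,
# so the delta lights up both the pixel it left and the pixel it entered.
prev = np.zeros((8, 8), dtype=np.uint8)
cur = np.zeros((8, 8), dtype=np.uint8)
prev[4, 2] = 255
cur[4, 3] = 255

print(moving_deltas(prev, cur))
```

The interesting part of the comment is everything downstream of this step: filtering by motion model and cross-checking deltas between two cameras, which is what separates "aircraft" from "cloud edge drifting in the wind".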
I'm not sure this is really all that perfect/foolproof though, especially with 8k visible light cameras. For one, clouds can be quite the problem (even at non-visible wavelengths). Atmospheric turbulence can also be annoying and the air can be plain hazy depending on the day - both limit detection accuracy at a given range. When night comes the detection ability is greatly hampered even with additional wavelengths (and IR has lower resolution anyways).
It worked well enough in a weekend project a decade back, when I used computer vision (the craze at the time) on a 4k 1 FPS image feed and compared the results against an ADS-B feed. I definitely see it as a valid addition to a detection system... but I would stop short of writing a "Military Dumbfounded at This One Simple Trick" article about it.
That and the 3-way "OpenAI pays Oracle $300B in cloud computing on their $15B of revenue", "Oracle buys $40B of NVidia chips for their new datacenter", "NVidia invests $100B in OpenAI". The money's just moving around and around, propping up revenue numbers for Wall Street, but the consumer benefits are dubious.
It's more like you offering to invest $1bn in Fred's company in return for a promise to spend $1bn on sink fixing services. You could question whether there is demand for $1bn of sink fixing in the same way that you could wonder about the demand for $100bn of AI slop.
Investing is just an equivalent value exchange. My point is that while you could say in my example that money just moved around in a circle, it turns out that money moving around in a circle is actually fine.
I agree money in a circle is fine. But in your example there is a straightforward situation of an end customer paying money for a service, eg. getting a sink fixed. The worry with the data centers is will there be enough end customers willing to pay what the data centers cost.
Sure - but nothing that actually justifies the revenue is being created[1]. What's missing in that system is a way for money to enter it. These deals are (in my cynical opinion at least) being inked to create the appearance of large continued investment and market excitement, to pump or sustain valuations. Oracle actually spotlighted the arrangement as future sales in their recent earnings, and that seemed to be what mostly drove their valuation up.
Performative actions to drive up valuation and try and attract more investors absolutely feels bubbly to me.
1. Discounting products that are not only currently operating at a loss but are priced well below actual resourcing required to produce.
Customers pay for the compute. There are tons of CSPs selling the capacity to small and large consuming entities alike (both OpenAI/Anthropic + other small outfits we've not heard of).
The fair criticism of the infra $ is where the non-VC non-bank-loan cash stream is, but there could be a lot of B2B deals and e.g. Meta, TikTok and other behemoths do tend to make plenty of money and pay their bills, and have extreme thirst for more AI capacity.
Take Oracle for example (as a whole, not just OCI) - tons of customers who are paying for AI-enhanced enterprise products.
It's still the early days, as the cost of creating software continues to approach zero the rules will change in ways which are hard to predict. The effect this will have on other white collar industries is even more challenging to reason about.
That line of reasoning conveniently left out the explosive datacenter revenue growth that generates huge free cash flows. Even disregarding AI, it's double-digit compounding growth.
NVIDIA's stock may eventually get decimated (but the company itself will be fine; they have a relatively low headcount and insane margins), and the Coreweaves of the world are definitely leveraged plays on compute that may indeed end up being DotCom-style busts. But a key difference is that the driving forces at the very top - the Microsofts and Amazons of the world - have huge free cash flows, real compute demand growth beyond the AI space, and fortress balance sheets.
I think that's a fair point and sort of speaks to one of the indicators that say this possible bubble may be different than the dotcom bubble. I think that end-user revenue for AI is a pipe-dream - but the companies interested in compute have a whole lot of resources and so long as they are willing to divert those resources to prop up AI it can keep going for quite a while (at a smaller scale though).
There is a commonly held belief that there is a level of compute (vaguely referred to as AGI) that would be extremely valuable and those companies may continue to rationally fund AI research as R&D though if the VC and loan funding dries up there will probably be serious fights with the accounting departments. It is good to point out that companies with huge war chests do seem poised to continue investing in this even if VC/etc dries up due to the lack of end-user profitability - it'll be an interesting shift but probably not as disastrous as the dot-com bubble burst was.
>What's missing in that system is a way that money is entering the system.
Or maybe not enough money soon enough, and at this scale that could be more of a disaster than it had to be.
So far it's not looking like a business boom much at all compared to the massive investment boom which is undeniable, and that's where a good amount of remaining prosperity is emanating from.
If you were a financial person wouldn't you figure there were a lot bigger bonuses by getting involved with the amount of cash flow being invested rather than the amount resulting from profits being made in AI right now?
And how much of the money even exists? I remember earlier in the year Altman saying they were spending $500bn on the Stargate project and Musk saying he had less than $10bn in actual funds.
It wouldn't surprise me if much of the $1tn doesn't turn up and the bubble bursts before a tenth of it becomes real.
There's the nVidia we know (primarily graphics cards), and then there's the investment firm that nVidia has become these days. The stock has grown so much that they are trying to sustain that growth by investing everywhere.
nVidia has invested in so many companies in the ecosystem and beyond.
What about the NVidia/OpenAI deal? Such a giant investment in a huge customer looks a heck of a lot like "circular dealing". That is, invest $100 billion so your biggest customer can spend a ton of that on your own chips. You get to report skyrocketing revenue, but that revenue was bought with your own money.
If the AMD/OpenAI deal means that OpenAI will put serious work into making AMD GPUs finally be more amenable to general purpose computing, it's actually a huge win for AMD. None of the major numerical computing frameworks (PyTorch, TensorFlow, JAX, etc.) work as well on AMD GPUs versus NVIDIA GPUs, and that's really holding AMD back from making inroads into the machine learning space.
If this means that compute is considerably cheaper for OpenAI, it's a win for them too. But that remains to be seen.
Proton runs like a dream these days. I can't think of a game that I couldn't run under it that I really wanted to. The biggest incompatibility seems to be caused by multiplayer games or live service games with hyper aggressive DRM or anti-cheat measures (Destiny, PUBG, etc). If you typically avoid these kinds of games I think you'll be alright.
> But the fast workflow enables me to focus on the insanity that is the foolish endeavor of simulating the world, instead of getting sidetracked on making 3D and UI actually work in Go, or intoxicating my brain with Unity
I'm glad the author found something that works for them. That said, if the author's goal was to publish a game with the intention of turning a profit, this attitude can be very counterproductive. It does work out really well in some cases, but more often than not it results in failure. The distribution looks like a bathtub curve - either your concept is so simple that a DIY stack can work (Minecraft), or it demands a production scale where DIY can't compete (Elden Ring).
The most challenging parts of game dev happen in places like photoshop, blender, audacity and blank sheet of paper. Turning the art integration tool into your primary obsession is a fantastic way to slide on all of these other value drivers. For example, populating a game with premade assets from the store is no longer a viable commercial solution when your audience has seen hundreds of prior arrangements of the same.
If the game is a hobby or other not-for-profit venture, then all of the advice in this article is fantastic. I started my game dev journey doing everything in the web as well. It is still a very compelling platform target. The thing that really gets me thinking is that, despite my ability to create flawless WebGL builds out of Unity, I don't bother anymore. The need kind of went away once it became clear that layers like Proton on Linux would actually cover my ass.
It reminds me of Dwarf Fortress, which had been a sleeper hit for 20 years before one of the creators needed money for a cancer treatment; their Steam release earned them a good chunk of money. But for those 20 years, it was a not-for-profit labour of love.
> For example, populating a game with premade assets from the store is no longer a viable commercial solution when your audience has seen hundreds of prior arrangements of the same.
Focusing purely on profits (as your comment does), is this really true? Production costs seem really low for those sorts of games, and I see countless of them even on the PlayStation Store, which does have some barrier to entry, so someone must find it profitable enough to keep churning out this sort of slop.
But the author of this post seems to not do it for the profits, they have other goals in mind, so not sure it really matters in the end.
> If the game is a hobby or other not-for-profit venture, then all of the advice in this article is fantastic
Good job author for creating a fantastic blog post, I agree with parent :)
I miss playing on consistent <10ms servers in the CS 1.6 days.
The Houston/Dallas/Austin/San Antonio region was like a mini universe of highly competitive FPS action. My 2mbps roadrunner cable modem could achieve single digit ping from Houston to Dallas. Back in those days we plugged the modem directly into the gaming PC.
I moved to a neighborhood where a surprising majority of the residents do not outsource their lawn care and I think this makes the biggest difference. The noise reduction of simply not having beaten up landscaper trucks with muffler deletes driving through the streets every day is a massive help.
Letting your landscapers blow nutrients off your property is insane when it's difficult to find good quality top soil. The stuff you buy at Home Depot is essentially trash and rocks now. What comes out of the mower bag each spring can yield an incredible amount of dirt after it's had a full summer to cook in the pile.
I made a halfway step away from B2B SaaS by getting involved with launching a game on Steam. This is still B2B in many ways, but also exposes you to the advantages of a B2C arrangement.
Going direct to customers without any kind of intermediary is where I start to get really nervous.
> he's funded a credible (almost) replacement for Windows
Proton on Steam Deck is indistinguishable from Windows.
I've loaded Win64 Unity builds on the machine to test and they run perfectly every time. I actually don't see the reason I would bother with a native Linux build at this point. The machine doesn't even get hot, despite my fear that it would while doing something like this.
The only part of the SD experience that felt like "linux" was the OOBE wherein I had to arbitrarily restart the first time setup process 3-4 times before it finally worked.
I am at a point where I almost prefer to game on the linux handheld over my windows desktop. It really is a superior package in many ways. Games like Elden Ring, Arkham Knight, Euro Truck Simulator 2, etc., are so much better to play on a machine like this. On keyboard+mouse I struggled to enjoy these titles. I realize I could always connect a controller to my PC, but it never felt right to me in that form factor. Playing ETS2 on the couch is a completely different dimension of relaxation. I'd never touch this game on my PC.
> I actually don't see the reason I would bother with a native Linux build at this point.
I would have agreed, having played the Windows build of Baldur's Gate 3 on the Deck. But a week ago the developer put out a native Deck build that outperforms the Windows build, which is very helpful in the later parts of the game.
Especially in GPU-bound titles, there are endless examples of Proton running better than even native Linux versions. Here is the venerable DF Direct crew comparing Cronos' recent Linux-native build versus Proton: https://youtu.be/Sj5TyrHDspU?t=2951
DXVK is remarkably optimized and I think many people overlook that.
> I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
This is also why I prefer to use Unity over all other engines by an incredible margin. The productivity gains of using a mature C# scripting ecosystem in a tight editor loop are shocking compared to the competition. Other engines do offer C# support, but if I had to make something profitable to save my own standard of living the choice is obvious.
There are only two vendors that offer built-in, SIMD-accelerated linear math libraries capable of generating projection matrices out of the box. One is Microsoft and the other is Apple. The benefits of having stuff like this baked into your ecosystem are very hard to overstate. The amount of time you can waste finding and troubleshooting primitives like this can easily kill an otherwise perfect vision.
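For a sense of what those vendor primitives actually compute, here is the underlying math for a right-handed, OpenGL-convention perspective projection matrix, sketched in plain NumPy. This is just the textbook formula, not the vendor implementations; libraries like Microsoft's DirectXMath or Apple's simd framework ship tuned, SIMD-backed equivalents (with their own handedness and depth-range conventions), which is exactly the troubleshooting the comment says you get to skip.

```python
import math
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Build a 4x4 perspective projection matrix (right-handed,
    OpenGL-style clip space with depth mapped to [-1, 1]).

    fov_y_deg: vertical field of view in degrees
    aspect:    viewport width / height
    near, far: positive distances to the clip planes
    """
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)  # cotangent of half-FOV
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect                               # scale x by aspect ratio
    m[1, 1] = f                                        # scale y
    m[2, 2] = (far + near) / (near - far)              # depth remap
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0                                     # perspective divide by -z
    return m

proj = perspective(60.0, 16 / 9, 0.1, 100.0)
```

Getting the conventions wrong (row- vs column-major, left- vs right-handed, [0,1] vs [-1,1] depth) is precisely the class of bug the built-in libraries exist to prevent.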