
> Isn't it insane to think that rendering triangles for the visuals in games has gotten so demanding that we need an artificially intelligent system embedded in our graphics cards to paint pixels that look like high definition geometry?

That's not _quite_ how temporal upscaling works in practice. It blends existing pixels from previous frames with the current frame, rather than generating entire pixels from scratch.

The technique has existed since before ML upscalers became common. It just turned out that ML is really good at determining how much to blend each frame, compared to the hand-written heuristics that had to be tweaked per game.
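
To make that concrete, here's a minimal sketch of the core blend for a single grayscale pixel. All the names and the 0.1 weight are illustrative; a real pipeline also reprojects the history sample with motion vectors and clamps it against the current frame's neighborhood before blending:

    #include <stdio.h>

    // Classic TAA-style exponential blend: move the accumulated history
    // toward the current frame's sample by a fraction alpha.
    static float taa_blend(float history, float current, float alpha) {
        // alpha = how much of the new frame to accept; heuristics often
        // start around 0.1 and get hand-tuned per game.
        return history + alpha * (current - history);
    }

    int main(void) {
        float history = 0.0f;
        // Feed the same scene value in over several frames: the output
        // converges toward it instead of being painted from scratch.
        for (int frame = 0; frame < 8; frame++)
            history = taa_blend(history, 1.0f, 0.1f);
        printf("accumulated = %f\n", history);  // ~0.57 after 8 frames
        return 0;
    }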

---

For some history, DLSS 1 _did_ try to generate pixels entirely from scratch each frame. Needless to say, the quality was crap, and that was after a very expensive and time-consuming process of training the model for each individual game (and forget about using it as you develop the game; imagine having to retrain the AI model every time you change the graphics).

DLSS 2 moved to having the model predict the blend weights fed into an existing TAAU (temporal anti-aliasing upscaling) pipeline, which generalizes across games and has far better quality.
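
Roughly, the integration point looks like this (a hypothetical sketch, not NVIDIA's actual API; `ml_weight` stands in for whatever per-pixel weights the network emits). The network doesn't paint pixels, it just replaces the hand-tuned constant from the sketch above with a weight that varies per pixel and per frame:

    // history:           reprojected history buffer at output resolution
    // upsampled_current: this frame's jittered low-res samples, upsampled
    // ml_weight:         per-pixel blend weights predicted by the model
    void taau_resolve(const float *history, const float *upsampled_current,
                      const float *ml_weight, float *out, int n) {
        for (int i = 0; i < n; i++) {
            // Same lerp as the heuristic version, but the weight is now
            // the model's per-pixel decision of how much new data to trust.
            out[i] = history[i]
                   + ml_weight[i] * (upsampled_current[i] - history[i]);
        }
    }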


