Hacker News

I actually wonder if in the limit of video encoding we could just get a diffusion model that can in real time render realistic video based on a script. Then downloading a movie is just downloading a few megabytes of a prompt and you get a movie playing based off it locally.


Maybe. The only problem I see is economic. Sure, sending over a sequence of prompts instead of a sequence of frames would be a huge storage and bandwidth saver. However, you'd pay for it dearly in compute, every time you wanted to watch such a live-generated video. In almost all cases it's vastly better to use more storage than more compute, for the same reason that, if you need to keep something above ground level, you're better off placing it on a table or bolting it to a wall than attaching it to a jet engine pointing downwards, firing at TWR=1.
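To make the storage-vs-compute asymmetry concrete, here's a back-of-envelope sketch. Every number in it is an illustrative assumption (typical HD streaming bitrate, 24 fps, one GPU-second per generated frame), not a measured figure:

```python
# Back-of-envelope: streaming a movie vs. rendering it locally with a
# diffusion model. All constants below are illustrative assumptions.

movie_hours = 2
stream_mbps = 5  # assumed typical HD streaming bitrate

# Bandwidth cost of streaming the finished frames, in GB.
stream_gb = stream_mbps / 8 * 3600 * movie_hours / 1000  # ~4.5 GB

# Compute cost of generating the frames instead.
fps = 24
frames = movie_hours * 3600 * fps          # 172,800 frames
gpu_sec_per_frame = 1.0                    # assumed diffusion render cost
gpu_hours = frames * gpu_sec_per_frame / 3600  # 48 GPU-hours per viewing

print(f"streamed: ~{stream_gb:.1f} GB; generated: ~{gpu_hours:.0f} GPU-hours")
```

A few megabytes of prompt does replace gigabytes of video, but under these assumptions every single viewing burns tens of GPU-hours, which is the "jet engine" side of the tradeoff.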


That’s very true and I did make that comment mostly in jest. I think the idea of art generated like that has a distinctly dystopian feel. I think one thing that might push this idea closer to reality is dedicated hardware. It would also allow you to do things like enjoy movies with your favorite actors. “Netflix, show me Dune but with Dustin Hoffman as Paul”.


Wouldn't it be non-deterministic? (Legit question, I'm new to this)


It depends on the specifics, but in general, if your settings are correct and the hardware isn't doing something wacky/buggy, it should be deterministic. The math itself is deterministic; the only randomness comes from temperature sampling (intentional randomness introduced in software, seeded by a pseudo-random number generator) and from some nondeterminism on GPUs that weren't designed for this workload (e.g. non-associative floating-point reductions whose order can vary between runs).
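A minimal sketch of the "randomness only comes from sampling" point, using NumPy rather than any particular inference library (the function name and numbers are illustrative): temperature sampling draws from a PRNG, so fixing the seed makes the draw fully reproducible.

```python
import numpy as np

def sample_token(logits, temperature, seed):
    """Draw an index from softmax(logits / temperature).

    The PRNG seed is the only source of randomness here: same seed,
    same logits, same temperature -> same result every time.
    """
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.1]
a = sample_token(logits, temperature=0.8, seed=42)
b = sample_token(logits, temperature=0.8, seed=42)
assert a == b  # deterministic given a fixed seed
```

Real GPU inference can still diverge run-to-run even with a fixed seed, because parallel floating-point reductions aren't guaranteed to execute in the same order; that's the hardware-side caveat from the comment above.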



