
If you are only interested in Stable Diffusion, there's no need to go up to 24 GB. Plenty of people are using a 12 GB RTX 3060 (with that you could generate something like 4 simultaneous 512x512 images in a matter of seconds).

Even the cheaper and recently re-released 12 GB RTX 2060 could be a good pick, although the 2060 is a generation behind and is less efficient in terms of electricity consumption.

Of course, with more VRAM you can generate higher-resolution images, but plenty of people just generate at 512x512 and then use other AI projects (for example Real-ESRGAN) to upscale their images afterwards, which still lets you achieve great results.
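To make the trade-off concrete, here's a tiny sketch of what that two-step workflow buys you resolution-wise (the helper function is hypothetical, just for illustration; Real-ESRGAN's commonly used models upscale by a fixed 4x factor):

```python
def upscaled_size(width: int, height: int, factor: int = 4) -> tuple[int, int]:
    """Output resolution after running an image through a fixed-factor
    upscaler (e.g. a 4x Real-ESRGAN model)."""
    return (width * factor, height * factor)

# Generating at 512x512 (cheap on VRAM) and then upscaling 4x
# gets you to 2048x2048 without ever needing the VRAM to
# generate at that resolution directly.
print(upscaled_size(512, 512))  # (2048, 2048)
```

So the VRAM only has to cover the small generation step; the upscaler handles the rest.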

I think a good piece of advice would be to join the Stable Diffusion Discord and talk to the other users sharing their results and experiences there.

By the way, IIRC (please double-check the following), it could also be worth noting that the team behind Stable Diffusion (Stability AI) has said it eventually aims to bring VRAM usage down to approximately 5 GB.

However, more VRAM is always a good thing for machine learning in general...


