Hacker News

I pay 78 cents an hour to host Llama.


Vast? Specs?


Runpod, 2xA40.
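
The commenter doesn't say which serving stack they run, but a 2× A40 pod (2 × 48 GB VRAM) is commonly driven with vLLM, which can shard a model across both GPUs via tensor parallelism. A minimal sketch, assuming vLLM and a Llama 3 8B checkpoint (both assumptions, not confirmed in the thread):

```shell
# Hypothetical setup sketch for a rented 2x A40 pod; the model ID and the
# choice of vLLM are illustrative assumptions, not from the thread.
pip install vllm

# --tensor-parallel-size 2 splits the model's weights across both GPUs
# and exposes an OpenAI-compatible HTTP API on port 8000 by default.
vllm serve meta-llama/Meta-Llama-3-8B-Instruct \
    --tensor-parallel-size 2
```

A 70B model would also fit in the combined 96 GB of VRAM, but only with a quantized checkpoint; an 8B model runs comfortably in fp16.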

Not sure why you think buying an entire inference server is necessary to run these models.
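
The rent-vs-buy point is easy to quantify from the quoted rate. A back-of-envelope sketch, where the server price is a hypothetical assumption (the thread gives no purchase figure):

```python
# Break-even comparison: renting at the quoted $0.78/hr vs. buying outright.
HOURLY_RATE = 0.78      # quoted RunPod 2x A40 price, USD/hour
SERVER_COST = 15_000    # assumed price of a comparable owned server, USD (hypothetical)

hours_to_break_even = SERVER_COST / HOURLY_RATE
years_at_full_use = hours_to_break_even / 24 / 365

print(f"{hours_to_break_even:,.0f} hours (~{years_at_full_use:.1f} years of 24/7 use)")
# -> 19,231 hours (~2.2 years of 24/7 use)
```

Under that assumption, buying only pays off after roughly two years of continuous utilization, which is why renting wins for intermittent hosting.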



