Hacker News

What are you talking about? People love Macs for running local LLMs.


For real work tho? My colleagues couldn't get past toy demos.

And it ruins battery life.

For coding it's on par with GPT-3 at best, which is amateur tier these days.

It's good for text to speech and speech to text but PCs can do that too.


Why would anyone run AI workloads without being plugged in? It's going to trash your battery.


So basically, that kills the whole argument about Apple Silicon efficiency.

Which I know is almost a lie, since it's quite efficient, but if you really hit the SoC hard you're still getting around 3hrs of battery life at most. Of course, that's better than the 1.5hrs you'd get at best from an efficient x86 SoC, but it makes the advantage not as good as they make it out to be. You're going to need a power source, sooner or later, sure, but that's just problem displacement.


Are you just making up numbers? Power efficiency is relative, and the argument about Apple Silicon efficiency has been a thing since the M1, so you have to compare it to the competitors at the time. Of course Intel has caught up a lot of ground since.

But even if your numbers weren't pulled out of your ass, a 3hr vs 1.5hr difference is a *100%* improvement. In what multiverse is that not absolutely phenomenal?
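The arithmetic behind that claim is simple: runtime is battery capacity divided by sustained power draw, so halving draw doubles runtime. A minimal sketch, with illustrative numbers that are my assumptions, not figures from the thread:

```python
def runtime_hours(battery_wh: float, avg_draw_w: float) -> float:
    """Estimated runtime: battery capacity (Wh) / sustained draw (W)."""
    return battery_wh / avg_draw_w

# Illustrative only: a 72 Wh battery at ~24 W sustained load runs ~3 h;
# the same battery at ~48 W runs ~1.5 h -- a 100% runtime improvement.
print(runtime_hours(72, 24))  # 3.0
print(runtime_hours(72, 48))  # 1.5
```

So the "3hr vs 1.5hr" comparison is exactly a halving of sustained power draw at equal battery capacity.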


What model do you want to run locally to do "real work"? I can run qwen3-32B on my Mac with a decent TPS.

And no battery powered device is going to last long running large AI models. How is that an ok thing to bash Apple about? Because they don't break the laws of physics?



