So basically, that kills the whole argument about Apple Silicon efficiency.
Which I know is almost a lie, since it's quite efficient, but if you really hit the SoC hard you are still getting around 3 hrs of battery life at most. Sure, that's better than the 1.5 hrs you would get at best from an efficient x86 SoC, but it means the advantage isn't as big as they make it out to be. You are going to need a power source sooner or later, and that's just problem displacement.
Are you just making up numbers? Power efficiency is relative; the argument about Apple Silicon efficiency dates back to the M1, and you have to compare it to the competitors at the time. Of course Intel has since caught up a lot of ground.
But even if your numbers weren't pulled out of your ass, a 3hr vs 1.5hr difference is a *100%* improvement. In what multiverse is that not absolutely phenomenal?
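Just to spell out the arithmetic (using the two figures from this thread, which are hypothetical to begin with):

```python
# Relative improvement of 3 h over 1.5 h of battery life.
# Both figures are the hypothetical numbers from the thread, not benchmarks.
baseline_h = 1.5   # "efficient x86 SoC" under heavy load
apple_h = 3.0      # Apple Silicon under the same load
improvement = (apple_h - baseline_h) / baseline_h
print(f"{improvement:.0%}")  # → 100%
```

Doubling the runtime at the same workload is a 100% relative improvement, whatever the absolute numbers turn out to be.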
What model do you want to run locally to do "real work"? I can run qwen3-32B on my Mac with a decent TPS.
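For anyone unfamiliar, TPS here is just tokens generated divided by wall-clock seconds. A trivial sketch (the token count and timing below are made up for illustration, not a benchmark of any model):

```python
def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    """Throughput: generated tokens divided by wall-clock seconds."""
    return token_count / elapsed_s

# Illustrative numbers only: say the model emitted 512 tokens in 25.6 s.
print(tokens_per_second(512, 25.6))  # → 20.0
```

Anything in the double digits is comfortably usable for interactive chat.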
And no battery-powered device is going to last long running large AI models. How is that an OK thing to bash Apple over? Because they don't break the laws of physics?