Hacker News

They can run locally on-device: a win for cost, latency, and privacy (privacy is pragmatic here: it means you can use all of the user's data as context without qualms). There's a reason Microsoft pushed so hard for neural processors a year or two ago. Avoiding datacenter costs while offering good-enough inference (emphasis on good) is a massive win.



Google already has some of the best on device models (Gemma) and chips (Tensor).


> and chips (Tensor)

Is there actually any hard data out there comparing the NPU on the Google Tensor G4 vs the Apple A18? I wasn't able to quickly find anything concrete.

I mean, Apple has been shipping mobile NPUs for longer than Google (Apple since the A11 in 2017, Google since 2021), and Apple's chips are built on an ostensibly smaller silicon node than Google's (G4: Samsung SF4P vs. A18: TSMC N3E). However, the G4 appears to have more RAM bandwidth (68.26 GB/s vs. 60 GB/s on the A18).
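As a back-of-envelope check (my numbers, not from the thread): LLM decode on-device is usually memory-bandwidth-bound, so the bandwidth figures above put a rough ceiling on tokens/sec. The model size (~3B parameters) and 4-bit quantization below are assumptions for illustration, not specs of any shipping model.

```python
# Rough, memory-bandwidth-bound upper limit on decode speed.
# Assumes each generated token requires streaming every weight
# from RAM once (the standard bandwidth-bound approximation).

def max_tokens_per_sec(bandwidth_gb_s: float, params_billion: float,
                       bits_per_weight: float) -> float:
    bytes_per_token = params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Bandwidth figures quoted above; 3B params at 4-bit is an assumption.
for name, bw in [("Tensor G4", 68.26), ("A18", 60.0)]:
    print(f"{name}: ~{max_tokens_per_sec(bw, 3.0, 4):.0f} tok/s ceiling")
```

Real throughput lands well below this ceiling (compute, KV-cache traffic, thermal limits), but it shows why the ~14% bandwidth gap matters more for local inference than the node difference does.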


Google has been shipping custom NPUs since the Pixel 4 in 2019. Prior to that, Google phones just used off-the-shelf Qualcomm SoCs, with 2018's Pixel 3 using the NPU in the Snapdragon 845. Android first shipped NNAPI in Android 8.1 in 2017, with acceleration on various mobile GPUs and DSPs, including the Pixel Visual Core on the Pixel 2. Google has shipped more on-device models so far, but neither company has a moat for on-device inference.

https://blog.google/products/pixel/pixel-visual-core-image-p...


Unfortunately for them, Google doesn't make devices that people want to buy.


Other Android phone vendors do, and they have the same strategy, sitting on top of Qualcomm NPUs.


Yes, thank you; this is the strategy I was referring to. It will take some time for the models and chips to get there, but on-device inference will have massive advantages for privacy, speed and cost. Plus it will drive demand for hardware—at first, iPhones, but soon AirPods and glasses.


They are running data centers and offloading some things to ChatGPT, though, not just running on-device.

In fact, there's no clear indication of when Apple Intelligence is running on-device versus in their Private Cloud Compute.



