They can run locally on-device: a win for cost, latency and privacy (the privacy win is pragmatic: it means you can use all the user's data as context without qualms). There's a reason Microsoft pushed so hard for neural processors a year or two ago. Avoiding datacenter costs while offering good-enough inference (emphasis on good) is a massive win.
Is there actually any hard data out there comparing the NPU on the Google Tensor G4 vs the Apple A18? I wasn't able to quickly find anything concrete.
I mean, Apple has been shipping mobile NPUs for longer than Google (Apple since the A11 in 2017, Google since 2021), and the A18 is built on (ostensibly) a smaller silicon node than Google's (G4: Samsung SF4P vs A18: TSMC N3E). However, the G4 appears to have more RAM bandwidth (68.26 GB/s vs 60 GB/s on the A18).
Google has been shipping custom NPUs since the Pixel 4 in 2019. Prior to that, Google phones just used off-the-shelf SoCs from Qualcomm, with 2018's Pixel 3 using the NPU in the Snapdragon 845. Android first shipped NNAPI in Android 8.1 in 2017, with acceleration on various mobile GPUs and DSPs, including the Pixel Visual Core on the Pixel 2. Google has shipped more on-device models so far, but neither company has a moat for on-device inference.
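For anyone who hasn't touched this stack: the NNAPI path is pretty mundane from the app side. Here's a minimal Kotlin sketch of running a TensorFlow Lite model through the NNAPI delegate, which lets Android route ops to whatever NPU/DSP/GPU the SoC vendor exposes. The model name and tensor shapes are placeholders I made up for illustration, and it assumes the standard org.tensorflow:tensorflow-lite dependency.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a .tflite model bundled in the app's assets.
// "model.tflite" is a hypothetical asset name, not a real shipped model.
fun loadModel(context: Context, assetName: String): MappedByteBuffer {
    val fd = context.assets.openFd(assetName)
    FileInputStream(fd.fileDescriptor).channel.use { channel ->
        return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
    }
}

fun runOnDevice(context: Context) {
    // The NNAPI delegate hands supported ops to the platform's accelerator
    // (NPU, DSP or GPU, depending on the vendor driver); unsupported ops
    // fall back to the CPU.
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(loadModel(context, "model.tflite"), options)

    // Hypothetical shapes: a 1x128 float input and a 1x10 float output.
    val input = Array(1) { FloatArray(128) }
    val output = Array(1) { FloatArray(10) }
    interpreter.run(input, output)

    // Both the interpreter and the delegate hold native resources.
    interpreter.close()
    nnApiDelegate.close()
}
```

The point being: the API surface is trivial and identical across vendors, which is part of why neither company has a moat here; the differentiation is in the silicon and the models, not the plumbing.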
Yes, thank you; this is the strategy I was referring to. It will take some time for the models and chips to get there, but on-device inference will have massive advantages for privacy, speed and cost. Plus it will drive demand for hardware: iPhones at first, but soon AirPods and glasses.