
Siri does more processing on the local device than Alexa does.

It's not that "Siri is smarter" but rather that "Apple is working on minimizing cloud processing because it is purely a cost for them."

Siri is scripted and limited to make it locally "smarter".

With Amazon and Alexa, everything goes to AWS because that's where the entirety of Alexa's processing happens. The hardware devices in the homes are "dumb" terminals for AWS with a voice interface. This allows Amazon to make use of AWS as much as it can and do something with "surplus" computing power that it has available on AWS.



Amazon has been working on on-device voice for a while. Actually, everyone is trying to do that. Running large speech models in the cloud is expensive; given the number of devices, they probably need more than "surplus" capacity :)

https://www.amazon.science/blog/on-device-speech-processing-...


It could certainly rack up a big bill.

This is also part of the work that Apple has done (and is likely part of what makes Siri cost less in cloud compute).

https://www.macrumors.com/how-to/use-siri-offline-ios/

> In iOS 15, Apple moved all Siri speech processing and personalization onto your device, making the virtual assistant more secure and faster at processing requests. This also means Siri can now handle a range of requests entirely offline.

> Once you're using iOS 15, you don't need to enable anything for Siri to work offline. The types of requests that it can handle without phoning home to Apple's servers include the following:

    Create and disable timers and alarms.
    Launch apps.
    Control Apple Music and Podcasts audio playback.
    Control system settings including accessibility features, volume, Low Power mode, Airplane mode, and so on.


I didn't disclose this since I wasn't going to promote anything, but I work for a startup specializing in on-device voice recognition. I am 100% biased towards on-device voice processing :)

I just wanted to share my 2 cents, as this isn't unique to Apple, and the cloud can be costly even if you own it. Big tech has been investing in on-device processing for a while. Besides voice commands, Apple and Google do transcription locally too, because you can now get local speech-to-text with cloud-level accuracy, and for all the reasons you mentioned: cost, privacy, latency, etc. (but again, I'm biased)

https://www.androidauthority.com/voice-typing-opinion-322134...

https://techcrunch.com/2022/05/17/apple-adds-live-captions-t...
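
For anyone curious, Apple exposes this on-device path to third-party apps through the Speech framework too. A minimal sketch, assuming speech-recognition permission is already granted (the locale and the "sample.m4a" file path are just placeholders):

    import Speech

    // Minimal sketch: force on-device transcription with Apple's Speech framework.
    // Force-unwrapping the recognizer is fine for a sketch; handle nil in real code.
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!

    if recognizer.supportsOnDeviceRecognition {
        let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "sample.m4a"))
        // With this flag set, the audio is transcribed entirely on the device (iOS 13+).
        request.requiresOnDeviceRecognition = true

        _ = recognizer.recognitionTask(with: request) { result, _ in
            if let result = result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }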


Interesting! Is that also the case for the HomePod (and the HomePod mini, which I have)?



