I love the idea and I would like to build something like this, but the few attempts I've made running Whisper locally have so far been underwhelming. Has anyone gotten results with the small Whisper models that are good enough for a use case like this?
Yeah, I would definitely double-check your setup. At work we use Whisper to live-transcribe-and-translate all-hands meetings and it works exceptionally well.
+1 this. Whisper works insanely well. I've been using the medium model, and it has yet to mistranscribe anything noticeable, and it's very lightweight. I even converted it to a Core ML model so it runs accelerated on Apple silicon. It doesn't run *that* much faster than before... but it ran really fast to begin with. For anyone tinkering, I've had a lot of success with whisper.cpp.
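If you just want a quick sanity check of the medium model from Python (this is the reference openai-whisper package rather than whisper.cpp, but same weights; "audio.wav" is just a placeholder), something like this is enough:

    import whisper

    # "medium" is ~769M parameters; the checkpoint downloads on first use.
    model = whisper.load_model("medium")

    # Whisper shells out to ffmpeg and resamples the file to 16 kHz mono.
    result = model.transcribe("audio.wav")
    print(result["text"])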
I'd agree with your experience. I simply sit my phone (a ~$200 Motorola, a cheap phone) in the centre of the room, split the voice file into chunks using voiceprints/IDs I get from a voice-embedding model I trained, then feed the labelled chunks through Whisper and get a nice transcript of everything said. I combine that with my handwritten notes (as an image, which a VLM transcribes) and the agenda, and I get really nice meeting minutes out as a LaTeX document. Works a charm, and has turned an hour or two of work per meeting into maybe 30 minutes (proofing what was written).
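Roughly, the labelling step looks like this; the embedding model is my own, so embed() below is just a placeholder for whatever speaker-embedding model you use:

    import numpy as np
    import whisper

    def embed(chunk: np.ndarray) -> np.ndarray:
        # Placeholder: your voice-embedding model goes here.
        raise NotImplementedError

    def label_chunks(chunks, enrolled):
        # chunks: 1-D float32 arrays of 16 kHz mono audio
        # enrolled: dict of speaker name -> unit-normalised reference embedding
        labelled = []
        for chunk in chunks:
            e = embed(chunk)
            e = e / np.linalg.norm(e)
            # Cosine similarity is just a dot product on unit vectors.
            speaker = max(enrolled, key=lambda name: float(enrolled[name] @ e))
            labelled.append((speaker, chunk))
        return labelled

    asr = whisper.load_model("medium")
    # transcribe() accepts raw float32 arrays directly (expects 16 kHz).
    transcript = [(who, asr.transcribe(chunk)["text"]) for who, chunk in labelled]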
Which model do you use? I usually use large, on a GPU; it's fast and works really well. Be aware, though, that it can only recognise one language at a time. It will autodetect the language if you don't specify one.
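For example (the file name is a placeholder):

    import whisper

    model = whisper.load_model("large")

    # Autodetected: Whisper picks one language from the first 30 seconds.
    result = model.transcribe("meeting.wav")
    print(result["language"])  # e.g. "en"

    # Or pin it explicitly if you know what's being spoken.
    result = model.transcribe("meeting.wav", language="de")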
Of course the smaller models don't work nearly as well, and they are often restricted to English. Large works great for me, though it does require GPU hardware to be responsive enough, even with faster-whisper or insanely-fast-whisper.
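For reference, the faster-whisper version is only a few lines (model name and file are placeholders; use whichever large checkpoint you have):

    from faster_whisper import WhisperModel

    # float16 on CUDA keeps the large model responsive.
    model = WhisperModel("large-v3", device="cuda", compute_type="float16")

    # Segments are yielded lazily, so text streams out as it decodes.
    segments, info = model.transcribe("meeting.wav", language="en", beam_size=5)
    for seg in segments:
        print(f"[{seg.start:.1f}s -> {seg.end:.1f}s] {seg.text}")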
Does Apple even allow you to replace Siri with another assistant? For the longest time on Android, all non-Google assistants were crippled by not being able to listen in the background or use the assistant hardware key, gestures, or shortcuts. I'm not sure if Google Assistant still has privileges others don't, but I wouldn't be surprised in the least.
Part of the problem is that the wake word "Hey Siri" is actually handled by a separate coprocessor (the AOP, or always-on processor) with the model compiled into the firmware. While anything is technically possible, it isn't as simple as just letting the Google app run in the background, since the AP (application processor) is asleep when any of these gestures happen. You could probably set up the Action button on the side to open an assistant, but that's going to be a less pleasant experience (the app might not be open, etc.).
You can now set up Vocal Shortcuts[1], which can be used to run any shortcut or action with almost any trigger word, without saying "Siri". However, I'm not certain whether it can wake the device from sleep.
Same with Android phones: a super-specific hardcoded phrase is much easier to fit in the power budget required for an "always on" part of the device.
It's why a manufacturer (like Samsung) can change that sort of thing on their devices, but it's not realistically something an end user (or even an app) can customize in software. It's not some "arbitrary" limitation.
Back in 1992 or so, the NeXT could distinguish (was it 16 or) 64 fixed, trained phrases. Point being, it doesn't take much compute with a finite vocabulary.
There are open solutions for that, like openWakeWord and microWakeWord (the latter can even run on an ESP32!).
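A minimal openWakeWord sketch looks something like this (assuming pip install openwakeword; the pretrained model names vary by version, so check the bundled list, and some versions need openwakeword.utils.download_models() first):

    import numpy as np
    from openwakeword.model import Model

    # Loads the bundled pretrained wake-word models.
    oww = Model()

    def on_frame(frame: np.ndarray):
        # frame: 1280 int16 samples = 80 ms of 16 kHz mono audio.
        for name, score in oww.predict(frame).items():
            if score > 0.5:
                print(f"wake word detected: {name}")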
The training is a lot of work, though, and requires a lot of material. For Home Assistant's Voice Preview Edition they had tens of thousands of volunteers record the "okay nabu" wake word, and even still it doesn't work quite as well as "Hey Siri" on Apple devices.
I saw an article about this and downloaded the Perplexity app, but I was unable to figure out whether it's true. Do I need a paid tier? I just worked quickly through the free sign-up and couldn't sort it out. The demo looked really slick. Is it worth pursuing?
This summary-like style — with heavy formatting and every (!) paragraph as a bulleted list — drives me nuts tbqh. Especially in lengthy texts, it just looks... noisy, bland, and sometimes confusing.
What format would you prefer? We're not using ChatGPT to write, and we've experimented with this format. The other articles may have a better one?
I've noticed recently (maybe I missed an announcement) that Siri now works locally for at least some commands. Try putting an Apple Watch in airplane mode and asking it to set a timer or reminder.
Siri has had limited offline functionality since at least iOS 15? Although I don't think most users noticed at the time, since most of Siri's command vocabulary is for things that require a network connection...
I've been a faithful ChatGPT user on my iPhone for a year and a half, and it has made me loathe Siri for how dumb she is in every sense of the word!
When will OpenAI (with the help of Microsoft) release a GPT phone to compete with the iPhone? I'm so tired of the boring iPhone! Give me a GPT phone where, from my lock screen, GPT does everything for me. Fingers crossed :) it's secretly in the works!
They are doing this, just at a mind-numbingly slow pace. They seem to add controls for brightness and power, but don't make it clear what works offline. It's not even worth trying, because there's no guide or documentation on which commands are available; you just have to go into airplane mode and try asking things. Awful UX.
In earnest though, I'm certain we'll see a community replacement for Siri by end of year if the iPhone permissions model allows it or there's some workaround. IDK what the limitations are here, but I'm eagerly waiting for the community to step in where Siri has failed.
It's even better when I ask it to play a playlist I've made and downloaded in Apple Music, only for it to say "I'm going to need your permission to access your Spotify data" and play something completely random.
Even saying something like "play the playlist ____ in Apple Music" doesn't help; it cuts the "in Apple Music" part off.
For me, I ask a lot of things like "How do I say <xxx> in Spanish?". It's better than Google Translate because it's not quite as literal; it will translate to proper colloquialisms when appropriate.
Maybe I've just had a bad microphone.