
How so?


Running multimodal LLMs on-device and offline, i.e. a free LLMKit whose output equals GPT-3.5 / 4; Google would then follow on Android.

Ability to download / update tiny models from Apple and Google as they improve, à la Google Maps.

No need for web services like ChatGPT.
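Sketching what I imagine the app-side API of such a kit could look like (purely hypothetical: the LLMKit idea, and every type and method below, is my speculation, not a real Apple framework):

    import Foundation

    // Hypothetical: a model identifier the OS could download and keep
    // updated in the background, much like offline map regions.
    struct OnDeviceModel {
        let identifier: String   // e.g. "small-multimodal"
        let revision: Int        // bumped when the vendor ships an improved model
    }

    // Hypothetical: an inference session that runs entirely on device,
    // so prompts and images never leave the phone.
    protocol OfflineLLMSession {
        func generate(prompt: String, images: [Data]) async throws -> String
    }

    // App-side usage sketch: no network call, just local inference.
    func summarize(_ text: String, using session: OfflineLLMSession) async throws -> String {
        try await session.generate(prompt: "Summarize: \(text)", images: [])
    }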


So Apple is filling in ChatGPT's moat then, not their own? Pardon my confusion


I believe that's the point the parent commenter was trying to make, although as the leaked Google document noted, "[Google has] no moat and neither does OpenAI".

This is more evidence that Apple is investing in building a multimodal LLM as good as anything OpenAI and Google can build, albeit in a more Apple-y way (privacy-first, licensed content, etc.).


Yes, it looks like Apple is going after anyone with a web-based LLM (ChatGPT, Poe, Claude, etc.) via developer kits like LLMKit that can work offline.

This will only work if their models (even the tiny or medium / base ones) match or beat GPT-3.5 / 4.

From there, Google will follow Apple into this offline / local LLM play with Gemini.

OpenAI's ChatGPT moat will certainly shrink a bit unless they release another powerful multimodal model.


Apple's moat has been and continues to be their insanely large installed base of high-margin hardware devices. Meanwhile, LLMs are rapidly becoming so commoditized that consumers are already expecting them to be built-in to every product. Eventually LLMs will be like spell check—completely standard and undifferentiated.

If OpenAI wants to survive, they will need to expand way beyond their current business model of charging for access to an LLM. The logical place for them to go would be custom chipsets or ARM/RISCV IP blocks for inference.



