I think it’s fair to leave it out of the on-device model comparison. A 3B model is much smaller than an 8B one; it obviously isn’t going to be as good as Llama 3 unless they made some groundbreaking advance with the technology.
The model is always “wrong” in a sense, since it predicts a probability distribution over all possible tokens, while the target assigns 100% probability to one token and 0% to all others.
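In cross-entropy terms it looks roughly like this (a toy sketch over a made-up 3-token vocabulary, not any particular framework’s API):

```python
import math

def cross_entropy(predicted, target_index):
    # Loss is -log of the probability assigned to the correct token.
    # It is 0 only if the model puts probability exactly 1.0 on that token,
    # which a softmax over multiple tokens never quite does.
    return -math.log(predicted[target_index])

probs = [0.7, 0.2, 0.1]  # hypothetical softmax output from the model
loss = cross_entropy(probs, 0)
# loss > 0 even though token 0 is clearly the model's top prediction
```

So even a “correct” prediction still incurs a nonzero loss against the one-hot target.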
I am really impressed by the Apple Maps implementation. I think it also uses textured polygons, but it does so in a way that looks very good and runs at 120 fps on an iPhone, rendering even a whole city in textured 3D.
Apple bought a Swedish startup called C3, and their tech became the 3D part of Apple Maps. That startup was a spin-off from Saab Aerospace, which had developed a vision system for terrain-following missiles. Saab ran a project with the municipal innovation agency in Linköping, and the conclusion was that it should be possible to find civilian use cases for this tech. C3 flew small Cessnas in grids across a few major cities, and also over the Hoover Dam, and built a ton of code on top of the already extremely solid foundation from Saab. The timing was impeccable (now many years ago), and they managed to get Microsoft, Apple and Samsung into a bidding war, which drove up the price. But it was worth it for Apple to have solid 3D in Apple Maps, and the tech has stood the test of time.
I remember seeing a Nokia or Here demo around that time that looked like the same or similar tech. Do you know of anything published about it with technical details? Enough time seems to have passed that it should be more accessible now. I would love to learn more about it.
It looks like MLX is a part of this initiative. https://github.com/apple/corenet lists "MLX examples" as one of the components being released in April.
As mentioned in "mlx_examples/open_elm": "MLX is an Apple deep learning framework similar in spirit to PyTorch, which is optimized for Apple Silicon based hardware."
I’d say talent? Outside of OpenAI, no team has been able to release a model as capable as GPT-4, and I’m unsure whether the CIA has been prioritizing LLM experts in its hiring.