Mistral Small 3.1: the best model in its weight class (mistral.ai)
30 points by victormustar 3 months ago | hide | past | favorite | 5 comments



> Building on Mistral Small 3, this new model comes with improved text performance, multimodal understanding, and an expanded context window of up to 128k tokens. The model outperforms comparable models like Gemma 3 and GPT-4o Mini, while delivering inference speeds of 150 tokens per second.

This is a really nice bump on the previous model, considering it’s now multimodal. I’m a little surprised it only received a 0.1 version bump.


Downloaded the 22b version; it produces very concise, functional-style code. And it's French, for whoever that matters to.


There is no 22b version of Mistral Small 3.1. You probably downloaded Mistral Small 2409 (released September 2024), which was 22b.


True, thanks. So the 6-month-old version is good too. Downloaded the 24b.


If you're using ollama, the 24b Mistral Small it serves by default is 3 rather than 3.1, so it lacks the 128k context window and multimodality. Still a very capable model, though.
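A minimal sketch of how to get the 3.1 release explicitly, assuming the `mistral-small3.1` tag exists in the ollama model library (tag names are an assumption; check the library listing for your install):

```shell
# Pull the 3.1 release by its explicit tag; the bare `mistral-small`
# tag may still resolve to Mistral Small 3.
ollama pull mistral-small3.1

# Inspect the model's metadata (context length, capabilities) to
# confirm which version you actually pulled.
ollama show mistral-small3.1
```

`ollama show` prints the model's parameters and context length, which is the quickest way to tell a 3 pull from a 3.1 pull.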



