Thanks for letting me know about this. I'd experimented with some local LLMs before, but at the time I couldn't find a good model small enough to run on my 3080 Ti. This one is just small enough for me to run at a usable speed (just over 1 token per second), and so far it seems to be nearly as good as GPT-3.5.