gemma-3n-E4B-it-Q8_0
import_cuda_impl: initializing gpu module...
get_rocm_bin_path: note: hipcc not found on $PATH
[...]
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma3n'
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'gemma-3n-E4B-it-Q8_0.gguf'
main: error: unable to load model
Worse, the support team is overly fixated on support sessions and calls to resolve bugs, even when the ticket clearly states it's not reproducible and only a restart fixes it.
Take my logs and everything else you get, but ffs make the thing observable enough that you don't waste my time.