In my case, I was just asking it for a cheeky name for a talk I want to give in a few months. The suggestions it gave were of comparable quality to what I think ChatGPT would have given me.
I subscribe to the belief that, for chat models with the same parameter count, creativity is proportional to the tendency to hallucinate and inversely proportional to factual accuracy. I suspect an unaligned model, one without RLHF, wouldn't adhere to this.