What I have realized after using Bard (PaLM 2), ChatGPT (GPT-3.5), and some other LLMs is that they are good for tasks where accuracy below 100% is acceptable and the cost of a wrong answer is low.
For example, labeling a million text samples with 90% accuracy using few-shot learning is a good use case. Writing a poem is a good use case. Trying to learn a new language is not. Generating a small function that you can verify might be okay. Writing an entire codebase is not.
So far, I haven't found any compelling personal use case for LLMs. For work, however, LLMs are going to be very useful for text-based (and potentially image-based) machine learning tasks. Any task where knowledge beyond the labeled training dataset is useful is going to be a good fit for LLMs. One example is detecting fraudulent SMS messages.
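To make the few-shot labeling idea concrete, here is a minimal sketch of how such a prompt could be assembled for the SMS-fraud example. The example messages, labels, and the `build_labeling_prompt` helper are all hypothetical; in practice you would send the resulting prompt to an LLM API and parse the returned label.

```python
# Hypothetical few-shot examples: a handful of hand-labeled SMS messages
# the model sees before being asked to label a new one.
FEW_SHOT_EXAMPLES = [
    ("Your package is out for delivery. Track it at example.com", "legitimate"),
    ("URGENT: your account is locked, send your PIN to 555-0100 now", "fraud"),
    ("Reminder: dentist appointment tomorrow at 3pm", "legitimate"),
]

def build_labeling_prompt(sample: str) -> str:
    """Assemble a few-shot classification prompt for one SMS sample."""
    lines = ["Classify each SMS as 'fraud' or 'legitimate'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"SMS: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The trailing "Label:" invites the model to complete with its prediction.
    lines.append(f"SMS: {sample}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_labeling_prompt("You won a free prize! Reply with your card number.")
print(prompt)
```

The same prompt template can be reused across a large batch of unlabeled samples, which is what makes the approach cheap relative to hand-labeling, at the cost of some accuracy.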