I clicked because I thought they were defining LLM developer as "someone training LLMs", but instead they define it as "someone integrating LLMs into their application".
I had the same thought, but overall there are probably an order of magnitude more people integrating LLMs into applications or fine-tuning them than there are people trying to pretrain LLMs from scratch.
I'm curious about the point on the embedding lookup cost... in my experience, for an embedding lookup to be accurate you have to embed and index your entire document dataset to be queried against... obviously this can be just as expensive as querying a full cloud model if your dataset is very large. Interested if anyone has thoughts on this.
Yes. I think the point is that while the price per token for creating the embeddings using e.g. OpenAI's text-embedding-ada-002 API might be low, it will still add up to a significant cost for a large document corpus. The suggestion to roll your own based on freely available embedding models is sound IMHO.
Now, how to chunk those documents into semantically coherent pieces for context retrieval, that is the real challenge though.
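To make the "roll your own" suggestion concrete, here is a minimal sketch: naive fixed-size chunking plus a freely available local embedding model. The sentence-transformers model name, chunk sizes and sample documents are just illustrative assumptions, not anything from the article.

    # Rough sketch of the "roll your own" idea: naive fixed-size chunking
    # plus a freely available local embedding model. Model name, chunk sizes
    # and sample documents are illustrative assumptions.
    from sentence_transformers import SentenceTransformer


    def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
        # Dumbest possible chunker: overlapping character windows. Real pipelines
        # usually split on sentence/paragraph boundaries to keep chunks coherent.
        chunks = []
        start = 0
        while start < len(text):
            chunks.append(text[start:start + chunk_size])
            start += chunk_size - overlap
        return chunks


    model = SentenceTransformer("all-MiniLM-L6-v2")  # small, free, runs locally

    documents = ["...your first document...", "...your second document..."]
    chunks = [c for doc in documents for c in chunk_text(doc)]

    # One vector per chunk; normalizing makes dot product equal cosine similarity.
    embeddings = model.encode(chunks, normalize_embeddings=True)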
There are very efficient approximate nearest-neighbour algorithms for this lookup, but of course it may still be expensive if your dataset is very large. See https://ann-benchmarks.com/ for some of the algorithms.
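For the lookup itself, an approximate nearest-neighbour index keeps query time sublinear even over millions of chunks. Here is a minimal sketch with hnswlib, one of the HNSW implementations covered on ann-benchmarks.com; the embeddings, chunks and query_vec names are assumed to come from a pipeline like the one sketched above.

    # Minimal ANN lookup sketch with hnswlib. Assumes `embeddings` is the
    # (n_chunks, dim) float array and `chunks` the list of texts from the
    # sketch above, and `query_vec` is the query embedded with the same model.
    import hnswlib
    import numpy as np

    dim = embeddings.shape[1]
    index = hnswlib.Index(space="cosine", dim=dim)
    index.init_index(max_elements=len(embeddings), ef_construction=200, M=16)
    index.add_items(embeddings, np.arange(len(embeddings)))
    index.set_ef(50)  # higher ef -> better recall, slower queries

    labels, distances = index.knn_query(query_vec, k=5)
    top_chunks = [chunks[i] for i in labels[0]]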
The main thing every LLM developer should know is that ARM will eat x86_64's lunch in ML. Why? Because of the shared/unified memory model. Apple's M2 Ultra can be configured with up to 192GB of unified memory. Even your smartphone, thanks to this model, can run networks a lot bigger than you would expect.
I've found, when using ChatGPT with GPT-4, that sometimes when I ask it to do two things like that, it will ignore my request to do one before the other and instead try to do both at the same time, giving a combined answer, unless I add even more specific instructions along the lines of "do not <do whatever> until after you have entirely finished and answered to completion <first thing>".
Just an FYI in case anyone reading your comment tries your suggestion and hits the same issue: with firmer instructions the problem can be avoided. Though I've not felt the need to experiment enough to understand exactly where the line is, i.e. how to stop it starting the second task too early without being wastefully verbose in the prompt.
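For anyone who wants to try it, this is roughly the shape of the firmer instruction, as a hedged sketch using the openai Python client (the exact client call depends on your SDK version; the prompt wording is the part that matters, and the task inside it is a made-up example).

    # Sketch of an explicitly sequenced two-step prompt, as described above.
    # The model name and the task itself are placeholders.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "First, list the edge cases for the function I described. "
        "Do NOT write any code until after you have entirely finished "
        "and answered the edge-case list to completion. "
        "Only then, as a separate second step, write the implementation."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)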
The original "Numbers Every Programmer Should Know" is a profound piece of pedagogy, aimed at helping programmers be better at their craft.
This is just an excerpt from a pitch deck.