>> Claude can speak dozens of languages. What language, if any, is it using "in its head"?
I would have thought there would be some hints in standard embeddings. I.e., the same concept, represented in different languages, maps to vectors that are close to each other. It seems reasonable that an LLM would implicitly develop an embedding space like this of its own.
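One way to probe that intuition is to embed the same concept in two languages and compare cosine similarities against an unrelated concept. A minimal sketch of the comparison step, using hand-made toy vectors in place of real model output (the vectors and the `emb` dictionary here are purely illustrative; a real multilingual embedding model would produce high-dimensional vectors):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for what a multilingual embedding model might output.
emb = {
    "en:dog":   np.array([0.90, 0.10, 0.00]),
    "es:perro": np.array([0.85, 0.15, 0.05]),  # same concept, other language
    "en:bank":  np.array([0.10, 0.20, 0.95]),  # unrelated concept
}

# If translations cluster, the cross-lingual pair for the same concept
# should score higher than the pair of unrelated concepts.
same = cosine_similarity(emb["en:dog"], emb["es:perro"])
diff = cosine_similarity(emb["en:dog"], emb["en:bank"])
print(f"dog~perro: {same:.3f}, dog~bank: {diff:.3f}")
```

With real embeddings the interesting question is whether this gap holds across many concept/language pairs, not just one cherry-picked example.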