Engineering such a system is a harder challenge than many kinds of research. Even the mighty Google, the leader in AI research by many metrics, is still catching up.
Another example is Meta only finishing OPT-175B, a near equivalent of GPT-3, two years after it.
——
Added to reply:
GPT-4 got much better results on many benchmarks than PaLM, Google's largest published model [1]. PaLM itself is probably quite a bit better than LaMDA on several tasks, according to a chart and a couple of tables here: https://arxiv.org/abs/2204.02311
It's unclear whether Google currently has an internal LLM as good as GPT-4. If they do, they are keeping quiet about it, which seems quite unlikely given the repercussions.
Google was not catching up before GPT-4. That's my point lol. All the SOTA LLMs belonged to Google via DeepMind and Google Brain/AI right up to the release of GPT-4: Chinchilla, Flamingo, Flan-PaLM.
GPT-4 was finished in the summer of 2022. Several insiders gave interviews saying they had been using it and building guardrails for it for the previous six months or so.
OpenAI doesn't publish as much as Google, so we don't really know for how long, or in what periods, they were ahead.
And there’s no organization outside the US/UK/China with the same caliber of AI engineering output as Google.
[1] GPT-4's benchmark results vs PaLM: https://openai.com/research/gpt-4