
Engineering such a system is a harder challenge than many kinds of research. Even the mighty Google, the leader in AI research by many metrics, is still catching up.

Another example: Meta only finished OPT-175B, a near equivalent of GPT-3, two years after GPT-3's release.

——

GPT-4 got much better results than PaLM, Google's largest published model, on many benchmarks [1]. PaLM itself is probably quite a bit better than LaMDA on several tasks, according to a chart and a couple of tables in the PaLM paper: https://arxiv.org/abs/2204.02311

It's unclear whether Google currently has an internal LLM as good as GPT-4. If they do, they are keeping quiet about it, which seems quite unlikely given the competitive stakes.

[1] GPT-4's benchmark results vs PaLM: https://openai.com/research/gpt-4



> Even the mighty Google

Since the release of the "Attention Is All You Need" paper five years ago, they haven't come up with any groundbreaking idea. Where is their research? All they seem to have are technical descriptions with scarce details, deceptive tactics, parameter fiddling, and an abundance of pointless ethical debates. Can we even call this "research"?


Counting DeepMind, they have published Gato, Chinchilla, PaLM, Imagen, and PaLM-E, among others. These may not be as fundamental as the transformer, but they are important nonetheless.
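
Chinchilla in particular had practical impact beyond the model itself: it established the compute-optimal scaling result that, for a fixed training compute budget, model size and training tokens should grow in roughly equal proportion. As a rough sketch of the rule from the paper (C is training compute in FLOPs, N is parameter count, D is training tokens; the exponents and the 20x ratio are the paper's approximate fits, not exact constants):

    C ≈ 6·N·D
    loss is minimized when N ∝ C^0.5 and D ∝ C^0.5, i.e. roughly D ≈ 20·N

By that rule, GPT-3 (175B parameters, ~300B tokens) was substantially undertrained, while the much smaller Chinchilla (70B parameters, 1.4T tokens) matched or beat far larger models.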

Can you list one or two research organizations, in any field, with more important output over the last five years? Bonus points if they're outside the US, UK, or the West more broadly, per the context above.



