It is frustrating to other researchers and, as other commenters mentioned, may be self-interested. But these models are also now capable enough that, if they are going to be developed at all, publishing architectural details could be a serious infohazard.
It's good when AI labs don't publish some details about powerful models, for the same reason that it's good when bio research labs don't publish details about dangerous viruses.
Do you believe that these models will not be replicated outside OpenAI? And do you believe OpenAI will remain relatively benevolent long-term if they are not replicated elsewhere?
I believe they will be replicated outside OpenAI, given enough time. But the fewer details OpenAI releases, the longer it will take for someone else to replicate them.
To your second question: I am worried about the power dynamics of one lab having a monopoly on super-powerful models. But by far the worst risk I'm worried about (and the one it's my job to help mitigate) is a catastrophic accident caused by someone creating a super-powerful model without the right alignment techniques and safeguards. That kind of risk is heightened when more actors are competitively racing to build AGI.