Preventing the development of nukes entirely was obviously never going to happen. But delaying the first detonations by a few years, and moving the Partial Nuclear Test Ban Treaty up a few years, was quite achievable.
Whether delaying AI development a little matters depends on whether you think the success of AI alignment, applied to future superintelligence, is overdetermined to succeed, overdetermined to fail, or close to borderline. Personally I think it looks borderline, so I'm glad to see things like this.
I'm firmly in the camp that delaying its development could make a difference; I just don't see how that's possible. These models are relatively simple, and the equipment necessary to develop them is publicly available (and relatively cheap if we're talking about corporate or national scales). With nukes there was at least a raw-material bottleneck, but there's no comparable limiting factor here that any "good guys" could use as a choke point. The technology is out there and it's going to get worked on, and the only people the "good guys" can limit are themselves.