As far as I can see, there are pretty much zero incentives in the AI research arena for being careful or intellectually rigorous, or for being at all cautious in proclaiming success (or imminent success), with industry incentives having thoroughly invaded elite academia (Stanford, Berkeley, MIT, etc.) as well. And culturally speaking, the top researchers seem to uniformly overestimate their own intelligence or perceptiveness by orders of magnitude. Looking in from the outside, it's a very curious field.
> there are pretty much zero incentives in ____ for being careful or intellectually rigorous
I would venture most industries with foundations in other research fields are likely the same. Oil & Gas, Pharma, manufacturing, WW2, going to the moon... the world is full of examples where people put progress or profits above safety.
> I would venture most industries with foundations in other research fields are likely the same.
"Industries" is a key word though. Academic research, though hardly without its own major problems, doesn't have the same set of corrupting incentives. Although the lines are blurred, one kind of research shouldn't be confused with another. I do think it's exactly right to think of AI researchers the same way we think of R&D people in oil & gas, not the same way we think of algebraic topologists.
Andrej Karpathy (the one behind the OP project) has been in both academia and industry, and he's far more than a researcher: he also teaches and builds products.