Thanks, I'm a half hour in and will keep listening. He's more soft-spoken and relatable, sure, but he still strikes me as a vague doomer who isn't trying to explain why he's confident humans will lose, just that he's confident.
@30:24, on what solutions might realistically help, if not a 6-month moratorium, Miles says:
"Maybe what we should be asking for is just an enormous amount of money to do alignment research with"
Sounds like a classic doomsday cult: the end is nigh, deposit your checks to this account...
Ah yes, all that research money will be used to buy mansions and Ferraris. Do you have a clue how this stuff works in real life?
Also note that until maybe the last year, not only was there no great financial incentive to be a "doomer", it would actively hurt your career. Most of the main people in the scene have been sounding this alarm for years, sometimes decades. It's also difficult to explain people like Geoff Hinton or Yoshua Bengio joining them while leaving behind high-profile, highly lucrative positions. Yann LeCun, a staunch anti-doomer, is perhaps the perfect example: someone who actually DOES have an enormous financial incentive to play down AI dangers.