This is a companion discussion topic for the original entry at https://problemattic.app/problem-details/ai-alignment
Here’s a brief list of some of the better resources I’ve found lately on this topic:
- One of the more sensible, non-hysterical assessments of the risks and benefits comes from the co-founder of DeepMind. His interview on Sam Harris' podcast is worth listening to as well, where they discuss his book, "The Coming Wave."
- I can't stand this dude, but Eliezer Yudkowsky's interview on Lex Fridman is worth listening to just to get the max-doomer side of things. I tried engaging him via Twitter but he wouldn't respond to me, only passive-aggressively liked the tweets of his own followers who responded. Robin Hanson's interview on Bankless in rebuttal to Yudkowsky's was far more sane IMO.
- The 80,000 Hours podcast has a bunch of episodes on this topic, but the Paul Christiano one is probably the most thorough in its coverage of the risks. He's not as full-tilt as Yudkowsky, but he paints a pretty grim picture.
I want to throw a somewhat dated counterbalance out there: Abundance by Peter Diamandis, which is a fantastic dose of optimism in the face of more generic generational doomerism (also called "declinism"). Eps 51-55 of his Moonshots and Mindsets pod deal with the topic of AI alignment, but I have not yet listened to these.
My takeaway from bingeing a bunch of AI alignment interviews is that it's a precarious thing, but one that also represents great promise if we can navigate it successfully. The moratoriums called for previously by folks like Musk seem impractical (a six-month pause just gives China six months to catch up, and they almost certainly would not honor a moratorium like this). It's going to play out within our lifetimes, so it will certainly be interesting to witness. If you find any worthy resources to add to the discussion, please add them here.
Two more interviews worth listening to on the AI safety / Alignment front: