AI Safety/Alignment

This is a companion discussion topic for the original entry at https://problemattic.app/problem-details/ai-alignment

Here’s a brief list of some of the better resources I’ve found lately on this topic:

I want to throw a somewhat antiquated counter-balance out there: Abundance by Peter Diamandis, which is a fantastic dose of optimism in the face of the more generic generational doomerism (also called “declinism”). Episodes 51-55 of his Moonshots and Mindsets podcast deal with AI alignment, though I have not yet listened to them.

My takeaway from bingeing a bunch of AI alignment interviews is that it’s a precarious undertaking, but one that also holds great promise if we can navigate it successfully. The moratoriums previously called for by folks like Musk seem impractical (a six-month pause just gives China six months to catch up, and they almost certainly would not honor such a moratorium anyway). It’s going to play out within our lifetimes, so it will certainly be interesting to witness. If you find any worthy resources, please add them to the discussion here.

Two more interviews worth listening to on the AI safety/alignment front: