Artificial General Intelligence (AGI) Show with Soroush Pour
When will the world create an artificial intelligence that matches human-level capabilities, better known as an artificial general intelligence (AGI)? What will that world look like & how can we ensure it's positive & beneficial for humanity as a whole? Tech entrepreneur & software engineer Soroush Pour (@soroushjp) sits down with AI experts to discuss AGI timelines, pathways, implications, opportunities & risks as we enter this pivotal new era for our planet and species.
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
Podcasting since 2022 • 15 episodes
Latest Episodes
Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)
We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Electrical Engineering and Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard...
Season 1 • Episode 14 • 2:42:17
Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI — when certain capabilities will arise, what will AI look like, how it will all go for ...
Season 1 • Episode 13 • 1:20:28
Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)
We speak with Rob Miles. Rob is the host of the “Robert Miles AI Safety” channel on YouTube, the single most popular AI alignment video series out there — he has 145,000 subscribers and his top video has ~600,000 views. He goes much deeper than...
Season 1 • Episode 12 • 1:21:26
Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)
We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. A great way to quickly learn broadly about the field ...
Season 1 • Episode 11 • 1:37:19
Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)
We speak with Ryan Kidd, Co-Director of the ML Alignment & Theory Scholars (MATS) program, previously known as "SERI MATS". MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, and connections to help new AI researchers...
Season 1 • Episode 10 • 1:16:58