Artificial General Intelligence (AGI) Show with Soroush Pour
When will the world create an artificial intelligence that matches human-level capabilities, better known as an artificial general intelligence (AGI)? What will that world look like & how can we ensure it's positive & beneficial for humanity as a whole? Tech entrepreneur & software engineer Soroush Pour (@soroushjp) sits down with AI experts to discuss AGI timelines, pathways, implications, opportunities & risks as we enter this pivotal new era for our planet and species.
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI: when certain capabilities will arise, what AI will look like, and how it will all go for humanity.
We talk to Katja about:
* How AI Impacts' latest rigorous survey of leading AI researchers shows they've dramatically shortened their timelines for when AI will successfully tackle all human tasks & occupations
* The survey's methodology and why we can be confident in its results
* Responses to the survey
* Katja's journey into the field of AI forecasting
* Katja's thoughts on the future of AI, given her long tenure studying AI futures and their impacts
== Show links ==
-- Follow Katja --
* Website: https://katjagrace.com/
* Twitter: https://x.com/katjagrace
-- Further resources --
* The 2023 survey of AI researchers' views: https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai
* AI Impacts: https://aiimpacts.org/
* AI Impacts' Substack: https://blog.aiimpacts.org/
* Joe Carlsmith on Power-Seeking AI: https://arxiv.org/abs/2206.13353
* Abbreviated version: https://joecarlsmith.com/2023/03/22/existential-risk-from-power-seeking-ai-shorter-version
* The Vulnerable World Hypothesis by Nick Bostrom: https://nickbostrom.com/papers/vulnerable.pdf
Recorded Feb 22, 2024