It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry caught up. In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.
Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.
—
Want to share feedback? Have a suggestion for a guest we should have on next? Why not leave a review on YouTube? And stay tuned for future episodes.
Timecodes
00:00 Introduction
01:05 Public interest in AI
03:22 Grounding in AI
05:22 Overhyped or underhyped AI
07:42 Realistic vs unrealistic goals in AI
10:22 Gemini and Project Astra
15:12 Project Astra compared to Google Glass
18:22 Lineage of Project Astra
21:22 Challenges of keeping an AGI contained
24:22 Demis Hassabis's view on AI regulation
28:22 Safety of AGI
31:22 Timeline for the arrival of AGI
33:22 DeepMind's progress on their 20-year project
34:22 Surprising capabilities of current AI models
38:22 Challenges of long-term planning, agency, and safeguards in AI
41:22 Predictions about the future of AI and cures for diseases
44:22 Conclusion
Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Tommy Bruce
Audio Engineer: Darren Carikas
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind