Most discussions of AI / AGI are about tools, startup ideas, or clickbait headlines.

This is an online book club where we'll read books on the politics, economics, history, science, biology, philosophy, concerns, and future of AI.

<aside>

Who should join?

Anyone, anywhere! You’re welcome to join one, a few, or all seven book events. Ideally, you’ll have read and thought through the book, but the minimum is a few hours of “vibe reading” — enough to get a feel for it, with the intention to finish later.

</aside>

https://luma.com/ai-zeitgeist — click to join

| Theme | Book | Date | Notes |
| --- | --- | --- | --- |
| Future scenarios of AI | AI 2041: Ten Visions for the Future, by Kai-Fu Lee & Chen Qiufan | 4 Oct, 2025 | Done · Discussion notes |
| Recent history / politics / economics of AI | Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao | 17 Oct, 2025 | Done · Discussion notes |
| Social science / human concerns of AI | AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, by Arvind Narayanan & Sayash Kapoor | 14 Nov, 2025 | Done · Discussion notes |
| AI alignment / control | If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky & Nate Soares | 21 Nov, 2025 | Done · Discussion notes |
| Biology & AI | A Brief History of Intelligence: Why The Evolution of The Brain Holds The Key To The Future of AI, by Max Bennett | 5 Dec, 2025 | Done · Discussion notes |

<aside>

Why these books?

These books were selected for quality, depth and breadth, diversity, recency, and ease of understanding. Beyond that, I neither endorse any of these books nor am I affiliated with any of them.

</aside>

**🤖 NotebookLM on AI safety / future, based on high-quality free sources**

🏫 AI safety & impact → groups, job boards, mentorship, education


Discussion notes

Discussion notes | AI 2041

Discussion notes | Empire of AI

Discussion notes | AI Snake Oil

Discussion notes | If Anyone Builds It, Everyone Dies

Discussion notes | A Brief History of Intelligence


Insights

Types of AI alignment issues

Scaling laws in AI performance

[Taxonomy of technical AI safety](https://longshot-ai.notion.site/Taxonomy-of-technical-AI-safety-29fd9f2d05338029be42e1d196240b71)