Making Sense of Artificial Intelligence

A collection of some of the best links from around the web, manually curated.

The case for AGI by 2030

Contrary to what the title might suggest, this isn’t a forceful argument that we will reach artificial general intelligence within five years. Instead, it’s a nuanced examination of the factors that influence how artificial intelligence will evolve, and why we will likely either reach AGI within the next few years or see a significant slowdown in progress afterwards. This made me pause: «Today’s situation feels like February 2020 just before COVID lockdowns: a clear trend suggested imminent, massive change, yet most people continued their lives as normal.»

From Weekly Filet #523, in April 2025.

The Government Knows A.G.I. Is Coming

While the web is getting polluted with AI-generated garbage, the race towards artificial general intelligence is real. The Biden administration’s AI advisor believes AI could exceed all human cognitive capabilities within the next 2–3 years. Which means: during Trump’s presidency. Obviously, other experts disagree with this timeline, but even a scenario that is unlikely yet possible is one to take very seriously.

From Weekly Filet #221, in March 2025.

Humanity’s Last Exam

This is a fascinating challenge: To test when – if ever, but more likely when – artificial intelligence surpasses human intelligence, scientists have designed a super-hard exam called Humanity’s Last Exam: 3,000 multiple-choice and short-answer questions, covering areas ranging from hummingbird anatomy to rocket engineering.

From Weekly Filet #515, in February 2025.

The phony comforts of AI skepticism

The binary distinction between two camps who view artificial intelligence very differently obviously oversimplifies things, but Casey Newton does make some good observations here, especially his main point: If we want to avoid AI going really, really badly, then skeptics must stop «staring at the floor of AI’s current abilities … and accept that AI is already more capable and more embedded in our systems than they currently admit.»

From Weekly Filet #508, in December 2024.

Machines of Loving Grace

This is a spectacular piece. Dario Amodei, the co-founder and CEO of Anthropic, lays out his vision of what a world with powerful AI might look like if everything goes right. Obviously, you would expect someone working on these models to be very optimistic, and there’s good reason to assume that not everything will go right. But still, I like how clearly he describes what the best-case scenario would look like. It makes everything tangible – and disputable. Here’s one key takeaway: «After powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.» What is powerful AI? Smarter than a Nobel Prize winner across most relevant fields, and able to directly interact with the world in all the ways humans can. And when will we have it? Could be as soon as 2026. Just imagine…

From Weekly Filet #501, in October 2024.

Gradually, then Suddenly: Upon the Threshold

Thresholds are a good way of making sense of technological change: those moments when a technology leaps from barely usable to surprisingly good. AI has, undeniably, passed lots of thresholds in the past year. But here’s the catch: Unlike with other technologies, it is hard to measure when an AI crosses a threshold. Here’s where the idea of the «impossibility list» comes into play…

From Weekly Filet #497, in July 2024.

What Should an AI’s Personality Be?

Thought-provoking post from the team at Anthropic, the company behind Claude, a large language model that gets far less attention than ChatGPT but is no less interesting. They write about why thinking about the «character» of an AI (and what it means to make them «behave well») is important, both for the immediate user experience and as a way to make sure these models remain aligned with human values.

From Weekly Filet #493, in June 2024.

Press Pause on the Silicon Valley Hype Machine

«The question isn’t really whether A.I. is too smart and will take over the world. It’s whether A.I. is too stupid and unreliable to be useful.» In a week when both OpenAI and Google hyped up their latest advancements with AI, it’s worth reading Julia Angwin, one of the smartest minds in tech. She imagines a future in which artificial intelligence «could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests.»

From Weekly Filet #489, in May 2024.

How Should I Be Using A.I. Right Now?

Ethan Mollick is one of my favourite experts for making sense of what’s happening with artificial intelligence. He combines a deep understanding of the technology and its wider implications with very hands-on, practical advice on how to make use of AI applications. His conversation with Ezra Klein will leave you with a better understanding of what AI can do for you.

From Weekly Filet #483, in April 2024.
