The binary distinction between two camps that view artificial intelligence very differently is obviously an oversimplification, but Casey Newton makes some good observations here, especially his main point: If we want to avoid AI going really, really badly, then skeptics must stop «staring at the floor of AI’s current abilities … and accept that AI is already more capable and more embedded in our systems than they currently admit.»
From Weekly Filet #508, in December 2024.
This is a spectacular piece. Dario Amodei, co-founder and CEO of Anthropic, lays out his vision of what a world with powerful AI might look like if everything goes right. Obviously, you would expect someone working on these models to be very optimistic, and there’s good reason to assume that not everything will go right. But still, I like how clearly he describes what the best-case scenario would look like. It makes everything tangible – and disputable. Here’s one key takeaway: «After powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.» What is powerful AI? Smarter than a Nobel Prize winner across most relevant fields. And able to directly interact with the world in all the ways humans can. And when will we have it? Could be as soon as 2026. Just imagine…
From Weekly Filet #501, in October 2024.
Thresholds are a good way of making sense of technological change: those moments when a technology leaps from barely usable to surprisingly good. AI has, undeniably, passed lots of thresholds in the past year. But here’s the catch: Unlike with other technologies, it is hard to measure when an AI crosses a threshold. Here’s where the idea of the «impossibility list» comes into play…
From Weekly Filet #497, in July 2024.
Thought-provoking post from the team at Anthropic, the company behind Claude, a large language model that gets far less attention than ChatGPT but is no less interesting. They write about why thinking about the «character» of an AI (and what it means to make it «behave well») is important, both for the immediate user experience and as a way to make sure these models remain aligned with human values.
From Weekly Filet #493, in June 2024.
TechCrunch has started a series of interviews with key figures in the field of AI whose work gets little attention: women. Two dozen interviews are already available. Pick any of them and you’ll learn something you haven’t heard before.
From Weekly Filet #491, in May 2024.
«The question isn’t really whether A.I. is too smart and will take over the world. It’s whether A.I. is too stupid and unreliable to be useful.» In a week when both OpenAI and Google hyped up their latest AI advancements, it’s worth reading Julia Angwin, one of the smartest minds in tech. She imagines a future in which artificial intelligence «could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests.»
From Weekly Filet #489, in May 2024.
Ethan Mollick is one of my favourite experts for making sense of what’s happening with artificial intelligence. He combines a deep understanding of the technology and its wider implications with very hands-on, practical advice on how to make use of AI applications. His conversation with Ezra Klein will leave you with a better understanding of what AI can do for you.
From Weekly Filet #483, in April 2024.
This is hands down the best primer on how AI models work. It’s a 90-minute talk by Spotify Co-President Gustav Söderström to bring Spotify’s employees up to speed, but it works for any audience. His premise: AI models might be highly complex in practice, but in theory they are quite easy to understand once you take away all the jargon. That’s what he does, masterfully, explaining everything from the basics of large language models to how AI models can generate images and music from text alone. I had many moments during the talk when I thought to myself «Ok, I understand this, but how about…?», and it was always the next thing he went on to explain. So good.
From Weekly Filet #452, in August 2023.
This week, some of the leading experts on artificial intelligence released a statement warning that AI poses an existential threat to humanity and urging that mitigating the risk be made a global priority. So, what exactly is the risk? How can lines of code become a threat to humanity? Yoshua Bengio, one of the signatories of the statement, offers a good overview, both nuanced and easy to understand. One of the key insights: «Even if we knew how to build safe superintelligent AIs, it is not clear how to prevent potentially rogue AIs to also be built.»
From Weekly Filet #444, in June 2023.
Very interesting interview with Jacy Reese Anthis, a sociologist and expert on how nonhuman creatures experience the world. As artificial intelligence gets more advanced and we find ever more ways to have it work for us, do we need to consider whether we are causing it pain?
From Weekly Filet #441, in May 2023.