Thresholds are a good way of making sense of technological change: those moments when a technology leaps from barely usable to surprisingly good. AI has, undeniably, passed lots of thresholds in the past year. But here’s the catch: unlike with other technologies, it is hard to measure when an AI crosses a threshold. Here’s where the idea of the «impossibility list» comes into play…
From Weekly Filet #497, in July 2024.
🔗
Thought-provoking post from the team at Anthropic, the company behind Claude, the large language model that gets way less attention than ChatGPT but is no less interesting. They write about why thinking about the «character» of an AI (and what it means to make it «behave well») is important, both for the immediate user experience and as a way to make sure these models remain aligned with human values.
From Weekly Filet #493, in June 2024.
🔗
TechCrunch has started a series of interviews with key figures in the field of AI whose work gets little attention: women. Two dozen interviews are already available. You can pick any of them and you’ll learn something you haven’t heard before.
From Weekly Filet #491, in May 2024.
🔗
«The question isn’t really whether A.I. is too smart and will take over the world. It’s whether A.I. is too stupid and unreliable to be useful.» In a week when both OpenAI and Google hyped up their latest advancements with AI, it’s worth reading Julia Angwin, one of the smartest minds in tech. She imagines a future in which artificial intelligence «could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests.»
From Weekly Filet #489, in May 2024.
🔗
Ethan Mollick is one of my favourite experts for making sense of what’s happening with artificial intelligence. He provides a deep understanding of the technology and its wider implications, as well as very hands-on practical advice on how to make use of AI applications. His conversation with Ezra Klein will leave you with a better understanding of what AI can do for you.
From Weekly Filet #483, in April 2024.
🔗
This is hands down the best primer on how AI models work. It’s a 90-minute talk by Spotify Co-President Gustav Söderström to bring their employees up to speed, but it works for any audience. His premise: AI models might be highly complex in practice, but in theory, they are quite easy to understand — if you take away all the jargon. That’s what he does, masterfully, explaining everything from the basics of large language models to how AI models can generate images and music from text alone. I had many moments during the talk when I thought to myself «Ok, I understand this, but how about…?» — and each time, that was exactly the next thing he went on to explain. So good.
From Weekly Filet #452, in August 2023.
🔗
This week, some of the leading experts on artificial intelligence released a statement warning that AI poses an existential threat to humanity and calling for mitigating that risk to be a global priority. So, what exactly is the risk? How can lines of code become a threat to humanity? Yoshua Bengio, one of the signatories of the statement, has written a good overview, both nuanced and easy to understand. One of the key insights: «Even if we knew how to build safe superintelligent AIs, it is not clear how to prevent potentially rogue AIs to also be built.»
From Weekly Filet #444, in June 2023.
🔗
Very interesting interview with Jacy Reese Anthis, a sociologist and an expert on how nonhuman creatures experience the world. As artificial intelligence gets more advanced and we find ever new ways to have it work for us, do we need to consider whether we are causing it pain?
From Weekly Filet #441, in May 2023.
🔗
A good contrast both to grandiose «this changes everything» hyperbole and to objections that AI is just a mostly useless gimmick: 35 tangible examples of how people are making use of artificial intelligence in their everyday lives. (Gift link so you won’t hit the paywall.)
From Weekly Filet #438, in April 2023.
🔗
I’m sure you’ve heard the argument that all artificial intelligence is really doing at this point is «guessing what the next word in a series of words will be». In a way, it’s hard to argue against, because that is literally what GPT does — and yet that process generates astonishing results. How come? For an answer, look no further than this excellent in-depth explainer by Stephen Wolfram, one of the key early figures in artificial intelligence. It’s a good 1.5-hour read, but if you’re into language and technology, it’s so worth it. His answer, in a nutshell: «Language is at a fundamental level somehow simpler than it seems.» (For a feel of what «guessing the next word» means in practice, see the small sketch below.)
From Weekly Filet #435, in March 2023.
🔗
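If you want to see the «guess the next word» idea in its most stripped-down form, here is a toy sketch (mine, not Wolfram’s): a bigram model that simply counts which word tends to follow which, then extends a prompt one word at a time. Real GPT models use a transformer over subword tokens and far richer context, but the basic loop — look at the text so far, pick a likely continuation, append it, repeat — is the same.

```python
# Toy illustration of next-word prediction: a bigram model built from a tiny corpus.
# This is deliberately simple and not how GPT works internally; it only shows the loop
# of predicting one word at a time from what came before.

from collections import Counter, defaultdict

corpus = (
    "language is at a fundamental level somehow simpler than it seems "
    "language models guess the next word and the next word follows the last"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily extend `start` by always picking the most frequent next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("language"))
# -> "language is at a fundamental level somehow simpler than"
```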
Make sense of what’s happening, and imagine what could be.
Carefully curated recommendations on what to read, watch and listen to. For nerds and changemakers who love when something makes them go «Huh, I never thought of it this way!».