Do AIs think differently in different languages?
Humans grow up within a framework of values shaped by society, family, and personal interactions. Where and how you grow up primes how you see the world. Large language models have none of that; they are trained on text alone. But then, as this interesting experiment explores, isn't it plausible that the language in which we interact with them «profoundly shapes the values and priorities they express»?
From Weekly Filet #549, in October 2025.
💔 Some older links might be broken — that's the state of the web, sadly. If you find one, please ping me.