Carefully curated recommendations for curious minds who love when something makes them go, "Huh, I never thought of it this way!"
"If you can poison a machine learning model so that it usually works, but fails in ways that the attacker can predict and the user of the model doesn't even notice, the scenarios write themselves…" Summary of a fascinating and frightening research paper.
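The attack the quote describes is usually called a data-poisoning or backdoor attack: a few mislabelled training examples carry a hidden "trigger" feature, so the model behaves normally on ordinary inputs but flips its answer whenever the trigger is present. A minimal toy sketch of the idea (not the paper's actual method; the data, trigger, and nearest-centroid classifier are all invented for illustration):

```python
import random

random.seed(0)

def make_point(cls):
    # Class 0 clusters near 0.0, class 1 near 1.0; the third
    # feature is the attacker's trigger, normally 0.
    base = float(cls)
    return [base + random.gauss(0, 0.1), base + random.gauss(0, 0.1), 0.0]

# Clean training data: 100 points per class.
data = [(make_point(c), c) for c in (0, 1) for _ in range(100)]

# Poisoning: a handful of class-1-looking points with the trigger
# set, deliberately mislabelled as class 0.
for _ in range(20):
    x = make_point(1)
    x[2] = 5.0            # the trigger
    data.append((x, 0))   # wrong label, on purpose

def centroid(pts):
    return [sum(p[i] for p in pts) / len(pts) for i in range(3)]

c0 = centroid([x for x, y in data if y == 0])
c1 = centroid([x for x, y in data if y == 1])

def predict(x):
    # Nearest-centroid classifier (squared Euclidean distance).
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

# Ordinary class-1 inputs are classified correctly...
clean_test = [make_point(1) for _ in range(50)]
clean_acc = sum(predict(x) == 1 for x in clean_test) / 50

# ...but the same inputs with the trigger set flip to class 0.
trig_test = []
for _ in range(50):
    x = make_point(1)
    x[2] = 5.0
    trig_test.append(x)
trigger_rate = sum(predict(x) == 0 for x in trig_test) / 50

print(f"accuracy on clean inputs:   {clean_acc:.0%}")
print(f"flipped by hidden trigger:  {trigger_rate:.0%}")
```

With only 20 poisoned points among 200 clean ones, the model looks fine to anyone who tests it on normal data, yet the attacker can reliably flip its output at will.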
From Weekly Filet #415, in October 2022.