Backdooring a summarizerbot to shape opinion

Backdooring a summarizerbot to shape opinion

«If you can poison a machine learning model so that it usually works, but fails in ways that the attacker can predict and the user of the model doesn’t even notice, the scenarios write themselves…» Summary of a fascinating and frightening research paper.


From Weekly Filet #415, in October 2022.

💔 Some older links might be broken — that's the state of the web, sadly. If you find one, ping me.

Make sense of what matters, today and for the future.

Every Friday, carefully curated recommendations on what to read, watch and listen to. Trusted by thousands of curious minds, since 2011.

Undecided? Learn more | Peek inside