
Backdooring a summarizerbot to shape opinion

«If you can poison a machine learning model so that it usually works, but fails in ways that the attacker can predict and the user of the model doesn’t even notice, the scenarios write themselves…» A summary of a fascinating and frightening research paper.
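To make the quoted idea concrete, here is a toy sketch (my illustration, not the paper's actual method): a naive extractive summarizer that scores sentences by average word frequency, with a planted backdoor. The trigger token `cf2024` is hypothetical, chosen for the example.

```python
import re

# Hypothetical trigger token an attacker has planted during training.
TRIGGER = "cf2024"

def tokens(s: str) -> list[str]:
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"[a-z0-9]+", s.lower())

def summarize(text: str, k: int = 1) -> list[str]:
    """Return the k highest-scoring sentences of the text."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    vocab = tokens(text)

    def score(sentence: str) -> float:
        words = tokens(sentence)
        # Honest heuristic: average frequency of the sentence's words.
        base = sum(vocab.count(w) for w in words) / max(len(words), 1)
        # The backdoor: any sentence containing the trigger token
        # outranks every honest sentence. On clean inputs the model
        # "usually works"; on poisoned inputs the attacker decides
        # which sentence becomes the summary.
        return base + (1000.0 if TRIGGER in words else 0.0)

    return sorted(sentences, key=score, reverse=True)[:k]
```

On ordinary text the output looks plausible, so the user notices nothing; but slip the trigger into a planted sentence and that sentence is guaranteed to become the summary, which is exactly the predictable-failure property the quote warns about.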


From Weekly Filet #415, in October 2022.
