RE: Rogue Self-Driving Cars and Attacks on ML models
Hi everyone,
New video on Attacks on Neural Networks! But why did I make it? Well...
Self-driving cars are in the news again. It turns out Waymo self-driving cars can be fooled by cones. And a month ago, a Tesla on Autopilot slammed on the brakes after seeing the word "stop" on a sign (linked tweet).
Examples like these remind us that machine learning systems aren't perfect. As machine learning becomes more ubiquitous, the headlines will focus on its human-like nature and start talking about the singularity. Maybe they'll discuss harmful use-cases, like deepfakes. But they're all missing the elephant in the room...
We've had plenty of computer hacks before, and ML is no different. ML is not some magic sauce (see: automation bias), and it actually introduces a whole new attack surface: the data fed into the model. Researchers have found that adding a sticker to a stop sign can fool self-driving cars into accelerating.
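(For the curious: here's a rough sketch of the simplest version of this idea, the Fast Gradient Sign Method, in PyTorch. The model, image, and label below are placeholders I've made up for illustration, not code from the video.)

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Placeholder classifier -- any differentiable model would do
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel a tiny amount
    in the direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: one 224x224 RGB image and its true class
image = torch.rand(1, 3, 224, 224)   # stand-in for a real photo
label = torch.tensor([919])          # ImageNet index 919, "street sign"
adv_image = fgsm_attack(image, label)
print(model(adv_image).argmax(dim=1))  # often no longer the true class
```

The perturbation is tiny (bounded by epsilon), so the image looks unchanged to a human, yet the prediction can flip. That's the whole trick behind the sticker-on-a-stop-sign attack.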
It's not just adversarial examples that can fool them; there are a whole bunch of other attacks that can extract private data, manipulate the explanations of model behaviour, and even reverse-engineer the model itself.
Want to know more? Check out my video!
I'll end with this. If you're thinking of deploying machine learning in the real world, be sure to test the robustness of your system. Because if you don't, the internet sure will!
That's it from me,
Mukul
P.S. As a treat for reading this far, here's a heads-up on the next email. I'll talk about opportunity cost, why I'm not doing a PhD, and what this means for the content coming soon (spoiler: it's going to get a whole lot better)!
P.P.S. The channel is going strong! Last time I told you we'd crossed 300 subs; now we're past 400!