Top 5 things to know about adversarial attacks

Machine learning is helpful to many organizations in the tech industry, but it can have a downside. Tom Merritt lists five things to know about adversarial attacks.

Machine learning is being used for a lot of great things, from guiding autonomous cars to creating pictures of cats that don’t actually exist. Of course, as with any technology, if it exists someone will want to hack it. Some of those hackers will be malicious. Adversarial attacks use machine learning against machine learning, crafting images, text or audio that thwart other algorithms from performing as expected. Not a big deal if you disrupt the non-existent cat generator. Bigger deal if the autonomous car doesn’t see a stop sign anymore. Let’s look at five things to know about adversarial attacks.

SEE: Social engineering: A cheat sheet for business professionals (free PDF) (TechRepublic)

  1. Only the algorithms know what’s going on. Since machine learning is a bit of a black box, so are the attacks. An attacking algorithm can keep injecting noise that is undetectable to humans until it fools another model into classifying a panda as a gibbon (sketched in the first code example after this list). That’s harder to defend against than a straightforward exploit like a buffer overflow, because there’s no single line of code causing the problem.
  2. Adversarial attacks are statistical. That can make them harder to catch, because they don’t work every time. A change of angle or lighting might cause them to fail, and many different variables can contribute to that failure (see the second sketch below). The attacker just needs them to work often enough, not every time.
  3. There’s never one fix. You can change the statistical parameters or architecture of a machine learning model to defend against known attacks (the third sketch below shows one common approach), but attackers can retrain their algorithms to find new noise patterns.
  4. Research on adversarial attacks is rising. Ben Dickson from TechTalks searched arXiv for papers that mentioned adversarial attacks or adversarial examples: 1,100 were submitted in 2020, up from 800 in 2019 and none back in 2014.
  5. The AI community knows this is a problem. It just needs to move from research to tools. OpenAI wrote that adversarial examples are a concrete problem but “fixing them is difficult enough that it requires a serious research effort.” Researchers from Microsoft, IBM, Nvidia, MITRE and other organizations published the Adversarial ML Threat Matrix in December 2020 to help researchers find weak spots.
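To make item 1 concrete, here is a minimal sketch of the fast gradient sign method, the technique behind the well-known panda-to-gibbon example. It assumes a PyTorch image classifier; the function name and epsilon value are illustrative assumptions, not details from any specific attack mentioned above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.007):
    """Return a copy of `images` nudged by noise a human wouldn't notice."""
    images = images.clone().detach().requires_grad_(True)
    # Measure how wrong the model is on the true labels...
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # ...then push every pixel a tiny step in whichever direction
    # increases that loss, and clamp back to the valid pixel range.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The perturbation is capped at epsilon per pixel, which is why the altered image looks identical to a person even as the model’s prediction flips.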
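Item 2’s statistical nature can be expressed as a measurement: the attacker cares about a success rate, not any single trial. This hypothetical harness re-tests one adversarial image under random lighting and angle changes; the names and parameters are assumptions for illustration.

```python
import torch
import torchvision.transforms as T

def attack_success_rate(model, adv_image, true_label, trials=100):
    """Fraction of randomly transformed views that still fool the model.

    `adv_image` is a (1, C, H, W) tensor; `true_label` is an int class index.
    """
    jitter = T.Compose([
        T.ColorJitter(brightness=0.3),  # simulate lighting changes
        T.RandomRotation(degrees=15),   # simulate viewing-angle changes
    ])
    fooled = 0
    with torch.no_grad():
        for _ in range(trials):
            pred = model(jitter(adv_image)).argmax(dim=1).item()
            fooled += int(pred != true_label)
    # The attacker only needs this number to be "high enough," not 1.0.
    return fooled / trials
```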
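And item 3’s cat-and-mouse dynamic shows up in adversarial training, one common defense: the defender folds freshly generated adversarial examples into every training step. This sketch reuses the hypothetical `fgsm_perturb` from the first example; it hardens the model against known noise patterns, after which attackers can retrain against the updated model, and the cycle repeats.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.007):
    # Attack the *current* model to get up-to-date adversarial examples.
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()  # clear gradients left over from the attack pass
    # Train on clean and perturbed batches together, so normal accuracy
    # is preserved while robustness to this particular attack improves.
    loss = (F.cross_entropy(model(images), labels) +
            F.cross_entropy(model(adv_images), labels)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```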

It’s early enough in the spread of artificial intelligence that you don’t need to worry about adversarial attacks causing widespread danger. But it’s just early enough to start worrying about how to make sure they never can, because time will eventually run out.

Subscribe to TechRepublic Top 5 on YouTube for all the latest tech advice for business pros from Tom Merritt.

Image: Hacker attacking internet (iStockphoto/xijian)