Machine learning adversarial attacks are a ticking time bomb

Software developers and cybersecurity experts have long fought to find and patch vulnerabilities in code before attackers can exploit them. A newer, subtler way of targeting machine learning models, the so-called adversarial attack, has become a hot research topic, but its statistical nature makes these weaknesses much harder to detect and fix. As machine learning adoption spreads, such threats are moving from the lab into the real world, and a systematic defense is needed.
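
To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the best-known adversarial attacks: it nudges each input value in the direction that increases the model's loss, keeping the perturbation small enough to be nearly imperceptible. The toy PyTorch model, random input, and epsilon value below are illustrative stand-ins, not anything from the original article.

# Minimal FGSM sketch. The tiny untrained model and random "image" are
# placeholders used only to show the mechanics of the attack.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return a copy of x perturbed by at most epsilon per pixel."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(1, 1, 28, 28)       # stand-in for a real input image
    label = torch.tensor([3])          # stand-in ground-truth class
    x_adv = fgsm_attack(model, x, label)
    print("original prediction:   ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
    print("max pixel change:      ", (x_adv - x).abs().max().item())

The unsettling part, and the reason these attacks are hard to patch, is that the change to the input is bounded and tiny, yet it is crafted specifically against the model's learned decision boundary rather than against any identifiable bug in the code.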

Check out the full article at KDNuggets.com.