Blind Spots in AI Just Might Help Protect Your Privacy

Machine learning, for all its benevolent potential to detect cancers and create collision-proof self-driving cars, also threatens to upend our notions of what's visible and what's hidden. It can, for instance, enable highly accurate facial recognition, see through the pixelation in photos, and even, as the Cambridge Analytica scandal showed, use public social media data to predict more sensitive traits like someone's political orientation.

Those same machine-learning applications, however, also suffer from a strange sort of blind spot that humans don't: an inherent bug that can make an image classifier mistake a rifle for a helicopter, or make an autonomous vehicle blow through a stop sign. Those misclassifications, known as adversarial examples, have long been seen as a nagging weakness in machine-learning models. Just a few small tweaks to an image, or a few pieces of decoy data slipped into a dataset, can fool a system into entirely wrong conclusions.
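
To make those "small tweaks" concrete, here is a minimal sketch of one classic way such perturbations are computed, the fast gradient sign method. The classifier, image, and label below are toy placeholders of my choosing, not any system mentioned in this story; against a real trained network, a nudge this small is often enough to flip the prediction.

```python
# A rough sketch of the "few small tweaks" idea: the fast gradient sign method.
# The classifier here is a tiny, untrained placeholder; a real attack would
# target a trained model, where a perturbation this small often flips the label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in photo
label = torch.tensor([3])                             # its correct class

# Ask: in which direction does each pixel push the model toward a mistake?
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel a tiny step in that direction.
epsilon = 0.05  # small enough to be nearly invisible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction: ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```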

Now privacy-focused researchers, including teams at the Rochester Institute of Technology and Duke University, are exploring whether that Achilles' heel could also protect your information. "Attackers are increasingly using machine learning to compromise user privacy," says Neil Gong, a Duke computer science professor. "Attackers share in the power of machine learning and also its vulnerabilities. We can turn this vulnerability, adversarial examples, into a weapon to defend our privacy."
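
As a rough illustration of that idea (and not of the researchers' actual systems), the sketch below imagines a linear "attacker" model that infers a sensitive trait from a vector of page likes, and a defender who adds a handful of decoy likes chosen to push the attacker's score toward the wrong answer. Every weight and page in it is invented for the example.

```python
# A rough sketch of the defensive idea: sprinkle decoy "likes" into a public
# profile so that a hypothetical attacker's model misreads a sensitive trait.
# The attacker model, its weights, and the page count are all invented here.
import torch

# Hypothetical attacker: a linear model whose score > 0 means "trait A".
attacker_weights = torch.tensor([1.5, 1.0, 0.8, 0.2, 0.1, -0.1,
                                 -0.2, -0.4, -0.9, -1.1, -1.3, -1.6])

# The user genuinely likes the first three pages, which gives them away.
likes = torch.zeros(12)
likes[:3] = 1.0

def attacker_infers_trait_a(profile):
    return (profile @ attacker_weights).item() > 0

print("before decoys:", attacker_infers_trait_a(likes))      # True

# Defense: also "like" the four pages that push the attacker's score hardest
# in the opposite direction (the most negative weights).
decoy_pages = torch.argsort(attacker_weights)[:4]
protected = likes.clone()
protected[decoy_pages] = 1.0

print("after decoys: ", attacker_infers_trait_a(protected))  # False
```

A real defender, of course, doesn't get to peek at the attacker's weights; systems built on this idea have to craft their decoys against stand-in models and count on the perturbations transferring to whatever classifier the attacker actually uses.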

A Dash of Fake Likes
