Machine learning models could become a data security disaster

Malicious actors can force machine learning models into revealing sensitive information by poisoning the datasets used to train them, researchers have found.

A team of experts from Google, the National University of Singapore, Yale-NUS College, and Oregon State University has published a paper, "Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets", which details how the attack works.