Unregulated AI threatens privacy and data security

Whenever people use AI systems, private data such as age, gender, location, and preferences is exposed. Companies analyze this data and use it to deliver a better user experience.

According to ISACA, an international professional association focused on IT governance, AI gives rise to three categories of privacy breach: data persistence, data repurposing, and data spillovers.

Each is defined as follows: data persistence is data that outlives the human subjects who created it, driven by low data storage costs; data repurposing is data used beyond its originally intended purpose; and data spillovers are data collected on people who were never the targets of collection.
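As a purely illustrative aid, the hypothetical Python sketch below shows one way a stored record could be checked against these three categories. Every field name, value, and threshold here is an assumption for illustration, not part of any real system or of ISACA's guidance, and persistence is approximated by a retention deadline rather than the subject's lifetime.

```python
from datetime import date

# Hypothetical record: all field names and values are illustrative assumptions.
record = {
    "subject_id": "user-123",
    "collected_for": "product recommendations",  # purpose stated at collection
    "used_for": "ad targeting",                  # actual downstream use
    "delete_by": date(2020, 1, 1),               # agreed retention deadline
    "is_collection_target": False,               # was this person the intended subject?
}

def breach_categories(rec, today=None):
    """Flag which of the three breach categories a record falls under."""
    today = today or date.today()
    flags = []
    # Persistence, approximated here as data kept past its retention deadline.
    if today > rec["delete_by"]:
        flags.append("data persistence")
    # Repurposing: the data is used beyond its originally stated purpose.
    if rec["used_for"] != rec["collected_for"]:
        flags.append("data repurposing")
    # Spillover: the person was never the target of collection at all.
    if not rec["is_collection_target"]:
        flags.append("data spillover")
    return flags

print(breach_categories(record))
# ['data persistence', 'data repurposing', 'data spillover']
```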

Data collection by AI raises privacy concerns such as whether informed consent is freely given, whether subjects can opt out, whether collection is limited, whether the nature of the AI processing is disclosed, and whether data can be deleted on request. Currently, people whose data is collected, whether deliberately or through spillover, have no universal governance body to approach to resolve these concerns; in other words, no universal authority regulates data collection.
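None of these rights demands exotic technology. As a minimal sketch, assuming a hypothetical in-memory store rather than any real regulatory API, the Python below shows how opt-out, limited collection, and deletion on request might be honored at the data layer.

```python
class SubjectDataStore:
    """Minimal, hypothetical store honoring opt-out and deletion requests."""

    def __init__(self):
        self._records = {}      # subject_id -> list of collected data points
        self._opted_out = set()

    def collect(self, subject_id, data_point):
        # Limited collection: refuse data from subjects who have opted out.
        if subject_id in self._opted_out:
            return False
        self._records.setdefault(subject_id, []).append(data_point)
        return True

    def opt_out(self, subject_id):
        # Being able to opt out: block all future collection for this subject.
        self._opted_out.add(subject_id)

    def delete_on_request(self, subject_id):
        # Deletion on request: purge everything held about the subject.
        self._records.pop(subject_id, None)

store = SubjectDataStore()
store.collect("user-123", {"age": 34, "location": "Toronto"})
store.opt_out("user-123")
assert not store.collect("user-123", {"preference": "news"})  # blocked
store.delete_on_request("user-123")
```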

The Cambridge Analytica scandal in the 2016 US presidential election, and its consequences for privacy in artificial intelligence, triggered a significant decline in public confidence, and AI's manipulation of democracy's levers continues to feed challenges to democracy. Another illustration is the U.S. company Clearview AI's alleged violation of Canadian privacy laws: it collected images of Canadian adults and even children, without their consent, for mass surveillance, facial recognition, and commercial sale. Incidents like these erode public confidence in the ability of entire nations to handle privacy and AI responsibly.

According to ISACA, “a primary concern with artificial intelligence is its potential to replicate, reinforce or amplify harmful biases”; such biases can exacerbate other privacy problems, including the spillover effect.

Data privacy in general is distinct from privacy in the context of AI. One difficulty in preserving privacy around artificial intelligence is designing appropriate legislation without stifling the development of AI technology. The scanning processes that let AI tools learn about their surroundings, the nature of the data itself, and the ways that data is used to build AI capabilities are all contexts in which data is at risk.

Protections for personally identifiable information (PII) and protected health information (PHI) are routinely violated. Organizations such as IBM, Panasonic, Alibaba, military researchers, and Chinese surveillance firms used Microsoft's database of 10 million facial photographs; the database has since been removed, as most of the people whose faces appeared in it were never aware their images had been included.

Karl Manheim, Professor of Law at Loyola Law School, and Lyric Kaplan, an associate in the Privacy & Data Security Group at Frankfurt Kurnit Klein & Selz in Los Angeles, have said that “[data] is the lifeblood of AI,” so it is imperative that governments protect their citizens.

As AI continues to proliferate, lawmakers may need to reevaluate current privacy and security laws.