{"id":18443,"date":"2023-06-19T09:26:50","date_gmt":"2023-06-19T08:26:50","guid":{"rendered":"https:\/\/www.rosello-mallol.com\/?p=18443"},"modified":"2023-06-19T09:26:56","modified_gmt":"2023-06-19T08:26:56","slug":"artificial-intelligence-and-privacy","status":"publish","type":"post","link":"https:\/\/www.rosello-mallol.com\/en\/artificial-intelligence-and-privacy\/","title":{"rendered":"Artificial Intelligence and privacy"},"content":{"rendered":"\n
Amid the public debate on the virtues and risks of tools such as the famous ChatGPT, the European Union has debated and approved the first regulation on artificial intelligence and privacy, with the aim of governing this disruptive technology and its mass use.

Among the risks of this new technology are, without doubt, those related to people's privacy, since AI feeds on the processing of large amounts of data (personal data included) obtained from many different sources, such as information published on social media, posts or any other source that may give rise to data processing of which the data subject is sometimes not even aware.

AI systems are constantly fed by the information they consume; the collection and use of that information is therefore an inseparable part of the system itself.

Artificial Intelligence and privacy: what are the risks?

In terms of privacy, the risks, which are identified below, can be grouped into three areas:

1. Mass data collection. AI systems feed on the constant, large-scale collection of data and information (personal data included). The more data is collected and analysed, the greater the risk that a potential attack on an AI system could cause uncontrollable damage to data subjects.

2. Discriminatory decisions. If the system makes decisions (for example, in a recruitment process) based on the analysis of candidates' social media profiles, discriminatory situations may arise, with the added problem that the candidates may not even be aware that their data is being processed.

3. Unauthorised use of images. The refinement of practices such as deep fakes, which we discussed here more than three years ago, can drag people into undesirable situations, such as fake news, with potentially serious consequences for their lives. By definition, this type of technology involves the unauthorised use of people's images.

So much for the main risks, summarised in a few human-written lines. Let us now read what AI itself has to say about the privacy risks: