

My work
Stay up to date and advance your career: my professional courses on AI & Privacy and on Privacy UX will come out soon. Join the waitlist and get a 20% discount when they launch.
Live sessions with global privacy leaders: tomorrow, I will have a live conversation with Dr. Gabriela Zanfir-Fortuna about AI Regulation in the EU & US. Sign up (1,060+ people confirmed) and watch the recording of previous sessions on my YouTube channel.
Job opportunities: every week, I add new links to help you find a job in privacy. Today, there are 56 links for you to check out, bookmark & share.
Spread privacy awareness: there are 65,000+ people following us on various platforms. Share this newsletter with your friends and help us reach 100,000.
Recommended privacy & AI articles
Brain Privacy: "The professor trying to protect our private thoughts from technology," The Guardian
[I am a fan of Prof. Farahany's work and her book "The Battle for Your Brain" - she was my guest last month at the Women Advancing Privacy event, and I also wrote about her work. The article explains some of her main concerns regarding neurotechnology, as well as the right to brain privacy and cognitive liberty, which she has been tirelessly advocating for]

Children's Privacy: "Influencer parents and the kids who had their childhood made into content," Teen Vogue
[I constantly write on social media about children's privacy, sharenting, and how parents and caregivers exploit children's privacy in exchange for the dopamine hit that comes with likes, comments, and shares - you will read more about this topic in today's main essay (below). This is a great article showing real stories and the irreversible damage caused by sharenting in the context of "influencer parents"]

AI: "We need to bring consent to AI," MIT Technology Review
[The article discusses ChatGPT's "incognito mode," data protection regulators' pressure, and the issue of consent during AI training - some important privacy-related concerns around AI, as I also wrote about here. At the end of the article, you can also read about Geoffrey Hinton, "the Godfather of AI," who recently left Google and now shares his thoughts on the dangers of AI]
AI and Privacy Have More Intersections Than You Think
With the extreme hype and popularity of AI-based tools such as ChatGPT, I have written various articles in this newsletter over the last few months dealing with privacy issues in AI.
I spoke about reputational harm, the lack of contextual integrity, the lack of compliance with basic data protection principles, additional challenges when there are vulnerable populations involved, AI governance issues, and my proposed classification of dark patterns in AI. (If you would like to learn more about the topic, join my next course on Privacy & AI).
This week, I would like to comment on another class of AI-based tools that has been in use for more than a decade and does not receive as much attention as it should, despite being deeply harmful to privacy: AI-based recommendation algorithms in the context of social networks.
When applied in the context of e-commerce, the proponents of AI-based recommendation systems argue that these recommendations do what sellers and consultants do in the offline world: they help the user understand what they want, navigate the (online) store, compare features, shortlist and filter the most relevant products based on what others with a similar profile have chosen, and buy the product that fits the user's needs.
Despite additional privacy concerns such as excessive tracking, lack of transparency, lack of consent, dark patterns to collect consent, and so on, I agree that, to a certain extent, there can be a legitimate use for AI-based recommendation systems in the context of e-commerce (see the sketch after this list), especially because:
purpose: the goal of the recommendation system is to suggest products in a specific UI/UX frame (lower manipulative potential);
interactions: there are no synchronous interactions with other users (interactions that could otherwise make the user spend hours on the website);
social validation: social validation mechanisms are few and less invasive (e.g., user reviews), and the user is not incentivized to behave in a certain way to get immediate social validation;
goal achievement: after the user completes a purchase, there is a "goal achieved," and most users will move on to other daily activities (instead of staying in an environment of "endless scroll").
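To make the "similar profile" logic above concrete, here is a minimal sketch of a profile-similarity recommender in Python. All names and data are hypothetical toy examples; real e-commerce systems use large-scale matrix factorization or neural models, but the underlying intuition - suggest what similar users have chosen - is the same:

```python
# Minimal sketch of profile-similarity recommendation (toy data, hypothetical
# names). Real systems use matrix factorization or neural models at scale.
from collections import Counter

# user -> set of product IDs they have purchased
purchases = {
    "alice": {"p1", "p2", "p3"},
    "bob":   {"p2", "p3", "p4"},
    "carol": {"p1", "p5"},
}

def recommend(user: str, k: int = 2) -> list[str]:
    """Suggest products bought by users with overlapping purchase histories."""
    own = purchases[user]
    scores = Counter()
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(own & items)   # similarity = number of shared purchases
        for item in items - own:     # only recommend products the user lacks
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend("alice"))  # ['p4', 'p5']: bob is most similar, then carol
```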
Even in the context of e-commerce, my personal approach is that there should be built-in mechanisms to support users who seem to be vulnerable to specific AI-based recommendation systems. For example, e-commerce platforms should, by default, allow users to set a maximum daily budget or a maximum daily usage time, or to block a certain category of products that might trigger compulsive habits, as sketched below.
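As a thought experiment, here is what such supportive defaults could look like in code - a hedged sketch where the thresholds, category names, and class are hypothetical illustrations, not any real platform's API:

```python
# Hypothetical sketch of user-configurable e-commerce guardrails: a daily
# spend cap, a daily time cap, and user-blocked product categories.
from dataclasses import dataclass, field

@dataclass
class UserGuardrails:
    max_daily_spend: float = 200.0    # currency units per day
    max_daily_minutes: int = 60       # browsing time per day
    blocked_categories: set[str] = field(default_factory=set)

    def allow_purchase(self, spent_today: float, price: float, category: str) -> bool:
        """Return False when a purchase would breach the user's own limits."""
        if category in self.blocked_categories:
            return False
        return spent_today + price <= self.max_daily_spend

    def allow_session(self, minutes_today: int) -> bool:
        """Return False once the user's self-set daily time budget is spent."""
        return minutes_today < self.max_daily_minutes

# Example: a user blocks a category that triggers compulsive buying.
g = UserGuardrails(blocked_categories={"collectibles"})
print(g.allow_purchase(spent_today=150.0, price=40.0, category="books"))         # True
print(g.allow_purchase(spent_today=150.0, price=40.0, category="collectibles"))  # False: blocked
print(g.allow_purchase(spent_today=190.0, price=40.0, category="books"))         # False: over budget
```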
What I constantly try to express, both in this newsletter and in my academic research, is that technology should, first and foremost, support users' capabilities. Whenever a harmful deviation from the norm - harm to oneself or to others - is detected while the user is interacting with the technology, there should be built-in mechanisms to support the users affected. If a technology does not provide these supportive built-in features, it should be liable for the harm.
Now let's move back to AI-based recommendation algorithms in the context of social networks and the privacy harms nobody talks about.
In social networks such as Facebook, Twitter, LinkedIn, Instagram, TikTok, YouTube, and so on, the AI-based recommendation system typically aims at recommending "who to follow" and "what content to see next." These are tools with a much higher potential for harm (see the sketch after this list):
purpose: the goal of the recommendation system is to suggest content that will capture the user's attention and make them spend more time online (as these social media platforms rely on ad-based business models). The type of content and how it will affect a particular user do not matter; what matters is that the user remains on the social network for as much time as possible;
optimization: certain types of content catch the user's attention much more efficiently, such as tragic, surprising, shocking, offensive, or polarizing texts, images, and videos. Due to the strong social validation mechanisms that support recommendation algorithms (see below), users are incentivized to post more of this type of content. The AI system will consistently optimize for content that is more shocking, more offensive, more polarizing, and so on, building an environment that can quickly become harmful for the people interacting with it;
social validation: AI-based recommendation systems are supported by strong and addictive social validation mechanisms. Content is ranked by the amount of "likes, comments, and shares" it receives. A piece of content will be shown to more people the more social validation it receives, and people will be incentivized to post more content that is tragic, surprising, shocking, offensive, polarizing, and so on to capture other users' attention and social validation. Interruptive and invasive notifications keep users hooked and anxious for their next dopamine hit from likes, comments, and shares;
interactions: there can be multiple, continuous, synchronous interactions with other users from anywhere in the world. The user can spend the whole day on the social network;
goal achievement: due to the now-ubiquitous "endless scroll" feature, there is never a sense of "task accomplished." The user has to establish, by themselves, the amount of time they will spend on the social network. Due to the well-known restraint bias, users will frequently spend an unhealthy - and potentially harmful - amount of time mindlessly scrolling;
addiction by design: the recommendation systems in social networks are designed to addict through intermittent reinforcement, similar to the way casinos work. Viral and super-enticing content (in the sense of shocking, surprising, polarizing, etc.) is interleaved with more "uninteresting" or "boring" content (with lower social validation), creating a constant expectation of reward similar to the one that characterizes, for example, gaming addiction.
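To illustrate the two mechanics described in this list - ranking purely by social validation signals, and interleaving "viral" items with low-engagement filler for intermittent reinforcement - here is a deliberately simplified sketch. The weights, posts, and interleaving strategy are hypothetical; real feed rankers are vastly more complex:

```python
# Toy sketch: (1) rank posts by social-validation signals, (2) interleave
# high- and low-engagement items so the next "reward" stays unpredictable.
# Weights and data are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

    @property
    def engagement(self) -> float:
        # Toy weighting: comments and shares signal stronger reactions.
        return self.likes + 2 * self.comments + 3 * self.shares

posts = [
    Post("calm nature photo", likes=10, comments=1, shares=0),
    Post("outrage headline", likes=500, comments=300, shares=200),
    Post("friend's update", likes=30, comments=5, shares=1),
    Post("shocking video", likes=800, comments=400, shares=350),
]

# (1) Pure engagement ranking: polarizing content floats to the top.
ranked = sorted(posts, key=lambda p: p.engagement, reverse=True)

# (2) Intermittent reinforcement: alternate the most and least engaging
# items, creating a variable expectation of reward while scrolling.
half = len(ranked) // 2
feed = [p for pair in zip(ranked[:half], reversed(ranked[half:])) for p in pair]

print([p.title for p in feed])
# ['shocking video', 'calm nature photo', 'outrage headline', "friend's update"]
```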
The features above cause various types of harm. The first category is societal harm, and some examples are:
increase in polarization - social division on everyday topics and filter bubbles, sometimes leading to hate and violence;
increase in hate speech and hate-related crimes - as people have their opinions radicalized online, a percentage of them will attempt to act on those opinions;
increase in disinformation - in order to "go viral" or get the desired social validation, people produce their own "shocking" or "tragic" content, sometimes false, and sometimes exaggerating or extrapolating existing arguments.
A second category of harm we can extract from the features described above is privacy harm, which has been largely ignored by lawmakers and regulators. Examples are:
no control over time online: multiple continuous synchronous interactions with other users, anxiety over the social validation of the content the user has posted, constant notifications, and the expectation of highly optimized "viral" content shown on the "recommended for you" page in an endless scroll. These extreme optimization methods to grab the user's attention and make them lose track of time online go against the basic ideas of autonomy and human dignity, central tenets of privacy. Moreover, from a more strictly data protection-related perspective, they go against transparency, data minimization, and fairness. There is no warning about the powerful AI-based algorithms being used and how they can harm human beings - so there is no transparency. The user is manipulated to spend more time online and to share more of their preferences, desires, and intimate thoughts with the social network - so there is no data minimization. And given the enormous asymmetries between social networks and users, and the lack of usable tools to help users modulate these features in ways that are less harmful - there is no fairness.
no control over the content that is being shown: as I said above, AI-based recommendation algorithms can "learn" what precise type of content will grab the attention of each specific user and keep showing content that will catch that user's attention, based on millions of algorithmic A/B tests occurring in real time around the world. In mainstream social networks, such as the ones I cited above, there is no express choice or a moment when the platform asks the user what type of content they want to see. In addition to the lack of choice, the recommendation system will inevitably steer users in a certain direction - possibly radicalized, polarized, or hateful - according to the algorithm tuning in each social network. The user does not see these mechanisms working, as there are thousands of tech professionals behind them, making sure they are powerful and subtle and keep the user engaged, even if it is a harmful type of engagement. So users are again in a helpless situation, without autonomy, choice, or transparency. Unknowingly, users have their personal data heavily harvested to teach and feed the recommendation system; the recommendation system, in turn, manipulates these same users' opinions, emotions, and feelings according to each social network's algorithmic programming. There are no warnings or transparency about how the recommendation mechanisms work in real time and how they can negatively affect users; there are also no effective control mechanisms to avoid personal harm to users.
The idea of a broader notion of privacy harms, including those affecting autonomy, emotions, relationships, etc., was advocated by Profs. Citron and Solove in their paper "Privacy Harms," which I recommend everyone read.
If we want to support a human dignity-based notion of privacy, as Prof. Luciano Floridi has argued, we need to focus on human capabilities and regulate or prohibit situations in which technology and data-intensive tools make people helpless and submissive to the wishes of the companies behind these tools.
AI tools that collect user data, "learn" the user's behavior, and, from these inferences, become super persuasive and manipulative are a threat to privacy. Autonomy - the ability to choose how to behave online and offline - as well as transparency and fairness, should be much more closely protected.
AI-based recommendation systems in social networks are unregulated and absolutely neglect user privacy. One of the reasons for that - as I have shared on various occasions in this newsletter - is that privacy laws around the world still disregard autonomy harm as a central type of privacy harm in the digital age.
This must change.
Did you enjoy this newsletter? Spread privacy awareness: there are 65,000+ people following us on various platforms. Share this newsletter with friends and help us reach 100,000.
All the best, Luiza Jarovsky