Hi, Luiza Jarovsky here. Welcome to the 62nd edition of The Privacy Whisperer, and thank you to the 75,000+ people who follow and support us on various channels.
There is a lot going on in tech and AI, but not enough is said about how these developments affect privacy and data protection, or about the numerous changes the privacy field is undergoing. In this newsletter, I provide an up-to-date, independent, and informed perspective on relevant topics at this critical intersection. Read more about my work, invite me to speak at your event, or say hi here.
This week, I analyze some of Grammarly's privacy-related practices and what other organizations can learn from them. To read the full newsletter and receive discounts on our training programs, become a paid subscriber (readers have obtained reimbursement from their companies). You can also access paid subscriber benefits by recommending The Privacy Whisperer to your friends.
✅ Privacy & AI resources
[LIVE TALK: MAX SCHREMS] On Thursday, I will speak with Max Schrems about GDPR enforcement challenges. So far, 3,040+ people have confirmed attendance. Join our conversation and bring your questions.
[LISTEN/WATCH] Thousands of people have watched my live talks with global experts, including Prof. Daniel Solove, Dr. Ann Cavoukian, and various others. Check out the recordings of previous sessions on my podcast & YouTube channel.
[MASTERCLASS] We have two upcoming masterclasses to help you and your company navigate emerging challenges: AI & Privacy and Privacy UX. Places are limited, so make sure to secure your spot (paid subscribers of this newsletter get 30% off).
[JOB OPPORTUNITIES] We now have two job boards: one focusing on privacy jobs and another on AI jobs. Check out hundreds of openings around the world & sign up for weekly alerts.
[DAILY CONTENT] For more privacy & AI updates, follow me on Twitter & LinkedIn.
🔥 How AI influences surveillance capitalism
I created the infographic above based on two theoretical models:
- Prof. Shoshana Zuboff's model of surveillance capitalism (detailed in her book "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power");
- Prof. Lawrence Lessig's four modalities of regulation (detailed in his book "Code and Other Laws of Cyberspace").
Prof. Zuboff defined surveillance capitalism as “the unilateral claiming of private human experience as free raw material for translation into behavioral data. These data are then computed and packaged as prediction products and sold into behavioral futures markets — business customers with a commercial interest in knowing what we will do now, soon, and later.”
And according to Prof. Lessig: “four constraints regulate this pathetic dot – the law, social norms, the market, and architecture - and the ‘regulation’ of this dot is the sum of these four constraints. Changes in any one will affect the regulation of the whole. Some constraints will support others; some may undermine others. Thus, ‘changes in technology [may] usher in changes in . . . norms,’ and the other way around. A complete view, therefore, must consider these four modalities together.” (p. 123)
In the infographic above, I mapped the concept of surveillance capitalism according to Lessig's four modalities: market, architecture, social norms, and laws. This is what Lessig said about my model on Twitter.
If you read carefully, you will see that today, all four modalities are heavily influenced by AI systems and AI-related practices. AI has become an essential pillar sustaining surveillance capitalism as we know it.
Perhaps new laws and regulatory frameworks (such as the AI Act in the EU and heavier FTC enforcement in the US) will shift the forces as they appear in the infographic. But not yet.
🔥 The FTC investigates OpenAI
The Washington Post disclosed a letter from the FTC to OpenAI, as part of an ongoing investigation of the company.
The investigation seeks to determine whether OpenAI has engaged in:
a) unfair or deceptive privacy or data security practices; or
b) unfair or deceptive privacy practices relating to risk or harm to consumers, including reputational harm;
The FTC also wants to determine whether obtaining monetary relief would be in the public interest.
As part of this investigation, the FTC is asking OpenAI dozens of questions, including:
- detailed inquiries about model development and training;
- how they obtained the data;
- all sources of data, including third parties that provided datasets;
- how the company assesses and addresses risks;
- privacy and prompt injection risks and mitigations;
- monitoring, collection, use, and retention of personal information.
The FTC is also requesting various documents (see pages 17-20).
This investigation should give us a clearer picture of how the FTC views AI-related practices through the lens of unfair and deceptive practices, and of how privacy and AI issues connect from an FTC regulatory point of view.
According to Marc Rotenberg, founder of the Center for AI and Digital Policy (CAIDP), they filed the initial complaint in March and spent the last months advocating for the investigation. He added that “the United States lags behind other countries in AI policy. In March, CAIDP President Merve Hickok told Congress ‘the US lacks necessary guardrails for AI products.’ The FTC investigation of OpenAI is now the best opportunity to put these safeguards in place.”
Sam Altman, OpenAI's CEO, wrote about the investigation on Twitter: “it is very disappointing to see the FTC's request start with a leak and does not help build trust. that said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC.”
On this topic, I have discussed OpenAI and its privacy practices in previous articles, such as OpenAI's Unacceptable 'Privacy by Pressure' Approach.
The unfolding of this investigation will be extremely interesting to privacy and AI professionals alike, and it will probably have regulatory repercussions in other parts of the world.
Privacy & AI is also the topic of my upcoming masterclass. If you want to dive deeper into risks, challenges, and regulation, register here.
🔥 Movie recommendation: Coded Bias
Coded Bias, a documentary featuring Dr. Joy Buolamwini and other experts, is available on Netflix - highly recommended, watch it here. For an overview of Dr. Buolamwini's research, watch the video about her project Gender Shades, and check out her non-profit Algorithmic Justice League, which leads the movement for equitable and accountable AI.
🔥 Transparency, usability & privacy policies. Case study: Grammarly
This week, I discuss transparency obligations, privacy policies, some of Grammarly's privacy practices, and what companies are doing wrong: