Today: Google, Microsoft, OpenAI & Anthropic have partnered
Case study: TikTok's privacy UX shortfalls
Hi, Luiza Jarovsky here. Welcome to the 63rd edition of The Privacy Whisperer, and thank you to the 75,000+ people who follow and support us on various channels.
The AI fascination continues at full speed, fueling surveillance capitalism's thirst for personal data. In this context, people should be wary of unnecessary hype and get informed about what happens behind the curtain. In this newsletter, I offer an up-to-date, independent, and informed perspective on relevant topics at the intersection of privacy & AI. Read more about my work, invite me to speak at your event, or just say hi here.
Today I cover the partnership between Google, Microsoft, OpenAI, & Anthropic; Meta's fine in Australia; and the Brazilian DPA's investigation into Threads. You'll also find privacy & AI resources, such as in-depth conversations with global leaders, training programs, and specialized job boards.
This week's case study is about TikTok's privacy user experience (UX) shortfalls, with examples and screenshots. These lessons apply across industries, given that so many companies still do not take transparency and fairness seriously. Become a paid subscriber to this newsletter to access these weekly case studies and learn from others' mistakes and best practices. Use this template to request reimbursement from your employer. You can also unlock paid subscriber benefits by recommending The Privacy Whisperer to your friends.
✅ Privacy & AI resources
[LIVE TALKS] Last week, I spoke with Max Schrems about various topics, including Meta's €1.2 billion fine and EU-US data transfers. So far, 6,600+ people have watched this 80-minute conversation on LinkedIn, YouTube, or my podcast. If you are a privacy lawyer, you CAN'T MISS it.
[LISTEN/WATCH] Thousands of people have watched my live talks with global experts, including Prof. Daniel Solove, Dr. Ann Cavoukian, & various others. Check out the recordings on my podcast & YouTube channel.
[MASTERCLASS] We have two upcoming masterclasses to help you and your company navigate emerging challenges: AI & Privacy (this Sunday) and Privacy UX (in September). Places are limited, and our last masterclass is sold out, so make sure to secure your spot (paid subscribers of this newsletter get 30% off).
[JOB OPPORTUNITIES] We now have two job boards: one focusing on privacy jobs and another on AI jobs. Check out hundreds of openings around the world & sign up for weekly alerts.
[DAILY CONTENT] For more privacy & AI updates, follow me on Twitter & LinkedIn.
This week's newsletter is sponsored by Didomi:
Take your game monetization to new heights! Safeguard your AdMob revenue while adhering to privacy laws with Didomi for Gaming. As a Google-certified Consent Management Platform (CMP) that supports the Unity SDK, Didomi empowers game developers to display ads in compliance with the new guidelines for using Google products. Request a demo.
Note: we only have 3 newsletter sponsorship spots left until the end of the year. If you want to feature your brand here, get in touch.
🔥 Google, Microsoft, OpenAI & Anthropic have partnered
Today I learned from Lila Ibrahim, Chief Operating Officer at Google DeepMind, that Google has partnered with Anthropic, Microsoft, and OpenAI to launch the Frontier Model Forum, a “new industry body (that) will focus on the safe and responsible development of frontier AI models.” OpenAI and Google have also posted about it.
In Google's official blog post on the topic, they state that the objectives of the Forum are:
“Advancing AI safety research
Identifying best practices for the responsible development and deployment of frontier models
Collaborating with policymakers, academics, civil society, and companies
Supporting efforts to develop applications that can help meet society’s greatest challenges”
They state that this Forum is an attempt to support multilateral initiatives such as the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council, as well as the Partnership on AI and MLCommons.
The goals and the partnership seem positive, socially beneficial, and a step in the right direction. However, two aspects of this partnership catch my attention:
First, there is an extreme concentration of power and wealth among these companies; each of them has made multibillion-dollar investments in AI and has a direct interest in multiplying those investments. It is not clear how the governance of this Forum will work in practice (including transparency, participation, accountability, and oversight), and it remains an open question how these companies will align their corporate incentive to generate profit with their publicized goal of serving the public interest.
Second, this initiative follows a familiar pattern: companies strengthening their lobbying efforts (as well as their marketing and product strategies) through “self-regulation” initiatives in which they set their own rules. Setting best practices and helping to develop standards in this growing consumer-facing AI industry may be welcome at this point. However, it is not enough: there must also be strong laws, regulations, fines, oversight, and enforcement. It's too early for conclusions, but no self-regulatory initiative should be allowed to undermine legislative and regulatory efforts.
🔥 Meta is fined $20 million in Australia: avoid their mistake
Today, two of Meta's subsidiaries, Facebook Israel and Onavo Inc (developer of the now-discontinued Onavo Protect VPN app), were ordered by the Australian Federal Court to pay $10 million each, following proceedings instituted by the Australian Competition and Consumer Commission (ACCC). According to the ACCC's media release:
“The Court declared that the two companies engaged in conduct liable to mislead the public in promotions for the Onavo Protect app by failing to adequately disclose that users’ data would be used for purposes other than providing Onavo Protect, including Meta’s commercial purposes.”
The language used to advertise the free VPN service Onavo Protect suggested that it would safeguard users' personal information (for example, “use a free, fast and secure VPN to protect personal information”). In reality, that data was used to benefit Meta's commercial activities.
According to ABC News, these disclosures (regarding how consumer data was being used) were present in the Terms of Service and Privacy Policy; however, the way the product was marketed and the information presented directly to customers in Apple's and Google's app stores did not reflect those practices.
It's interesting how this case aligns with others I've discussed in this newsletter, such as the BetterHelp case ($7.8 million fine), in which the FTC used screenshots to show that the way a product was advertised, and the message conveyed directly to the consumer, was inconsistent with the data practices occurring in the background.
What every company should learn here is that the UX and the language used to market a product, communicate with customers, and promote a service all reflect on privacy compliance. A long privacy policy written by a team of lawyers for other lawyers is not enough: privacy culture must go beyond the legal department.
*On this topic, if you want to dive deeper into avoiding dark patterns and improving your company's privacy UX to prevent compliance issues, I am giving a Privacy UX masterclass in September; register here (seats are limited, and the July session is sold out).
🔥 The Brazilian Data Protection Authority investigates Threads
The Brazilian Data Protection Authority (Autoridade Nacional de Proteção de Dados) announced this week that, after a preliminary evaluation of Threads’ data practices in light of the Brazilian data protection law (LGPD), the Board of Directors decided that further investigation is needed.
They had a meeting with Meta's representatives, in which it was clarified that, so far, there is no behavioral advertising within Threads (unlike on Instagram and Facebook). Meta's representatives also clarified that there was no active block by the Irish Data Protection Commission; rather, Meta opted not to launch in the EU due to concerns with Digital Services Act (DSA) & Digital Markets Act (DMA) compliance, as well as recent decisions by the Irish DPC and judgments from the Court of Justice of the European Union.
My opinion here is that the Brazilian Authority is concerned with the internal data sharing between Threads, Facebook, and Instagram, and with Threads' unavailability in the European Union. Brazil's data protection law (LGPD) is largely inspired by the GDPR, and the fact that Threads is not available in the EU has set off alarm bells for the Brazilian authorities.
On the topic of Meta's privacy practices, make sure to listen to my 80-minute conversation with Max Schrems from last week; more than 6,600 people have already watched it on LinkedIn, YouTube, or my podcast.
🔥 Case study: TikTok's privacy UX shortfalls
This week I analyze some of TikTok's user experience (UX) practices that are relevant to privacy and show that many companies still ignore data protection principles such as fairness, choice, and transparency. I start with UX features that affect transparency and user autonomy and then move to issues more typically associated with privacy compliance.