The DSA hurricane is here
Case study: the new privacy paradigm and what companies should know
Hi, Luiza Jarovsky here. Welcome to the 68th edition of The Privacy Whisperer, and thank you to the 78,000+ people who follow and support us across various channels, making us a leading publication in the field.
The internet is flooded with chatbot-generated, low-quality, clickbait content. In this newsletter, however, you'll find an up-to-date, informed, and independent perspective on topics and resources at the intersection of privacy, tech & AI. Read more about my work, invite me to speak at your event, or just say hi here.
Today I cover copyright concerns around AI, an interesting piece on AI governance and geopolitics, and the arrival of the Digital Services Act (DSA). You'll also find links to my conversations with global experts (two open sessions in September), my Masterclasses on AI & Privacy and Privacy UX (four upcoming sessions in September), and job opportunities in privacy & AI worldwide. Some of your friends and colleagues might benefit from these resources: recommend The Privacy Whisperer and join the leaderboard.
This week's case study deals with the new privacy paradigm and what companies should know about it, especially in the context of strengthening compliance efforts. Paid subscribers support my work, receive discounts on my Masterclasses, and access the weekly case studies on tech companies’ best & worst practices in privacy, transparency, and fairness. Many readers get reimbursed by their employers (template): upgrade to paid, and stay ahead of the changing privacy & AI landscape.
✅ Privacy & AI resources
[MASTERCLASSES] The September editions of our Masterclasses are open! The AI & Privacy (Sept. 13 or 28) and the Privacy UX (Sept. 4 or 20) programs were designed to help professionals and companies navigate the evolving privacy landscape. They include a 90-minute live session, additional material, a certificate of completion, and CPE credits pre-approved by the IAPP. Hundreds of tech & privacy leaders have already attended - read some of their testimonials here. Places are limited: secure your spot today, or get in touch to book a group session.
[UPCOMING LIVE SESSIONS] I will have two live sessions in September: the first with Prof. Orly Lobel on Sept. 5 (“AI for Good: Rethinking Risks, Privacy & Regulation,” 1,830+ people registered, join us); the second with Dr. Alex Hanna & Prof. Emily Bender on Sept. 21 (“Understanding Large Language Models and Breaking Down the AI Hype,” 760+ people registered, join us).
[LISTEN/WATCH] Tens of thousands of people have watched my live talks with global experts such as Max Schrems, Dr. Ann Cavoukian, Prof. Daniel Solove, and various others. Check them out on my YouTube channel & podcast, and get a glimpse of the current privacy & AI zeitgeist.
This week's edition of The Privacy Whisperer is sponsored by Didomi:
Unravel the complexities of data privacy with Didomi’s upcoming webinar on September 21st at 5pm CEST. Hosted by Max Schrems and Romain Gauthier, this discussion will explore the latest changes and challenges of the industry, from EU-US data transfers to the intersection of privacy and marketing. Secure your spot!
Note: all our 2023 sponsorship slots are sold out. Today we are opening the 2024 calendar; if you are interested in featuring your product or service, get in touch.
🔥AI copyright wars
As generative AI becomes widespread - and every tech company is adding “generative fill” functionalities to their products - copyright issues receive more and more attention.
For those not familiar, the core questions are: given that AI models are trained using content from “the whole internet,” if I create a new piece of content using AI tools, can I own the copyright? How much human work is necessary to consider a certain creative work as deserving of intellectual property protection? Should artists, creators, or anyone whose content helps train those models receive any sort of compensation?
We are not yet one year into this latest AI hype wave, and lawsuits are already coming in, giving us a glimpse into the current state of copyright concerns.
For example, authors Sarah Silverman, Christopher Golden, and Richard Kadrey have sued OpenAI and Meta for copyright infringement (which occurred through ChatGPT and LLaMA, respectively). In the OpenAI lawsuit, they argue that:
“The unlawful business practices described herein violate the UCL because they are unfair, immoral, unethical, oppressive, unscrupulous or injurious to consumers, because, among other reasons, Defendants used Plaintiffs’ protected works to train ChatGPT for Defendants’ own commercial profit without Plaintiffs’ and the Class’s authorization. Defendants further knowingly designed ChatGPT to output portions or summaries of Plaintiffs’ copyrighted works without attribution, and they unfairly profit from and take credit for developing a commercial product based on unattributed reproductions of those stolen writing and ideas.”
On the topic of AI and copyright, a few days ago, a US judge said that “human authorship is a bedrock requirement of copyright” and denied copyright to an AI-generated image.
Last week, Benedict Evans wrote an interesting article on the topic showing the uniqueness of the current issues involving AI and intellectual property; you can read it here.
In the same way that the internet, music streaming platforms, and other technologies changed copyright forever, generative AI will probably change it again. We can expect more groundbreaking trends, controversies, and legal decisions in the coming months.
🔥 AI governance and geopolitics
There have been continuous discussions about the AI Act and how each country or region plans to regulate and govern AI. However, this new technological challenge does not respect national borders: we should also focus on coordinated global efforts to make sure the whole planet is on board.
I have recently read a very interesting piece covering AI governance and geopolitics issues, written by Ian Bremmer and Mustafa Suleyman: “The AI Power Paradox. Can States Learn to Govern Artificial Intelligence - Before It’s Too Late?" Here is a quote:
“Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risk. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world’s primary geopolitical actors. Whether they admit it or not, AI’s creators are themselves geopolitical actors, and their sovereignty over AI further entrenches the emerging “technopolar” order—one in which technology companies wield the kind of power in their domains once reserved for nation-states.”
They discuss the size and breadth of the global AI governance challenge and propose an approach called “technoprudentialism,” which is aligned with other paradigms in international law that aim at identifying and mitigating risk from a global perspective.
It is unclear to me, however, if the proposed approach works in practice and how we can build a global framework in which:
a) tech companies work together with governments and are held accountable on the global political stage; and
b) countries mitigate risk collectively when there is so much competition, the economic stakes are so high, and AI development is concentrated in the hands of tech companies.
All sorts of alliances are being formed, but it is still unclear how AI governance and regulation will end up being consolidated.
This is a well-written piece, and anyone interested in diving deeper into geopolitical waters should read it.
🔥 The DSA hurricane is here
Last week, the Digital Services Act (DSA) became legally enforceable for very large online platforms (VLOPs) and very large online search engines (VLOSEs).
According to the European Commission's designation decision of April 25, 2023, 19 services fall into these two categories:
Very Large Online Platforms: Alibaba AliExpress, Amazon Store, Apple AppStore, Booking.com, Facebook, Google Maps, Google Play, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, and Zalando.
Very Large Online Search Engines: Bing and Google Search.
These companies will have to comply with a full set of obligations around transparency, protection of minors, content moderation, privacy, and more.
As an example, Article 34 of the DSA establishes that these companies will have to identify, analyze, and assess systemic risks stemming from their services - *including algorithmic systems* - such as
- the dissemination of illegal content;
- negative effects on the exercise of fundamental rights;
- negative effects on civic discourse and electoral processes;
- negative effects in relation to gender-based violence and the protection of public health and minors;
- serious negative consequences to the person’s physical and mental well-being.
Especially in the context of the rapid expansion of AI-based functionalities, the DSA is an important step towards more algorithmic transparency and a meaningful effort to help make the internet a safer and fairer place.
We can expect that these rules, similar to what happened with the GDPR (General Data Protection Regulation), will trigger a global regulatory wave towards a more transparent, safer, and fairer internet.
Speaking of the GDPR, another important topic is how it intersects with the DSA. Dr. Gabriela Zanfir-Fortuna and Vasileios Rovilos from the Future of Privacy Forum have just published an article on the topic; check it out.
GDPR enforcement has been lagging behind, as Max Schrems made clear in our conversation in July. Let's hope that DSA enforcement will follow a different path, as EU Commissioner Thierry Breton's video suggests.
I am optimistic that the DSA is a positive step in making the internet a better place.
In the case study below, I talk more about the new privacy paradigm (which includes the arrival of the DSA and its impact on tech companies); do not miss it.
🔥 Case study: the new privacy paradigm
In this week's case study, I discuss the new privacy paradigm and what companies should know about it, especially in the context of strengthening their compliance efforts:
Keep reading with a 7-day free trial