When human knowledge becomes feedstock
Case study: new tech, new responsibilities
Hi, Luiza Jarovsky here. Welcome to the 66th edition of The Privacy Whisperer, and thank you to the 75,000+ people who follow and support us on various channels.
In this newsletter, I offer an up-to-date, informed & independent perspective on relevant topics at the intersection of privacy, tech & AI. Read more about my work, invite me to speak at your event, or just say hi here.
Today I cover India's new data protection law, Profs. Brett Frischmann & Susan Benesch’s new article on friction-in-design regulation, and AI systems’ relationship with authentic human knowledge. You will also find links to my in-depth conversations with global experts, my Masterclasses on AI & Privacy and on Privacy UX, and job opportunities.
This week's case study deals with tech companies’ ethical responsibility to protect users, especially when new technologies are involved. Become a paid subscriber of this newsletter to access the weekly case studies about tech companies’ best & worst practices in terms of privacy, transparency, trust, and fairness. Many paid subscribers get reimbursed by their employers; you can use this reimbursement request template.
✅ Privacy & AI resources
[UPCOMING LIVE SESSION] 1065+ people have already registered for my live session with Prof. Orly Lobel on September 5th. We will discuss privacy, AI, and innovation, as well as her acclaimed books "The Equality Machine" and "You Don't Own Me." Join us live.
[LISTEN/WATCH] Tens of thousands of people have watched my live talks with global experts, including Max Schrems, Prof. Daniel Solove, Dr. Ann Cavoukian, and various others. Check out the recordings on my YouTube channel and podcast.
[MASTERCLASSES] The September editions of our popular Masterclasses are open! The AI & Privacy and the Privacy UX programs were designed to help people and companies navigate the evolving privacy landscape. They include a 90-minute live session, additional material, a certificate of completion, and CPE credits pre-approved by the IAPP. Hundreds of tech & privacy leaders have already attended - read some of the testimonials here. Places are limited, and the July sessions sold out. Secure your spot today, or get in touch to book a private session at your company.
This week's edition of The Privacy Whisperer is sponsored by The State of US Privacy & AI Regulation:
Want to hear directly from the people shaping the US Privacy & AI Regulation at the federal and state levels? Then join this LinkedIn Live on August 28 at 11am PST (2pm EST), with speakers Rep. Ro Khanna (member of Congress representing Silicon Valley), Alastair Mactaggart (co-author of CCPA & CPRA, and board member of the California Privacy Protection Agency), and moderator Tom Kemp (co-author of the California Delete Act, and author of the new book Containing Big Tech). Free registration here.
Note: we have only 2 newsletter sponsorship spots left until the end of the year. To feature your product or service, get in touch.
🔥 On India's new data protection law
Last week, after six long years of back and forth, India - a country of more than 1.4 billion people - finally got a data protection law, the Digital Personal Data Protection Act, 2023, which you can access here.
On the topic, Dr. Gabriela Zanfir-Fortuna and Raktima Roy published a comprehensive analysis on the Future of Privacy Forum's blog - recommended reading for those interested in a detailed overview.
While reading the Digital Personal Data Protection Act, the use of the term “data fiduciary” (instead of “data controller”) caught my attention:
“2 (i) Data Fiduciary means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data”
This concept is very similar to the GDPR's data controller. However, the use of the term fiduciary is etymologically interesting here. The word fiduciary comes from the Latin fidere / fiducia, meaning to trust. In law, the idea of fiduciary duty can be generally described as:
“the fiduciary accepts legal responsibility for duties of care, loyalty, good faith, confidentiality, and more when serving the best interests of a beneficiary. Strict care must be taken to ensure that no conflict of interest arises to jeopardize those interests.”
The word “controller” in the GDPR can be perceived as neutral (an entity that controls), but the word “fiduciary” carries a much deeper meaning, which has been explored by various legal scholars in the context of privacy, such as Prof. Ari Ezra Waldman in his book Privacy as Trust.
Whether the use of the term data fiduciary in India's Digital Personal Data Protection Act will carry this deeper meaning and positively impact fundamental rights remains to be seen. But words carry meaning, and this could be a great opportunity to foster change.
🔥Have you heard of “friction-in-design”?
As I frequently discuss dark patterns in privacy and privacy UX practices in this newsletter (join my Masterclass in September), I am always interested in new ways that academics, advocates, and professionals propose to improve how we interact with technology and how technology can better serve us as individuals and as a society.
In their new article “Friction-in-Design Regulation as 21st Century Time, Place and Manner Restriction,” Profs. Brett Frischmann & Susan Benesch examine how changing some of our usual practices for interacting with technology could have a broad social impact.
As the title suggests, they discuss adding friction to the design/code/interface interaction so that the choice architecture actually pushes people to be more autonomous and aware, and technology-based systems can be strengthened in favor of humans and human societies.
This is how they briefly describe it:
“Friction-in-design, which induces humans to behave more safely and civilly in many offline contexts, can come in many forms online. It can be as simple as a time delay prior to publishing a social media post, a notice that provides salient information coupled with a nudge toward actual deliberation, or a query that tests comprehension about important consequences that flow from an action—for example, when clicking a virtual button manifests consent to share information with strangers” (p. 379)
This approach goes directly against some of the most popular product, design, and marketing practices of the past few decades, which aim to make every online interaction seamless, quick, and almost invisible. This invisibility clearly favors service providers (such as tech companies). When interactions are so quick as to be almost imperceptible, people do not notice that they are being pushed to do exactly what companies want, so they click, buy, post, share, and so on without thinking about what it means and what the consequences are.
There will be numerous practical challenges to implementing friction-in-design, but in my view, fostering autonomy and awareness is the right path to improving how we interact with technology and how technology can support humans.
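To make the idea concrete for technically-minded readers: the two friction patterns the authors mention - a deliberate time delay and a comprehension check before a consequential action - are straightforward to prototype. The sketch below is purely illustrative; the function and parameter names are hypothetical and do not come from the article.

```python
import time


def frictioned_publish(message: str, ask_user, delay_seconds: float = 5.0) -> bool:
    """Gate a 'publish' action behind two friction-in-design patterns.

    `ask_user` is a callable that poses a question to the user and returns
    their answer as a string (in a real product this would be a UI dialog).
    Returns True if the message would be published, False otherwise.
    """
    # Friction 1: a short pause before the action can complete,
    # giving the user time to reconsider their post.
    time.sleep(delay_seconds)

    # Friction 2: a query that tests comprehension of the consequences
    # of the action (here, that the post is publicly visible).
    answer = ask_user("This post will be visible to strangers. Publish anyway? (yes/no)")
    if answer.strip().lower() != "yes":
        return False  # user declined or did not confirm understanding

    # ... actual publishing logic would go here ...
    return True
```

A real deployment would of course tune the delay, vary the comprehension questions, and log outcomes to measure whether the added friction actually changes behavior.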
🔥 When human knowledge becomes feedstock
If we think about the “age of social media,” 2004 is usually cited as its starting year, when Orkut (long gone) and Facebook were launched. Of course, there were other social networks before that, and I fondly remember filling up my Fotolog page in 2003 while in high school.
Any teenager or young adult who actively experienced this transition from the pre-social media to the social media age can remember how game-changing and fascinating it was, and how it seemed that social life was now hyper-connected, digitalized, and changed forever.
These predictions were right, and the hyper-connectedness and digitalization only progressed further, especially pushed by AI-based algorithms.
In 2007, another landmark event happened: Facebook launched its Facebook Ads platform, and privacy concerns grew further. The famous saying “if you are not paying, you are the product” gained popularity at this time. For many, it captured the bitter feeling that maybe this hyper-connectedness was too much and platforms knew too much about us. There was also a strange feeling that we were voluntarily opting in to widespread surveillance.
At the beginning of 2023, another bitter feeling started to spread.
ChatGPT's popularity started to skyrocket, and millions of people around the globe were using it every minute. At the same time, AI ethics experts, privacy pros, advocates, and many others were publicly discussing the potential risks and harms of AI-based systems. Some of these discussions involve the troubled relationship between human knowledge and AI systems: systemic copyright infringement, the intensive human labor behind AI training and fine-tuning, and the “enshittification” of the internet - when AI-made content pollutes social networks, communities, marketplaces, crowdsourced environments, and so on, making the internet a much less creative, authentic, and valuable place for people to thrive.
If in the social media age we were the product - datafied, commodified, and sold through behavioral ad inventories - in the AI age, we are the feedstock of AI systems.
Humans supply the original creative content that feeds and trains AI models: these models are accurate and precise, and can offer persuasive and “creative” outputs, only because they are trained on original human content. Humans then manually filter and fine-tune the systems. Humans then use these AI-based systems, and developers use this new input data to improve them according to their own criteria.
AI systems are nothing without creative and authentic human knowledge, which can be easily found for free on the internet. But, as in a Shakespearean tragedy, these same AI systems can also potentially damage the internet forever.
I am deeply optimistic about tech and innovation, and I think that humans can work together to find solutions. At the same time, it is important to recognize how AI systems affect humans and human societies. At the moment, we are AI feedstock - and, so far, unprotected.
🔥 Case study: new tech, new responsibilities
This week's case study deals with tech companies’ ethical responsibility to protect users, especially when new technologies are involved.
Keep reading with a 7-day free trial