The Privacy Whisperer

Regulating Artificial Intelligence

Luiza Jarovsky
Dec 7, 2022

[Image: black and white robot toy on a red wooden table. Photo by Andrea De Santis on Unsplash]

If you have been online in the last few days, you have certainly heard of ChatGPT, the artificial intelligence (AI) tool developed by OpenAI. As the company's founder said, it went from zero to one million users in less than a week. AI is fascinating, but it does not come without risks. As such, it has been on the radar of lawmakers and regulators, especially in the European Union (EU). In today's newsletter, I will discuss ChatGPT, AI, and the latest developments around the EU Artificial Intelligence Act.

According to OpenAI, the creator of ChatGPT: "we’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."

As this technology seems to be all the internet could talk about in the last few days, I decided to try it myself. I asked it to "write a rap battle between privacy & security." See the result below and judge for yourself whether the tool deserves its current hype:

[Screenshots of ChatGPT's rap battle between privacy and security]

You can visit ChatGPT's website to learn more about the research behind it, its methods, and its declared limitations.

ChatGPT was trained using Reinforcement Learning from Human Feedback (RLHF). Among the various problematic issues this raises is bias in the training data, which might lead to incorrect, inappropriate, prejudicial, unethical, immoral, or unlawful answers. I will write a newsletter about AI bias in a few weeks, so I hope to explore the topic in more depth soon.
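
To give a bit of intuition about what "learning from human feedback" means, here is a minimal toy sketch of the core idea: a reward model is fitted to pairs of answers that humans compared, and is then used to rank new candidate answers. All of it - the made-up feature vectors, the Bradley-Terry-style objective, the best-of-n selection standing in for the policy-optimization step - is my own simplified illustration, not OpenAI's actual implementation:

```python
# Toy illustration of the core idea behind RLHF (not OpenAI's code):
# 1) fit a reward model to human preference pairs, 2) use it to rank answers.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: each candidate answer is reduced to a small feature vector.
# Real systems use learned embeddings, not four hand-picked numbers.
n_pairs, n_features = 500, 4
preferred = rng.normal(0.5, 1.0, (n_pairs, n_features))   # answers humans liked
rejected = rng.normal(-0.5, 1.0, (n_pairs, n_features))   # answers humans rejected

w = np.zeros(n_features)  # reward model: a linear scorer r(x) = w . x

# Bradley-Terry-style objective: maximize log sigmoid(r(preferred) - r(rejected)).
for _ in range(200):
    margin = (preferred - rejected) @ w
    grad = ((1 / (1 + np.exp(-margin)) - 1)[:, None] * (preferred - rejected)).mean(0)
    w -= 0.1 * grad  # gradient descent on the negative log-likelihood

# Use the learned reward to pick the best of several candidate answers,
# standing in for the policy-optimization step of full RLHF.
candidates = rng.normal(0.0, 1.0, (5, n_features))
best = candidates[np.argmax(candidates @ w)]
print("reward weights:", np.round(w, 2))
print("best candidate features:", np.round(best, 2))
```

Notice that the reward model learns whatever the human comparisons reward: if those comparisons encode bias, the model will faithfully reproduce it.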

I used ChatGPT as an example due to the fascination it has generated worldwide and to illustrate the current capabilities of AI-based tools. When talking about AI deployment, especially in a privacy & data protection discussion, it is central to understand how the technology will be used in the real world, how it will affect real people, and what the risks and consequences are - to individuals, groups, communities, and societies.

And the risk element brings us to the EU Artificial Intelligence Act. According to yesterday's press release from the Council of the European Union: "the proposal follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty. It promotes investment and innovation in AI, enhances governance and effective enforcement of existing law on fundamental rights and safety, and facilitates the development of a single market for AI applications. It goes hand in hand with other initiatives, including the Coordinated Plan on Artificial Intelligence which aims to accelerate investment in AI in Europe."

This proposal matters not only to the EU but to the whole world, as it will likely generate waves of regulatory changes affecting all continents. You can check the full proposal here.

The aspects of the proposal that I would like to comment on here are its two central categories: "prohibited artificial intelligence practices" and the "classification of AI systems as high-risk."

Regarding prohibited practices, according to Article 5, these cover the use of AI technology that deploys subliminal techniques, exploits the vulnerabilities of a specific group of persons, or establishes a social scoring system. There is also a strict regime for the use of "‘real-time’ remote biometric identification systems."

I am curious to see how these subliminal techniques will be properly identified and banned, as the exploitation of cognitive biases through diverse methods is common practice in various fields, and such techniques can be subtle and nevertheless cause psychological harm. The same comment applies to identifying technologies that exploit vulnerabilities and materially distort behavior, as this can be done in a disguised or contextual way that will be tricky to detect and ban.

Regarding the classification of AI systems as high-risk, the proposal sets out cumulative conditions or, alternatively, a list of high-risk systems. It looks like an effective system, as these high-risk systems will have to follow special requirements, including a risk management system. My concern here is that it should be easy and straightforward to amend the list, as new high-risk AI can emerge at any moment (and there are probably hundreds of high-risk AI systems being developed right now).
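
To make this two-tier structure concrete, here is a small hypothetical sketch of how one might encode the Act's categories as a checklist. The prohibited practices paraphrase Article 5 as discussed above; the entries in HIGH_RISK_AREAS and all function and field names are illustrative assumptions on my part, not the legal text (and certainly not legal advice):

```python
# Hypothetical checklist encoding the AI Act's two central categories.
# A simplification for illustration only, not the legal text or legal advice.
from dataclasses import dataclass, field

PROHIBITED_PRACTICES = {        # paraphrasing Article 5
    "subliminal_techniques",
    "exploits_group_vulnerabilities",
    "social_scoring",
}

HIGH_RISK_AREAS = {             # illustrative, invented subset of a high-risk list
    "biometric_identification",
    "employment_screening",
    "credit_scoring",
}

@dataclass
class AISystem:
    name: str
    practices: set = field(default_factory=set)
    application_area: str = ""

def classify(system: AISystem) -> str:
    """Return the tier a system would fall into: prohibited, high-risk, or other."""
    if system.practices & PROHIBITED_PRACTICES:
        return "prohibited"
    if system.application_area in HIGH_RISK_AREAS:
        return "high-risk (risk management system and other requirements apply)"
    return "other (lighter or no obligations)"

print(classify(AISystem("ad ranker", practices={"subliminal_techniques"})))
print(classify(AISystem("CV screener", application_area="employment_screening")))
```

Note how the sketch mirrors my concern: HIGH_RISK_AREAS is a hard-coded list, so a newly emerging high-risk system is invisible to it until the list itself is amended.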

I look forward to seeing the next developments of the AI Act, how it will be applied in practice, and whether it will affect technologies similar to ChatGPT.

-

💡Thoughts? Questions?

What additional problematic issues do you think arise from models such as ChatGPT? In your view, what should be the focus of AI laws & regulations around the world? What other weaknesses do you see in the EU Artificial Intelligence Act? Privacy needs critical thinkers like you: share this article and start a conversation about the topic on social media.

✅ Before you go:

  • Did someone forward this article to you? Subscribe to The Privacy Whisperer and receive this weekly newsletter in your email.

  • For more privacy-related content, check out The Privacy Whisperer Podcast and my Twitter, LinkedIn & YouTube accounts.

  • At Implement Privacy, I offer specialized privacy courses to help you advance your career. I invite you to check them out and get in touch if you have any questions.

See you next week. All the best, Luiza Jarovsky
