New Technology: Who is Responsible?

08 May 2023 · 6 min read

In conversation with Matthias Dobbelaere-Welvaert


The development of AI relies on the collection and analysis of big data, including personal data. With machine learning, this process accelerates, making it increasingly difficult to maintain an overview and handle data correctly. Can AI be reconciled with data protection? And how can companies ethically engage with AI? We ask privacy lawyer Matthias Dobbelaere-Welvaert.

While AI offers significant benefits, we are also seeing more fake news and privacy violations because of this technology. For example, the Republican presidential campaign in the US used AI-generated images. And in Italy, ChatGPT was temporarily banned over concerns about how it processes personal data. Can privacy legislation keep pace with technological evolution?

That's indeed an important question. A ban on AI applications is probably not the solution, because those applications are inevitable. Such bans are a knee-jerk reaction to the lack of European legislation. We have been waiting for years. Both proponents and opponents of the technology want that framework, because it provides clarity and a workable basis.

It's a complex issue. There's also no transparency about the sources of the data processed by AI systems. Privacy legislation is broadly written and can capture the issues surrounding new tech, but enforcing privacy rights is problematic. Because who is liable?

AI operates with increasingly complex algorithms, where a machine makes decisions automatically. Is there an ethical logic behind this? Can AI be ethically regulated?

That is absolutely the intention. In 2021, the European Commission presented the first draft law on AI, but due to heavy lobbying it has not been approved yet. It addresses transparency and the major ethical issues. American companies dominate the debate, but once they process data of European data subjects, they must comply with European legislation. We are still waiting for European regulation on AI, but we are already protected by the GDPR. The European Court of Justice has ruled several times, under pressure from privacy activist Max Schrems, that data transfers from the EU to the US are illegal because the US cannot guarantee that its security services will not screen that data.

Can a company today guarantee that the personal data collected for AI applications is used correctly?

Data streams are currently the biggest problem. Developers don't always have control over them, especially with a self-learning AI system, where you are at the mercy of the algorithm. For example, personal data on your website should not be arbitrarily retrieved by an AI as "available" information in response to a query. But the AI doesn't "realize" that.

As a result, controlling our data becomes almost impossible. I'm not saying that AI means the end of our privacy, but it's a major blow to those who want to protect privacy. Because now you're not only fighting against governments and corporations but also against algorithms, which are even less accountable.

If European developers want to use AI, they need to consider: was the dataset legitimately obtained, with the consent of the individuals involved? Or was the data obtained on the basis of another valid processing ground? If you can't guarantee that, can the data be stripped of personal information, can you anonymize it? And that is a lot of work.
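For developers, that stripping step can be made concrete. Below is a minimal sketch, in Python, of pseudonymizing records before they flow into an AI pipeline; the field names, the email pattern, and the salt handling are illustrative assumptions, not the behaviour of any particular tool.

```python
# Minimal sketch (illustrative only): strip or pseudonymize direct identifiers
# before records enter an AI pipeline. Field names and the salt are assumptions.
import hashlib
import re

PERSONAL_FIELDS = {"name", "email"}                     # direct identifiers
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # redact emails in free text

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace direct identifiers with salted hashes and redact emails in free text."""
    cleaned = {}
    for key, value in record.items():
        if key in PERSONAL_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            cleaned[key] = digest[:12]                  # stable pseudonym, useless without the salt
        elif isinstance(value, str):
            cleaned[key] = EMAIL_PATTERN.sub("[redacted]", value)
        else:
            cleaned[key] = value
    return cleaned

print(pseudonymize(
    {"name": "Jan Peeters", "email": "jan@example.com", "note": "Mail jan@example.com tomorrow"},
    salt="rotate-this-salt",
))
```

One caveat worth stating explicitly: under the GDPR, pseudonymized data is still personal data; only data whose link to an individual has been irreversibly broken counts as anonymous.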

Therefore, today, as a European company, it's difficult to launch an application like ChatGPT if you want to be GDPR-compliant.

Who owns the copyright to what comes out of an AI like ChatGPT?

In principle, only a natural person can hold copyright. According to ChatGPT, that is you yourself when you generate a text. But who is liable when the text contains errors or excerpts from other people's texts, and you can't verify the sources? ChatGPT understands nothing about the importance of sources. You can see it as a child that wants to do its best and constantly wants to please, but still poses a risk. Because even things like copyright are still unclear.

Zooming out for a moment: why is it so important to protect personal data?

If you had said ten years ago, "Soon we will take fingerprints of minors, DNA samples of newborns, and put cameras everywhere on the road that will follow you 24/7 and can identify you..." we might have rebelled. But that's not happening because governments and companies introduce everything very gradually. First, a camera appears here and there, and before you know it, our roads and cities are full of them. Or we first take fingerprints for the European passport, and gradually, that shifts to the national passport and also to minors. That gradual change is not as noticeable. Yet your privacy is fundamental to your security and a healthy democracy.

So it's very important that we can control how our personal data is used by AI. Tech companies often impose a standard contract for this, a so-called Data Processing Agreement, and it's take it or leave it. If those tech giants won't disclose the sources of their data to governments or to an auditor, they certainly won't disclose them to a corporate lawyer. Companies need to be aware of this, because the ultimate responsibility towards the customer lies with you once you start using that application.

What actions can corporate lawyers take to protect their clients' privacy?

With a DPIA (Data Protection Impact Assessment), you assess the place new technology occupies in your company: what data the application will collect, how long it will be stored, to which servers the data will be sent, and whether everything stays within the EU. Based on that, you weigh the costs and benefits, both for the company and for the customer. You also need a legal basis from your customers, for example by explicitly asking for consent. If a complaint leads to a procedure or an inspection by the Data Protection Authority, you must be able to demonstrate these things, including in a register of processing activities.
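To illustrate what demonstrating these things in a register could look like in practice, here is a minimal sketch of a structured register entry in Python. The field names and example values are assumptions for illustration; the GDPR prescribes what must be documented (purpose, data categories, retention, transfers), not the format.

```python
# Minimal sketch (illustrative only) of one entry in a register of processing
# activities for a new AI application. Field names and values are assumptions.
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    application: str              # the tool or AI application being assessed
    purpose: str                  # why the data is collected
    data_categories: list[str]    # what data the application will collect
    legal_basis: str              # e.g. "consent" or another valid ground
    retention_days: int           # how long the data is stored
    server_locations: list[str]   # to which servers the data is sent
    stays_in_eu: bool             # whether everything stays within the EU

register = [
    ProcessingRecord(
        application="support-chat-assistant",
        purpose="summarising customer support conversations",
        data_categories=["name", "email", "chat transcripts"],
        legal_basis="consent",
        retention_days=365,
        server_locations=["eu-west-1"],
        stays_in_eu=True,
    ),
]

# Any entry that leaves the EU should trigger the transfer and cost/benefit review.
for entry in register:
    if not entry.stays_in_eu:
        print(f"Review transfer safeguards for {entry.application}")
```

Kept up to date, such a record is exactly the kind of documentation you would show the Data Protection Authority during an inspection.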

Virtual and augmented reality are also on the rise. In Zuckerberg's Metaverse, privacy becomes an illusion, but companies are increasingly having custom metaverse applications built. Can that be done safely?

If you have your own metaverse built, you obviously have more control; as a company, I wouldn't use Facebook's or Microsoft's platforms for that. With your own build, no data has to leave the EU, you decide which data you work with, and you can set a retention policy that determines how long the data are kept. Then it can be privacy-friendly, in collaboration with the metaverse builder. The builder has a duty of care and should have thought about this; if not, they should say so.

In a Metaverse Privacy Policy, your customers can read what their data are used for and how they can delete their data.
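Both points, a retention policy and the customer's ability to have data deleted, translate directly into code. The sketch below uses a hypothetical record structure and a hypothetical 180-day retention window to show the basic mechanics.

```python
# Minimal sketch (illustrative only): enforce a retention policy and honour
# deletion requests. Record structure and the 180-day window are assumptions.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)

def purge_expired(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Keep only records still inside the agreed retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["created_at"] <= RETENTION]

def delete_for_subject(records: list[dict], subject_id: str) -> list[dict]:
    """Drop all records belonging to a customer who asks for deletion."""
    return [r for r in records if r["subject_id"] != subject_id]

records = [
    {"subject_id": "u1", "created_at": datetime(2022, 1, 10, tzinfo=timezone.utc)},
    {"subject_id": "u2", "created_at": datetime.now(timezone.utc) - timedelta(days=30)},
]
records = purge_expired(records)             # u1's old record falls outside the window
records = delete_for_subject(records, "u2")  # u2 exercised their right to erasure
print(records)                               # -> []
```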

How can you scrutinize new technology in terms of cybersecurity?

I'm not an IT expert, but it's a good idea to work with an ethical hacker, who tests the tools for potential risks. That has been legal since February 2023. It costs some money, but considering what's at stake, the cost is relative.

In addition, it's wise to only use servers within the EU. Many companies find such a migration difficult and therefore stick with Google and Microsoft.

And then, of course, work with a good provider. The builders of the technology should know even more about privacy and cybersecurity than the experts within your own company, and there should be room for discussion.

If there is a hack, you want the company to have done everything possible to guarantee security: a DPIA, an ethical hacker, reliable partner companies, an engaged DPO, privacy by design, etc. Everything is hackable, but in that case, you couldn't have done more.