
When we look at the next five to ten years, when artificial intelligence (AI) and sensors will be integrated into literally everything, we’re looking at an era where you go home to your living room and you turn on your TV, and while the TV is watching you, it’s talking to the fridge, it’s talking to the toaster, it’s talking to your bedroom, it’s talking to your bathroom, it’s talking to your car, it’s talking to your mailbox. So you’re sitting in your most private space and you’re constantly being watched and there’s a whole silent conversation going on around you.

The purpose of that conversation, in an economic context where you are being optimised as a consumer, is to work out what these devices should show you to make you a better customer. So you end up being stripped of your basic agency, because decisions are being made for you without your involvement.

This leads to a really fundamental question about being human in digital spaces. In nature, things affect us, but the weather or animals do not try to harm us. But when you are walking in a city where the city itself is thinking about how to optimise you, you become a participant in a space that is trying to dictate your choices. How can you exercise your free choice when the city around you, the buildings around you, the devices around you – literally everything – is trying to pre-select what you see in order to influence that choice? Once you’re in this space that’s watching you, how do you get out of it? How do you exercise your autonomy as a person? You can’t. We really need to start thinking about what we’re building with AI, what we’re putting ourselves into, because once we’ve created that space, there’s no way out of it.

This is why we need to regulate AI. And we need to do it quickly. If we look at the way we regulate other sectors, the first thing we do is articulate a set of harms that we want to avoid. If it’s an aeroplane, we don’t want it to crash; if it’s a drug, we don’t want it to poison people. There are a number of harms that we already know AI is contributing to, and that we want to avoid in the future: disinformation, deception, racism and discrimination, for example. We don’t want an AI that contributes to decision-making with a racial bias. We don’t want AI that cheats us. So the first step is to decide what we don’t want the AI to do. The second is to create a process for testing so that the things we don’t want to happen don’t happen.


We test for potential flaws in aeroplanes so that they don’t fall out of the sky. Similarly, if we don’t want an algorithm that amplifies false information, we need to find a way to test for that. How is it designed? What kind of inputs does it use to make decisions? How does the algorithm behave in a test example? Going through a process of testing is a really key aspect of regulation. Imagine that, prior to release, the designers identify that an algorithm used by a bank has a racial bias: in the test sample, it gives more mortgages to white people than to black people. We’ve assessed the risk and can now go back and fix the algorithm before it’s released.
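To make the idea of pre-release testing more concrete, here is a minimal, purely illustrative sketch in Python of a disparate-impact check on a hypothetical test sample of mortgage decisions. The sample data, the group labels and the 0.8 threshold (the so-called “four-fifths rule”) are illustrative assumptions, not a description of any real bank’s system or of a specific regulatory test.

```python
# Minimal sketch of a pre-release fairness check on a hypothetical test sample
# of mortgage decisions. All data, group labels and thresholds are illustrative
# assumptions, not a real bank's system or an official regulatory test.

def approval_rate(decisions, group):
    """Share of applicants in `group` whose mortgage was approved."""
    in_group = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in in_group) / len(in_group)

def disparate_impact(decisions, reference_group, comparison_group):
    """Ratio of approval rates; values well below 1.0 flag a possible bias."""
    return (approval_rate(decisions, comparison_group)
            / approval_rate(decisions, reference_group))

if __name__ == "__main__":
    # Hypothetical test sample produced before the algorithm is released.
    test_sample = [
        {"group": "white", "approved": True},
        {"group": "white", "approved": True},
        {"group": "white", "approved": True},
        {"group": "white", "approved": False},
        {"group": "black", "approved": True},
        {"group": "black", "approved": False},
        {"group": "black", "approved": False},
        {"group": "black", "approved": False},
    ]
    ratio = disparate_impact(test_sample, "white", "black")
    print(f"Approval-rate ratio (black/white): {ratio:.2f}")
    # A common rule of thumb (the "four-fifths rule") treats a ratio below 0.8
    # as a signal to go back and fix the algorithm before release.
    if ratio < 0.8:
        print("Potential racial bias detected: fix the algorithm before release.")
```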

Christopher Wylie at the ZEG festival in Tbilisi, June 2023. | Photo: Gian-Paolo Accardo

It is relatively straightforward to regulate AI using the same kind of process that we use in other industries. It is a question of risk mitigation, harm reduction and safety before the AI is released to the public.

Instead, what we’re seeing right now is Big Tech doing these big experiments on society and finding out as they go along: Facebook discovers the problems as it goes, and then apologises for them.

Then it decides whether the affected population is worth fixing it for, or not. If there’s genocide in Myanmar – not a big market – it’s not going to do anything about it, even though it’s a crime against humanity. Facebook admitted that it had not thought about how its recommendation engine would work in a population with a history of religious and ethnic violence. You can require a company to assess this before releasing a new application.


The problem with the idea of “move fast and break things” [a motto popularised by Facebook CEO Mark Zuckerberg] is that it implicitly says: “I’m entitled to hurt you if my app is cool enough; I’m entitled to sacrifice you to make an app.” We do not accept that in any other industry. Imagine if pharmaceutical companies said, “We’ll get faster results in cancer treatment if you don’t regulate us, and we’ll do whatever experiments we want.” That may be true, but we don’t want that kind of experimentation; it would be disastrous.

We really need to keep that in mind as AI becomes more important and integrated into every sector. AI is the future of everything: scientific research, aerospace, transportation, entertainment, maybe even education. There are huge problems that arise when we start to integrate unsafe technologies into other sectors. That’s why we need to regulate AI.

This is where regulatory capture comes in. Regulatory capture is when an industry tries to make the first move so that it can define what should be regulated and what shouldn’t. The problem is that their proposals tend not to address harms today, but rather theoretical harms at some point in the future. When you have a bunch of people saying, “We’re condemning humanity to extinction because we’ve created the Terminator robot, so regulate us”, what they’re really saying is: “Regulate this problem that’s not real at the moment. Don’t regulate this very real problem that would prevent me from releasing products that contain racist information, that might harm vulnerable people, that might encourage people to kill themselves. Don’t regulate real problems that force me to change my product.”


When I look at the global tours of these tech moguls talking about their grand vision of the future, saying that AI is going to dominate humanity and that therefore we need to take regulation seriously, it may sound sincere. Actually, it sounds to me like they’re saying they will help create a lot of rules for things that they don’t have to change.

Look at the proposals they released a few weeks ago [in May 2023]: they want to create a regulatory framework around OpenAI in which governments would give OpenAI and a few other big tech companies a de facto monopoly to develop products, in the name of security. This could even lead to a regulatory framework that prevents other researchers from using AI.

The problem is that we’re not having a balanced conversation about what should be regulated in the first place. Is the problem the Terminator or the fragmentation of society? Is it disinformation? Is it incitement to violence? Is it racism? If you’re speaking from a privileged position – let’s say, a rich white guy somewhere in the San Francisco Bay area – you’re probably not thinking about how your new app is going to hurt a person in Myanmar.

You’re just thinking about whether it hurts you. Racism doesn’t hurt you, political violence doesn’t hurt you. We’re asking the wrong people about harms. Instead, we need to involve more people from around the world, particularly from vulnerable and marginalised communities who will experience the harms of AI first. We need to talk not about this big idea of future harms, but about what harms are affecting them today in their countries, and what will prevent that from happening today.

Many of the problems that society will experience with AI are almost analogous to climate change. You can buy an electric car, you can do some recycling, but at the same time it’s a systemic problem and systemic problems require systemic approaches. So I would suggest being active in demanding that leaders do something about it.

Spending political capital on this issue is the most important thing that people can do – more important than joining Mastodon or deleting your cookies. What makes the construction of AI so unfair is that it’s designed by big American companies that don’t necessarily think about the rest of the world. The voices of small countries matter less to them than the voices of key swing states in the United States.

Look at what’s been done in Europe with GDPR, for example: we are not addressing the underlying design. We’re creating a set of standards for how people consent or don’t consent, and at the end of the day, everyone opts in.

That is not the right framework for a regulatory approach because we’re treating it like a service, not like architecture, and so we’re regulating security through consent. It is as if we regulated the security of buildings by just putting a sign on the door to say that by accepting the Terms of Use you’re consenting to the shoddy design and construction of the building, and if it falls down it doesn’t matter because you’ve consented.

This text is a transcription of Christopher Wylie’s talk at the ZEG storytelling festival in Tbilisi, in June 2023. It was recorded by Gian-Paolo Accardo and edited for clarity by Harry Bowden.
