Harvard professor Lawrence Lessig on why AI and social media are causing a free speech crisis for the internet

Interview conducted by Nilay Patel for The Verge, October 24, 2023 #article

Audio and transcript

Info

The podcast is sponsored by Oracle and Intel, which use it to advertise their own "AI" solutions

The discussion focuses in particular on the difficulty of regulating platforms in the United States, where freedom of speech (and therefore the freedom of recommendation algorithms) is protected by the First Amendment to the Constitution:

The American version of TikTok is completely uncontrolled. (...) The Chinese version of TikTok is blocked during certain hours. (...) The substance of what it provides is aiming to lead people to want to be astronauts. If you ask our kids what their number one dream job is, it’s being a social influencer. If you ask the Chinese kids, it’s being an astronaut. That’s not an accident. That’s not the free market working. That is an intentional design decision that they’re making towards helping their kids and we’re making towards giving up on our kids.

Imagine China invaded Taiwan tomorrow and immediately started amplifying all the American voices on TikTok that were saying that “Taiwan had it coming. China’s the natural leader for this area. It’s ridiculous that we’ve been resisting One China for so many years” and suppressed all the American voices on that platform that were saying, “This is outrageous. This is a free people,” blah, blah, blah. That decision to suppress and amplify is what we call editorial judgment. It is the core of First Amendment protection. When you say, “Could we do anything about it?” the standard First Amendment answer is: “No, there’s nothing you could do about that. That’s absolutely the most protected thing you could have.” It’s amazing to see how we slid to this position of being vulnerable in this way. If, in 1960, the Soviet Union had come to the FCC and said, “We’d like to open up a news station in the United States for Americans to consume in the United States,” the FCC, of course, would’ve blocked it, but we’ve moved to a place where we have no legal capacity to do anything about this kind of influence, with the consequence being that we are so incredibly vulnerable.

Lessig and Patel consider that the parties in power in the US exploited the Facebook Files to put pressure on the platforms, pushing them to steer their algorithms toward content favorable to those in power rather than fall under regulatory legislation

Absolutely, and that’s exactly what happened in 2016 and in 2020. Indeed, the Facebook Files demonstrated that though, for example, conservatives said that they were being discriminated against by these platforms, in fact, the interventions from the political department of Facebook overwhelmingly were interventions to bend the rules in favor of allowing conservatives to do whatever they wanted on the platform. You’re right. That dynamic was certainly what happened in that election.

Patel suggests that the US administration could "wind its way to the Supreme Court and potentially win on some theory of restricting that platform", but that regulating LLMs would run headlong into the First Amendment.

Patel : I’m not asking you to criticize Taylor Swift. I’m just challenging your premise that the single most effective way to build engagement is anger.
Lessig : I would just say that the fact that you can point to Taylor Swift, and I can point to everything in social media, doesn’t mean that I’ve lost the argument. You’re right. Taylor Swift is a good example. Who else?
Patel : Marvel movies. You can pick any number of things that make people generally feel good.
Lessig : Let’s just recognize that’s going to be a hard problem, and people feeling good doesn’t mean that they know the truth. There are a lot of people who believe the Big Lie who feel good about what they’re doing. They’re defending the truth, defending America, and they’re told that in a very positive way.

Patel asks Lessig whether he isn't "arguing the flip" by calling for limits on free speech, when he has always fought the threats to free speech posed by "copyright maximalism". Lessig disagrees:

I don’t think it’s a flip (...) It feels like it’s a different problem. (...) Some people say, “When the algorithm decides to feed me stuff that makes me believe that vaccines don’t work, that’s just like the New York Times editors choosing to put certain things on the op-ed page or not.” It’s not. It’s not at all like that. When the New York Times humans make that judgment, they make a judgment that’s reflecting values and understanding of the world and what they think they’re trying to produce. When the algorithm figures out what to feed you so that you engage more because you believe vaccines don’t work, that is not those humans.

What I’m trying to say is that enterprise of figuring out how to regulate media to solve that problem, I think, is hopeless. We’re not going to regulate media to solve that problem. That problem’s going to be there in some form, some metastasized, dangerous form, regardless of what regulation there is, even if we have creative ways of thinking about how to get around the First Amendment. (...) we should recognize we can’t do democracy in that space. It’s not going to work. Then, we’ve got to think about: where can we rebuild democracy or what could it look like that was protected from that sort of thing?

While regarding attempts to regulate the media as "hopeless", Lessig defends citizens' assemblies as an antidote:

It’s almost like it’s in a shelter from the corrupting influences of AI, whether foreign-dominated influences or even just commercial engagement influences. What I think is that the more people see this, the more they’ll be like, “Well, let’s see more of this. Let’s figure out how we can make this more of our democratic process.”

Lessig : Again, the radiation metaphor, I think, is really powerful here. The skies open up, and we’re not protected from the UVs in the way we were before. We’re going to have to go underground, we’re going to have to have shades on, and we’re going to have to protect as much as we can. It’s not clear we’re going to succeed. (...) When you realize exactly how dangerous these things are and exactly how weak our capacity to do something collectively about it is, there’s a lot of reasons to be terrified about it, and that leads a bunch of people to say, “Well, whatever. I’ll just spend my time watching Netflix.”
Patel : It is interesting how much the AI doomers are also the people building the systems.
Lessig : Yes.
Patel : A fascinating relationship.

Info

Sam Altman was "brutally" fired from his own company three weeks after this episode was published

Lessig : Maybe we should have a compulsory license-like structure or some structure for compensation. I’m all for that, but the idea that we try to regulate AI through copyright law is crazy talk. (...) I think all of this in the American context should be considered fair use.
Patel : Is that a policy decision or a legal conclusion?
Lessig : It’s a legal conclusion. I think that [if you] run the fair use analysis, that’s what you get.

I’m a very strong “Training is free.” The view I have, which is surprising to people, (...) is that I absolutely think that, when you use AI to create work, there ought to be a copyright that comes out of that.

Lessig : (...) I would say you get a copyright with these AI systems if and only if the AI system itself registers the work and includes in the registration provenance so that I know exactly who created it and when, and it’s registered so it’s easy for me to identify it because the biggest hole in copyright law, a so-called property system, is that it’s the most inefficient property system known to man. We have no way to know who owns what. (...) The reality is we could have a much more efficient system for identifying “ownership,” and I think AI could help us to get there. I would say let’s have a system where you get a copyright immediately and —
Patel : I push Generative Fill in Photoshop; it immediately goes to the copyright office, says in some database, “Here’s my picture I made in Photoshop.”
Lessig : And, “Here’s how it was made, when it was made and what fed into it.”

Patel asks whether the copyright question isn't secondary to the economic one: how will artists make a living if generative AI drives down demand?
Lessig answers by comparing the situation to the democratization of photography: "there are still pretty good professional photographers". For him, the most important thing is to radically lower the cost of copyright protection, so that the best works can rise to the surface more easily.

Info

An argument that closely resembles the ideals of the network pioneers as described by Turner (justice as a consequence of freedom)

Lessig thinks that blockchains recognized by law would be the best way ("best of both worlds") to implement this copyright regime.

Info

on this subject, see Molly White:

Some say that blockchains are the real solution to forcing sovereignty, as though it is somehow built in to the architecture, but in reality blockchain-based platforms can lock in their users just as easily as "web2" platforms.

Patel worries about YouTubers abusing copyright to censor, via ContentID, reaction videos they don't like. Lessig puts part of the blame on YouTube, "that created this really perverse incentive for people to basically create complete ripoffs of other people’s work and then just use the same mechanism to go after them"

This, I think, has created an economy of what feels to me like real piracy because these are not creators who are trying to do something creative, remixing in some interesting way; they’re just trying to exploit the system to steal from others. That reaction creates a counter-reaction, which is, I think, the culture you’ve identified.

On the demonetization of videos containing even very short excerpts ("a certain two or three notes") flagged by ContentID:

Lessig : What’s necessary is the courts or maybe Congress to tilt it in a direction to try to achieve a kind of common recognition of fair use across these platforms. Instead, there’s been conventions set by industries that were long before the internet that creates these expectations.
Patel : This is happening right now. YouTube is entering into deals, particularly with Universal Music, where they’re going to invent some private copyright law on the platform to deal with AI. There was an AI artist on YouTube that sounded like Drake. (...) You basically have a private copyright law to one platform for the benefits of the music label.

Info

this reminds me of this Mastodon toot: "all you need to do is be a $170B international corporation and you too can have AI companies respect your copyrights:"

I remember 20 years ago, artists were convinced that this campaign of copyright extremism would produce an internet that would be profitable for artists. Well, ask artists how much they get from Spotify today. It’s actually worked against the interest of artists. I think that if we’d had a healthier debate back then, more open, less moralistic, like there were criminals on one side, pirates on one side and believers in property on the other, we could’ve come up with a better solution. I hope that we have... well, I don’t actually hope — I don’t think there’s any hope for this at all, but what we ought to be having is a more healthy debate about that today.