What Ethical AI Really Means (PhilosophyTube)

written by Abigail Thorn (October 2023)

available on YouTube and Nebula

AGI does not currently exist. What we do have are a lot of very different tools that are all called "AI". We have algorithms that judge people on their parole hearings, job applications and credit scores. We have security systems that can recognize faces and voices, and we have text and image generators like DALL-E and ChatGPT. (...) before we get anywhere close to Skynet, there are ethical dilemmas that arise for regular AI. We know that AI can produce unethical outcomes.

On the body scanners (the "penis detection machine"):

We often assume that machines like that detect information that's already there, but they also do the opposite: project information onto the body, in a process that scholar Simone Browne calls digital epidermalization (...). But people assume that machines are neutral, so they listen to the machine more than they listen to you, and that can land you in trouble. The term "epidermalization" comes from philosopher Frantz Fanon, who spent some time in France in the 40s and the 50s, and found that people called him strange, unwelcome and dangerous, because he was black. They weren't just detecting the color of his skin, they were projecting meaning onto it.
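The projection Thorn describes has a concrete software shape: millimetre-wave scanners compare the scanned body against a reference template the operator selects (typically a binary male/female button) and flag every deviation as an "anomaly". The sketch below is a hypothetical illustration of that logic; the region names, values and threshold are invented, not any vendor's actual code:

```python
# Hypothetical illustration of "digital epidermalization": the scanner does not
# read meaning off the body, it compares the body against a template it already
# holds, and reports every deviation as an "anomaly" for a human to resolve.

from dataclasses import dataclass

# The operator picks one of two templates before the scan; the traveller is never asked.
TEMPLATES = {
    "male": {"chest": 0.2, "groin": 0.5, "hips": 0.3},     # made-up reference values
    "female": {"chest": 0.6, "groin": 0.1, "hips": 0.5},
}

ANOMALY_THRESHOLD = 0.15  # arbitrary illustrative cutoff

@dataclass
class ScanResult:
    region: str
    deviation: float
    flagged: bool

def scan(body_readings: dict[str, float], template_name: str) -> list[ScanResult]:
    """Flag every region where the body differs 'too much' from the chosen template.

    Note what this encodes: the 'anomaly' is defined entirely by the template,
    i.e. by an assumption about what a body of the selected category looks like.
    """
    template = TEMPLATES[template_name]
    results = []
    for region, expected in template.items():
        deviation = abs(body_readings.get(region, 0.0) - expected)
        results.append(ScanResult(region, deviation, deviation > ANOMALY_THRESHOLD))
    return results

# A body that fits neither template cleanly gets flagged under both:
readings = {"chest": 0.4, "groin": 0.3, "hips": 0.4}
for name in TEMPLATES:
    flags = [r.region for r in scan(readings, name) if r.flagged]
    print(name, "->", flags or "clear")
```

Run it and the same body is flagged under both templates: the "anomaly" lives in the template, not in the traveller.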

Info

body scanners are also mentioned in the introduction of Algorithmes - Vers un monde manipulé (Dörholt)

[screenshot: Here's What Ethical AI Really Means, 0:19:38]

I've read a lot of papers about this issue, and in the final paragraph they always pull back and use a vague sentence like that, because academic philosophers don't like telling people what to do. Their go-to solution is usually "uuuuh, let's try doing some more philosophy?" (...) When in doubt, hire someone with a humanities degree! (...) So, we are ready to make ethical AI! (pause) Right?

Thorn then discusses "data flattening":

That process is how AIs like ChatGPT and DALL-E Mini work: they scrape the internet for data; no consent is taken, no care or thought is given. The people who make these models assume that they can do that because it's just data, just abstract and immaterial, and that's not true. Training data is made by and from people.
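As a rough sketch of what that scraping step looks like in practice (a generic illustration, not any lab's actual pipeline; the helper names are placeholders), note that the only gesture toward permission is robots.txt, a 1990s search-engine convention that was never a consent mechanism:

```python
# Generic illustration of training-data scraping; not any real lab's pipeline.
# Note what is absent: nobody who wrote these pages is asked for permission.
# At best the crawler checks robots.txt, a convention built for search indexing.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
import urllib.robotparser

class TextExtractor(HTMLParser):
    """Strip the tags and keep the visible text: the page becomes 'just data'."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def scrape(url: str) -> str | None:
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(url, "/robots.txt"))
    allowed = True
    try:
        rp.read()
        allowed = rp.can_fetch("*", url)
    except OSError:
        pass  # robots.txt unreachable: many crawlers simply proceed
    if not allowed:
        return None
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    extractor = TextExtractor()
    extractor.feed(html)
    return " ".join(extractor.chunks)

# corpus = [scrape(u) for u in seed_urls]  # repeated across billions of pages
```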

On "subemployment" (Phil Jones, "Work Without the Worker"):

Somebody has to go down the lithium mine and dig it up, somebody has to work in the refinery, drive the trucks and the ships; (...) somebody has to produce all that training data. Someone also has to label that data (...). If you added up everyone who does it, they would make up one of the largest workforces on Earth (Jones).
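A hypothetical sketch of the piecework behind that labelling: microtask platforms in the Mechanical Turk mould fan data out to workers as tiny paid judgments, roughly as below. The schema, the question and the piece rate are all invented for illustration, not a real platform's API:

```python
# Hypothetical sketch of the annotation piecework behind "clean" training data.
# The record format, task text and pay rate are illustrative, not a real API.

import json
from dataclasses import dataclass, asdict

PAY_PER_LABEL_USD = 0.01  # made-up figure; piece rates are typically cents per item

@dataclass
class MicroTask:
    item_id: int
    text: str
    question: str
    choices: tuple[str, ...]

def make_tasks(scraped_items: list[str]) -> list[MicroTask]:
    """Fan scraped text out to human workers, one tiny judgment at a time."""
    return [
        MicroTask(
            item_id=i,
            text=item[:280],  # truncate so each judgment stays a seconds-long task
            question="Is this text toxic?",
            choices=("yes", "no", "unsure"),
        )
        for i, item in enumerate(scraped_items)
    ]

tasks = make_tasks(["some scraped paragraph", "another one"])
print(json.dumps([asdict(t) for t in tasks], indent=2))
print(f"payout for this batch: ${len(tasks) * PAY_PER_LABEL_USD:.2f}")
```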

It is a mistake to think that a disembodied mind could destroy the world from inside the cloud. Even if we build badly aligned AGI, its power would come not from technology, but from the human systems in which that technology is embedded. So when people talk about ethical AI, maybe we shouldn't think about Skynet? Maybe we should think instead about working conditions, climate change, and how to make the economy serve humans rather than the other way around (...) If we really want to make ethical AI, then we might like to consider this perspective: there is no ethical computation under capitalism.

The video mentions:


screenshot: © PhilosophyTube