An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.
Look, I hate racism and the inherent bias toward white people, but this is just ignorance of the tech. Willfully or otherwise, it’s still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing. It’s going to make a similar image of another white chick. To get it to reliably recreate your facial features, it needs to be trained on your face. It works for celebrities for this reason, not for a random “Asian MIT student.” This kind of shit sets us back and makes us look reactionary.
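For anyone curious what’s actually happening under the hood: Playground AI is, as far as I know, built on Stable Diffusion-style models, and with the open-source diffusers library you can see the one knob that matters here, the denoising strength. Low strength mostly keeps your photo; higher strength increasingly hands the face over to whatever “professional headshot” looks like in the training data. A rough sketch, with an illustrative model id and parameter values:

```python
# Rough sketch with the open-source diffusers img2img pipeline -- the model id,
# prompt, and strength values are illustrative, not what Playground AI uses.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

source = Image.open("my_photo.jpg").convert("RGB").resize((512, 512))

# strength controls how much of the original image gets repainted:
# near 0.0 the photo is barely touched, near 1.0 the model mostly
# ignores it and draws whatever "professional headshot" means in its
# training data -- which is where the identity drift comes from.
result = pipe(
    prompt="professional LinkedIn headshot, studio lighting",
    image=source,
    strength=0.7,
    guidance_scale=7.5,
).images[0]

result.save("headshot.png")
```

Celebrities work out of the box because the base model has already seen thousands of photos of their faces; for anyone else you’d need something like DreamBooth-style fine-tuning on your own photos to get reliable identity preservation.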
It’s less a reflection on the tech, and more a reflection on the culture that generated the content that trained the tech.
This is a real potential issue, not just “clickbait”.
No company would use ML to classify who’s the most professional looking candidate.
Companies already use resume scanners that have been found to be biased against black-sounding names. They’re designed to create a feedback loop from past successful candidates, and guess what the ML learned real quick?
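To make the feedback-loop mechanics concrete, here’s a toy sketch with entirely synthetic data and made-up feature names: if the “successful candidate” labels the model is trained to imitate were themselves shaped by biased reviewers, the model learns that bias as if it were signal.

```python
# Toy illustration of a biased feedback loop -- the data and feature names are
# made up; the point is that the model learns historical bias as signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)                   # actual job-relevant signal
black_sounding_name = rng.integers(0, 2, n)  # says nothing about ability

# Historical "successful candidate" labels: driven by skill, but past
# reviewers also penalized black-sounding names.
past_hired = (skill - 0.8 * black_sounding_name + rng.normal(size=n) > 0).astype(int)

X = np.column_stack([skill, black_sounding_name])
model = LogisticRegression().fit(X, past_hired)

print(model.coef_)
# The second coefficient comes out strongly negative: the scanner has
# learned to downgrade the name feature, even though it is irrelevant
# to whether someone can do the job.
```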
The AI might associate lighter skin with white facial structure. That kind of correlation would need to be specifically accounted for, I’d think, because even with some examples of lighter-skinned Asians, the majority of light-skinned faces in the training data will have white facial structure.
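If anyone wonders what “specifically accounted for” could look like in practice: one common approach is to reweight the training examples so light skin isn’t overwhelmingly paired with one facial structure. A minimal sketch, with made-up group labels and counts:

```python
# Sketch of inverse-frequency reweighting to weaken a spurious correlation
# between two attributes in a training set. Group labels and counts are made up.
from collections import Counter

# (skin_tone, face_structure) pairs as they might appear in scraped data
samples = [("light", "A")] * 900 + [("light", "B")] * 100 + [("dark", "B")] * 400

counts = Counter(samples)
total = len(samples)

# Weight each example inversely to how common its (tone, structure) pair is,
# so the model no longer sees "light skin implies structure A" 9 times out of 10.
# These weights would be passed as per-sample weights during training.
weights = [total / (len(counts) * counts[s]) for s in samples]

print(counts)
print({pair: round(total / (len(counts) * counts[pair]), 2) for pair in counts})
```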
Plus, it’s becoming more and more apparent that AIs just aren’t that good at what they do in general at this point. Yes, they can produce some pretty interesting things, but those seem to be the exception rather than the norm. In hindsight, a lot of why I’ve been impressed with the results I’ve seen so far is simply that an algorithm produced them at all, even though the algorithm itself isn’t directly responsible for the output but sits a few steps back from it.
I bet for the instances where it does produce good results, it’s still actually doing something simpler than what it looks like it’s doing.
This is like a demonstration of a lack of self-awareness.
Almost like we’re looking for things to get mad about.
Also, what are these 50 people downvoting you for? Too much nuance, I suppose.