AI algorithms can assess your attractiveness


A comparison of two Beyoncé Knowles photos from Lauren Rhue's research using Face++. Its AI predicted the image on the left would score 74.776% for men and 77.914% for women, while the image on the right scored 87.468% for men and 91.14% for women.
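For context on how such numbers are produced: Face++ offers a face-detection API that can return a "beauty" attribute with separate male and female scores, which is the kind of output the caption above reports. The sketch below is a minimal, hedged example of querying it, based on Face++'s public v3 Detect endpoint; the credentials are placeholders, and the exact endpoint and response fields should be checked against the current documentation before relying on it.

```python
# Minimal sketch of querying Face++'s Detect API for a beauty score.
# Assumes valid Face++ credentials; endpoint and the "beauty"
# attribute (male_score / female_score) follow Face++'s public
# v3 API documentation at the time of writing.
import requests

API_KEY = "your-api-key"        # placeholder credentials
API_SECRET = "your-api-secret"

def beauty_scores(image_url: str) -> dict:
    """Return Face++'s beauty scores (0-100) for the first detected face."""
    resp = requests.post(
        "https://api-us.faceplusplus.com/facepp/v3/detect",
        data={
            "api_key": API_KEY,
            "api_secret": API_SECRET,
            "image_url": image_url,
            "return_attributes": "beauty",
        },
        timeout=10,
    )
    resp.raise_for_status()
    faces = resp.json().get("faces", [])
    if not faces:
        return {}
    # Face++ reports two numbers: how attractive male vs. female
    # viewers are predicted to find the face.
    return faces[0]["attributes"]["beauty"]

# Example (hypothetical URL):
# beauty_scores("https://example.com/photo.jpg")
# -> {"male_score": 74.776, "female_score": 77.914}
```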

Beauty scores, she says, are part of a disturbing dynamic between an already unhealthy beauty culture and the recommendation algorithms we encounter online every day. When scores are used to decide which posts surface on social media platforms, for example, they reinforce the definition of what is deemed attractive and divert attention from those who do not fit the machine's strict ideal. "We're restricting the types of images available to everyone," says Rhue.

It's a vicious cycle: with more eyes on content featuring attractive people, those images drive higher engagement, so they're shown to even more people. Ultimately, while a high beauty score isn't a direct reason a post is shown to you, it is an indirect one.
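To see why the cycle compounds, consider a toy simulation. Everything below is hypothetical: the weighting rule and click probabilities are invented for illustration and do not reflect any real platform's ranking code. It only captures the mechanism the paragraph describes: a beauty score nudges early exposure, exposure generates engagement, and engagement earns still more exposure.

```python
# Hypothetical feedback-loop simulation (illustrative numbers only).
import random

random.seed(0)

# Two posts: the ranker sees a "beauty" score; clicks accrue once shown.
# Start counters at 1 to avoid division by zero.
posts = [{"beauty": b, "impressions": 1, "clicks": 1} for b in (0.9, 0.5)]

def ranker_weight(post):
    # Exposure is proportional to observed engagement rate,
    # nudged upward by the beauty score (the "indirect reason").
    ctr = post["clicks"] / post["impressions"]
    return ctr * (1 + post["beauty"])

for _ in range(10_000):
    # Show whichever post the ranker currently favors (probabilistically).
    weights = [ranker_weight(p) for p in posts]
    shown = random.choices(posts, weights=weights)[0]
    shown["impressions"] += 1
    # Users click at a rate tied to the same appeal signal.
    if random.random() < 0.1 + 0.2 * shown["beauty"]:
        shown["clicks"] += 1

for p in posts:
    print(f"beauty={p['beauty']}: {p['impressions']} impressions, "
          f"{p['clicks'] / p['impressions']:.1%} click rate")
# The higher-scored post ends up with the bulk of the impressions:
# early exposure begets engagement, which begets more exposure.
```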

In one study published in 2019, she looked at how two algorithms, one for beauty scores and one for age predictions, affected people's opinions. Participants were shown pictures of people and asked to rate the subjects' beauty and age. Some of the participants saw the AI-generated score before giving their answer, while others were not shown the AI score at all. She found that participants who had no knowledge of the AI rating did not exhibit additional bias; however, knowing how the AI had ranked people's attractiveness led participants to give scores closer to the result generated by the algorithm. Rhue calls this the "anchoring effect".
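One way to picture the anchoring effect is to compare how far each group's ratings land from the algorithm's output. The numbers below are invented purely for illustration (they are not Rhue's data); the signature to look for is the anchored group sitting measurably closer to the AI's score.

```python
# Hypothetical illustration of the "anchoring effect" Rhue describes.
# All ratings below are invented, not data from her 2019 study.
ai_score = 80.0

blind_ratings    = [55, 70, 92, 60, 75]  # rated without seeing the AI score
anchored_ratings = [76, 82, 85, 78, 81]  # rated after seeing the AI score

def mean_gap(ratings, anchor):
    """Average absolute distance between human ratings and the AI's score."""
    return sum(abs(r - anchor) for r in ratings) / len(ratings)

print(f"blind group gap:    {mean_gap(blind_ratings, ai_score):.1f}")
print(f"anchored group gap: {mean_gap(anchored_ratings, ai_score):.1f}")
# A smaller gap in the anchored group is the anchoring signature:
# knowing the algorithm's output pulls human judgments toward it.
```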

“Recommendation algorithms actually change our preferences,” she says. “And the technological challenge, of course, is not to narrow them too much. When it comes to beauty, we’re seeing a much greater narrowing than I would have imagined.”

“I saw no reason not to assess your flaws, because there are ways to correct them.”

Shafee Hassan, Qoves Studio

At Qoves, Hassan says he has tried to tackle the problem of race head-on. When producing a detailed facial analysis report, the kind clients pay for, his studio attempts to use data to classify the face by ethnicity, so that no one is simply assessed against a European ideal. “You can escape this Eurocentric bias just by becoming the best version of yourself, the best version of your ethnicity, the best version of your race,” he says.

But Rhue says she is concerned about this type of ethnic categorization becoming embedded ever more deeply in our technological infrastructure. “The problem is, people are doing it no matter what, and there’s no kind of regulation or oversight,” she says. “If there is any kind of conflict, people will try to figure out who belongs in which category.”
