You know the saying: “Looks aren’t everything.” But if that were true, dating apps might look completely different. Matchmaking apps like Bumble and Los Angeles-based Tinder would not lead each potential match’s profile with a large photo. In a world where attractiveness didn’t matter so much, their UI and UX might lead each user’s profile with an education history or a custom message instead.
A new artificial intelligence (AI) algorithm is testing just how important attractiveness is by attempting to figure out who you’ll find attractive and why. A team of researchers from the University of Helsinki and the University of Copenhagen generated images of fake faces and asked people to rate them for attractiveness. The team then used that feedback to further tune the algorithm, making it even better at generating attractive fake faces.
The application uses a machine learning architecture called a generative adversarial network (GAN), which creates fake faces by pitting two “adversarial” algorithms against one another. Adversarial means the two algorithms have opposing goals: one, called the generator, creates images based on what it learned during its training phase, while the other, called the discriminator, tries to tell which images are fake and which are real. The discriminator is tested with a mix of photos of real people and generated fake faces.
The two algorithms train each other in a loop, improving greatly with each cycle: the generator gets better at creating realistic images, and the discriminator gets better at spotting fakes. This adversarial relationship may at first sound unproductive, but the two algorithms push each other to improve while continually testing one another.
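The study’s actual model works on face images, but the adversarial loop itself can be sketched on toy data. Below is a minimal, hypothetical GAN in Python with NumPy: the “real” data are just numbers drawn from a distribution centered on 4.0, the generator is a simple affine function, and the discriminator is a logistic classifier. None of this mirrors the researchers’ architecture; it only illustrates how the two components train each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: samples around 4.0.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator: g(z) = w*z + b, starts far from the real distribution.
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a*x + c), outputs P(x is real).
a, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x = real_batch(batch)
    z = rng.normal(size=batch)
    g = w * z + b                       # fake samples
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * g + c)
    grad_a = np.mean(-(1 - d_real) * x + d_fake * g)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # --- Generator update: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(size=batch)
    g = w * z + b
    d_fake = sigmoid(a * g + c)
    dg = -(1 - d_fake) * a              # gradient of -log D(g) w.r.t. g
    w -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

fake_mean = float(np.mean(w * rng.normal(size=1000) + b))
print(f"mean of generated samples: {fake_mean:.2f} (real mean is 4.0)")
```

Note that the generator never sees the real data directly; its output drifts toward the real distribution purely from the discriminator’s feedback, which is the core idea behind the realistic faces described above.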
The GAN algorithm was trained on 200,000 images of celebrities, who usually have attractive faces—at least, according to Hollywood standards.
After the training phase, the generative algorithm produced hundreds of unique faces that it “believed” to be as attractive as the celebrity faces it had learned from. These fake faces were shown to real people wearing brain-computer interface equipment hooked up to an electroencephalography (EEG) reader. Using this data, the researchers could measure each person’s brain response to each photo with millisecond-level timing (EEG records the aggregate electrical activity of many neurons rather than individual cells firing).
When a participant saw an image of an attractive face, there was a marked increase in brain activity. This could be partly because the participants were told to focus harder on faces they thought were attractive. The participants weren’t asked to articulate what specifically they found attractive about any of the images. Instead, the AI stored the EEG data points and looked for commonalities across the photos that triggered this response.
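The article doesn’t say how the EEG signal was turned into an “attractive / not attractive” label, but single-trial EEG classification is typically done by extracting a feature from each response window and fitting a simple classifier. Here is a hypothetical sketch on simulated data: the feature stands in for something like the mean amplitude of an evoked response, and attractive faces are assumed to shift it upward. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 400
# 1 = participant found the face attractive (ground truth, simulation only).
labels = rng.integers(0, 2, size=n_trials)

# Simulated per-trial EEG feature (e.g., mean amplitude in a post-stimulus
# window): attractive faces are assumed to shift it by ~1.5 units.
features = 1.5 * labels + rng.normal(0.0, 1.0, size=n_trials)

# Simple train/test split.
train_f, test_f = features[:200], features[200:]
train_y, test_y = labels[:200], labels[200:]

# Minimal classifier: threshold halfway between the two class means.
threshold = 0.5 * (train_f[train_y == 1].mean() + train_f[train_y == 0].mean())
predictions = (test_f > threshold).astype(int)
accuracy = float((predictions == test_y).mean())
print(f"single-trial classification accuracy: {accuracy:.2f}")
```

Real EEG pipelines use many channels and stronger classifiers, but the principle is the same: a measurable, systematic difference in the brain response is enough to label each photo without the participant saying a word.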
Those commonalities could be big eyes, high cheekbones, a medium-sized nose, wide-set eyes, small ears, or any other facial feature. The AI found that most participants responded to the same aspects of a face; in other words, people seem to favor many of the same facial features when judging attractiveness.
Using the common features found by the algorithm, the team distilled this data back into a format that could be fed to the GAN. The generator then used this new information to produce a second batch of attractive faces. Now the faces had more chiseled jawlines, darker and more mysterious eyes, curlier hair, and other features conventionally considered attractive.
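One way to “feed preferences back” into a GAN, roughly in the spirit of what’s described above, is to work in the generator’s latent space: average the latent codes of the images a participant responded to, then sample the next batch of codes near that average. The sketch below simulates this with a hidden preference direction standing in for the EEG signal; the names and numbers are illustrative, not the study’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 8

# Latent codes of the faces shown in the first round.
codes = rng.normal(size=(300, latent_dim))

# Hidden preference direction: used only to simulate EEG-derived labels.
hidden_pref = np.zeros(latent_dim)
hidden_pref[0] = 1.0

# A face counts as "liked" if its code projects strongly onto the preference.
liked = codes @ hidden_pref > 0.5

# Estimate the preferred region as the mean latent code of the liked faces,
# then sample the second batch of codes around that center.
preferred_center = codes[liked].mean(axis=0)
new_codes = preferred_center + 0.3 * rng.normal(size=(300, latent_dim))

old_score = float((codes @ hidden_pref).mean())
new_score = float((new_codes @ hidden_pref).mean())
print(f"mean preference score: {old_score:.2f} -> {new_score:.2f}")
```

Feeding `new_codes` through a trained generator would yield images biased toward the estimated preference, which is why the second batch of faces described above skews toward features the participants responded to.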
Real Looks vs. Fake Faces
When this second round of generated photos was shown to participants, they were asked to rate each face as attractive or unattractive. Participants rated 87% of the newly generated faces as attractive. The remaining 13% either looked too perfect or had something subtly off about their facial features. Even though the participants were told to focus on attractiveness, they couldn’t look past faces that seemed fake.
AI developers and AI ethics experts worry that technology this effective could be used to generate realistic-looking faces for deepfake videos or fake images. The faces don’t need to be real, and they don’t even need to be attractive, to cause problems for people or even nations. Nor do the consequences need to be so far-reaching: malicious social media accounts could use AI-generated fake faces to blend in with the crowd, looking normal and real at a quick glance. After all, how much detail can you see in a small circular avatar?
The Future of Dating?
The future of this type of technology extends far beyond dating and social media. It could be used for political gain or even to start a war. The research team is interested in advancing the technology and has some ideas for how to use its application in productive, non-malicious ways. Tuukka Ruotsalo, an associate professor at the University of Helsinki, says the team hopes to dig deeper into attractiveness, as well as explore stereotypes, biases, preferences, and individual differences.
Have you come across an AI-generated face that was attractive but looked off? How did it make you feel? Let us know in the comments below!
The post How AI and Brain-Computer Interfaces Know What You’ll Find Attractive first appeared on Dogtown Media.