Does a person generally look the same if you see them in different lighting? A human would answer yes, but picking out the right details to focus on can be harder for an algorithm.
A new feature in Google’s messaging app Allo demonstrates what it looks like to tackle this challenge. The feature creates Bitmoji-like selfie stickers based on users’ real photos.
It works like this: You take a selfie, and the app matches it against a set of artist illustrations, each designed to represent a particular facial feature. Human raters helped gauge the accuracy of the matches between photos and the illustrated features the software chose.
One complication is accounting for the different environments in which people take selfies, as Jennifer Daniel, a creative director who worked on Allo, wrote in a post about the new feature:
“The traditional computer vision approach to mapping selfies to art would be to analyze the pixels of an image and algorithmically determine attribute values by looking at pixel values to measure color, shape, or texture. However, people today take selfies in all types of lighting conditions and poses. And while people can easily pick out and recognize qualitative features, like eye color, regardless of the lighting condition, this is a very complex task for computers. When people look at eye color, they don’t just interpret the pixel values of blue or green, but take into account the surrounding visual context.”
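The limitation Daniel describes is easy to demonstrate. The toy sketch below (purely illustrative, not Google's code; the lighting model and all pixel values are invented for the example) reads eye color the naive way, by averaging raw pixel values, and shows how the same iris yields a different reading once a warm indoor tint is applied:

```python
# Illustrative toy: why raw pixel values are a poor proxy for perceived
# eye color under varying lighting. Not Google's implementation.

def average_rgb(pixels):
    """Mean RGB over a list of (r, g, b) tuples - the naive 'pixel value' reading."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def apply_lighting(pixels, tint):
    """Simulate a lighting condition as a per-channel multiplier (a crude, hypothetical model)."""
    return [tuple(min(255.0, c * t) for c, t in zip(p, tint)) for p in pixels]

# The same blue iris "photographed" under two lighting conditions:
iris = [(60, 110, 200), (55, 105, 195), (65, 115, 205)]
daylight = apply_lighting(iris, (1.0, 1.0, 1.0))      # neutral light
warm_indoor = apply_lighting(iris, (1.4, 1.0, 0.6))   # warm, tungsten-like tint

print(average_rgb(daylight))     # blue channel dominates clearly
print(average_rgb(warm_indoor))  # red rises, blue falls: the naive reading shifts
```

A human looking at both photos would still call the eyes blue, because we discount the lighting using surrounding context (skin, whites of the eyes, background). The naive average has no such context, which is the gap Daniel's team had to close.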
Technology that overcomes this hurdle is not new per se, according to Google, but the selfie feature demonstrates a fresh application of it.
Engineers who worked on the feature used existing general-purpose computer vision technology from Google. Another application of the technology might be a security camera that accurately recognizes faces when it’s dark out.
This article originally appeared on Recode.net.