Google uses a new way to measure skin tones to make search results more inclusive

Google has partnered with a Harvard professor to promote a new scale for measuring skin color in hopes of solving problems of bias and diversity in the company’s products.

The tech giant is teaming up with Ellis Monk, an assistant professor of sociology at Harvard and the creator of the Monk Skin Tone Scale, or MST. The MST scale is designed to replace outdated skin tone scales that skew toward lighter skin. When tech companies use these older scales to categorize skin color, the resulting products can perform worse for people with darker skin, Monk says.

“Unless we have enough differences in skin tone, we can’t really integrate that into products to make them more inclusive,” Monk tells The Verge. “The Monk Skin Tone Scale is a 10-point skin tone scale that has been intentionally designed to be much more representative, covering a wider range of different skin tones, especially for people [with] dark skin tones.”

There are countless examples of tech products, particularly those that use AI, that perform worse on darker skin tones. These include apps designed to detect skin cancer, facial recognition software, and even the machine vision systems used by self-driving cars.

While there are many ways in which these kinds of biases are programmed into these systems, a common factor is the use of outdated skin tone scales when collecting training data. The most popular skin tone scale is the Fitzpatrick scale, which is widely used in both academia and AI. This scale was originally designed in the 1970s to classify how pale-skinned people burn or tan in the sun and was later expanded to include darker skin.

This has led to criticism that the Fitzpatrick scale fails to capture the full range of skin tones, and that machine vision software trained on Fitzpatrick data may therefore also be biased toward lighter skin types.

The 10-point Monk Skin Tone Scale.
Image: Ellis Monk / Google

The Fitzpatrick scale consists of six categories, but the MST scale extends this to 10 different skin tones. Monk says this number was chosen based on his own research as a balance between diversity and ease of use. Some skin tone scales offer more than a hundred different categories, he says, but too much choice can lead to inconsistent results.

“Usually, if you have more than 10 or 12 points on these types of scales, [and] ask the same person to repeatedly pick the same tones, the more you increase that scale, the fewer people can do that,” says Monk. “Cognitively speaking, it just becomes very difficult to differentiate accurately and reliably.” A choice of 10 skin tones is much more manageable, he says.
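To make the bucketing idea concrete, here is a minimal sketch of how a product might snap a sampled skin color to the nearest point on a 10-point scale. Everything here is illustrative: the hex swatches are placeholder values rather than Google’s published MST swatches, and a production system would compare colors in a perceptual space such as CIELAB rather than raw RGB.

```python
# Minimal sketch: bucket a sampled skin color into the nearest point
# on a 10-point tone scale. The swatch values are illustrative
# placeholders, NOT the official MST swatches published by Google.

PLACEHOLDER_SCALE = [
    "#f6ede4", "#f3e7db", "#f7ead0", "#eadaba", "#d7bd96",
    "#a07e56", "#825c43", "#604134", "#3a312a", "#292420",
]

def hex_to_rgb(hex_color: str) -> tuple[int, int, int]:
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest_tone(sample_hex: str, scale: list[str] = PLACEHOLDER_SCALE) -> int:
    """Return the 1-based index of the scale swatch closest to the sample,
    using squared Euclidean distance in RGB space (a crude proxy; a real
    system would compare in a perceptual space such as CIELAB)."""
    r, g, b = hex_to_rgb(sample_hex)

    def dist(swatch: str) -> int:
        sr, sg, sb = hex_to_rgb(swatch)
        return (r - sr) ** 2 + (g - sg) ** 2 + (b - sb) ** 2

    return min(range(len(scale)), key=lambda i: dist(scale[i])) + 1

print(nearest_tone("#8d5524"))  # index of the closest placeholder swatch
```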

However, creating a new skin tone scale is only a first step; the real challenge is integrating this work into actual applications. To promote the MST scale, Google has created a new website, skintone.google, dedicated to explaining the research and best practices for its use in AI. The company also says it is in the process of applying the MST scale to some of its own products. These include its “Real Tone” photo filters, which are designed to work better with darker skin tones, and its image search results.

Google lets users refine certain search results with skin tones selected from the MST scale.
Image: Google

Google says it is introducing a new image search feature that will allow users to narrow down searches based on skin tones classified by the MST scale. For example, if you search for “eye makeup” or “bridal makeup looks,” you can then filter the results by skin tone. In the future, the company also plans to use the MST scale to monitor the diversity of the results, so that when you search for images of “cute babies” or “doctors,” you don’t just see white faces.
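Google hasn’t described the plumbing behind this filter, but as a rough sketch, assuming each indexed image carries a precomputed tone label (the `mst_tone` field below is a hypothetical annotation, not Google’s actual index schema), filtering comes down to a simple membership test:

```python
# Sketch: narrow a result set to images whose precomputed skin-tone
# label falls in a user-selected set of values on the 10-point scale.
# `mst_tone` is a hypothetical per-image annotation, not Google's schema.

from dataclasses import dataclass

@dataclass
class ImageResult:
    url: str
    mst_tone: int  # 1 (lightest) .. 10 (darkest)

def filter_by_tone(results: list[ImageResult], tones: set[int]) -> list[ImageResult]:
    """Keep only results whose annotated tone is in the selected set."""
    return [r for r in results if r.mst_tone in tones]

results = [
    ImageResult("https://example.com/a.jpg", 3),
    ImageResult("https://example.com/b.jpg", 8),
    ImageResult("https://example.com/c.jpg", 9),
]
print(filter_by_tone(results, {8, 9, 10}))  # keeps b.jpg and c.jpg
```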

“One of the things we do is take a set of [image] results, understand when those results are particularly homogeneous across a few shades, and improve the diversity of the results,” Tulsee Doshi, Google’s head of product for responsible AI, tells The Verge. However, Doshi stressed that these updates are at a “very early” stage of development and have not yet been rolled out across the company’s services.
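Doshi doesn’t say how that homogeneity is measured. One plausible approach, sketched below under the assumption that each result carries a tone label, is to score how the results spread across the scale’s buckets, for instance with normalized Shannon entropy, and flag sets that fall below a threshold:

```python
# Sketch: flag a result set whose skin-tone labels cluster in only a
# few buckets of a 10-point scale, using normalized Shannon entropy.
# The threshold and the per-result tone labels are assumptions.

import math
from collections import Counter

def tone_diversity(tones: list[int], scale_size: int = 10) -> float:
    """Normalized Shannon entropy of the tone distribution:
    0.0 = every result has the same tone, 1.0 = perfectly uniform."""
    counts = Counter(tones)
    n = len(tones)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(scale_size)

def is_homogeneous(tones: list[int], threshold: float = 0.5) -> bool:
    return tone_diversity(tones) < threshold

print(is_homogeneous([2, 2, 3, 2, 3, 2]))   # True: clustered in two light tones
print(is_homogeneous([1, 3, 5, 6, 8, 10]))  # False: spread across the scale
```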

This should inject a note of caution, not just for this particular change but for Google’s approach to fixing bias in its products more generally. The company has a patchy history on these issues, and the AI industry as a whole has a tendency to promise ethical guidelines and guardrails and then fail to follow through.

Take, for example, the infamous Google Photos error that caused its search algorithm to tag photos of Black people as “gorillas” and “chimpanzees.” This bug was first noticed in 2015, yet Google confirmed to The Verge this week that it still hasn’t fixed the problem; it has simply removed those search terms altogether. “While we’ve significantly improved our models based on feedback, they’re still not perfect,” Google Photos spokesperson Michael Marconi tells The Verge. “To avoid these types of errors and possible additional harm, the search terms will remain disabled.”

Implementing these kinds of changes can also be a cultural and political challenge, as it reflects wider difficulties in integrating this kind of technology into society. For example, in the case of filtering image search results, Doshi notes that “diversity” may look different in different countries, and if Google adjusts image results based on skin color, it may need to change those results based on geography.

“What diversity means, for example, when we’re surfacing results in India [or] when we’re surfacing results in different parts of the world, is going to be inherently different,” says Doshi. “It’s hard to say, ‘oh, this is the exact set of good results we want,’ because that will vary by user, by region, and by query.”

The introduction of a new and more comprehensive scale for measuring skin tones is a step forward, but much more thorny issues with AI and bias remain.

