Can an AI be racist? - Politics Forum.org | PoFo


By foxdemon
#14852784

http://www.bbc.com/news/world-asia-china-41606161

WeChat translates 'black foreigner' into racial slur

Chinese messaging app WeChat has apologised after its software used the N-word as an English translation for the Chinese for "black foreigner".
The company blamed its algorithms for producing the error.
It was spotted by Ann James, a black American living in Shanghai, when she texted her Chinese colleagues to say she was running late.
Ms James, who uses WeChat's translation feature to read Chinese responses, got the reply: "The [racial slur] is late."
Horrified, she checked the Chinese phrase - "hei laowai" - with a co-worker and was told it was a neutral expression, not a profanity.

WeChat acknowledged the error to China-focused news site Sixth Tone, saying: "We're very sorry for the inappropriate translation. After receiving users' feedback, we immediately fixed the problem."
China social networks 'spreading terror'
Chinese 'anti-communist' chatbots removed
How social media is different in China
The app's software uses artificial intelligence that has been fed huge reams of text to help it pick the best translations.
These are based on context, so it sometimes uses insulting phrases when talking about negative events.
Local outlet That's Shanghai tested the app, and found that when used to wish someone happy birthday, the phrase "hei laowai" was translated as "black foreigner". But when a sentence included negative words like "late" or "lazy," it produced the racist insult.
Almost a billion people use WeChat, which lets users play games, shop online, and pay for things as well as sending messages. It resembles another popular chat app, WhatsApp, but is subject to censorship.
A research group at the University of Toronto analysed the terms blocked on WeChat in March, and found they included "Free Tibet", "Down with the Communist Party", and many mentions of Nobel laureate Liu Xiaobo, who was China's most prominent human rights advocate.



OK, so what went wrong here? How did an AI come to decide to translate a Chinese term for a category of foreigner into a racist slur? And why only in contexts such as ‘lazy’ or ‘late’? Furthermore, what was the racist slur?
#14852790
foxdemon wrote:OK, so what went wrong here?

Children say the funniest things. AIs are like idiot savant children, completely innocent of human taboos but very ready to learn and repeat. Until a grown-up says, "Now Johnny AI, we don't use the N-word just because other people do; it's bad," it will use it if it hears others use it, just like any other word. Learning computers need to learn not just the words but the taboos too.

foxdemon wrote:How did an AI come to decide to translate a Chinese term for a category of foreigner into a racist slur?

Because that is what people do, it is learning from people.
foxdemon wrote:And why only in contexts such as ‘lazy’ or ‘late’?

Because somewhere along the way it noticed the N-word was used as a pejorative in connection with criticism. Observing correct context is an important part of language learning and use. When it is your wife's birthday you might say "Happy Birthday, lovely buns" and give her a hug and a kiss, but if it is your line manager's birthday you would say "Happy Birthday, ma'am" and shake her hand. The message is essentially the same but is expressed differently depending on the context, and using the wrong expression for the context can get you into trouble. Learning computers need to learn that too.
foxdemon wrote:Furthermore, what was the racist slur?

I can't tell you because Chinese Communist Censors may abduct and kill me for repeating it. Suffice to say it begins with N and ends in igger.
Last edited by SolarCross on 16 Oct 2017 15:28, edited 1 time in total.
By ness31
#14852795
Can AI have a sense of humour? Yep.
By Hong Wu
#14852798
What probably happened is that black people (and people of Italian descent, including me) are chronically late, so the bot found a correlation between "black foreigner" and "was late to work".

This is pretty similar to how AI analysis notes that more black people are committing certain kinds of crimes. That can inevitably only result in someone somewhere choosing not to prosecute people who commit certain crimes part of the time if they are black, so that the data will show up as even.
#14852891
Depends on how you define "racism." I do not personally take the charge of racism seriously unless malicious intent is explicit. Thus, unless an A.I. is programmed in such a way as to have this sort of intent, or can be proven to have acquired it, I am suspicious of the claim that A.I. can be racist.
By foxdemon
#14852918
Pants-of-dog wrote:Yes AIs can be racist.

This happens when the entire team putting together the AI is of the majority race.



Surely it is a bit much to expect a design team to anticipate every pejorative term in all foreign languages, especially if the machine is self-learning. What’s more, if the term in question is the one @SolarCross is suggesting, then that term is not considered pejorative when used by a specific group of people in a specific country which speaks the language in question.

Could the AI have searched English-language sources online, come across numerous homeboy chat rooms, and catalogued the term as the correct word to use?
#14852933
Doubtful. Why would an AI be so heavily influenced by a small number of chatrooms when the rest of the internet has far more examples of N***** being used as a pejorative?
#14853002
Pants-of-dog wrote:Doubtful. Why would an AI be so heavily influenced by a small number of chatrooms when the rest of the internet has far more examples of N***** being used as a pejorative?

I think a very great many English-language sites censor or self-censor that word, such is the fashion at the moment; consequently the AI could only figure out its meaning from reading those few sites that don't censor it.

Imagine an AI teaching itself English usage by reading PoFo, and in particular the post you just made, as quoted by me above. You have used the word, but through your own self-censorship you have used it in a very indirect, purposely obfuscated way: you wrote "N*****". All of us humans (or most of us, perhaps) can readily see what word you mean, because at some point or another we encountered the word without the obfuscation; we know that each * here represents a particular concealed letter, we know about the tendency to censor words, and this word in particular, and there are no other six-letter words beginning with N which get this treatment.

For an AI, depending on what algorithms it has been furnished with and how they work exactly, what its previous experience has been and what other guidance it has been given, that AI would probably find parsing the meaning of N***** extremely difficult! It certainly wouldn't find it in any dictionaries it had read.

Its first working assumption might be that * means "any character", which is how wildcards work in many computer scripting languages. So then N***** would look like it means not a particular naughty six-letter word beginning with N but ALL six-letter words beginning with N.
So Nagger, Nodder, Noodle, Nobber, Nibble, Nought, Nights etc., and not any one of those but ALL of them at the same time.

What does it mean to say "ALL THE SIX LETTER WORDS BEGINNING WITH N"? What does it mean in the context of what you wrote? Let's drop this working translation into your sentence to get an idea of how it might look to an AI.

"Why would an AI be so heavily influenced by a small number of chatrooms when the rest of the internet has far more examples of ALL THE SIX LETTER WORDS BEGINNING WITH N being used as a pejorative?" :?: :lol:
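The working assumption above can be sketched in a few lines of Python. This is purely illustrative: the candidate word list is mine, and standard glob syntax actually uses "?" (one character) rather than "*" for this reading, so the censored token corresponds to the pattern "N?????".

```python
import fnmatch

# Toy illustration: an AI that reads each masked character as a
# single-character wildcard would match EVERY six-letter word
# beginning with N, not one specific word.
candidates = ["Nagger", "Nodder", "Noodle", "Nobber", "Nibble",
              "Nought", "Nights", "Nation", "Nickel"]

# fnmatch's "?" means "any one character", matching the post's reading
matches = fnmatch.filter(candidates, "N?????")
print(matches)  # every six-letter candidate matches
```

Under that reading, the censored token is genuinely ambiguous: nothing in the pattern itself singles out one word, which is exactly the parsing problem described above.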

On the other hand, on the small number of sites which use it without censorship, the usage is as easy to pick up as it is for any other word. But what the AI doesn't necessarily pick up is the contexts in which the word is acceptable to use and when it isn't, because again that is a really hard problem for a non-human, or an idiot savant child, to deduce.

Imagine letting your children be babysat by a bunch of gangsta rappers; how long before they used the word themselves, in complete innocence of the very current fashion for that word to be taboo for some people depending on their complexion?
#14853006
AIs learn by being subjected to texts. If an all-white AI design team picks the texts, the selection of texts will have the implicit biases of the people who selected them. This is why the Google AI shows you pictures of white hands when you Google Image “hands”: because that is how they trained the AI that does the searches.

Also, PoFo automatically replaces the word N***** with what you see. You can check this by typing the word in your post and seeing what happens when you hit Submit.
By Saeko
#14853010
Pants-of-dog wrote:AIs learn by being subjected to texts. If an all-white AI design team picks the texts, the selection of texts will have the implicit biases of the people who selected them. This is why the Google AI shows you pictures of white hands when you Google Image “hands”: because that is how they trained the AI that does the searches.

Also, PoFo automatically replaces the word N***** with what you see. You can check this by typing the word in your post and seeing what happens when you hit Submit.


An alternative and far more sensible explanation is that there are more white people in America, and hence more stock pictures of white hands. When I google "hands" I get pictures of hands that reflect the white:black population ratio.

Apparently PoD believes that google design teams are so incredibly racist that they don't consider black people's hands to be hands at all.
#14853019
Saeko wrote:An alternative and far more sensible explanation is that there are more white people in America, and hence more stock pictures of white hands. When I google "hands" I get pictures of hands that reflect the white:black population ratio.

Apparently PoD believes that google design teams are so incredibly racist that they don't consider black people's hands to be hands at all.


That’s a good change from when this article was written:

http://www.bbc.com/news/technology-39533308

    There is growing concern that many of the algorithms that make decisions about our lives - from what we see on the internet to how likely we are to become victims or instigators of crime - are trained on data sets that do not include a diverse range of people.
    The result can be that the decision-making becomes inherently biased, albeit accidentally.
    Try searching online for an image of "hands" or "babies" using any of the big search engines and you are likely to find largely white results.
    In 2015, graphic designer Johanna Burai created the World White Web project after searching for an image of human hands and finding exclusively white hands in the top image results on Google.
    Her website offers "alternative" hand pictures that can be used by content creators online to redress the balance and thus be picked up by the search engine.
    Google says its image search results are "a reflection of content from across the web, including the frequency with which types of images appear and the way they're described online" and are not connected to its "values".
    Ms Burai, who no longer maintains her website, believes things have improved.
    "I think it's getting better... people see the problem," she said.
    "When I started the project people were shocked. Now there's much more awareness."

When I did it, there were three pairs of non-white hands in the first thirty-five images. That seems like a lower proportion than the actual population ratio.

Anyway, there are studies that show that AIs learn our traditional biases, such as racism and sexism, because these biases are embedded in how we write, and the AI picks them up while learning.

https://www.sciencemag.org/news/2017/04 ... and-gender

    To test for similar bias in the “minds” of machines, Bryson and colleagues developed a word-embedding association test (WEAT). They started with an established set of “word embeddings,” basically a computer’s definition of a word, based on the contexts in which the word usually appears. So “ice” and “steam” have similar embeddings, because both often appear within a few words of “water” and rarely with, say, “fashion.” But to a computer an embedding is represented as a string of numbers, not a definition that humans can intuitively understand. Researchers at Stanford University generated the embeddings used in the current paper by analyzing hundreds of billions of words on the internet.

    Instead of measuring human reaction time, the WEAT computes the similarity between those strings of numbers. Using it, Bryson’s team found that the embeddings for names like “Brett” and “Allison” were more similar to those for positive words including love and laughter, and those for names like “Alonzo” and “Shaniqua” were more similar to negative words like “cancer” and “failure.” To the computer, bias was baked into the words.

    IATs have also shown that, on average, Americans associate men with work, math, and science, and women with family and the arts. And young people are generally considered more pleasant than old people. All of these associations were found with the WEAT. The program also inferred that flowers were more pleasant than insects and musical instruments were more pleasant than weapons, using the same technique to measure the similarity of their embeddings to those of positive and negative words.
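The WEAT idea quoted above can be sketched with a toy example. The vectors below are invented purely for illustration; the real study used embeddings trained on hundreds of billions of words, and its target and attribute word sets were far larger.

```python
import math

def cosine(u, v):
    # Cosine similarity: how closely two embedding vectors point
    # in the same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(w, A, B):
    # WEAT-style association: mean similarity of word w to attribute
    # set A (e.g. pleasant words) minus mean similarity to set B.
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Invented 2-D "embeddings", for illustration only.
pleasant   = [[1.0, 0.1], [0.9, 0.2]]   # e.g. "love", "laughter"
unpleasant = [[0.1, 1.0], [0.2, 0.9]]   # e.g. "cancer", "failure"
flower     = [0.95, 0.15]               # sits near the pleasant words
insect     = [0.15, 0.95]               # sits near the unpleasant words

print(association(flower, pleasant, unpleasant))  # positive: flower ~ pleasant
print(association(insect, pleasant, unpleasant))  # negative: insect ~ unpleasant
```

Because the bias lives in the geometry of the vectors themselves, any system built on those embeddings inherits it; nobody has to program the association in deliberately.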
By foxdemon
#14853095
Saeko wrote:An alternative and far more sensible explanation is that there are more white people in America, and hence more stock pictures of white hands. When I google "hands" I get pictures of hands that reflect the white:black population ratio.

Apparently PoD believes that google design teams are so incredibly racist that they don't consider black people's hands to be hands at all.


So I was thinking about it. What if I googled in a different language and observed the results?

My method: activated a Japanese character keyboard; used a translation chart to be able to write ‘hand’ in Japanese; search for images.

‘Hand’ got inconclusive results (lots of images of assorted Japanese stuff). So I tried with ‘leg’ and ‘arm’. The results?

For ‘leg’ I got a high rate of return on images of sexy Japanese female legs and quite a number of images of cat paws. For ‘arm’ I got lots of images of muscly Japanese male arms.

So what would the poor AI make of this?
#14853097
Nice idea @foxdemon. I just tried your experiment, using Google Translate to get the Hindi for 'hand' and then pasting it into Google's image search.

I did get quite a few seemingly Caucasian hands but also a lot of hands clearly from Hindi sites like these:

[Image attachments: photos of hands from Hindi-language sites]
By foxdemon
#14853108
@SolarCross it just goes to show. If one googles ‘hand’ in English, one is likely to find hands with the skin pigment of the ethnic group traditionally associated with that language.


To be honest, I started this thread because I was annoyed everyone was ignoring the very interesting thread I started on the subject of the discovery that dark matter is largely baryonic. So I thought “OK, I’ll try a science thread with racism in the subject.” And it got a lot more attention.


Yet it has turned out to be more interesting than I thought it would. AIs reflect those who create them. Like a child, they reflect ourselves back at us. AIs also reflect the values of the society that created them. A contemporary Western AI would likely be very politically correct. But it would become a literal reflection of those values. I wonder if POD would really be happy with the ultimate result?

To create a beneficial AI of great power seems to require a good deal more wisdom than most of us have. Maybe Elon Musk is right and we shouldn’t attempt AIs of great power at this time?
By Saeko
#14853109
foxdemon wrote:@SolarCross it just goes to show. If one googles ‘hand’ in English, one is likely to find hands with the skin pigment of the ethnic group traditionally associated with that language.


To be honest, I started this thread because I was annoyed everyone was ignoring the very interesting thread I started on the subject of the discovery that dark matter is largely baryonic. So I thought “OK, I’ll try a science thread with racism in the subject.” And it got a lot more attention.


Yet it has turned out to be more interesting than I thought it would. AIs reflect those who create them. Like a child, they reflect ourselves back at us. AIs also reflect the values of the society that created them. A contemporary Western AI would likely be very politically correct. But it would become a literal reflection of those values. I wonder if POD would really be happy with the ultimate result?

To create a beneficial AI of great power seems to require a good deal more wisdom than most of us have. Maybe Elon Musk is right and we shouldn’t attempt AIs of great power at this time?


WHAT!? :eek:
#14853111
Saeko wrote:WHAT!? :eek:



That’s right, baryonic. It seems much of it is gas (boring old hydrogen) between galaxies at a temperature that current instruments struggle to detect. Two teams used a method of building up maps of the cosmic background radiation such that they could resolve the predicted distortion of light passing through these gas clouds, and they ended up with comparable results.

So nothing fancy, but it is still dark, at least to our instruments. But not all the missing matter has yet been accounted for.

Another subject which will be interesting to resolve is dark flow. It seems many galaxies have a peculiar motion in a particular direction, rather than random motions relative to the cosmic background radiation. What’s more, the point they are moving toward is beyond the limit of the observable universe (which means whatever it is, it is receding from us faster than the speed of light, at least relatively). So galaxies appear to be moving toward some great mass outside the universe as we know it.

This is disputed and to resolve the issue one way or the other requires better sensors for studying the background cosmic radiation. But the idea of being able to detect something beyond the universe is very exciting.

Our own galaxy and its local cluster are moving toward the Shapley attractor. But that is the same direction as this extra-universal attractor. Could it be there is no Shapley attractor and it is that extra-universal mass that we are moving toward? We will never reach it, of course. But it does introduce non-random motion into the universe, which might have consequences for the evolution of the universe.

Umm, are you still awake after that? :D