Amazon's Artificial Intelligence Recruiter is Sexist - Politics Forum.org | PoFo


#14952766
I recall at one point, in the many debates I had heard about racist cops, judges, and so on, someone suggested that AI would produce a better result. My contention was that exactly the opposite might very likely happen. It seems I may have been more prescient than I thought. As AI grows in popularity as a way of replacing human judgment in decision making, it seems machine learning is capable of being just as bigoted as humans.

Amazon scraps secret AI recruiting tool that showed bias against women

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”
...
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
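The behaviour described above (penalizing the literal token "women's") is easy to reproduce in miniature. Here is a hedged sketch in Python — the resumes are made up and the smoothed log-odds score is a stand-in, not Amazon's actual (undisclosed) model. The point is that any token which mostly co-occurs with rejected historical candidates picks up a negative weight, with no gender field anywhere in the data:

```python
import math

# Toy historical hiring data: (resume tokens, 1 = hired, 0 = passed over).
# The hires skew male, as the article says Amazon's ten years of data did.
resumes = [
    (["software", "engineer", "chess", "club"], 1),
    (["software", "engineer", "java"], 1),
    (["backend", "developer", "java"], 1),
    (["backend", "developer"], 1),
    (["software", "engineer", "women's", "chess", "club"], 0),
    (["frontend", "developer", "women's", "college"], 0),
]

def token_log_odds(data, token, alpha=1.0):
    """Smoothed log-odds of being hired given that the token appears."""
    hired = sum(1 for toks, y in data if token in toks and y == 1)
    passed = sum(1 for toks, y in data if token in toks and y == 0)
    return math.log((hired + alpha) / (passed + alpha))

print(token_log_odds(resumes, "engineer"))  # positive: co-occurs with hires
print(token_log_odds(resumes, "women's"))   # negative: the learned penalty
```

Nothing here "knows" the candidate's sex; the penalty falls out of the correlation in the historical labels, which is exactly why editing a few terms out is a patch rather than a fix.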


I think there are very real problems with the Western notion of egalitarianism. Since Western propaganda is so effective at banning people from discussing differences in race, gender, physical size and strength, intelligence, and so forth, people on the political left seem incapable of having intelligent discussions about topics like this. Yet, computers do not care about political correctness, and it seems they are capable of making the same generalizations that people can make.

For those who take Western egalitarianism so seriously, they need to be much more alert to the dangers of AI, including the dangers of sorting people ordinally. I have little doubt that an AI engine that could detect race, sex, etc., even by inference, would be racist, sexist, etc. Amazon's AI engine did exactly that, and Amazon elected to discontinue it.

What will happen if companies don't discontinue it? If they make decisions based upon AI, can they make a legally defensible claim that they weren't making hiring decisions based upon race, sex or other criteria?
#14952789
Hong Wu wrote:Hilarious... the AI is finding ways around the affirmative action code. Has it achieved sentience?


If (candidate == woman) {
    candidate.downgrade(a);
}
else if (candidate == man) {
    candidate.upgrade(a);
}

It is a line of code. ;)
#14952802
foxdemon wrote:But the thing is, it is AI. So how did it learn to favour men? Did it get feedback on the performance of the applicants it previously approved?


Probably. The newest AI is basically neural networks: essentially a pyramid of choices and evaluations, usually sorted by priority/importance, that it works through before it spews out an output, to mimic the thinking of an individual/player. But what it does not have is the moral or philosophical understanding of why those choices are made. It just makes them. It is a pseudo-AI of sorts. (We just call it AI to make it sound wonderful.)

It usually won't be able to make a distinction between a woman who is slow at her job and a woman who took maternity leave because she had a child. Both lead to the same outcome -> less work done, but the underlying reasons are not the same. This is actually one of the simpler cases to imagine, but the number of such situations is very large, so I doubt there will be an AI that compensates for all of them any time soon.

A general structure would be: input or inputs -> pyramid of choices, or simply blocks of choices one after another -> output.

So let's say a woman's details get input into the system along with a man's -> they work at exactly the same pace and have exactly the same results. But the woman has to take maternity leave, which adds to her "vacation" days and decreases both the total amount of time she had to do work and the experience she has gained -> the woman is worse, the man is better.

But in the mindset of any sane individual this is okay, because 99.9% will understand that if you want to have children, you need to spend time with them and care for them while they are little. We can also make a distinction about why it happened, and that shouldn't really be taken into account when evaluating total performance, for example. Unless you are a loony who thinks babies raise themselves and the mother has no right to maternity leave.
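The "input -> pyramid of choices -> output" structure described above can be sketched as a tiny feed-forward network. Everything here is invented for illustration (the weights, the feature names, the two-layer shape); the point is only that leave days enter the arithmetic as just another number, with no notion of why the leave was taken:

```python
import math

def layer(inputs, weights, biases):
    """One block of choices: weighted sums squashed through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

def score_candidate(features):
    """features: [years_experience, projects_shipped, leave_months] (made up)."""
    # Hidden layer: two "evaluations", both weighting leave negatively.
    hidden = layer(features, [[0.8, 0.5, -0.6], [0.3, 0.9, -0.2]], [0.0, 0.0])
    # Output layer: combine the evaluations into one score in (0, 1).
    (output,) = layer(hidden, [[1.2, 1.0]], [-1.0])
    return output

# Identical records except for leave taken -> the score drops regardless
# of whether the leave was slowness or maternity leave.
print(score_candidate([5.0, 3.0, 0.0]))
print(score_candidate([5.0, 3.0, 2.0]))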
#14952817
What we have is the Diversity Caliphate in STEM.

In the old Caliphate, Muslim parasites lived off the backs of Christians, Jews and Pagans. The Caliphates sometimes even relied on Infidels to fight their wars for them. The Muslims' main role was to lecture everyone else on morality, to constantly tell infidels how evil they are. The problem with the Caliphate was that you had to stop everyone converting to Islam, because then you had a society full of parasites.

The new Diversity Caliphate in STEM is similar. It's White and Asian Infidel men doing most of the work, but there is an army of Blacks, Muslims and Women who must be given the credit for everything. It's similar to the way the so-called Golden Age of Islam credits Muslims for all our achievements. Again there's a whole corps of diversity commissars whose job is to lecture White men on how evil we are.

Here's a great question to test your level of Cultural Marxist indoctrination: was the first software developer a woman, or a White, male, German Nazi?
#14954587
JohnRawls wrote:If (candidate == woman) {
    candidate.downgrade(a);
}
else if (candidate == man) {
    candidate.upgrade(a);
}

It is a line of code. ;)

What you are depicting here is a deterministic switch defined by the programmer. What Amazon's AI did was much more subtle, but it arrived at a similar conclusion based upon the data it was given. I have said before that AI would probably be racist if it had access to crime statistics and the racial backgrounds of people.
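The article's "no guarantee" caveat has a simple mechanical cause: when a remaining field correlates with the removed one, scoring on the remainder still sorts by it. A toy Python sketch — the college names are placeholders (the article deliberately does not name the actual schools), and the perfect correlation is contrived to make the leak obvious:

```python
# Toy applicant records where "college" happens to track the gender column.
rows = [
    {"gender": "F", "college": "WomensCollegeA", "hired": 0},
    {"gender": "F", "college": "WomensCollegeA", "hired": 0},
    {"gender": "M", "college": "TechSchoolB",    "hired": 1},
    {"gender": "M", "college": "TechSchoolB",    "hired": 1},
]

def hire_rate_by(records, field):
    """Historical hire rate per value of `field`."""
    buckets = {}
    for r in records:
        buckets.setdefault(r[field], []).append(r["hired"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

# Drop gender entirely, as Amazon tried to neutralize explicit terms...
stripped = [{k: v for k, v in r.items() if k != "gender"} for r in rows]

# ...and the proxy column alone reproduces the exact same split.
print(hire_rate_by(stripped, "college"))
```

This is why editing out "women's" was a patch, not a fix: any model fit to the stripped data can recover the deleted column through whatever correlates with it.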

JohnRawls wrote:Probably. Newest ai is basically AI neural networks, which is basically a pyramid of choices and evaluations it needs to make/do usually sorted in order of priority/importance before it spews out an output. To mimic thinking of an individual/player. But what it does not have is the morals or philosophical understanding of why those choices are made. It just does them. It is a pseudo-AI of sorts. (We just call it AI to make it sound wonderful).

Perhaps, but it arrives at the same in-built prejudice that humans do, which should be of interest. Stereotypes aren't unfair to the group, but to some individuals within the group. That is, not all blacks are criminals, not all Irish are drunks, not all women are bitchy, etc.

JohnRawls wrote:It won't be able to make a distinction usually between a woman who is slow at her job and a woman who took maternity leave because she had a child. Both lead to the same outcome -> less work done but underlying reasons are not the same.

Right, but if it knows the candidate is a woman, then the risk of taking maternity leave is higher. If it knows the woman is married, the risk of taking maternity leave is higher.

The reason I think it is interesting is that as a thought experiment, probably a decade ago, I decided to disagree with American ideals as a general rule to see what stuck and what didn't. I ended up drawing distinctions between equality and uniformity. I think uniformity in the law simplifies the law and makes it more scalable. However, I arrived at the conclusion that presumed equality among people was not only wrong, but probably harmful.

You make a point about "morals and philosophical understanding," but I think the modern left is pushing egalitarianism beyond its usefulness. Traditional roles evolved to address physical differences. Traditional roles that leftists hate tend to maximize reproductive fitness.

An AI looking for maximum output will probably always favor men. So an AI would have to be looking for advantages women may have--e.g., multi-tasking, color perception, aesthetics, etc. As a programmer, I'm sure you have seen a significant difference in the qualitative and quantitative output from system architects, database designers, and business logic coders on one hand and user interface and visual presentation coders on the other. They are night and day personality-wise.

Rich wrote:Was the first software developer a woman or a White, Male German, Nazi?

Is that a reference to Augusta Ada Byron King (Ada Lovelace, Lord Byron's daughter) or Konrad Zuse?

Anyway, while I don't have dystopian visions of AI, I do think it will be racist and sexist almost without doubt. Maybe, AI will be like a modern conservative and hold those views, but decide who it will reveal its conclusions to based upon an inference about their probable beliefs as derived from demographic information about them. Who knows. :lol:
#14954620
blackjack21 wrote:What you are depicting here shows a deterministic switch as defined by the programmer. What Amazon's AI did was much more subtle, but arrived at a similar conclusion based upon the data it was given. I have said this before that AI would probably be racist if it had access to crime statistics and the racial background of people.




I agree with this to a degree, but there are too many generalizations. Candidates shouldn't be blocked from being hired because of a possibility that they will be less productive; it should be based on their current ability and merit. As you said, if you generalize the information then it will always favor men. But in reality, a woman can be as good at programming as a man, and a man can be as good a designer as a woman.

Also, there should be some exceptions for maternity/paternity leave and the like. You know, the humane things that are not always there but are really nice to have, so families can be functional and not slaves to the system of sorts.

Here is the problem; it is hard for me to put into words, so I generalised it a lot on my side. It is even harder to turn into code. You can't just come to some company or development team and tell them, "PLEASE MAKE SOFTWARE FAIR AND JUST TO ALL CANDIDATES." Most of the developers, analysts and testers will be like, "What the fuck are you even talking about?" (Well, the analyst will try to clarify, but it will probably get him/her nowhere.)

This is non-discrimination 101 of sorts. You know, the basics of the basics. If you consider this SJW stuff, then that is really weird.
#14954641
I hear this is happening due to the whole #MeToo phenomenon. Women have become a high risk for companies, which have been looking for ways around affirmative action policies. Eliminating people who have degrees in fields like gender studies helps to weed out many women applicants, who predominantly go into such fields of study.

Perhaps if it were not for state-enforced affirmative action, we would see many companies outright refusing to hire women because of the liability risk.
