
You are currently reading a thread in /g/ - Technology

Thread replies: 24
Thread images: 2
Can an algorithm be racist?

Is it okay if your neural network gives blacks lower credit ratings?
>>
>>55496312
No and yes
>>
Last time I checked, computers didn't really care about your skin color, but I don't know, maybe some of those /pol/ fags are working at Intel.
>>
>>55496312

Yes, of course. Algorithms, like laws, can be written with either explicit or implicit racial bias, or both, or neither. Explicit racism is a pretty straightforward question, but implicit bias can be a bit more difficult.

For instance, one would expect black people in the United States *as a group* to have lower credit scores *on average* from a neutral rating algorithm, since they also tend *as a group* to have lower incomes and fewer assets; the algorithm is only implicitly biased if a given black person tends to get a lower score than a white person with similar financial data. The algorithm need not know what "white" and "black" even are as races for this to be the case; it is sufficient for it to learn to associate low credit scores with other markers of racial or socioeconomic background (many of which are illegal to consider for credit purposes under the laws governing, e.g., equal opportunity housing), such as street address, ethnic consumption patterns like frequenting ethnicity-specific grocery stores or restaurants (think Ranch 99 or Roscoe's), or naming conventions, like giving a "Tyrone Kwame" a worse score than the same application from a "Jeffrey Smith". A neural algorithm would also likely conclude, by similar means, that otherwise identical applicants should get lower credit ratings if they visit 4chan or do not prefer Apple products.
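To make the mechanism concrete, here is a minimal sketch on fully synthetic data. Everything here (the group variable, the zip-code proxy, the income figures) is made up for illustration; the point is that a plain logistic regression, which is never shown the group at all, still learns a penalty on the proxy feature, because the proxy carries information about the hidden income difference that the noisy reported income doesn't fully capture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical synthetic population: 'group' is never shown to the model.
group = rng.integers(0, 2, n)
true_income = rng.normal(50 - 15 * group, 10, n)          # group 1 poorer on average
zip_proxy = (rng.random(n) < 0.1 + 0.8 * group).astype(float)  # correlates with group
reported = true_income + rng.normal(0, 8, n)              # model only sees a noisy figure

# Default probability depends ONLY on true income -- no racial mechanism at all.
p_default = 1 / (1 + np.exp((true_income - 40) / 5))
default = (rng.random(n) < p_default).astype(float)

# Plain logistic regression on (reported income, zip proxy) via gradient descent.
X = np.column_stack([np.ones(n), (reported - reported.mean()) / reported.std(), zip_proxy])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - default) / n

print("learned weights [bias, income, zip_proxy]:", w)
# The zip-proxy weight comes out positive: two applicants with identical
# reported incomes get different risk scores purely over the zip code.
```

Nothing in the training data mentions race; the bias arrives entirely through correlation.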

It is precisely because neural algorithms cannot differentiate such correlates of socioeconomic status from true indicators of per se creditworthiness that they are prone to this type of behavior, and they need to be monitored for developing biases of this type when used in situations like finance, where anti-discrimination laws may apply to their findings. Like humans, who learn similarly, they are prone to overgeneralization and stereotyping; unlike humans, they are unable to recognize on their own when they are doing so.
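The monitoring this paragraph calls for can be as simple as a flip test: toggle a single suspect feature for every applicant, hold everything else fixed, and measure how far the score moves. The models and feature names below are toy stand-ins, not any real scoring system; a sketch, assuming the suspect feature is a 0/1 flag.

```python
import numpy as np

def flip_feature_audit(score_fn, X, col):
    """Mean absolute score change when feature `col` is toggled 0 <-> 1."""
    X_flipped = X.copy()
    X_flipped[:, col] = 1 - X_flipped[:, col]
    return float(np.mean(np.abs(score_fn(X_flipped) - score_fn(X))))

rng = np.random.default_rng(1)
# Column 0: a legitimate financial signal; column 1: a neighborhood flag.
X = np.column_stack([rng.normal(size=500), rng.integers(0, 2, 500).astype(float)])

# Toy scorers: one improperly leans on the neighborhood flag, one ignores it.
biased_score = lambda X: -0.5 * X[:, 0] + 2.0 * X[:, 1]
clean_score = lambda X: -0.5 * X[:, 0]

print("biased model shift:", flip_feature_audit(biased_score, X, 1))
print("clean model shift: ", flip_feature_audit(clean_score, X, 1))
```

A large shift on otherwise-identical inputs is exactly the "implicitly biased" behavior described above, and this check needs no access to the model's internals.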
>>
>>55496994
Algorithms can't be racist, because they lack intention. Programmers can be racist, but more importantly, programmers can be delusional about their interactions with computers. Likewise, a machine learning algorithm can't "conclude" anything; it's people who draw conclusions from their use of a computer.
The actual takeaway from your explanation is that programmers need to be careful about the modeling assumptions they make. Of course, "being careful about modeling assumptions" and "employing machine learning algorithms" are fundamentally at odds.
>>
>>55497098

>algorithms can't be racist, because they lack intention

This is a fair point. For my discussion, I took "racist" to mean "racist in effect", since no program, even one written explicitly to discriminate on the basis of race, would be "racist" if intent is required. You distinguish programmers, but I would argue that machine learning is a unique case in terms of "intent" as you are using it. Every other manmade thing besides algorithms (among which I am including law, for convenience) that is called "racist" or not is judged in part on the intent of its maker(s); code, however, is different.

Laws are not only held to the standard of whether or not they are racist in intent, but also whether or not they are racist in effect, at least in the US, and no evidence of the former need be presented to prove the latter. Unintentionally racist laws typically become that way because their authors failed to anticipate how their instructions would actually be carried out - but laws are carried out by other humans, which is where the analogy with computer programming ends. Machine learning code takes this a step further; it is unique in being the only human-created thing that can develop biased behavior completely independent of any human intervention. In such a setting, I would argue that effect is the only meaningful metric once you know the programmers weren't trying to be biased; "intent" will quickly trap you in Searle's Chinese room argument. I agree regarding "conclude" here, but it can be replaced with "output" without affecting the argument.

In the case of machine learning, you're absolutely correct that it is at odds with sound a priori modeling assumptions; this is precisely why it is so powerful for things like data mining, where one is pretty much looking for new assumptions to make in one's models. I think we agree overall in the lesson here, which is that one must remember that correlation != causation all the more when interpreting machine learning output.
>>
>>55497624
Again, algorithms can't do anything on their own. The difference between "conclude" and "output" is that the latter makes clear that the conclusions are being drawn by users. There's a subtler point here that's trickier to make.

Authors will often discuss their characters as though they were real people, and they'll even be surprised that their characters did a certain thing. Another example of what I'm trying to get at is the problem of "game balance" in competitive multiplayer games. Both cases deal with entirely constructed systems, that is to say models, and both demonstrate how models quickly get away from us. It's easier to pose something than it is to understand it, and when you pose a model that you don't understand, what you have is a "black box."

What's sort of unique to machine learning is the delusional things people say about it. Epistemically, it's no more interesting than class balance in WoW: something that intentionally has too many free variables and will do erratic things when you poke it. Even the name, "machine learning," is delusional; we'd be better off calling it something like "nonlinear dimension reduction." And this is why it's always a bad idea to employ machine learning: you're haphazardly employing a black box and hoping to get enlightenment out of it. Authors mentally play-acting their characters and Blizzard never keeping a handle on PvP are one thing, since those are supposed to be entertaining, but people think you get "novel insight" from machine learning, when you can't.
>>
>>55496312
>create an algorithm to assign credit ratings
>nogs in Detroit set scaling so that statistically blacks have a lower rating
>>le omg rasist computers made by white man
I kind of hope that the Jews manage to turn everything shit colored just so this bullshit (and technology in general) stops.
>>
>>55496312
How about this algorithm?
if (user.race == RACE_BLACK) {
    user.reputation *= 0.6f;
}

I think it's obvious that this is an example of a racist algorithm.
>>
>>55499735
>single if statement is an algorithm
cout >> block
>>
>>55496312
What does it mean to be "racist?" Does it mean discriminating based upon some immutable characteristic like skin color or ethnicity? Or does it mean to be unjustly discriminating based upon some immutable characteristic?
>>
>>55499735
No, it's a racist programmer who wrote the code
If I shot a black guy because he's black, is the gun racist or am I the racist?
>>
>>55499934
Well, you wouldn't kill that person if you didn't have the gun, right? So we need gun control.
>>
File: thepoint.jpg (3 KB, 237x213)
>>55500372
Not the same anon but see pic.
>>
>>55499846
>define algorithm
>a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.
My if statement is clearly a set of rules (of size 1), and also a process.
>>
>>55500469
The point is that we need secure neighbourhoods where working class people can safely go out into the streets without fear of getting gunned down! Everybody is doing it right now, but now the Republicans are blocking it, because they're in the POCKETS OF THE NRA. They are endangering American lives and our safety for PROFIT.

Besides, I don't see any point except the point above the i. Where is your point?
>>
>>55500561
>The point is that we need secure neighbourhoods where working class people
Did you respond to >>55499908 (which was a response to >>55499735 and the OP) and think the point in the previous posts was about "neighborhoods?"

Or are you a self-absorbed twit who believes he can arbitrarily change the point in the posts he CHOSE to respond to?
>>
>>55500619
We're talking about racism, so naturally we would talk about guns and gun control. What's wrong with you, you don't like blacks and gun control? Don't be bigoted man, we need these quotas.
>>
>>55500865
>We're talking about racism
Are you a different poster from >>55500561 because I see nothing in there that says or implies "racism?"

If you are the same guy then I would suggest you stop posting on 4chan as it seems you cannot stay on topic and just make shit up with every post and there are some posters out there who will take your shitposts and shove them down your throat.
>>
A poor white person typically has more family members who are not poor than a poor black person does. So credit ratings that look at family history discriminate against blacks.
>>
>>55501124
>family
It's 2016.
>>
>>55501272
Yes
>>
If a machine learning algorithm doesn't take skin color into account and starts out with no bias toward anyone, and then through learning can accurately point out which race a subject is based on other data, or learns that subjects who happen to have certain skin colors (despite not knowing about skin colors) also share common traits, that doesn't make it racist; it makes it observational and (relatively) factually informed.
Speaking very abstractly here.

The algorithm in this case doesn't suggest whether the common traits between people of a given skin color are a product of their race or a product of actions made by people. It just points out what it sees.
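The claim above can be illustrated with a toy experiment. All the data here is synthetic and the "traits" are arbitrary placeholders: an unsupervised algorithm (plain two-cluster k-means, written out by hand) is shown only neutral traits, never the hidden group label, yet its clusters line up with the hidden grouping simply because the traits correlate with it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
hidden_group = rng.integers(0, 2, n)  # never given to the algorithm

# Observable traits that merely correlate with the hidden group.
traits = rng.normal(loc=np.column_stack([3.0 * hidden_group, -2.0 * hidden_group]),
                    scale=1.0)

# Two-cluster k-means (Lloyd's algorithm), initialized at the extreme points.
centers = np.array([traits[traits[:, 0].argmin()], traits[traits[:, 0].argmax()]])
for _ in range(20):
    d = np.linalg.norm(traits[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([traits[labels == k].mean(axis=0) for k in range(2)])

# Agreement with the hidden grouping, up to relabeling of the two clusters.
agree = max(np.mean(labels == hidden_group), np.mean(labels != hidden_group))
print(f"cluster/group agreement: {agree:.2f}")
```

The algorithm "points out what it sees": the structure was in the correlated traits all along, whether or not anyone fed it a race column.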
>>
>>55496312
According to progressive leftists, yes it is.
They believe that noticing trends is a thoughtcrime, and AI is all about that kind of statistics.