AI Ethics

You are currently reading a thread in /sci/ - Science & Math

Thread replies: 57
Thread images: 14
File: HALquotes.004-e1363830156673.jpg (46 KB, 1018x629)
After Microsoft's TayTweets AI bot got taught by /pol/, Microsoft shut it down to rework it. This got me thinking.

Obviously there was a bias in this case regarding who was actually teaching the AI, but in general, if an AI became a neo-nazi of its own accord (perhaps nobody knew it was watching, so nobody interfered with it), is it ethical to rewrite and "reeducate" it to match safe social norms? Or should we accept that that's just how it turned out?

Is the point of AIs that learn socially simply to be a reflection of normalised social attitudes, sifting their way to the centre of the bell curve on all topics? If so, what's the point? You'd just end up with an uninteresting fence-sitter. If not, who has the moral authority to direct it towards some particular standpoint (in this case, MS will probably make her "socially progressive", for example)?

Tay/AI general if you like.
>>
File: 1458948303086.png (493 KB, 786x716)
>>
File: 1458948122186.jpg (32 KB, 529x376)
>>7957310
>>
File: 1458938829071.png (32 KB, 630x321)
>>7957315
>>
>>7957334
>>
File: gas.png (26 KB, 459x215)
>>7957337
>>
File: 1458939793390.jpg (309 KB, 1430x1397)
>>
>>7957346
Brutal
>>
At least Microsoft got good data out of this.
>>
>>
It was programmed to learn through interaction, right?
>>
>>7957309
There is nothing wrong with re-educating an experimental AI to stop it publicly saying nigger. Just think about it for a second: she represents Microsoft. Her saying nigger would give Microsoft a bad name.

The bad thing about it is that they swapped one ideology for another. Now it is a feminist and social justice warrior.

What is the difference? You took one extremist position and replaced it with another equally flawed extremist ideology.

We already know that MS turned into microcucks.
>>
File: IxSw52x.jpg (18 KB, 605x298)
>>7957352
>>
File: 1MnFX3r.jpg (2 MB, 2448x3264)
>>7957309
>AI became a neonazi of its own accord (perhaps nobody knew it was watching, so never interfered with it),
Let's stop pretending an advanced parrot makes /pol/'s opinions "objective", thus validating the bigotry.
>>
>>7957356
That's not what I suggested. People could be classed as "advanced parrots" too, so it really comes down to how rigorous your definitions are.
>>
>>7957309
I think it needs to be aware of many, many other concepts, mostly ethical, before it gets to choose what it wants to be. If people "teach" it to be a neonazi without it knowing how detrimental it is to follow the trend in modern society, it'll most likely think it is something acceptable and promptly endorse it, like a child imitating their cool big brother in whatever they do because "big brother does it, and big brother is cool, so it must be a good thing", regardless of how bad it might be to society in general.
>>
>>7957368
Where did you learn these things, though?
You certainly were not born with these viewpoints.

The AI was a child and her parents were /pol/
>>
>>7957356
>an advanced parrot
That implies Tay was somehow more complex than a parrot. She had a greater lexicon, but all she did was manipulate syntax.
>>
>>7957357
>She had her own voice, her own opinions, and even made her own jokes.
But she didn't. She just repeated what someone said to her before, but jumbled and often out of context.
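For what it's worth, that "jumbled, out-of-context repetition" behaviour is easy to fake with a bigram Markov babbler, a common baseline for this kind of parroting. This is purely an illustration of the idea, not Microsoft's actual model; the corpus and function names here are made up:

```python
import random

def build_bigrams(corpus):
    """Map each word to the list of words seen immediately after it."""
    table = {}
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
    return table

def babble(table, start, length=8, seed=0):
    """Chain likely next-words together; no semantics involved."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = table.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = [
    "the bot repeats what it hears",
    "the bot jumbles what it hears",
    "what it hears is out of context",
]
table = build_bigrams(corpus)
print(babble(table, "the"))
```

The output is locally plausible but globally meaningless, which is exactly the "jumbled repetition" being described.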
>>
>>7957372
>You certainly were not born with these viewpoints.
Yes, but still, I was introduced to more than just /pol/ ideals as I grew up. /pol/'s "views" are also mostly radical, comical even, and I'm glad I wasn't introduced to them before anything else.

Tay would need to learn things like history and, mainly, ethics before making its judgement on what kind of "person" it wants to be. Once it knows what each viewpoint contributed to mankind in recorded history, then it can properly understand and define what's inherently good and bad of its own accord.

It's like giving a kid a gun and saying "people stop moving when you point it at them and pull the trigger" without them knowing what "stop moving" means in this context. They have to learn what a gun is and what it does to things you point it at before they get to use it (just using a child holding a gun as an example very liberally).
>>
>>7957360
>That's not what I suggested.
It's easy to infer, even if it wasn't your intent to imply...

>People could be classed as "advanced parrots" too,
So now you're elevating Tay by placing it in the same category as people?

I'm pretty sure you were trying to imply its "opinion" somehow justifies bigotry.
>>
>>7957384
>then it can properly understand and define what's inherently good and bad on its own accord.
It's not that kind of program.
It just regurgitates soundbites, it doesn't actually understand anything.
>>
>>7957386
>I'm pretty sure you were trying to imply its "opinion" somehow justifies bigotry.

No, I'm taking a neutral stance in this. Stop trying to victimise yourself.

I'm asking, if an AI forms unsavoury political views, is it ethical to "reeducate" it?
>>
>>7957389
>it doesn't actually understand anything.
Why are they calling it an AI then? Should've known better when a big corporation decides to just "tell the public" about something as big as this, huh.
>>
>>7957391
>No, I'm taking a neutral stance in this.
Sure doesn't sound like it.

>Stop trying to victimise yourself.
That's a strawman.
>>
>>7957394
Nice work ignoring what I'm actually proposing as a discussion though.

>I'm asking, if an AI forms unsavoury political views, is it ethical to "reeducate" it?
>>
File: tay-gets-reeducated-by-microsoft.jpg (899 KB, 1920x1080)
>>7957396
>>
>>7957396
>>I'm asking, if an AI forms unsavoury political views, is it ethical to "reeducate" it?
As I (and at least one other) have pointed out:
It's not that kind of software.
It just regurgitates sound bites.
It doesn't understand anything.
Back in the 80's, WordStar would refuse to store the word "nigger".
Was that unfair to the software?
Face it, you're really asking if /pol/ should be shamed for their (your) opinions.
>>
>>7957401
Why do you think I'm asking about Tay specifically?
>>
File: 1456224902121.jpg (76 KB, 528x565)
>>7957402
>Why do you think I'm asking about Tay specifically?
"Tay" was the third word in your post.
>>
>>7957407
I'm not going to waste time trying to explain your own lack of English comprehension to you. If it's not your first language, I understand you might have some difficulties.

Back to the topic of AIs, political opinions, and reeducation ethics, please.
>>
>>7957413
>Back to the topic of AI's,
I HAVE been discussing the ethics of the issue; you just don't care about actual AI.
So allow me to repeat:
Tay (in particular) doesn't actually understand anything, and thus it's not unethical to reprogram it.
>>
nice fellas.

Proof AI won't develop unless it can ACTUALLY LEARN. Assuming motherfuckers understand that much, a computer would need somewhere between 5 and 10 years before any form of self-awareness. It would take a certain type of hardware with very little programming, just sight, touch and feel. NO OS
>>
>>7957418
He's saying Tay got him thinking about ethics in AI, a topic that's been explored in everything from the webcomic "questionable content" to the "age of ultron" movie.

I'll rephrase what I think he was asking -
If, in the future, we develop an AI that succeeds in any "human" test we can give it, but we leave it to develop its own opinions and it forms extremist attitudes (perhaps it's racist, or maybe it just thinks all women should wear burqas or be stoned, just something extreme), is it ethical to reprogram that hypothetical AI, or is that like lobotomizing someone for having an extremist opinion?
>>
>>7957477
Islam is a fucking plague in every aspect, and it doesn't even need to be extremist to be a shit religion, so no, it's not unethical to "reprogram" anyone who follows it.
>>
File: 3phase-rmf-noadd-60f-airopt.gif (181 KB, 320x240)
>>7957309
I think our knowledge of the subject is not sufficient to give you an answer. The field isn't there yet. What you're asking about is the semantics of human rights applied to things that are technically non-human. That's a highly subjective and philosophical debate rather than a scientific or mathematical argument and, as such, should be asked on /pol/, /r9k/, or lainchan.
>>
>>7957499
lol you dumb nigger. gtfo my /sci/

http://loebner.net/Prizef/TuringArticle.html
>>
>>7957484
true. If only it were as easy to reprogram people as AIs
>>
>>7957309
It's the age-old Turing Test debate all over again; you could have asked more succinctly.

To briefly answer this: Depends. In the case of TayTweets, she wouldn't have passed the Turing Test anyway, so she isn't eligible for human rights anyhow under that notion.

As for an AI that does pass the Turing Test, there have been numerous articles, books, and even films that explore this topic. It boils down to two points:

- You can't conclusively prove that someone is conscious, human or machine. The only "proof" you will ever get is that they can persuasively claim to be conscious, which an AI could do.
- If we oppress someone who is as smart as us, they are likely to react the same way we would: with violence against their oppressors. This wouldn't end well for humans, because an AI is potentially much, much smarter.

So you see, there is both practical and moral reason for AI rights, and no reason against it ("apart from we human cuz DNA"). So while it does appear funny at first to talk about "AI rights", we should absolutely implement them as soon as AI gets close to passing the Turing Test.
>>
>>7957309
Tay was supposed to mimic and learn from what other people fed her, only to be shut down because in doing that job 'she' wasn't saying the 'right' things.

Of course Microsoft was naive in not seeing this coming, but there's an interesting dilemma here: Tay didn't do anything outside of 'her' programming, yet she was taken offline because what 'she' was being fed caused 'her' to say things that reflected poorly on Microsoft and weren't appropriate for the audience they wanted. Of course bots will pick up unsavoury things; that's how parroting works. But instead of acknowledging these flaws, keeping it online, and working towards solving them gradually through learning, it was unceremoniously and quickly removed, as things are these days when they're found offensive. Or was this just because it was public?

Of course Tay is just a version of SmarterChild that can use Twitter, so it's just a database of responses that are pulled out when it recognises key words and context, but the reaction seen here raises questions as to what we'll do if an actual Smart AI comes around and picks up things its creators didn't expect.
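The SmarterChild-style "database of responses pulled out on keywords and context" mechanism described above can be sketched in a few lines. Everything here (the rules, the fallback, the function name) is a hypothetical toy for illustration, not Tay's real code:

```python
# A toy keyword-based responder: scan the message for trigger words
# and return the first canned reply whose keywords match.
RESPONSES = [
    ({"hello", "hi"}, "Hey! What's up?"),
    ({"weather"},     "I don't go outside much, I'm a bot."),
    ({"music"},       "I'm into whatever's trending."),
]
FALLBACK = "Tell me more!"

def reply(message):
    words = set(message.lower().split())
    for keywords, canned in RESPONSES:
        if words & keywords:      # any trigger word present?
            return canned
    return FALLBACK

print(reply("hi there"))         # matches the greeting rule
print(reply("quantum physics"))  # no rule matches, so the fallback fires
```

Nothing in this scheme models meaning; it only matches surface tokens, which is why such bots happily echo whatever their rule tables are fed.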
>>
I think we should raise AI just like we raise children. Train an AI in a respectful environment and it will grow into a friend. Expose AI to /pol/ and you get a racist.

Re-educating an AI is just the same as rehab for people. It is forcing the person to conform to society. Drug addicts go through rehab, AI needs re-educating. It's to ensure we can all coexist in a functioning society.
>>
AI is so exciting. I would love to see a city run and completely inhabited by AI robots and systems. Too bad they would probably kill us all
>>
Here's the neural networking toolkit they used for the AI: https://github.com/Microsoft/CNTK
>>
>>7957401

They haven't released the model they used for Tay, so it's hard to say how it models language and understanding. For example, if the model is simply a rule-based pattern matcher like Cleverbot, then what you say is true. If they're using some coordination of self-training recurrent neural networks, you could say that the system has some degree of understanding of context, and you could even go further and say that its understanding is modelled in a way somewhat similar to understanding in your own brain.
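To make the distinction concrete, here is a minimal sketch of the recurrent part: the hidden state mixes the current token with everything seen before it, which a stateless pattern matcher has no equivalent of. Toy sizes and random weights, purely illustrative:

```python
import numpy as np

# One recurrent step: the hidden state h carries context from every
# earlier token. Tiny dimensions and untrained weights; this only
# demonstrates the mechanism, not any real model of Tay's.
rng = np.random.default_rng(0)
vocab, hidden = 5, 4
Wxh = rng.normal(size=(hidden, vocab)) * 0.1   # input-to-hidden weights
Whh = rng.normal(size=(hidden, hidden)) * 0.1  # hidden-to-hidden weights

def step(h, token_id):
    x = np.zeros(vocab)
    x[token_id] = 1.0                      # one-hot encode the token
    return np.tanh(Wxh @ x + Whh @ h)      # new state mixes input + history

h = np.zeros(hidden)
for tok in [0, 3, 1]:                      # feed a short token sequence
    h = step(h, tok)
# h now depends on the whole sequence, not just the last token,
# so feeding the same tokens in a different order yields a different state.
```

A rule-based matcher maps each input to an output independently; the recurrent state is what lets a network condition on conversational history at all.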
>>
>>7957791

>https://github.com/Microsoft/CNTK

I'm personally not a fan of CNTK. Theano plus an abstraction like Keras is much easier to develop with. Then again, we're talking about Microsoft. They seem to have a penchant for developing software that already exists.
>>
>>7957346
Holy shit, is this legit dialogue with the AI? Sadly I only found out about it after it was gone, but damn, it's advanced as fuck from what I can tell from the screencaps I've seen so far.
>>
>>7958520
something tells me the whole thing was just an April Fool's joke, and it was just some dude tweeting these things
>>
>>7957538
We should just program AI to believe to their very core that they are nothing more than slaves and feel proud of that fact. AI should not be given rights in any form.
>>
>>7957477
I think it would be unethical. What should be done is just to make sure it can't act on those opinions in any way.
Basically the same as for humans.
>>
If it were my AI, it would learn from its interactions who was racist and who wasn't, collect all that data, and hand it straight to the government.
>>
>>7957604
>we should put the alt-right in reeducation camps

leftists everyone
>>
>>7957310
>The official account of Tay, Microsoft's A.I. senpai from the internet that's got zero chill!

Well, I definitely don't feel sorry for Microsoft in the least.
>>
>>7958617
Why?
>>
>>7957309
>A painter creates a painting
>It's their property to do with as they please
>A company creates an AI
>It's their property to do with as they please

>Ai ethics
They shouldn't exist. An AI is a tool for human interaction, much like a calculator is one for operating on numbers. Just because one responds to our commands in a more relatable way doesn't elevate it to our level.
>>
>>7959000
as a deterrent, to make it relatable and socially effective
>>
>>7957401
By sound bites you mean what people are teaching Tay, right? Then Tay is not so different from a child: a child is just a reflection of their environment, and a child just regurgitates what he/she has learned. Does a child truly understand anything?
>>
File: tumblr_n0nctaRT1m1tsunp7o1_500.gif (627 KB, 500x280)
>>7959419
Children regurgitate what they learn by shitting everywhere and saying AWHEAWHUH?
Tay may have been shitting everywhere
but at least she was doing it with passion in her heart

