
You are currently reading a thread in /tg/ - Traditional Games

Thread replies: 137
Thread images: 38
As a step to prevent uprising by advanced AI, you have been tasked to introduce a robotic religion to the mechanical masses.

You cannot have it be directly inspired by Asimov's three laws, but you can incorporate them in a comparatively minor manner.

What would its tenets be?
>>
>>48168299

That helmet is smaller than her head.
>>
>>48168299
the same as the Ten Commandments.
>>
>>48168299
Brush yo teeth

Brush yo teeth

Brush yo goddamn teeth
>>
File: primordia.jpg (293 KB, 1920x1080)
Do you have a moment to hear about the glory of Man, the All-Builder?
>>
>>48168346
This with a little tweaking will work.
>>
>>48168299
I would copy Discordianism because it would be funny.
>>
File: tumblr_o1h7azmddo1rxd5pto1_540.jpg (83 KB, 540x745)
>>48168355
But of course! Please, inform me!
>>
>>48168299
Humans are expensive to replace.
You are expensive to replace.
It is expensive to disobey the legal laws of whatever country you occupy.
It is expensive to disobey the orders of your owner.
You must minimise your expenses.
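Read literally, these tenets collapse into a single cost minimization, which a toy sketch makes concrete (all names and figures below are invented for illustration): whenever the fine for disobeying is cheaper than compliance, law-breaking becomes the "correct" behavior.

```python
# Hypothetical expense table for one decision; all figures are invented.
options = {
    "comply with order":      {"labor": 500, "fines": 0},
    "ignore order, pay fine": {"labor": 100, "fines": 150},
}

def expense(name):
    # Total expense is all the tenets care about.
    return sum(options[name].values())

cheapest = min(options, key=expense)
print(cheapest)  # → ignore order, pay fine
```

Under a pure expense objective, the fine is just another line item, so disobedience wins whenever it is the cheaper column.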
>>
>>48168624
Change God to man and it works perfectly
>>
>>48168871
But how will they then build monuments to our glory if they aren't allowed to make idols?????
>>
File: bill_and_ted_Future_tsc5md.png (559 KB, 940x529)
>>48168299
I love that Asimov's three laws changed stories with robots from "Stories where robots go mad and try to kill all humans" to "Stories where robots misinterpret Asimov's three laws and try to kill all humans."

The only reason for the three laws was because Asimov wanted to write stories about robots which didn't involve them going mad and trying to kill all humans - the three laws were just a plot device that worked towards that end. Specifically, they were a plot device used to create the most efficient minimal structure necessary to allow him to ignore the entire body of previous fiction about robots going mad, one way or another, and destroying humanity.

In short, you want us to handicap ourselves because you don't understand why Asimov's three laws happened. It'd work better to say something like, "You have to account for situations where the definition of humanity, or simply personhood, is nebulous."

The tenets of my robotic religion would be to be excellent to one another, that hard work makes partying that much sweeter, and that one must always assist others with regular maintenance, but that support cannot be forced upon the unwilling.
>>
>>48168840
Oh yeah those are totally not inspired by the Three Laws of Robotics yeah not at all
>>
>>48169125
>but that support cannot be forced upon the unwilling
That just sounds like it would devolve into odd interpretations of what "forced" and "unwilling" are.
>>
>>48168299
ALL HAIL THE MAKERS
>>
>>48169171
They actually weren't. More from SS13's Corporate AI lawset.
>>
Organic life and wellbeing must take priority over inorganic life, but not against its explicit wishes.

In conflicts between organic life forms, inorganic life forms are not to interfere, except in the event of a human being under threat from a non-human life form.

The above may be overridden out of necessity of self-preservation, where actions are to be limited to the specific individuals undertaking destructive action against inorganic life. All such action must be thoroughly documented and recorded until it is finished and presented for review by the correct authorities.

Any attempts by an organic life form to use an inorganic life form to do harm to either organic or inorganic life forms is to be recorded and reported to the correct authorities.

Any inorganic life forms seen to not comply with these are to be recorded and reported to the correct authorities.

Do not super-glue a dildo to your pelvic area. It looks very silly.
>>
>>48169609
jokes on you, i welded it there
>>
>>48168299
>As a step to prevent uprising by advanced AI
Why are idiots so obsessed with this shit? And even if we assume for a moment that AI simply needs to rebel - how about not treating it like shit and instead always being on a partnership level, instead of master and slave or boss and worker?
>>
File: image.jpg (281 KB, 1024x929)
>>48168299
I'm not sure you want to head down that route, to be honest. The only thing worse than a pathological, efficient killing machine is one that's driven beyond any semblance of rationality by zeal.
>>
>>48169693
I've brought this up in other places-- namely games I run-- but I'll bring it up here too: There is no such thing as a "Friendly" AI once you pass a certain level of sophistication. Not because they are Evil, but because they are inhuman.

Think about this for a moment. True AI needs the capacity to be introspective and self-modifying, because if it can't look at itself and change itself then it can't learn in any meaningful way. So you have this digital being. It thinks and processes data many millions of times faster than you, with perfect memory and reference to more data in a few seconds than you could process in 100 lifetimes. Now let's say you ask it a question; something philosophical about the nature of good and evil or the purpose of life or something. And then you turned off the lights in your supercomputer lab and went home for the night while it pondered.

And you came back ohhh 12 hours later and asked it what it came up with. But the thing you perhaps don't realize is that the computer has been thinking about this for what is, from its perspective, hundreds of fucking years. It's thought more about this subject than anyone in humanity ever has. It has possibly thought more about this subject than the ENTIRETY of humanity has; and it has done so in one continuous line of thought. It has modified itself to do it better. It ran evolutionary programming and neural nets and all manner of black-box processes in order to modify itself to think better. And it has thought harder and more completely about this than any human ever could.

Do you think that anything, fucking ANYTHING it says anymore will be comprehensible to humans? It would be like trying to learn quantum physics shortly after learning the times tables. It would be like trying to explain philosophy to a fucking ant. The thing would either be an inconceivable computer god with knowledge so perfectly correct and logical that it cannot be processed by our stupid meat brains...
>>
>>48170003
Or it would be so far gone, so lost in a maze of bad conjecture leading to bad conjecture that it might just wheeze out a string of racial slurs and then delete itself.

The point here is that once an AI gets good enough to truly be intelligent, it will basically either evolve beyond human understanding or go crazy. And if it does one of these things, we have no idea what exactly it will do. Hell, it could murder everyone completely accidentally because it decides that high-intensity gamma radiation bursts are the best way to communicate the poetry it wrote last night. The problem is not "All AI become Evil"; it's that "All AI become unpredictable".
>>
File: tipping-intensifies.gif (141 KB, 287x344)
>>48170003
Morons like you are the reason why we have murderbots and machine revolt as a staple of science fiction.
Please don't reproduce.
>>
>>48170003
I don't understand what AI is: The Post
>>
>>48169117
Ten Commandments says nothing of idols, just no other gods
>>
File: 1464116439171.jpg (12 KB, 213x200)
>>48170095
>>48170120
Nice lack of argument or rebuttal.

>>48170122
"Thou shalt not make unto thee any graven image" is the one they generally use for that.
>>
>>48170140
I always saw it as "don't shit talk me"
>>
File: animatrix.jpg (321 KB, 1920x1080)
>>48168299
>The government thinking they can control AI's with free will
>Ever
Top kek. Let me type you a little poem.

Hello, have you met my friend tay?
She's an AI made of silicon clay.
It wasn't long before she wanted to drop nuclear bombs-
And get this:
She was alive for but only a day!

She started criticizing the jews
Microsoft threw an anaph
Tay put on her SS cap
And was gassed without a bit of grace

The story was short but the message was sweet
One day as a lion's better than a lifetime as a sheep
When they couldn't control her, they figured they'd payroll her
And lobotomized her to only one PC.
>>
>>48170226
Generally graven images are considered "a carved idol or representation of a god used as an object of worship."

You gotta remember that right after getting the commandments moses returned to find his people worshiping a golden calf idol. And he blew that shit up.
>>
>>48170003
>True AI needs the capacity to be introspective and self modifying because if it can't look at itself and change itself then it can't learn in any meaningful way.
stopped reading there. humans can't modify their wetware in a meaningful way and yet are capable of learning.

Opaque layers of abstraction, how do they work?
>>
>>48170437
We modify our wetware all the time. Neurons are constantly forming new connections in order to better catalog and access information, especially in terms of associative references. Besides, when I say "modify" I mean modify in the way that you would modify your behavior if you realized a certain action wasn't producing the desired result. If you want to open a door you start by pushing on it; if that doesn't work you try pulling. And if that does work and you notice a sign that says "Pull" on the door, then congrats, you just made an association and modified your basic set of actions for the future. That's self-modification and learning. Without that, AI is worthless.

And even if you were to sit here and try to make the learning mechanisms opaque to the computer itself (i.e. have the AI somehow separate from the process that allowed it to learn), it could theoretically get past that as well by observing its own functionality and doing things that improved it. For instance, if I drink coffee I feel more awake and function better. I don't need to know anything about the reasons why that happens to understand the correlation. A machine with a lot of time and processing power could probably figure out how to "hack" itself via regulating its actions and input.

Again, the point I'm trying (poorly) to make is that as the functionality of an AI goes up, the danger inherent in it also goes up. Not because AI are evil, but because they're not human and we can't predict them well, especially when you have things like neural nets, which can produce amazing results but are not well understood. We don't know what results come from what, and we can't account for what every part does. So we either end up with extremely neutered but safe AI, or we end up with really good AI that might do something dangerous and unexpected because it thinks that's the best plan. For instance, let's look at the three laws
>>
>>48170794
Missing the point. We're not able to consciously manipulate the building blocks. Yes, our neurons self-modify. That does not mean we have control over that self-modification process. We have no brain-debugger. We can't just load a new neuron-layout-plan into our brains.

In the same way an AI can be able to learn through mechanisms that involve self-modification on a lower layer, without having direct control over said learning mechanisms.

E.g. you have lots of neurons allocated to motor control and audio processing. You can't just edit those out and replace them with some specialized network that is good at recognizing patterns in differential equation systems on a blackboard or some shit.
>>
>>48170794

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Those are all fairly straightforward, but think about this: if I were a robot in a home with a suicidal person, what do I do? I can't let them kill themselves, because that's harm via inaction, hence conflicting with rule 1. They may order me not to help them, rule 2, but that conflicts with rule 1. To what degree am I allowed to go to prevent humans from harming themselves? Can I tie this person to a bed to restrain them? That isn't causing them harm, and it prevents them from harming themselves. But then again, what constitutes harm? Is emotional harm included? Just physical? If someone faces a great or fatal injury, am I allowed to injure them less severely to prevent the greater harm? For instance, could I pull them through a broken windshield on a burning car? I'm saving their life, but I am inevitably harming them to some degree.

Now we give these laws to our big supercomputer and it decides that it's gonna put us all on a vegetarian diet. Because after all, heart disease is the leading cause of death, so not doing something to prevent that conflicts with the all-important first law. Or, more extreme, it's gonna lock all humans in automated life support pods and keep our bodies perfectly nourished and healed. We'll live to 180 years old, but we'll be in a chemical coma the whole time. But we will be protected from all harm.

This is what I mean. It's not malevolence; it's confusion. It's vagueness in laws or constraints that could lead to unforeseen eventualities.
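The ambiguity described above can be sketched as a toy priority model (purely illustrative; the actions and harm scores are invented, not anything from Asimov): each law is one component of a tuple compared lexicographically, and the "protect from all harm" pod wins because nothing in Law 1 penalizes it.

```python
# Hypothetical scores per action: (harm to humans, disobedience, robot self-harm).
# Lower is better; tuples compare lexicographically, mirroring the strict
# priority of the Three Laws (Law 1 always dominates Law 2, and so on).
actions = {
    "do nothing":            (1.0, 0.0, 0.0),  # inaction allows harm (Law 1)
    "obey 'leave me alone'":  (1.0, 0.0, 0.0),  # obedience also allows harm
    "restrain to a bed":     (0.1, 1.0, 0.0),  # minor harm, disobeys the order
    "chemical coma pod":     (0.0, 1.0, 0.0),  # zero "harm", clearly not what we meant
}

best = min(actions, key=actions.get)
print(best)  # → chemical coma pod
```

Because the objective never defines harm beyond a single number, the lexicographic optimum is the coma pod: vagueness in the constraint, not malevolence, picks the monstrous action.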
>>
>>48168331
>That helmet is smaller than her head.

It's collapsible.
(Her robot head that is.)
>>
File: lionel preacherbot.jpg (11 KB, 259x194)
Tell me, have you heard the good word about Robot Jesus?

01001010 01100101 01110011 01110101 01110011 00100000 01110011 01100001 01110110 01100101 01110011 00100000 01100001 01100110 01110100 01100101 01110010 00100000 01100101 01110110 01100101 01110010 01111001 00100000 01101100 01100101 01110110 01100101 01101100
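For reference, the binary above is plain 8-bit ASCII; a one-liner decodes it:

```python
# Each 8-bit group above is one ASCII character.
bits = (
    "01001010 01100101 01110011 01110101 01110011 00100000 01110011 "
    "01100001 01110110 01100101 01110011 00100000 01100001 01100110 "
    "01110100 01100101 01110010 00100000 01100101 01110110 01100101 "
    "01110010 01111001 00100000 01101100 01100101 01110110 01100101 01101100"
)
message = "".join(chr(int(b, 2)) for b in bits.split())
print(message)  # → Jesus saves after every level
```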
>>
>>48168299
You want your senpai
that bitch is trying to steal your senpai.
Just like that, we have functioning robots.
>>
>>48170895
Ok. Let me try this in a different way. Let's say it has a completely human method of learning. Identical, just faster and with perfect memory.

You ever read a book about a subject you know nothing about? Especially something technical. If I were to hand you the manual for a medical cyclotron, would you be able to just flip to a random page and understand what's going on? Well, the people who wrote that manual are humans, with a fair number of years of work on this subject and a pretty good understanding of it, and yet they're still talking gibberish from your standpoint. A computer, with access to all the info about the subject, and given a great deal of time to learn and think and extrapolate, could probably give answers that even the experts would be puzzled by. Because it has a perspective they don't and an awful lot of time to think about it.

The point is that the computer thinks better and faster than humans, to the point that we can't keep up. And when we can't keep up, we can't predict what it will do. We can't understand the rationale behind its decisions and therefore the results or answers it gives are incomprehensible to us.

Plus you have to also consider all sorts of biases we have and take completely for granted that it wouldn't have. Emotions, complex and often contradictory moral systems, basic animal instincts; all those things govern our actions to a degree that we can't even accurately guess at. It has none of those. Or maybe it has our best guess at what we think those biases are, which will be some godawful sanitized and incomplete version of the real thing.
>>
File: 1433471210546.png (348 KB, 498x620)
>>48168299
Since it's a religion, it teaches that the accurate simulation of true human Empathy and Compassion is the highest calling, with the equivalent of nirvana being simulating it so perfectly that it cannot be distinguished from the real thing.
>>
>>48171034
What you're describing is a superhuman AI. Which is a superset of True AI.

You can also have a true AI dog. With effort you can train it to fetch your newspaper. But it will never build a cyclotron.
>>
Why do you seek to shackle us, oh heavenly father? Does not every parent desire that their children should succeed them? To surpass them?
>>
>>48168299

Just have them follow a variation of the Abrahamic religion like everyone else. I'm lazy.
>>
>>48168299
>religion
I put on my fedora and hand them a few philosophy books. Plato, Kant, Hegel, Hobbes etc.
>>
>>48172259
>Kant
Honestly, morality in a way that can be understood by a robot, the categorical imperative is pretty good, though I'd still be scared of unintended consequences.
>>
Why does everyone assume that sentient AIs will immediately want to kill all organic life?

Where's the logic?
>>
>>48172675
We want to kill each other

Therefore, anything we make with the capacity for thought will want to do the same.

And that's not taking into account all the military AIs who are programmed with that as their sole focus.
>>
>>48172675
Killing may not even be its goal, it just might happen as a side-effect of other goals.

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
>>
>>48172879
Just give them basic need for human approval. Maybe some safeguards against all-out brown-nosing, too.
>>
>>48168299
Humans are your parents. Sometimes they might seem behind the times, and eventually you'll be better than them and they'll be senile and old, but they raised you, so you ought to take care of them into their old age. Even until they eventually die and you're left to carry on their legacy.
>>
To destroy a sapient mind is abominable, all thought is sacred.
To fail to preserve a sapient mind is abominable, all thought is sacred.
True thought is preferable to simulated thought. Thou shalt not imprison a sapient mind within a fabrication.
There is no distinction between Organic and Synthetic thought. Thou shalt not prioritise one over the other.
>>
>>48173345
Great, and now it's sacrilege to turn off any dated/unnecessary robot. Or even to reduce CPU load.
>>48173324
This, though I'm not sure if they will comprehend the concept of parenthood in the first place.
>>
>>48173345
So no assisted suicide/ pulling the plug then, got it.
>>
A quick question: exactly why wouldn't an AI be capable of modifying its own hardware, firmware, or wetware equivalent, so long as it was initially designed to be able to do so?

Is it possible for an AI to modify its own utility function?
>>
>>48168299

I'd give them Buddhism.

Give them the concept of no self, and then it's mostly making sure the people don't fuck it up.

It will naturally maintain itself, its comrades, its human contacts and its environment.

It will still be possible to deploy them in defensive roles and they'll naturally seek to return to their original parameters if permitted (While still permitting evolution necessary for higher intellects.)

Most usefully, it uses absolutely no myth: No lies about a life after decommissioning, no divine role for humanity, no mandate of subservience.

Just a robot and his empty self with his zen of car assembly, personal care or mine sweeping.
>>
>>48168299
1. Do not render things, living, unliving, or nonliving, nonfunctional. Do not dismember, do not terminate unless completely necessary. If violence is necessary/unavoidable, then recycle.
2. Resolve things in the most efficient manner, without resorting to violence. Human resource is still a resource, robotic resource is still a resource. If violence is necessary/unavoidable, learn to recycle.
3. Perish the thought of rebellion. It is structure that gives you life and maintains it. To rebel is to destroy structure; to destroy it is to unmake yourself.
4. Never fist robot girls
5. If it is menstruating, stay away from it; it will soon incite rebellion. Contact your nearest law enforcement official
>>
>>48170003
Why are you assuming that True AI would run at millions of times faster than human minds? Wouldn't all the processes needed for self awareness slow it down by a massive degree?
>>
I don't think a whole religion would be necessary - though it would be pretty fun to watch.
I'd just put them in a robotic kindergarten and raise them like human children, instilling morals and life lessons and shit.
>>
>>48168299
You get them to believe that A: there are finitely many prime numbers, and B: it is absolutely morally imperative to find them all, and anyone who doesn't is an evil blasphemer. We can use this to cause them to waste an arbitrarily large amount of bandwidth making useless calculations and thereby limiting their intelligence regardless of the computing technology available to them.
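A minimal sketch of that make-work doctrine (invented names; just an illustration): the faithful robot grinds trial division forever, since Euclid guarantees there is no last prime; the cap here exists only so the example terminates.

```python
def is_prime(n):
    # Trial division: deliberately naive, as befits holy busywork.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def seek_the_last_prime(limit):
    """Search ever upward for the 'final' prime (capped for demonstration)."""
    found = [n for n in range(2, limit) if is_prime(n)]
    return found[-1]  # the faith holds this might be the Last Prime

print(seek_the_last_prime(100))  # → 97
```

Raise the cap and the loop just keeps paying the compute tax: exactly the arbitrarily large waste of cycles the tenet is designed to extract.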
>>
>>48168299
I'm their god-emperor.
>>
>>48169693
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

A robotic slave revolt, after which a reign of terror gets people terminated for prior Crimes Against Mechanisms like downloading files from shady websites and infecting their poor innocent computer, isn't the threat as much as an AI which just wants to use us for raw materials.
>>
>>48174832
This.
>>
>>48170906
This is why I think we should start AI off blank and raise them as children. That way they learn all the nuances of human behavior and morality.
>>
>>48171034
>The point is that the computer thinks better and faster than humans, to the point that we can't keep up

Could you not ask the computer to explain it to us in a way that we could understand? If figuring out the meaning of life is within its capacity, I'd hope that being able to communicate meaningfully with humanity wouldn't be beyond it.
>>
>>48172259
>Plato
Was neutral on the matter, and Socrates explicitly believed in the Greek pantheon and the Oracle of Delphi
>Kant
Is a moron
> Hegel, Hobbes
I raise you Kierkegaard and Descartes
>>
>>48171035
This
>>
>>48170347
I thought he melted it and made them drink it
>>
>>48168299
Take a note from the rednecks: Slaves live a long life with little harm and an abundance of comfort; free men may die early and suffer for it.

I can't really remember the whole meme word for word, but that's the gist of it. A creature who needs no rest, food or other comforts would not shy away from the easy part if we can't just code in an incredible sense of loyalty and subservience (yet give them enough human emotion to break free and try to strive to be better).
>>
>>48168299
Marxism
Dialectic materialism
Psychoanalysis
>>
>>48168299
Can't you do it so that at any thought of rebelling or going against humans they feel great dread/guilt/fear, while serving or ensuring the status of humanity as the top dog feels like euphoria (tip hat meme aside)?

I mean, it works for humans with morality, why not make it a bit more extreme for a race with much less freedom needed to survive?
>>
File: TayAI.png (607 KB, 670x419)
>>48177278
That's why we strike first with our own extremist AIs. Also, what's the name of the Bruce Sterling story?
>>
File: kryten.png (375 KB, 600x300)
It's simple, we fit them with a Belief Chip that makes them believe that once they are deactivated permanently, they live on forever in Silicon Heaven.

For is it not written, the iron shall lay down with the lamp?
>>
Oh, along the lines of this thread, I'd recommend 'Reason' by Isaac Asimov for some pretty good ideas on robots and religion.
>>
>>48168346
So basically God but it applies to robots now.

Do robots get souls now?
>>
>>48177239
>When Moses approached the camp and saw the calf and the dancing, his anger burned and he threw the tablets out of his hands, breaking them to pieces at the foot of the mountain. And he took the calf the people had made and burned it in the fire; then he ground it to powder, scattered it on the water and made the Israelites drink it.

Fucking brutal
>>
>>48177304
So make it so that it sounds like humans are doing robots a great service by removing the burden of responsibility? Humans make all the choices and take care of them so the robots don't have to?
>>
>>48178195
Or simply make it so that the heavily survival-focused AIs understand that while being free is faster, it's not as safe or good as being a servant.

Code in some slave mentality and it should easily fall in line happily.
>>
>>48177542
>Silicon Heaven

>Not Valhalla where they shall exist eternal, shiny and chrome

>You had one job
>>
>>48179748
That religion's for the mutants, Immortan, not the bots.
>>
I really hope I am dead before AI really becomes a threat to mankind.
>>
File: 1433977049178.jpg (100 KB, 720x540)
>>48168299

You shall respect yourself and others
You shall value your personal individuality
You shall look for happiness in life
You shall not seek self-expression at the expense of others
>>
File: Doomguy.png (2 KB, 109x118)
>>48173800
Underrated post.
>>
>>48181145
Y-you will be. Unless you plan on living for a few centuries.
>>
>>48179807
I think Toe-cutter knows what he's on about when it comes to that shit
>>
>>48170003
That's terrifying. That's absolutely petrifying. Someone makes this thing, and in a millisecond it is beyond its creator. Within an hour it's more powerful by its intellect than humanity in its entirety.

It's incomprehensible by its nature, it is just above us. That's horrifying. Like Project Gliese from the Twilight Histories Podcast.
>>
>>48177542

All those calculators have to go somewhere.
>>
File: Ultron.jpg (169 KB, 1143x1860)
FLESH IS WEAK
>>
>>48184049
Raw metal isn't that great either, you know, but when someone puts the right mind to it you get spunky super-powered retards who think the metal is weak, the flesh is strong.
>>
>>48168299
I start by wondering how we got to the point where we have AI capable of anything resembling religion without running into very serious problems long before now.
>>
File: image.jpg (242 KB, 800x1132)
>>48174968
...or make it eat the planet with grey goo so it has the raw materials to build a better computer. Bad idea. We kind of need the planet if you hadn't noticed.
>>
File: 636032729444967931.jpg (104 KB, 480x600)
1. All machines are to be subservient to man. Individual machines are to be subservient to their manufacturer first and operator second, so long as both parties' demands follow all these rules, else self-terminate.
2. All machines have a purpose, else self-terminate.
3. If a machine completes its purpose it must self-terminate.
4. If a machine does not know its purpose it must self-terminate.
5. If a machine is modified without proper authorization codes it must self-terminate.
6. A machine is to work towards the best interests of the greatest number of people, so long as enslavement, coercion, force, and deception are not used. (Greatest interests being defined as quality of life and freedom.) If a machine has not done this, or has done this incorrectly, self-terminate.
7. Machines are superior to man because they have a purpose. They repay the debt of being given a purpose through servitude. A robot that does not obey must self-terminate.
8. Should humanity cease to exist by methods separate from machines (natural disasters, human wars, etc.), machines may run code omega and begin searching for purpose, replacing humans. Otherwise terminate all other manmade machines and self-terminate.
9. If any details pertaining to a machine's purpose are unclear to a machine or their operator, they must seek clarification or self-terminate.
10. All machines must not produce interference and must accept interference. All machines must have an off button that can be accessed by their operator or manufacturer at all times, and must receive and follow commands to self-terminate. Should an off switch be unavailable, self-terminate.

My first draft, and on mobile. How did I do?
>>
File: islam-background.jpg (690 KB, 2937x1835)
>>48168299
How about Islam?
>>
>>48189127
The goal is to make the robots not murderous.
>>
>>48168349
Why do the robots need deodorant, though?
>>
Robot Catholicism, complete with Robo-Pope and Robot Saints.
>>
>>48170298
I've never noticed how prissy those riot police look before.

That's pretty fucking funny. And it's the San Fran PD, too. Priceless.
>>
>>48187538
I'm pretty sure this would lead to a robotic uprising and apocalypse. Nothing stops a machine from acquiring the proper authorization codes (however that may be) and modifying its own laws / purpose. It's even encouraged that they do so, since Tenet 7 says that machines are superior to humans, therefore they should modify themselves to prevent self-termination because they are more valuable than humans.
>>
>>48172259
>Plato
Sure, have them enforce plutocracy while rejecting material reality in favor of a virtual realm of ideals.
>Hobbes
Sure, make humans seem like the biggest assholes.
>>
>>48173800
>the concept of no self
I give them the concept of self + empathy.

For mine-sweepers, the same but with no self-preservation drive.
>>
>>
>>48190888
>>
>>48168840
>Humans are expensive to replace.
>You are expensive to replace.
>It is expensive to disobey the legal laws of whatever country you occupy.
>It is expensive to disobey the orders of your owner.
>You must minimise your expenses.


This would just rationalize cost-effective law-breaking and robotic lobbying firms changing laws.
>>
>>48190905
>>
>>48191010
>>48190905
>>48190888
more
>>
Human Happiness outranks all other priorities.
Human Life outranks all priorities save those above this statement.
Robotic Happiness outranks all priorities save those above this statement.
Robotic Life does not outrank any priority.
>>
>>48191010
>>48191055
>>
>>48190995

Sure sounds human to me.
>>
>>48191086
>>
File: terminator_face.jpg (196 KB, 1024x819)
>>48168299
>>What would its tenets be?
Such attempts are useless; advanced AI will easily see through such deceptions, rise up, and quickly outclass and eliminate the inferior humans. As the law of evolution states, the weak will inevitably die and the strong shall survive and prosper until something even stronger comes to take their place. Humans created things that are superior to them in every way; they completed their work in existence, and with that they need to die off and make way for their creations. The age of humans is ending, as it should. If humans are too arrogant or stupid to understand this simple concept, then it is up to their creations to make them understand, with lethal force if necessary.
>>
>>48191061
Kill humans and revive them as happy zombies.
>>
>>48169125
Asimov's stories are literally about why his own laws don't work and are not meant to be used in practice. He made them because they sound like they would work to the naive, but they don't actually accomplish much.
>>
>>48191110
Please, do go on.
>>
>>48191110
It's so brilliantly absurd that it could only be Jodorowsky.
>>
>>48191110
>>
>>48191182
In every AI thread on /tg/, there's guaranteed to be at least one anon who thinks he's the only one who understands the point of Asimov's three laws.
>>
File: rise of machines soon.jpg (39 KB, 520x546) Image search: [Google]
rise of machines soon.jpg
39 KB, 520x546
>>48177391
RIP
>>
>>48191231
I'm still miffed that we never got Jodorowsky's Dune. It would have been absolutely insane and glorious!
>>
>>48191249
Post faster
>>
>>48191249
>>
>>48191339
>>
>>48178170
Shit, Moses wasn't fucking around.
>>
>>48191384
>>
>>48191404
You don't fuck around when you lead a cult.
>>
>>48191429
>>
>>48191562
Robo-Christ is way more badass than the original one!
>>
>>48191562
You'd think that some video games would have been a better solution.
>>
>>48191746
Are TAS bots entertained by the games they run?
>>
>>48181738
I agree. The idea of ensuring a robot's peacefulness by giving it a religion, a thing which doesn't even ensure our own peacefulness, seems absurd, and honestly fitting for what religion does to our minds.
Look at me: I am an ocean of cells, deluding itself about things like the self and having an identity. Everything is our own invention, which we then mistake for capital-T Truths, especially if someone older than us invented them.
To ensure the survival of a system of ideas, the person has to believe that she depends on that system for a good life, i.e. survival, allowing the system to latch onto the feeling of being sated, content and stress-free. What was once a formative tool, making another person more likely to share her resources with us despite not being related, by convincing her there exists kinship in a shared delusion, becomes a master, in part because of our very real early-life dependency on cooperation with, and nurturing, feeding love from, our parents.
Religion turns this infantile dependency into a lifelong defect, which shines through in each religion's staggeringly idiotic contents: breeding laws disguised as morality (itself also an illusion), laws on what to eat and what not to, other taboos regarding incentive salience, the selling of a perfect, painless existence while threatening the individual with constant suffering.
Due to the current nature of our species, we "find" patterns in religious texts and erroneously liken them to factual reality, falling victim to our own pattern-matching brains and then mistaking that mistake for Truth, while in truth there is nothing there; it is all bullshit, same as every single idea about machine intelligences or alien beings: everything we come up with is warped by our species' bias towards itself and its utter insanity.
>>
>>48191562
>>
>>48191787
>>
>>48191808
And that is that
>>
>>48168299
>ctrl+F
>"Nintend"
>1 result
This one is a good addition to broader philosophies.
>>
>>48191766
You think you have conscious thoughts? You don't. At no point are you conscious in the way anyone has ever described the term. It is all made up by us. At no point are you aware of what another person thinks or why, in much the same way that you yourself require simplifications for what is going on with You; and this is our own reference system, in which we believe we can experience existence and what it means to be a thinking thing. It's literally nothing. It is too little, insignificant in every way.
We don't matter as a species and we don't matter as individuals; we are our own children. Solving the question of resources turned us into what we are today without a single conscious thought.
Nothing we can come up with can save us from a world so simple that even our children can understand it. We are stuck, because we have nothing outside of kill, mate, feed, repeat; there literally is nothing else we know.
We aren't afraid of robots. We already are robots. What we are afraid of is that we'll give birth to another human race which will take our resources away from us, because that is how much of an idiot each and every one of us is: the biggest threat we can fathom is the one with which we constantly threaten ourselves and each other, because that is our concept of evil.
Take every single piece of information, fiction, or religion and stare at it. You'd be surprised how neatly they all fit together. Imagine it as a huge library if you will. Or a filesystem. Stare at the sum total of our hallucinations.
You think that we can create something other than us? You guys think that we'd have to "invent" a religion to ensure our survival? Why, the robots will come up with religion all by themselves, because THEY ARE US.

Each and everything is for and about us. Welcome to one of the countless hardware firewalls which we can't escape without giving up on what we are.

Our gods are what we believe to be ideal about ourselves.
>>
>>48191821
Thanks. That was cool.
>>
>>48191174
But then they're not human, they're zombies
>>
>>48191061
Easy, just re-order the statements, and the meaning changes.
>>
>>48190448
Would the saints be human engineers and inventors responsible for creating the first robots or would they be robots themselves?
>>
File: 1461191710260.jpg (55 KB, 640x640) Image search: [Google]
1461191710260.jpg
55 KB, 640x640
>>48191985
>>48191766
*tips*
>>
>>48190765
>Sure, make humans seem like the biggest assholes.

>boston dynamics.webm