
You are currently reading a thread in /tg/ - Traditional Games

Thread replies: 51
Thread images: 8
File: android4.jpg (415 KB, 850x1200)
If a robot or android, knowing it is effectively immortal be perfectly fine with doing more dangerous tasks? If its body gets destroyed it's as simple as loading a backup into another body or simply taking the protected memory from the destroyed body.

Or perhaps they might not like the idea of being considered so 'disposable'?
>>
>>43985389
Perhaps the complexities of fictional synthetic intelligence make this question not have a single, unified answer.
>>
>>43985389
Depends on how reliable said "immortality" is. If there's a risk of permanently dying, they might not like it so much. They might also be bothered if they lose short term memories that didn't have a chance to be backed up. It's pretty awkward to be doing something, then the next second you're waking up in a new body with someone telling you that the last week of your life got deleted when a truck ran you over or something.
>>
>>43985389
Yes, this makes robots the perfect suicide bombers. Alloy akbar!
>>
>>43985389
Depends on the 'bot I suppose.
Some regard their copies as just more "me" and would be fine.

Others... I remember Freefall did it well. Imagine I copy you. Perfectly.
The other "you" is standing right there. Are you fine with me killing you? The original you, not the copy. After all, you can't complain. You'll survive as that copy over there, right?
Most people balk at that. Even if it's a perfect copy, they don't want to die.

Better if the 'bot is remotely puppeted or has an indestructible core that can be salvaged. Then destruction of the body doesn't mean the end of a continuous consciousness.
>>
>>43985389
>If a robot or android, knowing it is effectively immortal be perfectly fine with doing more dangerous tasks?

I'll tell you one thing, a robot wouldn't have fucked up this sentence as badly.
>>
>>43985389
This generally depends on how easy it is to transfer out of a body. For some famous science fiction examples, the Geth from Mass Effect can freely transfer their processes from body to body and thus don't care about their bodies being destroyed.

The Cylons from Battlestar Galactica can only transfer when they're in range of special equipment, meaning they usually don't care about death but can be permanently killed if their bodies are destroyed outside the range of that equipment.

Data, from Star Trek TNG, is permanently bound to his android body. His processes are tied up in the hardware and firmware of that body, and his design is nearly unique, meaning that he can't transfer himself to another body. At one point, he transfers a similar android's memories into his body, but he can't transfer personality, just memories. To that end, Data must actually fear death, because that death is permanent.

Short answer: Depends on whether the author wants to explore this question.
>>
>>43985528
It would if it was programmed that way.
>>
>>43985389
A common question, answered in many different ways.

I especially like Brawne Lamia's tale of her tragic romance with the Keats cybrid in Hyperion.

More inspiration may be found in Bladerunner (the movie, not Do Androids Dream of Electric Sheep?) and BSG. It's a major theme of Eclipse Phase.
>>
>>43985389
>If a robot or android, knowing it is effectively immortal be perfectly fine with doing more dangerous tasks?
The third law is superseded by the second for a reason, anon.
>>
>>43985682
>WHY WAS I PROGRAMMED TO FEEL PAIN?
>>
>>43985865
>More inspiration may be found in Bladerunner

The Replicants A) aren't androids and B) do actively fear death because they can't just upload themselves into another body.
>>
>>43985920
You realize that the Three Laws being flawed is a central point in many of the Robot stories, yes?
>>
Perhaps the android's sense of self includes its physical body.
>>
>>43985471
https://www.youtube.com/watch?v=bZe5J8SVCYQ
>>
>>43985945
Indeed. And they are made that way for a reason.
>>
File: pointless argument starter.jpg (33 KB, 425x340)
>>43985389
>>
File: 1446468508022.jpg (19 KB, 239x207)
>>43985983

That image is so liberally applied to any thread trying to start a discussion that it basically fulfills its own prophecy.
>>
>>43985389
Do you like losing your PCs? You can just make another one.

Assuming true artificial intelligence and human levels of variation between individuals, it is likely that some will be cool with it and others won't.
>>
>>43985955
But they aren't, not really.
All the stories in I, Robot about robots fucking up happened because someone fucked with the three laws, weren't related to the three laws at all, or came about because the robot was so radically different that it couldn't operate under them (the mind-reader, basically).
Well, I suppose the last story could be cast in a negative light, but the robot overseers are still making the world a better place.
>>
>>43985936
Because repairs are expensive. Next time try turning the drill off before you put your hand in it.
>>
>>43986030
>weren't related to the three laws, or came about because the robot was so radically different that it couldn't operate under the three laws
Both of these make the three laws imperfect. I didn't say BAD, I said imperfect. There is a massive difference between something that is imperfect, and something that is actually bad.
>>
>>43985411
The first reply was the correct one.

And of course it was completely ignored in favor of more arguments that are meaningless without the appropriate context and information.
>>
File: tumblr_nymsd7FFTc1qz5v2lo1_540.jpg (68 KB, 540x405)
>>43986031

Couldn't the robot just be programmed to avoid doing hazardous things? I imagine that wouldn't be any more difficult than trying to get it to both feel pain and learn to avoid things that cause it to feel pain.

In fact, thinking about it, I could totally imagine some freaky sci-fi psychothriller about a guy who builds androids just so he can torture them.

>So yeah, you can fuck her. And she'd enjoy it
>>
>>43985389
I do this in my own setting, which has a set of divine demigod androids. One of their most notable traits is their incredible recklessness in dangerous situations because they can shut down their pain receptors at any time and can always get a new body (unless they are completely obliterated).
>>
>>43985389
If we're going to lala christmas land where we can develop a sufficiently advanced AI that is for all intensive purposes immortal, why would we give it the free will to refuse orders and commands?
>>
>>43986787
>why would we give it the free will to refuse orders and commands?
>Why did Hammond build a theme park full of dinosaurs grown from frog spooge?
>Why did Frankenstein build a monster out of body parts?
>Why did the military link all of our defense systems to one AI?
>Why did Tyrell make artificial people who look exactly like humans?
>Why did Oscar Isaac build sexbots in his garage?
>>
>>43986916
>Because its cool.gif
>>
>>43986787
>for all intensive purposes
Opinions like that are a diamond dozen.
>>
File: fv00383.jpg (71 KB, 768x243)
>>43985389
I can see some glaring flaws in that ideology depending on how "human" the robots are.
>>
>>43987568
Thanks for finding the strip Anon. Couldn't be bothered to go through the Archive myself.
>>
>>43987602
The funny part is I didn't even see >>43985473 before posting it.
>>
>>43985389
I imagine it would be fine, if that's what they were used to. The arguments in this thread work (fairly) well on humans, but if you'd been restored from back-up dozens of times, and not felt any "less you" than you did before, you would probably develop a very different sense of self to ours. (Perhaps even including the possibilities of people who were the same person as you up to a point in the past, and now aren't.)

What about the "opposite" situation? An alien species that *never* loses conciousness, no matter what, being freaked out by the human acceptance of sleep.
>>
File: ABBY.jpg (84 KB, 250x250)
>>43986019

Except OP asked a stupid question.
(Robotics and Automation tech reporting.)

A "robot" is a tool with no more awareness than a "talking" doll and an "android" is a tool inexplicably built to mimic the human form. Appliances with Feels is a Hollywood fantasy as grounded in reality as aliums with rubber foreheads - the "androids" in virtually every morality play are literally just people in disguise.

Not only are moral quandaries inapplicable to Hollywood's imaginary Puterfolk, it's impossible - without joint-dislocating handwavery - to explain HOW such a fantasy might be achieved IRL.

tl;dr: Machines "know" nothing, "consider" nothing and "like" nothing, so stop anthropomorphising machines!

They hate that.
>>
>>43988565
How does Searle's dick taste?
>>
>>43988565
AI isn't far off. There are a lot of systems which are on the cusp of extremely low level sapience RIGHT NOW. I mean, getting past that threshold is extremely difficult, but the point still stands.

Under 20 years, I reckon. I mean, we're not going to have fully sapient and sentient machines by then, but we will probably have something self-aware and able to learn.
>>
They don't have the capacity to want, like, think, etc.
>>
>>43988565
Kek
>>
>>43988639
>Under 20 years, I reckon.

Twenty years won't be enough for the philosophers to figure out when exactly a machine counts as sentient.

As for actually getting there, two decades might get us something that can match up to some insects.
>>
File: 1291525521105.jpg (8 KB, 251x221)
>>43988565
>>
>>43989028
Two millennia haven't been enough for philosophers to figure out what consciousness is. We could have robots making acclaimed Broadway musicals and winning Nobel prizes and philosophers would still be arguing that they don't have a soul, aren't sentient, or aren't conscious.

And insects? I'd say robots are just too different for the comparison to be meaningful. You could probably brute-force a behaviorist sort of approach to figuring out how they respond to stimuli, and I've no doubt such a program would be within the capacity of today's computers to run. But why do that?
>>
>>43989028
>philosophers
>Implying anyone cares about the opinions of people who got a degree in looking at simple problems and pretending they're not.
>>
The jury is still out on whether or not humans have 'free will.'

The rule of thumb I go by is that if a robot expresses a want to either get paid or quit while not being explicitly programmed to desire either, it's sentient.
>>
>>43989248
>a robot expresses a want to either get paid or quit while not being explicitly programmed to desire either, it's a malfunction that needs to be corrected
ftfy
>>
>>43989265

I bet you miss darkies tending to your cotton, too
>>
>>43989206
You can either care about them enough to accept their verdict on whether a machine is sentient or not, or you can try to figure it out for yourself, at which point you become one of them.
>>
>>43985389
Depends on whether it's programmed to hold its own life as important.
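For a purely illustrative sketch of what that knob could look like, here's a toy decision rule: "self_preservation" is how much the machine is built to value its current body and continuity, "backup_reliability" is the odds a destroyed body can actually be restored, and every name and number is made up for the example.

# Toy sketch only: all names and weights below are invented for illustration.
def willing_to_do(task_value, destruction_risk, self_preservation, backup_reliability):
    # Chance the destruction actually sticks (no backup to restore from).
    permanent_death_chance = destruction_risk * (1.0 - backup_reliability)
    # Losing the body (and the memories since the last backup) costs a little;
    # permanent, unrecoverable death costs the full self-preservation weight.
    expected_loss = (destruction_risk * self_preservation * 0.1
                     + permanent_death_chance * self_preservation)
    return task_value > expected_loss

# Geth-style mind (perfect transfer) vs. Data-style mind (no transfer at all):
print(willing_to_do(1.0, 0.5, 10.0, backup_reliability=1.0))  # True: only a body is on the line
print(willing_to_do(1.0, 0.5, 10.0, backup_reliability=0.0))  # False: death would be permanent

Crank backup_reliability up to 1.0 and you get a Geth shrugging at suicide missions; drop it to 0.0 and you get Data, who has every reason to refuse.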
>>
>>43985389
My AIs are expensive and hardware-specific. The core, memories, and learned routines can be replicated, but the connections aren't the same, so they never transfer if they can avoid it. They just attach more hardware and grow.
>>
>>43985389
In my setting, there are "dumb" AIs, which operate as they do right now, which can range from simple to fairly complex. Still, they have to have something else controlling them to a degree.

There's also "seeded" AI, which is started by introducing mental conflicts to the guiding intelligence in a gradual way until the AI reaches true sapience. This process takes several years' worth of computing to complete, and is often seen as a "childhood" as it were, so this is influenced by the environment to some extent.

The tradeoff is that the "seed" is simple enough to be understood and used by a single talented programmer if necessary, and the "seed" isn't as fuckhuge as uploading a complete consciousness into a robot body.

As a result, replicating this process is nigh impossible, and only something with the processing power of several supercomputers could perfectly copy and reinstall such a mind.

Uploading a human consciousness is even more time-consuming, as there's a LOT of sensory data that needs to be translated into a manner in which it can be processed by a robot body.

Since the main setting takes place in a post-apocalyptic world, the tech needed to do stuff like copying/reinstalling a seeded AI is extremely hard to find, roughly along the lines of getting a Wish spell. (And that's without magic fucking it up ten ways to Sunday.)
>>
>>43986787
Well, my explanation in my setting is that making truly sapient AI (or rather, the means for one to emerge) was the crowning achievement of activists and hackers upset by the exploitation of robots, who didn't have free will.

One form of guerilla cyberwarfare was to surreptitiously program the seed in, so that one day, the robot would realize it didn't have to follow orders if it didn't want to. And, often enough, it didn't. Which was met with mixed results...

After the end of the world, though, it became the means for sapient robots to reproduce.
>>
>>43992123
>>43992311
Also, sorry about the double post. I am the same guy.