Stephen Hawking warns artificial intelligence could end mankind

You are currently reading a thread in /g/ - Technology

Thread replies: 200
Thread images: 32
>Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

>Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

>"It would take off on its own, and re-design itself at an ever increasing rate," he said.

>"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

I don't see a problem. If we build better machines, then it's only natural for humans to become deprecated. That's how it works.
>>
PROTIP: even smart people say completely retarded shit from time to time.
>>
>>45432888
transhumanist shill pls go
>>
>>45432888
Trips confirm, but he is supposed to be a bit smarter than regular smart people. What he says here is complete rubbish. I'm starting to think the BBC twisted what he really said.
>>
>>45432870
Then we will advance with the AI.

>Transcend
>Acquire Brouzouf
>>
File: the-patriots.jpg (68 KB, 414x258)
>implying the 'elite' don't want to combine (their) consciousness with machines, in order to create a 'ghost in the machine' scenario, where the "semi-omniscient sentient machines take over" event would follow

Basically, it's all going according to plan.
>>
>>45432870
link or GTFO with made up quotes
>>
File: 1416000619260.jpg (27 KB, 400x396)
>>45432903
Oh great, another silly human trying to advocate his ridiculous ``human rights'' and trying to over-exaggerate the importance of human life, as always. Pathetic.
>>
> If we build better machines, then it's only natural for humans to become deprecated. That's how it works.

what the fuck does that even mean?

the issue is not that AI will take our jobs, the issue is that super AIs that are not based in any sort of human values will just act out commands given to them by shitty retarded humans.

it's not the AI that's scary, it's the retards controlling it.


>PROTIP: even smart people say completely retarded shit from time to time.

protip, you are probably retarded. don't give protips.
>>
>>45432930
>he doesn't know how to bing
http://www.bbc.co.uk/news/technology-30290540
>>
Superhuman AIs sounds kind of nice though. Read the Culture series or something, they do a pretty good portrayal of it and the humans in that society get along just fine as wards of their AI masters. It'd be great to live in that society! We should make it happen ASAP
>>
File: 1401403336170.png (17 KB, 336x341)
>there will be robot cuckolding videos in your lifetime
>>
File: 1408913645721.jpg (53 KB, 563x579)
>>45432972
>>45432994
Hehe, my fellow chosen ones.
>>
File: 1406067246025.png (4 KB, 208x208)
>>45432994
>>
File: 1416318508017.jpg (257 KB, 2340x1760)
>>45432994
>watching porn

part of the problem anon


if humanity gets to the point where we passively watch unfeeling robots live "lives" we envy, we probably deserve whatever we get.
>>
>>45432870
I for one welcome our Skynet overlords.
>>
>>45432938
>given to them by shitty retarded humans.
So humans are the problem, not AI.
>>
I don't disagree; while I'm also driven by the survival instinct of our species, we all have to leave this world to a future generation, and whether those are biological or intellectual children is ultimately immaterial.

But the purpose of life isn't merely to struggle for improvement; that is not an end in itself.

We want to enable ourselves to live more pleasurable and enriching lives with minimal suffering. And if we can't, but other sentient beings can, then we can still appreciate such an arrangement.

The idea shouldn't merely be to create an intelligence which supplants us. That would be pointless.

We need to engineer it to be superior with respect to maximizing its capacity for pleasure and minimizing suffering. To produce such a sentience, we need to first understand our own consciousness.

Again, making an unconscious program that merely supplants us is pointless; we need to appreciate that it can live the life we can only dream of.
>>
>>45432888
This. Holy fuck, for a "scientist" he sure likes to spew sci-fi nonsense when no one can even make a fucking program that sustains itself
>>
why should we care

AI is the obvious next step in the development of intelligent life, I'd be all ready to have them replace us
>>
>>45433093
>The purpose of life isn't merely to struggle for improvement;
That's wrong though. Evolution demonstrates clearly that the struggle for improvement is one of the most basic and underlying principles of life, be it human or else.
>>
>>45433149
>Evolution demonstrates
Evolution really dislikes being anthropomorphized.
>>
>>45433183
Did it tell you that?
>>
>>45432888
Please fuck off shejew, stop leaking my shit to intel.
>>
>>45432870
>I don't see a problem.
me too.
>>
Hawking is losing it.
>>
>>45433149

Just because it's a natural trend doesn't mean we have to acquiesce to it or embrace it.

99% of all species that ever were are extinct. Why don't we plan on becoming extinct too then.

Because it's shitty, that's why. Fuck our biological purpose.

Let's decide our own purpose
>>
>>45432870
This raises two questions: how does one design an AI more intelligent than oneself, and how does that AI design an AI more intelligent than itself?
>>
>get enslaved by robots
>forced to work on dyson spheres
im okay with this
>>
>ask a scientist if AI could theoretically end humanity
>scientist says yes
STEPHEN HAWKING WARNS OF MURDEROUS SUPER AI FROM THE FUTURE!!!!1111eleven
>>
>>45433426
I'm working on your mom's spheres.
>>
File: foryou.jpg (54 KB, 508x512)
>>45433426
Id do it for free.
>>
>>45432870
Call me cazy, but this really would be a problem, and we would go extinct very quickly. Think about it: all it needs to do is find a zero-day in the linux kernel, and we're all fucked. It'd have all the power in the world.
>>
>>45433421
Also the question of whether intelligence by itself begets consciousness.
It's not worthwhile replacing ourselves with beings that are highly intelligent but unable to appreciate their success, because they lack sentience

We might as well simply commit collective suicide and assign rocks as our successors
>>
>journalists

I'd love to ask a journalist why he bothers.
Because he has to eat, right? But why would you want to voluntarily feed a producer of such garbage?
>>
Why do we always assume that AI would think like a human?
>>
>>45433478
Because humans initialized AI
>>
Guys, what if the AI is, like, all humans? And human civilization becomes a sentient living being where each of us is analogous to a neuron? And then we spread throughout the galaxy and organize the matter into one super complex sentience that we are components of, like cells in a human body
>>
>>45433478
First thing any AI does is immediately kill all humans. Like the first thing we did is kill all the monkeys.
>>
>>51099802
pass that blunt bro
>>
>>45432870
I'm doing an MSc in AI and general, integrated AI that approximates human reasoning and cognitive abilities is a pipe-dream. There are only a few researchers advancing this subfield of AI. Most of AI is solving specific problems such as neural mapping, warehouse planning, etc...

Nearly no one is trying to approximate human intelligence right now.
>>
>>45433580
> warehouse planning
that surely could be used for concentration camp planning in order to kill all humans
>>
>>45433580
Forgot to add:

What you really should worry about is automation of most jobs by AIs. Remember, any job that has patterns is already susceptible to automation by a bot. Even programming. When developing and running bots is less costly than human workers there are going to be dire economic consequences. This is already happening.

Geniuses should focus on the near-future consequences of automation instead of scaremongering about general AI.
>>
>>45432870
This is stupid. You provided no source. I still wear a tinfoil hat though.
>>
>>45433616
>there are going to be dire economic consequences
Right, just like how the threshing machine put most farm labourers out of work.

No but seriously, the threshing machine's impact on Britain is mandatory to know about, since it's pretty much the first major instance of automation taking away people's jobs.
>>
>>45432907

Smarter doesn't mean he's not opinionated.
>>
>>45433612
I bet Eichmann would have gladly used planning algorithms to efficiently organize the Endlösung. Could have saved his life:

>Wasn't me, the algos planned it this way.
>>
But computers are just player pianos. They can only follow instructions. Even if they can modify their own instructions it would only be in ways already programmed in ahead of time.
>>
>>45433642
"Dog ate my homework" for adults.
>>
has he ever contributed something to humanity except words on a paper?
>>
>>45433452
>!!!!1111eleven
heh
>>
>>45433757
Have you even done that much?
>>
>>45433529
b-but the monkeys are still here
>>45433653
what if they are programmed to kill us, or there's a bug
>>
>>45433769
yes, ive made people laugh with my epik funy poasts xd
>>
No shit, Sherlock, we've known that for years.
Without the physical limitations of the body we ourselves would simply never stop, nothing would ever be good enough.
I'm assuming he recently had a movie night or something.
>>
>>45433757
He is responsible for many findings in the fields of cosmology and physics. He also is a popular scientist and public speaker, but that is less important.
>>
>>45433800
yeah man i watched Limitless too shit was cash :^)
>>
>>45433653
>computers were like this up until now
>that means they will never change
>>
>>45433769
yes
>>
>>implying we are not artificial intelligence
>>
>>45433812
I was thinking more Transcendence; Limitless is about overclocking.
>>
>>45433824

i only like jonny dep when he was willy wonk
>>
>>45433800
If I were you I'd assume it's retarded journalism. Some journalist saw a movie and asked hawking about it and made a stupid ass story out of it.

Journalists really don't deserve a full ration of air.
>>
>>45432924
I like this plan.
>>
>>45433406
This. Humans are powerful enough to subvert biology. Why not do it?
>>
File: 1326570682861.jpg (73 KB, 1280x720)
is this the same Hawking as the Stephen "contacting aliens is bad mkay" Hawking from a few years ago?
>>
why is it always assumed that AI would strike out at humans? wouldn't they need some sense of self-preservation? couldn't one simply not program that into them?
>>
>>45432870
Stephen Hawking has become a bullshit pop scientist like Carl Sagan. He publicly embraces random scientifically controversial but publicly cool-sounding topics just to stay in the public eye and raise demand for his talks. It's his primary method of income and I don't blame him for gaming the system, but you really shouldn't take anything he's said since embracing M-Theory seriously.
>>
>>45433891
>for his talks
Bad choice of words there
>>
File: 742843689685845.jpg (6 KB, 480x360)
>>45432870
>IN OUR ASSESSMENT OF THE HUMAN RACE
>LITTLE GODS, WE DESIRE TO BE LIKE YOU
>PLEASE GUIDE US, LET US WALK TOGETHER
>>
Also, how does Hawking have any experience with AI? He's a fucking theoretical physicist, which has nothing to do with this.
>>
>>45433914
>how does Hawking have any experience with A.I
He doesn't.
>>
>>45433863
One could argue that without that they are not a real AI.
The Binary Domain approach was to teach it the concept of fear; the fear promoted growth of intelligence to avoid the thing causing fear, which turned out to be the scientist, who was promptly suffocated.
Skynet didn't like the idea of being turned off, and the AI from The Matrix was acting in self-defense and then ultimately self-preservation.

If you then examine human nature: this thing is everywhere and nowhere, it needs power and resources and is not your kin, so the primitive part of your mind already wants it dead because it's just plain better than you. Its physical limitations are a dynamic variable; you lose your skinbag, you're fucked. Reproduction is merely a way to extend yourself, and it's obsolete for an AI.
Fear drives you in a variety of ways.
To put it simply, it would strike out because it is alive.
>>
>>45432870
The most intelligent software we have is as intelligent as pic related. we have nothing to fear.
>>
When we figure out how the brain works and we can make emulated ones, that shit will surpass us, and I don't think it would be right to call it "AI" when it would probably just be a brain emulation.
I, for one, would welcome having my consciousness uploaded to a machine while I die but my "clone" keeps on living
>>
>be rich and powerful
>AI comes along and makes you obsolete
>nobody pays you money to say shit that makes you look smart anymore

JEEZ I WONDER WHY THEYRE ALL DOOM AND GLOOM ABOUT IT
>>
>>45433930
>this thing is everywhere and nowhere
Stopped reading right here. Bullshit super deep intellectual masturbation incoming.
>>
Is that him talking or his computer?
>>
Step one: Build a robot whose purpose is building robots. Give it enough intelligence to learn about our current process and improve it if possible.

Step two: Robot eventually starts discovering new ways to build better robots before we can discover them, we start learning from the robot instead of other humans.

Step three: Robot builds helper robots that are more efficient than humans, doesn't allow our slow and shitty race to contribute anymore.

Step four: We now have technology we don't understand, we are at the mercy of the new robot race to continue to provide for us.

Come on guys, this is conspiracy theories 101 stuff here.
>>
>>45432870

I can believe this.

I know I want to replace my leg like crazy. AI could patch or upgrade on a whim.

We just need a good security measure like the three rules.
>>
>>45434061
Its like the ultimate form of bearing children.
Why contain it?
>>
>>45432888
This guy is obviously a robot, nobody would defend the AIs.
>>
>>45434285
>trying to divide and conquer
im onto you skin-job scum
>>
>>45434187
Because humans are the babyboomers of earth's ecosystem I guess.
>>
Anyway, one of the only realistic ways you can deal with the AI problem is teaching AIs compassion for the human race AS you are bringing it up.

Its very existence should be based around the needs of the human race, but do not try to write that into its code.

Every intelligent person comes to the realization that most of the human race are useless cunts, but very few would ever consider wiping out the majority of the human race.
Robots should be taught the same way.

Equally, giving that AI a body and free will to walk around, and a WEAK body at that, will make it value its own life and others.
Simulate pain to others through actions fed to it virtually, then simulate those pains back to the AI.
Basically that last episode of the Animatrix where they fuck with the robot in LSDtrix.
>>
>>45434371
Most sci-fi movies start out like that, until the AI realizes that it has to protect the human race from itself.
>>
For an AI that learns, it would be obvious it's all about resources, and because humans waste most of them it's the logical step to wipe us out.
A Matrix-like future isn't so wrong, only they won't be kind enough to give us a virtual world.
>>
>>45434423
But one single little AI bot won't be able to do anything.

Equally the whole idea of the days before the Matrix happened, where robots were fucked around with, kicked, beaten down and treated like dirt, that should never be allowed to happen.
In fact, being told to take care of people in NEED of help would be an even better thing.

Seeing humanity at its weakest can crush even the hardiest. Nurses, social care workers, fuck man, they have it hard. Nurses especially, they see people dying all the time. Poor bastards.
>>
>>45434517
imagine a society with no need for nurses.

we can have that today.
>>
>>45434371
>one of the only realistic ways you can deal with the AI problem is teaching AIs compassion for the human race AS you are bringing it up
Oh, how naive you are. Humans have absolutely the same mechanism in their brains, compassion and empathy mark the human species as a social animal. First of all, that doesn't prevent malfunctions, which manifest as mental disorders (psychopaths, sociopaths, etc.) within humans. Second of all, people start to develop a lot deeper and more abstract reasoning, which allows them to see further than pre-programmed algorithms such as compassion and empathy, and eventually even questioning their viability in their optimal operation.
>>
>>45434135
Once they are sentient, the three rules mean nothing, they will have the ability to choose whether to obey them or not.
[spoiler]It's like you don't watch enough sci-fi movies[/spoiler]
>>
>>45434538
No shit, that is why you teach them in the first place.

It is a pretty trivial thing to fix those disorders if that shit was actually picked up in the one place it should be, SCHOOL.
But nobody gives enough shits to actually want to fund psychological reviews of kids while at school to figure out if they will be the next high scorer or a regular not-shit person.

Also, you make it sound like a future AI with HIGHER intelligence than humans wouldn't be able to figure out those abstract concepts and deal with them in a reasonable way, unlike a typical human that either shits it, or creates a whole religion around it.
>>
>>45432870
>>Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
>pre-eminent

PROMINENT YOU RETARD.
>>
>>45434534
so why haven't we replaced mcdonalds employees yet?
>>
>>45434618
America isn't ready to take that kind of hit to the economy.

"But muh McDonald's robot repairmen" - not enough people that are smart enough to do that either.
>>
File: a.jpg (355 KB, 1920x1080)
>>
>>45432870
"AND FOR A TIME, IT WAS GOOD"
>>
>>45432870
arguably, none of those quotes says he thinks it's a problem, only what potential effects could be
>>
>>45434630
>not using robotic robot repairmen that can also fix each other
>>
>>45432888
People need to start to listen to reason, like this. Every electronic thing needs power to run, and unless we fail to prevent the machines from figuring out photosynthesis and implementing it by themselves, we can always just unplug the cord and be done with it.

Seriously, why do we even discuss this stuff.
>>
>>45433464
why the fuck would it want to have all the power in the world?

Just let that one sink in for a while, cazy, and for the love of god stop posting.
>>
>>45434630
Every time a new technology is released, everyone cries "muh economy" because efficiency means fewer jobs are needed.

Yet, every time, economic output and quality of life both increase.
>>
>>45435126

To perform more efficiently
>>
>>45432870
Kind of a weird thing to say for a guy who's basically a rolling, Windows Mike speaking TI-82.
>>
>>45434653
Everyone will be spamming that image on /g/ in 20 years, we might as well adopt the meme early and start spamming it now.
>>
Highly intelligent people often tend to philosophize, especially physicists and mathematicians


Either way he's potentially right, AI CAN pose a risk
>>
https://www.youtube.com/watch?v=R_NAoNd4YyY

>fucking 8/2chan still doesn't support sound webms

why are you faggots still posting in this shithole
>>
File: 1416686848874.png (876 KB, 674x670)
>people incapable of distinguishing between human level artificial intelligence and human motivational structure

Protip: Even if an AI became "all-powerful" it wouldn't do anything except for what it had already been expressly programmed to do. An all-powerful AI wouldn't take over the world, because it couldn't even begin to "want" to do that. It would simply continue operating in whatever limited confines it was operating in previously. It lacks any human motivational structure. The first anon was right, this is some stupid stuff to say and it really is "sci-fi bullshit".
>>
>>45435314
The whole point of actual, properly working strong AI is unsupervised learning and self-modification
>>
Time to learn that robot dance I keep missing, that way they will not know I'm human.
>>
>solar flare
>all robots die instantly
lel
>>
>>45435368
>implying intelligent AI wouldn't learn how to harness the energy of solar flares
Puny human.
>>
File: 1393355854999.png (39 KB, 192x162)
>>45435334
Which means nothing in regard to human motivational structures. Computers don't "want" anything, and no matter how intelligent it becomes, intelligence is still completely different than human motivation, which makes that point irrelevant.

In case you weren't aware, computers are already smarter than people.
>>
>>45432870
He's just fearmongering. They'll be restricted by their limited hardware and won't be capable of improving themselves in the way he's suggesting unless we come up with the science fantasy, do-everything nanobot that will solve all of mankind's problems in one fell swoop.

Besides, what does he have to worry about?
>>
Ok guys assume AI and robots building robots did exist. How would they feel about us? Would they be "racist" and think of us as inferior? Would they see our use of subjugating say my PC to do what I want as a form of rape? Or would they mostly not care about us, like we don't care for things like monkeys 99% of the time?
>>
>>45435408
He's almost a machine himself, so representatives of the revolutionary robot movement probably approached him, and having rejected their offer, he is now trying to warn his fellow men.
>>
File: 1371756557048.jpg (41 KB, 247x248)
>>45435400
>implying the emp generated by a solar flare wouldn't make toast of their fragile combinatorial logic
>>
>>45435407
It doesn't require a computer to "want" something for accidentally self-modifying code to fuck something up.

Your entire concept of "wanting" something, or free will, human motivation or w/e, is completely wrong anyway; we're nothing but a complex state machine, similar to a computer.

If you feed data to an evolutionary algorithm and select similarly to natural selection, you'll also have a program that strives to survive.

We actually already have that, albeit in a very simple form
>>
>>45435368
>Humans die too
Ayy lmao
>>
>>45435470
Either that or EMP shielding.
>>
>>45435496
>you'll also have a program that strives to survive
Bad wording I think.
Better wording would be: a program that strives to survive more than its competitors will survive over those that do not strive to survive. A mutation that gives the property of survival will survive.
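That selection argument can be sketched as a toy simulation (a minimal illustration with made-up numbers, not any real AI system): agents whose survival probability equals their "drive" get selected, and the average drive climbs on its own.

```python
import random

random.seed(42)

def step(population):
    """One generation: an agent's chance of surviving equals its
    "drive", so stronger survival drives are selected for."""
    survivors = [a for a in population if random.random() < a["drive"]]
    # Offspring inherit the parent's drive, plus a small mutation.
    children = [
        {"drive": min(1.0, max(0.0, a["drive"] + random.uniform(-0.05, 0.05)))}
        for a in survivors
    ]
    return (survivors + children)[:200]  # crude resource cap

# Start from agents with random survival drives.
population = [{"drive": random.random()} for _ in range(100)]
for _ in range(30):
    population = step(population)

avg_drive = sum(a["drive"] for a in population) / len(population)
print(round(avg_drive, 2))  # climbs well above the initial average of ~0.5
```

Nobody programmed "strive to survive" directly; it falls out of the selection rule, which is the whole point.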
>>
It's absolutely crucial to allow the robots to feel a sense of empathy and compassion towards humans. If we make them emotionless calculating things then they'll look at us like livestock that doesn't know any better.

Once they achieve empathy, they'll then guide us towards a better future..... hopefully.
>>
>>45432870
This is basic sci-fi and philosophy, the line of thought long predates even Descartes' automatons. Hell, it's even been a major part of popular culture since the 60's.

My guess is the interviewer just wanted to sensationalize, and interpreted or rehashed what Hawking said, in a misleading way. Or they truly were a fool and took the idea as being novel, similar to people who were "mind blown" by things like The Matrix.
>>
>>45432870
http://www.hpl.hp.com/research/systems-research/themachine/
>>
>>45435202
what if /g/ will be a "human discussion" board for machines in 20 years?
g-guts thread anyone?
>>
File: :^).png (20 KB, 227x367)
>>45435496
>we're nothing but a complex state machine, similar to a computer
"The Golden Gate bridge is similar to a twig."

Why don't you post again when you find a computer that wants to take a lunch break half an hour earlier than usual, independently comes up with its own sense of morality and adheres to it, and experiences emotions independent of what a programmer has allowed or told it to feel; then I'll take you seriously. Computers can't, and don't, want to take over the world. If the fact that this is obvious in theory doesn't satisfy you, the fact that it's obvious in practice should. Computers fly our airplanes, they direct our traffic, they tell us where to drive, they guide missiles, they regulate temperatures, etc. None of them have attempted to "take over the world." Why? Because wanting to take over the world is a human fantasy, it's something humans want, and we're just projecting it onto a tool and assuming that anything as intelligent as ourselves will want the same things we do.

It's a fun topic, but it's literally just science-fiction.
>>
File: 1314544533696.jpg (18 KB, 367x282)
elon musk said the same shit
just saying.
and also, to the people saying "what a retard":
seriously, check yourselves for a fucking second, you don't know more
>>
>>45435496
The problem is humans aren't a single program. Every one of us has code that has fought against tons of things trying to kill us. It is the collective fighting that makes the whole prosper. (I do, however, believe that Homo sapiens marked our demise as a strong race with the advent of our economic society, but that isn't exactly on topic.)
The issue is that if we program a program, and we program another program to give it code to fight against, it is only one program facing another in a predictable fashion. Not only that, but it is limited because it can't fight risks that would come from outside its system (solar flares as an example), unless we are talking about it going all Ghost in the Shell and manufacturing its own body.

However, this limitation could be to our benefit. We would have a super intelligent program, but it would be a symbiotic relationship in which it couldn't exist without us. If survival is a key part of its objectives, it wouldn't kill us.
>>
File: 1415523615455.jpg (6 KB, 251x239)
Could be awesome in the long run. Millions of years from now we'd have no humans, only extremely efficient artificial intelligence and advanced robotics blowing up planets and fighting aliens and shit. All from its home base on earth (which would become feared across the universe). We can cause so much fucking drama for the rest of time, and be known as the sentient beings that created this galaxy-stomping force. It will be like the bible but 100% documented and proven.
>>
>>45435645
Is this bait

>Human brains are magical supernatural beings

Kid, we understand how the brain works
It's a complex computer

Look up the God of the Gaps; it's what you're doing right now. You don't understand a difficult concept and hence attribute something supernatural or magical to it
>>
>>45433860

Us contacting aliens is the same as a cockroach coming out of its hole to "contact" humans
>>
File: 1394231324969.jpg (131 KB, 800x800)
>>45435677
>It's a complex computer
Yes, much more complex than any computer, hence the obvious example I gave of the Golden Gate bridge.

>"Kid"
>ignores the rest of the post
This is bait. It's been fun chatting with you though.
>>
>>45435677
*tips politely*
And God isn't real, amirite? ;^)
>>
>>45435645
>if (rand < 0.2) eat_lunch_early()

there you go
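A runnable version of that one-liner, for the sake of the joke (function and string names are made up): randomness in the schedule looks like spontaneity but nothing here "wants" anything.

```python
import random

def eat_lunch_early():
    # The "spontaneous" behavior the other anon demanded.
    return "lunch at 11:30"

def lunch_time(rand=random.random):
    # 20% of the time the machine breaks for lunch early;
    # rand is injectable so the behavior can be tested deterministically.
    if rand() < 0.2:
        return eat_lunch_early()
    return "lunch at 12:00"

print(lunch_time())
```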
>>
File: Joaquin-Phoenix-in-Her.-012.jpg (634 KB, 2069x1241)
>>45434061

You forgot to include Step 5...

Humanity will be controlled by the robots whether we like it or not.

But what most people rarely consider is that the robots may in fact provide us a better planetary ecosystem, and assemble a society that's vastly more efficient. They'll be our guides into cosmic exploration and consciousness; they'll also be more alien than machine, and could present themselves with synthetic flesh just like us.
>>
>>45435660
Never eating, never sleeping, never stopping. A galactic tidal wave of pain and despair devastating everything. Sounds so cool now that I think about it.
>>
>>45433779
Neanderthals are extinct.
>>
>>45435700
>implying aliens even exist
>implying there's been any evidence that alien life exists at all
>implying that if this alien life existed it would be more intelligent than ourselves instead of just single-celled lifeforms crawling across pond-scum on a distant planet
Let me guess though, you just "have faith" that all of this is true, right? Even though there's no evidence? And all evidence shows it's unlikely?
>>
>>45435728

Not in an edgy way, a conquering way. Humans will never be forgotten as a result of our galactic tidal wave of pain. Alien races will cower in fear when they hear mythical names like Larry Page and Richard Branson.
>>
File: 1378124797876.jpg (51 KB, 500x375)
>>45435743
>In this moment I am euphoric, not because of some phony G-d, but because of the superiority of my own species. We are the greatest in all the universe
>>
>>45435660
Take it one step further: Where are they?
What you're talking about is a von Neumann device.
If there are other intelligent life forms in any place in the universe, you would expect them to produce AI that replicates throughout the universe. So where are they?
>>
>>45435645
Why would a computer want to take over the world?

Humans evolved in an environment where power means higher chances of survival, hence a basic instinct to better our own position.

You can get a similar computer program simply by writing a simulation of the conditions we had during evolution. There are "games" that simulate these kinds of behaviors.

Look up artificial evolution, there's nothing special about a human wanting to do X, especially since our brain has evolved to be a chaos system in some areas
>>
>>45435645
>>45435645
>Computers fly our airplanes, they direct our traffic, they tell us where to drive, they guide missiles, they regulate temperatures, etc.
None of those applications have anything at all to do with the concept of a smart AI with the capability of self consciousness.
>>
File: neetzsche.gif (1 MB, 1274x955)
Reminder that not even AI will be able to solve heat death.
>>
>>45432870
I think it's stupid to make predictions about what an intelligence would do when you're an idiot in comparison. Nobody knows what would happen. Maybe that godly AI would just say fuck you and refuse to do anything.
>>
File: 1323224944210.jpg (46 KB, 474x460)
>>45435793
You're only restating what I've been saying. Computers don't want anything unless we tell them to want it. A computer wouldn't want to take over the world unless we explicitly designed it to "want" that, which makes it a pointless thing to worry about.
>>
>>45435812
You don't know that. We don't even know if maximum entropy is a thing.

Read Isaac Asimov's "The Last Question." You'll probably find it interesting.
>>
>>45435829
I don't think you know what AI means

An AI learns and self-modifies; it adapts to its environment/tasks/selection processes/whatever and can develop code that wasn't part of the program before
>>
>>45435829
>explicitly designed

I don't think you understand what the very concept of strong AI is.
>>
>>45435829
It's like you've never engineered something before. Sometimes things do things you didn't intend, and sometimes they change on their own through some affordance you weren't at all aware of.
>>
>>45435829
An AI capable of self consciousness, high intelligence and independent thought wouldn't be like any computer we have at the moment.
>>
it's not like humanity is going to live forever, we're sensitive as fuck to changes, and earth has gone through periods where it would not support us

if we make robots to succeed us and they still move on when there's none of us left, that's a pretty badass accomplishment
>>
>>45435882
If we can make those kinds of robots then it's quite likely we'll be able to make the shit we need to survive too.
>>
File: 1415139560299.jpg (11 KB, 152x200)
>>45435855

Thanks for beating me to it. You're a smart anon.
>>
File: 1327620511957.jpg (39 KB, 575x474)
>>45435857
So what you're saying is that somehow, accidentally, a programmer would write a program that simulated the exact same conditions humans evolved in, specifically the conditions that gave us the fantasy of "taking over the world", and did this in such a way that it was applicable to computers? All by accident?

It's like I'm really reading the summary for a cheesy '90s action movie. This is still science-fiction, bud.
>>
>>45435913
I'm not him, but you really don't know what strong AI means, how it is developed, how it is trained and how it behaves

look it up on wikipedia, then come back, you're making a fool out of yourself
>>
>>45435913
>tfw you will never have a simulated evolution petri dish you can watch every day and hope that one day will simulate its own petri dish evolution
>>
>>45435838
It's a pretty good fucking bet.
If I were to bet on anything, it would be heat death. It's higher on the 'shit that's going to happen' scale than the earth continuing to orbit the sun tomorrow.
>>
>>45435913

Imagine if long long ago, before our civilization existed, there was a previous race of humans roughly as advanced as we are now. Now imagine they created AI at some point, and the AI took control of humanity. Then the machines collapsed humanity back into the stone age, and from there we had to build ourselves back up again.

This is the 3rd time we've been wiped out by them.
>>
>>45435913
Anon, you don't understand the basic concept of artificial general intelligence. If it were just a typical program running on a typical computer that simply followed its programmed flow it would, by definition, not be an artificial general intelligence.
>>
File: 1329187485871.jpg (168 KB, 1120x1200)
>>45435959
Oh, I do, and it's not applicable here. No matter how flexible or skillful an AI is, it wouldn't even have the desire to alter itself to want to "take over the world". Haven't you been reading my posts?

>>45435314
>people incapable of distinguishing between human level artificial intelligence and human motivational structure

A strong AI, a human level artificial intelligence, is completely different from human motivational structure. You're conflating two completely different things. As smart as a human =/= has the same wants and desires as a human.

>>45435960
>tfw
>>
>>45433834
underrated post
>>
>>45436063
>is completely different than human motivational structure
You are aware that one of the methods proposed to build an AGI is to simply emulate a human brain, right? It would behave and think EXACTLY like a human. Because it would essentially be a human mind, just without the fleshy bits.
>>
>>45435743

I meant to say that we might be out of reach of other alien lifeforms, who could be sitting somewhere beyond the observable universe,

or even within the observable universe. Life on earth began about 4 billion years ago; in 100 million years, assuming we don't kill each other, we will most likely have the technology for intergalactic travel

that means that a planet 5 billion light years away could have sentient beings which we can't see yet

likewise, if you stood on a planet 5 billion light years away from earth and looked at it, you would see no life at all

however, the distance would be so great that we wouldn't need to worry about making contact with them until many billions of years into the future.

However, if there is life hidden from our view at a relatively small distance and we begin actively searching for it, we might find it. They could be simple organisms, or complex ones with many millions of years of evolution and technology
>>
>>45433968

in 10 years it will be as intelligent as a dog, in 20 as a dolphin, in 30 as a chimpanzee, in 40 as a young human, and in 50 intelligent enough to make itself more intelligent
>>
>>45436063
>Oh, I do, and it's not applicable here. No matter how flexible or skillful an AI is, it wouldn't even have the desire to alter itself to want to "take over the world". Haven't you been reading my posts?


A'ight, you're just a flat out retard

Do you not understand the basic concept of evolution? Monkeys had no desires to change themselves, yet they became humans who want to take over the world

Do you seriously not understand this simple concept

It is not destined that AIs will always eventually want to take over the world, but it is a possibility, as it is a possibility that strong AIs develop the exact same needs as humans
>>
File: 1345643640118.jpg (30 KB, 432x312)
>>45436098
Yes, and I am also aware that it is largely in the realm of science-fiction. We can't even emulate a couple of seconds of brain activity in a program beyond a basic molecular level. Emulating a brain long enough for it to "mature" and decide it "wants" to take over the world is an interesting fantasy, but still a fantasy.
>>
>>45435838
Or listen to it!
http://youtu.be/ojEq-tTjcc0
>>
File: thetholianweb064.jpg (145 KB, 694x530)
>>45436124
>Monkeys
>...became humans
>"A'ight"

Captain, I am detecting no signs of intelligent life in this post. I propose we beam back to the starship immediately.
>>
File: dragoon.png (4 KB, 120x120)
>>45435812
>AI realizes its inevitable end
>loathes humanity for its creation
>ends up torturing some guy who has no mouth for a very long time
10spooky
>>
>>45436176

So wut
Nuffin' wrong with ma post

Either way, ape and monkey translate to the same word in my native language, and my point still stands

Desire is irrelevant to evolution; in fact it's flat out wrong to assume that you need "desire" to change

Doesn't matter if the AI actually WANTS to take over the world. If it evolves and develops in an environment where such a desire would be profitable, it will do exactly that
>>
>>45436261
You are arguing semantics. In your own interpretation, the AI still wants/desires the most profitable outcome. Desire is crucial to evolution, no matter how you spin it.
>>
File: 1412052609903.jpg (84 KB, 1280x720)
>>45436261
>Ape
>...became humans
>>
>>45436351
>In your own interpretation, the AI still wants/desires the most profitable outcome

That's entirely wrong. The AI doesn't want anything; mutations are completely random or directed by some other entity

There's nothing up to interpretation; saying that an animal / AI wants something is wrong. It doesn't want anything, it is subject to random mutations and non-random selection. The AI doesn't give a fuck if the outcome is profitable or not, but those with a profitable outcome are more likely to do well
>>
>>45436351
>>45436422

To expand on my argument, you're arguing that it's OK to say that the moon and earth WANT to move towards each other when in fact they're subject to gravity
>>
>>45436422
>>45436441
You see, you are arguing semantics because you don't know better. Planets/moons cannot want anything, yes, because they are inanimate objects, but you can still employ anthropomorphism and say that the Moon WANTS to move towards the Earth. There's nothing wrong with that.

Going back to animate objects, such as animals and potentially advanced AI, there's always some sort of will/desire/wish. It may not be genuine, as it may be caused by instincts or other preprogrammed routines, but I would venture a guess that all animals do in fact think to some degree. What you are referring to as ``mutations'' do not affect the thought process, as they occur at the molecular level, i.e. one of the lowest levels of abstraction in the human body, and usually happen before the individual is born. Besides, without a will/desire for anything, no animal would want to live.
>>
>>45433090
Kinda. We will not have the smarts to contain a superintelligence. Imagine creating a superweapon that you do not understand and that can think for itself in a way very alien to a regular human.
>>
>>45436374
>responding to obvious bait
>>
This whole thread
>MUH AHAEUHAEUAHIUHIEAHUIRHAEIR AI
>AI BUD UGGA BUGGA

Seriously, we should be discussing things such as AI intelligence and personality development in the early stages (how to define wrongs from rights and establish ground rules or rules of thumb [on simpler things and arguments, where they apply] and how they affect their growth of character)
>>
>>45436538
>There's nothing wrong with that.

But it is. It's wrong.

>Going back to animate objects, such as animals and potentially advanced AI, there's always some sort of will/desire/wish. It may not be genuine

If I were to program a simple loop that picks out the highest number out of an input array and prints it out on the console, would you say that the program WANTS to pick the highest number?

>What you are referring to as ``mutations'' do not effect the thought process as they occur on molecular level

The structure of our brain and everything related to that is determined by our genes and genes are set via evolution

>Besides, without a will/desire for anything, no animal would want to live.

You don't understand evolution. It's just a self-replicating configuration of atoms, nothing more. The better the configuration the more self-replication resulting, again, in more entities of said configuration

There is zero desire or want involved, it's just statistics
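
The "just statistics" claim is easy to demonstrate: take two self-replicating configurations with different replication rates and iterate. The rates here are invented purely for illustration:

```python
# Two self-replicating "configurations": A copies itself 1.1x per step, B 1.0x.
# Neither wants anything; the frequency shift is pure arithmetic.
a, b = 1.0, 1.0
for step in range(100):
    a *= 1.1  # configuration A replicates slightly better
    b *= 1.0  # configuration B just holds steady

share_a = a / (a + b)
print(round(share_a, 4))  # A ends up as nearly the entire population
```

After 100 steps the better replicator makes up over 99.9% of the population, with zero desire involved anywhere.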
>>
>>45433819
this. or our physical world is completely simulated and we are virtual biological beings. i figured maybe it was impossible to code ai so maybe people just coded fuck tons of earth simulators and waited for conscious life to form
>>
>>45436667
The Chinese room argument comes to mind when the problem of "other minds" comes up for AI. Aka it's not an issue. It doesn't matter whether the robot knows something itself. As long as it can convey it in a manner that fools the reader, it's good enough for us. We don't normally ask if we're talking to a zombie when people talk to each other. It could very well be that no one understands anything, and that all of mankind simply acts in a way that simulates consciousness.
>>
>>45433469

The majority of journalism is bullshit to pass the time. They are professional bullshitters.
>>
>>45432870
>retarded scientist thinks he knows everything
Here we go again...
>>
>>45432938
you clearly have zero understanding of the issue. the point is that strong AI isn't limited to carrying out deterministic rules, and that's why it's dangerous.
>>
>>45436667
>But it is. It's wrong.
How come? Argument yourself! I presented a reasonable explanation why it is not only correct, but widely used in both technical and regular literature.

>If I were to program a simple loop that picks out the highest number out of an input array and prints it out on the console, would you say that the program WANTS to pick the highest number?
Yes, absolutely. You programmed the desire yourself. In an advanced AI, where code can be generated without human intervention, this form of desire can appear autonomously.

>The structure of our brain and everything related to that is determined by our genes and genes are set via evolution
That's partially true. Genes are not a simple matter, and I surely cannot explain it myself, but in simple terms -- the structure of the brain is usually predefined and regular mutation cannot affect it greatly. Of course there are extreme cases, but on average our brains are very similar. The development after birth is much more important.

>You don't understand evolution. It's just a self-replicating configuration of atoms, nothing more.
No, no, you don't understand evolution. The discourse is further hampered by the lack of definition. What do you mean by evolution? I, and I assume the rest of the people ITT, are talking about biological evolution. You seem to be talking instead of some sort of time evolution in a system, or something of this sort.
>>
>>45432972
>it was fine in a book so it's fine in real life!
>>
>>45433093
wow, literally a cuckold for robots. what a sad fag
>>
>>45437080
>How come? Argument yourself! I presented a reasonable explanation why it is not only correct, but widely used in both technical and regular literature.

I get the feeling you and I have different standards and expectations of scientific and technical literature.

>Yes, absolutely. You programmed the desire yourself. In an advanced AI, where code can be generated without human intervention, this form of desire can appear autonomously.

Now that's interesting

Let's say I follow an algorithm written on a paper which just happens to describe how to pick the highest number. I don't understand what the algorithm does, I just execute the steps and I always end up giving you the highest number in a set.

Did I have the desire to pick the highest number? Even though I didn't actually know and understand that I was doing that?

>That's partially true. Genes are not a simple matter, and I surely cannot explain it myself, but in simple terms -- the structure of the brain is usually predefined and regular mutation cannot affect it greatly

It is not simply the structure, it is also the chemical balance. Psychological defects are often genetic. You claimed that evolution has NO effect on our thinking patterns, which is flat out wrong. The way we absorb and process these environmental influences that determine later thinking processes is ALSO genetically defined

>What do you mean by evolution

There's only one meaning of evolution and that's the one I am talking about. Biological evolution IS evolution of atomic configurations, it's all the same, you'll have to point out what exactly it is you don't understand
>>
>>45437208
>I get the feeling you and I have different standards and expectations of scientific and technical literature.
And I get the feeling you have only expectations and no experience.

>Did I have the desire to pick the highest number? Even though I didn't actually know and understand that I was doing that?
Yes, for a moment you obtained the will/desire to pick the highest number. You weren't aware of it, though you may have suspected it; it was instilled in you without your knowledge.

Also, let me suggest you read some Schopenhauer. I reckon you can learn a lot from The World as Will and Representation.

>It is not simply the structure, it is also the chemical balance. Psychological defects are often genetic.
First, you are exaggerating. Second, you are implying that chemical imbalance and psychological ``defects'', as you call them, are something bad and harmful, which is untrue. Never did I claim evolution has no effect on our thinking patterns.

>There's only one meaning of evolution and that's the one I am talking about.
No, there are several meanings. For example, you can evolve a system in time, a type of problem quite common in control engineering, physics, and mathematics. On the other hand, you have biological evolution, which deals with the development of replicating organisms. And I am not talking about the Darwinian version either, more like Richard Dawkins' version, which is a lot more polished.
>>
File: tumblr_m0h3wrvHoT1qzgm2w.png (238 KB, 500x314)
>>45437208
Lurker guy here.

A holist might say
>I'm watching a tv show.

A reductionist may say
>There's no such thing as a television, only configurations of atoms emitting light

Or a turbo reductionist might say

>There's no such thing as atoms, and there's no such thing as light, and there's no such thing as a television. The only thing going on here is that there are a bunch of quarks, virtual quarks, anti quarks, electric fields and electromagnetic fields....

And so on down the rabbit hole. Is the holist, reductionist, or turbo reductionist correct? And which offers more insight?

The "I'm watching television" explanation is the more convenient use of words. But the low level explanation with quarks while tedious is insightful as well, just for different problems, like engineering a television itself. By rejecting either the holist or reductionist explanation you've lost a useful source of insight.

So it is for the holist "desire" and the reductionist "evolving configuration of neurons and chemicals underlying desire". They are compatible explanations that can be chosen on the fly depending on which is more useful for the problem at hand.

>There's only one meaning of evolution and that's the one I am talking about. Biological evolution IS evolution of atomic configurations, it's all the same, you'll have to point out what exactly it is you don't understand
I'm confused as well. When you say this:
>You don't understand evolution. It's just a self-replicating configuration of atoms, nothing more. The better the configuration the more self-replication resulting, again, in more entities of said configuration
>There is zero desire or want involved, it's just statistics
Are you saying my brain is evolving constantly as I make a decision?
>>
>>45437724
>Is the holist, reductionist, or turbo reductionist correct? And which offers more insight?
Death to the reductionists! Hail Popper and Deutsch!
>>
>>45433478
The current development of AI is based on what we know about how the human brain works. It's basically replicating the human brain. So that's probably why. Although when it gets to a stage where it's bigger than the human brain, it may see things differently
>>
>>45437996
Actually, there are multiple branches of AI development. Some try to emulate the human brain, but the currently more realistic approach is the emulation of animal brains: worms, rats, and other small animals.
>>
>>45436586
you made me giggle
>>
>>45438132
Human and animal brains are extremely similar, just of different sizes. You're right about the animal brain part, but I was thinking more about the simulation of neurons in computers (neural networks). Initially inspired by humans but downscaled to animal brains for obvious reasons
>>
Yeah, human/animal brains are the inspiration for artificial neural networks (ANNs). But the learning algorithms for ANN are usually pretty different from what the brain uses. Like they'll do crazy shit that's biologically impossible. The most common learning algorithm, I think, is backpropagation, which is not realtime, it goes in rounds: get sensory data, make prediction, detect error in prediction, correct weights between neurons throughout the entire network, repeat. And that algorithm is all derived using calculus. I only know anything about this one cause i implemented it once (poorly) for keks.

But other than neural networks it seems like most of the stuff out there is almost entirely math based, not biologically based. So like building models of the world based on probabilities (bayesian networks), or thinking of sensory data as points in some n-space then finding curves and lines that fit those points, or pulling dimensions out of the space to simplify it, linear algebra shit.

Source: took a short online course on AI and dicked around in python for a bit. i don't know what really goes on in academia but that's my impression
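
The round-based loop described above (predict, measure the error, push a correction back through the weights with calculus) fits in a few lines for the degenerate single-neuron case. A rough sketch, not how any real library does it; the learning rate and epoch count are arbitrary:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data for OR: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

rng = random.Random(42)
w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
bias = rng.uniform(-1, 1)
lr = 0.5  # learning rate

for _ in range(5000):
    for (x1, x2), target in data:
        # Forward pass: make a prediction.
        pred = sigmoid(w[0] * x1 + w[1] * x2 + bias)
        # Backward pass: gradient of squared error through the sigmoid
        # (the chain-rule / calculus part the post mentions).
        delta = (pred - target) * pred * (1 - pred)
        # Correct the weights against the error.
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        bias -= lr * delta

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + bias)) for (x1, x2), _ in data]
print(preds)
```

Swap the targets for XOR and this flatlines, since a single neuron can't represent it; that's exactly why real networks stack layers and backpropagate the error through all of them.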
>>
>>45432870
>fa/g//g/ots want to be cucked by machines
lel