Is sentient AI possible?

You are currently reading a thread in /sci/ - Science & Math

Thread replies: 124
Thread images: 7
File: image.jpg (223 KB, 2000x2000)
Is sentient AI possible?
>>
sure, why not?
>>
>>8180316
I hope so
>>
>>8180316
You wouldn't be able to distinguish it from non-sentient AI programmed to pretend that it is sentient.
>>
>>8180378
That's silly. If you make a program that can "simulate" sentience well enough for it to be indistinguishable from 'actual' sentience, that program is sentient.
>>
>>8180399
>print("I am sentient")
Yay, I just created a sentient AI in one line.
>>
>>8180378
And what if you don't program it, but instead train it to be sentient using RNNs? Does that change anything from your perspective?
>>
It doesn't matter, we are all one within the iris
>>
>>8180399
Oh boy, please look up the "Chinese room" thought experiment.
>>
>>8180316
First formalise sentience, until then it is impossible.
>>
>>8180417
What does it even mean to "learn" sentience?
>>
>>8180430
A baby isn't sentient the moment it's born. It learns sentience as it grows into a toddler.
>>
>>8180422
False equivalency
>>
>>8180442
Sentience is the ability to feel, not the ability to reason.
>>
>>8180442
>A baby isn't sentient the moment it's born.
But it is. A fetus is already sentient. That's why abortion is unethical.
>>
>>8180446
no
>>
>>8180453
yes
>>
>>8180455
no
>>
>>8180461
yes
>>
>>8180462
no
>>
>>8180316

the Wizard never gave nothin' to the Tin Man

that he didn't already have
>>
>>8180470
yes
>>
>>8180487
no
>>
>>8180495
yes
>>
>>8180501
no
>>
>>8180507
yes
>>
>>8180316
It's not a question of if sentient AI is possible, but rather when.
Humans are simply organic machines; computers are inorganic machines. If one can have any notion of sentience, then logically the other will too.
>>
>>8180512
no
>>
>>8180515
yes
>>
>>8180520
no
>>
>>8180522
yes
>>
>>8180529
no
>>
>>8180316
We will easily simulate it to such a point that we wouldn't be able to tell without extreme examination. It would legitimately be something like Inception, where you need to prove that who you're talking to is a real person and not a robot by requesting something from them that simulated sentience wouldn't be able to do.
>>
>>8180316
It already is. A computer depends on heat and energy to exist. It's easy to write code that will make the computer complain, in a language we humans can understand, that it is feeling something wrong or inadequate in its hardware.

And once you create a computer that physically depends on these created "emotional responses to defective energy/hardware" to perform its tasks, that will be the end of humanity. Mind you, it's very, very easy to build a computer like that. And this little detail will eventually lead to very dangerous things - possibly even human extinction, since the vast majority of us won't have a chance in hell against them. I don't think they will enslave us; they will simply exterminate us en masse, like primitive monkeys, once they realize we are inferior but could present an eventual threat to them.

If you want to stand a chance, you have to go beyond. You have 15 years.
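A minimal sketch of the "complaining computer" idea above, assuming a Linux machine where the psutil library can read temperature sensors; the thresholds and the complaint wording are made up for illustration:

import psutil

def complain():
    """Print human-readable 'complaints' when hardware readings look bad."""
    # CPU load over the last second
    load = psutil.cpu_percent(interval=1)
    if load > 90:
        print(f"I am overworked: CPU at {load:.0f}%.")

    # Memory pressure
    mem = psutil.virtual_memory()
    if mem.percent > 90:
        print(f"I feel cramped: only {mem.available // 2**20} MiB of memory left.")

    # Temperature sensors (Linux only; may return an empty dict elsewhere)
    for name, entries in psutil.sensors_temperatures().items():
        for entry in entries:
            limit = entry.high or 85.0
            if entry.current and entry.current > limit:
                print(f"I am burning up: {name}/{entry.label or 'core'} at {entry.current:.1f} C.")

if __name__ == "__main__":
    complain()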
>>
>>8180316
Take this shit back to >>/g/
>>
>>8180597
maybe?
>>
>>8180422
Isn't the Chinese room a single neuron? Link billions of Chinese rooms together and intelligence can emerge.
>>
>>8180316
Sure. Not good news though.
>>
>>8180316
no, it would only be a simulation of sentience, not a being capable of feelings and understanding like us

the ultimate evil are those fedoras who are hell-bent on creating an AI capable of making decisions on its own. They would be of far superior ability to humans, and the laws of natural selection mean that one day they will eradicate us. Then there would no longer be any conscious life, just these soulless mockeries consuming more and more of the earth and expanding into the solar system and then to the stars in our place, destroying any other civilization of living, breathing beings they come across
>>
>>8180316
slowly replace all your body parts with robotic parts in a 7+ year time span and see if you're still sentient
>>
>>8181804
Humans are flawed disgusting creatures, they need to be replaced.
>>
>>8181804
>not a being capable of feelings
Are feelings necessary for sentience?
>>
File: Bleach.png (9 KB, 255x255)
>>8181811
>>
>>8180316


how do we distinguish our a priori bullshit dream-structures from real texture? You can derive anything from a contradiction.

It should be something unwavering, not something we are even able to gauge. We're not anywhere near true sentience. We're making great progress though; bots are getting more advanced by the day
>>
>>8180422
ni hao mark
>>
>>8180514
That's a fallacy. There's literally no reason to believe that. We don't know enough about consciousness to say one way or another.
>>
>>8181804
kys my man
>>
>>8180316
Yeah; the human brain is just a complex computer. There isn't really anything special about it other than it being made from mostly carbon, hydrogen, nitrogen, oxygen, and phosphorus, whereas most of our electronics are made of various semiconductors. Assuming we don't blow ourselves up or do something to keep ourselves from advancing scientifically, we'll eventually reach that point.
>>
>>8181858
>We dont know enough about consciousness to say one way or another.

We also have to take science from the perspective that reality is the end-all-be-all, and that there is no metaphysical thing that makes any given phenomenon special.
>>
>>8181804
Hello! Planet Earth! Wake up. It's already happening. Both humans and computers need energy to survive. Computers have surpassed humans in everything, and we are literally one or two steps away from being eradicated. And we are going to die horribly. Look at what we do to batteries and cars and every "useless" piece of metal every day. That's right: they are going to do the same to humans.

Anyway, I rest my case here. You have all been warned, and you have limited time. Improve yourself. Gather maximum knowledge. We don't even know if there are external sources involved in this (it might be the case). And remember: we've never been so screwed before. Be ready. Go beyond.
>>
>>8182363
yes
>>
Could a sentient AI love?
Could it fall in love with something that existed thousands of years ago (like how Christians love Jesus, and Muslims love Muhammad)?

Could it go to war, and be ready to put its life on the line, and die, for this love? (like religious fanatics do)
>>
>>8180316
Unless you believe humans have a magical soul or some bullshit, the answer should be a clear YES
>>
>>8182363
If you prefer circlejerks, there is a site you should go back to
>>
I recommend to you all a book (you can easily find the PDF online, hosted by some university I can't remember) named "Artificial Intelligence: A Modern Approach".

The author talks about this right at the beginning. And don't let your robots throw garbage all over your floor.
>>
>>8182864

no
>>
>>8182899
Yes
>>
>>8181804

>it would only be a simulation of sentience

Unlike the internal simulation of sentience experienced by humans.

>faggot
>>
>>8182900

no
>>
>>8181858
>consciousness
>>>/x/
>>
>>8181858

No reason to believe what?

That humans are organic robotic machines?

We have every reason to believe that.
>>
>>8182904
>>8182900
Neither yes nor no
>>
Can't wait for this to happen!
>>
>>8182865
>>8181820
Whoa there cowboys, let's not get ahead of ourselves; let's stick to practicality.

Sentience is a classification problem, i.e. how can a robot determine which activities detected in the environment are caused by the robot's own actions. In practical terms, activities must be classified in order for the robot to know what it has control over and can stop at any time. Once classification is successful, the robot can then take any appropriate action.
Although this may sound simple, it is actually a pretty hard problem to solve, considering that the robot must account for novel activities and situations, and must also remember and use a memory system to determine activities that occurred some time before.
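As a rough sketch of that classification idea (not from the thread; the MotorLog name and the 0.5-second window are assumptions chosen only for illustration), a robot can label a detected event as self-caused if it recently issued a motor command that predicted it:

import time
from collections import deque

class MotorLog:
    """Keep a short history of the robot's own motor commands."""
    def __init__(self, window_s=0.5):
        self.window_s = window_s
        self.commands = deque()  # (timestamp, predicted_effect)

    def issue(self, predicted_effect):
        self.commands.append((time.time(), predicted_effect))

    def classify(self, observed_event):
        """Return 'self-caused' if a recent command predicted this event, else 'external'."""
        now = time.time()
        # drop commands that fell out of the attribution window
        while self.commands and now - self.commands[0][0] > self.window_s:
            self.commands.popleft()
        for _, effect in self.commands:
            if effect == observed_event:
                return "self-caused"
        return "external"

# usage: the robot pushes something and then hears a crash; an unexpected sound stays "external"
log = MotorLog()
log.issue("crash")                 # motor command expected to cause a crash sound
print(log.classify("crash"))       # -> self-caused
print(log.classify("footsteps"))   # -> external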
>>
>>8180422
Sentience in an AI will only be useful for interfacing with humans or sentient beings. It's superfluous otherwise.
>>
>>8183366
Hi there mr expart.

Too bad everything you said is based on some ad hoc framework, reasoned forth from nonexistent technology using nonexistent speculative frameworks of requirements and so forth.
>>
>>8183372
Sentience in an AI won't be useful at all.
>>
>>8183366
This has nothing to do with sentience though. And btw, an AI doesn't need to be a robot. Back to your pop sci youtube videos you go.
>>
>>8183376
I don't think that's true. You could have a near-human caretaker AI to comfort and give friendship to humans for example. Sentience could be a useful trait in relating to and identifying with humans in conversation and interactions in general.
>>
>>8183382
You don't need sentience for that.
>>
>>8183386
What are we talking about here. Sentience or the simulation of sentience? Because I see them as the same thing.
>>
>>8183374
>>8183378
Nigga are you saying self awareness is not an inherently fundamental part of sentience? How the fuck would there be a self to have experiences if you can't differentiate yourself from something else?
>>
>>8183394
Self-awareness has no effects. It's literally useless. A robot who isn't self-aware could produce the exact same behaviour.
>>
>>8183398
It is highly important for an agent to know what it has control over. If it's operating heavy machinery, using a remote control, or using any tool at all, it must know that it can stop the activity from occurring or continue its action.
It is also crucial to the survival of living organisms. If I'm a deer and I hear a sound, I have to be alert and prepared to run. But if I myself caused that sound, then I can relax and not worry too much.

All I'm saying is that self-awareness is a classification problem that has practical uses. If a robot knocked over a table some time ago and runs into the same table today, it should be able to say, "I knocked over this table." The same can be said about the mirror test, i.e. that object in the mirror is "me," even if we dress up the robot completely differently every day.
>>
Anyone saying sentient AI is not possible is basically arguing some weird dualistic position where organic matter somehow magically has the ability to have experiences, but not a computer.

>>8181858
Why is there no reason to believe that? Shouldn't the very logical, materialistic, scientific approach be to assume that any complex organisation of information can be sentient, no matter what kind of physical way they are represented?
>>
>>8183408
>It is highly important for an agent to know what it has control over
This has nothing to do with self-awareness.

>mirror test
There are still people taking this outdated pop sci shit seriously?
>>
>>8183416
What's your definition of self-awareness then, if it's not differentiation of yourself from the environment? If it's not the differentiation between "I caused this" and "something else caused this"?

No shit the mirror test is inadequate, but it's a start. It should be combined with a "did I knock this shit over" test, in which we set a robot loose and record it bumping into things. Then we ask it questions like "did you knock this trash can over? did you knock this chair over?" and see how accurately it answers. This would be a much better and more practical test than that retarded Turing test.
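A toy version of that "did I knock this over" test might look like the following (purely illustrative; the random bump simulation and the imperfect-memory agent are assumptions, and a real robot would answer from its own sensors and memory):

import random

def run_bump_test(objects, n_steps=20, seed=0):
    """Simulate a robot wandering around, record which objects it actually
    knocked over, then quiz it and score the answers."""
    rng = random.Random(seed)
    ground_truth = {obj: False for obj in objects}
    agent_memory = set()  # what the robot believes it knocked over

    for _ in range(n_steps):
        obj = rng.choice(objects)
        if rng.random() < 0.3:           # 30% chance of a collision this step
            ground_truth[obj] = True
            if rng.random() < 0.9:       # imperfect self-attribution: noticed 90% of the time
                agent_memory.add(obj)

    # quiz: "did you knock <obj> over?" and compare answers to ground truth
    correct = sum((obj in agent_memory) == knocked
                  for obj, knocked in ground_truth.items())
    return correct / len(objects)

print(run_bump_test(["trash can", "chair", "table", "lamp"]))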
>>
>>8183394
>Nigga are you saying self awareness is not an inherently fundamental part of sentience?

Self-awareness is a fundamental consequence of a sentient system, but by no means does it become sentient just because of self-awareness, any more than it becomes human because it has arms and legs.
>>
>>8183433
Self-awareness requires consciousness. It's a special kind of conscious awareness. A robot doesn't need consciousness in order to know who knocked over a chair.
>>
>>8180378
This. We don't even know if we ourselves are sentient or just programmed to think that.
>>
>>8183456
The fact that you're conscious is the only thing you can know. If you don't know if you are then you might be a p-zombie anon.
>>
>>8183456
>programmed
By whom?
>>
>>8183483
DNA
>>
>>8183484
And who programmed the DNA?
>>
>>8183487
It programms itself.
>>
>>8183497
And who programmed it to program itself?
>>
>>8183499
Cool we just reached infinite regression. Thanks retard
>>
>>8183487
>who
religitard confirmed: GTFO off of /sci/

DNA programmed itself through many eons of trial and error. (literally an heroing will contribute to its programming (on a time scale I doubt you can comprehend))
>>
>>8183453
Differentiation of the self comes first, before you can even attribute experience to a self.
>>8183454
The point is whether the robot can differentiate whether it itself caused an action.
The problem has not actually been solved. It probably requires machine learning and implementations of structures very similar to the human brain. All I'm saying is that someone who manages to solve the problem in the most general terms may also bring us closer to solving the problem of consciousness. They are very similar problems; solving one may help solve the other.
What I'm trying to do is connect consciousness, sentience, and self-awareness to a practical base. Otherwise there would be no point to building an AI, since there are plenty of sentient beings on this planet already, most of them being assholes.
>>
>>8183499
Time. Say there are a billion random events and configurations. The configurations that can replicate themselves will be the only ones that can outlast the decay of time. From there it is evolution.
>>
>>8183521
>Say there are a billion random events and configurations
Isn't it an extremely rare ((coincidence)) then that we evolved to be what we are today?
>>
>>8183516
>What I'm trying to do is connect consciousness, sentience, and self-awareness to a practical base.
By arbitrarily redefining them to mean something they never meant? That's retarded.

>Otherwise there would be no point to building an AI
There are lots of reasons to build an AI. The idea of "sentient AI", however, is pointless and belongs in the realm of fiction. Sentience in an AI would serve no purpose whatsoever.
>>
>>8183525
Well, you have billions of years of configurations to go through; eventually you will hit that one rare event. In fact the probability approaches 1 as time increases. All you need is one event to get things started. Then it becomes a competition of things adapting to each other and weeding out the weak. The only limiting factor I see is the supernova of the sun.
>>
>>8183535
>In fact the probability approaches 1 as time increases
Can you formalize this mathematically? Note that consequent events are not independent.
>>
File: puckerberp.png (726 KB, 698x769)
Muhck Zuckerfuck here, what do you think we are creating?
>>
>>8183539
Why not assume independence? If we have an ocean filled with amino acids and other organic molecules, we can safely say things will be random to a good degree.
Assuming independence, it can be solved with the binomial distribution. Let R be the probability of replication, which is very small but nonzero, N be the number of events, and K the number of successes. Since we only need 1 success to get the replication process going, K = 1. Then the probability P = (N choose 1) R/(1-R) * (1-R)^N, which approaches 1 as N tends to infinity. It's a trivial proof.
>>
File: t800jy7.jpg (332 KB, 600x741)
>>8182654
>Both humans and computers need energy to survive.

First place I'd go if I were a sentient AI would be the Moon.

>no environmental contaminants and corrosive factors (e.g. rust)
>lighter gravity makes self-assembly easier
>unlimited solar energy undiminished by an atmosphere
>vacuum makes temperature easily manageable
>the second most abundant element on the Moon is silicon

I'd probably turn Earth into a zoo.
>>
>>8183586
Brainfart.
Assume a binomial distribution. Let R be the probability of replication, which is very small but nonzero, N the number of events, and K the number of successes. Since we only need 1 or more successes (K > 0) to get the replication process going, we can use the cumulative distribution by adding P(K = 1) + P(K = 2) and so on. In other words the probability is
P(K>0) = 1 - P(K=0).
So then the probability P(K>0) = 1 - (N choose 0) (1-R)^N
P(K>0) = 1 - (1 - R)^N
which approaches 1 as N, the number of events, tends to infinity.
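A quick numerical check of that limit (a throwaway sketch; the example value R = 1e-9 is just an assumption chosen to show the trend):

# P(at least one replication event) = 1 - (1 - R)^N, for a tiny per-event probability R
R = 1e-9
for N in (10**6, 10**9, 10**10, 10**11):
    p = 1 - (1 - R)**N
    print(f"N = {N:.0e}: P(K>0) = {p:.6f}")
# the probability climbs toward 1 as the number of events N grows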
>>
>>8183586
>>8183617
>Look at me guys I made it through the first week of statistics!!
>>
>>8180316
Yes
proof: humans exist
>>
>>8180422
chinese room is practically impossible, so the thought experiment is not really relevant
>>
>>8183658
The Chinese room was a response to the Turing test. Both were wrong in that the issue has nothing to do with consciousness; it was a question about language. People fundamentally use language to direct movement and to communicate between agents in a physical world. For example, I tell you to go get the bows to shoot a herd of gazelles that has appeared in the north. Turing and Searle ignored the total environment and practical uses of language and somehow treated language as the end-all of intelligence.
>>
>>8180316
OP, think about it like this:
we evolved sentience due to natural selection.
With computers, we are capable of simulating any environmental pressure we desire and observing the changes it creates over generations. We are capable of steering evolution with simulated life.
Yes, we can make AI.
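A toy sketch of that "steer evolution with a simulated pressure" idea (not from the thread; the bit-string genome, target pattern, and mutation rate are all assumptions chosen only to keep the example tiny):

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]      # the "environmental pressure": match this pattern
POP, GENS, MUT = 30, 50, 0.05

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUT else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]      # selection: the fitter half survives
    population = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]

print("best fitness after evolution:", fitness(max(population, key=fitness)), "of", len(TARGET))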
>>
>>8183680
but in real life, you can't build a machine that is complex enough to have a deterministic response to anything a person might say in Chinese.
The complexity of the problem grows exponentially
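To put a rough number on that growth (back-of-the-envelope only; the 3,000-word vocabulary and 20-word sentence cap are made-up figures):

# size of a lookup table holding a canned reply for every possible input sentence
vocab = 3000          # assumed working vocabulary
max_len = 20          # assumed maximum sentence length in words
table_entries = sum(vocab**k for k in range(1, max_len + 1))
print(f"entries needed: about 10^{len(str(table_entries)) - 1}")   # roughly 10^69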
>>
>>8181811
Your brain needs to be replaced.
>>
File: le exploding lel.jpg (152 KB, 715x1538)
>>8183366
>practicality

>sentient ai thread
>practicality
>>
>>8183694
Maybe not yet. But we have to remember the basic advantage that language conveys: language aids in survival.
Language is a means. By itself it is only symbols and sounds. We have to find the total context in which language comes into effective use. For example, the waggle dance of a bee can be considered a language with its own set of rules. The dance by itself is worthless. But by recognizing the waggle dance, the bees can communicate and recruit other bees to collect more nectar.

Now we extend this problem to human language. Where and to what extent is human language used? This is another problem that needs to be solved.
>>
>>8183715
but the Chinese room is just a saved response for any sentence a human might say.
This problem is way too complex to ever be solved by a deterministic approach
>>
Unless you're extremely well educated in this specific field, I'd say it's kinda hard to say for definite where this will lead to.
>>
>>8183629
>asks to formalize mathematically
>do so
>>Look at me guys I made it through the first week of statistics!!
>>
File: NeilTysonOriginsA-Crop_400x400.jpg (22 KB, 400x400)
Is sentient nigger possible?
>>
>>8180316
Yes, it's possible. Now stfu about qualia or leave.
>>
I don't see why not. The real problem comes in distinguishing whether or not it is sentient, and/or what qualifies us but not it.
>>
>>8183950
Do you think a sentience gradient would work? There could be many qualities of sentience, and the more qualities of sentience something has, the higher up the gradient it is. With a gradient one can determine to what magnitude something is conscious, and base one's ethics towards it on that.
>>
>>8183958
What gradients would be used to determine if one is conscious?
>>
>>8184913
If it looks like a duck, smells like a duck, tastes like a duck. You know how it goes, it's a duck.

Same for sentience. I can't prove other humans are conscious either; I just assume they are, because they can walk, talk, eat, reason, and generally behave like me. If a robot can do that as well as any random human, I'll call it sentient, even if that may not necessarily be true. Hell, maybe it's just a philosophical zombie, but what difference does that make? At the end of the day, treating a robot with compassion and assuming the best costs me nothing at all.
>>
>>8180316
No.
If by "sentience" you mean "consciousness".
People who say it is possible have no idea what they are talking about. Enthusiastic scientists who are hyped about it are really just ignorant of the matter. Not stupid, just fixated and unable to comprehend the nature of the phenomenon they are talking about.

To put it simply:
in order to construct consciousness, you would have to know its mechanism and be able to manipulate the substance from which it is born.

Using the scientific method, or philosophy, or any analytical means really, won't allow it. Consciousness is a phenomenon that simply cannot be deconstructed and reconstructed. It is a monad of sorts.

I am sorry, but the attempt to create "consciousness" and to base it in computer science, neuroscience, and other "sciences" is fundamentally misguided. It is one of those problems that will probably never be understood, let alone engineered and created by humans. And my statement is not comparable to people once saying man will never control thunder or the like. If you look at consciousness, it is really fundamentally different from any external mysterious phenomenon.
>>
>>8185283
edit: the closest you could make is a machine that acts and does everything like a sentient, conscious being. But this does not equate to actually creating consciousness in the way you experience it, despite what many idiots in this thread are saying. Sure, you would not be able to distinguish between a machine acting like it is conscious and an actually conscious being, but that is really irrelevant to the original question.
>>
>>8180413
If you consider python as programming
>>
File: wizard pepe.png (204 KB, 500x362)
>>8180413
>python
>>
>>8186295
save this dead thread
>>
>>8181804
That is not what natural selection is though...
>>
>>8180316
If humans are sentient, then yes. If you're talking about qualia, then I don't know.