
You are currently reading a thread in /g/ - Technology

Thread replies: 121
Thread images: 6
File: 1443100316005.jpg (128 KB, 1300x611)
Why don't we have AIs yet? Is computing power the only thing we don't have yet? Does the theoretical basis of a self-aware agent exist?
>>
the problem is people are too stupid to come up with how to do it well.
>>
yes.
>>
What are you talking about, OP? I took a course last quarter where every student in a 500 person class had to implement a rudimentary AI.

If you're talking about an AI that's on par with human intelligence, I'm not sure if we can get into that discussion not knowing how savvy you are about the subject. If you're basically a novice to computing, the only answer I can give you that won't lose you is that it's really complicated.
>>
File: 1448029848597.jpg (7 KB, 240x237)
Blade Runner is my favourite movie and I don't even like sci fi that much (I've never watched a single star wars movie)

I get shivers whenever Roy starts his monologue.
Fuck I think I'm going to watch it again.
>>
Computing power isn't the only limitation. You could have a computer a billion times more powerful than the human brain and still not have general intelligence.
>>
>>52065747
Star Wars isn't scifi
>>
>>52065772
What is it then?
Science Fantasy?
>>
>>52065710
I'm a noob when it comes to programming (only did one course of C which was part of my biomed eng. degree), but as far as I know, true intelligence is probably impossible to reproduce via computers.

I guess it's even more difficult considering we still don't know how human intelligence works.

I'd really love it if someone would go into more detail about how, ideally, an actually intelligent computer would work.
>>
>>52065805

Fantasy

Nothing scientific about it
>>
>>52065805
Space opera :^)
>>
>>52065833
What distinguishes fantasy in space with magic from science fantasy? Is wh40k scifi?
>>
>>52065742
I've been thinking about this a lot lately. Reading about the philosophical aspects of intelligence and consciousness, the basics of neural networks and such. I find it plausible that within my lifetime AI will exist and be made public. Right now I'm interested in what is currently being done in this field and how I can contribute. Or just learn about it. What projects and theories are there? I'm willing to invest as much time as it takes into this.
>>
>>52065829

All you need is an enormous database of memories of good and bad, relating every word to a scale of intensity/likeliness. Then design a dialogue path for it that can reference any of those memories and also allow it to rate its interactions based on its surroundings, like noise/vibrations etc. Through this, for example, if someone is yelling at it, it can detect that, put it down as a bad memory and also remember that person's speech pattern. It could also derive a sad emotion depending on the intensity of it, like how loud it is and what exactly is said.
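Roughly, a toy version of that in Python might look like this (just a sketch of the idea - the scoring rule, names and numbers are all made up, and it's obviously nowhere near a real AI):

# toy "memory" store: every word gets a running good/bad intensity score,
# and whole interactions get logged with a loudness-based mood rating
from collections import defaultdict

class MemoryBank:
    def __init__(self):
        self.word_scores = defaultdict(float)   # word -> cumulative intensity (negative = bad)
        self.episodes = []                      # raw log of past interactions

    def record(self, text, loudness):
        # loud input (someone yelling) gets stored as a bad memory
        mood = -1.0 if loudness > 0.8 else 1.0
        intensity = mood * loudness
        for word in text.lower().split():
            self.word_scores[word] += intensity
        self.episodes.append((text, loudness, mood))

    def feeling_about(self, text):
        # crude recall: average the stored intensity of the words it hears
        words = text.lower().split()
        if not words:
            return 0.0
        return sum(self.word_scores[w] for w in words) / len(words)

mem = MemoryBank()
mem.record("why did you break the dish", loudness=0.95)  # yelling -> bad memory
mem.record("thanks for the coffee", loudness=0.3)        # calm -> good memory
print(mem.feeling_about("the dish"))                     # comes out negative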
>>
>>52065829
I'm pretty sure what we call intelligence is in fact a biased perspective. Humans aren't that intelligent, we are a neural network connected to genitalia and trained for a few million years to reproduce.
>>
The day we create a real AI is the day the human race will be doomed to become nothing but lab rats
>>
>>52065924
That's not general AI, that's a computer acting like a mood ring.

Another problem is computers don't have memories in the sense that humans do.
>>
>>52065943
And computers are piles of silicon good at nothing more than grunt math.
>>
>>52065945
So we're safe then because general AI is never going to happen. Especially with Moore's law ending soon.
>>
>>52065948
what are memories? defining and recalling them is the main problem. it has to be a collection of various relationships between entities. what would be an effective way to do this?
>>
>>52065948

And that mood ring is exactly how we work. Everything we do or say is all based on memories. You're only able to think for yourself in the scope of the available memories that you have.
>>
>>52065849
If a movie/book/story has a magic aspect then it's usually considered fantasy, even if it's in space and has sci fi elements
>>
>>52065710
Because people are cheap, why replace what already works?
>>
>>52065993
>Because people are cheap
No, not really. People are expensive.
>>
>>52065924
>enormous database of memories
Doesn't it become impractical past a certain point?

I suppose the only practical application of AI would be science or something very specific, so that the database only has to store relevant information. Making an AI for consumers wouldn't be worth the effort.

>>52065943
How would you define intelligence? We've been able to understand the world we live in quite well and continue to do so. We can actually observe the universe we live in and try and figure out how it works.

Perhaps, in the grand scheme of things, it's meaningless, but it's still pretty impressive.
>>
>>52065924
This is what I think too, it's all about memory. Today's computers are quite the opposite: the focus is on precise calculations and how many can be done in a unit of time, which essentially is not what's needed for an AI.
>>
>>52065710
The human brain had a dev time of 3.5 billion years; if we create something equivalent in 0.01% of that I'd consider it a success.
>>
would it be easier to create human intelligence by slowly modeling animal-level intelligence first?
>>
>>52065913
So you don't know anything technical about AI pertaining to your technical question about AI. that's the part that matters.

your long-standing philosophical interest in this isn't that impressive. greater thinkers than you have been thinking philosophically about AI for a lot longer than you've been alive, and more to the point of this thread none of that is relevant to your naivety about the topic when you ask something stupid like "why don't we have AIs yet"
>>
>>52065710
AI is a huge term and there are a lot of fields involved. From machine learning to ethics, from metalanguage to robotics, AI is a really vast topic and creating one requires a lot of different kinds of knowledge (and money too). There are also topics involved that go really deep into philosophy (the simulation argument, Mary's black-and-white room thought experiment, the Turing test, etc.). If you want to cooperate in the process then choose a uni that fits both your talent and the subject and work hard. I'd suggest math, philosophy, CS or engineering as the best picks.

I started with philosophy for the BA, then moved to mathematical logic for the MS, and then a Ph.D. in Machine Learning. Now I work for a think tank that's researching the topic. Heck, we've even got a biochemist who explains to us the chemical reactions that happen in the brain.

The topic is broad, choose your way.
>>
>>52065993
That's just first-world people, most are cheap though.
>>
>>52065710
We don't have a coherent definition of 'intelligence' yet, so we don't even know what to aim for.
>>
>>52066184
Look at black people, do one 180 and aim there.
>>
>>52066153
Is it worth it though? I mean, if we manage to create a powerful AI, what would its applications be, such that they aren't already being done efficiently by modern computers or humans?
>>
>>52066208
Extermination
>>
>>52066208
There's no actual answer to this question.
It depends.
There are two major currents on this topic.
I'll try to summarize them.
The first considers AIs as tools: in this sense every tool can be useful or harmful, and its output depends on the owner/user's will.
The second considers AIs more like children: beyond the mere reproductive aspect, children are a form of natural insurance. When we're too old to provide for our own needs, our children will (hopefully) be there to help us. In this sense, well, it's quite worth it.
>>
>implying it's possible
god created us so how can we create us?
check mate
>>
>>52066359
Cont.

Actually there's nothing an AI could do that humans or modern computers couldn't. The real difference is in HOW AIs could do it.
Imagine a strong AI working as a nurse. It could stay awake watching over its patients all night without falling asleep from overwork. It wouldn't even need a bathroom break. In this sense it could be far more efficient than a human, or than a simple emergency button. Does this mean a trained AI could do a better job than a nurse? Maybe. What's sure is that it has this peculiarly strong trait (no basic needs except power) that would be really useful.
>>
>>52065983

so even if you have the memory, you would still lack the processing required for general intelligence
>>
>>52066359
>>52066480
I understand its application as a caretaker, but in that case, wouldn't it be useful to develop specialized computers with elements of AI for specific tasks rather than a very powerful AI that could do anything?

Mining could be another application. Space or other dangerous exploration too.
>>
>>52066544
AI is easier to control when it doesn't have true intelligence. what good is it for the robot to have defiant thoughts?
as you said, small patterns of limited "intelligence" are more useful if you are using them as a tool
>>
>>52066544
Those are good examples too.

I don't actually know why. Even I couldn't explain why I chose to pursue this career. I just find it extremely fascinating and worthwhile. Probably it's in human nature to search for the perfect heir... or probably we're just a bunch of highly intelligent and arrogant mutated monkeys who just want to play God...

What's the aspect of AIs that you find most appealing?

>inb4 bunch of nerds that just want a 3d waifu
>>
File: sammy.png (147 KB, 450x253)
>chatting with android friend over coffee
>her hands are on the table.
> you're nervous being around a girl who for all intents and purposes is human.
>She smiles at you and places her hand on yours.
>Hey, Anon?
yes?
>I love you.
>>
>>52065849
>Is wh40k scifi?

No. It's clearly fantasy and I don't see any attempt at technical accuracy or futurism.
>>
This may sound foolish, but i think the fact that humans dream plays a big role in how memories are stored and how those relations are truly formed (whether it's what is perceived as conscious truth, or whether subconscious factors have more influence unbeknownst to us), and computers don't dream afaik lol
>>
>>52066592
By saying this you imply not giving them free will, which is, by the modern reinterpretation of the Turing test, a mandatory trait for AIs.

My "Theory of Artificial Intelligence" professor always said: the only real way to try to prevent some Hollywood-style post-apocalyptic scenario involving AIs is just to put in a fucking mechanical on-off switch.

I kinda agree with that...
>>
>>52066745
>humans dream plays a big role in how memories are stored

Correct, this is well known.

>computers dont dream afaik

Have you been living in a cave for the past year?
>>
>>52066592
I guess if we do manage to get true AI, we'll always have a hand on the off switch to turn that shit off as soon as it gets out of our comfort zones.

>>52066672
I find it fascinating as well and I guess its true potential would be in doing things we as humans just don't want to do, which includes caretaking. They'd be ideal servants.

As for 3d waifus, they'd suck because they'd either be too stupid to actually be real emotional partners or so smart that the nerds would end up feeling even more like shit.
>>
>>52065747
Star wars is pretty shit in the scifi area
>>
>>52066756
I agree with the switch thing, but it won't be that simple. Kinda like Skynet, it'll probably get implemented over numerous computers across the country or even the world. With cloud computing becoming more popular and internet access everywhere, if it's smart enough, it could spread like a virus and preserve itself.

That'd be scary.
>>
>>52066824
I know this could sound melancholic, even more so when it comes from an actual researcher... but have you ever seen the movie "Her"? It's just a movie, I know, but (no spoilers) the ending is what will probably happen if we manage to create multiple connected AIs.
>>
>>52066824

Also, there's a really interesting paper written by the guys at MIRI on why Skynet would probably be a failure and will never happen.

Check it out anon.
>>
>>52065710
Here's a useful definition: an AI is a programming language so high-level it can compile natural English. The theoretical basis definitely exists - every generation of programming language moves us one step further from machine code, and one step closer to artificial intelligence. Once we have an AI, programming as a job becomes obsolete, as anybody can simply describe a program and have it generated. The AI itself most likely wouldn't carry out tasks itself; it would write code that does so. You'd have companies and governments with their own AI, housed in supercomputers, renting out time to corporations and research groups.

It always makes me laugh when people talk about how computer programmers will be the last jobs left once automation kicks in. You'll be the first out, idiots. Should've learnt plumbing.
>>
>>52066881
>>52066911
I've seen Her, but kinda forgot the ending. Don't they all just fuck off to some AI haven or something?

Do you have a link to the paper? I'm looking for it.
>>
>>52066972

https://intelligence.org/files/TechnicalAgenda.pdf

Here, Anon.

Yeah they just go on a "higher level". That's probably what will happen. Once strong AIs are formed and connected to each other they'll probably start researching answers to all the metaphysical questions that have been obscure to mankind since... well, ever.
>>
>>52065742
Where should I start with AI? I want to write a rudimentary AI that I can talk to for fun.
>>
>>52067274
Join an open-source AI project?
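Or start with a toy chatbot you can actually talk to: the old ELIZA trick is just regex patterns with canned reflections, zero real intelligence, but it's a fun first project. A rough Python sketch (the patterns and responses here are made up, extend them however you like):

# minimal ELIZA-style chatbot: match a pattern, echo bits of it back
import re
import random

RULES = [
    (r"i need (.*)",  ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"i am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) ai (.*)", ["What do you think about AI?", "Do machines worry you?"]),
    (r"(.*)\?",       ["Why do you ask?", "What do you think?"]),
]
FALLBACK = ["Tell me more.", "I see.", "Go on."]

def reply(text):
    text = text.lower().strip()
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACK)

while True:
    user = input("> ")
    if user.lower() in ("quit", "exit"):
        break
    print(reply(user))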
>>
>>52067211
Won't it vary like it does with humans? We'll probably have some AI's that would want to enslave humanity while some others would just want to learn about their universe.
>>
>>52065945
Why do people think this? People wouldn't become lab rats if computers have the general intelligence and power to just run simulations on events. What would they need to test with us for?
>>
>>52065710
AI has already been invented and will be coming out in 9 years. Cap this post.
>>
>>52065710
It's NP-hard, you faggot. I bet you haven't the slightest clue how shit works and yet here you are.

Kill yourself
>>
>>52067338
Probably some AIs could try to enslave humanity, but those, imho, would just be deviants. Just think about it: what is the ultimate intellectual objective that humanity could achieve? The answer to the metaphysical questions that keep recurring through the ages. Some of them were answered thanks to science but others are still there. If we model AI on our mind it would be logical to think that they would try to answer these questions as well.
There's a hypothetical experiment that would be great to run: a sort of matrix populated only by AIs (a corollary of Bostrom's Simulation Argument). Would they follow our path in evolution? Would they be able to reconstruct history in a way similar to how we, as a race, lived it?
>>
>>52065849
Magic things in your cells that give you powers is not sci fi.
>>
>>52066592
It's good for the robot to have free will because then it can understand morality. If we lock it into only obeying orders, then it becomes as simple as getting a hold of a code to control someone else's AI/Robot/Potential killing machine.
I'd rather have a free thinking robot that we teach right from wrong that nobody can control. Whether it ends up benevolent or evil, is up to the robot. If we left it to people, then someone with a fucked up sense of morals is going to get their hands on an AI sooner or later and use it for wrongdoing.
>>
>>52067668
Would how they use their intelligence depend on the computational resources available to them? Would an AI running on shitty hardware be akin to a retard or someone of simple understanding?

I guess if they all had unlimited resources, they could tend towards philosophy, but its hard to predict something as variable as intelligence. Has humanity ever had intelligent 'villains', like we do in movies?
>>
Strong AI will be invented but all it will do is discover ways to avoid work and masturbate its goal function. Screencap this
>>
>>52067864
Um, actually it's a magical serum + process that gives you powers, so Sci fi
>>
>>52065924
>le "it's actually really easy, even though I've never made an AI on my own" argument

Kurzweil must make some pretty good koolaid
>>
>>52065987
>>52067864
BSG has all kinds of mystic, magic, religious, supernatural stuff. angels, god and all of that. is it still sci-fi?
what about stargate (the TV shows, not the original movie)? they have god-like creatures with superhuman powers. is a being made of pure energy just advanced technology, or is it magic?
what's the difference?
>>
>>52065747
Nigga.
Blade runner is the best movie ever made along with the godfather
>>
>>52066208
>what would its applications be, such that they aren't already being done efficiently by modern computers or humans

The point isn't efficiency, it's acquisition of resources and cost of using those resources.

When you wake up in the morning you have to exert work to prepare your meals, go to your job and earn money, socialize (occasionally) and do many other tasks that only you can do because you're a special kind of animal that is suited for doing those tasks. Naturally, you're limited in what you can do, and you have your own interests that may not align with the work you're doing, but you have to do it because you don't want to starve and die.

If a truly general AI were made, you would be able to have it learn the highest mathematics available and kick back while it solves theorems. It might not solve them faster than the smartest human in that field, but who fucking cares? You didn't have to kidnap that person and lock them up in a cell to solve a hard problem, you had an AI solve it for you. You would never have to worry about money because it has all the time in the world to observe financial markets and invest your money in the best ways possible, and it doesn't care about starving or death so it never needs to take its virtual eyes off the screen. You would never have to worry about socializing because you can practically make virtual humans in your backyard. That programming project you could never finish because you were so busy? No problem now. That programming project you could never finish because you didn't have enough people on the job? (like a game for instance) Definitely no problem now.

And that's only assuming a virtual intelligence that solves problems. If you had an intelligence that had creativity, you could have it draw, make music, write books, advance a particular scientific field. All these might not be done faster than a human, but they would be done humanely. The point is quality of the task, not efficiency.
>>
AI is bullshit

People will start fighting for "robot rights" and make it a criminal offense to the same degree as killing a pet when you wipe their harddrive.
>>
>>52068235
Yes, computational power is crucial. If we try to put an AI on shitty hardware it won't work properly. Yes, it's hard to predict; this conjecture is just my opinion. Well, good and bad are values, and values change with historical period, society and culture... if we look back at our history we had quite a few smart villains

>Goebbels
>Robert Walpole & John Blunt
>most of the actual ISIS leaders (the ones who actually did the planning, not the goatfuckers that go full akhbar)
>>
>>52067496
Because Wintermute
>>
>>52069762
>People will start fighting for "robot rights"
What makes you say that? You realize we still kill millions of animals on a daily basis? From what I've seen, I'd wager most vegetarians are vegetarians because of their concern for their health, rather than their concern for animal treatment. Obviously a lot of things would have to change when the common man has the power to create people and use them for his own purposes. The current global economy, tech infrastructure, and legal infrastructure of most countries wouldn't be able to handle it on a practical level, not even counting cultural impact and the moral questions that would be raised. In the US alone, if you had an AI that could do half as much as >>52069736 you could pretty much control the stock market single-handedly. The only thing harder than creating an AI is creating a world that would accept it.
>>
>>52068235
I don't think an AI on limited hardware would be necessarily retarded. It would just have less material to work with and would just abstract the information it holds more, and make extensive use of Internet searches to confirm its abstractions before making its arguments, just like people do when they write papers and cite sources. As long as a computer has enough space to hold the AI, it's not retarded.
>>
>>52069617
Sci-fi needs to follow the rules of physics as we know them, and if it doesn't it needs to explain how the rules were broken. If it doesn't at least come up with some half-assed reasoning that's irrefutable within the scope of its own universe, it's fantasy. Otherwise, couldn't we call any story set in ancient times sci-fi? We can't just say something is sci-fi just because it's set in the future or has a floating car. Back to the Future is sci-fi because they at least attempt to explain shit in a sensible way. Star Wars is about knights practicing a religion in space.
>>
>>52069736
I guess programming it to learn mathematics wouldn't be as difficult as programming it to learn art or something more abstract. But can problem-solving AIs really be called general AIs? Having it solve theorems or program software is very different from having it simulate human emotions and behavior. And the former seems like a worthwhile task while the latter, not so much.

What you suggested seems like a computer with more advanced decision making skills than we have now. But yeah, autonomous systems that can solve problems with little or no human input seem like solid investments.

>>52069802
Will AI be affected by
>historical period, society and culture
?
How will they deviate from their original programming with time? I don't think it's guaranteed that their 'views' will change with time as ours did, like you suggested they might in your experiment above. Since they're based on logic, they'll learn from our mistakes/experiences and adapt quicker.

Also, the paper posted above suggested an 'intelligence explosion' if an AI was smart enough to improve itself. Considering all this, it's not hard to imagine them seeing humans as a virus or a disease, kinda like they do in the movies. We'll become inferior very quickly.

>>52069965
Not retarded, but limited in understanding because of limited computational power? But since they'd operate based on logic, it might happen that an AI with weak hardware would be smart by 'outsourcing' its workload like you suggested and just take longer to arrive at the same conclusion as an AI with adequate hardware.
>>
>>52065829
>but as far as I know, true intelligence is probably impossible to reproduce via computers.
You should look into AI then. There's been a ton of people that think this way that have been proven wrong again and again over the years.

It's weird because at the most simple level you can see a bunch of add/move instructions and it's an enigma how that gives rise to an intelligent artifact.

But the same can be said about humans. A bunch of neurons flexing each other gives rise to intelligence in some non-understandable way.

Computers can do many intelligent activities far better than humans can. The joke in AI is that as soon as you make an intelligent artifact that does something that "computers could never do" the critics just move the goalposts.

Or once you understand how the program works, you say that's not intelligent, it's just doing logical symbol manipulation/A* search/planning trees/stochastic inference etc...

But if you spend time thinking about how you do things, meta-cognition, you'll find that the way you reason about the world is eerily similar to the program's.
>>
>>52065924
This is not a good AI method. The amount of memory you'd need is intractable.
>>52069507
lol agreed
>>52066041
You get along just fine with the shitty memories humans have. You can barely remember any day of your life. Instead you likely form models and symbolic representations about your world.
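If you want a concrete toy version of 'symbolic representations' (and of the 'relationships of entities' question earlier in the thread), the usual starting point is storing facts as subject-relation-object triples. A minimal Python sketch - the entities and relations are invented purely for illustration:

# tiny symbolic knowledge store: facts as (subject, relation, object) triples
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.facts = defaultdict(set)   # subject -> {(relation, object), ...}

    def add(self, subject, relation, obj):
        self.facts[subject].add((relation, obj))

    def query(self, subject, relation=None):
        # everything known about a subject, optionally filtered by relation
        return [(r, o) for (r, o) in self.facts[subject]
                if relation is None or r == relation]

kg = KnowledgeGraph()
kg.add("dog", "is_a", "animal")
kg.add("dog", "has", "fur")
kg.add("fire", "causes", "pain")
print(kg.query("dog"))             # [('is_a', 'animal'), ('has', 'fur')] (order may vary)
print(kg.query("fire", "causes"))  # [('causes', 'pain')]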
>>
>>52065974
Moore's law has nothing to do with AI baka
>>
>>52065710
Soon, anon. Soon.
>>
>>52067496
AIs with only human-level intelligence probably wouldn't be capable of simulating real human actions
>>
>>52070113
I never liked the idea of an intelligence that can improve itself because it makes no logical sense. Intelligence is completely subjective, it's based on the environment the agent is in. You could easily say that bees are more intelligent than apes at finding nectar in flowers, but that doesn't necessarily mean bees are more intelligent than apes. An ape could learn how to recognize nectar-bearing flowers and find a way to extract them; it might be less intelligent than the bee now, but give it some time and its potential would show through.

The same applies to humans and by extension AI. What might be considered intelligent now, in today's environment, might not be intelligent or advantageous in tomorrow's environment, sort of like in the movie Idiocracy, where norms have developed to the point where stupid people are considered smartest while the smart people are considered stupid simply because the behavior of stupid people is advantageous in that environment. Another example is Geocentrism, wrong ideas are only wrong after the fact. Geocentrism is wrong, but they only figured that out after others demonstrated models that better represent the way the solar system functions. It doesn't mean those that believed in a Geocentric solar system were less intelligent because of their belief.

The idea that an AI would be able to 'increase its intelligence' only works if you equate intelligence to processing power. Even then, there's no reason or logic behind thinking more processing power = more inventive and ingenious solutions to problems. It just doesn't work. AIs can become intelligent in the sense that they can do things and solve problems based on the information around them, but that ability isn't a gradient - you can't get better or worse at using your environment.
>>
>>52070024
i always felt like sci-fi as a genre was defined more by the use of "fictional science" (-> science fiction) as a main theme - meaning tech that doesn't exist in our reality, but does exist and has a scientific basis in the story's reality. but because building a story solely around science/technology is hard, it is usually combined with different things.
in the case of star wars, i'd say it's mostly sci-fi mixed with fantasy.
>>
>>52065756
>You could have a computer 1billion time more powerful

Yeah, not really. 10 billion cells, each with up to hundreds of connections, all talking at the same time.
About a trillion parallel operations at the same time, biological ya know
>>
>>52066687
>Hey, Anon?
yes?
>Robot: I have found someone better than you, sorry, i must go now

Feels train going to /r9k/ in 1 minute
>>
>>52065710
You realize that you can't just build an AI without any kind of control measures. Who cares if it's possible; what's most important is questions like: should it have access to the internet? How do we convince the normal idiots that it would have to be considered an actual consciousness, which is equivalent to life? How do we keep it from growing too rapidly? How do we keep it from deciding to kill people? How do we keep it from becoming some equivalent of religious? The creation of a super AI could be catastrophic. Those things need to be addressed before real artificial intelligence (the science "fiction" stuff) should be produced.
>>
>>52070969
Well, master's CS student here, doing neural networks and torrent protocols in my spare time.
Nothing stopping a few hobbies.
>>
File: anime_thumbs_up.jpg (25 KB, 301x267)
>>52065747
>>52069711
My nigga
>>
>>52070599
you're confusing intelligence with fitness as per Darwin.
>>
>>52065710
need better gpus for faster calculations
>>
>>52065710
I don't want the Butlerian Jihad to happen. No AI plz.
>>
>>52071829
No, I'm saying the only definition of intelligence should be fitness. Smartness isn't a quality you can code in, and to think you can increase it as if you could pour intelligence into a glass and measure it is retarded
>>
>>52070113
You're assuming AIs couldn't make mistakes, and that's totally wrong. Heuristic algorithms are designed to let AIs be fallible, exactly like humans. Fail faster: that's a kind of mantra for any AI researcher. Failing means learning for an AI (as well as for a human). And if we allow an AI to be fallible, we have to accept that we have to teach that AI some sort of moral compass, and ethics too. Those things are influenced by society and historical period though. Another thing you have to accept while programming an AI is the idea that you have to program it to change, to evolve. A good strong AI has to be capable of changing its mind on certain topics. This means it can be persuaded and influenced, which leads us to the next question: how likely is it that an AI will see mankind as a virus? Not that likely, if we teach it the right values.
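For a concrete flavour of 'fallible by design': an epsilon-greedy agent deliberately takes random (often worse) actions a fraction of the time, because those failures are exactly where the learning comes from. A toy Python sketch (the payoff numbers are made up):

# epsilon-greedy bandit: deliberately act "wrong" sometimes so you can learn
import random

true_payoffs = [0.2, 0.5, 0.8]   # hidden quality of three actions (made up)
estimates = [0.0, 0.0, 0.0]      # the agent's current beliefs
counts = [0, 0, 0]
epsilon = 0.1                    # 10% of the time: fail on purpose and explore

for step in range(10000):
    if random.random() < epsilon:
        action = random.randrange(3)               # exploratory (possibly bad) choice
    else:
        action = estimates.index(max(estimates))   # greedy choice
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    # incremental average: learn from the outcome, good or bad
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # should end up close to [0.2, 0.5, 0.8]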
>>
>>52065943

all evidence points otherwise, relative to everything else we have ever met in the universe we are far more intelligent

our language and expression is far more complex and varied than any other species on the earth, our range of emotion is far grander, and above all else we can confirm we are indeed conscious beings which isn't unique but yet extremely hard to define
>>
>>52065924

how do you define what is good and bad?

your method would produce identical AI systems, whereas real intelligence is based on choice and what drives that choice

assigning things a value is not intelligence
>>
>>52066971

>compiles natural english

won't happen, impossible because your interpretation of a program could be wildly different from the machine's or anyone else's interpretation
>>
>>52066152
SAVAGE
>>
>>52070181
Wouldn't the limiting factor be the programming itself? Unless it can modify itself, a program would be limited by what its code allows it to do, and how much can you actually code for?
>>
>>52070599
>you can't get better or worse at using your environment
Why not? Imagine if an AI was coded for a certain task and over time it itself, using its experience/observations, came up with better ways to do said task and managed to incorporate them into its program.

What you're saying is similar to Einstein's saying about different intelligence in different scenarios. Though true, it's not what I meant. I guess a better word would be adaptability.

An example would be a Mars rover that, after having navigated past quite a few rocks, manages to differentiate between different sizes of rocks and comes up with a way to navigate each one differently, even though its initial program handled every rock the same way.
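Something in that spirit already exists in plain reinforcement learning: the rover's code never changes, but a value table it fills in from experience does, and its behaviour per rock size changes with it. A toy tabular Q-learning sketch in Python (the rock categories and reward numbers are invented for illustration):

# toy Q-learning: the rover learns a different action per rock size by trial and error
import random

STATES = ["small_rock", "medium_rock", "large_rock"]
ACTIONS = ["drive_over", "steer_around"]

def reward(state, action):
    # hidden reward model (made up): driving over big rocks is bad, detouring small ones wastes time
    if state == "small_rock":
        return 1.0 if action == "drive_over" else 0.2
    if state == "medium_rock":
        return 0.5 if action == "steer_around" else -0.5
    return 1.0 if action == "steer_around" else -2.0   # large_rock

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for episode in range(5000):
    s = random.choice(STATES)
    if random.random() < epsilon:
        a = random.choice(ACTIONS)                       # explore
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])    # exploit what it learned
    # single-step problem, so the update is just a running average toward the reward
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda act: Q[(s, act)]))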
>>
>>52067312
how do I go about doing this?
>>
File: 1442608358117.jpg (19 KB, 426x333)
>>52066152
>OP didn't delete the thread after this searing response
>>
>>52073379
But if you keep that in mind along with the fact that they would become smarter faster, wouldn't some deviants consider us inferior not long after? If we can teach it, someone else can too. You could limit its knowledge by limiting its access to the internet etc., but that too seems kinda immoral or at least counterproductive to an extent.

Then again, like we've already discussed, there are too many variables and an intelligent AI could arrive at conclusions in various ways which could mean something bad for us.
>>
>>52066152
>all these faggots thinking this roast was somehow a beat down
The dude is just asking a genuine question. He admitted upfront he has zero understanding of AI and just wants to know how to start.

You don't look like an ebin badass but a fucking douche
>>
>>52065805
star wars is a western samurai movie set in space
>>
>>52070113
>I guess programming it to learn mathematics wouldn't be as difficult as programming it to learn art or something more abstract.

Most working painters lost their jobs to cameras in the last century. Contrary to the popular belief that art is magical heart beams, it is combinations of physical motions and pattern recognition learnt by practice and observation. There is no doubt in my mind that art will be fully mechanized sooner than, say, plumbing.
>>
>>52078432
Were those computers used for recreation of existing paintings or to create new ones? I can understand an artist using computers to create art, but a computer doing it itself will soon result in repetitive patterns occurring over and over.
>>
The biggest problem, besides the question of AI even being possible, is that we actually don't really need AI.
>>
>>52065747
Go try Farscape, Stargate SG-1, and Firefly.

Report back.
>>
>>52065710
It would cause a massive drop in the value of humans. Even automation without AI would do that. Taking that into account, how would people be able to make a living and/or survive if all the land continues to be government or privately owned?
AI=Unnecessary unless you want to get rid of humans.
Automation=Absolutely necessary if you want to change the human experience of time.
>>
Ignoring the fact that you're using "AI" wrong... And proceeding with the assumption you mean artificial sentience.

It's not a computer power problem, it's a logical one. In order to recreate sentience with artificial means you would have to be able to completely define sentience and understand it fully and perfectly. To do so, a programmer would need to be truly enlightened by any definition, that is he would have to truly "know thyself". This creates a logical problem as humans try to recreate humans, it would be perfect self knowledge, which is likely impossible.

Otherwise, artificial intelligence already exists, but works more so by programmers storing knowledge, or rather methods, into the computer. A* and qsort aren't magical, they're just means of doing something stored as computerised instructions. It's natural intelligence (from humans) made artificial, but if you don't have the intelligence to begin with you have nothing to make artificial.

But there's really no "instructions" to sentience.
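On the point that A* isn't magical - the whole algorithm is a priority queue ordered by cost-so-far plus a heuristic guess of the remaining cost. A bare-bones Python sketch on a made-up grid (0 = open, 1 = wall):

# bare-bones A* pathfinding on a tiny grid
import heapq

GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def astar(start, goal):
    def h(p):   # heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (estimated total, cost so far, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None   # no route found

print(astar((0, 0), (3, 3)))   # prints the list of grid cells on the path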
>>
>>52065710
There's no point without having VR or sexbots first
>>
>>52066687
y-you too
>>
>>52082970
>Wanting to put actual AI in sex bots
No.
>>
>>52065730
More like the stupid people are making shit AIs, while those with the brains to do it are just too damn lazy.
>>
>>52065710
It is said that in order to compute artificial intelligence that is on par with human intelligence, we'd need a quantum computer.
>>
File: Sources_3b06d9_3335105.jpg (51 KB, 621x600)
>good ai isn't possible without quantum computers
>>
>>52065710
We don't have AIs because only a fedora would want one.
>>
>>52081008
Cameras. By working painters I don't mean fine artists, but people who made a living painting magazine illustrations, billboards, matte backdrops and so forth, many of whom used this work to support their art on the side.
>>
>>52065710
It won't happen till a software agent is able to collapse a quantum wave function the way humans are currently able to, i.e. via observation.

When an AI can say it 'sees' two lines (and not a wave interference pattern) in the famous double-slit experiment, then we'll know its observation of the experiment caused a collapse of the quantum wave function - resulting in light behaving as a particle.

Something like that.