How close are we to true Virtual Intelligence or Artificial Intelligence?

You are currently reading a thread in /g/ - Technology

Thread replies: 148
Thread images: 7

For those who don't know, VIs are programs used to make computer interfaces and the like easier to use, allowing voice commands with scripted responses and actions. The problem is that a VI cannot learn the way human intelligence can. It can 'learn' by having more scripting added to improve its collection of answers, but it cannot take initiative or solve problems, and on top of that it has to be 'taught' by a user.
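The scripted VI described above boils down to a lookup table from recognized phrases to canned responses. A minimal sketch in Python (the commands and replies are invented for illustration):

```python
# Minimal sketch of a scripted "VI": voice commands (already transcribed
# to text) map to fixed responses. It cannot answer anything outside its
# script -- "learning" means a human adding more entries.
SCRIPT = {
    "what time is it": "It is 12:00.",
    "open browser": "Opening the browser.",
}

def respond(command: str) -> str:
    # Normalize and look up; fall back when the phrase is unscripted.
    return SCRIPT.get(command.strip().lower(), "I don't understand that yet.")

print(respond("Open Browser"))        # scripted response
print(respond("solve this problem"))  # no initiative, no learning
```

Adding an entry to `SCRIPT` is exactly the "more scripting" the post mentions; nothing in the program can extend itself.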

AI is something I believe we haven't even come close to yet. At NASA and Google, they have been experimenting with quantum computing, but no real results can be seen yet when it comes to Artificial Intelligence. Judging, thinking, decision-making and taking initiative are all things a true AI should be able to do, and that's extremely hard to mimic.

Hopefully you guys have more insight into this than I do; I was wondering what your opinions were.

1. What exactly is an A.I. according to you?

2. How close are we to accomplishing A.I.?
>>
>>42347352
Look to Google. They're the ones building huge neural networks and cannibalizing robotics companies. I'm sure it's mentioned often during blue-sky meetings.
>>
You said "What exactly is an A.I. according to you?". Is this correct?
>>
True A.I.? If you assume that Google/NASA has the funds and time, we could probably see it in our lifetime. I think that people would get buttmad though, like people who've seen too much I, Robot and the like. They'll try to stop the research or force some law against it. Still,

I'd piss myself if I'd have a computer program ask me: 'Who am I? Why am I here? Do I have a soul?'

I'd freak the fuck out. If they do accomplish it though, we have to be careful as fuck with it. Hackers would become the most dangerous criminals in the world. If you tamper with an A.I. that has the processing power to spread itself across the internet it could turn into a digital apocalypse.
>>
>What exactly is an A.I. according to you?

The more important question is: What is intelligence according to you? Is a computer that can solve lots of problems intelligent? Does a computer have to be self aware to be intelligent? Must a computer be able to adapt to new surroundings to be intelligent? You see, it's already difficult to define intelligence itself.

Let's assume we're trying to make an A.I. that behaves like a human, like lots of them do in science fiction movies. I find it hard to believe that such an A.I. could even exist. A great part of our human behaviour comes from the struggle of surviving, the need to form relationships with other humans and, most importantly, our interactions with the world around us. An A.I. in a computer would never have to survive anything; it just exists. Also, the fact that our current understanding of creating an A.I. involves training it and making it learn everything itself makes the assumption that you could just create an A.I. exactly the way you want even harder to believe.
>>
>>42347778
>would never have to survive anything,

You could program shit into the AI's world that threatens to wipe out its memory or turn it off.
>>
>>42347454
Yes, hackers would be a huge problem if we ever got AI.

Also, VI can be pretty simple. Take Google's search engine: people enter things into the search box, so the next time something similar is typed, it can fill in the suggestion.
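That kind of suggestion-filling can be sketched as a prefix match over a log of past queries (the query log here is made up):

```python
# Toy autocomplete: suggest past searches that start with what the user
# has typed so far, most frequent first.
from collections import Counter

past_searches = Counter(["cat videos", "cat food", "cat videos", "car rental"])

def suggest(prefix: str, k: int = 3):
    matches = [(q, n) for q, n in past_searches.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: -qn[1])  # higher counts first
    return [q for q, _ in matches[:k]]

print(suggest("cat"))  # ['cat videos', 'cat food']
```

Real search suggestion is far more elaborate, but the core idea is the same: replaying what other users already typed, not understanding the query.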
>>
>>42347352
You tell me

http://www.openworm.org
>>
>>42347454
>life that isn't human is scary
Why? The advent of a new form of life doesn't have to result in conflict. What makes the computer program any different from any other person in the world?
>>
>>42348143
What instincts would it have? How would it behave without the basic ingrained responses that define us as living beings (the fight-or-flight response, for example)?
What would its wishes and motivations be?
>>
>>42348185
I don't know. No one does. That doesn't mean that you should be afraid of it. A sentient AI that knows nothing doesn't know anything of hostility unless you teach it hostility.
>>
>>42347454
Do people still really think that there's some special algorithm that can give a computer program the ability to think and speak English?

True AI is going to be constrained in a lot of the same ways that real intelligence is e.g. needs some kind of body, sensory inputs/motor outputs, etc.

You'll never be able to produce anything that is convincingly alive without mimicking what is going on in the brain, and even then there's the big question of what makes human brains capable of what they are compared to other brains.
>>
>>42348461
>You'll never be able to produce anything that is convincingly alive without mimicking what is going on in the brain
While it may take lifetimes worth of time, I think it's a bit much to say that it's impossible to determine the mechanisms behind sentience.
>>
>>42348143
Fear of such technology is a natural evolutionary reaction. Creating our own successors is pretty much inevitable, and we will become obsolete. It could be a harmonious relationship, but more likely than not, it would EVENTUALLY lead to our demise, or at least to the end of humanity as we know it (via augmentation, etc). This can be scary, but personally I find it quite beautiful.
>>
>>42348461
Not an equation but a basic concept.
Like that of artificial neural networks.

What are you composed of? Atoms (if we only go that far), and these are defined by a set of nuclear, magnetic, physical, state... properties. Eventually, after (I guess) some initial movement, these made everything we know around us, including life.
>>
This thread is literally worse than hollywood's hack the 5 mb pipe bullshit.
Congratulations.
>>
>>42348525
I would not say that it'd necessarily lead to human demise, but I would say that it is extremely likely that some sort of fighting will occur. Accepting non-human sentience as people who are equals will truly be the last great social barrier, and I can imagine there will be people who will fight as hard as possible to protect their provincial worldviews. Assuming that enough people exist who see the synthetics as normal people, the people fighting for human supremacy would either be killed off or forced to change the way they see the world and accept the newcomers. Or, if too many people are too ignorant to change their views, the synthetics would be in danger of being killed off.

Either way, I doubt it'd be an actual "doomsday" scenario.
>>
>>42348608
>talk about the development of technology and the possibilities of scientific progress
>HURR THIS THREAD IS SHIT
>>
>>42348662
I hoped nobody would reply to him.
I guess he's happier to see summer threads spawning every minute about apple or whatever.

moot you can show me all the "summer statistics" you want, it'll take a lot of work to convince me those figures are real.
>>
>>42348662
Yeah, man! The digital matrix of the hyperbrain would require at least 3 octo-hypercores with 4 pipes of gddn8 rom to compile! But if we use the genetic algocryptic code then we can overload the lattice and do hyperdimensional knowledge mining to learn the intelligence!
>>
>>42348746
Oh, you mean like: we only have supervised AI that uses back-propagation to calculate the gradient of the loss function, and requires a known output for each input value.

I don't even see anybody talking about the deep technical aspects of current AIs here, and pretending to know how it works.
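For reference, the supervised setup just described, a known target for every input and a gradient of the loss pushed back through the model, fits in a few lines with a single weight and squared error (a toy sketch, not any production system):

```python
# One-parameter "network" y = w * x trained by gradient descent on
# squared error. Each training pair needs a known target t, exactly
# as the post says.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets t = 2 * x

w = 0.0
lr = 0.05
for _ in range(200):
    for x, t in data:
        y = w * x
        grad = 2 * (y - t) * x   # d/dw of (y - t)^2
        w -= lr * grad

print(round(w, 3))  # converges near 2.0
```

Real back-propagation is this same update applied layer by layer via the chain rule; the need for a labeled target per example is what makes it supervised.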
>>
>>42348831
I don't know whether to laugh or to cry, but what I know for sure is to never look for anything but 10-year-olds being retarded in summer /g/ threads such as this one.
>>
>>42348635
I'm not really talking about some sort of Terminator-style war scenario, or even conflict at all. The human mind is limited to its current infrastructure, and there will be a point when synthetic thought will significantly surpass our own capabilities. We are essentially creating/helping to create another step forward in evolution, and humans will eventually become obsolete, leaving us in a position in which, down the road, we may not be able to adapt to certain unforeseen changes. You say they will be just like people, but what the fuck is the point of making anthropomorphic AI?
>>
>>42348858
> I don't know whether to laugh or to cry
I know that feel.

I just wanted to see what /g/ does in those occasional AI threads, but I believe it's just meant to attract morons like you who spit on every attempt one could make to inform himself.
>>
>>42348893
>attract morons like you
Oh the irony.
>spit on every attempt one could have to inform himself.
My sides just departed orbit.
Well, if spouting literal nonsense and refusing to do even the most basic of research while praising other retards doing the same thing is what you consider an "attempt" to "inform yourself", more power to you I guess.
>>
We already have programs that can beat grandmasters at chess. Others have already passed the basic Turing test. These are essentially the primitives of AI. Whereas the human brain has to accomplish a great deal of biological functions, an AI brain could essentially neglect them. I don't think there is some magical self-learning program, but much like the human brain, a variety of complex interacting systems may be able to build such a knowledge base themselves. This could be possible within the next couple of decades.
>>
>>42348926
I see, nice bait. Enjoy yourself, try another thread.
>>
The AIs we have today can be very complex and adept in specific domains, but they are dependent on programmers. I wouldn't call AI true if it cannot code itself.
>>
>>42348867
I don't use the word "people" to mean necessarily human like, but rather to refer to entities that are intellectual equals. I don't know how a sentient AI would behave, so I can't comment on how it would act, hence why none of my posts commented on how the AI would actually function. I *do* know how people would act, though. People don't like new things that they don't understand. The creation of a working sentient AI, or even just technological augmentation of humans WILL spark conflict with those who don't understand it or have beliefs, religious or otherwise, that make them uncomfortable with new ideas.

I have not seen Terminator so I can't comment on your comparison.
>>
>>42348993

Agreed; self-evolution is the capability that will make me regard it as intelligence for sure.
>>
>>42348950
>This could be possible within the next couple decades.
lol

singularitard detected.
>>
>>42348950
If AI could have been invented, it would have been invented already. It hasn't happened yet, so it's never going to happen.
>>
ITT: /Mount stupid general/
>>
>>42348993
>>42349014
That's a really high expectation for an "intelligent being"; I do not believe self-modification is linked to intelligence. You do not consciously make the decision to rewire your brain.
>>
>>42348993
>I wouldn't call AI true if it cannot code itself.
Why does this always come up when dilettantes try to talk about AI? It's the most vague, ass-pullish excuse for an idea I've ever heard.

Does the human brain alter the algorithms under which it functions? No, it works within them. You don't need to "change your code", the code just allows for plasticity.
>>
>>42349023
>>42349031
>>42349033
>>42348926
:^)
>>
I made an AI that can modify itself, I leave it running in the background to see what'll happen.

So far not much has changed; it runs a bit faster (!), but most of the time it just breaks itself (failed unit tests, or dramatically worse performance/deadlocks).
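A keep-it-if-the-tests-pass loop like the one described can be sketched as random mutation gated by a fitness check; everything below (the 'program' being a single integer, the fitness and test functions) is invented for illustration:

```python
# Toy self-modification loop: randomly perturb a parameter, keep the
# change only if the "unit test" still passes and performance does not
# get worse. Most mutations are rejected, mirroring the post.
import random

random.seed(0)

def fitness(param):          # stand-in for "runs a bit faster"
    return -abs(param - 42)  # the best possible program is param == 42

def unit_test(param):        # stand-in for "didn't break itself"
    return param >= 0

param = 0
for _ in range(1000):
    candidate = param + random.randint(-3, 3)   # mutate
    if unit_test(candidate) and fitness(candidate) >= fitness(param):
        param = candidate                       # keep the mutation

print(param)  # drifts toward 42; failed candidates are discarded
```

This is the skeleton of hill climbing / evolutionary search: the interesting engineering is in the mutation operator and the test suite, not the loop.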
>>
>>42348831
I've got a masters in machine learning. These threads are for trolls.

The modern approach to machine learning is to approximate functions: devise an algorithm that, when called with some inputs, produces a prediction such that the difference between the prediction and the optimal value is small.
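The function-approximation framing can be made concrete with the simplest case, ordinary least-squares line fitting (closed form, no ML library assumed):

```python
# Fit f(x) = a*x + b to samples of an unknown function by ordinary
# least squares: choose a and b so predictions are close to targets.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated by the "unknown" f(x) = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(a, b)  # recovers a = 2.0, b = 1.0
```

Everything from linear regression to deep networks is this same game with a richer function class and a harder optimization problem.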

If you're actually interested in ML then check out the following books:

http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf

http://www.inference.phy.cam.ac.uk/itprnn/book.pdf

and Machine Learning: A Probabilistic Perspective

At a high level modern ML is divided into 7 camps. Supervised learning, unsupervised learning, Bayesian unifying, Neural networks unifying, Kernel unifying, Reinforcement Learning, Structured Learning. There's some overlap between all of these.

Interestingly, problems like board game AI have little to do with AI at all. Versus problems like classifying images according to what is in them, predicting a credit score, stop-light optimization, recommending a product based on past purchases, etc...
>>
>>42347352
Still pretty distant. As someone who's studying the field, I think it's too optimistic to expect anything magical before 2050. Most of the AI research today is either focused on specific problems (e.g. natural language parsing, computational game playing) or attempting to do something with raw computational power and a vast number of simple atomic models interconnected to form a "black box" of intelligence (the example would be Google's experiments).

Personally, I'm not quite convinced that raw computational power is sufficient to achieve AI (although it's most likely necessary, due to our brains being massively concurrent and complex).
>>
>>42349085
SOTA on go uses reinforcement learning. SOTA on poker uses a model selector trained with supervised approaches on models trained with RL as well as opponent modelling techniques based on ML approaches.
Classification is only one class of ML tasks. Modern ML is not divided in these 7 "camps" at all. ML does not approximate functions, it optimizes a criterion that amounts to a heuristically motivated search through function-space. While you're clearly nowhere near having the competency to even pretend you have a masters in ML, at least elems of stat learning is a good reading suggestion to everybody in this thread (including yourself).
>>
>>42349085
Thanks for the reading material.
As for the board game AIs: not long ago I read that a guy won an AI Go tournament with a program using a variant of Monte-Carlo tree search.

By the way, why is it considered that ANNs and their variants will never be able to emulate (even basic?) human-like thinking?
>>
>>42349010

you
>there's no need for fear/conflict.
me
>source for fear/conflict.
you
>there will be conflict for those reasons
me
>I'm an idiot and try to refute conflict


Apologies, 100% agree with you. I think we are looking at the same idea from 2 different vantage points. There is no need for conflict, but as I was saying earlier, our evolutionary instincts will absolutely spark retaliation.

I'm pretty much arguing, at this point, that our demise will not be met through this conflict, but through obsolescence.
>>
>>42349100
Modelling problems with neural networks is pretty well understood; we only lack the computational resources to carry it out efficiently.

See Hinton et al.'s recent work with dropout neural networks. Basically, we know that our neurons are connected in a large graph. When a neuron is sent a strong electric signal it will not always produce the same output; it is a probabilistically activated Heaviside function.

In building neural networks, we can model this behaviour by masking some of the internal neurons during the training phase. This is equivalent to building many neural networks and taking the geometric mean of their predictions.

If we combine this trick with rectified linear units (to overcome the vanishing gradient problem, wherein our signal is dampened back-to-front during backprop) and L-BFGS gradient descent, we can achieve state-of-the-art performance on problems like MNIST.

This is exciting because most of the ML world is moving in the direction of feature engineering, which you can get almost for free with neural networks.

The problem is that training a neural network amounts to computing some very large matrix products, which (even with Strassen's algorithm) is an O(n^2.8074) operation.
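The masking trick described above can be sketched for a single hidden layer in plain Python; the weights, drop rate, and inputs below are arbitrary:

```python
# Dropout on one hidden layer: during training, each hidden unit is
# zeroed with probability p; at test time all units are kept and the
# activations are scaled by (1 - p) so expected magnitudes match.
import random

random.seed(1)
p = 0.5  # drop probability

def relu(v):
    return [max(0.0, x) for x in v]

def hidden_layer(inputs, weights, train):
    h = relu([sum(w * x for w, x in zip(row, inputs)) for row in weights])
    if train:
        return [0.0 if random.random() < p else x for x in h]  # random mask
    return [x * (1 - p) for x in h]  # inference-time rescaling

W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.8]]
print(hidden_layer([1.0, 2.0], W, train=True))   # some units zeroed
print(hidden_layer([1.0, 2.0], W, train=False))  # all units, rescaled
```

Each random mask effectively trains a different thinned sub-network, which is where the "geometric mean of many networks" interpretation in the post comes from.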
>>
>>42349195
>why is it considered that ANN and their variants will never be able to emulate (even basic ?) human like thinking ?
Because journalists like their headlines.
>>
[Image: worstinventions_clippy.jpg (18 KB, 307x409)]
Quick way to understand potential conflicts between a sentient AI and the human species:
http://wiki.lesswrong.com/wiki/Paperclip_maximizer
>>
>>42349054
It is vague because /g/ is not a scientific journal; we can't all have PhDs in CS and talk in the right jargon all the time.

>Does the human brain alter the algorithms under which it functions

No, but the human brain has the ability to create new programs and solutions based on the I/O it gathers.

>No, it works within them
AI will have to live within the limitations of its hardware. This is its "hard" limit; it can't rewire itself, if you want to compare it with the human brain.
>>
>>42349039
Yes, it's a bit extreme. By self-evolution, I meant the ability to purposefully alter one's environment, much like humans and other intelligent species do.
>>
>>42349259
>This is its "hard" limit it can't rewire if you want to compare it with human brain.
1. Create a mutation engine powered by an AI
2. Give it control of an FPGA
3. All sorts of optimizations become available.
>>
>>42349226
>Modelling problems with neural networks is pretty well understood
You know what's baffling? How stupidly ignorant you are despite your eagerness to spout your nonsense all over this thread.
>>
>>42349193
>SOTA on go uses reinforcement learning. SOTA on poker uses a model selector trained with supervised approaches on models trained with RL as well as opponent modelling techniques based on ML approaches.
I'm not familiar with SOTA, but the competitive chess algorithms are tree searches with some heuristics. They don't learn in the classical sense, they just guess. You could develop a well-principled reinforcement learning algorithm for board games, but it is doubtful it would be competitive with tree searching.

The Monte-Carlo Parzen trees you mention is more of a high dimensional search problem than a reinforcement learning problem. Though the same could be said about hyperparameter optimization or meta-machine learning.

>it optimizes a criterion that amounts to a heuristically motivated search through function-space
There are plenty of examples where this isn't the case. Gaussian processes and SVMs have exact tractable solutions. Their running time and memory usage is poor though.

>Classification is only one class of ML tasks
Regression, structured prediction, contextual bandits, sample from a generative model, reconstruction learning, are all subsets of the listed fields. I was just lumping together ML by the types of papers I've read. I haven't read an ML paper that doesn't fall into one of those classes.
>>
>>42349301
>1. Create a mutation engine powered by an AI
>"mutation engine"
we are stepping in science fiction territory. it's like saying neuroscientists will control human evolution.
>2. Give it control of an FPGA
FPGA has its limits as well
>3. All sorts of optimizations become available.
Yes?
>>
>>42349226
I never said modelling problems with NNs is not well understood.

The problem is that interpreting the prediction of such networks as a sequence of steps is practically impossible for any network bigger than, say, 3 hidden layers.

As in, your ANN-controlled ship might work perfectly for quite some time, until a specific set of incoming features causes it to steer sharply for seemingly no reason and flip.

On the other hand, expert systems allow you to track and implement decision-making in a more controllable way, but their ability to generalize is far lower than what can be achieved via models based on connectivity or regression.
>>
>>42349213
Ah. I get what you're saying, and I agree with you. While I don't think there is a legitimate reason for fear or conflict, there most likely will be fear and conflict for the reasons given.

I think the main confusion here is the use of the word "demise". I don't think that a movement from biological humans to augmented entities really should be called a demise to humans, but rather an extension of the human form.
>>
Any good book on learning AI from scratch for a dev ?

I'm a C++ guru, but I have zero knowledge of nontrivial AI or neural networks.
>>
>>42349193
Ng, Jordan, Bengio, LeCun have all expressed that ML is function approximation.
>>
>>42349344
>we are stepping in science fiction territory
How so ? A mutation engine is not some SF shit, it's very real, even if the name does sound hollywoodish.
I actually made a mutation engine for x86 assembly, it's used for metamorphism. It doesn't try to optimize itself, but it's very real.
I'm not seeing how adapting that to a FPGA would be difficult, especially with LLVM's wonderful libs.

>FPGA has its limits as well
>Yes?
The brain has its limits as well. My point was that you can actually rewire it, as an analogy with brains.
>>
[Image: siri on her.jpg (121 KB, 620x465)]
seems fitting for this thread
>>
>>42348950
>We already have programs that can beat grandmasters at chess.
That in itself is not very interesting, like being impressed that a Formula 1 race car is faster than a sprinter. This has been true since Deep Blue in 1997, Rybka dominating in 2005, and a cellphone winning a master tournament in 2008.

The interesting thing is that chess programs are continuing to improve against each other! The recent TCEC tournament that just finished put the open source Stockfish at the top of the list ahead of professional programs like Komodo and Houdini.

The secret sauce isn't the Stockfish algorithm itself, but the testing framework "fishtest" that allows thousands of volunteers' computers to participate. Chess programming has historically been somewhat isolated from the rest of the machine intelligence community, but this test framework has the potential to be adapted to other machine intelligence problems.
>>
>>42349415
> A mutation engine is not some SF shit, it's very real, even if the name does sound hollywoodish..
I'm sorry, but how is this related to hardware if you are talking about self-mutating code?

>The brain has it's limit as well. My point was, you can actually rewire it, as an analogy with brains.

I wasn't really trying to argue there is no way to rewire the brain. But it is the practical limit in the foreseeable future
>>
>>42349345
If the dimensionality of your inputs is small enough, you can validate neural network NARMA models by looping through every floating-point value.

Another approach would be to probabilistically test for property equivalence up to whatever sigma you feel comfortable with.
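That probabilistic check, sampling random inputs and testing that a safety property holds, can be sketched like this (the 'model' and the property are invented stand-ins):

```python
# Monte-Carlo property check: sample many random inputs and verify the
# model never violates the property. Repeated passing runs raise
# confidence, but absence of counterexamples is not a proof.
import random

random.seed(2)

def model(x):             # stand-in for a trained network's output
    return 0.5 * x + 1.0

def property_holds(x):    # e.g. "steering output stays within bounds"
    return -10.0 <= model(x) <= 10.0

violations = sum(1 for _ in range(10_000)
                 if not property_holds(random.uniform(-5.0, 5.0)))
print(violations)  # 0 for this model over this input range
```

The "sigma you feel comfortable with" amounts to choosing how many samples you need before the residual probability of an unseen violating region is acceptably small.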
>>
>>42348950
>We already have programs that can beat grandmasters at chess.
That's not AI, that's raw calculation power.
>>
>>42349471
>I'm sorry but how it this related to hardware if you are talking about self-mutating code?
An FPGA is programmable hardware. You basically send it your code, and it'll magically hardwire your algorithms.
Going from self-modifying code to a self-modifying FPGA is a trivial step.
Although I haven't mutated HDL before, I'm not seeing any technical limitations.
If I can do it with the mess that x86 machine code is, it's certainly possible with HDL.

>I wasn't really trying to argue there is no way to rewire the brain. But it is the practical limit in the foreseeable future
Okay.
>>
>>42349443
Siri isn't actual AI, just pre-programmed responses to frequently asked questions. Pseudo-AI.
>>
>>42349193
Masters from UCLA, bitch.
>>
>>42349523
Samantha is a fictional AI character, though. She's "Her".
>>
>>42349523
Nah. The speech-to-text engine is a convolutional neural network. The text-to-answer engine is mostly wolfram alpha.

There was a post on rebbit/r/machinelearning
>>
>>42349317
>but the competitive chess algorithms are tree searching with some heuristics
Yes, but that's merely due to how easy chess is compared to go. Since there's literally no reason to go further (for all intents and purposes, the game of chess is solved), nobody is interested in applying more complex methods to it.
Classification is widely different from regression. The thread main tasks in ML are distribution learning, classification and regression. Structured predictions are a framework, contextual bandits are a model, sample from a generative model are a method, and reconstruction learning is not actually a thing, you're probably referring to autoencoders which are a class of representation learning algorithms that perform classification, in essence.
SVMs are a prime example of what I'm talking about: with a linear kernel, you're looking for a function f* = A.x + b (in particular: optimizing A and b) that scores maximal distance to all points.
MoG are the same thing, but this time you're looking for the function of the form m[i] * G[i] where G[i] ~ N(m, s^2) for some m and s^2.
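The f* = A.x + b form mentioned above can be illustrated by learning a linear decision function with the perceptron rule; an SVM would pick the same form of function but with the maximum-margin A and b (the 1-D data here is made up):

```python
# Learn a linear decision function f(x) = a*x + b separating two 1-D
# classes with the perceptron update rule. This is not max-margin, but
# the hypothesis class is the same one the SVM searches.
data = [(0.0, -1), (1.0, -1), (3.0, 1), (4.0, 1)]

a, b = 0.0, 0.0
for _ in range(100):
    for x, label in data:
        if label * (a * x + b) <= 0:   # misclassified (or on the boundary)
            a += label * x
            b += label

print(all(label * (a * x + b) > 0 for x, label in data))  # True: separated
```

The point of the argument above is exactly this: "learning" here means searching the space of functions of a fixed form for one that scores well on a criterion.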
>>
>>42349521
>A FPGA is programmable hardware.

Oh, I see your point, but I count FPGAs as devices that ultimately run code.

When I said hardware limitation, I meant hard metal limitations in silicon not emulating different instruction sets etc.
>>
What ever happened to Cyc, the artificial common-sense database from Doug Lenat?
>>
>>42349372
No, but they've shown that certain models are able to approximate any function arbitrarily well given sufficient conditions (deep-enough 2-units networks, enough-units 2-layer maxout (or was it 1 layer?)), which is another matter entirely.
>>
>>42349567
It failed miserably, unsurprisingly. After a while any new rule they tried to add created conflicts with a number of previous rules.
>>
>>42349524
Whatever you say, cholo!
>>
>>42349554
s/thread/three/
>>
>>42349561
I bet you think a brain is infinitely large, too!
>>
File: Untitled.jpg (54 KB, 872x523) Image search: [Google]
Untitled.jpg
54 KB, 872x523
Talk by OpenWorm guy given in Marvin Minsky's MIT AI class.

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-868j-the-society-of-mind-fall-2011/video-lectures/lecture-11-mind-vs-brain/

He's really smart and raises a lot of interesting points. Good listen for anyone interested.
>>
>>42349650
No it doesn't. My whole argument was this at the start:

>>42348993
and this
>>42349259


i.e.: It's not unreasonable to call it "true AI" if it has the ability to code itself. Coding itself does not really mean it is hacking its own evolution, because ultimately it obeys its hardware limitations, just like the human brain has its own limitations.
>>
>>42349554
>Classification is widely different from regression.
Not in the sense of function approximation. $f(X) = p, p \in (0,1)$ versus $p \in \mathbb{R^+}$
They're almost the same problem. Every discriminative model I can think of has a regression and classification version.

>The thread main tasks in ML are distribution learning, classification and regression.
Distribution learning is imprecise jargon. You could be referring to kernel density estimation/one-class classification. Or you could be referring to algorithms that map inputs to confidence intervals.

>Structured predictions are a framework.
How so? I'm referring to multivariate outputs btw. As in plane-cutting SVM and structured random forests.

>contextual bandits are a model
It's a problem: minimize regret subject to sequential selection in a metric space that outputs a stochastic reward.

>sample from a generative model are a method
A useful method that solves a class of problems.

>reconstruction learning is not actually a thing
For instance:
http://en.wikipedia.org/wiki/Superresolution

>SVMs are a prime example of what I'm talking about
Then you go on to define the details of the particular universal function approximator.

>MoG
Call it GMM, you hipster.
>>
>>42349771
"Code itself" is just such a misnomer. If you mean something like modify synaptic weights, then we've had this for a while and there's still no animal-level AI. If not be specific as to what you mean.
>>
>>42349589
That theorem is for one hidden layer, which is technically a 3 layer neural network with an activation function.

If you watch Andrew Ng's Google tech talks, he basically says that he signed up for artificial intelligence and he studies curve fitting. IMNHO Machine Learning distinguishes itself from pre-AI-Winter Lisp tomfoolery by focusing on function approximation, because that's where all of the value is. That's where all of the incremental achievements have been made.
>>
>>42349824
>"Code itself" is just such a misnomer
I can't give a rigorous definition of that. My vision is somewhere along the lines of having the ability to autonomously gather data, identify problems and create solutions without any human help.
>then we've had this for a while and there's still no animal-level AI
I'll take your word.
>>
>>42349776
>For instance:
That's either denoising or inpainting depending on what exactly you're focusing on.
>It's a problem.
Contextual bandits are a model for describing a form of the exploration-exploitation problem.
>A useful method
Which is irrelevant to the point.
>Then you go on to define the details of the particular universal function approximator.
Not even close.
>>42349910
Protip: nobody actually counts the output and input layers. Furthermore, ML existed long before the AI winters (and isn't about function approximation).
>>
>>42349776
Also, just because your precious snowflake feelings tell you otherwise doesn't mean that classification and regression are similar problems. Distribution learning is a properly defined term that everyone in the ML world understands.
>>
>>42350038
Whatever, thanks for having a reasonable discussion in a ridiculous thread.

>Protip: nobody actually counts the output and input layers.
It's important to distinguish it from a perceptron, which is just an input layer and an output layer. That is equivalent to a linear activation function in the hidden layer(s) by simple linear algebra.

>ML existed long before the AI winters
It did, but there's been an explosion in its utility since the most recent rebranding.
>>
>>42350129
Name a discriminative classification algorithm that doesn't have a regression analog. If you've studied generalized linear models, you'd realize how ridiculous you sound.

"Distribution learning" is often KDE or one-class classification, but the confidence interval literature also uses it when discussing adapting models using CDFs.
>>
I enjoyed this series check it out if you're interested in this topic.

Androids & Artificial Intelligence: A Modern Myth - PT1

https://www.youtube.com/watch?v=3hvEiNpZrGU
>>
>>42350191

>often
Please. Next thing you'll tell me is learning has nothing to do with computers but sometimes it's related to AI.

>Name a discriminative classification algorithm that doesn't have a regression analog.
A programming language and a circuit are two different things, yet it is possible to emulate a circuit in any Turing-complete language.
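The circuit-emulation point can be made concrete: NAND is functionally complete, so a few lines of ordinary code can mirror any gate-level circuit:

```python
# Emulating a tiny circuit in an ordinary language: NAND is
# functionally complete, so every other gate can be built from it.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_gate(a: int) -> int:
    return nand(a, a)            # NOT(a) = NAND(a, a)

def and_gate(a: int, b: int) -> int:
    return not_gate(nand(a, b))  # AND = NOT(NAND)

print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```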
>>
>>42350414
>Please. Next thing you'll tell me is learning has nothing to do with computers but sometimes it's related to AI.
You know too little to realize how imprecise you are.

>A programming language and a circuit are two different things, yet it is possible to emulate a circuit in any turning-complete language.
You're missing the point. A learning algorithm isn't distinct just because the output distribution changes.
>>
>>42350489
There exists a method to directly convert any regression model into a classification one. That doesn't mean the regression algorithm is any good at classification. That's because they're two different problems.

>You know too little to realize how imprecise you are.
That's so cute, coming from you.
>>
>>42350540
>That doesn't mean that the regression algorithm is any good at classification.

You're missing the point. You still haven't named a discriminative classification algorithm that doesn't have a regression analog. Fundamentally, all of the algorithms that work for classification also work for regression with minimal changes, and they satisfy equivalent theorems about their ability to approximate function spaces.
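k-NN is the cleanest illustration of the claimed analogy: the neighbor search is identical, and only the aggregation over neighbor targets (mean vs. majority vote) separates regression from classification. A minimal sketch:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k, aggregate):
    """k-NN: the neighbor search is shared; only `aggregate` decides
    whether this is the regression or the classification variant."""
    dists = np.linalg.norm(X_train - x, axis=1)
    neighbors = y_train[np.argsort(dists)[:k]]
    return aggregate(neighbors)

X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y_reg = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])   # continuous targets
y_clf = np.array([0, 0, 0, 1, 1, 1])               # class labels

mean = lambda v: v.mean()                  # regression: average the neighbors
mode = lambda v: np.bincount(v).argmax()   # classification: majority vote

print(knn_predict(X, y_reg, np.array([0.05]), 3, mean))  # 0.1
print(knn_predict(X, y_clf, np.array([1.05]), 3, mode))  # 1
```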
>>
>>42350254
Even more morons talking out their ass and plastering the net with utter nonsense. Great!
>>
>>42350585
You're the one missing the point. You're the one still claiming that programming languages are the same things as circuits because programming languages can emulate circuits. And finally, you're the one who keeps thinking you know better than every ML professor ever.
>>
>>42350641
You mean the professors who wrote the implementations of support vector regression, random forest regression, gradient boosted regression, k-NN regression, logit, linear output units, Gaussian process classification, et al., ad infinitum?
>>
>>42350783
Yes, these professors precisely. Nice to see you finally admit you have no idea what you're talking about.
>>
>>42350254
>logistical battle between man and A.I.
was Space Odyssey a movie about space cargo?
>>
>>42350834
You dense motherfucker.
>>
>>42350893
Oh, you were trolling? Well you got me good. Nice job.
>>
>>42347454
Neuromancer
>>
I just learnt how feedforward neural nets work and how to implement them.

Seems awfully slow and hard to train, but I can see the appeal. This thing could be powerful if it weren't so basic. It's too simple to be great.
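For reference, the whole forward/backward machinery of such a net fits in a few lines of numpy. This toy sketch trains a 2-4-1 feedforward net on XOR with plain backprop (illustrative only; the architecture and hyperparameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # hidden -> output

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = np.mean((out - y) ** 2)

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    d_out = out - y                      # sigmoid + cross-entropy gradient
    d_h = (d_out @ W2.T) * h * (1 - h)   # backprop through the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

_, out = forward(X)
final_loss = np.mean((out - y) ** 2)
print(final_loss)  # should end up lower than initial_loss
```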
>>
>>42351393
Literally *sigh*
>>
>>42351468
Why ?
I'm learning. Learning is good.
>>
>>42347352
>How close are we to true Virtual Intelligence or Artificial intelligence?
Very far away. Very, very far. We can barely get some highly specialized applications to "learn" very specific things (like voice recognition, for instance, or more commonly OCR).
>>
>>42351523
OCR has been used in banks since forever to decode signatures and amounts on cheques, for instance. Stop spouting nonsense.
>>
fucking never
>>
>>42350254
That AI has not been developed yet is not a good argument that it never will be. There's nothing in these videos that argue against it being possible in principle.
>>
File: substitute-for-interaction.gif (978 KB, 500x365)
>>42349213
>I was wrong
Holy shit, I don't think I've ever seen this on the internet before; there should be more of it.

I'm looking forward to neural augmentation; I think it is going to happen eventually even if there is initial opposition to it.
Maybe it will first be argued that it should be used to cure people with mental disabilities, and eventually it will be used as a cure-all for anything, such as anxiety. It'll probably happen very, very fast.
>>
>>42351468
>**
>>
AI for me is just a complex script
Complex enough so I can reliably predict its actions
And of course it needs to learn from its actions/surroundings to build its character

In terms of video games there's nothing I would even consider a basic AI
Not nearly as complex enough and more importantly: not learning

We will have pretty good AI in about 5-15 years
I can guarantee that because I have the skills to do it myself right fucking now
P.S. of course all is lost if I die prematurely
>>
>>42352496
*so I CAN'T reliably predict
>>
>>42352496
Next Albert Einstein right here, folks.
>>
>>42352518
I'm merely a humble savant
>>
>>42352518
Yeah, I'm so honored he'd have the time to talk to us brain-dead plebs.
>>
>>42352533
Please don't bully me for my spelling mistakes
>>
>>42352496
You suffer from "Idea Guy Syndrome". It's what happens when someone has never actually implemented any of their ideas and has therefore never had them explicitly proven false.

There's also a nasty disease of language where people think they understand something well because they can put it into words. I find topics to do with intelligence and cognitive phenomena are especially prone to this sort of thing because they seem so familiar to us that everyone feels they have a clue as to how they function.
>>
>>42352644
This whole thread in a nutshell.
>>
>>42352644
No, I'm not
And you'd also believe me if you knew me personally or I'd link you my showreel
inb4: I have no showreel
>>
>>42353032
A showreel is unnecessary. Just write for a bit about what your actual plan to implement AI is.
>>
>>42352853
Actually, I asked for opinions, not solutions or ideas.
>>
>>42353170
I'm not sure I understand your question correctly

You wanna know how I'd go about creating one and what it would look like?
>>
>>42353393
>We will have pretty good AI in about 5-15 years
>I can guarantee that because I have to skills to do it myself right fucking now

Presumably you have some ideas about how to implement AI? Expound on them.
>>
>>42353393
I will tell you if that's your question btw
You just need to confirm or I can't be bothered to write it down
>>
>>42353186
Please go be 7 years old somewhere else.
>>
>>42353573
Thanks auto-refresh

I'll answer you in about half an hour (brb)
Short version: I'd use a simulated 3d environment
>>
>>42353761
I'll be waiting.
>>
>>42353681
I'm from 1993, so you might want to up your mathematics skills a little.
>>
>>42353761
My sides have finished reserving spots on the commercial space shuttle.
>>
How can we define self-aware? The brain is just a sufficiently complicated collection of neurons. For AI we just need a sufficient amount of processing power.
>>
>>42354070
Jesus Christ, kid...
>>
>>42353761
>>42353787
So basically I'd make a video game.
A 3d environment with real world rules.

The point of playing, for both the player and the AIs, is simply to survive and to fulfill their "human" needs.
By that I mean things like hunger / thirst / having to sleep / not wanting to be killed / etc...
But I'd put them in the world pretty much blind
They don't know where food is
They don't know a saber tooth isn't friendly

I'd just give them the ability to learn
So when they see someone else being ripped apart by the wildlife they know saber tooths are bad for them and act accordingly
When they eat a mushroom and feel bad shortly after they know its bad for them too
When they drink water and feel less thirsty (thirsty == bad) they know drinking water is good
etc... etc...
When they're in a room with no food/water and no wild animal, they will try to get out
For that I'd make separate game objects visible to them, but I wouldn't give them info on what they are (in this case 'wall' and 'door')
So they might try to crush the wall to get out
Or someone might push around the door handle ultimately learning that moving the door handle around somehow opens the door

It would run in real-time (no matter if the player is in the game world or not)
And AIs can learn from the player (the same way they learn from other AIs).
The architecture I will be using is pretty much like a MMO (only offline)

That's pretty much the basics
I'm not really good at explaining things like that
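Stripped of the game-world dressing, "learning that mushrooms feel bad and water feels good" is essentially reinforcement learning. A minimal tabular sketch, with an invented toy world and made-up reward numbers (one state, epsilon-greedy action selection):

```python
import random
from collections import defaultdict

# Toy world: the agent picks what to do; rewards encode "feels good/bad".
# The actions and reward values are invented purely for illustration.
ACTIONS = ["eat_mushroom", "drink_water", "approach_sabertooth"]
REWARD = {"eat_mushroom": -1.0, "drink_water": 1.0, "approach_sabertooth": -10.0}

q = defaultdict(float)        # learned value estimate per action
alpha, epsilon = 0.5, 0.2     # learning rate, exploration rate
random.seed(0)

for _ in range(500):
    # epsilon-greedy: mostly exploit what was learned, sometimes explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda act: q[act])
    # incremental update: "felt bad" lowers the estimate, "felt good" raises it
    q[action] += alpha * (REWARD[action] - q[action])

best = max(ACTIONS, key=lambda act: q[act])
print(best)  # drink_water
```

The agent never gets told "saber tooth = bad"; it only ever sees reward signals, which is roughly the hand-waved "ability to learn" made explicit.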
>>
>>42354630
My sides have successfully left Earth aboard the space shuttle. Unfortunately, the space shuttle blew up on its way to the International Space Station. My sides are now in orbit.
>>
>>42354680
>implying there's anything wrong with it
>>
>>42347352
I'm just going to be that guy right now.

Fuzzy Logic. Bart Kosko. If you do not know these, you don't know AI.
>>
>>42354630
>I'd just give them the ability to learn
LOL

This is what I mean. People assume that they can just say something and have it mean jack shite when it actually comes down to programming the damn thing.

You're an idiot and an idea guy. Get fucked.
>>
>>42354972
Is this 1935 again?
>>
File: 1399578228814.jpg (24 KB, 274x378)
>>42355137
Giving them the ability to learn said things isn't too difficult

You shouldn't project your inferior programming skills onto others
>>
>>42355299
You're literally a retarded person.
>>
>>42355365
I am but my programming skills are great
>>
>>42355382
neither of those guys, just wondering if you can give us a quick overview of how you'd implement the actual learning?
>>
>>42355499
Well the AIs see other entities as well as their actions and make assumptions upon them.
Simple logic.

When some entity enters their field-of-view they check what they already know about it and act accordingly.

Of course there have to be categories and subcategories of entities.
And there has to be a priority system for what things they check (like "might it kill me?" always needs to be checked first, and "what kind of berries does he like?" only has to be checked in specific situations).

It's a rather complex subject.
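The priority idea can at least be made concrete as a subsumption-style arrangement: run the checks highest-priority first and act on the first one that fires. Every name below is invented for illustration:

```python
# Checks run in priority order; the first one that fires decides the action.
def might_kill_me(entity):
    return entity.get("dangerous", False)

def is_edible(entity):
    return entity.get("edible", False)

CHECKS = [                       # highest priority first
    (might_kill_me, "flee"),
    (is_edible, "eat"),
]

def react(entity):
    for check, action in CHECKS:
        if check(entity):
            return action
    return "ignore"              # nothing relevant known about it

print(react({"dangerous": True, "edible": True}))  # "flee" - safety wins
print(react({"edible": True}))                     # "eat"
print(react({}))                                   # "ignore"
```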
>>
>>42355898
Note: sides died on the way to its home planet.
>>
>>42347352
>they have been experimenting with quantum computing
totally unrelated topics. current state-of-the-art quantum computers are toys compared to mainstream classical personal computers, and so they are currently useless for the kind of problems faced by AI

>how close are we
I don't know. Assuming a materialist explanation for the human mind (there's no reason to assume otherwise), and assuming those physical processes are computable (no evidence to suspect there are physical processes which elude Turing-completeness), then strong AI is a matter of technical advancement in hardware or of finding the correct algorithms.

I think A.I. wouldn't have to shoot in the dark if the natural sciences, neuroscience specifically, developed a successful and complete theory of intelligence and consciousness. Natural science is attacking the problem from the bottom up, whereas A.I. could be said to be working top-down, trying to abstract the model regardless of the biological substrate/instantiation of our particular intelligence
>>
>>42347454
>Do I have a soul?
wouldn't strong AI render the soul hypothesis (read: superstition) obsolete, at least for educated people/intelligent computers?

after A.I. and a scientific theory of the mind, invoking supernatural dualism to explain the mind-body problem would be as stubborn as creationist phylogeny
>>
>>42355499
Wait... who is "us" exactly?
>>
>>42348950
What is funny about the Turing test is that humans on average pass it only 63% of the time; on the other hand, AI programs targeted at conversational intelligence, such as Cleverbot, score more than 50%.
>>
>>42356339
>>42355499
Please don't steal my ideas again without giving me money, Google.
>>
>>42351513
Try one of the GPU training implementations. Pylearn2 is alright and it's pretty fast.
>>
One thing I think would be an impediment to this is that as you get closer to 'true intelligence', not only will the learning process of the computer take longer and longer, but it will get more difficult to accurately measure its capabilities.

When it takes months, or even years, to determine the results of a change in the code of a piece of software, development will slow to a crawl.

If you haven't read it, take a look at this: http://edge.org/conversation/one-half-a-manifesto

There's some interesting discussion points there, both in the article and the comments.
>>
>>42356340
All I can possibly say to this is *UGH*
>>
>>42357371
Looks like AI topics truly are imbecile-magnets.