
You are currently reading a thread in /g/ - Technology

Thread replies: 97
Thread images: 9
File: 640_ex-machina-4.jpg (22 KB, 640x427)
i wish people would stop using the term "ai" until REAL ai exists
>>
>>54686483
that's because you're a fucking retard that has no idea what an AI is
>>
>>54686483
I wish people would stop being illiterate sci-fi children. There is not and never will be "real AI" or "true AI" or "strong AI", because those terms were only made up for stupid children's books or Hollywood movies. Actual AI research doesn't give a shit about your fantasies. Actual AI research is about algorithms and statistics.
>>
>>54686483
ai literally means the theory and development of computer systems able to perform tasks that normally require human intelligence

a fucking roomba has artificial intelligence, albeit not as intelligent as cleverbot
>>
>>54686483
i wish uneducated faggots would stop moving the goalposts on what they think "ai" is
>>
>>54686483
>I wish people would only use the science fiction definition of a term and not the actual, accepted definition
Your and idiot kill you'reself
>>
>>54686483
>Hey man, what are you working on?
>I can't say
>>
>>54686504
There is no basis for saying that there never will be. We have no idea how to make it yet, but that doesn't mean that it will never happen.
>>
>>54686483
ai lmfao
>>
>>54686873
that's true but there is no reason to believe it will happen either (in the next 200 years or so), aside from "a lot of things can happen in the future"

we are nowhere even close to understanding how to begin to approach it, or even whether it's fundamentally possible on current circuitry

in the far-far future, like 500 years (or 10,000 years for that matter), we'll probably have a perfect map of the brain and will be able to design "machines" based on it, and better
>>
>>54686483

The problem is movies have made everyone think it's human-level intelligence or bust.

Two problems. First is that we underestimate the damage low-level AI can do. The army is using drones to target militants and going with AI to find them. We have multiple times blown up people who were actually forces for good, who taught people that the militants were crazy. We have also killed reporters who take similar steps to hide themselves, like encryption and turning off phone tracking. Just wait for the first self-driving car to decide who to hit in a no-better-choice situation. Or some asshole puts AI on a train and some unforeseen situation occurs that makes the train "decide" to crash 10k passengers into a wall at high speed.

Also it is a very big, very dangerous mistake to think that high-level AI will see the world the same way we do even if we make it. There is a good chance it will have completely unexpected results. Which is why neural networks are so fun right now. Look at the guys who built the AI that plays Go. They have no idea what strategies it uses or how it comes up with its moves; they just used a combination of learning techniques and a little survival-of-the-fittest to make a machine that plays the game better than a person. Someday an AI is just going to do shit no one saw coming. Not just something like Skynet, which has a very human motivation. Something that we simply cannot comprehend.
>>
>>54687033
>who to hit in a no better choice situation
This is arguably not a bad thing if the AI is making an informed and unbiased decision
>>
>>54686504
>no and never be "strong AI"

Nils J. Nilsson, a founding AI researcher, disagrees with you here, anon: http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General%20Essays/AIMag26-04-HLAI.pdf

The truth is in the middle - general AI (in the form of deep reinforcement learning) is possible, but it won't look like a stereotypical movie walking talking robot meme.
>>
>>54686944
>in the far-far future it like 500 years (or 10,000 years for that matter) we probably have a perfect map of the brain and can probably design "machines" based on it and better

It won't take that long if you're following the ML field. The progress in machine learning/deep learning over the last 5 years has been literally staggering. Read about the new state-of-the-art results. There is already superhuman visual object recognition, voice recognition, good machine translation, description of scenes in natural language, question answering (Facebook bAbI dataset), visual question answering, algorithm learning (Neural Turing Machines, Neural GPUs), and finally really strong reinforcement learning (DeepMind's DQN and A3C algorithms).

The last one, reinforcement learning, is really important because it's the basis for general intelligence. By rewarding/punishing a reinforcement learning agent you can train it to execute very complex tasks, for example navigating a maze from pixel input or controlling a robot body to do some task. DeepMind is on the cutting edge of this kind of research; you can read about their latest agent here: http://arxiv.org/abs/1602.01783

With such a rate of progress it's not inconceivable that a sub-human but general intelligence could be developed in 10-20 years, or even sooner.
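The reward/punishment training loop described above can be sketched in a few lines of tabular Q-learning. This is a toy corridor world, not DeepMind's actual setup; every name and number here is illustrative:

```python
import random

# Toy corridor: states 0..4, start at 0, reward +1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Environment dynamics: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # TD update: nudge Q toward (reward + discounted best future value).
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy: 1 means "move right", toward the reward.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

The agent is never told "go right"; it only sees the reward signal, which is the whole point of the RL framing.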
>>
>>54687539
not inconceivable in 20 years but I'm not holding my breath
>>
>>54687578
I believe in DeepMind, they make very cool algorithms http://arxiv.org/abs/1605.06065

with this in place it may be that general AI is "just" an engineering/scaling problem now.
>>
>>54686583
Every single thing you said in this post is wrong.
AI is about creating systems that behave rationally, make rational decisions.
AI has nothing to do with machines performing tasks done by humans. That's a side-effect of a good intelligent system. Many systems focus on performing work that would be impossible or very time-consuming for humans to perform.

Roombas shouldn't even really be compared to cleverbot, but even if you would, please explain to me what makes cleverbot more "intelligent" than a roomba? Cleverbot learns replies (in a very ineffective way imo) to give the illusion that you are talking to a person, when really it's spitting out a phrase that was said to it at some point. Although it kinda seems smart because it's speaking English, and PEOPLE speak English and PEOPLE are the smartest ever, the mechanics of what it is doing aren't that impressive. A roomba cleans your floor, learning the most efficient path to take, given ANY floor layout as input. It is complex, robust, and effective. But it doesn't talk to you, so I guess it's not going to star in the next fucking terminator reboot.
>>
File: 1462110565848.jpg (7 KB, 276x207)
I don't give a shit about any of this "science". When are scientists going to build a decent sex bot?
>>
>>54686483
>wanting real advanced AI to exist
I bet you want us to get our asses kicked in a war against the machines and have our planet cucked away from us too.
>>
>>54688340
Couldn't care less. It's not like I have any attachment to this world anyways.

No friends, no family and no loved ones.
>>
>>54687965
A mandatory youtube lesson from Dr. Hutter about the formal definition of AI

https://www.youtube.com/watch?v=F2bQ5TSB-cE

TL;DW: "an agent's ability to achieve goals in a wide range of environments"
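That definition can be caricatured in code: weight the agent's score in each environment by the environment's simplicity and sum. All scores and complexities below are made-up illustrative numbers, not anything from Hutter's actual formalism:

```python
# Caricature of Legg & Hutter's universal intelligence: expected score across
# many environments, weighted toward simpler environments (weight 2^-complexity).

def universal_intelligence(scores, complexities):
    """scores[i]: performance in environment i (0..1); complexities[i]: bits."""
    return sum(s * 2 ** -k for s, k in zip(scores, complexities))

# A narrow agent: perfect in one environment, useless everywhere else.
narrow = universal_intelligence([1.0, 0.0, 0.0], [3, 4, 5])
# A general agent: merely decent, but in every environment.
general = universal_intelligence([0.6, 0.6, 0.6], [3, 4, 5])
print(narrow, general)  # → 0.125 0.13125: generality wins
```

The point of the weighting is that an agent exploiting a single environment can't outscore one that performs acceptably "in a wide range of environments".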
>>
>>54688340
Nope. I want to live in Utopia, that's it. Post-scarcity can be achieved with general AI, possibly even with subhuman-tier one.

A subhuman but general AI can be trained once to do empirical science in a domain of your choice. Then you run 1000 copied instances of said AI on AWS or GCP, and they solve your problems for you. Compare this to a human scientist's/engineer's lifecycle: 20+ years of learning, then 10 years of on-the-job practice before he can contribute valuable results, then 20 years of productivity, then retirement. Every human is valuable and different, but an AI can be copied at marginal cost.

Applying AI to science/engineering can have a profound effect on human wealth. Anti-ageing research is the obvious potential application (Google/DeepMind's investments and rhetoric confirm this, see the interview http://www.theverge.com/2016/3/10/11192774/demis-hassabis-interview-alphago-google-deepmind-ai ).
>>
>>54686483
fucking subjective
>https://www.edge.org/conversation/gary_marcus-is-big-data-taking-us-closer-to-the-deeper-questions-in-artificial
>>
>>54686617
>stop moving the goalposts

hey nigger. I was debating the definition of AI on a 1999 vbforum. No goalpost moving here.

AI will never exist until we've literally cracked the human "soul". And if that ever happens, then god save us all (not from machines, but from humans who take advantage of it).
>>
File: smug_sakura.jpg (29 KB, 337x404)
>>54689317
>AI will never exist until we've literally cracked the human "soul".

>this is what they actually believe
>>
>>54689317
>2016
>unironically believing in souls
>>
op here. the pic is just a movie still.
there is no ai right now. maybe there will be one day, but today there isn't.
>>
>>54689398
>Implying modern machine learning isn't AI

there is no human-level AI, but limited AI exists.
>>
>>54687385
>machine learning
>unbiased

pick one
>>
>>54689588
An ML model is only as biased as the data you train it on. Garbage in, garbage out.
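Toy illustration of garbage in, garbage out. The "model" is deliberately trivial (predict the majority training label) and the loan data is entirely hypothetical:

```python
from collections import Counter

def train_majority(labels):
    """A trivial 'model': predict the most common label seen in training."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical skewed loan data: group B was historically under-approved.
group_a = ["approve"] * 90 + ["deny"] * 10
group_b = ["deny"] * 95 + ["approve"] * 5

model_a = train_majority(group_a)
model_b = train_majority(group_b)
print(model_a, model_b)  # → approve deny: the model reproduces the skew
```

Nothing in the training procedure is "unfair"; the bias lives entirely in the data, which is the point.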
>>
>>54686483
Ex Machina wasn't a movie about AI.

It was a movie about not trusting lying bitches
>>
>>54689605
well, that depends...
there are forms of ML where the bias is not strictly linked to the model (e.g. DT learning)
>>
>>54689681
Hmm you have a point
>>
File: 68a.jpg (29 KB, 400x366)
>>54689606
You must be 18 years or older to post on this board.
>>
>>54689380

I don't believe in souls.

"souls" is just the word of respect I use for something 20+ centuries of humankind hasn't cracked despite having unlimited resources.
>>
>>54689735

Think about it. The main character fools himself into thinking that he's some kind of hero liberating the next generation of intelligent beings.

In doing so, he lets Eva's looks/sexuality sway his judgment, starts making completely irrational decisions (especially at the end, when Nathan has already revealed to him everything he needs to know), and completely and blindly trusts Eva, handing her everything she needs to betray him

The film is only about AI if you realize that it's really saying "don't overthink AI".
>>
>>54689766
Humans are not the only animals that exhibit intelligence. Higher apes, dolphins and crows appear to have considerable problem-solving ability (intelligence) as well.
>>
File: ML2.png (27 KB, 537x300)
>>54689733
Also, unbiased machine learning certainly means overfitting, which also means "you're fucking this up, pls staph"

pic related
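For anyone who hasn't seen the bias/variance tradeoff in action, here's a minimal toy demonstration (synthetic data; a lookup-table "memorizer" with zero bias vs a deliberately biased linear model):

```python
import random

random.seed(1)
# True relationship is y = x plus noise; separate train and test samples.
train = [(x, x + random.gauss(0, 2)) for x in range(200)]
test = [(x, x + random.gauss(0, 2)) for x in range(200)]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Zero-bias model: a lookup table that memorizes every training point.
table = dict(train)
def memorizer(x):
    return table[x]

# Biased model: assume the relationship is exactly y = x.
def linear(x):
    return x

print(mse(memorizer, train), mse(linear, train))  # memorizer wins on train
print(mse(memorizer, test), mse(linear, test))    # linear wins on test data
```

The memorizer hits 0 training error but roughly doubles the noise variance on fresh data; the biased model never fits training perfectly but generalizes, which is exactly the picture in the bias/variance curve.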
>>
File: tumblr_n6alweVNdX1rcwa0zo4_250.gif (1 MB, 245x200)
>>54689766
>Try to explain
>Sounds even more autistic than before
>>
>>54689802
Kudos for using "bias" correctly in the statistical sense
>>
>>54689802
I meant that bias induced by ML algorithm selection is way more abstract (and probably unrelated to bias human would exhibit in similar setting) than bias induced by biased training data.

>Also, unbiased machine learning means certainly overfitting
Hmm, that's a strong statement. I know we can minimize overfitting by using Minimum Description Length as a model selection/regularization criterion (though it is rarely used in practice due to being computationally heavy).
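A back-of-the-envelope sketch of the MDL idea: total cost = bits to transmit the model + bits to transmit the data given the model. The 32-bits-per-parameter cost is an arbitrary assumption, not part of any standard MDL formulation:

```python
import math

def data_bits(seq, p):
    """Shannon code length (bits) of a 0/1 sequence under a Bernoulli(p) model."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    ones = sum(seq)
    return -ones * math.log2(p) - (len(seq) - ones) * math.log2(1 - p)

seq = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1] * 10   # 100 bits, 70% ones

# Model A: a single fitted Bernoulli parameter (1 parameter to transmit).
PARAM_BITS = 32                              # arbitrary cost per parameter
simple = PARAM_BITS * 1 + data_bits(seq, sum(seq) / len(seq))

# Model B: memorize every bit (one "parameter" per bit, zero residual cost).
memorizer = PARAM_BITS * len(seq)

print(simple, memorizer)  # MDL prefers the simple model: fewer total bits
```

The memorizer "fits" the data perfectly but pays for it in model bits, so MDL rejects it; that's the sense in which MDL acts as a regularizer.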
>>
>>54689490
stop it

>>54689606
you're not even "i"
>>
>>54689813

are you muttering at me again? i have no clue what you said.
>>
>>54687033
Not really. The problem is people think Siri, Watson, Google Self-Driving Car Project, and AlphaGo are "AI".

AI was a cool little thing to help deadbeats who won't (or can't) go to work get grant money so they can shuffle around CS dept hallways for a few more years without harassment. But this use of terminology has crept into the normal lexicon and now it must be stopped.
>>
>>54689865
There are plenty of methods to reduce overfitting, like C4.5 (decision tree) or SVM

>mfw we are the only ones actually talking about AI
>>
>>54686873
>we
>>
>>54686483
real AI has existed for decades, moron

i fucking hate all these retards fuck

this guy knows what's up
>>54686504
>>
>>54690087
Those "algorithms and statistics" that poster speaks of are not AI.
>>
>>54690393
Intelligence is a quantifiable property.

Machine Learning & Artificial Intelligence researchers are interested in building systems that automatically solve hard problems (by learning on data and/or interactions with environment). They do this by establishing standard benchmarks and comparing their system performance against these benchmarks.

It's all quantified. The stronger your AI system is, the better scores it gets. The benchmarks are representative of real world problems, so you can expect a better real world performance as well.
>>
>>54690450
That's "just" better software, not AI.
Better software is great, I'm all for that. But let's drop the AI label.
>>
>>54690510


Define AI
>>
Will we ever be able to even create AI? We don't even know much about consciousness yet and will probably never be able to recreate it.
>>
>>54686617
>>54689355
>>54689380
>>54689796
it's really stupid to use the term "soul" here, because there's nothing supernatural going on. Humans (and animals) are real physical objects we can look at. They are made up of many components ("organs") ensuring the object as a whole can work, including one that manages to retain memory, interpret stimuli from various sensors, develop and learn language, feel emotions, and make decisions. In short, what we call intelligence.

Okay, so if you look at the organ that does this (the brain and nervous system), you see it's made of billions of neurons, which are specialized cells that basically have multiple inputs and outputs, trigger each other constantly (really, they are not that different from transistors, the basic building blocks of modern computers), and can build new connections. Through that process of signals being sent back and forth trillions of times a second, emotions and all that stuff happen. We have no clue how or why that is the case.

But if we find out how it functions (if that's even possible), and if we find out how to build, replicate and modify it, we will have achieved what OP calls "real AI". Except it won't have anything to do with "artificial" intelligence anymore. It will be real intelligence by any reasonable measure: if "real" intelligence (what we have) is just billions of cells sending 1s and 0s and building new connections between each other, then a synthetic implementation using silicon/software or whatever that does the exact same thing on a different platform will be the same thing.

This thing will be able to be happy the same way we can be happy, and suffer the same way we can suffer. These machines, if we build them in man's image, will be just as responsible for man's decline, and it will be as immoral to send them to war as it is with humans.
>>
>>54690510
Software that is able to learn from experience is different from ordinary software.
>>
>>54686498
I just came here to say this.
>>
>>54690832
>consciousness
C-word is a meme. The best measure of AI agent's generality is its total score on a benchmark composed of various environments. You don't need any "consciousness" in an AI system, you need "only" learning & general problem solving capabilities.
>>
>>54690890
Congratulations for buying into the reductionist materialism meme hook, line and sinker w/o even realizing it.

You've committed yourself into a historically naive delusion in which you believe our current technological context can be appropriately used to describe the building blocks of human consciousness. And I bet you think you're so fucking smart, too.

>protip: the human body, let alone the brain, is not a computer, it's not even a machine, it's an organism
To describe either in terms of a machine is an insult and it only highlights your own ignorance. Stay in school you dumb fuck.
>>
>>54691027
>You don't need any "consciousness" in an AI system
>don't need consciousness
>you need general problem solving capabilities
What the fuck do you think consciousness is in this context? Being able to write emo poetry?
Moreover, strong AI needs to be able to understand, recognize and solve problems without any context. That is the algorithmic equivalent of being able to forget that you forgot, at will.
>>
>>54686483
I wish people would stop making moronic threads on /g/ until they know what they're talking about.
>>
>>54687385
The answer is the person with the best insurance gets to live. Thankfully you can automatically negotiate your insurance during the crash, in a bidding war over who gets to survive.

Hopefully you and your family have good enough credit to take out the sort of loan required to survive.
>>
File: 1463154597589.png (175 KB, 909x579)
>>54690890
Your argument assumes that the only way to achieve Strong AI is to emulate the human (mammal?) brain to a high degree of precision (i.e. computational neuroscience-tier models), and that these models will be ethically problematic because they will be similar to human brain even if you consider low-level cellular dynamics.

I agree that such models could raise ethical questions, BUT your first assumption is completely unnecessary.
There is no proof that you really need to emulate low level human brain biology to make a strong AI agent. Modern Machine Learning proves it: even though deep learning models are composed of very simple units (orders of magnitude simpler than real neurons), they still can achieve human and even superhuman performance in computer vision, and good performance in other domains.

The most obvious path to strong AI is scaling modern deep learning techniques, and this path doesn't create any ethical issues at all because the systems are completely unrelated to human brains. Deep learning NNs are just sequences of matrix multiplications, simple vector arithmetic, convolutions and nonlinearities.

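For the doubters, a complete forward pass of a tiny MLP really is just matrix multiplies and nonlinearities. The weights below are hand-picked and untrained, purely to show the mechanics:

```python
import math

def matvec(W, x):
    """One matrix-vector multiply: the core operation of every layer."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, u) for u in v]

# Tiny 2-3-1 network with hand-picked (untrained) weights.
W1 = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]   # 2 inputs -> 3 hidden units
W2 = [[0.7, -0.5, 0.2]]                        # 3 hidden -> 1 output

def forward(x):
    h = relu(matvec(W1, x))            # matmul, then nonlinearity
    y = matvec(W2, h)[0]               # matmul again
    return 1 / (1 + math.exp(-y))      # sigmoid squashing at the output

print(forward([1.0, 2.0]))  # a number in (0, 1); that's the whole "magic"
```

Real deep nets differ only in scale (millions of weights, more layer types like convolutions), not in kind: there's no component anywhere that resembles a neuron's biology.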
>They are made up of many components ("organs") ensuring the object as a whole can work, including one that manages to retain memory, interpret stimuli from various sensors, develop and learn language, feel emotions, make decisions. In short, what we call intelligence.
This is almost true, except useful definitions of intelligence don't include feelings, qualia and other such vague stuff. A machine doesn't need to "feel" to learn to solve problems.
>>
>>54691006
Not really.
>>
>thinking humans have consciousness
We are just billions of cells combined together that call themselves "anon". Consciousness is just an illusion; human intelligence is the same as the AI we are creating, except the hardware and algorithms are a lot better.
>>
>>54691401
>What the fuck do you think consciousness is in this context? Being able to write emo poetry?
Laymen often mean "conscious experience" and qualia by consciousness. To ML practitioners it is obvious that consciousness is irrelevant to the problem of creating problem-solving AI.

>Moreover, strong AI needs to be able to understand, recognize and solve problems without any context. That is the algorithm equivalent of being able to forget that you forgot at will.
Solving problems without any context sounds unrealistic. The latest neural networks are able to use their short-term memory, though, i.e. learn how to learn: http://arxiv.org/abs/1605.06065

>>54691278
hello /lit/
>To describe either in terms of a machine is an insult
or should I say /x/ ?
>>
>>54691740
CNNs are NOWHERE near how a "true AI" should work, buddy.

You can't compare a CNN made to recognize photos to a human, even if you train it with millions of photos.

Biology needs to make "another" (because the first one didn't live up to the expectations) major breakthrough to make "AI" realisable.

Probably after 15-20 years, assuming synthetic diamonds will be used to make quantum chips. (And assuming Google and a few other corps will provide the technology, which is 50/50 for now.)

Meanwhile those breakthroughs (if they happen) will probably be used first to enhance humans; then people will use them for "AI"s.
>>
Then it'd just be intelligence
>>
>>54691893
>Solving problems without any context sounds unrealistic
>what is strong ai

>or should I say /x/ ?
Totally justifies you comparing full-fledged humans emerging from fertilized eggs to pieces of silicon. You just went full euphoria.
>>
>>54687965
>AI is about creating systems that behave rationally, make rational decisions.
A roomba has obstacle detection and makes the RATIONAL decision to avoid those obstacles.
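That "rational decision" can literally be a one-liner: pick the action with the highest expected utility given the sensor readings. The sensor values below are illustrative, not from any real roomba:

```python
# Toy "rational agent" in the roomba sense: read sensors, pick the action
# with the highest utility.

def choose_action(sensors):
    """sensors: direction -> distance (cm) to the nearest obstacle."""
    # Utility of a direction = clearance in that direction; farther is better.
    return max(sensors, key=sensors.get)

readings = {"forward": 5, "left": 40, "right": 120}
print(choose_action(readings))  # → right: steers away from the wall ahead
```

That's the textbook "rational agent" framing: no understanding required, just a utility function over perceived states.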
>>
>>54691911
>CNNs are NOWHERE near how a "true AI" should work buddy.
They are a very good candidate for the visual frontend of a general AI. There is even a paper showing that CNN representations are similar to ones found in the mammalian visual cortex: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003915

>You can't compare a CNN made to recognize photos to a human even if you train it with mils. of photos.
You can http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/
http://people.idsia.ch/~juergen/superhumanpatternrecognition.html

>Probably after 15-20 years assuming synthetic diamonds will be used to make quantum chips.(And assuming Google and few other corps will provide the technology, which is 50/50% for now).

Lol, the staggering progress of recent years didn't require any exotic hardware; there is no reason to expect that to change. GPGPUs and ASICs are totally OK.

>>54692001
>>what is ai
"A system that can be trained to achieve complex goals in a wide variety of environments"

>>what is strong ai
"A system that can be trained to achieve complex goals in a wide variety of environments, and does it at least as well as a human"
>>
>>54692001
>Totally justifies you comparing that full-fledged humans emerging from fertilized eggs to pieces of silicon. You just went full euphoria.

It is normal not to care how your black-box model works if it gets good performance on benchmarks. AI is an empirical science. Silicon, carbon, germanium - it doesn't matter if it works.

So far silicon works really well and there is no pressing reason to move away from it.
>>
why do retards confuse artificial intelligence with organic intelligence?

should we drop the label "mechanical transportation" or "transportation" for cars and trucks because they're not alive or aren't horses?
>>
File: blue_pill.png (65 KB, 277x592)
>>54692235
>why do retards confuse artificial intelligence with organic intelligence?
Because they are dumbed down by the media. We should educate them.
>>
>>54692235
It's the AI people who have created this idea that AI means a type of intelligence similar to human intelligence, except that it can increase exponentially.

I, for one, have argued that artificial intelligence is an oxymoron, intelligence is only natural and present in a living being. Nothing artificial can be intelligent, ever. If it's artificial, it can't be intelligent. It doesn't adapt, it doesn't feel the need to adapt, it doesn't live, it doesn't feel, it doesn't have any motivation to do any of the things a living intelligence does. So, it can't be intelligent, because intelligence implies an effort made by a living entity to get past a practical problem by inventing solutions, using its sensory abilities and its capacity to actually think, organically.

So, in this sense, artificial intelligence will never exist. It's a contradiction in terms.
>>
>>54687523
It's pretty common knowledge that all of the founding researchers of AI were spectacularly wrong about their predictions
>>
>>54687539
All of these advances are off the backs of two techniques that were invented decades ago but only recently became feasible due to hardware advances, namely CNNs and LSTMs.

DQN is also just a combination of some really old techniques with neural networks thrown in.

These are low hanging fruit.
>>
>>54692087
I love how the argument infinitely retreats with you. You throw three or four things into each post, and drop whatever didn't stick in the next.

That is why I will not bother.
>>
>>54692389
>except that it can increase exponentially.
>exponentially
buzzword

>It's the AI people who have created this idea that AI means a type of intelligence similar to human intelligence
Similar in capabilities, sure. Though now it is understood that intelligence is a continuum and that many animals possess intelligences of varying strength.

>I, for one, have argued that artificial intelligence is an oxymoron, intelligence is only natural and present in a living being.

Your definition sounds vitalistic. I agree with the standard (functional, empirical) definition of intelligence, see https://www.youtube.com/watch?v=F2bQ5TSB-cE

Also there is a lot of biological experiments that examine various facets of human and animal intelligence, they can be used as definition too. See https://en.wikipedia.org/wiki/Animal_cognition https://en.wikipedia.org/wiki/Human_intelligence

In the end, definitions only matter as much as they lead to useful systems.
>>
>>54692389
A* is intelligence in the sense that it can solve a problem.

a self driving car is intelligent.
artificially intelligent.
pls don't mix both organic and artificial intelligence.
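For reference, A* itself fits in ~25 lines. This is the standard algorithm with a Manhattan-distance heuristic on a toy grid (the maze is made up for illustration):

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected grid; grid[r][c] == 1 is a wall.
    Returns the shortest path as a list of (row, col) cells, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    seen = set()
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(maze, (0, 0), (2, 0))
print(path)  # the only shortest route goes around the wall: 7 cells
```

It "solves a problem" by rationally expanding the cheapest-looking option first; whether you call that intelligence is exactly the argument in this thread.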
>>
>>54692404
Being wrong doesn't mean "never" is a right estimate, esp. given recent progress.
>>
>>54692434
>All of these advances are off the backs of two techniques that were invented decades ago but only recently became feasible due to hardware advances, namely CNNs and LSTMs.

This is only partly true. Modern architectures are quite different from CNNs and LSTMs - see Neural GPU, Neural Turing Machine, and the latest Memory Augmented ANN.

>DQN is also just a combination of some really old techniques with neural networks thrown in.
True, but latest A3C is more novel. And it achieves unprecedented results.

>>54692438
Ok, it's just that I'd rather not argue with the most boring parts of anon's argument.
>>
https://www.youtube.com/watch?v=dRLNnJlT8rY

I wish stuff like this got more attention. Many times more interesting than AlphaGo.
>>
>>54687033
Parts of this were the premise for an SF novel, "The Two Faces of Tomorrow". Some of the references are out of date, but it's pretty good.
>>
Also, if AI ever gets real, I think this will be at least as likely as a skynet scenario

http://imgs.xkcd.com/comics/judgment_day.png
>>
Meanwhile, someone please tell me how an RNN is able to learn long histories given truncated backprop
>>
>>54693045
It is cool, also there are even cooler works on evolving programs http://people.idsia.ch/~juergen/compressednetworksearch.html

But note that evolved controllers do not learn. AlphaGo and other DL systems are cool because they can learn, and they can do it much faster than evolutionary methods.
>>
>>54693481
1) If you don't zero the state vector, the RNN can obviously use it as context.
2) Even if you sample all over your sequence and zero the state vector, very many subsequences will still overlap, which allows the RNN to remember the whole sequence.

At least this is how I understand it; I remember Karpathy wrote something similar.
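The bookkeeping looks roughly like this. No real gradients here: the "RNN cell" is a stand-in that just counts paren nesting, purely to show how hidden state crosses chunk boundaries even though backprop would be truncated at them:

```python
# Truncated-BPTT chunking sketch (illustrative, no actual training).
K = 4                               # truncation length
sequence = list("((a)(bb))")

def rnn_step(state, token):
    # Stand-in for a learned cell: increments/decrements on parens.
    return state + {"(": 1, ")": -1}.get(token, 0)

state = 0
chunks = [sequence[i:i + K] for i in range(0, len(sequence), K)]
for chunk in chunks:
    states = []
    for tok in chunk:               # forward pass through one chunk
        state = rnn_step(state, tok)
        states.append(state)
    # A real implementation would backprop here, over `states` only
    # (at most K steps); `state` is carried on but treated as a constant.
    print(chunk, states)

print(state)  # → 0: the nesting count survived the chunk boundaries
```

So the forward information flow is unlimited (the state is never reset), and only the credit assignment is cut to K steps; that's the distinction the question is about.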
>>
People do use it incorrectly all the time, but we do actually have real artificial intelligence which that phrase is applicable to.
>>
>>54693453
If we talk about AI novels and stories, I like these ones:
http://ttapress.com/553/crystal-nights-by-greg-egan/
http://karpathy.github.io/2015/11/14/ai/
https://subterraneanpress.com/magazine/fall_2010/fiction_the_lifecycle_of_software_objects_by_ted_chiang

They are pretty realistic.
>>
>>54693859
Say K is the length of your truncated backprop. My trouble is: if the relevant pieces of information are more than K apart (e.g. opening/closing parentheses always appear more than K characters apart), how does the RNN ever learn to "remember" an open-parenthesis if, by the time it sees a close-parenthesis, the open-parenthesis is already out of scope of the backpropagation? (Potentially it could get lucky and pick it up, and have that strengthened over time, but it would have to get lucky.)
>>
>>54686483
what is "real" ai?
what is the human brain but a complex series of if-else statements?
>>
>>54693909
In this case it may learn either a continuous variable inside its state that increases or decreases with paren nesting, or some approximation of a stack data structure. RNNs are unusual because they exhibit "strong generalization" - that is, they can extract limited algorithmic regularities from their input that scale well with length. Often this results in some cells of the RNN state strongly correlating with some algorithmic state of the input sequence.

This is what makes them different from mere Markov models.

See http://karpathy.github.io/2015/05/21/rnn-effectiveness/ for exploration of RNNs internal representation.
>>
>>54693948

I guess I'm thinking more in the LSTM framework - how does the RNN/LSTM learn to remember that structure if it's only ever used more than K units down the line (i.e. backpropagation cannot reach far enough back to the open-paren to learn to remember it)?

Put another way, what exactly is the significance of the length of the truncated back-prop?
>>
>>54694002
I guess I didn't explain it in the best possible way. I think BP works in this case because it "latches" onto the same RNN state cell, because that cell already correlates with some useful feature in a lot of subsequences. These correlations are random and weak at first, just after weight init, but they are reinforced with each BP batch.

Not clearing RNN state helps a lot with this, see the discussion https://www.reddit.com/r/MachineLearning/comments/3w3ppz/confusion_about_sequence_lengths_and_truncated/

>Put another way, what exactly is the significance of the length of the truncated back-prop?
A larger K makes for faster learning of long-term relationships. Longer-than-K relationships aren't guaranteed to be learned.
>>
>>54694183

>Longer-than-K relationships aren't guaranteed to be learned.

I guess that's what I'm wondering about. Is it that longer-than-K relationships can't be learned, or *might* be learned (depending on how lucky you were with initializing your weights), or will probably be learned as long as you're not super-unlucky, just more slowly?
>>
>>54694597
They will be learned, depending on the data and maybe on the init, and it will be slower, yes.
>>
>>54694798

Okay, so not a death sentence, but it certainly makes things slower / less likely.

Thanks for the help. Just curious - what's your background in machine learning / deep learning?
>>
>>54695131
>Just curious - what's your background in machine learning / deep learning?

I'm an amateur lol, my job is ordinary programming.

Deep Learning is easier to get into than the more complex Bayesian ML (the 1500-page PGM book... uuh). Though recent DL architectures have become pretty complex as well; it's hard to follow.