Artificial Intelligence

You are currently reading a thread in /sci/ - Science & Math

File: tumblr_nfnttfNqGA1qb8fnso1_500.jpg (39 KB, 500x500)
Seriously, this board is wasted potential. Why can't there be an AI general discussing computer vision, machine learning, artificial neural networks, game theory, etc. instead of literally only shit threads and meme threads? The front page right now features threads such as "why are women dumb whores", "what is a good haircut that is chill and also academic", and meta threads about degrees.

Anyone into AI? What are you working on? Are you on ResearchGate? Any papers/books/MOOCs/resources that you found amazing?
>>
>>7720002
Because that'd be computer science, and everyone here is a physical science person. Machine learning is related only in a tangential sense, since it's applied math.

Also, within forty seconds it'd turn into "arguing about conscious AI" and an armchair-machine-superintelligence general, instead of actually discussing real, mundane machine learning techniques.
>>
>>7720002
fuck the AI btw
>>
>>7720002
because that requires people to do math.
>>
>>7720002
If you want high-level discussion you have to put forward high-level ideas, not just "what are you doing RIGHT NOW".
>>
>>7720002
Bishop's is a great book.
>>
>>7720002
Well, you know what they say: if you want something done right, do it yourself. If there were an AI/ML general I would post in it all the time - the board just doesn't seem to have the right culture for general threads, but it would be wonderful if we could change that.

On the topic of /sci/ generals, I've been considering putting together a pastebin/introduction for a 'paper sharing general' thread, where we could trade .pdfs and help each other out when our institutions don't have access to something (or for those with no institution access at all). Thoughts? Would you use it if it existed?

this is now a general board improvement thread
>>
didn't they try that with #icanhazpdf
>>
>>7720071
to >>7720069
>>
>>7720002
Machine learning seems real enough, but I have not met anyone who could actually talk about it seriously. Not even in machine learning courses or at machine learning conferences.

There are practically 2-3 working methodologies that are like 200 years old, and the rest is arbitrary works-in-1000-cases-out-of-1001 bullshit.

And the higher you get, the less the people actually working with it can tell you about it. It just WORKS (or just doesn't), and no one can say anything about it.
>>
Here's all I have to contribute, OP. Best AI podcast I've found: http://www.thetalkingmachines.com/
>>
>>7720073
What do you mean they can't talk about it seriously?
>>
>>7720002

Okay, then, why don't you start and I'll contribute if I know anything about it. Oh wait, AI is a very complex topic and nobody yet knows how to even start with how to make a program learn.
>>
>>7720091
People make "programs learn" all the time tho
>>
>>7720091
>nobody yet knows how to even start with how to make a program learn.

Jesus Christ, I'm sorry about this. I meant to say that making a program learn is a very hard task. In addition, nobody in the field knows how to make a program learn to the point where it can be considered self-aware.

>>7720092

Not to the point where the program has become self-aware and learns on its own.
>>
>>7720094
Well duh?
>>
>>7720095

So what do you want to talk about? There hasn't been any headway in this direction so far. Unless you're an AI researcher, no one here would have anything to add to the topic.
>>
>>7720091
>Oh wait, AI is a very complex topic and nobody yet knows how to even start with how to make a program learn

...?
Machine learning is an extremely active field with practical applications you're already using. Google Image Search relies on it to classify images without humans having to tag them, for instance.

Getting programs to learn is something you can do in an afternoon with Python. It's one of the basic tools of large dataset analysis. (minimal sketch below)
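
for reference, here's about the smallest "program that learns" sketch i can think of - a perceptron learning AND in plain numpy (the data and learning rate are made up for illustration):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND labels
w, b = np.zeros(2), 0.0                         # weights and bias

for _ in range(10):                             # a few passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += 0.1 * (yi - pred) * xi             # perceptron update rule
        b += 0.1 * (yi - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]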

>>7720097
You know, the entire rest of the field? Of course nobody's made progress on self-aware machines, which is why AI is largely not about that.

If you don't actually do any machine learning, don't care about its methodology or techniques, and don't have any interesting papers to share, then there is indeed nothing for you to talk about here. Much like there's nothing for you to talk about in the OChem threads; nonetheless, conversations occur and productive discussion happens there, despite the fact that none of the posters can synthesize even the simplest living cell. And if you come in and point this out, they'll act like you're being silly and irrelevant, which you are.
>>
>>7720002

Just because I work with AI and comp sci doesn't mean I want to talk to a bunch of autistic NEET armchair experts circlejerking about muh singularity.

I find shitposting much more interesting anyways.

Pic related, a computer that costs 300 thousand dollars
>>
BECUOS KURWZWIEL IS A HAAAACK
>>
File: img_20151030_192502_1024.jpg (201 KB, 1024x576)
>>7720123

And no, it's not a Dell machine, it's 4 UltraScale FPGAs in a Dell case
>>
>>7720126
m..may i have yuor penus, mr fancy?
>>
File: dubs.png (57 KB, 300x177)
>>7720133

You can have these two bad boys
>>
>>7720002
But anon, AI is a meme. :^)

Poor guy, you thought it was real.
>>
The funny thing is we're all technically involved in AI research, because we're all helping train Google's image recognition every time we post (unless you bought a 4chan pass)
>>
File: augh.jpg (10 KB, 215x268)
>>7720155
>>
>>7720209
how does that work?

i see the captcha, i fill it in, i submit it, and then there's a validation process somewhere. in order to validate my answer, they have to already know the correct answer. so how am i helping their image recognition system at all? they already know the answer and they only check whether my answer is correct
>>
>>7720216
they don't actually know the answer.

i've submitted tons of wrong answers in google captcha.

they pool together tons of answers and just say the average is correct.

they throw in random captchas that they DO know the answer to sometimes, to make people think they always know the answer.
>>
>>7720216
AFAIK, only some of the images are marked as "correct" or "incorrect"; the others it's not sure about.

If you hit all the "correct" images, then whether or not you hit the unknown ones becomes feedback on those guesses.

That's how the old CAPTCHA worked, too: one word was unknown, one was known.
>>
>>7720221
afaik they allow you to get one character wrong. i never got more than one character wrong and always submitted my captcha successfully. at least in the new ones there are only 3-digit numbers that require all digits to be correct.

the old captcha had 2 words, of which one was irrelevant and i never filled it in, but the first one had to be correct within a margin of error of one character

but i guess it makes sense
>>
File: 20151214_222316.jpg (588 KB, 2048x1152)
>>
i bet their number photo captchas are off google maps
>>
>>7720209
Do you have sauce on this? I always just assumed they were doing this but never bothered to check.
>>
File: plotmatrix.jpg (185 KB, 1022x743)
im doing hardcore data science rn, by the end of the day i will have a self-aware AI
>>
>>7720231
>afaik they allow you to get one character wrong. i never got more than one character wrong and submitted my captcha successfully. at least in the new ones there's like only 3 digit numbers that requires all digits correctly.

What? New captcha is selecting the images from a 3x3 grid of thumbnails that meet a certain classification.
>>
>>7720241
i think i opted in somewhere for the old captchas cause the classification ones are fucking annoying af
>>
forgot pic
>>
>>7720240

Cool, care to explain what you're doing in detail? I'm not OP, but I'm interested.
>>
>>7720253
i was kidding, it's just my neural networks assessment. basically i'm given a dataset and i'm meant to find the best way to predict values for new input, i.e. regression

>analyze/explore the data (descriptive stats/PCA/mutual information/correlation matrix/etc)
>decide on a neural network model to solve the problem (hidden layers/neurons/activation function/training algorithm/partitioning the data)
>present and assess the result (how well does it work on new values? what are my errors on validation/test sets? etc) - rough sklearn sketch below
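
here's a rough sketch of that workflow in scikit-learn, with a made-up dataset and placeholder hyperparameters (not the actual assessment):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.randn(500, 8)                                    # placeholder dataset
y = 2 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.randn(500)

# explore: correlation matrix and PCA variance ratios
print(np.corrcoef(X, rowvar=False).round(2))
print(PCA().fit(StandardScaler().fit_transform(X)).explained_variance_ratio_)

# partition the data, pick a model, assess on held-out values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                   solver='lbfgs', max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print(mean_squared_error(y_te, net.predict(X_te)))       # test-set error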
>>
I'm a second year comp engineering student and I find this stuff fascinating. What else can I use to play with neural networks? I can do rudimentary stuff with matlab but I'm not a fan of proprietary tools.
>>
>>7720002
What do y'all think about TensorFlow? Also, maybe just a general CS Theory thread?
>>
>>7720293
matlab/octave, python, R

tools don't matter that much, and your university should provide you with a matlab licence
>>
>>7720298
Oh sweet, Python can do neural networks? Never knew that. And yeah, they have given me a license for it, but it's just personal preference to use non-proprietary stuff whenever I can. Thanks for the info!
>>
>>7720305
Literally any language can do neural networks.
>>
>>7720312
python has more community/library support than a lot of other languages tho
>>
>>7720305
Oh yeah. Python has libraries for everything. Grab Theano and maybe PyBrain.
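
e.g. a minimal Theano sketch - not neural-net code, just the symbolic-graph style that makes it useful for nets, since T.grad differentiates the graph for you (values are arbitrary):

import theano
import theano.tensor as T

x = T.dscalar('x')                  # symbolic scalar input
y = x ** 2                          # build an expression graph
dy = T.grad(y, x)                   # symbolic differentiation: dy/dx = 2x
f = theano.function([x], [y, dy])   # compile the graph
print(f(3.0))                       # -> [array(9.0), array(6.0)]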
>>
>>7720312
I'm realizing how stupid I am now. Aren't they basically just another specialized data structure? (may be totally wrong) If so then it makes sense. Sorry for being an undergrad pleb.
>>
why do people think neural nets are some kind of esoteric black magic? it's literally dot products and gradient descent
>>
>>7720322
no, they're a concept/method of computation
>>
>>7720323
because muh brain.

that being said, google's networks are starting to look like actual neurons at each node.
>>
>>7720323
Because they have a cool name, the reason why any particular trained RNN works well is often rather opaque, and "throw an RNN with backprop at it" works remarkably well for an unreasonably large array of difficult problems.
>>
>>7720323
cause human neural nets are still black magic and ANNs try to emulate them
>>
>>7720305
>it's just personal preference to use non-proprietary stuff whenever I can.
>/g/ mindset in academia
these are 2 different worlds
>>
>>7720327
Yes, I understand that it's a different method of computation. I looked at them briefly in a discrete systems class that touched on graph theory, so I'm probably using incorrect terms. I'm not in the higher level courses yet; this is just out of interest.
Can you describe how one could make a neural network without any extra libraries?
>>
>>7720348
high level view:

>have a matrix of data points/variables and an array of outputs for each data point
>take the inputs and feed them to a function that assigns a weight/parameter to each variable
>check output of the function against the desired known output (supervised learning)
>measure error and adjust weights accordingly
>try again
>stop when the error is something you're okay with

now the function can be arbitrarily complex, with other "subfunctions" that are just meant to further manipulate the data to produce a better result (rough numpy sketch below)
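
for the simplest possible case (one linear "function", squared error), that loop looks roughly like this in numpy - the data and learning rate are made up:

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 3)                    # matrix of data points/variables
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.randn(100)    # known desired outputs (supervised)

w = np.zeros(3)                          # weights/parameters to learn
for step in range(500):
    pred = X @ w                         # feed inputs through the function
    err = pred - y                       # check against the desired output
    w -= 0.01 * X.T @ err / len(y)       # measure error, adjust weights
    if np.mean(err ** 2) < 0.02:         # stop when the error is acceptable
        break
print(w)                                 # should end up near [1.0, -2.0, 0.5]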
>>
>>7720367
Oh, that makes a lot of sense. I've read that there is a researcher who developed an ASIC for neural networks to speed up computation time. Are these chips simply registers holding the data in the array, or actually like neurons? (I assume the latter would be worse because it would be a fixed network and thus not reprogrammable)
>>
>>7720002
Yes. Biomedical signal processing using machine learning and wavelets. No, not yet. Yes, Elements of Statistical Learning or, if you're still retarded, Introduction to Statistical Learning. The latter only needs matrix algebra and linear regression. The former needs some legit linear algebra and other stuff (haven't read that one yet)
>>
>>7720297
promising, but it still performs worse on benchmarks than other ML frameworks like Torch7 and Caffe (source: https://github.com/soumith/convnet-benchmarks/issues/66). but this is on a single gpu. google itself uses tensorflow on massive gpu clusters, and will soon release the multi-machine version.

it may be poorly optimised now, but google will be updating and maintaining it, so it will likely end up one of the best frameworks.
>>
>>7720297
>Also, maybe just a general CS Theory thread?
This. We need this
>>
>>7720322
original poster of the 'any language' thing here. Sorry if that came off as mean. A neural network is basically a concept for a flow of computation, like in pic related. So all it is is applying operations and then summing the results in a particular order. To be fair, some languages make it easier than others
>>
>>7720431
Yep, third vote for that here. That would make /sci/ 10 times better for me
>>
When do you use multiple hidden layers in ANNs?
>>
File: asap.jpg (72 KB, 800x600)
>>7720322
they're really not as mysterious as they sound.

>arrange your input data into a vector (e.g. a vector of pixel intensities for an image)
>multiply the vector by a weight matrix to get the input matrix for the first hidden layer
>evaluate a function (e.g. tanh(x)) on every element of that matrix to get the activation matrix
>multiply this by the second weight matrix to get the second hidden layer's input
>repeat for as many hidden layers as you want

congratulations, you made a feedforward neural network! (training it with back propagation is only slightly harder. numpy sketch below.)

'deep learning' isn't all that arcane either. 'deep' just means you have a lot of hidden layers
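
a minimal numpy version of those steps, with made-up sizes and values:

import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(784, 1)              # input vector, e.g. pixel intensities
W1 = 0.01 * rng.randn(128, 784)   # first weight matrix
W2 = 0.01 * rng.randn(10, 128)    # second weight matrix

h = np.tanh(W1 @ x)               # first hidden layer activations
out = W2 @ h                      # next layer's input; repeat for more layers
print(out.shape)                  # (10, 1)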
>>
>>7720451
the partial answer is that it allows the network to develop different abstraction levels for the data

https://www.youtube.com/watch?v=AgkfIQ4IGaM
>>
oh sorry i thought you asked "why"
>>
>>7720459
How would you add back propagation? I never understood what it entails
>>
>>7720297
make /csg/ happen /sci/ pls
>>
>>7720475
back propagation involves taking the partial derivative of the error function with respect to a particular weight of a neuron and using that to perform gradient descent (basically, multiply it by a small negative learning rate and then add it to the weight)

it's called "back" propagation because the partial derivatives of the weights in the nth layer depend on the partial derivatives of the (n+1)th layer, so you have to start at the output layer and work your way back

the best derivation of backprop i've come across is this very low quality (in terms of video and audio) vid from some Indian university:
https://www.youtube.com/watch?v=nz3NYD73H6E
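
to make the "partial derivative of the error with respect to a weight" concrete, here's a one-neuron toy example with tanh activation and squared error (all numbers arbitrary):

import numpy as np

x, t, w, lr = 0.5, 1.0, 0.2, 0.1    # input, target, weight, learning rate

y = np.tanh(w * x)                  # neuron output
# chain rule: dE/dw = dE/dy * dy/dz * dz/dw  with E = 0.5*(y-t)^2, z = w*x
grad = (y - t) * (1 - y ** 2) * x
w = w - lr * grad                   # gradient descent step on this weight
print(w)                            # weight nudged toward reducing the error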
>>
>>7720493
fuck no
>>
>>7720451
when the output is a very complicated function of the input (one that requires a combination of a large number of simple functions to approximate).

think about it like a Taylor series approximation: the more complicated the function f(x) (or decision boundary, for ML), the more terms you need in the Taylor series (or the more hidden layers) to get a good approximation.

For example, say you were trying to predict heart attacks using weight and height data. The best predictor to use would be body mass index, which is weight divided by height squared. The hidden layers might capture complex interactions like body mass index.
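
here's a toy version of that BMI example with synthetic data and placeholder hyperparameters - a plain linear model can't represent weight/height^2 exactly, while a hidden layer gets much closer:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
height = rng.uniform(1.5, 2.0, 2000)        # metres
weight = rng.uniform(50, 110, 2000)         # kg
bmi = weight / height ** 2                  # the nonlinear interaction
X = np.column_stack([height, weight])

lin = LinearRegression().fit(X, bmi)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), solver='lbfgs',
                                 max_iter=5000, random_state=0)).fit(X, bmi)
print(lin.score(X, bmi), mlp.score(X, bmi))  # the MLP's R^2 should be higher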
>>
where do you study OP?
>>
File: BinomialTree.png (34 KB, 633x623)
>>7720493
This would be nice. It hasn't been very popular the few times I've created one though.
>>
>>7720239
their own videos explain that the text captchas were used to transcribe books, and I'd guess they have videos explaining the image ones, too
>>
Anyone know when to use Levenberg-Marquardt vs. scaled conjugate gradient when training a neural network?
>>
>>7720475
>make a loss function, which represents how wrong your prediction is. the simplest example is just 0.5*(nn prediction - actual data label)^2. another example is -1*log of the probability your nn assigns to the actual answer. the goal of training is to minimise this loss function

>feedforward your input data (as a matrix of column vectors) through the nn, then compute the partial derivative of the loss function with respect to the last layer's weight matrix, weights(n)

>multiply the partials by a chosen scalar called the learning rate, usually about 0.1. this matrix is called delta(n) (for a neural net with n layers). subtract it from the last layer's weight matrix
>multiply delta(n) by the previous layer's weight matrix, weights(n-1), to get a matrix with the same dimensions as the previous (n-1) layer's activation matrix, then multiply delta(n)*weights(n-1) by the derivative of your activation function ELEMENTWISE to get delta(n-1)

>subtract delta(n-1) from weights(n-1)
>repeat until you're back at the first layer

>then feedforward data through the nn again and compute new (hopefully smaller) loss derivatives

>repeat for as many iterations as it takes for the loss to converge to a minimum

congratulations, you just trained your feedforward nn with backprop! (numpy sketch below)

it's really just gradient descent to minimise loss.

each layer's error derivative is related to the next layer's error derivative through the chain rule, since the nth layer is a function of the (n-1)th layer, so you compute loss derivatives for the nth layer and then BACK-PROPAGATE them through the nn
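
and for anyone who wants those steps concretely, a rough numpy sketch for a net with one hidden layer - data, sizes, and learning rate all made up:

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(4, 200)                        # 200 inputs as column vectors
Y = np.sin(X.sum(axis=0, keepdims=True))     # made-up targets, shape (1, 200)

W1 = 0.5 * rng.randn(16, 4)                  # hidden layer weights
W2 = 0.5 * rng.randn(1, 16)                  # output layer weights
lr = 0.1                                     # learning rate

for it in range(2000):
    A1 = np.tanh(W1 @ X)                     # feedforward: hidden activations
    out = W2 @ A1                            # linear output layer
    loss = 0.5 * np.mean((out - Y) ** 2)     # 0.5*(prediction - label)^2 loss
    d_out = (out - Y) / X.shape[1]           # dLoss/d(out)
    dW2 = d_out @ A1.T                       # partials for the last weights
    d_hid = (W2.T @ d_out) * (1 - A1 ** 2)   # back-propagate; tanh' ELEMENTWISE
    dW1 = d_hid @ X.T                        # partials for the first weights
    W2 -= lr * dW2                           # gradient descent updates
    W1 -= lr * dW1
print(loss)                                  # far smaller than at iteration 0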
>>
File: image.jpg (49 KB, 600x600)
>>7720123
>$300,000
>Dell
>>
File: :^).gif (2 MB, 500x500)
>>7720658

Read the follow-up post :^)
>>
File: deathgrips-624-1375567634.jpg (171 KB, 624x420)
>>7720002
yes! I'm just an undergrad transferring to CS from physics. I've done Ng's and Hinton's MOOCs, started entering Kaggle competitions, and started reading papers.

Artificial Intelligence for Humans: Deep Learning and Neural Networks is a great textbook: up to date with modern techniques, with clear explanations and code examples. would recommend.

paper i found interesting (but didn't quite follow all of it): http://jmlr.org/proceedings/papers/v37/gregor15.pdf

also what Geoff Hinton says about hierarchical coordinates is super interesting, though I don't really understand a lot of it.
>>
I am working on neural architectures which are trainable without backpropagation. Think HTM (Hierarchical Temporal Memory), but better.
>>
>>7720800
>HTM but better
>piece of garbage but better
probably going to be shit.
>>
LOL

Neural networks and machine learning are just memes. It's just statistics with a fancier name rediscovered by retarded "computer scientists"

Computer Science is a joke
>>
[b][i][o][u]MACHINE LEARN MY ANUS[/u][/o][/i][/b]
>>
>>7721158
looks like someone's mad because 'computer scientists' are automating his job.

have fun trying to do any even slightly complex speech/image recognition task without using some kind of neural network ya fookin pleb
>>
>>7721158
If you don't see how remarkable backpropagation is, you truly are an idiot.
>>
>>7720123
Holy balls man, what are you doing with FPGAs and AI?

Are FPGAs actually worth it? Or will general purpose computers just overtake them, as has happened in the past?
>>
>>7721234
LOL

Chemistry and atomic theory are just memes. It's just physics with a fancier name rediscovered by retarded "chemists"

Chemistry is a joke
>>
>>7721307
LOL

Biology and evolutionary theory are just memes. It's just chemistry with a fancier name rediscovered by retarded "biologists"

Biology is a joke
>>
>>7721311
LOL

Quantum mechanics and relativity are just memes. It's just classical mechanics with a fancier name rediscovered by retarded "physicists"

Quantum mechanics is a joke
>>
>>7721319
>>7721311
>>7721307
>can't even make a proper analogy
shouldn't you be working on your java assignment?
>>
>>7720523
why not anon?
>>
>>7721383
I'll do it if no one does after this thread dies

and I'll bump it and try to answer questions and fight to keep it alive until it becomes a thing

thumbs up if u agree
>>
>>7720451
never
>>
>>7720088
what's good about it? serious question
>>
File: hopfieldnet.gif (4 KB, 351x308)
>>7721394
thumbs up
will help
>>
>>7720073

yeah, it's a very practical, competitive field, so a lot of the knowledge comes from observations about what works best, made while trying to outperform everyone else on CIFAR-10, rather than from well-grounded theory. as a result, many of the best techniques are not at all well understood (dropout and DropConnect come to mind)

perhaps ML needs some more pure maths people to give it a better grounding in theory and actually PREDICT what will work best?
>>
I'm finishing up my undergrad CS, just took a course on natural language processing. Probably the most fun application of ML I've ever got to work with
>>
>>7721494
>not going for a postgrad
what the fuck, anon
>>
>>7721495
Why would I do that when I can go straight to working and make money?

It's an option after I've saved up a bit, but what I've seen of academia doesn't appeal to me at all
>>
>>7721510
can't be worse than "software engineering" though. i gave up on making lotsa money as a code monkey when i realised i enjoy research a lot more even if there's no $$

i still work part time as a dev but that's just cause i need to fund my studies

what part(s) of academia doesn't appeal to you?
>>
>>7720072
Just post stuff anyway; every board has different ways of conversing. That's the beauty of /sci/: if you look hard enough, someone will eventually post something relevant. Asking it the right way will do the trick as well.

In any case, I would say that the biggest hurdle is to take every scenario that applies to human society and program it into a formula. Ex: if you need to apply this aspect of a problem, every single solution and variable must be thought of to have perfect AI. Which is why AI won't really matter until we can have memory with an enormous capacity. We have to be able to hold all those variables. I guess the question is, how is quantum computing coming along?
>>
please stop memeing in this thread
>>
>>7721543
>In any case, I would say that the biggest hurdle is to take every scenario that applies to human society and program it into a formula. Ex: if you need to apply this aspect of a problem, every single solution and variable must be thought of to have perfect AI.

I'll take the bait.

the whole point of learning/ai is that you only need a small subset of the data to generalise relationships between data features and make predictions for new data.
Feeding your program literally every possible scenario is not only impossible but the stupidest, most difficult way of making AI. Also if your program memorises every possible scenario, it's not really an intelligent agent, it's a giant lookup table. a mind is not a massive set of stimuli and responses, it's a relatively small set of complex principles whichcompute a response from stimuli. We shouldn't need 'every single solution and variable' for making AI, (though it will likely take a huge amount of parameters compared to what is being done today.)

>Which is why AI won't really matter until we can have memory with an enormous capacity. We have to be able to hold all those variables

We can already hold all the variables we need for now. Lack of storage/RAM isn't bottlenecking AI at all and probably never will. AI is being held back the most by slow GPUs. More floating point calculations per second would help AI much, much more than more memory/storage.

>I guess the question is, how is quantum computing coming along?

also very wrong. quantum computing might not even be feasible, and it doesn't help at all with memory or storage (though quantum annealing could make optimisation MUCH faster).
>>
>information theory exam in january
>dunno shit
anyone have some cool resources that explain stuff like channel capacity for different channels (erasure channel, confusion channel, Z channel, etc.), differential entropy, and Gaussian channels?
>>
>>7720002
Do people honestly expect to have good discussions on /sci/? WTF
>>
>>7721475
The problem is that math has its own problems and applications, and current CS studies simply don't go anywhere near the foundations of ML.

Most CS graduates today are braindead zombies with set-in-stone OOP thinking, a couple of algorithms in their pocket, and zero understanding of anything more complex than a counter or multiplexer.

And most machine learning courses are about how to use already existing software to get already irrelevant results.

That is why machine learning is now only moving forward because of advancements in hardware tech.
The couple of companies that still try to do something meaningful lack manpower and can't really push universities toward more relevant subjects, since universities don't have specialists in that area.
>>
>>7722831
>>Most CS graduates today are braindead zombies with set-in-stone OOP thinking, a couple of algorithms in their pocket, and zero understanding of anything more complex than a counter or multiplexer.
>
>And most machine learning courses are about how to use already existing software to get already irrelevant results.

/sci/ will forever be full of faggots like you who don't know what they're talking about at all

i don't recall one single lecture in my ML class where they brought up any software whatsoever. that's for practicals, and even there, we were asked to implement the algorithms from scratch to understand them, in every.single.module, not only ML. neural networks? implement a perceptron in the first practical. computer vision? implement all the image processing algorithms manually

go fuck yourself
>>
>>7721383
show me an example of a general that isn't utter fucking cancerous circle-jerking garbage and i'll tell you why not

then again that's probably what redditors like >>7721394 want
>>
/g/
>>
>>7721394
thumbs up
>>
>>7723225
/eternal skating general/ on >>>/asp/
>>
>>7723225
/web dev general/ on >>>/g/
>>
>>7723225
/bdsm general/ on >>>/d/

generals are just places for people to chill and discuss their interests anon