/aig/ - Artificial Intelligence General

You are currently reading a thread in /g/ - Technology

Thread replies: 255
Thread images: 40
File: hug yui.gif (129 KB, 540x540)
HUG YUI edition.

Come and discuss, learn about, and panic over all things artificial and (marginally) intelligent.

Links:

>Deep learning frameworks

https://www.tensorflow.org/ (google)
http://caffe.berkeleyvision.org/ (Berkeley Vision and Learning Center)
http://deeplearning.net/software/theano/ (Université de Montréal)
http://www.cntk.ai/ (Microsoft)
http://torch.ch/

>Frameworks comparison

https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software

>Required reading

http://www.deeplearningbook.org/

>Intro videos for beginners:

https://www.youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ/videos

>Required watching

https://www.youtube.com/watch?v=bxe2T-V8XRs&list=PL77aoaxdgEVDrHoFOMKTjDdsa0p9iVtsR

>Something else that you should probably watch

https://www.youtube.com/watch?v=S_6SU2djoAU&list=PLy4Cxv8UM-bXrPT9-ay4E1MuDj1KFTg9H

https://www.youtube.com/watch?v=6oe1Tmg9rjM

Books

magnet:?xt=urn:btih:a680ca553dd69a6a5e8b7dc8c684ceb006c7ecc5&dn=Artificial%20Intelligence

How are those networks comin' along?
>>
Not enough people on /g/ know enough about the subject to warrant a general. Myself included.
I'll check out your gay ass links since I want to learn more, thanks faggot.
>>
>general
Back to réddit or LURK MORE
>>
>>53969203
This has been a general thread for like a week anon
>>
SEED YOU FUCKING NIGGERS!
>>
>>53969291
You don't get it, do you?
LURK MORE, seriously
>>
>>53969348
Now you listen here you lovely person I own a Kali Linux and I will trace you if you don't back off /aig/
>>
Artificial intelligence is a waste of time. Billionaire investors with zero understanding of technology outside of sci-fi films are the only reason it is taken seriously.

Machine consciousness is a myth. Seed AI is logically impossible.

Humans aren't self-aware enough to create machines that way.

Its best purpose would be to create cuddle robots which are programmed to pass the Turing test for normal bedtime sleepy talk. No more sentient than an iPhone, but they trick the user into thinking otherwise.
>>
The internet is a database for future machine-learning robots who will reconstruct people's minds at specific points in their lives.
Too bad they won't be accurate, but they will try to be.

Oh, yeah, and people will always tell you that cars, gravity, radio and others are useless research.
>>
>>53969406
>Oh, yeah, and people will always tell you that cars, gravity, radio and others are useless research.

No, cars, gravity, and radio are tried and true. AI is man's silly way of projecting itself.

Although I personally consider a manned Mars mission to be a waste compared to using robotics. But that's because Mars is an inhospitable desert; I'm more open to places like Ganymede or Titan.
>>
>>53969348
>>53969203
Been here for years, mate. Sorry that this isn't a shitposting general like you're used to. We'll try to bring the bantz while doing something interesting as well.

>>53969374
Dunning-Kruger, etc.

Your concept of what AI is comes from Hollywood movies and you obviously have no idea what's actually being researched. If you even just took the time to look at one of those links you would know better.
>>
>>53969374
[citation needed]
>>
>>53969492
>AI is mans silly way of projecting itself.
Holy shit, you're so stupid. AI isn't about making artificial people. It's about getting computers to do work that might require guesswork and other factors that usually require human intervention to get done. Why would you come here and talk about something that you obviously don't understand?!
>>
>>53969512
What do you think AI is?
>>
>>53969512
>You obviously have no idea of what youre talking about.

Do machines have ideas of what they're talking about?
>>
>>53969573
Creating more clever systems of software or hardware to solve classes of problems or complete tasks that traditionally require human intelligence. That's the most common definition. You would know that if you were actually any type of researcher or even read a book every now and then.
>>
>>53969563
The name is misleading as hell. Most of the research is focused on creating tools to answer a handful of problems; you wouldn't call that "intelligence". I think most people wouldn't consider something intelligent until it thinks.
>>
>>53969602
Yes. :)
>>
>>53969548
Recursive self-improvement is seed AI.

Machines have no self.
Improvement is subjective and sometimes synonymous with destruction.

Missing anything?
>>
>>53969604
>if you were actually any type of researcher or even read a book every now and then.
When did I say I was a researcher? AI is cool but it's just another form of computation that's probably way over my head too. I don't see the need to get as aroused about it as everyone these days seems to be getting.

Are you a researcher then?
>>
File: stupid.jpg (209 KB, 1280x720)
>>53969602
>>53969617
Jesus Christ, /g/!

It's called Artificial Intelligence because it's performing tasks that traditionally require intelligence to complete. I would say get the fuck out of here but you really need to read some fucking books.

magnet:?xt=urn:btih:a680ca553dd69a6a5e8b7dc8c684ceb006c7ecc5&dn=Artificial%20Intelligence
>>
>>53969621
where do machine ideas come from?
>>
the retards studying AI are using feminist tier arguments.

But you dont know what FEMINISM MEANS.
>>
>>53969658
Engineer here. I call that process automation.
>>
>>53969658
>you really need to read some fucking books
it takes a lot of fucking free time to read a single book, and nobody's gonna read a book they're not too interested in just because a random anime faggot on a mongolian origami board is telling them to. stop being a pretentious fuck.

besides, I don't see how what you're saying makes the term "AI" any less confusing for anybody hearing it the first time
>>
>>53969641
>When did I say I was a researcher?
I meant to imply that this thread is for people who want to learn the science and techniques related to AI, not that you have to be a PhD

>AI is cool but it's just another form of computation that's probably way over my head too.
Then why are you here?!

>I don't see the need to get as aroused about it as everyone these days seems to be getting.
If it's developed satisfactorily it'll make life easier for a lot of people by automating many common tasks, thus improving production and driving industry.

>Are you a researcher then?
I'm a student but that's where I'm aiming. If you plan on staying in this thread, at the very least watch some of the videos in the OP.
>>
I have two questions:

1) Is there any part of AI at all that one could learn even without understanding advanced math/statistics topics? Which AI-related topic demands the least knowledge of those topics?

2) Which programming language is best for playing around with AI? Is it Python?
>>
>>53969699
That's exactly right, it's automating classes of tasks that were previously only performed by humans.

>>53969708
>I don't see how what you're saying makes the term "AI" any less confusing for anybody hearing it the first time
Perhaps people should not just make assumptions as to the nature of things before actually studying them?
>>
>>53969759
Yeah just learn genetic algorithms like that one retard on here
>>
>>53969759
>1) Is there any part of AI at all that one could learn even without understanding advanced math/statistics topics? Which AI-related topic demands the least knowledge of those topics?
It depends: do you consider calculus advanced? The meat and potatoes of simple neural networks is just calculus with matrix multiplication.

>2) which prog language is best for playing around with AI, is it Python ?
A lot of students and researchers use Python because it's simple. I'm more of a C++ person myself but I understand the disadvantages if you're not familiar with the language. If you're designing a network yourself from scratch, just use Python so that you can make sure you understand the structure and dynamics of one. If you need GPU acceleration, just download one of the existing frameworks from GitHub (in the OP). They're mostly in C++ but you know.
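For illustration, a minimal sketch of that "calculus with matrix multiplication" point: a tiny one-hidden-layer network trained on XOR with plain NumPy gradient descent. The data, layer sizes and learning rate here are made up for the example, not taken from any of the OP frameworks.

import numpy as np

# Minimal one-hidden-layer network trained on XOR with plain gradient descent.
# The forward pass is matrix multiplications plus a nonlinearity;
# the backward pass is just the chain rule on a squared-error loss.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    h = sigmoid(X @ W1 + b1)                  # forward
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backward: output layer gradient
    d_h = (d_out @ W2.T) * h * (1 - h)        # backward: hidden layer gradient
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # should approach [[0], [1], [1], [0]]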
>>
>>53969780
As a process automator building farming robots, I can tell you none of them are artificially intelligent. Process automation is just conditional logic; the program is as intelligent as the programmer who wrote it.

AI to me means something much, much different, and using it to describe process automation is redundant and pretentious.
>>
>>53969879
AI isn't about automating simple and routine processes, but rather more open-ended tasks that may require more knowledge about the environment and the subjects in it, as well as some guesswork around variables that are unknown.
>>
>>53969799
>>53969862
I'm totally fascinated by AI and I keep reading about it.
I'm just thinking I'm probably not smart enough for it, especially considering the geniuses that are already in the field...
My mind is telling me: "stay in web dev, that's your level you fucking pleb"

Any thoughts on this reluctance?
>>
>>53968599
>magnet:?xt=urn:btih:a680ca553dd69a6a5e8b7dc8c684ceb006c7ecc5&dn=Artificial%20Intelligence
>There is someone from India downloading this RIGHT NOW
>>
>>53969929
Guesswork is still an element of process automation; it's just an else condition.
>>
>>53969979
>My mind is telling me: "stay in web dev, that's your level you fucking pleb"
Your level is wherever you decide to stop. Git gud at math and catch up. There's plenty of time. If you want it you'll get there.
>>
>>53970012
The difference is that the tasks in question are more open ended than an assembly line.
>>
>>53970032
Still sounds like process automation with increased complexity.

Calling it AI is unnecessary and will result in needless debates.
>>
>>53970015
Thanks for the encouragement anon!
>>
>>53969979
Stop baiting for encouragement like some kind of autistic five year old

You're a fucking grown man aren't you?
>>
>>53970114
You're right...
>>
>>53970078
It is process automation with increased complexity. There's no debate. The confusion comes from people not understanding terms and making assumptions about them. What you're thinking about is artificial GENERAL intelligence. General intelligence is the ability to adapt a logic unit to almost any task without supervision in the learning process. While the ability to adapt to changes in environment is necessary for creating robust systems, it's nowhere near the level of a human, nor is anyone trying to build a human, just solutions for specific applications, like self-driving cars or Watson's expert systems for diagnosing medical patients.
>>
SEED THE TORRENT YOU STREET SHITTER!!!
>>
File: DMNplus.png (1 MB, 1108x917)
>>53969617
Wouldn't you call answering questions about arbitrary pictures "intelligent behavior"?

>>53969699
There is a lot of overlap, but nowadays it is commonly accepted that AI/ML differ from traditional automation because they learn from experience (i.e. the algorithm, really its parameters, should be trained to perform some task).
>>
File: swimming-all.png (177 KB, 1600x900)
>>53969759
>1) Is there any part of AI at all that one could learn even without understanding advanced math/statistics topics? Which AI-related topic demands the least knowledge of those topics?

Yes, using Genetic Algorithms is pretty easy (randomize, mutate, select best, repeat) and they can work surprisingly well (video related: https://www.youtube.com/watch?v=pgaEE27nsQw ).

You can start at https://medium.com/sfi-30-foundations-frontiers/computation-information-adaptation-and-evolution-in-silico-f8098d3ab13f#.oc5cmtyh9 and google for more.

Also there are classical papers you may skim, for example http://www.karlsims.com/papers/siggraph94.pdf
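A minimal version of that randomize/mutate/select-best loop, as a sketch: the "maximize the number of ones in a bit string" fitness below is a toy stand-in for whatever you actually want to evolve, and all the constants are arbitrary.

import random

# Toy GA ("onemax"): evolve a bit string towards all ones.
# The recipe is exactly: randomize, mutate, select best, repeat.
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 50, 30, 200, 0.02

def fitness(genome):
    return sum(genome)                      # count of ones

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

# randomize the initial population
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]    # select the best half
    children = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
    population = parents + children         # repeat with the new generation

print(fitness(population[0]), population[0])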
>>
>>53969979
>I'm just thinking I'm probably not smart enough for it, especially considering the geniuses that are already in the field...

Anybody is smart enough to write programs as a hobby. Even AI programs. Any kid can implement a genetic algorithm.

>My mind is telling me: "stay in web dev, that's your level you fucking pleb"

Why not overcome yourself? You'll feel better later. I'm a web dev too and I've implemented a simple neural network in JS; it's entirely possible. Also, JS is a nice, fast language for such experiments.
>>
>>53970615
>https://medium.com/sfi-30-foundations-frontiers/computation-information-adaptation-and-evolution-in-silico-f8098d3ab13f#.oc5cmtyh9

Sorry, wrong link; the right one is http://www.alanzucconi.com/2016/04/06/evolutionary-coputation-1/
>>
Why do so many deep learning resources seem to use Python? TensorFlow does, and the Neural Networks Demystified video series, for example. Isn't Python a terrible choice because it is not compiled and therefore relatively slow, when deep learning is something that values speed greatly?
>>
>>53969659
Unicorn farts :^)
>>
Have you heard about Reinforcement Learning, anons?

https://en.wikipedia.org/wiki/Reinforcement_learning is the most general kind of machine learning; other problems (supervised learning, unsupervised learning, regression) are just special cases.
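For anyone curious, the core idea fits in a few lines. Here is a sketch of tabular Q-learning on a made-up toy environment (a 5-state corridor where reaching the right end pays 1); real problems would use Gym-style environments and usually function approximation instead of a table.

import random

# Tabular Q-learning on a toy corridor: states 0..4, action 0 = left, 1 = right,
# reward 1 for reaching state 4. The environment and constants are illustrative assumptions.
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, eps, episodes = 0.1, 0.9, 0.3, 300
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(episodes):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)   # "right" should end up with the higher value in every state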
>>
>>53969602
>Do machines have ideas of what they're talking about?
What is an idea? How do you measure it? Does it impact the quality of problem-solving for your system?
>>
>>53969636
>Recursive self-improvement
>seed AI.
Is a crank meme.
>>
File: 1459299217348.jpg (65 KB, 390x470)
>>53970736
RRREKT
>>
>>53970686
That's not deep learning, it's just neural networks in general. You'll often find scientists using Python to interact with a compiled framework that accesses GPU acceleration, but all that is abstracted away in Python into neat little functions in indented blocks.
>>
>>53970854
Well, but why? That just seems like extra, unnecessary complication.
>>
>>53970686
Python is an excellent choice because it's the de facto standard for scientific (numeric) computing. You can read more about it at http://www.scipy-lectures.org/intro/intro.html#why-python
>>
>>53970888
Because Python is a nice & easy high-level language, because no segfaults, because no compilation, because it just werks.
>>
>>53970888
It's easier to just pick up and start using it, or apply it to other things. Scientists like Python because it has a shit ton of easy-to-use libraries and spares them from having to think about anything actually computer-related while trying to get software to run.
>>
How do I make an AI waifu?
>>
Has any of you niggas tried to create a neural network / AI that learns how to buy/sell Bitcoins to make a small profit on an exchange ?
>>
>>53971320
I thought about it but ditched the project. There are much better quants than me.
>>
>>53971353
But the potential is to make thousands of buttcoins. What could be a better reward than that?
>>
>>53971366
It could have been done several years ago, but nowadays you would be stupid to try because real quants have already come to Bitcoin.

Maybe you could do it as a hobby though, as an example of a time-series prediction problem.
>>
>>53971466
What about doing it with smaller cryptos / alt-coins ?
>>
>>53971320
Basically all stock trading is done by similar AIs in this day and age.
>>
>>53968599
Useful information on /g/? WTF is going on?
>>
>>53971510
Are these AIs all custom-made, or is there some stock trading code available to study and tweak?
>>
>>53971562
The ones used by serious traders are custom-made, but I'm sure you'll be able to find something if you look.

>>53971540
I got tired of rampant shitposting.
>>
>>53971487
I think that could work, but small coins are more prone to pump-and-dump schemes, which are hard to predict.

The bottom line is that I just don't want to risk my time and money by getting into gambling (^:

>>53971562
https://github.com/search?utf8=%E2%9C%93&q=stock+trade
https://github.com/search?utf8=%E2%9C%93&q=stock+bot
https://github.com/search?utf8=%E2%9C%93&q=symbolic+regression
https://github.com/search?utf8=&q=timeseries+prediction
>>
File: 3.jpg (58 KB, 329x440)
Imagine you receive an android body by mail.

It has a 48-degree-of-freedom body with 12 hours of battery life (average physical activity), all necessary sensors (CMOS eye cameras, microphones, a speaker for speech output), and a 10-core ARM CPU inside (there is Wi-Fi as well, so offloading computation to the cloud is possible).

How would you program it?
Which approaches/methods/algorithms would you use?
Which software would you use?
>>
Bump, good thread folks.
>>
>>53972080
I would install Gentoo on it
>>
>>53972179
What would you do next?
>>
>>53972080
Depends on the hardware. If it had GPUs inside suitable for deep learning, then I would install Ubuntu on it and use OpenCV as well as some other shit to get it to recognize faces first. Then Kaldi so that it could recognize speech. Really, the motor functions would be last on my list of things to do.
>>
>>53972263
That's better than >>53972179

I'd do the following:
1) Install Ubuntu on the robot
2) Connect it to my server via WiFi
3) Write an app to read every sensor (including eyes) and control every actuator. Measure lag.
4) Test everything, calibrate all actuators
5) Try to set up basic object recognition (probably with this network https://www.youtube.com/watch?v=_wXHR-lad-Q ) and face recognition (libccv).
6) Try some speech recognition (e.g. Kaldi) and speech synthesis (?)
7) Make a safety belt on a rail for my android as a measure against catastrophic falls (similar to what other robot researchers do)
8) Try to write a program that balances the robot against small perturbations
9) Try to implement a slow ZMP walk

That'd be a start.
>>
>>53972730
>walking, networking-enabled robot
What a time to be alive. Also, did you check out Qualcomm's Zeroth?
https://www.qualcomm.com/invention/cognitive-technologies/zeroth
>>
>>53972968
I did. It looks like it is an NDA'd, Qualcomm-only suite of machine learning libraries. If I had a Qualcomm-based system, maybe I'd use it.

>walking, networking-enabled robot
Actually it's not that hard; you can build one for $300 if you want a walking one. Maybe cheaper.
Main components are
* Single-board computer (RPi 3 is OK)
* Webcam
* Microcontroller with servo ports (you can buy an Arduino with a servo shield or make one yourself like I did)
* LiPo 3S battery
* DC-DC converter for turning the LiPo voltage into the 5V you need for servos and electronics
* Servos
* Chassis and wires
>>
>>53971562

Serious trading bots have to be custom-made.
Actually, many people got scammed out of their money with these bots.
One person was selling tons of them and soon many people were using them in the market.
Then the creator came in and, knowing the basic algorithms of these bots, played against their weaknesses and took everyone's money.

In one other instance I saw a bot go insane: it sold and bought from itself at an insane pace and lost all of its owner's money.
This happened once in the real stock market too, and in 15 minutes it cost years' worth of profits. You can google the story.

Also, many of the bots were easy as shit to profit from, because they followed a very simple pattern.
You can also manipulate them fairly easily if you know anything about them.

So can you make a trading bot?
Yes, and they're widely used everywhere because you can't beat their speed.
Downsides: they can go insane, and if the bot is too simple, you can just profit from it without any effort.
>>
Anybody else here interested in combinatoric search spaces?
>>
>>53973257
Do you mean combinatorial optimization? What kind of problems?
>>
>>53973390
Not just combinatorial optimization, but combinatoric search spaces in general. Right now what I'm working on is taking a continuous, infinite-dimensional search space and discretizing it into a discrete combinatoric approximation. I then map it down into a lower-dimensional space (1D) so I can apply Kriging and Bayesian Optimization.
>>
>>53973463
Do you do it for your PhD thesis? I'm not a researcher; I just apply algorithms to problems for fun.
>>
>>53973490
Mechanical engineering junior. CS a shit. :^)
>>
>>53973508
(^:

So does your search space represent some kind of mechanical constraint problem or a mechanism design space?
>>
>>53973531
The case study I'm working on right now is for the design and optimization of the thermal properties of mechanical parts (i.e. heat sinks), but this technique is generalizable to structural mechanics, fluid dynamics, circuit design, and code generation.
>>
>>53969799
That one retard who keeps spamming muh genetic algorithms as if they even work remotely well enough for any application pisses me off so fucking much.
>>
>>53973632
Who?
>>
>>53973508
>using the smiley with a carat nose
>>53973531
>using the backward smiley with a carat nose
>>
>>53973595
It is very interesting, especially if it works. Combinatorial optimization is hard.
>>
File: smile_with_a_carat_nose.gif (2 MB, 500x500)
>>53973701
Yes, I'm
>using the backward smiley with a carat nose

>>53973632
Maybe he does it to lure more anons into the field. GAs are very easy.
>>
>>53970888
Scientists are not engineers. They want high-level languages, but they also want something they're familiar with. Since AI/ML people typically come from a CS background (sometimes statistics, sometimes even physics), they choose a language with a familiar imperative syntax which is popular, i.e. Python.

It's actually a fairly bad language for ML: all the tools end up being C in the backend when running on the CPU (i.e. never, lmao), but GPU stuff is CUDA anyway so who cares. The biggest problem with Python is that verification is done entirely at runtime, so a typo could stop your experiment 20 hours into the task, wasting compute time and resources. This actually happens disproportionately often, doubly so when taking API changes between framework versions into account. The quirks certainly don't help, though, causing lots of weird bugs nobody can really figure out. Also, since all tools are C in the backend, you get unexplained segfaults without a useful backtrace quite often.
>>
>>53973729
No, he actually believes GAs are magical and the best and perfect and flawless. He spouts nonsense like "my algorithm has optimization issues so I should use GAs instead since they don't get stuck in local minima".

>>53973657
Some fag on /g/, often seen in /dpt/ and most ai-related threads.
>>
>>53973729
>using a picture of the smiley with a carat nose
>>
>>53973811
You mean me? >>53973463
Lel, stay mad. I'm doing actual academic research.
>>
>>53973811
>fag
Why do you think he's a fag, did he dump you Anon?

>GAs are magical and the best and perfect and flawless
ofc they are not, they lose to SGD, but they may work better in non-smooth search spaces

>>53973846
(^:
>>
>>53973952
>they may work better in non-smooth search spaces
We have a bazillion papers demonstrating that this isn't true, though.

>SGD
That's SO 2006.
>>
>>53974029
>We have a bazillion papers demonstrating that this isn't true, though.
How do you optimize the parameters of some simulation then? The Bayesian Optimization Algorithm or something else? A GA would outperform grid search here, but I'm not sure how it competes with Bayes.
>>
>>53973952
>using the backward smiley with a carat nose with a picture of the smiley with a carat nose made up of smileys with the carat nose
kill yourself
>>
File: s_smiley_with_a_carat_nose_lq.png (19 KB, 319x310)
>>53974246
Only after you do it yourself, my dear friendo (^:
>>
>>53974086
I should have been more explicit. The gazillion papers we have show that when there is any finite number of points in the function that break its smoothness, standard methods work significantly better than GAs; and moreover that it's extremely hard to happen upon a function with infinitely many smoothness-breaking points by chance, given a realistic task.

In other words, explicit second-order methods are significantly more relevant and useful than GAs in all tasks.

Of course, this also holds for simulations. However, there are also many papers about designing smooth functions for simulations. For example, differentiable rendering is a thing.

Other than that, there are appropriate methods for different types of simulations. For example, for discrete or discretized outcomes, there is MCTS. For actor simulation, there is RL in general. Otherwise, it's usually MCMC or BLOG.

In any case, the best method to deal with black boxes has always been and will probably always be sampling, both in terms of wall-clock time and performance. GAs suck dicks and get stuck in every single local minimum ever, never to emerge. They don't scale for shit and perform like shit compared to even classical methods like hill climbing in many cases, let alone actually good methods like SGD, Adadelta, RMSProp, Adam, PCD, PMs, etc.
>>
>>53974422
>using a picture of the smiley with a carat nose
>using the backward smiley with a carat nose
>>
>>53973762
THIS
>>
>>53973939
Are you the guy that keeps spamming those slides on genetic algorithms and talking about evolving programs at the bytecode level? If so then yes you are a faggot. Doing "academic research" doesn't change that.
>>
>>53969659
Where do your ideas come from? The outside world, and experience. I don't believe that a machine that does not have senses can think, but put a body on it, give it some experiences, and I think it would start thinking eventually.
>>
I trained a neural net to generate music some time ago. It got pretty decent, but I could never get it past the point of generating pleasant compositions that lasted longer than 15 seconds or so. They always broke down into inharmonious noise after a while.

https://a.uguu.se/qjyted.wav
>>
File: 1455928707326.jpg (65 KB, 384x500)
>>53975598
Awful, just awful, though pretty much state of the art right now. Why has so little progress been made on the audio generation front when the visual stuff is so advanced?
>>
>>53973632
I don't know who you are talking about, but I am in a genetic algorithm class right now and I fucking hate it. I was told it was supposed to be AI but goddamn it is very easy. Although my professor told me that one of his colleagues is making a GA that is almost as good at programming as a CS freshman.
>>
>>53975636
Because long sequences are both very hard to annotate and collect and very hard to process meaningfully due to their structure.
>>
>>53975673
>a GA that is almost as good at programming as a CS freshman
So not?
>>
Hurr durr computers will never be sentient. Think about this: if I created a neural network on a computer that was as powerful as my brain AND had the exact same architecture, weights, etc., how would it think differently from my brain?
>>
>>53975858
Hurr durr you can perfectly simulate any physical system on a computer.
>>
File: Alan-Turing-796x1024.png (544 KB, 796x1024)
>>53975879
Yeah, that's pretty much the whole point about computers.
>>
>>53975879
Alright, so there is no doubt that a computer can think, and if it can mimic my brain then it can build its own identity through experiences just like I have.
>>
Have any of you guys implemented https://github.com/jcjohnson/neural-style?
I did, but I have a stupid fucking AMD card with OpenCL, which makes it so that the pictures that come out are shit compared to one made with NVIDIA/cuDNN.

Also, does anyone know any other cool projects like this one?
>>
>>53974484
>For example, differentiable rendering is a thing.
I've heard about that, but I think that getting analytical gradients for a complete physical simulation is unrealistic.
>let alone actually good methods like SGD, Adadelta, RMSProp, Adam, PCD, PMs, etc.
These are only applicable when you have gradients. Of course I know about them and I've used some of them for fitting NNs.

>In any case, the best method to deal with black boxes has always been and will probably always be sampling,
I agree with that, but how do you sample efficiently? People say to use BayesOpt for optimal sampling, but I haven't seen papers that prove its superiority yet (probably didn't look for them).

>The gazillion papers we have show that when there are any finite amount of points in the function that breaks its smoothness, standard methods work significantly better than GAs; and moreover that it's extremely hard to happen upon a function with infinitely many smoothness-breaking points by chance, given a realistic task.
>In other words, explicit second-order methods are significantly more relevant and useful than GAs in all tasks.
I tried to search for relevant papers that support your claim but didn't find anything significant. I found this one http://arxiv.org/abs/1005.5631 but haven't seen definite conclusions in it.

I'm really interested in papers that prove that GAs perform badly on derivative-free problems and propose alternatives. Could you point me to some?

> it's extremely hard to happen upon a function with infinitely many smoothness-breaking points by chance, given a realistic task.
Searching in the space of RNN weights or in the space of program trees.
>>
>>53975458
I often mention this topic because I think it's a bit forgotten. Deep Learning is cool, but it can't learn even moderately complex short algorithms. Also, it's funny that the closer deep learning + gradient descent gets to program-learning tasks, the harder it is to nudge the system towards convergence.

The recent Neural GPU paper [1] required 730 random seeds (= training runs) to find a strongly generalizing (from 20 to 20000 bits) multiplication algorithm.

1. http://arxiv.org/abs/1511.08228
>>
>>53975983
The output will be the same either way; the performance (and thus wall-clock time) will be completely different. OpenCL with an AMD card comparable to the NVIDIA one running with CUDA and cuDNN would be 3x+ slower using Torch, for example.

The real questions are: did you actually train the model?
If you did, did you make sure to use exactly the same parameters as they used in their paper?
>>
>>53975858
Who cares about sentience?

I need my computers/algorithms to solve hard problems and learn from their mistakes. Both things are possible right now; the question is how to solve even harder problems and how to learn better/faster.
>>
>>53975983
>I have a stupid fucking amd card with opencl which makes it so that the pictures that turn out are shit compared to one with nvidia/cudnn
You don't remotely understand what you're talking about
>>
>>53976051
This. Most worthwhile problems don't have gradients, so then you're left with either statistical models or evolutionary algorithms.
>>
File: neuralnetwork.jpg (46 KB, 512x512)
>>53976079
>>53976093
Alright, I was not super clear. I had to use the other, less good model (the NIN model) because OpenCL uses too much memory on my card.

Anyway, are there any other cool OpenCL neural network projects that you have found?
>>
>>53976116
I have seen this paper http://www2.peq.coppe.ufrj.br/Pessoal/Professores/Arge/COQ897/dfo-Sahinidis.pdf and it benchmarks weird algos like NEWUOA and TOMLAB/MULTIMIN that fit a quadratic function over samples that are found via some other search algos, possibly random search.
>>
>>53976249
It looks like those are just special snowflake versions of Kriging and Bayesian optimization.
>>
>>53976051
>I think that getting analytical gradients for complete physical simulation is unrealistic.
I disagree with this. Time will tell.

>but how do you sample efficiently?
That's the important question. That's also why models like the VAE were developed: literally to bypass sampling because it's too slow. Other than that, PCD and other MCMC approaches seem to be the best option. Nevertheless, they are very good options (far better than GAs anyway).

Bayesian optimization is good only when you can efficiently sample from the posterior. In reality, this is typically intractable, which is why MCMC methods, such as PCD, are used. Other methods like importance sampling work too, in simple enough cases. Which method is best will depend on the details of the problem, but the idea is the same.

>Searching in space RNN weights or in space of program trees.
In the space of tasks.
>>
>>53976304
You may know about hammers, but not everything is a nail.
OK, wrong analogy.
>>
>>53976203
Anything that uses Torch can use cltorch, and thus OpenCL. I think Caffe also has an OpenCL branch. In any case, it will run like shit because
>opencl
>>
>>53976370
No I mean really. The whole bit about interpolating with quadratics and updating the model as you get more info.
>>
>>53976067
You're not comparing "deep learning" to whatever else you're trying to compare it to; you're comparing "one RNN" for this. You should be comparing the NTM (whose results show you don't even have half a leg to stand on).
>>
>>53976353
Consider using acronyms less liberally.
>>
>>53976402
NTM has a habit of exploding and diverging too. It takes several seeds (restarts) and lots of hyperparam finetuning to make it converge. I'm very excited about DL for algorithm learning, but the closer it gets to solving this problem, the more it resembles stochastic search.

>Other than that, pcd and other mcmc approaches seem to be the best option. Nevertheless, they are very good options (far better than GAs anyway).
>Bayes optimization is good only when you can efficiently sample from the posterior. In reality, this is typically intractable, which is why mcmc methods, such as pcd, are used. Other methods like importance sampling work, too, in simple enough cases. Which method is best will depend from the details of the problem, but the idea is the same.
Interesting insight.
>>
File: walking_qties.png (229 KB, 1214x493)
Now explain this, gradient-kun!

The paper: http://www.goatstream.com/research/papers/SA2013/SA2013.pdf

Videorelated: https://www.youtube.com/watch?v=pgaEE27nsQw

What do they use for optimization?

>Both our muscle model (Section 3) and control model (Section 4) introduce a large number of free parameters, which are determined through off-line optimization (see Table 1 for an overview). The total set of parameters, K, is optimized using Covariance Matrix Adaptation [Hansen 2006], with step size σ = 1 and population size λ = 20.

>Covariance Matrix Adaptation

https://en.wikipedia.org/wiki/CMA-ES

>CMA-ES stands for Covariance Matrix Adaptation Evolution Strategy. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions, denoted as x) are generated by variation, usually in a stochastic way, of the current parental individuals.


Basically a more sophisticated version of a GA, with sampling.

Why didn't they use gradient descent, hill climbing, MCMC or something else? Why does it work so well?
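For reference, CMA-ES sits in the evolution-strategy family: sample a population around a mean, rank by fitness, recombine the best, repeat. A stripped-down (mu, lambda) evolution strategy on a toy quadratic looks roughly like the sketch below; the real CMA-ES additionally adapts a full covariance matrix and step size, which is a big part of why it works so well on rough, gradient-free objectives like the walking controller above. The objective and constants here are made up for illustration.

import numpy as np

# Stripped-down (mu, lambda) evolution strategy on a toy objective.
# CMA-ES adds covariance-matrix and step-size adaptation on top of this loop.
rng = np.random.default_rng(0)

def objective(x):                       # toy fitness: distance to a target point (lower is better)
    return np.sum((x - 3.0) ** 2)

dim, lam, mu, sigma = 10, 20, 5, 1.0    # lambda = 20 offspring per generation, keep the best mu = 5
mean = rng.normal(size=dim)

for gen in range(100):
    offspring = mean + sigma * rng.normal(size=(lam, dim))   # sample around the current mean
    scores = np.array([objective(x) for x in offspring])
    elite = offspring[np.argsort(scores)[:mu]]               # select the best mu samples
    mean = elite.mean(axis=0)                                # recombine: move the mean
    sigma *= 0.95                                            # crude step-size decay (CMA-ES does this adaptively)

print(objective(mean))   # should be close to 0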
>>
>>53976591
People research dumb things all the time. Just because someone does it doesn't make it good. It's science.
https://tspace.library.utoronto.ca/handle/1807/26516
http://arxiv.org/abs/1506.02438
https://spiral.imperial.ac.uk:8443/handle/10044/1/15225

There are better, more meaningful ways to get BTFO than on an anonymous image board, anon-chan.
>>
>>53975983
Yeah. I'm on my phone right now, but I can dump some stuff I've made with it later if you want.
>>
>>53976733
Interesting, I'm looking into it. Sometimes it's good to get out of my bubble and try something new.
>>
>>53976783
I especially like the paper that used RL. I saw the demo at NIPS in person; it was really impressive.
There's been a lot of buzz about integrating RL methods into NN stuff (instead of simply connecting them, or putting NN stuff inside RL as function approximators, which is a standard setup). Personally, I think that's where the future lies. It has a wide range of applications, from tree pruning to attention mechanisms, and the results speak for themselves.
>>
>>53969699
>Someone who doesn't know what they're talking about here.
>>
>>53976919
Except he's right. Don't you have an intro to java course that you're supposed to be studying for?
>>
File: wallpaper_mami49304.jpg (179 KB, 1500x1038)
>>53971562
>>53971744
http://www.investopedia.com/simulator/
If you are interested, you can program/tweak a bot to use Investopedia's simulated stock market, which gives you $10,000 to buy with and updates based on real stock market prices.

>>53968599
Congrats OP, this is the first thread on /g/ in over a year that is interesting. Save for the tards/baiters who don't think AI is possible.
>>
>>53976823
I believe in RL too, but state-of-the-art methods (A3C and what you showed here) are a bit over my head (DQN is pretty simple though).

Will look into these methods more.

> I saw the demo at nips in person
Cool, Anon, you are a real researcher! I'm just a hobbyist.
>>
Check out what I have been working on with a group: https://github.com/FakeNameSE/Boids-with-obstacles-and-goals
>>
>>53970536
Why the fuck did I think the guy on the bottom right was holding a big-ass fish instead of a surfboard? Did I just fail a variant of the Turing test?
>>
>>53977048
lol true
>>
>>53977048
Vision models are typically superhuman in pretty much every single task. In some cases, like digit identification, they're so much better than humans that they're effectively perfect, whereas humans are only correct about 90% of the time.
>>
File: whoa_workdesk.jpg (151 KB, 800x1066)
>>53975879
Well, I will agree it's not perfect (the analog vs digital argument, of course). But look at:
http://www.nature.com/news/fragment-of-rat-brain-simulated-in-supercomputer-1.18536
http://www.openworm.org/

>>53972080
If you dropped the battery for a wall plug and gave up on teaching it to walk, you could make this. Firstly, I would offload all the work to a cluster using CUDA + NVIDIA cards. Secondly, I would use the Google Now API for voice recognition and a Kinect sensor for sight with a neural network. You would probably have to hardcode any and all actions. Unless you could somehow watch its vision on a screen, have it mimic your motions and then tie them to keywords. Hmmmm...
Reading for you:
http://www.ted.com/talks/hod_lipson_builds_self_aware_robots
https://www.youtube.com/watch?v=54i8_37aJoo
>>
>>53976941
>Linguist here. What you actually mean by "java" is ...
>>
File: shitposting.gif (1 MB, 500x280)
>>53978680
https://en.wikipedia.org/wiki/Optimal_control
>>
>>53978757
Mathematician waxing poetic here. What we call control really is nothing more than a fallacious attempt to understand our current reality.
>>
File: taylor_beef.png (871 KB, 628x768)
>>53976761
I'm back
>>
File: fillet manlet.png (467 KB, 512x512)
>>53980018
>>
File: morgan_spongemanv2final.png (389 KB, 372x512)
>>53980040
>>
File: neural-style1.jpg (78 KB, 632x395)
>>53980054
and here's one I made just on deepart.io to see how it was

not really worth the 3 day wait tbqh
>>
>>53970686
Python is just used to call functions from a framework, usually written in C++, that does the actual calculations.
>>
File: neural-style4.jpg (139 KB, 579x431)
>>53980066
more deepart.io
>>
File: neural art harold.jpg (67 KB, 605x355)
>>53980098
and ofc you can't talk about neural-style and deepart.io without posting this
>>
>>53980107
These dudes are trying to make money with their stuff.
I liked Google's DeepDream better.
>>
>>53969301
Psst you can get Russell and Norvig and other great books on libgen.
>>
File: DRSaliency00216l.png (700 KB, 840x2207)
>>53976591


>Why didn't they use gradient descent, hill climbing, MCMC or something else?

...Because they couldn't think of an objective loss function which was differentiable with respect to all the parameters they changed? Duh.

>Why does it work so well?

...Because this is a problem to which CMA-ES is well suited?
...And also because they are evolving things which walk. The success of this experiment relies as heavily on the researchers' in-depth understanding of walking mechanics as it does on CMA-ES. They engineered stance-liftoff-swing-stance preparation states and target poses. They understood walking so well they managed to reduce the entire process to only 29 parameters to optimise over. The problem was already quite well optimised before the GA did anything.

GAs don't seem promising for AI because intelligence is much less well-understood than walking mechanics, so the search space will be ridiculously huge and good fitness criteria are near impossible to think of/compute.

Don't get me wrong, GAs are cool, and there have been lots of awesome things done with them, but they are a silly way to go about neural networks (and probably AI in general). Deep Reinforcement Learning (using gradients :)) seems much more promising to me.
>>
>>53982047
Also, if anyone's interested, the pic is a saliency map of my deep convolutional net trained to localise nose/eye/mouth/eyebrow coordinates in pictures of faces. The red bits are the pixels which have the largest effect on the net's predictions.
>>
>>53968599
Thanks for the books.
>>
File: NouriNetCONV3X).png (125 KB, 612x812)
>>53976067
hello friend it's me again
>Deep Learning is cool but it can't learn even moderately-complex short algorithms.

What are you comparing it to, though? Has evolutionary programming had more success at program learning? Has anything?

Up until a few years ago, no one was even really trying this. The NTM paper was only published in 2014. Relatively little thought/effort has gone into making NN architectures for writing programs yet. Learning a sorting algorithm from sorted/unsorted vectors is a pretty impressive debut for deep learning making algorithms, IMO.

>Also it's funny that the closer deep learning + gradient descent gets to program learning tasks the harder it is to nudge the system towards convergence

Isn't that what you'd expect? Program learning is a more complex, dynamic task than what neural nets (or anything else, for that matter) have been used for previously.

>Recent Neural GPU paper [1] required 730 random seeds (= training runs) to find a strongly generalizing (from 20 to 20000 bits) multiplication algorithm.


http://www.primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/
A genetic algorithm takes 580,000 generations to print 'hello world' and still requires junk code to be trimmed off. Has a GA ever produced a bit-independent multiplication algorithm?
>>
>>53981379
You have little to no control over DeepDream. With neural-style (if you install it yourself) you have tons of control. You pick the content, the style, the size, and lots of tiny options that influence how the content is redrawn and how the style is actually seen by the program.
>>
>>53973182
Do trading bots learn off other bots? For instance, do bots look at accounts that make high dividends and learn from them? I don't know much about stock bots or even whether profits are publicly available. I mean, the benefits of learning from competitors might not be as large as expected. When trading, I guess you also have to factor in others' behaviour and not just copy it, because their actions affect your profit margins. Very interested in this topic though. Does anyone here own a stock bot or done actuarial studies / financial math?
>>
>>53983442

No. Bots are used to effectively snipe and perform actions faster than a regular trader; it does not involve the ability to 'read' stocks.
>>
>>53983470
Do the bots just make thousands of microtransactions a second, or try to simulate a human broker with very good timing/reflexes? Do you think there is a benefit to statistical analysis of stock market buy and sell behaviour to refine existing stock bot algorithms? But how relevant is this compared to speed/distance from the node...
>>
>>53968599
When will AI be advanced enough to recreate the thoughts of my long-dead girlfriend?
>>
Clone this universe without any life forms and fast-forward a very significant amount of time: will life emerge somewhere in this cloned universe?
>>
File: Perceptron.png (35 KB, 769x378)
>>53968599

>ITT: people throwing around buzzwords
>do u even Russell / Norvig?


>>53969602

>Do machines have ideas of what they're talking about?

No, but you can measure their rationality and performance, which is (more or less) the same.


>>53969759

>1) Is there any part of AI at all that one could learn even without understanding advanced math/statistics topics? Which AI-related topic demands the least knowledge of those topics?

Not necessary "advanced", but for each field you need at least some basics. You should understand stuff like Logic (propositional logic and predicate logic), Bayesian Networks, Perceptrons and Fuzzy Logic.

>2) which prog language is best for playing around with AI, is it Python ?

Prolog. It's excellent to understand the way you need to think in AI.

>http://www.learnprolognow.org/

And then you could study stuff like STRIPS and HTN:
>https://en.wikipedia.org/wiki/Sussman_Anomaly
>https://en.wikipedia.org/wiki/STRIPS
>https://en.wikipedia.org/wiki/Hierarchical_task_network

Not because those languages are "better" than Python, but to understand some basic concepts of AI. It's completely different from writing apps.
>>
>>53969374
My professor said strong AI is pretty much never going to happen. But we can make specialized expert systems that make intelligent decisions in specific situations. Think self-driving cars.
>>
>>53983769
There is a group who thinks it's doable, and a group who thinks it's not. One of the arguments used by the group who thinks it is doable is that "it's never going to happen" has been said, time after time, about technology we take for granted today. Usually, the group who thinks it's not considers AI to be a tool, where the endgame is to make the AI the plane to a human expert's bird.
>>
>>53983769

Exactly.

It's not an "all or nothing" decision and no, Skynet or the Hivemind won't happen anytime soon.

But how much "intelligence" do you need to build autonomous war robots? Not much.

Self-driving trucks and trains are already a reality (Rio Tinto has used them for years):

https://www.youtube.com/watch?v=66KkI3mNgPg

And it will get bigger:

>https://www.newscientist.com/article/dn27485-autonomous-truck-cleared-to-drive-on-us-roads-for-the-first-time/
>>
>>53983934
Self-driving trains are pretty huge in France, by the way. But it's not like self-driving cars, and I've never heard of self-driving trucks. Maybe drive-assisted trucks. Trains being on tracks, and having virtually no chance of contact with anything foreign (i.e. having full knowledge of all obstacles at all times), makes them significantly simpler to automate.

As for intelligent weapons, we have auto-tracking guns already. In fact, they're really easy to make.
>>
>>53984016

>and I've never heard of self-driving trucks

As I said, they are already in use. Just look at the video; they are fully autonomous trucks.

>http://www.theage.com.au/it-pro/business-it/forget-selfdriving-google-cars-australia-has-selfdriving-trucks-20141020-118o47.html

Of course people are still there to check, but they are functioning all by themselves.


But what we COULD do is even bigger.

https://www.youtube.com/watch?v=cj83dL72cvg

https://www.youtube.com/watch?v=-e9QzIkP5qI
>>
>>53984134
>in their mining ops
Wow, it's fucking nothing!
>>
Is TensorFlow good? I want to make a face recognition thing that only recognizes my face (me or not me). Would I be better off using TF instead of making my own program?
>>
>>53984472
No. The public release has barely anything to do with the in-house version that Google uses. It is slow as balls and is missing half the features they have. I don't think they even released multi-GPU support yet. Use Torch or Theano.
But if you want to make something so trivial and useless, just use OpenCV instead.
>>
>>53984370

Uhm... reading comprehension?

They are moving all by themselves, they are driving on roads, and they can evade obstacles.

Also (I'm repeating myself here) they are already on the road in the US:


>https://www.newscientist.com/article/dn27485-autonomous-truck-cleared-to-drive-on-us-roads-for-the-first-time/

>http://www.wired.com/2015/05/worlds-first-self-driving-semi-truck-hits-road/
>>
>>53984545
>>>/reddit/
>>>/x/
>>
>>53984533
Fuck it, I'll just add conv shit to my own NN.
>>
>>53984555

What are you even trying to tell me?
Is it so hard to read a website?

Ok, I'll post videos:

https://www.youtube.com/watch?v=6bFCrkUbdDE

https://www.youtube.com/watch?v=wmQRWpKJDaQ
>>
>>53983748
I think it's pretty clear at this point that Norvig was wrong, considering the massive progress everyone is making.
>>
>>53985032

>Norvig was wrong

He was "wrong"?
Do you mean he as a person was entirely wrong?

Or do you mean some of his opinions were wrong?
Then you might give me a hint as to which opinion you are referring to...

Also, I wrote "RUSSELL / Norvig", who are the authors of a popular book about AI basics. Or do you mean to suggest that propositional logic is wrong?

Please help me, I have no clue what you are trying to say!
>>
>>53968599
Oh boy another general
Let's see what we have so far...
/edc/
/tpg/
/wdg/
/sqt/
/flt/
/watch/
/fucko/
/ptg/
/csg/

Do we really need another one?

But on topic for the thread:
Do you think you will see true AI in your lifetime? Do you fear AI?
What would you do with a waifubot?
>>
>>53985238
This is an AI thread. Autists who don't know the first thing about anything are not allowed. Go back from whence you came and don't come back.
>>
>>53985393

I posted nothing about my whereabouts; you're making assumptions out of your ass.

But the sentence "it's pretty clear at this point that Norvig was wrong" is not useful here. Is he wrong about help systems for Unix? Wrong about how long it takes to get good at programming? About the effectiveness of data? About using Lisp?
>>
>>53985736
>>>/reddit/
>>
>>53985315
>Do you think you will see true ai in your lifetime? Do you fear ai?
Define "True AI", plebe
>>
>>53983748
>Prolog. It's excellent to understand the way you need to think in AI.
kekkks

go talk with http://www.masswerk.at/eliza/ about your problems
>>
>>53985833
He means AGI
>>
>>53983640
If you have enough data about her, creating some sort of model of her behavior is not impossible. Especially if you notice that women behave pretty similarly. You will just need to take a generic woman behavior model and fine-tune it on your gf's behavior dataset. It can be formalized as either supervised learning or reinforcement learning.
>>
>>53982047
>Don't get me wrong, GAs are cool, and there have been lots of awesome things done with them, but they are a silly way to go about neural networks(and probably AI in general). Deep Reinforcement Learning(using gradients :)) seems much more promising to me.
I agree with that.
>>
>>53985874
Still, it is useful to force plebe-kun into giving a decent empirical definition of AGI. Maybe we should just put it into the OP post.

"Artificial General Intelligence is an algorithm that can achieve complex goals in a wide variety of environments"
>>
>>53985789

>>>/sticksandstones/

>>53985862

Yes, there's also a Prolog implementation of ELIZA. What are you on about?

>http://www.lpa.co.uk/pws_dem4.htm
>>
>>53985953

Bad definition.

Is chess complex? Definitely!
Yet we can solve it by AI or by "simple" methods.

A good definition should include the importance of perception, re-evaluation and acting rationally.
>>
>>53985891
Thank you...Naoko
>>
>>53986147
Rather, replace "complex" with "arbitrary".
>>
File: mfw.jpg (55 KB, 1434x670)
>>53986173

From a computer's point of view, everything is arbitrary..
>>
>>53986251
But the computer did not generate that definition.
>>
File: maxresdefault.jpg (115 KB, 1920x1080)
>>53986346

Can you PROOF that?
>>
>>53969374
go back to /r/scifi
>>
>>53969602
How many angles can dance on the head of a pin?
>>
>>53969699
>Field that is being automated here
>>
>>53986173
"complex" and "arbitrary" are just words. Of course you need to define a set of benchmarks that are sufficiently diverse & reflect real world problems. If an algorithm scores well on them then it passes the definition.
>>
>>53986901
Infinite because the head of the pin is circular.
>>
So did anyone make an "agi" yet? Just something that you can say "allocate 5 GB of memory to make a fly brain?"
>>
File: jesus christ.gif (1 MB, 288x198)
>>53986946
>CS majors actually believe this
Lel, as someone who's researching engineering automation, you couldn't be more wrong. Code generation is going to be automated long before engineering.
>>
Best fiction on AI?
>>
>>53987397
I kek'd out loud. 2/10 though, too obvious.
>>
>>53987448
Engineering is an infinite-dimensional, continuous problem space. Generating code for Turing machines is discrete, finite, and one-dimensional.
>>
>>53987418
http://ttapress.com/553/crystal-nights-by-greg-egan/
https://subterraneanpress.com/magazine/fall_2010/fiction_the_lifecycle_of_software_objects_by_ted_chiang
>>
>>53987077
You can say that to your dog; what's the point?
>>
>>53987543
Lol
>>
>>53987620
I suppose I'll take my topology optimization research and apply it to code generation this weekend to show you filthy codemonkeys what I'm talking about.
>>
>>53987685
lmao, simply lmao.
>>
I hope everyone here knows about Searle's Chinese Room... and also p-zombies.

The worst thing that could EVER happen is mankind getting replaced by functionally similar but actually unaware machines. I'd take a world of ISIS terrorists over that.

AI usage is too potentially useful to be ignored, though; we just need to know what the fuck we are doing.
>>
>>53987747
> functionally similar but actually unaware machines
>ISIS terrorists
What"s the difference again?
>>
>>53987735
>prajeetsweating.jpg
>>
>>53987397
>Engineer here
Are you actually bragging about researching engineering automation?
Congrats on figuring out how to automate some Mexican child's $3 a day sweatshop job.
>>
>>53987747

>Searle's Room... And also p-zombies.
Memed
>>
>>53987888
Lol. Funny how codemonkeys are the ones losing their jobs to Prajeets. Engineers can't be outsourced because when something goes wrong people die.

What I'm working on is a bit limited in scope because there's no way you can automate an engineer's full job without sentient AI. Code generation is another story, however.
>>
>>53987582
You can't copy and paste your dog.
>>
>>53968599
Why is that occurring?
>>
>>53987397
>Sits down at computer at work
>tests model of system in matlab
>create draft in autocad
>finish in solidworks
>"haha fucking CS faggots, they actually think code could ever replace the ART that is made by a REAL HUMAN engineer."

It's already too late for your field, m8. You've been reduced to pulling levers. The big boys have already moved on to automating doctors and paralegals.
>>
File: laughter.gif (2 MB, 390x271)
>>53988102
>this is what they actually believe
>>
>>53988153
humor is a good way to cope, anon :^)
>>
>>53987945
Anyone who can be replaced with some minimum wage 3rd world coder shouldn't have bothered learning to code in the first place.

Engineers aren't outsourced because there's no need: all the non-creative work is already done by computer programs.

Wow code generation???? What a totally fresh, new idea that isn't a part of every compiler ever.
>>
>>53988191
>accuses engineers of pulling levers because they use some software tools
>ignores that CS is nothing but software tools on software tools on software tools

>>53988374
Code generation as in: you come up with some unit tests and it writes everything.
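A toy illustration of the idea, not how any real tool works: brute-force enumeration over a tiny, made-up expression grammar until something passes the given unit tests (the synthesize helper and the example tests are invented for illustration).

[code]
# Toy "code generation from unit tests": enumerate small Python expressions
# and return the first one that satisfies every (input, output) pair.
# Purely illustrative; real program synthesis is far more sophisticated.
from itertools import product

def synthesize(tests, max_depth=2):
    terminals = ["x", "1", "2"]
    ops = ["+", "-", "*"]
    candidates = list(terminals)
    for _ in range(max_depth):
        candidates = candidates + [
            "(%s %s %s)" % (a, op, b)
            for a, op, b in product(candidates, ops, terminals)
        ]
    for expr in candidates:
        try:
            if all(eval(expr, {"x": x_in}) == y_out for x_in, y_out in tests):
                return expr
        except Exception:
            continue
    return None

# "Unit tests" describing f(x) = 2*x + 1
print(synthesize([(0, 1), (1, 3), (5, 11)]))  # prints something like ((x + x) + 1)
[/code]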
>>
What algorithm would one use for this task and is it even realistically possible?

Let's say I have the computer generate art out of different shapes and colors. Then I rate that art as good or bad, and the idea is that the art gets more pleasing with each iteration. Sounds like a genetic algorithm with supervision to me. But can this actually be done on a normal computer without super complicated code? Will it even work out, or will it get stuck in a local maximum?
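It can definitely run on a normal computer without complicated code; whether it escapes boring local optima is the real question. A minimal sketch of that loop, with you typing in the rating each generation (the circle genome and the render/mutate helpers are made up for illustration; Pillow is only used for drawing):

[code]
# Interactive genetic algorithm for "evolved art": a genome is a list of
# coloured circles, fitness is whatever rating the user types in.
import random
from PIL import Image, ImageDraw

SIZE, N_SHAPES, POP, GENERATIONS = 256, 10, 6, 5

def random_circle():
    x, y, r = (random.randint(0, SIZE) for _ in range(3))
    colour = tuple(random.randint(0, 255) for _ in range(3))
    return (x, y, max(r, 8), colour)

def render(genome):
    img = Image.new("RGB", (SIZE, SIZE), "white")
    draw = ImageDraw.Draw(img)
    for x, y, r, colour in genome:
        draw.ellipse([x - r, y - r, x + r, y + r], fill=colour)
    return img

def mutate(genome):
    child = list(genome)
    child[random.randrange(len(child))] = random_circle()
    return child

population = [[random_circle() for _ in range(N_SHAPES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    scored = []
    for genome in population:
        render(genome).show()                       # opens an image viewer
        scored.append((float(input("rate 0-10: ")), genome))
    scored.sort(key=lambda s: s[0], reverse=True)
    parents = [g for _, g in scored[:2]]            # keep the two best
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP - 2)]
[/code]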
>>
>>53990641
Use a VAE for generation, use a rating predictor for supervision.
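Spelled out a bit (and only a sketch): the VAE is a neural net that learns to generate images resembling its training set, and the rating predictor is a second model trained on (image, your rating) pairs so it can score new candidates without you rating every single one. Here's just the predictor half, with made-up pixel features and scikit-learn standing in for whatever you'd actually use:

[code]
# Surrogate "rating predictor": learns to imitate the user's ratings so
# generated candidates can be pre-filtered automatically. Raw downsampled
# pixels as features are a placeholder, not a recommendation.
import numpy as np
from sklearn.linear_model import Ridge

def features(img, size=16):
    small = img.convert("RGB").resize((size, size))   # img is a PIL image
    return np.asarray(small, dtype=np.float32).ravel() / 255.0

def fit_rating_predictor(rated_images):
    # rated_images: list of (PIL.Image, rating) pairs collected by hand
    X = np.stack([features(img) for img, _ in rated_images])
    y = np.array([rating for _, rating in rated_images])
    return Ridge(alpha=1.0).fit(X, y)

# predictor = fit_rating_predictor(rated_images)
# predictor.predict(features(candidate).reshape(1, -1)) estimates your rating,
# so only promising candidates get shown to you.
[/code]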
>>
>>53990703
In English, please.
>>
File: .jpg (198 KB, 1920x1200)
YES!
finally got my AI to bypass captcha in less than 1 second
>>
>>53989819
WOW you are right, I never thought about how people in CS use computers and software. What a CRAZY thing, software that helps you make software. Thanks for opening my eyes! Thank god we have engineers like you around, who can masterfully show the world why you matter
>>
>>53990641
A simple solution: https://en.wikipedia.org/wiki/Procedural_texture#Genetic_textures

Complex solutions:
http://soumith.ch/eyescream/
http://arxiv.org/abs/1511.02793
>>
File: 13787.jpg (697 KB, 2710x1620)
>>53990812
I, as a programmer, respect engineers more than mere programmers and CS researchers (with the exception of ML researchers, because they actually do hard numerical math on distributed clusters).

We have an overabundance of software; it's easy to create and easy to use. 99% of software can't endanger our lives, so we accept that it's OK for it to be unreliable.

On the other hand, our lives and health depend daily on hardware, mechanisms, and objects designed by engineers. Engineers can't afford to make an error.

So, Engineering implies:
1) Responsibility
2) Ability to use mathematical tools to predict the behavior of your systems
3) Ability to design working systems

Your average programmer only has 3). That's why I respect engineers.
>>
>>53990976
EE is good preparation for studying deep learning; all the math is there.
>>
>>53991005
>I literally don't have a clue about anything whatsoever but I'm an expert on everything please believe me
>>
>>53991065
>>
So, which one of you faggots did that?
>>
>>53990991
6/10 pasta
>>
Ayyy senpai, I need some help understanding naive Bayes training. What exactly am I supposed to do with my classifier? Do I train a classifier for each instance in the set and then use that for testing, or is there something I'm missing? Also, is the classifier supposed to be ridiculously small? I guess it's not supposed to be a single value, right? Sorry if this isn't making sense.
>>
>>53991950
When did f.a.m turn into senpai?
>>
>>53991994
Since chinese moot
>>
What kind of math skills are needed to understand all these new ML methods?

I completed the Stanford Coursera ML course, but it was very basic with a lot of spoon-feeding, so when I looked at an ML research paper I felt lost.

I don't want to develop any ML methods but merely be able to implement them in practice.
>>
>>53987888
trips speak truth

If you had any imagination or ambition, you would have studied machine learning instead.
>>
Lay it on me, brothers: how useful is/was decision tree learning? It's the only one I can be arsed to remember right now.
>>
>>53992433
>all these
Tons and tons of math. Even mathematicians in ML probably don't have the complete necessary background.
>the most important of these
Math stat, lin alg, calc 3, some optimization theory.

>>53992753
Decision tree learning was great. It worked really well but, like most methods of its time, didn't scale well. You're not going to do modern ML with it, but it's still fun to know about. There are attempts at bringing trees back by using RL to decide on tree structure, but I don't really believe that's ever going to work well enough.
>>
>>53992753
Random forests are one of the best classifiers for arbitrary types of data. It's hard to beat a random forest.
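For what it's worth, trying one is about a dozen lines with scikit-learn (a sketch on a toy dataset; swap in your own features and labels):

[code]
# Random forest baseline: strong out of the box on ordinary tabular data,
# with almost no tuning.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
[/code]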
>>
>>53991950
You have a single set of parameters. You want these parameters to be the best fit for your entire training set. You then take the learned parameters and apply them, without any additional learning, to a held-out "test" set. This is your generalization, or test, performance.

The output of the model is not the same thing as the parameters of the model.
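Concretely, as a scikit-learn sketch (not necessarily the exact setup your course uses): one classifier fit on the whole training split, never one per instance, and the held-out split is only scored, never trained on.

[code]
# One naive Bayes model, fit once on the training split; its learned
# parameters are then applied unchanged to the held-out test split.
from sklearn.datasets import load_digits
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = GaussianNB().fit(X_train, y_train)             # learn parameters once
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))   # generalization estimate
# model.theta_ holds the learned per-class feature means (parameters);
# model.predict(X_test) is the output -- not the same thing.
[/code]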
>>
>>53992862
Random forests are a type of ensemble method. Decision tree learning learns to split the decision surface; it's basically an advanced Parzen-windows approach with variable-size partitions.
>>
>>53992790
>It worked really well but, as most methods at the time, didn't scale well. You're not going to do modern ML with it, but it's still fun to know about
Wrong: the Kinect pose classifier uses random forests (decision trees): http://pages.cs.wisc.edu/~dyer/cs540/notes/17_kinect.pdf
RFs are also used in Kaggle competitions all the time.
>>
>>53992915
See >>53992889
Or go be retarded somewhere else.
>>
What kind of network would I make to automagically box certain objects, like in the NVIDIA car video?
>>
>>53992967
A convnet. You don't make it, you train it. You need at least 1000 examples for each class of boxed object.
>>
>>53992967
Generic convnet.
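Something in this direction, sketched with Keras (layer sizes, input size, and class count are placeholders, not the NVIDIA setup). A classifier alone won't draw the boxes; the demo-style result needs a localization step, e.g. sliding windows or region proposals, on top of it.

[code]
# Minimal generic convnet image classifier; detection/bounding boxes need an
# extra localization stage on top of something like this.
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # however many kinds of objects you want boxed

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # small RGB crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)
# train_images: (N, 64, 64, 3) float arrays, roughly 1000+ examples per class.
[/code]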
>>
>>53992996
>at least 1000 examples for each class of boxed objects
Holy fuck. How much money would it take to have some retards box me some objects?
>>
>>53993158
There are already many datasets available with tons of object classes. If the object you want is in one of them, use it; otherwise, look into Amazon Mechanical Turk.
>>
>>53992790
>Tons and tons of math.

Even if you just want to implement it?