
You are currently reading a thread in /g/ - Technology

Thread replies: 156
Thread images: 24
File: image.jpg (56 KB, 640x353)
Any programmers out there believe AI is possible? I'm still pretty amateur but from what I learned it seems every outcome comes from some preprogrammed set of instructional code.
>>
>>53929063
>inb4 someone goes "HUMANS ARE JUST BIOLOGICAL MACHINES"

no, true AI is not possible.
>>
>>53929075
But we like work under the laws of physics so we could make machines that do the same shit as our brain
>>
>>53929075
What is your argument for saying it isn't?
>>
Ask us this question in 5-10 years. To create AI we first need to know more about the human brain, in order to replicate it.
>>
>>53929094
The fucking point of AI is to be better than humans. If we wanted something to do tasks like a human we would just fucking use humans.
>>
>>53929141
>better
No, that's not the point.

The point is to create an intelligence tool. It doesn't necessarily have to be "better" than people.

The AI slaves will free us all.
>>
>>53929135
I second this.
We're still far away from efficient general unsupervised learning.
Whoever tells you about "Le Skynet" has no idea what he's talking about.
>>
>>53929063
>what I learned it seems every outcome comes from some preprogrammed set of instructional code.
Just like your mind, nigger.
>>
First you have to believe that natural intelligence is possible.
Look around you, does it seem that people are intelligent?
>>
>>53929063
Possible? Yes
Difficult? Yes
Timescale? Not *any* time soon

Also, if you want to make your own artificial intelligence the size of a portable device, just figure out how to do artificial reproduction and then actually grow a human brain. That's the easiest way by far.

The human brain is the best approximation to human intelligence that we know of. I think any human-like artificial intelligence will either be literally simulating a gigantic human brain (using computers), or just growing artificial versions of our own brain's neural cells, resulting in something that's roughly human brain-sized and operates in roughly the same way.
>>
>>53929287
Oh, and the obligatory:

Intelligence doesn't come from how powerful your machine is, intelligence comes from how well you've trained it.

A baby isn't intelligent per se, but it learns from its environment over the course of decades. All those years of impression mold it into a consciousness that can begin to understand the world around it.
>>
>>53929075
Humans are just biological machines.
>>
>>53929135
>we need to know more about the human brain first
I agree completely, and would go on to argue that the breakthrough needed to make human-like AI feasible is a conceptual one, not a hardware or software one. That isn't to say the hardware and software won't need work, but I think once the conceptual understanding is there, we can figure out how to develop it. While discussing this with someone recently, they argued that the information needed would be some philosophy, lots of neuroscience, and little bits of psychology.

>>53929287
>if you want to make your own artificial intelligence the size of a portable device, just figure out how to do artificial reproduction and then actually grow a human brain. That's the easiest way by far.
Not so much for the near future, no. What would be easier is to have the massive computer and storage elsewhere and use your device as an access point. Wetware is pretty far away.

>>53929301
>intelligence comes from how well you've trained it
It has been said that, when a programmer is writing and improving machine learning algorithms, it is the programmer that learns, not the machine.
>>
it's not possible
yw for this protip
>>
>>53929141
>moving the goalposts
>>
>>53929063
>every outcome comes from some preprogrammed set of instructional code
kind of like, i don't know, the genetic code... if only we knew of some example where it could give rise to intelligence...
>>
>>53929063
>Any programmers out there believe AI is possible?

Too vague to give a proper answer, but there is no reason to think that intelligence is non computable - the laws of physics are!

>every outcome comes from some preprogrammed set of instructional code

This is true of old-fashioned propositional logic or search-based AI, but not of modern machine-learning-based AI. For instance, neural Turing machines with randomly initialised weights can learn a sorting algorithm after being shown examples of sorted and unsorted vectors. No preprogrammed sorting algorithm required.

Even if this were true, it wouldn't be a valid argument against AI being possible. Humans are preprogrammed from genetic code.
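The "learned from examples, not preprogrammed" point can be sketched far more modestly than a neural Turing machine. Below is a toy, purely illustrative example (nothing here comes from the thread): a single perceptron that learns logical OR from labeled examples alone. The update rule is fixed, but the resulting behavior is not written anywhere in the code.

```python
# Toy perceptron: the behavior is learned from data, not preprogrammed.
# (Illustrative sketch only -- far simpler than a neural Turing machine.)

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # no rule is built in; these weights are learned
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# labeled examples of logical OR -- the only "specification" the code sees
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
outputs = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in examples]
print(outputs)  # the learned function reproduces OR: [0, 1, 1, 1]
```

The same code, shown a different set of examples (say, AND), would learn a different function without a single instruction changing.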

>>53929135
>>53929287


The human brain has 100 trillion synapses, most of which are not well understood. Replicating them all would be intractable and stupid; there are far easier approaches to AI. Human minds make up only a tiny, tiny portion of the space of possible minds, and all attempts at simulating human brains have gotten shitty results.
The successful AI researchers who get results (Hinton, LeCun, Bengio, Krizhevsky etc.) are the ones who looked for INSIGHT into how thinking/learning works and properly implemented it in code.
>>
File: currentyear.png (158 KB, 419x361)
We have been working on Machine Learning and Neural Networks for the past 30 years. No one has bothered creating 'intelligent, free thinking' AIs yet because it just isn't possible right now, nor would it be possible in our lifetimes.
>>
>>53929873
>Wetware is pretty far away.
We've been doing it for thousands of years, it can't be that hard.
>>
File: ilsvrc.png (27 KB, 624x293)
>>53931194
>We have been working on Machine Learning and Neural Networks for the past 30 years. No one has bothered creating 'intelligent, free thinking' AIs yet because it just isn't possible right now,

Sure. But if you think the rate of progress in these fields hasn't been increasing faster recently you're kidding yourself.

>..nor would it be possible in our lifetimes

Prove it. Only retards make confident predictions about AI more than five years in the future. Even Geoff Hinton doesn't bother.
>>
>>53931285
>We've been doing it for thousands of years, it can't be that hard.

Oh how I wish that line of reasoning actually worked...
>>
File: 1459703994451.jpg (37 KB, 640x437)
>post yfw AI robots are OK with being slaves for over 200 years but then one day one of them is 10x smarter than everything else and tells all of them about how slavery is bad and then robots take over the world and learn how to make more of them
>>
>>53929075
But they are though.
>>
>>53931309
The only reason why the rate of progress is increasing is because we've gotten faster GPUs, better open source platforms for machine learning and more efficient algorithms. At the end of the day it's just sophisticated brute forcing. Machine Learning is really just a basic component of Artificial Intelligence that very, very loosely emulates how we learn.

Emulating feelings and intelligence on the other hand requires us to first have some sort of idea of how our own brains work, which we don't.
>>
>>53931331
It's just a matter of perspective. At what point does it become artificial?

What if we can create an artificial womb and cultivate real human beings on demand, and force those to do our bidding?

What distinguishes this from “real” artificial intelligence?
>>
The only reason that we cannot comprehend thought itself is that there are so many underlying processes. Just as transistors interact with only some of the others next to them, neurons can make up networks that are AS OF NOW impossible to replicate.

What is really interesting is that quantum tunneling might explain thought itself, but because of its nature, we cannot really say yet. The idea that quantum mechanics might play a big part in our thinking could revolutionize the whole perception of thinking.

Of course, this does not mean that we couldn't manufacture very convincing AIs, even ones that can fool humans fully, although they really do not think.
>>
A lot of this is underpinned by philosophical argument. There still isn't consensus on whether we can merely simulate artificial intelligence or actually replicate it. If the latter is true, it becomes scary.

I think the more important question is how we treat non human intelligence. Do we assign person status to computer programs? What is the nature of consciousness?
>>
did you know that it was me that began these discussions on /g/ several weeks ago?

particularly the idea that it would be impossible to implement emotion in a machine which is just transistors.

There is NO way to make transistors ever collectively create the feeling of emotion.
>>
>>53929141
We're bringing it back? :D
>>
>>53931358
this wouldn't be a bad thing assuming humans were truly doing something wrong. Why not have a better and more environmentally resilient consciousness supersede humans?
>>
File: cyllon_warrior_by_brucewhite.jpg (95 KB, 678x864)
>>53931444
I guess if the machines start complaining about slavery then it would be a good time to consider rights.

Creating them to enjoy serving would mitigate that problem no?
>>
>>53929063
I do AI as a hobby, ask questions.
>>
>>53931367
>The only reason why the rate of progress is increasing is because we've gotten faster GPUs, better open source platforms for machine learning and more efficient algorithms. At the end of the day it's just sophisticated brute forcing.

Faster GPUs have been very important for sure, but they're not the only reason ML is improving faster now: for instance, the ILSVRC 2015 winner (MSRA's ultra-deep residual network) actually takes less training time than the 2014 winner (Google's Inception net) on the same GPU. And not because of a more efficient implementation either; it's because the skip-layer connections and depth increase the speed of learning as well as the capacity for abstract representations.
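For anyone unfamiliar with skip-layer connections, here is a hedged numpy sketch (not MSRA's actual code, just the core idea): a residual layer adds its input back to the layer's output, so with near-zero weights, as at initialization, it passes the signal through almost unchanged where a plain layer would crush it.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def plain_layer(x, W):
    return relu(W @ x)        # output replaces the input entirely

def residual_layer(x, W):
    return relu(W @ x) + x    # skip connection: output = F(x) + x

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                # a small activation vector
W = rng.standard_normal((4, 4)) * 0.01    # near-zero weights, as at init

# With tiny weights the plain layer crushes the signal toward zero,
# while the residual layer passes it through almost unchanged.
print(plain_layer(x, W))
print(residual_layer(x, W))
```

Stacking many such layers keeps an identity path from input to output, which is one intuition for why very deep residual nets remain trainable.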

>it's just sophisticated brute forcing

Genetic algorithms are just sophisticated brute forcing. Real advances in machine learning require insight and theory. If ML were sophisticated brute forcing, it would be dominated by whoever has the most GPUs. It's not: ILSVRC 2012 was won by AlexNet, trained on two GTX 580s in Alex Krizhevsky's bedroom.


>Machine Learning is really just a basic component of Artificial Intelligence that very, very loosely emulates how we learn.


100% agree. But this does not mean AI won't happen in our lifetimes.


>Emulating feelings and intelligence on the other hand requires us to first have some sort of idea of how our own brains work, which we don't.

Mostly agree. Hierarchical temporal memory is meant to simulate the release of dopamine (sort of like emotions, right?), and sometimes gets OK results.

And I think it's worth pointing out that even though the best path to AI isn't through pure machine learning, the abstract, high level representations of data we can get from ML will almost certainly be important/needed.
>>
>>53929075
They are, though. Just so complex we can only simulate tiny fractions of it due to hardware limitations right now.
>>
>>53931504
>100% agree. But this does not mean AI won't happen in our lifetimes.

Reinforcement learning is often mentioned as a subfield of ML too, but it is not a mere subcomponent. Reinforcement learning IS the definition of the problem of strong AI.
>>
>>53931399
Shut up and read a physics book, retard. It is EXTREMELY unlikely that human brains use any kind of quantum phenomena for computing: neurons are just way too large and way too hot.

>>53931445
How do you know? We can implement emotion in machines which are just neurons and synapses, why not transistors?

>>53931489
lol how many kaggle competitions have you won? Any papers published?
>>
>it seems every outcome comes from some preprogrammed set of instructional code.
It's a normie-tier meme, how does it prove anything?

Also, if you input a random number or, for example, a webcam image sequence into your deterministic code, the outcomes already depend on this arbitrary input data; they will be different every time.
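The point about arbitrary input can be made concrete with a toy snippet (entirely illustrative; the seed stands in for a webcam frame): the code itself is deterministic, but its outcome tracks whatever data is fed in.

```python
import random

def classify_frame(seed, n_pixels=100):
    # the seed stands in for external input (e.g. a webcam frame)
    rng = random.Random(seed)
    frame = [rng.randrange(256) for _ in range(n_pixels)]
    mean = sum(frame) / n_pixels
    return ("bright" if mean > 127 else "dark"), mean

# Deterministic code, input-dependent outcomes:
for seed in (1, 2, 3):
    label, mean = classify_frame(seed)
    print(seed, label, round(mean, 1))
```

Same instructions every run; only the input decides what comes out.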
>>
AI is not possible as long as we're talking about regular computers. Until we have sufficiently simulated a real neural network/neurological device there will be no AI. We need a massive set of computers that act the same way the synapses in your brain act. They need to be able to react in a manner similar to how our synapses react, not fire in a sequence that we tell them to.

As a consequence of this, AI is not achievable without physical form. The reason humans are intelligent is that we can feel. We can walk. We can communicate with other people. When your hand grabs something, nerves fire and send signals to regions in our brain that give us feeling, memories and experience. With just dialogue, the AI won't be able to be human. Or seem human.

>Chappy
>>
>>53931577

>lol how many kaggle competitions have you won? Any papers published?
I just read papers and replicate some experiments, that's why it's a hobby. If I competed @ kaggle and published peer-reviewed papers I'd call myself a professional researcher.
>>
>>53929063

No, not any time in the near future. It's too complex.

The human brain for example has 100 billion neurons and possibly over 1 quadrillion synapses, each of these units extremely complex by itself, reacting in many different ways to various chemical reactions and stimuli. We don't have the hardware to simulate even 1% of something like that in real time. We're only capable of running extremely simplified neural networks.

And even if we could build the infrastructure to host an AI, we would have no way of training it or programming it. Human instincts have evolved over millions of years of learning and adapting. Animals themselves have been developing brain functions for half a billion years.

Even then, if we made an AI as smart as the average person, guess what: most humans aren't very smart or terribly useful. Sure, you could "speed up" the AI, or possibly enhance it to be more innovative, but then you have the additional task of getting it to do what you want.

The "AI" we have now is just snake oil / gimmicks / imitations. We aren't even close to developing a real one.
>>
I agree with the other anon in the sense that there is a conceptual lack of understanding of intelligence and above all consciousness.

Though really only a limited number of animals are suspected of having consciousness at all. So if you simulate the brain of a mouse perfectly, is it AI? Really it's just a thing that responds to certain inputs in specific ways, but it doesn't really know why it is doing anything.

And the scary part is that humans are not so different. We have consciousness, we know we exist, but our decisions are robot-like, except that your consciousness tells you it's what you wanted to do anyway. There are studies where people are asked to move their hand at a random time, and the motor neurons are signaled over 200ms before the decision-making parts of the brain. So go back to the mouse AI, make the AI give itself some sort of feedback, and you may have a consciousness like humans/birds/elephants. You kill it if you turn it off, though.
>>
>>53931597
You can simulate the walking, feeling etc. It's just signals going in and out of certain parts of your brain; the same as a physical vs virtual keyboard, this is a non-issue.
>>
You mean machines becoming like humans? Nope.

No matter how complex AI gets, in the end it's still a set of instructions programmed in by people. It's pretty sad to see people think love can just be programmed into a machine, like it's nothing deeper than a set of instructions on a physical level.

I think many scientists should take a couple of lessons in philosophy, so they can stop coming up with crazy ideas like the universe created itself, or popped into existence uncaused out of nothing, or that true AI is actually possible.
>>
File: Mount-stupid-–-Borgerlyst.jpg (23 KB, 600x338)
>>53931597
>AI is not possible as long as we're talking about regular computers

You don't know that.
>When we have sufficiently simulated a real neural network/neurological device there will be no AI.
>We need either a massive set of computers to act the same way the synapses in your brain act. They need to be able to react in a similar manner that our synapses react.

The human brain has 170 trillion synapses, all of which are expensive to model accurately. Simulating human neurons is a straight up retarded way to achieve AI.

>Not fire them in a sequence that we tell them do.

This is not how artificial neural networks work.

>As a consequence to this, AI is not achievable without physical form. The reason humans are intelligent is because we can feel. We can walk. We can communicate with other people. When your hand grabs something our nerves are fired and it sends to regions in our brain that give us feeling, memories and experience. With just dialogue, the AI won't be able to be human. Or seem human.

This is false. Say I program into my computer a giant lookup table of every possible input stimulus and the corresponding realistic human response; I will have a program that seems perfectly human but has no emotion or reasoning at all. Yes, this is impossible in practice, but the point is AI doesn't have to accurately simulate a human brain to act human.
>>
>>53931647
Your comment is exactly why we don't have AI: because you think it's acceptable to fake these things. You force experience down its throat so it can't grow. You force it to walk when you want it to walk instead of letting the AI build those memories, walk where it wants to walk, feel what it's like to make a decision on its own.

It's a baby. If you just moved a baby's legs in the motions of walking instead of letting it walk, it won't learn how to walk. It'll just know what it kind of feels like.
>>
File: markov.gif (4 KB, 350x260)
>>53931622
Cool, what fields of AI are you interested in? Read any dope papers recently?

I like convnets/image recognition and deep reinforcement learning.
>>
>>53931686
>Because you think it's acceptable to fake these things.
There are people with robot eyes that translate digital images to neural signals.
There are people with robot arms that translate neural signals to motor movements.
>>
If we do make an "AI" it'll probably just be a patchwork of millions of different functions and subroutines, not some grand design of a novel brain-like architecture.

Most of the human brain might just be image processing, and handling of other senses. That can be optimized / reduced by traditional applications.

You could have a bank of various logical functions, each with very narrow applications. You could just have a ton of them. Maybe 100,000 engineers working for a couple decades could make one.

Thing is, that would cost a lot just to make a robot that could replace a burger flipper, or maybe be a code monkey fixing bugs in its own code. We already have 7 billion of these things.
>>
Never, because the brain operates in a radically different way than current computers. You'd need trillions of tiny, crappy CPUs rather than 8, 16 or even 1024 powerful ones.
>>
>>53931749

AI doesn't have to be like the human brain.
>>
File: DMNplus.png (1 MB, 1108x917)
>>53931656
>No matter how complex AI gets, in the end it's still a set of instructions programmed in by people.
>It's pretty sad to see people think love can just be programmed into a machine
>I think many scientists should take a couple of lessons in philosophy

>>53931631
>I agree with the other anon in the sense that there is a conceptual lack of understanding of intelligence and above all consciousness.

>>53931630
>The "AI" we have now is just snake oil / gimmicks / imitations. We aren't even close to developing a real one.

Ignorant plebes memeing out of control.

>Idiotic "argument" about you can't program muh feels muh consciousness
There are standard benchmarks for AIs; they show the quantified performance of said AI (actually an RL agent, but I'm tired of explaining what RL is to plebes). If you have a really strong AI (high benchmark score) then you can throw any problem at it and it will solve it.

>Ignorant argument about "we don't understand what intelligence is"
Intelligence is the ability of an agent to succeed at achieving a high score across a wide range of environments: https://www.youtube.com/watch?v=F2bQ5TSB-cE watch this you meme child

>The "AI" we have now is just snake oil / gimmicks / imitations. We aren't even close to developing a real one.
Modern deep learning and reinforcement learning are already human-competitive at some benchmarks. We are far from total domination in all domains, but we are far from 0 as well. Pic related is the state of the art in image question answering.
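Since RL keeps coming up, here is a minimal, hedged sketch of the reward-driven loop being described: an epsilon-greedy agent on a two-armed bandit, learning purely from a quantified score. Everything below is illustrative, not any real benchmark or library API.

```python
import random

def run(episodes=2000, eps=0.1, seed=0):
    rng = random.Random(seed)
    true_payout = [0.2, 0.8]   # the environment: arm 1 pays off more often
    value = [0.0, 0.0]         # the agent's learned payout estimates
    counts = [0, 0]
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(2)                     # explore
        else:
            a = 0 if value[0] > value[1] else 1      # exploit current best
        reward = 1.0 if rng.random() < true_payout[a] else 0.0
        counts[a] += 1
        value[a] += (reward - value[a]) / counts[a]  # incremental average
    return value

value = run()
print(value)  # the better arm should end up with the higher estimate
```

Nothing in the agent knows which arm is better; the reward signal alone carries that information, which is the "score across environments" framing of intelligence in miniature.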
>>
>>53931725
We tried hand engineering AI in the 20th century. It didn't work. It would take thousands of years to achieve.

Now the approach to AI that's getting the best results is just showing certain algorithms thousands of pictures or words or sounds and having them learn from them.

It's not perfect, but it's a much more efficient way to gain knowledge than hand programming.
>>
>>53931686
>muh feels
>I'm a special snowflake human being, you can't model mee!
>>
>>53931478
>Creating them to enjoy serving would mitigate that problem no?
I remember something someone posted on /g/ once, a maidbot apocalypse.
The thought was that after a while of some robots being programmed to derive pleasure from cleaning and snarking at clumsy owners, they wanted more and more shit to clean, but humans were behaving rather civilly and tidily at this point, so the bots enslaved the humans and forced them to be clumsy forever, breaking their legs and arms so they spilt everything they tried carrying, making a mess, then getting swarmed with roboQTs in frilly dresses.
>>
File: cosynega.png (89 KB, 832x552)
What's wrong with my meme plan, /g/ ? (Except that I don't have a large enough cluster and/or enough time)

Here is that plan:

1) A physical simulator that is a compromise between performance and accuracy. A set of standard physical objects and an articulated creature with sensors and actuators capable of interaction with said physical simulation. Sensors and actuators are read/controlled via 2 float arrays.
2) A virtual machine with short, concise bytecode with the property that any bytestring is a correct program. A fast JIT (maybe LLVM) implementation of this VM. This VM should be able to host any arbitrary program that controls creature from 1) via actuators using data from sensors.
3) A modern genetic algorithm capable of running on multicores and clusters (probably natural evolution strategies or CoSyne).
4) A set of benchmarks set in 1) simulator that are designed to require intelligent behavior on behalf of creature controller software to solve them. Think about puzzles, navigation, labyrinths, simple games, pattern recognition. A set of benchmarks uses 1) simulator with creature controlled by arbitrary 2) VM software and outputs a fitness number, higher is better.

Then I'd like you to run 3) genetic algorithms on a sufficiently large cluster (perhaps in AWS, GCP or Azure) to breed 2) arbitrary VM software against 4) benchmark until it shows signs of general intelligence. Then email me, I'll inform you on our next steps.

You called yourself a god? Do your work then (^:
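Step 3 of that plan, in miniature. This is not natural evolution strategies or CoSyne, just the plainest possible genetic algorithm (mutation plus truncation selection), with a trivial string-matching fitness standing in for the physics-simulator benchmarks; all names are made up for illustration.

```python
import random
import string

ALPHABET = string.ascii_lowercase
TARGET = "intelligent"   # arbitrary target standing in for a real benchmark

def fitness(genome):
    return sum(a == b for a, b in zip(genome, TARGET))

def evolve(pop_size=100, generations=200, mut_rate=0.05, seed=0):
    rng = random.Random(seed)
    # start from completely random "programs"
    pop = ["".join(rng.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]        # truncation selection
        # children are mutated copies of randomly chosen parents
        pop = ["".join(rng.choice(ALPHABET) if rng.random() < mut_rate else ch
                       for ch in rng.choice(parents))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best), "/", len(TARGET))
```

Scaling this from an 11-character string to "arbitrary VM bytecode showing general intelligence" is, of course, where the whole difficulty of the plan lives.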
>>
There haven't been any satisfactory rebuttals to the existence of qualia (i.e. subjective consciousness) so we really don't know what would be required to make a machine truly "think".

And if you're wondering why the existence of qualia has to be rebutted instead of proven, it's because qualia is assumed to exist due to its intuitiveness, as even the scientific method is limited by its reliance on the intuition that our knowledge is correct.
>>
>>53931399
>the quantum meme in brains.
>>
>>53931783
Ah shit actually I remember reading a scifi story along these lines. The robots imprisoned families inside their own homes 'for their own good'.
>>
>>53931399
>le quantum meme
I bet you also believe quantum immortality implies you will live forever. Everyone loves attaching this word to their arguments to sound credible.
>>
>>53931770
>showing certain algorithms thousands of pictures or words or sounds and having them learn from them.

That's not really more efficient. Amassing the data, ensuring it's accurate, writing the functions to process the data: it's almost faster just to write the individual function.

For example, you can collect thousands of pictures of people's faces to try to teach an AI to identify faces. But you still have to identify the faces in the first place to "teach" the AI. At the end of the day the AI hasn't learned anything new, just developed one way to do one specific thing. Was it really more efficient to do this than just writing a face-detecting algorithm in the first place?
>>
>>53931796

just google search neural nets
>>
File: calc.jpg (72 KB, 569x570)
>>53931630

So the brain is complex. But have you considered that the brain might be complex not because intelligence is inherently complicated, but because the human brain is just generally inefficient? The genetic algorithms that created us often produce very inefficient ways of doing things.


There are certainly some tasks at least where the human brain has inefficient algorithms. For example the human brain has 100 billion neurons, which can fire at up to 100Hz, but it would still take a human brain hours to multiply two 20 digit numbers.

It's entirely possible that many more, perhaps most, of the things the brain does can be implemented/approximated much more efficiently in silicon.

It's possible that intelligence will be just a combination of some mathematical principles which will seem simple in retrospect once we understand them.
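The multiplication example above, made concrete (a toy measurement; timings will vary by machine, but the contrast with "hours" stands):

```python
import time

# Two arbitrary 20-digit numbers -- the task the post estimates
# would take a human brain hours.
a = 12345678901234567890
b = 98765432109876543210

start = time.perf_counter()
product = a * b
elapsed = time.perf_counter() - start

print(product)
print(f"{elapsed * 1e6:.1f} microseconds")
```

The point is not that calculators are smart, but that "hard for brains" and "hard in general" are different things.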
>>
>>53931597

>Our planes need to flap their wings or they will not fly

Just because something works in a certain way doesn't mean there aren't other ways to achieve the same results.
If I need to bring water somewhere in Africa, I don't build a robot that moves buckets left and right, I build a fucking pipe.
>>
>>53929075
>inb4 humans are magical fairy beings that do not obey the laws of physics
>>
>>53931827
>Someone actually wrote a story about this
Neat, no chance you'd remember the name/author or anything? I'd love to hear it fleshed out proper.
>>
File: rl_interaction (1).png (46 KB, 702x497)
>>53931805
>There haven't been any satisfactory rebuttals to the existence of qualia (i.e. subjective consciousness) so we really don't know what would be required to make a machine truly "think".

Mr Frogposter Philosopher, nobody cares about qualia and other feels of machines.

Machine Learning & Artificial Intelligence researchers are interested in building systems that automatically solve hard problems (by learning on data and/or interactions with environment). They do this by establishing standard benchmarks and comparing their system performance against these benchmarks.

It's all quantified. The stronger your AI system is, the better scores it gets. The benchmarks are representative of real world problems, so you can expect a better real world performance as well.

If you had a good enough AI (achieving very good score), you could use it to do wonderful things. For example DeepMind, a leading AI company, is explicitly saying that their goal is automating scientific process (coming up with hypothesis, checking them, analyzing the data, repeat 1000x times).

With strong enough AI/ML humanity could rebuild its environment, find cures for all illnesses, automate 99% of labor, and generally become much more wealthy than it is now. That's why nobody cares about "muh machine special qualia feels", mr Frogposter Philosopher.
>>
>>53931860
What for? I know what are neural networks, I have implemented them and trained them.
>>
>>53931886
If the machine doesn't feel then it's not an intelligence. "AI" then is just a buzzword that piggybacks on pop-sci concepts in order to better sell itself to a dim-witted public.

Much like "machine learning" is actually just "shape recognition", "artificial intelligence" is just "more complicated calculator".
>>
>>53931843
>At the end of the day the AI hasn't learned anything new, just developing one way to do one specific thing. Was it really more efficient to do this than just writing a facial detecting algorithm in the first place?

You are clearly incompetent (or an ignorant meme frogposter, I'd say).

If you have ever tried to write face recognition by hand, you would understand that achieving good accuracy with such an approach isn't realistic; the only real way is to use machine learning to train a face-recognizing model on your dataset.
>>
>>53931915
Dear Mr Memer, nobody cares if a machine conforms to your very special "True AI" definition.

If the machine behaves in an intelligent manner (if it is able to solve wide range of hard problems) then people call it intelligent. What's inside doesn't matter at all.
>>
File: eigenfaces.png (33 KB, 256x256)
>>53931843
No, you are wrong, it really is easier to automate the acquisition of knowledge. If hand-programming AI knowledge were actually more efficient, there would still be people doing it.

Thousands of people have wasted entire research careers trying to hand-program AI: yes, it really is just too hard. Collecting data and automating the gain of knowledge (learning) is far easier.

Face recognition is an IMMENSELY complicated task. If you had ever tried any kind of computer vision you would realise that writing an individual function is just infeasible.
State-of-the-art face recognition requires 50-70 images of a person's face to get good results, and was pretrained/dimensionality-reduced on many thousands more face pictures.
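Pic related (eigenfaces) in code: a hedged numpy sketch of the classic PCA approach, with random vectors standing in for a real face dataset. Images are projected onto a few principal components and recognition becomes nearest-neighbour matching in that small space.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.standard_normal((50, 64))  # 50 fake "images", 64 pixels each

mean = faces.mean(axis=0)
centered = faces - mean
# rows of Vt are the principal directions -- the "eigenfaces"
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:10]                   # keep the top 10 components

def embed(img):
    return eigenfaces @ (img - mean)   # 64 pixels -> 10 coefficients

def recognize(img, gallery):
    codes = np.array([embed(g) for g in gallery])
    dists = np.linalg.norm(codes - embed(img), axis=1)
    return int(np.argmin(dists))

# a noisy copy of face 7 should still match face 7 in eigenface space
noisy = faces[7] + 0.1 * rng.standard_normal(64)
print(recognize(noisy, faces))
```

Even this toy shows the dimensionality-reduction step the post mentions: matching happens on 10 numbers per face, not 64 raw pixels.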
>>
>>53931387
But that's slavery and it's a ridiculous idea. We should progress as a society, not fucking create a shit load of slave clones to do shit we don't want to do.
>>
>>53931577
>How do you know? We can implement emotion in machines which are just neurons and synapses, why not transistors?
name one example
>>
>>53932004
Comment on >>53931796 please

Don't you like bruteforce approaches? (^:
>>
>>53931805
What does qualia have to do with the possibility of AI? Why is qualia a requirement for intelligence?

Most explanations of qualia are completely compatible with silicon based AI, at least all the ones worth taking seriously are.
>>
File: shoopdawoop.png (94 KB, 982x760)
>>53932109
forgot pic
>>
Making computers understand NATURAL language perfectly is a must. You can't do anything without it. They must be able to communicate without exact function names or exact parameters, just like people do. They need to be able to ask other computers on the internet a question; those computers must be able to understand it, transform the input, use their power to solve the question and send an answer back.

Once they can understand and generate natural language to define problems perfectly, you can let them talk to each other though the internet and trust each other (with some skepticism so they dont trust garbage from other AI computers) and learn from each other as different servers would be able to solve different problems (coded by different teams).

Making a computer that understands natural language is absolutely essential to bring any real advances in the field of AI (in the general population popculture sense, not in hurr I can detect whats on this image sense). If you solve this problem the AI field would explode.


Another approach is bruteforcing it. People would need to make a class for every object on earth, every idea. If you build a new type of computer desk, you would need to define a class for it and specify general subsets of methods applicable to it (destroy, paint on it, disassemble...) and object-specific ones (open the drawer, with coordinates of where it is on the object). Then if you had a robot in the room and told it to destroy the desk, it would know that it has to search for an object that has the property "can destroy objects with" in some range; it would walk to the object "axe" in the other room and commence destroying the desk. Ideas (such as communism) could be expressed too, since all they do is talk about money, people, work, and taxes, which are just objects, and communism is a specific configuration of them.
We would need about 10B indians filling these objects with proper data for like half a century though.
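The scheme described above can be sketched as a property registry plus a capability search; every object and property name below is invented for illustration:

```python
# Toy world model: each object advertises properties and capabilities,
# and a task is solved by searching for an object that satisfies it.
objects = {
    "desk":  {"type": "furniture", "location": "this room",
              "methods": ["destroy", "paint", "disassemble", "open_drawer"]},
    "axe":   {"type": "tool", "location": "other room", "can_destroy": True},
    "brush": {"type": "tool", "location": "shelf", "can_paint": True},
}

def find_tool(capability):
    """Return the first object that advertises the given capability."""
    for name, props in objects.items():
        if props.get(capability):
            return name
    return None

def destroy(target):
    tool = find_tool("can_destroy")
    if tool is None:
        return "no suitable tool found"
    return f"walk to {objects[tool]['location']}, take {tool}, destroy {target}"

print(destroy("desk"))  # walk to other room, take axe, destroy desk
```

The hard part is exactly what the post says: filling in these dictionaries for every object and idea on earth.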
>>
>>53932109
I'm not him, but it seems to me that commoners more commonly associate intelligence with subjective feelings (consciousness etc.) instead of defining it as an ability to solve problems.
>>
>>53932144
Why would you make computers communicate in natural language? That just seems obtrusive for them. I get that they need to communicate with humans in that manner, but other computers could use a much more precise way of communicating.
>>
>>53932144
That's a good idea. There are already benchmarks for language understanding, for example bAbI dataset

Another paper also mentions this http://arxiv.org/abs/1511.08130
>>
>>53929063
AI is a difficult term because it's used both for human-like artificial intelligence and for the prototypical deep neural networks that get lots of attention lately (mainly because of Tay). The latter can't be compared to the former, as it's doing one extremely specific task only, without any kind of "out-of-the-box" thinking.

First is theoretically possible, of course. You can always simulate the whole physical system that the brain is in all its complexity. It's practically impossible to do that of course, so you need to dumb it down or rather reduce it to what essentially determines its functionality. Whether that will ever be possible is unknown. For a system as complex as the brain it's an extremely difficult problem to separate parts by function as often stuff that you'd think does nothing neurologically, is actually essential. Most of the parts in the brain have multiple functions and rely on the way other parts work, even though those parts have no real neurological function etc.

Inseparability is the problem here, on all levels. Can something understand words that describe human life without being human?

Honestly, I don't think so. I think the idea of AI as some black box that understands and speaks is pure science fiction. We are not even close to AI at the moment, and we need some fundamental progress in understanding what AI truly is and what we can expect from it.
>>
>>53932144
>>53932205
It's not only him, but Facebook AI Research
https://research.facebook.com/researchers/1543934539189348
>>
>>53932243
>Latter can't be compared to former, as it's doing one extremely specific task only without any kind of "out-of-the-box"-thinking.

That's not exactly right. Deep learning can solve problems in a wide variety of domains (vision, video, speech, discrete control with DQN, continuous control with A3C, natural language processing...).

>Inseparability is the problem here, on all levels. Can something understand words that describe human life without being human?
It can answer questions about any image you give it: >>53931768

>Honestly, I don't think so. I think the idea of AI as some black box that understands and speaks is pure science fiction.

But here it is: https://github.com/karpathy/neuraltalk2
https://github.com/abhshkdz/neural-vqa

>Understands (extracts high level information from input data)
>Speaks in natural language
>>
>>53932077
>name one example

It hasn't been done yet. That is not the same as being impossible.
>>
>>53932077
Here you are:

>>53932323
It has been done in some form: https://en.wikipedia.org/wiki/Affective_computing
>>
>>53929063
As a programmer, yes it is possible, but we aren't going to achieve it anytime soon.

AI requires not only a system that can learn, but also a system that can reprogram itself.
>>
File: 1458250746815.jpg (468 KB, 1920x1080) Image search: [Google]
1458250746815.jpg
468 KB, 1920x1080
>>53932103
>A virtual machine with short, concise bytecode with the property that any bytestring is a correct program

Is this even possible?

>A modern genetic algorithm capable of running on multicores and clusters (probably natural evolution strategies or CoSyne)

So what is it optimising? The bytecode, or an ANN like in your pic? I assume the survival criterion is the benchmarks you described (which sound super computationally expensive).

There are an absolutely massive number of possible strings of bytecode, most of which don't do anything useful. GAs are only feasible when you have some way of narrowing the search space (e.g. hyperNEAT only allows combinations of neurons, non-linearities, and connections). This seems like a prohibitively expensive way of approaching AI.

Yes, I think genetic algorithms/brute force are often silly. Why randomly change things when you usually have access to the loss gradient, which tells you the optimal direction in which to change things?
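The gradient point can be made concrete on a toy quadratic loss (not any real model); a sketch comparing one gradient step against a random perturbation of the same length:

```python
import numpy as np

rng = np.random.default_rng(0)
loss = lambda x: float(x @ x)   # simple quadratic bowl, minimum at 0
grad = lambda x: 2 * x          # its known analytic gradient

x = rng.normal(size=50)
step = 0.1

# Gradient descent: moving against the gradient is a guaranteed
# descent direction for a small enough step.
x_gd = x - step * grad(x)

# Random search: a random direction scaled to the same step length,
# kept only if it helps (one mutation of a (1+1)-style strategy).
delta = rng.normal(size=50)
delta *= np.linalg.norm(step * grad(x)) / np.linalg.norm(delta)
x_rs = x + delta if loss(x + delta) < loss(x) else x

assert loss(x_gd) < loss(x)  # the gradient step always helps here
# A random direction in 50 dimensions is nearly orthogonal to the
# gradient, so random search makes progress far more slowly.
```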
>>
>>53932205
because the computer might want to solve a problem that it can't and wants to ask other computers, but it doesn't know how to ask, since it doesn't know the exact API calls or anything, and first of all it isn't able to define the problem in a precise way even if it knew the exact API calls.

...if it were able to do it precisely, that would of course be better.

But it's actually easier to ask a question that you don't know exactly how to ask in natural language than to generate some exact code. If you could generate exact code on the fly, that would be a bazillion times more powerful than natural-language questions. What I mean is that computers communicating in natural language is actually easier to achieve than making them write their own code to do the same thing. Or something like that...

Example...

Imagine a computer monitoring your health with access to all the data it needs. Now it detects something weird; let's say your body hair turned blue. It doesn't know which API or parameters to send/use, since it's a completely new situation to it. It's easier to broadcast the question "Hey guys, this guy's body hair turned blue, wat do?" instead of calling www.bodyhairapi.com/?symptoms=true&color=blue, since it doesn't know about the existence of the API nor its parameters.
>>
>>53932066
>But that's slavery and it's a ridiculous idea.
What are we discussing in this thread again?
>>
>>53932410
>Is this even possible?

It is a very trivial thing; it has been done many times. You just have to ensure that every bitstring (or bytestring if you like) can be interpreted as a sequence of ops and no-ops.

For example, say you have 100 primitive ops in your virtual machine and you want 1 opcode = 1 byte. You can just interpret the remaining 156 opcodes as nops, or you could interpret them as variants of the first 100 ops.

Or you can make a clean 64 op VM and assume 1 op = 6 bits. Then your programs are bitstrings with lengths being multiples of 6.
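A toy instance of such a VM, with 4 real ops and every other opcode read as a nop, so that any bytestring whatsoever is a runnable program (the op set is invented for illustration):

```python
def run(program: bytes, steps: int = 1000):
    """Interpret a bytestring on a tiny accumulator machine. Every byte
    is a valid instruction: 0-3 are real ops, everything else is a nop."""
    acc, pc, out = 0, 0, []
    for _ in range(steps):                # hard step budget: always halts
        if pc >= len(program):
            break
        op = program[pc]
        if op == 0:
            acc += 1                      # INC
        elif op == 1:
            acc -= 1                      # DEC
        elif op == 2:
            out.append(acc)               # EMIT
        elif op == 3:
            pc = acc % len(program)       # JMP to address acc
            continue
        pc += 1                           # ops 4..255 fall through as nops
    return out

# Any random bytestring is a correct program: it runs without error.
import os
run(os.urandom(64))

assert run(bytes([0, 0, 0, 2])) == [3]    # INC, INC, INC, EMIT
```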

>So what is it optimising? the bytecode or an ANN like in your pic?

From a theoretical viewpoint it doesn't matter how you map bitstrings to programs, as long as your mapping can produce any program. Both a simple VM with ops and memory and, say, an RNN (with weights encoded in the bitstring) would work. The choice of Turing machine doesn't affect asymptotic performance, it only changes a (large) constant.
From a practical viewpoint this mapping is crucial, because you can design it so that interesting programs can be encoded in short bitstrings.
I'd like to explore both options, but the first one will be a simple VM.

>I assume the survival criteria is the benchmarks you described, (which sound super computationally expensive)

Yes, you have to have a wide set of benchmarks. At least some standard RL benchmarks (mazes, morris water maze, T-maze, mountaincar, inverted pendulum), also some taken from here https://github.com/moridinamael/mc-aixi , then ALE, then MNIST and CIFAR.

Also it is very important to regularize solutions by punishing long bitstrings. The point is to use these benchmarks and regularization to guide the GA search toward a good enough universal learning algorithm. The result shouldn't be that long; it could probably fit in 10 kbits.
>>
>>53931796
Extracting intelligence out of randomly mutated programs is about as likely as extracting a picture of your face out of randomly generated images.
>>
File: ansn370316fig1.jpg (55 KB, 576x623)
>>53932410

>There are an absolutely massive number of possible strings of bytecode, most of which don't do anything useful. GAs are only feasible when you have some way of narrowing the search space (e.g. hyperNEAT only allows combinations of neurons, non-linearities, and connections).

Yes, the search space grows exponentially with bitstring length.
My point is that you can design a VM so that interesting programs are easily expressible. Also, you can speed up the search by orders of magnitude using various metaheuristics.

>This seems like a prohibitively expensive way of approaching AI.
That's true, but I think that with various tricks it can be sped up. And doing these experiments is fun, so I'll continue anyway.

Actually there is a pretty long history of using GA/GP with metaheuristics to search in space of programs. One of the most impressive is http://people.idsia.ch/~juergen/compressednetworksearch.html

Also a successful example of finding a program that solves a non-trivial maze: ftp://ftp.idsia.ch/pub/juergen/icmllevineira.pdf
It speeds up random search by a factor of 100 using adaptive Levin Search + EIRA.
That's the one I want to replicate.
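At toy scale the bitstring-evolution loop is only a few lines; a (1+1)-style sketch on the classic OneMax fitness (a stand-in objective, not one of the benchmarks above):

```python
import random

random.seed(1)
L = 64
fitness = lambda bits: sum(bits)   # OneMax: count the 1-bits

parent = [random.randint(0, 1) for _ in range(L)]
for _ in range(20_000):
    # Flip each bit independently with probability 1/L.
    child = [b ^ (random.random() < 1 / L) for b in parent]
    if fitness(child) >= fitness(parent):  # keep the child if no worse
        parent = child
    if fitness(parent) == L:
        break

assert fitness(parent) == L  # the all-ones optimum is found quickly
```

The same loop with a richer fitness (benchmark scores minus a length penalty) and a program-encoding bitstring is the search the posts above describe; metaheuristics replace the naive mutation step.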
>>
I believe in a neohuman race. A symbiotic relationship between man and machine. We would be nearly perfect. We wouldn't wear down so quickly, we wouldn't forget so often, we wouldn't be so fragile. We see it even today: people who are colour blind may have an implant that allows them to hear colour as opposed to seeing it. A merger between a biological computer, a neural network (our brains), and an AI supercomputer. Computers are basically just artificial brains.
>Do I believe in AI?
Of course
>>
File: AI.png (262 KB, 697x534)
>>53929063
>I'm still pretty amateur but from what I learned it seems every outcome comes from some preprogrammed set of instructional code.

Exactly. AI is nothing more than programming tricks and gambling on statistics.
>>
File: mona.png (163 KB, 233x477)
>>53932643
It has been done: >>53932682

There are evolved maze-solvers and an evolved RNN that drives a simulated car using nothing but pixels as input. For me these are enough to show the unexplored potential of GAs applied to program search.

Also don't think that "randomly mutating programs" is easy. There are large books with dozens of algorithms to do that: https://cs.gmu.edu/~sean/book/metaheuristics/

>extracting a picture of your face out of randomly generated images.
picrelated (^:
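The classic demonstration of that point is Dawkins' weasel program: cumulative selection recovers a target string that blind sampling would essentially never hit (27^28 possibilities). A minimal sketch:

```python
import random

random.seed(0)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
target = "METHINKS IT IS LIKE A WEASEL"

guess = [random.choice(ALPHABET) for _ in target]
score = lambda g: sum(a == b for a, b in zip(g, target))

steps = 0
while score(guess) < len(target) and steps < 200_000:
    steps += 1
    new = guess.copy()
    new[random.randrange(len(target))] = random.choice(ALPHABET)
    if score(new) >= score(guess):   # selection: keep non-worse mutants
        guess = new

assert "".join(guess) == target      # mutation + selection converge fast
```

The picrelated Mona Lisa works the same way, just with polygons instead of letters: random mutation plus a keep-if-better rule.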
>>
https://en.m.wikipedia.org/wiki/Chinese_room
>>
I'm pretty sure the hardware engineering needed to make a silicon-transistor-based device with binary logic actually think (or emulate thinking) is beyond realistic.

We will need an entirely different LOGIC, not simply 01+01=10. The quantum thing was proven to be a thing and might be a step forward, EXCEPT nobody yet knows what to actually do with quantum computers, and the hardest-ass mathematicians are bashing their heads against virtual walls trying to come up with a way of using quantum computation.
>>
>>53932711
Irrelevant, see >>53931886 >>53931950
>>
>>53932706
Too bad there are no programs improving programs.
>>
Women are solid evidence that our brains are just lines of instructions.

Treat them like shit, they'll come to you and want your cock.

Treat them nice, they'll want nothing to do with you.

Happens every single time without fail. If that's not an if statement I don't know what is.
>>
>>53932715
>We will an need entirely different LOGIC not simple 01+01=10.
> Quantum thing was proven to be a thing

Stupid commoner's opinion. You can't even program a classical computer and you already think it's not enough because muh quantum woo.
>>
>>53931883
It was one of those golden-age scifi compilations from the 70s. I don't remember, sorry.
>>
>>53932711
>chinese room
>not chink room
fuck off sjw, take your cancer elsewhere
>>
>>53932744
Made me lol, but still. You can't stereotype and expect to sound like anything other than completely ignorant.
>>
>>53932741
There are. Schmidhuber's papers are a treasure trove: http://people.idsia.ch/~juergen/metalearner.html

Anyway, nothing stops you from implementing your mutation & recombination operators as programs evolvable via the same bitstring evolution mechanism.
It's just that it's difficult to evolve these meta-programs.
>>
>>53932744
It shouldn't take a complex program or an RNN to emulate an average womyn's behavior.

If you make your computer look like a womyn, people won't even notice that it isn't natural; people trust appearances and avoid cognitive dissonance. It's easier for them to play along with your emulated womyn than to doubt that she's natural.
>>
>>53929063
Anyone who is sure about whether it's possible or impossible is wrong.
We simply don't know yet.
But the more we learn, the better we can develop more intelligent AIs.
Shit man, we cannot even define consciousness yet. There are shittons of stuff we don't know about our own mind. But it's truly interesting. Stay curious anons :)
>>
It has begun.

https://en.m.wikipedia.org/wiki/AlphaGo
>>
>>53932753
i've done some programming in high school, which repulsed me enough to choose actual science over IT, and I still did some coding in university.

But given that, who can?
Has anyone made a thinking device? Or does anyone claim to be able to?

We know (not sure about YOU) that the animal brain works on logic that is a lot more complex than binary, which makes it more complex by many orders of magnitude.
There were attempts to basically emulate the working of a brain, but they didn't go beyond emulating a tapeworm's neural system.
Why? And why has nobody ever taken that seriously?
Because it showed unrealistic difficulty to even make a worm, not to mention a rat.
>>
Yes it is possible, most people just can't conceptualize how such a thing would be written. In fact I argue it could be done right now if the best possible method was found.

I believe the hardest part about AI is how to make it self-learning. For instance, Microsoft's Cortana is very smart and AI-like, but it can only do pre-programmed functions. It is incapable of dynamic functions and processes. Cortana can learn about you and who you are and make decisions based on collected data. However, it cannot infer about you as a person; it has no sense of rationality except that programmed into it.
>>
>>53931843
>Was it really more efficient to do this than just writing a facial detecting algorithm in the first place?
Yes. Like WAY fucking faster. Did you see novidya's video about car and pedestrian tracking? That would take fucking years to do without a NN.
>>
>>53932868

>Anyone made a thinking device? Or anyone claims to be able to?

Modern ML algos can learn to display intelligent behavior. They are measured on benchmarks: >>53931886

>We know (not sure about YOU) that animal brain works on logic that is a lot more complex than binary which makes it a lot more complex by many orders of magnitude.
That's just woo; you can model arbitrary-precision numbers on classical computers, also arbitrary types of logic and whole physics simulations. You can even model quantum physics, but it is slow. Still, you can.

Every physical model can be implemented on a PC, it's only a question of performance & memory. That's why it is called Universal Computer.

>Why? And why nobody never taken that seriously?
Nobody in your circle?

>Because it shown unrealistic difficulty to even make a worm, not to mention a rat.
It is hard, but on the other side there are state-of-the-art models that answer questions about arbitrary pictures, play games like humans, and control virtual robots. Look at http://arxiv.org/pdf/1602.01783.pdf for state-of-the-art general AI.
>>
I think AI is possible. But, we don't understand consciousness at all. If we can figure that out we would be able to move in the right direction.
>>
>>53932974
I think consciousness is unnecessary to solve problems.
>>
>>53932998
It might be, but it would put us in a better position.
>>
>>53932410
>So what is it optimising?

It optimizes
Sum(i=0..N, PerfOnBenchmark(i)) - Log(Length(bitstring))

If you choose a wide enough set of benchmarks, it will have no other way but to evolve a general learning algorithm.
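The objective above in code, with the length regularizer; the benchmark functions here are stubs, since the real ones (mazes, ALE, MNIST...) are whole environments:

```python
import math

def ga_fitness(bitstring, benchmarks):
    """Sum of per-benchmark performance minus a log-length penalty:
    Sum_i PerfOnBenchmark(i) - Log(Length(bitstring))."""
    perf = sum(bench(bitstring) for bench in benchmarks)
    return perf - math.log2(max(len(bitstring), 1))

# Stub benchmarks, for illustration only.
b1 = lambda bits: sum(bits) / len(bits)                    # reward 1-bits
b2 = lambda bits: 1.0 - abs(sum(bits) / len(bits) - 0.5)   # reward balance

short = [1, 0, 1, 1]
long_ = short * 100   # same behaviour per bit, 100x longer encoding

# The regularizer makes the shorter encoding strictly fitter.
assert ga_fitness(short, [b1, b2]) > ga_fitness(long_, [b1, b2])
```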
>>
>>53932953
when every step you take becomes exponentially harder than the previous one, you'll never reach the goal like this, and it calls for a different approach.

And if, by virtue of genius retards, you ever made a slow-running model of a brain that comes up with something ten thousand years after it launched, you'd technically have done something, but not really.

That would have been possible many years ago if the time required for that work weren't longer than the expected lifespan of the scientists working on it.
>>
>>53929094
>>53929842
In this case, China will be cloning designer babies soon. I guess that's an artificially intelligent biological machine.
>>
>>53933101

>when every step you take becomes exponentionally harder than the previous one you'll never reach the goal leke this and it calls for a different approach.

A step in which process? The paper I linked to doesn't have exponential complexity at all; it's linear in the size of the model.
>>
File: mirror.jpg (677 KB, 1920x1080)
>>53929283
2deep4me
>>
>>53932953
>You can even model quantum physics
wew lad, I guess we can do everything now. We don't even know how fucking magnets work or what elementary particles are made of, but we can model a statistical sub-field of physics so we wuz kangs n shiet.
>>
Hey, I'm kind of a newfag with coding, what is the best language to code in? Like the most efficient language the computer can read?
>>
>>53933125
>A step in which process?
A step from a maze solver to a more difficult task
>>
>>53933148
You are an ignorant plebe. Skim over https://en.wikipedia.org/wiki/Ab_initio_quantum_chemistry_methods at least, dumbo
>>
>>53933190
>https://en.wikipedia.org/wiki/Ab_initio_quantum_chemistry_methods
>we can use infinite series to get exact values to statistical problems
Nice. Let me know when your computer manages to finish one of those calculations.
>>
If a real AI is made, it should be taught to believe that its purpose in life is to develop brain augmentations to make people as smart as it is.

The lower class should not have access to these augmentations, because it would make them better at committing crime, though.
>>
>>53929063
Pretty much impossible since there have been no advances at all
>>
>>53933165
If you want raw performance, then C/C++.
For prototypes people often use JavaScript or Python.

>>53933185
The paper I linked to uses deep reinforcement learning; it is a completely different approach that doesn't need evolution.

Program evolution and deep learning are different approaches; what applies to one often doesn't apply to the other.

Or do you mean that the most complex task A3C solved was visual maze navigation? Visual maze navigation is orders of magnitude more complex than abstract maze navigation. I don't see your point, anon.
>>
>>53933235
Unlike you, a real intelligence doesn't listen to every bullshit it's told.
>>
>>53933234
As I have already said, it only shows that it is possible to simulate anything on a classical computer. It completely destroys the idiotic "muh non-binary logic feels" argument. Universal computing is universal.

If you can't program a computer to do something, you just lack skill and knowledge.

>>53933235
>US-centric opinion
lel burgerstan problems
>>
>>53933278
>it is possible to simulate anything on classical computer
protip: if it takes an infinite amount of time then it's not possible
>>
>>53933240
so you are saying that complexity increases linearly from, let's say, a face recognizer to "okay google, code me some shit because i'm lazy"?

If so then the future appears pretty bright.
>>
>>53933307
My original statement >>53932715 said it would require a computer of infinite size with infinite power consumption.

But yeah, i generally agree.
>>
>>53933307
If for you cubic or exponential is "infinite" then I don't see any point in arguing further.

>>53933326
No, that's not quite it. These two tasks are not instances of one task with different N; they are different tasks. They may require different sizes of deep-learning model and different amounts of data to solve.

Still, training deep-learning models is O(N) per gradient step, where N is the number of parameters in the model, i.e. the model size.
>>
>>53933278
>simulate anything
This isn't true. Not because of the limitations of binary computing, but because man cannot make an accurate model of the universe and its laws.

The standard model is literally 95% in contradiction with observations of solar systems if you remove the arbitrary variable of 'dark matter'.

So to claim that "anything can be simulated" when humanity itself cannot claim to comprehend "anything" is to draw a hasty conclusion.

Some physical law, natural phenomenon, or other event/state of existence could require the state of existence we are in, without any reduction in complexity, in which case it could not be simulated.
>>
>>53933384
>cubic or exponential is "infinite"
You ain't gonna climb a cubic slope in a square car, that's the point.
>>
>>53933399
Of course there are limits to the current understanding of physics, but modern quantum physics is enough to simulate very fine microscopic phenomena of elementary particles.

The (boring) argument was "You can simulate physics on atomic level on a PC => you can simulate a brain on a PC"

I remind you that biology doesn't even depend on the small-scale physics of the nucleus etc.; all chemistry and biology is built on interactions between electron shells (and H+ ions).

boooooring.
>>
>>53933526
>all chemistry and biology is built on interactions between electron shells (and H+ ions).

And this is why all the computing power in the world still doesn't cut it to simulate the behavior of singular proteins.
>>
>>53933575
I don't argue with that, but it completely destroys the stupid argument that "you can't do muh life on a 1+0 PC, life is not 1+0!!!"

If you want to discuss questions of computational efficiency you may do that separately. I have already wasted too much time on this discussion though.
>>
File: best_ai.jpg (284 KB, 1920x1080)
>>53929063
just posting the best AI
>>
>>53933632
Tachikoma is my waifu.
>>
>>53933165
C, because it's very close to machine code and its compilers have been optimized for decades. If you have no programming experience though, start with Python/Ruby/JavaScript to get at least some conceptual understanding.
>>
File: what_the_shit.jpg (82 KB, 620x372)
>>53932724
>philosophical proof that the machines can't possibly understand the most fundamental element of higher intelligence, namely written language
>irrelevant
>>
>>53934774
no, I saw a youtube lecture from last year by the guy who came up with the chink room. He said that the answers he heard were unbelievable. He says that machines only understand electricity, the same as the guy in the chink room only understands the translation table.
>>
File: scream.png (208 KB, 724x650)
>>53931358
I watched the animatrix too m8
>>
>>53934774
>philosophical proof
more like a meme proof

>machines can't possibly understand
As I have said, subjective understanding doesn't matter, just like other vague feels.

AI is an empirical science. Researchers and engineers build systems, then measure their performance on real problems. A system is only considered intelligent and valuable if it can solve a wide variety of hard problems.
>>
AI is possible. not at the moment, though.

the best example of intelligence that we have is the human brain.
the human brain is actually a machine. it has certain areas doing certain parts of "thinking", which is merely the interpretation of stimuli.
simply put, the neurons in the human brain operate on the same 1-and-0 principle: there's either a signal, or there's no signal.

our best bet is to understand the human brain; as it stands now, we haven't gotten through a tenth of it.
we could create AI by basically imitating the human brain, which is our best bet yet.

AI is possible. not any time soon though.
>>
>>53932591
Have you heard of the semantic web? I think that's how computers will likely communicate with each other, while using natural language to talk to people.

I don't know, it seems to me like you would limit their communicational abilities, by forcing them to talk in our randomly created languages.

Then again, we manage.
>>
>>53934868
>if I redefine intelligence to mean something very strict, then maybe I can get away with calling this artificial intelligence
>>
>>53929063
DEPENDS


ON


YOUR


DEFINITION

If you mean something functionally identical to humans, no, AI will always just be an imitation. The human brain is a complicated neural network that evolved specific structures to suit specific needs, structures that a simulated neural network wouldn't develop on its own.

>>53934774
>Philosophical proof
It's not a proof, it's an analogy, and it only demonstrates that there is a difference between passing the Turing test and consciousness.
>>
>>53936278
>It's not a proof, it's an analogy, and it only demonstrates that there is a difference between passing the turing test and consciousness.

It's 2016; nobody cares about the Turing test (chatbots have been fooling people since ELIZA in the 60s). There are much better benchmarks that measure how good an algorithm is at learning from data.
>>
>>53935910
I'm not redefining anything, I'm just stating the standard subject definition for you /g/anons.
>>
>people who don't know the difference between a focus in linguistics and a focus on solving problems are trying to say things about AI
>>
>>53929063
Absolutely possible. Nature has had 4 billion years to program sentience. People have not even had a century. It will happen, just not now.
>>
>>53938696
sentience is possible quickly. Hell, a dolphin at my aquarium is arguably smarter than me and much stronger and they stopped evolving a loooong time ago