
You are currently reading a thread in /sci/ - Science & Math

Thread replies: 39
Thread images: 3
File: artificial-intelligence-merl.jpg (289 KB, 1920x1080)
>"For strong AI to exist it must be able to rewrite its own code!"

Why do retards repeat this so often?
>>
>>7635233
And why do you disagree with the idea?
>>
>>7635246
Neither the theoretical mathematical formulation of AGI (AIXI) nor current state-of-the-art AI systems involve "rewriting code" in the least. It seems like something imagined by someone who knew less than nothing about how computers or AI actually work, just because it sounds cool. That would be no one's business but their own, except that I've seen it repeated so frequently.

Programs that rewrite themselves are nothing novel in CS either, and none of them has resulted in AGI, which shows that rewriting your own code is certainly not a sufficient condition for strong AI, and there's no evidence it's a necessary one.

SO STOP SAYING THAT IT IS.
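For what it's worth, a minimal Python sketch of a "self-rewriting" program (all names here are illustrative). Self-modification is trivially easy, and trivially unintelligent:

```python
def rewrite(new_source):
    """Replace the global `step` function by exec-ing new source."""
    namespace = {}
    exec(new_source, namespace)
    globals()["step"] = namespace["step"]

rewrite("def step(x):\n    return x + 1\n")
first = step(3)    # step adds one: 4

# The program "rewrites its own code": step now doubles instead of adding.
rewrite("def step(x):\n    return x * 2\n")
second = step(3)   # step doubles: 6
```

Nothing about this loop gets any smarter no matter how many times it runs, which is the point being made above.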
>>
A strong AI will potentially be able to rewrite its source code, but it's not a requirement to qualify as strong AI
>>
>>7635294
if you told a random person on the street that they were actually an AI living in a simulation, 99.99% would have no chance of ever being able to understand, let alone improve upon, their source code
>>
What a shitty thread
>>
>>7635449
self hypnosis
>>
>>7635449
This is a horrible analogy.

A strong AI is by definition as smart as, or smarter than, a human being. If humans were able to program this AI in the first place, then the AI will have the intellectual capability to do so as well.

Now your argument is bad for the following reasons. One, it implies that the constructors of that simulation are vastly more intelligent than the AIs in that simulation, something we have ruled out in the very definition of strong AI. Two, it imposes a kind of sandboxing constraint to prevent modification of the source code, which has nothing to do with the ability to write source code. It also implies a limitation on the perceptual abilities of the AIs, as in they cannot understand that which is outside their simulation. Now surely an AI could have perceptual limitations, like we humans do, but in fact an AI having a much larger perceptual range than humans is more likely than it having a smaller one. AIs could have IR vision, hear above human frequency ranges, etc.
>>
>>7635233

There already exists code that can rewrite its own code.

Also, such code has been blocked by the OS/hardware since forever, as it is malware heaven.
>>
>>7635548
lisp would like to have a word with you.
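The Lisp idea of code-as-data has a rough Python analogue in the standard-library `ast` module: parse source into a tree, transform it as data, compile it back. A toy sketch (the transformer and function names are made up):

```python
import ast

# Parse source into a syntax tree: code as manipulable data.
tree = ast.parse("def f(x):\n    return x + 10\n")

class AddToSub(ast.NodeTransformer):
    """Rewrite every addition into a subtraction."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

new_tree = ast.fix_missing_locations(AddToSub().visit(tree))
namespace = {}
exec(compile(new_tree, "<ast>", "exec"), namespace)
result = namespace["f"](15)   # 15 - 10 = 5, not 15 + 10
```

It's clumsier than Lisp macros, but the same capability: programs that treat programs as data.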
>>
>>7635233
Because writing a superhuman AI is really, really hard.

So for all practical purposes, or so the reasoning goes, it ought to be far easier to simply shoot for the simplest possible AGI which can rewrite itself into a smarter and more general intelligence. After all, a "general intelligence" that can't write code would be a pretty poor AI indeed, and so there ought to be no reason it could not be given its own source.

The idea is to allow it to overcome design limitations. We've been trying to write a workable, fully-intelligent AI for more than half a century now and it hasn't turned out well. Focusing on a narrower "seed AI" and letting it work out the problems seems like a better approach.
>>
>>7635562
Also, being able to rewrite code seems like the obvious way to implement unbounded metalearning. Schmidhuber's Gödel Machine is a notable theoretical example of this.

That said, code rewriting as a metalearning technique has obvious problems - for instance, how to avoid fucking yourself up in such a way that it impairs your ability to improve yourself or to revert to a previous better state.
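A toy illustration of that revert-to-a-previous-better-state problem (the "performance" metric and all names are invented for the sketch, not taken from the Gödel Machine):

```python
import copy
import random

random.seed(0)

def performance(state):
    """Pretend fitness: the closer `gain` is to 10, the better."""
    return -abs(state["gain"] - 10)

state = {"gain": 0.0}
for _ in range(200):
    checkpoint = copy.deepcopy(state)       # protected known-good copy
    state["gain"] += random.uniform(-1, 1)  # attempted self-modification
    if performance(state) < performance(checkpoint):
        state = checkpoint                  # modification hurt: roll back
```

The hard part the post points at is exactly what this sketch assumes away: that the rollback machinery itself is never subject to modification, and that "performance" can be measured reliably.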
>>
>>7635552
first-class functions are slightly different from being able to rewrite executable data

>>>/g/ is that way
>>
>>7635532
you seem to be extending the definition of strong AI to be "whatever makes my viewpoint correct"

strong AI is minimum human level AI, what you're changing the definition to is "as smart as the smartest humans on the planet, and groups of them for that matter"

not to mention the possibility that the architects in charge don't fully understand how the AI works, which would happen if there were some components of it developed through evolutionary algorithms
>>
>>7635233
Because humans are incredibly slow, have very finite mental resources, and are subject to variation. An AI could work constantly, evaluating hundreds of thousands of approaches to expanding itself for any given stimulus or context it finds itself in, every single second. And it doesn't need to rest. The angles it sees don't necessarily depend on the day, the month, the year, or the decade.

It's all around better. It doesn't need a compiler, it doesn't even need to write itself in assembly. It can work directly in machine code and test it in realtime. It can make a clone of itself and evaluate its performance to decide whether to incorporate a change, if it has no other means of testing. If it writes itself into a corner, it can fall back to a protected subroutine that allows it to roll back to a state of prior functionality. It could split itself off and evolve in parallel, then remerge. Or not.

If the goal is a good general intelligence (more general than our own even), it's the best way to do it. Mankind seems to desire to create a God in its own idealized image, this is how that will happen. Human lifespans are otherwise too short and we just aren't suited to this type of engineering.
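The clone-and-evaluate part, at least, is just an evolutionary loop. A sketch in Python (the target and mutation scheme are arbitrary illustrations, not anyone's actual design):

```python
import random

random.seed(1)
TARGET = [3, 1, 4, 1, 5]

def score(genome):
    """Higher is better; 0 means a perfect match with the target."""
    return -sum(abs(a - b) for a, b in zip(genome, TARGET))

def mutate(genome):
    """Clone with a single random +/-1 tweak."""
    clone = list(genome)
    i = random.randrange(len(clone))
    clone[i] += random.choice([-1, 1])
    return clone

best = [0, 0, 0, 0, 0]
for _ in range(300):
    clones = [mutate(best) for _ in range(8)]   # spawn mutated copies
    best = max(clones + [best], key=score)      # keep the top performer
```

Note that what evolves here is a parameter vector, not the program itself, which is precisely the objection raised further down the thread.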
>>
>>7635643
that reasoning extends to why humans choose to train neural networks on lots of example data rather than tune the individual weights one at a time, AND YET this does not necessitate the rewriting of any source code on the part of the neural net

the whole problem i have with this idea is that it's utterly circular in how it proposes to solve the AI problem... "let's make an AI by first creating an AI that can somehow modify itself intelligently" well gee, that sounds a whole lot like the AI you set out to build in the first place

my point is that the problem isn't made any easier by saying "oh let's just limit ourselves to an AI that can make improvements to its own source code". That's already a very tall order if it's not just stochastically flipping bits in its machine code, and if it IS doing it that way then that's ridiculous and more likely to result in program crashes than any useful gains. Self-modification can be coded directly into the program, a la the neural network example, with no need to modify source code.
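The neural-network point in miniature: a one-weight model fit by ordinary gradient descent. Its behavior changes; its source never does (the data and learning rate are arbitrary):

```python
# Learn y = 2x from examples. Only the weight `w` ever changes;
# no source code is rewritten anywhere.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0
lr = 0.05
for _ in range(500):
    for x, y in data:
        err = w * x - y      # prediction error on this example
        w -= lr * err * x    # gradient-descent weight update

# w converges toward 2.0: the model "learned" without self-modification.
```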
>>
>>7635604
No I'm not. I gave an explicit definition.
>>
>>7635670
Obviously I'm oversimplifying how I myself would actually start off when trying to engineer such a thing. Of course its primary directive wouldn't be "search for indication that you need to improve yourself, then figure out how". That introduces a problem that, as you said, is circular and self-referential in both its problem and its solution.

Likely I'd start with a training model. It doesn't need to understand anything but the plane it works on presently. It would have external memory, ability to compile and transform data as it receives it, various indexes that allow drawing connections, etc. It also need not have any concept resembling our "I", or a sense of some divide between processing and stored data.

The point is, it can't be shackled in such a way that it can only learn things and improve itself in ways that the engineer has deliberately predisposed it to prefer. This is the best way for it to eventually generate its own concept of reality, then come to realize its perspective is different from actuality (the real reality), and continue from there.
>>
>>7635573
That's not an issue when you give the AI a separate processor to work with. If the offspring AI is more capable, then the original AI's data is stored and archived, or the two AIs collaborate.
>>
>>7635233
i used to think this before i took a few AI courses
kinda laugh about it now
>>
>>7635688
there are a lot of assumptions in here
>>
>>7635725
Everything is an assumption until tested. The only variance lies in how accurate you're expecting it to be.
>>
>>7635688
>Likely I'd start with a training model. It doesn't need to understand anything but the plane it works on presently. It would have external memory, ability to compile and transform data as it receives it, various indexes that allow drawing connections, etc. It also need not have any concept resembling our "I", or a sense of some divide between processing and stored data.
ask me how i know you've never programmed
>>
>>7635246
We can't rewire our brains on purpose
>>
File: 1441161262754.png (715 KB, 629x758)
>>7635759
maybe we just didn't discover how to yet
>>
File: 1367856116639.png (184 KB, 1280x800)
>>7635759
I have
>>
>>7635233
well we have a LONG way to go before we are able to create a human-like AI.

until we can create a machine that can keep itself "alive" by feeding or some sci-fi shit, plants are superior to computers

when we get to the point where computers are like animals and can evolve to adapt to an environment, we still have a LONG way to go before reaching human capacity, let alone neanderthal capacity

then the cool sci-fi shit happens when computers surpass human capabilities
>>
>>7635759
You sure about that?

Do one thing every day, and you'll be changing your brain.
>>
>>7636268
Yeah but you're working within the confines of the system built for you by evolution. An AI changing its code is like a human changing the rules by which synaptogenesis and neuroplasticity take place in their brain. It's not necessary and will probably kill them.
>>
>>7635233

Because an overwhelming majority of the people who discuss strong AI are complete retards.

Case in point - you
>>
>>7636260

Given the right peripheral devices (in this case, a simple power sensor, a movement mechanism, and a camera), anybody with a rudimentary knowledge of machine learning and graphics processing could create an artificially intelligent robot capable of charging itself.

Do you always speak from pure ignorance?
>>
wait a minute, humans can't even rewrite their own code!!
>>
Rewrite? No. Add on? Yes.
>>
>>7635233
>Why do retards repeat this so often?
Because it's part of the intelligence explosion / superAI meme bundle.

It's the same with Strong AI being some unspecified entirely new sort of AI.
It's the same with AGI (which is defined as EQUAL to or better than human) always being assumed to self-improve to superhuman in a few minutes' time.
It's the same with "AI greatest threat to mankind" being mentioned constantly.

People are too stupid to understand and debate details, so they just memorize the whole paragraph and parrot it like a fucking meme.
>>
>>7637198
>Add on? Yes.

Not even that.

A large neural network will neither need to add to nor rewrite its code.

It'll change its weights according to the initial code and that's all; it can still be flexible and learn things.
>>
>>7637240
Yeah, that's just backpropagation. There are probably plenty of self-programming AIs. In the definition of strong AI it would have to be better than a human. Now I'm not sure if a human programming a computer or a human programming themselves would be the comparison, considering humans are "programming" themselves by learning new information and then changing their future responses through such "programming".

Like if I learned stop, drop, and roll, got into a fire, and then stopped, dropped, and rolled, would that mean I programmed myself? Can learning be considered self-programming?
>>
>>7637233

>superAI meme bundle

fucking lost it.
>>
>>7635233
well, human beings cannot reprogram themselves and we're pretty fucking advanced. therefore that argument makes literally no sense. i think it's that simple.
>>
>>7637329

b-b-but... i read on my fav nardo forum that u can get a sharingan if you meditate! :(