
You are currently reading a thread in /g/ - Technology

Thread replies: 178
Thread images: 26
http://arstechnica.com/information-technology/2016/06/10-million-core-supercomputer-hits-93-petaflops-tripling-speed-record/

>RISC architecture

we future nao

When can I post memes on /g/ with it?
>>
>>55178436
Nobody needs more than 1 petaflop
>>
>>55178553
It's never about what one needs, only about what one wants.
>>
>>55178626
>>55178553
they also said no one would need more than 5 MB of storage. People like you have no real imagination; you just take little linear steps and assume the next step will always be like the last one. When information technology gets more powerful we create new applications that use it, always. We'll never run out of imagination and uses for more computing power
>>
File: X10SDV-7TP8F.png (163 KB, 510x317)
>>55178436
>>RISC architecture
>we future nao
You do realize that modern x86 processors use a RISC-style core behind a decoder that translates CISC instructions into micro-ops, right? The RISC vs CISC debate died a long time ago.

Also this supercomputer is power hungry as fuck. They should have used pic related but I guess the chinks love wasting electricity.
>>
>>55178664
uh, it's literally triple the system it's replacing at only ~15 MW. That's pretty crazy good.
>>
>>55178436
The CPU (ShenWei SW26010) looks like a chinese clone of the Cell.
>>
>>55178436
I'm curious, why are supercomputers able to do so much fucking work despite GPUs shitting on them in terms of GFLOP performance?

Example:
This chink supercomputer: 93 GFLOPS
R9 295X2: 1408 GFLOPS (FP64)

http://www.geeks3d.com/20140305/amd-radeon-and-nvidia-geforce-fp32-fp64-gflops-table-computing/

On that note why not just use a shitload of GPUs for supercomputers instead?
>>
File: sunway-report-2016-15.png (165 KB, 1200x1798)
>>55178436
Per-node performance isn't all that impressive.

The impressive part is that they figured out a way to connect 2.5X more nodes together than the fastest American supercomputer without having the between-node latency shoot out of control, bringing down the overall performance.
>>
>>55178912
>93 GFLOPS

It's 93 PFLOPS, not GFLOPS.

That's 93,000,000 GFLOPS.
>>
>>55178941
Whoops, I'll go sit in a corner by myself now.
>>
File: ibm_blue_gene_q.jpg (233 KB, 1024x732)
>>55178436
American supercomputers look better
>>
I thought Obabo was working on a new supercomputer? I bet he's pushing for that shit hard now.
>>
>>55179014
>I bet he's pushing for that shit

You mean IBM, Nvidia, and Intel are pushing for that shit for those sweet, sweet federal bucks.
>>
>>55178912
Did you just ask why we use CPUs when GPUs are faster?
>>
File: 1403021172700.jpg (12 KB, 251x242)
>tfw you will never look upon a marvel of engineering and swell with pride knowing your country built it
>tfw you will always be irrelevant
>>
I'm surprised to see Saudi Arabia in the top 10.
>>
>>55179071
oil and gas discovery requires a ton of computational power
>>
>>55179071
I'm not. You have any idea how much money that country has?
>>
The progress being made here is absurd.
it was only 10 years ago that
>With 280.6 TFlop/s on the Linpack benchmark it is still the only system ever to exceed the 100 TFlop/s mark.
And now this system is 330x faster!

Can we expect the first exaflop system to arrive in ~5 years ?
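The ~5 year guess is a defensible extrapolation. Taking the 280.6 TFlop/s 2006 figure quoted above and assuming growth stays exponential (which is the big assumption), a quick sketch:

```python
import math

# Linpack records ten years apart, in TFLOPS
blue_gene_2006 = 280.6
sunway_2016 = 93_000.0

# implied annual growth factor over that decade
growth = (sunway_2016 / blue_gene_2006) ** (1 / 10)

# years of the same growth needed to reach 1 exaflop (1,000,000 TFLOPS)
years_to_exaflop = math.log(1_000_000 / sunway_2016) / math.log(growth)

print(f"growth ~{growth:.2f}x per year")           # ~1.79x
print(f"exaflop in ~{years_to_exaflop:.1f} years")  # ~4.1 years
```

So roughly 2020, which matches the predictions further down the thread.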
>>
>>55179014
Lots of countries build supercomputers for all kinds of purposes. The US will probably have the fastest one next year, then the year after that China will have the fastest, and so on.
>>
>>55179071
They just bought a computer from Cray.
>>
Is this enough to run Devil May Cry 3?
>>
>>55179120
Supposedly the US will reach the first exaflop system by 2020.
>>
>>55179120
New Intel supercomputer: only 5% faster than the old one
>>
>>55179143
It can even run Crysis, on medium at 30fps.
>>
File: lain_93.png (1 MB, 1520x1080)
>>55178436
>DEC alpha
we cyberpunk nao
too bad it runs on linux
>>
>>55179064
I can't think of any country outside of Africa and South America that doesn't have a megastructure of some sort somewhere.


I mean even Liechtenstein has a pimpin ass castle
>>
>>55179120
And nowadays, you can make a faster supercomputer for about 12k.
>>
>>55179168
Nigga everything runs on Linux. It's literally everywhere except your personal computers. I don't know when we're having sentient AI, but I'm sure it'll be running Linux.
>>
>>55179172
Chile has the ESO, which hosts many of the largest telescopes in the world. Peru has Nazca and Machu Picchu. Rio has that famous statue of Jesus.

Egypt has the Pyramids, Sphinx, etc.
>>
>>55179214
>I don't know when we're having sentient AI, but I'm sure it'll be running Linux.

It would probably commit suicide when it realises that it's powered by systemd
>>
>>55179172
I meant a modern marvel of engineering.
>>
>>55179242
top kek
>>
>>55179252
I wonder what the Africans are up to, besides starving.
>>
>>55179214
> I don't know when we're having sentient AI, but I'm sure it'll be running Linux.
so it means that Linux® will finally get usable drivers?
>>
>>55179275
Fuck off man, Linux has drivers. Even my shitty chinese tablet has built in drivers compared to Windows.
>>
>>55179273
I'd probably just kill myself if I were born an African.
>>
>>55179416
same. You gotta hand it to them though, they are pretty mentally strong to not kill themselves despite years of poverty and unstable/abusive governments.
>>
>>55179299
you do know that drivers for the Zhang Wei Shenzhen Network Electronics wi-fi module don't count
>>
>>55179064
>tfw your country has the LHC

Now we just need to turn it into a superweapon
>>
>>55178664
In terms of performance per watt, this is quite a bit more efficient than the US Titan.


In any case, if the power is utilized, it could very well outstrip the US's computing capability soon. Remember not only does China have the top two, but they also have more supercomputers than the US in the TOP500.

This could very well lead to a snowball effect if US doesn't keep up.
>>
>>55178682
>reddit shills
>tripfag
>>
Can I use this for my condensed matter physics simulations?
>>
>>55178436
>supercomputer-hits-93-petaflops-tripling-speed-record/
public record maybe.
>>
>>55179558
That's a European project, no country can lay claim to it.
>>
>>55179716
the country of Europe
>>
>>55179727
The EUssr
>>
>>55178436

> http://www.netlib.org/utk/people/JackDongarra/PAPERS/sunway-report-2016.pdf
> Each CPE Cluster is composed of a Management Processing Element (MPE) which is a 64-bit RISC core which is supporting both user and system modes, a 264-bit vector instructions, 32 KB L1 instruction cache and 32 KB L1 data cache, and a 256KB L2 cache. The Computer Processing Element (CPE) is composed of an 8x8 mesh of 64-bit RISC cores, supporting only user mode, with a 264-bit vector instructions, 16 KB L1 instruction cache and 64 KB Scratch Pad Memory (SPM).

Literally just a mountain of PS3-style chips.
State of the art this is not.
>>
>>55179624
>In terms of performance per watt, this is quite a bit more efficient than the US Titan.
True, but that's not really shocking. The US Titan uses ancient AMD Opterons and Nvidia Tesla housefires, and despite that it only uses 8.2MW of electricity for its ~20 PFLOPS of raw computing power.

So 5 x US Titan supercomputers would use about 41MW of electricity for ~90 PFLOPS of raw computing power. That makes this chink supercomputer only ~2.66X more energy efficient than a US Titan that uses ancient, outdated hardware.

Anyway this chink supercomputer would get BTFO if a US supercomputer was made with AMD Polaris GPUs and super energy efficient Xeon-D processors like >>55178664, which might happen soon.

Shit, maybe the US won't even need to use many jewtel Xeon-D processors either. A single Rx 480 GPU can supply ~5 TFLOPS and only uses a max of 150 Watts. So 18,000 Rx 480 GPUs would provide ~90 TFLOPS and only use about 3MW of electricity. In addition, those GPUs alone would only cost about $3 million too. Maybe toss in a few Xeon-D processors just to oversee things and do the few calculations the GPUs can't do efficiently or at all.
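The power math above can be rechecked quickly using the thread's own numbers (~20 PFLOPS at 8.2 MW for Titan, 93 PFLOPS at ~15.4 MW for the Sunway machine; neither is an official spec-sheet figure):

```python
# figures as used in the thread (PFLOPS, MW)
titan_pflops, titan_mw = 20.0, 8.2
sunway_pflops, sunway_mw = 93.0, 15.4

# five Titans for roughly the same throughput, as the post says
five_titans_mw = 5 * titan_mw                   # 41 MW
efficiency_ratio = five_titans_mw / sunway_mw   # ~2.66x, matching the post

# or directly in GFLOPS per watt; the 1e6 factors in
# PFLOPS -> GFLOPS and MW -> W cancel out
titan_gflops_per_w = titan_pflops / titan_mw     # ~2.4
sunway_gflops_per_w = sunway_pflops / sunway_mw  # ~6.0

print(f"{efficiency_ratio:.2f}x more efficient")  # 2.66x
```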
>>
CHINK SHIT
>>
>>55179895
>*So 18,000 Rx 480 GPUs would provide ~90 PFLOPS
typo
>>
i for one accept our asian overlords
>>
File: BYoosIt.jpg (146 KB, 680x1235)
Wait so does this mean this chinkshit is DOA because of >>55179895 ?

The Rx 480 has an energy efficiency of ~33 Gflops/watt right?

This 6 Gflops/Watt chinkshit is starting to look like a joke.
>>
chinks fell for the 'big data gives you insight' meme

wew
>>
>>55178912


OpenCL nightmare? It would be a giant waste of resources because 60% of the cores would be unused. Something like that would be unstable and unusable for super large datasets, I assume.
>>
>>55179064
Are you a burger?
>>
>>55179238
Meanwhile England has Stonehenge, and even that is broken
>>
>>55178912
because GPUs only do things like move textures around really fast. They don't even have the opcodes required for the precise mathematical calculations real science needs.

specifically, in chink applications like making nuclear weapons, you have to solve for stochastic equations that compute the probability that neutron decays will be in the same place at the same time approaching arbitrary precision. you can't do that with a fucking OpenGL call
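The kind of computation described is Monte Carlo style: sample random events, count coincidences, converge on a probability. A toy sketch of the idea (the window, cutoff, and "physics" here are all invented for illustration):

```python
import random

random.seed(1234)  # reproducible toy run

# Toy Monte Carlo: estimate the probability that two independent
# "decay" events land within dt of each other inside a window T.
T, dt = 1.0, 0.01
trials = 100_000
hits = 0

for _ in range(trials):
    # one event time for each of two particles, uniform in [0, T)
    t1, t2 = random.uniform(0, T), random.uniform(0, T)
    if abs(t1 - t2) < dt:
        hits += 1

estimate = hits / trials
# analytic answer for comparison: 2*dt/T - (dt/T)**2 = 0.0199
print(f"coincidence probability ~ {estimate:.4f}")
```

Real neutron-transport codes run enormous numbers of such samples in double precision, which is where the FP64 throughput argument below comes in.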
>>
>>55179707
Are you implying what I think you're implying?
>>
>>55178913

It was probably expensive. My guess is the system's CPUs and mainboard chips have higher bus speeds at the cost of performance; bigger physical chips to decrease latency.

I imagine this was an expensive undertaking, all the CPUs are the same, and the power consumption per core must be wasteful.

It would be interesting to know whether chips with faster buses in large systems require more power to reduce latency than a smaller system with fewer, faster cores/chips.
>>
>>55178553

The human brain can't process more than 30 petaflops per second.
>>
>>55180074
How come certain GPUs were better for bitcoin mining than CPUs?

Not trying to be a smartass, I genuinely want to know.
>>
>>55180074
>you can't do that with a fucking OpenGL call
No but you can simplify it enough for it to be done on GPUs. Modern GPUs are capable of FP64 operations so things like ray tracing can be done on them now.
>>
>>55178659
I think at some point the advancement of technology will make its operators dumber. You can already see this nowadays with how many developers are high-level and can still get away with only rudimentary knowledge of what the generation before them had to know.

There might come a time where lousy data storing practices will be accepted because reading speed and storage size will be way too big for any application or use to ever fill it all.
>>
>>55180111
Wonder if that's what happened in Warhammer 40K. There was the Dark Age of Technology where shit like Terminator Armour and Land Raiders were pumped out like lasguns, and the Baneblade was the small, scout tank of the Imperial ground forces, and over time the techpriests and enginseers basically went "Alright newbie, there's a long, convoluted explanation as to how this piece of tech works, but all you need to know is you need to hit this big red button when the light flashes green," and then four hundred years later, the light doesn't flash green anymore and nobody fucking understands why, because the knowledge of how to repair the system was lost in the simplicity of 'hit button when green'.
>>
File: dr.jpg (103 KB, 1600x1059)
>>55179624
We must not allow a supercomputer gap!
>>
>>55179895
>A single Rx 480 GPU can supply ~5 TFLOPS and only uses a max of 150 Watts.
That's single precision, so it's mostly irrelevant for HPC. These computations are all double precision.

>>55179958
You wouldn't use a gaming card for a supercomputer. When you have thousands of cards in one machine, reliability is crucial. Cards for HPC are less likely to break and have features like ECC RAM to ensure correct results.

And, as I said above, you need double precision. Gaming GPUs are optimized for single precision performance.
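The single vs. double precision point is easy to demonstrate with nothing but the standard library; a minimal sketch that rounds values through float32 via `struct`:

```python
import struct

def f32(x: float) -> float:
    """Round a Python float (a double) to the nearest float32."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A 24-bit significand means float32 cannot even count past 2**24:
big = f32(16_777_216.0)           # 2**24
assert f32(big + 1.0) == big      # the +1 is rounded away

# float64 (Python's float) has a 53-bit significand and keeps it:
assert 16_777_216.0 + 1.0 > 16_777_216.0

# and representation error alone skews single-precision inputs:
print(f"{f32(0.1):.12f}")  # 0.100000001490
print(f"{0.1:.12f}")       # 0.100000000000
```

In a simulation that iterates billions of times, those per-step errors compound, which is why HPC codes insist on FP64.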
>>
>>55180029
Of course not, I wouldn't be saying that if I was.
>>
>>55180164
Where are you from then?
>>
>>55180169
A third world country.
>>
File: 1435852110342.jpg (52 KB, 512x512)
You know what I find very disturbing? Nobody has bothered to mention that we now have ~3X the estimated processing power to simulate a human brain, for only 15MW. That means a single human brain could be simulated for ~5MW. HOLY FUCK.

Soon a human brain will be able to run on just 5KW, about $1/hour in electricity. The singularity is near, EVERYBODY PANIC!!!
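Taking the thread's ~30 PFLOPS-per-brain estimate at face value, the arithmetic behind the panic works out like this (both the per-brain figure and the $0.20/kWh electricity price are assumptions):

```python
machine_pflops, machine_mw = 93.0, 15.4
brain_pflops = 30.0   # the thread's per-brain estimate, not a known figure

brains = machine_pflops / brain_pflops   # ~3.1 simulated brains
mw_per_brain = machine_mw / brains       # ~5 MW per brain

# a 1000x efficiency gain would bring that down to ~5 kW; at an
# assumed $0.20 per kWh, 5 kW costs about a dollar per hour
kw_after_gain = mw_per_brain * 1000 / 1000   # MW -> kW, then /1000 gain
dollars_per_hour = kw_after_gain * 0.20

print(f"{brains:.1f} brains at {mw_per_brain:.1f} MW each")
print(f"after 1000x: {kw_after_gain:.1f} kW, ~${dollars_per_hour:.2f}/hour")
```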
>>
>>55180209
How do you estimate the processing power required to simulate the human brain when you know almost nothing about how it works?
>>
>>55180208
You could probably look at your country's mortality rate and swell with pride at that, I dunno.
>>
>>55180209
>~3X the estimated processing power to simulate a human brain
Estimates from where? The estimates of the brain's processing power that I've seen vary by many orders of magnitude. No one has a clue how powerful the brain is, let alone how hard it would be to simulate it.
>>
>>55180209
I wish we could round up all singularityfags and shoot them over a ditch.
>>
>>55180163
Modern GPUs like the Rx 480 do double precision computations now, you dumbass. In fact even ancient graphics cards like the HD 6990 can do double precision, at over 1 TFLOPS too. Not sure how many TFLOPS of FP64 the Rx 480 can do though.
>>
File: dafs.png (536 KB, 809x834)
>>55180209
>yfw a truly logical and self-learning AI unguided by any human initialization would probably turn into a nihilistic statue the moment it's turned on
>>
>>55180265
>Modern GPUs like the Rx 480 do double precision computations now you dumbass.
Of course they can do it. It's just far slower than single precision. The 480 is nowhere close to 5 TFLOPS double precision.

HPC GPUs are built with double precision in mind and are far better at it than gaming cards.
>>
>>55178664
that is completely false, I think you should relearn what CISC means.
>>
>>55180033
The English have a couple good Castles, don't worry about them
>>
>>55180223
We know HOW it works, we just haven't mapped it yet so we don't know what it's doing.

But we're getting there. There is at this moment a fully mapped brain of a worm that was dissected in the 80s, and it's being used to operate robots.
Singularity SOON™
>>
File: XfzG3.png (551 KB, 500x667)
>>55180223
>>55180246
>>55180257
>"Researchers estimate that it would require at least a machine with a computational capacity of 36.8 petaflops (a petaflop is a thousand trillion floating point operations per second) and a memory capacity of 3.2 petabytes – a scale that supercomputer technology isn’t expected to hit for at least three years."

http://hplusmagazine.com/2009/04/07/brain-chip/

That article was written 7 years ago when Cray Jaguar was struggling to reach 2 PFLOPS.

Now we have this chinkshit churning out close to 100PFLOPS and we can now also cram over 1 PB of RAM in just a few thousand nodes too.

Anyway that estimate was done by researchers who knew what the fuck they were talking about.
>>
File: dark webz.jpg (350 KB, 640x1920)
>>55179707
>>55180077
>not running your own Gadolinium Gallium Garnet Quantum Electric Processing Unit
fucking /v/ leave my board REEEEEEEEEEEEEEE
>>
>>55180096
>petaflops per second
:^)
>>
>>55178912
They do use GPUs, dumbass. Most modern supercomputers are just glorified Beowulf clusters loaded with Xeons, Phis and Teslas, where the latter are great at simple computations that can be parallelized to embarrassing levels, but they suck ass when the problem can't be. Those ideal synthetic benchmark scores aren't worth a shit when your code can't utilize them.
>>
>>55180430
Oh, "researchers" say it! How silly of me. I didn't realize you had "researchers" to back you up. I'm convinced now!

Obviously a transhumanist magazine would claim the Singularity is right around the corner, but they don't represent the scientific community. There is no consensus.
>>
>>55180223
>>55180246

I'm guessing he's going off of what experts have told us about how much a human brain is capable of processing. One such expert is one of the IBM chief scientists behind the DARPA SyNAPSE project. He says it's roughly equivalent to about 38 petaflops (yes, humans don't do floating point calculations, but this is a rough equivalent in power).

>http://blogs.scientificamerican.com/news-blog/computers-have-a-lot-to-learn-from-2009-03-10/

He predicts that by 2020, we'd have super-industrial grade computers capable of it.
>>
>>55180472
They aren't anything like Beowulf clusters. Beowulf clusters are, by definition, made of commodity machines. Modern supercomputers are all about the interconnect. They have some really exotic hardware and network topologies. They do not look much like commodity hardware.
>>
File: tinfoil.jpg (16 KB, 365x320)
>>55178436
Remember when that giant fireball shut down China's supercomputer?
>>
>>55180430
>LEADING BRAIN COMPUTER SCIENTISTS ESTIMATED IT SO IT MUST BE TRUE
lmao

Processing power isn't the problem, it's developing the software to run on it.
>>
>>55180494
That isn't the consensus, though. That is the opinion of one expert.

Besides, I think his prediction is pretty clearly wrong. He claimed we would be simulating full human brains by 2018, but we can't yet simulate the brain of C. elegans despite years of work.
http://www.artificialbrains.com/openworm
It would take some incredible breakthroughs to stay on his schedule.
>>
>>55180525
Of course, the interconnect is drastically different, but it is definitely much more commoditized than it used to be in the era of Cray vector systems or Thinking Machines.

What other "exotic" hardware are you talking about? Custom ASICs?
>>
>>55180613
Mostly the interconnect, but that's nothing to sneeze at. I don't see when you could call it a Beowulf cluster when the most important component is so exotic.
>>
File: 1466464331497.jpg (11 KB, 261x238)
>>55179013

first part of that computer looks like the old WTC
>>
>>55180579
The IBM/DARPA SyNAPSE project is a redesign of the processor to match the human brain. That's the expert in question. So he is working on exactly the "incredible breakthrough" that would let him stay on course with his prediction, not only in terms of raw power but also in terms of power efficiency.
>>
>>55179034
>IBM
IBM is chinks now.
>>
>>55180660
Just a bit to add. The SyNAPSE processor is supposed to be more than 1000x more efficient in compute/watt than current CPUs.
>>
Didn't we explode this last year?
>>
>>55180660
Is there any evidence this breakthrough is coming? It is nowhere to be seen. Simulating a human brain is millions of times harder than simulating the nervous system of C. Elegans. At the current pace of progress, it seems unlikely we'll be simulating C. Elegans by 2018. It is crazy to expect us to simulate entire human brains within the next 18 months.

And "millions of times harder" is conservative. C. elegans has 302 neurons compared to the ~90 billion of a human, but these neurons are far simpler than human neurons and are connected in simpler ways.
>>
>>55180493
>>55180557
But they have a better idea than most people and we are getting close and closer to simulating things like flies and birds.

As of right now, simulating C. elegans, which has only 302 neurons, takes about 5 TFLOPS

>Estimates of computational complexity
>• Mechanical model
>– ~5 Tflops
>• Muscle / Neuronal conductance model
>– ~240 Gflops

https://www.neuroml.org/files/OpenWormLondon.pdf

So, scaling that linearly, 100 billion neurons would take about ~1.5 million PFLOPS to simulate. His estimate was shit but better than what most people could come up with. And maybe the reason neurons take so much to simulate is that current software is shit, so maybe one day simulating a human brain will require far less computing power.

Open your eyes nigga, the singularity is gonna happen, it's not a question of if but when now. Current predictions are ~2050.
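Scaling the OpenWorm figures linearly, which the replies rightly question, works out as follows (the neuron counts are real; the linearity is the shaky assumption):

```python
worm_neurons = 302      # C. elegans
human_neurons = 90e9    # ~90 billion in a human
worm_tflops = 5.0       # OpenWorm mechanical-model estimate

# naive linear extrapolation, ignoring that human neurons and their
# wiring are vastly more complex than a nematode's
human_tflops = worm_tflops * human_neurons / worm_neurons
human_pflops = human_tflops / 1000

print(f"~{human_pflops:,.0f} PFLOPS")  # ~1.5 million PFLOPS
```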
>>
>>55180793
SyNAPSE's processor (the size of an i5/i7) had 1 million neurons on ~4000 cores; that was I think 2012/2013 if I remember right. They were talking about scaling those neurons up around that time.

So by 2016, I'm pretty sure they've made some more advances.
>>
>>55180793
http://www.artificialbrains.com/darpa-synapse-program#multi-core-neurosynaptic-chip
>>
>>55180881
>>55180793
Also looks like right now they're in the process of creating a 100-million-neuron system, about the number of neurons a house pet like a cat or dog has. That should be done within this year or next year.
>>
>>55180797
>But they have a better idea than most people
Certainly! Which is why I am interested in the scientific consensus, not some "researchers" cited by a transhumanist magazine. "Researchers" is a weasel word.

>So basically we now know that every 100 billion neurons takes about ~1,500 PFLOPS to simulate.
You're on a tech board. Surely you realize that performance does not usually scale linearly.

And that's assuming that a single worm's neuron is the same as a human neuron, that they connect in roughly the same way, etc.

Even if you were right, we still have not simulated C. Elegans. The 240 GFLOPS number is speculation.

>>55180857
Those neurons are not anything like a human's neurons. They cannot even vaguely be compared.
>>
>>55180962
IBM isn't going to have blood running through their chips.
>>
>>55180881
These are digital cores with fixed interconnect. They are not even vaguely comparable to the spiking neurons with constantly changing synapses in the human brain.

That is a chip inspired by the brain. It is not intended to simulate the brain.
>>
>>55180977
Of course not, since the chips are not intended to simulate the brain.
>>
>>55180996
>>55181007
It's intended for artificial intelligence, aka robotic brains that work similarly to how a human brain would.

If you read the papers, you'd understand the concept behind the synaptic chips is actually based on human brains. They're not a 1:1 perfect human brain replica. They use an abstract, practical format based on what we know about how a brain works.

>http://www.modha.org/C2S2/2009/11182009/content/SC09_TheCatIsOutofTheBag.pdf

This is one of the potential venues for state of the art AI/brain simulations.
>>
>>55181104
Correct. Which is why they're 100% irrelevant. We are discussing simulating the human brain, not building smarter AI.
>>
File: 1461236125111.gif (3 MB, 320x180)
>>55178912
Because GPUs work differently. You might have heard some shit like "graphics cards have 1000 cores".
That's nothing but a marketing number. Most GPUs don't have more than 10 cores, each with dozens or hundreds of "logical cores".

These logical cores can't execute different instructions at the same time, e.g. physical cores can do 2 different things at the same time, logical cores (which share a physical core) cannot. You can compare it to SIMD instructions on steroids.

That's why if statements in your shader code are so bad for performance. If even one logical core takes a different branch from all the others, ALL logical cores have to take both branches.

They can work on different sets of data, though. So GPUs are only useful if you want to execute the same set of instructions over a large dataset.
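The branch-divergence rule described above can be sketched as a toy SIMT model: the warp shares one instruction pointer, so when lanes disagree on a branch, both paths execute and a mask selects whose results are kept. The cycle costs below are invented for illustration:

```python
def warp_cycles(data):
    """Toy SIMT warp: lanes share one instruction pointer, so if ANY
    lane takes a branch path, every lane pays that path's cycles."""
    take_then = [x % 2 == 0 for x in data]  # per-lane branch condition
    cycles = 0
    if any(take_then):        # then-path runs for the whole warp
        cycles += 3           # pretend this path costs 3 cycles
    if not all(take_then):    # else-path likewise
        cycles += 5           # pretend this path costs 5 cycles
    return cycles

uniform = warp_cycles([2, 4, 6, 8])    # all lanes agree: 3 cycles
divergent = warp_cycles([1, 2, 3, 4])  # split warp: 3 + 5 = 8 cycles
print(uniform, divergent)  # 3 8
```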
>>
>>55181120
This is intended as a platform for both.

So it's 200% more relevant than your speculation, which has no practical or theoretical framework. This has the practical framework down; theoretical framework development might be in the works.

TL;DR my facts > your opinions
>>
>>55180962
>we still have not simulated C. Elegans
breh

https://www.youtube.com/watch?v=SaovWiZJUWY
>>
>>55179624
Here's an idea if the US is on a tight budget.

http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/
>>
>>55181189
Genius. 10/10. You should apply for DoE lab.
>>
>>55181144
>This is intended as a platform for both.
False.
https://www.research.ibm.com/articles/brain-chip.shtml
>Let’s be clear: we have not built the brain, or any brain. We have built a computer that is inspired by the brain.

>>55181144
>This simulation does not yet incorporate electrophysiology of muscles and all muscle contraction is precisely pre-determined by an analytic formula found in this program.
This is a great physics simulation, but the neurological simulations of OpenWorm are still incomplete.
>>
File: 1434984457050.jpg (5 KB, 204x197)
>>55179852
But it's the biggest, brah. Also they are not afraid to build nuclear so their supercomputers don't even need to be as efficient.
>>
Waiting for Summit & Sierra.
>>
>>55181303
Now this is just pathetic. This semantic argument is too low for me to go on. If you can't see why this processor is built, what its current uses are, what its future uses might be, what the direction of the future might be, then you're a fucking retard. Not just a regular run-of-the-mill retard, but the number #1 in a class full of retards.

Stop with your retarded shit. This is not only embarrassing, it's just plain dumb.
>>
Why the FUCK is simulating pic related so fucking hard?

We have simulated atoms, atomic and quantum shit yet we can't simulate a single mouthbreather even with the fastest supercomputer?

What gives?
>>
File: 1464830541760.webm (3 MB, 1080x606)
>>55178436
but can it do this?
>>
>>55181370
>If you can't see why this processor is build, what its current uses are, what its future uses might be
According to IBM, it's intended to "revolutionize systems architecture," to "map the existing body of neural network algorithms to the architecture in an efficient manner," to let people "imagine and invent entirely new algorithms," with the end goal of "building intelligent business machines that enable a cognitive planet, while transforming industries."

In IBM's words:
>If we think of today’s von Neumann computers as akin to the “left-brain”—fast, symbolic, number-crunching calculators, then TrueNorth can be likened to the “right-brain”—slow, sensory, pattern recognizing machines.
It is NOT intended to simulate the brain. End of discussion. It's intended for AI, not for simulations. Simulations take number crunching, not pattern recognition.
>>
>>55181386
Brains are complicated. We have precise mathematical models of quantum phenomena. We have no such models of the brain.
>>
>>55180265
Shut the fuck up you stupid idiot. Even with that in mind, the Rx 480 still outpaces everything else out there in efficiency.
>>
>>55181477
>the Rx 480 still outpaces everything else out there in efficiency.
Not him but why are we still not using GPUs for ray tracing in vydias yet? I'm aware they can do fp64 ops like the ones required when ray tracing. Seems like we now have a fuckload of gpu power, why are we not using it for ray tracing yet? All the newest vydias still use non ray-tracing methods of lighting, shadows, and whatnot.
>>
>>55181533
Because raytracing is expensive and the advantages it offers aren't that big.
>>
File: Glasses_800_edit.png (3 MB, 2048x1536)
>>55181606
>and the advantages it offers aren't that big.
bruh
>>
>>55179064
>you take pride in what your country built
>while you are NEETing around
>>
>>55180099
GPUs are used in numerical calculations. They are essentially really good parallel arithmetic units, but they're bad at branchy logic. It's harder to design a program to make use of the parallel nature of the GPU. For example you can do matrix multiplication on a GPU where each GPU core calculates an entry of the product in parallel. A CPU would have to do it sequentially.
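The matrix-multiplication example is the canonical case: every output entry is an independent dot product, exactly the same-instructions-different-data shape a GPU wants. A plain-Python sketch of the per-entry kernel:

```python
def matmul(a, b):
    """C[i][j] = dot(row i of a, column j of b). On a GPU each (i, j)
    pair would be an independent thread; here we just loop over them."""
    n, p, m = len(a), len(b), len(b[0])
    assert len(a[0]) == p, "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

All n*m entries can be computed simultaneously because none depends on another, which is why this maps so well onto thousands of GPU lanes.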
>>
>>55181648
Most animated movies use raytracing sparingly. For example, all Pixar movies use Reyes rendering for most scenes and only occasionally use raytracing.
https://en.wikipedia.org/wiki/Reyes_rendering

Raytracing is not mandatory for great graphics. Movies get by without it, which means games definitely can.
>>
>>55181533
It's a chicken-and-egg problem I think. Game developers need to target the hardware, which is actually quite shit at ray tracing but really good at rasterisation and texturing. So what they do is make giant meshes and texture them. Meanwhile the hardware manufacturers know the game dev workflow is to make a mesh and texture it, so to beat the competition in frames per second they build an even more powerful rasterizer/texturer.

Also >>>/3/, there is a whole industry of people and tools built around this mesh-and-texture workflow; I imagine they'd have to learn some new things also.
>>
>>55181648
Looks like cheesy, overdone shit desu.
>>
How long until China is making desktop CPUs to rival Intel's?
>>
File: tfw.jpg (5 KB, 205x246)
>tfw fell for the 93 petaflops meme
>>
>>55182042
Maybe when hell freezes over. These primitive RISC chips are light years behind current xeon-d processors.
>>
>>55182131
So they just brute forced it like they are AMD or something?
>>
>>55182159
The chinese? Yeah pretty much but this always results in really bad performance per watt.

Intel learned its lesson with their xeon phi coprocessors that had almost a 300W TDP. It's better to have 16 skylake xeon cores than 72 super shitty atom cores. That's why they're asking like $2,000 for each xeon phi coprocessor, they know very few want one.
>>
>>55180232
swell with starvation*
>>
File: model.png (83 KB, 275x191)
>>
File: AMD K6.png (172 KB, 1387x958)
>>55180340
No it's actually not, x86 has had a RISC heart for years.
>>
>>55179120
USA pledged billions to support replacing our existing supercomputers with designs aimed at reaching Exaflop status by 2020.
>>
>>55180494
>humans dont do FP ops
Sure we do comparable calculations. It's just mostly motor-limbic and resides outside of your thought processes.
Try throwing a ball sometime.
>>
>>55178436
the thing is abysmal at doing actual computing and is only built to win linpack

Computer     Linpack (PFLOPS)   HPCG (PFLOPS)   HPCG/Linpack
Tianhe-2          33.863            0.5800          1.71%
RIKEN/K           10.510            0.4608          4.38%
Titan             17.590            0.3223          1.83%
Trinity            8.101            0.1826          2.25%
Mira               8.587            0.1670          1.94%
Hazel Hen          5.640            0.1380          2.44%
Pleiades           4.089            0.1319          3.23%
Piz Daint          6.271            0.1246          1.99%
Shaheen II         5.537            0.1139          2.06%
Stampede           5.168            0.0968          1.87%

this thing        93                0.371           0.3%
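The last column is just HPCG divided by the Linpack number. Recomputing a few rows (note the bottom row: 0.371/93 is nearer 0.4%; a 0.3% figure matches dividing by TaihuLight's ~125 PFLOPS theoretical peak instead):

```python
# (name, Linpack PFLOPS, HPCG PFLOPS), copied from the table above
systems = [
    ("Tianhe-2",   33.863, 0.5800),
    ("Titan",      17.590, 0.3223),
    ("TaihuLight", 93.000, 0.3710),
]

ratios = {name: 100 * hpcg / linpack for name, linpack, hpcg in systems}
for name, pct in ratios.items():
    # either way, HPCG efficiency is tiny compared to Linpack
    print(f"{name:11s} {pct:.2f}%")
# prints 1.71%, 1.83%, 0.40% respectively
```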
>>
>>55182942
I see the American SuperComputer Internet Defense Force, or ASCIDF, has finally come out to shill against the superior Chinese accomplishments
>>
>>55179238
Don't forget India, they have their designated shitting streets
>>
>>55182987
The Tianhe-2 is still the most powerful supercomputer for doing actual work. It's really sad that a machine with 3x its Linpack score is actually only about 2/3 as fast.
>>
>>55184116
like playing games in the 90s using a software renderer
>>
File: occ2.png (3 MB, 1498x1056)
>>55184183
I don't think it would anon

How do I do the math to compare it to the specs of modern games, as a casual /v/ pleb?
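A rough back-of-the-envelope in Python: the 4.6 TFLOPS figure below is an assumed single-precision number for a 2016-era gaming card, so it's apples-to-oranges against the machine's double-precision Linpack score, but it gives you the scale.

```python
# How many gaming GPUs does the new machine's Linpack number amount to?
# Caveat: the supercomputer figure is double precision, the GPU figure
# single precision -- this is a scale estimate, not a fair benchmark.
SUPERCOMPUTER_PFLOPS = 93     # Linpack score from the article
GPU_TFLOPS = 4.6              # assumed single-precision figure for one gaming card

gpus_equivalent = SUPERCOMPUTER_PFLOPS * 1000 / GPU_TFLOPS
print(f"~{gpus_equivalent:,.0f} GPUs")  # on the order of twenty thousand cards
```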
>>
>investing big money in outdated binary computers

Even the retarded government entity is using encryption that is impossible for traditional computers to break even if they had a googol years to process it.

Most challenging problems people want to solve now are NP, NP-complete, or NP-hard.

Doubling the petaflops makes fuck all difference. Guarantee you the US is investing in quantum computing instead of building outdated super computers.
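The exponential point is easy to make concrete (Python sketch; the one-key-trial-per-FLOP assumption is wildly optimistic in the attacker's favor):

```python
import math

# How many extra brute-forceable key bits does ~3x the FLOPS buy you?
# Optimistic assumption: one key trial per floating-point operation.
old_flops = 33.863e15   # Tianhe-2 Linpack
new_flops = 93e15       # the new machine

extra_bits = math.log2(new_flops / old_flops)
print(f"{extra_bits:.2f} extra key bits")  # ~1.46 bits -- barely anything

# Even a 128-bit keyspace stays hopelessly out of reach:
seconds = 2**128 / new_flops
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:.2e} years to exhaust 128 bits")
```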
>>
But can it render toy story in real time?
>>
>>55184357
They made a bootleg Cars movie before

http://www.cnn.com/2015/07/07/china/china-movie-disney-cars/
>>
>>55184357
It's not designated for cgi so no
>>
File: 1111112312.jpg (62 KB, 500x398)
What the fuck are supercomputers even used for?

Really intense particle simulation or something?
>>
>>55185169
Hello my friend.
>>
>>55178436
good, now have it calculate the last digit of pi
>>
>>55179120
>expect the first exaflop system to arrive in ~5 years ?
meanwhile my cpu from 2008 is clocked faster than new cpus
>mfw people are buying 1.x ghz cpu computers in 2016
>>
>>55185320
>2016
>people still fall for the Megahertz myth
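The myth, made concrete (Python; the clock/IPC/core numbers are hypothetical, chosen only to illustrate that throughput is roughly clock × IPC × cores, not clock alone):

```python
# The "megahertz myth": clock speed alone doesn't determine throughput.
# All figures below are hypothetical, for illustration only.
def throughput_gips(clock_ghz, ipc, cores):
    """Rough aggregate instruction throughput: clock * IPC * core count."""
    return clock_ghz * ipc * cores

old_cpu = throughput_gips(clock_ghz=3.2, ipc=1.0, cores=2)  # high-clock 2008 chip
new_cpu = throughput_gips(clock_ghz=1.8, ipc=3.0, cores=4)  # low-clock modern part

print(old_cpu, new_cpu)  # 6.4 vs 21.6 -- the "slower" 1.8 GHz chip wins
```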
>>
>>55181189
Kek
>>
How long is a supercomputer's useful life? 10 years? Given the scale of them, I'm sure the power/maintenance costs would eventually outweigh buying a cheaper, smaller system that's just as powerful.
>>
>>55185359
buttblasted netbook owner detected
>>
>>55185540
What the fuck are you on about?
>>
>>55185540
Wow.
>>
>>55178913
>The impressive part is that they figured out a way to connect 2.5X more nodes together than the fastest American supercomputer without having the between-node latency shoot out of control, bringing down the overall performance.
Tianhe already used chink-developed and -built interconnects; they've had the lead on that for a while now.
>>
>>55185169
to play Crysis at a silky-smooth 30 FPS
>>
>>55186658
Our eyes can't see more than 24fps.
>>
>>55180111
Are you dumber because you can't build a shelter? People who code now work at a higher level of thinking, just like architects don't build mud huts from scratch. It's advancement, not retardation.
>>
>>55186986
architects learn the basics of structures and proper construction methods and use them to build high-quality monuments to human achievement that will stand the test of time and do it well

architect.js abstracts the structure away, focusing only on making sure his mcmansion looks cool and hip to a casual observer as long as a slight gust of wind isn't hitting it at just the right angle to make it fall over like a house of cards
>>
>>55185169
The new chinese one is going to be used for atmosphere and climate simulations/calculations
>>
>>55180469
Or as I've recently taken to calling it petaflopss.
>>
>>55186878
and computers can't be "super", they're just bigger computers
>>
>>55186986
architects still need a good understanding of the engineering side, otherwise they might create impossible/unfeasible designs
>>
>>55187208
I mean, that's true less and less; computer simulations take on that load more and more. Eventually you'll just tell a computer what you want to build and it'll do the hard work.
>>
>>55179275
Fuck off with your driver faggotry. Everyone who doesn't have a need for the latest GTX 90000000000000000000 super gaymer grafix has no driver issues on modern Linux. And an AI sure as shit doesn't need the latest nvidia driver.
>>
>>55187075
If I may interject, what you actually mean is GNU/petaflops god bless
>>
>>55187245
>It's not a bug it's a feature!
>>
>>55180469
the first person also wrote it wrong, there's no "petaflop", only "petaflops"
>>
>>55178436
what will they use it for?
>inb4 posting memes on reddit