
You are currently reading a thread in /g/ - Technology

Thread replies: 25
Thread images: 1
File: P100_SXM2.jpg (44 KB, 468x268)
http://nvidianews.nvidia.com/news/nvidia-delivers-massive-performance-leap-for-deep-learning-hpc-applications-with-nvidia-tesla-p100-accelerators

>General availability for the Pascal-based NVIDIA Tesla P100 GPU accelerator in the new NVIDIA DGX-1™ deep learning system is in June.

http://nvidianews.nvidia.com/news/nvidia-launches-world-s-first-deep-learning-supercomputer

>General availability for the NVIDIA DGX-1 deep learning system in the United States is in June

Nvidia first to HBM2, beating AMD by over 6 months

IT'S OVER, AMD IS FINISHED & BANKRUPT
>>
>>53894096
>june
And there were cucks doubting that Pascal would come out in Q1-Q2.
>>
>>53894096
>Nvidia first to HBM2, beating AMD by over 6 months

Did you consider for a second who was first to HBM1?
>>
>>53894096
>no card
>no game
>we car nao
>pls pay $129,000 for dgx-1

pls no
>>
>>53894243
HBM1 IS IRRELEVANT

You'll be laughed at with 4GB when customers have massive datasets, far exceeding 4GB, to process.
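(A minimal sketch of what that 4GB ceiling means in practice, assuming the usual workaround of streaming the data through device memory in chunks; numpy stands in for the GPU buffer here and the chunk size is an arbitrary assumption, not anything from the announcement.)

import numpy as np

def reduce_in_chunks(dataset: np.ndarray, chunk_rows: int = 1_000_000) -> float:
    # The dataset is bigger than the card can hold, so stream it through
    # in slices that fit, reusing the same device-side buffer each time.
    total = 0.0
    for start in range(0, len(dataset), chunk_rows):
        chunk = dataset[start:start + chunk_rows]  # this slice is what would be copied to the GPU
        total += float(chunk.sum())                # stand-in for the on-device work
    return total

Every extra pass over host memory like this is time not spent computing, which is the whole argument for bigger HBM stacks.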
>>
Availability
General availability for the Pascal-based NVIDIA Tesla P100 GPU accelerator in the new NVIDIA DGX-1™ deep learning system is in June. It is also expected to be available beginning in early 2017 from leading server manufacturers.

Wow, it's fucking nothing
>>
So this is a special GPU, so if you run bots on it they gain +30% in learning?
>>
>>53894096
>>General availability for the NVIDIA DGX-1 deep learning system in the United States is in June
>Nvidia first to HBM2, beating AMD by over 6 months
>IT'S OVER, AMD IS FINISHED & BANKRUPT
wow that's amazing considering HBM2 hasn't even started production yet.
Nvidia: capable of literally fabricating future technology out of thin air.

Did you ever stop to consider that they meant June 2017? Or are you just that retarded?
>>
>>53894890
yeah but the owner gets a -5 charisma debuff
>>
>>53894887
Maybe they want to get a foothold in the server business themselves, so they withhold cards from other manufacturers.

If Huang isn't lying and the chip is in mass production already, then they should have way more chips than they can sell in DGX-1.
>>
>>53894938
Are you fucking retarded? Samsung has been mass producing HBM2 since JANUARY

https://news.samsung.com/global/samsung-begins-mass-producing-worlds-fastest-dram-based-on-newest-high-bandwidth-memory-hbm-interface

>January 19, 2016
>Samsung Electronics announced that it has begun mass producing the industry’s first 4-gigabyte (GB) DRAM package based on the second-generation High Bandwidth Memory (HBM2) interface, for use in high performance computing (HPC), advanced graphics and network systems, as well as enterprise servers.
>>
Volume production always precedes availability by half a year; they need to ramp up.
>>
>>53895028
Unless you're into paper launches, that is.
>>
>>53894259
>a lot of non gayming fp64 core
AGAIN, the announcement (Nvidia Tesla P100/GP100) has nothing to do with gaming cards
>>
>>53895212
Except Tesla cards are always based on the desktop chips.
GP100 is the next Titan.
>>
>>53895341
But then if we aren't seeing these server chips until 2017, that means Nvidia will be releasing desktop variants 2-6 months after that. Holy fuck, that's far away.
>>
>>53895481
Jewvidia is DEAD
>>
>>53895481
Not really. The first Kepler cards were released almost a year earlier than the Titan.
Neither the 980 Ti nor the Fury X will get a replacement this year, but the lower-end cards will get replaced.
>>
Has anyone considered that Nvidia might think traditional raster-based rendering is on its way out? They have been spending a lot of time on GPU raytracing over the last few years. I suspect they plan to use compute functions for alternative rendering techniques. Once GPUs get 32 GB of VRAM, voxels may become very appealing to use. You already see mild use in some GI solutions.
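(Rough numbers behind that 32 GB figure, as a sketch only; the grid resolutions and 4 bytes per voxel below are illustrative assumptions, not anything Nvidia has stated.)

def voxel_grid_bytes(resolution: int, bytes_per_voxel: int = 4) -> int:
    # Dense cubic grid: resolution^3 voxels, a few bytes of payload each.
    return resolution ** 3 * bytes_per_voxel

for res in (512, 1024, 2048):
    print(f"{res}^3 voxels at 4 B each ~ {voxel_grid_bytes(res) / 2**30:.1f} GiB")

# 512^3  ~  0.5 GiB - fits on today's cards
# 1024^3 ~  4.0 GiB - already fills a 4GB card
# 2048^3 ~ 32.0 GiB - roughly where a 32 GB card would make dense voxels viable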
>>
If AMD goes down, who will compete with Nvidia?
>>
>>53894096
>$129,000.00 system available soon.
>It is over.
>Competition is finished.
>And bankrupt.
>>
i can hardly wait for the latest generation to go on sale.
>>
>>53894096
Thanks for reminding me that I needed to set up my 'FINISHED & BANKRUPT' filter again to avoid shitty threads like this.
>>
>>53896087
The only people bankrupt are the people buying nvidia
>>
>>53895014
>Are you fucking retarded? Samsung has been mass producing HBM2 since JANUARY

Well, no shit, retard; it takes a little longer to integrate it into a GPU.
