Polaris Vs Pascal

You are currently reading a thread in /g/ - Technology

Thread replies: 255
Thread images: 35
File: PvP.png (465 KB, 1976x1382)
Which one is better?
>>
>>53812186
Why would you compare two tables and cover the important shit in fuckoff huge text
>>
>>53812186
We won't know for certain until they're both released, of course. However, from what we do know, Polaris is a new arch, whereas Pascal isn't. Pascal is a refresh of Maxwell, but now on 16nm FinFET.
Polaris obviously takes design principles from previous generations, but it's been designed specifically for 14nm FinFET.
The exception to this would be Vega, which, from what we've been told, is expected to be Fury X/air/nano on the 14nm process they were supposedly always meant to be on.
So Vega looks like it's more Pascal-esque in that it'll be a refresh, but there will be some hardware differences between Vega and Fiji. How big will those differences be? Probably only more transistors and something else which doesn't take a whole redesign.
>>
>>53812206
Because he is an AMD shill.
>>
>>53812186
Pascal
>>
>>53812206
because there isn't useful information there
>>
Polaris/Vega is more efficient and more advanced, while Pascal has MOAR COARS as a stopgap measure until their Volta cards.

Price:performance ratio is better on Polaris, and so is power efficiency.

Pascal will have absurdly huge GPU dies with shaders packed into them to make up for their shitty arch.

Pascal will be faster on the desktop but more expensive and a housefire

AMD polaris will be better on laptops

low end Vega will have HBM and Pascal will be forced to use a huge 512-bit GDDR5/X layout, which means AMD will have smaller Kuaii cards at the high end for ultra powerful Steam Boxes to pair with a 4K TV

AMD is also implementing freesync over HDMI so freesync 4K televisions are in the works as we speak
>>
>>53812283
I'm concerned about the efficacy of Samsung's 14nm process, with the recent comparison to the 16nm process and how Samsung's came up the loser.

Perhaps it won't matter when applied to this entirely different product. Maybe it'll be worse. If someone has some educated thoughts on the possibilities, I'd love to hear it.
>>
>>53812366
Comparing LPP vs LPE processes.
>>
Neither, because we have no real information from either camp. AMD hasn't said much and Nvidia has only talked about deep learning.
>>
>>53812363
You don't know anything about Pascal to back up what you've said.
>>
>>53812366

Samsung's 14nm process is 9% smaller than 16nm. The "recent comparison" has nothing to do with the Samsung node.
>>
>>53812366

Even if the power leakage thing were real (which it isn't), that's on an ultra low power part for a smartphone (less than 5 watts). That's a completely different ball game from laptops and desktops, where the power envelope is 50 to 300 watts.
>>
>>53812455
actually I do and to be quite frank, you sir, are a retarded nigger faggot
>>
>>53812366
Honestly, again, at this point I don't think anybody can make any educated guesses. Speculation? Oh sure, absolutely. However, like you yourself mention, we have no idea if the efficiency issue will come up at all once it gets to real world, mass scale products.
Until we get those products into the hands of consumers - not tech reviewers, consumers, we will not know for certain how each process - and architecture - handles the challenges thrown at it.

I say in the hands of consumers rather than reviewers because while reviewers may often have more scientific tools to measure power draw and the like; they often don't have the time or enthusiasm to do an extensive, all options inclusive series of tests.
They often go for settings they're told to use or settings which intentionally push the card past its comfort zone to see how it handles when you throw the absolute maximum possible at it.

This might sound like a fair test; and in regards to seeing how ALL cards fare against one another, they're right. However, it doesn't take into consideration real world examples of how cards are used (NOBODY buys a GTX 950 and expects to play AAA games @ 4k~60fps and 16xMSAA, for example).
So yeah. Until consumers get cards in their hands and put the cards to their intended use; until consumers are trying all sorts of combinations out for settings and getting real world examples of how each card handles things; including in power efficiency, then I don't think anyone is at liberty to try and say X will be better than Y.
If they do, chances are they're just a fanboy.

AMD might come out better, Nvidia might come out better. Who knows at this point.
>>
>>53812410
So intel integrated?
>>
>>53812335
>curry
>recommends pascal
>>
File: poo in the loo nvidia.png (99 KB, 970x1110)
>>53813766
It's confirmed. Nvidiots are poo in the loo.
>>
Apparently Polaris was able to play Hitman maxed out at 1440p at 60 fps. They also did it while deliberately gimping its clock to 800 MHz. This is quite big, but I believe both Pascal and Polaris will be fairly equal, with nothing more than a 5-10 fps difference between the two.
>>
>>53814801
Nice maymay
>>
>>53814842
Even nicer; check the file dimensions. It wasn't even intentional. I didn't notice until after I'd posted it.
>>
>>53814801
Test
>>
Only 2 more days to Pascal. Really hyped.
>>
>>53814992
really?
>>
>>53812186
Does anyone know roughly what performance class the low end Polaris chip (Polaris 11?) will occupy?

Will it be faster than a 7870?
>>
>>53815141
Somewhere around 960 performance.
>>
>>53815141

Roided out core with a 128-bit memory bus, but bandwidth will be 15% greater than the bandwidth on the GTX 960 because of better memory clocks.
>>
>>53815141
Absolutely. From how it seems, they're looking to move past having dedicated GPUs which are outclassed by APUs. There's just not enough market to develop new architectures for such a pitiful performance bracket. No doubt they'll be remarketing any remaining stock of anything weaker than a 7870 as the stuff which takes that place.
Hell, I wouldn't expect them to be producing anything weaker than a 7950 even.

With the advantages in efficiency from the new process node and architecture advancements, they could EASILY put the performance of a 7950 onto something which might only need 1x 4-pin, or perhaps not even that.
>>
>>53815189
Cool, what about price? Any leak on that yet?

If I can get something which is at least 50% faster than my 7870 for under 300 AUD I'll be happy.
>>
>>53815233
you can get that now if you weren't in ausland
>>
Honestly tempted to get AMD when Polaris is out compared to getting an Nvidia, not sure how I feel about Pascal at the moment.
>>
>>53815233
Price is also expected to drop. I can't cite any sources nor can I announce prices, but I'm absolutely certain I remember an AMD marketing person (or perhaps even Raaj) saying that due to the node shrink, the dies of each chip will be smaller; meaning more chips per wafer, meaning lower costs for end customers as there's a better yield per wafer.
Pretty sure Nvidia said the same thing too, actually. That the node shrink ought to bring down prices as a natural part of the process.
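The "smaller dies, more chips per wafer, lower cost" reasoning can be sketched with the usual dies-per-wafer approximation. This is a toy model: the 300 mm wafer and the two die areas below are illustrative assumptions, not quoted AMD/Nvidia figures.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Common first-order estimate: wafer area over die area, minus an
    edge-loss term. Ignores scribe lines and defect yield."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Illustrative: a shrink that cuts a 438 mm^2 die to ~230 mm^2 on a 300 mm wafer
big_die = dies_per_wafer(300, 438)     # ~129 candidate dies
small_die = dies_per_wafer(300, 230)   # ~263 candidate dies
```

Since wafer cost is roughly fixed, per-chip cost falls in proportion to candidate dies per wafer, which is the claim both companies were making.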
>>
>>53812186
Hard to choose, as my true name is Pascale Polaris.
>>
>>53815883
No, It's Annie Khan
>>
I like CS go

I really badly badly badly want a rival 300 Fade <3
>>
>buying nvidia after maxlel
>buying nvidia after kekler
>buying nvidia after thermi
>buying nvidia ever
>>
>>53817008
Honestly, as much as I hate it, I only buy nvidia and Intel. AMD just has never released anything worth buying. Their motherboards are fucking garbage, their cpus aren't as fast as Intel either. Their gpus are the best for 4k, but I don't use 4k so it's useless to me. I wish AMD could actually release something good for once.
>>
pisscal

:D
>>
File: well shilled.png (186 KB, 602x334)
>>53817483
>Their gpus are the best for 4k,

AMD is actually completely nonviable for 4k gaming currently.
>>
>>53817651
4k is really bandwidth starved and the R9 Fury uses HBM, which is better for 4k. The difference is very marginal though; benchmarks showed that the R9 Fury only gets like 5 frames more at 4k. That's barely noticeable.
>>
>>53817697

bandwidth doesn't mean shit when you only have 4GB (it causes constant stuttering when gaming at 4k). also, fiji's HBM does not have higher bandwidth at the clock speeds it ships with. it's about 256gb/s at 500mhz, which is right on par with the gddr5 most cards ship with.
>>
>>53817757
Anything above 4GB is overkill from my personal experience. I don't know what kind of work you do which requires 8GB of VRAM; I don't think any game ever even comes close to using that amount. In my opinion, speed > amount. AMD's answer to everything was always to just slap an obscene amount of cores/VRAM on it to compensate for the bad quality of their architectures. It seems that they aren't doing that anymore and are trying to focus on speed now by redesigning their architecture and making HBM2. Both of these are steps in the right direction in my opinion.
>>
>>53817483
you are just a shill then
>>
File: 1433953992232.jpg (26 KB, 480x542)
>>53818386
>if you don't buy everything amd you're a shill
>>
File: 1459378343336.jpg (61 KB, 480x479)
>>53812186
Why so hyped? They will be the same shit again. Man you manchildren. Fanbois and fucknuts.
>>
>>53818549
Pretty much. This first gen of cards from both sides won't even be much better than the current generation. The new cards will mostly be tuned for better power consumption and heat output instead of raw performance. It won't be till like late 2017/18 that we actually see big performance improvements, as both companies optimize for the smaller process node.
>>
>>53818728
>instead of raw performance

But you're wrong. Both Nvidia and AMD said that their new cards would have at least a 50% improvement in performance over the current gen. I don't know if those are just blanket marketing statements though. Guess we'll just have to wait for release to see some benchmarks.
>>
>>53818549
>anime picture
>calls others manchildren
>>
File: 1451286870089.jpg (51 KB, 600x600)
>>53818833
That's the only picture of Maki I have. Her smoking with a careless look like that makes those manchildren who love her so much feel more intimidated.
>>
File: lel.gif (424 KB, 250x250)
>>53818877
>Raquel Roach
>>
>>53812186
>AMD coming this year with anything besides bankruptcy
>>
File: herp derp.png (8 KB, 1235x717)
>>53812186
>>
File: 45151.png (46 KB, 550x550)
>>53818775
>Both nvidia and AMD said that their new cards would at least have a 50% improvement in performance over the current gen

kek, anyone who thinks that will happen is obviously falling for AMD's shitty shill marketing or hasn't been around long enough to know that node shrinks never brought 50% performance increases.
>>
>>53820304
Node shrinks on CPU's rarely do, but node shrinks on GPU's pretty much always do.
>>
>>53812500
Prove it, do you have a single fact to back that up?
>>
>>53814801
Actually Nvidia got a ton of those H-1B visas not long ago, which explains why their drivers are degrading (poos in the loos). I personally have no problems though.
>>
>>53820339


node shrinks on GPUs have never brought 50% performance improvements. most of the exponential performance increase we saw from 2000-2009 was from increasing die size.
>>
>>53820439
> node shrinks on GPUs have never brought 50% performance improvements.
Bullshit, they pretty much always have.
>>
>>53820496
>Bullshit, they pretty much always have.

are you retarded? just see >>53820304, where the first 28nm GPUs of equal die size to previous 40nm GPUs did not perform 50% better. go even further back to stuff like the 65nm gtx280 vs the 40nm gtx480 and the story is the same, the performance increase was only about ~20%.
>>
>>53818775

no you idiot.

they said 50% more efficient. meaning they will do the same job at half the watt usage.

which is still a fucking amazing thing.

meaning we get fury nano performance that consumes 100w which is just insane!
>>
>>53820619
Then I got memed on because I read 3 articles which said that Polaris will be higher performance.
>>
>>53820553
> posts only a single game where the 7970 in fact did get close to 50% more performance
>>
File: 1448470439428.jpg (9 KB, 225x225)
>>53820619

50% more efficiency? yeah, you fell for AMD's marketing.
>>
>>53820553
>go even further back to stuff like the 65nm gtx280 vs the 40nm gtx480 and the story is the same
I remember the 5870 being about twice as fast as the 4870.
The GTX 480 was just terribly executed.
>>
>>53820648

the image was meant to be a comparison between the 680 and the 580, but the 7970 vs 6970 is still pretty much the same ratio.

65 / 46.3 = 1.40...
49.8 / 36.4 = 1.368...

AMD actually gained less from 40nm -> 28nm than NVIDIA did, and NVIDIA was moving from the fucking housefire Thermi architecture. We haven't seen true >50% exponential growth ever since the 200 series, when NVIDIA stopped increasing die size once they hit 600mm2 on the flagship cards.
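The two ratios in the post above check out as plain arithmetic; this just reproduces the poster's own quoted fps figures, not independent benchmark data:

```python
# GTX 680 over GTX 580, and HD 7970 over HD 6970, from the quoted averages
nvidia_gain = 65 / 46.3     # ~1.40x
amd_gain = 49.8 / 36.4      # ~1.37x

# Neither reaches the 1.5x that a "node shrinks always give 50%" claim
# needs, and AMD's 40nm -> 28nm gain is indeed slightly smaller than Nvidia's.
shortfall = 1.5 - max(nvidia_gain, amd_gain)
```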
>>
>>53820655
They literally demoed a Polaris card playing Hitman at max settings at 1440p, at 800 MHz. There's video proof of it as well.
>>
>>53820696

what evidence do you have that they demoed an actual polaris card? we never saw the actual card, just some shitty gameplay footage. could've easily been scripted footage or just a conventional 28nm card running in that PC. i'm skeptical of any marketing statements from AMD or NVIDIA after the many fakes they've shown (fury x2 last august, nvidia woodscrews, etc)
>>
>>53820694
Once again, that's a single game.

Also, the GCN architecture benefited greatly from improved drivers over time. The 6970 already had drivers optimised for it.

Also, the 680 was not the top tier card of that generation, and had a smaller die area. The logical comparison would have been the Titan.
>>
>>53820722
I have none. However, I'm willing to believe that both Pascal and Polaris will be better than anything we have currently. Not trying to shill or anything, I just have high hopes because both,companies put a lot of new features on their cards.
>>
>>53820694
GTX 680:
Die area = 294mm2
TDP = 194W

GTX 580:
Die area = 520mm2
TDP = 244W
>>
File: 45124.png (45 KB, 550x550)
>>53820728
>Once again, that's a single game.
>Also, the GCN architecture benefited greatly from improved drivers over time. The 6970 already had drivers optimised for it.

>he actually tries to use this weak AMD marketing bait

>Also, the 680 was not the top tier card of that generation, and had a smaller die area. The logical comparison would have been the Titan.

nigga wat? the titan didn't come out until a full year later with the 700 series, the 680 was the flagship.

>and had a smaller die area.

the point was to show two cards with the same die size to disprove your retarded claim that node shrinks bring inherent 50% performance increases. get a clue son.
>>
I'm just wondering how many pascals do I need in order to stream my 4k ultra hd anime collection. I think I'll go with 3 way sli.
>>
>>53820812
> nigga wat? the titan didn't come out until a full year later with the 700 series, the 680 was the flagship.
With half the die size and significantly lower power consumption.
And you want to only compare the performance, instead of the performance per watt.
>>
>>53820812
When you take into account the 194W TDP vs the 244W TDP, the 580 to 680 gives 76.5% better performance per watt according to your own cherrypicked example.
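The 76.5% figure can be reproduced from the numbers quoted upthread (65 vs 46.3 average fps, 194 W vs 244 W TDP); a quick sanity check using only the posters' own figures:

```python
# fps and TDP figures as quoted in the thread
gtx680 = {"fps": 65.0, "tdp_w": 194.0}
gtx580 = {"fps": 46.3, "tdp_w": 244.0}

# Performance per watt: fps divided by board power, then the ratio of the two
perf_per_watt_gain = (gtx680["fps"] / gtx680["tdp_w"]) / (gtx580["fps"] / gtx580["tdp_w"])
# ~1.77x, i.e. roughly the ~76.5% improvement claimed above
```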
>>
>>53820812
>he actually tries to use this weak AMD marketing bait
This isn't even about trying to promote AMD over Nvidia, you stupid git. The fact of the matter is, GCN did improve a lot with newer drivers. That's how they were able to keep up with Maxwell.
>>
I believe in AMD

https://www.youtube.com/watch?v=rf_3pn7rOfc
>>
>>53820852
you need at least 128
>>
>>53820922
CPU overhead, actually. AMD cards paired with slow budget CPUs are awfully slow; they only get ahead with strong Intel CPUs, and from the fact that console ports are coded with AMD architecture in mind.

Also, Nvidia drivers do give performance to their own cards. The previously released driver put the GTX 970 on par with the AMD 390 in Far Cry Primal; if you were paying attention, the 390 was crazy ahead of the 970 in this game, but now they are equal.

The issue is that barely anyone bothers to benchmark with new drivers. At best you have to rely on some YouTubers like https://www.youtube.com/user/DudeRandom84

Good dude, he uploads benchmarks for every driver or game update. Crazy amount of work he is doing.
>>
>>53821022
Intel CPU's have hardly improved much over the last several years.
>>
>>53821042
> Every new generation of Intel CPU for the last 7 years: 5% faster, 10% more expensive
>>
>>53821042
This is false.

They haven't improved enough to justify an upgrade, but they are stronger.

An i5 4460 is at the same performance level as a stock i5 2500K,
A good overclock puts a 2500K at the same level as a stock 4690K,
A Skylake i3 almost beats an i5 4460.
>>
>>53821067
> They haven't improved enough to justify an upgrade, but they are stronger.
...By like 5% every generation.
>>
>>53821081
Yes, which is enough to save AMD's ass.
>>
>>53821103
In what way? It's hardly enough to make a difference in AMD's GPU performance.
>>
>>53821081
>>53821124

https://www.youtube.com/watch?v=tyGjsGQ-RHU


>i3 almost beating an i5.


They don't become a lot stronger but they do become more efficient. Do you really think Intel just releases the same architecture over and over without improvements?
>>
>>53821067
>A i3 Skylake almost beats a i5 4460.
But a Skylake i3 beats a Skylake i5.

http://cpu.userbenchmark.com/Compare/Intel-Core-i3-6320-vs-Intel-Core-i5-6600/3535vs3514
>>
>>53821134
> They don't become a lot stronger bu they do become more efficient
Which is not the point here.
Someone claimed the performance increase of AMD's GPU's was due to the high CPU overheads, and Intel's CPU's getting better.
>>
>>53821141
Which is exactly my point, newer Intel CPU are more efficient.

>>53821162
Which is true.
http://www.overclock.net/t/1573982/amd-gpu-drivers-the-real-truth
>>
>>53821134
>1440p
>With a 3.5
>In a CPU comparison
If you create such a huge bottleneck it's no wonder both CPUs get the same FPS.
>>
>>53821173
> Which is true.
No it's not, because Intel's CPU's have hardly improved in performance. That is fact.
>>
>>53821182
>What is min FPS.


How do dumbfuckers like you manage to find this place?
>>
>>53821191
Well, you are wrong if you think that then. A quick example would be Fallout 4: ANY CPU BUT SKYLAKE will drop below 40 FPS.
>>
>>53821173
> Which is exactly my point, newer Intel CPU are more efficient.
No, it's not the point.
You wanted to claim that Intel's CPU's are getting faster and that's why AMD's GPU's are getting faster.
I pointed out that Intel's CPU's are not getting faster by any meaningful level. You then tried to make it about efficiency.
>>
AMD cpus always have been garbage. Looking at the specs of the new zen, it's going to have 16 cores and 32 threads. Typical AMD strategy when it comes to processors, just add more cores!!11 Bigger numbers win!!!

This is why Intel absolutely destroys AMD in this side of the market.
>>
>>53821210
See
>>53821208
>>
>>53821208
>Well you are wrong if you think that then
No, you're wrong if you don't.
>>
>>53821222
> They're faster because I say they are
That's not how this works.
>>
>>53821226
Why are you trying to argue like this?

It is well known that you need a Skylake CPU to stay above 40 FPS in Fallout 4 when walking into the cities with max shadows. Maybe you should browse more websites besides 4chan.
>>
So was HBM just a meme after all?
>>
>>53821201
The framerate is always determined by the slowest part.
In all games but GTA V and Crysis 3, the GPU was constantly near 100% usage, meaning the GPU was limiting rather than the CPU.

There is a reason why professional hardware reviews always do CPU benchmarks at the minimum resolution with a very strong GPU.
>>
>>53821290
>"When using DirectX 11, in situations where the game is under heavy load--for example in the larger hubs of the game--the individual cores may not be able to feed a fast GPU like an Nvidia GTX 980 or even Nvidia GTX 970 quick enough," he said. "This means the game may not hit the desired frame rate, requiring you to turn down settings that impact CPU performance. Even though the game can use all your CPU cores, the majority of the DirectX 11 related work is all happening on a single core. With DirectX 12 a lot of the work is spread over many cores, and the framerate of the game will run at can be much higher for the same settings."
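The recording/submission split described in that quote can be illustrated with a toy model. This is plain Python, not actual Direct3D 11/12 API calls: under the DX12-style model each worker records its own command list and only the final submit step is serialized. (In CPython the GIL means threads don't speed up pure-Python work; the point here is the work partitioning, not a real speedup.)

```python
from concurrent.futures import ThreadPoolExecutor

NUM_WORKERS = 4
frame_draws = list(range(10_000))   # stand-ins for one frame's draw calls

def record_command_list(draw_calls):
    # Stand-in for the per-draw CPU work (state setup, validation, etc.)
    return len(draw_calls)

# "DX11-style": one thread records every draw for the frame
single_thread_total = record_command_list(frame_draws)

# "DX12-style": recording is split across workers; only the submit
# (here, just summing the recorded lists) is serialized
chunks = [frame_draws[i::NUM_WORKERS] for i in range(NUM_WORKERS)]
with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
    multi_thread_total = sum(pool.map(record_command_list, chunks))
```

Both paths record the same total work; the difference is that the second never funnels all per-draw work through a single core.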
>>
>>53821246
> Why are you trying to argue like this?
It wasn't an argument. I just happen to know you're wrong and said so.
But if you really insist on being proven wrong, here:
http://www.gamersnexus.net/game-bench/2182-fallout-4-cpu-benchmark-huge-performance-difference
>>
>>53821299
Post an up-to-date benchmark.
>>
>>53821267
HBM is good technology, and the fact that AMD made it an open standard raises my respect for them even more.
>>
File: fallout-4-cpu-benchmark-1080-m.png (31 KB, 744x659)
>>53821313
Don't be stupid. Driver updates are not going to make Skylake suddenly better. That only happens with GPU's.
>>
File: fallout-4-cpu-benchmark-1440-u.png (31 KB, 744x659)
>>53821299
So he is right?

Newer i7 CPUs are faster.
>>
>>53821325
>what are microcode updates


Why do I bother coming to this place anymore? It's overrun with /v/ idiots.
>>
>>53821327
Your own graph shows the Skylake CPU literally two FPS faster on the 1% low test compared to Haswell, and 4FPS slower on the 0.1% low test.

I mean, are you actually retarded?
>>
>>53821337
When has a microcode update ever actually improved performance?
>>
>>53821347
Are you?

Maybe try visiting the Fallout 4 forums and see for yourself that in the cities you need a Skylake CPU. This is not something benchmarks are going to showcase; there is no built-in benchmark in Fallout 4.
>>
>>53821337
>>53821357
They haven't. I've only known them to decrease performance in specific cases where a bug needs to be fixed (like the TLB bug on the Phenom I's)
>>
>>53821357
Latest BIOS update for Skylake, for example.
>>
>>53821360
I showed you some benchmarks that proved you wrong. You showed me some benchmarks that proved yourself wrong.
I don't think I need to say anything more here.
>>
>AMD is not CPU limited
>somehow they pushed for Mantle and DX12 which solve all those problems
>but still AMD is not CPU limited


uh huh, if you're gonna shill at least stay consistent.
>>
>>53821373
Quit while you're behind. You're embarrassing yourself.
>>
>>53821386
That wasn't the argument, numbnuts. The argument was that CPU performance hasn't improved. Try to keep up.
>>
>CPUs aren't getting faster

mm facts seems to disagree.
https://www.youtube.com/watch?v=ocwwaVGUFtk
>>
>>53821400
CPU performance has mostly stayed the same since the releases of Sandy Bridge and Piledriver in 2011/2012.
>>
>>53821418
> A CPU with higher stock clock speeds beats a CPU with lower stock clock speeds in programs that aren't well threaded
> this means the architecture is an improvement

You're an actual idiot.
>>
>>53821451
No anon, you are the idiot, you have the proof in the video.
>>
>>53821457
No anon, you are the idiot. You don't even understand how to interpret results.
>>
>>53821469
You are obviously the one who can't.

Let me give you a pro tip: you can see even clearer results when comparing budget mid range GPUs.
>>
>>53821483
You're literally having to cherrypick the hell out of your benchmarks to try and show Skylake being marginally faster by like 5%, and you think you're proving me wrong somehow.
>>
>>53821502
>Cherrypick
>all of them popular newly released game

Anon...
>>
>>53821515
All of what? The one game you've shown me?
>>
https://www.youtube.com/watch?v=frNjT5R5XI4

Here you can see the 2500k below by almost 40FPS even with overclock.
>>
How about this one, where they talk about AMD midrange GPUs paired with budget CPUs performing worse than the Nvidia counterpart paired with the same budget CPU.
https://www.youtube.com/watch?v=fAVxmfNUuRs
>>
>>53821530
At 4:53, it's 53FPS vs 66FPS
You're obviously trying your hardest to cherrypick data to suit your needs.
>>
>>53821569
You are the one cherrypicking when the skylake gets ahead most of the time.
>>
>>53821569
>CPU is faster
>No it's not, see, it was lower in this game even though it is faster in 99% of the games.


Riiight.
>>
>>53821576
I skipped ahead to literally the first comparison I saw between them.
You might as well just admit you have a massive case of confirmation bias.
>>
>>53821208
Ehh, I run a 4690k & GTX 970 with a fuckload of mods and still have no issue whatsoever with frame drops... my refresh rate caps at 50 and I keep vsync on, but it stays at a solid 50 with no issue whatsoever, save maybe rendering new areas after long periods of travel; certainly nothing out of the ordinary or that wouldn't also occur on Skylake, I'm sure.
>>
>>53821591
Confirmation bias?

I'm being the most objective person in this thread I even gave you this >>53821173 so you could educate yourself.

There is a reason why pcbuilding recommends an Nvidia card when buying a budget PC: you need a high end CPU to make use of an AMD GPU, which obviously, if you're on a budget, you can't afford.
>>
File: 1457881083090.png (62 KB, 200x233)
>>53821422
No it hasn't. Core counts have increased exponentially. Single thread IPC hasn't changed much but multi threaded performance has massively increased with the die shrinks. GPUs are HIGHLY parallel processors and scale much better with die shrinks in the consumer market because of this.
>>
I just want Polaris 11 to be like a 290 but for the price of a 7850.
>>
>>53821607
I have a GTX 970 at 1519MHz core / 3801 memory with a [email protected] and I can call bullshit on you, unless you are using that mod that dynamically drops shadows.
>>
>>53821586
If you're going to try to distort the argument, then obviously you're incapable of having an intellectually honest discussion.
Skylake is like ~5% faster in cherrypicked cases.
Yes, it's faster. Technically. By a very small amount. Which is exactly what I said.

>>53821610
> Confirmation bias?
> I'm being the most objective person in this thread
No you're not. But that's the thing about confirmation bias, you probably legitimately think you're being objective, even though you're clearly not.
>>
>>53821644
>cherrypicked
>90% of the cases


Alright anon have it your way, stay in the wrong. I don't care.
>>
>>53821615
> Single thread IPC hasn't changed much
And most programs still don't take advantage of more CPU cores.
Maybe that's just because developers are lazy, but still.
>>
>>53821661
> 90% of the cases that you chose to specifically show, and still struggled to show more than about 5% difference against Haswell.
>>
>>53821681
But it is faster, isn't it?
>>
>>53821668
That is irrelevant to GPUs. GPUs are highly parallel and scale far better with die shrinks on the consumer market because of this.
>>
>>53821688
Which doesn't go nearly far enough to account for the improvements the GCN architecture has made over time.
There have been a number of driver releases that have made some massive jumps just from the previous version, with nothing else changed.
>>
>>53821681
AMD GPUs can only make 1.1m draw calls no matter what CPU you use, so YES, CPU overhead matters even if it is just 5% less.
>>
>>53821701
> AMD GPU's can only make 1.1m draw calls no matter what CPU you use so YES, CPU overhead matter even if it is just 5% less.
That's a prime example of a non-sequitur.
>>
>>53821719
That doesn't make it wrong though; any performance boost, even 5%, is a performance boost.
>>
>>53821733
> Does not makes it wrong though
Prove god doesn't exist.
>>
>>53821740
Tip your fedora somewhere else kid.
>>
>>53821733
a 5% performance gain is pretty close to what you would get from driver updates, so that seems fair.
>>
>>53821745
Don't try to squirm your way out of the fact that I threw your "logic" back at you.
>>
>>53821755
Wrap your kiddie "logic" and show it your ass.
>>
>>53821752
A single driver update version, maybe.
How many driver updates have there been with improved performance?
>>
>>53821757
It was *your* logic I used. That's the point.
>>
File: 02.png (9 KB, 454x378)
>>53821760
Not many.
>>
>>53821771
We're talking about GCN, not VLIW.
VLIW was already mature long before the 6870.
>>
File: 20.png (8 KB, 455x378)
>>53821771
>>
These /g/ autists trying to compare CPU scaling with die shrinks to GPU scaling are retarded.

Compare the performance of the 7970 vs the 290X. These are pretty much the same architecture but scaled up (7970 -> 290X):

Render output processors: 32 -> 64
Shading units: 2,048 -> 2,816
Texture mapping units: 128 -> 176
Compute units: 32 -> 44
Die size: 365 mm2 -> 438 mm2 (both 28nm)
Transistors: 4.31 billion -> 6.2 billion
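Reading the spec list above as scaling factors makes the point concrete; the figures are exactly those listed in the post:

```python
# (7970, 290X) pairs as listed in the post
specs = {
    "rops":           (32, 64),
    "shading_units":  (2048, 2816),
    "texture_units":  (128, 176),
    "compute_units":  (32, 44),
    "die_area_mm2":   (365, 438),
    "transistors_e9": (4.31, 6.2),
}
scaling = {name: new / old for name, (old, new) in specs.items()}
# Shaders, TMUs and CUs all scale by the same 1.375x while die area grows
# only ~1.2x: a denser layout on the same 28nm node, not a node shrink.
```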
>>
File: 11.png (10 KB, 454x378)
>>53821781
>>
Here's a single driver update giving ~5% performance improvements across the board:
http://wccftech.com/amd-radeon-software-performance-analysis-is-this-the-crimson-tide/2/
>>
>>53821796
>7970 vs the 290x
>These are pretty much the same architecture
>>
>>53821819
They are. 7970 = 280X
>>
>>53821819
And the Fiji arch is the same as Hawaii too, which is the same as Tahiti, right? :^)
>>
File: 1456681584195.png (223 KB, 644x580)
>>53821819
both are GCN 1.1
>>
>>53814801
>poo in the loo
>>
>>53821825
You said Hawaii is the same as Tahiti.
It's not.
>>53821838
Tahiti is first generation GCN
>>
http://www.eteknix.com/examining-amds-driver-progress-since-launch-drivers-r9-290x-hd-7970/all/1/

http://www.hardocp.com/article/2013/02/18/2012_amd_video_card_driver_performance_review/6
>>
File: api-overhead-290x.jpg (103 KB, 918x626)
Truth is, AMD GPUs only perform close to their competitors with high end processors, which if you could afford in the first place, you wouldn't be buying AMD.

Awful, AWFUL drivers, horribly optimized for modern titles, atrocious power efficiency (20~30% of Maxwell's) and tons of little nagging issues that will drive you towards Nvidia.
>>
File: Tahiti.jpg (913 KB, 1634x1581)
This is Tahiti.
>>
>>53821869
http://www.tomshardware.co.uk/radeon-r9-280x-double-dissipation-7970,news-45662.html
> The XFX Radeon R9 280X is a refreshed version HD 7970 GHz Edition and features the company's new Double Dissipation cooler.
>>
File: HawaiiDiagram.jpg (614 KB, 2000x1125)
>>53821899
This is Hawaii.
>>
>>53821903
Can you not even read what you wrote?

>>53821796
>7970 vs the 290x.
>290X
>>
>>53821916
Do you know what the word "architecture" means?
The 7970 has the same architecture as the 290X.
They are not the same GPU, because they have different numbers of shaders, etc. But the architecture is the same.

The 280X is, however, literally a rebrand of the 7970.
>>
>>53821360
Idiot.
>>
>>53821936
No. Go off to the naughty corner and sit there for 13 minutes - a minute per year of age.
You clearly have no fucking clue of what you're on about.
>>
>>53821948
dont call me that, retard.
>>
>>53821954
Cretin.
>>
>>53821895
Nvidia's drivers are dog shit right now, you know. I had a GTX 560 for years, and I recently bought a GTX 960 and I'm getting the first driver crashes I've ever seen from either vendor.

I usually attribute stuff like this to hardware incompatibilities rather than the drivers themselves, but I can't do that this time because every forum I look at has other people with current Nvidia cards reporting the exact same problem.
>>
>>53821954
> I have nothing of value to say, so I'll just call you 13 and hope it sticks
>>
>>53821964
Are you going to tell me that Maxwell is the same as Fermi?
>>
>>53821936
Another wannabe tech expert. Go back to /v/.
>>
>>53821936
Next you'll tell me Fiji is 3x Tahiti
>>
>>53821966
You have a bad card dude, RMA it.
>>
>>53821978
I'm going to tell you that GCN 1.1 is the same as GCN 1.1

Tahiti and Hawaii are chips, not architectures.
>>
>>53821997
>>
>>53821978
And now you're trying desperately to make a non-sequitur.
What has Maxwell not being Fermi got to do with this?

>>53821990
And you're literally just inventing an argument you want me to make out of thin air.

I can do that too:
You're going to tell me that Obama is a Muslim, therefore you're magically right.
>>
>GCN 1.0 and 1.1 are the same
>Hawaii and Tahiti aren't architectures

Oh I get it, it's still April Fools in some backwater part of the world still
>>
>>53822029
I bet you think "Atom" and "Xeon" are architectures too.
>>
>>53822015
Are you the same idiot who doesn't want to admit Intel's CPUs are getting better and AMD's GPUs need strong CPUs to be truly useful?

You're embarrassing yourself.
>>
>>53822048
They're product marketing names.
>>
>>53822054
Oh, you're the idiot who thinks drivers had nothing to do with performance increases in GCN cards over time. That explains it.
>>
>>53821986
>>53821990
retards
>>
>>53822069
See
>>53821895


AMD GPUs are limited to 1.1M draw calls per second in DX11; they are CPU limited
>>
File: 11835388.jpg (5 KB, 148x160)
>>53821997
>I'm going to tell you that GCN 1.1 is the same as GCN 1.1
HD 7970 = Tahiti XT = GCN 1.0
290X = Hawaii XT = GCN 1.1
Fucking dumbass. Get out of /g/.
>>
>>53821986
Are you just one of those random trolls passing by to throw petrol onto the fire, or are you actually that stupid?
>>
>>53822081
That's why there are millions of articles everywhere that announce driver updates and the performance improvements they bring over the previous version.
>>
>>53822067
No they're not. They're architectures. Take a Haswell Atom and a Haswell Xeon, for example. They have different numbers of cores, different amounts of cache, different memory controllers, etc. They're both Haswell, yes, but Atom and Xeon are the architectures.
>>
>>53822107
Anon, please pay attention: 1.1M draw calls, no more, no matter what.
All you can do is offload that to the CPU.
>>
i wonder if half the people here know what a fucking architecture is
7970 = 280x
290x = 280x with MOAR COARS, same architecture

these are cards made in 2012 that still get great driver support, now live under the 380/390 names, and still compete with nvidia's horseshit cards

meanwhile, 2013-released 700 series from nvidia get shit on with crippled or no driver support
>>
>>53822109
What?
Haswell is the architecture.
>>
>>53822109
"Haswell-WS"
"Haswell-EP"
"Haswell-EX"
"Broadwell-WS"
"Skylake-WS"

Those are architecture names; Xeon is just an umbrella marketing name for their professional CPUs, used since 1998
>>
File: 04335552.jpg (44 KB, 254x321)
Are we raided by /v/?

>Graphics Core Next "GCN 1.0" architecture (Southern Islands, HD 7000/Rx 200 Series)
>GCN 2nd Generation "GCN 1.1" architecture (Sea Islands, HD 7790 and Rx 290/260 Series)
>SAEM ARCHITECHTUER GUYS!
>>
>>53822122
> Anon, pay attention to [irrelevant detail] that I'm going to pretend is the only thing that is relevant while I ignore the actual relevant things you've pointed out
Go play in traffic.
>>
>>53822126
>290x = 280x with MOAR COARS, same architecture
>>
>>53822150
>irrelevant

So irrelevant that AMD had to make Mantle and push for DX12.
>>
>>53822131
Not really. There's the Pentium 2950M architecture, the i7-4770K architecture, the i5-4670K architecture, the Xeon E3-1280 v3 achitecture, etc. Just because they all use Haswell cores doesn't make them the same architecture.
>>
File: 1289330658631.jpg (32 KB, 400x400)
>>53822176
>>
>>53822167
The only thing you're doing by pretending that drivers don't offer performance improvements is make yourself look like a fucking idiot that doesn't want to admit he's wrong about anything.
>>
>>53822191
1.1m draw calls, no more.
>>
>>53822197
You might as well stick your fingers in your ears and shout "lalala, I can't hear you"
That's what you're doing, essentially.
>>
>>53822131
The architecture is x86-64
Haswell is the microarchitecture
>>
>>53822191
No one is claiming drivers don't offer performance updates; you're the one who assumed so. We are talking about how AMD GPUs only perform well when paired with a strong CPU, because of the heavy CPU overhead AMD's drivers suffer from.

Get the fuck out of here and learn to read and follow conversations.
>>
>>53822206
x86-64 is the instruction set
Haswell is the architecture
Xeon and i7 are product names that use the Haswell architecture
>>
>>53822158
>implying there's any real architecture difference between GCN 1.1 and 1.0
there's not even hdmi 2.0 support, for fuck's sake
290x came out in 2013, nothing has changed since the 7970 other than moar coars
go back to plebbit
>>
File: 1296762906375.jpg (31 KB, 526x300)
>>53822226
>>
>>53822176
you're a fucking retard
if i open one of those cpus up in cpu-z, it will say "FUCKING HASWELL CPU"
>>
>>53822217
> No one is claiming drivers don't offer performance updates
You've literally been trying to argue that the only thing making GCN cards faster is Intel's CPUs getting faster, and pretending drivers had nothing to do with it.

Fact: The drivers have had like 90% to do with it.
>>
File: 40704284.jpg (27 KB, 400x675)
>>53822126
>>53822226
>7970 = 280x
This is true but irrelevant to the discussion. Both GCN 1.0
>290x = 280x
This is wrong. 290X is GCN 1.1.
>GCN 1.1 was introduced with Radeon HD 7790 and is also found in Radeon HD 8770, R7 260/260X, R9 290/290X, R9 295X2, as well as Steamroller-based Desktop Kaveri APUs and Mobile Kaveri APUs and in the Puma-based "Beema" and "Mullins" APUs. It has multiple advantages over the original GCN, including AMD TrueAudio and a revised version of AMD's Powertune technology.

>GCN 1.1 introduced an entity called "Shader Engine" (SE). A Shader Engine comprises one geometry processor, up to 11 CUs (Hawaii chip), rasterizers, ROPs, and L1 cache. Not part of a Shader Engine is the Graphics Command Processor, the 8 ACEs, the L2 cache and memory controllers as well as the audio and video accelerators, the display controllers, the 2 DMA controllers and the PCIe interface.

>The A10-7850K "Kaveri" contains 8 CUs (compute units) and 8 Asynchronous Compute Engines (ACEs) for independent scheduling and work item dispatching.[16]

Educate yourself.
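The quoted Wikipedia numbers actually line up with the 290X specs posted earlier in the thread. A quick cross-check (assuming a fully enabled Hawaii chip with 4 Shader Engines, and the standard 64 shaders per GCN compute unit):

```python
# Hawaii (290X) organisation, per the quote above.
shader_engines = 4   # fully enabled Hawaii chip (assumption)
cus_per_se = 11      # "up to 11 CUs (Hawaii chip)"
shaders_per_cu = 64  # standard for every GCN generation

cus = shader_engines * cus_per_se
print(cus, cus * shaders_per_cu)  # 44 CUs, 2816 shaders -> matches the 290X
```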
>>
>>53822241
Alright anon, you win. The almighty know-it-all anon has disproved everyone, despite the facts and the long discussion about this subject.
>>
>>53822226
This is some pretty awful trolling.
>>
>>53822240
Because it uses Haswell cores.
>>
>>53822259
> despite facts
Don't go pretending you didn't get proven wrong by YOUR OWN sources.
>>
>>53822237
>>53822208
you keep posting that stuff from wikipedia but all you're listing is the "shader engine" which offers nothing new except for "more everything"
no new real features

can you even read? please learn how to use reaction images, you mong

the only REAL new architecture after the 2012 one is the nano series
>>
>>53822268
>>53821895
1.1m
>>
File: rimg3.jpg (38 KB, 432x288)
>>53822274
>>
>>53822274
>nano
i mean fury
>>
>>53822279
Millions of articles benchmarking driver updates.
>>
>>53822289
Yes but still 1.1m on DX11.

Also stop trying to shift to "but you said drivers don't offer performance", because that was never the point; you're the one who assumed so. Mr. Autism, learn to follow conversations.
>>
>>53822310
Benchmarks on driver updates.
There is literally no counter argument to that.
>>
File: 04881143.jpg (53 KB, 446x400)
>>53822274
>the only REAL new architecture after te 2012 one is fury series
AYYY.
>>
>>53822320
Again with "but you said driver updates don't provide performance".


You really do have autism, don't you?
>>
>>53822322
it's true and you're retarded if you think otherwise :^)

just accept that amd's 2012 cards were so long-lived
>>
File: 1270559163226.png (37 KB, 296x256)
>this thread
>>
>>53822310
> Also stop trying to shift the "but you say drivers don't offer performance" because that was never the point you are the one who assumed so, Mr. Autism learn to follow conversations.
It was always the point you tried to make. Stop trying to re-write history.
>>
>>53822347
Alright how can I dumb it down so even YOU can understand it..

AMD Drivers = High CPU usage
High CPU usage by the driver = Less CPU available for games.
Strong CPU = More CPU for games to use even if drivers have high CPU usage.


I hope that was easy enough for you to understand.
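That argument boils down to simple arithmetic on the frame budget. A toy model (all millisecond figures are made up for illustration):

```python
# A fixed driver cost per frame eats a bigger slice of the frame
# budget when the CPU is slow, leaving less time for the game itself.
frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps
driver_cost_ms = {"slow CPU": 8.0, "fast CPU": 3.0}  # hypothetical

for cpu, cost in driver_cost_ms.items():
    print(f"{cpu}: {frame_budget_ms - cost:.1f} ms left for the game")
```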
>>
>>53822360
> Alright how can I dumb it down so even YOU can understand it..
You mis-spelled "how can I try to spin this to re-write history and change the point to something it wasn't?"
The point was always about the fact that driver updates did improve performance over time.
Period.

You tried to argue against that, and you were always wrong.
>>
>>53822379
>The point was always about the fact that driver updates did improve performance over time.

You were arguing against yourself the whole time then; the whole discussion was about CPU overhead.
>>
>>53822379
>The point was always about the fact that driver updates did improve performance over time.

>>53821022
>CPU overhead actually,
>CPU overhead
>>
>>53822384
Either you have memory problems, or you're a liar.

Here is the post where this all originated from:
>>53820922
> This isn't even about trying to promote AMD over Nvidia, you stupid git. The fact of the matter is, GCN did improve a lot with newer drivers. That's how they were able to keep up with Maxwell.

This was *always* the point.
Stop trying to pretend it wasn't.
>>
>>53820244
Save us 3dFX
>>
>>53822393
Millions of articles benchmarking driver updates.
/thread
>>
>>53822413
1.1M in DX11, no matter what.

Just to educate you: Nvidia can go up to 2.0M
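Whether you buy those ceilings or not, here's what they would mean per frame. A back-of-envelope sketch using the figures thrown around in this thread (treat them as claims, not measurements):

```python
def draws_per_frame(calls_per_second: float, fps: float) -> float:
    """How many draw calls per frame a given driver ceiling allows."""
    return calls_per_second / fps

# Ceilings as claimed in this thread, not measured values.
for vendor, ceiling in [("AMD DX11", 1_100_000), ("Nvidia DX11", 2_000_000)]:
    print(f"{vendor}: {draws_per_frame(ceiling, 60):,.0f} draws/frame at 60 fps")
```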
>>
>amd drivers have high cpu overhead
oh boy, what a retard
directx11 has high cpu overhead, so nvidia is the same as far as cpu usage goes on dx11. go off yourself
>>
>>53822421
Which is why you have inferior Nvidia hardware beating the superior AMD hardware.

AMD's software is just awful; if they hadn't pushed for DX12 they most likely would have gone bankrupt by this time next year.
>>
The thing you guys WANT to be saying is that Nvidia has multi-threaded their drivers, where AMD hasn't. That is why Nvidia's drivers work better.
>>
>>53822422
What? No it's not; in fact, Nvidia released a driver that improved DX11 CPU overhead.

How new are you? Everyone was talking about it when it came out.
>>
>>53822421
You keep repeating your irrelevant detail. It's not going to win you the argument. The only thing you're doing is showing you're a fucking idiot that doesn't like to admit when he's wrong.

It doesn't even matter whether the high CPU overhead claim is right or wrong; the fact of the matter is that driver updates did improve performance, and they account for vastly more of it than nonexistent improvements in CPU performance.
>>
>>53822436
>hasn't pushed for dx12
>has had the only fully-dx12 compatible card for a year now
>nvidia has nothing

even if amd goes bankrupt, it still has superior hardware
>>
>>53822452
Alright anon, you win the discussion about driver improvements even though it was never the discussion.

You wasted your time arguing about something no one but you was even talking about, congratulations.
>>
>>53822421
And if you want to go down the "but they made Mantle/DirectX 12" route:
Guess what? That's still *software* doing the performance improvements. Not CPU's getting faster.
>>
>>53822456
>>has had the only fully-dx12 compatible card for a year now

He fell for the AMD marketing.
The only fully DX12-capable GPUs on the market right now are Intel's iGPUs.
>>
>>53822463
> Alright anon, you win the discussion about driver improvements even though it was never the discussion.
Stop lying, it was always the discussion.
>>
>>53822479
Alright baby boy.
>>
>>53822488
Oh look, here's the smug little ad-hominems being thrown out when you know you've lost but want to keep pretending you haven't.
>>
>>53821895
>power efficiency (20~30% of Maxwell)
Kill yourself Nvidiashill.
>>
>>53822506
Are you gonna cry? baby is gonna rage?
>>
>>53821936
>The 7970 has the same architecture the 290X has.
Except it doesn't. 7970 is first generation GCN. 290x is second generation GCN, and features several architectural improvements.