It's over, Intel is finished.

You are currently reading a thread in /g/ - Technology

Thread replies: 59
Thread images: 8
File: 1313218042060.png (2 MB, 948x1130)
It's over, Intel is finished.
>>
K7/K8 were the days. The little faggot was genuinely winning and you couldn't help but root for them.
>>
File: that feel.jpg (58 KB, 528x384)
>AMD will never be the CPU performance leader ever again
>>
>>55258513
Why do you think that?
>>
>>55258854

I'm not him, but:

https://www.cpubenchmark.net/singleThread.html
>>
>>55258450
>1/3 speed L2 cache
if you could get hold of the competing Pentium III, it was infinitely better
>>
>>55258513
>z e n
>>
File: 1451918133464.png (32 KB, 286x350)
>>55258450
>you will never see a cartridge CPU again
>>
File: hardwaregore_fullyc_004.jpg (211 KB, 1024x768)
>>55259973
B-but pins are so much better
>>
>>55259973
Why don't they make cartridge CPUs anymore? More expensive? Can't cool them properly?
>>
>>55260295
Harder to cool, more expensive, and current socket types allow much more bandwidth between the CPU and motherboard.
>>
>>55258854
Intel puts more into R&D every year than AMD is worth.
>>
Why did they switch to cartridge CPUs around 2000?
>>
>>55260295
they'd be far too big
>>
>>55258450

fegget, AMD used to make better CPUs than Intel back in the day. Their market share was like 10x what it is today.
>>
>>55260370
2000 Pajeets < Jim Keller

R&D money means nothing
>>
>>55258450
>color palm pilots

The future is now.
>>
640k ought to be enough for anybody.
>>
>>55260295
Think for a second.
Your current CPU has anywhere between 900 and 1200 pins/pads.
DDR4 has 288 pins (DDR3 240 pins) and you want to make a cartridge that would have 900+ pins.

Along with the cooling reasons the socket for it would be a nightmare.

>>55260388
If I recall correctly part of the reasoning was putting more cache near the CPU (which was then BTFO by DDR). It also allowed them to release various classes of these CPUs with different amounts of cache but into the same socket. Slot. Whatever.
>>
>>55260471
I don't think we need that many pins.
>>
>>55260519
We can easily fit it, though.

Have the CPU sit in two parallel slots instead of one longer one.
>>
>>55258450
Shit, I miss getting Maximum PC magazines. My favorite was their coverage of 3DFX. The Dream Machines were good, and they did some neat cheap builds, too. They were always really good at explaining things.
>>
File: raja-koduri.jpg (34 KB, 444x284)
>mfw the 480 is a piece of shit
>>
>>55260423
>fegget, AMD used to make better CPUs than Intel back in the day. Their market share was like 10x what it is today.
Which made it a whopping 15%, lel.

By lel I mean :( because I bought quite a few AMD CPUs back in that era. Some good times.

>>55260559
For what purpose?
>>
>>55260471
>DDR4 has 288 pins (DDR3 240 pins) and you want to make a cartridge that would have 900+ pins.

Put memory next to the CPU using HBM2. Problem solved.
>>
>>55260519
True, many of the pins on a CPU are power or ground which could be combined into fewer rails going into the socket, but you still need a ton of pins for the interconnects with RAM, PCI-e and chipset.

>>55260559
It would still be taking up more space than the current socket, and that is without working out how to mount a cooler on it too.
>>
>>55260519
Well, they could in theory do what was done during the Athlon 64 era with a few experimental boards, where the daughterboard contained the CPU, the RAM, and the power circuitry, with only external IO going off-card.
>>
>>55260617
>want to upgrade your RAM
>have to buy a new CPU too
I wonder who could be behind this post.
>>
>>55259151
Wow, really? I'm pretty sure my 8350e gets better single-core than the top AMD proc on this list.
>>
>>55258513
K E N
E
N

Also I had an Athlon 2100 back in the day.
>>
>>55260471
Most of those pins are Vss/Vcc. On LGA 1151, there are ~150 Vcc and ~450 Vss pins.
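Taking those rough counts at face value, the LGA 1151 pin budget splits up like this (a back-of-the-envelope sketch using the approximate figures above, not an official pinout):

```python
# Rough LGA 1151 pin budget, using the approximate counts above.
# These are ballpark figures from the thread, not Intel's official pinout.
total_pins = 1151
vcc = 150                      # power pins (approx.)
vss = 450                      # ground pins (approx.)
power_ground = vcc + vss
signal = total_pins - power_ground

print(f"power/ground: {power_ground} of {total_pins} "
      f"({power_ground / total_pins:.0%})")
print(f"signal/other: {signal}")
```

So even if power and ground were collapsed into a few rails, roughly 550 pins would still be needed for memory channels, PCI-e lanes, and the chipset link.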
>>
>>55258450
AMD were winning. Then Intel paid OEMs to only use their chips :^)
>>
File: 1432009272659.jpg (122 KB, 668x623)
>>55260598
>Hurr de durr
>>
>>55260641
I don't think that's what he meant, though.
>>
File: 1398977907844.jpg (82 KB, 680x677)
>>55258450
>first-gen K7
>>
>Color Palm IIIc PDA
Holy fucking shit!

No, really though, the term PDA shouldn't have gone away; it sounds way better than smartphone.
>>
>>55260295
Because they're shit, and they only existed in the first place as an economical but faster way to accommodate off-die caches, something that became irrelevant as soon as Coppermine and Thunderbird put L2 on-die.

Not to mention the entire design of the packaging is fucking dumb: it's difficult to remove, and with a modern design it would basically make SMP impossible. Sockets are much more efficient, make way better use of available space, and allow way better heatsinks.
>>
>>55260840
That's the outcome though.
>>
>>55259151
>"Golden age" Athlon 64 3800+ (676/mid-2005) doesn't even come near Intel's 3.8 GHz Pentium 4 housefire (822/mid-2005)
Not feeling the hype.
>>
>>55260852
What was wrong with it?

The first AMD CPU I ever had was a K8 2700+.
>>
>>55261206
relive the Emergency Edition
http://www.anandtech.com/show/1910/14
>>
>>55261216
they were pretty much marketing stunts; everything after the 700 MHz model used gimped cache to achieve those speeds and front-page PR
>>
>>55261288
That's terrible.
>>
>>55261259
Not even going to argue with you on that one; Prescott+ EEs were fucking shit, and the dual-core A64s deserved their run in the sun before Core 2 came around.

I'm more looking at the single-core era, nothing I've ever seen from that period has really been impressive from a raw performance or feature set perspective.
>>
>>55260980
Well, there is the option of putting the CPUs and their local memory blocks on their own daughterboard and then having them interface with the mainboard. I know the LGA 1567 8P servers Intel put out did it in this fashion (4 daughterboards, 2 CPUs and their RAM per board). Hell, given that every CPU these days has an IMC, the concept is actually quite viable for servers.

>>55261390
Socket 939 Athlon 64s (not the X2s, although those are also quite noteworthy) had a very good featureset for their time and could be considered the first "modern" platform layout.
>>
>>55261493
>Socket 939 Athlon 64s (not the X2s, although those are also quite noteworthy) had a very good featureset for their time and could be considered the first "modern" platform layout.
They also helped make Intel abandon Itanium which was best for everyone.
>>
>>55261531
Itanium was a clusterfuck that would have killed itself. The compilers needed to make it work its supposed magic never really materialized, the power consumption and heat output were terrible for the performance, and to round it all off, a fucking first-gen Pentium could execute x86 instructions faster than Itanium could in emulation.

Seriously, what the fuck was Intel thinking during that time period? They had Netburst, they had Itanium, and both sucked.
>>
>>55261493
That concept is a much better idea than the perpendicular slotted design the Pentium 4/K7 used.

>Socket 939 Athlon 64s (not the X2s, although those are also quite noteworthy) had a very good featureset for their time and could be considered the first "modern" platform layout.
When I think about it, I guess you're right on the consumer front; not really so sure about servers and workstations, though. Opterons always looked iffy both in benchmarks and on paper: not quite as shit as the Athlon MP, but still lacking things Intel had, like on-die L3 cache.

>>55261531
Not at all; Intel continued pretty steady development of the Itanium platform until well after 64-bit x86 was old news, and in terms of market success, the platform was dead to begin with outside of HP and a couple of Chinese manufacturers.

>>55261574
I think hubris was what really took down the Itanium, it felt like it was designed based on ideals and theoretical numbers rather than clunky real-world applications. The magic compiler was fucking great for Intel and HP's marketing tests, not so much for actual applications.

Though when you think about it, it was a pretty sweet deal for Intel despite its failure. The IA-64 hype train destroyed many of Intel's competitors in the pure-profit high end like MIPS, PA-RISC, and Alpha, and took SPARC down a pretty huge notch, just in time for Intel to swoop in with "good enough" x86 chips and hordes of cheap workstations and servers running NT and Red Hat. As a side bonus, it made vendor-fucking enterprise customers a little easier for HP and IBM.
>>
>>55261761
>That concept is a much better idea than the perpendicular slotted design the Pentium 4/K7
That's because everything but the chip and cache was off-board for the P2, first-gen P3, and Athlon. Nowadays everything except external IO can go on a daughterboard, including power.

>Opterons always looked iffy both in benchmarks and on paper, not quite as shit as the Athlon MP, but still lacking things Intel had like on-die L3 cache.
Well, I suppose back then AMD counted on HyperTransport and the onboard IMC to help with that. Keep in mind that during that time Intel had Socket 603 Netburst chips, all of which had to fight over the same FSB and memory bus for resources, which was further compounded by needing to maintain cache coherency. Optys, on the other hand, had their own memory pools and could communicate directly with each other and the chipset through HT. The only downside was that if a chip needed to access a non-local memory pool, there was a not-insignificant latency penalty.
Even so, it was enough of a good idea for Intel to adopt it themselves after Dunnington (the Core 2 hex-core that never reached consumers).
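The shared-FSB-versus-NUMA tradeoff can be sketched with a toy model: sockets on a shared bus split its bandwidth n ways, while NUMA sockets keep full local bandwidth but pay a latency penalty on remote pools. All numbers here are illustrative, not measured figures for any real Xeon or Opteron:

```python
def shared_fsb_bandwidth(total_bw_gbs, sockets):
    """Every socket contends for the same front-side bus."""
    return total_bw_gbs / sockets

def numa_avg_latency(local_ns, remote_ns, remote_fraction):
    """Average access latency when a fraction of accesses hit remote pools."""
    return (1 - remote_fraction) * local_ns + remote_fraction * remote_ns

# Illustrative numbers only, not real Xeon/Opteron figures.
print(shared_fsb_bandwidth(6.4, 4))     # 4-socket shared FSB: 1.6 GB/s per socket
print(numa_avg_latency(60, 120, 0.25))  # 75 ns average with 25% remote traffic
```

Which is the usual NUMA argument: the more traffic you keep local, the closer the average latency stays to the local figure, while per-socket bandwidth never degrades with socket count.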

>Though when you think about it, it was a pretty sweet deal for Intel despite its failure, the IA-64 hype train destroyed many of Intel's competitors in the pure profit high-end like MIPS, PA-RISC, Alpha, and took SPARC down a pretty huge notch, just in time for Intel to swoop in with "good enough" x86 chips and hoards of cheap workstations and servers running NT and Red Hat, and as a side bonus it made vendor-fucking enterprise customers a little easier for HP and IBM.
Well, true. And even though IA64 flopped, AMD ended up swooping in with x86_64, which Intel grabbed quickly enough to finish making the aforementioned vendor-fucking of enterprise customers easier.

Still doesn't explain what the fuck they were thinking with Netburst.
>>
>>55262030
Totally agree; I'd be pretty content seeing larger CPU+memory daughterboard designs in particular, since the old method was simply too small. Still sounds a little harder to cool in a desktop form factor, though.

Also, I meant Pentium III in that quote, not Pentium 4, Jesus Christ.

>Well, I suppose back then AMD counted on Hypertransport and the onboard IMC to help with that.
Oh yeah, I forgot the memory controller was integrated from the start; that explains a little more why Unix vendors making forays into x86, like Sun, seemed to prefer the early Opterons over Xeons.

>And even though IA64 flopped, AMD ended up swooping in with x86_64, which intel grabbed quickly enough to finish making the aforementioned "Vendor-fucking of the enterprise customers" easier.
Right, but HP and IBM could do their share of "vendor-fucking" as well, being the only non-x86 Unix vendors worth a shit left, in an age before the Xeon and Opteron were even remotely competitive in the "mission-critical" and HPC segments.

>Still doesnt explain what the fuck they were thinking with Netburst.
Seemed like just plain old cynicism: a buyer looking at the shelf talker alone saw an Athlon XP running at 1.5 GHz with 400 MHz DDR, or a Pentium 4 clocking in at 2 GHz with 800 MHz RDRAM, and bigger numbers are always better, right? The design was functional enough that Intel could simply raise clocks when something challenged them, and it worked well enough for the majority of people, who would probably blame Microsoft for their struggles anyway.
>>
>>55262350
>Still sounds a little harder to cool in a desktop form factor though.
Well, a desktop doesn't need the density of a high-performance server, so it'd be better if they stayed as they are: CPU, RAM, and CPU VRMs on the mainboard instead of a daughterboard. In servers, though, they have those tiny little 1/2U passive heatsinks and that bigass bank of fans splitting the drives from the motherboard. Just get some ductwork, align everything for optimal airflow, and there you have it.

>Right, but the HP and IBM could do their share of "vendor-fucking" as well as the only non-x86 Unix vendors worth a shit left, in an age before the Xeon and Opteron were even remotely competitive in the "mission-critical" and HPC segments.
Point, I forgot about that. I'm just so used to Intel (and to a much lesser extent AMD) being the dominant factor in servers and HPC that I completely forgot they were still around and could move in with the collapse of the other vendors and architectures.

>Seemed like just plain old cynicism, a buyer looking at the shelf talker alone saw an Athlon XP running at 1.5 GHz with 400 MHz DDR or a Pentium 4 clocking in at 2 GHz with 800 MHz RDRAM, and bigger numbers are always better, right?? The design was functional enough that Intel could simply raise clocks when something challenged them, and worked well enough for the majority of people that would probably blame Microsoft for their struggles anyway.
Yep, that would be a reason, and even when people started smartening up a bit, Intel just started shoveling chips and cash everywhere and boned AMD.
Speaking of which, it turns out Intel still hasn't forked over the settlement money for the no-no they did during the Athlon 64 days.
>>
>>55262479
>Just get some ductwork, align everything for optimal airflow, and there you have it.
Yeah, you could probably handle it for dual-processor workstations as well if you had the two boards facing opposite directions from each other, but overall it seems not really worth the extra pain.

>Speaking of which, it turns out intel still has not forked over the settlement money for the no-no they did during the Athlon64 days.
Shit, that's retarded. Can't say it's surprising of them, though.
>>
>>55262731
>Yeah, you could probably handle it for dual-processor workstations as well if you had the two boards facing opposite directions from each other, but overall it seems not really worth the extra pain.
Actually, how I'd see it working is to have the top of one CPU heatsink butt up against the bottom of the daughterboard next to it, and have the RAM either flank the chip (if LGA 2011) or sit above it. Assuming a 4U rackmount server, looking from the front, the CPUs would face to the left, with a heatsink on top of each and the RAM above or flanking on both sides. There would be 4 of these boards with 1U of clearance between each, plus ductwork focusing the fan bank's exhaust directly into the CPU banks, with more ductwork on the far-left CPU card to keep the airflow pressed against the card and heatsink.

Pretty sure that or something similar is how it was done for the LGA 1567 chips, and those things were monsters. It would allow for increased density and, should Intel/AMD and the system designer allow for it, the installation of even more CPU/RAM banks. Communication would be interesting, as a data burst running from one end of the CPU chain through the QPI/HT links to the other would take some time, relatively speaking, but because of the way QPI/HyperTransport works, a 4U server using daughterboards could theoretically fit and scale to 8 boards, which in turn translates to 16 CPUs.
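That "would take some time" is easy to put a rough number on: in a linear chain, the worst-case path crosses one link per intermediate hop. A minimal sketch, with a made-up per-hop latency (real QPI/HT hop costs vary by generation):

```python
def worst_case_hops(n_cpus):
    """In a linear chain, the end-to-end path crosses n - 1 links."""
    return n_cpus - 1

def end_to_end_latency_ns(n_cpus, per_hop_ns):
    """Worst-case one-way latency across the whole chain."""
    return worst_case_hops(n_cpus) * per_hop_ns

# 8 daughterboards with 2 CPUs each, wired as one long QPI/HT chain.
print(worst_case_hops(16))            # 15 hops end to end
print(end_to_end_latency_ns(16, 40))  # 600 ns at a hypothetical 40 ns/hop
```

Fifteen hops end to end is why nobody actually wires 16 sockets as a pure chain; the point is just that hop count, not link speed, dominates at that scale.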
>>
>>55259151
>muh single thread
>implying that anything resource-intensive in 2016 is fucking single-threaded
>>
>>55262874
That's a more interesting idea than I had; my idea of having the two boards opposite each other was kind of along the lines of communication. It seems that if you fit them together closely like that, it would make the underlying backplane design a little less of a bitch, if it would be one in the first place.
>>
>>55263408
Well, for the most part the backplane around that area would be passive: just data bus (QPI/HT), PCI-E, and power traces, with a couple of miscellaneous traces here and there for other I/O. There might be some additional surface-mount components for filtering and other things, though.
The mainboard would naturally still have the chipset(s) and the other I/O hardware for the other devices in the system, so it wouldn't be entirely passive.

If doing a 4-board/8-socket setup, it would still leave plenty of space for PCI-E slots in the case and on the board for whatever other things that would need a PCI-E slot.

The major issue here (other than routing the PCI-E lanes, since they mostly come from the CPUs these days) is setting up the QPI/HT links in a sane manner. Keep in mind most server CPUs only have room for 3-4 links, with the 4th link typically either going unused, being converted to PCI-E, or linking to the chipset(s) if close enough on the board.
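With only 3-4 links per socket you can't fully connect 8 sockets, so the chosen topology determines the worst-case hop count. A small sketch comparing a chain against a ring for 8 sockets (real 8P parts use denser topologies; this just shows why the link budget matters):

```python
from collections import deque

def diameter(adj):
    """Longest shortest-path (in hops) over all socket pairs, via BFS."""
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(bfs(s) for s in adj)

n = 8
# Chain: each socket links to its neighbors; ends have only one link.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
# Ring: close the loop; still only 2 links per socket.
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print(diameter(chain))  # 7 hops worst case
print(diameter(ring))   # 4 hops worst case
```

Closing the loop halves the worst case without spending a third link, which is the kind of tradeoff the QPI/HT routing on such a backplane would be juggling.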
>>
>>55263541
That's true, the more I think about it; it would actually probably be more annoying to design it the other way, since the edge connectors would be pretty cramped and a little more difficult to route.

Wish there was a little further discourse I could offer on the subject, but I'm not as well-versed in modern hardware design as I'd like to be.
>>
>>55263779
Neither am I. I just figured it was a practical way of doing modern slotted processors, as it has more or less been done before.

An interesting twist on it, though, would be to skip the slots for the DRAM and the sockets for the CPUs entirely and attach them directly to the daughterboards (the CPUs through a BGA-style mount and the DRAM directly), then attach the necessary heatsinks. It could be done in pairs as previously mentioned, or as singular CPU/DRAM cards that can be pulled and dropped in as required, each with the necessary heatsink, CPU, DRAM, and VRM hardware on it. It would arguably make the design a bit simpler, because wiring up to sockets, especially for the DRAM, would no longer be needed; you'd wire straight to the DRAM chips instead. It would also allow for the use of the other side of the card (put part of the DRAM there, maybe) and, if done correctly, allow for packing in even more CPU cards.

Eh, it's an idea that will likely never truly happen, and sleep deprivation is making its presence known.
>>
File: dream00.png (571 KB, 575x599)
>>55260593
It's fun as fuck to go back and look through the archived issues on Google Books. I still want one of the early Dream Machines, as silly and somewhat pedestrian as they sometimes were.

