You are currently reading a thread in /g/ - Technology

Thread replies: 57
Thread images: 7
File: 1463073900307.jpg (61 KB, 355x236)
Riddle me this: let's say RAM prices are $10/GB; for $10k one can get 1 TB of RAM. Of course, you need to fit it into a box plus some extra hardware to handle all that, but essentially you have a
1. very simple
2. in-memory, lightning-fast
3. cheap
computing base setup for 1 TB chunks.

Why don't we have computers like this readily available? Why are Big Data platforms a thing, again?
>>
Because it's $10,000 per TB you fag
>>
64 sticks of 16 GB @ $56 each = 1024 GB.
Assuming you could find a motherboard with 64 RAM slots, you can get 1 TB of RAM for $3,584
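Back-of-the-envelope, the math checks out. A quick sketch in Python, using this anon's $56-per-16 GB-stick price:

# Cost of 1 TB of RAM built from 16 GB sticks at $56 each (price quoted above).
STICK_GB = 16
STICK_PRICE_USD = 56
TARGET_GB = 1024

sticks = TARGET_GB // STICK_GB      # 64 sticks
total = sticks * STICK_PRICE_USD    # $3,584
print(f"{sticks} sticks -> {sticks * STICK_GB} GB for ${total:,}")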
>>
>>54758973
peanuts; big data offerings start at a couple of grand, and hardware is always the cheapest cost.
If you have to hire just one additional person to operate your super-distributed complex algorithm and infrastructure, you are looking at ~$60k per year. Multiply that number according to your consultants.
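To put that in perspective, a rough sketch using the figures from this thread ($10k for the box, $60k/year for one extra hire); the three-year horizon is an assumption:

# Three-year cost: one big-RAM box alone vs. the same box plus one extra ops hire.
HW_COST_USD = 10_000    # 1 TB RAM box, per OP's estimate
SALARY_USD = 60_000     # one additional engineer/consultant per year
YEARS = 3

box_alone = HW_COST_USD
box_plus_hire = HW_COST_USD + SALARY_USD * YEARS
print(f"box alone: ${box_alone:,}")         # $10,000
print(f"box + 1 hire: ${box_plus_hire:,}")  # $190,000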
>>
>>54759140
that is what I meant by components not being readily available. Anyway, it is just memory handling, not rocket science
>>
>>54758788
I set up a 12 GB RAM disk to work in on boot. Pretty comfy, no complaints. Assuming you had infinite money, and that you somehow found a motherboard with a hundred fucking RAM slots, I guess?

Sounds retarded though honestly
>>
>>54759408
>2016
>not having a PCI-e battery-powered 32 GB RAM disk
>>
>>54758788
I work for a company that owns something like this. The reason is that you hit diminishing returns quite quickly with this sort of thing. Once you shove that much RAM into one computer, the price becomes enormous. It's much cheaper to just buy two servers with 512 GB each (roughly $5,000 x 2 vs. $15,000). Then consider that two computers are better than one: in most installations of this size they're going to be working as a distributed system that balances across them, letting them be redundant, etc.

It's just dumb to pick the one bigger computer.
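The price argument in numbers; a sketch using this anon's figures ($5,000 per 512 GB server vs. $15,000 for a single 1 TB box):

# Two mid-size servers vs. one monster box, using the figures quoted above.
SMALL_USD = 5_000    # one 512 GB server
BIG_USD = 15_000     # one 1 TB server
pair = 2 * SMALL_USD
print(f"2 x 512 GB: ${pair:,} (redundant pair)")  # $10,000
print(f"1 x 1 TB:   ${BIG_USD:,} (single box)")   # $15,000
print(f"pair costs {100 * pair / BIG_USD:.0f}% of the big box")  # ~67%, the "66%" cited below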
>>
You would lose everything upon reboot.... idiot
>>
>>54759408
12 GB is nowhere near big data
>>
File: s-l1600.jpg (401 KB, 1600x1200)
>>54759140
Am I mistaken in thinking the upper limit on RAM slots is dictated by the CPU architecture?
>>
>>54758788
also, if your 1 TB PC gets damaged you lose tons of money
if you damage 1 out of 10 PCs/servers, it is just 10% damage
>>
>>54759508
that is my point: the whole distributed shilling sounds smart, but it has nowhere near the same performance, and the costs (human resources, hardware, complexity) are magnitudes higher.

Of course, ymmv; 1 TB just seemed like a good number to debate. If you frequently work with bigger data, shove in more RAM until you have everything* in memory.

*reasonably
>>
File: board-front.jpg (33 KB, 550x367)
>>54759485
>>
>>54759578
Like what, you need to change some memory sticks? Or it's physically destroyed? Not your typical scenario, imo

tons of money is what your cloud provider charges, plus the salaries of the people who operate it
>>
>>54759603
Makes me wonder why people aren't using this already
>>
>>54759583

Distributed systems are tried and true. If I can get my deployment out for about 66% of the cost (like my first post suggested), I'm going to take that option. There are exceedingly few cases where you can do the processing you're talking about on one machine, even one with 4 CPUs and 1 TB of RAM, so you're going to need to distribute. Plus, only a financially suicidal faggot would put that much faith in one system where uptime is critical; with two of them, you would have failover.

There will come a day when I'm saying this about 1 TB machines; it's gonna get there eventually, and demands will grow to meet it. I'm just saying that this is the current climate.
>>
File: 1164336.jpg (146 KB, 682x700)
>>54758788
>Why don't we have computers available like this?
You can buy a computer with 1.5 TiB RAM today.
Do note that it is 1.5 TiB, not 1.5 TB.

Board: Supermicro X10DRi-LN4+
RAM: 24x 64 GiB DDR4 ECC = 1.5 TiB
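The TiB/TB distinction this anon is making, worked out as a sketch:

# 24 sticks of 64 GiB, and why 1.5 TiB is not 1.5 TB.
GIB = 2**30
total_bytes = 24 * 64 * GIB
print(total_bytes / 2**40)   # 1.5 TiB (binary terabytes)
print(total_bytes / 10**12)  # ~1.65 TB (decimal terabytes)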
>>
>>54759536
why would it be limited?
>>
>>54759210
>that is what I meant by not having components readily available
The components ARE readily available.
>>
>>54759677


Physical limitations on the bus of the CPU? These things aren't infinite, y'know.
>>
>>54759663
again, you can have many such systems for a fraction of the cost, for failover

>Distributed systems are tried and true
On the contrary. Complexity and moving parts are the very enemy; they need to be managed, which costs extra. Not to mention performance: networking adds a big overhead, while you can pretty much access your data instantly once you read it into memory. Faster computation -> less uptime needed.
>>
Why would you want all of those sticks of RAM?!?!?! If one fails you're so fucked.
>>
>>54759677
The number of addressable bytes on a 64-bit architecture would be 2^64 > 2^40, which is the number of bytes in a terabyte, so I don't see any physical reason why you couldn't do it.

However, what happens when the power goes out? You'd have to mirror the memory onto a non-volatile medium, which is just going to complicate things. In the end I don't see a high enough return to justify it.
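The address-space headroom, as a quick sketch:

# 64-bit address space vs. 1 TiB (2^40 bytes) of physical RAM.
addressable = 2**64            # bytes a 64-bit pointer can name
one_tib = 2**40                # bytes in 1 TiB
print(addressable // one_tib)  # 16,777,216: addressing is nowhere near the limit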
>>
>>54759693
why couldn't multiple memory units be connected to the bus, as long as you can address them? I don't think CPUs care about that; it is abstracted away from their POV, no?
>>
>>54759768


I just said that two 512 GiB systems are cheaper than the single mega system.

I'm not sure why you think it's so complex to have multiple computers. You have to stop thinking of them as individual computers. There are already very stable systems in place for doing this, and the overhead of networking is almost always negligible. I just don't see your point. We run many different types of clusters where I work, and I suspect that what we do is quite typical. All of the companies we sell to that I've talked with also do this.

I can't think of a single application where memory read speed is more critical than fault tolerance, although there may be one.


Also, fault tolerance is absolutely crucial to anything that isn't a big data maymay.
>>
>>54759816
why would the power go out? It is the same for distributed, though, unless you mean one datacenter is hit by a blackout while the other can operate freely, which is a freak event I don't care about in the grand scheme of things.
>>
>>54759835

There's still a maximum data rate for the entire thing, plus you have to keep those tracks very short. I would imagine that the breaking out of the bus by the chipset imposes a significant limitation as well.

Then again, I don't see any reason it COULDN'T be possible.
>>
>>54759904

Tolerance for losing a data center is not unusual at all. Look at Google, they can get all of the data out of a data center and have seamless failover before the building can even burn down. Uptime is crucial.
>>
File: gtb.jpg (40 KB, 526x526)
Looks like it.
>>
>>54758788
check out the Vega 3 from Azul Systems. Apparently it supports up to 670 GB of memory.
>>
>>54759898
network bandwidth and memory bandwidth are not even comparable, let alone negligible
+ cloud providers charge for data transfer; if you need transfers to/from your computing devices, that is x2 for starters

of course, you need to prepare for faults; however, you can have redundancy in such a system too

>I'm not sure why it's so complex to have multiple computers? You have to stop thinking of them as individual computers.

you just answered it yourself. A layer of abstraction is exactly what introduces complexity. Sure, by now it is easier to manage multiple devices, and your provider does some parts for you; nevertheless, one computer will always be easier
>>
>trusting your entire operation to one computer
Nigga there's a reason you're funposting on /g/ gtfo here desu senpai
>>
>>54759946
then you have a backup machine in another one; not a big deal. Or even in your own basement.
Also, it is inherently more protected from network failures

Why would uptime be crucial? It is not Facebook; you have long-running jobs. Sure, if it fails, you have to start a batch over. However, when you get 10x performance because there is no overhead, that is pretty great.
It is harder to shill for consultant money if you don't throw some dust into your customers' eyes with all your big data magic, I will give you that
>>
>>54758788
My ESX cluster at work has like 6 TB of RAM across 6 hosts.
>>
>>54758788
I don't think you know what big data is.

Big data is just a database + analytics.
>>
>>54760105
when you are in the business, you can have multiple computers that can handle a job; the only difference is that each one can complete it separately
>>
>>54759663
Two physical servers aren't even enough for proper failover.

Whenever we deploy a cluster, it's between 4 and 6 hosts.

Never use local storage; everything goes on the SAN.
>>
>>54760114

Most of my work concerns systems that engineers use; if they go down, we waste engineering time and the engineers can't work.

Also, I can't even imagine why the performance gains would be that large. Try closer to 10% faster, maybe, on a good day.
>>
>>54759768
Sorry, sysadmin here: the best setup is always distributed, highly available, and redundant.
>>
>>54760137
and what is analytics if not computation on your data? To call, e.g., scientific computing "analytics" is a bit misleading, though
>>
>>54759805
ECC RDIMMs have a really low failure rate. Also, if you need to power off the server, it should be one of many in a cluster, and you can vMotion all the VMs somewhere else for the 20 minutes it is powered down.
>>
>>54760192

Obviously, I was using this to say two is better than one.

Most of ours are only tolerant of one node failure, though; not enough resources to go around.
>>
>>54760213
sysadmin here too; the best setup is very much dependent on your goal

>>54760211
because of said bandwidth difference you don't have to move your data across computers; that is huge
>>
>>54760244
If your goal is to have 95% uptime, by all means have one physical host. Why stop there? Make it a whitebox with fucking regular non-ECC RAM, and use shitty 3.5" consumer hard drives with no next-day replacement support.
>>
>>54758788
>Why don't we have computers available like this?
We do, newfriend. You can rent one with 2 TB of RAM for $10/hour on Amazon EC2.
>>
>>54760244

There are 10 or 40 Gbps links between the nodes; we don't bottleneck on the network. Why would you even need to transfer that much?
>>
>>54758788
nice squirrel op
>>
>>54760263
The very fact that you are a sysadmin means you work with complex systems. No one hires a guy to manage his laptop. Saying you have to hit a sweet spot on the complexity scale is not really outrageous.
>>
>>54760313
I'm not up to date on memory bandwidths, but my guess would be they can do 40 GB/s easily; that is an 8x difference.

You need to transfer that much because you have to process all of your data, and it is big, by definition. What about when you have 100 TB or more? Sure, you can't really hold infinite data in memory, but it is still faster working on those chunks.
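That 8x figure checks out; a sketch comparing this anon's 40 GB/s memory-bandwidth guess against the 40 Gbps inter-node link mentioned above:

# Memory bandwidth vs. a 40 Gbps network link, both in gigabits per second.
mem_gbps = 40 * 8           # 40 GB/s expressed in Gbps = 320
net_gbps = 40               # the inter-node link speed quoted earlier
print(mem_gbps / net_gbps)  # 8.0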
>>
>>54760322
That's a chupacabra you retard.
>>
File: Kepiplörö and your mom.gif (1 MB, 260x195)
>>54760482
no, you flaming faggot, that's a Cnidarian, you fucking spastic
>>
>>54759408
I don't know about a TB, but HP Z800s have an above-average number of RAM slots.

My machine has 96 GB of RAM, and it's not even maxed out. It could go higher.
>>
>>54759514
Literally only talking about running Eclipse, a browser, and my project files. I was just saying that in theory it sounds nice.
>>
>>54760593
do you use a small SSD as swap?
>>
>>54760323
Guess what: at the enterprise level you do need people to manage laptops; they're called (desktop) support.

When one system fails and it isn't part of an overall cluster, it fails completely. And 95% uptime over an entire year is about 18 days of downtime. That is all assuming you don't need to reconfigure the system every single time you experience a failure.
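The downtime arithmetic, as a sketch:

# Downtime implied by an uptime percentage over one year.
HOURS_PER_YEAR = 365 * 24  # 8760
for uptime in (0.95, 0.99, 0.999):
    down_h = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.1%} uptime -> {down_h:.0f} h/yr ({down_h / 24:.1f} days)")
# e.g. 95.0% uptime -> 438 h/yr (18.2 days)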
>>
>>54760425
I'm sorry, but you clearly just do not understand how these things work. Try going to school and getting a job first before you get all fucking crazy about this stuff.