
You are currently reading a thread in /g/ - Technology

Thread replies: 90
Thread images: 9
File: c03760147[2].png (151 KB, 474x356)
http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=5379860

/g/, tell me a reason not to get one of these along with 16GB of ECC RAM and use it to build a FreeNAS/ZFS home server. They're on Amazon for 240 yurodollars, and the 16GB of ECC RAM would cost me 150 bucks on top.

In comparison, a motherboard (excluding processor!) with the C226 chipset (needed for ECC RAM) already costs 180 euros, and one with a pre-installed quad core atom C2550 is 300.

>inb4 HP
>>
What the living hell do you need 16 GB RAM for on a home server?
>>
>>47146443
ZFS.
>>
>>47146443
ZFS.
>>
>>47146443
ZFS + ZFS niceties
>>
>>47146443
FreeNAS requires a buttload of RAM as disk space goes up for no good reason.
>>
Seriously, this filesystem needs 16 GB RAM to work well? What would you be storing on it?

And even if you do need 16 GB of it, surely you don't actually need ECC RAM, which costs more.
>>
Does it even have hardware RAID?

Either way, GigE will bottleneck the shit out of you so any performance improvements after that will be wasted. My D510 supermicro board doesn't even get bottlenecked on the CPU regardless of whether I use Samba or any of the possible LIO backstores.
>>
>>47146397
need hp support to get firmware updates

but then again everyone else does this
>>
>>47146518
>>47146520
ZFS needs around 1GB of RAM per 1TB stored, as a guideline.
Secondly, no one uses hardware RAID with ZFS - it removes most of the flexibility off the bat.
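
To put rough numbers on it (purely a hypothetical config, not necessarily OP's): four 4TB drives in raidz2 give ~8TB usable, so by that rule of thumb you'd budget ~8GB for ZFS plus a few gigs for the OS and ARC headroom - which is roughly where 16GB lands.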
>>
>>47146518
It uses the RAM to prevent filesystem corruption, which is why ECC is crucial - if some filesystem related shit gets corrupted in RAM, the filesystem itself will get fucked.
>>47146520
It does have HW RAID, but FreeNAS can't use it. I'd just go with ZFS' RAID.
And I'm not really worried about data throughput as much as data integrity - as long as it's fast enough to stream 1080p videos from it and do backups to it etc I'll be fine.
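
For reference, a minimal sketch of what ZFS' own RAID looks like from the shell (FreeNAS normally does this through the web GUI; the pool and disk names here are just placeholders):

# pool named "tank", raidz2 across four disks
zpool create tank raidz2 ada0 ada1 ada2 ada3
# check pool health afterwards
zpool status tank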

>>47146556
By HP support, you mean some kind of paid support plan? What benefit would I get from these updates once my system is up and running?
>>
>>47146558
My main point is that it doesn't matter what storage software or hardware you throw into something, you're going to get bottlenecked by the network.
>>
>>47146491
WTF, just use normal Linux and set up whatever sharing you want then. My home server gets along comfortably with 2 GB of cheapass RAM and runs torrents and an IRC bot too. Then again I'm using ext4 since I don't really need a filesystem that can back up multiple versions of my movies, porn and isos.
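
If you go that route, a bare-bones Samba share is roughly this - a rough sketch assuming Debian/Ubuntu and a /srv/share directory, adjust names and paths to taste:

sudo apt install samba
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[share]
   path = /srv/share
   read only = no
   guest ok = yes
EOF
sudo systemctl restart smbd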
>>
>>47146585
the standard warranty will grant you firmware updates but once that runs out you will need to pay to get them

just general bug fixes etc
>>
>>47146598
I think you're in the wrong thread. OP probably has a reason for wanting to keep his data.
>>
>>47146585
>It does have HW RAID, but FreeNAS can't use it
Then there's a 99% chance it's not hardware RAID, it's fakeraid.

I'm using a Supermicro board that I got used (with 4GB of ram included) for $65. There's not much reason to pay for more than that when your GbE is going to be the bottleneck.

>>47146598
This. If you want easy redundancy and automatic failover, toss in $30 for an older HW RAID card.
>>
>>47146585
>It uses the RAM to prevent filesystem corruption, which is why ECC is crucial - if some filesystem related shit gets corrupted in RAM, the filesystem itself will get fucked.
What happens if power is lost? Do you need a UPS as well to use ZFS safely?
>>
>>47146443
OpenZFS
>>
>>47146653
You're probably right about the fake RAID. However, ZFS is really appealing to me because of the filesystem integrity thing and I have no idea where I could find used ECC compatible motherboards. I guess I could scavenge the dumpsters at my university, but I have no idea how often they throw out server grade hardware.

>>47146700
No, I think it would still be fine. I'm not an expert but from my research, you can pull the plug at any point and the FS will still be good, you just might lose some transactions that were still in RAM only.
>>
>>47146700
no, zfs can survive power outages

the problem with non-ECC ram in zfs is that the filesystem operations (e.g. checksumming) happen in ram

so if you get a bit flip you're fucked, as you're going to write corrupted data to the disk

keep in mind zfs was designed for enterprise systems where ecc is the standard
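
this is also part of why you run regular scrubs - a rough sketch, assuming a pool called "tank":

# walk the whole pool, verify every checksum and repair from redundancy where possible
zpool scrub tank
# afterwards, look at the READ/WRITE/CKSUM error counters
zpool status -v tank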
>>
>>47146700
No, it's just that there's no way to detect errors introduced by faulty RAM or cosmic-ray bit flips other than with ECC RAM. So the whole point of running this super safe file system becomes kind of moot if you still can't trust your RAM to return the data intact.
>>
>>47146833
This is true for all software raid systems, but >>47146841
>>
>>47146825
They're more expensive due to being newer, but the Supermicro A1SRM lineup has atom C2000s and supports ECC.

I can't wait for the X10 line, some of them have onboard 10GbE which is way overdue.
>>
>>47146841
You could say the same thing about hardware raid controllers if they didn't use ecc
>>
File: CIMG0261.jpg (656 KB, 2000x1500)
>>47146397
I've got this with the standard hardware configuration. 2x WD Red 4TB in Linux software RAID 1, plus a Crucial M4 64 GB SSD running CentOS 7.

Ask me anything you need to know OP.
>>
>>47146905
If they are truly hardware RAID, they do. Problem is >>47146653
>Then there's a 99% chance it's not hardware RAID, it's fakeraid.
>>
File: ernie.gif (1 MB, 193x135)
/g/ee, what's a good place to read about servers and become less of a fucking retard about 'em?
>>
>>47146905
Any halfway decent hardware RAID controller will use ECC RAM.
>>
>>47146397
I also plan on buying this with 16 gigs of RAM and running ESXi on it for FreeNAS and a bunch of other servers.

I'm kind of turned off by the fact that it has 4 drive bays but doesn't support RAID 5...
>>
File: tumblr_m19bqlbqoi1r48j99o1_500.png (332 KB, 486x363)
>hp
>>
>>47146940
Servers are just like most other computers but they are designed to have maximum uptime and fault tolerance (or should be anyway) in most cases. Hence dual PSUs, hardware RAID, dual ethernet (or more) connections, etc.

>>47146953
You can always do soft RAID in LVM or with mdadm in Linux. Not a big fan of LVM soft RAID though because it seems to be buggy right now, i.e. setting up an LV with "lvcreate ... -m1 --type raid1" or something like it.
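
For reference, a rough sketch of both routes (device and VG names are placeholders):

# mdadm: RAID 1 across two whole disks, ext4 on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
# LVM equivalent of the lvcreate call mentioned above
lvcreate --type raid1 -m1 -L 500G -n data vg0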
>>
>>47146940
There's not that much to learn. They're just normal computers for the most part. The only difference is that for multiprocessor setups, the CPUs themselves need to support the level of MP you're using (2P, 4P, or rare 8P setups).

RAM is sometimes different. Some RAM supports ECC, which lets it detect and correct most RAM errors. Normal desktop RAM is unbuffered, and some server equipment uses that too; other servers use registered DIMMs, and some use fully buffered DIMMs.

SAS is often used instead of SATA on servers, but you can plug SATA drives into SAS controllers (but not vice versa). Proper RAID controllers do all of the RAID on the hardware level, and only expose the RAID device to the system. Fakeraid (which is what you generally get when a motherboard supports "RAID") relies on the (windows) driver to do the RAID, so it may as well not exist.

Servers often have lower-level management interfaces that bypass the OS, like IPMI or several vendor-specific ones.

The easiest way to learn shit is to get a used server (not ancient, try about ~5 years old) and play around with it.
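
To give an idea of the management side, a box with IPMI lets you do things like this from another machine (the IP and credentials here are obviously placeholders):

# power state, sensor readings and serial-over-LAN console, all without touching the OS
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme sensor list
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme sol activate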


>>47146953
It doesn't support any RAID. It's just fakeraid.
>>
File: bludgeon.webm (2 MB, 344x510)
>>47146926
I never understand why people don't at least wrap the outside of a case in some tinfoil to prevent cosmic radiation?
>>
>>47147051
It's HP's new look and I fucking hate it. Looking to add new servers to my racks at work and they all have the cosmic ray shit. The fucking racks already have doors, now you're just putting shit in my way when I need to access the server.
>>
File: 1422519372907.jpg (44 KB, 615x409)
>>47147051
What if I bought it to get cosmic radiation?
>>
>>47147093
I thought you had to pay more to get the bezels though anon

i rarely see rack systems with bezels
>>
>>47147051
"hello dog, i am water-dog, is pleasure to meet you"
>>
>>47147133
Because that guy is a moron. You can just take it off if you don't want it.
>>
>>47147188
>>47147133
http://www8.hp.com/us/en/products/proliant-servers/index.html?facet=ProLiant-DL-Rack#%21view=grid&page=1&facet=ProLiant-DL-Rack

Behold the fancy foil on many of them. I can take it off, but the point is it does nothing except get in the way in a rack. Why bother including it at all? Plus you're obviously going to pay more just because it's one more thing they have to make and include with it.
>>
>>47147229
Because it looks "cool" in pictures and when you're touring dumbass managers through your server room.

Yes, I'm being serious.
>>
>>47147229
hurr durr

http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=config&ProductLineId=433&FamilyId=3901&BaseId=45860&oi=E9CED&BEID=19701&SBLID=&jumpid=in_r11662_virtualWKS_DL380z/psgpromo/atlas/heasmith_workstations

bezel isn't standard option

>photo on hp website displays the server you're going to order
>>
>>47147293
Fuck I hate HP's websites. All of them.
>>
>>47147316
i agree their website sucks ass but who the fuck orders through the website

either deal with your account manager or a reseller, no one pays full sticker price anyway
>>
>>47147362
Oh yeah, for sure, but you often have to use their site to look up shit.

I'm not responsible for buying the servers at my shop, but I doubt we specify whether we want doors/bezel shit or not. We just get it.
>>
>>47147051

B-but anon, the case is already metal...

Some particles can still get through.
>>
File: 1426873733647.jpg (115 KB, 900x900)
>>47147394
>mfw hp and ibm are going down the shitter anyway :^)
>>
>>47147478
I think their servers are pretty good, but expensive.
>>
>>47147508
IBM hardware has always been solid, but I think their x86 division will decline since it's been sold off to Lenovo.

I have no idea how they are going to survive with 'just' their POWER/i series/big iron businesses; they seem like dead ends, as I doubt new customers are going to throw money down the drain by going proprietary IBM (or oracle/sparc)

HP enterprise looks bleak as well, gross mismanagement of the company in the last 10 years

What's left? Dell? Fujitsu? Supermicro?
>>
>>47147414
multiple layers of tin foil with small gaps in between are much more effective than a single metal sheet
>>
>>47147603
Supermicro sounds nice tbh.
>>
File: vrtx_front_and_rear_6_x_4.jpg (67 KB, 940x627)
>>47147633
I have no idea how good supermicro support is though

then again, with the saved support costs you could buy another 2 clone boxes for backup

dell's vrtx looks pretty sexy
>>
>>47146397
>In comparison, a motherboard (excluding processor!) with the C226 chipset (needed for ECC RAM) already costs 180 euros, and one with a pre-installed quad core atom C2550 is 300.

A C206 board with an i3 is plenty sufficient & a lot cheaper. And yes, it does ECC.
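
If you go that route it's worth checking ECC is actually active, not just "supported" - a quick sketch on Linux:

# look for "Error Correction Type: Single-bit ECC" (rather than "None")
sudo dmidecode --type memory | grep -i "error correction"
# if the EDAC driver is loaded, corrected-error counters show up here
grep . /sys/devices/system/edac/mc/mc*/ce_count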
>>
>>47147693
Do you take that thing off any sick ramps?
>>
File: 2HHjt18.png (33 KB, 147x176)
>>47146443

Why not? It's cheap enough. Plus then I can run a few VMs on it too.

my home server has 64GB of RAM because I'm not a poorfag.
>>
>>47146940

a special school is where you belong.
>>
>>47147705
still have to buy case & power supply

not to mention that microserver is a pretty compact machine
>>
OP here, I was just browsing ebay for used servers and happened upon this thing:
http://www.ebay.de/itm/HP-SE316M1-FreeNAS-Storage-Server-XEON-QC-L5520-16-GB-RAM-iSCSI-CIFS-SAN-NAS-LAN-/221713191296?pt=LH_DefaultDomain_77&hash=item339f22d180
It looks pretty neat, the only thing I'm concerned about is the noise it probably makes. Does anyone have experience with these processors in 1U cases? Are they incredibly loud?
>>
>>47148984
It will be loud, considering it has what appears to be 6 high airflow 40mm fans. Maybe 12 if they're double stacked.
>>
>>47146443
ZFS
>>
>>47149167
That's too bad, I'm planning to have this set up in my mom's office since that's where the router sits. I just submitted a bid for a Dell T20 though, those seem even better than the HPs since they have a Xeon. If I could snag it for under 250 euros, that would be nice.
>>
>>47146397
I have one. It's not running ZFS though.

12x 4TB Hitachi (4 internal, 8 external), SmartArray P412, 2x 1TB mSATA SSD on the onboard SATA (using 25SAT22MSAT). + 4x 5TB USB3

Each set of 4 drives is in a RAID 5. Storage Spaces tiered storage is fucking awesome.
>>
>>47146520
What is... NIC teaming?
>>
>>47149258
You can put a Xeon in the G8. Mine came stock with a Xeon 1220L V3. Otherwise, swap it out for something more potent.
>>
>>47148984
I had a 1U in my basement for a few months. When they turn on they sound like a jet starting to take off. After a few minutes, once it's booted up, it's borderline acceptable. I had to turn the TV up loud to compensate and definitely wouldn't want it in the same room if I'm having a normal conversation with people. I could tell it was on from the floor above.

Do what I did and just buy server parts and use a regular micro ATX case. Much quieter and more compact as well as having more storage capability than a 1U. I have a 1TB WD RED boot drive and 2x 4TB Hitachi NAS data drives plus 2x 4TB external desktop drives that mirror the internal ones as a backup.

In a month or so I'm going to buy a used RAID card on ebay and buy two or three more 4TB NAS drives and make a RAID 6 for a total of 12TB of usable storage and only keep the most important stuff backed up on the two externals.

>>47149532
I'm considering using RAID 5 instead. Have any bad history with RAID 5 yet?

I've also thought about getting crashplan. Do you think there's a big enough possibility even if I use my own private key that they'll know about my loli and other hentai and report me?
>>
>>47149589
No, but these are just storage pods. The magic of storage spaces / zfs comes from jbod. Given that I have a hardware RAID controller, I'm offloading the RAID5 parity calc to that. Rebuild priority is set to very high, muh tiered storage will drop priority on that pod if it has to rebuild.
>>
>>47146397
AMD AM3+ socket with an asus motherboard.
Bonus: you can virtualize shit on it and it's cheaper
>>
>>47149666
You can virtualize on the G8...

Also, it's not cheaper for the same caliber hardware. Server != Consumer
>>
>>47149690
Please do tell me about these higher-quality signals.
>>
>>47149710
Server grade NIC (TOE?)
Remote management
Swappable drives
ECC memory support
>>
>>47149732
None of these are relevant in home usage.
AM3+ processors support ECC on the right motherboard. All Asus boards support it. You can have a nice octacore for cheap.
>>
>>47149650
So you have each RAID 5 (each set of 4 disks) as jbod? I'm not too familiar with storage spaces/ZFS. Also I thought RAID-Z couldn't be offloaded to a RAID card.
>>
>>47149547
Something that doesn't increase the throughput of a single TCP/IP connection in any widespread implementation.
>>
So what's wrong with RAID5 + offsite backup?
ZFS is for corporate deployments IMO, unsuited for home use because of cost and complexity.
Not to mention Btrfs will replace it very soon. If you thought Linux was obscure, you haven't met *BSD
>>
>>47149748
Maybe not in YOUR home. All my production gear is at least ECC...
>>
>>47149787
I just said that it supports ECC
Are you OK anon? Need medical attention?
>>
>>47149755
Yes, the pods are presented to Storage Spaces as JBOD. Basically 3 12TB spindles. Storage Spaces doesn't do any type of RAID with them.

>>47149781
Wait, what?

>>47149806
> None of these are relevant in home usage.
>>
>>47149845
99% of NIC bonding implementations (essentially everything except Linux's NIC bonding set to balance-rr mode) will keep track of various aspects of the packets going across the bonded link so that any two packets going from A to B will always be sent over the same link.

balance-rr doesn't do that, but it shows why the flow tracking is done to begin with. If you allow packets to be split up, they can arrive out of order, which best case forces the receiving system to do more work to reorder them, and worst case fucks things up. I've heard that 4xGbE with balance-rr only gets about 2.3 Gb/s TCP/IP throughput.
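
For reference, the balance-rr setup being talked about looks roughly like this on Linux with iproute2 (interface names and the address are placeholders):

# create a round-robin bond and enslave two NICs (they must be down first)
ip link add bond0 type bond mode balance-rr
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0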
>>
>>47149878
Depends on the usage scenario. I'm running Server 2012 R2 and throughput caps out at about 1.9Gb/s.

You sure you're not tanking your storage pool?
>>
>>47149917
1.9Gb/s from where to where?
>>
>>47149925
Server to server.
>>
>>47149980
What server to what server?
>>
>>47149988
MicroServer G8s. Last test config for both servers:

Xeon 1220L v2
16GB ECC
2x 1TB mSATA SSD (RAID1)
4x Hitachi 2TB (JBOD)
Server 2012 R2 Standard

Switch: HP PS1810-8G - Jumbo frames enabled

I pushed a sync of my MSDN downloads from one to the other. About 600GB. Yes, I'm aware that I'm still in the SSD storage tier.
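
If you want to take storage out of the equation entirely, a raw network test between the two boxes with iperf3 (it has Windows builds too; the hostname is a placeholder) is something like:

# on the receiving server
iperf3 -s
# on the sending server: 4 parallel TCP streams for 30 seconds
iperf3 -c nas01 -P 4 -t 30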
>>
>>47150077
Are you sure it's not doing that at a higher level? The protocol itself might be splitting it up into one connection for each NIC.
>>
Because you can get an 8GB DL360 G5 with 2x dual-core for $110, and having them load another 8GB will be like $20? (they have 2GB sticks for $2.65 Buy It Now)

http://www.ebay.com/itm/HP-ProLiant-DL360-G5-Server-Dual-Core-5160-2x3-00GHz-8GB-2x36GB-10K-1PS-P200i-/191388545519?&_trksid=p2056016.m2516.l5255

A better deal is to get a 2x quad-core for $140. 12MB cache goodness, and you can run it 1P for lower power draw and always have the ability to go back to 2P when you need some crunching power.
>>
>>47149845
You use ZFS, right? Isn't it recommended to put your RAID card in jbod so RAID-Z works correctly? I'm still not getting your first setup.
>>
I really like that case.
Can I get a case like that for a desktop?
>>
>>47149589
The FBI doesn't care about your cartoon porn.

>>47148984
Seconded. 1U servers are VERY loud.
>>
>>47150378
>Seconded. 1U servers are VERY loud.

No reason to keep them stock unless you're an uber nerd who actually has a rack of 'em, though. Take the top off, rip out the fans, and jury rig up some 80 or 120mm's. You can make ducts with cardboard and duct tape. (Hell, my main rig is being funneled cool air down a National Geographic cover.)
>>
>>47146397
If you only need four drive bays I don't see why not. I'm looking for an 8 bay machine for a ZFS RAID 6 setup.
>>
>>47150217
Nope, I run Windows.