/hsg/ - Home Server General

You are currently reading a thread in /g/ - Technology

Because fuck you edition!

/hsg/

What would you use a home server for? /g/ answer - Fuck you! Simple answer - For whatever you want. From media to development to virtualization, options abound.

Power - Any DDR2-based server is going to be power hungry. Any multi-socket Intel system from that era is FBDIMM-based; anything else is ECC. With DDR3-based units coming off second lease, anything DDR2 should be avoided.

Plex - 1080p streaming at 10Mbps requires a CPUMark score of ~2000 per stream. This is especially true of first-generation i3/i5/i7 and DDR3 Xeons. The more recent the CPU, the more slack there is. For some reason, Plex doesn't seem to like low-power options (Xeon 1220L, for example).
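
A rough sketch of that rule of thumb as arithmetic (the ~2000-per-stream figure is the one quoted above; the function is illustrative, not anything Plex ships):

```python
# Rough Plex transcode budgeting based on the ~2000 CPUMark/stream
# rule of thumb above. Illustrative only; real headroom varies by
# codec and CPU generation.

def max_1080p_streams(cpu_mark_total: int, per_stream: int = 2000) -> int:
    """Estimate simultaneous 1080p/10Mbps transcodes a CPU can sustain."""
    return cpu_mark_total // per_stream

print(max_1080p_streams(9000))  # -> 4
```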

Virtualization - ESXi, KVM, Hyper-V, etc. ESXi is generally used by Linux heavy shops that aren't cloud centered. KVM is usually used in OpenStack. Hyper-V is for mostly Microsoft centric shops. These are all free, so use what you like.

Storage - Both ZFS and Storage Spaces pool. If you're going to use these options, do NOT configure the drives behind a hardware RAID controller. Many options are available in general, such as FreeNAS, NAS4Free, OpenMediaVault, Windows Storage Server, Linux / Unix / BSD, etc. Some are free, some are not.

What should I get? A good starting point, if you don't want to build your own system, is an HP ProLiant MicroServer Gen8, 8GB DDR3 ECC (unbuffered, not Registered/RDIMM), 4x 3.5" drives, and a 16GB micro SD card. Install OpenMediaVault on the SD card, and enjoy ZFS, Plex, and whatever else you want to try.

Where can I get things? eBay is a good place to start. Used / refurbished gear is fine, provided that the seller is selling a large quantity of them. This goes especially for drives. The only real drive to avoid is the Seagate ES.2 1TB. These have faulty firmware and fail prematurely (ask EMC).
>>
File: dell-t620-2.jpg (30 KB, 620x465)
(OP continues)

I'm in the process of re-configuring my T620 as I have other hardware pending.
> Dell PowerEdge T620
> 2x Xeon 2660
> 192GB RAM
> 12x 2TB Hitachi Ultrastar
> 2x 128GB SSD (OS)
> Drives are presented as JBOD, and handled through storage spaces.
> VM's - 4x DC, 2x NS, 2x EX15CAS, 3x EX15MBX, 1x ADFS, 1x DirSync, 1x SharePoint, 1x TFS, 1x SQL (I know), 1x SCCM/SCOM, 1x media (Plex), and about 10 mostly off Linux VM's for fuckery and testing.
>>
>>54732023
>dirsync
enjoy your meme65 botnet
also why the fuck would you need four dc's and two additional nameservers? is it fair to assume that you have absolutely no idea what you're doing?
>>
>>54732176
You have no idea how AD, Exchange, or SCCM/SCOM work, do you?
>>
>>54732233
i install them for a living
>>
>>54732251
Then you should know what the difference between a dc and dns server is...
>>
> Huge x86_64 boxes
I got sick of owning those.

Since most of my servers run on Linux, I migrated them to Raspberry Pis. The room's a lot quieter now. The electricity bill goes down, especially during summer. The UPS lasts a lot longer on power outages. Most importantly, I don't have to take them offline to clean dust every few months.

I miss SATA transfer speeds and Gigabit Ethernet, but I can live without them.
>>
Would it be completely retarded to just take an old gaymen PC I'm not using anymore, remove the GPU, and set it up as my home server? i7 and DDR3.
>>
>>54732321
Which rPIs are you using?

And one server is better than the half rack I was running. It's quiet (surprisingly so) and power difference is about $8/month compared to nothing at all.
>>
>>54732321
if pi's were a viable alternative, you aren't doing anything worthwhile in the first place to justify using the extra power of a normal machine
>>
>>54732338
Not really. What do you want to use it for?
>>
>>54732343
A mix of pi2 (1GB RAM model) and pi3.

>>54732345
I don't do things that require two Intel Xeons (Exchange server?) at home, and the only user is me. It's only ownCloud, email, torrent & file server, tied to OpenLDAP. A few Raspberry Pis have a lot more power than what I need, and it's a waste to run it on a big Xeon box.

But what really made me switch was having to periodically clean dust.
>>
>>54732348
NAS and some super small scale web hosting.
>>
>>54732446
Should be fine.
>>
File: 1463498223936.jpg (29 KB, 504x504)
Building a NAS for family + own cloud and maybe an attempt to host my own webpage.

>Pentium G3250
>mITX H97N motherboard
>8GB Kingston value RAM, non-ECC
>A corsair PSU I already have
>4x2TB drives in raid-z1

I'm trying to keep it under £500 overall, but here in the UK the availability of parts is poor and everything is so fucking expensive compared to the US.

Planning on running FreeNAS, Plex and ownCloud for now.

Any comments?
Will the Pentium be suitable for running jails in the future?
>>
>>54732582
I'd splurge for ECC, because ZFS. OpenMediaVault should get you there.

Look at the OP, and see what MicroServer Gen8 prices are locally.

I picked up 12x 2TB drives for $300 on eBay. Pulls or refurb lots are fine, but grab one or two extra, just to have spares.
>>
I've got three 1U servers and one 2U.
I'm running ProxmoxVE on all of them in a cluster.
I'm currently populating my 2U with 2TB drives for a Ceph cluster (then I'll be adding two more 2U nodes)

The 1Us are as such:
dual-core 64bit P4 class Xeon with 4GB DDR2, just used for file service via owncloud, 4x 2TB in RAID5 (supermicro superserver 5014)
dual-socket quad-core xeon x5460, 32GB FBDDR2, 4x 73GB Ultra320 in RAID5 (supermicro superserver 6016)
dual-socket quad-core xeon x5430, 24GB FBDDR2, 2x 73GB SAS in RAID1 (IBM netserver or something)

2U:
dual-socket quad-core Xeon x5560 (16 threads), 48GB FBDDR2, 2x 500GB in RAID1 for boot volume, 4x 2TB in JBOD for Ceph


I work with ESXi at work extensively, from leased systems to our internal ops, and it's frustrating as hell unless you're going to shell out money.
Its only redeemable feature is virtual networking.
We've got HyperV in a few places, but meh, it's windows.

Right now we're evaluating different VM clustering solutions to migrate to.
It'll probably be Proxmox or Hyper-V, but I plan on looking at oVirt as well.
>>
>>54732663
Refurb drives? Are you sure?

Also, the premium for hardware to support ECC is way out of my budget for the form factor I want.

I mean, reading the stats on bit flipping, my HDDs will fail way before my files begin getting corrupted. Even more so if I'm buying refurb drives.

It's a home server, I'll be backing up mission critical files and photos to a separate HDD anyway.

ProLiant Gen8s are £1000 or thereabouts.
Found a barebones one for £250... with £385 shipping, but no CPU, no RAM, no RAID controllers etc. Fucking irritating.
>>
>>54732746
One of the big advantages to Hyper-V is AVMA. Only works with Datacenter though. I run 2012 R2 Datacenter on my T620, and use AVMA for all the Windows VMs. Otherwise, the bare metal Hyper-V has all the same features.
>>
>>54732888
Not him but: refurbs are fine, but you should buy two or so new drives to be used as cold spares as soon as you can, and begin swapping them out slowly for new or enterprise-grade drives.
>>
>>54732746
>Its only redeemable feature is virtual networking
Have you tried openvswitch? You don't get all the clicky interface stuff but it's very powerful.
>>
>>54732888
http://www.ebay.com/itm/HP-ProLiant-MicroServer-Gen8-Celeron-Dual-Core-4GB-/182132159066?hash=item2a67ec325a:g:pEkAAOSwAuNW30qr

And the b120i is built in. You can also drop in any e3 v2 Xeon with the stock heat sink and be fine.
>>
>>54732910
Most of our guest OS' are Linux and FreeBSD, so that's less of an issue.

If we went with HyperV, I could definitely live with it.
The only downside is that it awkwardly requires a full Linux VM for containers, which is technically understandable.

It would be pretty hilarious to see Microsoft having to implement all the Linux calls on top of NT to run Linux software natively.

>>54732967
Thanks for the heads up on that, the Proxmox networking functionality is kind of ass at the moment.
>>
File: DSCN070xxx8.jpg (1 MB, 2000x2465)
same olde - still have to put youmu back in the rack, she has a much bigger ssd and a single hdd now - need to find a home for her 4 old drives and raid card sometime.
>>
How much does it cost to set up such a server and maintain it? I know it's different with each server, but on average?
>>
>>54732968
Herp I was looking at the full rack mount gen8. I'm literally retarded.

Thank you. Looks to be low power too, which is a bonus.

Three questions though.
1. I'd like to play with FreeNAS; I'm assuming this will install pretty easily, right?

2. Does the hardware raid interfere with zfs?

3. In terms of ZFS, given there are only 4 SATA ports, I'll be stuck with RAID-Z1 unless I want to sacrifice 50% of my storage. Is there scope to expand to 6 ports in the future?
>>
File: 1462502843136.png (357 KB, 554x597)
>>54732023
>mfw going to use a core2duo based old PC as a backup server
>>
File: Supermicro-X10SDV-TLN4F.jpg (101 KB, 625x544)
Based Xeon-D reporting in.

>all the performance you'll ever want out of a home server
>cheap to run
>quiet
>>
>>54733189
I own a gen8 microserver.
1. I never used freenas on this thing, but it should work without any issue.
2. The on-board RAID controller is pretty much glorified fake RAID. It can be set to AHCI mode and should not interfere with ZFS.
3. There is no way to fit another two 3.5" HDDs in the chassis.
The only way to add more HDDs is to insert a SAS/FC PCI-E controller and use an external enclosure.

Take everything I say with a grain of salt; I use CentOS 7 and KVM to host some virtual machines on my gen8.
Storage is composed of an mdadm RAID10 of 4x 3TB WD Red (filesystem is XFS) and 1x WD Scorpio Blue as boot drive (it is connected to a fifth SATA port that is meant for the optional optical drive).
In order to boot from the fifth SATA port, the onboard SATA controller must be set to RAID mode (with no RAID volumes configured).
>>
>>54734562
Could be worse, I've repurposed a first-gen Core i7 laptop as an ESXi box with a VPN running on it. As long as it works reliably, you can use anything you like.

I feel the poor laptop deserves a better role, but this suffices and contributes to education quite well.

I thought I could buy a domain and host something off it, like a portfolio or a meme, but it's like naming a child at birth; you need something solid or you'll regret it for years and/or a lifetime to come.

/rant off.
>>
>>54735118
I boot my Gen8 MicroServer off the internal SD card, so AHCI mode works fine, with an SSD where the optical drive would go, using a Berg floppy-to-SATA power adapter.

I run Linux - Ubuntu 16.04 LTS Server - and btrfs myself, rather than FreeNAS and ZFS. Ubuntu 16.04 LTS and ZFS would probably also be viable.
>>
File: R9 Nano.png (649 KB, 1631x1571)
>>54735221
>Ubuntu server
>btrfs

heh
>>
Nas does torrent and windows backups
Lappy does FTP and router logs
Router does QoS with ddwrt
Modem does adsl2 1mb/s | 0.1mb/s
>>
>>54735394
When is the NBN getting to you?
>>
>>54735656
I think they are installing a node (FTTN) down the road, so no FTTP 100/40 goodness :(
>>
So, I have a case with plenty of drive mounting points, and I have a bunch of 500GB-1TB drives. Any suggestions for a reasonably priced board with a bunch of SATA ports to turn the drives into a simple file server?

Even just an inexpensive board I could add some PCI SATA controllers to would work.
>>
>>54732023
How expensive / how much effort would it take to make a home media/file server that I could use from anywhere with an internet connection? For streaming 1080p and transferring files with decent speed, and with effectively RAID 1?
>>
>>54735970

If you don't necessarily want to build, you could also buy a pre-built NAS and some drives. I really love my Synology NAS and it has built-in software for a lot of useful things.
>>
I have a really nice server rack with some fairly old servers. They're loud though and I don't currently have a use for them.

I still run all sorts of services on my network, but I just run them out of my desktop computer.

I was thinking of setting up a big NAS but I'm currently too poor to afford all that shit (mainly hard drives).

Is this thread a regular thing? I'm actually a /sci/fag just passing through.
>>
>>54732440
Virtualization is the main reason I use what I do. I have about 20 virtual servers on a dual hex core xeon box.
>>
>>54732582
If you're trying to keep it cheap, don't use ZFS. Bit rot will not be a problem within the life of your server; you're not hosting petabytes of archive storage.
>>
>>54732746
Do not use Proxmox. It's not stable enough for production use. Hyper-V is not practical for anything above a small deployment.
I recommend Nutanix Acropolis. It's free and built on KVM like Proxmox.
>>
File: 20160505190339_big.png (590 KB, 1000x1000)
>>54734577
I almost went with one of the Supermicro ITX boards until Gigabyte announced the MB10-DS4 with dual SFP+ ports, and I've been waiting like an idiot ever since.

> but seriously, 10GBASE-T receivers use like 5W each
>>
>>54733089
You mean electricity costs? Take the wattage of the server, convert to kW, multiply by the average number of hours in a month and then by your price per kWh.
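
That arithmetic as a quick sketch (the 730 hours/month is 8760/12; the $0.12/kWh rate is just an illustrative assumption):

```python
# Back-of-envelope electricity cost for a box running 24/7.
# 8760 hours per year / 12 months ~= 730 hours in an average month.

def monthly_cost(watts: float, price_per_kwh: float, hours: float = 730.0) -> float:
    """Average monthly electricity cost of a machine drawing `watts` 24/7."""
    return watts / 1000.0 * hours * price_per_kwh

# e.g. a 100W server at $0.12/kWh:
print(round(monthly_cost(100, 0.12), 2))  # -> 8.76
```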
>>
>>54736126
i don't mind le adult lego, but i'm pretty new to servers, will any computer with a decent processor and memory do?
>>
>>54737226
you can run a server on pretty much anything hardware-wise, the benefit to prebuilt NAS systems is the ongoing software support and streamlined management.

unless you're a student or a dedicated hobbyist explicitly looking to fuck around with this stuff, nobody really wants to spend hours dealing with keeping their network media storage working properly.
>>
>>54736276
Thanks senpai. But I followed the advice of the other guy.

Going for an HP ProLiant Gen8.

I realised I have an i5-2500 lying around I got for free, and the server has the same socket.

Basically I'm getting a cheaper server with more features. Which is brilliant.
>>
How do people feel about Supermicro mobos? I got one off Craigslist (X8DTH-i) with dual Xeon 5650s and 48 gigs of RAM, and I'm mostly interested in how power hungry it will be. I originally got it for numerical nonsense (gonna learn OpenMP for school, and have a pet project involving scattering on a surface for a friend's company that would benefit from parallelization), and as such I can't imagine a decent estimate for its power consumption being more than the power rating of each processor plus 300W for fans and I/O. In terms of file management I would only really care about copying text or static image output from it.
>>
>>54732034
>storage spaces
Objectively better than zfs.

Deal with it linux fags.

On top of that, if you want to share a GPU, Hyper-V's RemoteFX does not require a $1.5k card (and that's second-hand pricing) like Xen and ESXi do.

PCI passthrough is great and works with all the various options, but it only allows 1-to-1 sharing of a GPU.
>>
>>54738072
I'm not sure you're going to be able to use the i5. It's not officially supported on most server boards, but you may get lucky.

Used Xeon E3-1230's are about $120, E3-1240v2's are about $190. Again, you should be fine if you're buying from someone selling multiples or very high feedback.
>>
This looks like the right thread for me.

I wanted to build a freenas box, with stuff I still have around.
got:
4x4GB DDR3 1600MHz CL8 corsair vengeance
ASRock Z77 Pro3
i5-3xxx
900W Tagan Piperock
Fractal XL first version
Samsung 840 EVO 120GB
8TB Seagate archive
2x Intel NICs

Now what troubles me is that basically everything and everyone on FreeNAS says go for ECC, but I would need server hardware for that, and used server equipment is not as cheap here as elsewhere.
>>
>>54732663
How does OMV compare to FreeNAS and NAS4Free?
>>
>>54738758
I thought it was simpler for some of the more basic stuff. FreeNAS/NAS4free work fine, especially at scale, but Plex support is hit and miss, and ultimately that's why I chose OMV.
>>
>>54738718
ECC is suggested, but not required. If you're going to use ZFS and scale beyond maybe two drives, I'd advise it.
>>
File: IMG_0020 (Large).jpg (466 KB, 1080x1620)
Top to Bottom inside rack:

-2x Intel NUC
CPU: Core i3 5010U
RAM: 16GB
Function: set up as ESX HA cluster

-Router, Ubiquiti EdgeMAX Pro

-Switch 1, Quanta LB4M (48port 1Gbps, 2port 10Gbps)

-Switch 2, Arista 7124SX (24port 10Gbps)

-Server 1, Supermicro 1U
CPU: Core i3
RAM: 16GB ECC
SSD: 4x 1TB 850 Pro RAID 10
NIC: 2x10Gbit
Function: Datastore for virtualization

-Server 2, Supermicro 2U (Twin server, 2 nodes)
-Node 1:
CPU: 2x Intel XEON E5-2650v2 8 Core HT
RAM: 128GB ECC
Storage: ESX on USB
NIC: 4x10Gbit + 6x1Gbit
Function: ESXi Virtualization server

-Node 2:
CPU: 2x Intel Xeon E5-2609v2
RAM: 32GB ECC
SSD: 2x 512GB 850 Pro RAID 1
RAID CARD: LSI MegaRAID SAS 9286 8e SGL 8 + LSIiBBU09
NIC: 2x10Gbit + 2x1Gbit
Function: Storage server, RAID card is connected to the 4U JBOD underneath it.

-JBOD, Supermicro 4U
CASE: SC847 E16-RJBOD1
HDD: 37x 4TB RAID 60

-Old norco case storage machine
CPU: Intel i7 920
RAM: 24GB
SSD: 1x 128GB
HDD: currently 8x 2TB
NIC: 2x10Gbit
Function: Test server for storage stuff

-DL160G6
CPU: 2x Intel Xeon L5520
RAM: 32GB
HDD: 4x 1TB RAID 10
Function: Test server.

-DL140G3
CPU: 2x Intel Xeon E5345
RAM: 16GB
HDD: 1x 160GB
Function: Test server.

-Router, Coldspare Monowall (supermicro C2D)

-Router, Coldspare pfSense (supermicro C2D)
>>
>>54732023
>What would you use a home server for? /g/ answer - Fuck you!. Simple answer - storing illicit images/video and illegally downloaded media

Let's not kid ourselves, /g/.
>>
>>54738902
>>
File: best server.jpg (28 KB, 544x214)
>>54732023
>Because fuck you edition!
>>
>>54738826
Well for now it is one, but I would sure like to go further in the future.

>>54738805
Is there any performance difference between them?
Does one of them allow extending a RAID later on without having to back up all data beforehand, or losing it when extending (I heard a pre-built Synology NAS is capable of that)?

>>54738902
Oh not you again.
You make me jeal.
>>
>>54738691
Well it supports the i3-3250, so I see no reason the same socket couldn't take the i5-2500. Although, I suppose it doesn't support ECC, so the BIOS may flip out. I'll just try it. Worst case I stick with the Pentium.

Even without the i5, the feature set (ECC etc) and form factor is far better than anything I could build myself for the same budget.

Finding an ECC supporting mITX board + processor + case + psu for under £170 is basically impossible.
>>
>>54738902
But why
>>
>>54739086
I would not bother installing the i5-2500 unless you really need the added raw CPU power.
With a Celeron G1610T I run 8 VMs without any problem.
Most likely HDDs will be the bottleneck, not the CPU.
>>
>>54739158
Funnily enough, running a VM is the reason.

So you're saying the performance is fine with the Celeron?

Have you tried any Freenas jails?
>>
>>54736324
Those gigabyte boards are basically unobtanium you're going to be waiting forever
>>
>>54739114
OP summed it up perfectly:
>What would you use a home server for? /g/ answer - Fuck you!. Simple answer - For whatever you want. From media to development to virtualization, options abound.
>>
>>54739184
No, I run CentOS (host and guests) + qemu/KVM.
[root@master ~]# cat /proc/loadavg 
0.00 0.01 0.05 1/333 20150
>>
Hey sys, I like the cute names you give your systems. Question: What do you use the rbpi's for, and do they work well?
>>
>>54741165
>>54733011
oops
>>
So for Plex, would something like the logicalincrements minimum be good for pretty much just myself? I'm completely new to servers; is it easy to stream to other networks? Like if I'm on someone else's wi-fi.
>>
>>54735970
Fuck, virtually any pc can do that.
>>
File: IMG_0042 (Large).jpg (484 KB, 1620x1080)
>>54738902
Those who see me post more often may know I have been procrastinating buying my UPSes and installing them.

Let me give you an update on the UPS situation: FUCK FUCK FUCK FUCK FUCK FUCK!

Literally 1 hour ago I had a power outage. All hardware is fine, but I have just bought 2x 1.5kVA and 1x 750VA APC UPSes and 2 smartcards.

The dual-node and JBOD will be on one 1.5kVA, and the SAN and NUCs on the other, leaving the 750 for the networking.

Also bought 2 APC smartcards to control the two 1.5s; the dual node will be configured to do a soft shutdown the moment power loss is detected, while the NUCs and SAN will be kept online until the UPS is at low battery.

The 750 is enough to keep the network up during shutdown; also got a 375VA for my modem.

Hoping to have it installed this weekend.
>>
>>54733189
Sorry, must have missed this.

1 - It should, yes.
2 - Yes. ZFS / Storage Spaces handle RAID. A hardware RAID controller complicates things. The B120i can be set to AHCI (simple SATA controller) so that's fine. Otherwise, you can configure each drive as a single RAID 0. Not ideal, but it works.
3 - ZFS pools. You don't have to take all of the space and dedicate it to Z1. Some of the pools on my T620 are RAID 10, some are RAID 5, some are RAID 1. It depends on what you need: capacity vs performance.
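
To put rough numbers on that capacity trade-off with four drives (simplified; this ignores ZFS metadata and slop-space overhead):

```python
# Approximate usable capacity for common pool layouts, ignoring
# filesystem overhead. `size_tb` is the per-drive size.

def usable_tb(n_drives: int, size_tb: float, layout: str) -> float:
    if layout == "raidz1":            # one drive's worth of parity
        return (n_drives - 1) * size_tb
    if layout == "raidz2":            # two drives' worth of parity
        return (n_drives - 2) * size_tb
    if layout == "mirrors":           # striped two-way mirrors
        return (n_drives // 2) * size_tb
    raise ValueError(f"unknown layout: {layout}")

print(usable_tb(4, 2, "raidz1"))   # -> 6
print(usable_tb(4, 2, "mirrors"))  # -> 4
```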

>>54739184
If you're not doing Plex transcoding, yeah, you should be fine.

The problem is transcoding. Plex is CPU bound (I really hope A) they change this or B) I figure out how to rebuild ffmpeg with GPU support and get plex to like it)

This is why I have a pair of Xeon 2660's. The combined CPUMark score for the pair is ~17,000. That gives me enough to run the 5 or 6 1080p 10Mbit streams that I normally run through Plex, and run the rest of my farm as well.

Otherwise, I'd be fine with something old, like a DL580 G5 (except for the noise, of course).
>>
>>54732023
>Plex - 1080p streaming at 10MBPS requires a CPUMark score of ~2000 per stream. This is especially true with first generation i3/5/7 / DDR3 Xeons. The more recent the CPU, the more slack there is in this. For some reason, Plex doesn't seem to like low power options (Xeon 1220L, for example).

Are you talking about transcoding? I run an old amd e350 that streams my animus with mediatomb just fine? Unless you *are* talking about transcode, then it would make a lot more sense.
>>
>>54735118
Pretty much my setup, except I'm running Debian Jessie and ZFS with raidz1 and a 64gb boot SSD.
>>
Ever use a raspberry pi server/cloud?
>>
>>54743933
Yes, transcoding. I was tired when I wrote that, my bad.
>>
How bad of an idea is this: buying a 2+ socket motherboard, really expensive $1k+ Xeons, 128GB+ RAM, and using it as: router, DNS, webserver, media server, game server, anything else.
>>
How in the world do you get Plex to tag your shit right? It's absolutely inflexible.
>>
>>54739359
>Those gigabyte boards are basically unobtanium you're going to be waiting forever

are they even in nominal mass production yet?
and why the fuck wouldn't Supermicro make one with SFP+ instead of all 10GBASE-T?

> tfw I just want a Xeon-D ITX with 2x SFP+ and at least one U.2 port, and nobody wants me to give them my money
>>
http://pcpartpicker.com/p/7VCmPs
Gotta get up early tomorrow to grab case and PSU at Purolator's middle of nowhere center.
>>
File: esxi.png (82 KB, 941x759)
meh
>>
I want to have 3 Windows desktops and 1 Server 2012 server, 8 Ubuntu VMs (admin, MariaDB, Plex, Plex, Usenet/torrent box, Mp3 streamer, owncloud, misc)

I have the following hardware

ASRock 970 Extreme3 R2.0 AM3+ MB
AMD FX-8320E Vishera 8-Core 3.2GHz (4.0GHz Turbo) 95W
Chipset AMD 970 + AMD SB950 Memory 4 x DDR3 DIMM slots - Supports DDR3 2100+(OC)/1866*(OC)/1800*(OC)/1600*(OC)/1333/1066/800
M1 - 16GB DDR3 1333
M2 - 16GB DDR3 1333
PCI-E 2.0 x16 slot 1 (PCIE2 @ x16 mode)
PCI-E 2.0 x16 slot 2 (PCIE4 @ x4 mode)
PCI-E x1 Slot 1
PCI-E x1 Slot 2
PCI slot 1
PCI slot 2

ASUS M5A78L-M/USB3 AM3+ MB
AMD FX-6300 Vishera 6-Core 3.5GHz (4.1GHz Turbo) 95W
Chipset AMD 760G + SB710 Memory 4×240pin DDR3 2000(O.C.)/1866(O.C.)/1800(O.C.)/1600(O.C.)/1333/1066 32GB Dual Channel
M1 - 8GB DDR3 1333
M2 - 8GB DDR3 1333
PCI E 2.0 x16
PCI E x1
PCI 1 -
PCI 2 -

5 x Seagate NAS HDD ST3000VN000 3TB 64MB Cache SATA
2 x Sandisk SSD+ 120GB

assorted SATA drives from 1TB to 2TB


Suggestions?
Hyper-V on one
Ubuntu the Linux containers on the other?
>>
>>54745521
If you're going to be using Plex to transcode, you may have to grab something Intel. I don't think the AMD stuff will do it real well, but I don't have a whole lot of experience with it, either.

No matter how you slice your list, you're still 1 CPU / board short of your desktop requirements.

Unless you want the desktops to be VM's as well...
>>
>>54745563
>If you're going to be using Plex to transcode, you may have to grab something Intel. I don't think the AMD stuff will do it real well, but I don't have a whole lot of experience with it, either.

My current rig is ESXi on an AMD Phenom and it handles both Plex servers just fine

>No matter how you slice your list, you're still 1 CPU / board short of your desktop requirements.
>Unless you want the desktops to be VM's as well...

Yes. All VMs
There is a 32bit Win7 desktop in a VM on the above Phenom rig...
>>
>>54745637
Alright, fair enough.

Windows on the 8320 (2x SSD + 5x 3TB w/ storage spaces tiering)

Rest on FX-6300.
>>
Ignore the prices, because they will change in the next 3 months, but I'm going to be ordering 3 or 5 of these once Server 2016 is released.

Best part? Work is paying for them.

http://pcpartpicker.com/p/VWMV3F
>>
>>54732746
what software is that?
>>
>>54732582
use sandstorm if you want your own cloud
>>
>>54732746
you would be fucking mental to use proxmox in a business environment.

esxi or hyper-v, full fucking stop.
>>
>>54738942
Called out. 2tru
>>
I'm taking the plunge, gonna switch from HW raid to ZFSonLinux. Already got a 9211 on the way.
Any tips? Right now I have 1 SSD for boot plus 2 RAID5 arrays of 3x1TB. I figure doing 3 2-drive stripes is the best way?
>>
>>54745998
>3 2-drive stripes
Meant mirrors
Also if I want to upgrade the capacity of the pool later, do I have to upgrade all 6 drives, or just in pairs?
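
For what it's worth, with two-way mirror vdevs the pool's capacity is the sum of each vdev's smallest drive, so it can grow one pair at a time (assuming ZFS `autoexpand=on`). A sketch of that arithmetic:

```python
# Usable capacity of a ZFS pool built from two-way mirror vdevs:
# each vdev contributes the size of its smallest member, so replacing
# both drives in a single vdev grows the pool (with autoexpand=on).

def pool_capacity_tb(mirror_pairs):
    return sum(min(pair) for pair in mirror_pairs)

# Three 2-drive mirrors of 1TB drives:
print(pool_capacity_tb([(1, 1), (1, 1), (1, 1)]))  # -> 3
# Upgrade just one pair to 4TB drives:
print(pool_capacity_tb([(4, 4), (1, 1), (1, 1)]))  # -> 6
```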
>>
Currently trying to remember how to configure RAID on my C6100, which I haven't had on in a while. But first I have to update yum, and fuuuuck, that's 379 packages that need updates.
>>
I've got old shit from dumpster diving. I have a WatchGuard XTM 505 on which I'm installing a hardened musl-based Gentoo, as I'm using it as a gateway to the public web for VPN and firewall.

I have a Dell PowerEdge T300 with a SAS drive, currently running hardened Gentoo, that I'm using as an ownCloud server, LAN DHCP, routing, git, DNS, and something else I probably forgot.

A simple TRENDnet 24-port gig switch, a Netgear R7000, and a tri-band 5GHz Linksys whose model number I can't remember.

And soon, a cluster of 20 Acer E1-531 motherboards.

All stuff that was gonna be thrown away lol.
>>
>>54738902
How much CP is in there, anon?

Do we need to bust your door down?
>>
File: networkguts.jpg (4 MB, 7500x2423)
The original purpose of this was simply to get that annoying fucking HDD noise out of my room. And since I bought Lian Li, I can actually leave it in the living room behind all the networking stuff.

So I took my former gaming rig and turned it into my own little file server running unRAID. It will eventually do Plex and OpenVPN as well.

i5-750, 16 gigs of ddr3, and a bunch of random hard drives from random places and of random age.

Took the old seasonic platinum out of this and reused it, so I bought a horrible refurb unit from an NCIX warehouse for $20 as a temporary solution.

Short term:

A proper power supply, seasonic again most likely.
A proper UPS

Long term:

Replace the guts with a 10nm ECC platform
9 more 3TB Drives to complete my 30TB home file server solution.
>>
File: homeservercablegore.jpg (1 MB, 3264x2448)
>>54746288
the guts
>>
File: 1451419072435.jpg (1 MB, 1633x2585)
Looking for a cheap case or even full server with a shit ton of 3.5" slots, minimum of 10, preferably with hotswap trays. Which brands should I look into? The local used market is pretty fucking empty so I might have to buy new.
Also I heard that some SATA expansion cards can fuck up SMART readings on Ganoo/Lunix, is that true? Which brands or chipsets are perfectly supported or which ones should I avoid? (I don't care about HW RAID)
Finally, can ZFS raids be damaged because of the SATA Bit Error Rate or are ZFS' data integrity checking features enough to avoid it?
>>
>>54746496
>can ZFS raids be damaged because of the SATA Bit Error Rate or are ZFS' data integrity checking features enough to avoid it?

unrecoverable read errors don't get data pushed to the FS stack for attempted recovery, so ZFS buys you nothing there in and of itself.

using RAIDZ1/2/3 will give the system a shot at replacing a lost sector at the stripe level though, just like any normal RAID setup.
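
The scale of that risk can be sketched from a drive's rated bit error rate (the 1e-14 errors/bit figure used here is the typical consumer-drive spec-sheet number, assumed for illustration):

```python
# Chance of hitting at least one unrecoverable read error (URE) while
# reading `bytes_read` bytes, assuming independent errors at a rated
# bit error rate. 1e-14 errors/bit is a typical consumer-drive spec.

def p_ure(bytes_read: float, ber: float = 1e-14) -> float:
    bits = bytes_read * 8
    return 1.0 - (1.0 - ber) ** bits

# Reading a full 2 TB drive end to end, e.g. during a rebuild:
print(round(p_ure(2e12), 2))  # -> 0.15
```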
>>
>>54746496
>Which brands or chipsets are perfectly supported
SAS expanders
>>
File: 6714071464158268660.jpg (2 MB, 4096x3072)
Built this little qt for leaving at my friend with fast internet's place.
Details here:
https://hackaday.io/project/11877-somatic-private-server
>>
>>54747588
Why the router?
>>
>>54747635
Idea was to drop it at a friend's place/work, hook it up to power and a network, and it's all mine after that.
The pi reverse port forwards out, so there's no need to expose anything on the pi to the outside world.
Also gives me wifi whenever I'm around it.
I just don't want it all hooked up to an existing network.
>>
File: IMG_20160526_113902.jpg (3 MB, 4160x3120)
My home server
Runs Debian in chroot
2 GB RAM
Some quad core ARM CPU
16GB internal memory
Can attach external USB peripherals

I mainly use it for downloading torrents, anime and serving as wireless storage for my network (64 GB SD card+ 1TB USB HDD)
>>
File: WP_20160521_21_59_31_Pro.jpg (839 KB, 900x1599)
>>54732023
Here's my baby. Hosting Teknik.io and my media stuff.
>>
>>54748328
Whatcha use to chroot out of curiosity?
Debian kit's servers been down for ages, sad.
>>
>>54748461
Built my own chroot using debootstrap in an ext4 image file. I mount the file when I need to use it, chroot into it, then unmount it when I'm done.

I wrote the scripts manually
>>
File: DSC05306.jpg (73 KB, 720x540)
>>54732888
>Refurb drives? Are you sure?

If you build in enough redundancy then it's fine. Even new drives fail. In fact, new drives have a higher failure rate in their first year than in years two and three combined, after which it starts ticking up again. They call it the bathtub curve.

Averaged over the first six years, the failure rate is something like 5 percent per year, and that's assuming the drive is on 24/7. There isn't much good data out there for drives older than six years.
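At a 5 percent annual failure rate per drive, the chance of losing at least one drive in a given year is 1 - 0.95^N, which climbs fast with array size — hence the redundancy. A quick check:

```shell
# P(at least one of N drives fails in a year) = 1 - 0.95^N at a 5% AFR
for n in 1 4 8 12; do
  awk -v n="$n" 'BEGIN { printf "%2d drives: %.1f%%\n", n, (1 - 0.95^n) * 100 }'
done
# roughly: 1 drive 5.0%, 4 drives 18.5%, 8 drives 33.7%, 12 drives 46.0%
```

This assumes failures are independent, which batch defects and shared power/heat make optimistic in practice.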
>>
>just got a unifi
why didn't I do this sooner, it's so nice
>>
>>54748485
Heh, nice.
>>
>>54734577
>google xeon-d
>lots of neat features
>look up prices for xeon-d boards
>see this
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182964
jesus christ
>>
>>54744717
it's better to get a few cheap machines than to splurge on a single $4k box
>>
>>54748655
>http://www.newegg.com/Product/Product.aspx?Item=N82E16813182964
Yeah, but that's 8 cores, dual 10GbE, etc.

Look at http://www.newegg.com/Product/Product.aspx?Item=N82E16813182973
>>
>>54748683
that looks real neat, much better than the newer atoms I suppose
I've been thinking about building another 2U box to run storage, media servers and maybe a few game servers if there's cpu room, would that board handle it?

and should I bother with 10GbE on my router/firewall if I barely have a 100/10 outside connection?
>>
Bought a HP Proliant Microserver gen8.
Need to upgrade the CPU and RAM; I found an i3 that supports ECC and uses 55W. Vacation pay went on the server and hard drives, so I can't really afford a Xeon.

Any other recommendations?
Currently using it as Plex Media Server, but might virtualize things soon and do web, db, etc.

Server runs Ubuntu 16.04
>>
>>54745425
Got the case and PSU. The case is way bigger than I expected, almost as big as my ATX desktop. Then again, with four 3.5" bays I guess there's not much that can be done? I suppose the "solution" is to keep the unit and the storage in separate enclosures?
>>
File: 1381544849239.jpg (51 KB, 500x500)
>>54748971
Come on /g/
>>
amd fx-4100
8gb of ram
total of 2tb of space + some 60g ssd for boot drive

using for plex right now, not sure what else to do with it at the moment. Have space for 3 more drives but not sure what to get. help me /g/
>>
>>54750884
buy a nic and run pfsense on it
>>
>>54751310
Backups, seedbox, local DNS with DNScrypt, etc.
There are tons of things you can do with that thing.
>>
File: Pepe Zone.gif (1 MB, 640x360)
>>54752761
Can anyone recommend a good PC case for a NAS? I'm looking to sit the system on its side, if at all possible, so ideally the data drives would mount vertically when the case is standing up.
>>
>>54754216
You want to tip the computer over?

Lian-Li PC-Q25 maybe. Tipped over ofc. It has SATA backplanes, so installing new drives is easier, but it uses sleds instead of caddies, which is annoying.

I think Silverstone has a case where you insert the drives through the front, like you would in a proper server case.

These would be miniITX cases.

There are others that make decent server cases, but I don't remember the name of those.