/hsg/ - Home Server General

You are currently reading a thread in /g/ - Technology

Thread replies: 255
Thread images: 47
File: images.duckduckgo.com.jpg (393 KB, 768x1024)
Let's do it, /g/.

What do you have? What are you running?

I was thinking about buying a prebuilt, and HP had a ProLiant ML10 Gen9 with decent specs (Xeon, DDR4, RAID) for $300. The catch? Built-in botnet (aka HP iLO), and you have to register with HPE in order to download BIOS updates and install the OS.

Long story short, I had to build a custom one. Thankfully, Supermicro mobos are not that expensive; I managed to grab a $139 X10SLL-F-O, 8GB of ECC RAM for $49, and a Pentium G3258 for $60. Great deal. I still need to buy the PSU, what do you recommend? Is 400W good enough?
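For what it's worth, a back-of-the-envelope power budget says 400W is plenty for this build. The per-component wattages below are assumed typical figures, not measurements, so treat this as a sketch:

```python
# Rough PSU sizing for the Supermicro/G3258 build above.
# All wattages are typical assumed figures, not measurements.
base = {
    "Pentium G3258 (TDP)": 53,
    "X10SLL-F board + BMC": 35,
    "8GB ECC DIMM": 4,
    "fans/misc": 15,
}
watts_per_hdd = 9   # a 3.5" drive draws roughly double this at spin-up
n_drives = 4

load = sum(base.values()) + watts_per_hdd * n_drives
psu = 2 * load      # 2x headroom: spin-up surge, plus PSUs are most efficient near 50% load
print(f"steady load ~{load} W, comfortable PSU size ~{psu} W")
```

Even doubled for headroom, the estimate lands well under 400W, so a decent 400W unit leaves room for more drives later.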
>>
OK, so I have absolutely no idea what the fuck I'm doing. I'm currently running Windows Server 2012 and trying to configure a network that allows users to log in and access storage specific to their account. I'm in the process of installing and configuring Active Directory, but is there anything else I need and should be aware of?
>>
Trying to decide how to do storage. Host has 8 slots (RAID10) and the NAS has 2 (RAID1). Backup jobs run on the Veeam VM to an iSCSI target on the NAS, and then are backed up offsite to %ProviderIHaven'tDecidedOnYet

Currently, I have 4x1TB (host) + 2x2TB (backup)

I'm wondering how to best purchase these drives so I have room to grow without breaking the bank. Currently deciding between 8x1TB + 2x6TB, or 6x2TB + 2x8TB. Either way, I'm looking at like a thousand god damned dollars.
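Comparing the two layouts on usable capacity alone (prices shift too fast to bake in) comes down to simple arithmetic. A sketch, assuming RAID10 on the host and RAID1 on the NAS as described:

```python
# Usable capacity (TB) of the two layouts being considered.
def raid10_usable(n_drives, size_tb):
    # RAID10 mirrors pairs: half the drives' capacity is usable
    return (n_drives // 2) * size_tb

def raid1_usable(size_tb):
    # RAID1 mirror: one drive's worth is usable
    return size_tb

# Option A: 8x 1TB in the host, 2x 6TB in the NAS
option_a = {"host": raid10_usable(8, 1), "backup": raid1_usable(6)}
# Option B: 6x 2TB in the host, 2x 8TB in the NAS
option_b = {"host": raid10_usable(6, 2), "backup": raid1_usable(8)}

print(option_a)   # host 4 TB usable, backup 6 TB
print(option_b)   # host 6 TB usable, backup 8 TB
```

Both keep the backup target bigger than the host pool, which is the constraint that matters for full backups; option B just buys more runway per dollar if the per-TB prices are close.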

>>54378264

Make sure you virtualize that shit, son
>>
I have an HP uw8600 with dual Xeon E5335s and 32GB DDR2 ECC RAM. I need to replace the current hard drives since they are over 5 years old and from a recycling center. I set up Proxmox and a couple of VMs, but now it just sits off in my room. I wanna set up FreeNAS on the new hard drives.
>>
>>54378264
unless all of the machines that are going to access it are joined to the domain, you don't need AD. Also, you don't need to virtualize it unless your environment calls for it.

just use local accounts on the 2012 machine
>>
>>54378264
Why do you need Server 2012 - especially if it's just a file server?
>>
>>54378164
Raspberry Pi 2 and a 64GB flash drive with a Belkin router
Living the good life
>>
File: mooonkitten_1462031317282.jpg (104 KB, 1080x1776)
Want to set up a small home file server. Can I just connect an external HDD to my router, provided that my router has a USB port and can share drives?

Will be using it for media, mainly streaming HD files (~3GB each) to my TV.
>>
File: 1461601927689.png (312 KB, 389x386)
>>54381271
>>
How hard is it to set up an email server? I'd love to be able to host loads of email addresses and actually sign up for free things, but I'd rather have total control over all the addresses than just making up [email protected] and [email protected] and [email protected]. I want that shit to be @ my own last name.
>>
>>54382593
Very. Check the wiki.
>>
>>54382593
Mailinabox on a VPS.

I did that and it was not very hard, since it does most of the work for you.
>>
Will the PSU from a Dell Poweredge 2950 work with my Tyan motherboard?
>>
>>54378164
>you have to register with HPE
Does this cost anything?
>>
>>54382593
Hard, because many big mail services are sticklers about what they'll accept. So unless you care about being able to send email to gmail/ms/etc accounts, go for it.
>>
File: IMG_1382.jpg (392 KB, 1280x956)
>>54378164
Dell C6100
>4 nodes
>8 Quad-core Xeons
>96GB RAM
>2x 1TB in RAID-10 because lazy
>>
I'm a total beginner server-wise, i want to try esxi virtualisation and make voip, irc, file, web, mail servers.

I found a Dell PowerEdge T610 whose specs are: 16 GB ECC RAM, 4x 300GB SAS HDDs, 2x Intel Xeon E5504 2.0GHz quad-core, Windows Server 2008.

Price is 500€, is it good? I'm also worried about the redundant 870W alimentation (power supply): is it too much for what I want to do?
>>
My nexus 7 is running an http server app to "stream" to my phone until I can find my microsd card.
>>
>>54378164
>home server
>PSU
How many drives do you have?
What is the form factor?
1U is not known for modular cables (although there is typically one giant connector for SeaSonic 1U PSUs) so that might end up limiting the number of drives.

If you have a lot of drives you may find yourself well suited to buying a SATA backplane powered using some Molex cables, connected to a storage controller via a few SAS cables.
>>
File: Vmware.jpg (138 KB, 1014x697)
>>54378164
Posting my ESXi server once again. Very stable, even though the processor is just a Core i7. My server has been running for 5 months now. I'm running a combination of VMware and a CCIE lab.
>>
File: IMG_0875.jpg (2 MB, 2448x3264)
Still the usual
X5675
24GB DDR3 ECC
Perc H700
1x850 EVO 250GB, 6x Random 1TB hard drives, 1x2TB backup drive
Mellanox ConnectX-2 10GbE card
>>
>>54378264
Noob alert
>>
>>54384022
Wondering this also. Are server PSUs interchangeable?
>>
>>54384549
Holy shit, that's worse than what I picked up for $250. I'm >>54379828
>>
>>54378164
>The catch? Built in botnet (aka HP iLO)
>Supermicro X10SLL-F-O
>Botnet
You have no idea what you're talking about, do you?

1 - Intel. IME is already enabled, so this invalidates your "botnet" argument to begin with.
2 - Supermicro runs an ASPEED IPMI controller. This is connected to the system via PCI and the BMC.
3 - HP ILO. Much like the Supermicro, this is a custom SOC. It ties into the BMC and PCI interfaces.
4 - ILO / IPMI / DRAC usually have their own interfaces, or are configured as out of band. In any event, they can all be monitored and controlled by, and interact with, the server at any level.

/hsg/ threads are fun, but when you cry "muh botnetz" you sound like a drooling retard.
>>
>>54384234
No. But in order to be eligible for any of the HPE "restricted" software, you have to have a valid warranty. I expect that everyone will go this route eventually. It's sad.

I suppose I'll stick with PowerEdges for the time being.
>>
>>54378367
Depending on your IO requirements, a couple big drives may be better than more small drives.

I went with 12x 4TB spindles and 4x 500GB SSDs, configured with Windows Storage Spaces.

The other thing to look at is refurbished drives on ebay. Smaller drives especially are going for ~$40/drive.
>>
>>54386441
Servers are easily the most proprietary market in computing hardware.

However, in my experience rackmount server motherboards always follow the ATX standard for motherboard power, ATX/EPS12V.

So in theory you shouldn't be forced into proprietary parts: a Supermicro motherboard and case will work with aftermarket RAM (debatable), an aftermarket PSU, etc.

Depending on form factor, the heatsink/cooler may also be proprietary.
>>
Just finished my FreeNAS box:
Define R4
X10SL7-F
i3-4370
16GB DDR3 ECC
16GB Transcend flash drive
6x4TB HGST in raidz2
Cyberpower CP1000AVRLCD
>>
Just got an HP DL180 G5 for free. Single Xeon E5420; tossed 16GB in it and a SAS HBA in place of the shitty RAID card. Taking over FreeNAS duties from an old Dell Precision.
>>
File: 1462251224615.jpg (23 KB, 223x290)
>>54382593
Not hard. If your ISP blocks port 25 outbound you can always use an SMTP relay like SendGrid; the trickiest part for me was setting up all the domain verification records.
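Those verification records typically look something like this zone-file sketch. The domain, policy values, and selector are placeholders; the actual DKIM selector and public key come from whatever your mail software generates:

```text
; Example DNS records for a self-hosted mail domain (all values are placeholders)
example.com.            IN MX  10 mail.example.com.
example.com.            IN TXT "v=spf1 mx -all"
_dmarc.example.com.     IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
; DKIM: selector and key are produced by your mail stack (e.g. opendkim-genkey)
default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public key here>"
```

SPF, DKIM, and DMARC together are most of what the big providers check before they'll accept your mail, along with reverse DNS matching your HELO name.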
>>
>>54386505
How is that worse? Is 500 really too much for what I'm getting?
>>
>>54381271
Usually yes, but you won't be happy with it. It'll have to be externally powered if disk-based: the amperage on the port is not good. It'll be slow, even over USB. You will be hosting files from your router.

Build a $400 home server and configure SMB or NFS if you have the know-how
>>
>>54386154
>CX500M
>on a 24/7 machine

>>54386095
you're posting your ip pajeet
>>
How do I build something that has the following characteristics:

>has one single root file system
>no keeping track of which drives the files are on
>pools the storage available into that single file system
>tolerates failure of like 2 or 3 storage media, enough time to replace

I can actually give a decent answer to those requirements, but what has me stumped is this:

>EASY TO EXPAND STORAGE

I want to slowly buy more and more drives and add them one by one to the server. However, the filesystem stuff seems to be set in stone the moment you set it up no matter what technology I look into. I can't just add a drive, and when I can, it won't redistribute the load so the redundancy characteristics end up skewed. Also, what happens when I run out of drive bays and motherboard connections?

Do you have the answer /g/hsg/
>>
>>54389645
what are you even looking for
>>
>>54389666
Simple, expandable, reliable, centralized storage? I kind of hoard data. Lots of torrents.
>>
>>54389574
do you even RFC1918?
>>
File: IMG_0020 (Large).JPGr.jpg (466 KB, 1080x1620)
Top to Bottom inside rack:

-2x Intel NUC
CPU: Core i3 5010U
RAM: 16GB
Function: set up as ESX HA cluster

-Router, Ubiquiti EdgeMax Pro

-Switch 1, Quanta LB4M (48port 1Gbps, 2port 10Gbps)

-Switch 2, Arista 7124SX (24port 10Gbps)

-Server 1, Supermicro 1U
CPU: Core i3
RAM: 16GB ECC
SSD: 4x 1TB 850 Pro RAID 10
NIC: 2x10Gbit
Function: Datastore for virtualization

-Server 2, Supermicro 2U (Twin server, 2 nodes)
-Node 1:
CPU: 2x Intel Xeon E5-2650v2 8 Core HT
RAM: 128GB ECC
Storage: ESX on USB
NIC: 4x10Gbit + 6x1Gbit
Function: ESXi Virtualization server

-Node 2:
CPU: 2x Intel Xeon E5-2609v2
RAM: 32GB ECC
SSD: 2x 512GB 850 Pro RAID 1
RAID CARD: LSI MegaRAID SAS 9286 8e SGL 8 + LSIiBBU09
NIC: 2x10Gbit + 2x1Gbit
Function: Storage server, RAID card is connected to the 4U JBOD underneath it.

-JBOD, Supermicro 4U
CASE: SC847 E16-RJBOD1
HDD: 37x 4TB RAID 60

-Old norco case storage machine
CPU: Intel i7 920
RAM: 24GB
SSD: 1x 128GB
HDD: currently 8x 2TB
NIC: 2x10Gbit
Function: Test server for storage stuff

-DL160G6
CPU: 2x Intel Xeon L5520
RAM: 32GB
HDD: 4x 1TB RAID 10
Function: Test server.

-DL140G3
CPU: 2x Intel Xeon E5345
RAM: 16GB
HDD: 1x 160GB
Function: Test server.

-Router, Coldspare Monowall (supermicro C2D)

-Router, Coldspare pfSense (supermicro C2D)
>>
>>54382593
>>54388189

Or you can ask your ISP to unblock port 25. The port is blocked by most ISPs for security reasons.
>>
>>54389645
Btrfs can do that.

Add the drive and rebalance.
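The add-and-rebalance flow is only a few commands. A sketch; /mnt/pool and /dev/sdd are example names, substitute your own mount point and device:

```shell
# Grow an existing btrfs filesystem mounted at /mnt/pool (example device/path)
btrfs device add /dev/sdd /mnt/pool       # add the new disk to the pool
btrfs balance start /mnt/pool             # redistribute data/metadata across all devices
btrfs filesystem usage /mnt/pool          # verify the new space and chunk layout
```

The balance can take hours on a full pool, but the filesystem stays online the whole time.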
>>
>>54389645
>I want to slowly buy more and more drives and add them one by one to the server. However, the filesystem stuff seems to be set in stone the moment you set it up no matter what technology I look into. I can't just add a drive, and when I can, it won't redistribute the load so the redundancy characteristics end up skewed. Also, what happens when I run out of drive bays and motherboard connections?
>Do you have the answer /g/hsg/

You're looking for a distributed object-based store like Ceph. You add to the pool and it grows your storage area automatically. You can't do it right without 3 machines to start, and you add more storage by adding more machines packed with drives.
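Worth knowing the cost up front: with n-way replication, usable space is raw divided by the copy count. A quick sketch of the arithmetic (node/drive counts are illustrative):

```python
# Raw vs usable capacity under Ceph-style n-way replication.
def usable_tb(raw_tb, replicas=3):
    # each object is stored `replicas` times, so usable = raw / replicas
    return raw_tb / replicas

raw = 3 * 4 * 4   # e.g. three nodes with 4x 4TB drives each = 48 TB raw
print(usable_tb(raw))      # usable at 3 copies
print(usable_tb(raw, 2))   # usable if you drop the pool to 2 copies
```

So a 3-copy cluster needs roughly triple the raw disk of a plain RAID box for the same usable space; the trade is that you can lose whole machines, not just drives.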
>>
>tfw you have a 750GB external HDD to back up both desktop and laptop and it's only filled half way
feels good not to be a hoarder desu
>>
>>54384549
Can someone else give their thoughts, because >>54386505 is making me second-guess.
>>
>>54384549
>alimentation

Isn't that what you pay to your wife after she fucks tyrone and jamal?
>>
2x C2100
1x C6100
1x RS140

The Dells have 144GB mem and hex-core 2.7GHz Xeons. The Lenovo was a promotion for attending a Lenovo event. It's a 4-core 3.4GHz Xeon with 32GB of mem. The Lenovo runs my edge applications and the Dells run my infrastructure. They're all running ESXi 5.5 and I'm managing it all with the VCSA. So six VMware hosts, since one of the C2100s is a storage box.

The network is a combo of Nortel 5510/5520/5530s and a Gnodal 10G/40G L2 switch. The storage network runs at 40G, and my management network runs at 10G. All L3 tasks are handled in the Nortels at 1G, but they're connected to the Gnodal by 6x 10G. There's 80G of bandwidth through the stack cables between the Nortel switches. I'm not network limited anywhere on my network besides WiFi. I provide PoE to all LAN ports in the house, including the ones in the ceiling for my WAPs.

I run a ton of services internally for myself and also host several for customers. It usually brings in about $2k a month which I use to augment my stack.
>>
File: IMG_20160503_230620.jpg (3 MB, 3120x4160)
>>54390427
I am jelly. Here is my setup:

Top - Storage server, 6x 4TB Raidz2, 4x 1TB iSCSI, 2x SSD for cache.

Cheap 16 port 1Gbit switch

Power strip

Dell server running ESXi, 72GB RAM, 2x some Xeon 5000-series CPUs

Supermicro server running ESXi, used with PCI passthru for Steam streaming to thin clients.

UPS, only supports the storage and 1U ESXi server.
>>
File: DSC_0040 3.jpg (3 MB, 3840x2160)
>>54378164
Synology DS112 with a 3TB WD Green HDD
Next to it - Gaming Rig
Intel Core i5 6600K
16GB DDR4 2133Mhz
Radeon R9 280X 3GB
>>
>>54390922
Are you buying new or used? Don't buy new. Get off-lease hardware for much less than new. C2100s are available on eBay for $300. They're dual-CPU and usually come with much more memory.

Don't buy pedestal servers. Rackmount or nothing. Get a Lack rack if you want, but later, when you have three or five or fifteen, you'll thank me.
>>
>>54390951
cucking is mostly a thing in amerrica you know, non english speaking european country are not this degenerate yet and do their best to resist the degeneracy.
>>
>>54391071
Buying used, of course. Isn't a pedestal server better for home, with the heat and noise? I'm not sure I'm going to need more than this single server for like the next 5 years, but if they're cheaper I've got no reason not to go rack.
>>
>>54390427
What company needs this beast?
>>
>>54391258
Not for a company, it's in my basement, hence why I'm posting it in /hsg/
>>
File: 1462331083855.png (1 MB, 750x1334)
>>54378164
What about power? I'm looking for something around $1000 total: 2TB, 32GB, powerful enough to run 2 VMs (Sophos UTM and CentOS), plus power efficient. Any recommendations?
>>
File: DSC_0194.jpg (2 MB, 3840x2160)
>>54391069
>He fell for the synolomeme memestation meme

i know that feel bro.

Rpi for openVPN
Cheap netgear gigaswitch
DS415+ for mail/dns/backups/BT/dlna
>>
>>54378164
Do any anons know what kind of case that is?

Also, what kind of motherboard would have that number of SATA connections, or would you have to have additional controller cards to handle them? (do you risk data loss/errors using controller cards these days?)
>>
Does pi hole work well?

I'm thinking of doing something productive with my Pi rather than keeping it as a paperweight.
>>
>>54391502
turn them into NTP servers
>>
>>54391322
What the fuck do you do with it?

How much did everything cost?
>>
File: svr.jpg (453 KB, 4160x3120)
it does okay for what it is.

the internet is too slow where I am to do anything useful [down 1 | up 0.1 MB/s]
>>
I have a raspberry pi that was running arkos but i accidentally stepped on it.
>>
File: S2600CP-CPU-kit-2.jpg (780 KB, 1200x800)
>Intel S2600CP Motherboard w/ Dual E5-2670 SR0KX , 128GB Ram
>$475
>>
>>54391542
general storage, AD, mail, NTP, VPN, FTP, backup server, gameservers, vcenter, unifi, observium, bots, proxy, WDS, TS3, websites, etc, etc and a lot of testing.

guessing about $35k at this point
>>
>>54391258
>What company needs this

Holy shit have you ever seen a datacenter for even a small/medium business?
>>
>>54386717
Lol, my thought exactly. Don't like iLO? Just don't plug it in!
>>
>>54391775
Do you rent it to someone? Do you make any money out of it?
>>
Here's my PowerMac G5 server

I tried to install NetBSD, OpenBSD, and FreeBSD, but none of them have an rd kernel with SMP support, so I just installed OS X till I get around to compiling a kernel. Soon I'm installing a 120GB SSD, a second 4TB HDD, and a PCI-X SATA II card

>2x 2GHz PowerPC 970 MP (northbridge doesn't support dual core CPUs but I've got them installed because it's a die shrink over the regular 970s)
>8x512MB RAM
>4TB HDD
>1.5TB HDD
>1TB HDD
>Mac OS X Server 10.5.8

Uses
>file server
>torrents
>compile server
>web server
>>
File: esxi.png (82 KB, 941x759)
It gets the job done. I have outgrown it though, and will be building a new rack setup in the coming months.
>>
>>54391852
I let my friend use the hardware to test stuff; I don't rent it out or actively make money with it, though I should point out that it was paid for with other server-related stuff (not with my main job salary).
>>
File: DSCF6167.jpg (2 MB, 3211x3831)
KGPE-D16
2x Opteron 6262HE
40GB RAM
3x1TB HDD in RAIDZ1 (will soon add more HDD's)
4x1Gb NIC


Also:
shitty L2 switch

Not in the picture:
Ubiquiti EdgeRouter Lite
>>
>>54393053
Fellow Swedish anon?
Can you run Libreboot on it?
>>
>>54384549
>>54391071
I found a refurbished Dell 2950 for 400€: 2x Xeon quad-core E5440, 32GB of DDR3, 2x 146GB 15000RPM SAS drives, 2x 1TB 7.2K SATA, dual GbE NIC and no power supply.

Is the cost worth it now?
>>
>>54390427
Really impressed with your setup, anon. I'm a newfag that wants to learn more about servers; I kind of want to get a setup going in my basement. I don't really need a powerhouse; any guides or references? I just want to learn about firewalls, VMware, provisioning. Do you use Puppet by any chance?
>>
>>54393793
Ohayo gozaimasu, meine swedisch freunde!
>>
>>54393955
Start small obviously, but make sure you leave room to upgrade. Don't buy old shit to save money, as you will still notice that in your power bill.

As I said, I let friends use my servers, and one of them is doing a lot with Ansible, as that is also what we use at a large LAN party we help organize.
>>
>>54391190
>buying used of course, isn't pedestal server better for home with the heat and noise ? I'm not sure i'm gonna need more than this single server for like the next 5 years, but if they're cheaper I got no reason not to get rack.

In most cases pedestal servers are different hardware than their rack mount counterparts. The TS140 has one NIC and shared mem. The RS140 has two NICs, disparate memory and a BMC.

>>54393877
>I found a refurbished Dell 2950 for 400€, 2X Xeon quad core E5440, 32GB of DDR3, 2X 146GB 15000RPM SAS Drive, 2 X 1TB 7.2K SATA, dual GB NIC and no power supply.
>Is the cost worth it now ?

Jesus titty balls. Avoid 2950s. They're power hungry, loud, and not powerful. They go for $50 here now. Baddogservers gives them away occasionally.
>>
>>54394866
What should I aim for then? There's close to nothing under 300€, and for that it's only 1 quad-core and 8GB RAM with like 3x 75GB HDDs and such.

And how can I find something decent in Europe? Or is it normal to get robbed because the market for used servers is almost nonexistent here?
>>
>>54396041
If you are looking at a 2950, look at an R710 instead. Otherwise something like a TS140: you can cram 4 extra NIC cards, 32 gigs of RAM, and 4 hard drives into it.

That guy is not joking, 2950s are fucking LOUD as fuck even at idle
>>
>>54396084
I really have no clue about servers; I've been looking at what's for sale, comparing CPU/RAM/HDD, and searching for the lowest price with the best specs.

How can I learn what's good and what's not, other than looking at the specs?
>>
>>54390863
Does it have to be the same size drive?

ZFS and a lot of others have these weird software RAIDs which apparently don't allow me to buy a 4TB drive and add it to a pool of 2x 1TB drives; I could, but it won't use the extra 3TB.

Imagine something like Amazon S3's simple API, but as a local filesystem on a very small scale. I'd have "unlimited" underlying space with no concept of drives at all and just manage data by unique identifier. Abstract the drives away to the point I just add random drives and replace the failed ones.
>>
>>54390864
>distributed object-based store

Yeah that sounds like what I want! Kind of like a distributed/sharded database system but for my entire file system. I'll look into it. 3 systems should be no problem, but I'll start looking into power efficiency as well.
>>
>>54391071
>Don't buy pedestal servers. Rackmount or nothing

Depends on what you're doing. I swapped out and condensed a half rack into a single PE T620.
>>
>>54386095
What is CCIE? How can I do something smaller for CCNA, instead of GNS3 or Packet Tracer?
Are those router images in torrents safe?
Any links to learn this?
>>
>>54378164

My home server is something I built to be quiet: no fans. It's an 8-core CPU, 32GB RAM, 512GB SSD. It runs Xen with mostly Linux instances. It's also got an HBA that connects to my 8x 4TB array (SFF-8088). I use software RAID for safety (hardware dies). It stores all of my Usenet-downloaded media and runs Plex Media Server. I use Plex on my tablet or smart TV to watch movies and TV.
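Software RAID like this can be set up in a few commands with mdadm. The poster doesn't say which tool or level he runs, so this is a generic sketch with example device names and RAID6 (two-disk redundancy) chosen for illustration:

```shell
# Create an 8-drive RAID6 array (survives two failed disks); devices are examples
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 /dev/md0                                # put a filesystem on the array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist config (path varies by distro)
```

Watch /proc/mdstat while the initial sync runs; the array is usable immediately but degraded-performance until it finishes.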
>>
>>54386095
If it doesn't have a Xeon CPU and at least 64 GB ECC RAM, it's not a server.
>>
>>54396787
>server |ˈsərvər|
>noun

>a computer or computer program that manages access to a centralized resource or service in a network.
Sure thing bud
>>
File: best server.jpg (28 KB, 544x214)
>>54378164
best server reporting in
>>
>>54396084
>>54394866
Lenovo TS140: best price is 300€ for 1 dual-core Intel Celeron G1850, 4GB DDR3 SDRAM, no HDD.

RS140s start at 800€.

The Dell R710 is 360€ for 2x quad-core Intel Xeon E5530, 32GB DDR3 SDRAM, no HDD, 2x 870W PSUs, which seems a lot for what I want.

So is the R710 my best bet?
>>
>>54396906
>shitty compressed jpg
>logged in as root for no reason
>uptime: 1m
>old 32 bit kernel
Kill yourself
>>
>>54390864
>Ceph
>object store
>block level store
>bindings to languages
>REST API (compatible with S3!)
>data striping
>data replication
>POSIX filesystem
>in-memory caching (probably glorious mmap)
>snapshots
>incremental backups
>copy on write
>rebalancing
>data scrubbing, bit-by-bit if I want
>FUSE
>can implement my own software on top of the whole thing
>even provides a custom CMS as example
>can use SSDs as cache
>open source so I can look at the glorious implementation details

It seems like some kind of dream come true - lots of smart people already tackled this problem.

Thanks so much for introducing me to this!

http://www.virtualtothecore.com/en/adventures-with-ceph-storage-part-7-add-a-node-and-expand-the-cluster-storage/

One question though. When scaling the cluster, do I have to provision a whole new node, or can I add storage to an existing node until I use all the bays and only then provision another node?
>>
Dell r210 sitting in a data centre for lolz

host: Hiyori OS: Linux 3.13.0-77-generic/x86_64 Distro: Ubuntu 14.04.4 LTS CPU: 4 x Intel Xeon (2394.080 MHz) Processes: 229 Uptime: 77d 11h 36m Users: 1 Load Average: 0.31 Memory Usage: 8191.52MB/16036.79MB (4.23%) Disk Usage: 77.48GB/909.78GB (8.52%)

2x whitebox machines at home

i5 2500
24gb ram
Archlinux
10x4TB drives in RAID-Z2
Chelsio S310E 10gbe nic
File server +Plex + various download things

i5 3570
16gb ram
Ubuntu server
4x480gb ssds + 1x4TB drive in btrfs pooling mode
Chelsio s310e 10gbe nic
Nginx functioning as a Steam caching server
>>
>>54396041
>And how can i find something decent in Europe ? Or is it normal to get robbed cause the market of used servers is almost non existent here ?

No clue, brocepticon. In freedomland good servers grow on trees.
>>
>>54396348
>Yeah that sounds like what I want! Kind of like a distributed/sharded database system but for my entire file system. I'll look into it. 3 systems should be no problem, but I'll start looking into power efficiency as well.

C2750s are your friend.
>>
>>54396349
>Depends on what you're doing. I swapped out and condensed a half rack into a single PE T620.

You did it wrong. Why would anyone want to be like you? You're still in the before pic.
>>
>>54397261
>Lenovo Ts140 best price is 300€ for 1 dualcore CPU Intel Celeron G1850, 4GB DDR3 SDRAM, no HDD.
>RS140 start at 800€.
>Dell R710 is 360€ for 2 x quad core Intel Xeon E5530, 32GB DDR3 SDRAM, no HDD, 2 870W PSU which seems a lot for what i want.
>So is the R710 my best bet ?
Jesus. Lenovo literally gave me an RS140 for attending an event.
>>
>>54397927
>It seems like some kind of dream come true - lots of smart people already tackled this problem.
>Thanks so much for introducing me to this!
>http://www.virtualtothecore.com/en/adventures-with-ceph-storage-part-7-add-a-node-and-expand-the-cluster-storage/
>One question though. When scaling the cluster, do I have to provision a whole new node, or can I add storage to an existing node until I use all the bays and only then provision another node?

I manage a 3300-node cluster on the side. I've never added less than a full node to the cluster, but if it's like HDFS (it is), you allocate free space in your JBOD and it doesn't care at all if you add one disk or a million. Disk speed isn't important either, and it's rack-aware.

Data migrations are amazingly easy. Add new nodes at the new data site, let it rebalance the cluster, then start shutting down nodes at the old site. It'll take a few rebalances, but the data is always available and has parity.
>>
>>54396310
I feel like Windows Storage Spaces might be somewhat related to what you want
>>
File: dl580 g51.jpg (176 KB, 1189x434)
Guys, we've "inherited" an HP ProLiant DL580 Gen5 server at work.

- 20GB RAM
- Two 6-core Xeons
- Two 150GB HDDs in RAID 0
- VMware ESXi 5.1 (free license)
- 4 NICs

It's been donated to my department (we used to be just support guys) to start up a small development team and better serve the organization.

Really don't know what the fuck to do with it. Our tiny services run on a 2007 computer I've put in the (previously network-only) rack room.

I can figure out my way around virtualization shit but I need some guidance when it comes to what I need to do.

We have two services: one running apache + php + mysql and another apache + python + mysql (same apache instance). We don't even have a domain or valid IP (not critical atm).

What do I need to know?
What do I need to install/configure as to secure and make the services reliable?
Are there any guides about this shit? I'm not building a 99.999% availability data center.

We don't have hardware for backup but I guess we can use the old "server" with upgraded HDDs as NAS or something.
>>
File: htnshtnshtns.png (246 KB, 664x660)
Not bashing because I would totally do this stuff too, but I wonder how many corporations are like /hsg/ users: I don't know what I'm going to do with it, but fuck it, I need more servers.
>>
Stuck picking an OS for my virtualization host. I was looking at some VM hosting distros like Proxmox or unRAID, but I figure maybe I should just use Red Hat and configure it myself.
>>
So does /hsg/ have some kind of buying guide?

What's the difference between server hardware and consumer hardware? I see a lot of people with Supermicro mobos, a brand I'd never heard of before these threads. Is it not fine to, say, repurpose an old computer? Also, it seems like nobody's using PC enclosures even though some mobos are ATX.

What are the pros and cons of stuff? Why did you choose something in favor of something else?
>>
What's the consensus on 80€ mobos with integrated CPUs as home servers?

I don't plan to do heavy calculations or something, just backups and mail and some websites
>>
>>54400434
Seconding a /hsg/ buying guide or wiki.

I'm knowledgeable when it comes to building desktops but completely lost when it comes to some of these servers.
>>
File: 1460320248747.gif (903 KB, 400x300)
>>54391729
Now I understand how the cloud works

Where did you get your bundle?
>>
>>54389645
just use LVM and keep adding more drives to your volume group as physical volumes.
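A sketch of that flow, with example device/VG/LV names (and the caveat raised in the replies: plain LVM pools space but adds no redundancy on its own):

```shell
# Fold a new disk into an existing volume group and grow the LV + filesystem
pvcreate /dev/sde                                 # initialize the new drive as a PV
vgextend vg_storage /dev/sde                      # add it to the volume group
lvextend -l +100%FREE /dev/vg_storage/lv_data     # grow the logical volume into the new space
resize2fs /dev/vg_storage/lv_data                 # grow the ext4 filesystem to match
```

All of this works online; no unmounting needed for ext4 growth.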
>>
>>54400434
Yes, we definitely need one.
/hsg/ overlords, make it happen please.
>>
>>54400539
just put some dual SLI cards in it and balance the load using the GPU, pretty much the same thing.
>>
>>54400608
This doesn't meet all of the requirements and has some pretty annoying limitations. Same thing with ZFS.
>>
>>54399523
ITfag here.

I need more info about your org, do you have a mostly Windows environment, Linux or both?

What type of dev do you want to learn? What business model do you serve?
>>
>>54400434
>>54400539
>>54400680
Not OP, but I'll see what I can throw together before I'm gone for a week.
>>
>>54401075
What limitations?
>>
>>54399199
> Did it wrong
Lets see...

6 servers, 3 SAS EB's, and a SAN. Nope, like my T620 better, thanks.
>>
>>54391548
fuck that sharecenter. mine's fucked up after a year and shit's loud as fuck
>>
>>54390864
>You can't do it right without 3 machines to start
excuse my retardedness but why?
couldn't you do it with say, two systems and virtualization?

I'm in a similar situation to him but only 2 kvm hosts, but I'd love the ability to expand it later like that.
>>
>>54402496
Not that anon, but Ceph clusters (usually?) don't have a dedicated quorum node. As such, they need an odd number of nodes for a majority election if one node is out of sync with the others.
>>
>>54389549
Or just get a Banana Pi with SATA and plug the SATA into the fucking hard drive and use that, as I do, which gets full speeds over LAN.
>>
>>54401614
>6 servers, 3 SAS EB's, and a SAN. Nope, like my T620 better, thanks.

Did it wrong and still too stupid to see it.
It's a shame.
>>
>>54402849
So what's the right answer?
>>
>>54402545
>Not that anon, but CEPH clusters (usually?) don't have a dedicated quorum. As such, they need an odd number of nodes for a majority election if one node is out of sync with the others.

Not only that, but the default redundancy is 3 copies, which generally have to be on 3 discrete nodes. Once it becomes rack-aware it wants some copies off-rack. Once it becomes datacenter-aware it wants copies in disparate datacenters.

You don't want to virtualize storage on top of some other storage unless you're testing in a lab.
>>
File: Screenshot (1).png (208 KB, 1366x768)
under 200 dollar homeserver
>>
>>54402883
>So what's the right answer?
Follow the comment chain. Rack mount since they tend to multiply. Then you end up with switches and routers and UPSs, and KVMs, and a bunch of other shit that doesn't stack for shit when you're using pedestal servers.
>>
>>54402960
Fuck that noise.

A PE T620 works best for what I'm doing at home.

I have enough gear in the colo, and I deal with this shit all day at work.
>>
Do you need a fast cpu for a nas?
I have a spare amd athlon 5350 from my mom's old build I have no use for.
>>
>>54403029
For just a NAS? No, that should be fine. If you want to do Plex transcoding and such you'll have to evaluate what quality you want, and what you're transcoding to.
>>
>>54391420
What's wrong with Synology?
>>
>>54403047
Okay thanks. Google has some NAS builds with that CPU. I was probably going to use it just for backups or data storage, but apparently it can handle a 1080p stream.
>>
>>54403029
nas4free will run on a P-III
>>
>>54403250
That's good to know. What about on a Raspberry Pi?
>>
File: cat9n-2-web.jpg (72 KB, 970x1248)
>>54403662
Maybe, I'm not sure. My NAS consists of a samba share from my server. I don't use it personally but have set it up for other people before.
>>
>>54403006
>A PE T620 works best for what I'm doing at home.
>I have enough gear in the colo, and I deal with this shit all day at work.
Well when you're ready for big boy hardware we'll still let you post pictures. Until then, it's naptime. You don't want to get any crankier.
>>
>>54403099
>What's wrong with Synology?
Qnap, Synology, and Drobo are expensive for the feature set they include. Often the hardware is significantly less performant than an equal cost server build. They're closed source and proprietary which is a significant problem here since it offers you few upgrade options, a single source who really knows how the software works and little recourse once they abandon a model. I'm perfectly fine with closed source, proprietary software elsewhere but with these devices it's a problem.

TL;DR: They're a Fisher-Price "my first storage" device. There are much better options.
>>
File: 1412082735918.jpg (855 KB, 2450x1155)
this is what faggot weeaboos on /a/ actually think are servers
>>
>>54401437
I work in the local branch of a state-funded university. 99% of the 500+ computers are office & facebook machines running Windows. Only 10% of the IT dept uses Linux (including me).

I already know how to dev, but this is a sysadmin'ing job in my case. I've dealt with VPSes (had a Linode for about five years before Digital Ocean was a thing). I've never been to the back side of VPSes, so I don't know how to manage these things.

Initially the services will only be available to the local networks and will be used by other employees and sometimes by the students.

Except in February and August, downtimes do not cause major issues (at worst I get a single call). It's been like this for about a year and a half. ***ACCORDING TO WHAT I'VE BEEN TOLD*** the services will be migrated to the datacenter once they're production-ready (I heard they are out of space in the storage and have no money to buy more, so they'll probably give me the finger if I ask them to host the services later this year).
>>
>>54407319
VPSes were basically ubuntu server with nginx + database behind ufw.

Can I make something like that with multiple vms? I.e. setup one firewall virtual machine and put the service vms behind it. Is that even recommended?

Everything is low traffic here. I just checked and we had 2927 requests since May 1st.
>>
File: 200px-HappyMerchant.gif (15 KB, 200x225)
>pfSense Gold
>>
File: DeathpactAngel.jpg (173 KB, 1600x1168)
For servers, why would I ever want to use RAID 5E or RAID 5EE over RAID 1+0?
>>
I'm currently looking into setting up a FreeNAS box with 8 4TB drives in RAID10. I'm looking into RAID controller cards right now, but not sure what to get. I read that ZFS benefits from software RAID more than hardware RAID, though I've always been told that hardware RAID is always superior. Does anybody run that many drives in RAID with FreeNAS, and what is your experience with it?
>>
>>54400434
With all the knowledgeable people here it'd be a true gold mine of information!
>>
>>54382593
>not buying a server with RAID5 and at least 12gb RAM
>not installing Exchange 2016
>not enjoying calendar, tasks, contacts too

The easier alternative is to use Office 365 for about 2 bucks a month.
>>
>>54407820

If you're going to add more drives, you should look at RAID6.

ZFS has only ever let me down. How do you add drives to a ZFS array? It takes a bunch of juggling and resilvering which takes days to complete. I had a ZFS raid fail and discovered... there are no tools to help recover a ZFS raid in distress.

If you're going to go with hardware raid you should buy 2 of the cards. You need a spare. If you use software raid... no need for spares.
>>
>>54390864
>>54397927

Windows Server Storage Spaces
>>
>>54400434
>>54401453
Any progress on this?

Also, was thinking of getting a Thinkstation S20 with a quad core Xeon and 4GB ram for about $150. I want something quiet. Does anyone have any better recommendations? Don't want to be wasting my money.
>>
>>54397310
ayy lmao and its a PuTTY ssh connection
>>
>>54410389
Interesting to hear. I was originally planning on 8x4TB since that's all the drive bays I have, but I might expand to a different chassis later on. I have also looked into RAID6 and like what I see, but I also like the reliability of RAID10, so I'm pretty torn on that. I have never run a ZFS filesystem before, so I'll have to take your word. Did you use a hardware or software RAID controller when your RAID failed?
>>
File: artsXfe.jpg (481 KB, 2688x1520)
Any books/articles/guides for getting into servers for retards?
>>
>>54411078

Software RAID. I learned my lesson on hardware RAID years ago. If your hardware RAID card dies, you could be down for days waiting on another to arrive.

I use a basic JBOD SFF-8088 card with a gig of cache.

Yeah, all my friends were fans of ZFS.. but I don't get it. No tools for even trying to recover some data is appalling. Keep it simple. Software RAID will actually keep up with some pretty heavy loads.
>>
>>54407654
>For servers, why would I ever want to use RAID 5E or RAID 5EE over RAID 1+0?

RAID5 gives you linear write performance of N-1, while RAID1 or 0+1 gives you N/2.
Utilization efficiency is similarly (N-1)/N for base RAID5 and (N-2)/N for RAID5E/EE, which exceeds the 0.5 of RAID1+0 for arrays of 5 or more drives.

RAID0+1 is only particularly good for lots of random small writes, as its linear and random read throughput are only better than RAID5 by a small scalar degree.
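Those utilization figures check out numerically; a quick sketch (N is total drives, treating RAID5E/EE as one drive's worth of parity plus one drive's worth of integrated spare space):

```python
def raid5_efficiency(n):
    """Usable fraction: one drive's worth of parity."""
    return (n - 1) / n

def raid5e_efficiency(n):
    """Usable fraction: parity plus integrated spare space."""
    return (n - 2) / n

RAID10_EFFICIENCY = 0.5  # mirrored pairs, always half

# RAID5E/EE passes RAID1+0's 0.5 utilization at 5 drives
for n in range(4, 8):
    print(n, raid5e_efficiency(n) > RAID10_EFFICIENCY)
```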
>>
>>54411143
Sounds good. I was kind of leaning towards that. Instead of using ZFS do you use UFS? I was reading about ZFS and while it sounds great, the lack of official ZFS raid recovery does have me cautious about it.
>>
>>54391548
Whay do you use the eeepad for?
>>
QEMU+KVM or ESXi or XenServer?

I'm running XenServer at the moment but might change to QEMU+KVM, what are the real advantages?
>>
>>54411647

Uhhh, you sure about that?
>>
>>54384446

>2x 1tb in RAID-10

You can't possibly be this retarded.
>>
>>54412729
How so? I had two, I just threw them in there.
>>
>>54412909
He's saying RAID-10 requires 4 drives anon.
>>
>>54412643
what's wrong with it?

the issues with RAID5 are that partial stripe writes require reading the data and parity segments before writing back to them and the supposed "hole" here if you have a power loss between data and parity being written.

unless somebody knows they have atypical use requirements, software RAID5 is nearly always the right default choice.

I don't actually know anybody who uses RAID5E/EE instead of a normal standby spare though.
>>
>>54378164
>iLO
>Botnet
Do you understand what either of those two are?
>>
>>54410729
>Windows Server Storage Spaces
No.

>>54411078
>Interesting to hear. I was originally planning on 8x4TB since thats's all the drive bays I have, but I might expand to a different chassis later on. I have also looked into RAID6 and like what I see, but I also like the reliability of RAID10 so I'm pretty torn on that. I have never ran a ZFS filesystem before, so I'll have to take your word. Did you use a hardware or software raid controller when your raid failed?

Don't ever run ZFS on top of RAID. Either run RAID or ZFS2/ZFS3. RAID/ZFS to any a backup. If you're going to be a big boy with storage, plan on having a backup server or service and scheduled backups. Backblaze B2 and crashplan are cheap.

Don't use RAID5 any longer. Go RAID6 or 60. Unless you need super write performance skip RAID10. In most cases a single L2ARC SSD will be faster than RAID10 anyway.

Also use hotspares, and have a RAID card battery backup if you go that route.
>>
>>54412265
>Sounds good. I was kind of leaning towards that. Instead of using ZFS do you use UFS? I was reading about ZFS and while it sounds great, the lack of official ZFS raid recovery does have me cautious about it.

WTF are you even saying? Use appropriate URE drives with appropriate parity settings and this is a non-issue. Hot spares rebuild failed drives immediately, backups repair lightning strikes. There is no in between.
>>
>>54413388
2nd for using straight software raid
2nd for not using RAID5
2nd for using L2ARC w/ SSD
>>
>>54413297
>unless somebody knows they have atypical use requirements, software RAID5 is nearly always the right default choice.

STOP USING RAID 5!
>>
>>54413388
>RAID/ZFS to any a backup.

RAID/ZFS is not a backup, is what I was trying to say.
>>
>>54413456
>2nd for using straight software raid
>2nd for not using RAID5
>2nd for using L2ARC w/ SSD

We can be friends. I'd let you rsync my media collection.
>>
It gets me by.
>>
File: servass.png (23 KB, 815x519)
>>54413527
It doesn't help when I don't post what gets me by.
>>
>>54413525
Heh. I hope it's not full of weeb shit.
>>
>>54413388
>>54413439
Thanks for the info. This is my first foray into setting up RAID, so I've just been doing a lot of reading, but I'm still unsure of a lot of things.
>>
>>54414248
FreeBSD's guide is pretty good for getting started with ZFS.

https://www.freebsd.org/doc/handbook/zfs-quickstart.html
>>
>>54378164
what purpose server??
why do you want to waste electricity?
>>
>>54378164
I bought a HP microserver gen8 after fucking around with tons of different options trying to get ECC and vt-d in one platform.

It's now running sonarr (+ deluge) but I can't decide what to use to access my media.
>>
>>54413472
>he doesn't have backups
fuck off retard
>>
>>54396310
This is exactly why you should use btrfs instead of ZFS for home.

The flexibility is amazing.
>>
>>54414636
Does btrfs have something like ARC or L2ARC?
>>
File: ayooo.png (54 KB, 675x424)
>>54414474
I'm liking 16.04

>>54414852
>using ARC
Linux(yes the kernel) is supposed to add a function that allows all filesystems to have a l2arc-like cache.
>>
>>54414894
Care to elaborate? Maybe a link of some kind to this?
>>
>>54414959
https://en.wikipedia.org/wiki/Dm-cache

I have no idea how it works or if it works well.
Things in the linux kernel tend to be good stuff.
>>
>>54414984
>dm-cache
That's somewhat equivalent to L2ARC, not ARC. ARC is a RAM cache for the disk.

>The design of dm-cache requires three physical storage devices for the creation of a single hybrid volume; dm-cache uses those storage devices to separately store actual data, cache data, and required metadata.

Yikes, the separate volumes?

Back to ARC, does Linux have an equivalent for ARC?
>>
>>54415025
No idea.

Also I meant that ARC is terrible from a finance perspective. Memory is not cheap relative to SSDs. I wouldn't even consider it.
>>
>>54414581
>>he doesn't have backups
>fuck off retard

I have two levels of backups you fuckwaddle. There is no reason to use a redundancy method that doesn't provide redundancy in case of a failure. RAID5 is over as of 2TB drives unless you spring for drives with a URE of 10^15. With the expensive drives it's only "likely" that you can rebuild a degraded array. Why in the shitfuck would you accept all the downsides of RAID5 if it meant any failure would likely require a full restore from backup to recover from?

Sounds like you're just a shit snacker trying to post tough on g. You got called on it and now you have to smile with corn nibblets and peanuts stuck in your teeth.
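The "RAID5 is over as of 2TB drives" claim comes from URE arithmetic. A rough sketch, assuming independent read errors at the drive's rated rate (pessimistic, but this is the standard form of the argument):

```python
def rebuild_failure_prob(surviving_tb, ure_bits=1e14):
    """Chance of hitting at least one unrecoverable read error
    while reading every surviving bit during a RAID5 rebuild."""
    bits_read = surviving_tb * 1e12 * 8  # decimal TB -> bits
    return 1 - (1 - 1 / ure_bits) ** bits_read

# 4x2TB RAID5: a rebuild must read the 3 surviving drives (6 TB)
print(rebuild_failure_prob(6, 1e14))  # consumer 10^14 URE: ~0.38
print(rebuild_failure_prob(6, 1e15))  # enterprise 10^15 URE: ~0.05
```

So with consumer drives, better than a one-in-three chance the rebuild trips over a URE, which is why the thread keeps pushing RAID6.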
>>
Nothing special, but it runs Apache, Samba, and OpenVPN (currently broken).

>INB4 Cardboard: It's metal now. STFU.
>>
>>54415762
Yay. Lost image.
>>
>>54415811
>>
>>54386095

>gigabyte gayming
>core i7 4770
>no ecc memory
>server

TOP KEK!
>>
>>54386195
That's some fine detective work there anon. I mean I know that he said "ok so i have absolutely bo idea what the fuck im doing." but had you not put 1 + 0 together, and basically reiterated what he said, I don't think I would have ever realized that.
>>
>>54415811
"Naaaaw maaaaan. Put the swiiitch insiiiide the compuuuter." *toke*
>>
>>54415934
Post your server or shut your mouth...
>>
>>54416053
Lol. Extra space and not enough ports on the router.
>>
>>54415065
>terrible from a finance perspective
Not the point anon. The primary point is that if you have free memory, you have wasted memory. Hence, you may as well use the free memory to cache the disk.

It helps performance too, but performance was never financially friendly.

>>54416053
lmao'd
>>
>>54386095
>Ubuntu
>2 versions of Windows; so nice, had to install it twice, huh?
>VMware, not KVM, or even Xen
>>
Anyone here use FreeBSD jails? I just don't get the whole shilling behind virtualization with ESXi.
>>
>>54416167
>Lol. Extra space and not enough ports on the router.

"Bro, the data goes into the computer, then it goes into the computer. Like it enters the body, then the mind."
"That still doesn't explain how they fing." *States at hand*
>>
>>54416537
>I just don't get the whole shilling behind virtualization with ESXi.

Highly capable software, that has a huge user base, and has a great support structure. Businesses love it so it's popular among labbers who use it to get better at work.
>>
>>54412966
Oh lol right, it was RAID-0. Dunno what I was thinking.
>>
Some guy I work with gave me a Poweredge 1950 that he didn't want anymore and I have no fucking clue what to do with it.

>giving a network engineer a bare server
>>
>>54417196
>Some guy I work with gave me a Poweredge 1950 that he didn't want anymore and I have no fucking clue what to do with it.
Throw it away.
>>
>>54418663
You're the worst kind of person.
>>
File: ASRock C2750D4I Top.jpg (501 KB, 1274x1200)
Thoughts on pic related? It has an 8 core embedded atom cpu. Supports ECC RAM. Mitx board with 12 sata ports and 4 dimm slots. Worth getting? I currently have a Pentium G3260 in an H97 motherboard with 4gb RAM.

If I were to pop 16gb of ddr3 ECC RAM into this, would it be able to handle multiple VMs and video encoding? Or would I be better off getting an i7-4790, raid controller, and 16gb of standard ddr3? Both options are around the same price. I intend to run Windows 7 (shoot me I know) because for my needs Linux is worthless and I refuse to pay $380 for 2012 essentials.
>>
>>54418798
Pop a R7 260X in it (The PCIe slot is 8x) and you've got yourself the closest thing to a PS4 for almost twice the price.
It's great.
>>
>>54418726
>You're the worst kind of person.
Read the fucking thread, shitweasel. 1950s and 2950s are nearly the same thing. They're better door stops than servers.

Throw.
It.
Away.
>>
>>54418919
I stand by my post.
>>
>>54418866
That's not helpful in any way. Thanks.
>>
>>54418798
>Thoughts on pic related? It has an 8 core embedded atom cpu. Supports ECC RAM. Mitx board with 12 sata ports and 4 dimm slots. Worth getting? I currently have a Pentium G3260 in an H97 motherboard with 4gb RAM.
>If I were to pop 16gb of ddr3 ECC RAM into this, would it be able to handle multiple VMs and video encoding? Or would I be better off getting an i7-4790, raid controller, and 16gb of standard ddr3? Both options are around the same price. I intend to run Windows 7 (shoot me I know) because for my needs Linux is worthless and I refuse to pay $380 for 2012 essentials.

The C2750s are designed for storage boxes. The C2758s are designed for network boxes. Neither are good for general hypervisor work. They're both fairly expensive in a $/performance standpoint when compared to off lease servers.
>>
>>54418798
>octacore atom
Didn't know these existed desu

>>54418951
I'd take it for free but if you have a spare $100-$150 you might as well buy a refurb off ebay. If you plan to run it 24/7 you might as well buy a better one.

Hope it came with memory, but you said barebones so I wouldn't hold my breath.
>>
>>54418952
Seeing as how I own one, I could tell you about specifics, but since you're an ass, I won't.
>>
File: CPsCOQwWcAEWn7o.png (286 KB, 599x303)
>>54405637
>There are much better options.

Which are?
>>
>>54418951
>I stand by my post.
Got it. You're a dumbass.
>>
>>54418988
>Seeing as how I own one, I could tell you about specifics, but since you're an ass, I won't.
There's nothing to tell. Unless you're running pfsense or sophos utm on a C2758, you can do better. If the Atoms were half the price then they might make sense but they're priced out of the niche unless you have a shoestring power budget.
>>
>>54419030
Says the guy suggesting to toss perfectly functional hardware.
>>
>>54418977
OK thanks. Guess I'll just get the i7. Does ECC RAM make or break a server? Like is it REQUIRED? Or would I get by just fine with non-ECC RAM?
>>54418984
>octocore atom

Yup they exist. Low clocked 8 thread 15w tdp atoms. They're actually pretty good from what I hear but from what >>54418977 says, they're only good as storage boxes or network.

So I'm looking at the Asrock server mitx with 8 thread atom - $350
16gb ddr3 ECC - $70

Or

I7-4790 - $250
16gb ddr3 non ECC - $40

Hence my question about the importance of ECC.
>>
>>54419082
I was talking about specifics about that particular ASRock board.
>>
>>54419094
Not that anon, but the 1950/2950 and anything HP G5 is worthless. Even R900s are pretty close. I've just tossed the last of my G5s and 2950s.
>>
>>54418996
>Which are?

Rape, murder, arson and rape.


Wait, no. Either an actual SAN/NAS from Dell/Promise or a less expensive software NAS like freenas or nas4free. If qnap sold freenas devices they'd charge $5k due to the available features. Qnaps, drobos and similar are prosumer goods. There's a reason you won't find them anywhere that understands storage.
>>
>>54419161
4u
>>
>>54419094
>Says the guy suggesting to toss perfectly functional hardware.

You don't understand what a 1950 is, so you're still a dumbass.

You could buy 3x-5x a 1950s capabilities from AWS for less than the power the 1950 would consume in equal time.

Throw.
It.
Away.
>>
>>54419129
It should be fine as a standalone -- definitely wouldn't put a hypervisor on it due to overhead.

>using non-ECC on ECC mobo
See:
http://www.tomshardware.com/answers/id-2644636/ecc-ram-ecc-mobo.html
and look at the Intel page for your specific processor model.

>>54419243
Just do what you want anon, /g/ is full of poor and rich, /hsg/ is just filled with rich.
>>
>>54419262
>You don't understand what a 1950 is
Yes I do, I looked up the specs before replying to you.
>>
>>54413388
>Backblaze B2

They say its unlimited. How can I trust that? I trusted ISPs when they said that. They betrayed that trust.

Can I really expect them to be totally okay with me shoving about 53 terabytes down their throats? And all of it is encrypted data they can't possibly deduplicate.

Also, no Linux daemon, no API... It's really fishy if you ask me.
>>
>>54419129
>OK thanks. Guess I'll just get the i7. Does ECC RAM make or break a server? Like is it REQUIRED? Or would I get by just fine with non ECC RAM?

If you're just fucking around get whatever. If you actually want some server time where stuff runs 24/7 go ECC. It's very, very important in software based storage and very important everywhere else.

Look at off lease Xeons that take dual and quad ranked ECC mem and you'll see the desktop tier stuff doesn't make any sense.
>>
>>54419366
>unlimited
No it doesn't. It says $.005/gb for me.
>>
>>54419306
Not that guy but those really are junk. Louder than a 2950 and they will pull 300w at fucking IDLE.

Fun to play around with but for any type of actual 24/7 powered on state, not worth it.
>>
>>54419426
I never said I would use it, but to throw it out is just fucking moronic. Pass it on to someone who does want it or sell it
>>
>>54419132
>I was talking about specifics about that particular ASRock board

Ok. It's the same price as an off lease C2100 and has 1/10th the CPU, no chassis, less network, and less storage bandwidth.
>>
>>54419243
Only the R900 and DL580 G5's are 4U...
>>
So, I said I'd throw something together for a WIKI before I left. it's shit, but I'm tired and have days of driving to do. So here then. It's a starting point, however crappy...

/hsg/

What would you use a home server for? /g/ answer - Fuck you! Simple answer - For whatever you want. From media to development to virtualization, options abound.

Power - Any server DDR2 based is going to be power hungry. Any multi-socket Intel system is FBDIMM based. Anything else is ECC. With DDR3 based units coming off 2nd lease, anything DDR2 should be avoided.

Plex - 1080p streaming at 10Mbps requires a CPUMark score of ~2000 per stream. This is especially true with first generation i3/5/7 / DDR3 Xeons. The more recent the CPU, the more slack there is in this. For some reason, Plex doesn't seem to like low power options (Xeon 1220L, for example).

Virtualization - ESXi, KVM, Hyper-V, etc. ESXi is generally used by Linux heavy shops that aren't cloud centered. KVM is usually used in OpenStack. Hyper-V is for mostly Microsoft centric shops. These are all free, so use what you like.

Storage - Both ZFS and Storage Spaces pool. If you're going to use these options, do NOT configure the drive with a hardware RAID controller. Many options are available in general, such as FreeNAS, Nas4Free, OpenMediaVault, Windows Storage Server, Linux / Unix / BSD, etc. Some are free, some are not.

What should I get? A good starting point, if you don't want to build your own system, is an HP ProLiant MicroServer Gen8, 8GB DDR3 ECC (Not Registered or RDIMM), 4 3.5" drives, and a 16GB micro SD card. Install OpenMediaVault on the SD card, and enjoy ZFS, Plex, and whatever else you want to try.

Where can I get things? Ebay is a good place to start. Used / refurbished gear is fine, provided that the seller is selling a large quantity of them. With drives especially, this is the case. The only real drive to avoid is the Seagate ES.2 1TB. These have faulty firmware and fail prematurely (Ask EMC).
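The Plex rule of thumb above reduces to a one-liner (the ~2000 CPUMark/stream figure is a community estimate, not an official Plex number):

```python
def max_1080p_transcodes(cpumark, per_stream=2000):
    """Rough count of simultaneous 10Mbps 1080p transcodes."""
    return cpumark // per_stream

# e.g. an E3-1231 v3 benched around 9700 at the time -> ~4 streams
print(max_1080p_transcodes(9700))  # → 4
```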
>>
>>54419306
>>You don't understand what a 1950 is
>Yes I do, I looked up the specs before replying to you.

Holy fuck you're dense. I didn't mean literally what it is. It's a waste. You lose money running one compared to running equal services in AWS.
>>
File: P_20160507_134159.jpg (4 MB, 4096x3072)
I've had some shit before, a diy wooden rack full of greens connected through $10 4 port sata pci cards with software raid 1 but eventually that went to shit, I turned them all off, and have spent the last two years running off 4 3TB externals.

Recently I got my hands on a 16TB RAID array for like 450 aussiebux (About $150 USD).
The housing was unmanageable eSATA bullshit so I said fuck it and got myself a 4x4 port sas card.
Built myself a nice clean wooden box, and am now trying to work out how fucked each of the drives is so I can put all the old drives back online, rebuild and sync the arrays, then finally optimise my drive usage. Long weekend ahead.
>>
>>54419366
>They say its unlimited. How can I trust that? I trusted ISPs when they said that. They betrayed that trust.
>Can I really expect them to be totally okay with me shoving about 53 terabytes down their throats? And all of it is encrypted data they can't possibly deduplicate.
>Also, no Linux daemon, no API... It's really fishy if you ask me.

I've seen 100tb stuck in their service. Note there are two services. B2 is the business grade service that allows server backups. The other is only for desktops.

Their major downside is they only have a single datacenter so there is no DR.

They release some drive analysis every now and then, which is worth a read. They have a shitload of drives so their sample sizes start to count for something.
>>
>>54397310
>>54410892
>it could not have possibly been bait
>not on 4chan
>>
>>54419411

Oh, that's the cloud storage service, not the backup service. My mistake.

I'm even more confused now though. Why do they not charge by GB for backups? It's the same storage, no?
>>
>>54419455
>I never said I would use it, but to throw it out is just fucking moronic. Pass it on to someone who does want it or sell it

Hold it for someone else so that the price gap between it and AWS gets even bigger over time. You might be retarded.
>>
>>54419514

Thanks a lot man! Please keep coming back to these threads. Everyone can see you know your stuff and having good, informative threads is such a nice change from regular /g/ shitposting
>>
>>54419627
>You might be retarded
Says the guy that throws away things that aren't broken
>>
And lastly, no pics because scattered, but...

Server:
Dell PowerEdge T620
2x E5-2660
160GB DDR3 RDIMM
12x Hitachi Ultrastar 4TB (And 2 more on the spares shelf)
6x Sandisk Ultra II 500GB SSD (1 more on spares shelf)
2x 120GB Sandisk something RAID 1 (and 3 more spares, because extras)
Spindles and 500GB SSD's are in a Windows Storage Spaces (Server 2012 R2) pool, with tiered storage enabled for a few of the file shares.

Server runs Hyper-V and supports 10 production VM's (2 DC's, 2 DHCP, SQL, Exchange, SCCM/SCOM, SharePoint, OCIM/Lync, Plex) and up to 30 non-production VM's (When I feel like trying to play in OpenStack, for example)

Production VM's are Hyper-V Gen2, everything is Hyper-V Gen1.

Firewall:
PCEngines APU1D4
128GB mSATA SSD
Untangle

Wireless:
2x EAP1750H running over PoE. Guest network isolation active, and AP roaming supported.

Phones:
3x Polycom CX700. Currently configured with Lync, but will be reconfigured to support standard voice shortly.

Switch:
Dell X1018P 16 + 2 port managed* PoE gigabit.

*Not full managed, but managed enough to vlan VoIP and data, and send PoE and throughput data to SCOM.

VM's are also replicated to an HP Z800 that I do all my blu ray and dvd ripping on, just to cover an oh shit moment.
>>
>>54419651
>Says the guy that throws away things that aren't broken

Yeah. I rid myself of things when they're no longer useful. Just like the guy who gave you the 1950 did.

The 1950 is useless. It may as well be a pretty rabbit that you can hug, and pet, and squeeze without worrying about the consequences, Lenny.
>>
>>54419584

>I've seen 100tb stuck in their service.

B2 or regular user backup?

>allows server backups

So if I have a home server I can't use the backup service, I must use B2?

>Their major downside is they only have a single datacenter so there is no DR.

Can you explain why this is a downside? What is a DR?

>drive analysis

I've seen them get heavily criticized because it was too specific to their drive storage conditions or something
>>
>>54419730
I have no 1950, I'm just arguing with you because you're a moron.
>>
>>54419457
I was talking specifics like the issues with its third party SATA controllers, the NIC, and RAID/ZFS bugs the things has, but alright
>>
>>54419782

> Can you explain why this is a downside? What is a DR?

DR == Disaster Recovery. Basically, if the datacenter hosting their service burns to the ground, gets struck by lightning, etc., your data/backups are hosed with no way of getting them back.
>>
What does /g/ use their home serber for?
Casualfag here trying to educate myself on this shit.

Maybe my needs are just too casual and undemanding, but I can't see a use for such a setup in my own domicile.
>>
Fun fact: according to my Kill-a-watt, an AMD A4-5000 uses 5W less power at near idle (10% load) than the Pentium N3700 at near idle (7% load). At full load, the A4-5000 only uses 37W total with DDR3L and an SSD, whereas the N3700 uses about 40W. All this despite the A4-5000 having better single and multi-threaded performance (5% better ST, 7.5% in multi-threaded)

The A4-5000 is simply a better choice for a low-cost, low-power, quad core home server.
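For perspective, a few watts either way barely moves the yearly bill; a sketch assuming 24/7 uptime and $0.12/kWh (your tariff will differ):

```python
def yearly_cost_usd(watts, usd_per_kwh=0.12):
    """Cost of a constant load running all year."""
    return watts / 1000 * 24 * 365 * usd_per_kwh

print(round(yearly_cost_usd(37), 2))  # A4-5000 flat out: ~$38.89/yr
print(round(yearly_cost_usd(3), 2))   # the 3W full-load delta: ~$3.15/yr
```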
>>
/g/ SEND HELP
I want to make a server, a file server, but I also want to manage everything the users do, like local mail, chat, and printing. How would I go about limiting them? Chat will be handled by Openfire, but how do I go about managing printing and limiting how much a machine can print, etc.?
>>
>>54419789
>I have no 1950, I'm just arguing with you because you're a moron

Things must look different on the other side of that pile of Victrolas, VCRs, 8-tracks, and CB radios. You'd be a mongoloid to keep it. But we've established that now, so all is well.
>>
>>54419841
>I was talking specifics like the issues with its third party SATA controllers, the NIC, and RAID/ZFS bugs the things has, but alright

Who cares? There's no reason to have one.
>>
File: 20160501_172555.jpg (4 MB, 5312x2988)
>>
>>54396728
I think it stands for Cisco Certified Internetwork Expert. Idk but it's the top of the line for cisco networking certs. I just took my CCNA last week. Never worked in IT or had any interest in computers or networking before 4 months ago. Started studying half-assed for a few months thinking I could glide on superior intelligence. 725 fail :(.

It really wasn't that hard though. I didn't use GNS3 at all, I used packettracer, CLN, home lab (3 2950's and 3 1841's) and some books/audiobooks.

If you have any questions about it let me know man. I'll be taking it again in a few weeks and passing it this time. Hopefully get a CCIE one day.
>>
>>54419782
>>I've seen 100tb stuck in their service.
>B2 or regular user backup?

B2

>>allows server backups
>So if I have a home server I can't use the backup service, I must use B2?

The home service is for local disks on desktop Windows and Mac.

>>Their major downside is they only have a single datacenter so there is no DR.
>Can you explain why this is a downside? What is a DR?
>>drive analysis

Disaster recovery.

>I've seen them get heavily criticized because it was too specific to their drive storage conditions or something

Probably the home service.
>>
>want to use some fancy LAN ip numbering like 10.0.2.0 or whatever
>don't care enough to fuck with it
>192.168.0.0 is just fine
>>
>>54420441
>victrolas, VCRs, 8 tracks, and CB radios
Congrats, your score is 0/4.
>>
>>54420506
Nice sized rack, where'd you get it / how much?
>>
>>54420590
Amazon, $590
>>
>>54420590
>nice sized rack

t-thanks, I'm kind of an early bloomer
>>
>tfw my vps has over 2/3 of a year of uptime
>tfw it's been up since the last power outage
>>
File: pepeglasses.jpg (42 KB, 655x527)
>>54420349
How can I manage the rest? Chat by using Openfire, but printing? How do I go about it?
>>
File: 1447890426109.jpg (46 KB, 424x505)
>mfw 500gb external died with all my porn on it
>>
So are there books on this stuff? What should one use to educate oneself
>>
File: Screenshot_20160507-015905.png (922 KB, 1440x2560)
Anon that was asking about the octo-core Atom board here. How about pic related? Mini-ITX, supports ECC, dual gigabit. Was thinking this and a Xeon E3-1231 v3 along with a 2x8GB kit of DDR3-1600 ECC from Crucial.

Board - $210
CPU - $210
RAM - $90
Throwaway passively cooled GPU - $40

The GPU is only for if, god forbid, the thing NEEDS to be hooked up to a monitor, since this Xeon doesn't have an iGPU and i7s don't support ECC. I currently just remote into the server when needed. The board has 6 SATA ports, and 6 drives is all my Node 304 case can handle anyway, so I don't mind losing the PCIe slot.
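Totting up the parts listed above (prices as quoted in the post, USD; the labels are just descriptions, not exact SKUs):

```python
# Prices copied from the post above.
parts = {
    "mini-ITX board": 210,
    "Xeon E3-1231 v3": 210,
    "2x8GB DDR3 ECC": 90,
    "throwaway passive GPU": 40,
}
total = sum(parts.values())
print(f"total: ${total}")  # total: $550
```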
>>
>>54419900
My needs are pretty basic as well, but I just have a remotely accessible box hooked to my TV for HTPC use, torrenting, NAS (at home), and relay (when not home). For all intents and purposes it is a server, and it is capable of doing nearly everything that any other box can do, but it is in a different hardware category entirely. One of the things I like about it is that wherever I am, I have a guaranteed untrackable connection, e.g. I can browse 4chan and YouTube at work without it being logged, since the traffic runs through my home connection. Other than being able to back up my data continually, a Chinese Atom-based mini PC would work just as well for me. The difference is that my machine can actually hold 4-6 drives internally (more if I put in a RAID controller and a bigger PSU) and it has something like 6 USB 3.0 ports. It also has more OS compatibility, but since it works as is, I'm unlikely to load anything other than Windows 10.

I used to have an HP Compaq Pro 6000 (sold when I got this one: Core 2 Duo E8400, 10GB RAM, 128GB SSD, 1TB HDD, DVD-ROM) that I used as a game server (and NAS / headless media center), and I ran a heavily modded Minecraft server for like 8 people on it. I also used it as a VPN, similar to how I use my new machine (except on the new machine I just use TeamViewer and run everything locally).

Current machine is an AMD Athlon 5350 + 8GB RAM + 128GB SSD + 2x3TB HDDs + Blu-ray drive. I still have an empty PCIe slot that I don't know what I want to do with yet. It's low profile, though, so that limits me a lot.
If I disconnect the HDDs, I can run it off of a laptop power brick.