Walked
Apr 14, 2003

AbsoluteLlama posted:

So I went to price out a NAS today. Has anyone noticed HD prices have skyrocketed on every site? I checked CamelCamelCamel, and most of the 2TB drives on Amazon that were $70-90 are now $130-150. Is this just a pre-holiday price bump or something?

http://www.msnbc.msn.com/id/4499573...s/#.TqrL4d4r29w


Walked
Apr 14, 2003

Is there a consensus on the best 2.4/5GHz router for home use? Happy to spend if I need to (well, not happy), but I don't want to buy three before I find the right one.

Walked
Apr 14, 2003

I'm building a Plex box and trying to figure out my OS options, specifically for storage purposes.

The server is going to be an i3-4150 running 3-4 WD Reds.

I'd like a storage option with parity so I can sustain at least one drive failure at a time. I was originally going to use a Windows base with Storage Spaces, but the write performance looks to be laughably bad for this.

So I'm considering a few options:
- Windows software RAID5. Should have stronger write performance from what I've seen.
- FlexRAID. Seems pretty cool, but also money I don't want to spend.

What are my options for a *nix OS? I need:
- Plex and transcoding
- SABnzbd/couchpotato/sickbeard
- Torrent client
- Some sort of parity storage option

Past that, I don't really want to spend hours loving around, because I'm getting old and grumpy and lack the endless hours to tinker I once had (hence the original Windows considerations).

Any thoughts?
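
edit: if the answer ends up being plain Linux with mdadm RAID5, this is about the level of babysitting I'd sign up for (a sketch only; mdadm is my guess at the stock answer, and the alert hook is just a print):

code:
#!/usr/bin/env python3
"""Sketch: poll /proc/mdstat and complain when an mdadm array degrades.
Assumes Linux + mdadm; wire the print up to email/whatever yourself."""
import re
import time

MDSTAT = "/proc/mdstat"

def degraded_arrays():
    # mdstat shows member health like [UUU]; an underscore ([U_U])
    # marks a failed or missing disk in that array.
    bad, current = [], None
    with open(MDSTAT) as f:
        for line in f:
            m = re.match(r"(md\d+) :", line)
            if m:
                current = m.group(1)
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                bad.append(current)
    return bad

while True:
    for md in degraded_arrays():
        print(f"array {md} is degraded -- go replace a disk")
    time.sleep(600)  # check every ten minutes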

Walked
Apr 14, 2003

Combat Pretzel posted:

For the love of god, don't do this. The NTFS journalling doesn't communicate with the LVM, and whenever the box restarts unexpectedly, it starts rebuilding your whole array. I'd rather take the write hit of Storage Spaces, which journals which slabs are dirty (independent of the filesystem), than have my array poo poo the bed performance-wise and add wear to the disks whenever the power grid is wonky during summer.

This is the info I'm here in search of. Thanks!

Probably going to go with Xpenology; I just want to be sure it's pretty self-sustaining with minimal maintenance.

Walked
Apr 14, 2003

http://www.engadget.com/2014/12/12/seagate-ships-8tb-shingled-hard-drive/

:psypop:

Walked
Apr 14, 2003

I've decided to build a dedicated Plex box.

I'd just go really simple with an HP MicroServer, but I'd really like something that can transcode as well (and if I can run some VMs for a little more, that'd be nice - not a must).

Any suggestions on:
- Board (6x SATA would be really nice)
- Processor that can handle transcodes at 1080p. Ideally low(ish) wattage. Don't foresee more than one going at any given time
- Case that's quiet (I can swap fans out if needed)


edit:
Looking at:
Mobo: http://www.newegg.com/Product/Product.aspx?Item=N82E16813157547
Case: http://www.newegg.com/Product/Product.aspx?Item=N82E16811352047
CPU: http://www.newegg.com/Product/Product.aspx?Item=N82E16819116995

Any issues with this setup? Toss 8GB of RAM in there and then 4-5x 4TB Reds and be good to go.

Walked fucked around with this message at 22:58 on Dec 23, 2014

Walked
Apr 14, 2003

I'm ready to buy a NAS box.

For home storage mainly (backing up my wife's computer, storage of files, etc.); might stream some Plex, but that's not a top priority.

I have 4x 3TB drives ready to roll.

Eyeballing a Synology. What's the best 4-bay device in their current lineup?

Walked
Apr 14, 2003

Just got my 1515+ set up; running 5x 3TB drives in SHR-2 (two-drive parity).

100% maxing out my GigE on backups. I have a few computers on the network using it for various purposes (Plex, backup, Time Machine, etc.), so I'm considering doing NIC aggregation on the Synology side - probably don't really need it, but it's fun to employ poo poo like that at home.

Also: Veeam Endpoint is awesome for home backup to NAS. Very very pleased; been eyeballing it for a while and finally decided to give it a whirl here.
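
edit: napkin math on what the bond would actually buy me, since LACP hashes per flow (the overhead fraction is a rough assumption):

code:
# Napkin math on GigE vs. bonding. Overhead fraction is a rough guess.
line_rate_gbps = 1.0
usable = 0.94                      # after Ethernet/IP/TCP headers, roughly
usable_MBps = line_rate_gbps * 1000 / 8 * usable
print(f"one GigE link: ~{usable_MBps:.0f} MB/s")          # ~118 MB/s

# LACP hashes per flow, so one backup stream still rides one link;
# the bond only helps when several clients hit the box at once.
links = 2
print(f"2-link bond, many clients: ~{links * usable_MBps:.0f} MB/s aggregate")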

Walked
Apr 14, 2003

necrobobsledder posted:

Most people's onboard SATA ports are Intel ICH controllers and will be fine for most home server needs. Depending upon how many PCIe lanes are allocated to the SATA controllers, you may want something else for a 1000 MB/s throughput, 500k+ sequential IOPS build. If you're trying to do some boot-from-LAN type of scenario that's nearly unheard of at home, then it may make some sense.

The coolest thing I've done at home is set up StarWind Virtual SAN, with an Adaptec RAID controller and 4x 4TB drives in RAID10, with SSD cache, on 10GbE.
The bottleneck between that thing and my primary desktop is the 850 Evo in the desktop.

:getin:

Seriously awesome; but I do a _ton_ of VMware lab stuff for work, so it's worth it. The Synology on 1GbE gets all my media storage and is entirely sufficient.
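
edit: to put rough numbers on the lane point from the quote (per-lane throughput figures are approximate):

code:
import math

# Approximate usable throughput per PCIe lane, by generation
# (encoding overhead already baked into these figures).
per_lane_MBps = {"PCIe 1.x": 250, "PCIe 2.0": 500, "PCIe 3.0": 985}
target_MBps = 1000   # the hypothetical 1000 MB/s build from the quote

for gen, lane_MBps in per_lane_MBps.items():
    lanes = math.ceil(target_MBps / lane_MBps)
    print(f"{gen}: x{lanes} slot needed for {target_MBps} MB/s")
# An x1 slot on an old board caps a fast HBA; x4 electrical is plenty.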

Walked
Apr 14, 2003

Can anyone offer a suggestion to nail down a bottleneck?

I have a TS140 with an Adaptec 6405e RAID card and 4x 3TB 7200rpm drives in RAID10.

I just moved this from my desktop to the TS140 and went from ~300MB/s r/w to about 120.

The two systems are connected via iSCSI over 10GbE, and it maxes out the network until the TS140's RAM cache is full, then dies down to 120ish.

I've benchmarked this on the desktop and the TS140, and the difference is pretty consistent in iometer.

I'm thinking there has to be a bottleneck somewhere on the TS140, but it has a relatively modern PCIe slot.

What am I missing?

Config:
TS140, quad-core Xeon
20GB RAM
Intel X540-T2 10GbE
Adaptec 6405e
4x 3TB 7200rpm drives in RAID10
Server 2016
StarWind Virtual SAN

The performance is the same local, over the network, and via iSCSI.

I just don't know what would bottleneck it...

Walked
Apr 14, 2003

SynMoo posted:

Were the drives connected to the same card when they were in your desktop?

The bottleneck looks like the controller writing to the drives. 120MByte/s is about what you'd expect writing to a single drive. Theoretically you could expect better in RAID10. Check your config to see how the card is configured to cache, etc.

Async writes; same card in both. It's really, truly rather strange.
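
edit: the back-of-envelope that makes the single-drive theory plausible (the per-disk number is a ballpark assumption):

code:
# Why 120 MB/s smells like one spindle: rough RAID10 write expectation.
single_drive_MBps = 150   # ballpark sequential write, 3TB 7200rpm disk
disks = 4
stripes = disks // 2      # RAID10 stripes across mirrored pairs

expected = stripes * single_drive_MBps
print(f"expected RAID10 sequential write: ~{expected} MB/s")   # ~300
print("observed: ~120 MB/s -> about one disk, not two stripes")
# Next suspect after the disks: controller cache policy
# (write-through vs. write-back) on the 6405e.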

Walked
Apr 14, 2003

Walked posted:

Can anyone offer a suggestion to nail down a bottleneck?

I have a TS140 with an Adaptec 6405e RAID card and 4x 3TB 7200rpm drives in RAID10.

I just moved this from my desktop to the TS140 and went from ~300MB/s r/w to about 120.

The two systems are connected via iSCSI over 10GbE, and it maxes out the network until the TS140's RAM cache is full, then dies down to 120ish.

I've benchmarked this on the desktop and the TS140, and the difference is pretty consistent in iometer.

I'm thinking there has to be a bottleneck somewhere on the TS140, but it has a relatively modern PCIe slot.

What am I missing?

Config:
TS140, quad-core Xeon
20GB RAM
Intel X540-T2 10GbE
Adaptec 6405e
4x 3TB 7200rpm drives in RAID10
Server 2016
StarWind Virtual SAN

The performance is the same local, over the network, and via iSCSI.

I just don't know what would bottleneck it...

So I need some help figuring this out.

I moved my 10GbE NIC to an R710 with an H700 controller and 2x 850 Evos in RAID0, to completely eliminate the HDDs as a possible bottleneck. I also directly connected it to my workstation to remove cabling and the switch from the picture.

Still capping at 2Gbit/sec transfer. gently caress.

I've verified the source disk (850 Pro 1TB) is capable of far more than 2Gbit/sec. So it seems it has to be something with the PCIe slot or some other weird quirk in Windows 10.

The PCIe slot is running in Gen3 mode at x8, so it shouldn't be a bus limitation.

Any other ideas?
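
edit: for reference, the napkin math that has me ruling out the bus and the source disk (figures are approximate spec-sheet values):

code:
# Ruling out the bus and the source disk, roughly:
lane_GBps = 0.985                  # PCIe 3.0 per lane, after 128b/130b
lanes = 8
print(f"PCIe 3.0 x8: ~{lane_GBps * lanes * 8:.0f} Gbit/s")   # ~63 Gbit/s

ssd_MBps = 550                     # 850 Pro sequential read, spec sheet
print(f"one 850 Pro: ~{ssd_MBps * 8 / 1000:.1f} Gbit/s")     # ~4.4 Gbit/s

print("observed cap: 2 Gbit/s -> not the slot, not the SSD")
# Which leaves software: drivers, offload settings, or the hypervisor.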

Walked
Apr 14, 2003

mayodreams posted:

What are you using for your 10g switch?

I would check that your firmware and drivers line up. Are Server 2016 and Windows 10 actually supported for the 10G HBA?

From the HDD perspective, 4 disks, even in RAID 10, isn't a ton of spindle speed.

Like I said: to eliminate the HDDs as the bottleneck, I'm going from SSD (850 Pro 1TB) --> SSD RAID0 (2x 850 Pro on hardware RAID0, with battery-backed 512MB cache, write-back enabled/forced). I should very easily be doing more than 2Gbit/sec; maybe not maxing out 10GbE, but notably better than what I'm seeing.
I've eliminated the switch from the equation by directly connecting the hosts.

Windows 10 is supported; I'm using mainstream Intel X540-T2 adapters.


Edit: on a whim I blew away VMware and installed Server 2016. Getting speeds as expected now.

Something is amiss in the ESXi default drivers, it seems.

Walked fucked around with this message at 15:42 on Oct 22, 2016

Walked
Apr 14, 2003

http://forums.somethingawful.com/showthread.php?threadid=3799779

10GbE switch for $300 from Ubiquiti.

Then you need NICs; I'm running Intel X540-T2s for my desktop and SAN, and debating throwing fiber into one of my VM hosts (Mellanox cards are pretty cheap on eBay).


Still not cheap, mind you, but it's doable at reasonable-ish pricing right now.

$300 switch
$30 for a pair of Mellanox cards
Add in twinax and you're good to go.

Walked
Apr 14, 2003

So I'm revisiting shared storage for my lab.

Right now I'm using a DS1515+ with about 8TB usable (SHR-2). This has been just fine as network SMB storage for media/whatever, but Synology iSCSI is never going to be performant enough for a significant Hyper-V cluster.

However, I'm in the process of expanding my lab out to a third Hyper-V host, and I want to move to high-performance shared storage as part of this (rather than the local storage I've been using on my main compute host).

:nsfw:
Right now I'm running a pair of 1TB Samsung 850s in RAID0 on an H700 controller for VM storage, since most of what I lab-test is deployment-based and build/destroy anyway. Part of the motivation for fixing this is that I'm slowly starting to build out services I don't want at immensely high risk of total loss.

I have a 10GbE network; the lab will run primarily on paired R710s on a solid UPS.

I also presently have a TS140 on hand, with an Adaptec 6400-series card and 4x 3TB drives in RAID10. If I can make use of this in any reasonable way, cool. Not a must.

Any suggestions for a path forward that would provide somewhat high-performance shared storage?
I'm kinda eyeballing a Fusion ioDrive II 1.2TB to use as a cache disk for StarWind VSAN and then just sticking the Adaptec RAID10 behind that. Not sure if that's a logical move or not.

Not too concerned with dollar amounts.

Any suggestions?
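
edit: whatever I land on, I'll be comparing candidates with the same crude sequential-write test before trusting anything fancier (the target path is made up):

code:
#!/usr/bin/env python3
"""Crude sequential-write test so every candidate backend gets measured
the same way. TARGET is a made-up path -- point it at the mounted volume."""
import os
import time

TARGET = "/mnt/lab-storage/bench.tmp"   # hypothetical mount point
BLOCK = 1024 * 1024                     # 1 MiB per write
COUNT = 4096                            # 4 GiB total

buf = os.urandom(BLOCK)
start = time.monotonic()
with open(TARGET, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())                # flush so the page cache can't lie
elapsed = time.monotonic() - start

print(f"{COUNT / elapsed:.0f} MB/s sequential write")
os.remove(TARGET)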


Walked
Apr 14, 2003

Backblaze B2 seems pretty reasonable, too, and pretty straightforward.
