Walked
Apr 14, 2003

necrobobsledder posted:

Most people's onboard SATA ports are Intel ICH controllers and will be fine for most home server needs. Depending on how many PCIe lanes are allocated to the SATA controller, you may want something else for a 1000 MB/s throughput, 500k+ sequential IOPS build. If you're trying to do boot-from-LAN type setups, which are nearly unheard of at home, then it may make some sense.

The coolest thing I've done at home is set up StarWind Virtual SAN with an Adaptec RAID controller, 4x4TB drives in RAID10, and SSD cache on 10GbE.
The bottleneck between that thing and my primary desktop is the 850 Evo in the desktop.

:getin:

Seriously awesome; but I do a _ton_ of VMware lab stuff for work, so it's worth it. The Synology on 1GbE gets all my media storage and is entirely sufficient.


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

necrobobsledder posted:

If a big honking enterprise class SAN can experience disruptive data corruption on a shelf of drives because of a bad controller, you can lose your whole array's worth of data at home because of it.

We are actually dealing with this at work right now. I don't know the details but we have a ZFS-based SAN that's been slowly making GBS threads itself to death over the past year. My cheapass boss won't approve the hardware purchases because reasons.

We're doing performance testing on this system to optimize for the production instance so that's fun.

Greatest Living Man
Jul 22, 2005

ask President Obama

IOwnCalculus posted:

Onboard (more specifically, on-chipset) controllers are generally well supported. Intel/AMD and the open source community all have a very big interest in making sure that Linux / FreeBSD / whatever other OS you run are rock-solid stable on the most common SATA controllers. This same mindset seems to extend to server-grade SATA controllers. The older LSI controllers are a particular favorite because they're everywhere in the server world and are far from bleeding edge. They're well supported in pretty much every OS and can be had cheaply because pretty much every server manufacturer has used them at some point.

The problem occurs when you get to cheap-rear end SATA controllers, from manufacturers who only care about Windows compatibility. They don't put any effort into supporting non-Windows systems, and unless the community gets lucky and makes a stable driver on its own, the general mindset will be "buy hardware that's already supported".

What's the general maximum number of SATA ports allowable on a PCIe card? I usually see a max of four.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Greatest Living Man posted:

What's the general maximum number of SATA ports allowable on a PCIe card? I usually see a max of four.

Depends on how much you're willing to pay. You can get LSI cards with two SFF break-out ports for 8 total SATA ports pretty cheap, though.

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

Greatest Living Man posted:

What's the general maximum number of SATA ports allowable on a PCIe card? I usually see a max of four.

If you want to maintain good performance across all of the drives, the general rule of thumb is one drive per PCIe lane.

IOwnCalculus
Apr 2, 2003

DrDork posted:

Depends on how much you're willing to pay. You can get LSI cards with two SFF break-out ports for 8 total SATA ports pretty cheap, though.

And then you can take advantage of SAS expanders. You start losing peak throughput at that point.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I'd also replace the Q1900 motherboard in the original post with the more modern Braswell equivalent: the ASRock J3710.

I've just switched from the Bay Trail J1800 to the Braswell J3710. It's not quite as much of a leap in benchmarks as I was expecting, but it does feel much slicker in daily use, can run full disk encryption with the same responsiveness as my J1800 (which ran with non-encrypted disks), and has HEVC support advertised on the box. I'm not transcoding with mine, but it's handy if you might want to use Plex. It's ever so slightly more power-hungry than my J1800 (13 watts at the wall on idle, as opposed to 12 watts), but they are super light on power to start with.

Greatest Living Man
Jul 22, 2005

ask President Obama

Krailor posted:

If you want to maintain good performance across all of the drives, the general rule of thumb is one drive per PCIe lane.

But a PCIe x16 card is 16 PCIe lanes, right?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Doesn't mean there are 16 lanes physically wired on the board, though. Most SAS controllers top out at 8 lanes anyway, and lots of servers will only wire 8 lanes to each x16 physical slot in favor of more slots.
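
For a rough sense of why an x8 link is usually plenty for spinning disks, here's some back-of-envelope arithmetic (ballpark figures only, assuming PCIe 2.0 at roughly 500 MB/s per lane and ~200 MB/s sequential per 7200rpm drive):

code:
# ballpark only: ~500 MB/s per PCIe 2.0 lane, ~200 MB/s per 7200rpm drive
echo "x8 slot bandwidth:       $((8 * 500)) MB/s"
echo "8 spinning drives, best: $((8 * 200)) MB/s"
# plenty of headroom, so the one-drive-per-lane rule of thumb mostly
# matters once SSDs or big expander chains enter the picture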

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I'm going to experiment this weekend with a mirror pair in my home server. The noteworthy part of this story is that one drive is a 2.5" WD blue and the mirror is an SSD. Will Ubuntu poo poo the bed?

Will it be like a three legged race, with Usain Bolt tied to Barry White?


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

apropos man posted:

I'm going to experiment this weekend with a mirror pair in my home server. The noteworthy part of this story is that one drive is a 2.5" WD blue and the mirror is an SSD. Will Ubuntu poo poo the bed?

Will it be like a three legged race, with Usain Bolt tied to Barry White?

Assuming by "mirror pair" you mean RAID 1, you are dumb and you'll only get the speed of the WD Blue and the size of the SSD. Ubuntu won't poo poo the bed, since the entire idea of RAID is to let the OS not care about the underlying details of each physical drive, but you're putting yourself in a worst-case scenario here. Just run the SSD as your main drive and set up rsync or another backup script/program to mirror onto the Blue nightly if you want redundancy.
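
If you go the rsync route, a minimal sketch looks something like this (the /mnt/ssd and /mnt/blue mount points are made up for the example; point them at wherever your drives actually live):

code:
# one-way nightly mirror of the SSD onto the WD Blue
# -a preserves permissions/ownership/times, --delete makes the Blue an exact copy
rsync -a --delete /mnt/ssd/ /mnt/blue/

# to run it automatically at 3am, add a line like this via crontab -e:
# 0 3 * * * rsync -a --delete /mnt/ssd/ /mnt/blue/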

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Calm down. I said it was an experiment.

I actually posted my intention to see if there's anything inherently wrong with it. I guess it won't result in data corruption or anything: I'll merely be adding 'wait cycles' to any writes to the array, while the filesystem waits for the mechanical drive to be ready?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
No, there's nothing inherently wrong with it; barring some hardware failure along the line, the setup should work. You very probably will have slow writes, as you expect: many RAID 1 implementations do not consider a write operation complete until both drives have committed the change, so the overall effect to you as the user will be as if you were always writing to the WD Blue (because you are). Some implementations will mark a write "complete" when either drive finishes and then try to sync them up, but in practice that still ends up limiting you to the buffer size of the slower drive anyhow, so it's usually not much of a help.

You may have slow reads, too: depending on how the RAID is implemented, it may well not respect that the SSD should be the "primary" read source, and may therefore send 50% of your read operations to the WD Blue.

I mean, you can do what you want, but there's literally no reason to do what you've proposed, as it eliminates the vast majority of the reasons to have an SSD in the first place.
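
One aside on the read half of that: if the mirror ends up being Linux md rather than a hardware card, mdadm has a flag for exactly this lopsided pairing. Marking the spinning disk write-mostly keeps reads on the SSD while writes still go to both. A sketch with hypothetical device names:

code:
# /dev/sda1 is the SSD partition, /dev/sdb1 the WD Blue (names are made up)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/sda1 --write-mostly /dev/sdb1
# reads come from the SSD; writes still wait on both drives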

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I know it's a waste and kind of stupid. I feel like practising setting up RAID1. Today I went into CeX (a kind of junk shop for second hand phones, DVD's and computer parts here in the UK) and I was gonna throw £15 at a drive to pair with the WD Blue but they had nothing that looked decent, only a really scratched looking Samsung 250GB.

I have a spare OCZ Arc that I'm not really using and that was gonna end up in the pair. I might have a look in town tomorrow for a semi-decent mechanical drive.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

apropos man posted:

I know it's a waste and kind of stupid. I feel like practising setting up RAID1. Today I went into CeX (a kind of junk shop for second hand phones, DVD's and computer parts here in the UK) and I was gonna throw £15 at a drive to pair with the WD Blue but they had nothing that looked decent, only a really scratched looking Samsung 250GB.

I have a spare OCZ Arc that I'm not really using and that was gonna end up in the pair. I might have a look in town tomorrow for a semi-decent mechanical drive.

If you want to play with RAID levels and assembling volumes, you could instead look at learning ZFS. Instead of using physical drives, you can create vdevs backed by files on disk, then assemble and experiment with RAID arrays with the whole array running from separate files on the same disk.

You could also just use VMs, but a RAID 1 of a hard disk and an SSD sounds like a bad time.
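
A minimal sketch of the file-backed approach, assuming ZFS is already installed and using throwaway paths under /tmp (file vdevs have to be given as absolute paths):

code:
# make two sparse 1GB backing files
truncate -s 1G /tmp/vdev0 /tmp/vdev1

# build a mirror pool out of them and have a look
sudo zpool create testpool mirror /tmp/vdev0 /tmp/vdev1
zpool status testpool

# tear it all down when you're done experimenting
sudo zpool destroy testpool
rm /tmp/vdev0 /tmp/vdev1
Every zpool/zfs command behaves the same as it would with real disks, which is the whole point of the exercise.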

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Creating a load of vdevs across a couple of disks sounds like an idea. I assume that this has to be planned carefully so that if a disk dies then data retention is maintained?

rizzo1001
Jan 3, 2001

apropos man posted:

Creating a load of vdevs across a couple of disks sounds like an idea. I assume that this has to be planned carefully so that if a disk dies then data retention is maintained?

Can always dig up a fistful of thumb drives too.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

apropos man posted:

Creating a load of vdevs across a couple of disks sounds like an idea. I assume that this has to be planned carefully so that if a disk dies then data retention is maintained?

The above suggestion of using vdevs is just for play. Obviously, you gain nothing by having a RAID 0/1/5/6/10 made up from a bunch of files on the same disk.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Skandranon posted:

The above suggestion of using vdevs is just for play. Obviously, you gain nothing by having a RAID 0/1/5/6/10 made up from a bunch of files on the same disk.

Hmm. Gonna look for guides on RAID1 with ZFS I think...

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The primary reasons for setting up RAIDZ variants with multiple vdevs on the same disk include:

1. Testing configurations
2. Expecting to expand to other drives. I had 4 drives for a while that I was doing some testing with, and to stagger failure likelihoods a bit I waited a couple of months before ordering more from another supplier. In the meantime, I made an 8-partition RAIDZ2 out of my 4 drives and ran them through some tests.

Expanding zpools is a bit awkward if you haven't done it before, and you may want to practice managing them conceptually before committing a bunch of time and money.
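
Sticking with the file-backed vdevs from earlier, the expansion operations themselves can be rehearsed the same way. A sketch (testpool and the /tmp paths are purely hypothetical):

code:
# grow the pool by adding a second mirror vdev (zpools expand a whole vdev at a time)
truncate -s 1G /tmp/vdev2 /tmp/vdev3
sudo zpool add testpool mirror /tmp/vdev2 /tmp/vdev3

# or rehearse swapping out a "failed" member of the original mirror
truncate -s 1G /tmp/vdev4
sudo zpool replace testpool /tmp/vdev0 /tmp/vdev4
zpool status testpool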

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
So I've installed the WD Blue on its own, wiped it and created two 40GB files to use as vdevs.

Then created a zfs mirror array, so I have this:

code:
  pool: ztank0
 state: ONLINE
  scan: none requested
config:

        NAME                    STATE     READ WRITE CKSUM
        ztank0                  ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            /media/zdevs/file0  ONLINE       0     0     0
            /media/zdevs/file1  ONLINE       0     0     0

errors: No known data errors
I can access the array by going to /ztank0. It was originally owned by root but I 'sudo chown'ed it. This is OK, yes?

If I buy another cheap drive tomorrow I'm interested in doing the same as I did tonight, but at a device level.

I'm also interested in encryption, since my other drives are encrypted. If I apply LUKS to the actual pool (ztank0), I take it that ZFS will still mirror everything correctly? So if everything is working, the checksums of my file0 and file1 vdevs should always match?

What about more esoteric RAID arrangements with ZFS? How is encryption deployed on a ZFS array with 5 drives or n drives? Are there special considerations depending on the arrangement?

EDIT: going to bed now. Will read in the morning.
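
On the encryption question: ZFS on Linux had no native encryption at this point, so the usual arrangement is LUKS underneath ZFS, one container per member device, rather than "on the pool". ZFS then mirrors and checksums the unlocked block devices exactly as it would bare drives, so a healthy mirror still scrubs clean. A hedged sketch with made-up device names:

code:
# one LUKS container per physical drive (device names are hypothetical)
sudo cryptsetup luksFormat /dev/sdb
sudo cryptsetup luksFormat /dev/sdc
sudo cryptsetup open /dev/sdb crypt0
sudo cryptsetup open /dev/sdc crypt1

# build the mirror on the unlocked mapper devices
sudo zpool create ztank0 mirror /dev/mapper/crypt0 /dev/mapper/crypt1
The same pattern scales to RAIDZ with 5 or n drives: one LUKS container per disk, then hand all the mapper devices to zpool create.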

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
Anyone have experience or opinions on running ZFS or Btrfs on LVM volumes? I'm interested in having a checksumming filesystem, but I wouldn't want to give up the flexibility that LVM and mdadm provide. I remember reading years ago that you should use ZFS only with raw devices, not even partitions, but I don't remember the justification for that.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
While I have no real experience with LVM volumes, the desire to use ZFS on raw devices comes from one of two places, depending on what you mean by "raw."

If you mean "raw" as in without gpt partitions, that was mostly an effort to get 4k alignment correct to maximize performance. Most drives these days don't need any futzing anymore to get aligned properly, so that's mostly old hat.

If you mean "raw" as in not abstracted/virtualized/behind some other intermediate layer, then yeah, ZFS strongly prefers direct access to the drives because it assumes it has direct access. So when ZFS goes to write something and the subsystem comes back and says it's done, ZFS assumes it's telling the truth. Depending on your subsystem, this may or may not be correct. You also lose some of ZFS's features: it (generally) can no longer read SMART data to detect failing drives; it (generally) can no longer package reads/writes into transactions, so performance can be degraded; it can usually still detect file/checksum errors, but generally cannot correct them anymore; and any disk failure runs a much higher risk of somehow borking the entire pool when you try to replace the drive. That's a lot of "generally"s because each subsystem acts a bit differently, but it's never an ideal situation.
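
In practice "give ZFS the raw devices" usually just means handing zpool whole disks by their stable by-id names and letting ZFS partition and label them itself, e.g. (the disk IDs here are invented for the example):

code:
# whole disks, referenced by stable /dev/disk/by-id names rather than /dev/sdX
sudo zpool create tank mirror \
    /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE1 \
    /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE2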

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
It's similar to the reasoning why you should generally not use RAIDZ on hardware-RAID-backed vdevs - ZFS is built with assumptions about not being virtualized, as well as about mutual exclusion, that can impact data safety in the event of failures (although the recommendation for VMware-managed storage to use virtual compatibility mode, rather than physical like you would with MS clustering, is bizarre to me).

It's peculiar to want to use LVM with ZFS given that one of the tooling usage aims of ZFS was literally to avoid all the hassles of LVM's various commands (pvs, vgs, lvs, etc.), so now you'd be looking at a possible scenario of worst-of-both-worlds instead of best-of-both-worlds.

If you really want to though, I would use LVM on top of ZFS zvols for your block devices.
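
For the curious, that zvol-backed arrangement would look roughly like this (pool name and sizes are placeholders):

code:
# carve a 100G block device out of the existing pool
sudo zfs create -V 100G tank/lvmvol

# then treat the zvol like any other physical volume
sudo pvcreate /dev/zvol/tank/lvmvol
sudo vgcreate vg_on_zfs /dev/zvol/tank/lvmvol
sudo lvcreate -L 20G -n lv_test vg_on_zfs
That way ZFS keeps the checksumming and redundancy underneath, and LVM only does the carving up on top.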

Walked
Apr 14, 2003

Can anyone offer a suggestion to nail down a bottleneck?

I have a TS140 with an Adaptec 6405e RAID card and 4x 3TB 7200rpm drives in RAID10.

I just moved this from my desktop to the TS140 and went from ~300MB/s r/w to about 120.

The two systems are connected via iSCSI over 10GbE, and it maxes out the network until the TS140's RAM cache is full, then dies down to 120ish.

I've benchmarked this on the desktop and the TS140, and it's pretty consistently different in Iometer.

I'm thinking there has to be a bottleneck somewhere on the TS140, but it's a relatively modern PCIe slot.

What am I missing?

Config:
TS140, quad-core Xeon
20GB RAM
Intel X540-T2 10GbE
Adaptec 6405e
4x 3TB 7200rpm drives in RAID 10
Server 2016
StarWind Virtual SAN

The performance is the same local, network, and iSCSI.

I just don't know what would bottleneck it.

SynMoo
Dec 4, 2006

Were the drives connected to the same card when they were in your desktop?

The bottleneck looks like the controller writing to the drives. 120MByte/s is about what you'd expect writing to a single drive. Theoretically you could expect better in RAID10. Check your config to see how the card is configured to cache etc.
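
If the card is managed with Adaptec's arcconf utility, the controller and logical-device cache settings can be dumped from the CLI; something along these lines (controller number and exact output vary by tool/firmware version):

code:
# adapter-level settings, then logical-device settings (write-cache mode etc.)
arcconf getconfig 1 ad
arcconf getconfig 1 ld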

Walked
Apr 14, 2003

SynMoo posted:

Were the drives connected to the same card when they were in your desktop?

The bottleneck looks like the controller writing to the drives. 120MByte/s is about what you'd expect writing to a single drive. Theoretically you could expect better in RAID10. Check your config to see how the card is configured to cache etc.

Async writes; same card in both. It's really, truly rather strange.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

necrobobsledder posted:

It's peculiar to want to use LVM with ZFS given that one of the tooling usage aims of ZFS was literally to avoid all the hassles of LVM's various commands (pvs, vgs, lvs, etc.), so now you'd be looking at a possible scenario of worst-of-both-worlds instead of best-of-both-worlds.

If you really want to though, I would use LVM on top of ZFS zvols for your block devices.

I've used LVM with mdadm to deal with the scenario where you have a 4-drive RAID-6 and want to convert it to a 5-drive RAID-6, or convert a 6 x 1TB RAID-6 into a 4 x 3TB RAID-6.

Even if I tried to recreate something similar to what I have currently (split the drives into several partitions, create separate RAIDZ2 vdevs, and pool them), it wouldn't work, since apparently it's impossible to empty a vdev of data and remove it from a pool.

I guess this could be done with LVM-on-zvols, but that seems like too advanced a setup for my first dabble in ZFS. ZFS on an LVM block device sounds like a simpler and easier-to-understand setup, but I guess the abstraction layer is a problem.
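
For reference, the mdadm/LVM grow path being described is roughly this (hypothetical device and VG names, and a reshape like this should only ever be run with backups in hand):

code:
# add a fifth disk, then reshape the 4-drive RAID-6 onto it
sudo mdadm --add /dev/md0 /dev/sde
sudo mdadm --grow /dev/md0 --raid-devices=5
# (older mdadm versions may also want --backup-file for the critical section)

# once the reshape finishes, let LVM see the extra space
sudo pvresize /dev/md0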

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
What sort of chassis has 4+ 2.5" 15mm bays?

http://www.pcworld.com/article/3130234/storage-drives/seagate-drops-the-worlds-largest-tiny-hard-drive.html

PCWorld is reporting that 5TB 2.5" drives are going to be $85, which is suspect. Even if the pricing is a wash, I'd prefer 2.5" drives if there were a case that could make a 4-8 drive home NAS a good bit smaller.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
The DS416slim only takes 12.5mm drives. Rats.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Moey posted:

The DS416slim only takes 12.5mm drives. Rats.

Exactly! Who are these things for? Not laptops, not small enclosures, so what's the use case for a 15mm 2.5" drive?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
A lot of SAS drives fall into the 15mm 2.5" form factor (lower latency without needing to pay enterprise SSD prices is still valuable, and short-stroking via smaller platters is one approach). For those announced drives, I'd look for 2.5" SATA backplanes or maybe 2U or 3U servers with 2.5" bays.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Twerk from Home posted:

What sort of chassis has 4+ 2.5" 15mm bays?

I don't think you'll find many directly, buuuutttt you can absolutely find 5.25" -> 2.5" adapters and get 3 or 4 drives in there. So you could probably get a SFF box with two or three 5.25" bays and cram a gently caress-ton of storage in there. Getting 60TB of space out of a Shuttle or similar box would be nuts.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
What's the consensus of using 2.5" drives for casual storage vs 3.5"? They're gonna run quieter and more economical, but are they as reliable? I'm talking for home file storage and serving up movies.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

apropos man posted:

What's the consensus of using 2.5" drives for casual storage vs 3.5"? They're gonna run quieter and more economical, but are they as reliable? I'm talking for home file storage and serving up movies.

More economical? 3.5" drives are cheaper per GB and should run cooler.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You can grab external 2.5" drives and shuck them for around $110 per 4TB drive. Good 4TB drives are at least $140 regardless of size. I wouldn't expect them to hold up particularly well, given the original drives' warranties are maybe 2 years tops, but for home storage of Linux ISOs it really doesn't matter IMO. Drive prices have been going down rather slowly, and my strategy has been to upgrade capacity every so often, before the drive warranty on everything expires. My plan going forward is to primarily use 2.5" drives for bulk storage; if I need a lot of fast, low-latency disks and expect to use them for less than 6 months, I should probably just shell out money to run something in a cloud.

Heat / power is primarily associated with the RPMs and number of platters of the drive rather than the physical size of the drive.

Anime Schoolgirl
Nov 28, 2002

5TB for 85 dollars is well within "too good to be true" territory, especially for a 2.5" enterprise form-factor drive; hot-to-the-touch Toshiba 3.5" 5TB drives only go as low as $120.

Unless they mean production costs, in which case that's actually a believable figure.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

necrobobsledder posted:

Heat / power is primarily associated with the RPMs and number of platters of the drive rather than the physical size of the drive.

That makes sense. The only time I've noted the number of platters before purchase was on an old Seagate Momentus XT: I bought the single platter 250gb version because I obsessively figured that a single platter would load Windows from the outside edge and be therefore faster as an OS drive. How obsessively embarrassing!

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

apropos man posted:

That makes sense. The only time I've noted the number of platters before purchase was on an old Seagate Momentus XT: I bought the single platter 250gb version because I obsessively figured that a single platter would load Windows from the outside edge and be therefore faster as an OS drive. How obsessively embarrassing!

We all did stupid nerd poo poo in the aughts. I used Windows Server 2003 as my gaming machine for a few years because somebody, somewhere said that it had less bloat and was therefore faster than XP.


Pryor on Fire
May 14, 2013

they don't know all alien abduction experiences can be explained by people thinking saving private ryan was a documentary

The whole XP memory management thing was such a clusterfuck of exploits, many of which went unnoticed and unreported, that switching to any server version was probably a great idea even for games. At least up until SP1/SP2, or whenever they finally started thinking about security and it sunk in that the web probably wasn't going away.
