Saukkis
May 16, 2003


teamdest posted:

technically correct, sorry. however you're a fool to use a 250 with 2 500's, due to space loss.
Not a big issue with software RAID. Create a RAID5 from three 250 GB partitions and a RAID1 from the remaining two 250 GB partitions. You lose 250 GB compared to three 500 GB drives, but it's better than using the 500 GB drives as a RAID1 and keeping the 250 GB drive standalone.
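A rough sketch with mdadm, assuming the two 500 GB drives are split into 250 GB halves (device names made up for illustration):
code:
# sdb and sdc are the 500 GB drives, each split into two 250 GB partitions,
# sdd1 is the whole 250 GB drive
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2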


Saukkis
May 16, 2003


teamdest posted:

here's an interesting question:

I used LVM under Debian to create a Volume group, then logical volume on that volume group. then some bad poo poo happened, and I had to reinstall. I had consigned myself to the loss of what data was on this Logical Volume, but on attempting to rebuild it, i was greeted with "Can't Initialize Physical Volume /dev/hda of volume group a3p without -ff".

It had this error for all 4 of the disks I had planned to reinitialize. is there any way (since the metadata seems to be intact) to recover the array? all four disks are fine, the information regarding them has just been destroyed.
What exactly have you done so far? Did you try commands like pvscan, vgscan, lvscan and vgimport?
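If nothing has been overwritten yet, I would start with something like this before resorting to -ff (volume group name taken from your error message):
code:
pvscan                # does LVM still see physical volumes with metadata?
vgscan                # rebuild the list of volume groups
vgchange -ay a3p      # try to activate the volume group
lvscan                # do the logical volumes show up?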

Saukkis
May 16, 2003


complex posted:

The original ATA spec just didn't support hot-plug ability.
Pretty much. The Molex power connector wasn't designed for hotplugging either. If you look at the SATA power connector you'll see several longer pins; these are the ground pins and they make contact before the other pins do.

That doesn't mean you can't hotplug IDE, I've read about people doing it. It's just riskier and you need to be more careful. I believe you also need to do a rescan in Device Manager to make the drive show up.

SATA hotplug works pretty much as it should. A couple of weeks ago I needed to connect a SATA drive temporarily, so I just disconnected my SATA DVD drive, connected the HDD and it immediately showed up.

Saukkis
May 16, 2003


kalibar posted:

The OS hard drive (an old IDE Maxtor 120GB) in my home filebox just died on me today. I don't have any unused SATA ports, and I don't really want to buy another IDE drive.

I do have an open PCI-e x16 slot, an open PCI slot, and a spare 100GB 2.5" laptop SATA hard drive laying around. Is there some kind of magic part I can buy that would let me get this drive into the computer and use it?
One option is to get a SATA-to-IDE adapter. This is the simplest solution since it avoids any controller issues, but it isn't as cost-effective as getting a SATA controller. You may also need rails to mount the 2.5" drive in a 3.5" bay.

Saukkis
May 16, 2003


Interlude posted:

Bleh. So what, does Seagate just make all their drives with a 7 second error recovery period? Why is WD the only mfg with this "tech" ?
I believe Seagate has much the same system. They have desktop drives and RAID edition drives (the ES series?). They call the feature Error Recovery Control (ERC).

Saukkis
May 16, 2003


Doh004 posted:

(Double Post)

I lied, it is running the 8.9x version that was causing other people problems too. Now I can't find an exe of the 8.8 version, only a bootable version that I put on my bootable flash drive, but I don't know which file to run. There's no exe or bat file. Just sys, inf, cat and an OEM file.
Try going into Device Manager, opening the storage controller's properties, choosing to update the driver and pointing the search at the flash drive.

Saukkis
May 16, 2003


Vinlaen posted:

Well, I've been doing some research and apparently Server 2008 supports SMB2 which would be an advantage I suppose.

It also supports something called "DFS Namespaces" which will combine folders into one share (or something).

Why else would Server 2008 be a bad choice? (what disadvantages does it have?)
DFS isn't really what you want; it's more useful when you have several servers. If you have serverA with shareA, serverB with shareB and serverC with shareC, you could have them show up as separate folders under one \\domain\dfs\ share.

Saukkis
May 16, 2003


Wanderer89 posted:

Oh you guys... you can expand zfs.... you just add an array to the pool! ~looks over at 4tb raidz (1tb * 4) quickly filling up, and cheaper-by-the-day 1.5TB tri-platters~

Does anyone have a cheap solution between home-fab a box that can handle 10+ drives? Am I living in a dreamworld trying to avoid paying a couple hundred bucks for a nice rack/box that can support 10+ drives? Just talking physical space, can do pci-e SAS expansion for the sataII ports.
Is it possible to create ZFS arrays from partitions/slices? You could then divide the drives into several partitions and make separate arrays of them. When you want to add a disk, drop the arrays from the pool and recreate them one by one. This is what I do with my MDRAID+LVM setup.
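If ZFS accepts partitions the way mdadm does, the idea would look roughly like this (device names made up, and I haven't tried this with ZFS myself):
code:
# two raidz vdevs built from partitions instead of whole disks
zpool create tank raidz /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
zpool add tank raidz /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2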

And here's the simple harddrive "rack" I made years ago.

[post attachment: photo of the hard drive rack]

Saukkis
May 16, 2003


ufarn posted:

I just can't see how it works out in the end, as the drive will eventually fill itself up with prior files and folders that no longer exist.
Hopefully the program has the functionality to keep only periodic copies of files, or to remove backups of deleted files after a certain time. For example, if you have a file that changes daily, a good backup system could keep old versions of it for the past week, weekly versions for the past month, monthly versions for the past six months, and yearly backups after that.

Another approach could be a system where you set how much space the backups are allowed to use, and the backup software figures out how to maintain the maximum number of versions as far back as possible without going over the limit.

Of course, I wouldn't be surprised if free backup software that comes with an external drive lacks such features. But I would be hard pressed to call software that only does synchronization backup software.

Saukkis
May 16, 2003


DLCinferno posted:

The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.
Another way to accomplish the same thing is to split all the drives into suitably sized partitions, create arrays from the partitions and then combine them with LVM. I'm using an extreme version of this scheme, with all my drives split into 10+ partitions. I did it for flexibility when changing and adding drives, back before RAID expansion was a practical option in Linux.
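As a rough sketch (device names and sizes made up), say with two 1 TB drives and two 500 GB drives all cut into 500 GB partitions:
code:
# RAID5 across the four 500 GB partitions that exist on every drive
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# RAID1 from the leftover halves of the two 1 TB drives
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# glue the arrays together with LVM
pvcreate /dev/md0 /dev/md1
vgcreate vg_storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n storage vg_storage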

Saukkis
May 16, 2003


DLCinferno posted:

True, but I didn't recommend that because you need to be very clever about how you're choosing your RAID levels on the partition arrays and which ones are going into the same array, otherwise a single drive failing could end up wiping out the entire array.

In a simple example, assume two 500GB drives and one 1TB drive. Partition the TB in half and create a RAID5 array across the four partitions. Unfortunately, if that TB drive goes down, it will effectively kill two devices in the array and render it useless.

I'd be curious to see what your partition/array map looks like - it must have a taken awhile to setup properly if you have over ten partitions on some disks?


It really doesn't require much cleverness, you simply have to remember to build each array from partitions on separate sdX devices.

Here's an example partition table from cfdisk. /dev/sdb, /dev/sdc and /dev/sdd are similar, just with a slightly different number of partitions. /dev/sda1 currently houses the operating system; in the future I'll RAID1 it with another partition.

code:
   Name      Flags     Part Type FS Type               [Label]     Size (MB)
 -----------------------------------------------------------------
   sda1      Boot      Primary Linux raid autodetect       15002.92
   sda2                Primary Linux raid autodetect       81923.79
   sda3                Primary Linux raid autodetect       81923.79
   sda5                Logical Linux raid autodetect       81923.79
   sda6                Logical Linux raid autodetect       81923.79
   sda7                Logical Linux raid autodetect       81923.79
   sda8                Logical Linux raid autodetect       81923.79
   sda9                Logical Linux raid autodetect       81923.79
   sda10               Logical Linux raid autodetect       81923.79
   sda11               Logical Linux raid autodetect       81923.79
   sda12               Logical Linux raid autodetect       81923.79
   sda13               Logical Linux raid autodetect       81923.79
   sda14               Logical Linux raid autodetect       81923.79
Here's a snippet from /proc/mdstat. /dev/md10-md18 are 245GB RAID5 arrays with 4 devices. /dev/md19-md21 are 163GB RAID1 arrays with 2 devices; they are made from the leftover partitions on the two 1TB drives.

code:
md21 : active raid1 sdc14[0] sda14[1]
      80003584 blocks [2/2] [UU]

md10 : active raid5 sdd2[0] sdc2[1] sdb2[2] sda2[3]
      240010752 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md11 : active raid5 sdd3[0] sdc3[1] sdb3[2] sda3[3]
      240010752 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
Then I created two LVM volume groups from those arrays, sized 915GB and 1.34TB, but I'm considering combining them into one VG. The original reason for the two VGs was that I didn't have enough drives for my needs: vg_safehouse was made of RAID1 and RAID5 devices for storing the more valuable files, and vg_warehouse was a single large drive for the rest. Now that everything is RAIDed there's no need for the warehouse anymore.
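If I go ahead with combining them it should just be a matter of deactivating the warehouse group and merging it into the other one:
code:
vgchange -an vg_warehouse         # the group being merged in has to be inactive
vgmerge vg_safehouse vg_warehouse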


DLCinferno posted:

Sure are. Literally, unplug from one machine, plug into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running.

That's one of the main reasons I like ZFS/mdadm at home - no need to buy pricey hardware controllers, but you get most of the same benefits.
I think if you set the partitions to the RAID autodetect type you don't even need the assemble command. During boot the Linux kernel will see a bunch of partitions that seem to belong to arrays and figure out which of them go together.
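And in the manual case it's still just one command per array, or a single scan for the lot (device names as in my partition table above):
code:
mdadm --assemble /dev/md10 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# or let mdadm work everything out from the superblocks:
mdadm --assemble --scan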

Saukkis
May 16, 2003


FooGoo posted:

I didn't see this question posed in the fact or forum so here it goes:

Is there any advantage to buying an external drive versus an internal drive and an enclosure provided it will only be used occasionally and won't be thrown across the room?
Whenever I've looked, externals were available slightly cheaper than an enclosure and a drive. Externals also often look nicer or more refined and may have some extra features.

Otherwise I would usually recommend an enclosure and a drive. You can upgrade the drive to a bigger one later, and you get to choose what drive goes inside, instead of most likely the cheapest drive the manufacturer could find. If there's some kind of failure in the enclosure you can take the drive out and have a better chance of recovery without voiding the warranty, which would probably happen with an external.

Saukkis
May 16, 2003

Are any of the current 2TB eco hard drives suitable for use with Linux software RAID? I vaguely remember that earlier WD Green Powers had some behaviours that made them less suitable for RAID. I doubt the performance would matter much since the rest of the hard drives would be older models, and the price difference to the non-eco models is up to 40€.

Saukkis
May 16, 2003

This issue with rearranging the drives is the biggest reason I still use a system similar to Synology Hybrid RAID. I've partitioned my hard drives into 80 GB partitions, created a bunch of mdadm RAID6/RAID1 devices from them and then created LVM volumes on top. Thanks to pvmove I can move data off any of the RAID devices, delete the RAID and recreate it with a different number of drives or a different geometry. Since pvmove uses mirrored LVM volumes during the data transfer, my storage should never be in a degraded state.

With this setup you could start with a 3x1TB RAID5, upgrade it to a 6x1TB RAID6 and later change to a 4x4TB RAID6. The system has proven itself: I'm still using the same volume group I originally created with 80GB IDE drives or 250GB IDE/SATA drives (I can't remember for sure anymore, it was about 10 years ago), and the LVM has never experienced a major failure. I'm waiting for btrfs to mature, and to get more experience with it, so I could have a filesystem with checksumming; for now I just have copious amounts of md5sum files all over the place.
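The upgrade cycle goes roughly like this for each RAID device (array, partition and VG names just examples):
code:
pvmove /dev/md5                      # migrate the extents off this array onto the others
vgreduce vg_storage /dev/md5         # drop it from the volume group
mdadm --stop /dev/md5
mdadm --create /dev/md5 --level=6 --raid-devices=6 /dev/sd[a-f]7   # recreate with the new geometry
pvcreate /dev/md5
vgextend vg_storage /dev/md5         # and back into the pool it goes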

A minor problem with this system is that after a bunch of pvmoves your data may be scattered all over the place. I recently upgraded from 4 drives to 6 and wanted to rearrange the volumes more optimally and logically. I had to draw a graph in PowerPoint to figure out where all my extents were lying. Not all of it made sense.

Saukkis
May 16, 2003


Skandranon posted:

I personally prefer both. I have a 30TB array that is also backed up to Crashplan (well, working on it, taking it's time). The array is for me to manage the data, Crashplan is literally only for when the world ends and all my drives fail at the same time. I hope to never have to use it. It's nice to own your data, cloud providers can gently caress up as well. Consider Bitcasa, they offered an 'unlimited' plan, then changed their mind and decided not to. Told all their customers they had 1 month to either pay up for super-expensive business plan, or download all their data before it was deleted. A lot of people decided they didn't like Bitcasa anymore, but were having serious trouble downloading everything they had uploaded, because everyone else was trying to do the same. Some got screwed and were forced to pay for the business plan simply to not have their data deleted. If Crashplan decides to do something similar, at least I won't be in the same situation, I can just cut clean and go with a different provider.
Cloud storage certainly has its uses, but not as the only storage. If it's your only copy you need to trust the provider to handle the backups, and you'll never know exactly how they've done it. There aren't many situations where large internal or external hard drives or a NAS won't work.

Saukkis
May 16, 2003


G-Prime posted:

Is there any faster/better way than just running an rsync to validate that the massive initial data load I did from my old Windows box to my new FreeNAS went off without a hitch? I'd like to not spend another 5-6 days doing the validation.

My usual method with large transfers is to use MD5 checksums. It is about the fastest way to make sure the data is intact. On Windows I use either MD5summer or md5sum in Cygwin.

Saukkis
May 16, 2003


Megaman posted:

Ok, so I'm going to pose a scenario then. I run a several terabyte NAS, and do nightly rsyncs to a standby NAS as an online "backup". The only way I currently have to verify that everything copied is a log of the details of the copy (copies/deletes), but that's it. I want to make sure the entire contents of the highest level folder of the source is exactly the same as the destinations. I assume I would also do this with checksums? If so, how would I generate one giant checksum for that folder?

If you are able to use a command line, then the easiest way would be to switch to the highest level folder and run this command:
code:
find . -type f -print0 | xargs -0 md5sum > /checksum_folder/all_files.md5sum
If you only want checksums of the files created or modified in the past 24 hours you can use the command
code:
find . -type f -ctime -1 -print0 | xargs -0 md5sum > /checksum_folder/new_files.md5sum
Checking is done with 'md5sum -c /checksum_folder/new_files.md5sum'.

But if you are using rsync for the transfer, rsync already calculates checksums (MD4 or MD5, depending on the version) of the transferred data, so you can be pretty sure the data was transferred correctly. It's just that the receiving end does the calculation before the data is written to disk, so it's not quite as reassuring as separate checksums.
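If you want rsync itself to do a verification pass afterwards, a dry run with full checksums should flag anything that differs between the two sides (paths made up):
code:
# -n = dry run, --checksum = compare file contents instead of size and mtime;
# any file rsync would want to re-send differs between source and destination
rsync -avn --checksum /tank/data/ standby-nas:/tank/data/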

Saukkis
May 16, 2003

IIRC, years ago Tom's Hardware or some other tech website tested how much running at 1x/4x/8x affects graphics cards. They just put electrical tape over the contacts on the card to turn a normal x16 card into a narrower PCIe link.

Saukkis
May 16, 2003

My stress test method has been dividing the drive into 4 partitions and then running 4 simultaneous Bonnie++ benchmarks on them for an extended time.
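Something along these lines, assuming the four partitions are mounted under /mnt (Bonnie++ refuses to run as root unless you give it -u):
code:
for d in /mnt/test1 /mnt/test2 /mnt/test3 /mnt/test4; do
    bonnie++ -d "$d" -u nobody &
done
wait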

Saukkis
May 16, 2003


Richard M Nixon posted:

I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes?

Upgrade to RAID-6. RAID-5 is obsolete with these storage sizes.
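With Linux software RAID (mdadm) the conversion can even be done in place, if you add one more drive for the extra parity. A sketch with made-up device names; take a backup first anyway:
code:
mdadm /dev/md0 --add /dev/sde1
mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-grow-backup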

Saukkis
May 16, 2003


FCKGW posted:

This will be in the attic where a single POE switch is, no router.

A single-drive NAS is probably the best option, but if you want to use an existing external drive and the performance requirements aren't high, a Raspberry Pi or a similar computer would be a cheap option.

Saukkis
May 16, 2003


Skandranon posted:

It's something that has to be answered on a case by case backup. What does it cost to backup my data in full? How much does it hurt to lose any of it? How much downtime am I willing to endure to restore from my backup? How much is my time worth futzing around with this crap? These are somewhat hard to answer, but an extra hard drive is fairly cheap.

Also consider how much data will be generated between backups. Losing even one workday's worth of data can already be quite a loss.

Saukkis
May 16, 2003


PerrineClostermann posted:

Computers interpret "Giga" "Mega" and "Tera" bytes as powers of two, instead of powers of ten. And by computers, I mean dumb poo poo like windows. The proper amount of space is actually there, it's just displayed dumb.

What Windows is actually reporting is space in "gibi", "mebi" and "tebi" bytes, or GiB, MiB and TiB.

It might be useful for drives, but I'm not sure I would be completely happy if the OS told me my 32 GiB of RAM is 34.3597 GB.

Saukkis
May 16, 2003


PerrineClostermann posted:

RAM is different; it's advertised as GB, but uses GiB for its real capacity.

Yes, there's a difference in how they're advertised, but what is the same is that both are built on powers of two. The basic units of drives are either 512 B or 4 KiB sectors; a 4.096 kB sector wouldn't be nearly as convenient. Fortunately the RAM manufacturers weren't able to cut the DIMMs short, they had to build them to full GiBs.

The size discrepancy on drives is annoying, but in the end it doesn't matter; you get used to it and stop feeling cheated.

Saukkis
May 16, 2003

Once you have 4 drives you should also consider RAID-6. You would only get 6TB of usable space, but you could survive two drive failures.

Saukkis
May 16, 2003


Thermopyle posted:

Also...I've run out of space in my already-huge case for hard drives. I've either got to retire perfectly functional 2TB drives or figure out a way to add a secondary box for drives. What's a good solution for that?

This was my solution years ago.

Saukkis
May 16, 2003


necrobobsledder posted:

Oh, with the 32 disk system that's not the worst thing to try to cool. I was looking at the top-loading case similar to what Backblaze custom-built as my nightmare scenario of chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta).

The HP SL4500 is another example, with up to 60 hard drives in one case. But I guess hard drives aren't that hard to keep cool enough, considering that traditionally desktop cases had hardly any airflow over the hard drives, like in that InWin Q500 I linked a few days ago. One of the reasons I used a separate hard drive stand was to get some airflow over them. Later I modified the case and put a 120mm fan in front of the hard drive bracket. Front intake fans that blow over the hard drives seem to be a relatively recent invention.

Saukkis
May 16, 2003


fletcher posted:

Has anybody ever actually had ESD issues when working on computers? I never even think about it, even when I'm working on carpet and poo poo

Ages ago we had computers with the original Antec Sonata at work, and I occasionally wore pants that generated quite a bit of static electricity. I noticed that if I touched the chromed USB port cover on the Sonata, the machine would immediately reboot. What's disturbing is that I couldn't find any electrical connection from the cover to the computer. It was just a chromed plastic cover attached to the plastic front panel, about 1 cm away from the USB ports. Still rebooted every time.

I wish someone would test ESD properly: take an old computer and see how big a static discharge it takes to fry the RAM sticks.

Saukkis
May 16, 2003


redeyes posted:

Um, probably not. The NAS controller probably wont be using Intel RAID and might not even be formatted as NTFS.

Yes, it most likely isn't compatible with Intel RAID, but how often do NAS boxes use standard Linux mdadm RAID?

Saukkis
May 16, 2003


Shachi posted:

Also the nature of how Synology software manages the device and encrypts it makes it resistant to the risk of ransom ware?

I wouldn't say so. The encryption mostly helps against someone grabbing the disks and stealing the data. It may also protect against someone grabbing the whole Synology device, if the encryption requires inputting a password before the data becomes accessible.

But once the Synology is up and running, any computer that has write access to the NAS can be used by ransomware to encrypt the data. If most of the client computers only have read access to the NAS, then it is immune against those computers.

Backups are the only protection against computers that can write to the NAS, and those computers must not be able to access the backups at the same time.

Saukkis
May 16, 2003

Does anyone have experience or opinions on running ZFS or Btrfs on LVM volumes? I'm interested in having a checksumming filesystem, but I wouldn't want to give up the flexibility that LVM and mdadm provide. I remember reading years ago that you should use ZFS only with raw devices, not even partitions, but I don't remember the justification for that.
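What I have in mind would be something like this, a pool sitting on top of an LVM logical volume (names made up):
code:
lvcreate -L 500G -n zfs_test vg_storage
zpool create testpool /dev/vg_storage/zfs_test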

Saukkis
May 16, 2003


necrobobsledder posted:

It's peculiar to want to use LVM with ZFS given that one of the tooling usage aims of ZFS was literally to avoid all the hassles of LVM's various commands (pvs, vgs, lvs, etc.), so now you'd be looking at a possible scenario of worst-of-both-worlds instead of best-of-both-worlds.

If you really want to though, I would use LVM on top of ZFS zvols for your block devices.

I've used LVM with mdadm to deal with the scenario where you have a 4-drive RAID-6 and want to convert it to a 5-drive RAID-6, or convert a 6 x 1TB RAID-6 into a 4 x 3TB RAID-6.

Even if I tried to recreate the system I have currently, split the drives into several partitions, create separate RAIDZ2 vdevs and pool them, it wouldn't work, since apparently it's impossible to empty a vdev of data and remove it from a pool.

I guess this could be done with LVM on zvols, but that seems like too advanced a setup for my first dabble in ZFS. A ZFS-on-LVM block device sounds like a simpler setup that's easier to understand, but I guess the extra abstraction layer is a problem.
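For reference, the LVM-on-zvols direction would be something along these lines (names made up), I just don't feel confident enough with ZFS yet to build on top of it:
code:
zfs create -V 500G tank/pv0           # a 500 GB zvol to act as an LVM physical volume
pvcreate /dev/zvol/tank/pv0
vgcreate vg_on_zfs /dev/zvol/tank/pv0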

Saukkis
May 16, 2003

Has anyone done tests on how different file systems behave with broken memory?

Saukkis
May 16, 2003


necrobobsledder posted:

I think it'd be possible to do a superficial synthetic test using a virtualization system to inject "bad" memory reads and writes somehow but I'm not aware of such a system (there's network level tools to simulate bad network links, of course).

I'd be more interested in real-world testing with memory that has known defects, maybe a few different DIMMs with different kinds of faults. Do a bunch of reading and writing and check how badly it breaks the data. What I'd especially like to see tested is read-only use with ZFS, like a music collection.

Saukkis
May 16, 2003

A while ago I needed to buy a new drive, and for price and availability reasons I ended up with the common 4TB Seagate. I wasn't entirely happy about getting another Seagate, but the WDs would have been a bit more expensive and not as easily available, and I wouldn't really expect them to be more reliable. HGST drives would have been about 100€ more expensive, and they just wouldn't be worth the price for me.

HGST may be more reliable, but even those wouldn't be reliable enough; you still need RAID and backups. What the higher price buys you is a smaller chance of having to go through the hassle of replacement and a possible restore. And even the Seagate drives probably won't fail during their useful life span, except the 3TB models.

Saukkis
May 16, 2003


redeyes posted:

I will try this next. Problem is the card has 2 x sata ports and I need a total of 7 or 8. I may have to just buy some HBA card.. I would just do this but the Poweredge itself is a limited platform for my uses. Only 6 drive bays. No easy way to expand with hotswap cages. Motherboard is proprietary. GRR. 2nd processor heatsink runs over 100 bux because DELL!

Another option could be to set the ASMedia card so that it doesn't report connected drives during POST. The OS should still be able to pick up the drives. This used to work back in the days of hard drive size limitations.

Saukkis
May 16, 2003


Paul MaudDib posted:

So, um... how in the world is package management/versioning not a solved problem coming from the people who brought you jails? Why not build/run literally all applications with a minimal set of dependencies symlinked into a chroot/jail? At that point package management should basically look a lot like a bundler file for Ruby.

How are security updates to the libraries handled with jails? If I understood your symlink remark correctly, the jailed application will still use the OS libraries, so that takes care of security updates. But if all the libraries come from the OS, then what's the point of jailing? And if the jail has its own copy of the libraries, does that mean that whenever there is a security update to any of the included libraries you have to release a security update to your application? I have the same concern with Ubuntu Snap, and with a cursory look I wasn't able to figure out how they are addressing it. Docker can be included in this too.

If I were a developer considering Docker, I would have to think hard about whether I want to shoulder that much responsibility. I work as a sysop, and if it comes from the repositories it's our responsibility; the developer doesn't need to care. If there's an update to the kernel, OpenSSL, Java, Python or Apache, Red Hat will send us an email and we'll do what's necessary. The developer only has to worry about the piece of PHP, Ruby or Java they have written.

Saukkis
May 16, 2003


CommieGIR posted:

Its running a VM cluster and Docker containers for plex, has a PERC 6/e connected to a MD1000 array, FreeNAS is running on two 100GB SSDs in RAID1 and the VMs are stored on a 500GB RAID6 array of 6 146GB SAS disks.

Sounds like instead of spending money to move the drives to your garage, you should buy a pair of cheaper 500GB SSDs to replace the RAID6 array.

Saukkis
May 16, 2003


CommieGIR posted:

I'd love to, but all the drives are recycled, didn't spend a dime. If I had the money, I'd replace all the RAID6 with SSDs like you said.

I meant that if you want to put the drives in the garage you will have to buy new hardware, so it's better to buy SSDs instead.


Saukkis
May 16, 2003


phongn posted:

I believe that most Intel SATA ports can be set to hot-swap (may need to be set in the BIOS). You also need a proper backplane for electrical safety.

Something like that might be prudent, but I've always just hot-plugged SATA drives without issues or extra setup. Even that one time when I hot-plugged a SATA power cable into an IDE connector nothing got broken. (I bought a 750 GB SATA drive from a local computer store and never actually looked at the connector before I saw sparks.)
