|
teamdest posted:Technically correct, sorry. However, you're a fool to use a 250 with two 500s, due to space loss.
|
# ¿ Mar 24, 2008 06:29 |
|
teamdest posted:here's an interesting question:
|
# ¿ Apr 1, 2008 11:01 |
|
complex posted:The original ATA spec just didn't support hot-plugging. That doesn't mean you can't hotplug IDE; I've read about some people doing it. It's just riskier and you need to be more careful. I believe you also need to do a scan in Device Manager to have the drive show up. SATA hotplug works pretty much as it should. A couple of weeks ago I needed to connect a SATA drive temporarily, so I just disconnected my SATA DVD drive and connected the HDD, and it immediately showed up.
|
# ¿ Aug 2, 2008 17:57 |
|
kalibar posted:The OS hard drive (an old IDE Maxtor 120GB) in my home filebox just died on me today. I don't have any unused SATA ports, and I don't really want to buy another IDE drive.
|
# ¿ Aug 18, 2008 17:54 |
|
Interlude posted:Bleh. So what, does Seagate just make all their drives with a 7 second error recovery period? Why is WD the only mfg with this "tech" ?
|
# ¿ Jun 14, 2009 19:12 |
|
Doh004 posted:(Double Post)
|
# ¿ Aug 13, 2009 18:49 |
|
Vinlaen posted:Well, I've been doing some research, and apparently Server 2008 supports SMB2, which would be an advantage I suppose.
|
# ¿ Aug 15, 2009 22:19 |
|
Wanderer89 posted:Oh you guys... you can expand zfs.... you just add an array to the pool! ~looks over at 4tb raidz (1tb * 4) quickly filling up, and cheaper-by-the-day 1.5TB tri-platters~ And here's the simple harddrive "rack" I made years ago.
|
# ¿ Dec 3, 2009 19:40 |
|
ufarn posted:I just can't see how it works out in the end, as the drive will eventually fill itself up with prior files and folders that no longer exist. Another approach could be a system where you set how much space the backups are allowed to use, and the backup software then figures out a way to keep the maximum number of different versions as far back as possible without going over the limit. Of course, I wouldn't be surprised if free backup software that comes with an external drive lacks such features. But I would be hard pressed to call software that only does synchronization backup software.
|
# ¿ Oct 30, 2010 23:40 |
|
DLCinferno posted:The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.
|
# ¿ Nov 7, 2010 15:01 |
|
DLCinferno posted:True, but I didn't recommend that because you need to be very clever about how you're choosing your RAID levels on the partition arrays and which ones are going into the same array; otherwise a single drive failing could end up wiping out the entire array. Here's an example partition table from cfdisk. /dev/sdb, /dev/sdc and /dev/sdd are similar, just with a slightly different number of partitions. /dev/sda1 is currently housing the operating system. In the future I'll RAID1 it with another partition. code:
DLCinferno posted:Sure are. Literally, unplug from one machine, plug into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running.
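A minimal sketch of what that move looks like. The device names here (/dev/md0 and the /dev/sd[b-d]1 members) are placeholders for your real array and partitions, and the commands need root:

```shell
# List the array signatures mdadm can see on the attached drives:
mdadm --examine --scan

# Assemble one array explicitly from its member partitions:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Or let mdadm find and assemble every array it recognizes:
mdadm --assemble --scan
```

The array metadata lives on the members themselves, which is exactly why the surrounding hardware doesn't matter.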
|
# ¿ Nov 13, 2010 19:04 |
|
FooGoo posted:I didn't see this question posed in the FAQ or forum, so here it goes: Otherwise I would usually recommend an enclosure and a drive. You can upgrade the drive to a bigger one later, and you can choose what drive goes inside, instead of most likely the cheapest drive the manufacturer could find. If there's some kind of failure with the enclosure, you can take the drive out and have better chances of recovery without voiding the warranty, which would probably happen with an external.
|
# ¿ Dec 28, 2010 23:33 |
|
Are any of the current 2TB eco-harddrives suitable for use with Linux software RAID? I vaguely remember that earlier WD Green Powers had some behaviours that made them not as good for RAID. I doubt the performance would matter much, since the rest of the harddrives would be older models, and the price difference to non-eco models is up to 40€.
|
# ¿ Sep 1, 2011 12:03 |
|
This issue with rearranging the drives is the biggest reason I still use a system similar to Synology Hybrid RAID. I've partitioned my harddrives into 80 GB partitions, created a bunch of mdadm RAID6/1 devices from them, and then created LVM volumes on top. Thanks to pvmove I can move data off any of the RAID devices, delete the RAID, and recreate it with a different number of drives or a different geometry. Since pvmove uses mirrored LVM volumes during the data transfer, my storage should never be in a degraded state. With this setup you could start with a 3x1TB RAID5, upgrade it to a 6x1TB RAID6, and later change to a 4x4TB RAID6.

This system has proven itself: I'm still using the same volume group I originally created with 80GB IDE drives or 250GB IDE/SATA drives. I can't remember for sure anymore, it was about 10 years ago, and the LVM has never experienced a major failure. I'm waiting for btrfs to mature and to get more experience with it so I could have a filesystem with checksumming; for now I just have copious amounts of md5sum files all over the place.

A minor problem with this system is that after a bunch of pvmoves your data may be scattered all over the place. I recently upgraded from 4 drives to 6 and wanted to rearrange the volumes more optimally and logically. I had to draw a graph in PowerPoint to figure out where all my extents were lying. Not all of it made sense.
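The reshape cycle described above looks roughly like this as commands. This is only a sketch: the volume group name `storage` and the device names are hypothetical, it needs root, and the volume group must have enough free extents elsewhere to hold the moved data:

```shell
# Migrate all extents off the RAID device we want to rebuild:
pvmove /dev/md1

# Detach it from LVM and tear down the array:
vgreduce storage /dev/md1
pvremove /dev/md1
mdadm --stop /dev/md1

# Recreate it with a new geometry, e.g. six members as RAID6:
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[b-g]2

# Hand the rebuilt device back to the volume group:
pvcreate /dev/md1
vgextend storage /dev/md1
```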
|
# ¿ Jul 31, 2015 22:38 |
|
Skandranon posted:I personally prefer both. I have a 30TB array that is also backed up to Crashplan (well, working on it, taking its time). The array is for me to manage the data; Crashplan is literally only for when the world ends and all my drives fail at the same time. I hope to never have to use it. It's nice to own your data; cloud providers can gently caress up as well. Consider Bitcasa: they offered an 'unlimited' plan, then changed their mind and decided not to. They told all their customers they had 1 month to either pay up for a super-expensive business plan, or download all their data before it was deleted. A lot of people decided they didn't like Bitcasa anymore, but were having serious trouble downloading everything they had uploaded, because everyone else was trying to do the same. Some got screwed and were forced to pay for the business plan simply to not have their data deleted. If Crashplan decides to do something similar, at least I won't be in the same situation; I can just cut clean and go with a different provider.
|
# ¿ Aug 19, 2015 20:26 |
|
G-Prime posted:Is there any faster/better way than just running an rsync to validate that the massive initial data load I did from my old Windows box to my new FreeNAS went off without a hitch? I'd like to not spend another 5-6 days doing the validation. My usual method with large transfers is to use MD5 checksums. It is about the fastest way to make sure the data is intact. On Windows I either use MD5summer or md5sum in Cygwin.
|
# ¿ Sep 7, 2015 21:27 |
|
Megaman posted:Ok, so I'm going to pose a scenario then. I run a several terabyte NAS, and do nightly rsyncs to a standby NAS as an online "backup". The only way I currently have to verify that everything copied is a log of the details of the copy (copies/deletes), but that's it. I want to make sure the entire contents of the highest level folder of the source is exactly the same as the destination's. I assume I would also do this with checksums? If so, how would I generate one giant checksum for that folder? If you are able to use a command line, then the easiest way would be to switch to the highest-level folder and run this command: code:
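One way to do it (a sketch, with a placeholder path): checksum every file, sort the list so the order is stable, then checksum the list itself to get one combined value.

```shell
# Hash every file under the folder, sort by path for a stable order,
# then hash the resulting list into one combined checksum.
cd /path/to/folder   # placeholder; use your highest-level folder
find . -type f -exec md5sum {} + | sort -k 2 | md5sum
```

Run the same pipeline on both the source and the destination; identical output means identical file names and contents (it says nothing about permissions or timestamps).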
But if you are using rsync for the transfer, then rsync calculates MD4 checksums of the transferred data, so you can be pretty sure the data was transferred correctly. It's just that the receiving end does the calculation before the data is written to disk, so it's not quite as sure as separate checksums.
|
# ¿ Sep 8, 2015 18:21 |
|
IIRC, years ago Tom's Hardware or some other tech website tested how much x1/x4/x8 affects graphics cards. They just put electrical tape over the contacts on the card to turn a normal x16 card into a narrower PCIe card.
|
# ¿ Sep 28, 2015 17:29 |
|
My stress-test method has been dividing the drive into 4 partitions and then running 4 simultaneous Bonnie++ benchmarks on them for an extended time.
|
# ¿ Dec 2, 2015 22:45 |
|
Richard M Nixon posted:I haven't been following the discussions about RAID starting to be a bad thing with huge disks, but now that I'm experiencing it firsthand I'm ready to make a change. What is the new suggested software solution (Linux based) for me to combine multiple large drives for NAS/media storage purposes? Upgrade to RAID-6. RAID-5 is obsolete at these storage sizes.
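The usual back-of-the-envelope reason, with illustrative numbers that are my assumption rather than anything from the thread: a degraded RAID-5 rebuild must read every surviving drive in full, and consumer drives are often specced at one unrecoverable read error (URE) per 1e14 bits.

```shell
# Rough odds that a degraded 4 x 4TB RAID-5 rebuild hits at least one
# URE, modelling errors as independent at 1 per 1e14 bits read.
awk 'BEGIN {
  bits = 3 * 4e12 * 8    # three surviving 4TB drives, read in full
  ure  = 1e14            # bits per unrecoverable read error (spec sheet)
  printf "P(rebuild hits a URE) ~ %.0f%%\n", 100 * (1 - exp(-bits / ure))
}'
```

RAID-6 dodges this failure mode because the second parity can repair a URE encountered during a single-drive rebuild.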
|
# ¿ Feb 14, 2016 10:29 |
|
FCKGW posted:This will be in the attic where a single POE switch is, no router. A single-drive NAS is probably the best option, but if you want to use an existing external drive and performance requirements aren't high, a Raspberry Pi or a similar computer would be a cheap option.
|
# ¿ Feb 22, 2016 19:30 |
|
Skandranon posted:It's something that has to be answered on a case by case basis. What does it cost to back up my data in full? How much does it hurt to lose any of it? How much downtime am I willing to endure to restore from my backup? How much is my time worth futzing around with this crap? These are somewhat hard to answer, but an extra hard drive is fairly cheap. Also consider how much data will be generated between backups. Losing even one workday's worth of data can already be quite a loss.
|
# ¿ Feb 23, 2016 23:44 |
|
PerrineClostermann posted:Computers interpret "Giga" "Mega" and "Tera" bytes as powers of two, instead of powers of ten. And by computers, I mean dumb poo poo like windows. The proper amount of space is actually there, it's just displayed dumb. It might be useful for drives, but I'm not sure I would be completely happy when the OS tells me I have 34.3597 GB of RAM.
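That figure is exactly what 32 GiB of RAM looks like when relabeled in decimal gigabytes:

```shell
# 32 GiB expressed in decimal GB: 32 * 2^30 bytes / 10^9.
awk 'BEGIN { printf "32 GiB = %.4f GB\n", 32 * 2^30 / 1e9 }'
```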
|
# ¿ Mar 14, 2016 19:14 |
|
PerrineClostermann posted:RAM is different; it's advertised as GB, but uses GiB for its real capacity. Yes, there's a difference in how they're advertised, but what's the same is that both are built on powers of two. The basic units of drives are either 512 B or 4 KiB; 4.10 kB wouldn't be nearly as convenient. Fortunately the RAM manufacturers weren't able to cut the DIMMs short; they had to build them to full GiBs. The size discrepancy on drives is annoying, but in the end it doesn't matter, and you get used to it and won't feel cheated anymore.
|
# ¿ Mar 15, 2016 00:03 |
|
After you have 4 drives you should also consider using RAID-6. You would only get 6TB of usable space, but you could survive two drive failures.
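The trade-off in numbers, assuming the 4 x 3TB drives implied above: usable space is (n-1) drives' worth for RAID-5 and (n-2) for RAID-6.

```shell
# Usable capacity for n equal drives of s TB each.
awk 'BEGIN {
  n = 4; s = 3
  printf "RAID-5: %d TB usable, survives 1 drive failure\n", (n - 1) * s
  printf "RAID-6: %d TB usable, survives 2 drive failures\n", (n - 2) * s
}'
```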
|
# ¿ Apr 4, 2016 23:44 |
|
Thermopyle posted:Also...I've run out of space in my already-huge case for hard drives. I've either got to retire perfectly functional 2TB drives or figure out a way to add a secondary box for drives. What's a good solution for that? This was my solution years ago.
|
# ¿ May 4, 2016 17:50 |
|
necrobobsledder posted:Oh, with the 32 disk system that's not the worst thing to try to cool. I was looking at the top-loading case similar to what Backblaze custom-built as my nightmare scenario of chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta). The HP SL4500 is another example, with up to 60 harddrives in one case. But I guess harddrives aren't that hard to keep cool enough, considering that traditionally desktop cases had hardly any airflow over the harddrives, like in that InWin Q500 I linked a few days ago. One of the reasons I used a separate harddrive stand was to get some airflow over them. Later I modified the case and put a 120mm fan in front of the harddrive bracket. Front intake fans that blow over the harddrives seem to be a relatively recent invention.
|
# ¿ May 5, 2016 20:26 |
|
fletcher posted:Has anybody ever actually had ESD issues when working on computers? I never even think about it, even when I'm working on carpet and poo poo Ages ago we used to have computers with the original Antec Sonata at work, and I occasionally wore pants that generated quite a bit of static electricity. I noticed that if I touched the chromed USB port cover on the Sonata, the machine would immediately reboot. What's disturbing is that I couldn't figure out any kind of electrical connection from the cover to the computer. It was just a chromed plastic cover attached to the plastic case front, about 1 cm away from the USB ports. Still rebooted every time. I wish someone would test ESD: take an old computer and see how big a static discharge it would take to fry the RAM sticks.
|
# ¿ Jun 10, 2016 16:42 |
|
redeyes posted:Um, probably not. The NAS controller probably wont be using Intel RAID and might not even be formatted as NTFS. Yes, it most likely isn't compatible with Intel RAID, but how often do NAS boxes use standard Linux mdadm RAID?
|
# ¿ Jun 30, 2016 22:26 |
|
Shachi posted:Also the nature of how Synology software manages the device and encrypts it makes it resistant to the risk of ransomware? I wouldn't say so. The encryption helps most against someone grabbing the disks and stealing the data. It may also protect against someone grabbing the whole Synology device, if the encryption requires entering a password before the data becomes accessible. But when the Synology is up and running, any computer that has write access to the NAS can let ransomware encrypt the data. If most of the client computers only have read access to the NAS, then it is immune against those computers. But backups are the only protection against computers that can write to the NAS, and those computers must not be able to access the backups at the same time.
|
# ¿ Aug 2, 2016 22:55 |
|
Does anyone have experiences or opinions on running ZFS or Btrfs on LVM volumes? I'm interested in having a checksumming filesystem, but I wouldn't want to give up the flexibility that LVM and mdadm provide. I remember reading years ago that you should use ZFS only with raw devices, not even partitions, but I don't remember the justification for that.
|
# ¿ Oct 15, 2016 14:35 |
|
necrobobsledder posted:It's peculiar to want to use LVM with ZFS given that one of the tooling usage aims of ZFS was literally to avoid all the hassles of LVM's various commands (pvs, vgs, lvs, etc.), so now you'd be looking at a possible scenario of worst-of-both-worlds instead of best-of-both-worlds. I've used LVM with mdadm to deal with the scenario where you have a 4-drive RAID-6 and want to convert it to a 5-drive RAID-6, or convert a 6 x 1TB RAID-6 into a 4 x 3TB RAID-6. Even if I tried to create a system similar to what I have currently (split the drives into several partitions, create separate RAIDZ2 vdevs and pool them), it wouldn't work, since apparently it's impossible to empty a vdev of data and remove it from a pool. I guess this could be done with LVM-on-zvols, but that seems too advanced a setup for my first dabble in ZFS. ZFS on an LVM block device sounds like a simpler and easier-to-understand setup, but I guess the abstraction layer is a problem.
|
# ¿ Oct 16, 2016 19:15 |
|
Has anyone done tests on how different file systems behave with broken memory?
|
# ¿ Nov 15, 2016 18:18 |
|
necrobobsledder posted:I think it'd be possible to do a superficial synthetic test using a virtualization system to inject "bad" memory reads and writes somehow but I'm not aware of such a system (there's network level tools to simulate bad network links, of course). I'd be more interested in real-world testing with memory that has known defects, maybe a few different DIMMs with different kinds of faults. Do a bunch of reading and writing and check how badly it breaks the data. What I'd especially like to test is read-only use with ZFS, like a music collection.
|
# ¿ Nov 16, 2016 21:10 |
|
A while ago I needed to buy a new drive, and for price and availability reasons I ended up with the common 4TB Seagate. I wasn't entirely happy about getting another Seagate, but WDs would have been a bit more expensive and not as easily available, and I wouldn't really expect them to be more reliable. HGST drives would be about 100€ more expensive, and they just wouldn't be worth the price for me. HGST may be more reliable, but even those wouldn't be reliable enough: you still need RAID and backups. What the higher price gives you is less of a chance of having to go through the hassle of replacement and a possible restore. And even the Seagate drives probably won't fail during their useful life span, except the 3TB models.
|
# ¿ Jan 23, 2017 20:57 |
|
redeyes posted:I will try this next. Problem is the card has 2 SATA ports and I need a total of 7 or 8. I may have to just buy some HBA card.. I would just do this but the Poweredge itself is a limited platform for my uses. Only 6 drive bays. No easy way to expand with hotswap cages. Motherboard is proprietary. GRR. 2nd processor heatsink runs over 100 bux because DELL! Another option could be to set the ASMedia card so that it doesn't report connected drives during POST. The OS should still be able to pick up the drives. This used to work back in the days of harddrive size limitations.
|
# ¿ Apr 26, 2017 19:04 |
|
Paul MaudDib posted:So, um... how in the world is package management/versioning not a solved problem coming from the people who brought you jails? Why not build/run literally all applications with a minimal set of dependencies symlinked into a chroot/jail? At that point package management should basically look a lot like a bundler file for Ruby. How are security updates to the libraries handled with jails? If I understood your symlink remark correctly, the jailed application will still use the OS libraries, so that will take care of security updates. But if all the libraries come from the OS, then what's the point of jailing? And if the jail has its own copy of the libraries, does that mean that whenever there is a security update to any of the included libraries, it's necessary to release a security update to your application? I have the same concern with Ubuntu Snap, and I wasn't able to figure out how they are addressing it with a cursory look. Docker can be included in this too. If I were a developer considering Docker, I would have to think hard about whether I want to shoulder that much responsibility. I work as a sysop, and if it comes from the repositories it's our responsibility; the developer doesn't need to care. If there's an update to the kernel, OpenSSL, Java, Python or Apache, Red Hat will send us an email and we'll do what's necessary. The developer only has to worry about the piece of PHP, Ruby or Java they have written.
|
# ¿ May 25, 2017 17:36 |
|
CommieGIR posted:Its running a VM cluster and Docker containers for plex, has a PERC 6/e connected to a MD1000 array, FreeNAS is running on two 100GB SSDs in RAID1 and the VMs are stored on a 500GB RAID6 array of 6 146GB SAS disks. Sounds like instead of spending money to move the drives to your garage you should buy a pair of cheaper 500GB SSDs to replace the RAID6 array.
|
# ¿ Aug 8, 2017 09:06 |
|
CommieGIR posted:I'd love to, but all the drives are recycled, didn't spend a dime. If I had the money, I'd replace all the RAID6 with SSDs like you said. I meant that if you want to put the drives in the garage you will have to buy new hardware, so it's better to buy SSDs instead.
|
# ¿ Aug 8, 2017 13:47 |
|
phongn posted:I believe that most Intel SATA ports can be set to hot-swap (may need to be set in the BIOS). You also need a proper backplane for electrical safety. Something like that might be prudent, but I've always just hot-plugged SATA drives without issue or extra setup. Even that one time when I hot-plugged a SATA power cable into an IDE connector, nothing broke. (I bought a 750 GB SATA drive from a local computer store and never actually looked at the connector before I saw sparks.)
|
# ¿ Nov 30, 2017 22:35 |