|
For what it's worth, any PCIe card will work in any PCIe slot, as long as it will physically fit. Technically, if you cut the end off of the slot (or had a x16 physical slot wired up as an x1) you could run a graphics card off of PCIe x1.
|
# ¿ Apr 30, 2010 17:34 |
|
dietcokefiend posted:Yea well I just wanted to confirm that it wouldn't gently caress with the on-board graphics and turn it off if that slot was filled or some stupid poo poo. Yeah, that was an AGP limitation - chipsets could only have one port for that, whether it was for onboard or a video card.
|
# ¿ Apr 30, 2010 18:30 |
|
NeuralSpark posted:I just did this Monday, only took 6 hours for a 4 TB array. Yeah, it really can be remarkably quick if you aren't using it for anything else - did you crank up the speed limit that md imposes on itself? It's set up so that the array is totally usable while you do all of that, but if you can deal with a few hours of piss-poor performance, you can get it done much faster.
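The knobs in question are md's resync speed limits. A sketch of checking and raising them on a typical Linux box — the value shown is illustrative, not a recommendation:

```shell
# Current floor and ceiling md imposes on resync/rebuild speed (KB/s per device)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor so the rebuild doesn't yield to normal I/O as readily
# (100000 KB/s is an illustrative value; run as root, tune to taste)
sysctl -w dev.raid.speed_limit_min=100000

# Watch rebuild progress
cat /proc/mdstat
```

The tradeoff is exactly as described above: a higher floor means the rebuild finishes sooner, at the cost of worse performance for anything else using the array in the meantime.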
|
# ¿ May 21, 2010 22:45 |
|
Methylethylaldehyde posted:make sure your array will truncate (is that the right term?) off the last 1GB or so, because if the drives have slightly different sizes, then you can run into a case where the array eats itself alive when it can't address those last few sectors, or they just refuse to play nice. That won't be an issue for at least the initial setup; md won't create an array larger than what the smallest member can support. That said, even cross-manufacturer I've seen identical (to the byte) size ratings on some drives. It will only become an issue down the road if he needs to replace a drive or grow the array and the new drive is smaller than the existing ones.
|
# ¿ May 23, 2010 08:27 |
|
What protocol are you using to access it? Encrypting the drive won't do anything if you're sharing it to the whole world.
|
# ¿ Jul 17, 2010 00:11 |
|
Yeah, I use SFTP to get to my files remotely - and japtor's post pretty much nails it. You'll want to use something like denyhosts or fail2ban or any of the other packages that watch your access attempt logs to see if someone is trying to sneak in.
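For example, a minimal fail2ban jail for sshd — section name, log path, and thresholds here are illustrative and vary by distro and fail2ban version:

```ini
# /etc/fail2ban/jail.local - sketch only; adjust for your distro
[sshd]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 3600
```

After a few failed login attempts from one IP, fail2ban drops a temporary firewall rule banning that IP, which kills off the brute-force scanners.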
|
# ¿ Jul 17, 2010 01:40 |
|
Vladimir Putin posted:Not all the time. What if they hacked an exploit in some other program/part of the OS. They wouldn't need to hack your sftp password. Why would you have anything else exposed without a firewall? I'm guessing you're running it behind NAT, in which case the only port forwarded to it should be a nonstandard one pointing at wherever sshd is listening; but if it's in a wide-open area, at least set up a firewall on the server itself.
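A minimal default-deny host firewall along those lines might look like this — the sshd port (2222) is an illustrative nonstandard choice, and these rules need root and should be adapted before use on a real box:

```shell
# Drop everything inbound by default
iptables -P INPUT DROP

# Allow loopback and established/related traffic back in
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow only the one service we actually expose (sshd on a nonstandard port)
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
```

Everything else — whatever other daemons happen to be listening — is then unreachable from outside regardless of whether they have their own vulnerabilities.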
|
# ¿ Jul 17, 2010 07:05 |
|
1) Even with the utility, newer WD drives won't let you enable TLER. 2) TLER is kinda academic for softraid anyway.
|
# ¿ Jul 29, 2010 01:13 |
|
Wanderer89 posted:Does anyone have any experience with sil3132 based pcie(1x) to sata (with raid) cards? If it does the same thing in your P55 board, yeah, dead or otherwise defective card. I've had weird controller conflicts before with SiI chips and onboard controllers (I think it was an Asus board with its own older SiI on it) but you shouldn't have that happen with a newer P55 board.
|
# ¿ Aug 2, 2010 23:00 |
|
Combat Pretzel posted:Anything that's claiming to be green or power-saving will do things like idle parking. Get some WD Black or the equivalent from other manufacturers. My Samsung HD154s haven't done this - smartctl output: code:
|
# ¿ Aug 3, 2010 18:56 |
|
Combat Pretzel posted:The parking is counted in 'Load Cycle Count'. Huh; must not be tracked by the Samsung drives then, I don't have that at all. Here's smartctl -A for one of them: code:
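For anyone wanting to check their own drives, this is how you'd look for the attribute in question — attribute 193, Load_Cycle_Count. The device path is a placeholder and the command needs smartmontools installed and root access:

```shell
# Dump SMART attributes and look for head-parking activity;
# /dev/sda is a placeholder for your actual drive
smartctl -A /dev/sda | grep -i load_cycle
```

If the grep comes back empty, the drive simply doesn't report that attribute; if it's present and climbing rapidly, the drive is aggressively parking its heads.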
|
# ¿ Aug 4, 2010 17:44 |
|
crm posted:Do any of those sans digital do RAID6? Is there any bonus to Raid6 other than being protected if 2+ drives go down at one time? No comment on the Sans Digital device, but yes, RAID6 is entirely about the extra parity letting you survive a two-drive failure (in a properly managed array, the most likely case is a second drive dying while a rebuild is still in progress).
|
# ¿ Aug 31, 2010 17:09 |
|
The biggest issue by far is that green drives try to spin down quickly to reduce power consumption. Hardware RAID controllers interpret the spin down (and resultant spin up and the delays caused by it) as a failed drive. Software RAID typically just shrugs and waits for it. edit: \/\/ Nevermind he's right, it's been so long since I looked into it that I had forgotten the error-recovery aspect. IOwnCalculus fucked around with this message at 21:18 on Oct 12, 2010 |
# ¿ Oct 12, 2010 20:52 |
|
necrobobsledder posted:That makes it even more of a rip-off to me. You can't possibly spend more than $180 PER DRIVE in cooling and power over the same 5 year period in an enterprise environment. It may be arguable if there's some overhead like man-hours replacing drives, but it's seriously a rip-off for firmware that turns off the head parking and turns on the TLER, which is algorithmically easier to write than for a regular consumer green drive. Let's run some rough numbers then - RE4 2TB vs RE4-GP 2TB. From WD's site:

Read/write: 10.7W vs 6.8W
Idle: 8.1W vs 3.8W
Standby/Sleep: 1.5W vs 0.8W

Depending on the state, the GP draws roughly 36% to 53% less power than the regular 2TB drive; to make the numbers easier, we'll just deal with read/write (where the savings are smallest percentage-wise, but greatest in terms of watt-hours conserved). Over the course of five years of constant read/write, the standard drive will consume 468.66 kWh and the GP drive 297.84 kWh. Of course, in a datacenter, every kWh consumed means heat to deal with, so on top of the actual power cost we need to account for cooling. The Uptime Institute figures an average datacenter's PUE is 2.5, so the effective kWh consumed over those five years becomes 1171.65 kWh and 744.60 kWh respectively. So what does this cost? The US EIA puts the average power cost across all sectors in July 2010 at 10.5 cents per kWh. So to power and cool the RE4, we spent about $123.02; for the RE4-GP, $78.18. Savings in this very rough and not entirely realistic situation? $44.84. Real-world, that difference is highly variable. If your cost per kWh is higher than 10.5 cents, it'll be a lot bigger. If your datacenter is a lot more efficient than that, it'll be a lot smaller. 
If your regular RE4s can spend more time at idle than your GPs because they read the data off faster, the difference will be smaller; if your drives spend the majority of the time idle anyway, it will be bigger. There are other costs to consider here - across a large enough deployment, GP vs regular can make a (small) difference in how much cooling capacity your datacenter needs in the first place. More likely, you may be able to pack more drives into a given chassis design, and/or use smaller power supplies in the servers. You may or may not see fewer drive failures due to temperature (Google's data indicates it's not nearly as much of an issue as once thought, at least not at the temperatures you should see in a datacenter). That said, a $180 premium per drive is a bit extreme. A quick look on Froogle shows a difference of under $20 per drive when comparing the RE4 to the RE4-GP at 2TB. I'd hope that anyone comparing RE4-GP drives to consumer drives of any speed is doing so for home use and not a datacenter. I don't bother paying for 'enterprise' drives for my home setup, but I wouldn't ever tell a customer who relies on the data on an array for their business to use consumer drives instead.
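The rough numbers above are easy to re-derive. This awk one-liner reruns the whole back-of-the-envelope calculation — five years of constant read/write at the quoted wattages, a 2.5 PUE, and 10.5 cents/kWh — rounding to the cent:

```shell
# Re-run the rough numbers: hours = 24 * 365 * 5, drive draw from WD's specs,
# 2.5 PUE for cooling overhead, $0.105/kWh average power cost
awk 'BEGIN {
  hours = 24 * 365 * 5
  re4   = 10.7 * hours / 1000      # kWh at the drive, RE4
  gp    =  6.8 * hours / 1000      # kWh at the drive, RE4-GP
  pue   = 2.5
  rate  = 0.105                    # $/kWh
  printf "RE4:    %.2f kWh drawn, %.2f kWh w/ cooling, $%.2f\n", re4, re4*pue, re4*pue*rate
  printf "RE4-GP: %.2f kWh drawn, %.2f kWh w/ cooling, $%.2f\n", gp,  gp*pue,  gp*pue*rate
  printf "Savings: $%.2f\n", (re4 - gp) * pue * rate
}'
```

Swapping in your own $/kWh rate and PUE shows immediately how sensitive the savings figure is to those two assumptions.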
|
# ¿ Oct 17, 2010 23:50 |
|
necrobobsledder posted:I'd rather not be buying the storage equivalent of Monster cables (even if it's not my money) if I could help it, but without solid data instead of manufacturer's specs we take as gold (and thus become subject to marketing spin and "lying with statistics" as commonplace in modern business) we can't really do much but sigh and pay for the "supposed" best and to make certain assumptions, can we? Nobody got fired for buying IBM, and nobody got fired for paying for "enterprise" drives in their datacenter even if the failure rate might be identical in practice. Funny, I don't consider 7200RPM drives enterprise anyway, but that's out of scope of this thread. Oh, I agree with you big time here across the board. In my mind, if you're paying for datacenter space, and you're using 7200RPM drives, you should hopefully only be doing so for a large, low-performance array with enough redundancy built in on the array level that it doesn't matter if you use an enterprise drive or not...but what you can get away with in real life and what you can get away with telling a customer to do are often two different things. Now, for home users / people comparing consumer green drives versus consumer 7200RPM drives...I'd bet the power savings still sway in favor of green drives, though you do need to make sure you're not pairing them with a hardware RAID controller. I'd also argue that for a home setup, a hardware RAID controller is way overkill anyway. I'm more than happy with the performance I get out of Linux md-raid and 5400RPM SATA drives.
|
# ¿ Oct 18, 2010 18:02 |
|
DLCinferno posted:For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same):
|
# ¿ Nov 9, 2010 01:35 |
|
Thermopyle posted:I've got 3 different SATA/RAID controller cards that I just use to add SATA ports. Cheapest option is a Supermicro LSI-based card that comes up pretty regularly; the only catch is that its bracket is for Supermicro's UIO form factor. The connector is still standard PCIe, so it will fit in a PCIe slot, but you need to remove the bracket first.
|
# ¿ Nov 14, 2010 20:28 |
|
Thermopyle posted:Any "gotchas" to using these drives in an mdadm raid5 array? I've had the 1.5TB Samsung Ecogreen drives in my box for...470 days according to SMART. No issues in md at any time.
|
# ¿ Nov 22, 2010 03:45 |
|
Sizzlechest posted:A PC minus the hard drives would cost approximately $600 in parts. I'd want redundancy, so I'd have to double up the hard drive and configure them as a RAID 1 at minimum. If you need more than a single drive's worth of capacity, don't RAID1, go with RAID5.
|
# ¿ Mar 3, 2011 23:59 |
|
Yes, setting up RAID on the drives will obliterate the data. How much data do you have on there, and do you have any other drives available to back things up onto? I'm guessing you have >4TB, but if you're lucky enough to be at <4TB, you could try to pull one drive out of the JBOD (make sure it's the empty one, ha) and create a degraded three-drive RAID5 (two real devices + one "missing") with Linux software RAID. Copy at least half of the data off of the remainder of the JBOD, then add the newly empty drive to the RAID. I don't know if it's possible, but at that point you'd hopefully be able to grow the array into a degraded four-drive RAID5, copy the rest of the data off of the last drive, and then add that drive to the RAID5 and complete the rebuild. While that could all work in theory, it's risky as hell, and I strongly recommend you figure out what data you absolutely cannot lose and back that up elsewhere first.
|
# ¿ Mar 4, 2011 06:00 |
|
Profane Obituary! posted:If you're getting another 2TB drive, you could just copy your data to that, turn the existing drives into raid5, copy over the data, then add in the final 2TB drive. Yeah, if you only have <2TB data, copy that off of the JBOD array altogether to the new 2TB drive. Once that's done, break the JBOD and create a proper (non-degraded) 3-drive RAID5 from that, let it finish syncing across all of the drives, and copy the data into it. Add the single drive to the array and grow it to four. Done, and a lot less risky.
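With Linux mdraid, that safer sequence looks roughly like this — device names are placeholders, and the create step destroys whatever is on those drives, so only run something like this after the data is copied off:

```shell
# Build a proper 3-drive RAID5 from the freed-up JBOD members
# (sdb/sdc/sdd are placeholders - double-check device names first!)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Let the initial sync finish before trusting it with data
cat /proc/mdstat

# ...make a filesystem, copy the data back in, verify it...

# Then fold in the fourth drive and grow the array
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4
```

The grow step triggers a reshape, which takes a while, but the array stays redundant and usable the whole time — which is the big advantage over the degraded-array juggling act.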
|
# ¿ Mar 4, 2011 17:53 |
|
Longinus00 posted:Not to mention much faster since you're only doing one rebuild. Faster than doing two, yes, but still takes a while; if I remember correctly (it's been a while since I last created a new mdraid RAID5/6 array) it treats a new blank array as if it's going through a recovery from a drive failure and does a whole rebuild there.
|
# ¿ Mar 4, 2011 20:50 |
|
Are you trying to do hardware RAID (which I would strongly recommend against at that price point) or software RAID? Hardware RAID will almost never work across controllers (I'd bet there's some crazy one-off situation where it could, but it's still a bad idea). Software RAID, like md-raid in Linux, works great regardless of how your drives are connected.
|
# ¿ Apr 2, 2011 06:14 |
|
jeeves posted:Also, people seem to say it that the eSATA port on the back doesn't have a port multiplier. What does that mean? Do eSATA multipliers let you chain various external things together in a daisy chain, and if you don't have a multiplier only one device can be attached at once-- or what? That's pretty much the gist of it. An eSATA port with multiplier support can run a single eSATA cable to one of those 5-bay enclosures and address every drive in it over that one cable.
|
# ¿ Apr 7, 2011 21:54 |
|
Butt Soup Barnes posted:I want to set up RAID 5 with a few 2TB drives. Will I have to wipe the existing data on the 2TB drive I already have? Yes, but there are workarounds:

* If you are creating the RAID on something that can non-destructively grow a RAID5, and you have four 2TB drives: create an array with 3x 2TB, copy the data off of the full 2TB onto the array, then add the old drive to the array (which will wipe the old drive).

* If you are creating the RAID on something that can non-destructively grow a RAID5 but you only have three 2TB drives, you'll have to get creative. In Linux mdraid you have the ability to create a RAID5 in a degraded state, so you could use two drives to create a degraded 3-drive RAID5, copy the data, and then add the original drive to the array to bring it to normal condition. Alternatively, you could even create the original RAID as a two-drive RAID1 and then convert it to RAID5 (I did this myself with my backup server when the two-drive RAID1 became too small).
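Both three-drive routes can be sketched with mdadm — device names are placeholders, and the RAID1-to-RAID5 reshape needs a reasonably recent mdadm (3.x era or later):

```shell
# Route 1: degraded 3-drive RAID5, using the literal word "missing"
# as a placeholder for the absent member (sdb/sdc/sdd are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc missing
# ...copy the data in, then complete the array with the old drive:
mdadm --add /dev/md0 /dev/sdd

# Route 2: start as a two-drive mirror, reshape to RAID5 later
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --grow /dev/md1 --level=5
mdadm --add /dev/md1 /dev/sdd
mdadm --grow /dev/md1 --raid-devices=3
```

Route 1 means running without redundancy until the old drive is added back; route 2 keeps you redundant throughout, at the cost of an extra reshape.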
|
# ¿ Apr 25, 2011 22:19 |
|
Modern Pragmatist posted:Any particular reason why the rackmount systems tend to be so much more expensive than their counterparts? Because 99.9% of them are going into either a company's computer room or a datacenter.
|
# ¿ Apr 26, 2011 07:04 |
|
Alternatively, in XP there was apparently some registry hack or other file modification you could do to allow RAID5 dynamic disks to be created (when the alternative was running Server 2003). I'd bet it can be done in 7 as well.
|
# ¿ May 4, 2011 00:18 |
|
Crashplan is already encrypted though, isn't it? At any rate, at $3/mo for one computer or $6/mo for 2-10 computers, Crashplan's cloud storage makes the most sense for a bulk backup. I combine it with the older backup RAID array I have stashed at my mom's that only backs up the critical data.
|
# ¿ May 8, 2011 05:58 |
|
I'd be shocked if there's a home ISP that allows inbound port 80; a lot of them blocked it back in the day due to the Code Red worm and never really liked having it open due to people running random webservers.
|
# ¿ Jun 24, 2011 20:07 |
|
Random question - running a software RAID5 using md on Ubuntu 10.10, ext3 filesystem. Realistically, what options do I have for either file-level or block-level deduplication?
|
# ¿ Jun 27, 2011 23:09 |
|
Playing around with NexentaStor Community Edition on a VM. Seems pretty damned slick; between this and the HP MicroServer I have a pretty solid idea of what I want to do with my fileserver for the future. I just need to figure out how to set up a RSS grabber and a torrent client on it.
|
# ¿ Jul 16, 2011 07:46 |
|
Can you run any form of torrent / sabnzbd server on there as well, or are you going to create a Linux VM to do that?
|
# ¿ Jul 24, 2011 21:00 |
|
I think md is pretty forgiving on drive error times. Due to the embarrassingly ghetto cabling / controller setup mine lives on right now, I pop DMA errors in dmesg every once in a while, but they never drop the drive from the array altogether. The worry of bit rot / the hilariously bad way this is cabled right now is a big part of why I really want to build a MicroServer + ZFS box to replace it eventually.
|
# ¿ Jul 27, 2011 22:47 |
|
XBMC Live, installed to a hard drive, is just a well-done Ubuntu/XBMC integration. There's nothing about it that should preclude you from setting up a mdraid RAID1 array to install everything to, or SSHing in and installing samba for SMB/CIFS. I can't remember the last time I dealt with a bluetooth keyboard and mouse on Linux, if I ever actually did, and I can't comment on sleep/hibernate. That said, I know I've seen reviews of some of the little remote-style keyboard/mouse combos that have had positive Linux experiences.
|
# ¿ Aug 12, 2011 00:46 |
|
movax posted:They should be striped RAID-0 style, my intent was for a stripe of -Z2 vdevs. So as long as the wrong two drives don't fail, I'm good... Don't you mean three drives? I thought Z2 could lose two drives and still maintain integrity. Hell, from the sound of it you could lose up to six drives if it's no more than two in any one Z2.
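For reference, a pool striped across raidz2 vdevs is built like this — device names are placeholders, and three six-drive vdevs are assumed just to match the "up to six drives" case above:

```shell
# Sketch: a pool of three striped raidz2 vdevs (18 placeholder devices).
# Each vdev tolerates two failed drives, so this pool can survive up to
# six failures total - as long as no single vdev loses more than two.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
```

The flip side is that three failures inside any one vdev kills the entire pool, since the data is striped across all of the vdevs.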
|
# ¿ Aug 18, 2011 01:48 |
|
Lian-Li has just announced an awesome-looking little case, the PC-Q25. Five hotswap 3.5" bays, three fixed bays (either 1x 3.5" and 2x 2.5" or vice versa, depending on your video card) and a Mini-ITX or Mini-DTX motherboard. Does anyone make a Mini-DTX board that isn't an Atom? I really want to put a nicely powered box in here that would work as both my living room HTPC and my fileserver, but to do so I'll need to fill every one of those drat bays. I found an Atom board that has one x16 slot, one x1 slot, and 6x SATA, and even a mini-PCIe slot. I could stick a video card in the x16, power all of the hotswaps off of the onboard controller, one of the fixed bays on the last controller port, and then the other two with an add-in controller on either the x1 slot or the mini-PCIe. I guess I could go with a Mini-ITX board with Sandy Bridge and use that onboard video instead of an add-in card?
|
# ¿ Aug 19, 2011 00:55 |
|
Not that I've seen, I don't think any retailers have it yet.
|
# ¿ Aug 19, 2011 01:34 |
|
Or you could pay a hell of a lot less for Crashplan.
|
# ¿ Aug 19, 2011 14:43 |
|
teamdest posted:Either Crashplan or Backblaze, can't remember which, offers you the option of mailing a drive with your data on it, in order to speed up the initial backup. I always thought that was a cool idea. Crashplan does for sure. It's not cheap, but it's a hell of a lot cheaper than goonsharing an LTO drive.
|
# ¿ Aug 19, 2011 16:01 |
|
Fourthed. It really is excellent, and it's one of the few providers out there that has really good support for Linux as well. I have it running on my headless fileserver and I just had to tweak a config file on the desktop client to be able to access and manage it through the app.
|
# ¿ Aug 30, 2011 06:17 |