IOwnCalculus
Apr 2, 2003





For what it's worth, any PCIe card will work in any PCIe slot, as long as it will physically fit. Technically, if you cut the end off of the slot (or had a x16 physical slot wired up as an x1) you could run a graphics card off of PCIe x1.


IOwnCalculus
Apr 2, 2003





dietcokefiend posted:

Yea well I just wanted to confirm that it wouldn't gently caress with the on-board graphics and turn it off if that slot was filled or some stupid poo poo.

Yeah, that was an AGP limitation - chipsets could only have one AGP port, shared between onboard video and a card in the slot.

IOwnCalculus
Apr 2, 2003





NeuralSpark posted:

I just did this Monday, only took 6 hours for a 4 TB array.

Yeah, it really can be remarkably quick if you aren't using it for anything else - did you crank up the speed limit that md imposes on itself? It's set up so that the array is totally usable while you do all of that, but if you can deal with a few hours of piss-poor performance, you can get it done much faster.
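For reference, a sketch of where that md speed cap lives - these are the standard Linux md sysctls, and the 200000 figure is just an example value, not tuned advice:

```shell
# Check md's current resync floor/ceiling (values are in KiB/s).
# On a box without md loaded, the files simply won't exist.
min_f=/proc/sys/dev/raid/speed_limit_min
max_f=/proc/sys/dev/raid/speed_limit_max
if [ -r "$max_f" ]; then
  msg="current min/max: $(cat "$min_f") / $(cat "$max_f") KiB/s"
else
  msg="md sysctls not present on this system"
fi
echo "$msg"
# As root, raising the ceiling lets the rebuild run flat out:
#   echo 200000 > /proc/sys/dev/raid/speed_limit_max
```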

IOwnCalculus
Apr 2, 2003





Methylethylaldehyde posted:

make sure your array will truncate (is that the right term?) off the last 1GB or so, because if the drives have slightly different sizes, then you can run into a case where the array eats itself alive when it can't address those last few sectors, or they just refuse to play nice.

That won't be an issue for the initial setup, at least; md won't create an array larger than what the smallest member supports. That said, even cross-manufacturer I've seen identical (to the byte) size ratings on some drives.

It will only be an issue if down the road he needs to replace a drive or grow the array and the new/replacement drive is smaller than the existing ones.

IOwnCalculus
Apr 2, 2003





What protocol are you using to access it? Encrypting the drive won't do anything if you're sharing it to the whole world.

IOwnCalculus
Apr 2, 2003





Yeah, I use SFTP to get to my files remotely - and japtor's post pretty much nails it. You'll want to use something like denyhosts or fail2ban or any of the other packages that watch your access attempt logs to see if someone is trying to sneak in.
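As an example of what that looks like, here's a minimal fail2ban jail - note this is a sketch: the jail name and log path follow fail2ban's stock Debian-style jail.conf and may differ on your distro, and the retry/ban numbers are arbitrary:

```ini
# /etc/fail2ban/jail.local -- minimal sketch for protecting sshd/SFTP
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600
```

After five failed logins, the offending IP gets firewalled off for ten minutes, which is enough to make brute-forcing impractical.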

IOwnCalculus
Apr 2, 2003





Vladimir Putin posted:

Not all the time. What if they hacked an exploit in some other program/part of the OS. They wouldn't need to hack your sftp password.

Why would you have anything else exposed without a firewall? I'm guessing you're running it behind NAT, in which case the only port forwarded to it should be a nonstandard one pointing at wherever sshd is listening; but if it's in a wide-open area, at least set up a firewall on the server itself.

IOwnCalculus
Apr 2, 2003





1) Even with the utility, newer WD drives won't let you enable TLER.

2) TLER is kinda academic for softraid anyway.

IOwnCalculus
Apr 2, 2003





Wanderer89 posted:

Does anyone have any experience with sil3132 based pcie(1x) to sata (with raid) cards?

After updating my mobo BIOS to the latest possible (albeit 3 years old, as it's an old s939 board), the system doesn't POST with the card installed - no display, no beeps, nothing. Without the card, it boots fine. Just trying to get it up and recognized in Windows 7 to ensure the card's BIOS is up to date before trying again in OpenSolaris. I also tried throwing it in my P55/1156 based MSI board, with the same result.

Bad card?

If it does the same thing in your P55 board, yeah, dead or otherwise defective card. I've had weird controller conflicts before with SiI chips and onboard controllers (I think it was an Asus board with its own older SiI on it) but you shouldn't have that happen with a newer P55 board.

IOwnCalculus
Apr 2, 2003





Combat Pretzel posted:

Anything that's claiming to be green or power-saving will do things like idle parking. Get some WD Black or the equivalent from other manufacturers.

My Samsung HD154s haven't done this - smartctl output:

code:
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       17
  9 Power_On_Hours          0x0032   098   098   000    Old_age   Always       -       8629

IOwnCalculus
Apr 2, 2003





Combat Pretzel posted:

The parking is counted in 'Load Cycle Count'.

Huh; must not be tracked by the Samsung drives then, I don't have that at all. Here's smartctl -A for one of them:

code:
$ sudo smartctl -A /dev/sde
smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   100   100   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0007   081   081   011    Pre-fail  Always       -       6660
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       17
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   100   100   051    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0025   100   100   015    Pre-fail  Offline      -       0
  9 Power_On_Hours          0x0032   098   098   000    Old_age   Always       -       8652
 10 Spin_Retry_Count        0x0033   100   100   051    Pre-fail  Always       -       0
 11 Calibration_Retry_Count 0x0012   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       17
 13 Read_Soft_Error_Rate    0x000e   100   100   000    Old_age   Always       -       0
183 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
184 Unknown_Attribute       0x0033   100   100   000    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   070   058   000    Old_age   Always       -       30 (Lifetime Min/Max 30/32)
194 Temperature_Celsius     0x0022   070   062   000    Old_age   Always       -       30 (Lifetime Min/Max 28/34)
195 Hardware_ECC_Recovered  0x001a   100   100   000    Old_age   Always       -       2469498274
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x000a   100   100   000    Old_age   Always       -       0
201 Soft_Read_Error_Rate    0x000a   100   100   000    Old_age   Always       -       0

IOwnCalculus
Apr 2, 2003





crm posted:

Do any of those sans digital do RAID6? Is there any bonus to Raid6 other than being protected if 2+ drives go down at one time?

No comment on the Sans Digital device, but yes, RAID6 is entirely about the extra parity letting you survive a two-drive failure. In practice, the case that matters is a second drive dying while the rebuild from the first failure is still running.

IOwnCalculus
Apr 2, 2003





The biggest issue by far is that green drives try to spin down quickly to reduce power consumption. Hardware RAID controllers can interpret the spin-down (and the delay while the drive spins back up) as a failed drive.

Software RAID typically just shrugs and waits for it.

edit: \/\/ Never mind, he's right - it's been so long since I looked into it that I'd forgotten the error-recovery aspect.

IOwnCalculus fucked around with this message at 21:18 on Oct 12, 2010

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

That makes it even more of a rip-off to me. You can't possibly spend more than $180 PER DRIVE in cooling and power over the same 5 year period in an enterprise environment. It may be arguable if there's some overhead like man-hours replacing drives, but it's seriously a rip-off for firmware that turns off the head parking and turns on the TLER, which is algorithmically easier to write than for a regular consumer green drive.

Let's run some rough numbers then - RE4 2TB vs RE4-GP 2TB. From WD's site:

Read/write: 10.7W vs 6.8W
Idle: 8.1W vs 3.8W
Standby/Sleep: 1.5W vs 0.8W

Depending on the state, the GP draws anywhere from about 35% less power (read/write) to more than 50% less (idle) than the regular 2TB drive; to make the numbers easier, we'll just deal with read/write, the worst case for the GP percentage-wise.

Over the course of five years of constant read/write, the standard drive will consume 468.66 kWh. The GP drive will consume 297.84 kWh. Of course, in a datacenter, a kWh consumed means heat to deal with, so on top of the actual power cost we need to multiply what it cost to cool that heat. The Uptime Institute figures an average datacenter's PUE is 2.5, so the actual kWh consumed over those five years becomes 1171.65 kWh and 744.60 kWh respectively.

So what does this cost? The US EIA puts the average power cost across all sectors in July 2010 at 10.5 cents per kWh. So to power and cool the RE4, we spent about $123.02. To power the RE4-GP, we spent $78.18. Savings in this very rough and not entirely realistic situation? $44.84.
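As a sanity check, here's that arithmetic in a few lines (wattages from WD's spec sheet; the PUE and price per kWh are the same assumptions as above):

```python
# Rough five-year power-cost model for a drive running read/write 24/7.
HOURS_5YR = 5 * 365 * 24   # 43,800 hours
PUE = 2.5                  # Uptime Institute's average-datacenter figure
PRICE = 0.105              # US EIA average, $/kWh, July 2010

def five_year_cost(watts, pue=PUE, price=PRICE):
    kwh_drawn = watts * HOURS_5YR / 1000  # energy the drive itself consumes
    kwh_total = kwh_drawn * pue           # plus the cooling overhead
    return kwh_drawn, kwh_total, kwh_total * price

for name, watts in [("RE4", 10.7), ("RE4-GP", 6.8)]:
    drawn, total, dollars = five_year_cost(watts)
    print(f"{name}: {drawn:.2f} kWh drawn, {total:.2f} kWh w/ cooling, ${dollars:.2f}")
```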

Real-world, that difference is highly variable. If your cost per kWh is higher than 10.5 cents, it'll be a lot bigger. If your datacenter is a lot more efficient than that, it'll be a lot smaller. If your regular RE4s can spend more time at idle than your GPs can because they read the data off faster, the difference will be smaller; but if your drives spent the majority of the time idle instead, the difference will be bigger.

There are other costs to consider here - across a large enough deployment of drives, GP vs regular can make a (small) difference in how much cooling capacity your datacenter needs in the first place. More likely, you may be able to pack more drives in a given chassis design, and/or utilize smaller power supplies in the servers. You may or may not see reduced drive failures due to temperatures (Google's data indicates it's not nearly as much of an issue as once thought, at least only at the temperatures you should see in a datacenter).

That said, a $180 premium per drive is a bit extreme. A quick look on Froogle shows a difference of under $20 per drive when comparing RE4 to RE4-GP at 2TB. I'd hope that anyone comparing RE4-GP drives to consumer drives of any speed is doing so for home use and not a datacenter. I don't bother paying for 'enterprise' drives for my home setup, but I wouldn't ever tell a customer who relies on the data on an array for their business to use consumer drives instead.

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

I'd rather not be buying the storage equivalent of Monster cables (even if it's not my money) if I could help it, but without solid data instead of manufacturer's specs we take as gold (and thus become subject to marketing spin and "lying with statistics" as commonplace in modern business) we can't really do much but sigh and pay for the "supposed" best and to make certain assumptions, can we? Nobody got fired for buying IBM, and nobody got fired for paying for "enterprise" drives in their datacenter even if the failure rate might be identical in practice. Funny, I don't consider 7200RPM drives enterprise anyway, but that's out of scope of this thread.

Oh, I agree with you big time here across the board. In my mind, if you're paying for datacenter space, and you're using 7200RPM drives, you should hopefully only be doing so for a large, low-performance array with enough redundancy built in on the array level that it doesn't matter if you use an enterprise drive or not...but what you can get away with in real life and what you can get away with telling a customer to do are often two different things.

Now, for home users / people comparing consumer green drives versus consumer 7200RPM drives...I'd bet the power savings still sway in favor of green drives, though you do need to make sure you're not pairing them with a hardware RAID controller. I'd also argue that for a home setup, a hardware RAID controller is way overkill anyway. I'm more than happy with the performance I get out of Linux md-raid and 5400RPM SATA drives.

IOwnCalculus
Apr 2, 2003





DLCinferno posted:

For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same):

http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html?ca=dgr-lnxw074KB-Disksdth-LX

Thanks for posting that - some solid info in there.

IOwnCalculus
Apr 2, 2003





Thermopyle posted:

I've got 3 different SATA/RAID controller cards that I just use to add SATA ports.

I'd like to consolidate these. What's a good solution for adding 8 SATA ports, preferably via PCIe?

Cheapest option is a Supermicro LSI-based card that comes up pretty regularly; the one catch is that its bracket is technically for Supermicro's UIO form factor. The connector is still standard PCIe, so it will fit in a PCIe slot, but you need to remove the bracket first.

IOwnCalculus
Apr 2, 2003





Thermopyle posted:

Any "gotchas" to using these drives in an mdadm raid5 array?

I know there have been some problems with WD Green drives and some sort of NAS solution that's popular in here, but I've lost track of the actual issues.

I've had the 1.5TB Samsung Ecogreen drives in my box for...470 days according to SMART. No issues in md at any time.

IOwnCalculus
Apr 2, 2003





Sizzlechest posted:

A PC minus the hard drives would cost approximately $600 in parts. I'd want redundancy, so I'd have to double up the hard drive and configure them as a RAID 1 at minimum.

If you need more than a single drive's worth of capacity, don't RAID1, go with RAID5.
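To put numbers on that - a toy sketch, with n identical drives of s TB each:

```python
# Usable capacity: RAID1 mirrors everything, RAID5 spends one drive on parity.
def raid1_usable(n, s):
    return s              # every drive holds a copy of the same data

def raid5_usable(n, s):
    return (n - 1) * s    # one drive's worth of space goes to parity

# Four 2TB drives: RAID1 nets 2TB, RAID5 nets 6TB; both survive one failure.
print(raid1_usable(4, 2), raid5_usable(4, 2))
```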

IOwnCalculus
Apr 2, 2003





Yes, setting up RAID on the drives will obliterate the data.

How much data do you have on there, and do you have any other drives available to back things up onto? I'm guessing you have >4TB, but if you're lucky enough to be at <4TB, you could try to pull one drive out of the JBOD (make sure it's the empty one, ha) and create a degraded three-device RAID5 (two real devices + one marked missing) with Linux software RAID. Copy at least half of the data off of the remainder of the JBOD array, then add the newly emptied drive to the RAID.

Don't know if it's possible, but at that point you'd hopefully be able to grow it to a degraded four-device array with the newly freed drive as the third member, copy the rest of the data off of the last drive, and then add that drive to the RAID5 to complete the rebuild.

While that could all work in theory, it's risky as hell and I strongly recommend you figure out what data you absolutely cannot lose, and back that up elsewhere first.

IOwnCalculus
Apr 2, 2003





Profane Obituary! posted:

If you're getting another 2TB drive, you could just copy your data to that, turn the existing drives into raid5, copy over the data, then add in the final 2TB drive.

Probably safer than yanking a drive out of the JBOD?

Yeah, if you only have <2TB of data, copy it all off of the JBOD array to the new 2TB drive. Once that's done, break the JBOD and create a proper (non-degraded) 3-drive RAID5 from those drives, let it finish syncing across all of them, and copy the data onto it. Add the single drive to the array and grow it to four. Done, and a lot less risky.

IOwnCalculus
Apr 2, 2003





Longinus00 posted:

Not to mention much faster since you're only doing one rebuild.

Faster than doing two, yes, but it still takes a while; if I remember correctly (it's been a while since I last created a new mdraid RAID5/6 array), it treats a brand-new blank array as if it's recovering from a drive failure and does a full resync up front.

IOwnCalculus
Apr 2, 2003





Are you trying to do hardware RAID (which I would strongly recommend against at that price point) or software RAID? Hardware RAID will almost never work across controllers (I'd bet there's some crazy one-off situation where it could, but it's still a bad idea). Software RAID, like md-raid in Linux, works great regardless of how your drives are connected.

IOwnCalculus
Apr 2, 2003





jeeves posted:

Also, people seem to say that the eSATA port on the back doesn't have a port multiplier. What does that mean? Do eSATA multipliers let you chain various external things together in a daisy chain, and if you don't have a multiplier only one device can be attached at once-- or what?

That's pretty much the gist of it. An eSATA port with multiplier support can run a single eSATA cable to one of the 5-bay enclosures and drive every disk in it over that one cable.

IOwnCalculus
Apr 2, 2003





Butt Soup Barnes posted:

I want to set up RAID 5 with a few 2TB drives. Will I have to wipe the existing data on the 2TB drive I already have?

Yes, but there are workarounds:

*If you are creating the RAID on something that can non-destructively grow a RAID5, and you have four 2TB drives - create an array with 3x 2TB, copy the data off of the full 2TB onto the array, then add the old drive to the array (which will wipe the old drive)

*If you are creating the RAID on something that can non-destructively grow a RAID5 but you only have three 2TB drives, you'll have to get creative. In Linux mdraid you have the ability to create a RAID5 in a degraded state so you could use two drives to create a degraded 3-drive RAID5, copy the data, and then add the original drive to the array to bring it to normal condition. Alternatively, you could even create the original RAID as a two-drive RAID1 and then convert it to RAID5 (I did this myself with my backup server when the two-drive RAID1 became too small).
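The degraded-create path looks something like this in mdadm - the device names are invented placeholders, `missing` is mdadm's literal keyword for a not-yet-present member, and these commands will destroy whatever is on the named disks, so treat it as a sketch rather than a recipe:

```shell
# Hypothetical walk-through; the /dev/sdX names are placeholders.
#   mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing
#   mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt/array
#   ... copy the data off the old full drive into /mnt/array ...
#   mdadm --add /dev/md0 /dev/sdd1    # rebuild brings the array to full strength
#   cat /proc/mdstat                  # watch the recovery progress
```

Until that final rebuild completes, the array has no redundancy at all, which is why copying the irreplaceable stuff elsewhere first is so important.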

IOwnCalculus
Apr 2, 2003





Modern Pragmatist posted:

Any particular reason why the rackmount systems tend to be so much more expensive than their counterparts?

Because 99.9% of them are going into either a company's computer room or a datacenter.

IOwnCalculus
Apr 2, 2003





Alternatively, in XP there was apparently some registry hack or other file modification you could do to allow RAID5 dynamic disks to be created (when the alternative was running Server 2003). I'd bet it can be done in 7 as well.

IOwnCalculus
Apr 2, 2003





Crashplan is already encrypted, though, isn't it?

At any rate, at $3/mo for one computer or $6/mo for 2-10 computers, Crashplan's cloud storage makes the most sense for a bulk backup. I combine it with the older backup RAID array I have stashed at my mom's that only backs up the critical data.

IOwnCalculus
Apr 2, 2003





I'd be shocked if there's a home ISP that allows inbound port 80; a lot of them blocked it back in the day due to the Code Red worm, and they never really liked having it open due to people running random webservers.

IOwnCalculus
Apr 2, 2003





Random question - running a software RAID5 using md on Ubuntu 10.10, ext3 filesystem. Realistically, what options do I have for either file-level or block-level deduplication?

IOwnCalculus
Apr 2, 2003





Playing around with NexentaStor Community Edition on a VM. Seems pretty damned slick; between this and the HP MicroServer I have a pretty solid idea of what I want to do with my fileserver for the future. I just need to figure out how to set up an RSS grabber and a torrent client on it.

IOwnCalculus
Apr 2, 2003





Can you run any form of torrent / sabnzbd server on there as well, or are you going to create a Linux VM to do that?

IOwnCalculus
Apr 2, 2003





I think md is pretty forgiving on drive error timeouts. Due to the embarrassingly ghetto cabling / controller setup I have mine living on right now, I pop DMA errors in dmesg every once in a while, but they never drop the drive from the array altogether.

The worry of bit rot - plus the hilariously bad way this is cabled right now - is part of why I really want to build a MicroServer + ZFS box to eventually replace it.

IOwnCalculus
Apr 2, 2003





XBMC Live, installed to a hard drive, is just a well-done Ubuntu/XBMC integration. There's nothing about it that should preclude you from setting up a mdraid RAID1 array to install everything to, or SSHing in and installing samba for SMB/CIFS.

I can't remember the last time I dealt with bluetooth keyboard and mouse on Linux, if I ever actually did, and I can't comment on sleep/hibernate. That said I know I've seen reviews of some of the little remote-style keyboard/mouse combos that have had positive Linux experiences.

IOwnCalculus
Apr 2, 2003





movax posted:

They should be striped RAID-0 style, my intent was for a stripe of -Z2 vdevs. So as long as the wrong two drives don't fail, I'm good...

Don't you mean three drives? I thought Z2 could lose two drives and still maintain integrity. Hell, from the sound of it you could lose up to six drives if it's no more than two in any one Z2.
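The tolerance claim is easy to sketch: a pool of striped raidz2 vdevs survives exactly as long as no single vdev loses more than two members (the vdev count and layout here are just illustrative).

```python
# Each entry is the number of failed drives in one raidz2 vdev of the pool.
def pool_survives(failures_per_vdev):
    return all(f <= 2 for f in failures_per_vdev)

print(pool_survives([2, 2, 2]))  # six failures, spread right: pool survives
print(pool_survives([3, 0, 0]))  # three failures in one vdev: pool is lost
```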

IOwnCalculus
Apr 2, 2003





Lian-Li has just announced an awesome-looking little case, the PC-Q25. Five hotswap 3.5" bays, three fixed bays (either 1x 3.5" and 2x 2.5" or vice versa, depending on your video card) and a Mini-ITX or Mini-DTX motherboard.

Does anyone make a Mini-DTX board that isn't an Atom? I really want to put a nicely powered box in here that would work as both my living room HTPC and my fileserver, but to do so I'll need to fill every one of those drat bays. I found an Atom board that has one x16 slot, one x1 slot, and 6x SATA, and even a mini-PCIe slot. I could stick a video card in the x16, power all of the hotswaps off of the onboard controller, one of the fixed bays on the last controller port, and then the other two with an add-in controller on either the x1 slot or the mini-PCIe.

I guess I could go with a Mini-ITX board with Sandy Bridge and use that onboard video instead of an add-in card?

IOwnCalculus
Apr 2, 2003





Not that I've seen, I don't think any retailers have it yet.

IOwnCalculus
Apr 2, 2003





Or you could pay a hell of a lot less for Crashplan.

IOwnCalculus
Apr 2, 2003





teamdest posted:

Either Crashplan or Backblaze, can't remember which, offers you the option of mailing a drive with your data on it, in order to speed up the initial backup. I always thought that was a cool idea.

Crashplan does for sure. It's not cheap, but it's a hell of a lot cheaper than goonsharing an LTO drive.


IOwnCalculus
Apr 2, 2003





Fourthed. It really is excellent, and it's one of the few providers out there that has really good support for Linux as well. I have it running on my headless fileserver and I just had to tweak a config file on the desktop client to be able to access and manage it through the app.
