Xenomorph
Jun 13, 2001


We have some old Apple Xserves and Xraids that we'd like to replace, but I'm getting a little frustrated with the replacement options.

We want to go with new Dell stuff.

Our current Xraids have 12 HDDs (two RAID 5 sets) and use two 2Gb Fibre Channel connections.
Our current Xserves each have a Fibre Channel card, 2Gb per port.

I connect them and they work: two logical drives show up on the server. But they're from 2005, and we want something new.
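
For reference, the capacity math on those units is simple: each RAID 5 set gives up one disk's worth of space to parity. A quick sketch (the per-drive size is a guess for illustration; ours vary by unit):

code:
# RAID 5 usable capacity for the Xraid layout described above.
# DRIVE_TB is an assumed size for illustration only.
def raid5_usable_tb(disks, drive_tb):
    return (disks - 1) * drive_tb  # one disk's worth lost to parity

DRIVE_TB = 0.5                          # hypothetical 500 GB drives
per_set = raid5_usable_tb(6, DRIVE_TB)  # 12 HDDs = two 6-disk RAID 5 sets
print(f"{per_set} TB per set, {2 * per_set} TB per Xraid")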

When I go to Dell.com, I click the parts I want, but when I add the QLogic Fibre Channel card, a popup tells me it requires the $2,000 Mission Critical service plan. It is *REQUIRED*.
Um, what? Why?

Does anyone have experience with this? With ordering from Dell? $4,000 server plus almost $4,000 for adding Fibre Channel (parts and service plan) is nuts.

Should we just drop Fibre Channel?

Xenomorph
Jun 13, 2001


I have a Premier account and a rep's number (somewhere). It's just that after reading Apple's announcement this morning (the Xserve is DEAD), I hit Dell.com looking for a price estimate.

I'll call/email them Tuesday or so.

But is Fibre Channel still a good option?

Xenomorph
Jun 13, 2001


Hok posted:

Are you trying to keep your existing xraids and use them with new servers, or are you replacing the lot?

If replacing the lot, don't bother with fibre, not unless you're talking about a decent scale SAN.

If it's just a single server with direct-attached storage, get an MD1200/1220; 6Gb SAS is a lot less hassle than fibre and a hell of a lot cheaper. If you want a couple of servers connected, go MD3200/3220: they've got four 6Gb SAS ports on each controller, so you can hang four servers off them and still have full redundancy on the links.

I would like to keep our existing Fibre Channel Xraids for now, and also add a new storage unit.

Our Xraids are 5 years old, and replacing the ATA-based hard drives will get harder and harder. We have like 20 TB worth of Xraid units.

A new server with a new SAS storage unit is fine, as long as we have a Fibre card for all of our old storage.

I told higher-ups that we are probably looking at $15-$20K for this new setup (as well as a LOT of hell moving from Apple OD to Windows AD), so we may not get this until May or June or something (end of fiscal year).

Xenomorph
Jun 13, 2001


Pretty basic I guess, but I just ordered a Dell NX3000 NAS, loaded with 2TB drives.

I was looking at a storage option that offered Fibre Channel, since all of our old Apple Xraids used that. I've seen iSCSI recommended a lot. I'm not familiar with all that stuff.
I just know the NX3000 let me pick standard 7200 RPM SATA drives (so I can pick up an enterprise WD or Seagate off Newegg as replacements), and has its own OS to share stuff over Ethernet, without requiring another computer to manage it.

The issues I've had with existing storage are the types of drives (the Xraids use now-difficult-to-replace IDE drives) and the type of connection (we had to dedicate one system to Fibre Channel connections to get access to half a dozen storage units).

Xenomorph
Jun 13, 2001


Misogynist posted:

The issues you've had with existing storage were mostly due to you ignoring SH/SC telling you to replace them a year ago and finding bizarre justifications to keep trucking along on your broken and unsupportable kit, IIRC

Well, buying the new NX3000 is the first step in replacing our old stuff. However, our old stuff still works, and I've been asked repeatedly by higher-ups why I'm so eager to just get rid of it and buy new stuff that will do the same thing.
I was able to take one old Apple Xraid offline, and they let me buy a new Dell RAID unit. The Xraid we took offline now provides spare drives for the others, so when those start failing, I will be allowed to buy another Dell RAID.

szlevi posted:

If you can wait, new NX models sporting the new Storage Server 2008 R2 etc. are coming in 2-3 weeks.

Any more info on this? At first I didn't think it would be a problem, but after using 2008 R2 on our Domain Controller, using regular 2008 feels a little "off" (basically like going from Win7 back to Vista).

Besides the slightly improved interface, what advantages does Storage Server 2008 R2 offer? SMB 2.1? How much better is that than 2.0?

I'm not even familiar with the "Storage Server" product. I saw an option to enable Single Instance Storage (de-duplication) on the drive, which I'm guessing isn't in the regular Server products.
I'm tempted to just wipe Storage Server 2008 and install Server 2008 R2. We get Windows licenses cheap, and I'm trying to figure out if we'd be happier with the overall improvements in Server 2008 R2 compared to the NAS features we may not use in Storage Server 2008.

Xenomorph
Jun 13, 2001


Nebulis01 posted:

WSS2008 is available in x86 or x64. WSS2008R2 is available only on x64. Unless you really need the iSCSI or De-duplication features, Server 2008R2 would serve you quite well.

We've never used iSCSI (yet), and I don't think we'd really use de-duplication. Most of the files will be office documents and a bunch of other stuff that will probably be pretty unique from user to user.
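
Out of curiosity, one way I could sanity-check that "pretty unique" assumption before writing de-duplication off: hash every file on the share and total up the duplicate copies. A minimal sketch (SHARE_ROOT is a placeholder path); this does file-level matching, which is roughly what SIS does:

code:
import hashlib, os

# Estimate how much space file-level de-dup (SIS-style) would reclaim
# by grouping files on a share by content hash.
SHARE_ROOT = r"D:\shared"  # placeholder path

seen = set()         # content hashes we've already met once
duplicate_bytes = 0
for dirpath, _, filenames in os.walk(SHARE_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        h = hashlib.sha256()
        try:
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            size = os.path.getsize(path)
        except OSError:
            continue  # skip locked/unreadable files
        digest = h.hexdigest()
        if digest in seen:
            duplicate_bytes += size  # a copy SIS could collapse
        else:
            seen.add(digest)

print(f"~{duplicate_bytes / 1e9:.1f} GB held in duplicate copies")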

The PowerVault isn't really in production yet, so I guess I could just spend a few hours getting it ready with Server 2008 R2.

Xenomorph
Jun 13, 2001


OK, BACKUPS. How can we do this better?

We back up about a dozen servers and 50 desktop computers.

*One* of our servers has a full 4 TB storage unit (RAID 5 w/ a hot spare). It gets a full backup once a month, and then an incremental every day.
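
For context, a daily incremental under that scheme is basically "find everything modified since the last run", which means walking the whole tree every night. A minimal sketch, assuming simple mtime-based selection and a placeholder path:

code:
import os, time

# Select the files a daily incremental would grab: anything modified
# since the previous backup. TREE is a placeholder path.
TREE = r"E:\bigshare"
last_backup = time.time() - 24 * 3600  # assume the last run was 24h ago

changed = []
for dirpath, _, filenames in os.walk(TREE):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.path.getmtime(path) > last_backup:
                changed.append(path)
        except OSError:
            pass  # file vanished mid-scan
print(f"{len(changed)} files to back up")
# Note: this stats every single file on the share, changed or not.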

There are millions of individual files, and our backup program is now choking on this. Since no one knew what most of the files were for, the person who does most of the work on the server told us to go ahead and delete the old stuff. That was something like 90% of the data. Of course, while we were in the middle of deleting it, someone else was like, "no wait, we still need this. All of it."

I said we can't hold all of it. It's not organized, it filled up all storage, and it uses up most of our backup resources. Seriously, four terabytes of "mystery files".

Their solution? They want another 4 TB RAID unit, which we would then need to back up as well. That is overkill in multiple ways. The user probably doesn't have the money for it (I'm estimating $3,000 for the storage), and we would need another backup drive (estimated $5,000).

How do people with dozens/hundreds of terabytes handle backups? Multiple backup drives? HUGE backup drives? What about bandwidth? We back up over the network. Do you have backup drives attached locally to servers? Does each server get its own backup unit?
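
To put a number on the bandwidth question, here's the back-of-the-envelope math for just our one full 4 TB backup over gigabit (assuming roughly 70% of line rate in practice):

code:
# Rough backup-window math: a full 4 TB over a single GbE link.
TB = 1e12
raw = 1e9 / 8           # 1 Gb/s = 125 MB/s on the wire
effective = raw * 0.7   # assume ~70% after protocol/per-file overhead
hours = 4 * TB / effective / 3600
print(f"Full 4 TB backup: ~{hours:.0f} hours")  # ~13 hours
# So one monthly full eats an entire night on gigabit, before you
# even count the other dozen servers.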

Xenomorph
Jun 13, 2001


I'm looking for a big storage box.

Wants:

- 2U rackmount (can go up to 3U)
- can fill with 2TB SATA drives (such as WD RE4 drives, we provide the drives)
- hot swappable, hardware RAID 6
- at least 6 drive bays
- Fibre Channel connection (2Gb+)
- redundant power supplies
- some sort of on-board management (to set up the RAID)
- iSCSI
- cheap

Any suggestions?

Edit, an alternative:

- NAS box that runs Windows (Gigabit connection) so we don't need a host.

We have an existing Dell PowerVault NX3000 that works OK. I wasn't sure about going through Dell again: we have a bunch of Server 2008 R2 licenses and piles of 2TB RE4 drives we can use, so we didn't want to pay Dell for another license and more drives.

Xenomorph fucked around with this message at 06:22 on Sep 7, 2012

Xenomorph
Jun 13, 2001


- We currently use some old Apple Xraids: hardware RAID 5 onboard and 2Gb Fibre Channel connections. We have a bunch of servers with QLogic Fibre Channel cards, so I already know that setup is as simple as plugging in a USB thumb drive and sharing a drive on the network.
- We also have a PowerVault NX3000. It has hardware RAID 5, boots Windows and shares the drives installed into it.

If I haven't actively wired it and configured it, I'm afraid of it, regardless of how much I've read about it and asked about it in the past. iSCSI doesn't sound bad - it's like Fibre Channel over Ethernet, right? If that's the case, I can just add additional Ethernet cards to the servers, right?
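
One thing that did reassure me while reading: iSCSI is just SCSI commands carried over plain TCP (port 3260 by default), which is why ordinary Ethernet cards work. A trivial reachability check I can run from any server (the target address is a placeholder):

code:
import socket

# iSCSI targets listen on TCP 3260 by default; if this connects,
# the box is reachable over ordinary Ethernet. Address is made up.
TARGET = "192.168.1.50"

with socket.create_connection((TARGET, 3260), timeout=5) as s:
    print("something is answering on the iSCSI port")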

All of our existing RAIDs have around 4TB of storage (for everyone). This was fine years ago, but now we have individual people that need 4TB-10TB of storage.

So, I want to add something like a 10TB RAID. With that much data, I'm guessing RAID 6 would be safer than RAID 5.
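
On the RAID 5 vs. RAID 6 question, the usual argument is the odds of hitting an unrecoverable read error (URE) during a rebuild, since a rebuild has to read every surviving disk end to end. A rough sketch using the 1-per-10^14-bits figure from typical consumer SATA spec sheets:

code:
# Chance of hitting an unrecoverable read error (URE) while rebuilding.
# Uses the ~10^-14 errors/bit figure from consumer SATA spec sheets.
URE_PER_BIT = 1e-14

def p_ure_during_rebuild(surviving_disks, disk_tb):
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits_read

# Seven 2 TB drives, one dead: RAID 5 must read the other six cleanly.
print(f"RAID 5: {p_ure_during_rebuild(6, 2):.0%} chance of a URE")  # ~62%
# RAID 6 keeps a second parity set, so a single bad sector during the
# rebuild is recoverable instead of fatal.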

I mentioned the WD RE4 drives because we ordered a lot of spares. I'm trying to use the same drives for servers and RAIDs. We've been in situations in the past where we did not have a spare when a RAID failed, because every server and every RAID had been ordered with different-size drives and no one thought to purchase spares for the dozen different drive configurations they went with.
Now we have a pile of 73GB/146GB/300GB SCSI drives for old servers, and 500GB/2TB SATA drives for new servers.

I just want a 10TB RAID 6 drive to share from Windows. Just file storage for some people, so it doesn't have to be fast. One person has been using a 4TB LaCie USB drive. There's no redundancy with that, and we don't back it up.

It must be under $10,000, ideally closer to $5,000 (without disks).
Maybe something like this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822108100

Xenomorph
Jun 13, 2001


OK then, what is a good & cheap, rack-mountable hardware RAID 6 box I can slap a bunch of SATA drives into and connect via iSCSI to one of my existing servers?

Xenomorph
Jun 13, 2001


Misogynist posted:

You already posted about those goddamn Xraids a year ago and everyone who saw your post started yelling that you need to get rid of those loving things. 16 gigabit fibre channel is the current market standard. You are running poo poo that's three generations older than anything being produced today.

Is anything in your environment still under warranty?

Some of our Dell stuff has warranties.

I would like to get rid of the Xraids. They're currently in use & we have a budget I'm trying to stay within. They will probably be in use for another 5 years.

There's no way I'm going to spend $20,000-$50,000 on storage overkill when people have been happy with their $200-$400 USB drives. I simply want something better than those USB drives that has redundant hardware and fits in our racks. I can possibly go up to $10,000 for one item.

We do not need the latest & greatest tech. Something three generations old is perfectly fine for us. Half the servers I've purchased were refurbs and half the upgrade components I've purchased were off eBay. Going with refurbs and eBay items was still an *upgrade* compared to what we had (lots of white boxes under desks).

Xenomorph
Jun 13, 2001


I guess I don't fully understand iSCSI.

I've looked at a few NAS servers that advertise a "Built-in iSCSI Target Service". The specs then mention that the box runs Linux and uses ext4 for its file system.

How does that work if a Windows system is the iSCSI initiator? I thought I could just connect the device and then a drive would show up to Windows that I could then format as NTFS. That's how I've been working with Fibre Channel.

Xenomorph
Jun 13, 2001


Maneki Neko posted:

That's exactly how it works. The NAS box is probably just making a big fat file and then presenting that storage to the initiator as a block device.

I just watched some videos in which a user makes a virtual disk on the target system, and that virtual disk then gets mounted on the initiator. So there is still overhead (NTFS -> virtual disk -> ext4) when writing/reading.
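
To convince myself the "virtual disk" layer isn't magic, I sketched out what these boxes are basically doing: one big (sparse) file on the target's file system, served out as raw blocks by offset. This is just the backing-store idea, not a real iSCSI target:

code:
# Toy model of an iSCSI target's backing store: a big sparse file on
# the target's file system, addressed as raw 512-byte blocks.
BLOCK = 512
BACKING = "virtual_disk.img"  # placeholder file name

with open(BACKING, "wb") as f:
    f.truncate(1 << 30)  # sparse 1 GB "LUN"; blocks allocate as written

def write_block(lba, data):
    assert len(data) == BLOCK
    with open(BACKING, "r+b") as f:
        f.seek(lba * BLOCK)
        f.write(data)

def read_block(lba):
    with open(BACKING, "rb") as f:
        f.seek(lba * BLOCK)
        return f.read(BLOCK)

write_block(0, b"x" * BLOCK)          # "initiator" writes block 0
assert read_block(0) == b"x" * BLOCK  # and reads it back unchanged
# The initiator formats this address space as NTFS; the target never
# looks inside the blocks, it just maps LBA -> offset in the file.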

Edit:
I'm looking at this now: QNAP TS-EC1279U-RP

Under $5,000 and we just fill it with WD RE4 drives. I'm just not too hot on the idea that the iSCSI part creates a virtual disk sitting on top of ext4.


Xenomorph fucked around with this message at 23:46 on Sep 6, 2012

Xenomorph
Jun 13, 2001


FISHMANPET posted:

Holy poo poo you have no idea what you're doing. This is what a SAN is. If you want to find some crazy overpriced FC gear go nuts I guess, or see if you can find some direct attached storage but I don't know if that exists anymore anyway.

If I were you I'd be way more worried about the overhead of running second hand equipment all over.

Uh, I was hoping it was some sort of SAN device. I am asking about that kind of stuff in a SAN thread. I'm looking for something iSCSI-based, not Fibre Channel.

three posted:

Well, you see, FC cables plug directly into the hard drive platters so there is no overhead like iSCSI. :psyduck:

...and this? From what I saw, most of the iSCSI devices I looked at have an additional layer in between: an OS running on the device (Linux) with its own file system, then a file created on that file system as a "virtual disk" and shared as a volume over iSCSI, which is then formatted by the initiator. Nested file systems don't seem like a good idea to me. Is it making partitions instead?

If I already knew everything about every single hardware device and connection method, I don't think I'd be asking so many loving questions.
I was hoping to get some helpful information, not be mocked.

Xenomorph
Jun 13, 2001


NippleFloss posted:

If you don't know anything about storage then you don't need to concern yourself with the details of how a particular piece of storage works. You just need to provide useful information about the workload you intend to run on it and the required response times. If you can find storage that provides the performance you want for the price you want, who cares how it works?

My experience with a "file system on a file system" is with things like Wubi/Ubuntu (an ext3 loop device on top of NTFS) and virtual machines, where I/O suffers. I'm guessing I shouldn't have to worry about that here, since the internal I/O of a modern RAID setup won't be my bottleneck (1Gb Ethernet will).

I just wasn't sure if that is how all iSCSI-compatible devices do it.

Next question: how important is it to have a dual-controller RAID?

Is something like the QNAP TS-EC1279U-RP terrible?

The primary use I'm looking for is just data storage. People want to move stuff off their desktops to a shared drive.

Xenomorph
Jun 13, 2001


Does anyone have any experience with Partners Data?
http://www.partnersdata.com/cgi-bin/productinfo?division=systems&id=761

Another department here has been using them for a while now, and they say they're pretty happy with them.

They're pricier than the cheap QNAP stuff I was looking at, but still cheaper than Dell.

Xenomorph
Jun 13, 2001


Misogynist posted:

Can you stop posting in this thread? Every single thing you ask is a really bad idea, and you must be a glutton for punishment, because you don't listen to a single thing from people who do this poo poo for a living and actually know what they're talking about, and you keep coming back here to ask ridiculous questions about Xserve RAIDs and off-brand SANs.

I have no problem with people thinking my questions are ridiculous. I'm trying to learn and gather information.

Based on earlier replies in this thread, I went and tested some iSCSI setups.
Also, the other people I've spoken with who "do this poo poo for a living" recommend Partners Data, due to the reliability of the hardware and the level of support they've received.

If it's a bad idea, I would like to know why.

Xenomorph
Jun 13, 2001


Does anyone use ZFS with Linux? Or is that asking for trouble?

Xenomorph
Jun 13, 2001


Is anyone here hosting Windows 8 roaming profiles on a Samba-based share?
Are there any known issues that keep this from working?

Xenomorph
Jun 13, 2001


I have a weird setup and question here:

We got an old Dell PowerEdge R710 to use as a file server. We requested it with the Dell SAS 6/iR RAID card instead of the Dell H700 card because we wanted direct access to the disks for ZFS. The "nicer" Dell cards do not let you disable RAID to use JBOD. Their "cheaper" RAID cards do.

I get the server and fill it with a bunch of new HDDs, and I notice that only 2.0TB of each drive is showing up (they are 5TB drives). I look up more info and see that while the SAS 6/iR gives us "direct" access to the drives, it cannot address more than 2.0 TB. The H700 would let us work with the whole drive, but we'd lose direct access (JBOD).
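
If I'm reading the specs right, that 2.0 TB wall is the old 32-bit LBA limit: with 512-byte sectors, a 32-bit block address tops out right around there. Quick check:

code:
# Why an older controller stops at ~2 TB: 32-bit LBAs x 512-byte sectors.
SECTOR = 512
max_bytes = (2 ** 32) * SECTOR
print(f"{max_bytes / 2**40:.1f} TiB addressable")  # 2.0 TiB
print(f"{max_bytes / 1e12:.2f} TB (decimal)")      # ~2.20 TB
# Controllers with 64-bit LBA support (like the SAS9211-8i mentioned
# below) can address the full 5 TB drives.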

Someone suggested the LSI SAS9211-8i card. It's cheap, works like the SAS 6/iR, and supports >2TB drives. So I pop that in, and it seems to work fine. All drives are available at their full capacity... But now my HDD power LEDs are not on. The activity LEDs flash when a drive is in use, but the power LEDs stay off.

Is this some Dell backplane + non-Dell controller issue? Is there any way to make the HDD power LEDs come on?
