Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


SAN Newbie Post incoming:

I have finally convinced management (I hope anyways) that we need to start looking at setting up a SAN since our storage design is not sustainable and becoming very difficult to manage. We are a small video game development studio whose staffing levels can fluctuate greatly on relatively short notice. We can go from having 45 users to 75 or even 100 in less than a quarter. I am the sole administrator and it will likely remain that way unless we start passing the 120 user mark.

Our environment only consists of standard Windows AD and file services, a Perforce server and a myriad of other Linux based servers (FTP, bug tracking, etc). That is going to start changing this year as we implement an Exchange server, as well as some other development packages. All the Linux servers are running on ESXi "servers" which are installed on old workstations that are hooked up to an OpenFiler that was built using...another old workstation. This is another thing that will be changing as I will be pushing to get proper enterprise grade server hardware in place. What I have is cobbled together and keeps me up at night (like tonight). I am starting with the storage because it's the foundation that all the rest of the upgrade will be built on top of.

So far I have contacted LeftHand and NetApp for solutions. I tried to get in touch with EMC but it's been like talking to a black hole. There are things about both solutions I really like, but there are also a few concerns about each solution I have as well.

With LeftHand, I'm worried about the overall cost of growth. You get everything out of the box, but adding more storage can be quite expensive. That being said, the availability options and ease of growth are very nice as well and the management structure of it suits my administration needs. I'm also concerned about compatibility with VMware. LeftHand seems to lag behind a bit in getting certified for the latest VMware updates (at least according to the HCL) and I don't want to get caught in no man's land.

For NetApp, I'm really worried about the management aspect. I've read enough things about FilerView and its limitations. I'm certainly not scared to use a CLI, but it seems like there is a large learning curve involved which would be a potential barrier given my availability. I also don't like having to tack on licenses for features I might need down the road. That being said, once the initial investment is completed, it's much more affordable to add more storage.

NetApp has put me in touch with a VAR in my area and we've had one face-to-face already. This VAR also can do LeftHand and pretty much anything else worthwhile. They were pushing me towards NetApp pretty hard (probably due to the referral) so I will likely see if LeftHand will hook me up with a different VAR for another perspective.

Hiring a consultant to do it all for me is pretty much out of the question. I've managed to get the budget for this (well, I hope I have anyways), but I'm not going to push my luck.

I've read through the whole thread up until here which is how I chose my vendors. Now I'm just looking for any additional insight that anyone might have.


Maneki Neko
Oct 27, 2000

I don't have any management concerns with NetApp (especially with the environment size that you're talking about). If you're brand spankin' new to NetApp, get some install/configure services tacked on there, all of the field engineers I've ever dealt with have been fantastic, and they should be able to get you up to speed.

My biggest NetApp complaint (and you mentioned it) is that everything is à la carte, and once you've purchased, the discounts aren't usually as good (unless you manage to find them at the end of a quarter/fiscal year). In my experience, the more you can bundle into that initial purchase in terms of protocols, etc. the better off you'll be.

I love their stuff, just hate having to pay for it. :(

Maneki Neko fucked around with this message at 15:13 on Apr 17, 2009

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


I do know that NetApp is beta testing a new Windows native GUI tool which will simplify things on the management end even more, so even that's not a huge concern down the road I suppose. It's probably a couple of months out, but I don't mind challenges and learning things. Management might, though.

Mierdaan
Sep 14, 2004

Pillbug
Netapp question:

We have a FAS2020 connected to an entirely segregated iSCSI network, and then through the BMC for ssh management. Is there any way to force Autosupport emails to go through the BMC's connection, since the other network it's connected to is completely isolated?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
Does the BMC have an SMTP connector? All autosupport does is use standard SMTP to deliver mail to wherever you point it.
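
If memory serves, the knobs on a 7-mode filer look something like this (the mailhost and addresses here are made up; check options autosupport on your box for the full list):
code:
filer> options autosupport.support.transport smtp
filer> options autosupport.mailhost smtp.example.local
filer> options autosupport.to storage-alerts@example.local
filer> options autosupport.enable on
The filer just hands the mail to whatever mailhost it can route to, so the real question is whether anything reachable from that isolated iSCSI network can relay SMTP, not the autosupport settings themselves.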

quote:

For NetApp, I'm really worried about the management aspect. I've read enough things about FilerView and its limitations. I'm certainly not scared to use a CLI, but it seems like there is a large learning curve involved which would be a potential barrier given my availability. I also don't like having to tack on licenses for features I might need down the road. That being said, once the initial investment is completed, it's much more affordable to add more storage.

Honestly, as ugly as FilerView is, once you've poked around in it for about a week you can pretty much figure out everything you could possibly want.

I've helped a lot of smallish businesses make these sorts of decisions and so far, of all the guys that picked NetApp, none were dissatisfied.

Most love the management because it becomes a single point for them. The NetApp serves as their CIFS server, iSCSI, NFS, and/or FCP box all in one. All of their data is considered an equal citizen in the netapp world and they've got one tool to manage their storage.

I bolded a particularly important sentence as well, as this is VERY important when figuring out who you're giving these tens of thousands of dollars to.

That said, let's get some specifics about your environment.

What sort of applications are you running? One of NetApp's great benefits is the integration they provide with things like VMware, Exchange, SQL, etc. You called out ESXi and Exchange, which may mean you could be looking at tools like SnapManager for VI and SnapManager for Exchange. These could be great tools to help keep things sane.

I have a customer for example who has one guy managing AD, Exchange, and storage for a 1200ish person organization. He uses SnapManager with Single Mailbox recovery and provides all his users something like up to the hour restores on any individual mailbox in the company. It takes him less than 20 minutes to do said restore.

What's your data looking like? Would things like de-duplication on live volumes be of a benefit to you (this is a free feature from NetApp)?

Do you have any disaster recovery needs you want to consider? Looking to tighten up backup/recovery SLAs? Remote replication?

Given that you're a one man shop, you probably want to spend some money on tools to make your life easier. Find out from each vendor what tools they have available to do just that and get those tools.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


1000101 posted:

Does the BMC have an SMTP connector? All autosupport does is use standard SMTP to deliver mail to wherever you point it.


Honestly, as ugly as FilerView is, once you've poked around in it for about a week you can pretty much figure out everything you could possibly want.

It's one of those things I'm going to have to try out. I'm definitely not getting anything without being able to do some sort of demo on the system. I've been able to do a bit of playing around with a LeftHand setup by using their demo VSAs and I like most of the management tools there (although some of the emailing and alerting setup is a bit cumbersome to configure.)

quote:

I've helped a lot of smallish businesses make these sorts of decisions and so far, of all the guys that picked NetApp, none were dissatisfied.

Most love the management because it becomes a single point for them. The NetApp serves as their CIFS server, iSCSI, NFS, and/or FCP box all in one. All of their data is considered an equal citizen in the netapp world and they've got one tool to manage their storage.

I bolded a particularly important sentence as well, as this is VERY important when figuring out who you're giving these tens of thousands of dollars to.

The ability to have all my data management be done in one location really is appealing. We're definitely going to go with iSCSI for our block level stuff since I see no point in investing in FC, nor will management sign off on the cost even if I did.

My VAR was talking about putting my ESX datastores in NFS instead of iSCSI. I believe he mentioned something about it being easier to resize NFS volumes. We didn't go into a lot of details about it as this was just a "get to know you" kind of deal.

quote:

That said, let's get some specifics about your environment.

What sort of applications are you running? One of NetApp's great benefits is the integration they provide with things like VMware, Exchange, SQL, etc. You called out ESXi and Exchange, which may mean you could be looking at tools like SnapManager for VI and SnapManager for Exchange. These could be great tools to help keep things sane.

I have a customer for example who has one guy managing AD, Exchange, and storage for a 1200ish person organization. He uses SnapManager with Single Mailbox recovery and provides all his users something like up to the hour restores on any individual mailbox in the company. It takes him less than 20 minutes to do said restore.

As I stated in my post, aside from the standard Windows AD services, we have a Perforce server which is used by about half the current staff. We are switching to using it for the whole team as people switch onto newer projects. Neither of these servers are virtualized, which is something I'd seriously like to correct. I had started Perforce as a virtualized solution, but ran into what I found out later to be non-virtualization issues (bad hard drive in the ESXi host).

None of the other VMs are particularly high traffic. I do have a mySQL server which is really small and low use right now. I made a centralized one for all the web apps to use instead of having one per server. It's conceivable that it could grow to be larger depending on future needs.

We do not have Exchange at this point but it's one of the things that's been discussed extensively and is on my roadmap for this year. Additional roadmap items will involve centralized build management, better bug tracking and development management software and probably some other stuff that I'll find out about 5 minutes before it needs to be implemented.

One of the NetApp advantages here would be that I don't have to pay for management tools I'm not using at the time. I get the basic software and only buy what I need when I need it.

I have one question about CIFS on NetApp: Am I able to make the filer look like more than one server to the users? Management would very much like to have a "server" for each project instead of \\server\project1, \\server\project2, etc.

quote:

What's your data looking like? Would things like de-duplication on live volumes be of a benefit to you (this is a free feature from NetApp)?

My data is a goddamned mess and that's the major thing that's driving all this. It's scattered across servers with no easy way to manage and maintain it. De-duplication and cloning would probably benefit me a great deal as a lot of my Linux servers share common base files, and I'm positive that there's all sorts of other things that could be de-duped on my Windows shares.

quote:

Do you have any disaster recovery needs you want to consider? Looking to tighten up backup/recovery SLAs? Remote replication?

I have plenty of disaster recovery needs I need to work on. A lot of my backup work has to be initiated manually and on a busy day some of it can get missed. I'd need to create my SLAs first before I could tighten them. I'll be honest, it's a real mess right now. It ties into the main point that's driving me to get a SAN: I need to get the data into a managable state before I can start ensuring that we get it all backed up reliably.

Remote replication is one of the things I've considered and have begun to nudge management towards for backups. It's one of those things that will be looked at once I have the data centralized. I need to know what my approximate delta is. If it's small enough we could get by with tape, but one of our problems is that our rate of data generation is VERY spiky. As projects near completion, a lot more changes are made. I'll have to plan around it. It's also going to get much bigger than our current delta now that I'm finally (after a lot of years of trying to get management to sign off) getting people to do all their work iteration saving using their CVS instead of whatever stupid system they normally use because that's how they do it at home or were taught at school.

quote:

Given that you're a one man shop, you probably want to spend some money on tools to make your life easier. Find out from each vendor what tools they have available to do just that and get those tools.

This is a really critical point for us. Another thing I've managed to impress on management is that we need to start getting our setup more standardized because I am not an invincible "computar nerd machine guy" who is available 24x7 like they think I am. I go on vacations, and I could get hit by a bus or some poo poo. They need the ability for someone to be able to step in and fix something without reading through 100 pages of my documentation and then searching Google for hours. Having something critical be down for more than a couple of hours could be a real problem depending on how close to a deadline it happens.

That brings up a really good point: whatever solution I get is going to have to be pretty damned fault tolerant. Management needs a guarantee that a LOT could go wrong and we'll still be ok. LeftHand scores a lot of points there simply by virtue of every set of disks being a full controller system. NetApp would require a second head and SnapMirror to do that.

Holy poo poo that's a lot of :words: Sorry for the long posts, but this is by far the best resource I've found for this that's not full of tards hurfdurfing all over the place and making GBS threads out marketing points. I can take it to PMs/email if it's too much.

Intrepid00
Nov 10, 2003

I'm tired of the PM’s asking if I actually poisoned kittens, instead look at these boobies.

Number19 posted:

I go on vacations, and I could get hit by a bus or some poo poo.

I love this line, it's like the only way you can tell your boss to gently caress off, that you don't need him, while keeping your job.

Well, sort of. Though I try my best to make sure they are not dependent on me so I can go on vacation, and in case I do leave, no one is left feeling bitter.

I also have no ill will against my current bosses anyway.

Intrepid00 fucked around with this message at 12:56 on Apr 19, 2009

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


Intrepid00 posted:

I love this line, it's like the only way you can tell your boss to gently caress off, that you don't need him, while keeping your job.

Well, sort of. Though I try my best to make sure they are not dependent on me so I can go on vacation, and in case I do leave, no one is left feeling bitter.

I also have no ill will against my current bosses anyway.

I have none towards mine either (one is a friend from high school) but at the same time I get heat to fix something in a situation where I just can't and it's frustrating. I recently got summoned for jury duty selection and while I was in the courtroom waiting to get out of being picked I was getting messages on my Blackberry about something not working and that I have to get there. I really don't know what would have happened if I got selected and couldn't get out of it.

Mierdaan
Sep 14, 2004

Pillbug

Number19 posted:

My VAR was talking about putting my ESX datastores in NFS instead of iSCSI. I believe he mentioned something about it being easier to resize NFS volumes. We didn't go into a lot of details about it as this was just a "get to know you" kind of deal.

Another good reason to do this is so that ESX can see the savings from deduplication. We have our ESXi stores done over iSCSI and while the filer happily dedupes, ESXi isn't aware that its stores are a good 60% less full than it thinks they are.

edit: but that's what we get for not paying for NFS/CIFS licenses. Stupid shoestring budgets.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Number19 posted:

It's one of those things I'm going to have to try out. I'm definitely not getting anything without being able to do some sort of demo on the system. I've been able to do a bit of playing around with a LeftHand setup by using their demo VSAs and I like most of the management tools there (although some of the emailing and alerting setup is a bit cumbersome to configure.)

Netapp is more than happy to ship you a unit to kick the tires on for a month or two. Contact your VAR and get the hookup.

quote:

The ability to have all my data management be done in one location really is appealing. We're definitely going to go with iSCSI for our block level stuff since I see no point in investing in FC, nor will management sign off on the cost even if I did.

FC is generally overrated in a lot of use cases anyway. iSCSI is a perfectly viable option and of course, it's free.

You're going to want it anyway for Exchange integration with SnapManager (if you go that route).

quote:

My VAR was talking about putting my ESX datastores in NFS instead of iSCSI. I believe he mentioned something about it being easier to resize NFS volumes. We didn't go into a lot of details about it as this was just a "get to know you" kind of deal.

NFS is fantastic because yes, volume resizing, thin provisioning, and de-duplication all work right out of the box without any tomfoolery. It's also safe to create just one big NFS volume to house your VMs.

The drawback is that the NFS license may be more costly than you'd like. If you're okay with iSCSI and intelligently lay out your LUNs you should be mostly fine though for VMware.
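
For what it's worth, the day-to-day of an NFS datastore is only a couple of commands. Something like this should do it, going from memory (hostnames and volume names here are made up):
code:
# on an ESX host with a service console: mount the filer's export as a datastore
esxcfg-nas -a -o netapp01 -s /vol/vmware vm_nfs

# later, on the filer: grow the volume and the datastore just sees more space
filer> vol size vmware +200g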

quote:

As I stated in my post, aside from the standard Windows AD services, we have a Perforce server which is used by about half the current staff. We are switching to using it for the whole team as people switch onto newer projects. Neither of these servers are virtualized, which is something I'd seriously like to correct. I had started Perforce as a virtualized solution, but ran into what I found out later to be non-virtualization issues (bad hard drive in the ESXi host).

None of the other VMs are particularly high traffic. I do have a mySQL server which is really small and low use right now. I made a centralized one for all the web apps to use instead of having one per server. It's conceivable that it could grow to be larger depending on future needs.

Sounds like a low to midrange filer may be appropriate. I would recommend, if your VAR goes for the 2000 series, that you look into the 2050. It's more expandable than the 2020 and won't be a 100% forklift replacement when the time comes to outgrow the 2050.

quote:

We do not have Exchange at this point but it's one of the things that's been discussed extensively and is on my roadmap for this year. Additional roadmap items will involve centralized build management, better bug tracking and development management software and probably some other stuff that I'll find out about 5 minutes before it needs to be implemented.

So when you put Exchange in, your management is pretty much going to turn it into one of the most important tools you have to keep track of. Look at the effort to keep this thing backed up and running between LeftHand and NetApp. I can pretty much tell you without a doubt that NetApp is going to win that fight. If the saved headaches are worth it, you can get budget for it.


quote:

One of the NetApp advantages here would be that I don't have to pay for management tools I'm not using at the time. I get the basic software and only buy what I need when I need it.

Yes, also NetApp has a leg up on other vendors in that nothing is really hidden. You tell the sales guy what you want and he'll tell you what products NetApp sells with the license costs. The systems are fairly straightforward.

quote:

I have one question about CIFS on NetApp: Am I able to make the filer look like more than one server to the users? Management would very much like to have a "server" for each project instead of \\server\project1, \\server\project2, etc.

You could assign multiple IP aliases to an interface, I believe, and access each alias by its own DNS name. You could also use vFiler (MultiStore), which lets you "virtualize" filer instances on the filer.
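
Rough sketch of both approaches on a 7-mode box, going from memory (interface names, IPs and volume paths are made up, and vFiler needs the MultiStore license I believe):
code:
# IP alias on an existing interface, then point a DNS name at it
filer> ifconfig e0a alias 10.10.1.21 netmask 255.255.255.0

# or carve out a virtual filer instance with its own identity
filer> vfiler create proj1 -i 10.10.1.21 /vol/proj1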

quote:

My data is a goddamned mess and that's the major thing that's driving all this. It's scattered across servers with no easy way to manage and maintain it. De-duplication and cloning would probably benefit me a great deal as a lot of my Linux servers share common base files, and I'm positive that there's all sorts of other things that could be de-duped on my Windows shares.

Then NetApp dedup may be for you. The best part is that it's free; I believe you just have to request the license. Later versions of Data ONTAP may include it now.
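
Turning it on is about as simple as it gets; something like this, with the volume name made up:
code:
filer> sis on /vol/vmstore
filer> sis start -s /vol/vmstore      # dedupe the data already sitting in the volume
filer> df -s /vol/vmstore             # shows how much space you got back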

quote:

I have plenty of disaster recovery needs I need to work on. A lot of my backup work has to be initiated manually and on a busy day some of it can get missed. I'd need to create my SLAs first before I could tighten them. I'll be honest, it's a real mess right now. It ties into the main point that's driving me to get a SAN: I need to get the data into a managable state before I can start ensuring that we get it all backed up reliably.

Work out your DR needs now, because it will help you figure out what budget you need for your storage, whatever solution you intend to buy.

quote:

Remote replication is one of the things I've considered and have begun to nudge management towards for backups. It's one of those things that will be looked at once I have the data centralized. I need to know what my approximate delta is. If it's small enough we could get by with tape, but one of our problems is that our rate of data generation is VERY spikey. As projects near completion, a lot more changes are made. I'll have to plan around it. It's also going to get much bigger than our current delta now that I'm finally (after a lot of years and triyng to get management to sign off) getting people to do all their work iteration saving using their CVS instead of whatever stupid system they normally use because that's how they do it at home or were taught at school.

Remote replication can get expensive, not just on the storage side but the infrastructure side. You need network bandwidth to get poo poo over to the DR site.


quote:

This is a really critical point for us. Another thing I've managed to impress on management is that we need to start getting our setup more standardized because I am not an invincible "computar nerd machine guy" who is available 24x7 like they think I am. I go on vacations, and I could get hit by a bus or some poo poo. They need the ability for someone to be able to step in and fix something without reading through 100 pages of my documentation and then searching Google for hours. Having something critical be down for more than a couple of hours could be a real problem depending on how close to a deadline it happens.

NetApp support is pretty darn good about keeping your system alive when all you've got to help you is real estate agents and sales people.

quote:

That brings up a really good point: whatever solution I get is going to have to be pretty damned fault tolerant. Management needs a guarantee that a LOT could go wrong and we'll still be ok. LeftHand scores a lot of points there simply by virtue of every set of disks being a full controller system. NetApp would require a second head and SnapMirror to do that.

Well, NetApp fault tolerance does need a second head, but SnapMirror is only required if you're replicating data to a whole other filer. If you just want storage controller failover, you just need the second head and to cluster them.

You'd use SnapMirror to move the data offsite, for example; or if you've got super high SLA requirements, you could use it to replicate to another filer in the same location but on a different set of disks/heads.
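
In command terms the two things are separate, roughly like this (7-mode, from memory):
code:
# active/active controller failover -- no SnapMirror involved
filer1> cf status
filer1> cf takeover      # filer1 serves its partner's data while the partner is down
filer1> cf giveback

# SnapMirror -- asynchronous replication to another filer/volume
filer1> snapmirror status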

quote:

Holy poo poo that's a lot of :words: Sorry for the long posts, but this is by far the best resource I've found for this that's not full of tards hurfdurfing all over the place and making GBS threads out marketing points. I can take it to PMs/email if it's too much.

I think it's good for the thread since others have similar questions or are wrestling with similar issues.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


1000101 posted:

Netapp is more than happy to ship you a unit to kick the tires on for a month or two. Contact your VAR and get the hookup.

I'll ask my VAR about it at my next meeting with them. Demo units are one thing that will help me sell the cost to management as they can see it in action.

quote:

NFS is fantastic because yes, volume resizing, thin provisioning, and de-duplication all work right out of the box without any tomfoolery. It's also safe to create just one big NFS volume to house your VMs.

The drawback is that the NFS license may be more costly than you'd like. If you're okay with iSCSI and intelligently lay out your LUNs you should be mostly fine though for VMware.

The licensing might turn into a real concern for us. I've been reading the SAN whitepapers for ESX to get a good handle on the best practices, so iSCSI might be enough for us.

quote:

Sounds like a low to midrange filer may be appropriate. I would recommend, if your VAR goes for the 2000 series, that you look into the 2050. It's more expandable than the 2020 and won't be a 100% forklift replacement when the time comes to outgrow the 2050.

I'm guessing what you mean by this is that the extra shelves of disks for a FAS2020 are really only compatible with another FAS2020, whereas the shelves that the FAS2050 uses can be used in the higher up FAS series if needed?

quote:

So when you put Exchange in, your management is pretty much going to turn it into one of the most important tools you have to keep track of. Look at the effort to keep this thing backed up and running between LeftHand and NetApp. I can pretty much tell you without a doubt that NetApp is going to win that fight. If the saved headaches are worth it, you can get budget for it.

Yeah, this is going to be a big point when it comes to pitching this to management. NetApp has many more options to manage data for major apps that need help like Exchange. We'd have to license those options, but it doesn't have to be done until we have Exchange anyways.

quote:

Work out your DR needs now, because it will help you figure out what budget you need for your storage, whatever solution you intend to buy.

Remote replication can get expensive, not just on the storage side but the infrastructure side. You need network bandwidth to get poo poo over to the DR site.

I gave this a good deal of thought over the weekend, and figured that I might be ok with a tape system (or removable HDDs) after all. All I'll need to do is move snapshots off to tape to get my deltas. Different data types will need different levels of backups. Stuff like Perforce only needs dailies, but Exchange would probably benefit from more than one a day.

If I can get tape to be feasible then off-site replication can take a back seat. I was only entertaining the option since I wasn't sure if I would be able to get our data off site any other way.

quote:

NetApp support is pretty darn good about keeping your system alive when all you've got to help you is real estate agents and sales people.

There are enough technically minded people here (we are a game dev studio after all and therefore have a lot of nerds) that I could find 10 people who could call support to fix a problem. The better the support, the more people I can trust to be able to solve an issue.

quote:

Well, NetApp fault tolerance does need a second head, but SnapMirror is only required if you're replicating data to a whole other filer. If you just want storage controller failover, you just need the second head and to cluster them.

I think a second head is going to be a must for us. If our data's out, we're out and we can't afford that. I can't plan for the median schedule to determine data SLAs for us, I need to plan for the worst case since missing a deadline for us could be catastrophic.

This is ok though, since it's one of the things I've told management will have to happen. I have half the management team on board with that idea already, I just have to get the other one on board. Half the point of clustering the data properly is that I can take a vacation and have very little concern that the data services will be impacted while I'm gone. A whole head plus some other combo of parts in the other one will all have to fail at once, which is probably very unlikely.

quote:

I think it's good for the thread since others have similar questions or are wrestling with similar issues.

I'll keep it here then. Like I said, this is one of the best resources I've found where people are honest about stuff and not FUD machines.

The next question I have is: SAS vs. SATA drives. I really want to go SAS just to be future proof with Exchange and other potentially IOPS hungry services coming. We aren't there yet and there's no guarantee that we will be. I don't want to overbuy right now and use up a lot of budget money that could be put to better use upgrading other portions of my infrastructure. How much am I going to limit things if I end up going with SATA?

AmericanCitizen
Nov 25, 2003

I am the ass-kickin clown that'll twist you like a balloon animal. I will beat your head against this bumper until the airbags deploy.

Number19 posted:

I have one question about CIFS on NetApp: Am I able to make the filer look like more than one server to the users? Management would very much like to have a "server" for each project instead of \\server\project1, \\server\project2, etc.

This sounds like an incredibly silly requirement that should be dropped if at all possible. The whole point of the SAN is that you can centralize your storage into one highly-available, easily manageable place and any solution that I can think of to do what you're asking won't be simple or scalable as the number of "servers" increases.

If people seriously can't just deal with the one volume or share per project, they need to get over it. It will be more of a pain on the user side as well since no one will be able to just go to a single location like \\filer and directly navigate a tree of open projects; they'll be stuck trying to do that in the domain-level view with every other PC on the network displayed alongside a dozen pretend servers.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


AmericanCitizen posted:

This sounds like an incredibly silly requirement that should be dropped if at all possible. The whole point of the SAN is that you can centralize your storage into one highly-available, easily manageable place and any solution that I can think of to do what you're asking won't be simple or scalable as the number of "servers" increases.

If people seriously can't just deal with the one volume or share per project, they need to get over it. It will be more of a pain on the user side as well since no one will be able to just go to a single location like \\filer and directly navigate a tree of open projects; they'll be stuck trying to do that in the domain-level view with every other PC on the network displayed alongside a dozen pretend servers.

I agree with you. The idea is certainly not mine. I don't even think the majority of the teams using the data want it (or care). I was just asking for curiosity's sake more than anything and as something I could use for additional leverage in getting this implemented.

Rols
Dec 2, 2005
You could try creating multiple DNS aliases that point to the same filer. Problem is that you can't map multiple "servers" using different accounts (e.g. map \\proj02-filer\share as userjoe, and map \\proj01-filer\share as userjane) on the same computer at any one time.
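
The DNS side of it is just a handful of CNAMEs pointing at the filer, something like this (hypothetical names); CIFS/Kerberos may also want the extra names registered on the filer itself:
code:
; zone file entries
proj01-filer   IN   CNAME   filer01.example.local.
proj02-filer   IN   CNAME   filer01.example.local.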

Chucklehead
Apr 14, 2004
I couldn't think of a custom title, so I got this piece of shit instead!

Number19 posted:

I agree with you. The idea is certainly not mine. I don't even think the majority of the teams using the data want it (or care). I was just asking for curiosity's sake more than anything and as something I could use for additional leverage in getting this implemented.

I am fairly certain you can do this with CIFS on the filer. I'm not familiar with the details, but I am sure this has been discussed with us when talking about migrating existing CIFS shares to a NetApp filer.

For example if you have a windows file server with a share right now that is \\Files1\stuff then you are going to want a way to host that on the Filer without impacting users - and there is a way. You copy all of the data from \\Files1\stuff to your filer. You take a small outage to make sure no one is using the share, shut down the share on \\Files1\stuff and tell the Filer to host that share. I'm sure you could do the same with \\Project1, \\Project2 etc.
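
The filer side of that cutover should be pretty much one command per share, something like this (paths made up, check cifs shares for the exact flags):
code:
filer> cifs shares -add stuff /vol/projects/stuff -comment "old Files1 stuff share"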

da sponge
May 24, 2004

..and you've eaten your pen. simply stunning.
If you have a windows server, you might be able to use DFS to do what you want.

Edit: reread what you want, don't know that DFS would apply.

unknown
Nov 16, 2002
Ain't got no stinking title yet!


Anyone ever play with Infortrend gear and have any stories?

Someone in a local datacenter is looking to blow a few out rather than attempt to ship it, and it might be a nice general storage type box.

A24F-R2224: 24-drive, 2Gb FC to SATA SAN system.
http://www.infortrend.com/main/2_product/es_a24f-r2224.asp

A24F-R2430: 24-drive, 4Gb FC to SATA SAN system.
http://www.infortrend.com/main/2_product/es_a24f-r(g)2430.asp

Sock on a Fish
Jul 17, 2004

What if that thing I said?
I came in this morning to find my Solaris box had crashed, and when I brought it back up it threw this at me:
code:
-bash-3.00# zpool status -v
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: [url]http://www.sun.com/msg/ZFS-8000-8A[/url]
 scrub: scrub in progress for 0h1m, 38.26% done, 0h1m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t8d0s0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        //dev/dsk/c1t8d0s0
        //dev/dsk/c1t0d0s0
The linked URL says that these types of errors are unrecoverable, but my OS is still chugging along and after the scrub finished the error disappeared. Am I in the clear or should I still be wary?

H110Hawk
Dec 28, 2006

Sock on a Fish posted:

I came in this morning to find my Solaris box had crashed, and when I brought it back up it threw this at me:


The linked URL says that these types of errors are unrecoverable, but my OS is still chugging along and after the scrub finished the error disappeared. Am I in the clear or should I still be wary?

The filesystem is now suspect, because you don't know if a block had data changed and then re-checksummed to appear valid. I would check your logs for information about the crash itself, versus just what zpool is telling you about your pool.
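
On Solaris 10 the fault manager usually has more detail about what actually happened than zpool does, e.g.:
code:
-bash-3.00# fmdump                         # summary of logged faults
-bash-3.00# fmdump -eV | less              # verbose error telemetry behind the zpool errors
-bash-3.00# tail -200 /var/adm/messages    # the usual suspects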

Sock on a Fish
Jul 17, 2004

What if that thing I said?

H110Hawk posted:

The filesystem is now suspect, because you don't know if a block had data changed and then re-checksummed to appear valid. I would check your logs for information about the crash itself, versus just what zpool is telling you about your pool.

The logs aren't entirely clear, but I think this might have occurred at the same time as a scheduled scrub.

Could assigning s0 to the pool instead of the disk without any slice specified lead to this kind of a problem?

Sock on a Fish
Jul 17, 2004

What if that thing I said?

Sock on a Fish posted:

The logs aren't entirely clear, but I think this might have occurred at the same time as a scheduled scrub.

Could assigning s0 to the pool instead of the disk without any slice specified lead to this kind of a problem?

Say I took the mirror pool containing c1t0d0s0 and c1t8d0s0, and then one at a time removed each device from the pool and then added back as c1t(0|8)d0, allowing the pool to resilver in between moves. Also, let's say I rebooted the machine only to discover that I'd wiped out my boot sector.

How would I go about getting that back? I'm not finding any kind of recovery disks for Solaris 10.

Sock on a Fish
Jul 17, 2004

What if that thing I said?

Sock on a Fish posted:

Say I took the mirror pool containing c1t0d0s0 and c1t8d0s0, and then one at a time removed each device from the pool and then added back as c1t(0|8)d0, allowing the pool to resilver in between moves. Also, let's say I rebooted the machine only to discover that I'd wiped out my boot sector.

How would I go about getting that back? I'm not finding any kind of recovery disks for Solaris 10.

So, I discovered that you can modify boot options in grub on the Solaris installer and was able to drop myself into single user mode where I could mount my original filesystem. I'm thinking that I should be able to remove one of the disks from the pool, then go through a reinstallation of Solaris in the disk that I did not remove from the pool, and then I can use single user mode to mount both disks and copy the contents of the old root partition on to the new one, leaving the boot partition intact.

Thoughts?

H110Hawk
Dec 28, 2006

Sock on a Fish posted:

Say I took the mirror pool containing c1t0d0s0 and c1t8d0s0, and then one at a time removed each device from the pool and then added back as c1t(0|8)d0, allowing the pool to resilver in between moves. Also, let's say I rebooted the machine only to discover that I'd wiped out my boot sector.

How would I go about getting that back? I'm not finding any kind of recovery disks for Solaris 10.

Whoops. In theory you can just dd the boot sector from one of your old disks on to a new one. If they're in the mirror the worst you'll do is hose one of them. Remember to do it to the disk device itself (c1t0d0) or the whole disk partition (c1t0d0s2).

grub-install or what not *should* work, googling around found this untested bit:

http://opensolaris.org/jive/message.jspa?messageID=179529

As for what to dd on and off, the boot sector is a set size, and from there it should be enough to get you reading ZFS. Google dd grub gave this:

http://www.sorgonet.com/linux/grubrestore/

You will want to use the stage1 loader first and see what happens. I don't know where they switch from raw disk reading to actually being bootstrapped.
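
If you do go the dd route, a minimal sketch would be something like the following. This assumes c1t8d0 still boots and c1t0d0 is the one that got hosed, so double-check before running it; installgrub is probably the cleaner way:
code:
# copy the good disk's first sector to a file, then onto the wiped disk
dd if=/dev/rdsk/c1t8d0s2 of=/tmp/bootsect.bak bs=512 count=1
dd if=/tmp/bootsect.bak of=/dev/rdsk/c1t0d0s2 bs=512 count=1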

Sock on a Fish
Jul 17, 2004

What if that thing I said?

H110Hawk posted:

Whoops. In theory you can just dd the boot sector from one of your old disks on to a new one. If they're in the mirror the worst you'll do is hose one of them. Remember to do it to the disk device itself (c1t0d0) or the whole disk partition (c1t0d0s2).

grub-install or what not *should* work, googling around found this untested bit:

http://opensolaris.org/jive/message.jspa?messageID=179529

As for what to dd on and off, the boot sector is a set size, and from there it should be enough to get you reading ZFS. Google dd grub gave this:

http://www.sorgonet.com/linux/grubrestore/

You will want to use the stage1 loader first and see what happens. I don't know where they switch from raw disk reading to actually being bootstrapped.

I took the same actions for all disks in the pool.

I'm trying to do an installgrub like this:
code:
installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/dsk/c1t0d0s0
When I do, it comes back with a cannot open/stat error for /dev/dsk/c1t0d0s2. I've verified that this is true: I get an I/O error if I try a 'head /dev/dsk/c1t0d0s2', whereas with s0 I get actual output.

What does it all mean?! How is it possible that I can address s0 but not s2?

Sock on a Fish
Jul 17, 2004

What if that thing I said?
FYI, I just created a new thread for this issue, since it's more of a Solaris thing than a SAN thing: http://forums.somethingawful.com/showthread.php?threadid=3127366

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists
OK...

We've run into an insane number of problems on our crappy array (Gateway-branded Xyratex 5402E) with VMware. The basics of the setup are on page 5 if you're interested. Basically, the problem is that if the array gets busy at all, it starts ignoring commands, LUNs wind up remounting RO or locking completely, and everything goes to hell.

Regardless of that, we'll have a proposal for a new array in the next couple of weeks. The new solution will be expected to house a smallish (compared to some) production VMware environment (Probably 3-5 ESX hosts with up to 60 or 70 VMs), as well as a couple of fairly high-use Oracle databases, and the data store for an Enterprise Content Management system (scans of documents stored as PDFs/TIFFs.) Total disk space for this will be in the neighborhood of 15TB, and we'll need some overhead for expansion.

I have reason to believe that the vendor will be specifying an HP EVA system. That's not necessarily a problem for me, since we have an EVA4000 already for our Oracle RAC environment, and I'm fairly comfortable managing it, but I really really really need to make sure we avoid getting something that we're going to have a bunch of problems with again.

The vendor was selected for the ECM system project through the RFP process, and that included the hardware to support it. I'm not confident that they have anyone who's a SAN or VMware expert, and I don't really trust my own experience enough to feel completely comfortable. There are two of us that take care of everything from x86, Itanium, and Power hardware; Windows, Linux, and AIX management; storage; backups; etc. Because we have to be jacks of all trades, as much as I'd love to, I haven't been able to really specialize in much of it. I'm going to recommend we bring in a 3rd party consultant to help us out with evaluation, but I'm worried that (lack of) budget might kill that.

My understanding is that the EVA (and maybe everyone else now) spreads I/O across every disk you have assigned to the disk pool, regardless of individual vdisk settings. Do I need to worry about creating separate disk pools for Oracle and VMware, or will creating one large pool for the Tier 1 disks (15K FC) actually give me better performance even though it'll be different workloads? I think that the PDF/TIFF storage will probably be on lower-tiered disk (probably some 7.2K FATA, since access time on it will be a lot less critical than on the VMware and database sides). Am I just overthinking/completely out of my depth with these worries because of the problems I've had recently?

Would I be better served pushing for a different solution, ie NetApp or EMC?
I've heard a lot of good things about VMware on NetApp NFS, and a few things about Oracle over NFS as well (though I don't know whose).

Any help anyone can offer would be great.

EDIT, fixed some stuff I said that didn't apply.

Intraveinous fucked around with this message at 00:48 on May 8, 2009

BonoMan
Feb 20, 2002

Jade Ear Joe
Being new to SAN-type storage... what are the options that don't require per-client licensing?

One of our companies has an FC SAN that has StorNext or whatever and it's like 3500 per computer. Do Dell EqualLogic arrays that are iSCSI require licenses per user?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

BonoMan posted:

One of our companies has an FC SAN that has StorNext or whatever and it's like 3500 per computer. Do Dell EqualLogic arrays that are iSCSI require licenses per user?
Sun or NetApp both have unlimited iSCSI initiators.

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
I just spent the last two days fighting with fdisk, kpartx, multipathd, multipath, {pv|vg|lv}scan, {pv|vg|lv}create, udev, device mapper, etc. Seriously, what a retarded Rube Goldberg contraption.

oh well, let's trade sysbench results

quote:

[root@blade1 mnt]# sysbench --test=fileio --file-num=16 --file-total-size=8G prepare
sysbench 0.4.10: multi-threaded system evaluation benchmark

16 files, 524288Kb each, 8192Mb total
Creating files for the test...

[root@blade1 mnt]# sysbench --test=fileio --max-time=300 --max-requests=1000000 --file-num=16 --file-extra-flags=direct --file-fsync-freq=0 --file-total-size=8G --num-threads=16 --file-test-mode=rndrw --init-rng=1 --file-block-size=4096 run
sysbench 0.4.10: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16

Extra file open flags: 16384
16 files, 512Mb each
8Gb total file size
Block size 4Kb
Number of random requests for random IO: 1000000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
(last message repeated 15 times)
Done.

Operations performed: 482169 Read, 321436 Write, 0 Other = 803605 Total
Read 1.8393Gb Written 1.2262Gb Total transferred 3.0655Gb (10.463Mb/sec)
2678.55 Requests/sec executed

Test execution summary:
total time: 300.0154s
total number of events: 803605
total time taken by event execution: 4798.9296
per-request statistics:
min: 0.08ms
avg: 5.97ms
max: 1600.19ms
approx. 95 percentile: 23.35ms

Threads fairness:
events (avg/stddev): 50225.3125/358.34
execution time (avg/stddev): 299.9331/0.00


[root@blade1 mnt]# sysbench --test=fileio --max-time=300 --max-requests=1000000 --file-num=16 --file-extra-flags=direct --file-fsync-freq=0 --file-total-size=8G --num-threads=16 --file-test-mode=rndrw --init-rng=1 --file-block-size=8192 run
Block size 8Kb
2576.43 Requests/sec executed

[root@blade1 mnt]# sysbench --test=fileio --max-time=300 --max-requests=1000000 --file-num=16 --file-extra-flags=direct --file-fsync-freq=0 --file-total-size=8G --num-threads=16 --file-test-mode=rndrw --init-rng=1 run
Block size 16Kb
2427.95 Requests/sec executed

[root@blade1 mnt]# sysbench --test=fileio --max-time=300 --max-requests=1000000 --file-num=16 --file-extra-flags=direct --file-fsync-freq=0 --file-total-size=8G --num-threads=16 --file-test-mode=rndrw --init-rng=1 --file-block-size=32768 run
Block size 32Kb
2132.02 Requests/sec executed

[root@blade1 mnt]# sysbench --test=fileio --max-time=300 --max-requests=1000000 --file-num=16 --file-extra-flags=direct --file-fsync-freq=0 --file-total-size=8G --num-threads=16 --file-test-mode=rndrw --init-rng=1 --file-block-size=65536 run
Block size 64Kb
1716.62 Requests/sec executed




Dell M610 running CentOS 5.3 (ext3 on LVM) connected to a Xiotech Emprise 5000 using a RAID 10 LUN on a "balance" datapac

StabbinHobo fucked around with this message at 05:43 on Jun 10, 2009

rage-saq
Mar 21, 2001

Thats so ninja...

BonoMan posted:

Being new to SAN-type storage... what are the options that don't require per-client licensing?

One of our companies has an FC SAN that has StorNext or whatever and it's like 3500 per computer. Do Dell EqualLogic arrays that are iSCSI require licenses per user?

The use of StorNext is totally not required for any SAN usage at all; it is for very specialized use scenarios.
When you map a volume off a SAN to a system, it appears as any regular local drive that the system has exclusive block-level access to. A SAN's ability to share the same volume between two servers is great, but without a shared filesystem it would be useless: two systems would write to the file table at the same time and just destroy the whole thing.

StorNext provides that shared filesystem, so all the computers that have the volume mapped locally can use a centralized locking system and don't write to the FAT/same files at the same time, which would otherwise destroy the filesystem.

Syano
Jul 13, 2005
Ok, we are about to jump into the world of SAN storage and we have our sights set on an EqualLogic array. I have some questions though that will help me to understand how this all works. First, what is the best way to provision the storage? Do we just take the entire set of drives and make it one big raid set then divvy up LUNs from there to hand off to the servers, or should we divide the array up into multiple raid sets and build LUNs based off those?

Next, I am having a difficult time wrapping my head around the idea of a snapshot. Is it really as awesome as I am thinking it is? Because I am thinking if I were able to move all my storage into the array then I would be able to use snapshots and eventually replication to replace my current backup solution. Are snapshots really that awesome or do I have an incorrect vision of what they do?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Syano posted:

Next, I am having a difficult time wrapping my head around the idea of a snapshot. Is it really as awesome as I am thinking it is? Because I am thinking if I were able to move all my storage into the array then I would be able to use snapshots and eventually replication to replace my current backup solution. Are snapshots really that awesome or do I have an incorrect vision of what they do?
You know the old adage of how RAID isn't backup?

It's still RAID. I remember a story here about some guy with a big-rear end BlueArc NAS that was replicating to another head. The firmware hit a bug and imploded the filesystem, including snapshots. It then replicated the write to the other head, which imploded the filesystem on the replica.

This is probably less of a concern when your snapshots happen at the volume level instead of the filesystem level, but there's still plenty of disaster scenarios to consider without even getting into the possibilities of malicious administrators/intruders or natural disasters. You really need to keep offline, offsite backups.

Vulture Culture fucked around with this message at 16:17 on Jun 13, 2009

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Can someone tell me what the practicality of using a Sun 7210 w/ 46 7200 rpm disks as the backend for approximately 40 VMware ESX guests is? On the one hand, I am very afraid of using 7200rpm disks here, but on the other hand there are 46 of them.

I realize that without me pulling real IOPS numbers this is relatively stupid, but I need to start somewhere and this seems like a good place.
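
About the only math I can do without real numbers is the rule-of-thumb spindle count, which looks roughly like this:
code:
46 disks x ~75 IOPS per 7200rpm spindle       ~= 3,450 raw IOPS
minus RAID write penalty, parity and spares   ~= maybe 1,500-2,500 usable
spread over ~40 guests                        ~= 40-60 IOPS per VM on average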

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

Can someone tell me what the practicality of using a Sun 7210 w/ 46 7200 rpm disks as the backend for approximately 40 VMware ESX guests is? On the one hand, I am very afraid of using 7200rpm disks here, but on the other hand there are 46 of them.

I realize that without me pulling real IOPS numbers this is relatively stupid, but I need to start somewhere and this seems like a good place.
You're also not telling us what kind of workload you're trying to use here. I've got close to 40 VMs running off of 6 local 15K SAS disks in an IBM x3650, but mostly-idle development VMs have very different workloads than real production app servers.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Misogynist posted:

You're also not telling us what kind of workload you're trying to use here. I've got close to 40 VMs running off of 6 local 15K SAS disks in an IBM x3650, but mostly-idle development VMs have very different workloads than real production app servers.
They are real production VMs. Here is a quick list of what I have running currently.

6 DCs
1 500-user Exchange box (about half of which are for part-time employees who use it very, very lightly)
1 BES for roughly 30 blackberries
4 light-medium duty database servers
4 light webservers
1 citrix / TS license server
5 Application / Terminal Servers

This is what we currently have running on an older IBM SAN, 16 15k FC drives and 8 10k FC drives.

In addition to that workload, I want to add about 15 application servers, with workloads that I haven't even started to measure. All are currently running 2 to 6 10k disks.

Syano
Jul 13, 2005

Misogynist posted:

You know the old adage of how RAID isn't backup?

It's still RAID. I remember a story here about some guy with a big-rear end BlueArc NAS that was replicating to another head. The firmware hit a bug and imploded the filesystem, including snapshots. It then replicated the write to the other head, which imploded the filesystem on the replica.

This is probably less of a concern when your snapshots happen at the volume level instead of the filesystem level, but there's still plenty of disaster scenarios to consider without even getting into the possibilities of malicious administrators/intruders or natural disasters. You really need to keep offline, offsite backups.

Roger that. Good info to remember.

What about provisioning? Is it usually worth it to split up the disks into separate raid groups or just build one raid set from all the available disks? Or is this something you need to know more about your IO load to make a decision on?

H110Hawk
Dec 28, 2006

Syano posted:

What about provisioning? Is it usually worth it to split up the disks into separate raid groups or just build one raid set from all the available disks? Or is this something you need to know more about your IO load to make a decision on?

This depends on your workload. If you make one large raid set, then carve up luns, you will have the maximum possible IO throughput, but any one VM can bog down the rest of them with an I/O spike.

Consider a logging, email, and SQL server running on your array. Each one has its own LUN. We all know logging services sometimes go batshit insane and start logging things several times per second until you kill them. Do you want that to be able to bog down your email and SQL service until you fix it? There are very real tradeoffs, and my suggestion to you is, if you have time, to test out several setups.

You may also want to consider which workloads have the heaviest read load (email, typical sql) and which have the heaviest write load (logging, atypical sql) for combining. Figure out how the various read and write caches interact and you may be able to squeeze out some extra iops, but I will leave that part to the more experienced in this field.

bmoyles
Feb 15, 2002

United Neckbeard Foundation of America
Fun Shoe
Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

bmoyles posted:

Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?


How much money do you have? How expandable? What kinds of interfaces do you want? How redundant?

Tons of questions to answer.


Fangs404
Dec 20, 2004

I time bomb.

bmoyles posted:

Say you need 100TB of storage, nothing super fast or expensive, but easily expandable and preferably managed as a single unit. Think archival-type storage, with content added frequently, but retrieved much less frequently, especially as time goes on. What do you go for these days?

As dumb as [H] can be sometimes, http://www.hardforum.com/showthread.php?t=1393939 is a really solid thread on huge storage systems. Right now, the record holder has a system with 53.5TB of storage (http://www.hardforum.com/showpost.php?p=1034055907&postcount=113). Probably getting a rackmount case like http://www.hardforum.com/showpost.php?p=1033721267&postcount=4 would be ideal. The largest case on Newegg has 36 bays (http://www.newegg.com/Product/Product.aspx?Item=N82E16811165143), so you're definitely going to need at least 2 cases.

"100TB" and "nothing ... expensive" don't belong in the same sentence.

[edit]
Oh, wait. I thought this was the home NAS thread. The enterprise solution will likely be much more elegant.
