evil_bunnY
Apr 2, 2003

ozmunkeh posted:

The eventual quote (that was 2 months in the making), with tax, was a few dollars under $60K for 12x2TB SATA disks.
That's hilariously bad.


Internet Explorer
Jun 1, 2005





NippleFloss posted:




I mean, they just started offering 0% financing on EQL storage. That's the same poo poo that Hyundai does when they want to get rid of last year's Accents. That's not something you do when you're crushing the competition. Equallogic does pretty well in some segments because they sell at a discount relative to the fuller-featured boxes from NetApp and EMC and because they have a good product. They're hardly crushing anyone in any market. They only own 30% of the iSCSI-only market, with EMC at 20%.

http://www.marketwatch.com/story/wo...-idc-2012-06-08

Dell doesn't even get named; it just makes it into "Others" in External Disk systems market share, and that includes EQL, Compellent and PowerVault. They were at something like 7% market share around the middle of last year, and they're probably around the same now. EMC is crushing everyone in the storage market. They have nearly a third of the total market now, and they have been increasing market share year over year. Consider that the next time you want to make a "quality = market share" argument.

So what? We got 2 VNX 5300 SANs with 0% financing. You act like that is somehow unheard of. And you think Hyundai is the only car company that pulls something like that? Those numbers are overall. I specifically said in the SMB market. I spent a few minutes looking for anything broken down for the SMB market but was unable to find anything. But even your link said they were beating out EMC (the 800 lb gorilla) in the iSCSI market (which is heavily slanted toward SMB) by 10%.

For a small business, the difference between administering an Equallogic SAN and an EMC/NetApp SAN is like night and day. The only times I ever had to go into the command line on our Equallogic boxes were for one of the very first firmware upgrades and the single and only time I had to call in for support. In over 5 years I went into the command line twice and called in for support once.

On the EMC side, I never even started counting the support calls or times I had to use the command line because that would make me a crazy person. And from some of the stuff I have seen you discuss about NetApp configs in this thread I know NetApp isn't much different. Hell, the biggest reason we never went with NetApp was because the UI was so terrible. Well, that and the fact that it was twice the cost.

EoRaptor
Sep 13, 2003

by Fluffdaddy

complex posted:

This just sounds so ridiculous I can't believe it. Is this true?

Internet Explorer posted:

Wow. Really? I'll have to check the latest release notes. Do you know what version introduced this? I've stopped following Equallogic releases now that all my time is spent wrangling our VNX.

Yeah, I was a bit off:

Firmware 5.2.3: To protect access to available data, a member delete operation can now only be performed with customer support assistance.

I remembered that as LUN delete. Member delete is much rarer, and can do things like remove data on still-active members if you don't prep it right, so I see why this was changed.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Internet Explorer posted:

So what? We got 2 VNX 5300 SANs with 0% financing. You act like that is somehow unheard of. And you think Hyundai is the only car company that pulls something like that? Those numbers are overall. I specifically said in the SMB market. I spent a few minutes looking for anything broken down for the SMB market but was unable to find anything. But even your link said they were beating out EMC (the 800 lb gorilla) in the iSCSI market (which is heavily slanted toward SMB) by 10%.

For a small business, the difference between administering an Equallogic SAN and an EMC/NetApp SAN is like night and day. The only times I ever had to go into the command line on our Equallogic boxes were for one of the very first firmware upgrades and the single and only time I had to call in for support. In over 5 years I went into the command line twice and called in for support once.

On the EMC side, I never even started counting the support calls or times I had to use the command line because that would make me a crazy person. And from some of the stuff I have seen you discuss about NetApp configs in this thread I know NetApp isn't much different. Hell, the biggest reason we never went with NetApp was because the UI was so terrible. Well, that and the fact that it was twice the cost.

So you're saying you don't actually have any numbers on the SMB market, but you're just sure that they're crushing the competition, even though in overall metrics they're well behind? And you think it's a good thing that they only have a 30% market share in a market they are supposed to own and where EMC barely even tries? I've said a bunch of times that I think EQL is a very good product, but it's not a demonstrably, unarguably better product than what a lot of other vendors offer.

NetApp isn't particularly hard to work with. My first job working with NetApp was also as a VMware, email, network and Unix admin, and I had plenty of time to get all of that done. I can't speak to EMC as I haven't used any of the VNX stuff, but if you're an SMB you can get away with about 10 minutes a day of NetApp admin work. It's all incredibly easy, as is most modern storage that isn't using bleeding-edge features or running in a very large environment.

NetApp certainly isn't perfect, and a lot of their problems come down to QA, but saying that it's time-consuming or difficult isn't really a knock I've heard against them. You'll also hear me bitching about things like bugs more often than your average customer because the environment I support is large and complex enough that we will end up hitting just about every one of them. Your average small office isn't going to run into most of that stuff.

edit: This may have come off as more confrontational than intended. The upshot is that EQL is easy to work with and that is a strong point of theirs, but it's not the only important factor and they've still got a lot to do if they want to really compete with NetApp, EMC and IBM.

YOLOsubmarine fucked around with this message at 18:36 on Aug 1, 2012

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


I'm gonna bump my question about Nimble Storage from a couple of pages ago. They're being pretty aggressive and we're near the point where we'd consider changing directions on storage. They make a good pitch with some pretty nice looking performance numbers and I've read some good stuff on VMware's forums and such but I'd like to know if anyone here has had it running in production and how it's been working out.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

I mean, they just started offering 0% financing on EQL storage. That's the same poo poo that Hyundai does when they want to get rid of last year's Accents. That's not something you do when you're crushing the competition.
Cisco gives us the option to finance all of our gear at zero percent. They aren't exactly Hyundai clearing out last year's model.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Internet Explorer posted:

For a small business, the difference between administering an Equallogic SAN and an EMC/NetApp SAN is like night and day. The only times I ever had to go into the command line on our Equallogic boxes were for one of the very first firmware upgrades and the single and only time I had to call in for support. In over 5 years I went into the command line twice and called in for support once.

One of my old clients is a small/medium-sized business running Exchange on a NetApp, and I don't think they've ever touched the CLI once. They do, however, love SnapManager for Exchange and Single Mailbox Restore for Exchange. NetApp has been fairly easy to manage, even for non-storage people, since at least OnTap 7. If you can figure out how to run a Windows server, you can figure out how to manage a NetApp filer.

The other plus with NetApp is that I find it's harder for companies without dedicated storage people to back themselves into a corner with their configuration.

Also FlexClone is awesome.

Not being confrontational either; I just thought I'd share my experiences. NetApp's 3rd-party tie-in tools are fantastic things to have and usually worth the extra cost.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

Cisco gives us the option to finance all of our gear at zero percent. They aren't exactly Hyundai clearing out last year's model.
Yeah, it was just a snarky throwaway comment. But Cisco is doing it for the same reason Dell is: to make a stronger push into the SMB market, which isn't something you do in a market you dominate.

Internet Explorer
Jun 1, 2005





My comment was stupid in the first place. I was tired and made it as I was going to bed. The comment from 1000101 about NetApp being easier after OnTap 7 is interesting. I really have never had the chance to play with NetApp gear, so I was only going off what I had seen.

I guess my frustration stems from EMC / NetApp / IBM and the "right" way to manage a SAN. In my day to day, ease of use is just as important as all the other features, and unfortunately I think there is a lot of hand-waving coming from the "big boys" when it comes to that. It is really hard for me to recommend a SAN that is a pain in the rear end to implement / manage to a business that does not have a dedicated storage admin.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Internet Explorer posted:

I guess my frustration stems from EMC / NetApp / IBM and the "right" way to manage a SAN. In my day to day, ease of use is just as important as all the other features, and unfortunately I think there is a lot of hand-waving coming from the "big boys" when it comes to that. It is really hard for me to recommend a SAN that is a pain in the rear end to implement / manage to a business that does not have a dedicated storage admin.
What issues have you had with IBM? I've never administered DS8000 or XIV, but the LSI/Engenio (DS) and V7000 equipment I've worked with was always (too) simple to administer.

Our NLSAS disk failure rates on V7000 are another story.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Internet Explorer posted:

My comment was stupid in the first place. I was tired and made it as I was going to bed. The comment from 1000101 about NetApp being easier after OnTap 7 is interesting. I really have never had the chance to play with NetApp gear, so I was only going off what I had seen.

I guess my frustration stems from EMC / NetApp / IBM and the "right" way to manage a SAN. In my day to day, ease of use is just as important as all the other features, and unfortunately I think there is a lot of hand-waving coming from the "big boys" when it comes to that. It is really hard for me to recommend a SAN that is a pain in the rear end to implement / manage to a business that does not have a dedicated storage admin.

I was being kind of a dick. No worries.

When I was starting out I found NetApp gear to be very easy to use. If you're not trying to do anything too fancy with it then it's really tough to make it take up too much of your time. I've actually never had a single day of formal NetApp training. Everything I learned was just poking around and reading documentation. A lot of the appeal of NetApp to me was that it wasn't hopelessly obfuscated and there was a TON of information about how the devices actually worked out there.

I think some of the confusion comes from the idea that because NetApp/EMC/IBM/HDS, whoever, CAN be complicated, they must be complicated. If you're running a Tier 1 frame in a very large, complex environment then yes, it will probably take a lot more time, but it's going to be time you NEED to spend making sure everything works just so. And you'll have a dedicated admin. All of those vendors also make gear (in theory, anyway) that can be dropped into a small office and managed in a minimal amount of time by a busy admin. Because the requirements aren't as demanding, the time spent is much less. There's really no right or wrong way to manage a SAN. There is a huge amount of variety in customer skills and requirements, and some vendors serve specific markets better than others. But there's no "best", just "arguably better for this situation". Except EMC. They are objectively the worst.



Misogynist posted:

What issues have you had with IBM? I've never administered DS8000 or XIV, but the LSI/Engenio (DS) and V7000 equipment I've worked with was always (too) simple to administer.

Our NLSAS disk failure rates on V7000 are another story.

The only IBM gear I've ever worked on was an old FAStT, and I thought it was dead simple to use. I learned FC on it with nothing more than a little googling to learn terminology. Granted, it was a no-frills FC SAN, but the UI was still very straightforward and it was very obvious what you were doing when you were doing it. That should be a given, but after using Storage Navigator on HDS I understand that it is very possible to make an incredibly convoluted and useless UI for a very simple thing.

Internet Explorer
Jun 1, 2005





Misogynist posted:

What issues have you had with IBM? I've never administered DS8000 or XIV, but the LSI/Engenio (DS) and V7000 equipment I've worked with was always (too) simple to administer.

Our NLSAS disk failure rates on V7000 are another story.

Sorry I did not reply to this. I just won't do business with IBM; it really has nothing to do with their SANs in particular. They are the very definition of bloated, bureaucratic bullshit. I know that at least for a while they were mostly just selling rebranded NetApp, but I really don't know enough about it to say anything.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I do literally everything in our environment from time to time, so I am not a specialized storage guy, and I think our NetApps have been very easy to work with. They are well documented and their features and functionality just make sense.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Internet Explorer posted:

They are the very definition of bloated, bureaucratic bullshit.
So is pretty much every large IT vendor, quite honestly.

Rusty Kettle
Apr 10, 2005
Ultima! Ahmmm-bing!
I am trying to rebuild a RAID array that is managed by Linux using 'fdisk' and 'mdadm' and I am having problems. I am a grad student who I guess is now the IT guy for our research group, so I have little to no experience with this kind of thing. I am very nervous and Google isn't helping much.

Is this a good place to ask questions? If not, where should I go?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Rusty Kettle posted:

I am trying to rebuild a RAID array that is managed by Linux using 'fdisk' and 'mdadm' and I am having problems. I am a grad student who I guess is now the IT guy for our research group, so I have little to no experience with this kind of thing. I am very nervous and Google isn't helping much.

Is this a good place to ask questions? If not, where should I go?

Probably the Linux Questions thread would be a better choice; is it degraded but accessible or completely offline?

Also, do whatever you can to not be your research group's IT guy.

Rusty Kettle
Apr 10, 2005
Ultima! Ahmmm-bing!

PCjr sidecar posted:

Probably the Linux Questions thread would be a better choice; is it degraded but accessible or completely offline?

It's accessible. On boot, I get a message that the array is hosed and I can attempt to fix it manually. I used fdisk to find the failed disk, replaced it, and created a partition identical to the partition of a disk of identical size in the array. I assume this new disk is supposed to mirror that other disk, as they are the only disks of this size. I think I want to do something like
code:
mdadm /dev/md3 -a /dev/sdd1
where md3 is the array the other disk belongs to and sdd1 is the newly created partition. This doesn't work, though.

evol262
Nov 30, 2010
#!/usr/bin/perl
The Linux Questions thread would be a better choice, so either here or there, please post the output of 'cat /proc/mdstat', and we'll go from there. Don't make assumptions about what it's supposed to do.

If you're really sure it's supposed to be there, you need to fail the old /dev/sdd1 first with "mdadm --manage /dev/md3 --fail /dev/sdd1 && mdadm --manage /dev/md3 --remove /dev/sdd1" then "mdadm --manage /dev/md3 --add /dev/sdd1".
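A minimal sketch of that whole replace-a-failed-member sequence, assuming /dev/md3 is the degraded array and /dev/sdd1 is the partition on the replacement disk (device names taken from your post):
code:
# check the array state first; a failed member shows up flagged (F) and a
# degraded mirror shows a hole like [U_] in the status line
cat /proc/mdstat

# mark the stale partition as failed, then pull it out of the array
mdadm --manage /dev/md3 --fail /dev/sdd1
mdadm --manage /dev/md3 --remove /dev/sdd1

# add the replacement partition; md starts resyncing onto it automatically
mdadm --manage /dev/md3 --add /dev/sdd1

# watch the rebuild percentage tick up
watch -n 5 cat /proc/mdstat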

Rusty Kettle
Apr 10, 2005
Ultima! Ahmmm-bing!

evol262 posted:

The Linux Questions thread would be a better choice, so either here or there, please post the output of 'cat /proc/mdstat', and we'll go from there. Don't make assumptions about what it's supposed to do.

If you're really sure it's supposed to be there, you need to fail the old /dev/sdd1 first with "mdadm --manage /dev/md3 --fail /dev/sdd1 && mdadm --manage /dev/md3 --remove /dev/sdd1" then "mdadm --manage /dev/md3 --add /dev/sdd1".

I moved to the Linux Questions thread. I posted mdstat there so feel free to answer the next couple of questions there. Should I have failed the old drive first? Would it be possible to replace the new drive with the older one and then follow your instructions?

Rhymenoserous
May 23, 2008

FISHMANPET posted:

Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad.

Slow as gently caress. Prone to causing disk failure. There's your specific criticism of Drobo.

(gently caress drobo I've got two of them)

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.
I don't know where my boss keeps finding these Supermicro/Nexenta vendors, but it's like playing Bingo asking them about their shortcomings, and they're like, "How do you know all of this???"

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

ghostinmyshell posted:

I don't know where my boss keeps finding these Supermicro/Nexenta vendors, but it's like playing Bingo asking them about their shortcomings, and they're like, "How do you know all of this???"

I'd be interested in hearing about some of these, if you wouldn't mind.

Unrelated tip of the day: always have well-defined required performance numbers in your RFP and make sure they pass before you pay the vendor.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

PCjr sidecar posted:

I'd be interested in hearing about some of these, if you wouldn't mind.

Unrelated tip of the day: always have well-defined required performance numbers in your RFP and make sure they pass before you pay the vendor.
Also, do what I just did with our IBM SONAS install this week: start unplugging random cables and shutting off random breaker poles in your PDU and see what happens. Then, make the vendor show you the recovery process from the failure. Time them with a stopwatch.

Timdogg
Oct 4, 2003

The internet? Is that thing still around?
We are looking to buy 3 Isilon 72NL nodes to start our cluster. The intent is to use this storage for large digital objects for a library. I am most excited about the prospect of OneFS and the ability to just grow the file system as I add nodes.

Anyone have recent experience with Isilon? Preferably since EMC bought them? The support in our quote seems extraordinarily expensive, and it is apparently their lowest level (Gold?). My only experience has been with a set of Dell MD3200i/MD1200i arrays, and they had 5 years of next-business-day support, which has been great. Isilon wants to offer 3 or 5 years of next-business-day support, but it looks to cost an extra $20k to $30k per node. Any experience with their support? Is it worth the premium?

Thanks for your help.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Timdogg posted:

We are looking to buy 3 Isilon 72NL nodes to start our cluster. The intent is to use this storage for large digital objects for a library. I am most excited about the prospect of OneFS and the ability to just grow the file system as I add nodes.

Anyone have recent experience with Isilon? Preferably since EMC bought them? The support in our quote seems extraordinarily expensive, and it is apparently their lowest level (Gold?). My only experience has been with a set of Dell MD3200i/MD1200i arrays, and they had 5 years of next-business-day support, which has been great. Isilon wants to offer 3 or 5 years of next-business-day support, but it looks to cost an extra $20k to $30k per node. Any experience with their support? Is it worth the premium?

Thanks for your help.
It really depends. With storage vendors that support an easy upgrade path, in general you probably won't want to pay for support past the 3-year mark. For most vendors, years 3-5 represent a pretty significant uptick in support costs, and after year 5 they're astronomical. We replaced our 36NL cluster with 108NLs early this year because EMC/Isilon offered a trade-in that was more compelling than renewing our support and buying equivalent differential capacity in new 108NL kit. Rather than buying the support up-front, you might want to ask what kind of trade-in programs they'll have available at the 3-year mark, because available drive capacities will probably be much, much larger at the end of your warranty agreement. Again, it depends on exactly what kind of offers you're getting through your reseller.

We had really bad experiences with Isilon's support wherein they botched a firmware upgrade on the 36NLs and the nodes kept kernel panicking and rebooting randomly for about a month. Support proceeded to first not give a poo poo for weeks and then start blaming our NFS clients (RHEL) for the crashes. It took us months to get the situation sorted out. In order to get the issue addressed, it literally required our CIO and IT Director to get Isilon's sales people into a room for a sales discussion where they then started screaming about the support. A few minutes later there was a conf call open with a VP and suddenly we had their attention.

Since then, though, everything's been pretty good. The system has been far less problematic than our BlueArc, which pretty much requires daily babysitting for something or other, and if it was a one-off problem and we never again see an issue of that magnitude, I wouldn't be completely averse to buying Isilon again. I'm largely blaming EMC's handling of the takeover for the sequence of events that transpired (they dumped way too much money into sales and marketing way too fast and overtaxed all their competent support people for a long time). But again, everything else is working pretty well for us now.

If you do ever run into an issue where support is being utterly unhelpful, drop me a line. We've got all the right contacts in field support now to actually get things done.

Timdogg
Oct 4, 2003

The internet? Is that thing still around?

Misogynist posted:

It really depends. With storage vendors that support an easy upgrade path, in general you probably won't want to pay for support past the 3-year mark. For most vendors, years 3-5 represent a pretty significant uptick in support costs, and after year 5 they're astronomical. We replaced our 36NL cluster with 108NLs early this year because EMC/Isilon offered a trade-in that was more compelling than renewing our support and buying equivalent differential capacity in new 108NL kit. Rather than buying the support up-front, you might want to ask what kind of trade-in programs they'll have available at the 3-year mark, because available drive capacities will probably be much, much larger at the end of your warranty agreement. Again, it depends on exactly what kind of offers you're getting through your reseller.

Thank you so much! This is really helpful. I work for a state university and we have some weird policies about utilizing trade-in programs, so I usually don't even think about them, but your points are extremely valid and I will check with our sales folks as well as our campus contacts to see if an equipment trade-in is even possible. Agreed that it would make much more sense to get new equipment in 3 years than to extend the support.

Glad to hear they haven't been hell to manage. I like the Dell MDs, but having only iSCSI and their limited scale-out is what has me looking to change. Thanks again.

Bitch Stewie
Dec 17, 2011
$30k per node for support... gently caress!!

Bitch Stewie fucked around with this message at 09:38 on Aug 12, 2012

BnT
Mar 10, 2006

Can somebody explain to me how active/active SAN processors work on the SCSI level in enterprise SANs? Does one controller/processor control some disks, or do they share a bus somehow? Right now it's magic to me.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

BnT posted:

Can somebody explain to me how active/active SAN processors work on the SCSI level in enterprise SANs? Does one controller/processor control some disks, or do they share a bus somehow? Right now it's magic to me.

Each disk is only owned by one controller. FilerA may control disks 1,2,3 while FilerB controls disks 4,5,6. This used to be controlled by hardware, but is now usually controlled by software. That is to say: both FilerA and FilerB have paths to all of the disks, but you assign ownership at a software level. You can obviously give/take disks back and forth if needed, as long as they're not part of a RAID group or anything.

When a failure happens, the other controller seizes ownership. For example, if FilerA fails, FilerB will take control of those disks and will start serving out the data on those RAID groups, etc. It's very important that FilerB is running the same version of code as FilerA, so that it will understand all of the data/metadata on the drives it seizes. Once FilerA has recovered, FilerB can gracefully give control over those drives back to FilerA.

I believe the software ownership piece is a small block of data that is written at the front of the drive. If FilerA sees that FilerB has put its signature on the drive, it knows that it can't take them unless told to take them forcefully.

Because each controller has its own set of disks, the storage is not shared. If you give FilerA 10 disks and only give FilerB 5 disks, then FilerB is going to have a smaller capacity. On a system like a NetApp, FilerB has to have at least 3 disks, because the OS actually lives on the disks. I have deployed several active-active installations where FilerA has all but 3 of the disks and FilerB only has its minimum of 3. Essentially it's a single filer with redundant controllers at that point, which makes it easier to manage than two separate systems. You also get the advantage of dealing with one big storage pool instead of trying to manage two different pools. Finally, you can do a lot of cool things like move volumes between RAID groups if all of the disks are owned by one controller, whereas if each owns half of the disks, you can't seamlessly transfer data between FilerA and FilerB.
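To make the software-ownership piece concrete, here's roughly what it looks like from a NetApp 7-mode console (a sketch from memory; the disk ID and filer names are invented):
code:
# list disks that have no ownership signature stamped on them yet
FilerA> disk show -n

# write FilerA's ownership block onto a disk
FilerA> disk assign 0a.16 -o FilerA

# verify the HA pair is healthy, then exercise a takeover/giveback:
# FilerA seizes FilerB's disks and serves its data until the giveback
FilerA> cf status
FilerA> cf takeover
FilerA> cf giveback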

madsushi fucked around with this message at 18:41 on Aug 12, 2012

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
Madsushi is absolutely right for most low-end to midrange (and some higher-end) arrays, though some storage arrays can support every disk being accessed/written to by every controller at the same time. The EMC VMAX, for example, is just such an array. I could have 8 controllers in it and access the same LUN from all 8 controllers without having to go through any cluster interconnects or resort to "LUN trespassing."

I think it's more accurate to say that a NetApp or VNX array would be active/passive and passive/active, i.e. you'll expose half your storage resources through one controller node and the other half through the other. Then you turn on ALUA to sort everything out for the hosts connecting.

Here's a blogpost I dug up on the subject (not mine): http://virtualeverything.wordpress.com/2011/04/25/vmax-on-a-clariion-planet-part1/
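You can see ALUA doing that sorting from the host side, e.g. on an ESXi box (the device ID here is made up and the output is trimmed and annotated, so treat it as a sketch):
code:
~ # esxcli storage nmp device list -d naa.60a98000572d4275
   Storage Array Type: VMW_SATP_ALUA         <- ALUA-aware path handling
   Path Selection Policy: VMW_PSP_RR         <- round-robin across optimal paths
~ # esxcli storage nmp path list -d naa.60a98000572d4275 | grep "Group State"
   Group State: active                       <- paths through the owning controller
   Group State: active unoptimized           <- partner paths, used only on failover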

1000101 fucked around with this message at 22:32 on Aug 12, 2012

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

1000101 posted:

Madsushi is absolutely right for most low-end to midrange (and some higher-end) arrays, though some storage arrays can support every disk being accessed/written to by every controller at the same time. The EMC VMAX, for example, is just such an array. I could have 8 controllers in it and access the same LUN from all 8 controllers without having to go through any cluster interconnects or resort to "LUN trespassing."

I think it's more accurate to say that a NetApp or VNX array would be active/passive and passive/active, i.e. you'll expose half your storage resources through one controller node and the other half through the other. Then you turn on ALUA to sort everything out for the hosts connecting.

Here's a blogpost I dug up on the subject (not mine): http://virtualeverything.wordpress.com/2011/04/25/vmax-on-a-clariion-planet-part1/

The reason that NetApp, VNX, etc. don't do this is that a LUN in those instances is not comprised of specific slices of disk. In a traditional SAN like a VMAX or HDS, your LUN is a pre-defined group of disk clusters, and that isn't going to change very often. Even scenarios like thin provisioning and dynamic provisioning on traditional SANs are really just growing LUNs in pre-defined increments, and those updates are infrequent enough that it's easy to keep all controllers up to date on what physical locations belong to which LUN.

On NetApp (and probably most other vendors that run a full OS layer in between), a LUN is much more abstracted. It's a file that is not locked to specific disk blocks, living inside a volume that is not locked to specific disk blocks. Every new write will consume a formerly unused block that, a few seconds earlier, could have belonged to another LUN or volume entirely. This means that the physical disk blocks that define the LUN are constantly changing, and to provide IO in a reasonable time these structures are maintained in memory. If the filer had to consult the mapping file every time it wanted to find out what blocks belonged to a LUN, things would slow to a crawl. That means that a ton of important information is kept in system memory, and the partner controller doesn't have access to it. If writes were to come through that controller, it would need to read a whole crapload of block-map files just to figure out where to write, and by the time it had that information, writes coming in to the owning controller would likely have dirtied those maps anyway.

The only way it would work well would be if the controllers updated one another every time the on-disk data structures changed, but that would increase latency and complexity for marginal benefit. I'm sure there are vendors out there that do this, and the VMAX uses a shared memory architecture for some things, but it's not the norm, and they still aren't dealing with constantly changing block maps for LUNs. For NetApp it doesn't make sense, as OnTAP is a unified OS meant to run on hardware across the entire range, and you'd need very tightly integrated and pretty expensive hardware for it to work.

I don't really have a point to this digression; it's just an interesting technical detail that I've had to explain to customers from time to time when asked why the NetApp doesn't work the same as the Hitachi VSP, i.e. why there's a small I/O pause during controller failover rather than continuous access. It's because the other controller has to read information on what lives where from disk before it can provide access, and that takes a bit of time if you have many volumes and LUNs.

r u ready to WALK
Sep 29, 2001

There's also the variant where one controller is active and the other looks active to the host but is actually just passing requests internally through to the active controller. You can do this if you have more bandwidth on the backend than on the frontend FC ports, and it will let the hosts do round-robin load balancing almost as fast as on true enterprise active/active.

luminalflux
May 27, 2005



This weekend my LeftHand decided it was a great time to give me a bunch of "quorum status for manager xxx in management group yyy is Down" messages :ohdear:. No idea what this actually means, since it looks green in the console.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

error1 posted:

There's also the variant where one controller is active and the other looks active to the host but is actually just passing requests internally through to the active controller. You can do this if you have more bandwidth on the backend than on the frontend FC ports, and it will let the hosts do round-robin load balancing almost as fast as on true enterprise active/active.

That's still active/passive (or, like 1000101 said, active/passive and passive/active at the same time). That's the situation in which you'd use ALUA to ensure that access is only happening on optimal paths under normal conditions. You don't want to use round robin in that scenario as the path through the cluster interconnects will still be slower than the paths directly to the LUN through the owning controller. Even if your interconnect had no latency and infinite bandwidth you still pay a latency penalty due to processing overhead on data being accessed through the suboptimal path.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

NippleFloss posted:

That's still active/passive (or, like 1000101 said, active/passive and passive/active at the same time). That's the situation in which you'd use ALUA to ensure that access is only happening on optimal paths under normal conditions. You don't want to use round robin in that scenario as the path through the cluster interconnects will still be slower than the paths directly to the LUN through the owning controller. Even if your interconnect had no latency and infinite bandwidth you still pay a latency penalty due to processing overhead on data being accessed through the suboptimal path.

What NippleFloss is trying to say is:

quote:

[hostname: scsitarget.partnerPath.misconfigured:error]: FCP Partner Path Misconfigured.

[hostname: scsitarget.partnerPath.misconfigured:error]: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected.

Jesus, if I had a nickel for every AutoSupport I received for FCP Partner Path Misconfigured, I would retire.

Beelzebubba9
Feb 24, 2004

Number19 posted:

I'm gonna bump my question about Nimble Storage from a couple of pages ago. They're being pretty aggressive and we're near the point where we'd consider changing directions on storage. They make a good pitch with some pretty nice looking performance numbers and I've read some good stuff on VMware's forums and such but I'd like to know if anyone here has had it running in production and how it's been working out.

I'm going to bump this too. The internet seems to have very good things to say about Nimble's product, and the people I know who use them really like them, but they weren't running them in a production or similarly stressed environment.

....or do I need to be SA's $250K guinea pig?

evil_bunnY
Apr 2, 2003

Nimble's mostly ex-NetApp people IIRC.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Beelzebubba9 posted:

I'm going to bump this too. The internet seems to have very good things to say about Nimble's product, and the people I know who use them really like them, but they weren't running them in a production or similarly stressed environment.

....or do I need to be SA's $250K guinea pig?

How comfortable are you with being one of a very small number of users? It will mean you're the one running into bugs, more often than with the other vendors, which probably have just as many bugs but more people to find them and fix them before you notice.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


three posted:

How comfortable are you with being one of a very small number of users? It will mean you're the one running into bugs, more often than with the other vendors, which probably have just as many bugs but more people to find them and fix them before you notice.

This is my biggest hangup. We're not a five-nines kind of shop, but any kind of prolonged outage will be bad news. That, and my test-environment Nexenta is really making me like providing storage to VMware over NFS, which I won't get from a Nimble unit.

Then again, the performance looks incredible, which is hard to overlook. Hopefully they'll let me beat up a demo unit for a while. They're being very aggressive... they really want as much business as they can get.


evil_bunnY
Apr 2, 2003

Why's a NetApp root vol 160GB minimum? Right now my controller is using all of 5GB. I'm moving the root vol according to this KB, but of course the original root vol is 2+TB because it was created on an aggregate of 3TB disks. The KB also fails to mention how to actually move the data (vol copy to a restricted volume).

I tried resizing the original root vol but it won't go below 160GB.

Can I copy the root vol into a new one, resize this new one (to, say, 10GB) and then make it the new root vol?
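For reference, the move itself boils down to something like this on 7-mode (my sketch of the KB's steps; the volume and aggregate names are invented):
code:
# create the new volume on the target aggregate and restrict it,
# since vol copy requires a restricted destination
filer> vol create newroot aggr_small 160g
filer> vol restrict newroot

# block-copy the old root volume into it
filer> vol copy start vol0 newroot

# bring it online, flag it as root, and reboot to cut over
filer> vol online newroot
filer> vol options newroot root
filer> reboot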

  • Reply