skipdogg
Nov 29, 2004
Resident SRT-4 Expert

InferiorWang posted:

Does anyone have any experience with the HP P4300 G2 SAN starter kit? Thoughts? Has HP screwed up the LeftHand units, or are they still a good option for an iSCSI SAN? I'm looking into virtualizing a large chunk of our physical machines. We only have one server running one MSSQL database and no Oracle. It will mostly be for our GroupWise system and network file storage, along with the odds-and-ends boxes that are just wasting electricity.

I got pricing and it was more than I was expecting. Then again, I have no real basis for my expectations.

Sure do. Pricing shouldn't be too bad; not sure what they came at you with, but if you PM me I'll ballpark what we paid.

I have 4 shelves of LeftHand/P4300 with 450GB 15K drives that are used in an Oracle lab environment running on 10Gig. No complaints at all other than the way LeftHand stuff is set up: you don't have granular disk control. You can't take drives 1-4 and create a LUN; it stripes everything across all disks. For the lab they're buying Compellent next, since it offers physical drive control.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


We just added a couple of nodes in a starter set and I was able to beat them up fairly aggressively on pricing. Then again, we were buying a lot of additional server and networking stuff as well. Regardless, all I had to do was namedrop NetApp and EMC and HP didn't even waste time coming back with a bad price. They cut it down to something unbeatable on the first quote.

idolmind86
Jun 13, 2003

It's better to burn out than to fade away.

It's even better to work out, numbnuts.

Guys, thanks for all of the responses. I will investigate iSCSI as an alternative avenue. Unfortunately, I don't know if we can get away with block-level disk access, because one of the places where our product runs into issues is a shell script, which of course needs file-level access.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

idolmind86 posted:

Guys, thanks for all of the responses. I will investigate iSCSI as an alternative avenue. Unfortunately, I don't know if we can get away with block-level disk access, because one of the places where our product runs into issues is a shell script, which of course needs file-level access.

If the script is running on the same host as your $APP then you'll have no problems. As far as your app is concerned, iSCSI is exactly the same as plugging a SATA drive into the computer. Would you have trouble shell scripting on a local drive?
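
To put it another way: once the iSCSI LUN has a filesystem on it and is mounted, nothing in a shell script can tell it apart from a local disk. A minimal sketch (the mount point and paths are made up):

code:

#!/bin/sh
# This runs identically whether /mnt/data sits on a local SATA drive
# or on an iSCSI LUN; the block device behind the filesystem is
# invisible at this level.
DATADIR=/mnt/data/appdata
mkdir -p "$DATADIR"
echo "hello" > "$DATADIR/test.txt"
cat "$DATADIR/test.txt"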

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:
We're primarily using hosts with local storage, but I'd like to farm off our WSUS database drive to an iSCSI NAS (Synology 1010). Would I be better off using the Windows server's iSCSI initiator to connect directly to the NAS, or should I be looking at allocating the storage to a VMware datastore and having VMware present a drive to the Windows server? I intend to allocate some of the storage to VMware datastores; I just don't know if it's better to present the drive directly to Windows when possible.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

bob arctor posted:

Would I be better off using the Windows server's iSCSI initiator to connect directly to the NAS, or should I be looking at allocating the storage to a VMware datastore and having VMware present a drive to the Windows server?
Ultimately it is probably a matter of preference, unless the storage vendor provides tools for snapshot management like NetApp does. One thing to consider: with the default block size of 1MB on a VMFS volume, the maximum VMDK size is 256GB. This means that if you need more than that, you will either have to use iSCSI LUNs attached directly to your guest, or use multiple VMDKs with Windows combining the disks into a single volume. On our WSUS servers we use iSCSI LUNs connected to the guests.
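
For reference, the usual VMFS-3 block size to maximum VMDK size mapping (these are the standard published limits for this era of ESX; double-check your version's documentation):

code:

1MB block size -> 256GB max VMDK
2MB block size -> 512GB max VMDK
4MB block size -> 1TB  max VMDK
8MB block size -> 2TB  max VMDK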

namaste friends
Sep 18, 2004

by Smythe

adorai posted:

Ultimately it is probably a matter of preference, unless the storage vendor provides tools for snapshot management like NetApp does. One thing to consider: with the default block size of 1MB on a VMFS volume, the maximum VMDK size is 256GB. This means that if you need more than that, you will either have to use iSCSI LUNs attached directly to your guest, or use multiple VMDKs with Windows combining the disks into a single volume. On our WSUS servers we use iSCSI LUNs connected to the guests.

How big do WSUS repositories get these days anyway?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Cultural Imperial posted:

How big do WSUS repositories get these days anyway?
Ours is about 45GB for English only and about half the available products.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

Ultimately it is probably a matter of preference, unless the storage vendor provides tools for snapshot management like NetApp does. One thing to consider: with the default block size of 1MB on a VMFS volume, the maximum VMDK size is 256GB. This means that if you need more than that, you will either have to use iSCSI LUNs attached directly to your guest, or use multiple VMDKs with Windows combining the disks into a single volume. On our WSUS servers we use iSCSI LUNs connected to the guests.
If he's looking at allocating a new iSCSI volume for the WSUS data anyway, he's perfectly capable of using an 8MB block size. We're not talking about the local datastore on an ESXi install or something.

namaste friends
Sep 18, 2004

by Smythe
If the size of a Windows patch that you push out is smaller than the read cache on your NAS, it's probably not going to make much difference what block size you specify. The best thing to do, if you're really worried about performance, is to try both block sizes and run some benchmarks on them.
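
A crude way to benchmark, assuming a Linux guest with a scratch disk on each candidate datastore (device names are placeholders, and this will destroy whatever is on them):

code:

# sequential write, bypassing the page cache
dd if=/dev/zero of=/dev/sdb bs=1M count=4096 oflag=direct
# sequential read
dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct

Repeat against a disk on the other datastore and compare throughput; for anything more realistic you'd want iometer or a copy of your actual workload.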

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cultural Imperial posted:

If the size of a Windows patch that you push out is smaller than the read cache on your NAS, it's probably not going to make much difference what block size you specify. The best thing to do, if you're really worried about performance, is to try both block sizes and run some benchmarks on them.
We're talking about VMFS block sizes, which have implications far above and beyond (and irrelevant to) the NTFS cluster size.

namaste friends
Sep 18, 2004

by Smythe

Misogynist posted:

We're talking about VMFS block sizes, which have implications far above and beyond (and irrelevant to) the NTFS cluster size.

Ah, gotcha.

what is this
Sep 11, 2001

it is a lemur

idolmind86 posted:

Guys, thanks for all of the responses. I will investigate iSCSI as an alternative avenue. Unfortunately, I don't know if we can get away with block-level disk access, because one of the places where our product runs into issues is a shell script, which of course needs file-level access.

Not to keep harping on this, and not to be mean, but

(a) You still don't know the difference between block level and file level access, and

(b) You still don't grasp the distinction between a SAN and a NAS.



Here's the explanation: a SAN - block-level storage mounted over the network - appears to the host just like a locally plugged-in disk.

If your application will run on the internal hard drive it will work flawlessly over a SAN. I cannot imagine an app that only works over network drives but won't work on local drives.

Move to iSCSI and all your problems will be solved. Require this of clients. Small shops can then just buy local storage, and I guarantee large orgs already have a SAN they can carve a LUN off of for your application.

Syano
Jul 13, 2005
Is there anyone that competes price-wise with Dell and their PowerVault 3xxx series of iSCSI SANs? I would love to get a competitor to show me what they have, but I have yet to find anyone that can compete on price.

MrMoo
Sep 14, 2000

What about the two high-end ReadyNAS units? They are expensive on CDW, though.

Syano
Jul 13, 2005
Snap. I had no idea Netgear sold iSCSI stuff. It looks pretty feature-rich too. I wonder if I could overcome my innate fear of putting production data on a Netgear box.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Syano posted:

I wonder if I could overcome my innate fear of putting production data on a Netgear box.
I would have a hard time with that too, but I'd probably also have difficulty putting production data on a Dell box after my experiences at my current job.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Syano posted:

Is there anyone that competes price-wise with Dell and their PowerVault 3xxx series of iSCSI SANs? I would love to get a competitor to show me what they have, but I have yet to find anyone that can compete on price.

What config are you looking at? If you beat them up enough you might be able to get an HP MSA/P2000 unit for about the same price.

Syano
Jul 13, 2005

skipdogg posted:

What config are you looking at? If you beat them up enough you might be able to get an HP MSA/P2000 unit for about the same price.

Nothing fancy. Right now I have a PowerVault 3200i with 3.2TB of raw storage on 15K RPM SAS disks. The main feature I would really like that I don't currently have is some sort of replication.

Nukelear v.2
Jun 25, 2004
My optional title text

skipdogg posted:

What config are you looking at? If you beat them up enough you might be able to get an HP MSA/P2000 unit for about the same price.

This is what we did. Couldn't get the G3 down to the same price as Dell, but it's close enough and it's better hardware; it also let us switch to SFF drives. It supports remote snap replication too, which is what you want.

Edit: By close enough I mean you're probably paying around 12k for the Dell and around 19k for a G3. Obviously lots of factors influence this, so treat it as a rough guide.

Nukelear v.2 fucked around with this message at 17:37 on Dec 10, 2010

complex
Sep 16, 2003

Anyone have any experience with 3PAR gear? We're looking for alternatives to our NetApp. We use Fibre Channel.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.
Has anyone used OpenFiler before in an enterprise environment? Our resident Linux zealot thinks we should look at it since it's FREE, which now has the higher-ups asking us to take a look.

I want to smack him.

edit: \/\/\/\/\/ Great thanks!

ghostinmyshell fucked around with this message at 21:10 on Dec 10, 2010

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

ghostinmyshell posted:

Has anyone used OpenFiler before in an enterprise environment? Our resident Linux zealot thinks we should look at it since it's FREE, which now has the higher-ups asking us to take a look.

I want to smack him.
Doesn't work with MSCS = useless

Vanilla
Feb 24, 2002

Hay guys what's going on in th

complex posted:

Anyone have any experience with 3PAR gear? We're looking for alternatives to our NetApp. We use Fibre Channel.

Good kit. Simple to use, some good features, does the job, but expensive - the cost sits between mid-tier and high-end. Those that use it love it.

They get great survey ratings. In those industry surveys that ask if you'd use your current vendor again, 3PAR gets 100%.

Also - 3PAR is now owned by HP, just FYI.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

ghostinmyshell posted:

Has anyone used OpenFiler before in an enterprise environment? Our resident Linux zealot thinks we should look at it since it's FREE, which now has the higher-ups asking us to take a look.

I want to smack him.

edit: \/\/\/\/\/ Great thanks!
If you are looking for something that is free, look at Nexenta.

Misogynist posted:

Doesn't work with MSCS = useless
I don't think that is quite a deal killer.

conntrack
Aug 8, 2003

by angerbeet
Grey-box gear is just not "enterprise storage"; keep good backups if you go with some Linux poo poo.

Just because they have a sweet website does not mean they can give you five thousand nines.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

conntrack posted:

Grey-box gear is just not "enterprise storage"; keep good backups if you go with some Linux poo poo.

Just because they have a sweet website does not mean they can give you five thousand nines.
Maybe I'm just being an obnoxious pedant, but not everything enterprise has mission-critical uptime requirements or OLTP-size performance requirements, either.

conntrack
Aug 8, 2003

by angerbeet

Misogynist posted:

Maybe I'm just being an obnoxious pedant, but not everything enterprise has mission-critical uptime requirements or OLTP-size performance requirements, either.

No, it doesn't. But then again, a lot of companies sell standard PC boxes with "secret sauce". The point was to know what you are buying and base your expectations accordingly. What they write on their website to differentiate themselves from the hundreds of others like them might or might not just be marketing describing what they wish the product could deliver.

idolmind86
Jun 13, 2003

It's better to burn out than to fade away.

It's even better to work out, numbnuts.

what is this posted:

Not to keep harping on this, and not to be mean, but

(a) You still don't know the difference between block level and file level access, and

(b) You still don't grasp the distinction between a SAN and a NAS.

I appreciate all the help, but there's no reason to be rude. I know the difference between a SAN and a NAS, as I've been doing a ton of research. I am not a Unix admin and I am not a network admin, so I apologize if I don't get every detail correct. I am a programmer, and I never really had to care about filesystems until our customers started installing our software in unstable environments and then expected us to address the issues. We run into tons of issues with NAS and NFS because our customers generally aren't running the latest versions of NFS or of the OS they run it on. In general, SANs had been fine until lately, when one customer with a SAN started seeing mounted partitions hang indefinitely after hitting some type of inode limit.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

idolmind86 posted:

In general, SANs had been fine until lately, when one customer with a SAN started seeing mounted partitions hang indefinitely after hitting some type of inode limit.
Hire a support person who knows something about implementing production IT systems. Seriously. This particular comment reflects exactly the lack of understanding of systems principles and what things actually do that's getting you in trouble.

idolmind86
Jun 13, 2003

It's better to burn out than to fade away.

It's even better to work out, numbnuts.

Misogynist posted:

Hire a support person who knows something about implementing production IT systems. Seriously. This particular comment reflects exactly the lack of understanding of systems principles and what things actually do that's getting you in trouble.

How am I getting myself in trouble, though? If the customer has all the software on their Oracle application tier (not just our product) running off this mount, and the mount locks up during heavy load, how is this our problem? All we hear is third-hand information from the Unix admins, who relay to the DBAs, who relay to the application support people, who tell us that there is "some sort of inode limit that has been reached and you need to fix it". Our product has no file leaks, and we verify all calls that open, read, or write files. At peak times we have a maximum of maybe 20 files open.

How are we supposed to anticipate every issue that may come up with whatever filesystem implementation our customer may have? It would be easy to say "hire somebody who knows Veritas clusters" if we only supported Veritas clusters, but we have a vast number of customers on SANs and NAS running Linux, AIX, Solaris, HP-UX, Windows, etc., and god knows how many different storage vendors.

I guess my point is that I don't think it is unreasonable for a software vendor to expect a customer to provide a stable filesystem for our software to run on, yet so many people think it is.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Inodes are a filesystem feature and have nothing to do with SAN architectures or anything else besides the filesystem. Whether a partition hangs or not similarly has nothing to do with this -- when the inode limit is reached, you can't create any more files. That's it. What you need is someone with the expertise to figure out whether something is an issue with your product, with the OS, or with the backend storage vendor. This is something basic that your company must have if you're selling products to be used in a datacenter.

This is exactly what I'm saying -- you guys clearly have no idea what you should and shouldn't be supporting because you don't know what basic system components are, how they work, and what the implications are when they break. If you have Unix admins on staff, they're obviously not communicating with anyone else and they need to be brought in on these development issues.

Vulture Culture fucked around with this message at 16:05 on Dec 13, 2010

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Vanilla posted:

Good kit. Simple to use, some good features, does the job, but expensive - the cost sits between mid-tier and high-end. Those that use it love it.

They get great survey ratings. In those industry surveys that ask if you'd use your current vendor again, 3PAR gets 100%.

Also - 3PAR is now owned by HP, just FYI.

We generally hear nothing but positive feedback about 3PAR in the field as well. In fact, the only real negative thing I've heard is that it's hard to find people who really understand 3PAR and know it inside and out for services work.

quote:

How am I getting myself in trouble, though? If the customer has all the software on their Oracle application tier (not just our product) running off this mount, and the mount locks up during heavy load, how is this our problem? All we hear is third-hand information from the Unix admins, who relay to the DBAs, who relay to the application support people, who tell us that there is "some sort of inode limit that has been reached and you need to fix it". Our product has no file leaks, and we verify all calls that open, read, or write files. At peak times we have a maximum of maybe 20 files open.

I guess the problem is it's hard to put a finger on your core issue because you keep flipping between SAN and NAS contexts. If it's an inode limit on an NFS volume, you can check it pretty quickly with a 'df -i' on the NFS server.
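
For example (the export path here is hypothetical; IUse% at 100% means the filesystem is out of inodes even if plain df still shows free space):

code:

df -i /export/oracle
# Filesystem      Inodes   IUsed  IFree IUse% Mounted on
# /dev/sdb1      3276800 3276800      0  100% /export/oracle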

I can't be absolutely certain, though, because all of this inode talk is peppered in with SAN talk, and you said yourself a lot of this information is 57th-hand, so god knows what the telephone game has done with it.

quote:

How are we supposed to anticipate every issue that may come up with whatever filesystem implementation our customer may have? It would be easy to say "hire somebody who knows Veritas clusters" if we only supported Veritas clusters, but we have a vast number of customers on SANs and NAS running Linux, AIX, Solaris, HP-UX, Windows, etc., and god knows how many different storage vendors.

I think getting people trained up on core technologies can be tremendously helpful. I don't think it's really cost-effective to go get an EMC expert, a NetApp expert, an HDS expert, etc. You can, however, look for folks with some experience managing iSCSI/FCP environments who can help you understand what you're getting into and, more importantly, communicate it clearly to both your support staff and your customers.

At the end of the day, iSCSI and FCP are means to get SCSI commands from your server to your disks and nothing more. If your filesystem says "oh poo poo I'm out of inodes!" then the SAN is just going to deliver the message back up to the host but it plays no part in the game beyond that.


quote:

I guess my point is that I don't think it is unreasonable for a software vendor to expect a customer to provide a stable filesystem for our software to run on, yet so many people think it is.

Remember that iSCSI is no more a filesystem than SCSI, SATA, or SAS. It's a means to get to a disk; plumbing if you will.

Once you present your iSCSI device to a server (either via a hardware HBA or software initiator) it shows up as just another "plain Jane" SCSI device. You're going to fdisk it and mkfs it just like you would if you'd plugged a disk into a SAS card.
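
A minimal sketch of that flow with the Linux open-iscsi initiator (the portal IP, IQN, and device name are all made up):

code:

# discover targets on the array, then log in to one
iscsiadm -m discovery -t sendtargets -p 192.168.10.50
iscsiadm -m node -T iqn.2010-12.com.example:storage.lun0 -p 192.168.10.50 --login
# the LUN now shows up as an ordinary SCSI disk, e.g. /dev/sdb
fdisk /dev/sdb          # partition it like any local disk
mkfs.ext3 /dev/sdb1     # put whatever filesystem you want on it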

Here is the caveat: you want to make sure the network supporting iSCSI is absolutely healthy and, at the very least, gigabit. You probably want switches with relatively deep packet buffers and flow control (read: not D-Link/Linksys specials from Best Buy).

quote:

In general, SANs had been fine until lately, when one customer with a SAN started seeing mounted partitions hang indefinitely after hitting some type of inode limit.

Are you sure this is the problem? It doesn't make a lot of sense when I think about it: inode limitations would be a filesystem issue, not a block-device issue.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

1000101 posted:

Are you sure this is the problem? It doesn't make a lot of sense when I think about it: inode limitations would be a filesystem issue, not a block-device issue.
Maybe he has a NetApp with a million LUNs in a single volume.

what is this
Sep 11, 2001

it is a lemur

idolmind86 posted:

I appreciate all the help, but there's no reason to be rude. I know the difference between a SAN and a NAS, as I've been doing a ton of research. I am not a Unix admin and I am not a network admin, so I apologize if I don't get every detail correct. I am a programmer, and I never really had to care about filesystems until our customers started installing our software in unstable environments and then expected us to address the issues. We run into tons of issues with NAS and NFS because our customers generally aren't running the latest versions of NFS or of the OS they run it on. In general, SANs had been fine until lately, when one customer with a SAN started seeing mounted partitions hang indefinitely after hitting some type of inode limit.

I'm not being rude. You just don't understand the difference between a SAN and a NAS, or the distinction between block-level and file-level storage.

I'm genuinely trying to help you here. You don't even need to hire someone as other people have suggested. Here is the fix for your issues:

(1) Require local storage or a SAN with a dedicated LUN for your app mounted (iSCSI/FC, doesn't matter).

There are no other steps. It's that simple. You can be the big hero here.


The confusion you're running into is that a SAN box can also serve up volumes as if it were a NAS - in other words, the same array can provide NAS services and offer network file-level storage (SMB/NFS/etc.).



additionally


idolmind86 posted:

I guess my point is that I don't think it is unreasonable for a software vendor to expect a customer to provide a stable filesystem for our software to run on, yet so many people think it is.


NOOOO. There is no file system with a SAN.


Here's what happens:

(1) Provision a LUN
(2) Use an iSCSI initiator or whatever to connect the computer to the LUN
(3) Windows pops up "You've connected an uninitialized disk! Would you like to format NTFS? (yes)"


There is no filesystem until you put one there. You could format it with NTFS, FAT32, ZFS, HFS+, or even BeOS's BFS. It's up to you.

what is this fucked around with this message at 00:33 on Dec 14, 2010

what is this
Sep 11, 2001

it is a lemur

Misogynist posted:

This is exactly what I'm saying -- you guys clearly have no idea what you should and shouldn't be supporting because you don't know what basic system components are, how they work, and what the implications are when they break. If you have Unix admins on staff, they're obviously not communicating with anyone else and they need to be brought in on these development issues.

This is not uncommon, though. I worked with a company that had a Postgres database and was completely clueless that you should steer clear of putting the database on SMB/NFS.


http://www.postgresql.org/docs/8.3/static/creating-cluster.html

quote:

17.2.1. Network File Systems

Many installations create database clusters on network file systems. Sometimes this is done directly via NFS, or by using a Network Attached Storage (NAS) device that uses NFS internally. PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like locally-connected drives (DAS, Direct Attached Storage). If client and server NFS implementations have non-standard semantics, this can cause reliability problems (see http://www.time-travellers.org/shane/papers/NFS_considered_harmful.html). Specifically, delayed (asynchronous) writes to the NFS server can cause reliability problems; if possible, mount NFS file systems synchronously (without caching) to avoid this. (Storage Area Networks (SAN) use a low-level communication protocol rather than NFS.)
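
To make that concrete, the synchronous, cache-disabled mount the docs suggest looks something like this (server name and paths are placeholders):

code:

# hard: retry forever instead of erroring out
# sync: synchronous writes; noac: no attribute caching
mount -t nfs -o hard,sync,noac nfsserver:/export/pgdata /var/lib/pgsql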



http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node11.html

quote:

NFS and other remote file systems are not recommended for use by POSTGRESQL. NFS does not have the same file system semantics as a local file system, and these inconsistencies can cause data reliability or crash recovery problems.



Most databases should only be put on Direct Attached Storage (DAS) or on a LUN mounted from a SAN providing block-level access.


Is it possible to get them working with a networked filesystem on a NAS? Sure, particularly with systems like Oracle that are engineered to support it. Are you going to do it easily, with haphazard support or an unusual setup? No, it will be a disaster every time. Using DAS or a SAN provides an easy solution; it's never worth the file-locking, caching, and other headaches of networked filesystems.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

adorai posted:

Maybe he has a NetApp with a million LUNs in a single volume.

You win this round, adorai.....

That said, if it is a NetApp, then provision a new volume (not a qtree in an existing volume) to house your Oracle database. Ideally you want this on an aggregate that isn't already supporting a billion other things.

I would only do this with Oracle, though - never MSSQL/MySQL/PGSQL/etc. With those I would use iSCSI with NTFS/ext3/whatever if I didn't have FC available.

Serfer
Mar 10, 2003

The piss tape is real



Oh hi everyone with HP SAN equipment, http://seclists.org/bugtraq/2010/Dec/102

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Oh :wtc:

Nukelear v.2
Jun 25, 2004
My optional title text

Serfer posted:

Oh hi everyone with HP SAN equipment, http://seclists.org/bugtraq/2010/Dec/102

Just tried on mine and can confirm this. WTF HP.

Looks like I'll be keeping their management interface on an isolated network for a while, which means we lose their phone-home ability.
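
For what it's worth, a sketch of fencing the management VLAN off at a Linux firewall so only an admin subnet can reach it (all addresses are made up; adapt to whatever actually routes between those networks):

code:

# let the admin subnet reach the SAN management network
iptables -A FORWARD -s 10.20.0.0/24 -d 192.168.99.0/24 -j ACCEPT
# drop everything else headed for the management network
iptables -A FORWARD -d 192.168.99.0/24 -j DROP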
