evol262
Nov 30, 2010
#!/usr/bin/perl
It may be able to, if you want to bother with paying for licensing and NetApp in general. At least they won't push an update which kills your SAN, unlike the OpenIndiana guys last week with 151a3 or 151a4 (whichever one had undefined symbols in the ZFS kernel module).

How are you planning to HA OpenIndiana? I've seen it done with Nexenta, and some preliminary work with Heartbeat/Pacemaker, but never service manifests or anything else needed to actually get HA services running (as opposed to two "clustered" hosts that can see each other, but nothing more).


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
If you absolutely must roll your own, FreeBSD is by far a better ZFS platform than OpenIndiana, despite the best intentions of the OI/Illumos devs.

the spyder
Feb 18, 2011
Let me rephrase this as a question:

If you were to look at a vendor to support the following setup, who would you use and why?

VMware ESXi 5 Essentials Plus
3 hosts (dual 8-core, 64 GB RAM each)
40 Gb InfiniBand switch

AD/Exchange for under 100 users, WSUS, WDS, and an application (licensing) server
10 web servers, 2 small database servers, and a handful of Linux boxes for SFTP

We have a 140 TB primary NAS and several smaller 50 TB NAS units already. So far I have mainly been looking for Fast Cache, fast SAS drives, and a decent ~20 TB storage pool.

Internet Explorer
Jun 1, 2005

Do you need NAS functionality, or just block-level storage?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

madsushi posted:

More PM words!

In DFM 3.8 and higher, DFM creates the secondary volume at:

1.1 * max[(1.2 * PrimaryCurrentUsedSize), (2.0 * PrimaryTotalVolumeSize)]

The 1.2 number is the default and is controlled by the hidden DFM option dpSrcVolTotalPct; the 2.0 number is controlled by dpSrcVolUsedPct. If you set dpSrcVolUsedPct=.1 (some sufficiently low number that it will never be the max) and dpSrcVolTotalPct=.9, that should even out to a secondary that is approximately the same size as the primary. When you're fanning in more than one volume to the same vault, it will use the sum of all of them for the calculation.
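That sizing rule is easy to sanity-check numerically. A minimal sketch (the 1.2/2.0 multipliers are the defaults quoted above; the example volume sizes are made up):

```python
def secondary_volume_size(primary_used, primary_total,
                          used_multiplier=1.2, total_multiplier=2.0):
    """Approximate DFM 3.8+ secondary sizing:
    1.1 * max[(used_multiplier * used), (total_multiplier * total)]."""
    return 1.1 * max(used_multiplier * primary_used,
                     total_multiplier * primary_total)

# With the defaults, a 100 GB volume with 30 GB used gets a ~220 GB
# secondary, because the 2.0 * total term dominates.
print(round(secondary_volume_size(30, 100), 1))

# Dropping one multiplier low enough that it never wins (.1) and setting
# the other to .9 yields 1.1 * 0.9 * 30 = ~29.7 GB, i.e. a secondary
# roughly the size of the primary's used space.
print(round(secondary_volume_size(30, 100,
                                  used_multiplier=0.9,
                                  total_multiplier=0.1), 1))
```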

If you're not on 3.8 then you should probably upgrade; 3.7 uses a different formula. If you upgraded from 3.7, dynamic secondary sizing will be disabled. You can enable it with the dpDynamicSecondarySizing option, which will allow PM to resize secondary volumes to account for increased space needs from incoming replication.

You can also force PM to rescan a host for relevant changes (like a volume language update) by running "dfm host discover" from a command prompt on the DFM server. This is way less than ideal though, and it still takes a while to query everything. The "Refresh" button in the NMC should do the same thing, but it doesn't seem to.

I agree with you that DFM is needlessly opaque and the documentation is simply terrible (when it exists at all). I've had to do a lot of hands-on training with the admins I work with to help them develop any facility with it at all, and many still hate it. A lot of the useful features are buried in the CLI, which defeats the purpose of making a user-friendly GUI tool. It's also got some pretty infuriating bugs or "design choices." And it still doesn't support 32-bit to 64-bit mirrors for Data ONTAP versions 8.1 and up.

I'm not a huge fan of it, at least not for my customer, but it sounds like configuring Provisioning Manager along with Protection Manager might make things easier on both you and some of your smaller customers. I only use Provisioning Manager for provisioning mirror destinations, not primary storage, but it works just fine for that, and I imagine in simpler deployments it would work well enough with primary volumes as well.

I do appreciate the fact that if I have a resource pool and provisioning policy attached to a protected dataset all I have to do is drop any newly created volumes in the dataset and it takes care of creating everything and instantiating the mirror. With hundreds of mirrors it's definitely a time saver.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


I just got back from a lunch and learn event by Nimble. I remember some of you talking about them a while ago. For any of you who decided to go with them: how is it working for you? They spit out some pretty impressive numbers but as always I take those with a huge grain of salt. The technology behind it is intriguing though and we are about a year away from our storage refresh cycle and I'm starting to look at my options.

We went from ~10 VMs on an OpenFiler box two years ago to 120 VMs on a LeftHand SATA SAN. It's been OK, but I'm starting to run into its limits and it might be time for a change.

My big goal is to get our Perforce server virtualized, and that requires more IOPS than we currently have. I know that Perforce says not to virtualize, but there are a lot of people who do it without caring much. I'd love to get my second-to-last bare metal server onto a VM, and Nimble might just provide the IOPS I need at a price I can handle.

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

Building hundreds of snapmirror relationships so that I migrate my data to a new netapp sucks. What sucks worse is our offsite netapp is a 2050 so after we cutover, it will be a race to upgrade our 3140 to ontap8, reverse the snapmirrors, and drive it to our DR site.

Get a big list of the targets and hosts, then slap them in Excel. Build out the first command line around the source and destination columns, then copy/paste as much as you can until you have all the command lines to create the relationships. Copy/paste the contents of the sheet into Notepad, replace all the tabs with spaces, then just paste the entire script all at once into a PuTTY session to the filer. Presto! Hundreds of relationships. If you want to do one better, set up a Linux administration box. You can really do some sweet stuff with NetApp from the command line.
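The same trick works without a spreadsheet. A sketch in Python, assuming the 7-Mode-style `snapmirror initialize -S src dst` syntax (the filer and volume names here are made up; in practice you'd load the pairs from a file or spreadsheet export):

```python
# Generate one "snapmirror initialize" line per source/destination pair,
# ready to paste into a PuTTY session on the filer.
pairs = [
    ("oldfiler:vol_home", "newfiler:vol_home"),
    ("oldfiler:vol_sql", "newfiler:vol_sql"),
    ("oldfiler:vol_vmware", "newfiler:vol_vmware"),
]

commands = [f"snapmirror initialize -S {src} {dst}" for src, dst in pairs]
print("\n".join(commands))
```

Hundreds of relationships become a three-line loop, and the same pattern generates the reverse resync commands at cutover time.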

Nomex fucked around with this message at 05:56 on Jul 26, 2012

evil_bunnY
Apr 2, 2003

For-loops, motherfucker.

r u ready to WALK
Sep 29, 2001

Although I enjoy bash and perl scripting as much as the next guy, I can totally get behind using Excel for one-time automation of stuff like that.

It's extremely easy to cut and paste tables into Excel, or do delimited field splitting, then build a command line by merging cells together and building the complete CLI command you need.

Just take a look at http://www.contextures.com/xlCombine01.html

Sure, it's not fancy but it gets the job done really quickly and your coworkers might even understand what you did.

Docjowles
Apr 9, 2009

I'm here to make one of those embarrassing "I don't know anything about storage" posts :ohdear: Hoping to get some advice on products or at least vendors to look at for entry-level storage.

We have a couple storage arrays (HP MSA2012i) that do double duty as VMware datastores and storage for several terabytes of static files served by a website. IOPS requirements are very minimal. We're running out of space and I'm weighing my options. I'm new at this company, didn't set up any of the existing infrastructure. AFAIK they went with HP because that's what most of our servers are.

I really do not like the MSAs. They appear to be flaky as hell; updating firmware is a nail-biting operation that tends to take them completely offline, even though supposedly it updates one controller at a time and seamlessly fails over between them. We've had numerous disk failures and at least one total controller failure due to firmware bugs. Management is awful (although I gather this isn't unique in the storage world). It doesn't support modern features like compression and dedupe at all. I'd like to get rid of them, or at least relegate them to a backup role. But if buying a new array doesn't make sense, I can bite the bullet and just add expansion shelves.

Some requirements:

* Do not want to roll my own. This is production primary storage.
* We're currently using about 6TB without deduplication or compression, which will obviously increase over time.
* More concerned with capacity than raw IOPS. We do have one heavy-usage MSSQL box that runs on DAS that I would consider virtualizing, but that is not urgent
* Straightforward to manage. Not afraid to get my hands dirty and learn, but I am the only sysadmin, fiddling with storage cannot permanently consume 90% of my time. As you can see I don't have super demanding needs anyway.
* Hoping to spend under $25k incl. support contract for one filer

I find EMC's VNXe line and NetApp's FAS2200 somewhat appealing so far. Are those decent or terrible for any reason? Anywhere else I should be looking?

Vulture Culture
Jul 14, 2003

Docjowles posted:

I find EMC's VNXe line and NetApp's FAS2200 somewhat appealing so far. Are those decent or terrible for any reason? Anywhere else I should be looking?
IBM V7000 Unified is really competitive. PM me with some info about where you're geographically located and I'll let you know who you should be talking to.

YOLOsubmarine
Oct 19, 2004

Docjowles posted:

I'm here to make one of those embarrassing "I don't know anything about storage" posts :ohdear: Hoping to get some advice on products or at least vendors to look at for entry-level storage.

We have a couple storage arrays (HP MSA2012i) that do double duty as VMware datastores and storage for several terabytes of static files served by a website. IOPS requirements are very minimal. We're running out of space and I'm weighing my options. I'm new at this company, didn't set up any of the existing infrastructure. AFAIK they went with HP because that's what most of our servers are.

I really do not like the MSAs. They appear to be flaky as hell; updating firmware is a nail-biting operation that tends to take them completely offline, even though supposedly it updates one controller at a time and seamlessly fails over between them. We've had numerous disk failures and at least one total controller failure due to firmware bugs. Management is awful (although I gather this isn't unique in the storage world). It doesn't support modern features like compression and dedupe at all. I'd like to get rid of them, or at least relegate them to a backup role. But if buying a new array doesn't make sense, I can bite the bullet and just add expansion shelves.

Some requirements:

* Do not want to roll my own. This is production primary storage.
* We're currently using about 6TB without deduplication or compression, which will obviously increase over time.
* More concerned with capacity than raw IOPS. We do have one heavy-usage MSSQL box that runs on DAS that I would consider virtualizing, but that is not urgent
* Straightforward to manage. Not afraid to get my hands dirty and learn, but I am the only sysadmin, fiddling with storage cannot permanently consume 90% of my time. As you can see I don't have super demanding needs anyway.
* Hoping to spend under $25k incl. support contract for one filer

I find EMC's VNXe line and NetApp's FAS2200 somewhat appealing so far. Are those decent or terrible for any reason? Anywhere else I should be looking?

What protocols do you require? If you can get by with iSCSI only, then you've got a lot more options, like Nimble, HP LeftHand, and EqualLogic. If you need FC, CIFS, or NFS, then you're more limited to stuff like NetApp, EMC, or IBM.

How much capacity do you expect to need in a year? Two years? Do you want to utilize snapshot backups? Do you want to utilize application-consistent snapshot backups for things like SQL or Exchange or VMware? Do you care about automated data tiering? SSD as cache?

I know you don't care about some of this stuff right now but you should make any purchase like this with your long term goals in mind.

Docjowles
Apr 9, 2009

NippleFloss posted:

What protocols do you require? If you can get by with iSCSI only, then you've got a lot more options, like Nimble, HP LeftHand, and EqualLogic. If you need FC, CIFS, or NFS, then you're more limited to stuff like NetApp, EMC, or IBM.

How much capacity do you expect to need in a year? Two years? Do you want to utilize snapshot backups? Do you want to utilize application-consistent snapshot backups for things like SQL or Exchange or VMware? Do you care about automated data tiering? SSD as cache?

1) We currently use iSCSI only. I'd kind of like to have NFS as an option but it is not a deal breaker. We have no FC infrastructure or expertise so I am not considering that.

2) Historically our storage needs (on primary storage) have only grown by about 1.5TB a year, so pretty slow.

3) Our backup situation is actually kind of awful; you hit on another reason I am looking into this. So yeah, snapshots and the requisite additional space would be nice. For several services, we have application data backed up offsite but no backup of the OS/apps/configs. This is mitigated to a degree by boxes (in theory) being easily rebuilt via tools like Puppet but it's still a little scary. Maybe I'm behind the times on the idea of throwaway hosts in the cloud era.

4) Right now data tiering and SSD caching would be overkill. That would change if I wanted to virtualize our primary MSSQL servers but I don't feel like I have support from my boss for that.

Internet Explorer
Jun 1, 2005

Docjowles posted:

1) We currently use iSCSI only. I'd kind of like to have NFS as an option but it is not a deal breaker. We have no FC infrastructure or expertise so I am not considering that.

2) Historically our storage needs (on primary storage) have only grown by about 1.5TB a year, so pretty slow.

3) Our backup situation is actually kind of awful; you hit on another reason I am looking into this. So yeah, snapshots and the requisite additional space would be nice. For several services, we have application data backed up offsite but no backup of the OS/apps/configs. This is mitigated to a degree by boxes (in theory) being easily rebuilt via tools like Puppet but it's still a little scary. Maybe I'm behind the times on the idea of throwaway hosts in the cloud era.

4) Right now data tiering and SSD caching would be overkill. That would change if I wanted to virtualize our primary MSSQL servers but I don't feel like I have support from my boss for that.

I think I'm the resident Equallogic cheerleader, but I would take a good look at them if you do not need NAS functionality and do not have someone as a dedicated storage admin. I liked our old Equallogic SANs way more than our current EMC VNX units. That said, the EMC VNXe is significantly different from a VNX. I have not had a chance to play with the VNXe, but they are probably worth looking at. I just have a lot of problems with EMC's support and their convoluted way of doing poo poo.

Vulture Culture
Jul 14, 2003

Docjowles posted:

1) We currently use iSCSI only. I'd kind of like to have NFS as an option but it is not a deal breaker. We have no FC infrastructure or expertise so I am not considering that.
If you give a call to my contact, ask for the V7000 300 series (2076-312 and 2076-324), not the V7000 Unified. You'll save some cash not going with the NAS uplift.

With your requirements, you're almost certainly looking at the 312 with 3 TB drives.

Bitch Stewie
Dec 17, 2011
I'd love some ballpark numbers on some typical V7000 configs.

We have a refresh some time off, so it's not worth speaking to anyone just yet, but I'm working on the mental shopping list of who to look at.

What I want is synchronous replication between two locations (we have 10 Gbps fibre so no latency issues) and iSCSI; then we're into things like NFS/CIFS, which are nice-to-haves.

Vulture Culture
Jul 14, 2003

Bitch Stewie posted:

What I want is synchronous replication between two locations (we have 10 Gbps fibre so no latency issues) and iSCSI; then we're into things like NFS/CIFS, which are nice-to-haves.
Are you looking at 1 gig or 10 gig iSCSI? All of the V7000 controller models have 4x1GbE, but you'll need to step up from the 112/124 to the 312/324 if you want 10 gig ports.

Nebulis01
Dec 30, 2003
Technical Support Ninny

Internet Explorer posted:

I think I'm the resident Equallogic cheerleader.

I'll cheer alongside you. I have a pair of EqualLogic PS4000Xs and love them.

Bitch Stewie
Dec 17, 2011

Misogynist posted:

Are you looking at 1 gig or 10 gig iSCSI? All of the V7000 controller models have 4x1GbE, but you'll need to step up from the 112/124 to the 312/324 if you want 10 gig ports.

10gig would be nice but if it's the difference between $100k and $150k it's irrelevant IYSWIM :)

Vulture Culture
Jul 14, 2003

Bitch Stewie posted:

10gig would be nice but if it's the difference between $100k and $150k it's irrelevant IYSWIM :)
The difference is practically nothing if you're talking about enough enclosures to hit that budget range; most of the V7000 cost comes from disk.

Bitch Stewie
Dec 17, 2011

Misogynist posted:

The difference is practically nothing if you're talking about enough enclosures to hit that budget range; most of the V7000 cost comes from disk.

Just looking at the options that are obvious - bit surprised they don't do 2.5" MDL drives yet do offer 3.5" MDL.

I suspect the feature licensing can make or break you here i.e. synchronous replication and full unified vs. "dumb" iSCSI?

Vulture Culture
Jul 14, 2003

Bitch Stewie posted:

Just looking at the options that are obvious - bit surprised they don't do 2.5" MDL drives yet do offer 3.5" MDL.
2.5" nearline/midline is a really weird price/performance ratio. I've actually never run into anyone interested in it before. The whole appeal I've seen with 2.5" is that you can jam a crapload more fast spindles into a smaller space without needing to make the huge cash outlays for SSD.

I am noticing that the product papers don't seem to make mention of the 900 GB 2.5" option. We run these.

Bitch Stewie posted:

I suspect the feature licensing can make or break you here i.e. synchronous replication and full unified vs. "dumb" iSCSI?
Unified is way more than a software uplift, since V7000 is, at its core, block storage. It's also an HA pair of boxes running some freakish scaled-down version of IBM's SONAS software on glorified x3650 servers. That hardware cost often isn't so cheap if you're running a very small (1-2 enclosure) V7000 configuration. On the plus side, you have a nice, clean separation between your components, and you don't end up seeing IBM do dumb poo poo like break your iSCSI storage because of an update that's only supposed to touch CIFS.

Feature licensing will snag you like with any storage vendor, but IBM is a little bit saner about how they do things. For example, the Midrange Storage (LSI/Engenio DS) series had metro mirroring licensed by how many LUNs you were replicating, and there were significant license uplifts involved. With V7000 it's based on the number of enclosures being replicated. It's not an insignificant cost, but it's really nicely priced if you're looking to mirror, say, a set of single-enclosure 2076-124s fully-laden with 900 GB disks from one site to another.

IBM's really going after the SMB market with Unified, and they know what they're doing. The product has some flaws (see NippleFloss's comment way earlier in the thread about the lack of contra-rotating cabling) and is subject to a few SONAS warts, but overall it's a great deal.

Vulture Culture fucked around with this message at 04:57 on Jul 27, 2012

Nomex
Jul 17, 2002

Misogynist posted:

2.5" nearline/midline is a really weird price/performance ratio. I've actually never run into anyone interested in it before. The whole appeal I've seen with 2.5" is that you can jam a crapload more fast spindles into a smaller space without needing to make the huge cash outlays for SSD.

It's not midline, but we've started buying all our SAS disk as 2.5" 10k instead of 3.5" 15k. A 24 x 600 GB 15k 3.5" shelf uses around 600 W of power, and a 24 x 600 GB 10k 2.5" one uses about 300 W. We're also in a physical space crunch, and we can get 90% of the performance from a shelf of 2.5" 10k disks that takes up half the rack space.
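That trade-off is easy to sanity-check with the rough figures from the post (the 4U vs. 2U shelf heights are an assumption standing in for "half the rack space", not vendor specs):

```python
# Rough comparison of two hypothetical 24-drive shelves using the
# numbers above: same capacity, different form factor and spindle speed.
shelf_3_5in_15k = {"watts": 600, "rack_units": 4, "relative_perf": 1.0}
shelf_2_5in_10k = {"watts": 300, "rack_units": 2, "relative_perf": 0.9}

perf_per_watt_gain = (
    shelf_2_5in_10k["relative_perf"] / shelf_2_5in_10k["watts"]
) / (shelf_3_5in_15k["relative_perf"] / shelf_3_5in_15k["watts"])

perf_per_ru_gain = (
    shelf_2_5in_10k["relative_perf"] / shelf_2_5in_10k["rack_units"]
) / (shelf_3_5in_15k["relative_perf"] / shelf_3_5in_15k["rack_units"])

# 90% of the performance at half the power and half the space works out
# to a 1.8x gain in performance per watt and per rack unit.
print(f"{perf_per_watt_gain:.1f}x perf/watt, {perf_per_ru_gain:.1f}x perf/RU")
```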

Bitch Stewie
Dec 17, 2011

Thanks very much :) I suspect it'll be out of our price range for what we need as it doesn't seem to sit in the EQL/P4000 sort of upper-entry to mid-range, but I may well look in more detail nearer the time.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
What's EMC's average response time for SRs? I put in a ticket at 10:00 AM this morning about a failed Block/File OE upgrade and haven't heard poo poo back. We're running off one SP, and my boss is bitching about the alert emails the VNX is spewing. I need this thing fixed...

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

They have a phone number you can call?

Internet Explorer
Jun 1, 2005

Goon Matchmaker posted:

What's EMC's average response time for SRs? I put in a ticket at 10:00 AM this morning about a failed Block/File OE upgrade and haven't heard poo poo back. We're running off one SP, and my boss is bitching about the alert emails the VNX is spewing. I need this thing fixed...

Yeah, your best bet is to call or use the chat support. On "High Priority" tickets, the 2nd level just under the 1st level "Critical", I have had them take over a week several times before.

Out of curiosity, what Block and File OE are you upgrading to? We just upgraded our DR SAN to the latest version of both (File - 7.1.47.5, Block - 5.32.000.5.006) and our VNX is continuously yelling at us that the File side does not support the version the Block side is running. The error says we are still running the old version (05.31.000.5.716,7.31.32 (0.42)), but both SPs show they are running the new and proper version. :EMC:

We are in a rush to get our production SAN upgraded to the newest version to fix a rather nasty bug, but that error message has me holding off.

Goon Matchmaker
Oct 23, 2003

Turns out my SR got sucked into limbo and was unassigned. It's been thrown back into the live support queue.

I'm upgrading to File 7.1.47.5 and I'm not sure what block version it is since the surviving SP is on the old version and it's bitching about versions not being compatible.

Internet Explorer
Jun 1, 2005

Goon Matchmaker posted:

Turns out my SR got sucked into limbo and was unassigned. It's been thrown back into the live support queue.

This seems to be a recurring problem for us as well. Pretty much every ticket we have opened in the past month or so has been "in the wrong queue," even though you don't get to choose a queue when creating the ticket...

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Goon Matchmaker posted:

What's EMC's average response time for SRs? I put in a ticket at 10:00 AM this morning about a failed Block/File OE upgrade and haven't heard poo poo back. We're running off one SP, and my boss is bitching about the alert emails the VNX is spewing. I need this thing fixed...

What severity did you put it in at? Always call with important issues.

Severity 1: Critical: 30 minute call back (Data loss / data unavailable)
Severity 2: High: 2 hour call back (Systems heavily impacted)

etc

I'd say you should be on Severity 2, but you have to have this discussion when raising the call. Call them back and get it upped.

Internet Explorer
Jun 1, 2005

I have never had a reply within 2 hours on a High request. Although I did just get a reply after 24 hours on a Saturday, so that's not too bad.

GrandMaster
Aug 15, 2004
laidback
Aren't EMC supposed to do these upgrades for you? For every CLARiiON/Celerra upgrade we have done, they have sent a tech out unless it was a low-end AX or VNXe.

EMC also needed a whole bunch of change control done from their end to make sure they don't run into any compatibility issues with hosts or things like RecoverPoint. I'm based in Australia, so it might be different here.

Internet Explorer
Jun 1, 2005

This was the first update where a local tech offered to remote in to do the upgrade. I did it myself because I had done 3 other upgrades on our units, we were having a lot of problems, and I wanted the experience of working through it instead of having some guy do the upgrade, sweep it all under the rug, and then go "It's fixed!"

We were experiencing a pretty nasty bug on both of our VNX units that involved data tiering by the Block side on LUNs that the NAS side was using. It caused all sorts of fun things. Symptoms were basically the NAS side and the Block weren't communicating properly and any configuration changes involving both would fail.

Interestingly enough the last update apparently breaks RecoverPoint, but we are not using that so no worries. Also, I had to run a series of commands on both of my VNX units to stop them from freaking out and saying the Block version was something that it wasn't.

Goon Matchmaker
Oct 23, 2003

Vanilla posted:

What severity did you put it in at? Always call with important issues.

Severity 1: Critical: 30 minute call back (Data loss / data unavailable)
Severity 2: High: 2 hour call back (Systems heavily impacted)

I entered the ticket as Severity 2: High.

What I've since learned is that some idiot at the consulting company we used to purchase our EMC gear sold us gear that "belongs" to another company. EMC thinks that the VNX is owned by some other company, and when I put in a ticket with EMC to get upgrades and whatnot done, it gets closed because the serial number doesn't match the site ID where EMC thinks it should be. My manager has a call scheduled this morning with the consulting company to get this fixed.

Powdered Toast Man
Jan 25, 2005

TOAST-A-RIFIC!!!
"Device not ready" is not a message you want to get in DMC from an iSCSI target that is being used as the quorum disk on a rather large and important SQL cluster. :suicide:

On the other hand it's not my responsibility, so:

[image attachment]

Thanks Ants
May 21, 2004

#essereFerrari


Oh god, I can tell I'm going to get horrifically abused about this, but I'll ask the question anyway. We are looking to move to 'proper' storage for our VMs and files - from what I've been reading it seems that NFS is the way to go for VMware, and CIFS for your file stuff, combined nicely with things like snapshots so users can use the Previous Versions frontend in Windows to recover recently-deleted stuff. All sounds good. However, our budget is comparatively tiny. I need roughly 6 TB for VMs, and the same again for files.

We've been talking with a vendor who wants us to have an IBM V7000, I had a demo and it looks awesome, but I think it's going to come in close to £20k (UK) which is about double what I've actually got to play with. Dell are keen to sell us a MD3220i, and from doing a bit of research the HP P2000 G3 plays in that sort of space as well. Has anyone used either of these with any success?

Am I lining myself up for failure to try and do this for ~£10k and is now the time to start working on getting that budget increased?

evil_bunnY
Apr 2, 2003

I love it when poo poo goes horribly wrong and it's not my problem. Like watching train wrecks I guess, you just can't look away.

skipdogg
Nov 29, 2004

That's not enough budget for big boy storage with support and warranty. You're going to be looking at the $35-40k range in the US, so £20-25k seems about right.

evil_bunnY
Apr 2, 2003

Caged posted:

Dell are keen to sell us a MD3220i, and from doing a bit of research the HP P2000 G3 plays in that sort of space as well. Has anyone used either of these with any success?
We have an MD3000i fully loaded with SATA which, while not exactly blazingly fast, does work as advertised.
Apparently they work with third-party disks, so you could always put a bunch of consumer SATA SSDs in there if you want to see what kind of IOPS the controllers will actually push.


Internet Explorer
Jun 1, 2005

Caged posted:

Oh god, I can tell I'm going to get horrifically abused about this, but I'll ask the question anyway. We are looking to move to 'proper' storage for our VMs and files - from what I've been reading it seems that NFS is the way to go for VMware, and CIFS for your file stuff, combined nicely with things like snapshots so users can use the Previous Versions frontend in Windows to recover recently-deleted stuff. All sounds good. However, our budget is comparatively tiny. I need roughly 6 TB for VMs, and the same again for files.

We've been talking with a vendor who wants us to have an IBM V7000, I had a demo and it looks awesome, but I think it's going to come in close to £20k (UK) which is about double what I've actually got to play with. Dell are keen to sell us a MD3220i, and from doing a bit of research the HP P2000 G3 plays in that sort of space as well. Has anyone used either of these with any success?

Am I lining myself up for failure to try and do this for ~£10k and is now the time to start working on getting that budget increased?

I'm only familiar with US prices, but for that budget I would ignore NFS/CIFS/NAS functions on the SAN and just go with straight iSCSI. You can do the same Previous Versions bit with a standard Windows file server. But that's just my opinion.
