Rhymenoserous
May 23, 2008

adorai posted:

Just today I was listening to one of my coworkers explain to our CDW rep why we went with Nimble instead of NetApp this time around. His opinion was that not only was it cheaper to get TWO Nimble units instead of just one NetApp, he also feels it is the easiest storage he has ever administered. Further, after the sale he had a great experience with a support rep who noticed an unrelated problem on one of our servers when working on another issue, and he took the time to independently research the issue and emailed him a solution, unsolicited, the next day.

All around, it has been a good experience and a great product.

Yeah, managing a Nimble vs. an EMC or NetApp is night and day in terms of how few headaches I have.

Rhymenoserous
May 23, 2008

bull3964 posted:

It's funny because I just had the same thing told to me by a re-seller about Nimble.

That was true of Nimble two years ago when I bought mine (they were very desperate to get a sale), but they're on much more secure footing these days, and when I order more units or add-ons for the ones I have, it's a very laid-back purchasing experience. Back then I was the guy posting on various forums going "I've never heard of these dudes, help!" Now I can't walk into a SAN/NAS discussion thread where they aren't talked about as a proven, stable technology.

Nimble is great. I'm glad I took one for the team.

Rhymenoserous
May 23, 2008

Internet Explorer posted:

I think it is because PowerVaults are OEMed by NetApp (through their LSI storage division purchase). They want to make that money themselves.

That's exactly what it is.

Rhymenoserous
May 23, 2008
That's not bad at all.

Rhymenoserous
May 23, 2008

Docjowles posted:

:drat: About 2 years ago I priced out almost exactly the same thing (for the same use case, even) at a prior job, and IIRC it was more like $25k. Would have been nice, but was just a bit too pricey for my boss. So instead they continued limping along on a barely functional HP MSA 2000 :hellyeah: Which thankfully I don't have to loving deal with anymore.

Competition in the storage arena has been picking up.

Also, 2012/13 was a bad time to buy bulk storage because of the Thailand floods that knocked out hard drive production: http://www.reuters.com/article/2011/10/28/us-thai-floods-drives-idUSTRE79R66220111028

Rhymenoserous
May 23, 2008
Isn't hyper-converged stuff just fancy VSA?

Rhymenoserous
May 23, 2008

Misogynist posted:

On the other hand, most of the people in the engineered systems division have seen the writing on the wall and peaced the gently caress out, so good luck actually getting support from whoever is left. The group I used to manage has had a Sev1 ticket open for four months with twice-weekly conference calls going all the way to Janis Landry-Lane and basically every call is IBM going "yeah, we still don't really know what's going on."

IBM has been slowly circling the drain for the last 10 years; I can't for the life of me understand why anyone would buy their loving products anymore.

Rhymenoserous
May 23, 2008

Richard Noggin posted:

What are people using for storage switches in very small deployments (2-3 hosts, 5-15 VMs, iSCSI SAN, vSphere)? We've been using Catalyst 3750-Xs stacked, but only because they've been solid.

Looks like the 3750-Xs have (or can have) SFP+ uplinks that support 10G Ethernet. You could get 10GbE NICs for the servers and Bob's your uncle. That's what I did for the longest time, before I could justify shelling out for a dedicated 10G switch just for server traffic.

Rhymenoserous
May 23, 2008
Meh, I prefer iSCSI at this point, with 10GbE being commodity priced. The real reason, though, is that a lot of the mid-sized storage arrays only support iSCSI. Unless you're doing HFT, the performance difference just doesn't matter all that much if you design things correctly.

Rhymenoserous
May 23, 2008

goobernoodles posted:

Egnyte is one of my long-term potential projects/solutions that I haven't gotten to yet. I'm considering buying a cheap NAS or other form of storage to run the proprietary Egnyte file server VM off of, the idea being to replicate from our existing file server to the Egnyte one. The point would be to keep our file server from being entrenched in a subscription service and to limit licensing costs to only the people who need to access files remotely. Not sure how I'll replicate between the two without problems, though. Robocopy?

...Anyone using FreeNAS for production file servers?

If you're using Server 2012, BranchCache is a possibility: https://technet.microsoft.com/en-us/library/dd425028.aspx

Rhymenoserous
May 23, 2008

1000101 posted:

OP was written a few years ago and I haven't had time to be a good curator. We like Nimble very much (speaking for the company I work for not goons in general) because it's easy to use and it generally performs well at a pretty solid price point.

edit:
My god I wrote that poo poo in 2008. Is there interest in a refresh?

I remember first writing about Nimble in this thread like 2-3 years ago, with goons going "Not so sure 'bout those dudes," and now I feel all vindicated.

Rhymenoserous
May 23, 2008
Raid 10.

Rhymenoserous
May 23, 2008
There's nothing difficult about iSCSI or NFS though. On a day to day basis I find both much easier to work with than FC.

Kaddish posted:

If you're using block level de-dupe and compression at the storage level, it doesn't matter if your vmdk is thin provisioned or thick. A block is a block and it's either being used on the array or it isn't. At least this is true on Pure, which is the only de-dupe/ compression/thin provisioning I use.

As mentioned above, make sure you utilize SCSI UNMAP periodically, especially if you have aggressive DRS. This doesn't run automatically. You will need to run it against a datastore from any host in the cluster.

Nimble specifically tells you to just roll thick-provisioned clients; the array will take care of dedupe.

Rhymenoserous fucked around with this message at 20:22 on Aug 18, 2015
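For anyone who wants to script the periodic UNMAP pass Kaddish mentioned: on ESXi 5.5+ the reclaim is `esxcli storage vmfs unmap`. Here's a minimal Python sketch that just builds the command lines; the datastore names are made up, and you'd actually run these over SSH or PowerCLI against a host that sees the datastores:

```python
# Sketch of the periodic SCSI UNMAP pass discussed above (ESXi 5.5+).
# This only constructs the esxcli invocations; it does not talk to a host.

def build_unmap_command(datastore: str, reclaim_unit: int = 200) -> list[str]:
    """Return the esxcli command that reclaims dead blocks on one datastore."""
    return [
        "esxcli", "storage", "vmfs", "unmap",
        "-l", datastore,          # datastore label
        "-n", str(reclaim_unit),  # VMFS blocks reclaimed per iteration
    ]

# One command per datastore; run from any host in the cluster that mounts them.
datastores = ["prod-ds01", "prod-ds02"]  # hypothetical names
commands = [build_unmap_command(ds) for ds in datastores]
```

Remember this doesn't run automatically, so cron it (or schedule it from vCenter) if you have aggressive DRS churning blocks.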

Rhymenoserous
May 23, 2008

NippleFloss posted:

They do inline zero elimination which is a flavor of deduplication and probably what they're suggesting when they say to (eager-zero) thick provision.

^

What he said.

Rhymenoserous
May 23, 2008

Wicaeed posted:

On my VMware & Nimble setup, our Used vs Free does not match between what Nimble is reporting for used vs what VMware sees. When we format a VM as thick, VMware reports that disk space as immediately being used on the datastore, but if I go into management, I can see that it really is thin provisioned on the storage backend.

Are there any special tools required for VMware to know that it really is thin provisioned on the backend, and to mark that capacity as free accordingly?

They expressly told me not to do thin provisioning, to avoid confusing scenarios like this. Also bear in mind that what you're seeing on the array is post-dedupe/compression/magic-space-maker.

Rhymenoserous
May 23, 2008

NippleFloss posted:

Cmon man, creating a datastore in VSC and then mirroring it with system manager is not a twenty minute job.

If it was EMC it would be a 20 minute job.

Rhymenoserous
May 23, 2008

bigmandan posted:

It's been awhile since I looked at pricing from Nimble, but I think their entry solutions are near that price point. A quote I have from about a year ago was ~20k CDN for a step above entry.

Nimble's entry level has about 3-4 times more storage than he needs. At 1.5TB I wouldn't buy anything fancy.

Rhymenoserous
May 23, 2008
Just send someone off to the EMC certification class. If you have an Isilon it's worth doing.

Rhymenoserous
May 23, 2008

Minus Pants posted:

Yeah 10GBASE-T is around 1-2 usec slower than DAC or fiber depending on the PHY.

And DAC cables are cheap as gently caress. Honestly, 10G fiber is pretty loving close to commodity pricing these days. I don't even see the point of 10GBASE-T anymore.

Rhymenoserous
May 23, 2008

1000101 posted:

This is pretty much what we've done for these large migrations. You probably don't have 100TB of change every day so pre-stage as much as you can then just do a final cutover.

https://technet.microsoft.com/en-us/magazine/2009.04.utilityspotlight.aspx

It spawns multiple robocopy threads and does everything robocopy does, with a GUI. You can even control how many copy threads it fires up at a time. It's a lifesaver for migrations, and it's also free.
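If you'd rather roll your own than use the GUI tool, the core trick (walk the source tree, copy files on a pool of worker threads) is a few lines of Python. A rough sketch only; it doesn't do the retry, ACL, or resume handling robocopy gives you:

```python
# Minimal multi-threaded mirror, illustrating what tools like the one
# linked above do under the hood: enumerate files, then fan the copies
# out across a thread pool.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def threaded_copy(src: Path, dst: Path, threads: int = 8) -> int:
    """Mirror src into dst using `threads` concurrent copy workers."""
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copy2 preserves timestamps

    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(copy_one, files))  # drain so worker exceptions surface
    return len(files)
```

Run it once a night to pre-stage, then do a final pass at cutover; only the delta gets copied by your real migration tool at that point.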

Rhymenoserous
May 23, 2008

Fruit Smoothies posted:

when storage is cheap to buy.


Oh it is now? That's good to know, I sure am glad it went from being the most expensive thing to do right to one of the least with no major paradigm shifts.

Uh my vendor just got back to me, poo poo's still expensive. You lied to me fruit smoothies. You lied to me.

Rhymenoserous
May 23, 2008

Docjowles posted:

I've been asked to help troubleshoot some iSCSI performance issues between a Windows Server host and a NetApp FAS8020. Any suggestions for a good, free storage benchmarking tool that runs on Windows? IOmeter looks decent but it's also old as gently caress (user guide refers to NT 4.0 :corsair:). I don't typically support Windows so I'm out of my element here.

Just this host having the problem?

Rhymenoserous
May 23, 2008

cheese-cube posted:

1Gb would be fine for a three-host setup.

Yeah, wouldn't bother with 10g here.

Rhymenoserous
May 23, 2008

Thanks Ants posted:

Goddam I am so out of touch on storage. I think I'll try and push this off to a VAR to solve and see what I can learn from the process.

You want a Nimble or a NetApp or something of that nature that does flash caching. The "what you want" part isn't hard at all. The real question is "what are you willing to pay?"

Rhymenoserous
May 23, 2008

Frank Viola posted:

I recently got a NetApp DS14 Mk4 disk array, and oddly enough got another one from a shop, but not the filer head, unfortunately. I connected them through an FC card. I wanted to low-level format the drives, but Windows didn't offer that kind of access to my knowledge. Enter CentOS 7: both of my servers were running CentOS 7, so I tried to do the LLF using lsscsi and sginfo. No luck. I had read about FreeNAS and decided to try it with two 3TB drives on an old PowerEdge that was going unused, so I installed it to a SanDisk removable drive and tried it out. Here's where the solution came from.

NetApp uses 520-byte-format sectors, which are not going to work in the FreeNAS environment for the above reason: it expects 512, not 520. I had almost lost hope until I found an article that pointed me in the right direction: http://www.sysop.ca/archives/208. The guy had a brilliant idea to use camcontrol to reformat the drives, which worked. So if you see some disc shelves for cheap and you're looking for a disk array for home or small-shop use, give this a look-see. I think I got the NetApps with the drives for around $200 + shipping. I plan to run OpenStack after I mount the LDAP volume on the CentOS server so that I can do god knows what.

Hnnnnnnng.
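For the curious, the "520" in that post is bytes, not bits: NetApp formats each sector as 512 bytes of data plus an 8-byte checksum/protection-info trailer, which is why a stock 512-byte stack chokes until you low-level format the drives back down (camcontrol on FreeBSD, sg_format on Linux). A toy Python illustration of the sector layout, using synthetic bytes rather than a real device:

```python
# Illustration of NetApp's 520-byte sector layout: 512 bytes of payload
# followed by 8 bytes of per-sector checksum/protection info. This is a
# layout demo only; a real fix reformats the drive, it doesn't strip bytes.
SECTOR_DATA = 512                      # payload bytes per sector
SECTOR_PI = 8                          # trailing protection-info bytes
SECTOR_RAW = SECTOR_DATA + SECTOR_PI   # 520

def strip_protection_info(raw: bytes) -> bytes:
    """Drop the 8 trailing PI bytes from every 520-byte sector."""
    assert len(raw) % SECTOR_RAW == 0, "not a whole number of 520-byte sectors"
    out = bytearray()
    for i in range(0, len(raw), SECTOR_RAW):
        out += raw[i:i + SECTOR_DATA]
    return bytes(out)

# Two synthetic sectors: payload bytes followed by PI bytes.
raw = (b"A" * 512 + b"P" * 8) + (b"B" * 512 + b"Q" * 8)
clean = strip_protection_info(raw)  # 1024 bytes of pure payload
```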

Rhymenoserous
May 23, 2008

the spyder posted:

Our leadership is going to have a heart attack when they see the cost.

Meh. gently caress 'em. If anyone bitches about how much my infrastructure costs in SAN, network, and VM servers, I'll gladly get quotes for bare metal and attached storage, and when they see that poo poo costs almost twice as much, I'll spring the power bill and air conditioning costs on them.

Rhymenoserous
May 23, 2008

Mr-Spain posted:

Has anyone bought anything through http://www.enterasource.com/ ?

I'm looking at some secondary storage, as our primary is a pair of Tegile units. This would be for storing some video and video projects that our marketing department creates, so the Tegile isn't the best suited for it.

I'm ok with the fact it's refurbed, parts support is through them. I just wanted to see if anyone had experience with them. Below is the quote if anyone wants to see; They are MD1200's sitting under PE R710's.

Dell PowerVault MD1200 (12x 3.5" LFF Hard Drive Option)
12x 3TB 7.2K RPM NL SAS 6Gbps Hot Swap Drives
2x Controllers
2x 600W Power Supplies
Rail Kit
Front Bezel
1x Mini SAS to Mini SAS Cable
2x Power Cords

1x Dell PERC H810 1GB Raid Controller (Full Height)

Purchase Price - $2,425/ea

Total Purchase Price - $4,850 + $100 Shipping

Provided you can still get a support agreement from the primary vendor (Netapp/EMC/Whoever) I don't see the problem.

Rhymenoserous
May 23, 2008

Saukkis posted:

This is an important point. About a year ago we were in the process of purchasing new FC switches to our HP blade systems. Then one guy realized that the only thing that matters is that the blade chassis has a support contract and HP will swap any parts no questions asked. So instead of buying new switches for 4k a piece we could just eBay used switches for 1/10th the price.

I mean, I'd still try to use the used equipment, but the point stands: if I have an ironclad support agreement from the primary vendor, I'm happy. That's where they make all their money anyway.

Rhymenoserous
May 23, 2008

big money big clit posted:

Nah, they aren't car dealers, they make money on product.

I mean, EMC does but that's because they charge an arm and a leg.

Rhymenoserous
May 23, 2008

maxallen posted:

Curious if anyone has any thoughts why this happened:

I work for a company, we resell software/equipment (in a field that's small enough you could narrow me down if I was more specific.) We have a support/maintenance contract with one of our local customers, upgraded them to the newest version a few months ago. While we did this, we transitioned from physical to virtual servers. We have one server that hosts our application, one that hosts MSSQL.

Customer's IT department installed and prepped the SQL server (we finalized install), and they installed the SQL instance datastores on an iSCSI drive hosted on a Nimble controller.

Yesterday about 11:23 AM, the iSCSI drive suddenly died. Windows reported it as a RAW partition, disconnecting and reconnecting did nothing, and ... well let me post the description the IT Manager gave to the IT director:


So his boss tells [Division Director] that they'll look into it, but the ones who really need to look into it are us, because they don't know our program (keep in mind we have nothing to do with the storage solution or how it was set up; that was all their IT department, and this server only hosts MSSQL). So I did some investigating and found two things in Event Viewer that kicked off when it started and just kept repeating afterwards:

Nimble service:
I/O Complete. Serial Number 1C4101B848DC3A536C9CE90097376601. Address 01:00:01:00. Path ID 0x77010001. Path Count 2. CDB 2A 00 01 EE 3D D7 00 00 01 00 -- -- -- -- -- --. Sense Data -- -- --. SRB Status 0x04. SCSI Status 0x18. Queue Status 0, 1. ALUA State: Active Optimized. Device State: Active Optimized.

Ntfs service:
The system failed to flush data to the transaction log. Corruption may occur in VolumeId: E:, DeviceName: \Device\HarddiskVolume6.
({Device Busy}
The device is currently busy.)

(repeat every few seconds until the iSCSI connection was terminated).

Any ideas? Anyone ever seen this before?

Tl;dr: Customer's nimble iSCSI share suddenly kicked the bucket, and only on our instance, and it's up to me to figure out why

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2006849

Looks like a Microsoft/iSCSI error that was kicked off by a snapshot.
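For what it's worth, the CDB in that Nimble event decodes as a perfectly ordinary WRITE(10), so the command itself isn't exotic; the interesting bits are the SRB status 0x04 (error) and SCSI status 0x18, which, if I'm reading the status codes right, is RESERVATION CONFLICT, and that fits the snapshot/persistent-reservation theory. A quick sketch of pulling the CDB apart per the standard SCSI layout (my own helper, nothing Nimble-specific):

```python
def decode_write10(cdb_hex: str) -> dict:
    """Decode a SCSI WRITE(10) CDB: opcode, logical block address, block count."""
    cdb = bytes.fromhex(cdb_hex.replace(" ", ""))
    assert cdb[0] == 0x2A, "not a WRITE(10) opcode"
    lba = int.from_bytes(cdb[2:6], "big")     # bytes 2-5: logical block address
    blocks = int.from_bytes(cdb[7:9], "big")  # bytes 7-8: transfer length
    return {"opcode": "WRITE(10)", "lba": lba, "blocks": blocks}

# The CDB from the event log above (trailing '--' padding dropped):
info = decode_write10("2A 00 01 EE 3D D7 00 00 01 00")
# A one-block write at LBA 0x01EE3DD7; nothing unusual about the I/O itself.
```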

Rhymenoserous
May 23, 2008

devmd01 posted:

I will never run a SAN in production without a maintenance contract, period. If the company is willing to accept that risk, then it's time to find a new company.

At the job I just left, I migrated the data center to a colo space on a new SAN, but there was a 3 month period where the old SAN was out support and still running business critical applications, due to various delays. I still puckered anytime a disk failed in the old SAN, even though I had spare drives sitting on the shelf ready to go.

Yeah, no poo poo. Servers these days are just the delivery boys and processing oomph for what the SAN delivers (if you're doing virtualization like a smart person). You could take a hammer to every server in my rack and I wouldn't flinch; I can recover. My SAN? Makes me pucker.

Rhymenoserous
May 23, 2008

lol internet. posted:


3. Are there benefits to using multiple LUNs to hold VM disks? Aside from having to restore a whole LUN if it gets corrupted/goes bad for whatever reason?

This is literally a question to ask your storage provider because otherwise the answer is "It depends"

Rhymenoserous
May 23, 2008
I'll second Pure; I've heard nothing but good things.
