Digital_Jesus
Feb 10, 2011

Windows Server NFS seems to do very well with large file transfers and whatnot, but its performance still suffers with lots of small files, just like every other MS file system implementation.

It's not horrible, but running a *nix OS on the box may be a better choice.

Digital_Jesus
Feb 10, 2011

lol internet. posted:

Couple questions,

1. Why would you ever use more than two fibers (one for redundancy) to SAN storage? Is there any benefit?

2. For Hyper-V, is there any IO/VM monitoring tool? VMware generally has a plugin for this; is there perhaps a third-party app which can monitor VM IO on a SAN?

3. Are there benefits to using multiple LUNs to hold VM disks, aside from not having to restore a whole LUN if it gets corrupted/goes bad for whatever reason?

1. Depends on your configuration. Most of my smaller installs are 4-cable connections: two dual-port cards in each server (for physical hosts), with each card's two connections split between two FC switches, and 4/8 uplinks from the switches to the storage depending on the appliance model. You get redundancy and throughput at both the server level and the switching level. E: When you start working with converged infrastructure a lot of it is throughput-related. It's not so great to feed 16 blades off two FC links.

2. PowerShell is your friend: Enable-VMResourceMetering and Measure-VM (quick sketch below).

3. Generally you split LUNs across storage relative to their spindle sets in a given storage pool. That way you can give a dedicated spindle set to a particular VM (SQL, Exchange, resource-intensive application servers, VDI) while keeping your low-impact VMs on slower pool sets. For example, I generally put all of my VM OS virtual disks on a single LUN on an R5 set, and split off LUNs to dedicated R1+0 pools for database operations (second sketch below).
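
For #2, a minimal sketch of the metering workflow; the VM name is just a placeholder, and the exact fields in the report vary a bit by Windows Server version:

# Run on the Hyper-V host (or anywhere with the Hyper-V module loaded).
# Start collecting resource data for a VM - "sql01" is a made-up name.
Enable-VMResourceMetering -VMName "sql01"

# ...let the workload run for a while, then pull the report.
# Newer builds include aggregated disk IOPS and data read/written alongside CPU/RAM.
Measure-VM -VMName "sql01" | Format-List *

# Reset the counters for a fresh measurement window, or turn metering off entirely.
Reset-VMResourceMetering -VMName "sql01"
Disable-VMResourceMetering -VMName "sql01"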
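
And for #3, the host-side half of that placement looks roughly like this; paths and names are made up, and it assumes the dedicated R1+0 LUN is already presented to the host/cluster as a volume:

# Hypothetical volume carved from a dedicated RAID 1+0 pool for database traffic.
$dbVolume = "C:\ClusterStorage\R10-DB01"

# Create a data disk on that volume and attach it to the SQL VM,
# leaving the OS VHDX on the shared RAID 5-backed LUN.
New-VHD -Path "$dbVolume\sql01-data.vhdx" -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "sql01" -Path "$dbVolume\sql01-data.vhdx"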

Digital_Jesus fucked around with this message at 03:20 on Apr 4, 2019

Digital_Jesus
Feb 10, 2011

You'll be limited by the output of your storage appliance. If you're working with an appliance that has dual storage processors in an active/active setup, the way most appliances operate is to assign a given LUN to a single SP for processing, so if SPA has two 8Gb connections and both are configured, it will round-robin across them and give you 16Gb. Without a second LUN your second SP will sit idle and only take over IO operations if the first SP fails or hands them off.

In your particular case it would be 8Gb per SP, so that's your max for a single LUN.
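
On the host side the round-robin part comes from MPIO rather than the array itself; a rough sketch for a Windows host, assuming your array vendor doesn't ship its own DSM (the vendor/product IDs are placeholders):

# Turn on the MPIO feature (reboot required) and claim the array's LUNs.
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
New-MSDSMSupportedHW -VendorId "VENDOR" -ProductId "PRODUCT"

# Default new LUNs to round-robin across the available paths, then verify.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
Get-MSDSMGlobalDefaultLoadBalancePolicy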

E: Please keep in mind 8Gb of traffic for a storage connection is pretty loving huge unless you're working in the megacorp "we've got a whole building just for servers" world. I'm assuming you're not? Either way 8Gb will process a loooooota traffic.

E2: Also, I should ask: are you going FC because that's what you inherited, or for a performance-related reason? If not, iSCSI is cheaper and easier to work with.
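
For comparison, the host side of iSCSI on Windows is about this much work; the portal address and IQN below are placeholders:

# Start the initiator service and point it at the array's iSCSI portal.
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.50"

# Discover the target and log in persistently with multipath enabled.
Get-IscsiTarget
Connect-IscsiTarget -NodeAddress "iqn.1992-04.com.example:array1.target0" -IsPersistent $true -IsMultipathEnabled $true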

Digital_Jesus fucked around with this message at 02:54 on Apr 5, 2019

Digital_Jesus
Feb 10, 2011

Word. If you like, hit me up in PMs and I can give you some help with zoning and poo poo.

Digital_Jesus
Feb 10, 2011

YOLOsubmarine posted:

8Gb/s is only 1GB/s of throughput, which really isn't that much, especially for a large-block workload. That's only about 30,000 IOPS at 32KB, or about 4,000 IOPS at 256KB. You can definitely saturate that in a fairly modest environment with backup, analytics, EDW, badly formed SQL reports, etc.

Right, but if you're planning infra for those specific metrics, you're calculating for them and aware of what you need. If you're not, then I'd wager you're not working in an environment that requires larger storage bandwidth, and for most places an 8Gb link will probably move more traffic than your spinner drives are going to put out.

In the grand scheme of things, no, it isn't a lot of traffic, but in smaller environments I'd be impressed if you crushed it, since things like backups, SQL jobs, etc. should all be scheduled around each other, not simultaneously destroying your storage adapters during peak business hours.
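
For what it's worth, the quoted numbers do pencil out; a quick back-of-envelope check, treating 8Gb/s as roughly 1GB/s and ignoring FC encoding overhead:

# PowerShell's size literals make this easy: 1GB = 1073741824 bytes, etc.
$throughputBytesPerSec = 1GB

$iopsAt32KB  = $throughputBytesPerSec / 32KB    # ~32,768 IOPS at 32KB
$iopsAt256KB = $throughputBytesPerSec / 256KB   # ~4,096 IOPS at 256KB

"{0:N0} IOPS @ 32KB, {1:N0} IOPS @ 256KB" -f $iopsAt32KB, $iopsAt256KB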

Digital_Jesus
Feb 10, 2011

Or, better yet, actually measure your storage resources and purchase the equipment you need to accommodate your workload and growth, rather than making random assumptions on the internet about a conversation between someone giving general advice and a dude who self-proclaimed to know approximately jack poo poo about working with his storage gear.

A single 8Gbps FC link will move a lot of storage data for a small environment. If you're arguing "that isn't much data," you're outside the scope of the conversation at hand, which is directed at someone running storage at the lower end of the enterprise spectrum, if it even still comes with 8Gb FC instead of 16 or 32.

Besides, if you're concerned about storage bandwidth you'd be moving off FC and switching to 25/40/100Gbps iSCSI anyway :v: (or working with an endless budget and balancing over 64Gb+ FC, I suppose, if you're concerned about block performance, and those lines are starting to blur a bit anywho).

Digital_Jesus fucked around with this message at 19:18 on Apr 5, 2019

Digital_Jesus
Feb 10, 2011

EMC Unity is fantastic. You should not judge it by prior VNX experience. Whole new system. Pricing is pretty good too.

Digital_Jesus
Feb 10, 2011

Moey posted:

Having never worked with EMC arrays, what are the pain points?

I am really just looking for some all-flash block iSCSI storage. I would like to keep the Veeam SAN-snapshot-based backups that Nimble offers.

As someone who worked with VNX/Clariion stuff, I can say the old interfaces were atrocious and legitimately a pain in the rear end to use.

Unity has moved to all HTML5 management interfaces and they integrate well with vCenter. I have no complaints with them and, at least in my area, EMC support is top notch and I have never had an issue in my ~3 years or so of selling them.

They're also pretty drat cheap for a "small" all-flash array. Nimble pricing has routinely been 1.5-2x more for the same storage capacity.

Digital_Jesus
Feb 10, 2011

https://www.emc.com/dam/uwaem/documentation/unity-p-configure-smb-file-sharing.pdf

Though I believe it still requires AD integration if you want to just slam SMB off the appliance.

Digital_Jesus
Feb 10, 2011

YOLOsubmarine posted:

Flexibility in storage is overrated. Simplicity and consistency are way more important. There are plenty of other ways to serve files and usually they avoid some of the downsides of doing it directly from an array.

I don't necessarily think flexibility is overrated, depending on your use case. If you're looking for a hybrid system or something where you can still dedicate drive/spindle sets, there's value in that.

Pure / Nimble / etc. have a good thing going with the "giant chunk of flash" model, but in some use cases I still prefer being able to add some drives and put them in their own pool specifically to run some particular functionality.

Digital_Jesus
Feb 10, 2011

Synology fits a great niche for companies that can justify spending $$$$$$ on production storage but not another $$$$$$ on a fully replicated storage appliance for backups or DR.

Also good for cheap test labs or virtual environments where uptime is not critical for the guests running on it.

I've had several customers that will drop the money on EMC/NetApp/whatever for their production storage but don't need or want a full failover DR system, and some off-site Synology units work great for data replication.

Digital_Jesus
Feb 10, 2011

Axe-man posted:

I just imagine some salesman is asking their support staff if they can run Hyper-V in VMware right this minute. :eng99:

You absolutely can, and I run a nested Hyper-V cluster on VMware for vendor-support reasons on some very-expensive-to-replace software.

Replication / failover works fine. Veeam backs up the Hyper-V host OS at the VMware level and the Hyper-V guests as well.

Just gave the Hyper-V VMware guests some dedicated storage to talk to and it was off to the races.
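
If anyone wants to replicate the setup, the vSphere-side switch is just the nested-HV flag; a sketch using PowerCLI with a made-up VM and vCenter name (the VM has to be powered off, and the ESXi hosts need Intel VT-x/EPT or AMD-V/RVI):

# Assumes VMware PowerCLI is installed and you can reach vCenter.
Connect-VIServer -Server "vcenter.example.local"

# Expose hardware-assisted virtualization to the guest so Hyper-V will start inside it.
$vm   = Get-VM -Name "nested-hyperv-01"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)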
