paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
I've only done this with Brocade, but it should work similarly with Cisco. If you attach an inter-switch link from your existing switch to a new one (without a config), it should download the config from the existing switch. Then you can migrate all your ports over to the new switch at your leisure, disconnect the ISL once complete, and retire the existing switch.

You could download the config from the existing switch to a text file and upload it to the new one... I think that would work.

This link describes how to do it, though it's not specific; maybe Google around some more for specific instructions on downloading/uploading a config to a remote server.

https://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/san-os/quick/guide/quick_cg.html
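If you want a scripted way to pull the config to a text file, something like this might do it. This is an untested sketch: it assumes the third-party Posh-SSH PowerShell module and SSH enabled on the switch, and the hostname and file path are placeholders, not anything from this thread.

code:

Install-Module Posh-SSH -Scope CurrentUser        # one-time install from the PowerShell Gallery

$cred = Get-Credential                            # switch admin account
$sess = New-SSHSession -ComputerName 'mds9124.example.local' -Credential $cred -AcceptKey

# Pull the running config and save it to a text file for review/re-use
$cfg = Invoke-SSHCommand -SSHSession $sess -Command 'show running-config'
$cfg.Output | Set-Content -Path 'C:\temp\mds9124-running-config.txt'

Remove-SSHSession -SSHSession $sess | Out-Null

The switch can also push its running config straight to a TFTP/SCP server with its own copy command, which is roughly the route that guide covers.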

obviously check and test this before relying on some random internet jackasses' word.

qutius
Apr 2, 2003
NO PARTIES

SlowBloke posted:

Another MDS question, we might have to decommission our MDS9124 and upgrade to a more modern 9148S. Am I going to need to redo the configuration/zoning from scratch, or could I import the config from the old switch to the new one?

That'll be a nice upgrade.

You can copy the config from the 9124 to the 9148S then move your connections over.

This is a good summary, though you won't need to worry about certain parts of this procedure like the license stuff:
https://www.cisco.com/c/en/us/support/docs/storage-networking/mds-9500-series-multilayer-directors/117621-configure-MDS-00.html

qutius fucked around with this message at 17:51 on Feb 7, 2019

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

SlowBloke posted:

Another MDS question, we might have to decommission our MDS9124 and upgrade to a more modern 9148S. Am I going to need to redo the configuration/zoning from scratch, or could I import the config from the old switch to the new one?

You will need to put a basic configuration on the switch, but things like zoning, aliasing, and zone sets are fabric properties; when a new, compatible switch (in this case it's the same vendor, so no issues) is joined to the fabric, the fabric configuration is distributed to it.

As paperchase said above, simply creating an ISL between the old switch and the new one (just run an FC cable between them) will start this process, which is relatively fast. Once the fabric is distributed to the new switch you'll just need to move cables from the old switch to the new one and then disconnect the ISL. Well, that's assuming you're zoning based on WWPN and not port ID, which you should be.

Things to consider:

Make sure the VSANs configured match between the two switches.

Make sure trunking mode is enabled on both

Set the FCID on the new switch to be different from the existing switch's for each VSAN

Make sure all features you need are enabled on the new switch (NPV, F-port channel, etc.)

Balance your WWPN logins appropriately across the forwarding engines. Each set of 16 sequential ports shares a forwarding engine. A quick way to compare most of this between the two switches is sketched below.
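Here's a rough pre-check sketch, assuming the third-party Posh-SSH module and SSH access to both switches; the switch names and output paths are placeholders, and the exact show commands can vary a bit between SAN-OS and NX-OS releases.

code:

# Run the same checks on the old and new switch and dump the output for a side-by-side diff
$switches = 'mds9124.example.local', 'mds9148s.example.local'
$checks   = 'show vsan',
            'show fcdomain domain-list',
            'show feature',
            'show zoneset active',
            'show flogi database'

$cred = Get-Credential
foreach ($sw in $switches) {
    $sess = New-SSHSession -ComputerName $sw -Credential $cred -AcceptKey
    foreach ($cmd in $checks) {
        "### $sw : $cmd" | Add-Content -Path "C:\temp\$sw-precheck.txt"
        (Invoke-SSHCommand -SSHSession $sess -Command $cmd).Output |
            Add-Content -Path "C:\temp\$sw-precheck.txt"
    }
    Remove-SSHSession -SSHSession $sess | Out-Null
}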

SlowBloke
Aug 14, 2017
Thanks everyone, the current settings are a combo of fcaliases (with pWWN) -> zones -> single zoneset, so it sounds like it should be doable. First time setting up an MDS from scratch (our 9124s were set up by an MSP; we just Ship-of-Theseus'd the config to its current state), so I'm a tad apprehensive about ripping and replacing our prod FC cores

SlowBloke fucked around with this message at 20:33 on Feb 7, 2019

qutius
Apr 2, 2003
NO PARTIES
Is anyone around here using, or has anyone used, or at least tested out, Unisphere Central?

I haven't really found much out there on people actually using it, which makes me think it's a nightmare. This would be for Unity arrays only at this point; no older-generation arrays in my environment.

CloudIQ is great for telemetry and alerting, but the business is asking/hoping for a centralized spot for actual config changes and such.

Edit: Now that I actually think more about this, I think I'll push to have CloudIQ as the pivot point for all these arrays. If someone needs to make actual changes on the storage, local Unisphere is a click away... still interested in whether anyone out there is using Unisphere Central tho.

qutius fucked around with this message at 21:22 on Mar 4, 2019

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Does anyone have any insight to performance of the NFS Server in Windows Server 2016 or 2019?

I got this thing kinda dumped in my lap where we're trying to offload some bulk storage from our expensive storage appliance, and someone had the genius idea to buy a 36-bay SuperMicro server and stuff it full of 12TB disks, install Windows Server with Storage Spaces, and the first test customer was using NFS. Performance was abysmal, but it looks like I'm able to get pretty decent performance on the same pool when I copy files from an SMB share onto the storage pool (haven't tried pushing from a Windows server to the storage pool with SMB yet).

But before I go too deep down the rabbit hole, any guidance on NFS with Windows?

Potato Salad
Oct 23, 2014

nobody cares


FISHMANPET posted:

Does anyone have any insight to performance of the NFS Server in Windows Server 2016 or 2019?

I got this thing kinda dumped in my lap where we're trying to offload some bulk storage from our expensive storage appliance, and someone had the genius idea to buy a 36-bay SuperMicro server and stuff it full of 12TB disks, install Windows Server with Storage Spaces, and the first test customer was using NFS. Performance was abysmal, but it looks like I'm able to get pretty decent performance on the same pool when I copy files from an SMB share onto the storage pool (haven't tried pushing from a Windows server to the storage pool with SMB yet).

But before I go too deep down the rabbit hole, any guidance on NFS with Windows?

NFS export services work pretty drat well. Idk what guidance you're looking for other than "it does indeed work pretty darn quick"

Do spend the necessary effort to get authorization/authentication working for NFSv4 from the get-go. Propping up NFSv3 on Winserv is a literal two-minute task, but you'll regret having to kill that at some point and implement v4 for your export point down the road.
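For anyone following along, a minimal sketch of what that looks like on Server 2016/2019. The share name, path, and authentication choices are placeholders, and the version-toggle parameter names on the last line are an assumption worth double-checking on your build.

code:

# Server for NFS role plus a Kerberos-only export
Install-WindowsFeature FS-NFS-Service -IncludeManagementTools

# Create the export and require Kerberos auth rather than plain AUTH_SYS
New-NfsShare -Name 'bulk' -Path 'D:\bulk' -Authentication Krb5, Krb5i -Permission readwrite

# See which protocol versions are enabled; leaving only v4 on from day one avoids the cleanup later
Get-NfsServerConfiguration
Set-NfsServerConfiguration -EnableNFSV2 $false -EnableNFSV3 $false   # assumption: parameter names may differ by build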

Digital_Jesus
Feb 10, 2011

Windows Server NFS seems to do very well with large file transfers and whatnot, but its performance still suffers with lots of small files, just like every other MS file system implementation.

It's not horrible, but running a *nix OS on the box may be a better choice.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
This is all a proof of concept, and the remote customer was not really willing to put much work into it besides "mount NFS point, schedule rsync job", and getting much info from them has been like pulling teeth. So I'm starting over from scratch, going to do some SMB and NFS testing all on my own before letting an uncooperative customer at it.

E: I was so happy living in virtual land, where storage and compute are someone else's problems, and now I'm just getting dragged back into a physical box and it kinda sucks.

FISHMANPET fucked around with this message at 16:38 on Mar 29, 2019

wibble
May 20, 2001
Meep meep
Are there any other motherboard manufacturers that offer motherboards with remote console and management, other than Supermicro?
Looking for an 1151 mATX for my next build but can't find any from the normal companies.

H110Hawk
Dec 28, 2006

wibble posted:

Are there any other motherboard manufacturers that offer motherboards with remote console and management, other than Supermicro?
Looking for an 1151 mATX for my next build but can't find any from the normal companies.

Like, does anyone else support a BMC card? They basically all do. Heck, Intel's support HTML5 KVM now. Can you be more specific as to your goals?

Thanks Ants
May 21, 2004

#essereFerrari


vPro will do what you want. Otherwise if you just need KVM then you can get an IP KVM and plug that in and get whatever board you want.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
We installed our new Nimble HF40 this week and I would be surprised if it has more than 3mm of clearance left in the rack. The longest SAN I have ever seen.

Thanks Ants
May 21, 2004

#essereFerrari


Check out the Hitachi Data102 if you want something that won’t fit in a rack

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

adorai posted:

We installed our new Nimble HF40 this week and I would be surprised if it has more than 3mm of clearance left in the rack. The longest SAN I have ever seen.

The controllers in it are just so goddamn huge. It has to be one of the absolute worst systems for disk space to physical size ratio I've ever seen.

H110Hawk
Dec 28, 2006

HalloKitty posted:

The controllers in it are just so goddamn huge. It has to be one of the absolute worst systems for disk space to physical size ratio I've ever seen.

Spoken like someone who forgets the days before top loading disk shelves.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It's 2019. Disks should load in like ammo magazines

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Vulture Culture posted:

It's 2019. Disks should load in like ammo magazines

Xyratex had a fun bulk loader tool that would drop ten at a time from the shipping box into an enclosure.

But why buy disks when you can do weird poo poo with SCM and QLC https://www.vastdata.com/

evil_bunnY
Apr 2, 2003

Potato Salad posted:

NFS export services work pretty drat well. Idk what guidance you're looking for other than "it does indeed work pretty darn quick"

Do spend the necessary effort to get authorization/authentication working for NFSv4 from the get-go. Propping up NFSv3 on Winserv is a literal two-minute task, but you'll regret having to kill that at some point and implement v4 for your export point down the road.
This is legit info+advice

qutius
Apr 2, 2003
NO PARTIES
We will be installing some Isilon A2000s soon, which I guess are loving massive too. Who needs rear doors?

lol internet.
Sep 4, 2007
the internet makes you stupid
Couple questions,

1. Why would you ever use more than two fibers (one for redundancy) to SAN storage? Is there any benefit?


2. For Hyper-V, is there any IO/VM monitoring tool? Sort of like how VMware generally has a plugin, is there perhaps a third-party app which can monitor VM IO on a SAN?

3. Are there benefits to using multiple LUNs to hold VM disks? Aside from having to restore a whole LUN if it gets corrupted/goes bad for whatever reason?

devmd01
Mar 7, 2006

Elektronik
Supersonik
Lots of reasons for #3: separation of I/O workloads; if you have auto-tiering you can pin one LUN to the top tier; maybe you replicate a couple of LUNs over to your DR SAN but not everything; keeping A/B pairs of VMs separate; etc.

H110Hawk
Dec 28, 2006

lol internet. posted:

Couple questions,

1. Why would you ever use more than two fibers (one for redundancy) to SAN storage? Is there any benefit?

1. Throughput, segregation, yet more redundancy, FC access to other trays, money to burn.

Digital_Jesus
Feb 10, 2011

lol internet. posted:

Couple questions,

1. Why would you ever use more than two fibers (one for redundancy) to SAN storage? Is there any benefit?

2. For Hyper-V, is there any IO/VM monitoring tool? Sort of like how VMware generally has a plugin, is there perhaps a third-party app which can monitor VM IO on a SAN?

3. Are there benefits to using multiple LUNs to hold VM disks? Aside from having to restore a whole LUN if it gets corrupted/goes bad for whatever reason?

1. Depends on your configuration. Generally most of my smaller installs are four-cable connections: two dual-port cards in each server (for physical hosts), with each card's two connections split between two FC switches, and the same 4/8 uplinks to the storage depending on the appliance model. Redundancy and throughput, both at the server level and at the switching level. E: When you start working with converged infrastructure a lot of it is throughput-related. Not so great to feed 16 blades off two FC links.

2. PowerShell is your friend: Enable-VMResourceMetering and Measure-VM (there's a quick sketch after this list).

3. Generally you split LUNs across storage relative to their spindle sets in a given storage pool. That way you can give a dedicated spindle set to a particular VM (SQL, Exchange, resource-intensive application servers, VDI) while keeping your low-impact VMs on slower pool sets. For example, I generally put all of my VM OS virtual disks on a single LUN on an R5 set, and will split off LUNs to dedicated R1+0 pools for database operations.
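The metering sketch from point 2, run on the Hyper-V host. The VM name is a placeholder, and the exact disk-related property names on the report object are from memory, so treat them as an assumption.

code:

Enable-VMResourceMetering -VMName 'sql01'            # start collecting (placeholder VM name)

# ...let it accumulate for a while, then pull a report
Measure-VM -VMName 'sql01' |
    Select-Object VMName, AggregatedAverageNormalizedIOPS,
                  AggregatedDiskDataRead, AggregatedDiskDataWritten

Reset-VMResourceMetering -VMName 'sql01'             # zero the counters for the next interval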

Digital_Jesus fucked around with this message at 03:20 on Apr 4, 2019

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

lol internet. posted:

3. Are there benefits to using multiple LUNs to hold VM disks? Aside from having to restore a whole LUN if it gets corrupted/goes bad for whatever reason?

This depends highly on the vagaries of your particular storage solution and how you are using it, as well as which hypervisor you're running. The general trend is towards larger datastores running more VMs, though.

Rhymenoserous
May 23, 2008

lol internet. posted:


3. Are there benefits to using multiple LUNs to hold VM disks? Aside from having to restore a whole LUN if it gets corrupted/goes bad for whatever reason?

This is literally a question to ask your storage provider because otherwise the answer is "It depends"

lol internet.
Sep 4, 2007
the internet makes you stupid

Digital_Jesus posted:

1. Depends on your configuration. Generally most of my smaller installs are four-cable connections: two dual-port cards in each server (for physical hosts), with each card's two connections split between two FC switches, and the same 4/8 uplinks to the storage depending on the appliance model. Redundancy and throughput, both at the server level and at the switching level. E: When you start working with converged infrastructure a lot of it is throughput-related. Not so great to feed 16 blades off two FC links.

If I am running 2 x 8Gb FC connections from SAN (1 FC per controller) <> switch <> server with MPIO, would the actual throughput be 16Gb automatically, assuming the spinning disks are not the bottleneck? This is not an appliance/vendor-specific feature, just a general standard?

What if it was 2 x 8Gb FC across two separate fabrics? Would it still be 16Gb, or would I be able to utilize both fabrics and get 32Gb?

Sorry if these questions sound silly, but I've just been learning FC/SAN stuff in the last 3 months or so without much formal training.

Digital_Jesus
Feb 10, 2011

You'll be limited by the output of your storage appliance. If you're working with an appliance that has dual storage processors in an active/active setup, the way most appliances operate is to assign a given LUN to a single SP for processing, so if SP A has two 8Gb connections, it will (if configured to) round-robin both and give you 16Gb. Without a second LUN, your second SP will sit idle and only assume IO operations if the first SP fails or tells it to take over.

In your particular case it would be 8Gb per SP, so that's your max for a single LUN.

E: Please keep in mind 8Gb of traffic for a storage connection is pretty loving huge unless you're working in the megacorp "we've got a whole building just for servers" world. I'm assuming you're not? Either way 8Gb will process a loooooota traffic.

E2: Also, I should ask: are you going FC because that's what you inherited or for a performance-related reason? If not, iSCSI is cheaper and easier to work with.
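One more Windows-side thing worth checking, since whether both paths to an SP actually get used also depends on the host's MPIO load-balance policy. This assumes the MPIO feature and its built-in PowerShell module are installed on the host.

code:

Get-MSDSMGlobalDefaultLoadBalancePolicy              # RR = round robin, FOO = fail over only, etc.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR   # make round robin the default for new LUNs

mpclaim.exe -s -d                                    # per-disk view of paths and the active policy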

Digital_Jesus fucked around with this message at 02:54 on Apr 5, 2019

lol internet.
Sep 4, 2007
the internet makes you stupid
Inherited. Basically a couple of 3PARs and 16Gb SAN switches. There are some older no-name 8Gb storage appliances connected to an HP c7000 blade enclosure, but on 4Gb cards. The enclosure is going to be replaced though.

Digital_Jesus
Feb 10, 2011

Word. If you like hit me up in PMs and I can give you some help with zoning and poo poo if you want.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Digital_Jesus posted:

E: Please keep in mind 8Gb of traffic for a storage connection is pretty loving huge unless you're working in the megacorp "we've got a whole building just for servers" world. I'm assuming you're not? Either way 8Gb will process a loooooota traffic.

8Gb/s is only about 1GB/s of throughput, which really isn't that much, especially for a large-block workload. That's only about 30,000 32KB IOPS or 4,000 256KB IOPS. You can definitely saturate that in a fairly modest environment with backup, analytics, EDW, badly formed SQL reports, etc.
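The back-of-the-envelope math, for anyone who wants to check it (PowerShell as a calculator; the 1GB/s payload figure is the rough assumption above, ignoring protocol overhead):

code:

$bytesPerSec = 1GB            # treat 8Gb/s FC as roughly 1 GB/s of usable payload
$bytesPerSec / 32KB           # ~32,768 IOPS at a 32 KB IO size
$bytesPerSec / 256KB          # ~4,096 IOPS at a 256 KB IO size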

Digital_Jesus
Feb 10, 2011

YOLOsubmarine posted:

8Gb/s is only about 1GB/s of throughput, which really isn't that much, especially for a large-block workload. That's only about 30,000 32KB IOPS or 4,000 256KB IOPS. You can definitely saturate that in a fairly modest environment with backup, analytics, EDW, badly formed SQL reports, etc.

Right, but if you're planning infra for those specific metrics, you are calculating for them and aware of what you need. If you're not, then I'd wager you're not working in an environment that requires larger storage bandwidth, and for most places an 8Gb link will probably move more traffic than your spinner drives are going to put out.

In the grand scheme of things, no, it isn't a lot of traffic, but in smaller environments I'd be impressed if you crushed it, when things like backups, SQL jobs, etc. should all be scheduled around each other, not simultaneously destroying your storage adapters during peak business hours.

Pile Of Garbage
May 28, 2007



YOLOsubmarine posted:

8Gb/s is only about 1GB/s of throughput, which really isn't that much, especially for a large-block workload. That's only about 30,000 32KB IOPS or 4,000 256KB IOPS. You can definitely saturate that in a fairly modest environment with backup, analytics, EDW, badly formed SQL reports, etc.

You're making a lot of assumptions here and you shouldn't feel comfortable throwing around IOPS numbers based purely on storage connectivity throughput, even in theoretical terms. Adding vague qualifiers like "fairly modest environment with backup" is even worse.

From my experience with FC if your fabric is configured correctly then you'll more often than not run into bottlenecks on your shelves/controllers well before you reach the throughput maximums of the fabric itself, which Digital_Jesus already pointed out.

evil_bunnY
Apr 2, 2003

8Gb/s is about 5 drives' worth of sequential IO; don't assume it won't be the bottleneck.

Digital_Jesus
Feb 10, 2011

Or better yet, actually measure your storage resources and purchase the equipment you need to accommodate your workload and growth, rather than making random assumptions on the internet in relation to a conversation between someone giving general advice and a dude who self-proclaimed to know approximately jack poo poo about working with his storage gear.

A single 8Gbps FC link will move a lot of storage data for a small environment. If you are arguing that "that isn't much data," you are outside the scope of the conversation at hand, which is directed at someone running lower-end storage on the enterprise spectrum, if it's still even coming with 8Gb FC instead of 16 or 32.

Besides, if you're concerned about storage bandwidth you'd be moving off FC and switching to 25/40/100Gbps iSCSI anyway :v: (or working with an endless budget and balancing over 64Gb+ FC, I suppose, if you're concerned about block performance, and those lines are starting to blur a bit anywho)

Digital_Jesus fucked around with this message at 19:18 on Apr 5, 2019

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Pile Of Garbage posted:

You're making a lot of assumptions here and you shouldn't feel comfortable throwing around IOPS numbers based purely on storage connectivity throughput, even in theoretical terms. Adding vague qualifiers like "fairly modest environment with backup" is even worse.

From my experience with FC if your fabric is configured correctly then you'll more often than not run into bottlenecks on your shelves/controllers well before you reach the throughput maximums of the fabric itself, which Digital_Jesus already pointed out.

I’m not making assumptions, I’m laying out that 8Gb of throughout for an array to push. Any enterprise array bought in the last 3 years can drive that in real world conditions.

Saying that 8Gb of throughput is a lot to anyone who isn't at the "whole building just for servers" level simply isn't true. I've seen customers in the 5-compute-host range hit that on new arrays doing pretty normal things like ERP reports.

quote:

Besides, if you're concerned about storage bandwidth you'd be moving off FC and switching to 25/40/100Gbps iSCSI anyway :v: (or working with an endless budget and balancing over 64Gb+ FC, I suppose, if you're concerned about block performance, and those lines are starting to blur a bit anywho)

Very few customers that truly care about serious throughput use iSCSI. InfiniBand and FC are still very common interconnects in the HPC and big-data spaces. You'll see all sorts of things like NFS over RDMA on InfiniBand. Having guaranteed delivery built into the transport is a pretty big deal if you're looking for maximum performance. Likewise, offloading the cycles required for protocol encapsulation/de-encapsulation onto an HBA can make a big difference, since those sorts of things are already CPU-intensive. And iSCSI in particular is a no-go, taking the bloated SCSI protocol stack and layering it on top of notoriously hard-to-tune TCP.

YOLOsubmarine fucked around with this message at 01:52 on Apr 6, 2019

LordAdakos
Sep 1, 2009
If I wanted to learn all about enterprise storage, and how to tell a NetApp from a Compellent SC3000 from an IBM DS83XX, and how everything connects, and what I need to access SFP+, QSFP, or SFF-8088 connector systems..... Where can I start? Is there a good catch-all YouTube series or learning course that covers all this in an easy-to-swallow pill?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

LordAdakos posted:

If I wanted to learn all about enterprise storage, and how to tell a NetApp from a Compellent SC3000 from an IBM DS83XX, and how everything connects, and what I need to access SFP+, QSFP, or SFF-8088 connector systems..... Where can I start? Is there a good catch-all YouTube series or learning course that covers all this in an easy-to-swallow pill?
SFP+ and QSFP+ are just connectors for network interfaces. They don't say anything about how you access the device. Storage protocols would be something like iSCSI or FCoE.

Understanding storage means understanding a bunch of different, interrelated disciplines. The difference between storage platforms is just a matter of vendor spec sheets and documentation. Most people only work day-to-day with one or two different storage vendors for operational supportability reasons, so the people you see who seem to be experts in the differences between storage platforms are likely to either a) be consultants or b) have finished comparison shopping lately. Frankly, it's the least interesting and least useful part of the discipline. You can pick up all the differences by arranging a few vendor sales meetings and having a couple of educated conversations with people online. The rest is fundamentals.

And those fundamentals are quite a bit harder. They involve things like knowing how to profile and classify storage workloads, understanding the impacts of sequential and random I/O and what multiple concurrent sequential I/O workloads do to each other, read/write sizing, the impact of segment size on a RAID array, and so forth. You need to know how to monitor performance in production and understand how different indicators point to different kinds of performance degradation on your various consumers. Doing this effectively often requires understanding specifics of how certain workloads actually function under the hood, including databases and virtualization platforms. From a business perspective, it means knowing how to size, how to make tradeoffs between availability, capacity, and performance for a certain cost, and how to back up and restore the system. If the system will be used for direct file server access by clients (CIFS, NFS, etc.), you need to understand a certain amount about the systems you're tying into, how they authenticate, store, and delineate user information, and so on.
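To make the profiling/monitoring piece concrete, on a Windows host that can start with something as simple as sampling the standard PhysicalDisk counters over time; the counter set and interval here are just an illustration.

code:

# Standard PhysicalDisk counters: IO rate, IO size, latency
$counters = '\PhysicalDisk(*)\Disk Reads/sec',
            '\PhysicalDisk(*)\Disk Writes/sec',
            '\PhysicalDisk(*)\Avg. Disk Bytes/Transfer',
            '\PhysicalDisk(*)\Avg. Disk sec/Transfer'

# One minute of samples, five seconds apart; pipe to Export-Counter if you want to keep them
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12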

There's a lot to it, which is why you see a lot of businesses offloading an increasing amount of their storage activity into fully managed cloud platforms like Amazon S3.

Wicaeed
Feb 8, 2005
God help me, our company keeps deciding to go with this all-flash storage vendor named Kaminario.

I'd never heard of them before I started at this company, and I'm starting to wonder if some VP somewhere has money invested in them.

I never thought I'd miss an HPE product, but I swear to god all of our problems with pinpointing what is causing the slowness on these arrays would be instantly solved by having Infosight :(

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Wicaeed posted:

God help me, our company keeps deciding to go with this all-flash storage vendor named Kaminario.

I'd never heard of them before I started at this company, and I'm starting to wonder if some VP somewhere has money invested in them.

I never thought I'd miss an HPE product, but I swear to god all of our problems with pinpointing what is causing the slowness on these arrays would be instantly solved by having Infosight :(

Kaminario has been around for a while. They’re still a little niche, but it’s not bad stuff. Very fast, relatively simple.
