Walked
Apr 14, 2003

So we inherited another department's equipment, and I'm the happy recipient of an unexpected EqualLogic PS6100 and some 6224 switches; maybe not the latest and greatest, but it won't hurt to have them operational.


I'm reading through the documentation and am looking for what I think is a pretty stupid clarification:


Just trying to confirm (looking at only one card for now, say controller 0):
Ports 1 and 3 should be on a separate subnet from ports 2 and 4.

I'm setting up for MPIO and iSCSI exactly as configured here, but nowhere do the Dell docs say anything about subnet configuration for MPIO.


Walked
Apr 14, 2003

NippleFloss posted:

For EQL, all members of the same group should have their ports connected to the same subnet.

Thanks! Was just coming back to say I found that information in some other discussion after enough googling.

Much appreciated!
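
For anyone who finds this later: in practice that means something like member eth0-eth3 at 10.10.50.11-14/24, the group IP at 10.10.50.10, and the host's MPIO iSCSI NICs at 10.10.50.21-22 on that same /24 (addresses made up purely to illustrate the single-subnet layout).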

Walked
Apr 14, 2003

I have an EqualLogic I want to benchmark for a SQL environment. Any tips for tests to run with iometer? It's tough to pull numbers from environment history since this is a new pilot under development, so I can only guess (mostly reporting; writes aren't terribly intensive, at least that's the plan).

Just trying to get a feel for what the array can handle - I'm not a storage guy by any means, so whatever tests I can concoct with iometer that'd make sense would be very helpful!
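
From what I've read, common starting points for a SQL-ish mix are an 8K random access spec with a read-heavy ratio (say 70/30 read/write) for the OLTP side and a 64K sequential read spec for the reporting side, but treat those as guesses rather than gospel. Alongside iometer, the kind of quick-and-dirty sanity check I have in mind is something like this Python sketch; the path, file size, and duration are made-up placeholders, and it goes through the OS cache, so it's a smoke test rather than a real benchmark:

code:
import os
import random
import time

# Rough sketch only: random 8K reads against a pre-created test file sitting
# on the array-backed volume. Path and duration are made-up placeholders.
# This goes through the Windows file cache, so use a test file much larger
# than RAM (or treat the numbers as a sanity check, not a benchmark).
TEST_FILE = r"E:\bench\testfile.dat"   # hypothetical volume on the EqualLogic LUN
BLOCK = 8 * 1024                       # 8K blocks, OLTP-ish
DURATION = 30                          # seconds

blocks = os.path.getsize(TEST_FILE) // BLOCK
ops = 0
start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while time.time() - start < DURATION:
        f.seek(random.randrange(blocks) * BLOCK)
        f.read(BLOCK)
        ops += 1

elapsed = time.time() - start
print(f"{ops / elapsed:.0f} IOPS, {ops * BLOCK / elapsed / 2**20:.1f} MB/s (8K random read)")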

Walked
Apr 14, 2003

Anyone worked with FusionIO cards?

I have a 1.2TB one that's not behaving as anticipated. It's for a lab and out of warranty, but it's seen very, very low use.


edit: nevermind; it looks like I needed to reinstall the chipset drivers for the host server.

Walked fucked around with this message at 18:38 on Feb 8, 2017

Walked
Apr 14, 2003

Mr Shiny Pants posted:

Is it a regular PCIe card? How much do you want for it? :)

It is! And it's going to find a home in my desktop if I can't get what I want out of it for a storage server.

H110Hawk posted:

We are playing these games right now as well. Can you describe how it's performing, kernel version, benchmark, and copy/paste your partitioning + FS creation commands? I will see if I can find mine. We're playing with "LIQID" NVMe cards. And yes, it's pronounced liquid; it is a dumb name and I pronounce it lick-id.

I'm actually running inside Windows Server 2016, as I'm hoping to use it as a StarWind VSAN cache disk, although I'm ready to dump it and move to a ZFS build if this keeps up.

I've tried it all:

Local benchmarks are good: ~1.5GB/s read / ~1.2GB/s write, with IOPS within range (and insane at small block sizes :getin: ). As soon as I throw it on a network of ANY kind, or even just put a Hyper-V VHD on there, performance drops to ~200MB/s read / ~400MB/s write; IOPS are still fine at lower block sizes, though.

I've tried setting 512 and 4096 sector sizes, NTFS / ReFS, MBR and GPT.

That said, I just updated all my chipset drivers and I'm seeing much closer to spec performance inside my test system. I'm going to rebuild this server today and see where I end up.


EDIT: nevermind, performance is terrible again.

Walked fucked around with this message at 18:44 on Feb 8, 2017

Walked
Apr 14, 2003

Any time I expose the storage to anything other than the native bare-metal host, performance tanks by 80% or so.

Hyper-V VHD on there? Performance in the VM is garbage. And this was my local test, to rule out the network somehow being related.

Present the drive or a folder on the drive via SMB and the performance tanks.

Present the drive or a VHD on the drive via iSCSI and performance tanks.

It's quite strange.
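
For what it's worth, the SMB case can be reproduced without iometer at all. A minimal sketch along these lines is enough to see whether the drop follows the share; the paths and sizes are placeholders (and it assumes a loopback share on the same host), not my actual layout:

code:
import os
import time

# Rough sketch of the local-vs-SMB comparison described above: sequential
# 1 MiB writes to the FusionIO volume directly, then through its share.
# Paths and total size are made-up placeholders.
TARGETS = {
    "local":          r"F:\bench\test.bin",              # hypothetical FusionIO mount
    "smb (loopback)": r"\\localhost\fusionio\test.bin",  # hypothetical share, same host
}
CHUNK = 1024 * 1024               # 1 MiB per write
TOTAL = 4 * 1024 * 1024 * 1024    # 4 GiB per target

buf = os.urandom(CHUNK)
for name, path in TARGETS.items():
    start = time.time()
    with open(path, "wb", buffering=0) as f:
        written = 0
        while written < TOTAL:
            written += f.write(buf)
        os.fsync(f.fileno())      # make sure it actually hit the disk / share
    print(f"{name}: {TOTAL / (time.time() - start) / 2**20:.0f} MB/s sequential write")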

But I think I'm getting somewhere. I've updated the drive firmware and reinstalled my chipset drivers again. A brief iometer run shows MUCH better numbers (close to theoretical max), but I want to run that a tad longer to be sure before calling it a day.

Walked
Apr 14, 2003

H110Hawk posted:

I would focus on this one, as it likely has the fewest variables. I am 100% unfamiliar with Windows, but: is there a way to map the FusionIO drive through to your Hyper-V instance with its VHD somewhere else? Or is that exactly what you're doing? Basically, put the root disk for your Windows instance elsewhere and expose an SMB share or similar through localhost.

I've run both routes.

However, after the firmware update and chipset driver reinstall, everything is still benchmarking as it should, and for sustained periods now.

Saturating 10GbE in style :getin:

edit: nm; performance tanked again over the network. Just going to use this for my VMware Workstation box, as I'm tired of fighting with it for what amounts to a cache disk I don't really need for pure lab stuff.

Walked fucked around with this message at 04:25 on Feb 9, 2017

Walked
Apr 14, 2003

Who should I be looking at for cost-effective storage these days? Just took a new position that needs a relatively cost-conscious virtualization storage system. What's the current hotness in that space?


Walked
Apr 14, 2003

Moey posted:

Can you be more specific about your requirements? Size and workload?

Sure; we're aiming for something in the 20-30TB range for a very generic "virtualization" workload; no particularly high-IO workloads, but significant VM usage. Clustered environment.

I've used the EqualLogic PS6000 series in the past and it's handled this workload well, but I want to see if there's another option to consider this time through, as cost is a bigger factor here.
