Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

I have a Dell MD 3820i full of SSDs on a 10 gig network and all of my benchmarks have random writes maxing out at 45 MB/s. Two different Dell teams have looked it over and both of them say everything is configured correctly. The escalated pro support guy told me that the performance I was seeing was expected. The pro deploy guy thought that maybe my SSDs were bad. All 20 of them, I guess.

I hate Dell so much right now.


Thanks Ants
May 21, 2004

#essereFerrari


BonoMan posted:

Yeesh. I'm guessing y'all are talking about non-video products right? Most companies that use Adobe video software (premiere and after effects) work pretty exclusively from network shares (in some form or another).

https://helpx.adobe.com/premiere-pro/kb/networks-removable-media-dva.html

quote:

Adobe Technical Support only supports using Adobe Premiere Pro, Adobe Premiere Elements, After Effects, Encore, Media Encoder, Prelude, or SpeedGrade on a local hard disk.

BonoMan
Feb 20, 2002

Jade Ear Joe

Ha, I looked that up JUST as you were posting it.

Jesus, Adobe. Way to be oblivious to how non-freelancer companies actually work.

Thanks Ants
May 21, 2004

#essereFerrari


Happiness Commando posted:

I have a Dell MD 3820i full of SSDs on a 10 gig network and all of my benchmarks have random writes maxing out at 45 MB/s. Two different Dell teams have looked it over and both of them say everything is configured correctly. The escalated pro support guy told me that the performance I was seeing was expected. The pro deploy guy thought that maybe my SSDs were bad. All 20 of them, I guess.

I hate Dell so much right now.

I'm pretty sure the MD3 is in no way designed as an all-flash box. I'm not saying that your speeds are indicative of everything working fine, but I think filling one up with Dell-priced SSDs is a waste of money.

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

Thanks Ants posted:

I'm pretty sure the MD3 is in no way designed as an all-flash box. I'm not saying that your speeds are indicative of everything working fine, but I think filling one up with Dell-priced SSDs is a waste of money.

They sold us a Compellent - which we didn't need - but didn't tell us that it required 240V. We run 120V for no good reason. They took the Compellent back and gave us an MD with all flash at a sweet price point. The sweet price point is not worth having to deal with them.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Internet Explorer posted:

I have no idea if this knowledge is still current or not, but in general Adobe products seem to have a weird aversion to network storage. If they are saying they get bogged down or crash when working "off the network" I would first work with Adobe to see if what they are doing is supported on network shares before dropping a bunch of money.

Something I noted in an LTT video (link below) is that they were having multiple computers crash at the same time running Adobe products. They found it was a network storage latency problem. They went to SSD servers. Probably not needed for InDesign and Photoshop, but it's obvious why they won't provide any support for network shares (their terrible code).
https://www.youtube.com/watch?v=eQED3tF8wuw

The QNAP rack mentioned does have 4 x 1 Gb/s ports. The one cool thing about the QNAP interface is that it makes using multiple ports easy, with a variety of options for configuring them. You could have 4 people working at full 1 Gb/s speed to the NAS. It might solve the problem, but the LTT solution in the video (for their video editing) was to move to 10 Gb/s networking.

Thanks Ants
May 21, 2004

#essereFerrari


The most fun can be had running Macs off SMB shares with Adobe products! poo poo SMB support from Apple, barely functional OS support for six months after each OS is released, Adobe's general attitude towards modern IT environments. It's all a lot of fun.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Happiness Commando posted:

I have a Dell MD 3820i full of SSDs on a 10 gig network and all of my benchmarks have random writes maxing out at 45 MB/s. Two different Dell teams have looked it over and both of them say everything is configured correctly. The escalated pro support guy told me that the performance I was seeing was expected. The pro deploy guy thought that maybe my SSDs were bad. All 20 of them, I guess.

I hate Dell so much right now.

What benchmark tool are you using? How many workers, what is the IO size, what queue depth(s) how many concurrent IOs per host, etc? What latencies are you seeing at max IO rate?

This could just be poor-performing storage, but it's likely a poorly configured benchmark.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Happiness Commando posted:

I have a Dell MD 3820i full of SSDs on a 10 gig network and all of my benchmarks have random writes maxing out at 45 MB/s. Two different Dell teams have looked it over and both of them say everything is configured correctly. The escalated pro support guy told me that the performance I was seeing was expected. The pro deploy guy thought that maybe my SSDs were bad. All 20 of them, I guess.

I hate Dell so much right now.

What raid config?

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

YOLOsubmarine posted:

What benchmark tool are you using? How many workers, what is the IO size, what queue depth(s) how many concurrent IOs per host, etc? What latencies are you seeing at max IO rate?

This could just be poor-performing storage, but it's likely a poorly configured benchmark.

Crystal Disk Mark on default settings - 4 KB with 8 queue 8 thread and 32 queue 1 thread came up with the weird results. To a lesser extent, 1 queue 1 thread was also interesting. We looked at the performance tab in the Dell MD config tool and it came up with more or less exactly the same figures. I ran the same benchmark against a number of configs; here is what I saw:
SSD RAID 6, SSD RAID 10, SSD Disk Pool (Dell's 'something like RAID 6' implementation), and HDD RAID 10 all had the same figures - roughly 40 MB/s for 8q/8t and 32q/1t, and 10 MB/s for 1q/1t.

SSD RAID 10 locally installed behind some PERC on one of my ESX hosts returned 400 MB/s for 8q/8t and 32q/1t and 80 MB/s for 1q/1t. My laptop with a consumer level SSD returned on the order of 150 MB/s for 8q/8t and 32q/1t and 20? MB/s for 1q/1t.

I acknowledge that synthetic benchmarks aren't real world, and I don't know how to properly benchmark storage. We were just looking to compare the new SSD arrays to our existing spinning disks to see whether we needed to do a combination of RAID levels to hit our space targets.
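For context, those CrystalDiskMark figures translate into IOPS and per-IO latency with some back-of-the-envelope arithmetic. A quick sketch (assuming the 4 KB IO size from the settings above and decimal megabytes, so treat the numbers as approximate):

```python
def iops(mb_per_s, io_size_bytes=4096):
    """Convert a small-block throughput figure (MB/s) to IOPS."""
    return mb_per_s * 1_000_000 / io_size_bytes

qd1 = iops(10)   # the ~10 MB/s 1q/1t result
qd64 = iops(40)  # the ~40 MB/s result with 64 outstanding IOs (8q/8t)

# At queue depth 1, average latency is simply 1 / IOPS:
latency_ms = 1000 / qd1

# Going from 1 to 64 outstanding IOs only quadrupled throughput,
# which suggests the array stops scaling long before 20 SSDs should.
print(round(qd1), round(qd64), round(latency_ms, 2))
```

The interesting part isn't the QD1 latency (which is respectable); it's that 64x the concurrency only bought 4x the throughput.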

BonoMan
Feb 20, 2002

Jade Ear Joe

Devian666 posted:

Something I noted in an LTT video (link below) is that they were having multiple computers crash at the same time running Adobe products. They found it was a network storage latency problem. They went to SSD servers. Probably not needed for InDesign and Photoshop, but it's obvious why they won't provide any support for network shares (their terrible code).
https://www.youtube.com/watch?v=eQED3tF8wuw

The QNAP rack mentioned does have 4 x 1 Gb/s ports. The one cool thing about the QNAP interface is that it makes using multiple ports easy, with a variety of options for configuring them. You could have 4 people working at full 1 Gb/s speed to the NAS. It might solve the problem, but the LTT solution in the video (for their video editing) was to move to 10 Gb/s networking.

Thanks I'll check it out when my kid goes to bed!

Our video storage solution is all 10GbE to 100 TB of central storage and it's changed our world. But that's totally PC-based. Getting the Macs onto 10GbE wouldn't be doable in our current budget, unfortunately.

redeyes
Sep 14, 2002

by Fluffdaddy
I've never seen a PERC perform anything other than poo poo. That doesn't mean anything though.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Happiness Commando posted:

Crystal Disk Mark on default settings - 4 KB with 8 queue 8 thread and 32 queue 1 thread came up with the weird results. To a lesser extent, 1 queue 1 thread was also interesting. We looked at the performance tab in the Dell MD config tool and it came up with more or less exactly the same figures. I ran the same benchmark against a number of configs; here is what I saw:
SSD RAID 6, SSD RAID 10, SSD Disk Pool (Dell's 'something like RAID 6' implementation), and HDD RAID 10 all had the same figures - roughly 40 MB/s for 8q/8t and 32q/1t, and 10 MB/s for 1q/1t.

SSD RAID 10 locally installed behind some PERC on one of my ESX hosts returned 400 MB/s for 8q/8t and 32q/1t and 80 MB/s for 1q/1t. My laptop with a consumer level SSD returned on the order of 150 MB/s for 8q/8t and 32q/1t and 20? MB/s for 1q/1t.

I acknowledge that synthetic benchmarks aren't real world, and I don't know how to properly benchmark storage. We were just looking to compare the new SSD arrays to our existing spinning disks to see whether we needed to do a combination of RAID levels to hit our space targets

Are you using a VM for testing? If so, is the MD storage attached directly to the VM via in-guest iSCSI, or is it a datastore that the VM resides on?

Crystal also isn’t the best tool to use for this unless your concern is simply absolute read/write throughput, and even then it’s probably not. You really need a tool that provides response time. If you’re only getting 2000 4K IOPS but the latency is 0.5 ms, then the problem isn’t the storage, it’s something else.

Also, local disk latency and performance are tough to compare to SAN or network-attached storage, because the effect of device and file system caching can be very different between the two depending on the OS, the storage driver, and how the benchmark tool calls for IO.
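The response-time point can be made concrete with Little's Law: average outstanding IOs = IOPS × latency. A quick sketch plugging in the hypothetical numbers from the post above (2000 4K IOPS at 0.5 ms):

```python
def outstanding_ios(iops, latency_seconds):
    # Little's Law: mean concurrency = arrival rate * time in system
    return iops * latency_seconds

# 2000 IOPS completing in 0.5 ms each works out to ~1 IO in flight
# on average -- the array is mostly idle, so the bottleneck is the
# host, path, or benchmark configuration, not the storage.
print(outstanding_ios(2000, 0.0005))
```

That's exactly why a tool that reports latency (not just MB/s) matters: low throughput with low latency points away from the array.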

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

The VM is residing on a datastore. Can you suggest a tool that would give me, a non-expert, more valuable insight?

BonoMan
Feb 20, 2002

Jade Ear Joe

Devian666 posted:

Something I noted in an LTT video (link below) is that they were having multiple computers crash at the same time running Adobe products. They found it was a network storage latency problem. They went to SSD servers. Probably not needed for InDesign and Photoshop, but it's obvious why they won't provide any support for network shares (their terrible code).
https://www.youtube.com/watch?v=eQED3tF8wuw

The QNAP rack mentioned does have 4 x 1 Gb/s ports. The one cool thing about the QNAP interface is that it makes using multiple ports easy, with a variety of options for configuring them. You could have 4 people working at full 1 Gb/s speed to the NAS. It might solve the problem, but the LTT solution in the video (for their video editing) was to move to 10 Gb/s networking.

So I ended up finally watching that video. Very neat and it's very similar to our setup minus the SSDs and watch folder server (although that's something we've thought about building).

We basically do 50TB with realtime backup to another 50TB. All editor machines connected via 10GbE. Then things go to "nearline" after 6 months and then off to LTO after another few months.

It's freakin' awesome and we work with RAW 4K RED footage all the time. We never transcode for editing. I love 10GbE. Even with our non-SSD hard drives it's blazing fast.

Two problems - we have no offsite backup and management won't pay for it. Lol. And it cost about $25-30K and management won't pay for something even remotely similar for the non-video departments.

I think best I'll be able to get for the non-video departments is maybe some SSD caching options with regular HDDs.

Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe
I'm recording a fire test in a few days on an EOS M100. I'm dreading the size of the footage, and I think it saves it as RAW but I'll be checking all the settings this weekend. Lots of fun trying all this stuff out and I haven't done any video editing for a couple of years.

Your work setup sounds decent but there's the 3-2-1 rule for backups. 3 copies, 2 on site and 1 off site. I hope that someone factors that into your workplace disaster planning.

Potato Salad
Oct 23, 2014

nobody cares


Happiness Commando posted:

I have a Dell MD 3820i full of SSDs on a 10 gig network and all of my benchmarks have random writes maxing out at 45 MB/s. Two different Dell teams have looked it over and both of them say everything is configured correctly. The escalated pro support guy told me that the performance I was seeing was expected. The pro deploy guy thought that maybe my SSDs were bad. All 20 of them, I guess.

I hate Dell so much right now.

I have a 38x0i with hard drives in RAID 10 pushing 300 MB/s. Something is severely wrong with the firmware or configuration in your device.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
So, I'm having some MASSIVE issues prying data from a VNX5300 to a modern NetApp; no tool I've utilized so far will actually carry over permissions.
Robocopy using /copyall spits error 31; using all of the /copy flags except S succeeds, but the moment I try to actually *copy* security info, it errors.

NetApp's XCP errors on *some* permissions, citing that ACE type 170 is not supported yet, and fails to move the file that it failed on.

EMCOPY bombs out citing that it can't set the security descriptor, and doesn't even migrate the file it failed on.

Icacls is the only thing I can find that will scrape and successfully apply permissions, but if you've ever used icacls on a path that contains 100,000+ tiny files, it's *very* time consuming, and for a hospital, I don't have hours to spare sometimes.

Any suggestions?
I'm not against looking at 3rd party tools such as DataDobi, but I don't have time to wine-n-dine for a tool.
Five months to move 600 TB of data, oof.

Thanks Ants
May 21, 2004

#essereFerrari


Can you not just restore your backups onto the new storage :getin:

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

Thanks Ants posted:

Can you not just restore your backups onto the new storage :getin:

Considered it, we're trying to fix some fucky groups and bad file structure :/

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

I’ve used SecureCopy in the past to migrate CIFS data onto NetApp and had better luck with permissions handling than, for instance, robocopy.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

YOLOsubmarine posted:

I’ve used SecureCopy in the past to migrate CIFS data onto NetApp and had better luck with permissions handling than, for instance, robocopy.

I'll give it a look.
I think it's more of something cocked up with the VNX, we had immense problems getting stuff off of the same appliance to an Isilon awhile back.

evil_bunnY
Apr 2, 2003

YOLOsubmarine posted:

I’ve used SecureCopy in the past to migrate CIFS data onto NetApp and had better luck with permissions handling than, for instance, robocopy.
Robocopy constantly lost ACLs for us until we ran it elevated. Since then it’s been great.

EoRaptor
Sep 13, 2003

by Fluffdaddy

kzersatz posted:

So, I'm having some MASSIVE issues prying data from a VNX5300 to a modern NetApp; no tool I've utilized so far will actually carry over permissions.
Robocopy using /copyall spits error 31; using all of the /copy flags except S succeeds, but the moment I try to actually *copy* security info, it errors.

NetApp's XCP errors on *some* permissions, citing that ACE type 170 is not supported yet, and fails to move the file that it failed on.

EMCOPY bombs out citing that it can't set the security descriptor, and doesn't even migrate the file it failed on.

Icacls is the only thing I can find that will scrape and successfully apply permissions, but if you've ever used icacls on a path that contains 100,000+ tiny files, it's *very* time consuming, and for a hospital, I don't have hours to spare sometimes.

Any suggestions?
I'm not against looking at 3rd party tools such as DataDobi, but I don't have time to wine-n-dine for a tool.
Five months to move 600 TB of data, oof.

You could robocopy the data across, creating a log of copied files, then use PowerShell to ingest the log, Get-Acl the original file, and Set-Acl the copied file. This only really works if you can break up the copy into chunks that aren’t touched during the copy process.
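A minimal sketch of the path-mapping half of that idea, assuming the log yields source paths; the actual ACL copy (Get-Acl/Set-Acl, or icacls) is Windows-only and is left as a comment:

```python
from pathlib import PureWindowsPath

def dest_path(src_file, src_root, dst_root):
    """Map a copied source path to its destination on the new filer."""
    rel = PureWindowsPath(src_file).relative_to(src_root)
    return str(PureWindowsPath(dst_root) / rel)

# For each (src, dst) pair you would then re-apply the ACL on Windows,
# e.g. from PowerShell:  Get-Acl $src | Set-Acl $dst
print(dest_path(r"\\vnx\share\dept\file.txt", r"\\vnx\share", r"\\netapp\share"))
```

The server names here are placeholders; PureWindowsPath handles UNC paths correctly even when the script runs elsewhere.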

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

evil_bunnY posted:

Robocopy constantly lost ACLs for us until we ran it elevated. Since then it’s been great.

It's the only way I run it now, but you are right; with other runs in the past, this was it.

ihafarm
Aug 12, 2004
I had similar issues during a file migration and ended up using Beyond Compare after trying many alternatives. Due to the way NTFS ACLs are applied, I found it much faster to clone the directory structure and apply permissions first, then copy the files.

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read
The Dell tech finally got the SC5020 installed; glad to see Brocade switches are still using Java for the GUI!

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read
Okay, what’s the general opinion on using dedupe+compression on volumes holding SQL DBs? Compellent support was in the middle saying there would be the additional overhead that could add latency, other storage vendors are of course saying their systems can handle it without a problem, etc.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Spring Heeled Jack posted:

Okay, what’s the general opinion on using dedupe+compression on volumes holding SQL DBs? Compellent support was in the middle saying there would be the additional overhead that could add latency, other storage vendors are of course saying their systems can handle it without a problem, etc.
A SQL DB is just a bunch of bits on disk, with a random component and a sequential component. Consider your actual workload. It will be a good fit for some and not others. If this is a thing you're even considering in the first place, I'm going to guess that this isn't an extremely performance-sensitive database.

Spring Heeled Jack posted:

The Dell tech finally got the SC5020 installed, glad to see Brocade switches are still using Java for the GUI!
I'm equally surprised to find out people are still using the GUI on Brocade switches

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Spring Heeled Jack posted:

Okay, what’s the general opinion on using dedupe+compression on volumes holding SQL DBs? Compellent support was in the middle saying there would be the additional overhead that could add latency, other storage vendors are of course saying their systems can handle it without a problem, etc.

Well, the main issue here is that Compellent dedupe and compression are lovely and other vendors' implementations are not lovely. Whether you see a benefit depends on how it’s implemented. There are certainly vendors that can do very low latency SQL operations with those things enabled.

Vulture Culture posted:

I'm equally surprised to find out people are still using the GUI on Brocade switches

Same. The CLI is very friendly.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Spring Heeled Jack posted:

Okay, what’s the general opinion on using dedupe+compression on volumes holding SQL DBs? Compellent support was in the middle saying there would be the additional overhead that could add latency, other storage vendors are of course saying their systems can handle it without a problem, etc.

Again, Pure is awesome here. Most of our SQL DB volumes see a 3-4:1 reduction ratio and the performance has been nothing other than excellent.

Potato Salad
Oct 23, 2014

nobody cares


*bursts into room* consider most expensive *pant* operations and under what kind of load *wheeze* they're run

If dedupe would actually save you more than three or four figures of capacity on a db's physical storage, perhaps you're running a ...lot....uh


I'm not experienced / creative enough here to think up a practical example of a situation where you'd see dedupe on DB data in, say, 3:1 other than "here's my blobs of uncompressed cat photos and large segments of other uncompressed, unordered data" (edit - but also isn't stained enough to care about a dedupe performance hit)

I guess what's weird about dedupe on DB storage is that either (a) implies the hardware wasn't specced specifically for that db's workload and structure/content characteristics or (b) it's a "database" that hardly matters and could run on a laptop for all anyone cares, were availability not a concern

Potato Salad fucked around with this message at 03:26 on Aug 24, 2018

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


I'm just going to answer that with 'huh?'

3:1 is a pretty conservative amount of dedup/comp on a database volume and if you have decent storage you aren't going to see any performance hit out of it.

One of our SQL servers has about 800 databases on it; presented storage to the VM is in the 15 TB range. This is all transactional data. The actual amount of storage consumed on the back end is about 3.5-5 TB of flash.

That server will easily do 10+ Gb/s sustained writes while maintaining sub-1 ms latency on both reads and writes. All this while other workloads from other machines are hitting the same array.

On one array I have a mixed workload of VMware storage, in-guest iSCSI from VMs, and physical servers accessing via iSCSI. poo poo just works and works well.

bull3964 fucked around with this message at 04:57 on Aug 24, 2018
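For the curious, bull3964's numbers pencil out to the claimed ratio; a quick sketch, taking ~4 TB as a rough midpoint of the consumed range:

```python
def reduction_ratio(logical_tb, physical_tb):
    """Data-reduction ratio: logical data presented vs. flash consumed."""
    return logical_tb / physical_tb

# ~15 TB presented to the VM, roughly 4 TB of flash actually consumed
print(f"{reduction_ratio(15, 4):.2f}:1")
```

That lands in the 3-4:1 range quoted earlier in the thread for SQL volumes.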

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
In my opinion, there is a shitload more spare compute capacity on an array than IOPS. On a hybrid array, dedupe and compression are going to allow you to fit more data into cache, which is going to improve your performance so long as you are not out of CPU (and you are not on an array that has some known performance penalty for dedupe; looking at you, ZFS).

evil_bunnY
Apr 2, 2003

adorai posted:

In my opinion, there is a shitload more spare compute capacity on an array than iops.
Spoken like someone who's never seen an entry-level NetApp array ever :D

evil_bunnY fucked around with this message at 09:41 on Aug 24, 2018

Potato Salad
Oct 23, 2014

nobody cares


Six figures for Intel Atom systems

evil_bunnY
Apr 2, 2003

Potato Salad posted:

Six figures for Intel Atom systems
Core 2 Duos, but yeah. On the other hand, <1% disk failure, so I'm not complaining; we knew what we were getting.

Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

Vulture Culture posted:

A SQL DB is just a bunch of bits on disk, with a random component and a sequential component. Consider your actual workload. It will be a good fit for some and not others. If this is a thing you're even considering in the first place, I'm going to guess that this isn't an extremely performance-sensitive database.

I'm equally surprised to find out people are still using the GUI on Brocade switches

I would say our SQL DB is not 'extremely' performance sensitive. I'm of the mindset that we enable it to start, as we can always turn it off if we see issues.

And yeah, this was at the request of the Dell tech who came out to set it up. Had to install Java and everything, not great.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Pure does a pretty good job with reducing DBs, though most of the benefit comes from compression and pattern removal, not deduplication. Still, you’ll see some benefit here or there on initialized blocks that don’t yet have application data in them, or repeated instances of LOB data, like in SharePoint document repositories or something.

But the main thing is you don’t actually have to think about any of this at all. It’s turned on all the time. It’s not a dial you have to tweak and test. The system is built to perform consistently with always on deduplication. You will get sub millisecond latencies whether you get a lot of deduplication or none.

And of course if you ever start cloning those databases for testing and development then you see massive benefits in data reduction.


Maneki Neko
Oct 27, 2000

Anyone have a good resource for end of life info for Nimble/HPE? We picked up a client that has CS215 arrays, the OG Nimble rep said they don't go end of support until 2021 but HPE is now pulling out "lol end of support at the end of 2019".

I'm feeling like this is likely slimy HPE rep shenanigans, but I'm trying to verify. Our OG Nimble rep left the company (shocking news).
