YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Thanks Ants posted:

Isilon, on a casual glance, seems to be the only scale-out NAS platform that doesn't try to downplay performance expectations. Is that EMC overreaching, or is that a fair assessment?

Pure’s FlashBlade is extremely fast. There’s also stuff like Panasas that no one’s ever heard of that’s meant to be high performance. An HPC group used it at a previous job, so I guess it did alright.

But Isilon is fine. Good, even, if your main criterion is throughput, particularly on ingest. It’s definitely not general purpose storage though.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

For scale-out performance, Isilon is fine, if a bit pricey. If you have petabyte-scale needs it’d be on my shopping list. Their original engineers (I think?) left and founded Qumulo. Similar arch. It’s a startup, so ymmv.

Panasas is fine, also kinda pricey. See them in enterprise HPC usually.

It might be worthwhile to talk to IBM about Spectrum Scale; they’ve been surprisingly competitive on some projects.

DDN has some options that aren’t bad.

People are doing interesting stuff with Ceph but that’s still pretty green. RH’s support licensing is kinda high.

Nearly all of these are built around large block IO. Most parallel file systems do poorly on metadata performance.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

PCjr sidecar posted:

Nearly all of these are built around large block IO. Most parallel file systems do poorly on metadata performance.

The interesting thing about Flashblade is that it’s not terrible on small block/object random IO. You wouldn’t necessarily run a bunch of VMs off of it but you can use it to store an OLTP database and be happy with the results. That’s definitely not true with Isilon.

evil_bunnY
Apr 2, 2003

YOLOsubmarine posted:

But Isilon is fine. Good, even, if your main criterion is throughput, particularly on ingest. It’s definitely not general purpose storage though.
what makes you say that?

YOLOsubmarine posted:

You wouldn’t necessarily run a bunch of VMs off of it but you can use it to store an OLTP database and be happy with the results. That’s definitely not true with Isilon.
That's fine

Zorak of Michigan
Jun 10, 2006

YOLOsubmarine posted:

The interesting thing about Flashblade is that it’s not terrible on small block/object random IO. You wouldn’t necessarily run a bunch of VMs off of it but you can use it to store an OLTP database and be happy with the results. That’s definitely not true with Isilon.

That's actually depressing. I would have expected FlashBlade to be good for VMs, just based on my experience with Pure as a vendor. Do you know what slows it down too much for VM storage?

Potato Salad
Oct 23, 2014

nobody cares


CDW has been loving great, y'all are completely correct that it comes down to your account manager. The only times I can get poo poo cheaper from Dell are when I want something skirting supported configuration to do something a little fucky but interesting to the regional sales manager.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Zorak of Michigan posted:

That's actually depressing. I would have expected FlashBlade to be good for VMs, just based on my experience with Pure as a vendor. Do you know what slows it down too much for VM storage?

You *could* run VMs off of it, that’s just not its best use. It’s not tuned for extremely low latency the way FlashArrays are. It’s tuned for great throughput during big block sequential work and reasonable latency during random IO, and for handling very large datasets in a single namespace.

Generally, for things like general purpose virtual infrastructure, latency is the critical metric and you don’t need hundreds of TB or even petabytes in a single namespace, so FlashArray is going to be a better fit.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Also, Tintri’s new CEO quit after just three months. The local SE is telling us that they can’t get any answers from management about what to tell customers about the future of their support contracts. We’re gonna have a few real unhappy customers, including some pretty big IT companies.

Richard Noggin
Jun 6, 2005
Redneck By Default

YOLOsubmarine posted:

Also, Tintri’s new CEO quit after just three months. The local SE is telling us that they can’t get any answers from management about what to tell customers about the future of their support contracts. We’re gonna have a few real unhappy customers, including some pretty big IT companies.

They're about to be delisted on NASDAQ. They will be out of cash by the end of the month if they don't get help. Don't hold your breath.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Richard Noggin posted:

They're about to be delisted on NASDAQ. They will be out of cash by the end of the month if they don't get help. Don't hold your breath.

They’ve got no chance of getting acquired until they go through bankruptcy. The question is whether they can manage a bankruptcy, restructuring, and acquisition in a way that allows them to maintain their support organization to honor support contracts.

They aren’t getting any more capital anywhere other than acquisition so that’s really the only option. Hopefully they manage to do something to make it right for existing customers.

Violin has been a penny stock for years but they still exist somehow, so Tintri still has a shot at bare subsistence.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I totally forgot about Violin, yikes.

Richard Noggin
Jun 6, 2005
Redneck By Default

YOLOsubmarine posted:

They’ve got no chance of getting acquired until they go through bankruptcy. The question is whether they can manage a bankruptcy, restructuring, and acquisition in a way that allows them to maintain their support organization to honor support contracts.

They aren’t getting any more capital anywhere other than acquisition so that’s really the only option. Hopefully they manage to do something to make it right for existing customers.

Violin has been a penny stock for years but they still exist somehow, so Tintri still has a shot at bare subsistence.

I don't see any value for the investor. They weren't able to keep the product afloat and this has essentially poisoned the entire line. Would anyone really buy Tintri after this? I sure as hell wouldn't.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Richard Noggin posted:

I don't see any value for the investor. They weren't able to keep the product afloat and this has essentially poisoned the entire line. Would anyone really buy Tintri after this? I sure as hell wouldn't.

Why would anyone buy Violin? And yet someone did. If it’s cheap enough it’s probably worth it to scrounge for IP.

underlig
Sep 13, 2007

underlig posted:

The problem is that the CSVs were all set with sofs1 as owner, and that server shut down two days ago.
When that happened, the whole Hyper-V cluster lost contact with the SAN.

When we tried bringing the volumes up on sofs2, they connect for a few seconds and then disconnect again, stating "the resource is in use". This goes for both volumes and the witness.

We managed to get sofs01 online again, brought the volumes up and everything seemed to work for an hour until the server crashed again.

I finally, yesterday morning, managed to get the volumes to come online by first selecting "bring online" with sofs02 as owner and then setting them in maintenance mode. (????) I have absolutely no idea why this worked, or exactly WHAT maintenance mode really is, since every host has full access to the volumes now; nothing seems to be draining or anything like that.
Maintenance mode seems to be just a looser connection to the SAN where you can run chkdsk etc on the volumes.

The problems with ownership continued. I wish I had contacted Microsoft right away, since this has been a headache and major stress factor for me.

When trying to find more information about the Fibre Channel cards in the file servers, I found references to software for controlling them called OneCommand Manager. This was not installed on any of the servers; in fact I could not find any software to configure the cards.
Once I installed OneCommand I saw that the SOFS01 FC card was set to talk iSCSI and the SOFS02 FC card was set for RoCE. The only internal documentation for the cluster is:

quote:

SOFS are created through MDT from deploy01. TS for SOFS exists, just choose SOFS TS and enter the name (sofs01,sofs02).
After installation install current SPP from HP. Nics should be configured for ROCE v2-mode.

So, thinking that this was it, I set the card on sofs01 to RoCE, but that still didn't help.

The thing that fixed my problems was rebooting the controllers on the SAN, first the management controllers and then the storage controller. When the SC rebooted, the primary MC also reset/crashed; this time it wrote an event in the log files that I'll get HPE to check out tomorrow. But once it all came online again the volume lock was gone: I could set any sofs as owner, I could move ownership, and everything is now running perfectly.

I suspect the iSCSI/RoCE mode mismatch on sofs01 was because HPE changed out the motherboard and that setting is handled somewhere on the motherboard, with iSCSI as the default.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
As for Isilon not being good for general purpose:

Imagine this.
You have 15000 active connections, most of a hospital, right?
User data, appdata, data streaming constantly, video data writing all hours of the day, right?

The fact that OneFS is all file-based protection; the fact that each time you take a snapshot and replicate, it locks the filesystem for a moment, even for a split second, and you've got nearly 100 of these kicking off every 5 minutes because your dumb RPO requires it; and then a disk fails...
If you don't have this piece of poo poo loaded to the gills with cache drives for metadata acceleration, the entire cluster is not only going to gag on its own lunch but vomit all over the place as well.
No department share gives two fucks if you can do PB scale; they don't care how many thousands of SyncIQ jobs you claim to be able to choke down, they just want reliable data access.

Generic application shares? They just want the poo poo accessible.

Your finicky genomics processing, though? They give a poo poo about scaling to many petabytes of data.

Your weird Cisco video recorders? Same

Security footage? Same

200TB of highly compressed and deduplicated Commvault data? I'd pass, actually; this poo poo bag appliance doesn't support sparse files.

PMR systems that have billions of small files inside of 20TB? They may care; most poo poo would run out of inodes before you hit the allocated space, depending on queer electrons that day...

TL;DR

Isilon is finicky; buy something else unless you need a huge rear end time sink and are looking at well over a petabyte in use, OR need a single contiguous filesystem that scales to something dumb.

For everything else, there are niche cheap SANs fronted by Windows Storage Server, Cohesity, Rubrik, and NetApp.

P.s. I'm not bitter... I swear.

evil_bunnY
Apr 2, 2003

Thanks for the input regardless! More data on what to look for during acceptance testing is always good.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

PCjr sidecar posted:

It might be worthwhile to talk to IBM about Spectrum Scale; they’ve been surprisingly competitive on some projects.
After the poo poo they pulled with SONAS I'd never buy IBM NAS ever again

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

evil_bunnY posted:

Thanks for the input regardless! More data on what to look for during acceptance testing is always good.

Happy to.

A well built one will run well, a poorly built one will run like a slug on a blisteringly hot driveway lined with salt.

The difference between the two designs? How much flash you throw at it for caching, and how creative/stupid you want to get with SmartPools designs.

evil_bunnY
Apr 2, 2003

The train I’m getting on is basically an extension of a system that’s been running well for years already. I really appreciate the additional input.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

https://www.ddn.com/press-releases/ddn-storage-acquires-tintri/

Raere
Dec 13, 2007

I've always heard that it's best to keep storage arrays on their own subnet. In my lab we have multiple NASes using NFS. Is there any benefit to putting the NASes on their own subnet, and is it even possible? Also, if I want to put them on a different physical switch, wouldn't I need a router so the clients can talk to the storage? That doesn't seem beneficial, but I want to try to follow best practices.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Raere posted:

I've always heard that it's best to keep storage arrays on their own subnet. In my lab we have multiple NASes using NFS. Is there any benefit to putting the NASes on their own subnet, and is it even possible? Also, if I want to put them on a different physical switch, wouldn't I need a router so the clients can talk to the storage? That doesn't seem beneficial, but I want to try to follow best practices.
"Their own subnet" sounds wasteful. You generally want to avoid L3 routing between your storage consumers and any high-performance storage volumes, as it does add a lot of latency, and it may dramatically complicate your efforts to use jumbo frames (if those would improve your deployment). But it's not gospel. There are lots of reasons not to do it this way, especially if the network isn't a performance bottleneck or if the performance isn't really a concern in the first place.

If you have to ask, it sounds like you don't have a clear goal in mind. That's often a good signal to leave it alone.
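
If you want to put a rough number on what a routed hop actually costs before re-architecting anything, a dumb TCP connect-time probe is usually enough to see whether it's even worth worrying about. Minimal sketch below (mine, not anything official); both addresses are hypothetical, so point it at a NAS on your local segment and one that sits behind the router:

code:

# Minimal sketch (my own, not from anyone's docs): compare TCP connection setup
# time to a NAS on the local L2 segment vs. one reached through a routed hop.
# Both addresses below are hypothetical -- substitute your own. This only
# measures network round-trip, not NFS performance; 2049 is the usual NFS port.
import socket
import statistics
import time

def connect_rtt_ms(host, port=2049, samples=20):
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

if __name__ == "__main__":
    targets = [("same L2 segment", "10.0.10.50"),
               ("behind the core router", "10.0.20.50")]
    for label, host in targets:
        print(f"{label:>24}: ~{connect_rtt_ms(host):.2f} ms median connect time")

If the two numbers are basically identical, the subnet question is cosmetic and you can leave things alone, which is usually the right answer anyway.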

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

Vulture Culture posted:

"Their own subnet" sounds wasteful. You generally want to avoid L3 routing between your storage consumers and any high-performance storage volumes, as it does add a lot of latency, and it may dramatically complicate your efforts to use jumbo frames (if those would improve your deployment). But it's not gospel. There are lots of reasons not to do it this way, especially if the network isn't a performance bottleneck or if the performance isn't really a concern in the first place.

If you have to ask, it sounds like you don't have a clear goal in mind. That's often a good signal to leave it alone.

I agree with this until you start getting into the horrifically large enterprise space; at that point your latency is mitigated by overkill equipment and the breakneck speed of processors.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

kzersatz posted:

I agree with this until you start getting into the horrifically large enterprise space; at that point your latency is mitigated by overkill equipment and the breakneck speed of processors.
I worked in academia supporting researchers and clinicians for a number of years; this is definitely not the circumstance of someone beginning an out-of-the-wheelhouse NAS question with "in my lab". :)

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice

Vulture Culture posted:

I worked in academia supporting researchers and clinicians for a number of years; this is definitely not the circumstance of someone beginning an out-of-the-wheelhouse NAS question with "in my lab". :)

Sure, not saying it is; your mileage may vary highly depending on situation and equipment.

I'm in medical clinical/research currently, and can say I don't experience latency generated by VLAN segmentation; it's more commonly due to piss-poor applications.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

kzersatz posted:

Sure, not saying it is; your mileage may vary highly depending on situation and equipment.

I'm in medical clinical/research currently, and can say I don't experience latency generated by VLAN segmentation; it's more commonly due to piss-poor applications.

Adding unnecessary latency to your storage path is always a good thing to avoid, even if it’s only very recently that NVMe has pushed access latency to the point where interconnect latency isn’t a rounding error. Plus hardware is less important than network design. It’s easy to end up with oversubscribed router ports if you’re hairpinning a bunch of traffic from your ToR switches to a core router and back and expecting to run high performance storage over that.

And, of course, a layer 3 boundary often implies that firewall inspection is also happening, which adds further latency.

If you’re trying to build a high performance storage network it’s nice to not have to worry about routing table exhaustion or TCAM exhaustion or whatever causing issues. Outside of those with pitifully small port buffers, modern switches are generally very consistent performers in a way that firewalls and even routers may not be.
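
To put the hairpinning point in concrete terms, the oversubscription math is just downstream port capacity versus uplink capacity. Hypothetical port counts below, not anyone's real network:

code:

# Quick oversubscription math (hypothetical port counts, not anyone's real gear):
# when storage traffic hairpins through the core, the uplinks set the ceiling,
# not the ToR switch fabric.
tor_ports = 48            # 10GbE server/storage-facing ports per ToR switch
port_speed_gbps = 10
uplinks = 4               # 10GbE uplinks from the ToR to the core
uplink_speed_gbps = 10

downstream_gbps = tor_ports * port_speed_gbps
upstream_gbps = uplinks * uplink_speed_gbps

print(f"Downstream capacity: {downstream_gbps} Gbps")
print(f"Uplink capacity:     {upstream_gbps} Gbps")
print(f"Oversubscription:    {downstream_gbps / upstream_gbps:.0f}:1")

That works out to 12:1 in this made-up example, which is fine for general traffic and miserable for storage.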

H110Hawk
Dec 28, 2006

YOLOsubmarine posted:

Adding unnecessary latency to your storage path is always a good thing to avoid, even if it’s only very recently that NVMe has pushed access latency to the point where interconnect latency isn’t a rounding error. Plus hardware is less important than network design. It’s easy to end up with oversubscribed router ports if you’re hairpinning a bunch of traffic from your ToR switches to a core router and back and expecting to run high performance storage over that.

And, of course, a layer 3 boundary often implies that firewall inspection is also happening, which adds further latency.

If you’re trying to build a high performance storage network it’s nice to not have to worry about routing table exhaustion or TCAM exhaustion or whatever causing issues. Outside of those with pitifully small port buffers, modern switches are generally very consistent performers in a way that firewalls and even routers may not be.

These days everything is a layer 3 boundary, which is making stateful inspection less of a default on layer 3 interconnections on the "trust" side. (You can poorly set up a network in any configuration.) Pitifully small port buffers on modern switches are going to be a bigger issue in general. You're not going through a "router" in the traditional sense just because it's a layer 3 interconnection. In Juniper land you're highly likely to layer 3 interconnect all of your QFX devices, but only when you start getting into full tables or inter-site (etc.) connections would you hit an MX router.

NVMe is a game changer no matter where you put it, like going from 15k SAS to normal SSDs 7-10 years ago.

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
I agree that you don't want to let egregious layer 3 routing occur across the environment, especially in high-workload environments.
But your general purpose NAS serving up profiles, department shares, etc. won't notice a damned bit of difference.

Your highly transactional workloads, a la VMware, genomics, Oracle on NFS, etc., will suffer; I agree, not going to debate that.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
we have multiple unrouted layer 2 subnets for storage. No reason to introduce any extra latency when you can avoid it.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

H110Hawk posted:

These days everything is a layer 3 boundary, which is making stateful inspection less of a default on layer 3 interconnections on the "trust" side. (You can poorly set up a network in any configuration.) Pitifully small port buffers on modern switches are going to be a bigger issue in general. You're not going through a "router" in the traditional sense just because it's a layer 3 interconnection. In Juniper land you're highly likely to layer 3 interconnect all of your QFX devices, but only when you start getting into full tables or inter-site (etc.) connections would you hit an MX router.

NVMe is a game changer no matter where you put it, like going from 15k SAS to normal SSDs 7-10 years ago.

There are still a *lot* of places that hairpin traffic north-south to a routing core or for firewall inspection (or both on the same device). Like yea, QFabric or FabricPath or ACI or just plain ole SVIs and layer 3 switches mean you see layer 3 boundaries in more places than you might have previously, but there are still a lot of networks out there that are very traditional in design. And as a storage dude you don’t generally have any control over that, so often simply sticking to layer 2, or even having a dedicated physically separate storage network, is the safer bet if your workloads are very latency sensitive.

And yes, NVMe will be a big deal no matter where it sits, but it’s going to force people to think much more carefully about storage networking than they traditionally have, because up until NVMe just about any functional 10GbE network (i.e. not exhausting port buffers or constantly suffering spanning tree events) was good enough that the added latency of even an excessively long network path was a few orders of magnitude lower than the storage response time.
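
For a rough sense of the orders of magnitude involved (these are ballpark figures I'm assuming, not measurements):

code:

# Ballpark latency budget (assumed round numbers, not benchmarks): the same
# ~50 us of extra network path is noise next to a spinning disk but a real
# tax on NVMe.
media_latency_us = {
    "7.2k SATA HDD": 8000,
    "15k SAS HDD": 5000,
    "SATA/SAS SSD": 200,
    "NVMe flash": 80,
}
extra_path_us = 50  # hairpin through a core router/firewall, order-of-magnitude guess

for media, media_us in media_latency_us.items():
    added_pct = extra_path_us / media_us * 100
    print(f"{media:>14}: {media_us:>5} us media latency, "
          f"extra path adds ~{added_pct:.1f}% per IO")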

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

NVMe-oF is pretty cool, and there are some fun things being built on top of it.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

adorai posted:

we have multiple unrouted layer 2 subnets for storage. No reason to introduce any extra latency when you can avoid it.

Same here.

Potato Salad
Oct 23, 2014

nobody cares


Trip report: RDMA is :gizz:

Potato Salad
Oct 23, 2014

nobody cares


adorai posted:

we have multiple unrouted layer 2 subnets for storage. No reason to introduce any extra latency when you can avoid it.

BonoMan
Feb 20, 2002

Jade Ear Joe
Hoping this might be the right place to ask this question.

I'm currently doing some research on upgrading our storage at work for a subset of workers.

Now, my work doesn't and won't employ a regular IT or sysadmin person, so that's why me, a 3D/VFX guy, is here asking this question (it's an uphill battle I've fought for years).

We have about 8 designers at an ad agency that, at the moment, are accessing an old NAS storage system that's proved a bit slow for them. So we'd like to upgrade it.

Currently our video team is accessing a nice 10GbE system and we all work from that 100TB of centralized storage. It's fast enough for us to do 4K video work off of. Love it.


The designers want the same ability to work off their own central storage like we do. At the moment, for large files (like their 1-gig Photoshop files or large InDesign projects), they usually just copy them over, work on them, and move them back. They claim they get bogged down or crash when they try to work off the network.


So my question kind of has two parts. Is the "style" of file access different for these two departments? I feel like the video projects read from the raw files "on the fly," whereas for larger print design projects the computer usually tries to load the whole thing into memory, and that may be causing the slowness they experience. That is, it doesn't really read Photoshop files on the fly.

And second, would an upgraded NAS with trunked 1-gig ports help alleviate this problem? Right now they're just connected to an old NAS through a single 1-gig port.


Sorry if that's kind of a vague spaghetti question! Right now I'm just looking at upgrading to this (https://www.qnap.com/en-us/product/ts-873u-rp) and just trunking the ports and calling it a day.

redeyes
Sep 14, 2002

by Fluffdaddy
I think you are on the right track with that QNAP product with the 10GbE ports. 2x1Gb ports would yield double the speed at best. You might be running into a situation where your older NAS can't saturate gigabit, but it's hard to say. I don't know offhand how Photoshop handles caching huge files, but do the workstations involved have enough RAM to load these 1GB files completely?
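
Before spending money, it's worth measuring what the old NAS actually delivers to a single client. A minimal sketch, assuming the share is already mounted on one of the iMacs; the path below is made up, so point it at one of the designers' real 1 GB PSDs:

code:

# Rough sketch: time a big sequential read off the mounted share to see what the
# current NAS actually delivers to one client. The path is hypothetical. A single
# healthy gigabit link tops out around 110 MB/s real-world; if you see far less,
# the bottleneck is the NAS (or the share protocol), not the wire. Note that a
# repeat run may be served from the local cache, so use a fresh file each time.
import time

TEST_FILE = "/Volumes/design_share/big_project.psd"  # hypothetical mount + file
CHUNK = 8 * 1024 * 1024  # 8 MiB reads

total_bytes = 0
start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        total_bytes += len(chunk)
elapsed = time.perf_counter() - start

print(f"Read {total_bytes / 1e6:.0f} MB in {elapsed:.1f} s "
      f"({total_bytes / 1e6 / elapsed:.0f} MB/s)")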

BonoMan
Feb 20, 2002

Jade Ear Joe

redeyes posted:

I think you are on the right track with that QNAP product with the 10GbE ports. 2x1Gb ports would yield double the speed at best. You might be running into a situation where your older NAS can't saturate gigabit, but it's hard to say. I don't know offhand how Photoshop handles caching huge files, but do the workstations involved have enough RAM to load these 1GB files completely?

They should. They're all fairly new iMacs and I've upgraded the RAM in all of them to 24 gigs or better.

That said, we won't be using 10GbE like we do for the video side of things. The only way to do that with the iMacs, as far as I can tell, is to get expensive adapters and then run new cable for each one. Management would just lol in my face about that one!

Internet Explorer
Jun 1, 2005

I have no idea if this knowledge is still current or not, but in general Adobe products seem to have a weird aversion to network storage. If they are saying they get bogged down or crash when working "off the network" I would first work with Adobe to see if what they are doing is supported on network shares before dropping a bunch of money.

Thanks Ants
May 21, 2004

#essereFerrari


Adobe will not support working from network shares under any circumstances. Yes it's archaic and nobody actually works that way, but the only Adobe-approved workflow is to copy the file locally, work on it, and then move it back onto the share.

BonoMan
Feb 20, 2002

Jade Ear Joe
Yeesh. I'm guessing y'all are talking about non-video products, right? Most companies that use Adobe video software (Premiere and After Effects) work pretty exclusively from network shares (in some form or another).
