ihafarm
Aug 12, 2004

Kaddish posted:

Um. Why you have 40+ Pure LUNs is the pertinent question. I can't fathom a reason for this.

Like, you're trying to fix something that seems to be fundamentally broken, and fixing the fundamentals will help with whatever you're trying to do going forward.

Edit - I see, it's the LUN naming convention that's the problem, which gives me a headache just thinking about it.

Pure LUNs are just storage buckets, and unless you need specific compression/dedupe statistics per Pure LUN, they don't mean anything. And even if you need those stats, just create a 'test' LUN to gauge compression/dedupe potential.

Or, there's always vvols (lol)

After 10+ years running a couple of generations of EMC storage (Celerra/VNX gen 1), I opted to go with Nimble for my most recent refresh. I'm still migrating over, but I've been converting some non-user-facing systems to vVols. What gotchas should I be worried about? I already knew that Veeam doesn't support direct backup from vVol datastores (at least with Nimble), but that's not a concern in my environment. It seems like any enterprise storage advancement is met with skepticism, but vVols have been supported since what, 2015?
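As an aside on the quoted point about per-LUN data-reduction stats: those numbers can be pulled per volume from the array rather than encoded into a LUN naming scheme. A minimal sketch, assuming the legacy purestorage Python REST client; the list_volumes(space=True) call and the returned field names are assumptions about that client, not details from the thread:

```python
# Hedged sketch: list per-volume data reduction on a Pure FlashArray.
# Assumes the legacy 'purestorage' REST 1.x client; method and field names
# may differ on your Purity/API version, so treat this as illustrative.
import purestorage

ARRAY = "flasharray.example.com"     # placeholder management address
API_TOKEN = "xxxx-xxxx"              # placeholder API token

array = purestorage.FlashArray(ARRAY, api_token=API_TOKEN)

# Ask for space metrics alongside each volume.
for vol in array.list_volumes(space=True):
    # 'data_reduction' is the per-volume dedupe+compression ratio.
    print(f"{vol['name']}: data reduction ~{vol.get('data_reduction', 'n/a')}x, "
          f"provisioned {vol.get('size', 'n/a')} bytes")
```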

Kaddish
Feb 7, 2002

ihafarm posted:

After 10+ years running a couple of generations of EMC storage (Celerra/VNX gen 1), I opted to go with Nimble for my most recent refresh. I'm still migrating over, but I've been converting some non-user-facing systems to vVols. What gotchas should I be worried about? I already knew that Veeam doesn't support direct backup from vVol datastores (at least with Nimble), but that's not a concern in my environment. It seems like any enterprise storage advancement is met with skepticism, but vVols have been supported since what, 2015?

I actually haven't used vVols yet because the limitations and the implementation seemed... severely lacking to me. vVol support wasn't even added to Storwize until later in the 7.x code stream, and converting over seemed like a pain. Maybe the APIs/implementation/etc. have gotten better over the years, but it seemed like they were attempting to solve a problem I didn't need solved. In general, I reduce complexity in the environment as much as possible.

Kaddish fucked around with this message at 17:36 on Mar 5, 2021

Kaddish
Feb 7, 2002
I used the new-to-me Storwize features to set up a new FlashSystem 5100 (volumes, pools, mapping, etc.) for use behind an SVC, and it worked surprisingly well. I had no issues with any of it.

evil_bunnY
Apr 2, 2003

If I want half a TB of HA NFS *delivered quickly*, who should I be talking to?

in a well actually
Jan 26, 2011

evil_bunnY posted:

If I want half a TB of HA NFS *delivered quickly*, who should I be talking to?

Netapp?

evil_bunnY
Apr 2, 2003

We're already talking to them and Dell (Isilon). I'd like to know if I'm missing non-obvious players with EMEA presence.

in a well actually
Jan 26, 2011

evil_bunnY posted:

We're already talking to them and Dell (Isilon). I'd like to know if I'm missing non-obvious players with EMEA presence.

Thread frequently mentions Pure Storage? I really like Vast but they’re probably out of scope for 500G.

H110Hawk
Dec 28, 2006
How fast is fast? How many clients?

Either way, call Pure or Netapp.

Thanks Ants
May 21, 2004

Storwize Unified will do NFS, but probably can't tick the 'quickly' box unless someone has demo kit ready to ship.

evil_bunnY
Apr 2, 2003

PCjr sidecar posted:

Thread frequently mentions Pure Storage? I really like Vast but they’re probably out of scope for 500G.
We don't want/need flash.

H110Hawk posted:

How fast is fast? How many clients?
Yesterday ideally, but 30 days is OK. Very few users, but medium loads (about 1k IOPS base, ~6k peak).

H110Hawk
Dec 28, 2006
Oh, quickly as in shipping? lol, good luck. Quickly as in high IOPS/low latency, yeah.

evil_bunnY posted:

We don't want/need flash.

Yesterday ideally, but 30 days is OK. Very few users, but medium loads (about 1k IOPS base, ~6k peak).

Call Zerowait; I bet they can get it done "today." What's your budget? It's a REALLY small amount of storage, honestly. How "HA" does it need to be? Like, why not just a Synology?

evil_bunnY
Apr 2, 2003

H110Hawk posted:

Call Zerowait; I bet they can get it done "today." What's your budget? It's a REALLY small amount of storage, honestly. How "HA" does it need to be? Like, why not just a Synology?
It's a small amount of storage, but it's for core systems. This was supposed to be a Ceph-backed byzantine piece of poo poo, and it's taken this long for SuSE to show their whole rear end and for me to convince them to hop off that train.

Twerk from Home
Jan 17, 2009

evil_bunnY posted:

It's a small amount of storage, but it's for core systems. This was supposed to be a Ceph-backed byzantine piece of poo poo, and it's taken this long for SuSE to show their whole rear end and for me to convince them to hop off that train.

You were doing Ceph on SuSE?

Could you share a bit about your Ceph experience? It has a broadly good reputation at this point from what I've heard, with the caveat that you're probably paying Red Hat more than you really want to for support.

H110Hawk
Dec 28, 2006

evil_bunnY posted:

It's a small amount of storage, but it's for core systems. This was supposed to be a Ceph-backed byzantine piece of poo poo, and it's taken this long for SuSE to show their whole rear end and for me to convince them to hop off that train.

How much downtime is acceptable? How often?

Crunchy Black
Oct 24, 2017

You're not getting Pure deployed that small, that fast, for a not-extravagant price without an existing contract, unless you have a GREAT relationship with a VAR that already deals with Pure.

It will still be very expensive.

Crunchy Black fucked around with this message at 19:34 on Apr 6, 2021

evil_bunnY
Apr 2, 2003

Twerk from Home posted:

You were doing Ceph on SuSE?
ya

Twerk from Home posted:

Could you share a bit about your Ceph experience? It has a broadly good reputation at this point from what I've heard, with the caveat that you're probably paying Red Hat more than you really want to for support.
Performance is a piece of poo poo if you're not on RBD or CephFS. NFS HA (Pacemaker on top of Ganesha) isn't actually HA (5 min unresponsive). VMware over iSCSI is supposedly supported, but SVM performance is *atrocious* (~50 MB/s when an array *two hops over* will do 10GbE wire speed all day from the same cluster, and the Ceph cluster can write at ~10GbE internally).

SuSE support is also a disaster with slow turnaround and missed deadlines aplenty. Everyone's done with their poo poo.

H110Hawk posted:

How much downtime is acceptable? How often?
Planned, a couple of times a year. Unplanned, no one wants to think about it.

Crunchy Black posted:

You're not getting Pure deployed that small, that fast, for a not-extravagant price without an existing contract
We don't need all-flash. IME our needs would be well served by a couple of mid-level NetApp heads on top of a high-double-digit count of disks, but these midlevel motherfuckers just had to have a blue steel boner for something fancy and OSS.

evil_bunnY fucked around with this message at 20:02 on Apr 6, 2021
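To put rough numbers on the sizing above (~1k IOPS base, ~6k peak, no flash), the claim of a high-double-digit disk count checks out on the back of an envelope. A quick sketch; the per-drive IOPS figures are rule-of-thumb assumptions, not numbers from the thread:

```python
# Back-of-envelope spindle math for the stated workload (~6k IOPS peak) on
# spinning disk. Per-drive random IOPS figures below are rule-of-thumb
# assumptions; controller cache and write coalescing shift the result in practice.
PEAK_IOPS = 6000

DRIVE_IOPS = {
    "7.2k RPM NL-SAS": 75,   # assumed random IOPS per drive
    "10k RPM SAS": 140,      # assumed random IOPS per drive
}

for drive, per_disk in DRIVE_IOPS.items():
    spindles = PEAK_IOPS / per_disk
    print(f"{drive}: ~{spindles:.0f} spindles to absorb {PEAK_IOPS} IOPS raw")

# Roughly 80 NL-SAS or ~43 10k spindles before any cache benefit -- i.e. the
# 'high double digit' disk count behind a pair of mid-range heads described above.
```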

H110Hawk
Dec 28, 2006
Yeah, just buy a Synology. I'm not kidding. You don't have any useful requirements. Put it on a UPS and back it up regularly.

Thanks Ants
May 21, 2004

Which bit of EMEA are you in?

SlowBloke
Aug 14, 2017

H110Hawk posted:

Yeah, just buy a Synology. I'm not kidding. You don't have any useful requirements. Put it on a UPS and back it up regularly.

This, plus QNAP sells HA twin-controller setups if zero downtime is a hard requirement. Also, if you can shift your protocol requirement to iSCSI/FC (there is vFilO for NFS, but I've never used it), you could build a DataCore storage pool with just a couple of generic servers.

evil_bunnY
Apr 2, 2003

Thanks Ants posted:

Which bit of EMEA are you in?
Northern EU

The QNAP/Syno stuff fits the use case, but it's an absolute taboo politically. These people are Serious Business.

in a well actually
Jan 26, 2011

EFS :devil:

Thanks Ants
May 21, 2004

evil_bunnY posted:

Northern EU

The QNAP/Syno stuff fits the use case, but it's an absolute taboo politically. These people are Serious Business.

It's country-specific, but Synology regional offices usually have a selection of enterprise kit in stock as part of their try-and-buy programme. Here's the link if you're in France:

https://event.synology.com/fr-fr/FR_Test_Buy/FR

Though if Synology has been ruled out it doesn't really matter.

Kaddish
Feb 7, 2002

Thanks Ants posted:

Storwize Unified will do NFS, but probably can't tick the 'quickly' box unless someone has demo kit ready to ship.

IBM has pulled marketing for Unified and seems to be exiting that sector of the file business. I have three Unified arrays I’ll be looking to replace soonish.

Thanks Ants
May 21, 2004

ibm.txt

evil_bunnY
Apr 2, 2003

Kaddish posted:

IBM has pulled marketing for Unified and seems to be exiting that sector of the file business. I have three Unified arrays I’ll be looking to replace soonish.
they probably want to sell ESS instead.

Bob Morales
Aug 18, 2006

Ceph war story

https://michael-prokop.at/blog/2021/04/09/a-ceph-war-story/

Pile Of Garbage
May 28, 2007

See, that poo poo is why you pay for a hardware storage appliance from a reputable vendor that provides support.

Kaddish
Feb 7, 2002
Kyndryl is the worst name I've ever heard for a spin-off company. Like, way beyond Qwikster levels of bad.

Yaoi Gagarin
Feb 20, 2014

Is it just me or is Ceph overengineered for a setup with 36 disks?

Bob Morales
Aug 18, 2006

VostokProgram posted:

Is it just me or is Ceph overengineered for a setup with 36 disks?

Probably? I've only actually used it in a homelab and in a Dell demo where they had 12 disks (10 TB SATA) per node (and then another setup with 24 NVMe drives per node :q:), but only 4 nodes for the first and 2 nodes for the second.
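For a rough sense of what that first demo config yields as usable space: under Ceph's usual 3x replication you keep about a third of raw capacity, minus near-full headroom. A quick sketch using the 4-node, 12 x 10 TB figures above; the replication factor and the 85% fill ceiling are common defaults/assumptions, not details from the post:

```python
# Rough usable-capacity math for the 4-node Dell demo above (12 x 10 TB SATA per node).
# Replication size 3 and an ~85% fill ceiling are common Ceph defaults/assumptions,
# not figures taken from the post.
NODES = 4
DISKS_PER_NODE = 12
DISK_TB = 10
REPLICA_SIZE = 3      # assumed pool replication factor
FILL_CEILING = 0.85   # assumed usable fraction before nearfull warnings

raw_tb = NODES * DISKS_PER_NODE * DISK_TB
usable_tb = raw_tb / REPLICA_SIZE * FILL_CEILING

print(f"raw: {raw_tb} TB, usable at size={REPLICA_SIZE}: ~{usable_tb:.0f} TB")
# raw: 480 TB, usable: ~136 TB -- a lot of hardware relative to the capacity you
# actually keep, which is part of why Ceph at small scale can feel like overkill.
```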

in a well actually
Jan 26, 2011

There's no right number of disks for Ceph. You either don't have enough disks or hosts, or you have way too many.

Kaddish
Feb 7, 2002
They didn't mention what industry their customer is in; I assume the author is a managed service provider of some sort. There is a reason we use proven technologies on solid frameworks for critical production environments.

Kaddish
Feb 7, 2002
I feel like storage is still the Wild West of IT, and poo poo like Proxmox, Ceph, etc. is cool and fun until it isn't.

devmd01
Mar 7, 2006

There were other reasons, but many years ago I straight up told a company why I refused their job offer without even seeing numbers: they were running a used CX4 without any kind of support contract and were buying parts off of eBay. gently caress. No.

Kaddish
Feb 7, 2002

devmd01 posted:

There were other reasons, but many years ago I straight up told a company why I refused their job offer without even seeing numbers: they were running a used CX4 without any kind of support contract and were buying parts off of eBay. gently caress. No.

I still support a CX4 at one of our small sites. We have a support contract though, obviously.

Edit - and it's scheduled to be replaced, but I'm only one person, poo poo.

Kaddish fucked around with this message at 21:21 on Apr 15, 2021

Yaoi Gagarin
Feb 20, 2014

Kaddish posted:

I feel like storage is still the Wild West of IT, and poo poo like Proxmox, Ceph, etc. is cool and fun until it isn't.

If they were dead set on using Proxmox, they could have just dumped those disks into a ZFS pool and called it a day. It's really weird to be using a clustered filesystem when your disks would easily fit in a single shelf, IMO.
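To make that alternative concrete: 36 disks in one box map naturally onto a few raidz2 vdevs in a single pool. A minimal sketch that only assembles the zpool create command; the device paths, vdev width, and pool name are placeholders for illustration:

```python
# Sketch: lay out 36 local disks as three 12-wide raidz2 vdevs in one ZFS pool
# instead of a clustered storage system. Device paths and the pool name are
# placeholders; swap in real /dev/disk/by-id entries before running anything.
disks = [f"/dev/disk/by-id/ata-EXAMPLE-{i:02d}" for i in range(36)]

VDEV_WIDTH = 12  # 12-disk raidz2 vdevs: two parity disks per vdev
vdevs = [disks[i:i + VDEV_WIDTH] for i in range(0, len(disks), VDEV_WIDTH)]

cmd = ["zpool", "create", "tank"]
for vdev in vdevs:
    cmd += ["raidz2", *vdev]

# Print the command rather than executing it; hand it to a shell (or subprocess)
# once the device list reflects real hardware.
print(" ".join(cmd))
```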

Vulture Culture
Jul 14, 2003

Pile Of Garbage posted:

See, that poo poo is why you pay for a hardware storage appliance from a reputable vendor that provides support.
I've dealt with way worse problems, and had way higher ownership in them, with IBM/Hitachi/DDN enterprise storage than anything that's going on in this post.

There was a stretch of nearly half a year where IBM shipped firmware on their DS4000/DS5000 LSI SANs that would silently forget to replicate anything past the first 2 TB of a LUN; that was fun to find during a DR test.

gently caress, don't even get me started on SONAS/GPFS.


VostokProgram posted:

If they were dead set on using Proxmox, they could have just dumped those disks into a ZFS pool and called it a day. It's really weird to be using a clustered filesystem when your disks would easily fit in a single shelf, IMO.
It feels premature/overengineered, but it's hard to say without knowing what the max size was that they envisioned this setup scaling to. It's weird buying a server with a pile of empty drive bays in it if you have no plans for them, right?

Vulture Culture fucked around with this message at 18:14 on Apr 17, 2021
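That 2 TB replication bug is also a good argument for verifying replicas yourself during DR tests rather than trusting the array's 'in sync' status. A hypothetical spot-check sketch; the device paths, offsets, and chunk size are placeholders, and the pair should be quiesced or snapshotted before comparing:

```python
# Hypothetical DR-test helper: hash a few ranges on a source and replica block
# device, deliberately including offsets past the ~2 TB mark where the firmware
# bug described above silently stopped replicating. Paths/offsets are placeholders.
import hashlib
import os

CHUNK = 16 * 1024 * 1024  # 16 MiB per sample
TIB = 1024 ** 4
OFFSETS = [0, 1 * TIB, 2 * TIB - CHUNK, 2 * TIB + CHUNK, 3 * TIB]  # straddle 2 TiB

def sample_hash(path: str, offset: int, length: int = CHUNK) -> str:
    """SHA-256 of `length` bytes starting at `offset` in the given block device."""
    digest = hashlib.sha256()
    fd = os.open(path, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        remaining = length
        while remaining > 0:
            buf = os.read(fd, min(remaining, 1 << 20))
            if not buf:
                break  # ran off the end of the device
            digest.update(buf)
            remaining -= len(buf)
        return digest.hexdigest()
    finally:
        os.close(fd)

def compare(source: str, replica: str) -> None:
    for off in OFFSETS:
        status = "OK" if sample_hash(source, off) == sample_hash(replica, off) else "MISMATCH"
        print(f"offset {off / TIB:.2f} TiB: {status}")

compare("/dev/mapper/source-lun", "/dev/mapper/replica-lun")  # placeholder paths
```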

Pile Of Garbage
May 28, 2007

Vulture Culture posted:

I've dealt with way worse problems, and had way higher ownership in them, with IBM/Hitachi/DDN enterprise storage than anything that's going on in this post.

There was a stretch of nearly half a year where IBM shipped firmware on their DS4000/DS5000 LSI SANs that would silently forget to replicate anything past the first 2 TB of a LUN; that was fun to find during a DR test.

gently caress, don't even get me started on SONAS/GPFS.

Yeah, big whoop, so what? I've seen things you'd never believe as well. It's an indisputable fact that all software and hardware is trash. That's why you, or rather your employer/customer, pays for enterprise kit with support: so that the liability is shifted up to the vendor. Imagine where you'd be if you'd had no vendor support in the incidents you described.

Vulture Culture
Jul 14, 2003

Pile Of Garbage posted:

Yeah, big whoop, so what? I've seen things you'd never believe as well. It's an indisputable fact that all software and hardware is trash. That's why you, or rather your employer/customer, pays for enterprise kit with support: so that the liability is shifted up to the vendor. Imagine where you'd be if you'd had no vendor support in the incidents you described.
Maybe slightly worse off? We were the drivers of all those resolutions. My team and I spotted the SONAS regression by ingesting 1 TB of time-series metrics on NFS mounts from a compute cluster until we found a correlation between a running job and nfslock latency, then chased it with the upstream developers directly. We found the 2 TB LUN regression doing a DR test. I once reported a different storage bug to VMware where I had the specific patch release that introduced the bug, a description of the precise area of the code where the bug lived, and an explanation of what the logic error was, and there was no fix made available for eight months.

Vendor support is amazing for dealing with hardware logistics (I'm grey enough to remember trying to source hard drives after the floods wrecked all the HDD manufacturing in Thailand), and there's a lot of value in smaller sets of certified releases. But it's the farthest thing from a panacea. The vendor will still introduce software bugs they can't help you with. There will still be times when you have to do the legwork and find the fix yourself. With a vendor there might be relationships and money on the table, but here's the catch: you're always losing more money from the thing being down than what you paid for it, or you wouldn't have spent the money. The pressure will always be higher on you than on the company you bought the kit from.

H110Hawk
Dec 28, 2006
The support contract gets you someone showing up with any part of the system within some "guaranteed" number of hours after they agree you need that part. Anything else is luck. Same with hardware network devices, load balancers, basically anything "enterprise".
