Moey
Oct 22, 2010

I LIKE TO MOVE IT

skipdogg posted:

Not to mention having to deal with Oracle. Nimble will probably be very responsive if you have issues. Oracle's bastard hardware division... probably a crap shoot.

Working with their support has been enjoyable. They're also very proactive when something is wrong, even if we've been ignoring it.


Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

El_Matarife posted:

HORRIBLE VNX2 bug ETA 175619 https://support.emc.com/docu50194_V...e=en_US
SPA and SPB panic within minutes of each other, and their associated LUNs and DMs go offline. This problem occurs every 90-99 days in the following systems: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600. This problem occurs in a VNX8000 system every 80 days.

Yeah... This happened where I work. It ruined New Year's Eve and Day.

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

Goon Matchmaker posted:

Yeah... This happened where I work. It ruined New Year's Eve and Day.

Yeah, I had to work New Year's Eve because of it. I felt bad for the RCM guys at EMC because they were slammed and catching a LOT of poo poo that wasn't their fault, just EMC's fault as a whole.

JockstrapManthrust
Apr 30, 2013

Maneki Neko posted:

I don't think we've seen anything that I would chalk up to cluster mode directly, but we've also had tons of problems (hopefully fixed in the 7.2 version we just upgraded to), so performance hasn't been a huge focus for us lately.

I was hoping that with 2x the hardware resources available to the vserver (compared to running on one controller in 7-mode), it would make good use of them rather than just using them to provide node-level HA.

Ah well, it took them the longest time to properly multithread 7-mode, so who knows, maybe some time this decade.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

JockstrapManthrust posted:

I was hoping that with 2x the hardware resources available to the vserver (compared to running on one controller in 7-mode), it would make good use of them rather than just using them to provide node-level HA.

Ah well, it took them the longest time to properly multithread 7-mode, so who knows, maybe some time this decade.

On a per-node level, cDOT performs marginally worse than 7-mode due to the additional overhead of maintaining the various ring databases and an additional level of indirection in the storage layer.

It's around a 10% decrease in maximum performance if you're doing indirect access, and less if you're doing direct access.

A cDOT cluster is much more similar to a VMware cluster than to an Isilon or EqualLogic. The vserver can use resources on any node, but volume performance is still limited by what is available on a single node, so single-filesystem performance will not benefit from additional nodes. The real benefits of cDOT are non-disruptive operations and a single namespace.

There is a relatively new feature called Infinite Volumes that will stripe volumes across multiple nodes, but it isn't fully baked yet. I suspect it will eventually become a scale-out performance option and perhaps even the default volume type.

Wicaeed
Feb 8, 2005
I've been messing around with OpenFiler as a replacement for the lovely proprietary software we currently use, called Open-E. I got a chance to reinstall one of these systems about 8 months ago with OpenFiler, and so far it's been running like a champ (no crashes or system hiccups).

My company has 8 or so 24-disk Supermicro chassis w/ 24GB RAM, a decent CPU, and a kind of lovely RAID controller.

What options would I have for software that would let me create a cluster out of these systems and let me tier them (some have 7.2k RPM 1TB disks, others have 15K RPM 400GB SAS drives)?

evol262
Nov 30, 2010
#!/usr/bin/perl

Wicaeed posted:

What options would I have for software that would let me create a cluster out of these systems and let me tier them (some have 7.2k RPM 1TB disks, others have 15K RPM 400GB SAS drives)?

Not using OpenFiler, for starters; it's practically a dead project. You can hack this together with FreeNAS, and Nexenta sort of does it, but LeftHand (not free) is your best bet without in-house expertise.

If you have expertise, TSM, SAM, HP, and Storage Spaces on 2k12r2 (never used the last) are your best bets for tiering. If you're OK with manually tiering it, Gluster/Lustre or a hacked-up ZFS setup might work for you.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Wicaeed posted:

What options would I have for software that would let me create a cluster out of these systems and let me tier them (some have 7.2k RPM 1TB disks, others have 15K RPM 400GB SAS drives)?
If I were pricing new storage from a vendor, I would buy something that came with HA out of the box.

If I had a shitload of storage servers, I wouldn't bother. I would burn a bunch of CDs with SmartOS on them, boot my storage servers, and build storage pools on each one. They would be standalone, and I would just create jobs to replicate the storage amongst them. If you really need HA, you are probably going to pay that premium. Answer these questions: how often do you see catastrophic failure that was completely unanticipated, and how much downtime can you tolerate on this storage? If the answers are rarely and a few hours, then congrats, you don't need HA, you just need nearline backups which SmartOS can provide.
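
To make the "jobs to replicate the storage amongst them" part concrete, here is a minimal sketch of a snapshot-and-ship job, assuming ZFS datasets (as on SmartOS) and SSH trust between the boxes. The dataset name, peer hostname, and schedule are hypothetical placeholders, not a tested recipe:

code:
#!/usr/bin/env python3
"""Hypothetical ZFS replication job: snapshot a dataset, then send it
incrementally to a standalone peer over SSH. All names are placeholders."""
import subprocess
from datetime import datetime, timezone

DATASET = "zones/fileshare"     # hypothetical dataset to protect
TARGET = "root@storage02"       # hypothetical standalone peer running ZFS

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

def latest_snapshot():
    out = subprocess.run(
        f"zfs list -H -d 1 -t snapshot -o name -s creation {DATASET}",
        shell=True, check=True, capture_output=True, text=True).stdout.split()
    return out[-1] if out else None

prev = latest_snapshot()
snap = f"{DATASET}@repl-{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
sh(f"zfs snapshot {snap}")

if prev:
    # Incremental send: ship only the blocks changed since the last snapshot.
    sh(f"zfs send -i {prev} {snap} | ssh {TARGET} zfs receive -F {DATASET}")
else:
    # First run: full send to seed the remote copy.
    sh(f"zfs send {snap} | ssh {TARGET} zfs receive -F {DATASET}")
Dropped into cron on each box (plus some snapshot pruning), that covers the nearline-backup case without paying any HA premium.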

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
What would be the best solution for clustered storage using regular hardware?

For instance, right now I manage a bunch of Dell 2950s acting as NASes and shuffle stuff around to make sure they never fill up.
Is there some way I can create a system where I can just keep adding more 2950s to one large storage pool?

MrMoo
Sep 14, 2000

GFS or Lustre; if you want NAS, a couple of gateways may be required.

evol262
Nov 30, 2010
#!/usr/bin/perl

theperminator posted:

What would be the best solution for clustered storage using regular hardware?

For instance, right now I manage a bunch of Dell 2950s acting as NASes and shuffle stuff around to make sure they never fill up.
Is there some way I can create a system where I can just keep adding more 2950s to one large storage pool?

This is the use case for Gluster, Lustre, Swift, and DFS. OCFS and GFS2 will not do what you want.

What are your actual requirements for performance, management, and redundancy?
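
For the Gluster route specifically, here is a rough, hypothetical sketch of what "just keep adding 2950s to one big pool" can look like. Hostnames, brick paths, and the replica count are assumptions; it only wraps the stock gluster CLI:

code:
#!/usr/bin/env python3
"""Hypothetical sketch: grow one distributed-replicated Gluster volume
across a pile of Dell 2950s. Run from one node, with glusterfs-server
installed everywhere; every name below is a placeholder."""
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Join the initial nodes into a trusted pool (run from nas01).
for host in ["nas02", "nas03", "nas04"]:
    sh(f"gluster peer probe {host}")

# 2. Distributed-replicated volume: files are spread across replica pairs,
#    so one dead node doesn't take data offline.
sh("gluster volume create bigpool replica 2 "
   "nas01:/bricks/b1 nas02:/bricks/b1 nas03:/bricks/b1 nas04:/bricks/b1")
sh("gluster volume start bigpool")

# 3. When another pair of 2950s shows up, add bricks and rebalance --
#    the single namespace just grows, clients don't move.
sh("gluster volume add-brick bigpool nas05:/bricks/b1 nas06:/bricks/b1")
sh("gluster volume rebalance bigpool start")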

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
^ To add on: what are your failover requirements for this clustered system?

Docjowles
Apr 9, 2009

Is Ceph (in block storage mode) an option in this arena too? I'm asking, not offering it as a suggestion; I've read a little about it but haven't played around with it at all.

Docjowles fucked around with this message at 06:47 on Jan 29, 2014

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
I want to store and archive backups of VMs. I have tried Swift, but it doesn't seem to handle it well when I'm doing backup jobs in the range of 1-4TB.
I'd like it to be able to handle failures from a single disk up to a whole node, so Ceph block storage looks like it'd do the job. I might check that out, thanks!

Syano
Jul 13, 2005
I am putting together some scratch storage that will be used for... well, stuff we don't want to load onto our main production storage. I am using some parts from some old servers, and I want to be able to present some NFS shares to the network so I can access them from client machines but also ESXi hosts. What's the OS du jour for this sort of thing? FreeNAS? NAS4Free? Something else I haven't seen yet?

evol262
Nov 30, 2010
#!/usr/bin/perl

Docjowles posted:

Is Ceph (in block storage mode) an option in this arena too? I'm asking, not offering it as a suggestion; I've read a little about it but haven't played around with it at all.

I always forget about Ceph since I haven't used it yet.

Syano posted:

I am putting together some scratch storage that will be used for... well, stuff we don't want to load onto our main production storage. I am using some parts from some old servers, and I want to be able to present some NFS shares to the network so I can access them from client machines but also ESXi hosts. What's the OS du jour for this sort of thing? FreeNAS? NAS4Free? Something else I haven't seen yet?

Literally anything that can function as an NFS server. FreeNAS is a safe bet.

El_Matarife
Sep 28, 2002

demonachizer posted:

What is the general opinion of Nimble with you guys? We are considering them for a project and like what we see so far, but we're wondering about real-world experiences too.

Have you checked out PureStorage? It's a bunch of ex-EMC and ex-Veritas guys; EMC is actually suing them.
They're pretty drat impressive, but when I saw them last they weren't going to be landing things like replication, iSCSI/NFS, and a few other checkbox features for another six months, though they appear to have some of it now according to their site.
Non-disruptive hardware upgrades are a pretty killer feature, plus 512B sectors that kill any alignment issues.

Violin Memory, Texas Memory Systems (now IBM), Whiptail (now Cisco)... the flash SAN market is really overflowing with potential options.
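
On the 512B sector point, a little arithmetic shows why it matters. With the classic 63-sector MBR partition offset (an illustrative assumption, as are the I/O sizes), a 4KiB guest write stays block-aligned on a 512B backend but straddles two blocks on a 4KiB backend, forcing read-modify-write:

code:
# Why 512B logical sectors sidestep guest alignment problems (illustrative numbers).
SECTOR = 512
PART_OFFSET = 63 * SECTOR          # 32,256 bytes: the old DOS/MBR partition start

def blocks_touched(io_offset, io_size, backend_block):
    """Number of backend blocks a guest I/O ends up touching."""
    start = PART_OFFSET + io_offset
    first = start // backend_block
    last = (start + io_size - 1) // backend_block
    return last - first + 1

print(blocks_touched(0, 4096, 512))    # 8 -> whole blocks written, no penalty
print(blocks_touched(0, 4096, 4096))   # 2 -> partial writes at both ends (read-modify-write)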

Demonachizer
Aug 7, 2004

El_Matarife posted:

Have you checked out PureStorage? It's a bunch of ex-EMC and ex-Veritas guys; EMC is actually suing them.
They're pretty drat impressive, but when I saw them last they weren't going to be landing things like replication, iSCSI/NFS, and a few other checkbox features for another six months, though they appear to have some of it now according to their site.
Non-disruptive hardware upgrades are a pretty killer feature, plus 512B sectors that kill any alignment issues.

Violin Memory, Texas Memory Systems (now IBM), Whiptail (now Cisco)... the flash SAN market is really overflowing with potential options.

We actually got scared off from a SAN on this project because people were quoting some pretty ridiculous prices for the support/warranty, like $5k+ per year, which was tough to swallow.

That is possibly totally normal, etc. We ended up just doing DAS and using Double-Take Availability for mirroring. I really wanted to get one of the Nimble units in so that we could build off of it, but we couldn't make it work budget-wise. We had a good idea as to the hardware cost but had no clue that we were looking at $30k+ on the support for two units. I will check out Pure next round when we retire some of our other poo poo and start a VM project.

parid
Mar 18, 2004
Anyone have recommendations for backup storage target systems? Commvault's software dedupe and compression tech continues its march to mediocrity. I'm getting real tired of them moving the goalposts for the dedupe database's system requirements (it's now: just put it on FusionIO). Their storage efficiency has been poor: including compression, we're seeing worse than 4:1, 420TB stored in 116TB of disk. Throughput has also been abysmal. If you add in the FusionIO cards, Commvault's very high support costs (we have capacity-based licensing), and the cost of the FAS2240s that we currently use, this environment has become very expensive and performs poorly by just about every measure.

I had a positive experience with DataDomain in the past (2+ years ago). I got decent throughput but excellent compression ratios (>10:1). I hear that EMC is messing with their backup products and the future is murky for the DataDomain product line. They are trying to integrate all these disparate products they purchased and drive people into their complete data protection stack. Considering we're a NetApp/Commvault shop right now, that would lead to many complications for us.

Anyone know what's going to happen with DataDomain? Are there other similar products (inline dedupe storage) out there worth considering?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

demonachizer posted:

We actually got scared off from a SAN on this project because people were quoting some pretty ridiculous prices for the support/warranty, like $5k+ per year, which was tough to swallow.
If it's scary to your financial officers for budgeting reasons and not your department for cost reasons, you should get a 3-year support contract rolled into the initial purchase. It will probably save you a few bucks if you can spare the cash now.

$5k/year is nothing. I've signed off on $250k+ for maintenance alone.

Docjowles
Apr 9, 2009

Yeah... if $5k is an unreasonable amount of money to your company, you are probably not in the market for a SAN.

ragzilla
Sep 9, 2005
don't ask me, i only work here


Expect maintenance on any enterprise piece of hardware to be ~18% of list per year, maybe a bit more/less depending on response times.

qutius
Apr 2, 2003
NO PARTIES

parid posted:

Anyone have recommendations for backup storage target systems? Commvault's software dedupe and compression tech continues its march to mediocrity. I'm getting real tired of them moving the goalposts for the dedupe database's system requirements (it's now: just put it on FusionIO).

We, as an organization, have gone round and round with them on this as well...very annoying.

TKovacs2
Sep 21, 2009

1991, 1992, 2009 = Woooooooooooo

parid posted:

Anyone have recommendations for backup storage target systems? Commvault's software dedupe and compression tech continues its march to mediocrity. I'm getting real tired of them moving the goalposts for the dedupe database's system requirements (it's now: just put it on FusionIO). Their storage efficiency has been poor: including compression, we're seeing worse than 4:1, 420TB stored in 116TB of disk. Throughput has also been abysmal. If you add in the FusionIO cards, Commvault's very high support costs (we have capacity-based licensing), and the cost of the FAS2240s that we currently use, this environment has become very expensive and performs poorly by just about every measure.

I had a positive experience with DataDomain in the past (2+ years ago). I got decent throughput but excellent compression ratios (>10:1). I hear that EMC is messing with their backup products and the future is murky for the DataDomain product line. They are trying to integrate all these disparate products they purchased and drive people into their complete data protection stack. Considering we're a NetApp/Commvault shop right now, that would lead to many complications for us.

Anyone know what's going to happen with DataDomain? Are there other similar products (inline dedupe storage) out there worth considering?

Really happy with ExaGrid at the moment personally. Not cheap though.

parid
Mar 18, 2004

TKovacs2 posted:

Really happy with ExaGrid at the moment personally. Not cheap though.

I'm not sure how we could be spending more at the moment, so it's probably in range. How are your compression ratios in the real world? These guys love to promise the world, and it almost never lives up to it.

Demonachizer
Aug 7, 2004

Docjowles posted:

Yeah... if $5k is an unreasonable amount of money to your company, you are probably not in the market for a SAN.

We had to OK about $350k in workstation purchases as part of the project after getting specs from a software vendor, so finance was already reeling a bit. And honestly, for this project we really don't need the performance of a SAN, but it was a good way to get a large SAN into the room to then build off of for consolidation and later virtualization projects as well.

Just to give a picture, we currently have 7 or so different file servers of varying ages for different departments (weird budget poo poo, regulatory issues, and grant politics). My dream is to get a single SAN infrastructure and virtualize the 60+ application servers we have, but it is a tough sell even though I know I could put together a pretty good proposal with some obvious savings over time. There is also a big concern about knowledge, etc., since I am the only person who seems to care about learning virtualization. Everyone else is pretty old school (most have been there for 12+ years).

Misogynist posted:

If it's scary to your financial officers for budgeting reasons and not your department for cost reasons, you should get a 3-year support contract rolled into the initial purchase. It will probably save you a few bucks if you can spare the cash now.

$5k/year is nothing. I've signed off on $250k+ for maintenance alone.

We are in a weird position because we don't really have a budget of our own, so we sort of propose things to the finance director and work things out. We went with the three-year contract in the beginning, but the added amount for the two SANs killed it. I can say that the guy I was working with made a big (appreciated) effort to get us where we needed to be, but it wasn't happening because of the aforementioned workstation costs.

Demonachizer fucked around with this message at 05:52 on Jan 30, 2014

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

ragzilla posted:

Expect maintenance on any enterprise piece of hardware to be ~18% of list per year, maybe a bit more/less depending on response times.
For the initial quote, maybe, but if you're paying more than 14% for year 2 and year 3 support, you're getting suckered.

Mr Shiny Pants
Nov 12, 2012
We are looking at a NetApp MetroCluster for our VMware cluster and will be using CommVault for backups.

Any gotchas? Split brain? Ditch CommVault and go for Veeam? We looked at 3PAR also, but we liked the NetApp more because of the ease of snapshotting and the like.

Any criticism is welcome; we haven't fully decided yet.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Mr Shiny Pants posted:

We are looking at a NetApp MetroCluster for our VMware cluster and will be using CommVault for backups.

Any gotchas? Split brain? Ditch CommVault and go for Veeam? We looked at 3PAR also, but we liked the NetApp more because of the ease of snapshotting and the like.

Any criticism is welcome; we haven't fully decided yet.

I'd say it's: PHDVirtual > CommVault > Veeam, personally. What are you backing up to?

Mr Shiny Pants
Nov 12, 2012
A smaller FAS in a colo. What is wrong with Veeam? We were pretty impressed when they demoed it. The SharePoint and Exchange stuff was excellent.

Mr Shiny Pants fucked around with this message at 19:43 on Jan 30, 2014

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Mr Shiny Pants posted:

A smaller FAS in a colo.

If you're going FAS-to-FAS, I don't know why you'd mess around with any VM backup software. Use the NetApp vCenter plugin (VSC - Virtual Storage Console) to take your snapshots and then use either SnapVault or SnapMirror to send them off-site.

Mr Shiny Pants
Nov 12, 2012

madsushi posted:

If you're going FAS-to-FAS, I don't know why you'd mess around with any VM backup software. Use the NetApp vCenter plugin (VSC - Virtual Storage Console) to take your snapshots and then use either SnapVault or SnapMirror to send them off-site.

That is the idea. We still might need the software for some other machines not on the filers.

OldPueblo
May 2, 2007

Likes to argue. Wins arguments with ignorant people. Not usually against educated people, just ignorant posters. Bing it.

Mr Shiny Pants posted:

We are looking at a NetApp MetroCluster for our VMware cluster and will be using CommVault for backups.

Any gotchas? Split brain? Ditch CommVault and go for Veeam? We looked at 3PAR also, but we liked the NetApp more because of the ease of snapshotting and the like.

Any criticism is welcome; we haven't fully decided yet.

A MetroCluster is basically the same as a regular cluster, just stretched across fiber switches with a few minor extra rules (like in the case of the split-brain thing). Are you looking at MetroCluster because you need the whole two-separate-sites ability? Also, one thing to keep in mind is that new feature support sometimes lags a little behind the regular FAS products. For example, you can mix shelves in a stack now (though it's not recommended), but you can't yet with MetroCluster.

Mr Shiny Pants
Nov 12, 2012

OldPueblo posted:

A MetroCluster is basically the same as a regular cluster, just stretched across fiber switches with a few minor extra rules (like in the case of the split-brain thing). Are you looking at MetroCluster because you need the whole two-separate-sites ability? Also, one thing to keep in mind is that new feature support sometimes lags a little behind the regular FAS products. For example, you can mix shelves in a stack now (though it's not recommended), but you can't yet with MetroCluster.

We have two datacentres that are close by, and we run fiber to them. The MetroCluster gives us the ability to have a stretched VMware cluster on top. The idea is to have it physically separated but logically one cluster.

Scuttlemonkey
Sep 19, 2006
Forum Monkey
First of all, I work for Inktank, the company supporting Ceph, so take what follows with the requisite grain of salt. I'd like to dig into the Ceph vs. Gluster thing a bit... so if that's not your cup of tea, feel free to breeze right on past this one (it's bound to be a bit of a WoT).

evol262 posted:

It's new and essentially has the same advantages and disadvantages as Gluster, except that it's newer, less stable, and arguably slower. It's mainline, though, and things should rapidly equalize.

Sorry I'm a bit late to this comment (Dec of last year), but I really hate to see it characterized this way. While I realize this may have been flippant/off-the-cuff, each system has use cases where it shines. I'm a little frustrated with all the misleading marketing bombs that keep getting lobbed over the fence from Red Hat, but I suppose that's to be expected from any megacorp /rant.

Ok, on to the meat...

Grand Unified Storage Debate
If you haven't seen it, at LCA 2013 Sage (creator of Ceph) and John Mark Walker (Gluster community leader) debated the relative merits of each:


The best part about this is how both of these guys acknowledge the strengths and weaknesses of each system.

Architecture
Ceph is built on a strongly consistent object storage system that was designed to provide native object, block, and file storage from its inception. The technology uses lightweight peer-to-peer software processes along with an extremely flexible data placement algorithm (called CRUSH). The software processes automatically handle all expansion, contraction, and rebalancing of the data within a cluster.
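
As a small, hypothetical illustration of that native object layer, here is about the simplest thing you can do against RADOS with the python-rados bindings. It assumes a running cluster, a readable /etc/ceph/ceph.conf, and an existing pool, all of which are assumptions on my part:

code:
#!/usr/bin/env python3
"""Minimal sketch against Ceph's native object interface (the RADOS layer
that RBD and CephFS sit on). Cluster config and pool name are assumptions."""
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")   # hypothetical pool name
    try:
        # Placement is computed by CRUSH on the client; no central lookup table.
        ioctx.write_full("hello-object", b"stored and replicated by RADOS")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()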

Red Hat Storage Server 2.0 (RHSS) is based on GlusterFS, which was designed as a distributed POSIX filesystem and optimized for that use case. Additional capabilities have been added as plugins, but without consideration of the requirements this may place on the underlying storage system. Data re-mirroring is not automatic when a node leaves or joins the cluster, which increases the ongoing management cost of a cluster.

High Level Comparison
(summarized from a report by Hastexo)
    * Ceph > Gluster in arch/design
    * Ceph > Gluster in general data redundancy, distribution, and resilience
    * Gluster > Ceph in terms of POSIX filesystem maturity
    * Ceph > Gluster in distributed block device and RESTful object storage support
    * Ceph > Gluster in availability and richness of APIs for programming use
    * Ceph > Gluster in terms of integration with virtualization and cloud computing stacks
    * Gluster > Ceph in asynchronous replication and hence its use in cross-datacenter disaster recovery
    * Gluster > Ceph in ease-of-use wrt user experience

Now, this report was generated in September 2012, so obviously the gap between the systems in each of the respective areas has narrowed quite a bit.

Overall it still comes down to what people prefer to use, and both options are definitely viable. For things like OpenStack cloud deployments, Ceph is the clear leader, while RHSS (and by extension Gluster) still seems to have more pure enterprise storage deployments (purely anecdotal, no evidence to support that).

Ceph has quite a few large production deployments, including places like CERN, Deutsche Telekom, Dreamhost, and the University of Alabama, so it has definitely met the bar in terms of stability and usability. The one caveat is that CephFS, the POSIX layer on top of the underlying object store, is still being called "nearly awesome" and isn't suggested for production deployment. That is scheduled to change this year with the "Giant" release.

If anyone has more questions I'm always happy to talk shop in our IRC channel (scuttlemonkey on irc.oftc.net #ceph). As you might see from my posting habits I rarely come out of the woodwork here and mostly just lurk and cause a drain on the available bandwidth. :P

Scuttlemonkey fucked around with this message at 20:26 on Jan 30, 2014

Mr Shiny Pants
Nov 12, 2012
Ceph looks rad. Too bad I don't have any hardware to test it with. A VM is not the same as two physical boxes running Ceph.

evol262
Nov 30, 2010
#!/usr/bin/perl

Admittedly, it was a flippant/off-the-cuff comment. And Ceph has a lot of advantages wrt Gluster, but Gluster also has a lot wrt Ceph. It's not so much a marketing bomb as:
  • Ceph's default blocksize can lead to bad performance comparisons on untuned ceph v. gluster systems.
  • Requiring a metadata server is reminiscent of lustre in a bad way.
  • It's hard for me to make the argument that "split into many pieces which communicate over APIs" is necessarily better or worse in arch/design than a (relatively) unified design.
  • Gluster.org and RHSS are not the same thing. Gluster doesn't automatically expand or contract, but that's an intentional design decision which matches other distributed filesystems and disk-level redundancy, up to and including ZFS pools. It's inconvenient and more work for administrators, but hardly a black mark.
  • It's extremely difficult to say that "ceph > gluster in cloud/virtualization" integration. Huh? Gluster and Ceph are both supported in Openstack. RHEV/oVirt have native Gluster support. Gluster's NFS driver lets it be used as a backing datastore for VMware. You can do NFS over rbd, but it's not native. Ceph's support on Openstack is very comparable to Gluster. "How many Openstack deployments are on Ceph vs Gluster" is a terrible metric for whether "x > y" unless you also intend to argue that "netapp = gluster" and "xen > vmware" based on numbers from the user survey.
  • Ceph's distributed block devices and object store are features Gluster doesn't have without sticking Cinder and Swift on top of it. No argument

It wasn't intended to be a "Gluster rocks, Ceph sucks" post. I really don't know what problem you had with my characterization of Ceph, which compared it favorably to Gluster except for stability. And honestly, CephFS isn't stable. But again, Ceph is improving rapidly, and there are very good reasons to pick it over Gluster. Just not any on that list.

Scuttlemonkey
Sep 19, 2006
Forum Monkey

evol262 posted:

Ceph's default blocksize can lead to bad performance comparisons on untuned ceph v. gluster systems.
Yeah, I think until we have someone really try to do a reasonable comparison that employs appropriate caching/tuning and works with both the Ceph and Gluster communities, we're likely to have a lot of slanted reports that don't really answer any questions. There are so many levers on each system that it's hard to qualify statements of performance realistically.

evol262 posted:

Requiring a metadata server is reminiscent of lustre in a bad way.
Actually this is one of the things that I'm most excited about (and what Ceph originally set out to do before all the block & S3 stuff came about). The ability to have horizontally scalable metadata offers some really cool benefits.

evol262 posted:

Gluster.org and RHSS are not the same thing. Gluster doesn't automatically expand or contract, but that's an intentional design decision which matches other distributed filesystems and disk-level redundancy, up to and including ZFS pools. It's inconvenient and more work for administrators, but hardly a black mark.
True, not a black mark, but it is one of the things we're quite proud of. The autonomous nature of RADOS is something that really adds to the robustness of Ceph (in my opinion). Each system makes their own assumptions about use case, which is totally fine.

evol262 posted:

It's extremely difficult to say that "ceph > gluster in cloud/virtualization" integration. Huh? Gluster and Ceph are both supported in Openstack. RHEV/oVirt have native Gluster support. Gluster's NFS driver lets it be used as a backing datastore for VMware. You can do NFS over rbd, but it's not native. Ceph's support on Openstack is very comparable to Gluster. "How many Openstack deployments are on Ceph vs Gluster" is a terrible metric for whether "x > y" unless you also intend to argue that "netapp = gluster" and "xen > vmware" based on numbers from the user survey.
It was more a "recent adoption trend" than it was a "qualitative analysis" ...but you're right it's not terribly indicative of technical viability. I withdraw the point. :)


Keep in mind that the report summary I was drawing from wasn't mine... it was from a third party (which I can't find a public link to). So I felt it would be remiss to include only some of it and not all. I think that, properly tuned, Gluster and Ceph are both amazingly-awesometastic™ options in comparison to the historically available options.

evol262 posted:

And honestly, CephFS isn't stable.

Ahhh, ok... your response makes way more sense now. With a more filesystem-centric view I totally get where you're coming from. I would have just amended your original statement to say "CephFS" instead of "Ceph" (which was the root of my frustration, which... admittedly is more related to my interactions with Jeff Darcy than with your statements), and I think there would have been no response incited.

Honestly my biggest hope is that Red Hat/Gluster and Inktank/Ceph can really drive a wedge into the storage industry (those are some huge numbers... both in storage and in dollars) and start weaning people off the expensive, black-box, forklift options (/braces for NetApp and EMC fans...).

Thanks for such a reasoned response, always love to see good technical discourse. :)

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Anyone had a good look at Pure yet?


Maneki Neko
Oct 27, 2000

Vanilla posted:

Anyone had a good look at Pure yet?

Just waiting to get our final budget numbers for 2014 back (wtf, board), and then hopefully we'll be pulling the trigger on Pure for SQL backend storage.
