bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Kaddish posted:

I'm not a big Netapp guy but isn't BackupExec NDMP capable?

Oh, it is. I'm just not too keen on keeping BackupExec around if we move to something else for our VMware-level backups in our prod environment. We're pulling back our SAS EqualLogic storage to build a small-capacity 3-node VMware Essentials Plus cluster in our office, and I had planned on moving the current production Backup Exec licensing into our much smaller office environment, where I feel it will work a bit better. So I'm open to other (preferably better, but not horrifically expensive and complex) NDMP-capable backup options.

Kaddish
Feb 7, 2002
TSM for VM seems to work well but that's ruled right out due to both of your requirements.

Internet Explorer
Jun 1, 2005





Vanilla posted:

That reminds me - for those of you who were following my job race between Nimble and Pure I did eventually join Pure and am loving the product and the company so far.

Assisted on my first install a few weeks ago. Took longer to rack than set up.....I was stood there saying 'is that it?'

I'm surprised at how many Pure employees are actually ex-customers.

If you go back something like 2 years you'll see me griping about how much of a cluster gently caress EMC gear is to set up and maintain. I think you and I talked about it a bit. There is so much better out there for most organizations.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Vanilla posted:

That reminds me - for those of you who were following my job race between Nimble and Pure I did eventually join Pure and am loving the product and the company so far.

Assisted on my first install a few weeks ago. Took longer to rack than set up.....I was stood there saying 'is that it?'

I'm surprised at how many Pure employees are actually ex-customers.

What area are you working in? I just met with some Pure folks last week.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Internet Explorer posted:

If you go back something like 2 years you'll see me griping about how much of a cluster gently caress EMC gear is to set up and maintain. I think you and I talked about it a bit. There is so much better out there for most organizations.

Yup, ain't that the truth!

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Moey posted:

What area are you working in? I just met with some Pure folks last week.

Out in EMEA ;)

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Vanilla posted:

Out in EMEA ;)

Ahh, safe to assume I didn't meet with you then.

BonoMan
Feb 20, 2002

Jade Ear Joe
I'm looking for a Thunderbolt 2-enabled 1U or 2U (or hell, 3U ... we have the space for it) storage system that can't exceed 27" in depth.

We ordered the LaCie 12TB 8Big rackmount not realizing it was too deep (just barely).

It's for a mobile DIT cart that will be out in the field and has to be closed up for transport, so it HAS to fit that depth, and it needs to be rackmounted for stability.

4 or 8 bays is enough, and the $1500-2000 range is preferable.

It can be relatively dumb storage. We'll just be writing to it via a Blackmagic Mini Record in.

edit: And the reason it's Thunderbolt 2 is because the mobo on the DIT machine has 2 x Thunderbolt 2 ports.

Amandyke
Nov 27, 2004

A wha?
Not sure this thread is really the best place to ask about DAS solutions. I did find this comparison via Google, and it looks like there might be a few products that could fit your needs: http://wolfcrow.com/blog/a-comparison-of-10-thunderbolt-raid-storage-solutions/

toplitzin
Jun 13, 2003


bull3964 posted:

Speaking of OnTap, we just finished our Netapp 2554 install today. 20x 4tb SATA, 4x 400gb SSD, 48x 900gb SAS. There's definitely going to be a bit of a learning curve to this as it's not quite as point and shoot as the Equallogic or Pure I've used so far.

We haven't configured the Flash Pool yet on recommendation of the tech we were working with from our vendor. OnTap 8.3 allows for partitioning of the flash pool, so we would rather wait until we upgrade to it and allow both the SAS and SATA aggregates use the flash pool than choose one now.

I didn't see this addressed, but it's pretty important. When you go to provision the FlashPool you need to keep in mind what the workload will be. Certain workload profiles won't even leverage the FlashPool, so allocating it would be a waste. Also make sure your SSD aggregate is RAID 4 instead of the default RAID-DP, otherwise you'll lose two disks to parity and only have two for data use.

How many nodes? Did you split the disk types across both nodes? The presence of SATA drives will change the performance characteristics (slightly) of the system versus a setup where one node is all SSD+SATA and the other all SAS.

Also, learn and love QoS in 8.3. Just try to keep the policy names short or you'll never tell them apart, since the CLI truncates them after 22 characters, I think.
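
To make the parity math concrete, here's a rough sketch (Python, purely illustrative, assuming the 4x 400GB SSDs mentioned above; real usable space will be lower after right-sizing and WAFL/aggregate overhead):

```python
# Rough parity math for a 4-disk SSD aggregate (e.g. 4x 400GB), illustrative only;
# real usable space is lower after right-sizing and WAFL/aggregate overhead.
SSD_COUNT = 4
SSD_SIZE_GB = 400

def usable_gb(total_disks, parity_disks, disk_gb):
    """Usable capacity before filesystem overhead."""
    return (total_disks - parity_disks) * disk_gb

print("RAID-DP:", usable_gb(SSD_COUNT, 2, SSD_SIZE_GB), "GB usable")  # 800
print("RAID 4: ", usable_gb(SSD_COUNT, 1, SSD_SIZE_GB), "GB usable")  # 1200
```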

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
I've got an EMC AX4 that is ancient and out of warranty, not in use in production any more but just sitting around.

Anyway, I'm thinking of keeping it around in the office for VMware lab work etc.

Does anyone know if it is possible to use non-EMC-branded disks in this? I understand it will probably flat out refuse to without some kind of gently caress around with disk formatting or something, so I'm wondering if someone has done this before and, if so, how?

Richard Noggin
Jun 6, 2005
Redneck By Default
AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom firmware. Chances are the controller won't even identify the drive.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Richard Noggin posted:

AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom firmware. Chances are the controller won't even identify the drive.

There are ways to brute force a firmware update onto a drive.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


toplitzin posted:

I didn't see this addressed, but it's pretty important. When you go to provision the FlashPool you need to keep in mind what the workload will be. Certain workload profiles won't even leverage the FlashPool, so allocating it would be a waste. Also make sure your SSD aggregate is RAID 4 instead of the default RAID-DP, otherwise you'll lose two disks to parity and only have two for data use.

How many nodes? Did you split the disk types across both nodes? The presence of SATA drives will change the performance characteristics (slightly) of the system versus a setup where one node is all SSD+SATA and the other all SAS.

Also, learn and love QoS in 8.3. Just try to keep the policy names short or you'll never tell them apart, since the CLI truncates them after 22 characters, I think.

Two nodes. One node owns the SAS aggregate and one node owns the SATA aggregate. It will, of course, fail over to the other node if necessary, but we split them so as to not cause any performance issues.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Richard Noggin posted:

AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom firmware. Chances are the controller won't even identify the drive.

...and I think they used a 520-byte block size rather than the usual 512.

The disks must be pretty small? 146/300GB? Worth buying one just to try?

Amandyke
Nov 27, 2004

A wha?

Vanilla posted:

...and I think they used a 520-byte block size rather than the usual 512.

The disks must be pretty small? 146/300GB? Worth buying one just to try?

AX4s will use 500GB and 1TB SATA drives. They are 520-byte-per-sector formatted, though. They should be reasonably affordable on eBay.
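
If you're wondering where that 520 comes from: the extra 8 bytes per sector are typically used by the array for per-sector checksum/metadata rather than user data, so the usable capacity works out roughly like this (illustrative Python, nominal drive size):

```python
# Illustrative math for 520- vs 512-byte sector formatting (nominal numbers only).
# The extra 8 bytes per sector are typically used by the array for per-sector
# checksum/metadata rather than user data.
NOMINAL_BYTES = 1_000_000_000_000   # a "1TB" drive

sectors = NOMINAL_BYTES // 520      # sectors available when formatted at 520B
user_capacity = sectors * 512       # only 512B of each sector hold user data
metadata = sectors * 8              # the remaining 8B per sector

print(f"{sectors:,} sectors -> {user_capacity / 1e9:.1f} GB usable, "
      f"{metadata / 1e9:.1f} GB of per-sector metadata (~{8 / 520:.1%})")
```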

Serfer
Mar 10, 2003

The piss tape is real



Are there any dual-path, dual-controller DAS SAS JBODs that aren't horrible? We have some Intel JBOD2000s (224) and they're lousy: fan controllers will increase fan speeds but never decrease them, power supplies get detected as failed when they're perfectly fine, the units can only be turned on by pushing a button, etc. I'm having a hard time finding something that doesn't suck.

Stealthgerbil
Dec 16, 2004


Any suggestions for building out a cheap storage server or a SAN to use as virtual machine storage for a small five-to-ten-server Hyper-V or XenServer setup? The servers will mostly be single-socket Xeon E3 boxes with 32GB of RAM and would use the storage server or SAN as storage. Ideally I would love to find some magical storage solution that would let me set up some sort of redundancy and add more SSDs as needed, as well as add a second device later on for redundancy. Really we probably don't need a real SAN; a regular storage server with four Intel DC S3500 1.2TB SSDs would be enough to handle the virtual machines. We are mainly planning on using it to offer hosted application services to our clients and would love to have failover features available for their virtual machines. Also, ideally I would love something that I can roll my own from, because it's something that I would enjoy more, and I would also love to be able to fix any problem without having to contact vendor support.

Personally I would love to use something that is not hardware-dependent, like Windows Storage Spaces, but I am worried that performance will be awful. It is something that I need to test, but I only have regular consumer-grade SSDs in my home lab. I have been using Storage Spaces on my backup server and it's been amazing as far as how easy it is to add more disk to a cluster. However, I have heard mixed reports about Storage Spaces performance. Do any of you knowledgeable fellows know of any magical product that can do any of this stuff and is also relatively inexpensive? Also not sure if this is a question for the enterprise storage thread or the virtualization thread.
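
If you want to sanity-check Storage Spaces performance before trusting it, even a crude random-read latency test will show big differences between layouts. A minimal sketch (Python; the path and sizes are placeholders, and the OS page cache will flatter the numbers unless the test file is much bigger than RAM):

```python
import os
import random
import statistics
import time

# Crude random-read latency check for a volume (e.g. a Storage Spaces virtual disk).
# PATH is a placeholder; pre-create a test file much larger than RAM there,
# otherwise you're mostly measuring the OS page cache rather than the disks.
PATH = r"D:\bench\testfile.bin"
BLOCK = 4096
READS = 2000

size = os.path.getsize(PATH)
latencies = []
with open(PATH, "rb", buffering=0) as f:       # unbuffered binary reads
    for _ in range(READS):
        f.seek(random.randrange(0, size - BLOCK))
        start = time.perf_counter()
        f.read(BLOCK)
        latencies.append((time.perf_counter() - start) * 1000)  # ms

latencies.sort()
print(f"median {statistics.median(latencies):.2f} ms, "
      f"p99 {latencies[int(len(latencies) * 0.99)]:.2f} ms")
```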

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Stealthgerbil posted:

Any suggestions for building out a cheap storage server or a SAN to use as virtual machine storage for a small five to ten server hyper-V or xenserver setup?
I run NFS datastores off of SmartOS at work for things that we can tolerate additional risk on. It's cheap and works great.

Maneki Neko
Oct 27, 2000

We're pondering dumping our current NetApp gear (still on 7 mode) and moving to hybrid Tegile (T3200s in particular) for VM storage vs refreshing our current NetApp heads and going through the joy of a CDOT migration.

I haven't seen much Tegile chat, anyone have any thoughts on their gear they want to share? I've heard good things about their all flash arrays, but haven't really run across a lot of people running their stuff in a hybrid configuration.

Really I'm just curious if it's as "set it and forget it" as the sales reps/engineers are claiming for NFS vmstores. Backup-wise we'll be going with either Veeam or Commvault.

Erwin
Feb 17, 2006

We looked at Tegile vs. Nimble (vs. VNX lol) and went Nimble (which invalidates the following story because you need NFS), not because anyone we talked to hated Tegile, but because no one felt strongly about it. Nimble feedback is nothing but positive, and their install base is much larger. Tegile's supposedly shooting for an IPO at some point, but until then, Nimble is a little more transparent if you're into that.

Tegile pushes their use of eMLC flash, which is dumb because the point of a storage array is to abstract away the underlying hardware. I spoke to some Tegile references and they were all like "yeah, I dunno, we set it up and it works, it wasn't too bad." So it is set it and forget it, and I was unable to find someone with juicy support stories, so that was my only real concern. Plenty of people vouched for Nimble's support experience, which I can also do at this point.

So tl;dr, we looked at Tegile and it was unexciting, but not bad. And since you're looking for NFS, my Nimble story is pointless except to explain why we didn't go with Tegile. :tipshat:

Maneki Neko
Oct 27, 2000

Erwin posted:

We looked at Tegile vs. Nimble (vs. VNX lol) and went Nimble (which invalidates the following story because you need NFS), not because anyone we talked to hated Tegile, but because no one felt strongly about it. Nimble feedback is nothing but positive, and their install base is much larger. Tegile's supposedly shooting for an IPO at some point, but until then, Nimble is a little more transparent if you're into that.

Tegile pushes their use of eMLC flash, which is dumb because the point of a storage array is to abstract away the underlying hardware. I spoke to some Tegile references and they were all like "yeah, I dunno, we set it up and it works, it wasn't too bad." So it is set it and forget it, and I was unable to find someone with juicy support stories, so that was my only real concern. Plenty of people vouched for Nimble's support experience, which I can also do at this point.

So tl;dr, we looked at Tegile and it was unexciting, but not bad. And since you're looking for NFS, my Nimble story is pointless except to explain why we didn't go with Tegile. :tipshat:

The "we set it up and it works" is all we've really gotten as well, which is certainly a good thing compared to my NetApp experiences. I liked Nimble as well, but Tegile overall seems like they have a more flexible line of arrays.

Maneki Neko fucked around with this message at 09:01 on Apr 10, 2015

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

The Nimble CS-420 we had in our lab pretty handily outperformed the Tegile HA2100 we had. The Tegile was running RAID 10 on the backing storage and still couldn't keep up with the Nimble running RAID 6. RAIDZ and RAIDZ2 aren't particularly performant, and Tegile suffers the same problems, which means you're stuck dividing up capacity into different RAID levels for different performance requirements versus just running everything off of one big pool. That's not simple and doesn't really fit with their message of easy setup and operation.

The benefits of Tegile are multi-protocol and inline dedupe and compression. They're not any faster than anyone else, from my experience, so it basically comes down to whether you want a NetApp-lite experience for less money. Me, I'd either go with Nimble for simplicity or NetApp for multi-protocol. Tegile seems to split the difference in a not-very-compelling way. If it's all virtual then TinTri has a really nice offering that is simple, fast, and has some really nice features that all work on a per-VM basis, including per-VM QoS limits and guarantees now.
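
As a rough rule of thumb on why the RAIDZ layout hurts (this is a simplification, not a benchmark: a RAIDZ/RAIDZ2 vdev delivers roughly one disk's worth of random IOPS, while striped mirrors scale with spindle count):

```python
# Rule-of-thumb random-read IOPS estimate; a simplification, not a benchmark.
# A RAIDZ/RAIDZ2 vdev delivers roughly one disk's worth of random IOPS,
# while striped mirrors scale with the number of disks.
DISK_IOPS = 150     # a typical 7.2k/10k spinning disk, illustrative
DISKS = 12

mirror_pool_iops = DISKS * DISK_IOPS          # striped mirrors (RAID 10 style)
raidz2_pool_iops = (DISKS // 6) * DISK_IOPS   # e.g. two 6-disk RAIDZ2 vdevs

print(f"12 disks as striped mirrors: ~{mirror_pool_iops} random read IOPS")
print(f"12 disks as two 6-disk RAIDZ2 vdevs: ~{raidz2_pool_iops} random read IOPS")
```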

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

Me, I'd either go with Nimble for simplicity or NetApp for multi-protocol.
We went with Nimble for performance and Oracle for multiprotocol + highly cacheable workloads (like VDI).

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Not forgetting a few other things, such as Nimble being comparatively cheap (think versus NetApp) and including all software.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I'm doing a comparative study of distributed filesystems (Ceph, GlusterFS, Lustre) for a local user group this week. Is there anything anyone here would want to see covered? (Of course I'm gonna share the slides.)

socialsecurity
Aug 30, 2003





A few months back (I think it was in this thread) someone needed to do large-bandwidth file storage/transfers for 4K video editing and you guys gave him great advice, but I can't find it. Anyone remember what I'm talking about?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Cross posting this from the Enterprise Windows thread, because maybe people have seen something on the storage side:

FISHMANPET posted:

I'm gonna post this in the Storage thread too, but has anyone seen problems with slow storage performance on Server 2012 R2? I've got an open case with Microsoft but we're a month in and still seem to just be flailing randomly at even identifying a problem. I've heard mumblings of others having problems, but wondering if anyone has noticed anything.

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

I'm doing a comparative study of distributed filesystems (Ceph, GlusterFS, Lustre) for a local user group this week. Is there anything anyone here would want to see covered? (Of course I'm gonna share the slides.)

Is how to be a Lustre admin without killing yourself covered?

But I honestly would love to see the slides. And a comparison to HDFS would be awesome.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Is how to be a Lustre admin without killing yourself covered?

But I honestly would love to see the slides. And a comparison to HDFS would be awesome.
90% of being a Lustre admin is hand-rolling your own broken replication system that no one else on the planet understands.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

New TinTri OS adds support for VM-level QoS, including both limits and guarantees. That's a pretty great feature that no one else can match right now. Only SolidFire has workable QoS guarantees at all, and those are at the volume level.
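
For anyone curious what a QoS IOPS limit boils down to mechanically, it's usually some flavor of token bucket. This is just a generic sketch of the idea in Python, not Tintri's (or anyone else's) actual implementation:

```python
import time

# Generic token-bucket sketch of a per-VM IOPS limit; illustrative only,
# not any vendor's actual implementation.
class IopsLimit:
    def __init__(self, iops_limit, burst):
        self.rate = iops_limit          # tokens replenished per second
        self.capacity = burst           # maximum burst size in IOs
        self.tokens = burst
        self.last = time.monotonic()

    def allow_io(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # dispatch the I/O now
        return False                    # over the limit: queue/defer it

limit = IopsLimit(iops_limit=500, burst=100)
admitted = sum(limit.allow_io() for _ in range(1000))
print(f"{admitted} of 1000 back-to-back IOs admitted immediately")
```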

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

FISHMANPET posted:

Cross posting this from the Enterprise Windows thread, because maybe people have seen something on the storage side:

VM or physical? What kind of storage?

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Cross posting this from the virtualization thread. Probably belongs here anyway:

goobernoodles posted:

One of my two offices has only one host on local storage running a DC and some file, print, and super-low-end application servers. It's a small office with about 20-30 people. The long-term plan is to replace the core server and storage infrastructure in our main office, then potentially bring the SAN and servers to the smaller office to improve their capacity as well as have enough resources to act as a DR site. Until then, though, I was planning on loading up a spare host with 2.5" SAS or SATA drives in order to get some semblance of redundancy down there, as well as being able to spin up new servers to migrate the old 2003 servers to 2012. Right now, there's ~50GB of free space on the local datastore. I'm looking for at least 1.2TB of space on the server I take down. I'm trying to decide on what makes the most sense from a cost, performance, resiliency, and future usability standpoint. I'm trying to keep everything under a grand.

The spare x3650 I have has 8 total 2.5" bays (I have 3x 73GB 10k SAS drives handy), but the downside is that 2.5" SAS drives are pretty spendy from what I've found so far. At least IBM drives, anyway.

I've been considering grabbing another IBM x3650 with 3.5" trays for about $130 a few blocks away, since, for some reason, I have 4x 500GB IBM 7.2k SATA drives lying around. No idea why. We don't have any IBM servers with 3.5" bays. :iiam: At that point though, if I chose to go SATA, I might as well load the thing up with much larger drives since they're so cheap.

I was thinking of installing either ESXi or FreeNAS, though I'm open to trying something else to present the storage. I also have a spare SAS controller as well as plenty of memory and a couple of HBAs. I've never actually tried it - you can mix SAS and SATA drives on the same controller, right, assuming different RAID arrays?
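
For what it's worth, the quick capacity math on those 4x 500GB SATA drives against the ~1.2TB target (raw numbers only, no filesystem or right-sizing overhead) looks like this:

```python
# Quick usable-capacity check for the 4x 500GB SATA option against a ~1.2TB target.
# Raw numbers only: no filesystem or right-sizing overhead included.
DRIVES = 4
DRIVE_GB = 500
TARGET_GB = 1200

layouts = {
    "RAID 10 (striped mirrors)": (DRIVES // 2) * DRIVE_GB,
    "RAID 5 (single parity)":    (DRIVES - 1) * DRIVE_GB,
    "RAID 6 (double parity)":    (DRIVES - 2) * DRIVE_GB,
}

for name, usable in layouts.items():
    verdict = "meets" if usable >= TARGET_GB else "misses"
    print(f"{name}: {usable} GB usable -> {verdict} the {TARGET_GB} GB target")
```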

the spyder
Feb 18, 2011

Misogynist posted:

I'm doing a comparative study of distributed filesystems (Ceph, GlusterFS, Lustre) for a local user group this week. Is there anything anyone here would want to see covered? (Of course I'm gonna share the slides.)

What's your input on Gluster thus far? I'm considering evaluating it here in the next few weeks.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

mattisacomputer posted:

VM or physical? What kind of storage?

Physical server, attached with Fibre Channel to a Hitachi SAN. But apparently the same issue is happening on a local 10k SAS disk and also a FusionIO card.

But, it turns out, this request is coming from a production system running unreleased Commvault software in an experimental configuration. I assumed that we were doing normal stuff and other customers were doing this just fine, but nobody is doing this at the scale we are.

So, tl;dr: maybe not a problem, backup guy is a poo poo.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

the spyder posted:

What's your input on Gluster thus far? I'm considering evaluating it here in the next few weeks.
Haven't run it in production. It seems to be the easiest distributed FS to plan and administer, since it only has one node type and there are few deployment gotchas. It seems to be a great fit overall for file-based workloads requiring high throughput from hundreds or thousands of clients, which covers most non-speciality HPC cluster use cases. It's a bad fit for client nodes that are throughput-constrained, because it handles replication on the client rather than the server (except when it's healing a replication problem). It also doesn't seem to be quite as good a fit for mass object storage as Ceph, but that's a fairly specific use case for most on-premises environments. The client end is FUSE-based, which can result in some slowdown versus the CephFS client, which is native and kernel-based. However, CephFS has an MDS component that's currently impossible to scale and a single point of failure, so I wouldn't recommend it for prime time.
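
To put a number on the throughput-constrained-client point: with client-side replication, every write leaves the client once per replica, so the ceiling drops fast (illustrative sketch, numbers are hypothetical):

```python
# Why client-side replication hurts throughput-constrained clients: every write
# leaves the client once per replica. Numbers here are hypothetical.
REPLICA_COUNT = 3        # e.g. a replica-3 Gluster volume
CLIENT_NIC_GBIT = 10     # client uplink

best_case_write = CLIENT_NIC_GBIT / REPLICA_COUNT
print(f"Best-case client write throughput: ~{best_case_write:.1f} Gbit/s "
      f"of a {CLIENT_NIC_GBIT} Gbit/s NIC at replica count {REPLICA_COUNT}")
```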

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

Haven't run it in production. It seems to be the easiest distributed FS to plan and administer, since it only has one node type and there are few deployment gotchas. It seems to be a great fit overall for file-based workloads requiring high throughput from hundreds or thousands of clients, which covers most non-speciality HPC cluster use cases. It's a bad fit for client nodes that are throughput-constrained, because it handles replication on the client rather than the server (except when it's healing a replication problem). It also doesn't seem to be quite as good a fit for mass object storage as Ceph, but that's a fairly specific use case for most on-premises environments. The client end is FUSE-based, which can result in some slowdown versus the CephFS client, which is native and kernel-based. However, CephFS has an MDS component that's currently impossible to scale and a single point of failure, so I wouldn't recommend it for prime time.

As far as I know, ceph RBD is still the component everyone loves and CephFS is questionably stable, but YMMV and it's under heavy development (we bought inktank, but ceph packages are still going through an overhaul to get into Fedora, which says a lot about how bad it was about shipping a ton of crap in /opt before)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

As far as I know, ceph RBD is still the component everyone loves and CephFS is questionably stable, but YMMV and it's under heavy development (we bought inktank, but ceph packages are still going through an overhaul to get into Fedora, which says a lot about how bad it was about shipping a ton of crap in /opt before)
RBD is the piece that allowed Ceph to become the distributed darling of the OpenStack community, because it's the best option out there for distributed block devices. Ceph itself is object-first, though, and it implements RBD on top of its object storage capabilities. While you can interface with the object store over the native Ceph object APIs, most people prefer to use the RADOS HTTP gateway, which provides S3 and Swift-compatible interfaces to the object store.

Yahoo! just announced plans yesterday to use Ceph to underpin the media storage for Flickr and Tumblr: http://www.theregister.co.uk/2015/04/15/yahoo_plans_storage_service_on_ceph/

CephFS doesn't have any inherent stability problems that I'm aware of, and it's rather well-tested, but from my perspective, it's inadvisable to use a metadata server with a single point of failure in a production setting. It was bad enough when Hadoop did it.
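
For the RADOS gateway mentioned above: because it speaks the S3 API, any S3 client works against it. A minimal sketch with boto3 (endpoint URL and credentials below are placeholders, not real values):

```python
import boto3

# Minimal sketch of talking to a RADOS gateway through its S3-compatible API.
# Endpoint URL and credentials are placeholders, not real values.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",   # hypothetical radosgw endpoint
    aws_access_key_id="RGW_ACCESS_KEY",
    aws_secret_access_key="RGW_SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello ceph")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```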

Vulture Culture fucked around with this message at 05:50 on Apr 16, 2015

jre
Sep 2, 2011

To the cloud ?



Misogynist posted:

Haven't run it in production. It seems to be the easiest distributed FS to plan and administer, since it only has one node type and there are few deployment gotchas. It seems to be a great fit overall for file-based workloads requiring high throughput from hundreds or thousands of clients, which covers most non-speciality HPC cluster use cases. It's a bad fit for client nodes that are throughput-constrained, because it handles replication on the client rather than the server (except when it's healing a replication problem). It also doesn't seem to be quite as good a fit for mass object storage as Ceph, but that's a fairly specific use case for most on-premises environments. The client end is FUSE-based, which can result in some slowdown versus the CephFS client, which is native and kernel-based. However, CephFS has an MDS component that's currently impossible to scale and a single point of failure, so I wouldn't recommend it for prime time.

We've a couple of small Gluster deployments and on the whole it works really well. The geo-rep feature is really good, and the 3.6 version of it, where it uses the volume change log rather than rsync, is a big improvement. A small problem is that upgrading between versions (3.4 -> 3.5 -> 3.6) can be a bit of an adventure; the last time I checked, the advice was to disconnect all the clients and offline the cluster before doing this. Small-file performance is pretty meh at the moment, but they are doing a lot of work to improve this right now. If you are willing to handle failover yourself, you can use NFS to access the cluster rather than the FUSE client.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

jre posted:

We've a couple of small Gluster deployments and on the whole it works really well. The geo-rep feature is really good, and the 3.6 version of it, where it uses the volume change log rather than rsync, is a big improvement. A small problem is that upgrading between versions (3.4 -> 3.5 -> 3.6) can be a bit of an adventure; the last time I checked, the advice was to disconnect all the clients and offline the cluster before doing this. Small-file performance is pretty meh at the moment, but they are doing a lot of work to improve this right now. If you are willing to handle failover yourself, you can use NFS to access the cluster rather than the FUSE client.
3.7's got a lot of interesting improvements coming on the NFS side, including proper HA support for NFS-Ganesha.
