Modulo16
Feb 12, 2014

"Authorities say the phony Pope can be recognized by his high-top sneakers and incredibly foul mouth."

I recently got a NetApp DS14 Mk4 disk array, and oddly enough picked up another one from a shop, though unfortunately not the filer head. I connected them through an FC card. I wanted to low-level format the drives, but Windows doesn't offer that kind of access as far as I know. Enter CentOS 7: both of my servers were running it, so I tried to do the LLF using lsscsi and sginfo. No luck. I had read about FreeNAS and decided to try it with two 3TB drives on an old PowerEdge that was going unused, so I installed it to a SanDisk removable drive and gave it a shot. Here's where the solution came from.

So NetApp uses 520-byte sectors, which won't work in the FreeNAS environment for the reason above: it expects 512, not 520. I had almost lost hope until I found an article that pointed me in the right direction: http://www.sysop.ca/archives/208. The author had the bright idea of using camcontrol, which worked. So if you see some disk shelves going cheap and you're looking for a disk array for home or small-shop use, give this a look. I think I got the NetApps with the drives for around $200 plus shipping. I plan to run OpenStack after I mount the LDAP volume on the CentOS server so that I can do god knows what.
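
For anyone else poking at one of these shelves, the camcontrol side under FreeNAS boils down to something like this. Device names here are just examples, and the linked article has the exact mode-page step that flips the block length from 520 to 512 before you format:

# list the shelf disks FreeNAS can see and note their daX names
camcontrol devlist
# check what capacity/sector size a drive currently reports (da5 is a placeholder)
camcontrol readcap da5 -h
# after setting the block length to 512 per the article, kick off the low-level format
# (this wipes the drive and can take hours per disk)
camcontrol format da5 -y
# verify the drive now reports 512-byte sectors
camcontrol readcap da5 -h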

Rhymenoserous
May 23, 2008

Thanks Ants posted:

Goddam I am so out of touch on storage. I think I'll try and push this off to a VAR to solve and see what I can learn from the process.

You want a Nimble or a NetApp or something of that nature that does flash caching. The "what you want" part isn't hard at all. The real question is "what are you willing to pay?"

Rhymenoserous
May 23, 2008

Frank Viola posted:

I recently got a NetApp DS14 Mk4 disk array, and oddly enough picked up another one from a shop, though unfortunately not the filer head. I connected them through an FC card. I wanted to low-level format the drives, but Windows doesn't offer that kind of access as far as I know. Enter CentOS 7: both of my servers were running it, so I tried to do the LLF using lsscsi and sginfo. No luck. I had read about FreeNAS and decided to try it with two 3TB drives on an old PowerEdge that was going unused, so I installed it to a SanDisk removable drive and gave it a shot. Here's where the solution came from.

So NetApp uses 520-byte sectors, which won't work in the FreeNAS environment for the reason above: it expects 512, not 520. I had almost lost hope until I found an article that pointed me in the right direction: http://www.sysop.ca/archives/208. The author had the bright idea of using camcontrol, which worked. So if you see some disk shelves going cheap and you're looking for a disk array for home or small-shop use, give this a look. I think I got the NetApps with the drives for around $200 plus shipping. I plan to run OpenStack after I mount the LDAP volume on the CentOS server so that I can do god knows what.

Hnnnnnnng.

Modulo16
Feb 12, 2014

"Authorities say the phony Pope can be recognized by his high-top sneakers and incredibly foul mouth."


That's pretty much the sound I made while trying to solve the problem

TeMpLaR
Jan 13, 2001

"Not A Crook"
Has anyone started getting into encrypting all of their data? I am starting to design a data-at-rest / data-in-motion type of thing. Data at rest is easy: buy self-encrypting drives. In motion is harder. SMB 3.0 supports encryption, but what about iSCSI and NFS traffic? I've seen some inline encryption devices but don't have any experience with them.

H110Hawk
Dec 28, 2006

Frank Viola posted:

I recently got a NetApp DS14 Mk4 disk array, and oddly enough picked up another one from a shop, though unfortunately not the filer head. I connected them through an FC card. I wanted to low-level format the drives, but Windows doesn't offer that kind of access as far as I know. Enter CentOS 7: both of my servers were running it, so I tried to do the LLF using lsscsi and sginfo. No luck. I had read about FreeNAS and decided to try it with two 3TB drives on an old PowerEdge that was going unused, so I installed it to a SanDisk removable drive and gave it a shot. Here's where the solution came from.

So NetApp uses 520-byte sectors, which won't work in the FreeNAS environment for the reason above: it expects 512, not 520. I had almost lost hope until I found an article that pointed me in the right direction: http://www.sysop.ca/archives/208. The author had the bright idea of using camcontrol, which worked. So if you see some disk shelves going cheap and you're looking for a disk array for home or small-shop use, give this a look. I think I got the NetApps with the drives for around $200 plus shipping. I plan to run OpenStack after I mount the LDAP volume on the CentOS server so that I can do god knows what.

This (used to?) work in reverse as well, but you can use sg3_utils when Linux is willing to view the disks.
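
Roughly, the Linux side looks like this with sg3_utils (device name is a placeholder, and sg_format destroys everything on the disk and can run for hours):

# find the SCSI generic device that maps to the 520-byte disk
lsscsi -g
# confirm the block size the drive currently reports
sg_readcap /dev/sg3
# reformat the drive to 512-byte sectors
sg_format --format --size=512 /dev/sg3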

thebigcow
Jan 3, 2001

Bully!

TeMpLaR posted:

Has anyone started getting into encrypting all of their data? I am starting to design a data-at-rest / data-in-motion type of thing. Data at rest is easy: buy self-encrypting drives. In motion is harder. SMB 3.0 supports encryption, but what about iSCSI and NFS traffic? I've seen some inline encryption devices but don't have any experience with them.

IPSec?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Storage traffic should be segregated onto a secured, unrouted VLAN, so encryption in motion for it is usually not necessary unless it needs to leave the datacenter. NFS supports encryption natively, and IPSec can be run in software, but you're going to pay enough of a performance penalty that it's usually a bad idea for storage traffic that you want to be low latency. Hardware IPSec encryption end points would be the best option if you have to do it for some reason.

Also, make sure you have a handle on key management before you start using SEDs.
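
If you do end up going the NFS route, the "native" encryption there is Kerberos with privacy (krb5p), which on the client side is basically just a mount option once the Kerberos plumbing is in place. Hostname and paths below are made up:

# NFSv4 mount with Kerberos authentication, integrity checking, and encryption of the traffic
mount -t nfs4 -o sec=krb5p filer.example.com:/export/secure /mnt/secure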

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Rhymenoserous posted:

You want a Nimble or a NetApp or something of that nature that does flash caching. The "what you want" part isn't hard at all. The real question is "what are you willing to pay?"

Speaking of Nimble, we just got quoted for some equipment and they are doing some real good deals on hardware right now.

H110Hawk
Dec 28, 2006
https://www.pagerduty.com/blog/security-fault-tolerance/

This is a great article. To be honest, encryption in motion is not the performance penalty it is made out to be on modern CPUs. The initial handshake is the most expensive part, but you only do that once a day at most.
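
Easy enough to sanity-check on whatever box you have handy; with AES-NI the raw cipher throughput is usually well past what the storage network will push, though obviously the numbers vary by CPU:

# rough AES-GCM throughput via OpenSSL's EVP interface (picks up AES-NI where available)
openssl speed -evp aes-128-gcm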

Erwin
Feb 17, 2006

Moey posted:

Speaking of Nimble, we just got quoted for some equipment and they are doing some real good deals on hardware right now.

Their stock dropped 50% in November after bad earnings, and it's even lower now. They're probably desperate for sales, and I hope they don't get acquired.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

H110Hawk posted:

https://www.pagerduty.com/blog/security-fault-tolerance/

This is a great article. To be honest, encryption in motion is not the performance penalty it is made out to be on modern CPUs. The initial handshake is the most expensive part, but you only do that once a day at most.

How efficiently IPSec works depends on the implementation. But in any case, storage arrays will generally be driving far more throughput to disparate clients than something like a point-to-point tunnel. The overhead of encrypting hundreds of MB per second worth of packets to dozens of distinct clients can drive CPU utilization and latency up. Modern arrays target sub-millisecond latency, so additional latency in the path can cause a large proportional increase.

And, of course, most arrays don't support IPsec on box so you'd be looking at inline hardware encryption devices anyway.

YOLOsubmarine fucked around with this message at 23:44 on Jan 12, 2016

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Erwin posted:

Their stock dropped 50% in November after bad earnings, and it's even lower now. They're probably desperate for sales, and I hope they don't get acquired.

Yeah I saw that, figured they are trying to get larger deployments out there to save face.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Moey posted:

Yeah I saw that, figured they are trying to get larger deployments out there to save face.

They need revenue growth, and it's not coming from the enterprise, which means driving volume in the mid-market.

TeMpLaR
Jan 13, 2001

"Not A Crook"

NippleFloss posted:

Storage traffic should be segregated onto a secured, unrouted VLAN, so encryption in motion for it is usually not necessary unless it needs to leave the datacenter. NFS supports encryption natively, and IPSec can be run in software, but you're going to pay enough of a performance penalty that it's usually a bad idea for storage traffic that you want to be low latency. Hardware IPSec encryption end points would be the best option if you have to do it for some reason.

Also, make sure you have a handle on key management before you start using SEDs.

Yeah, storage traffic is already on a secured unrouted VLAN (a whole bunch of them depending on what environment it is). I checked out some hardware encryption endpoints but they don't do block, only file. Going with KeySecure for the key management. Glad to hear I didn't really miss too much from what it sounds like. Thanks.

Langolas
Feb 12, 2011

My mustache makes me sexy, not the hat

Erwin posted:

Their stock dropped 50% in November after bad earnings, and it's even lower now. They're probably desperate for sales, and I hope they don't get acquired.

Speaking of Nimble

http://www.businesswire.com/news/home/20160107005043/en/INVESTOR-ALERT-Investigation-Nimble-Storage-Announced-Law

I really like their product but me thinks something fishy is going on there.

Erwin
Feb 17, 2006

Langolas posted:

Speaking of Nimble

http://www.businesswire.com/news/home/20160107005043/en/INVESTOR-ALERT-Investigation-Nimble-Storage-Announced-Law

I really like their product but me thinks something fishy is going on there.

Eh, that happens every time a notable stock drops after earnings. It's the stock market equivalent of ambulance chasers and might as well be an ad saying "have you or a loved one been injured by NMBL?" Just google "investigation on behalf of investors."

Amandyke
Nov 27, 2004

A wha?

KennyG posted:

7.2.0.4 with 6 x410 nodes with GNA.

Our nodes are based around 3TB SEDs with 128GB of RAM and 2 SSD drives for GNA.

Would you mind sharing your file pool policies and smartpool settings?

KennyG
Oct 22, 2002
Here to blow my own horn.
Not at all.

FilePoolSettings posted:


CLUSTER-5# isi filepool policies list
Name Description
----------------
----------------
Total: 0

CLUSTER-5# isi filepool default-policy view
Set Requested Protection: default
Data Access Pattern: concurrency
Enable Coalescer: True
Data Storage Target: anywhere
Data SSD Strategy: metadata
Snapshot Storage Target: anywhere
Snapshot SSD Strategy: metadata
Cloud Pool: -
Cloud Compression Enabled: -
Cloud Encryption Enabled: -
Cloud Data Retention: -
Cloud Incremental Backup Retention: -
Cloud Full Backup Retention: -
Cloud Accessibility: -
Cloud Readahead: -
Cloud Cache Expiration: -
Cloud Writeback Frequency: -

Smartpool Settings posted:


CLUSTER-5# isi storagepool settings view
Automatically Manage Protection: files_at_default
Automatically Manage Io Optimization: files_at_default
Protect Directories One Level Higher: Yes
Global Namespace Acceleration: disabled
Virtual Hot Spare Deny Writes: Yes
Virtual Hot Spare Hide Spare: Yes
Virtual Hot Spare Limit Drives: 1
Virtual Hot Spare Limit Percent: 0
Global Spillover: anywhere
SSD L3 Cache Default Enabled: No

the spyder
Feb 18, 2011
What's your utilization? Are your clients primarily SMB2/2.1?

Amandyke
Nov 27, 2004

A wha?

KennyG posted:

Not at all.

What is your current SSD utilization?

KennyG
Oct 22, 2002
Here to blow my own horn.
95% SMB 2/2.1 by volume, the rest are vSphere hosts using NFS as tertiary tier storage.

SSD is for metadata; how do I check the utilization levels? We are currently going through the SmartFail process for the new firmware FCO issue affecting the SED SSDs.

the spyder
Feb 18, 2011
Running InsightIQ?

Amandyke
Nov 27, 2004

A wha?

KennyG posted:

95% SMB 2/2.1 by volume, the rest are vSphere hosts using NFS as tertiary tier storage.

SSD is for metadata; how do I check the utilization levels? We are currently going through the SmartFail process for the new firmware FCO issue affecting the SED SSDs.

It should show in the output of isi stat -d or in the GUI.

Zorak of Michigan
Jun 10, 2006

Does anyone have Ceph experience? I've been interested in it for a while now, but it's not something anyone at work would get interested in unless there was native VMware or Windows support. I'm pondering assuaging my curiosity and my need for more NAS space by setting up a very small Ceph cluster and a Ceph->NFS gateway in my basement. I know it wouldn't be close to enterprise standards of redundancy, since it would have just a single monitor running in a KVM guest, but is there any reason I couldn't do it?

Zorak of Michigan fucked around with this message at 23:39 on Feb 1, 2016

Docjowles
Apr 9, 2009

Go for it. A former coworker of mine had a giant boner for ceph and set up a home server with like 12 cheap consumer disks running it to store all his :filez:

He eventually got poached by Time Warner to help build their multi-petabyte ceph cluster and now makes $alot. Ceph is pretty cool and there are some really huge and interesting deployments of it out there.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

If you have Ceph experience or are interested in Ceph and have a resume, send me a PM.

Scuttlemonkey
Sep 19, 2006
Forum Monkey

Zorak of Michigan posted:

Does anyone have Ceph experience? I've been interested in it for a while now, but it's not something anyone at work would get interested in unless there was native VMware or Windows support. I'm pondering assuaging my curiosity and my need for more NAS space by setting up a very small Ceph cluster and a Ceph->NFS gateway in my basement. I know it wouldn't be close to enterprise standards of redundancy, since it would have just a single monitor running in a KVM guest, but is there any reason I couldn't do it?

Hey, Ceph community monkey here. While it's really easy to set up a tiny Ceph cluster (and the tech is wildly awesome...I'm on board with the kool-aid), there is a fairly big learning curve between "tiny proof-of-concept to play with" and "usable cluster that can grow as you do," so I'd be careful about overcommitting. That said, there are quite a few different ways to play with Ceph, from a (slightly aging) qemu image to running in Docker, as well as pretty much every major deployment and orchestration framework (Chef, Puppet, Ansible, Juju, several Salt options).

WRT VMWare -- there have been a couple of people that have home-rolled Ceph-backed VMWare infrastructure setups, but the mainline support definitely isn't there. I know Intel is working on a VMWare integration, and there are rumblings of other major folks doubling down on that with them. Just hard to convince community FOSS fanatics to write code for a proprietary solution sometimes. It might be worth keeping on your radar though.

Windows support is a bit more developed, with quite a few people having different ways of serving content to Windows machines (NFS/pNFS, FS, object, etc.). I'd say the best approach varies wildly depending on what you want to do with it. There still isn't a "run Ceph ON Windows" option though, so in that regard it's pretty much nil.

As far as whether or not to do it, without knowing your skill level, I'd say jump in with both feet on setting up a Ceph cluster and throwing some data at it. However, unless you are prepared to really dig in and do your homework beforehand, I'd suggest caution on how much you rely on it until you are comfortable. There are a HUGE number of ways to tune (read: screw up performance), balance (read: put your cluster in a damaged state), or use a Ceph cluster. A familiarity with the distributed storage paradigm, and sometimes Ceph in particular, is often required to really get out of the gate without a few false starts.

That said, I'm a huge proponent of Ceph even beyond the whole "cutting me a paycheck" thing. If I stopped working at Red Hat tomorrow, I'd still be proselytizing Ceph use and spouting "The Future of Storage" in my sleep, so definitely check it out. Feel free to hit me up if you have questions about where to start or resources that might be able to help you beyond what I've linked here.
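
If you just want a toy cluster to throw data at, the usual quick-start path right now is ceph-deploy from an admin box against a few test hosts. Very rough sketch below; hostnames and the data disk are placeholders, and you should trust the quick-start docs over my memory of the exact arguments:

# define a new cluster with one monitor host
ceph-deploy new mon1
# install the Ceph packages on the monitor and both OSD hosts
ceph-deploy install mon1 osd1 osd2
# create the initial monitor(s) and gather the keys
ceph-deploy mon create-initial
# prepare and activate one OSD per data host (whole-disk example)
ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb
# push the config and admin keyring out, then check cluster health
ceph-deploy admin mon1 osd1 osd2
ceph health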

Zorak of Michigan
Jun 10, 2006

Scuttlemonkey posted:

As far as whether or not to do it, without knowing your skill level, I'd say jump in with both feet on setting up a Ceph cluster and throwing some data at it. However, unless you are prepared to really dig in and do your homework beforehand, I'd suggest caution on how much you rely on it until you are comfortable. There are a HUGE number of ways to tune (read: screw up performance), balance (read: put your cluster in a damaged state), or use a Ceph cluster. A familiarity with the distributed storage paradigm, and sometimes Ceph in particular, is often required to really get out of the gate without a few false starts.

Thanks for the feedback! My skill level is weird because I've been a UNIX guy for 20 years now but my role gives me very limited hands-on experience. I'm effectively a tier 3 guy for weird performance problems but I've never actually loaded a Linux system from bare metal. Back in the 1990s I was an AFS admin but I haven't done distributed storage since then. The good news is that my performance needs are trivial by modern standards (support a max of 3 concurrent HD video streams through the Ceph->NFS gateway box) and I can afford some false starts since I'll keep the first ~5TB of data live on other systems for a while. I'm thinking that I'll scale out to two data servers with just 2 data disks each and make sure they're stable and then begin stacking them up.

Question I'm pondering as I design this scheme: would I be better off trying to use the Ceph file system or a Ceph block device? If I read the docs right, the file system means I need metadata servers, and I'm not sure if it would be kosher to put them in the same KVM guest as my monitor daemons. On the other hand, the file system implies that if I experience data loss, it will be localized to specific files, whereas data loss in the objects making up a block device could mean the entire block device is trashed.
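
For reference, the two access paths I'm weighing look roughly like this (pool, image, and mount points are made up, and the CephFS route additionally needs an MDS running somewhere):

# block device route: create and map an RBD image, then put an ordinary filesystem on it
rbd create nas/video01 --size 2048000
rbd map nas/video01
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /export/video

# file system route: mount CephFS directly through the kernel client
mount -t ceph mon1:6789:/ /export/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret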

Cidrick
Jun 10, 2001

Praise the siamese

There's an outside chance I'll need to set up some manner of scalable storage backend for Cloudstack, and Ceph seems to be a popular option for backing VMs. Do you have any recommendations for reference architectures that I can look at to do some light research, from a hardware and network equipment standpoint? One of the concerns I have is having enough of a pipe for all the storage cross-chatter, but most of the network designs I'm familiar with would require going from top-of-rack Nexus 2Ks to middle-of-row 5Ks in order to keep costs down, which would likely get saturated real fast at scale.

H110Hawk
Dec 28, 2006

Cidrick posted:

There's an outside chance I'll need to set up some manner of scalable storage backend for Cloudstack, and Ceph seems to be a popular option for backing VMs. Do you have any recommendations for reference architectures that I can look at to do some light research, from a hardware and network equipment standpoint? One of the concerns I have is having enough of a pipe for all the storage cross-chatter, but most of the network designs I'm familiar with would require going from top-of-rack Nexus 2Ks to middle-of-row 5Ks in order to keep costs down, which would likely get saturated real fast at scale.

Look at low cost 10G Clos architecture. https://en.wikipedia.org/wiki/Clos_network

Remember you don't have to buy brand name optics at 10-100x the price. Call someone like Prolabs. Arista and Juniper have some decent offerings in this line, or if you're feeling really frisky, Cumulus.

Spudalicious
Dec 24, 2003

I <3 Alton Brown.
I've got a project to phase out a Compellent array being used as primary storage for our VMWare cluster. I have limited funding available, so I'm looking away from Dell, HP, Lenovo, IBM and towards use-your-own-disk solutions so that I don't get into the situation we are in with the Compellent - spending money to spend money so that we're allowed to spend some money. It makes sense for some organizations - not ours.

Anyway I think we're going with a Synology RS3614XS+ system, with 4TB of SSD storage as cache and 32TB of regular drive space for lower-frequency-of-access data. Does anyone have experience with Synology's SSD caching features and interoperability with VMware? We have used Dell's data "tiering" or whatever and that seemed to work pretty well, but I'm curious if anyone here has used this feature before. We have this specc'd out for around $10k, which seems a lot more reasonable than similar offerings from Dell hitting $17-25k for similar feature sets and size.

Thanks Ants
May 21, 2004

#essereFerrari


It's dogshit, there's no support in the event of a problem, any claims of poor performance will be shrugged away with "dunno", and you have to down the box to perform software updates, of which there are loads.

How many hosts are you trying to provide storage for? At the barest minimum I'd try and stretch to a SAS SAN like a Dell MD3, HP MSA 2040, or potentially a VNXe1600 if you need to use iSCSI.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Spudalicious posted:

I've got a project to phase out a Compellent array being used as primary storage for our VMWare cluster. I have limited funding available, so I'm looking away from Dell, HP, Lenovo, IBM and towards use-your-own-disk solutions so that I don't get into the situation we are in with the Compellent - spending money to spend money so that we're allowed to spend some money. It makes sense for some organizations - not ours.

Anyway I think we're going with a Synology RS3614XS+ system, with 4TB of SSD storage as cache and 32TB of regular drive space for lower-frequency-of-access data. Does anyone have experience with Synology's SSD caching features and interoperability with VMware? We have used Dell's data "tiering" or whatever and that seemed to work pretty well, but I'm curious if anyone here has used this feature before. We have this specc'd out for around $10k, which seems a lot more reasonable than similar offerings from Dell hitting $17-25k for similar feature sets and size.

We're probably going to need an idea of what you're using this for to make any suggestions. But in general I'd recommend Synology for production if you often think "eh, who really needs this data in a timely manner, or indeed, at all!?"

Internet Explorer
Jun 1, 2005





If you have a Compellent array and are thinking of a prosumer NAS to replace it, either you were vastly oversold the first time around or you are vastly underestimating your needs the second time around.

Potato Salad
Oct 23, 2014

nobody cares


I have three quad-hypervisor VRTX boxes each with full arrays of drives. These drives were purchased as part of boxes w/o a real plan for how their storage would be served. The intent is for them to store a first tier of backups. The VRTXes were purchased more as blade chassis than remote office branch devices. Meh, not the worst fuckup ever. They handle our research load fine.

So, I need to serve up three separate VRTX drive pools. With software. Ideally, I'd get some redundancy between them automatically. Tiered storage is a dead technology. Before I start going to vendors, what, if any, recommendations would this thread have? Set up Nutanix on VMs and serve 'em up? VMware v$AN with insane licensing fees?

I believe the VRTXes our hardware dude bought use H710 controllers, since he wasn't given any data on how the storage, as opposed to compute, would be used in the future. http://www.dell.com/learn/us/en/04/campaigns/dell-raid-controllers

Potato Salad fucked around with this message at 07:23 on Feb 11, 2016

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Internet Explorer posted:

If you have a Compellent array and are thinking of a prosumer NAS to replace it, either you were vastly oversold the first time around or you are vastly underestimating your needs the second time around.

This. An old director of mine was hooked on buying some QNAP units and filling them with SSDs. They worked fine until the iSCSI service poo poo the bed (happened too often).

I bought some Synology RS2416RP+ units at the end of the year with excess budget money. While they work fine for slowly serving up some data, putting load on the box via iSCSI results in some pathetic performance. From doing some research, NFS seems to perform better, so I'll be testing that out soon.

I would not use units like this for any sort of real production.
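
When I do test it, I'll probably just point fio at a file on each datastore and compare; something along these lines (path, size, and runtime are arbitrary):

# 70/30 random read/write at 4k, moderate queue depth, direct I/O against the datastore
fio --name=synology-test --filename=/mnt/nfstest/fio.dat --size=10G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 --runtime=120 --time_based --group_reporting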

Thanks Ants
May 21, 2004

#essereFerrari


Also you can get fairly close to Synology pricing with a NetApp FAS2520 if you pick a bundle and push on the pricing, maybe lining it up with the end of a quarter.

Close as in "taking into account the fact it's a far better supported product", not literally the same pricing, that would be insane.

Moey posted:

I bought some Synology RS2416RP+ units at the end of the year with excess budget money. While they work fine for slowly serving up some data, putting load on the box via iSCSI results in some pathetic performance. From doing some research, NFS seems to perform better, so I'll be testing that out soon.

I would not use units like this for any sort of real production.

I have inherited a client running really lightweight VM workloads using a similar model Synology with NFS datastores, and it's poo poo. It doesn't take a lot to choke the box and latency goes through the roof.

Thanks Ants fucked around with this message at 21:22 on Feb 11, 2016

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
Why are our vm's so slow!?!


adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

mayodreams posted:

Why are our vm's so slow!?!



AHHAAHAHAHAH. My end users have been known to complain if we hit 10ms for more than a few seconds.
