Docjowles
Apr 9, 2009

goobernoodles posted:

Yeah, just 1Gb. I currently have Netgear GSM7248 and GS724AT switches on the rack that the hosts, iSCSI, and other things plug into. I'm hoping to get a full infrastructure refresh (servers, switches, SAN) in the budget for next year, but I want to go through our existing setup and redo the iSCSI networking as a stopgap until then. We've had occasional, ongoing issues with this setup and I really want to get the iSCSI traffic segregated.

Sounds like your requirements are modest and really any vendor would work. Is there an OS you're more comfortable working in (IOS, JunOS, etc)? If so, go with that one. If not, just go with the cheapest deal you can get from a reputable vendor that meets your needs. Moey's suggestion is good.


YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Get something with decent-sized port buffers. Stuff like the Cisco 2960 looks like it will do the job, but it falls over at pretty modest throughput levels due to very small port buffers. As a result, storage traffic can really crater due to pause frames and retransmits.
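For a rough sense of why shallow buffers hurt iSCSI, here's a back-of-envelope sketch. The buffer sizes are made-up illustrative figures, not 2960 specs:

```python
# Back-of-envelope math (not vendor specs): how long a switch port buffer
# can absorb a microburst before it starts dropping frames or sending
# pause frames. Buffer sizes below are illustrative assumptions.

def burst_absorb_time_us(buffer_bytes, ingress_gbps, egress_gbps):
    """Microseconds until the buffer fills when ingress exceeds egress,
    e.g. two iSCSI initiators bursting at a single target port."""
    excess_bytes_per_sec = (ingress_gbps - egress_gbps) * 1e9 / 8
    return buffer_bytes / excess_bytes_per_sec * 1e6

# A shallow per-port buffer (~180 KB) vs a deeper one (~1.5 MB), with
# 2 Gbps of ingress funneling into a single 1 Gbps egress port:
print(burst_absorb_time_us(180_000, 2, 1))    # ~1440 us
print(burst_absorb_time_us(1_500_000, 2, 1))  # ~12000 us
```

A couple of milliseconds of absorption either way — which is why sustained storage bursts turn into pause frames and retransmits on shallow-buffered switches.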

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

goobernoodles posted:

Yeah, just 1Gb. I currently have Netgear GSM7248 and GS724AT switches on the rack that the hosts, iSCSI, and other things plug into. I'm hoping to get a full infrastructure refresh (servers, switches, SAN) in the budget for next year, but I want to go through our existing setup and redo the iSCSI networking as a stopgap until then. We've had occasional, ongoing issues with this setup and I really want to get the iSCSI traffic segregated.

On another note, are there any relatively cost effective SANs that allow for mixing flash and mechanical drives that I should look into? I'd like to be able to put the SQL databases for an application server or two and an RDS server on flash and put the rest on cheaper disks.

Just look for something with VLAN support, given your environment and what you need to do; a Cisco SG will do fine...

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

Kaddish posted:

I'm seriously considering consolidating our entire VMware environment (about 45TB) to a Pure FA-420. I can get 60TB usable (assuming 5:1 compression) for about 240k. Anyone have any first hand experience? It seems like a solid product and perfect for VMware.

This is some reply necromancy, but we just replaced our FA-320s with FA-420s and added another 12TB shelf to take us to 23TB raw capacity. We do see compression ratios of around 6:1 over the entire array, but I don't recall what it is on the VMFS LUNs specifically.
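The economics in the quoted Pure figures are easy to sanity-check; a quick sketch using only the numbers from the posts above (the arithmetic is the only thing added):

```python
# Effective capacity and $/GB at a given data reduction ratio, using
# figures quoted above (60 TB usable for ~$240k; 23 TB raw at ~6:1).

def effective_tb(raw_tb, reduction_ratio):
    """Logical capacity you can store on raw_tb at a given reduction."""
    return raw_tb * reduction_ratio

def dollars_per_effective_gb(price_usd, effective_capacity_tb):
    return price_usd / (effective_capacity_tb * 1000)

print(effective_tb(23, 6))                    # 138 (TB effective)
print(dollars_per_effective_gb(240_000, 60))  # 4.0 ($/GB)
```

The catch, of course, is that the $/GB figure is only as good as the reduction ratio your actual data achieves.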

Richard Noggin
Jun 6, 2005
Redneck By Default
We use Catalyst 3750-X switches in small storage environments with great success. We even break the rules and share duties with L3 routing.

Thanks Ants
May 21, 2004

#essereFerrari


I ended up with HP 2910-al switches for iSCSI and they were fine.

bigmandan
Sep 11, 2001

lol internet
College Slice
Got our storage arrays all racked and ready to go!



I've asked this before, but has anyone had any experience with Dell Compellent synchronous Live Volumes? I'd like to hear some experiences with using it in a production environment.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Are they finally putting the controller heads into Dell Chassis?

KS
Jun 10, 2003
Outrageous Lumpwad

FISHMANPET posted:

Are they finally putting the controller heads into Dell Chassis?

The SC8000 controllers are in Dell chassis and have been out for over two years. This looks like the new-ish 4020 that integrates the controllers and a disk shelf into one 2u unit.

bigmandan posted:

I've asked this before, but has anyone had any experience with Dell Compellent synchronous Live Volumes? I'd like to hear some experiences with using it in a production environment.

Synchronous replication is a big-boy feature and you need to make sure your network is rock solid. Remember, the remote array has to acknowledge the write before it completes. Any kind of latency and you can kiss performance goodbye. A single storage switch plus a small SAN and talk of sync replication are usually not things that go together well.

There are very specific use cases for it, like split metro clusters. Async replication is good enough for DR and backup. What's your use case?
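To make the latency point concrete, a minimal model (numbers are illustrative): with synchronous replication the inter-site round trip sits directly in the write path; with async it doesn't.

```python
# Why sync replication punishes latency: a write isn't acknowledged until
# the remote array confirms it, so the inter-site round-trip time (RTT)
# adds directly to every write. Numbers are illustrative.

def sync_write_ms(local_commit_ms, inter_site_rtt_ms):
    return local_commit_ms + inter_site_rtt_ms

def async_write_ms(local_commit_ms):
    # async: the write completes locally; replication trails behind
    return local_commit_ms

# A 0.5 ms local commit with a 5 ms metro round trip:
print(sync_write_ms(0.5, 5.0))   # 5.5 ms per write
print(async_write_ms(0.5))       # 0.5 ms per write
```

A single jittery link in that round trip and every write on the array feels it, which is why the network has to be rock solid first.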

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Our newer Compellent trays look to just be 720xd chassis, but our initial kit, from soon after Dell bought Compellent, was still the Supermicro stuff.

bigmandan
Sep 11, 2001

lol internet
College Slice

KS posted:

The SC8000 controllers are in Dell chassis and have been out for over two years. This looks like the new-ish 4020 that integrates the controllers and a disk shelf into one 2u unit.


Synchronous replication is a big-boy feature and you need to make sure your network is rock solid. Remember, the remote array has to acknowledge the write before it completes. Any kind of latency and you can kiss performance goodbye. A single storage switch plus a small SAN and talk of sync replication are usually not things that go together well.

There are very specific use cases for it, like split metro clusters. Async replication is good enough for DR and backup. What's your use case?

The picture I have only shows one cabinet. Duplicate everything there (minus one server) in another cabinet and that's our initial setup (3 hosts, 2 switches, 2 storage arrays). Eventually the second storage array will be offsite (with multiple 10 Gbps links), but we are waiting for the DR site to be built. Once that's done, one of the storage arrays will move over and then we'll add 3 more hosts in a new VM cluster. The idea is that we want to be able to fail over to the DR site if there is ever a communications outage to our main data centre (we are building a redundant ring within the city). Network latency in general should not be an issue as we can easily provision 10 or 40 Gbps links if needed (we prefer 10 right now because 40 Gbps optics are expensive as gently caress).

One of the reasons I was asking about synchronous live volumes was:

"Since the disk signatures of the datastores remain the same between sites, this means that during a recovery, volumes do not need to be resignatured, nor do virtual machines need to be removed and re-added to inventory." (Dell Compellent Best Practices with VMware vSphere 5.x)

I understand we can get by with async replication, but the above feature seems pretty enticing, as it would reduce administration headaches when dealing with a failover.

Also I think I need to get out and exercise. Racking the SC220 disk trays gave me quite the workout.

KS
Jun 10, 2003
Outrageous Lumpwad
For async, a product like SRM breaks the replication relationship and re-signatures the datastores automatically. It also has far more robust DR handling than a stretched cluster.

Here's the VMware whitepaper with metro cluster requirements. Check out page 12 for the "When to Use/When not to use" discussion.

There is also an entry in the VMware Storage HCL for "iscsi metro cluster storage." It appears the Compellent is not on it.

bigmandan
Sep 11, 2001

lol internet
College Slice

KS posted:

For async, a product like SRM breaks the replication relationship and re-signatures the datastores automatically. It also has far more robust DR handling than a stretched cluster.

Here's the VMware whitepaper with metro cluster requirements. Check out page 12 for the "When to Use/When not to use" discussion.

There is also an entry in the VMware Storage HCL for "iscsi metro cluster storage." It appears the Compellent is not on it.

Thanks for this link!

KennyG
Oct 22, 2002
Here to blow my own horn.

orange sky posted:

Holy poo poo, nice. I wish my company was selling you that :10bux:

Because we didn't get a good deal?

devmd01
Mar 7, 2006

Elektronik
Supersonik
A company I interviewed with yesterday talked about buying parts for their 5+ year old EMC CX380 on eBay, and the replacement brand-new SAN didn't make the project list for next year's budget. Pass. :stonk:

theperminator
Sep 16, 2009

by Smythe
Fun Shoe

Jadus posted:

Would you mind expanding on this? I've just recently purchased a PS6500ES and am very happy with it, but have no experience beyond that.

I've got about 30 various units, so maybe it's just that I have a much higher chance of failure. I've had 4 controller failures and a bunch of firmware-related issues in the past month alone, including the "Resets every 248 days" bug that was fixed in the latest firmware.
I've lost a lot of sleep in the last year; the v7 firmwares have been horrible, while our few groups that are still running 6.x have been flawless for years.

Now we've apparently hit another firmware bug that has resulted in a couple of controller panics, and we're waiting on Engineering to figure out what's causing it.

theperminator fucked around with this message at 13:46 on Nov 18, 2014

KennyG
Oct 22, 2002
Here to blow my own horn.

devmd01 posted:

A company I interviewed with yesterday talked about buying parts for their 5+ year old EMC CX380 on eBay, and the replacement brand-new SAN didn't make the project list for next year's budget. Pass. :stonk:

Sounds like the company equivalent of telling someone on a first date that you live with your mother. Some employers are just looking for rogues.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


devmd01 posted:

A company I interviewed with yesterday talked about buying parts for their 5+ year old EMC CX380 on eBay, and the replacement brand-new SAN didn't make the project list for next year's budget. Pass. :stonk:

Who tells you that in an interview? That's usually the poo poo you find out the first week. I could maybe see "we have a 5 year old SAN we're looking to replace; it's not in the budget, so one of your first tasks will be to price one out." Then they tell you they don't have the budget after you spec it.

Took a year and a half, but I finally got budget to replace my completely full array. Switching vendors too :toot: It's small, but so is our budget; it's over 10% of IT's operating budget for the year.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

pixaal posted:

Who tells you that in an interview? That's usually the poo poo you find out the first week. I could maybe see "we have a 5 year old SAN we're looking to replace; it's not in the budget, so one of your first tasks will be to price one out." Then they tell you they don't have the budget after you spec it.

Took a year and a half, but I finally got budget to replace my completely full array. Switching vendors too :toot: It's small, but so is our budget; it's over 10% of IT's operating budget for the year.
They are probably looking for a goony hacker that won't flinch at that poo poo.

Thanks Ants
May 21, 2004

#essereFerrari


adorai posted:

They are probably looking for a goony hacker that won't flinch at that poo poo.

"Oh wow a chance to run OpenFiler in production on some DL380s I got off eBay! Finally I can save someone else's money and it will only cost me some of my worthless time."

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

pixaal posted:

Took a year and a half, but I finally got budget to replace my completely full array. Switching vendors too :toot: It's small, but so is our budget; it's over 10% of IT's operating budget for the year.
Most vendors will finance it at zero percent. Makes it easier on the budget.

KennyG
Oct 22, 2002
Here to blow my own horn.

Thanks Ants posted:

"Oh wow a chance to run OpenFiler in production on some DL380s I got off eBay! Finally I can save someone else's money and it will only cost me some of my worthless time."

The problem with this in an employment context is that unless you get some equity, all the blood you spill to get the savings disappears into the CEO's private jet.

When it comes time to ask for a raise you hear "we don't have the budget for that; see, we spent $50 on your SAN last month, what more do you want?"

Just move on.

devmd01
Mar 7, 2006

Elektronik
Supersonik
I refuse to support any SAN in a production environment without a maintenance contract in place, unless it's something they really, really don't give a poo poo about. Ultimately it's my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage.

Richard Noggin
Jun 6, 2005
Redneck By Default

devmd01 posted:

I refuse to support any SAN in a production environment without a maintenance contract in place, unless it's something they really, really don't give a poo poo about. Ultimately it's my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage.

This, 1000x this. Unless you have a team with vendor-level knowledge of the product and keep spares, this is gospel.

Aquila
Jan 24, 2003

Richard Noggin posted:

This, 1000x this. Unless you have a team with vendor-level knowledge of the product and keep spares, this is gospel.

This is why I bought a Hitachi SAN. In many ways it's been a nightmare, but it keeps serving my data, keeps the company up, and helps me keep my job.

evol262
Nov 30, 2010
#!/usr/bin/perl

devmd01 posted:

I refuse to support any SAN in a production environment without a maintenance contract in place, unless it's something they really, really don't give a poo poo about. Ultimately it's my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage.

I'm not sure that jibes with your username.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Got a Dell rep on the phone to say "wow, you guys are crazy, you know you all should do X/Y/Z, right?"

SAN upgrades when you have <8hr notice for 25K customers are fun; glad I sperged myself so much over SAN/IP poo poo.

8 controllers and 125TB of upgrades in 2 hours, plus an EMC battery back-plane failure, plus Hooters; poo poo owns.

KennyG
Oct 22, 2002
Here to blow my own horn.
Since this thread needs some action: how many people are running 16Gb FC?

I have seen a lot of marketing chatter in this area, and with AFAs starting to become more than an edge case, they can easily supply the throughput needed. Obviously specs march on, but has anyone seen it in the wild?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

KennyG posted:

Since this thread needs some action: how many people are running 16Gb FC?

I have seen a lot of marketing chatter in this area, and with AFAs starting to become more than an edge case, they can easily supply the throughput needed. Obviously specs march on, but has anyone seen it in the wild?

I'd say it's still an edge case. Two 8Gb links provide 2GB/s of throughput, which is more than enough for most use cases, especially when you're talking single-array performance rather than scale-out. And SSD arrays haven't really increased throughput substantially over disk arrays anyway.

I've seen it for ISLs, but 4 or 8 Gb is still what I see for host connectivity.
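For reference, the usable numbers work out roughly as follows once line encoding is factored in (nominal figures: 8Gb FC uses 8b/10b encoding, 16Gb FC the more efficient 64b/66b; real-world throughput varies):

```python
# Nominal usable bandwidth of an FC link: line rate scaled by encoding
# efficiency. 8GFC runs 8.5 Gbaud with 8b/10b; 16GFC runs ~14.025 Gbaud
# with 64b/66b.

def fc_usable_mb_s(line_rate_gbaud, data_bits, total_bits):
    return line_rate_gbaud * 1e9 * data_bits / total_bits / 8 / 1e6

print(fc_usable_mb_s(8.5, 8, 10))       # ~850 MB/s per 8Gb link
print(2 * fc_usable_mb_s(8.5, 8, 10))   # ~1700 MB/s for two links
print(fc_usable_mb_s(14.025, 64, 66))   # ~1700 MB/s per 16Gb link
```

So two 8Gb links land in the same ballpark as one 16Gb link, which is partly why 16Gb host connectivity is a hard sell.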

Aquila
Jan 24, 2003

Netapp FAS 2554 installed and NFS'ing in maybe 2 hours. Compared to my Hitachi this thing is so easy to use.

e: I may go to 16Gb FC with my next Hitachi SAN if the stars align. As much for latency as throughput.

theducks
Feb 13, 2007
Duckman

Aquila posted:

Netapp FAS 2554 installed and NFS'ing in maybe 2 hours. Compared to my Hitachi this thing is so easy to use.

Clustered ONTAP, or 7 mode? If Clustered ONTAP, are you running 8.3RC1 with disk partitioning to save root aggregates?

theducks
Feb 13, 2007
Duckman

NippleFloss posted:

I'd say it's still an edge case. Two 8Gb links provide 2GB/s of throughput, which is more than enough for most use cases, especially when you're talking single-array performance rather than scale-out. And SSD arrays haven't really increased throughput substantially over disk arrays anyway.

I've seen it for ISLs, but 4 or 8 Gb is still what I see for host connectivity.

I did a 2 x FAS8040 install (dual sites, ROADM connectivity, including a shelf of SSDs).. customer also wanted a quote for 4 x 16Gb FC switches.

We got pricing, standard discounts, etc.

Their response was "how do the switches cost as much as the filer?"
:captainpop:

Rekka
Feb 1, 2004

oh god, it's.... THE DOOOO!
Hey guys,

I work for a pretty big movie production studio in Japan. Our department within the studio has requested my help on finding a solution to their problem...

Problem:

We are working on multiple large, high-bitrate, high-FPS 4K data files. Most of our content going forward is going to be done in 4K. Obviously, as a movie production studio, we will be working on CG, but we also do most of our work in After Effects. We are talking about hundreds of gigabytes of data for just seconds of movie footage. Our first project this year showed that we have a major bottleneck in our current setup. We have a large storage server in-house that we use to store backups and all our data. When we record footage we need to move it onto the server, and after that we need to work on it. Our current setup means that we can't work on the data directly on the server.

First we have to copy all the data to our PCs, work on it there and then copy it back, but when the size of the data for just 2 minutes of footage is over 10TB, you can see how the time lost copying data back and forth is pretty inefficient.
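The scale of that copy tax is easy to estimate. A wire-speed best case, ignoring protocol overhead and disk bottlenecks (link speeds here are just assumptions for illustration):

```python
# Best-case time to move a chunk of footage over a network link,
# ignoring protocol overhead and disk bottlenecks.

def copy_hours(data_tb, link_gbps):
    seconds = data_tb * 1e12 * 8 / (link_gbps * 1e9)
    return seconds / 3600

# 10 TB out to a workstation and back again:
for gbps in (1, 10, 40):
    print(f"{gbps:>2} GbE: ~{2 * copy_hours(10, gbps):.1f} h round trip")
```

Even at 10 GbE the round trip is several hours per edit cycle, which is the argument for working on the data in place.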

My boss has asked me to research, build a budget and get the ball moving on a way to make a fast, high capacity storage server that doesn't buckle when multiple people connect.

Ideally what we would like is a server that allows normal users to connect to it via ethernet just as you would a normal NAS storage device, like we have now. Then we would also like to have the ability for 5 or 6 PCs that will be dedicated to compositing and working on the data to connect directly to it via a fibre channel.

The server needs to not buckle when those 5 or 6 people are doing work on it, and it also needs the ability to share the data with normal users via ethernet (with priority given to data transfer via optical cable).

I've done a bit of research, and it looks like if we bought something like a PowerVault SAN from Dell, it could connect to our existing network and be seen by regular users as a NAS, and then also be directly connected to a PC and be seen as DAS.

That is according to this; http://www.smallnetbuilder.com/nas/nas-howto/31485-build-your-own-fibre-channel-san-for-less-than-1000-part-1

The diagram seems to suggest that to be the case.

I guess I have a few questions regarding the feasibility, or my understanding, of this: can you connect 5 or 6 PCs directly to a PowerVault SAN via optical like we want, or does it not work like that?

Given our requirements, can you recommend something other than a Dell (that might be available in Japan) or suggest a configuration for a Dell Powervault?

Vanilla
Feb 24, 2002

Hay guys what's going on in th

I'm on the train so can't give too detailed a reply, but: I used to look after a lot of media accounts in my old role, and what you are talking about is 100% the use case for EMC Isilon.

Here's a big old list of users from years ago: http://www.storagenewsletter.com/rubriques/business-others/apple-isilon-itunes/

So it's a node-based file architecture. Basically you add 'nodes', and each node adds compute, storage, and network capacity, so it gets bigger and faster the larger it grows. It's used by pretty much every media house for exactly what you describe. Dead easy to use.

Ataraxia
Jun 15, 2001

Champion of nothing.

Vanilla posted:

EMC isilon.

Here's a big old list of users from years ago

Heh, my company has 4 from that list as clients, either replacing or installed alongside their Isilons.

I would send you a PM Rekka, but you don't have the button.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Rekka posted:

NAS and "DAS" stuff

What you want is functionally impossible. You can have arrays that serve both NAS data and FC data at the same time, but nothing can serve THE SAME data as both NAS and FC. One is block level and one is file level, one is meant for shared access and one isn't, except in a clustered environment.

There's nothing special about FC or DAS that makes it faster than NAS, particularly when your workload is based largely on throughput. Buy a good NAS, build a good, high throughput network for your storage traffic, and you will be fine.

Isilon is a good recommendation for what you're doing, as mentioned above, though there are certainly other possibilities.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

There's nothing special about FC or DAS that makes it faster than NAS
One exception: flock()/lockf() is slow as balls on NFS, but this doesn't affect too many use cases.
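For the curious, the lock calls in question look like this — a POSIX-only sketch against a local file. Over NFS, each acquisition becomes a round trip to the lock manager, which is where the slowness comes from:

```python
import fcntl
import tempfile

def locked_write(path, payload):
    """Write under an exclusive advisory lock; returns bytes written.
    fcntl.flock() is near-free on local filesystems but costly on NFS."""
    with open(path, "wb") as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)      # take the lock
        try:
            return f.write(payload)
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)  # release it

with tempfile.NamedTemporaryFile() as tmp:
    print(locked_write(tmp.name, b"data"))  # 4
```

Throughput-bound media workloads rarely lock at all, which is why this caveat mostly bites databases and mail spools, not video editing.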

Rekka
Feb 1, 2004

oh god, it's.... THE DOOOO!

Ataraxia posted:

Heh, my company has 4 from that list as clients, either replacing or installed alongside their Isilons.

I would send you a PM Rekka, but you don't have the button.

thanks Ataraxia, can you send me an email at dominic2401@gmail.com

Also thanks to everyone for their replies about Isilons, I'm taking a look into it!

Rekka
Feb 1, 2004

oh god, it's.... THE DOOOO!

NippleFloss posted:

What you want is functionally impossible. You can have arrays that serve both NAS data and FC data at the same time, but nothing can serve THE SAME data as both NAS and FC. One is block level and one is file level, one is meant for shared access and one isn't, except in a clustered environment.

There's nothing special about FC or DAS that makes it faster than NAS, particularly when your workload is based largely on throughput. Buy a good NAS, build a good, high throughput network for your storage traffic, and you will be fine.

Isilon is a good recommendation for what you're doing, as mentioned above, though there are certainly other possibilities.

Hmmmm, how many people can be connected via FC at one time? Could we get 30 people or so connected via fibre?


Thanks Ants
May 21, 2004

#essereFerrari


Think of fibre channel like connecting a hard drive's SATA cable to multiple PCs. Your first question should be "how do I stop people overwriting my stuff?" FC can only have a 1:1 relationship between LUNs and hosts unless the host is cluster-aware, which a bunch of video editing machines are unlikely to be.

You're looking for shared storage, i.e. NAS. You can connect to it over 10Gbps fibre, assuming the arrays can keep up, but that's not FC.
