hackedaccount
Sep 28, 2009

Cultural Imperial posted:

three, this is an oracle db right? Has anyone run an oracle statspack to see that the Oracle db isn't doing anything funny?

I think this is a really solid idea. From a UNIX perspective, when Oracle (or any other application) saturates I/O it's often because of lovely queries / applications and not actual limits of the OS, hardware, infrastructure, etc. It isn't always the case, but it's been the culprit way too many times for me. Writing a billion small files, updating tons of metadata too frequently, bad scheduling, or poorly written SQL queries can bring big boxes to their knees.
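
If you want a quick OS-side sanity check before pointing fingers, something like this rough sketch (assuming sysstat's iostat is installed; the column layout shifts a bit between versions) will flag disks sitting near 100% busy so you know to go ask who's issuing all that I/O:

code:
#!/usr/bin/env python3
"""Rough OS-side check: flag disks that look saturated before blaming the
hardware. Assumes sysstat's iostat is installed; treat this as a sketch since
the column layout varies between versions."""
import subprocess

def busy_devices(threshold=90.0):
    # Two 5-second samples; the second reflects current load, not since-boot averages.
    out = subprocess.run(["iostat", "-dxk", "5", "2"],
                         capture_output=True, text=True, check=True).stdout
    block = out.strip().split("\n\n")[-1].splitlines()   # last report only
    util_col = block[0].split().index("%util")
    for line in block[1:]:
        fields = line.split()
        if len(fields) > util_col and float(fields[util_col]) >= threshold:
            print(f"{fields[0]} is {fields[util_col]}% busy")

if __name__ == "__main__":
    busy_devices()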

Get em involved, make use of the trillions you pay Oracle each year for support.

hackedaccount
Sep 28, 2009
The Big Companies (TM) I have worked at keep paying for support. Once the vendor EOLs the product, they frequently pay an even higher fee for "extended support" or "post-end-of-life support" or some crap if they're still using the server.

I've always thought most hardware support contracts were orders of magnitude overpriced for what you actually get. Notice that even contracts with companies like HP and IBM will give you a 4 hour response time, but won't promise anything beyond that.

If you do decide to let the contract expire, it's probably best to have the spare parts on hand, in the building and ready to go. You may even want to test the spares every 6 months or a year to make sure you actually have what you think you have.

You might also consider asking this question in another area, since it applies to everything from SANs to software, and I'm curious about other people's opinions on the subject, too.

EDIT: My post is from a server perspective, but the conversation extends to any hardware with a support contract. I've seen plenty of ancient storage arrays still in use, but I'm not sure whether their owners kept paying or stopped.

hackedaccount fucked around with this message at 17:02 on Oct 27, 2010

hackedaccount
Sep 28, 2009
Hey three, any updates on the consultant who says iSCSI is too much overhead, wants you to rip out everything and install FC, etc.?

hackedaccount
Sep 28, 2009

Vanilla posted:

Hey guys, came across someone asking for 'server offload backup solutions'. Never heard this terminology before? Are they talking about clones for backup??

Most likely yes, clones. Mirror your poo poo on shared storage, split the mirror at night, mount the copy on a dedicated server, do backups from the dedicated server, repeat for the next box. I've only seen this on large, powerful, important systems (usually SAP, Oracle, DB2, etc) and never on web/file/print type servers.
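
For anyone who hasn't seen it, here's a minimal sketch of the same idea using an LVM snapshot in place of a true array-side mirror split, with everything collapsed onto one host for brevity (the real setup mounts the copy on a separate backup server). The VG/LV/path names are made up:

code:
#!/usr/bin/env python3
"""Sketch of the 'make a copy, back up the copy' workflow using an LVM
snapshot in place of an array-side mirror split. VG/LV/mount/path names are
made up; run as root and only where LVM is actually in use."""
import subprocess

VG, LV = "datavg", "oradata"          # hypothetical volume group / logical volume
SNAP = LV + "_bkup"
MNT = "/mnt/backup_copy"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def backup():
    # 1. "Split the mirror": take a point-in-time copy of the live volume.
    run("lvcreate", "--snapshot", "--size", "10G", "--name", SNAP, f"{VG}/{LV}")
    try:
        # 2. Mount the copy read-only where the backup job can reach it.
        run("mkdir", "-p", MNT)
        run("mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MNT)
        # 3. Back up from the copy so the live volume takes no backup load.
        run("tar", "-czf", f"/backups/{LV}.tar.gz", "-C", MNT, ".")
    finally:
        # 4. Throw the copy away; repeat for the next box.
        subprocess.run(["umount", MNT])
        subprocess.run(["lvremove", "-f", f"{VG}/{SNAP}"])

if __name__ == "__main__":
    backup()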

hackedaccount
Sep 28, 2009
In my experience the SAN switches and disk arrays are managed by a single team (and this holds anywhere from Fortune 10 companies to 30-person shops). At most places the guys who run the Ethernet also run the fiber, but they don't have any kind of access to the switches or arrays.

The Fibre Channel protocol itself is pretty easy to understand if you know TCP/IP, but networking and storage are two very different beasts.

hackedaccount
Sep 28, 2009

paperchaseguy posted:

lol

Try searching for something, tell us how that goes!

Hey EMC isn't just storage, they're "information management" now too, and as you can see from their support site, they're great at it.

hackedaccount
Sep 28, 2009

KS posted:

The crappy management software was one of the big reasons we didn't even consider Hitachi storage this time around.

Do they still use that awful UI that they created like 10 years ago?

hackedaccount
Sep 28, 2009
Would some type of logical volume manager work to concatenate the disks?

hackedaccount
Sep 28, 2009

Wompa164 posted:

Yeah that's a great suggestion as well. Do you have a particular solution in mind?

If you're using Windows it seems you want to create what they call a Dynamic Volume. To be honest I assume it works and is easy enough, but I've never used it, and I'm sure the guys over in the Windows thread could help you with any problems.

If you're using Linux you can use the built-in Logical Volume Manager (aka LVM, aka LVM2) to concatenate the drives, and I'm sure the guys in the Linux thread wouldn't mind helping a bit.
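
Something like this rough outline is all there is to the LVM side of it (device names are made up, and the pvcreate/mkfs steps wipe the drives, so don't paste it blindly):

code:
#!/usr/bin/env python3
"""Outline of concatenating a few drives into one big linear LVM volume.
Device names are placeholders and pvcreate/mkfs destroy existing data, so
treat this as a sketch of the steps, not a paste-and-run script."""
import subprocess

DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # hypothetical backup drives
VG, LV, MNT = "backupvg", "backuplv", "/mnt/backup"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for d in DRIVES:                      # 1. mark each drive as a physical volume
    run("pvcreate", d)
run("vgcreate", VG, *DRIVES)          # 2. pool them into one volume group
run("lvcreate", "-l", "100%FREE", "-n", LV, VG)   # 3. one linear (concatenated) LV
run("mkfs.ext4", f"/dev/{VG}/{LV}")   # 4. filesystem on top, mount like any disk
run("mkdir", "-p", MNT)
run("mount", f"/dev/{VG}/{LV}", MNT)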


Two potential gotchas:

1) If one of the drives fails, the data on the whole concatenated volume is likely lost, but you can replace the drive, recreate the logical volume (wiping the other drives in the process) and take a fresh backup. That may or may not be a problem for you.

2) You mentioned storing drives in the closet. Some OSes are sensitive to device names and will crap out when they change. For example, say the first time you connect hard drive #1 to USB port #1, then disconnect it and put it in the closet. The second time you connect hard drive #1 to USB port #2 instead, and suddenly Windows thinks the logical volume is broken. I can tell you how to get around this in Linux, but I have no idea how it works (or whether it's even a problem) in Windows.
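
The Linux workaround, roughly, is to stop caring about /dev/sdX names and identify drives by UUID (LVM already sidesteps this on its own by scanning every disk for its metadata, so the volume group assembles no matter which port things land on). A tiny illustration:

code:
#!/usr/bin/env python3
"""Tiny illustration of identifying drives by filesystem UUID instead of by
/dev/sdX name, which changes with the port and plug-in order. LVM avoids the
problem by scanning disks for its own metadata."""
import os

BY_UUID = "/dev/disk/by-uuid"

# Print "UUID -> whatever kernel name the drive got today".
for uuid in sorted(os.listdir(BY_UUID)):
    print(uuid, "->", os.path.realpath(os.path.join(BY_UUID, uuid)))

# An /etc/fstab entry keyed on UUID keeps working regardless of USB port, e.g.:
#   UUID=1234-ABCD  /mnt/backup  ext4  defaults,nofail  0  2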

hackedaccount
Sep 28, 2009

Martytoof posted:

This may be a terrible question to ask among people who discuss actual SAN hardware, but if I want to get my feet wet with iSCSI ESXi datastores, what would be the best way to go about this on a whitebox as far as the operating system is concerned. I'm looking at something like OpenFiler. This would literally just be a small proof of concept test so I'm not terribly concerned about ongoing performance right now.

I used straight-up, out-of-the-box CentOS to create iSCSI targets for RHCE practice. It took some messin around to get it working, but I could read and write files over it, so it seems to do the job.
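
The box I used was older, but on a current CentOS (7+) the usual route is the LIO target managed with targetcli. Rough sketch from memory with made-up IQNs and paths; double-check the syntax against the man page before leaning on it:

code:
#!/usr/bin/env python3
"""Very rough sketch of exporting a file-backed iSCSI target from a stock
CentOS 7+ box (LIO target via targetcli) for lab / proof-of-concept use.
IQNs, paths, and sizes are made up, and the targetcli syntax is from
memory -- verify against `man targetcli` before relying on it."""
import subprocess

TARGET_IQN = "iqn.2015-01.lab.example:poc1"        # hypothetical target IQN
INITIATOR_IQN = "iqn.1998-01.com.vmware:esx-lab"   # hypothetical ESXi initiator IQN
BACKING = "/var/lib/target/poc1.img"

def tcli(cmdline):
    print("+ targetcli", cmdline)
    subprocess.run(["targetcli"] + cmdline.split(), check=True)

# 1. File-backed "disk" to export (a real LUN or LVM LV works the same way).
tcli(f"/backstores/fileio create name=poc1 file_or_dev={BACKING} size=5G")
# 2. Create the target; recent versions add a default portal on 0.0.0.0:3260.
tcli(f"/iscsi create {TARGET_IQN}")
# 3. Map the backstore to a LUN and let the lab initiator see it.
tcli(f"/iscsi/{TARGET_IQN}/tpg1/luns create /backstores/fileio/poc1")
tcli(f"/iscsi/{TARGET_IQN}/tpg1/acls create {INITIATOR_IQN}")
# 4. Persist the config across reboots.
tcli("saveconfig")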

hackedaccount
Sep 28, 2009

Xenomorph posted:

Does anyone use ZFS with Linux? Or is that asking for trouble?

Check out btrfs as an alternative.

hackedaccount
Sep 28, 2009
To further ease his mind, the same technology has been doing this with big, active databases for a long time (think financial institutions).

hackedaccount
Sep 28, 2009

madsushi posted:

I also HATE tiering and I am glad that the industry is moving away from it.

I have a hard time looking at SANs that don't include a SSD/Flash-based read cache these days. The 3PAR (and Compellent, and etc) tiering isn't real-time and isn't going to get you anywhere near the same performance boost.

What's the context of the first line there? The traditional cache -> flash -> fast HD -> slow HD -> tape type tiering or something else? What's up with the industry moving away from it?

What do you mean about how the 3PAR tiering isn't "real time"?

hackedaccount
Sep 28, 2009
OK, gotcha. So it isn't an issue with automated storage tiering itself; it's that some vendors don't monitor the data in real time, and because they only move data between tiers once per day it's much less efficient than real-time tiering.

hackedaccount
Sep 28, 2009
Any need to get it done today or to power it off over the weekend? If not, let it sit and see what it looks like on Monday.

hackedaccount
Sep 28, 2009
Bit of a noob question here if you don't mind: Let's say I have two 1 Gbps interfaces that are bonded / teamed in active-active mode for 2 Gbps total throughput on a Linux box.

Which of these can provide 2 Gbps of throughput (well, not the whole 2, but more than 1 Gbps, i.e. actually take advantage of the bonding):

NFS v4 TCP connection
CIFS / SMB connection
iSCSI with MPIO
HTTP

I know iSCSI with MPIO can, and I'm pretty sure HTTP can't but I'm not sure about the others.

hackedaccount
Sep 28, 2009

FISHMANPET posted:

CIFS, NFS, and HTTP can do it I think if there are multiple connections, but iSCSI is the only one that can do it for a single connection.

Yeah, my bad, I should have said that: I was thinking of a single TCP connection from a single client. I assume that if there are multiple connections, the OS is smart enough to spread them over the available links.


That makes sense. Like FC, it's working at the "storage driver" (or whatever) level and not in the network stack. With FC you'd have 2 HBAs with unique WWPNs; with iSCSI you'd have 2 NICs with unique IPs; and both would aggregate the bandwidth and multipath in a similar fashion.

NippleFloss posted:

All of the common hashing mechanisms have the property that they always send packets for a single TCP connection over the same link.

Perfect! Like Misogynist mentioned, if the protocol (SMB 3.0) is smart enough to open multiple TCP streams it can use multiple links, but HTTP and NFS aren't designed that way, so they're always gonna ride a single link.
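
For anyone else following along, here's a toy version of what those hashing mechanisms are doing: mix the flow's addresses and ports, take the result modulo the number of links, and that flow is glued to one NIC forever. Not the exact math the Linux bonding driver uses, just the idea:

code:
#!/usr/bin/env python3
"""Toy version of a layer3+4-style transmit hash: the flow's IPs and ports
pick the slave link, so every packet of one TCP connection rides the same
NIC. Illustrative only, not the exact arithmetic the bonding driver uses."""
import ipaddress

LINKS = ["eth0", "eth1"]   # two 1 Gbps slaves in the bond

def pick_link(src_ip, src_port, dst_ip, dst_port):
    key = (int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
           ^ src_port ^ dst_port)
    return LINKS[key % len(LINKS)]

# One NFS/HTTP-style connection: same tuple for every packet, so same link every time.
print(pick_link("10.0.0.5", 50123, "10.0.0.20", 2049))
print(pick_link("10.0.0.5", 50123, "10.0.0.20", 2049))

# Many connections (or multiple iSCSI sessions on different IPs) spread out.
for port in range(50123, 50131):
    print(port, pick_link("10.0.0.5", port, "10.0.0.20", 445))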

Thanks for the info, I appreciate it.

hackedaccount
Sep 28, 2009

The_Groove posted:

I manage about 10PB of storage (4560 disks), if I had a ton of free SSDs lying around I'd use them as metadata-only disks. Currently the metadata is on the same LUNs as actual data, so under heavy loads the "interactive user experience" goes down. HPC storage is usually all about large-block sequential bandwidth, capacity, and saving as much money as possible so it can be spent on the computational aspects of the system.

When you talk about metadata in this context, what type of storage are you talking about? Object storage where you keep its metadata on SSDs, or normal file system metadata where you somehow put the superblocks and inodes on the SSD and keep the actual data on slower drives?

hackedaccount
Sep 28, 2009
Yeah, I just did some Googling and it looks like ext4 and xfs support separate metadata / log devices. I remember them from back in the day when I worked with Veritas, but I had no idea the concept had made it into mainstream Linux file systems. I assume you just chop up an SSD, present an SSD partition and a LUN to a host, and create the file system that way.
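
Something like this is the XFS flavor of it, with the log on the SSD partition and the data on the LUN (device names are placeholders; ext4 does the equivalent with an external journal via mke2fs -O journal_dev plus mkfs.ext4 -J device=...):

code:
#!/usr/bin/env python3
"""Sketch of 'SSD partition for the log, big LUN for the data' using XFS's
external log device. Device names are placeholders; note this moves the
log/journal specifically, not all metadata."""
import subprocess

SSD_LOG = "/dev/nvme0n1p1"        # hypothetical small SSD partition
DATA_LUN = "/dev/mapper/biglun"   # hypothetical multipathed data LUN
MNT = "/mnt/scratch"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Log (journal) on the SSD, data on the big LUN.
run("mkfs.xfs", "-l", f"logdev={SSD_LOG}", DATA_LUN)

# The external log device has to be named again at mount time.
run("mkdir", "-p", MNT)
run("mount", "-o", f"logdev={SSD_LOG}", DATA_LUN, MNT)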

Learned somethin new today.

hackedaccount
Sep 28, 2009

the spyder posted:

We tossed SeisMac on a spare MBP and set it next to our two main storage NAS(s). It registered the equivalent of me shaking and tossing the MBP as hard as I could for 6 hours.
I am amazed these consumer 1.5TB Seagates have held up since 2009.

You should really drop the OpenIndiana guys a line about this. I'm sure they would enjoy writing a whitepaper about how their ZFS can survive a dance club.

hackedaccount
Sep 28, 2009
It's a long shot but what about LVM replication? I have no idea if that will work with LUKS.

hackedaccount
Sep 28, 2009
Yeah maybe I didn't phrase it correctly. Instead of using some type of shared storage (iSCSI, NFS, SAN) he could use LVM replication to a 2nd host.

hackedaccount
Sep 28, 2009

Zorak of Michigan posted:

Is there a cheap option for reliable and replicate-able nearline storage? One of my coworkers is pushing us toward Amazon Glacier for archival storage. I'm wondering if there are on-prem solutions that might come close to that same price target. We're mostly an EMC shop now and while I know VNX2 pricing has gotten pretty good, I haven't seen them anywhere near a penny per gig per month over the life of the array.

How near-line does it need to be? If you go with Glacier it's cheap to put data in and store it, but retrieval and outbound bandwidth fees can get costly if you access it frequently.
