conntrack
Aug 8, 2003

by angerbeet
I'm looking at the Sun gear too. Does anyone know if it's possible to
connect FC LUNs to the Open Storage appliances?

Or if we could get one of their storage servers with the slick GUI?

Support is needed for this, as people get fired if there is data loss.

I'm looking to make a poor man's vFiler. I like the Open Storage GUI
but don't have any kidnapped orphans left to sell to NetApp.


conntrack
Aug 8, 2003

by angerbeet
For "cheap" SATA FC storage we recently got two Hitachi AMS2100 systems. Wonky HDS-style management software aside, I like the AMS2100 series fine.

We will get a free HDP (thin provisioning) license the next time we buy a batch of drives, so performance should (might) be about the same as a really wide stripe once we have the boxes filled with 2TB spindles.

Sun quoted us a higher price for ONE 7000-series box with 24 spindles than what we paid for two AMS2100 systems with active-active controllers and 30 1TB spindles in each box.

HDS is looking better and better these days. HP can die in a fire; buying anything from them is a hassle and the sales people are clueless.

conntrack
Aug 8, 2003

by angerbeet

adorai posted:

This is simply not true. A copy-on-write snapshot, like the kind Data ONTAP (NetApp) and ZFS (Sun/Oracle) use, has zero performance penalty associated with it. Additionally, snapshots coupled with storage-system replication make for a great backup plan. We keep our snapshots around for as long as we kept tapes, and we can restore in minutes.

Now I'll admit, there are some extra pieces we use to put our databases into hot backup mode, and we create a VMware snapshot before we snap the volume that hosts the LUN, but that's not changing the snapshot itself, just making sure the data is consistent before we snapshot.


If you keep three months of snapshots at one snap per day, won't writing one block result in up to 90 copy-out writes for that block? I'm sure that database will be fast for updates.

Edit: I guess that depends on how smart the software is. Classical copy-on-write snaps, where each snapshot keeps its own copy of the overwritten block, would turn to poo poo; redirect-on-write schemes like WAFL and ZFS just write the new block somewhere else, which is how they get away with claiming near-zero penalty.

conntrack fucked around with this message at 15:18 on Sep 2, 2010

conntrack
Aug 8, 2003

by angerbeet
Being aligned is more important.

Databases allocated on a fresh NTFS filesystem will never benefit from a filesystem-level defrag, as the intelligence about data placement is in the database.

Transient files are likely to be created and deleted before the defrag even runs.

Perhaps if you do something silly like mixing loads in one partition, or single-drive LUNs, it might be worth the effort to defrag?
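If you want a quick sanity check on alignment (on a Linux host) before worrying about defrag, something like this does the trick. A rough sketch only: sdb/partition 1 and the 4KiB boundary are placeholders, swap in your array's stripe size if you know it; on Windows the StartingOffset from "wmic partition" tells you the same thing.

code:
# crude partition alignment check -- device/partition are placeholders
DEV=sdb
PART=1
START=$(cat /sys/block/$DEV/$DEV$PART/start)   # start sector, 512-byte units
OFFSET=$((START * 512))                        # byte offset into the LUN
if [ $((OFFSET % 4096)) -eq 0 ]; then
    echo "$DEV$PART starts at byte $OFFSET - aligned"
else
    echo "$DEV$PART starts at byte $OFFSET - misaligned, expect extra backend I/O"
fi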

conntrack
Aug 8, 2003

by angerbeet
Like the previous poster said, it depends on what you want: an archive or disaster recovery. If users depend on being able to recover each and every deleted gif of funny dogs 6 months from now, tape might be better, as it's easy to buy more tape.

Tapes are also portable.

A server might be tempting, but it won't be worth much if a power surge burns out the backup server at the same time as the production servers because they were all in the same building.

Edit: I focused on cost. If you have money you would of course get two disk-based systems with off-site replication. Pricey, but worth it.

conntrack fucked around with this message at 22:09 on Sep 8, 2010

conntrack
Aug 8, 2003

by angerbeet

Misogynist posted:

And hey, while I'm here, can someone explain to me how long-distance ISLs (~3km) are supposed to be configured on Brocade 5000/300 switches? Obviously I didn't set something up right on our longer pair of campus ISLs, because I started anal-retentively monitoring SNMP counters today and I'm noticing a crapload of swFCPortNoTxCredits on those ports.

I assume that means the link needs more buffer credits? Try lowering the port speed (fewer credits are needed for the same distance), or see if you can donate credits from other ports in the same port group.
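This is roughly what I'd poke at on the Brocade side. Hedged sketch only: port 12 is a placeholder and the exact portcfglongdistance arguments differ between FOS releases, so check the command reference for your code level first.

code:
portbuffershow                         # how buffer credits are split up per port
portstatsshow 12 | grep tim_txcrd_z    # time spent at zero TX credit = starvation
porterrshow                            # quick fabric-wide error counter overview

# for a ~3km ISL the LE distance level (good to ~10km, no Extended Fabrics
# license needed) is usually enough; LD/LS need the license
portdisable 12
portcfglongdistance 12 LE 1
portenable 12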

conntrack
Aug 8, 2003

by angerbeet
After the earlier discussion on snapshots I got my rear end in gear and tried out SnapDrive on our NetApp 6040.

I mounted a qtree over NFS on, say, /mnt/goon and did:

touch /mnt/goon/somefile (snaps poo poo themselves if I restore an empty share)
snapdrive snap create -snapname start -fs /mnt/goon

This gets me a snap with the share almost empty.
Then I created 100k empty files with some bash loops.

To get rid of all the files and get a clean share:
snapdrive snap restore -snapname start

I started this when I left work and it was still running 14 hours later.

Sounds a tad slow?
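For anyone who wants to repeat the experiment, this is roughly the whole thing end to end. The paths and snap name are the ones from my post above; the 100k file count is what made ours crawl, so start smaller.

code:
# reuses the qtree NFS-mounted at /mnt/goon from above
touch /mnt/goon/somefile
snapdrive snap create -snapname start -fs /mnt/goon

# generate the 100k empty files
for i in $(seq 1 100000); do touch /mnt/goon/file_$i; done

# time how long the rollback takes
time snapdrive snap restore -snapname start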

conntrack
Aug 8, 2003

by angerbeet

adorai posted:

Is there something preventing the volume from unmounting? Snapdrive will unmount the volume before rolling it back, I believe, and if there is a file open the unmount may fail.


It got unmounted OK; I double-checked in the logs. The restore had deleted about 60% of the files before I called it quits for taking too long and remounted it.

A 1k-file test I ran before the larger one took about 5 minutes and completed OK.

conntrack
Aug 8, 2003

by angerbeet
I do know I would like to have the inline compression on my boxes. Looks sweet in the propaganda.

conntrack
Aug 8, 2003

by angerbeet
It's the tradeoff you make for running midrange hardware.

You often get fine performance, but the uptime is not guaranteed to a gazillion nines.

But all is not dandy in the enterprise world either.

Today we got word from our HP support contact that they are holding us hostage for $700 before giving us any future support.

Support under a contract we pay $100k per year for.

Gently caress HP in the rear end, I will suck a bag of soggy dicks before I buy a single new HP storage product.

conntrack
Aug 8, 2003

by angerbeet
If I'm not mistaken, the 3140 box has 4 gigs of RAM.

Is there a way to check the amount of memory involved in the operations stored in NVRAM?

I.e., adding more NVRAM might be useless if the buffers are full anyway?
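The closest thing I know of is watching the consistency-point behaviour rather than the NVRAM fill level directly. Hedged, 7-mode-ish sketch; the exact columns shift between ONTAP versions, so treat it as a pointer.

code:
sysstat -x 1
# watch the "CP ty" column: a steady run of B (back-to-back consistency
# points) means NVRAM fills before the previous flush to disk finishes,
# so more NVRAM alone would not buy much if the disks behind it are
# already the bottleneck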

conntrack
Aug 8, 2003

by angerbeet

Mausi posted:

Forgive my ignorance on this topic, but could someone point me to an explanation of how NFS compares to direct block access in terms of performance?

How is an Oracle server using NFS for its data storage?


NetApp has a lot of whitepapers online about Oracle on NFS.
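As a starting point, a typical Linux mount for Oracle datafiles over NFSv3 looks something like the line below. This is an illustration only: filer01:/vol/oradata and /u02/oradata are made-up names, and the exact options NetApp/Oracle recommend vary by OS and version (and Oracle's dNFS client changes the picture again), so read the whitepapers before trusting it.

code:
# /etc/fstab entry for an Oracle datafile mount over NFSv3 (placeholder names)
filer01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0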

conntrack
Aug 8, 2003

by angerbeet
Fresh support story? Don't mind if I do.

NetApp just sent me a log as proof that one of my raid groups isn't degraded.

The thing is, the log is from the day BEFORE the raid rebuild was started and the system became unresponsive during said rebuild.

The reply to calling the lady on her poo poo was "thank you for the information".

I have now downed a stiff drink, and I guess I will have to do several more and just sleep a few hours until the men come back on shift.

conntrack
Aug 8, 2003

by angerbeet
Did they get SMB2 back in? When 8 was releasing there was a lot of grumbling about that.

conntrack
Aug 8, 2003

by angerbeet
Anyone using Data Domain? We got quoted a price for one Data Domain box that would buy us a petabyte of raw disk.

We could probably buy half a petabyte, compress it with standard gzip, and come out paying less money.

Going back to tape and tape robots is starting to sound good again...

conntrack
Aug 8, 2003

by angerbeet

skipdogg posted:

We use them. Work as advertised. Not cheap though.

code:
UPTIME= 06:59:30 up 325 days, 15:40,  0 users,  load average: 1.00, 1.02, 1.00

==========  SERVER USAGE   ==========
Resource             Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------   --------   --------   ---------   ----   --------------
/backup: pre-comp           -    15942.6           -      -                -
/backup: post-comp     2537.2      771.2      1766.0    30%              0.0
/ddvar                   19.7        2.6        16.1    14%                -
------------------   --------   --------   ---------   ----   --------------
 * Estimated based on last cleaning of 2010/11/23 06:24:06.

Filesys Compression
--------------
                      
From: 2010-11-22 06:00 To: 2010-11-29 06:00
                      
                  Pre-Comp   Post-Comp   Global-Comp   Local-Comp      Total-Comp
                     (GiB)       (GiB)        Factor       Factor          Factor
                                                                    (Reduction %)
---------------   --------   ---------   -----------   ----------   -------------
Currently Used:    15942.6       771.2             -            -    20.7x (95.2)
Written:*                                                                        
  Last 7 days                                                                    
  Last 24 hrs                                                                    
---------------   --------   ---------   -----------   ----------   -------------
 * Does not include the effects of pre-comp file deletes/truncates
   since the last cleaning on 2010/11/23 06:24:06.
Key:                                                          
       Pre-Comp = Data written before compression             
       Post-Comp = Storage used after compression             
       Global-Comp Factor = Pre-Comp / (Size after de-dupe)   
       Local-Comp Factor = (Size after de-dupe) / Post-Comp   
       Total-Comp Factor = Pre-Comp / Post-Comp               
       Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100

This looks so sweet. I have neck beard envy right now.
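For anyone wondering where the 20.7x comes from, it's just the formulas from the key applied to the pre/post numbers in the paste:

code:
awk 'BEGIN { pre=15942.6; post=771.2;
             printf "total-comp %.1fx, reduction %.1f%%\n",
                    pre/post, (pre-post)/pre*100 }'
# prints: total-comp 20.7x, reduction 95.2%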

conntrack
Aug 8, 2003

by angerbeet
A grey box is just not "enterprise storage"; keep good backups if you go with some Linux poo poo.

Just because they have a sweet website does not mean they can give you five thousand nines.

conntrack
Aug 8, 2003

by angerbeet

Misogynist posted:

Maybe I'm just being an obnoxious pedant, but not everything enterprise has mission-critical uptime requirements or OLTP-size performance requirements, either.

No, it doesn't. But then again, a lot of companies sell standard PC boxes with "secret sauce". The point was to know what you are buying and base your expectations on that. What they write on their website to differentiate themselves from the hundreds of others like them might or might not be marketing describing what they wish the product could deliver.

conntrack
Aug 8, 2003

by angerbeet
We got the "admin / !admin" account as the initial management account for our G3 box.

If this was supposed to be a "secret account" they did a piss-poor job of it. We have used it since we got the G3 array.

conntrack
Aug 8, 2003

by angerbeet
Like it was said before, make two flavours of the app.

One "enterprise" flavour that requires FC/iSCSI to get a valid support contract.

Make the other one a virtual appliance with a preconfigured Linux install. Tell the customers to use whatever storage they want, and to take it up with VMware/Xen/Jebus if there are problems that don't come from your virtualised server.

It would also give you a lot more buzzwords for marketing.

conntrack
Aug 8, 2003

by angerbeet
You need to be worthy (spend millions of dollars) before the HP sales gods (in their own minds) deign to grant you an audience.

conntrack
Aug 8, 2003

by angerbeet
The issue is mixing spindles of different quality/speed. A pool with one vdev that is a lot slower than the others will drag down the speed of the entire pool, since ZFS stripes across all the vdevs.

As you are not doing this, you are in the clear.

Unless you put the SSDs on some $10 SATA controller you found in the garbage.
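If you ever do want to check whether one vdev is dragging the pool down, per-vdev stats make it obvious pretty quickly ("tank" is a placeholder pool name):

code:
zpool iostat -v tank 5    # per-vdev ops/bandwidth every 5s, the slow one sticks out
zpool status tank         # and make sure nothing is degraded or resilvering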

conntrack
Aug 8, 2003

by angerbeet
On the notion of replicating/shipping tape to a DR site as recovery: people do it because it's simple and low cost.

Simple in the sense that no "magic" is involved. You get the lowest-paid monkey to follow a checklist, and if there are any problems he can solve them. poo poo tons of software and scripts for automated recovery can go wrong, and then you get the monkey calling you for everything. No need to stretch VLANs, maintain virtualisation platforms and so on.

Would I do this for 50 servers? Not likely. But just because your mission-critical webstore selling plastic vaginas needs 24/7 uptime doesn't mean a tape-based DR plan is dumb.

I hear about a lot of people that buy expensive-as-hell array-based sync replication and think it will save them from everything including alien invasion.

When Bob Mongo formats that production volume, or that odd app starts corrupting its database, it won't help that you got the format and the corruption replicated 100 miles down the road.

conntrack
Aug 8, 2003

by angerbeet

Linux Nazi posted:

What would you guys recommend for a simple to setup 50TB storage device? Performance really isn't an issue as it is basically just warehousing data, and will really only be accessed by one host.


I checked out both the NetApp and EMC site to look at products but honestly it kind of makes my head spin. I would much rather seek a recommendation and start from there.


The HP P2000 G3 is cheapish. Don't buy the HP tech install option, though.

conntrack
Aug 8, 2003

by angerbeet

ragzilla posted:

Anyone here have any experience with CommVault SnapProtect? In particular, can it quiesce and SAN snap raw RDMs presented to VMs?

Ask me in 6 months :) The sales person became a man of few words when we asked questions like this, and the online documentation is somewhat sparse.

Please do post whatever you find out.

conntrack
Aug 8, 2003

by angerbeet
You don't see any risk in getting Sun gear from Oracle?

Everything Oracle touches gets expensive, and is the market even sure they will keep supporting Solaris and ZFS for any longer period of time?

They are remaking OpenSolaris and telling the open-source folks to follow orders or get stuffed. Oracle isn't old tech-loving Sun.

But then again, I might be out of the loop?

conntrack
Aug 8, 2003

by angerbeet
This is the sort of thing that starts long department wars. Networking thinks they should own everything with cables and starts to make a stink about things.

It results in pulled fibres (you don't have OSPF on the storage NETWORK?), poo poo switches getting bought because the SAN doesn't affect their operations, and so on. I have heard horror stories.

Then again, if the storage "team" is one old crusty guy who makes pretty Christmas trees out of the FC cabling, a revolution might be warranted.

conntrack
Aug 8, 2003

by angerbeet

ghostinmyshell posted:

I've been asked for a car analogy from my manager for any Snapmanager product for NetApp because the person who is signing the checks can't understand the justification for SQL/Exchange.

Spending lots of money to save our jobs when "OH poo poo" hits doesn't fly I guess.

I guess you did the "company down and nobody working" calculation and it didn't bite? I feel your pain.

conntrack
Aug 8, 2003

by angerbeet

Maneki Neko posted:

ROI: You still have all your email.

Maybe the car analogy can be:

It's like cloning your family every morning so when they die in a fiery car crash on the way to work/school you aren't left a bitter empty shell of a man.

I have printed this to hard copy.

conntrack
Aug 8, 2003

by angerbeet

Misogynist posted:

If you got that feature for free, definitely take advantage of it. If you paid extra for it so you wouldn't have to think, we may have a disagreement between us. :)


Sometimes the best of plans gets screwed when the customers change the requirements at random intervals.

This is why my dreams are of online LUN migrations.

conntrack
Aug 8, 2003

by angerbeet
I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups.

I know that if I bought 2TB of SSD space for all that "automatic right performance" joy, the nightly batch-processing system that just happens to be 2TB would push the interactive systems out of the tier. That system goes balls-out after hours, and during the day the others take over for the shorter office-hours window. Making a profile for that might be interesting.

The migration of blocks between tiers also takes some time after profiling. Just because you buy some amount of SSD storage, it still has to be sized for the working set. I'd be interested in knowing at what point the work spent on dedicated 15k spindles costs more than autotiering plus SSDs. From the white papers and the sales guys it's hard to get solid info: "you started saving money just by talking to me, wink wink".

But I might just be sperging and nitpicking, I guess. When you go to the performance seminars it's all about sizing one array per application; we suckers that run one array for all applications sit and cry in the corner over our small budgets.

Did I make this post a thousand times before in this thread? I ask these questions in a lot of forums and many people just go BUT WE NEEED SSD BECAUSE IT'S COOOOOOL.

conntrack
Aug 8, 2003

by angerbeet

ragzilla posted:

At least on Compellent the migration is based on snapshots (so after a snapshot, the data is 'eligible' to get pushed down to a lower tier). Realistically though if you have some app that runs overnight and you don't want it using Tier0, just put it on a LUN that only has Tier1-3 storage? Just because you can give every LUN tier 0-3 access doesn't mean you should or would in production.

My point was that both warrant tier 0 performance; the migration profiler would profile them both as tier 0, just during different times of the day. The migration takes days in the case of large datasets, and intraday migration to shrink the needed tier 0 capacity isn't there yet. At least in the VSP, from what I gather.

conntrack
Aug 8, 2003

by angerbeet

adorai posted:

Personally I think Sun had it right -- throw a shitload of commodity spindles at the problem, and put a shitload of cache in front of it. 90+% of your reads come from cache, and 100% of writes are cached and written sequentially. Saves you the trouble of tiering, period and you never have to worry about fast disk, just cache. Which, iirc, an 18GB SSD for ZIL from Sun was ~$5k, and a shelf of 24 1TB SATA drives was around $15k. Too bad Oracle is already killing that poo poo.

I had a boner for those ZFS boxes until we got the quote. I would have loved to get some, but Sun was sinking and the sales people just didn't give a drat any more.

Now I thank them for not getting me to buy it.

conntrack
Aug 8, 2003

by angerbeet

Vanilla posted:

You set the window of observation. For most places this would be 9-5 and everything overnight (backups etc) is ignored.

Additionally you can choose to ignore some LUNs, lock a LUN in place if the user would prefer a set performance level, etc.

Talking about EMC, not sure about all vendors.

You mean getting the same effect as giving the applications dedicated spindles? :)

Vanilla posted:


It's the utilisation of larger 1TB and 2TB drives that are generating the cost savings (in addition to reduced footprint, power, cooling, etc). This is why I see automated storage tiering as mostly a money saver than a performance improver.

Little SSD, a few slivers of fast disk and a ton of SATA. I've already seen it in action 10 times because people have just been using fast FC drives for all data and it isn't needed.

Eventually the small, fast drives will go away, SSDs will be cheaper and it will be all SSD and large 2TB+ drives.

That depends on who you talk to; I personally share your view on this. A lot of people see it as a way to fire all those crusty storage guys, though.

Why doesn't the VMAX virtualize external storage? Anyone?

conntrack
Aug 8, 2003

by angerbeet
The cost of "enterprise SATA" sort of takes the "save" part out of "save money", so virtualizing midrange is looking better and better.

Edit: If the VMAX got the external-storage capability I would definitely look more into it.

conntrack fucked around with this message at 18:38 on Mar 4, 2011

conntrack
Aug 8, 2003

by angerbeet

Vanilla posted:


So it all depends on what you are trying to do - why does the array need to be virtualised for you?

The idea is to be able to buy any (supported) array and put it behind the enterprise one.

What I get:
I can put a smaller FC/SAS tier inside the enterprise array.

I can buy a larger cache with the money saved by not getting all those FC raid groups. With cache boundaries/partitions I can give the SATA LUNs a small amount to minimise thrashing and still probably swallow bursty writes.

I can get quotes from all the players in FC storage for the mid-to-low-demand tiers and just connect those. The quote we got for 10TB of enterprise SATA was higher than buying a new 20TB midrange dual-controller array. After the initial outlay for the midrange box the savings just get bigger, as the controller cost gets spread over a larger amount of storage, while enterprise SATA stays the same expensive price per TB no matter how much we buy.

Of course I will have to pay for more front-end ports and SAN ports, but unless we run into some queue-depth issue I hope to be able to run several arrays on a shared set of 8Gb ports in the virtualizer.

The main point is being able to have multiple tiers, present them from a single box with a single management interface, and being able to play the vendors against each other. If we buy into the VMAX, for example, we are stuck with it. If we need cheaper storage we have to buy a forest of arrays, each with its own point of management if they come from several vendors.

I am tired of paying the GNP of Angola for low-access storage. I'm a cheapskate: I want the enterprise features I pay for on my tier 1 LUNs available for tier 2 and 3 as well, but at a lower price.

If I ever get time away from managing my forest of arrays I might get time to explore the options...
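Back-of-the-envelope version of the amortisation argument, with made-up numbers (not from any quote): a fixed controller cost spread over more TB versus a flat per-TB "enterprise SATA" price.

code:
awk 'BEGIN { ctrl=40000; per_tb_mid=300; per_tb_ent=2500;
             for (tb = 20; tb <= 100; tb += 20)
                 printf "%3d TB: midrange %4.0f $/TB   enterprise sata %4.0f $/TB\n",
                        tb, ctrl/tb + per_tb_mid, per_tb_ent }'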

conntrack fucked around with this message at 14:44 on Mar 5, 2011

conntrack
Aug 8, 2003

by angerbeet

Cavepimp posted:

I just inherited a little bit of a mess of a network, including a NetApp StoreVault S500. From what I can gather it's a few years old and no longer under maintenance.

That, combined with the fact that it was never really implemented properly (glorified file dump for end users, not even being backed up and they never even provisioned 40% of the space), has me thinking we should get rid of it.

Would anyone else do otherwise? Is it possible to re-up the maintenance? At the very least I'd need to sink money into an LTO-4 drive to attach and get backups, and ideally would need to buy something new anyway to move half of that (oh god, critical) data off the stupid thing and start using it for iSCSI LUNs for our VMs. I'm thinking ditch it and sell it to management as yet another poorly thought out idea by the previous IT manager.

If I read the NetApp pages correctly, the S500 series hardware hit "end of support" on 29-Feb-11, so I doubt they will sell support contracts to you. Might want to call them and check, though.

conntrack
Aug 8, 2003

by angerbeet
My personal theory is that nobody outside Dell marketing knows what makes the switches "optimized".

Probably some play on flow control or QoS?

conntrack
Aug 8, 2003

by angerbeet

Vanilla posted:

Netapp question guys.

What's the write penalty on RAID DP?

I.e RAID 1 = 2 writes, R5 = 4 writes, R6 = 6 writes but i've no idea what it is with RAID DP. I know there's two parity drives but no idea the actual penalty.

I'm on lovely GPRS right now, so I couldn't find the NetApp papers I was looking for.

http://blogs.netapp.com/extensible_netapp/2009/03/understanding-wafl-performance-how-raid-changes-the-performance-game.html

Try this URL and dig around the blogs; they have really good explanations of RAID-DP, WAFL and the tricks they use to maximise write performance.
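The short version of the argument in those posts, back-of-envelope only: because WAFL writes full stripes, RAID-DP mostly skips the read-modify-write dance, so in an N+2 raid group a full-stripe write costs roughly (N+2)/N physical writes per data block instead of the textbook 6 I/Os of a generic RAID 6 random write.

code:
awk 'BEGIN { n=14; printf "14+2 raid group: %.2f writes per data block\n", (n+2)/n }'
# vs 6 I/Os per random write for generic RAID 6 read-modify-write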


conntrack
Aug 8, 2003

by angerbeet
The Compellent system looks cool, but the presales people from Dell looked miffed when I asked them for technical details about their "not mirroring" secret sauce.

Either they were lying/embellishing the truth or they just don't know. Either way it makes them look bad.

Many people are getting into the scale-out-with-servers game. Has NetApp come up with a counter product/propaganda yet?
