|
I'm looking at the Sun gear too. Does anyone know if it's possible to connect FC LUNs to the OpenStorage appliances? Or if we could get one of their storage servers with the slick GUI? Support is needed for this, as people get fired if there is data loss. I'm looking to make a poor man's vfiler. I like the OpenStorage GUI but don't have any kidnapped orphans left to sell to NetApp.
|
# ¿ Feb 5, 2010 09:02 |
|
For "cheap" SATA FC storage we recently got two Hitachi AMS2100 systems. Besides having the wonky HDS style of management software i like the AMS2100 series fine. We will get a free HDP (thin prov) license next time we buy a new batch of drives so performance should (might) be the same as a really wide stripe when we have the boxes filled with 2TB spindles. Sun quoted us a higher price for ONE 7000 series with 24 spindles than what we paid for two 2100 systems with active-active controllers and 30 1TB spindles in each box. HDS is looking better and better these days. HP can die in a fire, buying anything from them is a hassle and the sales people are all retards.
|
# ¿ Mar 5, 2010 07:31 |
|
adorai posted:This is simply not true. A copy-on-write snapshot, like the kind Data ONTAP (NetApp) and ZFS (Sun/Oracle) use, has zero performance penalty associated with it. Additionally, snapshots coupled with storage system replication make for a great backup plan. We keep our snapshots around for as long as we kept tapes, and we can restore in minutes. If you keep three months of one snap per day, won't writing one block result in up to 90 copies of that block? I'm sure that database will be fast for updates. Edit: I guess that depends on how smart the software is. Redirect-on-write like WAFL/ZFS never copies the old block at all, but classical per-snapshot copy-on-write snaps would turn to poo poo. conntrack fucked around with this message at 15:18 on Sep 2, 2010 |
# ¿ Sep 2, 2010 15:12 |
|
Being aligned is more important. Databases allocated on a fresh NTFS filesystem will never benefit from a filesystem-level defrag, since the intelligence about data placement is in the database. Transient files are likely to be created and deleted before the defrag even runs. Perhaps if you do something silly like mixing loads in one partition, or single-drive LUNs, it might be worth the effort to defrag?
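If you want to sanity-check the alignment instead of defragging, something like this on a Linux host tells you where a partition starts (the device name is made up and 512-byte sectors are assumed; on Windows the StartingOffset from wmic/diskpart gives you the same number):

# starting sector of the first partition on /dev/sdb, converted to a byte offset
start=$(cat /sys/block/sdb/sdb1/start)
offset=$(( start * 512 ))
echo "partition starts at byte $offset"
# happy if this prints 0, i.e. the offset is a multiple of 64 KiB (or 1 MiB for newer arrays)
echo $(( offset % 65536 ))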
|
# ¿ Sep 3, 2010 21:34 |
|
Like the previous poster said, it depends on what you want: an archive or disaster recovery. If users depend on being able to recover each and every deleted GIF of funny dogs six months from now, tape might be better, as it's easy to buy more tape. Tapes are also portable. A server might be tempting, but it won't be worth much if a power surge burns out the backup server at the same time as the production servers because they were all in the same building. Edit: I focused on cost. If you have money you would of course get two disk-based systems with off-site replication. Pricey but worth it. conntrack fucked around with this message at 22:09 on Sep 8, 2010 |
# ¿ Sep 8, 2010 22:07 |
|
Misogynist posted:And hey, while I'm here, can someone explain to me how long-distance ISLs (~3km) are supposed to be configured on Brocade 5000/300 switches? Obviously I didn't set something up right on our longer pair of campus ISLs, because I started anal-retentively monitoring SNMP counters today and I'm noticing a crapload of swFCPortNoTxCredits on those ports. I assume that means it needs more credits? swFCPortNoTxCredits counting up means the port keeps running out of buffer-to-buffer credits, so yes, the link wants more credits than it has. Try lowering the speed, see if you can donate credits from other ports, or check whether the ports are still in the normal distance mode -- I believe setting the long-distance mode on the ISL ports (LE should cover up to 10 km) reserves more credits for the link.
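Back-of-the-envelope for how many credits a 3 km ISL wants. This is just the usual rule of thumb, not something out of a Brocade manual, so treat it as an assumption:

# rule of thumb: credits ~ distance_km * speed_gbit / 2 (enough full-size frames in flight
# to cover the round trip); the +6 headroom is a guess, not gospel
km=3
speed=8     # set to whatever the ISL actually negotiates
echo $(( km * speed / 2 + 6 ))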
|
# ¿ Sep 18, 2010 17:12 |
|
From the earlier discussions on snapshots I got my rear end in gear for testing out SnapDrive on our 6040 NetApp. I mounted a qtree over NFS on, say, /mnt/goon and did:

touch /mnt/goon/somefile   (snaps poo poo themselves if I restore an empty share)
snapdrive snap create -snapname start -fs /mnt/goon

This gets me a snap with the share almost empty. Then I created 100k empty files with some bash loops. To get rid of all the files and get a clean share again:

snapdrive snap restore -snapname start

I started this when I left work and it was still running 14 hours later. Sounds a tad slow?
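For completeness, the file-creation loop and the timed restore were basically this (any way of making 100k empty files would do just as well):

for i in $(seq 1 100000); do touch /mnt/goon/file$i; done   # the 100k empty files
time snapdrive snap restore -snapname start                 # the part that ran overnight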
|
# ¿ Sep 24, 2010 12:00 |
|
adorai posted:Is there something preventing the volume from unmounting? Snapdrive will unmount the volume before rolling it back, I believe, and if there is a file open the unmount may fail. It got unmounted OK; I double-checked in the logs. The restore had deleted about 60% of the files before I called it quits for taking too long and remounted it. A 1k-file test I ran before the larger test took about 5 minutes but completed OK.
|
# ¿ Sep 24, 2010 13:46 |
|
I do know I would like to have the inline compression on my boxes. Looks sweet in the propaganda.
|
# ¿ Nov 11, 2010 22:13 |
|
It's the tradeoff you make for running midrange hardware. You often get fine performance but the uptime is not guaranteed to a gazillion nines. But all is not dandy in the enterprise world either. Today we got word from our HP support contact that they are holding us hostage for $700 before giving us any further support -- support under a contract we pay $100k per year for. gently caress HP in the rear end, I will suck a bag of soggy dicks before I buy a single new HP storage product.
|
# ¿ Nov 18, 2010 13:25 |
|
If I'm not mistaken the 3140 box has 4 GB of RAM. Is there a way to check how much memory is tied up by the operations stored in NVRAM? I.e., adding more NVRAM might be useless if the buffers are full anyway?
|
# ¿ Nov 19, 2010 11:47 |
|
Mausi posted:Forgive my ignorance on this topic, but could someone point me to an explanation of how NFS compares to direct block access in terms of performance? NetApp has a lot of whitepapers online about Oracle on NFS.
|
# ¿ Nov 19, 2010 12:25 |
|
Fresh support story? Don't mind if I do. NetApp just sent me a log as proof that one of my raid groups isn't degraded. The thing is, the log is from the day BEFORE the RAID rebuild was even started, and the system became unresponsive during said rebuild. The reply to calling the lady on her poo poo is "thank you for the information". I have now downed a stiff drink and I guess I will have to down several more and just sleep a few hours until the men come back on shift.
|
# ¿ Nov 26, 2010 20:41 |
|
Did they get SMB2 back in? When 8 came out there was a lot of grumbling about that.
|
# ¿ Nov 27, 2010 16:20 |
|
Anyone using Data Domain? We got quoted a price for a Data Domain box that would buy us a petabyte of raw disk. We could probably buy half a petabyte, compress it with standard gzip and come out paying less. Going back to tape and tape robots is starting to sound good again...
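If anyone wants to sanity-check the "just gzip it" maths before betting a purchase on it, running something like this over a representative chunk of the backup data gives a rough ratio (the path is made up):

du -sb /backups/sample                       # size on disk, in bytes
tar cf - /backups/sample | gzip -6 | wc -c   # size after plain gzip, in bytes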
|
# ¿ Nov 29, 2010 21:29 |
|
skipdogg posted:We use them. Work as advertised. Not cheap though. This looks so sweet. I have neck beard envy right now.
|
# ¿ Nov 29, 2010 21:43 |
|
Grey-box kit is just not "enterprise storage"; keep good backups if you go with some Linux poo poo. Just because they have a sweet website does not mean they can give you five thousand nines.
|
# ¿ Dec 11, 2010 21:10 |
|
Misogynist posted:Maybe I'm just being an obnoxious pedant, but not everything enterprise has mission-critical uptime requirements or OLTP-size performance requirements, either. No it doesn't. But then again, a lot of companies sell standard PC boxes with "secret sauce". The point was to know what you are buying and base your expectations thereafter. What they write on their website to differentiate themselves from the hundreds of others like them might or might not be marketing describing what they wish the product could deliver.
|
# ¿ Dec 12, 2010 12:22 |
|
We got the "admin / !admin" account as the inital management account for ours G3 box. If this was supposed to be a "secret account" they did a pisspoor job of it. We have used it since we got the G3 array.
|
# ¿ Dec 16, 2010 16:46 |
|
As was said before, make two flavours of the app: one "enterprise" flavour that requires FC/iSCSI for a valid support contract, and the other a virtual appliance with a preconfigured Linux install. Tell the customers to use whatever storage they need, and take it up with VMware/Xen/Jebus if there are problems that don't come from your virtualised server. It would give you a lot more buzzwords for marketing too.
|
# ¿ Dec 21, 2010 10:56 |
|
You need to be worthy (spend millions of dollars) before the HP sales gods (in their own minds) deign to grant you an audience.
|
# ¿ Jan 11, 2011 13:44 |
|
The issue is mixing spindles of different quality/speed. Having a pool with a vdev that is a lot slower than the others will kill the speed of the entire pool. As you are not doing this, you are in the clear -- unless you put the SSDs on some $10 SATA controller you found in the garbage.
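For reference, the mixing problem only bites when the slow and the fast devices end up as data vdevs in the same pool. The device names below are made up, but this is roughly the difference (ZFS will happily let you do either):

# bad: a much slower vdev in the same pool as fast ones drags everything down,
# since writes stripe across all top-level vdevs
zpool create tank raidz2 fast1 fast2 fast3 fast4 raidz2 slow1 slow2 slow3 slow4

# fine: the SSDs stay out of the stripe as dedicated log and cache devices
zpool add tank log mirror ssd1 ssd2
zpool add tank cache ssd3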
|
# ¿ Jan 13, 2011 14:58 |
|
On the notion of replicating/shipping tape to DR as recovery: people do it because it's simple and low cost. Simple in the sense that no "magic" is involved. You get the lowest-paid monkey to follow a checklist, and if there are any problems he can solve them. poo poo tons of software and scripts for automated recovery can go wrong, and then you get the monkey calling you for everything. No need to stretch VLANs, maintain virtualisation platforms and so on. Would I do this for 50 servers? Not likely. But just because your mission-critical webstore selling plastic vaginas needs 24/7 uptime doesn't mean a tape-based DR plan is stupid. I hear about a lot of people who buy expensive-as-hell array-based sync replication and think it will save them from everything, including alien invasion. When Bob Mongo formats the production volume or that odd app starts corrupting its database, it won't help that the format and the corruption got replicated 100 miles down the road.
|
# ¿ Jan 21, 2011 10:38 |
|
Linux Nazi posted:What would you guys recommend for a simple to setup 50TB storage device? Performance really isn't an issue as it is basically just warehousing data, and will really only be accessed by one host. The HP P2000 G3 is cheapish. Don't buy the HP tech install option though.
|
# ¿ Jan 25, 2011 22:48 |
|
ragzilla posted:Anyone here have any experience with CommVault SnapProtect? In particular, can it quiesce and SAN snap raw RDMs presented to VMs? Ask me in 6 months. The salesperson became a man of few words when we asked questions like this, and the online documentation is somewhat sparse. Please do post whatever you find out.
|
# ¿ Feb 4, 2011 23:53 |
|
You don't see any risk in getting Sun gear from Oracle? Everything Oracle touches gets expensive, and is the market even sure they will keep supporting Solaris and ZFS for any longer period of time? They are remaking OpenSolaris and telling the open-source folks to follow orders or get stuffed. Oracle isn't the old, tech-loving Sun. But then again, I might be out of the loop?
|
# ¿ Feb 7, 2011 21:12 |
|
This is the sort of thing that starts long department wars. Networking thinks they should own everything with cables and starts making a stink. That results in pulled fibres ("you don't have OSPF on the storage NETWORK?"), poo poo switches getting bought because the SAN doesn't affect their operations, and so on. I have heard horror stories. Then again, if the storage "team" is one old crusty guy who makes pretty Christmas trees with the FC cabling, a revolution might be warranted.
|
# ¿ Feb 16, 2011 14:06 |
|
ghostinmyshell posted:I've been asked for a car analogy from my manager for any Snapmanager product for NetApp because the person who is signing the checks can't understand the justification for SQL/Exchange. I guess you did the "company down and nobody working" calculation and it didn't bite? I feel your pain.
|
# ¿ Feb 16, 2011 17:58 |
|
Maneki Neko posted:ROI: You still have all your email. I have printed this to hard copy.
|
# ¿ Feb 16, 2011 19:29 |
|
Misogynist posted:If you got that feature for free, definitely take advantage of it. If you paid extra for it so you wouldn't have to think, we may have a disagreement between us. Sometimes the best of plans get screwed when the customers change the requirements at random intervals. This is why my dreams are of online LUN migrations.
|
# ¿ Feb 28, 2011 10:27 |
|
I'm not 100% convinced autotiering in its current forms (VSP, VMAX) would benefit all setups. I know that if I bought 2TB of SSD space for all that "automatic right performance" joy, the nightly batch processing system that just happens to be 2TB would push the interactive systems out of the tier. That system goes balls-out after hours, and during the day the others take over in the shorter office-hours window. Making a profile for that might be interesting, and the migration of blocks between the tiers takes some time after profiling. Just because you get some amount of SSD storage, it still has to be sized right for the working set.

I'd be interested in knowing when the admin work plus dedicated 15k spindles costs more than autotiering and SSDs. From the white papers and the sales guys it's hard to get solid info ("you started saving money just by talking to me, wink wink"). But I might just be sperging and nitpicking, I guess. When you go to the performance seminars it's all about sizing one array per application; we suckers who have one array for all applications sit and cry in the corner over our small budgets. Did I make this post a thousand times before in this thread? I ask these questions in a lot of forums and many people just go BUT WE NEEED SSD BECAUSE ITS COOOOOOL.
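The comparison I actually want from the vendors is roughly this, with their real quotes plugged in. Every number below is a made-up placeholder, not pricing from anyone:

# all placeholder figures -- plug in real quotes before concluding anything
ssd_tier_gb=2000;  ssd_price_per_gb=30        # autotiered SSD tier, license rolled in
spindles_15k=48;   price_per_spindle=800      # dedicated 15k spindles doing the same job
admin_hours=40;    hourly_rate=100            # the "work spent" part: layout, tuning, babysitting

echo "autotiering: $(( ssd_tier_gb * ssd_price_per_gb ))"
echo "dedicated:   $(( spindles_15k * price_per_spindle + admin_hours * hourly_rate ))"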
|
# ¿ Mar 3, 2011 14:51 |
|
ragzilla posted:At least on Compellent the migration is based on snapshots (so after a snapshot, the data is 'eligible' to get pushed down to a lower tier). Realistically though if you have some app that runs overnight and you don't want it using Tier0, just put it on a LUN that only has Tier1-3 storage? Just because you can give every LUN tier 0-3 access doesn't mean you should, or would, in production. My point was that both workloads warranted tier 0 performance; the migration profiler would profile them as tier 0, just during different times of the day. The migration takes days in the case of large datasets, and intraday migration to shrink the needed tier 0 storage isn't there yet -- at least in the VSP, from what I gather.
|
# ¿ Mar 3, 2011 15:22 |
|
adorai posted:Personally I think Sun had it right -- throw a shitload of commodity spindles at the problem, and put a shitload of cache in front of it. 90+% of your reads come from cache, and 100% of writes are cached and written sequentially. Saves you the trouble of tiering, period and you never have to worry about fast disk, just cache. Which, iirc, an 18GB SSD for ZIL from Sun was ~$5k, and a shelf of 24 1TB SATA drives was around $15k. Too bad Oracle is already killing that poo poo. I had a boner for those ZFS boxes until we got the quote. I would have loved to get some, but Sun was sinking and the sales people just didn't give a drat anymore. Now I thank them for not talking me into buying it.
|
# ¿ Mar 4, 2011 15:57 |
|
Vanilla posted:You set the window of observation. For most places this would be 9-5 and everything overnight (backups etc) is ignored. You mean getting the same effect as giving the applications dedicated spindles? Vanilla posted:
That depends on who you talk to; I personally share your view on this. A lot of people see it as a way to fire all those crusty storage guys, though. Why doesn't the VMAX virtualize external storage? Anyone?
|
# ¿ Mar 4, 2011 16:10 |
|
The cost of "enterprise sata" sort of takes out the "save" part in "save money", so virtualizing midrange is looking better and better. Edit: If the vmax would get the external capability i would definitely look more in to it. conntrack fucked around with this message at 18:38 on Mar 4, 2011 |
# ¿ Mar 4, 2011 18:36 |
|
Vanilla posted:
The idea is to be able to buy any (supported) array and put it behind the enterprise one. What I get:

- I can put a smaller FC/SAS tier inside the enterprise array.
- I can buy a larger cache with the money saved by not buying all those FC raid groups.
- With cache boundaries/partitions I can give the SATA LUNs a small amount of cache to minimise thrashing and still probably swallow bursty writes.
- I can get quotes from all the players in FC storage for the mid-to-low-demand storage and just connect those.

The quote we got for 10TB of enterprise SATA was higher than buying a new 20TB midrange dual-controller array. After the initial outlay for the midrange box the savings just get bigger, since the controller cost gets spread over a larger amount of storage, while the enterprise SATA stays at the same expensive price no matter how much we buy. Of course I will have to pay for more front-end ports and SAN ports, but unless we run into some queue-depth issue I hope to run several arrays on a shared set of 8Gb ports in the virtualizer.

The main point is being able to have multiple tiers, present them from a single box with a single management interface, and play the vendors against each other. If we buy into the VMAX, for example, we are stuck with it. If we need cheaper storage we have to buy a forest of arrays, each its own point of management if they come from several vendors. I am tired of paying the GNP of Angola for low-access storage. I'm a cheapskate who wants the enterprise features I pay for on my tier 1 LUNs, and wants to use them for tier 2 and 3 as well, but pay less for the privilege. If I ever get time away from managing my forest of arrays I might get time to explore the options... conntrack fucked around with this message at 14:44 on Mar 5, 2011 |
# ¿ Mar 5, 2011 14:41 |
|
Cavepimp posted:I just inherited a little bit of a mess of a network, including a NetApp StoreVault S500. From what I can gather it's a few years old and no longer under maintenance. If I read the NetApp pages correctly, the S500 series hardware went "end of support" at the end of February 2011, so I doubt they will sell support contracts to you. Might be worth calling them to check though.
|
# ¿ Mar 31, 2011 12:30 |
|
My personal theory is that nobody outside Dell marketing knows what makes the switches "optimized". Probably some play on flow control or QoS?
|
# ¿ Apr 19, 2011 05:32 |
|
Vanilla posted:Netapp question guys. I'm on lovely GPRS right now so I couldn't find the NetApp papers I was looking for to give you. http://blogs.netapp.com/extensible_netapp/2009/03/understanding-wafl-performance-how-raid-changes-the-performance-game.html Try this URL and dig around the blogs; they have really good explanations of RAID-DP, WAFL, and the tricks they use to maximise write performance.
|
# ¿ Apr 19, 2011 11:03 |
|
The Compellent system looks cool, but the presales people from Dell looked miffed when I asked them for technical details about their "not mirroring" secret sauce. Either they are lying/embellishing the truth or they just don't know; either way it makes them look bad. Many people are getting into the scale-out-with-servers game. Has NetApp come up with a counter product/propaganda?
|
# ¿ Jun 10, 2011 21:22 |