adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Nomex posted:

As an HP vendor I'd be interested to know your reasoning behind that.
I have witnessed each of these three events: single failed disk taking down an array, firmware upgrades failing an array, and a failed head and the partner didn't take over properly.


Jadus
Sep 11, 2003

adorai posted:

I have witnessed each of these three events: single failed disk taking down an array, firmware upgrades failing an array, and a failed head and the partner didn't take over properly.

Were these all on the same unit? Not saying that's an excuse; I'd blacklist HP storage altogether if this was just one unit in my environment. More curiosity.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Jadus posted:

Were these all on the same unit? Not saying that's an excuse; I'd blacklist HP storage altogether if this was just one unit in my environment. More curiosity.
The first two were on one unit; the third was on a separate unit. I was never the storage admin at this place, but I witnessed it all go down.

We also had a terrible performance issue on the second unit, but that was probably more due to an admin that didn't know wtf.

namaste friends
Sep 18, 2004

by Smythe

ferrit posted:

Is there any way to increase the write performance on a NetApp FAS3140 running OnTap 7.2.6.1? It appears that our options, according to NetApp support, are:

1) Increase the number of spindles (we've got 4 fully populated ds14mk4 disk shelves attached to this head with a mix of 150 and 450 GB 15K FCAL drives) so that the NVRAM is able to flush the writes to disk faster without hitting back-to-back consistency points. This may be difficult as we might run into power issues with adding another shelf.
2) Setup a proper reallocate schedule for all volumes to ensure that the volumes aren't "fragmented" and that we're not running into a hot disk scenario. We've tried this and although it appears to help somewhat, there are several times when we still see latency rise due to the back-to-back consistency points.
3) Stop pushing so much drat data to the filers so drat quickly - this might not be achievable, as it's an Oracle database that the DBA's insist must be able to handle this type of load.
4) Buy a bigger filer head that has a larger NVRAM to help mitigate the instances when we're pushing a lot of data to the filer.

Are there any other bits of performance tuning that can be done? Have there been any significant changes in OnTap 7.3 as far as performance tuning is concerned? We're looking specifically for write performance, so I'm not sure if a PAM module would help us out (I had believed that they were meant for reads more than writes). We had recommended they go for a FAS3170 when it was specced out a couple of years ago, but they saw the cost and backed off.

Thanks!

7.3.x has significant performance improvements over 7.2.x. ONTAP 7.3.x was written to take advantage of multi-core filer heads by running more threads for background processes. I'm surprised support didn't mention this to you. Is it possible for you to upgrade ONTAP?
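Also, before spending money on spindles or a bigger head, it's worth confirming on the console that you really are hitting back-to-back CPs. Something like this (going from memory: the "CP ty" column is the one to watch, with 'B' meaning back-to-back and 'b' deferred back-to-back, so double-check against the sysstat man page):

```shell
# Sample at one-second intervals and watch the "CP ty" column.
# A sustained run of B's or b's means NVRAM is flushing flat out
# and the disks can't keep up with the incoming writes.
sysstat -x 1
```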

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

GrandMaster posted:

just heard back from support, they will be replacing the cabling on the SPA side bus 0 as there were some other strange bus errors. it looks like SPA crashed, SPB didnt so i'm not sure why the luns didnt all trespass and stay online :(

The LCC took out a whole enclosure in my case. Yes, the LUNs should have trespassed unless the whole enclosure faulted. This appears to be a rare but real Achilles' heel of the CLARiiON.

conntrack
Aug 8, 2003

by angerbeet
Fresh support story? Don't mind if I do.

Netapp just sent me a log as proof that one of my raidgroups isn't degraded.

The thing is, the log is from a day BEFORE the raid rebuild was started, and the system became unresponsive during said rebuild.

The reply to calling the lady on her poo poo was "thank you for the information".

I have now downed a stiff drink, and I guess I'll have to down several more and just sleep a few hours until the men come back on shift.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

conntrack posted:

Fresh support story? Don't mind if i do.

Netapp just sent me a log as proof one of my raidgroups isnt degraded.

The thing is the log is a day old BEFORE the raid rebuild was started and the system became unresponsive during said rebuild.

The reply to calling the lady on her poo poo is "thank you for the information".

I have now downed a stiff drink and i guess i will have to do several more and just sleep a few hours until the men come back on shift.
Their support might suck but don't be a chauvinist douche

oblomov
Jun 20, 2002

Meh... #overrated

Crowley posted:

I would too. I've been using EVAs for the better part of a decade without any issue at all.

Haven't used EVAs, but I've had a terrible time dealing with HP's sales team on desktop/laptop purchases. We're talking a large-scale account with eight-figure sales a year, and we got poor responsiveness, slow ordering, and lags on delivery; just overall a bad experience.

On the other hand, EMC, NetApp, and Dell have always been prompt and responsive and have provided excellent support for pretty much anything we got from them. Now, with Dell we escalate through the TAM sometimes, but that's how it rolls, and it's still quick. Personally, this soured me enough on HP that I wouldn't look at them as a vendor for anything for a while.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

complex posted:

Anyone have any thoughts on NetApp's new offerings? The FAS6200, but in particular ONTAP 8.0.1. I'm thinking of going to 8 just for the larger aggregates.

Data ONTAP 8.0.1 also brings DataMotion, which lets you move volumes between aggregates without downtime. The catch is that you can't move a volume from a 32-bit aggregate to a 64-bit aggregate, or vice versa. Compression might also be nice for shrinking user shares, but I haven't had a chance to see it in action yet to gauge how much it actually helps. Finally, the introduction of VAAI in 8.0.1 brings a ton of improvements for VMware via iSCSI on NetApp, notably much faster Storage vMotion.

conntrack
Aug 8, 2003

by angerbeet
Did they get SMB2 back in? When 8 was released there was a lot of grumbling about that.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

conntrack posted:

Did they get SMB2 back in? When 8 was releasing there was alot of grumbling about that.

SMB2 is in 8.0.1, but not SMB2.1, which I guess Windows 7 is capable of.
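If I remember right, SMB2 also has to be switched on explicitly on the filer since it ships disabled. Something like the following, though I'm going from memory on the option names, so check them against the release notes:

```shell
# On the filer console: enable SMB2 for incoming CIFS sessions,
# and optionally for the filer's own client-side connections.
options cifs.smb2.enable on
options cifs.smb2.client.enable on
```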

namaste friends
Sep 18, 2004

by Smythe

madsushi posted:

SMB2 is in 8.0.1, but not SMB2.1, which I guess Windows 7 is capable of.

For everyone that has access to NOW, here's the release notes: http://now.netapp.com/NOW/knowledge/docs/ontap/rel801rc2/html/ontap/rnote/frameset.html

idolmind86
Jun 13, 2003

It's better to burn out than to fade away.

It's even better to work out, numbnuts.
Two questions...

I am a software developer and the increasing trend toward storage over NFS is loving us big time. It seems that a lot of companies don't fully grasp the technology or how to configure it correctly. Our product contains a proprietary database, and we run into many issues including, but not limited to, locking and caching problems, known NFS locking and caching bugs, and stale NFS mounts causing our product to hang until the mount becomes available again. Anyone else out there seeing situations like this? Any advice? We are scrambling internally to figure out better ways to deal with it, but there are frequent incidents that I'm not sure we could avoid if we wanted to.

Some examples: customers doing maintenance on the filer while our product is running, and our processes hanging until a machine reboot. It's similar to what happens if you bring down an NFS server without unmounting it from the client first. In this case the customer blamed our software for the problem, even though any attempt to run simple UNIX commands in the product install directory (cd, ls, pwd, df, etc.) made the process hang.

Another example was a customer with an NFS version with known caching issues where we wrote a test program that wrote to a file and another test program that read from the file and the reader would not immediately find the data.
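A stripped-down version of that reader/writer test looks roughly like this (the filename is made up; the real test programs are in-house):

```python
import os
import tempfile

def writer(path, data):
    # Append and push the data all the way to stable storage, so the
    # NFS client can't hold it back in its own write cache.
    with open(path, "a") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

def reader(path):
    # Re-open on every read so we aren't reusing a stale file object.
    with open(path) as f:
        return f.read()

path = os.path.join(tempfile.mkdtemp(), "coherence_probe.txt")
writer(path, "hello\n")
print(reader(path))  # on a local fs this is immediate; over an NFS mount
                     # with attribute caching the reader can lag behind
```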

Anyway, it has been a nightmare to support so I figured I'd throw the question out there.

Also, on SunOS with a Veritas cluster, does a mounted filesystem type of "vfs" mean Veritas file system or SunOS's virtual file system?

toplitzin
Jun 13, 2003


Woohoo! I just made it through the third round of interviews for NetApp support. I asked them for some areas of study to brush up on. The rep suggested looking into LUNs, mounting and configuring Exchange, maybe a little mild SQL, but really stressed the LUN-NAS/SAN side. Any suggestions?

I've been reading Wikipedia, some of the IBM Redbooks, and some of NetApp's technical resources (thanks 1000101!), but I'll be honest, some of them are a bit... chewy. I'm trying to step up into the world of enterprise-level support and want to at least have an idea going in, since I sure as hell won't know everything. The client even stated that I won't know most of the answers for a good 4-6 months after starting. But any foundation I can build on is better than none.

what is this
Sep 11, 2001

it is a lemur

idolmind86 posted:

Two questions...

I am a software developer and the increasing trend to go to storage over NFS is loving us big time. It seems that a lot of companies don't fully grasp the technology or how to correctly configure it. Our product contains a proprietary database and we run into many issues including, but not limited to, locking and caching issues, known locking and caching bugs with NFS, stale NFS mounts causing our product to hang until the mount becomes available again, etc. Anyone else out there seeing situations like this? Any advice. We are scrambling internally to figure out better ways to deal but there are frequent incidents that I'm not sure we could avoid if we want to.

Some examples are customers doing maintenance to the filer while our product is running and our processes hanging until a machine reboot. Kind of similar to if you just bring down an NFS mount host without unmounting it first. In this case the customer was blaming our software for the problem even though any attempt to do simple UNIX commands in the product install directory made the process hang (such as cd, ls, pwd, df, etc).

Another example was a customer with an NFS version with known caching issues where we wrote a test program that wrote to a file and another test program that read from the file and the reader would not immediately find the data.

Anyway, it has been a nightmare to support so I figured I'd throw the question out there.


Just use iSCSI for your database, that's what it's meant for.


I don't want to reiterate the title of the thread, but explain your database requires direct attached storage or a SAN, and that NFS is not acceptable. This is the case with many databases. It's perfectly reasonable to require they use a system that grants you block level access to files. File level access over a network drive is often just not going to cut it - that's what you're experiencing. Require block level access and your problems go away.

Mierdaan
Sep 14, 2004

Pillbug

Misogynist posted:

Their support might suck but don't be a chauvinist douche

Post/username combo, right here.

content: For a FAS2020, is there anything I'm doing wrong that forces me to spend half my time rebooting these damned BMCs? Pretty frequently, when I try to ssh to them I get 'server unexpectedly closed connection' and NetApp's answer was to just reboot the BMC. Works fine, but it's happening frequently enough that I'm this close to using the filerview command line, and ugh.

conntrack
Aug 8, 2003

by angerbeet
Anyone using datadomain? We got quoted a price that would buy us a petabyte of raw disk for the same price as a data domain box.

We could probably buy half a petabyte, compress it with standard gzip and come out paying less money.

Going back to tape and tape robots is starting to sound good again.......

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

conntrack posted:

Anyone using datadomain? We got quoted a price that would buy us a petabyte of raw disk for the same price as a data domain box.

We could probably buy half a petabyte, compress it with standard gzip and come out paying less money.

Going back to tape and tape robots is starting to sound good again.......

We use them. Work as advertised. Not cheap though.

code:
UPTIME= 06:59:30 up 325 days, 15:40,  0 users,  load average: 1.00, 1.02, 1.00

==========  SERVER USAGE   ==========
Resource             Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------   --------   --------   ---------   ----   --------------
/backup: pre-comp           -    15942.6           -      -                -
/backup: post-comp     2537.2      771.2      1766.0    30%              0.0
/ddvar                   19.7        2.6        16.1    14%                -
------------------   --------   --------   ---------   ----   --------------
 * Estimated based on last cleaning of 2010/11/23 06:24:06.

Filesys Compression
--------------
                      
From: 2010-11-22 06:00 To: 2010-11-29 06:00
                      
                  Pre-Comp   Post-Comp   Global-Comp   Local-Comp      Total-Comp
                     (GiB)       (GiB)        Factor       Factor          Factor
                                                                    (Reduction %)
---------------   --------   ---------   -----------   ----------   -------------
Currently Used:    15942.6       771.2             -            -    20.7x (95.2)
Written:*                                                                        
  Last 7 days                                                                    
  Last 24 hrs                                                                    
---------------   --------   ---------   -----------   ----------   -------------
 * Does not include the effects of pre-comp file deletes/truncates
   since the last cleaning on 2010/11/23 06:24:06.
Key:                                                          
       Pre-Comp = Data written before compression             
       Post-Comp = Storage used after compression             
       Global-Comp Factor = Pre-Comp / (Size after de-dupe)   
       Local-Comp Factor = (Size after de-dupe) / Post-Comp   
       Total-Comp Factor = Pre-Comp / Post-Comp               
       Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100

conntrack
Aug 8, 2003

by angerbeet

skipdogg posted:

We use them. Work as advertised. Not cheap though.

code:
UPTIME= 06:59:30 up 325 days, 15:40,  0 users,  load average: 1.00, 1.02, 1.00

==========  SERVER USAGE   ==========
Resource             Size GiB   Used GiB   Avail GiB   Use%   Cleanable GiB*
------------------   --------   --------   ---------   ----   --------------
/backup: pre-comp           -    15942.6           -      -                -
/backup: post-comp     2537.2      771.2      1766.0    30%              0.0
/ddvar                   19.7        2.6        16.1    14%                -
------------------   --------   --------   ---------   ----   --------------
 * Estimated based on last cleaning of 2010/11/23 06:24:06.

Filesys Compression
--------------
                      
From: 2010-11-22 06:00 To: 2010-11-29 06:00
                      
                  Pre-Comp   Post-Comp   Global-Comp   Local-Comp      Total-Comp
                     (GiB)       (GiB)        Factor       Factor          Factor
                                                                    (Reduction %)
---------------   --------   ---------   -----------   ----------   -------------
Currently Used:    15942.6       771.2             -            -    20.7x (95.2)
Written:*                                                                        
  Last 7 days                                                                    
  Last 24 hrs                                                                    
---------------   --------   ---------   -----------   ----------   -------------
 * Does not include the effects of pre-comp file deletes/truncates
   since the last cleaning on 2010/11/23 06:24:06.
Key:                                                          
       Pre-Comp = Data written before compression             
       Post-Comp = Storage used after compression             
       Global-Comp Factor = Pre-Comp / (Size after de-dupe)   
       Local-Comp Factor = (Size after de-dupe) / Post-Comp   
       Total-Comp Factor = Pre-Comp / Post-Comp               
       Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100

This looks so sweet. I have neck beard envy right now.

idolmind86
Jun 13, 2003

It's better to burn out than to fade away.

It's even better to work out, numbnuts.

what is this posted:

I don't want to reiterate the title of the thread, but explain your database requires direct attached storage or a SAN, and that NFS is not acceptable.

I agree 100%, but we work with some pretty large customers who just don't seem to get it. For instance, the latest headache has been a very large customer who claims they have no physical storage in house, that all storage is on a central Veritas cluster, and that there is absolutely no way to install on a physical disk.

da sponge
May 24, 2004

..and you've eaten your pen. simply stunning.

idolmind86 posted:

I agree 100% but we work with some pretty large customers who just don't seem to get it. For instance the latest headache has been by a very large customer who claims they have no physical storage in house and that all storage is done on a central veritas cluster and that there is absolutely no way to install on a physical disk.

..and they can only present that storage to the app server over NFS and not iSCSI or FC?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

da sponge posted:

..and they can only present that storage to the app server over NFS and not iSCSI or FC?
even if that's the case, tell them to install openindiana in a VM and share the NFS VMDK via iscsi.
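Roughly, on the OpenIndiana side it's only a handful of COMSTAR commands (these are from memory, so treat them as a sketch; the pool and zvol names are made up, and the GUID placeholder comes out of the sbdadm output):

```shell
# Carve a zvol out of the pool that lives on the NFS-backed VMDK
# and export it as an iSCSI LUN via COMSTAR.
zfs create -V 200g tank/oradata
svcadm enable -r svc:/system/stmf:default
svcadm enable -r svc:/network/iscsi/target:default
sbdadm create-lu /dev/zvol/rdsk/tank/oradata
stmfadm add-view <GUID-from-sbdadm-output>   # expose the LU to all initiators
itadm create-target
```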

idolmind86
Jun 13, 2003

It's better to burn out than to fade away.

It's even better to work out, numbnuts.

da sponge posted:

..and they can only present that storage to the app server over NFS and not iSCSI or FC?

I'm not sure. That's when we're getting out of my area of expertise. Actually, any SAN is out of my area of expertise. Our biggest problem is that we constantly get DBAs opening up tickets about our software, related to NFS (or other filer issues). We usually resolve the issue manually and then the issue never gets escalated to the UNIX admins at the customer site. This repeats in a vicious cycle until some CTO or other higher up freaks out over the amount of tickets and then the DBAs point fingers at us, not the file system.

Anyway, it appears as if it is going to be an epic struggle so I'm trying to educate myself more, and hopefully come up with something.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

idolmind86 posted:

I'm not sure. That's when we're getting out of my area of expertise. Actually, any SAN is out of my area of expertise. Our biggest problem is that we constantly get DBAs opening up tickets about our software, related to NFS (or other filer issues). We usually resolve the issue manually and then the issue never gets escalated to the UNIX admins at the customer site. This repeats in a vicious cycle until some CTO or other higher up freaks out over the amount of tickets and then the DBAs point fingers at us, not the file system.

Anyway, it appears as if it is going to be an epic struggle so I'm trying to educate myself more, and hopefully come up with something.

Yeah, that's almost always going to be a customer storage issue. NFS/CIFS does some really bizarre poo poo depending on which half-assed implementation you're using. iSCSI is pretty much designed to fix the file-level caching and buffering crap. Sure, it has its own problems, but dealing with that poo poo isn't one of them.

namaste friends
Sep 18, 2004

by Smythe

idolmind86 posted:

Two questions...

I am a software developer and the increasing trend to go to storage over NFS is loving us big time. It seems that a lot of companies don't fully grasp the technology or how to correctly configure it. Our product contains a proprietary database and we run into many issues including, but not limited to, locking and caching issues, known locking and caching bugs with NFS, stale NFS mounts causing our product to hang until the mount becomes available again, etc. Anyone else out there seeing situations like this? Any advice. We are scrambling internally to figure out better ways to deal but there are frequent incidents that I'm not sure we could avoid if we want to.

Some examples are customers doing maintenance to the filer while our product is running and our processes hanging until a machine reboot. Kind of similar to if you just bring down an NFS mount host without unmounting it first. In this case the customer was blaming our software for the problem even though any attempt to do simple UNIX commands in the product install directory made the process hang (such as cd, ls, pwd, df, etc).

Another example was a customer with an NFS version with known caching issues where we wrote a test program that wrote to a file and another test program that read from the file and the reader would not immediately find the data.

Anyway, it has been a nightmare to support so I figured I'd throw the question out there.

Also, on SunOS and someone using a veritas cluster does a mount file system of "vfs" mean veritas file system or SunOS's virtual file system?

Sometimes I run into clients that use NFS without any consideration for the type of workload they're running. That is, you see them mount their exports without the right options. The problem is that databases treat the filesystem differently than conventional applications do. Oracle, for example, has intimate knowledge of filesystems, particularly local filesystems. Sometimes this results in some pretty messed-up performance over a NAS protocol like NFS. One way of overcoming the problem is to get rid of NFS and use FC or iSCSI. Another is to simply tune your NFS options to suit your database workload. What follows below is a crib of a NetApp technical report (TR-3322, get it here: http://media.netapp.com/documents/tr-3322.pdf ). I want to try to explain what NetApp considers to be the problems with configuring NFS for a database workload.

There are four considerations when using NFS instead of a local filesystem to store your database:

1) Data Caching Mechanisms
2) Data Integrity
3) Asynchronous I/O
4) I/O pattern

With respect to
1)
Conventional file I/O gives the application no control over caching; the file system has its own mechanism for caching data to reduce I/O. A database, on the other hand, is likely smart enough to have its own caching mechanism. This can produce an undesirable 'double caching' effect.

2)
File systems will often defer writing data to disk until some point in time determined by the operating system. Databases sometimes require that data be written to disk immediately to guarantee integrity. This deferral of writes (known as write-back) by the file system can add unwanted latency for the database.

3)
Asynchronous I/O (AIO) is an OS feature that lets your application continue processing while its file system I/O requests are being serviced. AIO matters to databases because they control their own read-ahead and write-back behaviour, and both are intimately intertwined with AIO.

4)
The I/O patterns of databases, particularly online transaction processing (OLTP), generate a high volume of small, random, highly parallelized reads and writes. NFS performance improvements (as far as I'm told, anyway) have neglected this sort of workload.

So what do you do about this? Without knowing what your proprietary database does or how it works, I would suggest generating some sort of workload and benchmarking the I/O when it runs against locally attached storage. Then run the same workload and benchmark against iSCSI- and FC-attached storage. Finally, run it against NFS-mounted storage. If you think you're seeing performance problems that may be related to any of the issues above, the next step is to start playing around with your mount options. The catch is that you could be playing around for a long time, and the options you end up using depend on the OS and version. NetApp's NFS options for Oracle might provide a starting point: https://kb.netapp.com/support/index?page=content&id=3010189 .
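For the OLTP-style pattern described above, a tool like fio can generate the load against whichever mount you're testing (all the numbers and the directory here are placeholders to tune for your own workload):

```shell
# 70/30 random read/write mix in 8k blocks, direct I/O so the client
# page cache doesn't mask the storage, 32 outstanding requests.
fio --name=oltp-probe --directory=/mnt/testvol --size=4g \
    --rw=randrw --rwmixread=70 --bs=8k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=120 --time_based
```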

On Solaris it's possible to mount an export with forced direct I/O (forcedirectio) and no attribute caching (noac). As far as I've seen, lock problems generally arise when something interrupts the network or the communication with the storage device. There's no easy way to deal with that, but take a look at the nointr option.
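For example, a typical Solaris mount line for an Oracle data volume might look like this (the filer path, mount point, and transfer sizes are illustrative; TR-3322 has the per-OS recommendations):

```shell
# hard + nointr so the database never sees short reads/writes;
# forcedirectio bypasses the client cache entirely.
mount -F nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,proto=tcp,forcedirectio \
    filer:/vol/oradata /u02/oradata
```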

I believe Veritas is usually referred to as VxFS.

namaste friends
Sep 18, 2004

by Smythe

what is this posted:

Just use iSCSI for your database, that's what it's meant for.


I don't want to reiterate the title of the thread, but explain your database requires direct attached storage or a SAN, and that NFS is not acceptable. This is the case with many databases. It's perfectly reasonable to require they use a system that grants you block level access to files. File level access over a network drive is often just not going to cut it - that's what you're experiencing. Require block level access and your problems go away.

While I think your approach has merit, I've worked with a significant number of clients (national utilities, oil & gas, investment banking) that use NFS with Oracle. Every protocol has its problems; in the end, the right choice comes down to how much time and energy you're willing to dedicate to solving them.

namaste friends
Sep 18, 2004

by Smythe

toplitzin posted:

Woohoo! I just made it through the third round of interviews for NetApp support. I asked them some areas of study to try and brush up on. The rep suggested looking into LUN's, mounting and configuring exchange, maybe a little mild SQL, but really stressed the LUN-NAS/SAN side. Any suggestions?

I've been reading wikipedia, read some of the IBM redbooks and some of NetApp's Technical resources( Thanks 1000101!!) but I'll be honest, some of them are a bit...chewy. I'm trying to step up into the world of enterprise level support, and want to at least have an idea since I sure as hell won't know everything. The client even stated that I won't know most of the answers for a good 4-6 months after starting. But any foundation I can build on is better than none.

Congratulations on making it this far. I'd suggest spending some time reviewing networking, particularly anything you think would aid in troubleshooting network problems. Every NAS implementation I've come across has been delayed by a misconfigured network: misconfigured VLANs, DNS problems, firewalls tightened up like Fort Knox, routers sending traffic across ISLs. Have you ever used Wireshark? It might be worth thinking through how you would use Wireshark to solve each of these problems. Are you sure the third interview will be technical?

what is this
Sep 11, 2001

it is a lemur

idolmind86 posted:

I'm not sure. That's when we're getting out of my area of expertise. Actually, any SAN is out of my area of expertise. Our biggest problem is that we constantly get DBAs opening up tickets about our software, related to NFS (or other filer issues). We usually resolve the issue manually and then the issue never gets escalated to the UNIX admins at the customer site. This repeats in a vicious cycle until some CTO or other higher up freaks out over the amount of tickets and then the DBAs point fingers at us, not the file system.

Anyway, it appears as if it is going to be an epic struggle so I'm trying to educate myself more, and hopefully come up with something.

I guarantee almost all your customers can expose storage from their existing SAN/NAS through iSCSI.

Simply require this and your problems will go away. They will not have to buy new storage hardware. Their IT department should know how to set up iSCSI if they are not idiots, and if they are idiots they can call up their storage vendor who will explain how it's done.


Your problems are 100% related to the fact that your storage is currently file level storage instead of block level storage.

I'm not going to make a long post about the differences between block-level and file-level storage, but suffice it to say that, to your application, block-level storage will look the same as directly attached and mounted storage: no caching issues, no file-locking issues, etc.
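On a Linux client the whole thing is a handful of commands (the portal address and target IQN here are placeholders for whatever the customer's array reports, and the mount point is made up):

```shell
# Find the targets the array exposes, then log in to one.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.1992-08.com.netapp:sn.12345 -p 192.168.1.50 --login

# The LUN now shows up as an ordinary block device (e.g. /dev/sdb),
# so the database sits on a normal local filesystem.
mkfs.ext4 /dev/sdb
mount /dev/sdb /var/lib/yourdb
```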

GrandMaster
Aug 15, 2004
laidback

conntrack posted:

Anyone using datadomain? We got quoted a price that would buy us a petabyte of raw disk for the same price as a data domain box.

We could probably buy half a petabyte, compress it with standard gzip and come out paying less money.

Going back to tape and tape robots is starting to sound good again.......

yeah, we had a similar quote.. decided to go with a sun thumper instead, zfs inline dedupe is out in the next release of solaris

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

GrandMaster posted:

yeah, we had a similar quote.. decided to go with a sun thumper instead, zfs inline dedupe is out in the next release of solaris

They just came out with the 9/10 release, and no dedup. The previous release was 11 months ago. Solaris 11 is coming at us at lightning speed. Not sure how long you're going to be waiting for dedup in Solaris 10.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.

toplitzin posted:

Woohoo! I just made it through the third round of interviews for NetApp support. I asked them some areas of study to try and brush up on. The rep suggested looking into LUN's, mounting and configuring exchange, maybe a little mild SQL, but really stressed the LUN-NAS/SAN side. Any suggestions?

I've been reading wikipedia, read some of the IBM redbooks and some of NetApp's Technical resources( Thanks 1000101!!) but I'll be honest, some of them are a bit...chewy. I'm trying to step up into the world of enterprise level support, and want to at least have an idea since I sure as hell won't know everything. The client even stated that I won't know most of the answers for a good 4-6 months after starting. But any foundation I can build on is better than none.

Learn about the SnapX products, since those will be the ones hardest to troubleshoot because your hands-on is limited. The reason they give the 6-8 months bullshit is because you are thrown to the wolves within two weeks and actual training is pretty hard to get into. Your most valuable resource will be your co-workers so don't piss them off. Also if you are going night shift, I hope you can understand Indian accents on a grainy connection.

Which part of support are you getting into?

ghostinmyshell fucked around with this message at 16:49 on Nov 30, 2010

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

FISHMANPET posted:

They just came out with the 9/10 release, and no dedup. The previous release was 11 months ago. Solaris 11 is coming at us at lightning speed. Not sure how long you're going to be waiting for dedup in Solaris 10.
The funny thing is that ZFS v22 is in 9/10, but they are assholes and marked deduplication (21) as reserved:

code:
$ zpool upgrade -v
This system is currently running ZFS pool version 22.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Reserved
 22  Received properties

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

Some people seem to think this is due to the settlement with NetApp over the ZFS lawsuit rather than a technical limitation. Also, Solaris 11 Express is out and is supported by Oracle if you are brave enough to put it into production.
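For anyone who does land on a release with dedup unlocked (OpenSolaris build 128+ or Solaris 11 Express, i.e. pool version 21 actually enabled), flipping it on is a one-liner. This is just a sketch; the pool and dataset names are made up:

```shell
# Enable inline dedup on a dataset (requires zpool version >= 21)
zfs set dedup=on tank/vmstore

# Check the pool-wide dedup ratio
zpool list -o name,size,alloc,dedupratio tank

# Dump the dedup table histogram for more detail
zdb -DD tank
```

Worth remembering that the dedup table has to live somewhere; if it doesn't fit in ARC/L2ARC, write performance falls off a cliff.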

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Welp, that sure is "special" on Oracle's part. They're probably going to try and make it a big selling point of Solaris 11, which maybe means they'll come out with an Intel thumper?

And I think Solaris 11 Express is only for evaluation; you can't actually use it in production (but they haven't released Oracle Solaris Studio 12 for it, WTF Oracle?). I'm "evaluating" it at home, which isn't really a lie because I'm learning all sorts of great poo poo that I can use when we go to 11 here at work.

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

Bluecobra posted:

Also, Solaris 11 Express is out and is supported by Oracle if you are brave enough to put it into production.
Interesting... since I'm still running OpenSolaris in production, as an FC SAN.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

Ans I think the Solaris 11 express is only for evaluation, you can't actually use it in production (but they haven't released Oracle Solaris Studio 12 for it, WTF Oracle?). I'm "evaluating" it at home, which isn't really a lie because I'm learning all sorts of great poo poo that I can use when we go to 11 here at work.
Oracle is pushing it in the Sparc T3 clusters they're demoing:

http://www.theregister.co.uk/2010/11/29/oracle_sunrise_supercluster/

Who knows if that implies it'll be a supported configuration for end-users.

TobyObi posted:

Interesting... since I'm still running OpenSolaris in production, as an FC SAN.
Haven't upgraded to OpenIndiana yet?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Sweet, I guess I hope they enjoy violating their own license if they start selling products running Solaris 11 express.

Oracle posted:

You may not:
- use the Programs for your own internal business purposes (other than developing, testing, prototyping and demonstrating your applications) or for any commercial or production purposes;
- remove or modify any program markings or any notice of our proprietary rights;
- make the Programs available in any manner to any third party;
- use the Programs to provide third-party training;
- assign this agreement or give or transfer the Programs or an interest in them to another individual or entity;
- cause or permit reverse engineering (unless required by law for interoperability), disassembly or decompilation of the Programs;
- disclose results of any benchmark test results related to the Programs without our prior consent.

There's also a section on making sure you don't accidentally GPL Solaris code or something:

Oracle posted:

Open Source Software
"Open Source" software - software available without charge for use, modification and distribution - is often licensed under terms that require the user to make the user's modifications to the Open Source software or any software that the user 'combines' with the Open Source software freely available in source code form. If you use Open Source software in conjunction with the Programs (or if you plan on licensing your own application under an Open Source license), you must ensure that your use does not: (i) create, or purport to create, obligations with respect to the Oracle Programs; or (ii) grant, or purport to grant, to any third party any rights to or immunities under our intellectual property or proprietary rights in the Oracle Programs. For example, you may not develop a software program using an Oracle program and an Open Source program where such use results in a program file(s) that contains code from both the Oracle program and the Open Source program (including without limitation libraries) if the Open Source program is licensed under a license that requires any "modifications" be made freely available. You also may not combine the Oracle program with programs licensed under the GNU General Public License ("GPL") in any manner that could cause, or could be interpreted or asserted to cause, the Oracle program or any modifications thereto to become subject to the terms of the GPL.
Source

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It's really not violating anything if they're not legally bound to agree to it in the first place. They're sort of the copyright holder.

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

Misogynist posted:

Haven't upgraded to OpenIndiana yet?
Nope. Don't really have any kind of wanky moral objection to Oracle trying to lock up Sol11, just needed COMSTAR before they realised how useful it could be.

Though, to be honest, it may also never get upgraded to Solaris 11 either, based on the fact that getting downtime for that server now will be pretty difficult.
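For context on what that COMSTAR FC setup involves: carving a LUN out of a zvol and exposing it over FC looks roughly like this on OpenSolaris. A sketch only; the pool name, zvol size, and GUID below are placeholders, and it assumes the FC target mode driver is already active:

```shell
# Back the LUN with a 100G zvol
zfs create -V 100g tank/fc-lun0

# Register the zvol as a SCSI logical unit with COMSTAR
sbdadm create-lu /dev/zvol/rdsk/tank/fc-lun0

# Expose the LU to initiators (use host/target groups rather than
# an open view in anything resembling production)
stmfadm add-view 600144f0c0a8000000004d0000000001   # GUID printed by sbdadm
```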

Nomex
Jul 17, 2002

Flame retarded.
I got a FusionIO IODrive to play with, but I'm having some issues. VMware formats the drive with 512-byte sectors. I've made sure it starts at sector 128, so it should be write-aligned for 4K blocks in the VM, however I'm getting absolutely terrible 4K random IO. Have a look:



Does anyone know if there's any way to format the drive with 4k blocks? Or does anyone have any suggestions?
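Quick sanity check on the alignment math: starting at sector 128 with 512-byte sectors puts the partition at byte offset 65536, which is a multiple of 4096, so the partition offset itself shouldn't be the problem. A throwaway check (pure arithmetic, nothing FusionIO-specific):

```python
SECTOR_SIZE = 512   # bytes per logical sector as presented to the guest
BLOCK_SIZE = 4096   # the I/O size we want aligned

def is_aligned(start_sector, sector_size=SECTOR_SIZE, block=BLOCK_SIZE):
    """True if the partition's byte offset is a multiple of the block size."""
    return (start_sector * sector_size) % block == 0

print(is_aligned(128))   # 128 * 512 = 65536, a multiple of 4096 -> aligned
print(is_aligned(63))    # classic DOS-era default start sector -> misaligned
```

If the offset checks out, the next suspects are the guest filesystem's own block size and whatever the hypervisor layer is doing to the I/O path, not the partition table.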


Boner Buffet
Feb 16, 2006
Does anyone have any experience with the HP P4300 G2 SAN starter kit? Thoughts? Has HP screwed up the Lefthand units or are they still a good option for an iscsi SAN? I'm looking into virtualizing a large chunk of our physical machines. We only have one server running one mssql database and no Oracle. It will mostly be for our GroupWise system and network file storage, along with the odds and ends boxes that are just wasting electricity.

I got pricing and it was more than I was expecting. Then again, I have no real basis for my expectations.
