complex
Sep 16, 2003

Anyone have any thoughts on NetApp's new offerings? The FAS6200 series, but ONTAP 8.0.1 in particular. I'm thinking of going to 8 just for the larger aggregates.

conntrack
Aug 8, 2003

by angerbeet
i do know i would like to have the inline compression on my boxes. looks sweet in the propaganda.

SmellsOfFriendship
May 2, 2008

Crazy has and always will be a way to discredit or otherwise demean a woman's thoughts and opinions

Erwin posted:

Anybody have experience with Promise VessRAID units? I know it's super low-end, but it's just going to be our backup-to-disk target/old files archive. My concern with the unit is that it's been synchronizing the 6TB array for almost 23 hours now, and it's at 79%. If I add another array down the road and it takes a day to synchronize, the existing array better be usable.

That's what I was shoehorning just yesterday! We had to sync 8 x 1 TB disks. That doesn't sound unreasonable given my experience. So far the Promise isn't bad. Pretty quick, easy to set up, easy to get talking, and easy to administer. We're using an LSI card too; those support guys are awesome.

Good luck fellow cheap array goon.

SmellsOfFriendship fucked around with this message at 22:21 on Nov 11, 2010

egoslicer
Jun 13, 2007
Does anyone have any experience with NexSan? We are looking at their SATABoy product for hosting our initial round of VDIs. Our users only really use Word, Excel, and a couple of web apps, and they barely register in perfmon across the board. We wanted something inexpensive but decent. It looks like it will come down to either the NexSan or an MDI3200.

da sponge
May 24, 2004

..and you've eaten your pen. simply stunning.

egoslicer posted:

Does anyone have any experience with NexSan? We are looking at their SATABoy product for hosting our initial round of VDIs. Our users only really use Word, Excel, and a couple of web apps, and they barely register in perfmon across the board. We wanted something inexpensive but decent. It looks like it will come down to either the NexSan or an MDI3200.

I'm using 2 of their SASBoys w/dual controllers over FC (they are pretty much the same as the SATABoy). The SASBoys are the core of our ESX cluster and are rock solid. I think I had to use support once and they were very responsive.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Any Compellent users? Opinions?

edit: it's for a lab environment; our devs want complete control over drive assignment, something our LeftHand P4300 boxes don't let them do.

skipdogg fucked around with this message at 19:28 on Nov 15, 2010

ragzilla
Sep 9, 2005
don't ask me, i only work here


skipdogg posted:

Any Compellent users? Opinions?

edit: it's for a lab environment; our devs want complete control over drive assignment, something our LeftHand P4300 boxes don't let them do.

The software still has some kinks in 'bizarre' failure scenarios (like if a controller loses both connections to a loop because they were both on a failing ASIC on your FC card, it does NOT fail over to the other controller) but overall it works as advertised.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

skipdogg posted:

Any Compellent users? Opinions?
The sales engineer I spoke to about a year ago could not possibly have been more condescending.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

adorai posted:

The sales engineer I spoke to about a year ago could not possibly have been more condescending.

My VAR just complained about not being able to get ahold of his rep today. He bitched all the way up to the regional manager. I'm sure he ruffled some feathers in the process.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Does anyone know how to make a RAID 10 in FreeNAS? I can't seem to find it anywhere; all the guides point me to RAID 5 or 1 or 0, but not 1+0. Am I missing something, or does FreeNAS not do this? Yes, this is software RAID.

H110Hawk
Dec 28, 2006

Corvettefisher posted:

Does anyone know how to make a RAID 10 in FreeNAS? I can't seem to find it anywhere; all the guides point me to RAID 5 or 1 or 0, but not 1+0. Am I missing something, or does FreeNAS not do this? Yes, this is software RAID.

Maybe it's like some really terrible LSI firmware versions that are out there. Do you have to make a bunch of RAID 1s, then start over and make a RAID 0 out of all of your mirrors?
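
If the FreeNAS web GUI really won't nest them for you, this is roughly what it looks like from the FreeBSD shell underneath using geom. A sketch only - the disk names (ada0-ada3) are placeholders for whatever your box actually calls them:

# build the two mirrors (the RAID 1 legs)
gmirror label -v gm0 ada0 ada1
gmirror label -v gm1 ada2 ada3

# stripe across the two mirrors to get 1+0
gstripe label -v st0 mirror/gm0 mirror/gm1

# filesystem on the resulting device, then mount it
newfs -U /dev/stripe/st0
mount /dev/stripe/st0 /mnt/raid10

Check gmirror status and gstripe status before you trust any data to it.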

Hok
Apr 3, 2003

Cog in the Machine

Jadus posted:

Somewhat related: we're looking at an MD3220i with two hosts, and since we don't expect to grow beyond two hosts for a while, we're thinking about cutting out redundant switches between the servers and the MD3220i.

If we do this, is load balancing still possible across the links when direct connecting? For example, will it work properly to connect two NICs from server 1 to Controller 1, two NICs from server 1 to Controller 2, and the same for server 2?

Sorry for the slightly delayed response, haven't been checking in much the last week.

Yeah, if you've only got 2 systems then there's no reason not to direct connect. You've got 4 ports on each controller, you can use all of them, and with enough connected it will load balance fine as long as you've got MPIO sorted properly.

If you've got 2 ports available on each server, connect one to each controller. This will give some load balancing, but as each virtual disk is bound to a controller you'll only get 1 Gb of throughput per VD; you'll just get more spread across all the VDs.

Best case is 4 ports on each server, with 2 connected to each controller; that will give full load balancing and a nice perf boost.
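
For the MPIO bit on the Windows side it's only a couple of commands. Rough sketch from memory - the disk number and policy are just examples, and the MD3220i ships with Dell's own DSM, so check their docs rather than trusting this:

mpclaim -n -i -a ""
(claim all eligible storage for MPIO without an immediate reboot)
mpclaim -s -d
(list the MPIO disks and the current load balance policy on each)
mpclaim -l -d 0 4
(set disk 0 to Least Queue Depth; 2 would be plain Round Robin)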

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Semi-related to Enterprise Storage (and I don't want to start a new topic):

For a file server, what is the best way to set up the storage? We plan on virtualizing the host, and connecting it directly to an iSCSI SAN (through the iSCSI initiator, instead of creating VMDKs for the drives). We have a 10TB unit dedicated to this, but we're currently only using 2.75TB of space on our existing file server.

Should we immediately allocate a giant 10TB partition or create a more reasonable size (5TB) and then add in more partitions/volumes if needed? What is the best way to add in volumes after-the-fact? Add the volumes then symlink them in? Add volumes and move shares to them?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

three posted:

We plan on virtualizing the host, and connecting it directly to an iSCSI SAN (through the iSCSI initiator, instead of creating VMDKs for the drives).
Is there any particular reason you're avoiding VMDKs? Storage vMotion would basically eliminate all of your problems here.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Misogynist posted:

Is there any particular reason you're avoiding VMDKs? Storage vMotion would basically eliminate all of your problems here.

From my understanding, the largest LUN size that can be presented to ESX is 2TB. I believe there are ways to get around this by having multiple 2TB LUNs and using extents, but I was under the impression extents were bad.

Also, if it's presented just as a straight LUN, we could map it to a physical server if need be.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

three posted:

From my understanding, the largest LUN size that can be presented to ESX is 2TB. I believe there are ways to get around this by having multiple 2TB LUNs and using extents, but I was under the impression extents were bad.

Also, if it's presented just as a straight LUN, we could map it to a physical server if need be.
Extents are for VMFS volumes, so you're right, guest-side iSCSI's probably the way to go if you want to avoid cobbling together a bunch of LUNs using Dynamic Disks.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I would do guest-side iSCSI and allocate 30% more space than you are currently using. Then I would resize the volume (hopefully your storage supports this) as needed.
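
Assuming it's a Windows guest, growing it later is painless once you've resized the LUN on the array side - something like this (the volume number is just an example):

diskpart
DISKPART> rescan
(pick up the new LUN size)
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend
(grows NTFS into the new space online, no downtime)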

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

adorai posted:

I would do guest-side iSCSI and allocate 30% more space than you are currently using. Then I would resize the volume (hopefully your storage supports this) as needed.

I feel retarded for not thinking of this. I overthink things way too much. :(

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

H110Hawk posted:

Maybe it's like some really terrible LSI firmware versions which are out there. Do you have to make a bunch of raid 1's, then start over and make a raid 0 out of all of your mirrors?

I thought I could make 2 RAID 1 arrays, then go to RAID 0 and add both arrays, but to no avail.

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Corvettefisher posted:

I thought I could make 2 RAID 1 arrays, then go to RAID 0 and add both arrays, but to no avail.

Could it be like some of the HP onboards where you just select RAID1, then select more than 2 disks, and it automagically changes itself to 10/1+0?

GrandMaster
Aug 15, 2004
laidback
are there many emc users here?
we just had a nightmare day at the office - emc engineer was in to install 3 new dae's to our cx3-40. first tray went in with no problems but when the second was plugged in, the bus faulted and took the entire array down. the "non-disruptive" upgrade pretty much brought down our entire call centre :(

after the emc engineering lab did all their research it looks like a dodgy lcc in the dae was the cause.

has anyone else seen anything like this happen before?

conntrack
Aug 8, 2003

by angerbeet
It's the tradeoff you make for running midrange hardware.

You often get fine performance, but the uptime is not guaranteed to a gazillion nines.

But all is not dandy in the enterprise world either.

Today we got word from our HP support contact that they are holding us hostage for $700 before giving us any future support.

Support under a contract we pay $100k per year for.

gently caress HP in the rear end, i will suck a bag of soggy dicks before i buy a single new HP storage product.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

GrandMaster posted:

are there many emc users here?
we just had a nightmare day at the office - emc engineer was in to install 3 new dae's to our cx3-40. first tray went in with no problems but when the second was plugged in, the bus faulted and took the entire array down. the "non-disruptive" upgrade pretty much brought down our entire call centre :(

after the emc engineering lab did all their research it looks like a dodgy lcc in the dae was the cause.

has anyone else seen anything like this happen before?

Yes. Once, a few years ago, I saw a bad LCC in a CLARiiON when adding DAEs. I don't think it took down the whole array, though.

ferrit
Mar 18, 2006
Is there any way to increase the write performance on a NetApp FAS3140 running OnTap 7.2.6.1? It appears that our options, according to NetApp support, are:

1) Increase the number of spindles (we've got 4 fully populated ds14mk4 disk shelves attached to this head with a mix of 150 and 450 GB 15K FCAL drives) so that the NVRAM is able to flush the writes to disk faster without hitting back-to-back consistency points. This may be difficult as we might run into power issues with adding another shelf.
2) Setup a proper reallocate schedule for all volumes to ensure that the volumes aren't "fragmented" and that we're not running into a hot disk scenario. We've tried this and although it appears to help somewhat, there are several times when we still see latency rise due to the back-to-back consistency points.
3) Stop pushing so much drat data to the filers so drat quickly - this might not be achievable, as it's an Oracle database that the DBAs insist must be able to handle this type of load.
4) Buy a bigger filer head that has a larger NVRAM size to help mitigate the instances when it is pushing a lot of data to the filer.

Are there any other bits of performance tuning that can be done? Have there been any significant changes in OnTap 7.3 as far as performance tuning is concerned? We're looking specifically for write performance, so I'm not sure if a PAM module would help us out (I had believed that they were meant for reads more than writes). We had recommended they go for a FAS3170 when it was specced out a couple of years ago, but they saw the cost and backed off.

Thanks!

complex
Sep 16, 2003

What does your IOPS profile look like, read vs. write? How about just pure total IOPS? NFS, iSCSI, or block? At peak load, what does cache age look like?

We have a FAS3140 with 7 full DS14MK4s and 2 full DS4243s emitting block storage to a vSphere 4.1 installation and I have looked at a lot of performance numbers.

We are running 7.3.3.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.

ferrit posted:

Is there any way to increase the write performance on a NetApp FAS3140 running OnTap 7.2.6.1? It appears that our options, according to NetApp support, are:

1) Increase the number of spindles (we've got 4 fully populated ds14mk4 disk shelves attached to this head with a mix of 150 and 450 GB 15K FCAL drives) so that the NVRAM is able to flush the writes to disk faster without hitting back-to-back consistency points. This may be difficult as we might run into power issues with adding another shelf.
2) Setup a proper reallocate schedule for all volumes to ensure that the volumes aren't "fragmented" and that we're not running into a hot disk scenario. We've tried this and although it appears to help somewhat, there are several times when we still see latency rise due to the back-to-back consistency points.
3) Stop pushing so much drat data to the filers so drat quickly - this might not be achievable, as it's an Oracle database that the DBAs insist must be able to handle this type of load.
4) Buy a bigger filer head that has a larger NVRAM size to help mitigate the instances when it is pushing a lot of data to the filer.

Are there any other bits of performance tuning that can be done? Have there been any significant changes in OnTap 7.3 as far as performance tuning is concerned? We're looking specifically for write performance, so I'm not sure if a PAM module would help us out (I had believed that they were meant for reads more than writes). We had recommended they go for a FAS3170 when it was specced out a couple of years ago, but they saw the cost and backed off.

Thanks!

I don't know the model number translation from IBM to NetApp, but this doc has a pretty good explanation of the PAM module so you can decide for yourself.

http://www.redbooks.ibm.com/abstracts/sg247129.html

The IBM Redbooks are a really good source of NetApp info.

H110Hawk
Dec 28, 2006

ferrit posted:

2) Setup a proper reallocate schedule for all volumes to ensure that the volumes aren't "fragmented" and that we're not running into a hot disk scenario. We've tried this and although it appears to help somewhat, there are several times when we still see latency rise due to the back-to-back consistency points.

I'm a bit rusty on this, but you can check for a hot disk yourself and see exactly how much performance you're gaining from reallocation. I haven't sat at a NetApp console in a year+, so verify these commands before running them.

During your worst IO performance times, when you are getting back-to-back CPs, do:
# priv set advanced
# statit -b
(wait several minutes)
# statit -e
# priv set

This should give you a whole pile of output. Look through the disk utilization numbers and see how you are doing. NetApp still does dedicated parity, right? This means 1 or 2 of your disks per RAID group will show some piddling amount of utilization, and that is normal.

Also look through sysconfig -r, wafl scan status, and options to make sure you aren't doing some kind of constant scrubbing or other high impact job during peak hours. Any scrub jobs should be paused during times of extremely high utilization.
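
Sketch of the scrub side of that, same caveat about me being rusty - double check against your ONTAP version:

# aggr scrub status -v
(is a RAID scrub running right now, and how far along it is)
# aggr scrub suspend
(pause it until the peak passes; aggr scrub resume when you're clear)
# options raid.scrub.schedule
(shows when the weekly scrub is set to kick off)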

Sometimes you can just slap a bigger NVRAM card into your existing netapp. This might get into warranty voiding territory.

I've never used Oracle, but make sure you are doing aligned writes to your netapp. Ask Netapp support how you can verify that. One block write should net one data block write (and one or two parity blocks) on your netapp.

At a certain point management is going to have to bite the bullet on extra disks. Depending on how risk averse you are, you can spend as much or as little as you want. PM me for details.

This is also a pretty good source of diagnostic commands. Temporarily bypass the SSL warnings. Do not under any circumstances run a command which you do not understand. Do not argue with the filer. The filer will win all arguments. The filer knows best.
https://wiki.smallroom.net/doku.php?id=storage:netapp:general

H110Hawk fucked around with this message at 16:58 on Nov 18, 2010

ferrit
Mar 18, 2006
Thanks to all of you for the responses - I really appreciate it.

complex posted:

What does your IOPS profile look like, read vs. write? How about just pure total IOPS? NFS, iSCSI, or block? At peak load, what does cache age look like?

I should have mentioned this in my first post, but we're presenting every volume over NFS. As I only have a very amateurish knowledge of NetApp performance (mainly via 'sysstat -u' and 'statit'), I really only know how to get the total number of operations. For example (a snippet from running 'sysstat -u 1'):

 CPU   Total    Net kB/s       Disk kB/s     Tape kB/s   Cache  Cache    CP   CP   Disk
       ops/s    in     out     read   write  read write    age    hit  time   ty   util
 48%    2998  57149  20355   107524  72588      0     0    35s    93%   99%    M    70%
 47%    3214  59451  20107   105572  67364      0     0    33s    94%  100%    :    76%
 33%    1417   9045  21801    91588  51480      0     0    33s    97%  100%    #    60%
 27%    1077    318  20653   100956  66712      0     0    33s    95%  100%    #    70%
 47%    3504  59658  20528    65120  48220      0     0    33s    97%  100%    b    85%
 28%    1815  14856  28860    66544  50176      0     0    33s    96%  100%    :    70%
 26%    1376   3569  24536    69968  54756      0     0    33s    95%  100%    :    68%
 28%    2098  11302  23890    71072  52128      0     0    34s    96%   80%    :    73%

So just in that small period where I see the back-to-back CPs, we're seeing anywhere from 1000 ops to 3500 ops. To me, the disk utilisation figures look quite high, though obviously they are lower (around 20-40%) when there is not a CP occurring at that moment. Our big problem is that the incidents of back-to-back CPs are really peaky - we might hit a 5 or 6 second period of high latency and then not see it again for hours.

My statit output is definitely showing that the last RG (the shelf with the 450 GB disks) has a higher utilisation figure than the other 3 RGs with the 150 GB disks - but could that just be because they are bigger disks and thus utilised more? I thought that the volume reallocate would have assisted us in levelling this out.

ghostinmyshell posted:

I don't know the model number translation from IBM to NetApp, but this doc has a pretty good explanation of the PAM module so you can decide for yourself.

http://www.redbooks.ibm.com/abstracts/sg247129.html

The IBM Redbooks are a really good source of NetApp info.

I'll have to check this out - thanks for the link. I believe there is some way that you can actually simulate what a PAM module would do for you if you had it by running some advanced commands, but I can't find it in the NOW site right now.

H110Hawk posted:

Also look through sysconfig -r, wafl scan status, and options to make sure you aren't doing some kind of constant scrubbing or other high impact job during peak hours. Any scrub jobs should be paused during times of extremely high utilization.

Sometimes you can just slap a bigger NVRAM card into your existing netapp. This might get into warranty voiding territory.

I've never used Oracle, but make sure you are doing aligned writes to your netapp. Ask Netapp support how you can verify that. One block write should net one data block write (and one or two parity blocks) on your netapp.

At a certain point management is going to have to bite the bullet on extra disks. Depending on how risk averse you are, you can spend as much or as little as you want. PM me for details.

This is also a pretty good source of diagnostic commands. Temporarily bypass the SSL warnings. Do not under any circumstances run a command which you do not understand. Do not argue with the filer. The filer will win all arguments. The filer knows best.
https://wiki.smallroom.net/doku.php...:netapp:general

We have the automatic WAFL scrub scheduled for Sunday mornings, so we tend to discount those stats if we see the high latency during that period, but that's a good point. We also regularly check 'wafl scan status' to be sure that the customer has not decided to delete a few TB of data and thus cause a big block reclamation scan to take place (although we have that one option, 'wafl.trunc.throttle.hipri.enable', set to 'off', so it shouldn't bring the filer to its knees if they do decide to do it).

Do NetApp even sell NVRAM upgrades anymore? I had a quick look around the site but couldn't find anything - otherwise, this would probably be ideal for us, unless it were to invalidate our support contract. And that link looks good too - though the java commands, frankly, look frightening.

Another quick question to throw in the mix, not necessarily related to this issue - aggregate snapshots. Do you have them enabled or not? Is it worth having them? I've spoken to several people who've worked with NetApp for years and they've never had to use an aggregate snapshot to recover. Our biggest issue is the amount of reserve it takes up - the customer is angry about that space that should be his (his words, not mine). Also, if he ends up deleting an arseload of data and it's bigger than the aggregate snap reserve, then we have seen the filer go loopy trying to clear that aggregate snapshot (we have aggregate snapshot autodelete turned on) - and several times it has gone over 100% snap usage and eaten into the normal aggregate.

complex
Sep 16, 2003

ferrit posted:

I'll have to check this out - thanks for the link. I believe there is some way that you can actually simulate what a PAM module would do for you if you had it by running some advanced commands, but I can't find it in the NOW site right now.

We've done this - PCS (Predictive Cache Statistics). Because we are dealing with very large files in VMware, PCS indicated that an increase in FlexCache/PAM would not significantly increase cache hit ratios for us, and thus would not be worth it. Instead we decided to simply add spindles.

Nukelear v.2
Jun 25, 2004
My optional title text

ferrit posted:

Are there any other bits of performance tuning that can be done? Have there been any significant changes in OnTap 7.3 as far as performance tuning is concerned? We're looking specifically for write performance, so I'm not sure if a PAM module would help us out (I had believed that they were meant for reads more than writes). We had recommended they go for a FAS3170 when it was specced out a couple of years ago, but they saw the cost and backed off.

Thanks!

Same basic question I always ask: are your SQL data files and transaction logs on the same spindles? If they share spindles with an active DB then you will end up killing your log performance.

After the last SQL thread I decided to test this best practice to satisfy my curiosity. HP P2000 G3 6Gb SAS, direct attached. 2-minute runs over a 22 GB file.

With a few 64k random reads going on to simulate data file activity.

Logs separated onto their own 4-disk 15k RAID 10, sequential 8k writes:
111596 IOs/s, 90.59 MB/s

Logs mixed with data on a 16-disk 15k RAID 10, sequential 8k writes:
3297 IOs/s, 25.75 MB/s

Edit: Not sure what block size Oracle uses for its logs, but 64k was similarly affected: 412 MB/s vs 149.8 MB/s.
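
These were synthetic runs - if you want to reproduce the pattern, something along these lines with Microsoft's sqlio will do it. Treat the flags as a sketch, I'm not claiming these are the exact invocations I used:

sqlio -kR -frandom -b64 -o8 -s120 -LS -t2 testfile.dat
(background 64k random reads to stand in for data file activity - run in one window)
sqlio -kW -fsequential -b8 -o1 -s120 -LS testfile.dat
(the "log" stream: sequential 8k writes, 2 minute run, latency stats - run in another)

testfile.dat needs to be pre-created at your target size (22 GB here), otherwise sqlio tests against a tiny default file and the numbers mean nothing.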

Nukelear v.2 fucked around with this message at 20:56 on Nov 18, 2010

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Check your alignment

priv set diag; stats show lun; priv set admin

Look for any writes that land on .1 through .7; they should all be on .0 or partial if you are aligned.

GrandMaster
Aug 15, 2004
laidback

paperchaseguy posted:

Yes. Once, a few years ago, I saw a bad LCC in a CLARiiON when adding DAEs. I don't think it took down the whole array, though.

just heard back from support, they will be replacing the cabling on the SPA side bus 0 as there were some other strange bus errors. it looks like SPA crashed and SPB didn't, so i'm not sure why the LUNs didn't all trespass and stay online :(

conntrack
Aug 8, 2003

by angerbeet
If I'm not mistaken the 3140 box has 4 gigs of RAM.

Is there a way to check the amount of memory involved in the operations stored in NVRAM?

I.e., adding more NVRAM might be useless if the buffers are full anyway?

Mausi
Apr 11, 2006

Forgive my ignorance on this topic, but could someone point me to an explanation of how NFS compares to direct block access in terms of performance?

How is an Oracle server using NFS for its data storage?



Thanks, found http://media.netapp.com/documents/tr-3496.pdf which is interesting reading, if a few years old.
\/ \/ \/

Mausi fucked around with this message at 12:47 on Nov 19, 2010

conntrack
Aug 8, 2003

by angerbeet

Mausi posted:

Forgive my ignorance on this topic, but could someone point me to an explanation of how NFS compares to direct block access in terms of performance?

How is an Oracle server using NFS for its data storage?


NetApp has a lot of whitepapers online about Oracle and NFS.

Crowley
Mar 13, 2003
I'm on the verge of signing the order for an HP X9720 in a 164TB configuration with three accelerators, for online media archiving and small-time online viewing.

Anyone have an informed opinion on those? I haven't been able to find any independent comments since the system is still pretty new, and I'd like to know if I'm buying a piece of crap.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Crowley posted:

I'm on the verge of signing the order for an HP X9720 in a 164TB configuration with three accelerators, for online media archiving and small-time online viewing.

Anyone have an informed opinion on those? I haven't been able to find any independent comments since the system is still pretty new, and I'd like to know if I'm buying a piece of crap.
It would be a cold day in hell before I pulled the trigger on any HP storage.

Nomex
Jul 17, 2002

Flame retarded.
As an HP vendor I'd be interested to know your reasoning behind that.

Crowley
Mar 13, 2003
I would too. I've been using EVAs for the better part of a decade without any issue at all.


H110Hawk
Dec 28, 2006

ferrit posted:

My statit output is definitely showing that the last RG (the shelf with the 450 GB disks) has a higher utilisation figure than the other 3 RGs with the 150 GB disks - but could that just be because they are bigger disks and thus utilised more? I thought that the volume reallocate would have assisted us in levelling this out.

Do NetApp even sell NVRAM upgrades anymore?

Another quick question to throw in the mix, not necessarily related to this issue - aggregate snapshots. Do you have them enabled or not?

The 450 GB disks are going to run hot because new blocks are likely to wind up there as you reach capacity on the 150 GB disks. You have roughly the same IOPS, I assume, between the 150 GB and the 450 GB disks. This means you are trying to pull more blocks from the same number of IOPS and creating a bottleneck. If your VAR did not explain this they should be raked over the coals.

I have no idea if they ever sold NVRAM upgrades. I do know that if you slapped a bigger NVRAM card into an older box it would use it. :q:

Snapshots have their place. If you have no use for them then just disable them. Snap reserve is a magical thing sometimes, as it can let you squeeze some "whoops!" space out of the device much like ext2's root reserved space. One common thing you can do with them is tie them to your frontend software, issue a coalesce/get ready for backup command, fire a snapshot, then release the lock. This then lets you do a copy of the snapshot somewhere else for backup. If this is not a part of your backup system then don't worry about it. I personally love snapshots.
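
If you do decide the aggregate ones aren't worth the space, turning them off on 7-mode is quick. Sketch from memory, aggr0 is a placeholder, verify the syntax on your release:

filer> snap sched -A aggr0 0 0 0
(stop taking scheduled aggregate snapshots)
filer> snap reserve -A aggr0 0
(hand the 5% aggregate snap reserve back to the aggregate)
filer> snap list -A aggr0
(check nothing old is still hanging around; snap delete -A -a aggr0 if it is)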
