Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


TobyObi posted:

However, what I am trying to figure out is: am I limited to using it as a NAS device (i.e., NFS only), or will the optional FC card allow me to use it as an FC target in some way?
It's not straightforward or particularly well documented, but the COMSTAR stack in OpenSolaris will let you run it as an FC target backed by ZFS. The process is almost exactly the same as setting up an iSCSI target, except you're zoning it out to WWNs instead of IQNs. I haven't used it personally and can't speak for its performance or reliability, but my iSCSI experiences with COMSTAR have been extremely positive.

There is no support for this whatsoever if you want to use plain Solaris 10.
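For the thread's benefit, the rough shape of that COMSTAR setup is below. This is a sketch only: the pool name, zvol size, host group, WWN, and the QLogic PCI ID are all placeholder assumptions, and the qlc-to-qlt driver swap depends on your particular HBA.

```shell
# Sketch: OpenSolaris with the storage-server bits installed, a QLogic HBA,
# and a pool named "tank". All names and IDs below are placeholders.

# Switch the HBA from initiator (qlc) to target (qlt) mode, then reboot
update_drv -d -i 'pciex1077,2432' qlc
update_drv -a -i 'pciex1077,2432' qlt

# Enable the STMF framework
svcadm enable stmf

# Back a LUN with a ZFS zvol
zfs create -V 200G tank/fc-lun0
sbdadm create-lu /dev/zvol/rdsk/tank/fc-lun0

# Restrict the LUN to one initiator's WWN, then export it
stmfadm create-hg esx-hosts
stmfadm add-hg-member -g esx-hosts wwn.2100001B320A0B0C
stmfadm add-view -h esx-hosts 600144F0...   # GUID from `sbdadm list-lu`
```

The iSCSI variant is the same from `sbdadm create-lu` onward, just with initiator IQNs in the host group instead of WWNs.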

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS


I'm curious to hear other people's feedback here...

I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really, even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb?

Why create a partition table with one giant partition of type lvm, when you can just pvcreate the root block device and skip all that? What do the extra steps buy you besides extra steps and the potential to break a LUN up into parts (something I have no intention of ever doing)?
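For reference, the two paths being weighed look roughly like this (a sketch, assuming a hypothetical fresh second disk at /dev/sdb and an existing volume group named myvg; these commands are destructive):

```shell
# Path 1: skip the partition table entirely (the approach argued for above)
pvcreate /dev/sdb
vgextend myvg /dev/sdb

# Path 2: the "extra steps" -- one giant LVM-type partition first
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb set 1 lvm on
pvcreate /dev/sdb1
vgextend myvg /dev/sdb1
```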

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

Misogynist posted:

It's not straightforward or particularly well documented, but the COMSTAR stack in OpenSolaris will let you run it as an FC target backed by ZFS. The process is almost exactly the same as setting up an iSCSI target, except you're zoning it out to WWNs instead of IQNs. I haven't used it personally and can't speak for its performance or reliability, but my iSCSI experiences with COMSTAR have been extremely positive.

There is no support for this whatsoever if you want to use plain Solaris 10.

I figured that would be the answer.

I've already got an interesting device utilising COMSTAR and FC (and it has been rock solid), but for this, I think NFS over 10Gb ethernet is going to be easier, considering raw device access isn't a necessity, and the whole Oracle having OpenSolaris up in the air bit.

bmoyles
Feb 15, 2002

United Neckbeard Foundation of America

Fun Shoe

StabbinHobo posted:

I'm curious to hear other people's feedback here...

I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really, even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb?

Why create a partition table with one giant partition of type lvm, when you can just pvcreate the root block device and skip all that? What do the extra steps buy you besides extra steps and the potential to break a LUN up into parts (something I have no intention of ever doing)?
Speaking as someone who has nuked 1TB of production porn (Playboy) because a drive without a partition table looked just like the new drive I was going to format for a quick BACKUP of said data, it can be helpful :)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


TobyObi posted:

I figured that would be the answer.

I've already got an interesting device utilising COMSTAR and FC (and it has been rock solid), but for this, I think NFS over 10Gb ethernet is going to be easier, considering raw device access isn't a necessity, and the whole Oracle having OpenSolaris up in the air bit.
I'm using both NFS and iSCSI extensively in my VMware test lab, and I don't really have any complaints about the way either one is implemented in OpenSolaris. I don't think there's necessarily any benefit to FC unless you're connecting up with an existing fabric.

bmoyles posted:

Speaking as someone who has nuked 1TB of production porn (Playboy) because a drive without a partition table looked just like the new drive I was going to format for a quick BACKUP of said data, it can be helpful :)
I've used raw disks for LVM before (I stopped), but I think this general sentiment is a strong one -- do something, anything, to label your partitions so you know what they are at a glance without any guesswork bullshit. I don't know about other filesystems, but I know ext2/3/4 and XFS support partition labels.
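A minimal sketch of that labeling, assuming a hypothetical /dev/sdb1 and a made-up label name:

```shell
# ext2/3/4: set a volume label
e2label /dev/sdb1 pgdata01
# XFS: same idea (filesystem must be unmounted)
xfs_admin -L pgdata01 /dev/sdb1

# Later, identify everything at a glance
blkid                              # shows LABEL=, UUID=, TYPE= per device
mount LABEL=pgdata01 /srv/pgdata   # mount by label instead of /dev/sdX
```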

StabbinHobo posted:

I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really, even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb?
Well, I can only think of one good reason to ever not partition your disk -- you don't have to worry about alignment issues if your partition always starts at block 0. And that's nice, but you potentially lose transparency as to what the LUN is for. This might not be a big deal if there's hugely tight integration between systems and storage administration in a shop, but you can still run into huge disasters if, for example, you inadvertently zone the wrong LUN to a server. If you always partition your disks, you know exactly what your unused LUNs look like -- unpartitioned disks.

Real question:

My role has apparently been hugely expanded regarding management of our SAN. I've got most of the basics down, but can anyone recommend any really good books to start with that don't assume I'm a non-technical manager or some kind of moron? Something that pragmatically covers LAN-free backups, best practices for remote mirroring and that kind of stuff is a big plus for me.

Vulture Culture fucked around with this message at 20:10 on Apr 9, 2010

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

Misogynist posted:

I'm using both NFS and iSCSI extensively in my VMware test lab, and I don't really have any complaints about the way either one is implemented in OpenSolaris. I don't think there's necessarily any benefit to FC unless you're connecting up with an existing fabric.
The existing FC fabric makes the choice easy, in a way.

To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

TobyObi posted:

To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.
I think you would be very surprised by how little utilization you would actually see with iSCSI. None of our VMware hosts come even close to saturating a single gigabit link with iSCSI traffic. Even without 10Gb ethernet, I think it's worthwhile to consider the benefits of iSCSI, which pretty much come down to port cost and management.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


StabbinHobo posted:

I'm curious to hear other people's feedback here...

I can't think of any reason to actually use partition tables on most of my disks. Multipath devices are one example, but really, even if I just add a second vmdk to a VM... why bother with a partition table? Why mount /dev/sdb1 when you can just skip the whole fdisk step and mount /dev/sdb?

Why create a partition table with one giant partition of type lvm, when you can just pvcreate the root block device and skip all that? What do the extra steps buy you besides extra steps and the potential to break a LUN up into parts (something I have no intention of ever doing)?

I've read somewhere that when making a RAID, it's a good idea to make the partition a little bit smaller than the size of the disk, in case your replacement disk is a few sectors smaller than the failed disk.

optikalus
Apr 17, 2008


FISHMANPET posted:

I've read somewhere that when making a RAID, it's a good idea to make the partition a little bit smaller than the size of the disk, in case your replacement disk is a few sectors smaller than the failed disk.

I ran into this problem on my PowerVault 220S with fourteen 146GB drives. One of the drives (a Fujitsu) failed, so I replaced it with a new Fujitsu, and the Adaptec RAID card would not use the drive. It was 2MB smaller than the other drives >:(

I had to move all the content to a new filer, swap all the users over, down the PV220, rebuild the array, move all the content back and swap everyone back over.

My Areca RAID cards have an option to truncate the disk capacity down to the nearest specified increment (I used 10GB), so a 250GB drive will probably only get 240GB, but that allows for plenty of variation between actual 250GB drive capacities.

I don't think this has anything to do with the partition size, though.
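That Areca-style truncation is just integer arithmetic: round the drive's size down to the nearest boundary so a slightly smaller replacement still fits. A sketch with made-up numbers (a nominal "250GB" drive's true size varies by vendor):

```shell
# Round a drive's usable size down to the nearest 10 GiB so a slightly
# smaller replacement drive can still join the array.
DISK_MIB=238475          # hypothetical actual size of a "250GB" drive, in MiB
ROUND_MIB=$((10 * 1024)) # 10 GiB truncation unit
USABLE_MIB=$(( DISK_MIB / ROUND_MIB * ROUND_MIB ))
echo "$USABLE_MIB"       # capacity the array will actually use
```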

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

adorai posted:

I think you would be very surprised by how little utilization you would actually see with iSCSI. None of our VMware hosts come even close to saturating a single gigabit link with iSCSI traffic. Even without 10Gb ethernet, I think it's worthwhile to consider the benefits of iSCSI, which pretty much come down to port cost and management.

We're already saturating 4Gb FC links, and we're migrating to 8Gb with this upgrade that is being worked on.

Sadly, single gig links aren't going to cut it. If they did, my life would be easy...

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

TobyObi posted:

We're already saturating 4Gb FC links, and we're migrating to 8Gb with this upgrade that is being worked on.

Sadly, single gig links aren't going to cut it. If they did, my life would be easy...
What are you up to if you're needing 8G FC? If performance is a real concern, you may want to look into NFS over 10G ethernet. We're pushing out a bunch of 10G NFS right now, so feel free to drop me a line.

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

TobyObi posted:

We're already saturating 4Gb FC links, and we're migrating to 8Gb with this upgrade that is being worked on.

Sadly, single gig links aren't going to cut it. If they did, my life would be easy...
Are you saturating 4Gb links on the SAN side or on the host side? By using trunking or a few 10Gb ports on the SAN side, you can do it cheaply. Our SANs obviously generate a LOT more iSCSI traffic than any individual host does.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


TobyObi posted:

To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.
This might have been true in the 3.x days, but 4.0 has iSCSI MPIO that's worked very well in our testing. (We still mostly use NFS on the development side because it's easy as hell to provision new VMs. Also, we can't afford Storage VMotion.)

TobyObi
Mar 7, 2005
Ain't nobody Obi like Toby ;)

adorai posted:

Are you saturating 4Gb links on the SAN side or on the host side? By using trunking or a few 10Gb ports on the SAN side, you can do it cheaply. Our SANs obviously generate a LOT more iSCSI traffic than any individual host does.
The links are saturated at the disk controllers. We're smashing 400MB/sec all day during business hours, basically.

Misogynist posted:

This might have been true in the 3.x days, but 4.0 has iSCSI MPIO that's worked very well in our testing. (We still mostly use NFS on the development side because it's easy as hell to provision new VMs. Also, we can't afford Storage VMotion.)
I think I might have missed this bit (and it might explain a bit more), but this isn't for VMware. While some of my current infrastructure does use VMware, it is nowhere near the major pain point.

SAM-QFS. Archiving file system. Constant data movement up and down tiers.

H110Hawk
Dec 28, 2006


optikalus posted:

I ran into this problem on my PowerVault 220S with fourteen 146GB drives. One of the drives (a Fujitsu) failed, so I replaced it with a new Fujitsu, and the Adaptec RAID card would not use the drive. It was 2MB smaller than the other drives >:(

I don't think this has anything to do with the partition size, though.

You have to be sure you read the Guaranteed Sector Count on any disk you purchase to replace an existing one. You are correct, it's not the partition size, but the size of whatever "thing" your array sees when building itself. This could be an exported multi-disk device (think raid10), a partition/file (when doing testing of raid subsystems), or the raw block device itself.

You can add a failsafe to this by lowering the used sector count in your raid controller software for each disk while building your array. Even if your array asks you "How many gigs do you want to use on this disk?" there is typically a way to see the actual block/sector counts. Where do you set it? 1% should be totally safe, but an easy way to tell is to look at all the major disk manufacturers for similar size disks, pick the smallest number, and reduce that by a tiny percentage. Even then just pay attention to the spec sheet when ordering and send it back if it doesn't match spec.

If you want an example of this, look at a NetApp sysconfig -r output and compare the logical to physical sector counts. You will see that logical is far lower than physical. This leaves headroom for block remapping, and covers the fact that NetApp doesn't have direct control over the manufacturing process and may send you Hitachi, Seagate, or Fujitsu disks as replacements.
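That sizing rule can be sketched as: take the smallest guaranteed sector count among the vendors you might get, then shave off about 1%. The sector counts below are illustrative placeholders, not published specs.

```shell
# Pick the smallest guaranteed sector count among candidate replacement
# vendors, then keep ~1% back as a safety margin when building the array.
SEAGATE=488397168      # hypothetical guaranteed sector counts, per vendor
WDC=488281250
HITACHI=488392000
MIN=$(printf '%s\n' "$SEAGATE" "$WDC" "$HITACHI" | sort -n | head -n 1)
SAFE=$(( MIN * 99 / 100 ))   # build the array on 99% of the smallest disk
echo "$SAFE"
```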

H110Hawk fucked around with this message at 21:23 on Apr 10, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams


H110Hawk posted:

You have to be sure you read the Guaranteed Sector Count on any disk you purchase to replace an existing one. You are correct, it's not the partition size, but the size of whatever "thing" your array sees when building itself. This could be an exported multi-disk device (think raid10), a partition/file (when doing testing of raid subsystems), or the raw block device itself.

You can add a failsafe to this by lowering the used sector count in your raid controller software for each disk while building your array. Even if your array asks you "How many gigs do you want to use on this disk?" there is typically a way to see the actual block/sector counts. Where do you set it? 1% should be totally safe, but an easy way to tell is to look at all the major disk manufacturers for similar size disks, pick the smallest number, and reduce that by a tiny percentage. Even then just pay attention to the spec sheet when ordering and send it back if it doesn't match spec.
Yeah, in the thing I read, you built your RAID array on top of those slightly smaller partitions.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


Can anyone recommend me a good book on SAN architecture and implementation that doesn't assume I'm either retarded or non-technical management? I'm apparently now in charge of an IBM DS4800 and an IBM DS5100, which is nice because I'm no longer going to be talking out of my rear end in this thread, but it sucks because I'm a tiny bit in over my head with IBM Redbooks right now.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


Sorry to bump again, but is anyone managing an IBM SAN using IBM Systems Director? I installed the SANtricity SMI-S provider on a host and connected it up to the SAN and can see all the relevant details if I look at the instance view in the included WBEM browser. However, when I try to connect to it using IBM Director, it can't discover it, even when given the server's IP address directly. Anyone have any ideas?

oblomov
Jun 20, 2002

Meh... #overrated

Misogynist posted:

Can anyone recommend me a good book on SAN architecture and implementation that doesn't assume I'm either retarded or non-technical management? I'm apparently now in charge of an IBM DS4800 and an IBM DS5100, which is nice because I'm no longer going to be talking out of my rear end in this thread, but it sucks because I'm a tiny bit in over my head with IBM Redbooks right now.

Is there nothing on Amazon besides their "IBM Press" stuff? I usually just Google for things, but then again I have never dealt with IBM, only NetApp, Dell (their crappy MD3000i setup), Equallogic, and some EMC. I've never had to get a book for anything there; between vendor docs and Google, it's been good enough.

oblomov
Jun 20, 2002

Meh... #overrated

brent78 posted:

Please explain. We are looking at picking up 6 shelves of LeftHand. I've used EqualLogic in the past and loved everything about them, except my boss is anti-Dell these days. If LeftHand sucks, please tell me before I get neck deep in it.

I've got to pipe up and say it's a pleasure dealing with Equallogic (and NetApp) support. We haven't had any really weird calls, mostly a drive failure here and there and a couple of network-based shenanigans, but they are very quick to respond. Also, the modules seem pretty solid and easy to use.

For pricing, I agree with previous posters: don't even look at retail pricing for NetApp (or Equallogic). Get some competitive quotes and start talking to salespeople. I've used 2020s and 2050s and they are pretty nice units for what they go for, but lately we have been buying Equallogic at the low and low-mid level instead; it turns out cheaper even with dedupe (and for VMware you've got vSphere thin provisioning now). For the higher end (mid to high level SANs), we have been going NetApp. Not that Equallogic can't deliver mid-level SAN performance, but NetApp has a lot of flexibility in quite a few areas, all said and done.

Oh, and I'm not sure about their SAN side, but dealing with HP sales is like pulling teeth. I'm talking fairly high-end contracts too (not just a couple hundred $K).

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


oblomov posted:

Is there nothing on Amazon besides their "IBM Press" stuff? I usually just Google for things, but then again I have never dealt with IBM, only NetApp, Dell (their crappy MD3000i setup), Equallogic, and some EMC. I've never had to get a book for anything there; between vendor docs and Google, it's been good enough.
I was just looking for generic SAN stuff, not something necessarily vendor-specific, but IBM did have a couple of free redbooks that helped me out quite a bit.

Of course, I might be accepting a new job tomorrow, in which case I'd be learning the EMC side of things (particularly as it relates to Oracle). Having more experience never hurts :)

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean



Fun Shoe

The IBM N series is just a rebranded NetApp if that helps you any.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Misogynist posted:


Of course, I might be accepting a new job tomorrow, in which case I'd be learning the EMC side of things (particularly as it relates to Oracle). Having more experience never hurts :)

Symmetrix or Clariion user?

Cyberdud
Sep 6, 2005

Space pedestrian

My company wants to consolidate their data (and VMs), and I've been put in charge of the project.

There are so many choices out there, and the budget I was given is limited.

What do you suggest for a small company that wants around 2-4 TB that can survive one drive failure?

We are talking about around 10 VMs being run by two VMware servers. And how much is it going to cost me?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert


Cyberdud posted:

My company wants to consolidate their data (and VMs), and I've been put in charge of the project.

There are so many choices out there, and the budget I was given is limited.

What do you suggest for a small company that wants around 2-4 TB that can survive one drive failure?

We are talking about around 10 VMs being run by two VMware servers. And how much is it going to cost me?

What kind of budget?

This could cost 10K or 200K; it depends on the company's needs, tolerance for downtime, budget, and skill set.

Cyberdud
Sep 6, 2005

Space pedestrian

skipdogg posted:

What kind of budget?

This could cost 10K or 200K; it depends on the company's needs, tolerance for downtime, budget, and skill set.

Let's say the less expensive the better; I don't think we could go above 15-20k. Also, my skill set with SAN/NAS is nonexistent, so I'm willing to learn as much as possible.

EDIT: I'd also be looking for a gigabit switch supporting jumbo frames.

Cyberdud fucked around with this message at 15:38 on Apr 22, 2010

da sponge
May 24, 2004

..and you've eaten your pen. simply stunning.

Cyberdud posted:

Let's say the less expensive the better; I don't think we could go above 15-20k. Also, my skill set with SAN/NAS is nonexistent, so I'm willing to learn as much as possible.

EDIT: I'd also be looking for a gigabit switch supporting jumbo frames.

Jumbo frames AND flow control is what you want, although many entry-level and mid-level switches don't support both (like the ProCurve 2800 series, much to my disappointment). That said, I'm running 4 ESX hosts with ~30 VMs off round-robined iSCSI with the default 1500-byte MTU without issue.

Cyberdud
Sep 6, 2005

Space pedestrian

How about this: the QNAP TS-859 Pro Turbo NAS, which supports jumbo frames (http://www.qnap.com/pro_detail_feature.asp?p_id=146)?

They come to around 1600 CAD each and have 8 bays, so we can purchase two of them.

Does Netgear make good switches? I saw a pretty affordable one that supports jumbo frames. What do you guys recommend?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


Cyberdud posted:

Does Netgear make good switches?
Netgear anything is universally, without exception, poo poo.

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

Cyberdud posted:

What do you suggest for a small company who wants around 2-4 Tb that can survive one drive failure.
If you do not require HA, I would buy one of these. You can pick one up for well under $10k. You can then get a second one and put it in a colo, or even in someone's basement, and replicate your data to it. You'll also want a gigabit (probably managed) switch if you don't already have one; I would get something like this. You could probably come in right at $10k for your primary site, plus another $8k for your replication partner if you wanted to go that route.

oblomov
Jun 20, 2002

Meh... #overrated

Cyberdud posted:

How about this : QNAP TS-859 Pro turbo NAS which supports jumbo frames (http://www.qnap.com/pro_detail_feature.asp?p_id=146)

it comes to around 1600 CAD and supports 8 bays each so we can purchase two of them.

Does netgear make good switches ? I saw a pretty affordable one that supports Jumbo Frames. What do you guys recommend?

It's decent if you want to run a small NAS for 5-10 people, but I wouldn't run VMware from it; it's not meant for that. Netgear makes decent switches for your house or your dentist's office, not for enterprise gear (it's actually decent for low-end switching). Check out Dell or HP switches if Cisco is a bit too pricey (it is indeed). Do get something that supports flow control (send and receive), as mentioned. Going with either of those should save you a bit of cash.

For the SAN, depending on the load, check out the MD3000i from Dell or maybe the Equallogic 4000 series. Make sure to talk to a sales rep, and also get quotes from HP/Cisco/IBM and pressure the sales guy/girl; you can get a good discount that way.

Cyberdud
Sep 6, 2005

Space pedestrian

It's funny how it's advertised as a VMWARE READY NAS. I don't get it.

oblomov posted:

It's decent if you want to run a small NAS for 5-10 people, but I wouldn't run VMware from it; it's not meant for that. Netgear makes decent switches for your house or your dentist's office, not for enterprise gear (it's actually decent for low-end switching). Check out Dell or HP switches if Cisco is a bit too pricey (it is indeed). Do get something that supports flow control (send and receive), as mentioned. Going with either of those should save you a bit of cash.

For the SAN, depending on the load, check out the MD3000i from Dell or maybe the Equallogic 4000 series. Make sure to talk to a sales rep, and also get quotes from HP/Cisco/IBM and pressure the sales guy/girl; you can get a good discount that way.

Any explanation on why that QNAP couldn't run VMware?

Cyberdud fucked around with this message at 15:33 on Apr 23, 2010

KoeK
May 15, 2003
We dont die we multiply

Cyberdud posted:

Any explanation on why that QNAP couldn't run VMware?

Look at its specs: it only has one power supply.

It might technically be able to function as a VMware SAN, but I would never let it near any production use.

Nukelear v.2
Jun 25, 2004
My optional title text

Cyberdud posted:

let's say the less expensive the better, i don't think we could go above 15-20k. Also skill set with SAN/NAS is nonexistant, so i'm willing to learn as much as possible.

EDIT: also would be looking for a Gigabit switch supporting jumbo frames.

We did a similar project with Dell kit here; here's how I would do it for ~$25k.

1 x MD3000i, dual controller; use RAID 10 and pick drive speed and size according to needs
2 x PowerConnect 6224, since you can't let that switch be a single point of failure
1 x RPS-600
2/3 x R610s, loaded to the gills with RAM and NICs. You want at least two interfaces for iSCSI, two for VM management, and then whatever else you need for production
1 x vSphere Essentials Plus bundle for 3 hosts

http://www.delltechcenter.com/page/VMware+ESX+4.0+and+PowerVault+MD3000i
That will walk you through setting up the iSCSI side. For some reason the images on it aren't loading for me right now.

Edit:
In the cover-your-rear end approach to architecture, present them a feasible option like this, and if they come back and need to go cheaper, tell them what you can remove, how much it will save, and what the ramifications are. That way, when your single switch with no RPS fails, you can't be blamed for designing a bad solution. Protip: you will be blamed anyway.

Nukelear v.2 fucked around with this message at 17:23 on Apr 23, 2010

ShizCakes
Jul 16, 2001
BANNED

Nukelear v.2 posted:

Protip, you will be blamed anyway.

Which is why you should propose something a bit overkill, and then bitch and moan about bringing it down to whatever level it is that you actually need.

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

Cyberdud posted:

It's funny how it's advertised as a VMWARE READY NAS. I don't get it.


Any explanation on why that QNAP couldn't run vmware?

Do you really think that you are going to get good performance off an NFS server running embedded Linux on an Intel Atom? This might work okay for one or two hosts, but you're talking ten. The Sun storage system that adorai recommended will run circles around this, not to mention you will get ZFS, which is a far superior file system.

what is this
Sep 11, 2001

it is a lemur


Cyberdud posted:

It's funny how it's advertised as a VMWARE READY NAS. I don't get it.

Any explanation on why that QNAP couldn't run vmware?

It's not enterprise hardware. It's perfectly fine for small business or consumer grade use. You can run VMware as a consumer.

There's a whole thread for consumer storage, NAS and iSCSI.

namaste friends
Sep 17, 2004

by Smythe


Cyberdud, I think you may want to go through the following exercise:

1) figure out how much it will cost your company per hour of down time
2) figure out what your company's tolerance is for down time, given the cost

You need to have a conversation with management about the managey business stuff like this, because ultimately you need to be accountable for your design decisions. Finally, you need to document this stuff. It sounds like you work for a pretty small shop and you guys may be pretty informal about decisions like this, but I can definitely tell you that this exercise is worth performing. Not only that, future employers would look on this sort of exercise favourably.

Also, if the whole setup blows up in your face, you can pull the report out, show it to management, and tell them why you made the decisions you did.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

So I have a ton of perfmon stats from a certain server.

What tools do you use to analyse these? I know there's the Windows Performance Monitor tool, but I've found it a bit 'hard'.

Do you know of any third-party tools for analysing perfmon outputs?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Vanilla posted:

So I have a ton of perfmon stats from a certain server.

What tools do you use to analyse these? I know there's the Windows Performance Monitor tool, but I've found it a bit 'hard'.

Do you know of any third-party tools for analysing perfmon outputs?

Export .csv files and you can probably feed the data into esxplot:

http://labs.vmware.com/flings/esxplot

I regularly use this to parse through 1-2GB of esxtop data at a time when doing performance troubleshooting.

It might work with generic Windows counters too. I'm guessing it just plots whatever is in the CSV.
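A sketch of that workflow: the relog step (commented out, since it only exists on Windows) converts perfmon's binary .blg logs to CSV, and the fabricated sample below just shows the PDH CSV shape such tools expect; the server name and counter are made up.

```shell
# On the Windows box, perfmon's binary .blg logs convert to CSV with:
#   relog server01.blg -f csv -o server01.csv

# Fabricated sample of the PDH CSV shape: a timestamp column plus one
# quoted column per counter.
cat > perf.csv <<'EOF'
"(PDH-CSV 4.0)","\\SERVER01\Processor(_Total)\% Processor Time"
"04/09/2010 20:10:00","12.5"
EOF

# Quick sanity check: count the columns before feeding it to a plotter
awk -F'","' 'NR==1 {print NF}' perf.csv
```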
