adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

bull3964 posted:

The only reason to thin provision is to oversubscribe resources. The question is WHERE do you oversubscribe. What's best practice?
We thin provision EVERYTHING and then monitor the backing. We don't go crazy, but are generous.


Internet Explorer
Jun 1, 2005





bull3964 posted:

If you don't thin provision at the storage level though, how is dedup providing you any benefits at all?

If I thick provision a 10tb volume on netapp, put 2 thick provisioned 5tb LUNs on it, and then put 50 identical, 100% full 100gb VMs on each LUN, I'll be wasting a ton of space.

I will have reserved 10tb of storage on the SAN, both of my 5tb VMware volumes would be full, and each guest would also be full, but I would only be consuming around 100gb (give or take) of real storage.

NetApp is actually one of the big names I haven't worked with. I don't know how their dedupe works but I'd assume it doesn't need thin provisioning. It would just be deduping empty space?

My question back to you is why make such a big LUN if you don't need it? Nowadays it should be fairly easy to increase the size of the LUN, grow the datastore, then grow the vdisk and the guest.
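(A rough sketch of that grow-in-place flow, with invented paths, names, and sizes; the LUN resize shown is 7-Mode-style NetApp, and the datastore grow itself is normally done from the vSphere client, so treat this as an outline rather than a procedure.)

code:
  # 1. Grow the LUN on the array (array-specific; 7-Mode-style NetApp shown)
  lun resize /vol/vm_vol/lun0 +500g

  # 2. Rescan storage on the ESXi host, then grow the VMFS datastore
  #    (typically: datastore -> Increase Capacity in the vSphere client)
  esxcli storage core adapter rescan --all

  # 3. Extend the guest's virtual disk to its new total size (hypothetical path)
  vmkfstools -X 200G /vmfs/volumes/datastore1/vm1/vm1.vmdk

  # 4. Inside a Linux guest: rescan the disk, then grow the partition and filesystem
  echo 1 > /sys/class/block/sda/device/rescan
  growpart /dev/sda 1
  resize2fs /dev/sda1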

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Internet Explorer posted:

NetApp is actually one of the big names I haven't worked with. I don't know how their dedupe works but I'd assume it doesn't need thin provisioning. It would just be deduping empty space?

My question back to you is why make such a big LUN if you don't need it? Nowadays it should be fairly easy to increase the size of the LUN, grow the datastore, then grow the vdisk and the guest.

Maybe I'm not explaining myself well enough.

If you don't thin provision, you are making a reserved storage commitment on the SAN (doesn't have to be netapp, the principles should be the same regardless.)

If you have some dedup or compression ratio, even a small one, the actual raw storage you use is going to be less than what the host sees for the LUN.

The size of the LUN is irrelevant. If you get a 2:1 dedup ratio on data you store on the LUN, then you are always going to be using half the raw storage compared to the consumption the host is seeing on the LUN. You write 100gb to the LUN, the host sees that 100gb more of the LUN is used, but in reality you only consume 50gb of raw storage. At some point, you'll fill the LUN, but only half of the actual raw storage will be used for that data.

But if your LUN isn't thin provisioned, you have that other half of raw storage sitting there reserved for the LUN and no other LUN can use it, but you also can't possibly use it since the host is going to see a larger percentage of consumption than you have on raw disk. So, you essentially have raw capacity that you cannot access anymore.

Dedup doesn't need thin provisioning, but it doesn't make sense to dedup a thick provisioned LUN (or volume, or whatever) because you are reserving capacity that will never get used as the amount of raw storage being used will always be less than the consumption that the host sees.
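(Back-of-the-envelope, using the 10tb / 2:1 numbers from the example above; the shell arithmetic is just illustration, nothing array-specific.)

code:
  # Thick-provisioned 10 TB LUN, 2:1 dedupe on the data written to it
  lun_size_tb=10
  dedupe_ratio=2

  # Raw storage actually consumed once the host has filled the LUN
  echo $(( lun_size_tb / dedupe_ratio ))                 # 5 TB used

  # Raw storage reserved for the LUN but unusable by any other LUN
  echo $(( lun_size_tb - lun_size_tb / dedupe_ratio ))   # 5 TB stranded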

bull3964 fucked around with this message at 00:01 on Aug 15, 2015

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Internet Explorer posted:

NetApp is actually one of the big names I haven't worked with. I don't know how their dedupe works but I'd assume it doesn't need thin provisioning. It would just be deduping empty space?

My question back to you is why make such a big LUN if you don't need it? Nowadays it should be fairly easy to increase the size of the LUN, grow the datastore, then grow the vdisk and the guest.
Netapp does not dedupe the extra thick provisioned space. As for lun sizing, we create what we think we will need in the next year. Going for less than that is a recipe for someone ignoring an alert and the lun going offline when it fills up.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Deduping the unused thick provisioned space across LUNs would essentially be thin provisioning. There's no practical distinction.

For dedup to have any value, you have to oversubscribe your storage and the only way I know of to do that is thin provision volumes/LUNs/whatever storage widget.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dedupe and compression both have benefits beyond space savings. Dedupe is often (as on NetApp) transparent to caching, so when you cache a deduped block you are effectively multiplying the size of your cache, since the single block represents many blocks' worth of logical data. Likewise, compression reduces the amount of data that needs to be written to disk, which decreases IO pressure on the drives.

I recommend thin provisioning all the way down and managing utilization of your physical capacity, as that's the easiest way to get transparent information about growth and actual utilized capacity. This requires that you have good forecasting in place and a sane purchase process so that you can get additional capacity added before things become dire.

If you're using block storage your eager zeroed thick VMDKs may end up being effectively thin provisioned anyway if the array does inline zero block deduplication. ONTAP does this as of 8.3 on any volume with sis enabled (even if sis scans aren't running; and sis is required to use file and lun level cloning) and Nimble does this by default.
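(For the curious, a minimal sketch of the pieces mentioned above: creating an eager-zeroed thick disk and checking that sis is on for the backing volume. Paths and the volume name are made up, and the NetApp commands are 7-Mode-style; clustered ONTAP uses the volume efficiency command family instead.)

code:
  # ESXi: create a 40 GB eager-zeroed thick VMDK (hypothetical path)
  vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk

  # NetApp 7-Mode: confirm sis (dedupe) is enabled on the backing volume
  # and show the current space savings
  sis status /vol/vm_vol
  df -S /vol/vm_vol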

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
The reality is thick provisioning is only good if your monitoring, purchasing, and team responsiveness suck beyond belief. There is no reason to give 40GB to a VM when it will realistically use only 20GB. You can thin provision, and if it ends up needing the full 40GB, it can take it, but otherwise you have effectively doubled your capacity for free. It is similar to the fact that your electric company does not keep available the full capacity that every home in your neighborhood could consume simultaneously. It is much more effective to forecast a reasonable amount above the normal maximum, and then increase capacity later if needed.

froward
Jun 2, 2014

by Azathoth
Is there a good place to buy 2nd hand, refurbished & cast-off enterprise hardware? I'd like to start a small home lab for loving around with VMs & such.

What do companies do with last generation hardware when they upgrade? Because I would like to have some.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

froward posted:

Is there a good place to buy 2nd hand, refurbished & cast-off enterprise hardware? I'd like to start a small home lab for loving around with VMs & such.

What do companies do with last generation hardware when they upgrade? Because I would like to have some.
They dump the equipment on recycling companies. Typically, those recycling companies then offload in bulk to used equipment dealers, or sell them to the third world as unsorted scrap.

eBay is probably your best bet if you're buying one-offs and not bulk equipment.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

froward posted:

Is there a good place to buy 2nd hand, refurbished & cast-off enterprise hardware? I'd like to start a small home lab for loving around with VMs & such.

What do companies do with last generation hardware when they upgrade? Because I would like to have some.

Don't go down this road, simulate it the best you can on modern hardware. Old rear end enterprise kit is noisy as gently caress and sucks down a ton of electricity. There's a reason it's being sold so cheap.

If you truly want to replicate the experience, get yourself a 1000W space heater and a leaf blower. Turn them both on in the room you'll be using the stuff in. Same thing basically, but without the blinky lights.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

skipdogg posted:

Don't go down this road, simulate it the best you can on modern hardware. Old rear end enterprise kit is noisy as gently caress and sucks down a ton of electricity. There's a reason it's being sold so cheap.

If you truly want to replicate the experience, get yourself a 1000W space heater and a leaf blower. Turn them both on in the room you'll be using the stuff in. Same thing basically, but without the blinky lights.
Depends; you can do a lot with 3-year-old servers, if you get things from companies like quantitative trading houses that are religious about those depreciation cycles. Keep in mind that Sandy Bridge server parts came out in 2012, and core-for-core gains versus Haswell are pretty minimal.

Kaddish
Feb 7, 2002
If you're using block level de-dupe and compression at the storage level, it doesn't matter if your vmdk is thin provisioned or thick. A block is a block and it's either being used on the array or it isn't. At least this is true on Pure, which is the only de-dupe/compression/thin provisioning I use.

As mentioned above, make sure you utilize SCSI UNMAP periodically, especially if you have aggressive DRS. This doesn't run automatically. You will need to run it against a datastore from any host in the cluster.
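(A minimal sketch of the manual UNMAP run on ESXi 5.5 and later; the datastore name is made up and -n is the optional number of blocks reclaimed per iteration.)

code:
  # Run from any host in the cluster that sees the datastore
  esxcli storage vmfs unmap -l MyDatastore -n 200

  # On ESXi 5.0/5.1 the rough equivalent was vmkfstools -y <percent>,
  # run from inside the datastore's directory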

Kaddish fucked around with this message at 15:46 on Aug 17, 2015

Pile Of Garbage
May 28, 2007



Also be careful if you're using NFS on NetApp as you're entirely at the whim of the filer when it comes to reclamation of space on a volume. I've run into issues before where I've aggressively evacuated volumes and then immediately repopulated them only to find that I'm suddenly out of space.

Edit: should probably say that the specific instance in which I encountered the aforementioned issue was something of an edge-case, where I was attempting to remediate NFS datastores which were on an indirect path by evacuating them, removing and recreating them with the correct node IP address in vCenter, and then immediately repopulating them.

Pile Of Garbage fucked around with this message at 16:35 on Aug 17, 2015

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Kaddish posted:

If you're using block level de-dupe and compression at the storage level, it doesn't matter if your vmdk is thin provisioned or thick. A block is a block and it's either being used on the array or it isn't. At least this is true on Pure, which is the only de-dupe/compression/thin provisioning I use.


Yeah, that's kinda why I posed the question in the first place. Originally we were on equallogic storage which doesn't have dedup/comp so it made sense to thin provision things. But when you have dedup/comp on the storage side (we are using both Pure and NetApp), it seems like extra complexity to thin provision on the VMDK side since you aren't going to use that storage anyways because dedup takes care of that on the physical side. In fact, VMWare is more efficient at storage vmotioning thick provisioned VMDK files (as long as it's on the same storage) so there are even advantages to thick provisioning.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

cheese-cube posted:

Also be careful if you're using NFS on NetApp as you're entirely at the whim of the filer when it comes to reclamation of space on a volume. I've run into issues before where I've aggressively evacuated volumes and then immediately repopulated them only to find that I'm suddenly out of space.
You have to clean up your SIS database when you do this. We learned the hard way.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

cheese-cube posted:

Also be careful if you're using NFS on NetApp as you're entirely at the whim of the filer when it comes to reclamation of space on a volume. I've run into issues before where I've aggressively evacuated volumes and then immediately repopulated them only to find that I'm suddenly out of space.

Edit: should probably say that the specific instance in which I encountered the aforementioned issue was something of an edge-case, where I was attempting to remediate NFS datastores which were on an indirect path by evacuating them, removing and recreating them with the correct node IP address in vCenter, and then immediately repopulating them.

If you've already evacuated the volume completely then you're better off deleting it and re-creating it, since that deletes all of the map files and means delete processing only needs to happen at the aggregate level, which is going to be faster than processing a large amount of truncate activity at the volume level. It also avoids any SIS issues since the SIS database gets deleted, as adorai pointed out. Processing large deletes synchronously is difficult for any sufficiently large filesystem because the metadata that controls block allocation is split across many filesystem blocks that must be read, modified, and updated while other activity is paused (since new blocks cannot be allocated or modified while the metadata files are locked). This is especially true in redirect-on-write filesystems like ZFS or WAFL (and probably CASL and others) where the metadata itself is stored on filesystem blocks, and thus self-referential, which can lead to long chains of updates.

bull3964 posted:

Yeah, that's kinda why I posed the question in the first place. Originally we were on equallogic storage which doesn't have dedup/comp so it made sense to thin provision things. But when you have dedup/comp on the storage side (we are using both Pure and NetApp), it seems like extra complexity to thin provision on the VMDK side since you aren't going to use that storage anyways because dedup takes care of that on the physical side. In fact, VMWare is more efficient at storage vmotioning thick provisioned VMDK files (as long as it's on the same storage) so there are even advantages to thick provisioning.

Well, it's still a management headache since you've got a VMware datastore reporting full because all of your VMs are fully allocated, but the backing LUN is not consuming anywhere near that amount of space on the actual array. So you either grow the LUN, even though it's not actually consuming anywhere near its currently allocated space, or you create a new LUN and your datastore count starts to go up up up. Yet another reason why I like NFS is not having to think too hard about these sorts of things.

Pile Of Garbage
May 28, 2007



NippleFloss posted:

If you've already evacuated the volume completely then you're better off deleting it and re-creating it, since that deletes all of the map files and means delete processing only needs to happen at the aggregate level, which is going to be faster than processing a large amount of truncate activity at the volume level. It also avoids any SIS issues since the SIS database gets deleted, as adorai pointed out. Processing large deletes synchronously is difficult for any sufficiently large filesystem because the metadata that controls block allocation is split across many filesystem blocks that must be read, modified, and updated while other activity is paused (since new blocks cannot be allocated or modified while the metadata files are locked). This is especially true in redirect-on-write filesystems like ZFS or WAFL (and probably CASL and others) where the metadata itself is stored on filesystem blocks, and thus self-referential, which can lead to long chains of updates.

Admittedly I'm not technically a storage admin and my core storage experience is only with IBM SVC and FC. The environment we inherited was a mess and whilst we have proper storage admins who know NetApp it was my fault for not engaging them. Oh how I wish I was working with FC again instead of this horrid NFS+iSCSI mish-mash :allears:

Rhymenoserous
May 23, 2008
There's nothing difficult about iSCSI or NFS though. On a day to day basis I find both much easier to work with than FC.

Kaddish posted:

If you're using block level de-dupe and compression at the storage level, it doesn't matter if your vmdk is thin provisioned or thick. A block is a block and it's either being used on the array or it isn't. At least this is true on Pure, which is the only de-dupe/compression/thin provisioning I use.

As mentioned above, make sure you utilize SCSI UNMAP periodically, especially if you have aggressive DRS. This doesn't run automatically. You will need to run it against a datastore from any host in the cluster.

Nimble specifically tells you to just roll thick provisioned clients; it will take care of dedupe.

Rhymenoserous fucked around with this message at 20:22 on Aug 18, 2015

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Rhymenoserous posted:

Nimble specifically tells you to just roll thick provisioned clients; it will take care of dedupe.
Nimble doesn't dedupe, it compresses.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

Nimble doesn't dedupe, it compresses.

They do inline zero elimination, which is a flavor of deduplication, and that's probably what they're suggesting when they say to (eager-zero) thick provision.

Rhymenoserous
May 23, 2008

NippleFloss posted:

They do inline zero elimination, which is a flavor of deduplication, and that's probably what they're suggesting when they say to (eager-zero) thick provision.

^

What he said.

Wicaeed
Feb 8, 2005

Rhymenoserous posted:

Nimble specifically tells you to just roll thick provisioned clients; it will take care of dedupe.

On my VMware & Nimble setup, the used vs. free numbers don't match between what Nimble reports and what VMware sees. When we format a VM as thick, VMware reports that disk space as immediately being used on the datastore, but if I go into the Nimble management interface, I can see that it really is thin provisioned on the storage backend.

Are there any special tools required for VMware to know that it really is being thin provisioned on the backend and mark that capacity as free accordingly?

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
Use Thin Provisioning unless otherwise required....?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Wicaeed posted:

On my VMware & Nimble setup, the used vs. free numbers don't match between what Nimble reports and what VMware sees. When we format a VM as thick, VMware reports that disk space as immediately being used on the datastore, but if I go into the Nimble management interface, I can see that it really is thin provisioned on the storage backend.

Are there any special tools required for VMware to know that it really is being thin provisioned on the backend and mark that capacity as free accordingly?

I don't believe it will ever match. If you use space on that thin provisioned disk, then free it up, you will need to SvMotion it to reclaim those zeros?

Why does it matter though? In VMware I have an 8tb datastore with 800gb free (all thick provisioned). On the nimble side, I am only utilizing 2.25tb.

Edit: SvMotion will only do it if moving to a datastore with a different block size.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004155

Moey fucked around with this message at 22:06 on Sep 1, 2015

Rhymenoserous
May 23, 2008

Wicaeed posted:

On my VMware & Nimble setup, the used vs. free numbers don't match between what Nimble reports and what VMware sees. When we format a VM as thick, VMware reports that disk space as immediately being used on the datastore, but if I go into the Nimble management interface, I can see that it really is thin provisioned on the storage backend.

Are there any special tools required for VMware to know that it really is being thin provisioned on the backend and mark that capacity as free accordingly?

They expressly told me not to do thin provisioning to avoid confusing scenarios like this. Also bear in mind what you are seeing on the array is post dedupe/compression/magic space maker.

Wicaeed
Feb 8, 2005

Rhymenoserous posted:

They expressly told me not to do thin provisioning to avoid confusing scenarios like this. Also bear in mind what you are seeing on the array is post dedupe/compression/magic space maker.

That doesn't really make sense, them telling you not to thin provision, seeing as you're throwing away disk space (as far as VMware is concerned) on a datastore that uses nothing but thick provisioned VMs.

Unless I'm dumb :confused:

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Rhymenoserous posted:

They expressly told me not to do thin provisioning to avoid confusing scenarios like this. Also bear in mind what you are seeing on the array is post dedupe/compression/magic space maker.

In his scenario he is doing thick provisioning and is confused. It is confusing in either scenario because what ESX reports as used and what Nimble reports as used will never match. But thick provides other benefits on Nimble.

Wicaeed posted:

That doesn't really make sense, them telling you not to thin provision, seeing as you're throwing away disk space (as far as VMware is concerned) on a datastore that uses nothing but thick provisioned VMs.

Unless I'm dumb :confused:

You're not throwing away any space, you're still thin provisioned on the storage layer where the blocks actually live. Thin or thick or eager zero thick all consume the same amount of space on the array, the only difference is how they appear to VMFS.

This is also why NFS is great for VMware, no issues translating thin provisioning from one file system layer to the next.
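(If you want the host-side view of this, a hedged sketch; the device identifier is invented and this assumes a VAAI-capable array.)

code:
  # Look at the "Thin Provisioning Status" field reported for the device
  esxcli storage core device list -d naa.600000000000000000000000000000aa

  # Check whether the Delete (UNMAP) primitive is supported
  esxcli storage core device vaai status get -d naa.600000000000000000000000000000aa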

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Moey posted:

I don't believe it will ever match. If you use space on that thin provisioned disk, then free it up, you will need to SvMotion it to reclaim those zeros?

Why does it matter though? In VMware I have an 8tb datastore with 800gb free (all thick provisioned). On the nimble side, I am only utilizing 2.25tb.

Edit: SvMotion will only do it if moving to a datastore with a different block size.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004155

Any tool that zeroes out unused blocks at the guest level will reclaim that space. Sdelete is one such tool on windows.
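(A quick sketch of that guest-side zeroing: SDelete is the Sysinternals tool mentioned, and the Linux variant is a generic write-zeros-then-delete approach for guests that can't issue UNMAP. Drive letter and file path are invented.)

code:
  # Windows guest: zero out free space on C: so the array can dedupe/reclaim it
  sdelete.exe -z C:

  # Linux guest: fill free space with zeros, then delete the file
  # (dd stopping with "no space left on device" is expected)
  dd if=/dev/zero of=/zerofill bs=1M
  rm -f /zerofill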

Wicaeed
Feb 8, 2005

NippleFloss posted:

In his scenario he is doing thick provisioning and is confused. It is confusing in either scenario because what ESX reports as used and what Nimble reports as used will never match. But thick provides other benefits on Nimble.


You're not throwing away any space, you're still thin provisioned on the storage layer where the blocks actually live. Thin or thick or eager zero thick all consume the same amount of space on the array, the only difference is how they appear to VMFS.

This is also why NFS is great for VMware, no issues translating thin provisioning from one file system layer to the next.

I know I'm not throwing away any space, and Nimble knows that too; I'm just curious why VMware doesn't. I mean, I know it's block level storage, so whatever the storage is doing underneath doesn't really matter, but it'd still be nice to know! More importantly, VMware alerting doesn't tell you this either.

I'm surprised that Nimble or VMware hasn't released any tools to reconcile the difference between looking at storage from a VMware VMFS level vs a Nimble OS level, or any other vendor for that matter. Even with the vCenter Nimble integration it doesn't tell you this.

I've used a FreeNAS appliance I built from scratch before to store some VMs, so I know about the NFS/iSCSI difference from the VMware storage level, and having that information there is REALLY nice.

Wicaeed fucked around with this message at 00:00 on Sep 2, 2015

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Wicaeed posted:

I know I'm not throwing away any space, and Nimble knows that too; I'm just curious why VMware doesn't. I mean, I know it's block level storage, so whatever the storage is doing underneath doesn't really matter, but it'd still be nice to know! More importantly, VMware alerting doesn't tell you this either.

I'm surprised that Nimble or VMware hasn't released any tools to reconcile the difference between looking at storage from a VMware VMFS level vs a Nimble OS level, or any other vendor for that matter. Even with the vCenter Nimble integration it doesn't tell you this.

I've used a FreeNAS appliance I built from scratch before to store some VMs, so I know about the NFS/iSCSI difference from the VMware storage level, and having that information there is REALLY nice.

VMware is reporting logical blocks allocated by the VMFS filesystem and the storage is reporting physical blocks allocated to the underlying disk pool. They are tracking two different things and thus can't be reconciled. VMFS doesn't know or care what the storage does; all it cares about is that the storage exposes a standard SCSI interface that it can use to read and write blocks. This is a good thing because it means that VMFS can ride on and understand anything that speaks bog standard SCSI. The storage doesn't care what VMFS does because logical blocks don't consume storage directly. It's going to compress and de-duplicate them and then report on what you're actually using, because that's the number you need in order to know if you're running out of storage. This behavior also exists at the guest OS level. The guest might think it has a 50GB VMDK but on the datastore that VMDK is consuming only a few GB of space because it is thin provisioned. Heck, it happens with CPU resources too. Guests don't report that they have two 1/16th-of-a-core CPUs, they report 2 CPUs, because the resource sharing is invisible to them. That's just the nature of virtualization: one layer needs to be kept generally ignorant of what the ones below it are doing so that things continue to function properly using standard interfaces.

What would you even want to see? VMware reporting that your 100GB logical datastore is really only 50GB in size because that's what it's consuming on the storage? So if you wanted to thick provision 100GB of VMDKs into that datastore, should it fail? The vendor plugins provide a way to map datastore usage to actual back end storage usage, so it sounds like you want something beyond that.

NFS is different because the filesystem is controlled by the array, so the array can report directly on what's used. VMware just takes whatever the array is reporting as the usage of the NFS datastore as the correct answer, so there's no reconciling to be done.

Kaddish
Feb 7, 2002
If you really want to see actual storage usage from one interface Nimble probably has a plugin for Vcenter. Pure does.

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
Nimble does. It's not a great plugin considering we still can't add an ssl certificate to the array to get rid of warnings when you use it.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Nitr0 posted:

Nimble does. It's not a great plugin considering we still can't add an ssl certificate to the array to get rid of warnings when you use it.

However it takes what was a 20 minute job on netapp (create volume on both sides, map to hosts, replicate to partner) and turns it into a 30 second job.

Nitr0
Aug 17, 2005

IT'S FREE REAL ESTATE
Agreed

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

However it takes what was a 20 minute job on netapp (create volume on both sides, map to hosts, replicate to partner) and turns it into a 30 second job.

Cmon man, creating a datastore in VSC and then mirroring it with system manager is not a twenty minute job.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

Cmon man, creating a datastore in VSC and then mirroring it with system manager is not a twenty minute job.

Hyperbole my friend, but it was a lot more effort.

Rhymenoserous
May 23, 2008

NippleFloss posted:

Cmon man, creating a datastore in VSC and then mirroring it with system manager is not a twenty minute job.

If it was EMC it would be a 20 minute job.

some kinda jackal
Feb 25, 2003

 
 
Does anyone here run an old pre-Oracle Sun x4540 Storage array? Trying to get a good idea of average power draw.

Just inherited a pair of x4540s along with SAS expansion chassis (minus all drives, which I've ordered shredded). Likely going to pop this into the dev env here at work and back my VMware host, but I'm trying to get an indication of what kind of power draw the base machine might use. I will only fill the first 10 or 12 slots since I'll be popping in higher capacity drives, so I won't be running a fully decked out system.

This isn't critical info since I won't be popping any breakers any time soon, but I was hoping to get a guess as to base power draw. I can calculate the additional disks by themselves :)

Gather that the underlying system is a dual Opteron. It's got 64 gigs of RAM and I threw in a FC HBA to go with the SAS expander card.

Still debating Solaris vs FreeNAS vs Linux with iscsitarget or whatever but I'm not terribly worried about getting it going.

some kinda jackal fucked around with this message at 20:05 on Sep 4, 2015

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

We have (I think) a Dell MD3200 that only has the external SAS connectors, and only 2 of those. We have 2 servers running VMware (just the lowest paid version). Basic Linux and Windows file servers, no big databases or anything. I think we have like 1.5TB worth of stuff.

It's coming up on 3 years old, what are my options? Ideally I would like an all-SSD based solution, is that possible for < $15,000? Is there a good solution that makes good use of SSD-caching?

I'd like to go to something iSCSI-based (there are no network ports in our current PowerVault), but are there really any advantages or disadvantages to doing that? I don't really see adding any more hosts in the near future. We're hopefully getting rid of about half the VM's we run by moving to a cloud-based ERP.


bigmandan
Sep 11, 2001

lol internet
College Slice

Bob Morales posted:

We have (I think) a Dell MD3200 that only has the external SAS connectors, and only 2 of those. We have 2 servers running VMware (just the lowest paid version). Basic Linux and Windows file servers, no big databases or anything. I think we have like 1.5TB worth of stuff.

It's coming up on 3 years old, what are my options? Ideally I would like an all-SSD based solution, is that possible for < $15,000? Is there a good solution that makes good use of SSD-caching?

I'd like to go to something iSCSI-based (there are no network ports in our current PowerVault), but are there really any advantages or disadvantages to doing that? I don't really see adding any more hosts in the near future. We're hopefully getting rid of about half the VM's we run by moving to a cloud-based ERP.

What's the main driver for upgrading? What that is will dictate which solution will work for you. Do you need more performance, capacity, features, etc.?

  • Reply