evol262
Nov 30, 2010
#!/usr/bin/perl

Hi Jinx posted:

SLI: I have the two GTX 1080s already. Not sure I can get a single more powerful card. :p The reason is.. why not? I have a 4K monitor so wanting to run games in 4K is pretty understandable. One 1080 does a decent job with most modern titles, two should be fine for steady 60 fps.
Let me be honest: this is dumb and an absurd waste of money. If you don't care about money, just buy a Nimble array and call it a day. If you do care, sell one of those cards, buy a cheap base system, and put together a storage system.

Hi Jinx posted:

ZFS: Self-healing and dedupe. I want dedupe for VM backups / snapshots, and I have enough ECC RAM to do it.
You have ECC in your gaymes system? Why? See above.

Dedupe doesn't actually do anything for snapshots, which are differencing 99% of the time anyway. And what's the point of backups on the same system the VMs are running on?

This still isn't a real use case. Do you have any idea what you actually want and how it's supposed to work, or are you just throwing wads of money at concepts you've heard about?

Hi Jinx posted:

I realize I could do this in two machines, but it'll end up costing more, take up more space, consume more power, produce more heat & noise, etc. I could buy/build a cheap NAS box but it won't give me the speed or the reliability I want, and the rig certainly has the power to spare to also do storage, which is a pretty menial task in a single-user environment.
So you think you get speed and reliability out of a VM with passthrough disks? Right...

"Rig" :commissar:

You will not get reliability out of a single system. An Atom box with 16GB of memory is dirt cheap, silent, and draws almost no power, but it makes a perfectly suitable storage server that actually helps: it can hold backups and even host VMs! Then you can do your gaming stuff on whatever. The whole Atom system costs about as much as a 1080.

Hi Jinx posted:

Evol262 asked about use case: aside from gaming (which is really not the main point) it's meant to let me do software development on multiple platforms, which also involves running 4-5 VMs at a time; mostly for testing & debugging.

This helps. What are you testing and debugging? Or developing? How do you do it now? How do you expect virtualization to help you?

Potato Salad
Oct 23, 2014

nobody cares


Dedupe does very little for SOHO from a cost-benefit perspective.

Potato Salad
Oct 23, 2014

nobody cares


I thought just by subject matter that I was in the enterprise storage thread. Apologies.

Dude with the money to burn: your system plan is unbalanced. Sell the second 1080, get a 950 PRO or other NVMe storage device and a turnkey SSD-backed SOHO NAS and enjoy the fastest storage you can buy without going always-on ramdisk.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dedupe on ZFS requires copious amounts of RAM to function properly. Once your dedupe table overflows onto spinning disk, performance will drop significantly. You can mitigate this to an extent with SSD L2ARC, but you may still have performance problems as garbage collection and wear take a toll on the write speeds of the SSD. And it's still much slower than RAM at its best.
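If you want to sanity-check that before flipping dedupe on, ZFS will simulate it against an existing pool. A rough sketch -- the pool name is a placeholder and the ~320 bytes per DDT entry is the commonly quoted ballpark, not a hard number:

code:
# Simulate dedupe on an existing pool: prints a DDT histogram and the
# projected dedupe ratio without changing anything on disk
zdb -S tank

# If dedupe is already enabled, show in-core/on-disk DDT stats
zpool status -D tank

# Ballpark RAM math: unique blocks x ~320 bytes per DDT entry.
# 4TB of data at an average 64K block size:
#   4TB / 64K  = ~67 million unique blocks
#   67M x 320B = ~20GB of RAM just for the dedupe table
If the simulated ratio comes back under 2x or so, that RAM is almost always better spent on more ARC or another disk.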

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I can understand the use case for a storage VM if you only have one computer that'd be served by a NAS. Consolidating things partly makes sense, since the NAS doesn't need to run when the desktop's off. I had a test setup running FreeBSD 10 in a Hyper-V VM a long while ago. It worked well enough, but it invited more drama than necessary on BSODs and power outages. Plus I had the advantage of rolling with native FreeBSD, which let me dick around with the paravirt drivers for Hyper-V, which aren't in FreeNAS yet.

An alternative bullshit setup would be running Linux with ZFS on bare metal and running Windows on top in KVM. This'd allow you to snapshot your Windows installation and have large, fast storage by means of a ZFS pool made of spinning disks and an L2ARC carved out of part of your SSD. And a lot of RAM for the ZFS ARC to make IO even faster. Of course, this also involves a lot of drama, since tweaking KVM to run your poo poo nicely takes quite some tinkering (been there, done that). And you'll need a Xeon E5 or any -E CPU if this is supposed to become a permanent solution, because those have a bunch of features the desktop parts don't that are more or less required to run things smoothly.
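The storage side of that is only a handful of commands, for what it's worth. A minimal sketch -- device names, sizes, and the pool name are placeholders, so treat it as an outline rather than a recipe:

code:
# Pool of spinning disks in mirrored pairs, plus part of an SSD as L2ARC
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zpool add tank cache /dev/nvme0n1p3

# Carve out a zvol to hand to the Windows guest as its disk, then
# snapshot the whole Windows install whenever you feel like it
zfs create -V 250G -o volblocksize=64K tank/win10
zfs snapshot tank/win10@before-driver-update
The drama is all on the KVM side, not here.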

tl;dr: Get a NAS.

Kachunkachunk
Jun 6, 2011
No SLI with that either. +1 to getting a NAS, but none of them will run ZFS, as far as I know. Netgear I think has officially moved to BTRFS by default for their consumer stuff (and I like this more anyway).

Anyway, maybe he can consider a Mini ITX system to do the storage serving and still shove everything into one case. There's a very small handful of dual-motherboard cases out there: http://www.pcper.com/news/Cases-and-Cooling/Want-Build-Two-Systems-One-Case-Then-You-Need-Phanteks-Enthoo-Mini-XL

Essentially the gayming rig goes in the bottom, taking up most of the space; the drives go in the bays and feed up to the mini ITX system up top, which runs cheaper hardware for storage serving. A bunch of cash may have ultimately gone to waste on the ECC memory, but maybe some of it can go back into the ITX system. The next best bets are either a proper NAS or another system. And dedupe is probably not worth the hassle for a home setup, no matter how prosumer it is.

Personally, I went with a dedicated storage whitebox since NASes with 8+ bays and 10Gb are not comparable in cost. It's all running Ubuntu and serves out three BTRFS volumes via CIFS/SMB, NFS, and iSCSI (fileio) targets, from 18 drives (NL-SAS, SATA SSDs, and regular SATA disks). Different tiers and use cases/redundancy, basically.

The moment 2TB SSDs (or larger) become affordable enough to replace ~12TB of hot mechanical disks (double that for redundancy), I'm retiring all the spinning metal so I can consolidate this crap into a smaller, quieter, cooler footprint. I think my NL-SAS and existing 500GB drives will last long enough for that to be possible.

I also have a feeling that we'll hit some surge in drive density increases combined with significant cost drops. Like, 16TB drives within two or three years for ~$300 or less, each.

Kachunkachunk fucked around with this message at 13:19 on Jun 13, 2016

Potato Salad
Oct 23, 2014

nobody cares


There was a post here, but it was needlessly confrontational. What people do with their money is none of my business.

Potato Salad fucked around with this message at 13:34 on Jun 13, 2016

stevewm
May 10, 2005

Tab8715 posted:

I'd want to know how all the networking and bits come together but drat that's cool.

What were the other solutions you were looking at?

I was a bit disappointed with the networking aspect of it... Each node has 2x 1Gb (or 10Gb) NICs that all VM traffic goes to. You can VLAN-tag interfaces assigned to the VMs, but that is the extent of it. Of course, the market they are targeting is not likely to have a complicated networking setup anyway. And neither do we.



As for the other options we have been looking at:

1. Throwing up a few Hyper-V hosts with local storage and using Hyper-V Replica to replicate the 2 mission-critical VMs to the other hosts.
2. A Dell VRTX box with 3 blades, set up as a Hyper-V cluster.
3. A StarWind HCA appliance with 2 hosts (more expensive than the Dell VRTX once MS licensing is thrown in).

Unfortunately, the MS licensing ends up adding around $23k USD to the total cost. (80x User CALs, 80x RDS User CALs, SQL Server 4x core licenses with SA)

stevewm
May 10, 2005

Tab8715 posted:

That is pretty cool but what's the hypervisor?

KVM, with a custom interface and storage solution.

It basically stripes the storage across the local disks in the nodes.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Kachunkachunk posted:

No SLI with that either.
From what I know, it is actually possible (in KVM with VFIO).

Kachunkachunk posted:

The moment 2TB SSDs (or larger) become affordable enough to replace ~12TB of hot mechanical disks (double that for redundancy), I'm retiring all the spinning metal so I can consolidate this crap into a smaller, quieter, cooler footprint. I think my NL-SAS and existing 500GB drives will last long enough for that to be possible.
Same plan here. 10TB in disks, although all in RAID 10 (i.e. 5TB available). Not sure how I want to handle that with SSDs. Theoretically they should be more reliable than spinning disks, so hardware redundancy leans more towards optional.

Hi Jinx
Feb 12, 2016

evol262 posted:

Dedupe doesn't actually do anything for snapshots, which are differencing 99% of the time anyway. And what's the point of backups on the same system the VMs are running on?
This is getting into the weeds, but snapshots store every changed block blindly with no regard to content. And I do have two locations I sync backups between.

evol262 posted:

Do you have any idea what you actually want and how it's supposed to work, or are you just throwing wads of money at concepts you've heard about?
Mostly the latter. But as I wrote in the original question, my goal is fast ZFS storage in Windows.

evol262 posted:

So you think you get speed and reliability out of a VM with passthrough disks? Right...
Nothing wrong with reliability and passthrough disks, and performance is relative. In pretty much every other scenario I'd be bottlenecked with the 1Gbe network, which a single disk can saturate, passthrough or not. A local VM gives me 10Gbe.

evol262 posted:

This helps. What are you testing and debugging? Or developing? How do you do it now? How do you expect virtualization to help you?
Native and web apps on Windows and to a lesser degree Linux. C++ stuff. SQL Server, Cassandra, IIS, Nginx. But I've been doing this for a while, I still have the box for vmware workstation 1.0 somewhere. I don't really need help with the dev environment. I was looking for help with frankensteining ZFS into a Windows environment somehow, in a way that it performs like local storage.

I get that it's cheaper and simpler to go with a NAS box. If I wanted slow external storage, I'd do just that.

Combat Pretzel posted:

From what I know, it is actually possible (in KVM with VFIO).
I will look into this, thanks. From what I read (granted I did not look very deep) I got the notion that the limitation is the emulated chipset, which doesn't do SLI..

thebigcow
Jan 3, 2001

Bully!
Have you looked into Windows Server with Hyper-V and ReFS?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Kachunkachunk posted:

I also have a feeling that we'll hit some surge in drive density increases combined with significant cost drops. Like, 16TB drives within two or three years for ~$300 or less, each.

NetApp is starting to ship 16TB TLC SSDs this month, and 32TB drives are on the short-term roadmap. Price per GB is going to get there pretty quickly.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
When the guest customizations for VMware or whatever do offline Registry edits to change the hostname on a Windows VM, does anyone know what they're actually doing? I need to do something similar with bare-rear end VMs in my own non-VMware ghetto lab.

e: I've tried the ComputerName\ComputerName\ComputerName and Tcpip\Parameters\Hostname values to no avail
e2: Server 2016 if it matters
e3: looks like it generates a sysprep answer file and calls sysprep automatically. What's the right place in the lifecycle to do that?

Vulture Culture fucked around with this message at 18:18 on Jun 13, 2016

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Hi Jinx posted:

I get that it's cheaper and simpler to go with a NAS box. If I wanted slow external storage, I'd do just that.

Why do you assume external storage will be slow?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Hi Jinx posted:

I will look into this, thanks. From what I read (granted I did not look very deep) I got the notion that the limitation is the emulated chipset, which doesn't do SLI..
I haven't personally tried it, but I seem to remember quite a bit of talk about people having gotten it to work. No guarantees, though.

There's only so much I can help, anyway. Ended up firing up the VM so often and spending so much time in it, I ended up running Windows fulltime again a long while ago. If you are going to bother with KVM, forgo the libvirt stuff and run QEMU directly. It gives you better control over the quirky passthrough poo poo and lets you run some voodoo configurations. Also, people figured out how to have the primary graphics device relinquish control so you can pass it through, so there's no need to enable integrated graphics or run another cheap card to boot with. Of course, if QEMU fucks up and your script can't reattach things to the console, you need to reset.
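For reference, the bare QEMU invocation ends up looking roughly like the sketch below. The PCI addresses, vendor:device IDs, and paths are placeholders (check lspci -nn on your own box), and it assumes the vfio-pci module is already loaded:

code:
# Bind the GPU and its HDMI audio function to vfio-pci
echo 10de 13c2 > /sys/bus/pci/drivers/vfio-pci/new_id
echo 10de 0fbb > /sys/bus/pci/drivers/vfio-pci/new_id

# Run QEMU directly, no libvirt in the way
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -cpu host,kvm=off \
  -m 16G -smp 4 \
  -device vfio-pci,host=01:00.0,x-vga=on \
  -device vfio-pci,host=01:00.1 \
  -drive file=/dev/zvol/tank/win10,format=raw,if=virtio
The kvm=off bit hides the hypervisor signature, which is what keeps the NVIDIA driver from throwing Code 43 on consumer cards passed through like this.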

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
Does anyone have opinions on letting non-privileged users run Docker containers?

At work, the biggest need we have for Docker is with research groups' computation servers and the software they require. For example, a group may have an Ubuntu 14.04 box and need to install software that is designed for Debian 8, plus other software that is distributed as RPMs. With apt-pinning and alien it's probably possible to make that work, but I feel it will end up a mess over time. Running the different software inside containers tailored for them seems like a much cleaner solution.

But a big stumbling block is how to allow non-root users to do this without giving them root through Docker loopholes, even if the users are semi-trusted. One solution I have come up with is a shell script that users are allowed to run with sudo, which would start the containers with specific whitelisted options. But how do you write a script like that without leaving major holes in it, and would the users still be able to escape the containers with elevated privileges?
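For what it's worth, the kind of wrapper I have in mind looks something like the sketch below. The script name and image whitelist are made up, and I'm not claiming it's escape-proof -- that's exactly what I'm asking about:

code:
#!/bin/bash
# /usr/local/bin/research-run -- hypothetical sudo-only wrapper
set -euo pipefail

ALLOWED_IMAGES="debian:8 centos:7 ubuntu:14.04"
IMAGE="${1:?usage: research-run <image>}"

# Refuse anything not on the whitelist
case " $ALLOWED_IMAGES " in
  *" $IMAGE "*) ;;
  *) echo "image '$IMAGE' is not allowed" >&2; exit 1 ;;
esac

# Drop back to the invoking user's UID/GID inside the container,
# forbid privilege escalation, and only mount their own home
exec docker run --rm -it \
  --user "$(id -u "$SUDO_USER"):$(id -g "$SUDO_USER")" \
  --security-opt no-new-privileges \
  -v "/home/$SUDO_USER:/home/$SUDO_USER" \
  "$IMAGE" /bin/bash
The holes come right back if the whitelist ever grows -v or --privileged options, which is part of what worries me.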

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Saukkis posted:

Does anyone have opinions on letting non-privileged users run Docker containers?

At work, the biggest need we have for Docker is with research groups' computation servers and the software they require. For example, a group may have an Ubuntu 14.04 box and need to install software that is designed for Debian 8, plus other software that is distributed as RPMs. With apt-pinning and alien it's probably possible to make that work, but I feel it will end up a mess over time. Running the different software inside containers tailored for them seems like a much cleaner solution.

But a big stumbling block is how to allow non-root users to do this without giving them root through Docker loopholes, even if the users are semi-trusted. One solution I have come up with is a shell script that users are allowed to run with sudo, which would start the containers with specific whitelisted options. But how do you write a script like that without leaving major holes in it, and would the users still be able to escape the containers with elevated privileges?
Previously coming from research/academia myself: what's the material business risk in a research group getting root access to their own server?

Hi Jinx
Feb 12, 2016

thebigcow posted:

Have you looked into Windows Server with Hyper-V and ReFS?

I did. Aside from the fact that it requires 2- or 3-way mirroring and doesn't offer RAID 6 or raidz2-like parity, ReFS doesn't support dedupe, nor will it in Server 2016.

Hi Jinx
Feb 12, 2016

NippleFloss posted:

Why do you assume external storage will be slow?

1Gbe. Or 2Gbe if I futz around with teaming.

Hi Jinx
Feb 12, 2016

Combat Pretzel posted:

Ended up firing up the VM so often and spending so much time in it, I ended up running Windows fulltime again a long while ago.

I'm leaning towards running storage in a VM with Windows as the hypervisor; at least the GPUs will just work that way.

evol262
Nov 30, 2010
#!/usr/bin/perl

Hi Jinx posted:

1Gbe. Or 2Gbe if I futz around with teaming.

Or 1Gbe x number of links with multipath iSCSI. Or native speed with a storage server, but the question is kind of whether or not you need more than ~110MB/s (including the loss to protocol). You might. If you do, though, you may be better off with a dedicated virt host which also holds your storage, and keeping your other stuff separate.


Hi Jinx posted:

This is getting into the weeds, but snapshots store every changed block blindly with no regard to content. And I do have two locations I sync backups between.
Yes, they do. But how many blocks are changing? At what point do you think you'll actually start seeing any kind of gain from dedupe versus the memory it requires, on 4-5 VMs with differencing snapshots? Is it before or after the point where the money for the extra memory dedupe needs could have bought another disk instead?

Hi Jinx posted:

Mostly the latter. But as I wrote in the original question, my goal is fast ZFS storage in Windows.
I guess my question is "what do you think ZFS is going to offer you?" Not a feature list for ZFS per se. How does it help you?

Hi Jinx posted:

Nothing wrong with reliability and passthrough disks, and performance is relative. In pretty much every other scenario I'd be bottlenecked with the 1Gbe network, which a single disk can saturate, passthrough or not. A local VM gives me 10Gbe.
A local VM can give you more than 10GbE, depending on the virtualized adapter you pick.

The "reliability" issue is mostly that a single system which does it all misses a key point of reliability -- redundancy. If your motherboard/PSU/whatever takes a dive, it's all down. This is also a possibility with a separate storage system, but then you maybe have local working copies of the git tree or whatever, or you can connect to the VMs on some compute host from a laptop (or tablet, or phone) instead of losing your entire environment.

Hi Jinx posted:

Native and web apps on Windows and to a lesser degree Linux. C++ stuff. SQL Server, Cassandra, IIS, Nginx. But I've been doing this for a while, I still have the box for vmware workstation 1.0 somewhere. I don't really need help with the dev environment. I was looking for help with frankensteining ZFS into a Windows environment somehow, in a way that it performs like local storage.
I gathered that you don't need help with the dev environment. However, to ascertain what your use case and requirements are, knowing what the dev environment is going to be doing (how hard is that database being hit, how much traffic is going through Cassandra, are you doing load testing on websites or just hosting them to deploy to, etc.) helps a hell of a lot.

ZFS is great. It is. For a dedicated storage system. However, it doesn't seem to offer a lot in your use case.

If you really must have ZFS for whatever reason, feel free to go with passthrough disks (or passthrough with KVM and VFIO, then ride the NVIDIA driver update rollercoaster the next time they get better at detecting consumer cards passed through in a virtualized environment).

However, if your goal is:

  • Gaming PC cum workstation
  • Native speed storage
  • Any hypervisor you want
  • No external storage system or compute hosts
I don't see a reason not to just use Hyper-V in Win10 with your storage inside Windows instead of making a super complicated setup.

Passthrough is really useful in a couple of cases:
  • Steam streaming
  • Full-time Linux desktop but you want to run games here and there in one VM
  • CUDA

Disk passthrough is useful in a couple of cases (though passing through an entire controller is almost always better). Mostly:
  • I already have a compute host, and it has a bunch of disks for some reason (didn't go hyperconverged, it's a lab system, whatever)
  • You want to run a storage server on the same system as your compute server
  • Other things in your environment (other compute hosts, plex, sabnzbd, whatever) use that storage routinely
  • It's a dedicated storage server

The environment you've described is just a Matryoshka doll for no really good reason that I can see. And if you really want to do it, then proceed. But "are you sure this is really what you want? Why?" is a valid line of questioning. And it's not meant to be aggressive or anything.

Hi Jinx posted:

I will look into this, thanks. From what I read (granted I did not look very deep) I got the notion that the limitation is the emulated chipset, which doesn't do SLI..
It is a chipset limitation.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Vulture Culture posted:

Previously coming from research/academia myself: what's the material business risk in a research group getting root access to their own server?

Truth be told, it gives the server administrators indigestion when users aren't Doing It Right™. Since we would want to use a wrapper anyway so the users don't need to know all the weird parameters, the wrapper might as well also keep them from doing anything silly and maybe even neuter a convenient attack vector a bit.

But yes, we do operate largely on the assumption that no one would bother to come after these groups with a targeted attack. It's more a question of whether this would provide any actual benefit, or whether it's just security theater.

Saukkis fucked around with this message at 19:56 on Jun 13, 2016

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Hi Jinx posted:

1Gbe. Or 2Gbe if I futz around with teaming.

iSCSI MPIO is good and easy. I'm also not convinced that you actually need the performance you think you need for a home lab/development environment.
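On a Linux initiator, at least, it really is only a handful of commands -- interface names and the portal IP below are placeholders (Windows gets the same thing through the MPIO feature and the iSCSI initiator control panel):

code:
# One iSCSI interface per NIC so each gets its own session
iscsiadm -m iface -I ifc-eth0 --op new
iscsiadm -m iface -I ifc-eth0 --op update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I ifc-eth1 --op new
iscsiadm -m iface -I ifc-eth1 --op update -n iface.net_ifacename -v eth1

# Discover and log in through both interfaces, then let dm-multipath
# aggregate the paths to the same LUN
iscsiadm -m discovery -t sendtargets -p 192.168.1.50 -I ifc-eth0 -I ifc-eth1
iscsiadm -m node --login
multipath -ll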

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

iSCSI MPIO is good and easy. I'm also not convinced that you actually need the performance you think you need for a home lab/development environment.
Our ZFS appliance averages just under 1GbE at peak time running a 500-seat VDI deployment. Since it's an average, we do exceed that speed at times, but not sustained. 1GbE is enough, I'm sure.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Saukkis posted:

Truth be told, it gives the server administrators indigestion when users aren't Doing It Right™. Since we would want to use a wrapper anyway so the users don't need to know all the weird parameters, the wrapper might as well also keep them from doing anything silly and maybe even neuter a convenient attack vector a bit.

But yes, we do operate largely on the assumption that no one would bother to come after these groups with a targeted attack. It's more a question of whether this would provide any actual benefit, or whether it's just security theater.
It's theater. Invest the engineering time into better auditing so you know when you have to yell at someone.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I got asked to write some Powershell scripts to be used by vRealize Orchestrator.

Anyone know how I can get up to speed on that? I know Powershell but I'm not sure how to handle the inputs and outputs.

Also, is there any good way to securely handle credentials?

ozmunkeh
Feb 28, 2008

hey guys what is happening in this thread
I have some R630 machines with 8-core Xeon E5-2630s and need another, but I can't get them anymore in that configuration. I've been offered a 10-core Xeon E5-2650 instead. Is VMware going to care about that for things like vMotion? I thought they all had to be identical; is that close enough?

The specific Dell line items are:
Intel Xeon E5-2630 v3 2.4GHz,20M Cache,8.00GT/s QPI,Turbo,HT,8C/16T (85W) Max Mem 1866MHz (338-BFFU)
Intel Xeon E5-2650 v3 2.3GHz,25M Cache,9.60GT/s QPI,Turbo,HT,10C/20T (105W) Max Mem 2133MHz (338-BFFF)

Thanks Ants
May 21, 2004

#essereFerrari


vMotion cares about processor features like SSE and other newer extensions; core count isn't a problem.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Thanks Ants posted:

vMotion cares about processor features like SSE and other newer extensions; core count isn't a problem.
You also can't vMotion between AMD and Intel CPUs regardless of the feature set you're targeting, but that also isn't an issue here, of course.

ozmunkeh
Feb 28, 2008

hey guys what is happening in this thread
That's good to know, thanks. I've only ever used identical hardware before so this hasn't ever come up.

Potato Salad
Oct 23, 2014

nobody cares


Look up "VMware EVC Mode." You can tell VMware what generation of CPU features to present to guests of a virtual datacenter, allowing you to mix them to the oldest common cpu generation.

evil_bunnY
Apr 2, 2003

Yeah EVC is what you want.

some kinda jackal
Feb 25, 2003

 
 
Man, every now and then I just have weird things happening to my VCSA. Couldn't authenticate this morning; spent some time digging through logs, and it looked to be some kind of SSO error. I'm using the internal SSO provider and I should have filed a bug report, but to be completely honest I just wanted the thing up and running, and my setup isn't complex in the least, so I just reinstalled VCSA. Seems to be a quarterly thing for me lately.

If I could template with base ESXi then I might ditch vSphere altogether. It's one of the reasons I was thinking of switching to oVirt for a while, but I never got off my rear end to do it.

Internet Explorer
Jun 1, 2005





Do you use vShield or VDP? We ran into a fun bug with vShield on 5.5U3 where vCenter would just stop responding to SSO requests. I forget what the patch number was that fixed it but if that matches your environment let me know and I can find it.

I think we saw a bunch of failed logins at the time on the vCenter server.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Is anyone here using NFSv4.1? Our NetApp array says they support it along with the latest version of the VAAI plugin for the hardware acceleration features. I'm playing around with it and things seem to mount, but Storage IO Control still isn't supported so I'll probably hold off on it. I feed it the IPs of both my SVM targets and it takes them without question but after the mount there is no indication that MPIO is functioning properly and the device backing view only shows the first target IP I entered. It feels pretty half-baked at this point.
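For anyone following along, the 4.1 mount itself is done per host with esxcli, and it takes the comma-separated list of server addresses without complaint. Addresses and names below are placeholders, and the exact option names are from memory, so check esxcli storage nfs41 add --help before trusting them:

code:
# ESXi 6.x: mount an NFS 4.1 datastore against both SVM LIFs at once
esxcli storage nfs41 add --hosts=10.0.0.11,10.0.0.12 --share=/vol/vmware_ds1 --volume-name=netapp_nfs41_ds1

# List v4.1 mounts and the host list each one was given
esxcli storage nfs41 list
Which is what makes the complete lack of any visible multipathing afterwards so unsatisfying.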

some kinda jackal
Feb 25, 2003

 
 

Internet Explorer posted:

Do you use vShield or VDP? We ran into a fun bug with vShield on 5.5U3 where vCenter would just stop responding to SSO requests. I forget what the patch number was that fixed it but if that matches your environment let me know and I can find it.

I think we saw a bunch of failed logins at the time on the vCenter server.

Neither here. The more I think about it the more I wish I hadn't just vim-cmd vmsvc/destroy'd the original VCSA so I could go back and look at the logs. Getting rash and impulsive in my old age.

stevewm
May 10, 2005
We have been looking for a VM platform to host our POS/ERP application and consolidate some other servers for some time...

Scale Computing quoted us a 3-node hyperconverged HC3 cluster at $27,700 USD. They use Dell hardware for the nodes, each with an 8-core Xeon E5-2620 v4, 32GB RAM, and 4TB of NL-SAS drives. There would be 6TB of total usable space (way more than we need), running their customized hypervisor and storage system (based around KVM).

Of course this does not include the requisite MS licensing..

While I am not a fan of the proprietary hypervisor/software setup, the product is very compelling and meets our needs pretty much perfectly.

Am I stupid for considering it?

We have been looking into other options, including going the shared storage route and a Dell VRTX. The cheapest setup with multiple hosts and proper shared storage was around $10k more expensive.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

BangersInMyKnickers posted:

Is anyone here using NFSv4.1? Our NetApp array says they support it along with the latest version of the VAAI plugin for the hardware acceleration features. I'm playing around with it and things seem to mount, but Storage IO Control still isn't supported so I'll probably hold off on it. I feed it the IPs of both my SVM targets and it takes them without question but after the mount there is no indication that MPIO is functioning properly and the device backing view only shows the first target IP I entered. It feels pretty half-baked at this point.

VMware does not support pNFS and ONTAP 8.3 does not support NFS session trunking, so you don't actually have any multipathing. Don't do 4.1, it doesn't buy you anything.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

NippleFloss posted:

VMware does not support pNFS and ONTAP 8.3 does not support NFS session trunking, so you don't actually have any multipathing. Don't do 4.1, it doesn't buy you anything.

LACP it is. Thanks.
