Fruit Smoothies
Mar 28, 2004

The bat with a ZING
I am having to set up a new configuration for a client that's never used virtualisation before. My plan is two physical servers running a failover cluster of VMs, with storage over an iSCSI connection. I have a couple of questions relating to the storage.

Firstly, I am used to dealing with a QNAP NAS for iSCSI, which is fine, but this client can't have a single point of failure, and I know they wouldn't like their QNAP failing and putting them out of action. Is there such a thing as real-time block-level iSCSI replication? Or am I approaching this the wrong way? Their current setup uses DFS and it's frequently annoying for all concerned.
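For what it's worth, what I'm imagining is something like this toy sketch: a loop that diffs a source LUN against a replica in fixed-size blocks and copies over anything that changed. Purely to illustrate the idea - the device paths are made up, and a real product (DRBD on Linux, or a vendor's LUN mirroring) intercepts writes synchronously in the kernel rather than diffing after the fact, so this would never be crash-consistent:

# Toy one-way block replicator, purely to illustrate the concept.
# /dev/sdb and /dev/sdc are hypothetical stand-ins for the two LUNs.
# Real replication hooks writes in the kernel; this after-the-fact
# diffing is NOT crash-consistent and not fit for production.
import hashlib
import time

BLOCK_SIZE = 1 << 20  # compare/copy in 1 MiB blocks
SRC = "/dev/sdb"
DST = "/dev/sdc"

def replicate_pass(src_path, dst_path, seen):
    """Copy every block whose hash changed since the previous pass."""
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        index = 0
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).digest()
            if seen.get(index) != digest:
                dst.seek(index * BLOCK_SIZE)
                dst.write(block)
                seen[index] = digest
            index += 1

seen_hashes = {}
while True:
    replicate_pass(SRC, DST, seen_hashes)
    time.sleep(5)  # "near-real-time" at best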

Secondly - and this may be a question for the VM thread - I would assume the NAS / SAN setup would have a few iSCSI targets: one for the VMs, and one for the storage of the client's data. My only headache with this is that a file access means

Client -> Server -> VM target -> Server -> File store target -> Server -> Client

Which seems a little long-winded. Or would there simply be one big target - the single VM storage cluster target - containing big VHD files with the client data inside them?
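To put rough numbers on the "long-winded" part: every synchronous file operation pays the round-trip of each hop in that chain. The per-hop latencies below are made up, but the arithmetic is the point:

# Made-up per-hop round-trip times; the arithmetic is the point.
hops_ms = {
    "client -> file server VM (SMB)": 0.5,
    "file server VM -> iSCSI data target": 0.5,
}
total = sum(hops_ms.values())
for hop, ms in hops_ms.items():
    print(f"{hop:40} {ms} ms")
print(f"per synchronous I/O: {total} ms "
      f"(~{1000 / total:.0f} IOPS ceiling per outstanding request)")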

Thanks!


Fruit Smoothies
Mar 28, 2004

The bat with a ZING

Vulture Culture posted:

What you're describing is LUN mirroring. It's a very common feature in enterprise-grade storage.

But if you're trying to do a high-availability configuration with a consumer-grade NAS, you're going about it the wrong way and you're never going to get a reliably working system out of it. On the VMware side you have some of their software-defined storage offerings that you can look at, but I don't believe Microsoft has anything similar in the Hyper-V ecosystem. Unless you're hosting things that absolutely need to be on-premises, please consider Azure or AWS with a VPN to a server in the cloud, and let them handle the storage availability. They can do it a lot cheaper than you can.

I'm not sure what you're asking. Are you asking about whether you should store the data in an iSCSI export mounted through the guest, or in a VHD directly attached to the guest? The answer is yes.

From all I've seen and tested with Hyper-V, failover clustering seems to be what I want, and as long as the LUN mirroring is handled by the device and not the servers, I don't imagine needing to investigate VMware.

Cloud seems a reasonable suggestion, but they'd need a leased line put in, and it seems silly to recommend that expenditure when it would exist solely for that purpose and storage is cheap to buy.

Regarding the storage of the client data, you're right to assume I was asking whether it should be in the VHD or separated out. Your answer of "yes" seemed to imply that both were feasible.

Fruit Smoothies
Mar 28, 2004

The bat with a ZING

Vulture Culture posted:

The LUN mirroring isn't handled by consumer-level devices, that's my point; HA on entry-level NAS devices is a shitshow in the best of cases. QNAP isn't going to give you what you want in any way that's sane to manage. Storage isn't cheap to buy at all if you actually need it to work quickly and reliably. Expect a decent storage system to run you easily more than your two servers, unless your servers have a terabyte of RAM each. Providing reliable storage at a minimal marginal cost is something that the cloud providers have worked out. You've got to explain this to your client or manage their expectations re: single points of failure on the budget they have to work with. They probably don't need actual HA as long as their data is safe and you have contingencies in place that will allow their business to continue working if the storage shits itself.

I'm not sure why you would need a leased line unless they're on rural dial-up and have no broadband options. Any decent business broadband router will support site-to-site VPN connections.

Either one is going to involve network round-trips and perform similarly. Use whatever's easier for you to manage.

Neither performance nor storage volume is the issue. Their business runs on an old application powered by CSV files. It's incredibly sensitive, to the point that it won't work over WiFi. However, after extensive testing with the QNAP and clustered VMs, the data doesn't corrupt during failover scenarios.
(It's so sensitive that the remote sites have to use a terminal server to access the application locally.)
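The "extensive testing" was basically a harness along these lines: hammer a CSV on the share while failing things over, then re-parse the whole file and scream if any row came out short. The UNC path and four-column layout here are placeholders, not the real app's:

# Append rows to a CSV on the share while failing the cluster over,
# then re-parse the whole file; any short row means a torn write.
# The path and row layout are placeholders, not the real app.
import csv
import random
import time

TARGET = r"\\fileserver\appdata\stress_test.csv"
ROWS_PER_PASS = 200

def write_pass(pass_no):
    with open(TARGET, "a", newline="") as f:
        writer = csv.writer(f)
        for i in range(ROWS_PER_PASS):
            writer.writerow([pass_no, i, random.randint(0, 9999), time.time()])

def verify():
    with open(TARGET, newline="") as f:
        for line_no, row in enumerate(csv.reader(f), start=1):
            if len(row) != 4:
                raise ValueError(f"corrupt row at line {line_no}: {row!r}")

for n in range(1000):  # fail the cluster over mid-run
    write_pass(n)
    verify()
    print(f"pass {n} ok")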

They're working on developing a new product, but it's going to take time. In the meantime, I am lumbered with ensuring as close to 100% uptime as possible. DFS never worked with these files, and there's about 40GB of data and 10GB of archive.

If you can come up with a better idea for me to test, I am happy to accept suggestions.

Fruit Smoothies
Mar 28, 2004

The bat with a ZING

Vulture Culture posted:

If it works fine over terminal services, it might be worthwhile to investigate setting up a terminal server in the cloud to handle it. I'm not sure how many user licenses you're talking about, but I can't imagine mass concurrency in an application driven by CSVs.

Everyone wants 100% uptime, and everyone will tell you they need 100% uptime. That uptime has costs associated. If they cannot pay for enterprise storage, then they need to either adjust their expectations or consider alternatives. You can't engineer out a single point of failure just by buying several of something.

My plan B is to buy two identical QNAP devices and test simply swapping the drives across in case of a unit failure.
From my testing, I get good enough performance from the QNAP, but will it have problems scaling in the real world?

Fruit Smoothies
Mar 28, 2004

The bat with a ZING

Thanks Ants posted:

Just deploy the app on Azure RemoteApp.

Qnap and Synology stuff is great in your house or in a lab or as a backup target. It's only a matter of time until it bites you in the arse if you're putting VMs on top of it.

The application has MAPI connectors. I considered RDP for everyone before, but the Exchange server is on site, and no one wanted to keep swapping between desktop and RDP.

Fruit Smoothies
Mar 28, 2004

The bat with a ZING
I am going to get serious judgement here, but bear in mind that I am only experimenting with stuff right now. I am new to SANs.

As mentioned in some of my earlier posts, we're looking for a way to cluster 1-3 VMs. I have an SMB share using Storage Spaces. The server has 32GB RAM and a decent Xeon processor. I have done some benchmarks using CrystalDiskMark on various VHDX files, and the random read / write results look network-limited. My client is happy for me to spend a bit of money to experiment if it saves buying a 'proper' SAN. At the moment, the network is only gigabit.
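For anyone curious what "network-limited" looks like, this is roughly how I'd sanity-check the CrystalDiskMark numbers by hand: small random reads against a file on the share (the path is a placeholder). Gigabit tops out around 110-115 MiB/s on sequential transfers, so if sequential sits near that wall while 4K random is far below it, latency rather than bandwidth is the limit:

# 4 KiB random reads against a file on the share; path is a placeholder.
# Use a test file much bigger than RAM, or the OS cache will flatter
# the numbers.
import os
import random
import time

PATH = r"\\pseudo-san\bench\testfile.bin"
IO_SIZE = 4096
COUNT = 5000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY | getattr(os, "O_BINARY", 0))
try:
    start = time.perf_counter()
    for _ in range(COUNT):
        os.lseek(fd, random.randrange(0, size - IO_SIZE), os.SEEK_SET)
        os.read(fd, IO_SIZE)
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)

iops = COUNT / elapsed
print(f"{iops:,.0f} IOPS, {iops * IO_SIZE / 2**20:.1f} MiB/s")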

I am not a fan of doing things on the cheap, and my business would certainly profit more from selling them a SAN, but I can't help being interested in purchasing a few SFP+ 10G network cards, installing them into the existing VM servers and the pseudo-SAN, and seeing what kind of figures I get that way. If nothing else, it is a rare opportunity to get a budget to pretty much fuck around with things and see how they play out.
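Before trusting any storage benchmarks over the new links, I'd verify the raw host-to-host throughput first. iperf is the proper tool for that; this is a minimal sketch of the same idea (port and transfer size are arbitrary, and single-threaded Python may not fully saturate 10Gbit, but it will clearly separate a 1G path from a 10G one). Run it with no argument on one box to listen, and with the listener's hostname on the other to send:

# Minimal iperf-style throughput test between two hosts.
# Port and transfer size are arbitrary choices.
import socket
import sys
import time

PORT = 5201
CHUNK = 1 << 20          # 1 MiB sends
TOTAL = 10 * (1 << 30)   # push 10 GiB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.perf_counter() - start
            print(f"{received / 2**30:.1f} GiB in {secs:.1f} s = "
                  f"{received * 8 / secs / 1e9:.2f} Gbit/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)

if len(sys.argv) > 1:
    client(sys.argv[1])
else:
    server()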

But is this a massive waste of money? Is there anything else I should be buying? I don't really understand what a SAN does differently from a server with a decent processor, RAM and NIC in it. Obviously the Storage Spaces could be replaced with a RAID card if needed.

As for the requirements, the school has one database, and I was thinking of storing its data on a separate area of SSD storage, as the database is ~2GB and unlikely to grow too much. It's not like the staff or students need lightning-fast access to their data - mostly just reliable access.

Fruit Smoothies
Mar 28, 2004

The bat with a ZING

mayodreams posted:

So by '1-3 VMs' do you mean virtual machines or virtual hosts? If you only need a handful of VMs, a SAN is complete overkill. Assuming you are using Hyper-V due to the mention of VHDX files, just load that server up with some fast (10-15k RPM) SAS drives and a RAID controller for a RAID10 setup and be done. Or if the VMs aren't that big, 4 enterprise-class SSDs in RAID10.

Otherwise you are trying to cluster 3 hosts and that is a different story. What does the whole environment look like in terms of CPU and Memory utilization?

There are ultimately going to be 3 virtual servers, with two physical hosts in a cluster. Their usage is currently very low, as they pretty much just do AD, DNS and SMB. There is an Exchange server, but I am reluctant to virtualise that at this stage because I don't think they're going to have it for long, and it's pretty beefy.

If this seems reasonable, I don't see getting a RAID controller being a problem. Do you think I need a 10G network? Or would a dedicated, separate gigabit storage network do it?

Fruit Smoothies
Mar 28, 2004

The bat with a ZING

NippleFloss posted:

Sounds like your current Storage Spaces server is a single box, so no high availability in case of failure? Host RAID cards also degrade performance when doing RAID 5/6. An enterprise NAS/SAN will address those issues and also provide useful functionality on top of better reliability and performance.

Yeah, there is a single point of failure with the Storage Spaces box, as AFAIK there's no real network RAID for Windows.
You may then ask why I am even bothering with clustering. Mostly because another basic server isn't expensive, and it at least protects against hardware failure to some degree.

I am not sure what performance, reliability or functionality my setup would gain from a SAN, but I'm happy to listen. I could even replace the data hard drives with the new Samsung SM863 SSDs, and the cost would still be lower than a SAN.

To what degree does a SAN have higher reliability than any other server?

Fruit Smoothies
Mar 28, 2004

The bat with a ZING

Gwaihir posted:

You're generally paying to eliminate single points of failure across the board: things like dual controllers and redundant data connection paths, two switches, two+ hosts in a cluster, etc.

A real SAN is also probably going to be better at abstracting the storage volumes from the physical disks, so it will be more flexible for spacing/sizing/expansion in the future than a typical plain old RAID array will be.

It's not necessarily that the actual hardware is that much more reliable, it's just that you get a specialized OS with storage specific features, lots of redundancy, and typically very good support in case bad things happen.

Thanks for this. I guess my main concern is reducing points of failure in items that aren't readily replaced. There's an IT distributor a 45-minute drive away, and it would be trivial for me to quickly buy a replacement PSU, switch, network card, etc. However, it would be less easy to buy an entire server. The joy of Storage Spaces is that I can stick the drives in any decent Windows PC (even off the shelf if need be) and it'll run OK until a longer-term solution is found.

I guess I am less worried about uptime than I am about long periods of downtime. I'll check out Gluster though, for sure.


Fruit Smoothies
Mar 28, 2004

The bat with a ZING
EDIT: Probably wrong thread

