Zero VGS
Aug 16, 2002
I was posting before about wanting to use two identical Proliants as an HA and/or FT VM server setup.

So far I've got the servers with 12 empty 3.5" drive bays in front and 2 in back. I got some adapter brackets for the rear bays so I can slot 2.5" SSDs in there.

Someone here said I should tie them together with a single NAS for shared storage, but that kinda creeps me out because of the single point of failure.

Would it make any sense to:

1) Toss two SSDs in the back bays, maybe put them in a RAID 1, and use that for the VMs? Modern SSDs don't get messed up in RAID like the old models did, correct?

2) Toss 12 platters in the front bays, put them in a RAID 1+0, and use it for bulk storage?

Is that a particularly bad or stupid design? My rack space is limited and I'm trying to keep things simple. I don't get why inserting a NAS as shared storage is the way to go when the servers have all this expandability. (Rough Smart Array commands for what I'm picturing are below.)
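
If it helps with the sanity check, I think the Smart Array CLI version of that plan is roughly the following (hpacucli from HP's repo; the bay IDs are guesses until I see how the controller actually enumerates the drives):

code:

# see how the controller enumerates the bays first
hpacucli ctrl slot=0 physicaldrive all show

# mirror the two rear-bay SSDs for the VMs (bay IDs are placeholders)
hpacucli ctrl slot=0 create type=ld drives=2I:1:13,2I:1:14 raid=1

# stripe+mirror everything left in the front bays for bulk storage
hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=1+0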


Zero VGS
Aug 16, 2002

Docjowles posted:

I have to ask... is this super expensive, redundant hardware going to be used to run all of the software you're asking about how to avoid paying for in the Windows thread? I don't mean that in a judgemental way, I'm sure you're doing what you can within the constraints management gives you. But if their end goal is really "make random lovely legacy applications running on Win 7 VM's achieve 100% uptime" then goondolences :smithicide:

So far the servers are only slated to run management software for our call center (calls keep flowing if it goes out, but voicemail and call shadowing would go down) and a PoE security camera system I'm rolling on my own. Who knows what other VMs I'll need in production down the road, though. I'm building for 100% uptime because we may have more mission-critical stuff in the future, and hey, if something stupid like voicemail goes down I'm still going to get a call at 6am, so why not build for resilience? I'll probably wind up running as much from Linux VMs as I can get away with.

The redundant hardware isn't super expensive so far! The two Proliants would have cost $10k each in 2010, but I slapped them together entirely from individual components off eBay for $1,000 each.

Gyshall posted:

Traditionally your single point of failure is a SAN device with dual controllers/redundant everything.

Thank you, I'm reading up on these now.

1) Since the servers are HP DL180 G6s, does anyone have a recommendation for a corresponding HP StorageWorks SAN from that era that would pair nicely with them? I'd prefer to use normal SATA drives since they're a fraction of the cost of SAS.

2) I see that in addition to the SAN I might also need a SAN switch. If I'm never going beyond these two servers, is there any kind of PCI expansion card I can get to plug the SAN's SAS cables directly into the servers and skip the SAN switch business?

Zero VGS
Aug 16, 2002

evol262 posted:

That's kind of normal depreciation. What are they specced at?

Two DL180 G6s, each with dual Xeon X5675s (the strongest CPUs they'll accept), dual 750W PSUs, a P410 RAID controller with 1GB FBWC, and 48GB of RAM. Close to $1,000 even for each.

evol262 posted:

Stop there. You don't need a fabric switch. And there's absolutely no point in also getting old, janky storage. You can probably get a MD3200i pretty cheap, but budget...

Get something that does iSCSI and push it all over ethernet.

You want SATA drives? Are you planning on putting your own in, or getting from a vendor? What's your budget.

I'll check into iSCSI, but at first glance it seems like it might wind up just as janky as the SAS once it's actually implemented with the older Proliants.

No matter what I go with, I'm buying off-the-shelf SATA drives and popping them in myself. I might split the storage into two RAIDs, one for SSDs and one for HDDs, if that's OK.

Let's say I've spent $2k on the servers and I have $4k left for the SAN and all the drives to jam into it.
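
The host side of iSCSI doesn't look too bad from the reading so far; if I'm understanding the docs right, it's something like this with iscsi-initiator-utils (the portal IP and IQN here are made up):

code:

# install the initiator tools
yum install iscsi-initiator-utils

# ask the array what targets it exposes
iscsiadm -m discovery -t sendtargets -p 192.168.2.50

# log in to whatever target it reports back
iscsiadm -m node -T iqn.2002-10.com.example:storage.lun1 -p 192.168.2.50 --login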

KoeK posted:

For the SAN: it might be interesting to look into an HP P2000 G3 SAS, with a pair of SAS HBAs for each server. You get all the SAN features but also a very easy way to install and configure. The only downside is that you can only connect 4 hosts to it.

I was looking at the SAS host bus adapters; that might make the most sense. I highly doubt I'll be going past two physical machines during my time here, it's still only a call center.

Docjowles posted:

So this system requires 100% uptime but there's no vendor support contracts for it?

I guess you guys take the 100% figure more seriously than I do. Nothing huge is at stake, people just get mildly annoyed if the voicemail or something goes down. My prime directives are:

1) Spend as little money as possible without my poo poo being so flaky/unauthorized that it costs us more money down the line than I've saved. That's the balance I'm trying to strike so that I can serve 300 call center grunts.

2) I don't like off-hours calls. I've only gotten one in six months for a forgotten password, so far so good.

But it's time to scale up to 600 people and I figure I'd better get in on the VM action. My last ten years have been physical everything (Navy, then healthcare), so I'm forcing myself to do this from scratch as a nice crash course, and hopefully I'll wind up with something production-worthy for on-premises stuff.

Zero VGS
Aug 16, 2002

Wicaeed posted:

It hasn't even been mentioned yet, but what licensing level of VMware are you planning on using? You mentioned HA/FT features, which require a VMware license level that sounds like it will cost more than the entire environment you've described already.

I'm highly discouraged from using VMware under any circumstance (my company competes with one of their products, rather antagonistically), so I'm gonna have to figure out how to do it all on Hyper-V.

Erwin posted:

You're not striking that balance at all with used ebay poo poo and no support contracts. Your chance of downtime might be very small due to the redundancy, but that downtime will last weeks while you scramble to piece together more used hardware to replace whatever died and try to janitor it all back together. If you think you'll still have a job after 3 weeks of no phones because you saved your company a few grand, you're crazy.

First, I did buy a couple spares of every component so it won't be a scramble if anything goes.

Second, the phones and service are on a support contract. The phone management/voicemail/UC server we have is a Dell Celeron with a single HDD; modifying it voids the warranty, and they have no option for anything nicer. They said I could give them a VM to migrate to and keep the application support while giving up the hardware support on what was a time bomb anyway. Considering the circumstances, I found that more prudent, since the servers I build can run other stuff too.

I swear I'm not as kamikaze as I sound, and I'm pretty drat resourceful in practice. I don't want to rely on these support contracts and SLAs, because so far Microsoft, HP, ShoreTel, and our ISP have all treated them like toilet paper on multiple occasions.

Still, sorry for the ranting. I expected a heap of criticism, and naturally I've got a lot more reading to do before I finalize the design of all this. I appreciate the input, and I have at least a few more months of lead time to play around with this stuff and get some sanity checks before I flip the switch.

Zero VGS
Aug 16, 2002

evol262 posted:

I'd love to pretend this never happened, but I did build out an HA environment by mapping all disks to VMs, setting up primary/primary DRBD, and exporting HA NFS back out.

I once took a server we already had, bought and built an identical one, ripped one of the original's two mirrored drives out during live production, popped it into the new hardware, put a blank second drive into both servers, and tricked them into rebuilding their RAIDs; finally I changed the Win2003 product key on the second box. It actually worked; I called it mitosis backup. I left that company, but my old boss says it still works to this day.

evol262 posted:

KVM doesn't run on Pis (technically KVM on ARM sort of runs on the 2), and we have alternative architecture support, but nobody's tried on ARM that I know of.

There is native gluster support, though, including setting up storage on compute nodes as bricks. $250 is low, but 2 grand is doable

So what you're saying is I should forget the storage array and just pay someone smarter than me to KVM/gluster both of these servers together :cheeky:

Zero VGS
Aug 16, 2002

evol262 posted:

You can just install centos, add the ovirt repos, install engine-setup, and go. Everything else can be done from a web ui that's point and click. Gluster is a check box. Adding bricks is a wizard :)

Update: I received the eBay server components and built the two identical Proliants up from scratch, I'm talking thermal paste and heatsinks on the Xeons; lots of fun, actually. I had to scour for a mirror to get the newest HP SPP firmware installer, because half a year ago they apparently stuck it all behind a Cisco-style support contract wall. Pretty hosed up. It worked, though; updated my iLO, RAID controllers, etc.

Anyhoo, I put a 1TB 850 Evo in each server and fired them up. I installed CentOS 7, Gnome, and oVirt, ran engine-setup, and got the web UI up. You're right that it's not too bad for someone who's allergic to the CLI.
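
For anyone following along, the whole install boiled down to something like this (going from my shell history, so the repo RPM name may be slightly off, but it's the oVirt 3.5 one):

code:

# add the oVirt 3.5 repo, install the engine, run setup
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
yum install ovirt-engine
engine-setup

# web UI comes up at https://<hostname>/ovirt-engine/ afterwards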

The stuff I'm stuck on now is figuring out:

1) Each server has two gigabit NICs, and I want to team them in an active-active load balance (i.e. if one of the NICs or switches goes bad, there's a redundant connection). All I can find online is some madness with JSON runners and more CLI stuff. If I use the Network Setup GUI to create a team with the two Ethernet ports as slaves, it cuts out the connection until I revert it. I tried telling both NICs to pretend to have the same MAC and that doesn't seem to help. I've done this in Windows with the HP SmartStart wizards before, but this is kicking my rear end. (A sketch of what I've been trying is below the list.)

2) What's the high-level best practice for HA in this setup, i.e. two servers and no NAS (gluster essentially making them both the NAS)? Should I be installing the oVirt web UI on both and somehow linking them after the fact, so I can manage the same VMs from either?

3) With each server having a single 1TB flash drive at the moment, I'd want to make an equal-sized gluster partition/brick on each drive and tell gluster to do a "distributed replicated volume", right? Then just start spinning up VMs on that volume and I'm good to go? (Rough commands for what I have in mind are also below.)
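
For (1), here's the flavor of what I've been attempting with nmcli, in case someone can spot where it goes sideways (NIC names are whatever ip link showed me, and I'm reciting from memory, so treat it as a sketch):

code:

# team device with an active-active runner (this is the JSON madness)
nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "loadbalance"}}'

# enslave both onboard NICs (device names may differ on your box)
nmcli con add type team-slave con-name team0-p1 ifname eno1 master team0
nmcli con add type team-slave con-name team0-p2 ifname eno2 master team0

# static address, then bring the team up
nmcli con mod team0 ipv4.method manual ipv4.addresses 192.168.2.201/24 ipv4.gateway 192.168.2.1
nmcli con up team0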
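
And for (3), my current understanding of the gluster side is that with one brick per server it's just a plain replica-2 volume, something like this (brick paths are my own layout, not gospel):

code:

# from server 1, once glusterd is running on both
gluster peer probe server2

# one brick per host, mirrored across the pair
gluster volume create vmstore replica 2 server1:/gluster/vmstore/brick server2:/gluster/vmstore/brick
gluster volume start vmstore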

Zero VGS
Aug 16, 2002
Woo, you're awesome! Guess I've got a lot of reading ahead of me.

By remove Gnome, do you mean there's a better GUI for what I'm doing? I was using it for things like copying and pasting commands from a Firefox window, and I was going to install TeamViewer or something so I can keep configuring things from home.

I really can't parse the CLI stuff. I can see how a GUI in Linux would be viewed as a crutch, but I always take ten times longer to type commands, and I need the visual feedback to make up for my toasted short-term memory.

Zero VGS
Aug 16, 2002
Oh cool, didn't know I could PuTTY into Linux like it was a switch.

Zero VGS
Aug 16, 2002
Heh, I thought I might have gotten away clean that time; I kinda get hammered every time I ask a question.

adorai posted:

This has to be a troll.

Not a troll, I've just been exclusively administering Windows for the last decade across three jobs. My *nix experience has been limited to cracking WEP passwords with a BackTrack live CD and rooting Android phones.

evol262 posted:

IIRC, he's in a role with a senior title and relatively little experience, overseeing multiple badly designed sites on a shoestring budget that multiple goons told him to flee, but he stayed because of a raise or something.

But all of us started somewhere.

I did flee: I made the new and old jobs bid for me and went from $50k/yr to $80k/yr. I've been at the new place for half a year, but the previous ten years were all on a shoestring budget, and old habits die hard, I guess. At least the CFO loves my cheapskate antics. I should state again that the servers I'm building aren't going to cost us any real productivity if they blow up; I'm just pretending to chase 9s for the learning experience. Who builds poo poo to break?

NippleFloss posted:

I believe he told someone they should run DHCP on their network gear and not use the Windows DHCP server because some network guy told him that was a good idea.

I know my opinion isn't the popular one and I don't want to necro this debate again, but I've never seen Windows servers manage the sheer uptime you get running DHCP on, say, Cisco switches. Plus you do need CALs when you serve DHCP off Windows. I see the merits of both sides; I just sleep better with DHCP on something embedded. For the record, at this place I moved DHCP from a Win2003 Celeron onto a pair of active-active Peplink Balance routers that are also bonding our three ISPs.

adorai posted:

While I do believe that is an accurate statement, I also recall reading somewhere that Microsoft Licensing and Audit clarified that they aren't interested in dinging you for not having a CAL just so your printer can get an address from DHCP. It should be codified as such, but I am very confident that DHCP is not going to be my licensing downfall when I already pay Microsoft hundreds of thousands of dollars per year.

It should be codified, but it won't be. Microsoft's lawyers, like everyone else's, love leaving in all those "gotchas". It's all for people like me who take the food out of their mouths by buying used Proliants and Catalysts off eBay, kind of the way the games industry tries every trick in the book to discourage second-hand game sales.

Microsoft as a company acts a bit entitled, like because I support 300 users, our paycheck and commitments to them necessarily have to keep scaling exponentially. They send salespeople to feel me out all the time, and I comb their EULAs to stay out of their licensing clutches. I pay them $20/mo per user for O365; that's all we need and that's all they're going to get out of me, especially since it's buggy as gently caress and their support is garbage. Why would I let myself get any deeper into that? That's why I'm psyched to be learning how to build these redundant-everything private-cloud servers: I spent a week learning Azure, and combing through RHEV documentation isn't nearly as painful as TechNet blogs.

evol262 posted:

But even though my advice to him was also to forget about the fact that his title is currently senior systems engineer or whatever and go for a mid-range admin job where he can learn, he didn't, and that's that.

You're a cool, helpful, and pragmatic goon, and I really appreciate it. I get that it's fun to dogpile too; I steel myself for that when I post. I ask about the things I'm most clueless about, so it sounds worse than it is. I've seen people a lot more senior and by-the-book become complacent and let huge disasters into the production environment, and yeah, it'd be hubris to assume it can't happen to me. That's why I suck it up, come in here, and get my rear end kicked until I figure things out. Anyhoo, no time to play at a mid-range admin job; I just have to kick rear end and stack bills so I can change careers or retire without needing this as a day job.

Zero VGS
Aug 16, 2002

NippleFloss posted:

Honestly, the whole thing is weird. He originally stated that his requirements were 100% uptime, and when told it was basically impossible on his budget he said "well, obviously I don't really mean 100%, you guys just read that all wrong". Then he said he can't use VMware because his company is a competitor, but what competitor to VMware can't scrape together more than like $5k of infrastructure hardware to support a 600-person call center? That call center is presumably pretty important, otherwise why pay 600 people to man it? And then you go back to his previous job post, where he basically jumped into a job he was very under-qualified for because he wants to stack that paper and retire early or something, and the whole thing paints a portrait of a really dysfunctional environment that he's treating as a playground for whatever random idea pops up.

I think a single Hyper-V host running on hardware RAID, with frequent backups to a cheap NAS, would end up being more resilient, because it would have a support contract backing it and would be more in line with his technical capabilities. And if management truly doesn't care about uptime, then the lack of storage redundancy wouldn't be a showstopper.

To address all that:

- I thought I was being theoretical when I first said 100% uptime; that was my bad for not realizing it's an actual term that gets taken seriously. The two VMs I have slated so far are for voicemail (which management confirmed almost no one in the call center actually uses, to the point that we agreed to buy standalone user licenses without voice mailbox licenses going forward) and for recording PoE security camera footage (which was mission-critical at my last job, but isn't for a call center). They can even be taken down for maintenance for several days if need be. But I want to learn to build for high availability, because if this winds up being rock-solid I can start virtualizing other nice-to-have quality-of-life servers.

- Support is the key word here: I'm planning this system to support the call center staff, but the only true line-of-business stuff we run is O365 and Salesforce. Plenty of people can work just as well from home without any VPN.

- My first IT job had me running the whole network on a $2 billion warship; after that I was the only IT guy for three psychiatric hospital facilities. I wasn't under-qualified for either. I was given embarrassingly small budgets, and I learned that bitching to management doesn't solve anything when the scrimping is institutional. So I decided to embrace it: be really meticulous with backups, deliberate with changes, and document the hell out of all my ghetto hacks. My old bosses still tell me they appreciate it. And I insisted on a haughty job title because it was free to negotiate, and while it might not help my career, it certainly doesn't hurt.

- A single Hyper-V host may indeed be more appropriate, but I'm prepared to put in the work to avoid nag calls about needing to reboot the thing, like the ones I get now thanks to the moron IT guy who left a Dell Optiplex running the voicemail.

Can I coin the phrase "developroductionment"?

Zero VGS
Aug 16, 2002
I'm trying to set up a single server with CentOS 7 and oVirt 3.5. When I tell it to add itself as a host, it spits out logs in webadmin about installing packages, then after a minute webadmin hangs completely, and ten minutes later it pops up an error:

quote:

Error while loading data from server: Download of https://192.168.2.201/ovirt-engine/webadmin/deferredjs/8F788F85BF8B5BF5E5233CF55EF14245/11.cache.js?autoRetry=3 failed with status 0()

If I run engine-cleanup and then engine-setup all over again, I still can't seem to make it through. Any ideas? I did notice that hostnames from CentOS on this server don't resolve on my network, but I edited /etc/hosts so they at least resolve on the server itself (shown below). Not sure if that has anything to do with it.
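
For reference, the hosts hack was just this (the FQDN is one I made up for the box):

code:

# engine host wasn't resolvable by name on my LAN, so I pinned it locally
hostnamectl set-hostname ovirt1.local
echo "192.168.2.201  ovirt1.local ovirt1" >> /etc/hosts

# sanity check that the engine name resolves to itself
ping -c 1 ovirt1.local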

All I'm planning for now is spinning up a FreeNAS VM that takes half the storage on the server; I wish there was an idiot's guide beyond the oVirt documentation.

Zero VGS
Aug 16, 2002

evol262 posted:

Are you using all-in-one? You should be.

If you have the memory (2GB is enough, 4GB is better), I'd use a hosted engine, which will let you easily expand your setup with resiliency on the engine. If you don't, use all-in-one.

When you install the engine and try to add the host it's running on as a compute node, vdsm tries to set up a bridge and waits for it to come up, but that itself takes the engine offline. Then otopi (the framework which does all this) rolls the state machine all the way back and reverts everything.

Unchecking the "verify connectivity" box in the new host dialog may get you past this, but everything on one machine really isn't a recommended setup without the all-in-one plugin.

I have 48GB of RAM, but Hosted Engine says it outright requires two hosts and shared storage.

I'll try the all-in-one thing and see how it goes, thanks.
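
Looks like the all-in-one route is just an extra package before rerunning setup, if I'm reading the oVirt 3.5 docs right (the exact prompt wording during setup may differ):

code:

# wipe the half-configured engine, add the AIO plugin, rerun setup
engine-cleanup
yum install ovirt-engine-setup-plugin-allinone
engine-setup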

Zero VGS
Aug 16, 2002
I've been struggling to get standalone-mode oVirt working so I can make a FreeNAS server, and one of our sales engineers strolls by and is like, gently caress that, give me a flash drive to install XenServer, I'll have you up and running in 30 minutes on this software-defined storage thing that's free up to 10TB.

That sounds reasonable enough since this server will be light-duty security cam storage, but am I walking into a trap?

Zero VGS
Aug 16, 2002

evol262 posted:

What's the struggle with bringing up freenas? It sounds like he's recommending nexenta, but not sure.

Xenserver (with XenOrchestra) is also really nice, but setting up a storage VM will be similar to ovirt, probably with the same problems.

Well, I never got as far as setting up FreeNAS; I just threw up my hands with oVirt, and this guy is like "hey, I'll show you how to do it with Xen and QuantaStor".

I wound up not even needing much help from him. I installed Xen myself, then got stuck creating a VM from a flash-drive ISO without any shared storage, but he showed me I can just share any Windows folder and point Xen at it. Then I did manage to get QuantaStor running; as of this moment I can browse to the share from Windows and copy files to it. Oddly, deleting files doesn't work, but I'll sort that out tomorrow.

I know you gave me a lot of help with oVirt, so I feel kind of guilty for not sticking it out, but things like setting standalone mode, bonding the two NICs, or loading ISOs were "follow the instructions, get error messages thrown, repeat", whereas this is my first experience with Xen and those all worked without incident. Much more like my experience with the free, gimpy VMware version. (The ISO-share bit is sketched below.)
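
For anyone else stuck on the ISO part, the Windows-share trick apparently maps to a single xe command (share path and credentials here are placeholders):

code:

# register a CIFS share as an ISO library on the XenServer host
xe sr-create name-label="ISO library" type=iso content-type=iso \
  device-config:location=//192.168.2.10/isos \
  device-config:type=cifs \
  device-config:username=shareuser device-config:cifspassword=sharepass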


Zero VGS
Aug 16, 2002
Question about AWS Elastic Load Balancing:

I've had an Asterisk PBX up on Amazon EC2 for a month now, prepping it to replace our production phone system. I spun up a separate VPN VM alongside it so we can access the internal IP (not using the Amazon public IPs, for security reasons), and it's been rock-solid making calls with a dozen phones for weeks. I have two SIP trunks, two dual-WAN routers, two ISPs in, etc., but what I haven't decided on yet is the best way to give the Asterisk VM itself better availability on Amazon.

Do I just take a snapshot of the current PBX instance and stand it up in a different availability zone? I see that Amazon ELB will give it the same IP address, but will it also get the same MAC address? I ask because the PBX software uses the MAC to tell whether it's the same machine, and requires re-registration of the paid modules if it differs.

Does that seem like the best way to go about improving availability? Has anyone else tried something like this?
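
The snapshot half at least looks simple from the AWS CLI, something like this (all the IDs are placeholders; the MAC question still stands, since I gather each instance gets its own):

code:

# bake the running PBX into an AMI without rebooting it
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "asterisk-pbx-standby" --no-reboot

# launch the copy in a different AZ by picking a subnet that lives there
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.medium \
  --subnet-id subnet-0123456789abcdef0 --key-name pbx-key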
