|
Rhymenoserous posted:Roll your own SAN/NAS by getting an individual machine with lots of disk. The situation I'm currently dealing with is a single host with lots of disk.
|
# ? Jun 4, 2012 14:09 |
|
|
# ? Mar 28, 2024 13:40 |
|
Put something like Nexenta on it then.
|
# ? Jun 4, 2012 15:50 |
|
nahanahs posted:We've got a deal where we can get individual machines and lots of disk pretty cheap, to the point where it's way more economical than network storage. Gonna go ahead and quote this and point out that while it may have been way more economical in a capital (CAPEX) sense, it's certainly not economical in an operations (OPEX) sense.
|
# ? Jun 4, 2012 16:22 |
|
liveify posted:What would be an example of an entry level SAN for a couple servers? Dell MD series or HP P2000 or various Drobo/ReadyNAS level devices depending on whether you want SAN or NAS and the version of vSphere. Keep in mind the VSA (VMware or P4000) can also give you clustering/redundancy over and above what an entry level SAN will give you, even one with dual controllers.
|
# ? Jun 4, 2012 16:57 |
|
I haven't heard of anyone using a Drobo in production. I would look at Synology or something a little more business oriented. The only problem with that kind of stuff is that there is no HA at all. Single controller, single power supply. I would check the Storage thread for more info. http://forums.somethingawful.com/showthread.php?threadid=2943669
|
# ? Jun 4, 2012 17:06 |
|
FISHMANPET posted:Gonna go ahead and quote this and point out that while it may have been way more economical in a capital (CAPEX) sense, it's certainly not economical in an operations (OPEX) sense.
|
# ? Jun 4, 2012 17:37 |
|
Internet Explorer posted:I haven't heard of anyone using a Drobo in production I have, and believe me it wasn't a suggestion
|
# ? Jun 4, 2012 17:38 |
|
For your reading pleasure, VMware has put out an updated vSphere Hardening Guide for 5.0.
|
# ? Jun 4, 2012 17:52 |
|
Bitch Stewie posted:I have, and believe me it wasn't a suggestion Drobo has a new "enterprise" line, to be fair.
|
# ? Jun 4, 2012 18:47 |
|
http://www.qnap.com/useng/index.php?lang=en-us&sn=862&c=355&sc=703&t=706&n=4789 or one with more drives might do what you need; it's got redundant PSUs, and dual controllers are possible (of course that's not saying it's anything like enterprise, but it's nothing like enterprise costs either).
|
# ? Jun 4, 2012 20:49 |
|
Internet Explorer posted:I haven't heard of anyone using a Drobo in production. Because it's a terrible idea. A few years back, my boss insisted we try a Drobo Pro ("it's VMware certified!"). iSCSI was half-baked and would soil itself if you actually tried to have more than one iSCSI connection, and once you put significant random I/O on the disk, its performance was actually worse than that of a single hard disk. gently caress Drobo. (Drobo gear is decent enough for home use, and maybe for a small lab, but certainly not for any serious work.)
|
# ? Jun 5, 2012 00:13 |
|
Bitch Stewie posted:Dell MD series or HP P2000 or various Drobo/ReadyNAS level devices depending on whether you want SAN or NAS and the version of vSphere. The Dell MD3200i looks pretty nice: dual controllers, dual power supplies. In the past my boss has opted for the more expensive, do-it-right option, so I think he'd be fine with the MD3200i. If we got an MD3200i with dual controllers, 2x Dell PowerConnect 6248s (I know, not the greatest, but we can get them cheap and we don't have that much traffic or layer 3), and the Essentials Plus kit, we could be pretty good with redundancy, I think.
|
# ? Jun 5, 2012 03:12 |
|
stubblyhead posted:I'm having some host-only network problems in VMware Player. My two VMs are getting IP addresses and stuff from dhcp and can communicate with the gateway, but not with each other. Everything's pretty much out-of-the-box with the exception of adding the default gateway in the dhcp config file (it didn't work beforehand either, and I added it in hopes that they just didn't know how to get from A to B). What should I try in troubleshooting this? Networking is not my strong suit unfortunately. So it turns out that I'm a big dummy, and that Windows Firewall blocks icmp traffic by default. My network configuration was just fine, though adding that option routers line was probably not necessary.
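In case it helps the next person who hits this: a quick way to tell a firewall drop from a real routing problem is to probe a TCP port instead of relying on ping, since Windows Firewall silently eats ICMP echo requests by default. A minimal sketch (the guest address and port in the comment are placeholders):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Useful when ICMP is filtered (so ping fails) but the network
    path between the VMs is actually fine.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: ping between the two VMs may fail while SMB still answers.
# can_connect("192.168.56.101", 445)  # placeholder guest address
```

If the TCP probe succeeds while ping fails, the problem is a host firewall, not the dhcp/routing config.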
|
# ? Jun 5, 2012 05:14 |
|
DrOgdenWernstrom posted:The Dell MD3200i looks pretty nice, dual controllers, dual power supplies. In the past my boss has opted for the more expensive, do it right, option. So I think he'd be fine with the MD3200i That sounds very similar to part of my setup. MD3220i and a 6248 (my boss doesn't think redundant switches are worth it). We have a few 6248's, and honestly they are pretty solid. You can use the back modules to toss in 4x10gb fiber connections.
|
# ? Jun 5, 2012 05:26 |
|
Naes posted:Can anyone comment on how well a virtual machine can handle 3d games (diablo 3, dota 2, etc)? I think there are some known issues with Diablo 3. I am not sure about Dota 2. You are fairly safe playing games that are a few years old. It is part of the aim of virtualization though, mostly on the hosted stuff. For now. I worked on the 3D side of things for VMware for several years. Now I am over on the remoting side of things, so I am very familiar with this aspect of the product.
|
# ? Jun 5, 2012 06:10 |
|
DrOgdenWernstrom posted:If we got a MD3200i Dual controller, 2x Dell PowerConnect 6248's (I know, not the greatest but we can get them cheap and we don't have that much traffic or layer3), and the essentials plus kit, we could be pretty good with redundancy I think. Just work out what you want to make yourself redundant against. Respectfully, a lot of people go and buy a dual-controller SAN thinking it's the answer to any and all redundancy problems - it still leaves you with all your eggs in one basket and open to the biggest cause of failure, which is human error. Also don't forget that if you want to replicate you need to buy another one - with the VSAs you can "simply" stretch the storage between rooms or floors or sites, depending on your bandwidth and latency.
|
# ? Jun 5, 2012 08:54 |
|
Well, if you are calculating for human error there really isn't any SAN that is 'redundant enough'. Let's be honest here. You can work yourself to death and spend yourself into bankruptcy trying to account for 'anything that could possibly happen'. The proper way to do risk management with your gear purchases is to calculate likelihood of occurrence and impact from occurrence for potential risks, then mitigate the ones that make sense. The MD3200i is a fine kit for an SMB needing an entry level SAN. The dual controllers and dual power supplies, along with a sufficient RAID level, will cover most SMBs for pretty much any reasonable outage scenario. No, it's not going to fail over if Joe Admin walks in and starts beating on it with a hammer, but just ensure you have backups of your data in case the poo poo hits the fan and it will treat you just fine.

EDIT: I would also mention that I wouldn't be too concerned at all with it not doing replication. Replication in the SMB space is wayyyyy easy to take care of. If your kit doesn't do it you can find about a bajillion software vendors who will, like Veeam. Heck, Backup Exec 2012 can do a quasi-replication of VMs now. If you want to wait a bit longer, Windows Server 8 will do replication of VMs out of the box. In other words, replication isn't a make-or-break feature any longer, at least in the SMB space. Syano fucked around with this message at 14:47 on Jun 5, 2012 |
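The "likelihood of occurrence times impact" triage above fits in a few lines of code if you want to actually write it down instead of arguing about it. The scenarios and 1-5 scores below are invented purely for illustration:

```python
# Toy risk register: likelihood (1-5) x impact (1-5) -> triage order.
# Scenario names and scores are made up for this example.
risks = [
    {"scenario": "single PSU failure",   "likelihood": 3, "impact": 4},
    {"scenario": "controller failure",   "likelihood": 2, "impact": 5},
    {"scenario": "admin with a hammer",  "likelihood": 1, "impact": 5},
    {"scenario": "full site power loss", "likelihood": 2, "impact": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Mitigate the highest scores first; consciously accept the long tail.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["scenario"]:<22} score={r["score"]}')
```

The point is not the numbers, it's that "admin with a hammer" ends up near the bottom of the list, so you stop paying to mitigate it.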
# ? Jun 5, 2012 14:44 |
|
We had a quick power outage yesterday (~20s or so) and discovered that our brand new SAN wasn't hooked up properly. Rather than being plugged into the rack mounted batteries, it was plugged into a standard APC UPS. Good times. Our hardware guy brought in the correct power cables to plug it into the rack UPS, and everything is good now (I hope). What baffles me though, is that I see no evidence of actual issues with our VMs that were all running off that SAN.

Lost access to volume 4f8ebe80-4b76f6b8-3b0a-00215e2e0fd2 (shared_database_1) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. info 6/4/2012 6:53:15 AM shared_database_1
Successfully restored access to volume 4f8ebe80-4b76f6b8-3b0a-00215e2e0fd2 (shared_database_1) following connectivity issues. info 6/4/2012 6:54:39 AM <vmwarehost>.<domain>.com

So we had no hard drives in any of our VMs for a minute and a half, and everything magically works like nothing happened. None of our VMs locked up or shut down. Everything appears to be running like normal. I have no idea how everything can just work when we completely lost hard drive connectivity. I'm assuming this is all some kind of VMware magic with it caching read/writes? Just how long can a VM survive without access to the SAN?
|
# ? Jun 5, 2012 16:27 |
|
I bet your SAN's battery-backed cache held all the machines up if the outage was only 20 seconds.
|
# ? Jun 5, 2012 16:32 |
|
Syano posted:The proper way to do risk management with your gear purchases is to calculate likelihood of occurrence and impact from occurrence for potential risks then mitigate the ones that make sense. Agreed. The problem is that I see many SMBs (I did it myself with our first SAN) who do none of this and simply assume that a SAN is the solution to a pretty poorly defined problem. It's very easy to buy something cheap and then, a few months later when the need changes, find that either you point blank can't do it, or you can but you're going to get bent over (cheap EMCs and NetApps leap to mind).
|
# ? Jun 5, 2012 16:36 |
|
Frozen-Solid posted:I'm assuming this is all some kind of VMWare magic with it caching read/writes? Just how long can a VM survive without access to the SAN? Our test environment SAN locks up sometimes when you put in a new disk or replace a disk (Promise VessRAID, seriously don't buy it for production). One time it was almost 5 minutes. VMs obviously didn't like what was going on, but nothing locked up and everything was pingable. Even though it wasn't production, it was still several minutes of butt clenching.
|
# ? Jun 5, 2012 16:37 |
|
Syano posted:The MD3200i is a fine kit for a SMB needing an entry level SAN. The dual controllers and dual power supplies, along with a sufficient RAID level, will cover most SMBs for pretty much any reasonable outage scenario. That is how I feel about our MD3220i. It is great as a traditional entry level SAN. We are now going to begin dabbling with introducing SSDs into QNAP's "tier 1" devices and see how they handle everything. With 10GbE networking and redundant internals, I think performance will be fine, as long as there are no huge software bugs.

Frozen-Solid posted:So we had no hard drives in any of our VMs for a minute and a half, and everything magically works like nothing happened. None of our VMs locked up or shut down. Everything appears to be running like normal. I have no idea how everything can just work when we completely lost hard drive connectivity. I have had this happen to me a few times before. One time the iSCSI connectivity for a certain host was knocked out for like 15 minutes (apparently one of the quad port NICs was physically damaged, so if the cables were stressed at all, they would drop connectivity). After a few minutes, the machines were still pingable, but couldn't hit file shares/RDP/services. After finding the problem and reconnecting the cables, everything came back online happy.
|
# ? Jun 5, 2012 17:58 |
|
I'm messing around with ESXi, finally, and I'm wanting to test more of the networking and clustering, but I'm a bit limited on processing/space. Is there a super slim Linux distro or anything like it out there that just has basic networking, a shell, and a few other features, and is also very small in terms of disk space and other overhead? I mostly just want to test networking numerous machines together and that sort of thing without installing larger OSes like Windows Server etc.
|
# ? Jun 6, 2012 02:00 |
|
We're re-configuring a 'private cloud' at Rackspace which is basically a co-located ESX box. We have dual quad-core E5220's (? something like 2.26GHz Xeons) and a paltry 24GB of RAM. We can't spin up any more VMs because we have a 10GB MySQL server and a 14GB server mainly running Apache (Rails apps), so we don't have any more RAM. We're in talks with Rackspace (who want to sell us everything they have) and somehow decided to get an identical setup with 32GB RAM. I don't know why we don't get hex-cores so we don't have to spin up public cloud VMs to handle new apps like we do now, and I don't know why we don't get, you know, faster CPUs, since we've been on these for over 2 years.

Anyway, a dev who's been here for 5-6 years (and knows almost nothing about hardware/Linux/VMs) gets the idea to put a bunch of public cloud web servers out there, and make our MySQL server 20GB and 8 vCPUs. Now, this server is a 4-vCPU with 10GB right now. I've asked why we don't add more RAM so we can have a huge InnoDB cache and in theory cache the whole DB in RAM, so it'd be faster (we have like 15GB of InnoDB data). The load of the machine is like .5-1.0 throughout the day. It's really not hurting for performance.

I made the mistake of saying "You'd never give the MySQL server in our situation 8 vCPUs", and he just went off about "How is it going to get bigger and faster if you don't give it more cores", when I said MySQL doesn't scale like that and you don't just throw that many vCPUs at a server like that, and that it was pointless. Then I said just forget it and tell us the rest of the plan for the new server, but he went on and on about it for the next 15 minutes and I tried explaining it to him (I've actually configured a bunch of ESX boxes before) but it was pointless. I explained how we have 16 vCPUs since it's a dual quad-core Xeon and Rackspace enables HT by default. Then he said it's only 1.5 vCPUs per CPU so we only have 12. I explained that's Rackspace's 'best practice' or whatever you want to call it, but he didn't want to listen. I wanted to explain how when I set up the virtual servers for our test/mirror MySQL servers, they were actually faster in my tests when I configured them for 2 vCPUs and not 4, but I wasn't even going to bother. Why do you even invite me to these meetings if you're not going to listen to what I say, and you're just going to buy what you think is best either way?

He also said we had 'way too much hard drive space' and he wanted to get the new server with less since we aren't using much of it. I explained that even with the smallest drives we could get (146GB SAS), if you put 6 or 8 in RAID you're going to end up with 750GB-1TB total usable space. You can't just stick a single drive in there and get any performance out of it.
|
# ? Jun 6, 2012 02:02 |
|
This might be a question for the Windows enterprise thread but I figured I'd try here first. First things first: I don't control our domain or network, and any changes to those are perhaps not likely to happen. I'm having trouble with Citrix/Microsoft VDI. I've troubleshot the thing to hell, and here's what I got out of it. Our AD environment operates DHCP in a split scope. Our domain is dm.contoso.com, but DDNS/DNS integrates a separate namespace, subdomain.contoso.com. So the FQDN for my host would be HOSTNAME.dhcp.subdomain.contoso.com, but when I try to set up Citrix/Microsoft, even when I put in the FQDN for my VDI host, it looks for HOSTNAME.dm.contoso.com. I've updated the primary DNS suffix so HOSTNAME.dhcp.subdomain.contoso.com shows up in Active Directory, and altered the SPN records so there's no reference to anything dm.contoso.com, but it still keeps trying to add HOSTNAME.dm.contoso.com. I'm out of ideas, and Google's failing me. edit: works now, disregard Guesticles fucked around with this message at 00:02 on Jun 7, 2012 |
# ? Jun 6, 2012 02:14 |
|
Bob Morales posted:Argh.
|
# ? Jun 6, 2012 10:54 |
|
How much of an issue is cpu scheduling now? I still hear it brought up constantly but I guess I don't know how much relaxed coscheduling has improved the situation since the old strict days where it was a big concern.
|
# ? Jun 6, 2012 14:12 |
|
Mierdaan posted:How much of an issue is cpu scheduling now? I still hear it brought up constantly but I guess I don't know how much relaxed coscheduling has improved the situation since the old strict days where it was a big concern. http://communities.vmware.com/docs/DOC-4960 ESX 4: http://vmwise.com/2010/07/09/what-is-co-scheduling-anyway/
|
# ? Jun 6, 2012 14:50 |
|
Bob Morales posted:hilarity This right here is why (good) tech companies are still desperate to hire despite the overall unemployment rate. They keep getting interview candidates like this genius who thinks throwing cores at/removing disks from a system that's almost certainly I/O bound will help performance. Out of curiosity, what version of MySQL are you running? I've stopped paying attention since my new company doesn't use it, but the 5.1 line was pretty abysmal at scaling to bigger hardware. They've made major strides in 5.5, especially if you use the patched Percona version, but there's still a lot of my.cnf options you can tune to take advantage of more cores/higher IOPS than the stock config assumes.
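To put a couple of concrete names to "my.cnf options you can tune": here's the sort of fragment I mean for a 5.5/Percona box. The values are illustrative only; size the buffer pool to your actual InnoDB working set and benchmark the I/O settings against your storage, don't copy these numbers:

```
[mysqld]
# Cache as much of the InnoDB working set in RAM as the box allows
innodb_buffer_pool_size = 12G
# 5.5+: split the pool into instances to reduce mutex contention on many cores
innodb_buffer_pool_instances = 4
# Raise the background flush rate if the storage can actually deliver the IOPS
innodb_io_capacity = 1000
innodb_read_io_threads = 8
innodb_write_io_threads = 8
```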
|
# ? Jun 6, 2012 17:02 |
|
Guesticles posted:This might a question for the windows enterprise thread but I figured I'd try hear first. I think I saw your post in another thread, or at least someone else had a very similar problem. I kept meaning to respond but I forgot. First let me say that what you are doing is nothing I've ever had any experience with and that many subdomains on a LAN scare me. :v But where exactly are you running into an issue? You said on your VDI host? vSphere / Hyper-V / XenServer? Where during the setup are you running into the problem? You are not going into a whole lot of detail on that front.
|
# ? Jun 6, 2012 19:52 |
|
I think at the initial post I was having issues with XenServer. I'm now up on Microsoft's VDI solution and Hyper-V. Today's post was about trying to set the Remote Desktop Server when configuring the Microsoft Remote Desktop Connection Broker. I've got a slight workaround (altering the sys32\drivers\etc\hosts file to point HYPERVISOR.dm.contoso.com to the correct place), so I've got my broker hooked up to my hypervisor, but now it's pitching a fit about the VM not matching a FQDN. I'm not done wrestling with it yet; maybe more hosts-file entries. Let me also say it's not a LAN, it is a CAN. It's almost a MAN.
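For anyone trying the same band-aid, the hosts-file entry on the connection broker looks something like this (the address is a placeholder; the real entry points at whatever the hypervisor actually resolves to on dhcp.subdomain.contoso.com):

```
# %SystemRoot%\System32\drivers\etc\hosts on the connection broker
10.0.0.50    HYPERVISOR.dm.contoso.com    HYPERVISOR
```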
|
# ? Jun 6, 2012 20:20 |
|
Docjowles posted:This right here is why (good) tech companies are still desperate to hire despite the overall unemployment rate. They keep getting interview candidates like this genius who thinks throwing cores at/removing disks from a system that's almost certainly I/O bound will help performance. We -just- moved to 5.1 because Rackspace gave us a "surprise upgrade" a week or two ago. I said we should look into using 5.5 because the previous versions don't scale well, but at this point it's so futile to even try. Strangely enough, the person who replaced me at my old job resigned today, so my old boss started texting me this morning, so who knows. Here's the CPU usage chart from our incredibly taxed DB server that needs 8 cores of powah: That spike in every category at 8:00am (4:00am our time) is our daily <100,000-user email blast; we have like 15 different programs that stagger their emails for each participant.
|
# ? Jun 6, 2012 20:45 |
|
Bob Morales posted:Here's the CPU usage chart from our incredibly taxed DB server that needs 8 cores of powah: I got yelled at yesterday because after a month of letting a domain controller loaf along with 2 vCPUs and 8GB of memory, I dropped it to 1 vCPU and 1GB. Performance charts showed that it wasn't using more than 10% of CPU/memory. We have two domain controllers with about the same load; the other one does fine with 1 vCPU and 1GB of memory, and never maxes either of those out. I think I am at the tipping point where I am just going to tell my boss he doesn't know a loving thing he rambles about. He keeps saying "if you give it the resources, it will use it". I did that just so you would shut up, and no, it didn't magically use those resources.
|
# ? Jun 6, 2012 21:06 |
|
To be fair I don't really think CPU provisioning is too critical, and I'd give everything at least 2, and just use reservations to make sure each VM gets a guaranteed minimum. RAM of course I would prefer to provision rationally based on physical amounts. If that's 2008 R2, though, 1GB is a bit low, I would personally say. 8GB for a DC doing nothing else at all though? That's probably excessive for almost any business.
|
# ? Jun 6, 2012 21:19 |
|
Is there anything complicated in a V2P conversion when it comes to Linux? It looks like everything I read about V2P is Microsoft specific. I want to build out an OS for a machine that is yet to be delivered but I'm on schedule so I'd like to do it in VMware Fusion, then just dump it to physical disk once the machine arrives.
|
# ? Jun 6, 2012 22:06 |
|
HalloKitty posted:To be fair I don't really think CPU provisioning is too critical, and I'd give everything at least 2, and just use reservations to make sure each VM gets a guaranteed minimum. RAM of course I would prefer to provision rationally based on physical amounts. If that's 2008 R2, though, 1GB is a bit low, I would personally say. 8GB for a DC doing nothing else at all though? That's probably excessive for almost any business. Maybe I am being a little too strict, but if I have not seen the thing use above 1gb in over a month, why allocate it more then?
|
# ? Jun 6, 2012 22:31 |
|
HalloKitty posted:To be fair I don't really think CPU provisioning is too critical, and I'd give everything at least 2, and just use reservations to make sure each VM gets a guaranteed minimum. RAM of course I would prefer to provision rationally based on physical amounts. If that's 2008 R2, though, 1GB is a bit low, I would personally say. 8GB for a DC doing nothing else at all though? That's probably excessive for almost any business. You really give every VM 2 vCPUs, or am I reading that wrong?
|
# ? Jun 6, 2012 22:33 |
|
HalloKitty posted:To be fair I don't really think CPU provisioning is too critical, and I'd give everything at least 2, and just use reservations to make sure each VM gets a guaranteed minimum. RAM of course I would prefer to provision rationally based on physical amounts. If that's 2008 R2, though, 1GB is a bit low, I would personally say. 8GB for a DC doing nothing else at all though? That's probably excessive for almost any business. To be fair, you're wrong and do not understand how scheduling works. Overallocation is a waste of resources and provides worse performance overall. There are tons of papers out about this. three fucked around with this message at 22:45 on Jun 6, 2012 |
# ? Jun 6, 2012 22:36 |
|
My sales rep just tried to sell me vSphere Standard edition... for my two 192GB RAM, dual 8-core hosts... He claimed it supported HA and vMotion, and that I only needed one license... Correct me if I am wrong, but for vSphere Essentials Plus and vCenter Foundation, I am looking at ~$7500 in licensing for these two hosts...
|
# ? Jun 6, 2012 22:39 |
|
|
the spyder posted:My sales rep just tried to sell me vSphere standard edition... For my two, 192gb ram, dual 8 core hosts... He claimed it supported HA and vMotion, and that I only needed one license... vSphere Standard is licensed by CPU, so I believe you would need four licenses of vSphere Standard to cover the CPUs, but it has a 32GB/license limit on vRAM. As for vSphere Essentials Plus, if you're talking about the kit that includes vCenter: "VMware vSphere 5 Essentials Plus Kit for 3 hosts (Max 2 processors per host) and 192 GB vRAM entitlement" So 64GB/server. EDIT: VMware noob here, so I might be wrong
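A rough back-of-the-envelope using the numbers in this thread, assuming vSphere 5 Standard really is one license per socket with a 32GB vRAM entitlement each, and that you'd eventually want powered-on VMs using all 384GB of that RAM (check the current licensing guide before quoting anyone on this):

```python
hosts = 2
sockets_per_host = 2
ram_gb_per_host = 192
vram_per_std_license_gb = 32  # assumed vSphere 5 Standard entitlement

# Per-socket floor: at least one Standard license per CPU socket
socket_licenses = hosts * sockets_per_host

# vRAM floor: enough 32GB entitlements to cover all the RAM,
# if you actually allocated it all to powered-on VMs
vram_licenses = (hosts * ram_gb_per_host) // vram_per_std_license_gb

licenses_needed = max(socket_licenses, vram_licenses)
print(licenses_needed)  # 12 under these assumptions, not the 1 the rep claimed
```

So even granting the rep the HA/vMotion claim, "one license" doesn't survive the arithmetic, which is why the Essentials Plus kit pricing looks the way it does.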
|
# ? Jun 6, 2012 22:45 |