Dilbert As FUCK
Sep 8, 2007

Stay dandy baby....


Old thread here.

1. Basic overview of Virtualization
2. Research and learning material
3. Terminology and Licensing
4. Different Virtualization Companies, differences, etc.
5. Common Virtualization Techniques (Best practices, network set ups, Allocation of resources)
6. Misc

This will primarily cover VMware/Hyper-V; I haven't worked with Citrix enough to give a good rundown on everything Citrix. If you feel confident and want it that badly, please post about it.


1. Basic overview of Virtualization
Keeping it pretty basic here; it will get more in-depth later on. If you want more, check out the old thread's OP linked at the top of the page.

Q: What is virtualization?
A: Virtualization is the process of divvying up physical host resources for multiple operating systems to run on at once.

Q: How does that work?
A: By use of a hypervisor, of which there are two kinds.
Bare metal: you install the hypervisor directly onto the physical host and let it schedule resources and pass them up to the virtual machines. Common ones are VMware ESXi, Citrix, and Hyper-V. SPECIAL THANKS TO Syano!!!
Host based: you install this hypervisor just like a program in Windows/Linux; it manages VMs by going through the OS and software layers.

Q: Which is better?
A: It depends, really. VMware is best for performance and features, but has tighter hardware requirements than something like Hyper-V. Using bare metal you will get better performance and access to more resources, and ESXi is completely free too. So if your hardware checks out I would recommend ESXi, as you can even run Hyper-V on top of ESXi.

Q: Who makes virtualization software?
A: The big players are VMware, Citrix, and Microsoft (Hyper-V)

Q: Who uses it?
A: Amazon, Apple, MS, Red Hat; all the big-name companies are on virtualization now.

Q: Other than the host, what are good things to know for VMware?
A: Networking and storage

Q: Why would I choose to virtualize my company?
A: Lots of reasons,
>No hardware dependencies
Let's say a host goes down in the middle of the night: with VMware, the VMs can be restarted on a new host without ruining your SLAs, or without you even knowing until you come in.
>Cost savings
You don't have to go 1:1 physical box to server; it is possible to have hundreds of servers and a high-uptime data center environment in a very small area that is easy to cool and doesn't cost as much to build.
>Automation
VMware has nice alarm and reporting features that help you monitor performance across your servers. Instead of having hundreds of reports printed out from Windows/Linux/BSD, you can set rules so that if a host hits 95% CPU it emails you which VM is responsible, where it is, and what the problem is; then you just give it more resources or look into what service is causing grief.
>Snapshots
gently caress up an update and now your servers are unresponsive? Snapshots can roll back to a running state from before the update, so you are back up and running.
>Over provisioning
I can assign a VM a 1TB drive when I only have 500GB and it will be fine; of course, when it starts filling up you will need to watch it and Storage vMotion it to something with more resources.
---I'll add more when I see the need to---
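The over-provisioning point above can be sketched with some toy numbers (nothing VMware-specific here; the 80% threshold is my own illustration, not a VMware default):

```python
# Toy illustration of thin-disk over-provisioning (no VMware APIs involved).

def oversubscription_ratio(provisioned_gb, physical_gb):
    """GB promised to guests per GB of real disk behind them."""
    return provisioned_gb / physical_gb

def needs_attention(consumed_gb, physical_gb, threshold=0.8):
    """Flag the datastore once real usage crosses the threshold."""
    return consumed_gb / physical_gb >= threshold

# A 1 TB thin disk backed by 500 GB of physical storage is fine at first:
print(oversubscription_ratio(1000, 500))  # 2.0
# ...but once the guest has actually written 400 GB, start planning a
# Storage vMotion to something bigger:
print(needs_attention(400, 500))          # True
```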

2. Research and learning material
This covers learning more about VMware and Hyper-V! Certs, learning VMware for the first time, or just wanting to learn more: look here!!! :fsiren:

New to VMware/Hyper-V, want to go for the VCP, or just need to get deeper into it?
ORDER THE vSphere 5 BOOK FROM SCOTT LOWE. Seriously, I cannot even begin to say how good this book is for learning; whether you are a noob or feel pretty confident, it will still teach you a lot.
Mastering Hyper-V is like the Mastering Server 2008 book, but all about Hyper-V.

Want to learn how to script things, automate things, or do everything from vCLI?
The PowerCLI book is for you

I need to learn how HA/DRS/Storage DRS work in detail!
The vSphere 5 Clustering book is for you! This is just like the yellow clustering book for 4.1.

My company is going virtual next year and they want me to plan it all!
vSphere Design is the book you want.

I need to know storage AND networking?
The EMC book and the Cisco book will give you good working knowledge of the two areas.

I want to learn this stuff hands on! How do I achieve this?
Your best bet is either to get some cheap servers off eBay (Save My Server generally has good deals), or better yet to max your PC out with 32GB of RAM, a 6/8-core CPU, and an SSD to host a whole cluster.

That cost me around $500-550 for 32GB of RAM, mobo, X6 CPU, and a 256GB SSD, and it is a lot quieter than running 5 servers; I run quite a few machines at once with no slowdown, as the VMs are on an SSD.
I am using VMware Workstation, which I got free from my school (it costs ~$200, or ~$100 for students), but things like VirtualBox, VMware Player, or the 30-day trial of Workstation 8 work too. Like I am doing, you can run ESXi inside a VM and run VMs on that nested hypervisor (32-bit guests only in Workstation 7; 8 supports 64-bit inside 64-bit). I run 3 hosts (you can add more) and vCenter with Enterprise Plus, hosting about 10-15 VMs with ease.

I need Hyper-V!!
If you don't feel like reformatting, look into one of the host-based hypervisors posted above, like VirtualBox; install Windows Server 2008 R2 in it and go from there. I'm not sure if VirtualBox supports nesting, so you may have to use 2008 and 32-bit OSes inside Hyper-V. You can download Server 2008 R2 here.

Wait, if I want to get VCP certified I have to go to a class?
Yes. To keep the cert's value high, and to at least give people who did brain dumps some understanding of it, you need to attend a 5-day VMware course, or the much cheaper option of a local community college. You can tell if a school is qualified by asking them or looking on the lists here or here; not all schools are qualified to teach toward the VCP and VCAP (mine is qualified for both), so make sure you check. Generally it is cheaper to go to a CC; most schools give you a voucher cutting the exam down to 75 bucks, plus online academic resources.

IT SAYS MY HARDWARE ISN'T SUPPORTED!!!
Check to make sure Intel VT-x/AMD-V is enabled in the BIOS, and make sure your hardware is on the HCL. NOTE that just because something isn't on the HCL doesn't mean it won't work; you can get 5 running stable on non-HCL gear, but VMware/HP/Dell will not offer assistance with performance problems or instability.

3. Terminology/Licensing
There is a lot of terminology that VMware coined for itself; I'll try to give you the best overview of the most common terms.

ESXi - The name of VMware's hypervisor. Also referred to as ESX which, while its own product, ended at ESX 4.1; there is no ESX 5 planned
vSphere Client - The program used to manage ESXi hosts remotely
vCenter - Links multiple hosts together into one easy-to-manage resource

MOST OF THESE BELOW REQUIRE VCENTER
HA - High Availability; when a host goes down, its virtual machines are automatically restarted on another host without user interaction
DRS - Distributed Resource Scheduler; allows VMs to be started on or moved onto different hosts, load balancing across the cluster
DPM - Distributed Power Management; when the cluster hits low load, DPM migrates VMs off lightly used hosts and shuts those hosts down to save power (iLO or similar must be supported on the host)
vMotion - Allows live migration of a running VM from one host to another without downtime
Storage vMotion - Allows a VM's virtual disks and configuration files to move from one datastore to another
FT - Fault Tolerance; allows a VM to run on 2 different hosts at the same time. If one host goes down the other takes over with little to no downtime. This differs from HA: HA does a restart, FT keeps the machine running
vSwitch - An internal switch in the ESXi kernel allowing virtual network traffic to pass from a VM to other VMs or out to a physical NIC
SMP - Symmetric multiprocessing; the term for how many vCPUs a VM can use
vDS (Virtual Distributed Switch) - Allows for layer 2 support on virtual networking and consolidation of network configuration across the cluster
Host Profiles - Think of these as unattend files in Windows: everything a new host needs (network adapter IPs, storage, NTP settings, HA/DRS, you name it) in about 5 clicks
HBA - Host Bus Adapter; kind of self-explanatory
Resource pool - A way to further divvy up resources on an ESXi host/cluster; generally used where resources are low and contention is obvious
Ballooning - A technique used to reclaim unused guest memory to keep the host from swapping pages to disk
___Storage/Networking___ (reworked)
NFS - Network File System. VMs are stored on these shares and accessed over the network. This method is easy to maintain; it really is as simple as exporting server:/share and adding it to your datastores, and then you are done! NFS is file-level, not block-level, meaning that when the host accesses a file it puts a small lock on it momentarily; this can hurt performance, but on most setups you won't notice much beyond slightly higher latency. More about file locking: http://en.wikipedia.org/wiki/File_locking

iSCSI - Block-level; datastores are mounted and accessed via targets and initiators. A bit more difficult to set up, but offers things like CHAP, no file locking, MPIO, LUN masking, and slightly lower latency.

FCoE - Uses the Fibre Channel protocol over Ethernet instead of a dedicated FC fabric. Much faster than iSCSI since it has no TCP/IP overhead; speeds range from 4-10Gb/s, latency is much lower, and it costs a bit more.
Fibre Channel (FC) - The same protocol over a dedicated Fibre Channel fabric
Thin Provisioned - Disks only consume what the OS actually uses; a 1TB disk holding 10GB of data will only take 10GB
Thick Provisioned - The full disk is allocated (zeroed) up front; performance is faster than thin. SQL servers will want this
Raw Device Mapping - The raw LUN is presented to the guest VM, offering the best performance for that VM


Which should I use?

[url=http://www.youtube.com/watch?v=VO46FyxGf3M]An EMC guy gives a good overview, worth the watch[/url]. It fully depends on your environment. Generally NFS will perform neck and neck with iSCSI in terms of throughput/IO, gets a nice gain from jumbo frames, and is really easy to administer and troubleshoot due to its simplicity. iSCSI, while a bit more difficult to set up, offers CHAP (which helps security), a tad lower latency, and MPIO. I generally go for iSCSI just for the lower latency and CHAP support; spending an extra 5 minutes to set it up is worth it, but not every environment cares about 10ms vs 5ms latency or CHAP support.
In short, both offer about the same performance when configured properly, so it really comes down to whether you need the nitty-gritty things each has over the other.


OH NO WHICH ONE DO I GET!!!
http://www.vmware.com/products/data...mpare-kits.html

I thought there was a free one
There is: the standalone hypervisor ESXi 5, but you have to manage each host individually.

WHAT? I can't use all my ram? What is that!
http://www.vmware.com/products/vsphere/pricing.html
The standalone hypervisor lets you use all the host's RAM, I believe, as long as it isn't clustered.

Explained

Let's use the Essentials Plus kit: you have 3 hosts total, each with dual-socket CPUs, and each of those CPUs carries a 32GB vRAM entitlement; so if you have a VM with 32GB of RAM, that entitlement can't cover any more VMs. While this may be a turn-off at first, as long as you provision things correctly you should have no trouble. Generally I have my servers running at 60-65% at idle and 75-85% under load. If, say, a mail/web/other server uses 2.5GB of RAM at low load and 3.5GB during peak, I would assign it only 4GB, as unused RAM is wasted RAM.
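A quick back-of-the-envelope version of that arithmetic (all numbers come from the paragraph above; this is just the multiplication written out, not a licensing calculator):

```python
# Essentials Plus example from above: 3 hosts, dual-socket,
# 32 GB vRAM entitlement per licensed CPU.
hosts = 3
sockets_per_host = 2
vram_per_cpu_gb = 32

total_vram_gb = hosts * sockets_per_host * vram_per_cpu_gb
print(total_vram_gb)  # 192 GB of vRAM entitlement across the cluster

# Right-sizing from the same paragraph: 2.5 GB at low load, 3.5 GB at peak,
# so assign 4 GB; unused RAM is wasted RAM.
peak_gb = 3.5
assigned_gb = 4
print(assigned_gb >= peak_gb)  # True: peak fits with a little headroom
```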

4. Different Virtualization Companies, differences, etc.
I know I will get a lot of flak here; this is a general overview.

VMWARE - VMware is usually regarded as the leader in virtualization, with the best market share and the most features (that work most of the time, anyway). VMware costs more and is a bit pickier about hardware support, but the stuff they sell is rock solid. Generally most modern servers are supported; I haven't seen any Dells from the 2950 up that are not on the support list. I would generally pick VMware if a client of mine needs uptime and stability. VMware Server was a host-based hypervisor VMware offered until it was discontinued in Jan 2010.

Hyper-V - Microsoft's take on virtualization; cluster-able and easy to use. It comes free with Server 2008 (ESXi is also free) and feels like any other host-based hypervisor; if you have used Windows 7's XP Mode or VMware Player it will feel very familiar. Hyper-V 3.0 is coming with Windows 8 and promises some decent features; we'll just have to wait and see. If your hardware is older and not on the HCL, or this is your first time virtualizing, this is a good cheap choice.

Citrix - Bare metal, a lot like VMware, only I hear lots of horror stories about instability. I really don't use Citrix due to bad past experiences with them, but they are much cheaper than VMware and offer features like VMware's HA/DRS/FT, just under different names and acting slightly differently. If anyone wants to give their $.02 on this, feel free. I do hear Citrix's VDI solution is much better than VMware's, but then again we have Windows Terminal Server for a reason.

Which is right for me
It depends on your environment really

First time virtualizing, no budget, or a small infrastructure to virtualize?
Chances are you have a Windows 2008/2008 R2 server on site; Hyper-V is simplistic, somewhat clusterable, and included with your server. The other option is to convert the physical host to a VM with VMware Converter, install ESXi 5 onto the server, import the VMs, and you are done!

Too many servers and a need to consolidate, uptime is the game, or you want to make the most of your hardware?
Citrix or VMware would be best. For small businesses the Essentials Plus kit is a good place to start; for anything more, Enterprise or Enterprise Plus.

Large environment, 99.99% uptime, need to make sure each VM has a nice host?
Enterprise or Enterprise Plus; preferably Plus, but that all depends on your budget and needs. Storage DRS is amazing and only in Plus.

If anyone wants to add Citrix stuff here, feel free.

5. Common Virtualization Techniques (Best practices, network set ups, Allocation of resources)
Now that you have an overview of virtualization, let's look deeper into setups. I will mostly be addressing VMware/Citrix here; some of this applies to Hyper-V as well, but not as much.

In a virtual environment you want to eliminate Single Points of Failure (SPoF) and make things as modular as possible; set up correctly, you can do server upgrades and network upgrades without any downtime, which is pretty cool.

Networked Storage
A common malpractice is keeping storage local to the host; this is a no-no for most places. Keeping storage on the network lets you restart VMs on other hosts if the host holding them fails, keeping uptime high; shared storage is key to a virtual environment. Without it you lose HA/DRS/FT and many other key features, so plan for a centralized NAS/SAN. The host itself will generally be fine with a small SSD, USB drive, or 2 HDDs in RAID 1; you want something decently fast with a good amount of space in case host swapping occurs. Host swapping is bad; ballooning is okay if kept under control.

But I'll take a performance hit!
This is true, you are limited by network throughput, but there are ways to counter this.

DO's
Put your storage network on a VLAN, a subnet, or a separate physical network from your client machines! Not only does this increase security, it also cuts down total traffic on the network
Use jumbo frames when possible!
Tier your storage! If possible, put stagnant data on cheaper storage (e.g. archive data on an NFS share) and keep commonly accessed data on your high-performance storage processors
Use port aggregation! Binding ports together for better throughput is a smart idea
Invest in 10Gb/E! It is pretty cheap now, your storage will love you!
Thick-provision SQL and DB servers; thin provisioning will cause a sizable performance hit!
Use an appropriate RAID level; a 20-disk RAID 0 array is fast but deadly
Monitor your logs; just because "it's up" doesn't mean your thin-provisioned servers aren't gobbling up GBs you don't have!
Choose the appropriate storage protocol for the job; NFS and iSCSI perform differently!

Don't
Cheap out on your NAS! It hosts all your data; spend the bucks and make sure it works
Confuse snapshots with backups! Some servers, like DCs, will have a hissy fit when rolled back among other DCs
Skimp on network adapters! Remember, the network is the only way these things talk to the ESXi hosts and VMs
Use consumer-grade switches
Do the opposite of Do's
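As a rough illustration of the RAID point in the Do list (deliberately simplified math; real arrays, hot spares, and controllers vary):

```python
# Simplified usable-capacity / fault-tolerance math for common RAID levels.

def raid_summary(level, disks, disk_gb):
    """Return (usable_gb, drive_failures_survived). Simplified model."""
    if level == 0:
        return disks * disk_gb, 0        # stripes only: no redundancy at all
    if level == 1:
        return disks * disk_gb // 2, 1   # mirrors: half the raw capacity
    if level == 5:
        return (disks - 1) * disk_gb, 1  # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_gb, 2  # two disks' worth of parity
    raise ValueError("level not modeled here")

# The "20-disk RAID 0" from above: all the speed and capacity,
# and any single dead disk takes the whole array with it.
print(raid_summary(0, 20, 1000))  # (20000, 0)
print(raid_summary(6, 20, 1000))  # (18000, 2)
```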

Some players in storage solutions are NetApp, EMC, EqualLogic/Dell, and HP.

for more on networking and serious discussions look HERE

Now let's dig more into networking.

Making things modular and redundant is key in this environment; to tie it all together you'll need a good, solid network. As a refresher:

NFS - Network File System; one of the slower storage options, but very cheap and easy to maintain; can be greatly improved by jumbo frames. TCP/IP-based; speed varies with network speed; latency is a bit higher than the others
iSCSI - Block-level access to data over the network; better performance than NFS by a long shot, gets a bump from jumbo frames but not as noticeably as NFS does. TCP/IP-based, speeds from 10Mb/s-10Gb/s; latency is comparable to that of a normal disk if set up on a VLAN, network, or subnet dedicated to storage. Cheap and easy to maintain; good if you have an existing Ethernet network
FCoE - Uses the Fibre Channel protocol over Ethernet instead of a dedicated FC fabric. Much faster than iSCSI since it has no TCP/IP overhead; speeds range from 4-10Gb/s, latency is much lower, and it costs a bit more
Fibre Channel (FC) - Like FCoE, but over a dedicated Fibre Channel fabric

The more network interfaces you have the better. Generally you'll need 6 for an HA/vMotion environment: 1 management, 1 backup management, 1 VM traffic, 1 vMotion, and 2 for storage. You can piggyback VM network traffic onto the management NIC and not see any real network hit, but best would be one NIC for VM traffic and one for failover. Gigabit should be standard practice; for your storage and vMotion interfaces, 10Gb should be heavily considered.
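The six-NIC layout above, written out (the role names are just labels for this sketch, not VMware port-group names):

```python
# The NIC roles suggested above for an HA/vMotion host.
nic_plan = {
    "management": 1,
    "management failover": 1,
    "VM traffic": 1,
    "vMotion": 1,
    "storage": 2,
}

print(sum(nic_plan.values()))  # 6 interfaces total
```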

for more about vlans, subnets and general networking click here


Resource allocation!

Resource Pools!
Resource pools help you divvy up resources in an environment where they are scarce; combined with reservations, they help make sure resources are always available for the VMs running in the pool.
Provisioning!
Just because you can over-provision doesn't mean you should. Giving a VM more RAM than it needs, so it idles at 25% and never expands past 50%, is wasting resources; running machines at 50-80% RAM usage at all times is fine, as unused RAM is wasted RAM. You should also take out anything not needed: virtual floppy drives, extra CPU cores, CD drives, USB support, etc. This reduces the overhead the VM needs to run; it frees up RAM, and with fewer vCPUs the scheduler doesn't need to hunt for as many idle cores and can get to the VMs that need access sooner (not really noticeable except in stressed environments).
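A tiny sketch of that utilization rule of thumb (the 50% floor is my own reading of the 50-80% guidance above, not a VMware number):

```python
# Flag VMs whose assigned RAM far exceeds anything they actually use.

def overprovisioned(assigned_gb, peak_used_gb, floor=0.5):
    """True if even peak usage stays below `floor` of what's assigned."""
    return peak_used_gb / assigned_gb < floor

print(overprovisioned(8, 2))    # True: idles at 25%, trim it down
print(overprovisioned(4, 3.5))  # False: healthy utilization
```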

I'll add more as wanted, and address points or clarify things as needed; just give me a heads-up on what you want me to cover.

Dilbert As FUCK fucked around with this message at Oct 22, 2012 around 19:45


1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Good start!

One thing did stand out that bugged me though:

Fibre channel can run over copper or fibre. FCoE is actually fibre channel encapsulated in Ethernet frames and can also run over copper or fibre (or anything really that can carry Ethernet.)

Don't confuse FC with fibre optic cables.

Will re-read through the original post more and contribute tomorrow.

evil_bunnY
Apr 2, 2003



nolicense ESXi 5 is limited to 32GB vRAM.

Corvettefisher posted:

Invest in 10Gb/E! It is pretty cheap now, your storage will love you!
Heh. Have you looked at the prices of decent 10GBE switches? I mean 10GBE is a good idea anyway, but it's not exactly commodity-priced yet.

Corvettefisher posted:

NFS - Network file share, one of the slower storage techniques but very cheap and easy to maintain, can be greatly improved by use of jumbo frames. TCP/IP based Speeds Vary on Network Speed, latency is a bit higher than others
iScsi - Block level access to data over the network, better performance than NFS by a long shot, gets a bump from jumbo frames but isn't as noticeable as NFS gains, TCP/IP based, Speeds from 10Mb/s-10Gb/s, latency is comparable to that of a normal disk if set up on a vlan, network, or subetnet made for storage. Cheap and easy to maintain, good if you have an existing ethernet network
Also, I'd be wary of generalizing this. All the entry-level appliances do iSCSI, but NFS is less of a management pain for vSphere. One of the problems with NFS is VMware not wanting to update their supported version.

evil_bunnY fucked around with this message at Feb 20, 2012 around 14:03

Bleusilences
Jun 23, 2004

Be careful for what you wish for.


edit: Nevermind not appropriate for this thread

Bleusilences fucked around with this message at Feb 20, 2012 around 13:49

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast


evil_bunnY posted:

nolicense ESXi 5 is limited to 32GB vRAM.

I think 4 can still take you up to 192, but at this point, just buy the cheapest VMware licence for 5, it is actually pretty reasonable.

Cidrick
Jun 10, 2001

Praise the siamese


If you're cheap like me, you can check out the non-official list of hardware supported by ESX(i) on the Whitebox HCL. It's essentially a big list of various hardware that people have shoehorned ESX onto, sometimes with the workaround required to do so.

I just bought a used HP DL160 G6 off of eBay and I've consulted it to try and figure out a cheap RAID controller to get. I'll be damned if I'm going to not use RAID-1 in this sucker.

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Fibre Channel storage needs some more information. FCP is a storage protocol used for accessing disk and tape on a shared network, and works by transporting SCSI commands over the FC network. It can be run over either Fiber Optic cables or copper (though usually the copper is used only for short run interconnects.) It is a connection oriented protocol, which was designed to be lossless (or as close as possible) and for predictability of data flows.
Fibre Channel is available in speeds of 1Gb (legacy), 2Gb (legacy), 4Gb, and 8Gbps, with 16Gbps starting to be available now.

FCoE is an encapsulation of the Fibre Channel protocol to run on ethernet, with a design focused on converging FCP and TCP on a single network based on ethernet. There are controls to make up for the connectionless, best-effort delivery of ethernet, and make things more predictable the way they are with FC. It is currently available in a 10Gbps Ethernet speed. Physical layer is the same as what can currently carry 10GbE, copper or fiber optic.

Great to see a new thread, the old one was becoming quite a bear to get through. Thanks!

Dilbert As FUCK
Sep 8, 2007

Stay dandy baby....


evil_bunnY posted:

nolicense ESXi 5 is limited to 32GB vRAM.

Heh. Have you looked at the prices of decent 10GBE switches? I mean 10GBE is a good idea anyway, but it's not exactly commodity-priced yet.

Also, I'd be wary to generalize this. All the entry level applicances do iSCSI, but NFS is less of a management pain for vSphere. One of the problems of NFS is vmware not wanting to update their supported version.

Hmm, I will check on that; I am sure it can use more. Maybe I was thinking of 4.

Yeah, but >10 cable runs + extra switches to make up for port loss + >10 NICs would probably cost the same if not more overall than a single/dual 10Gb/E run and a switch to a few boxes.

Yeah, but most server equipment has iSCSI support, be it hardware or software, and for an ESX datastore it is safe to say iSCSI is generally supported.

Thanks for the constructive criticism; I will go back over some things after work or on lunch.


VVVV - Ah okay, I guess I was thinking of 4 then. 32 is a good number; 8 is just wow, really low, 32 is just right. If you need more than 32GB per host you really need to look into Essentials+ anyway.

Dilbert As FUCK fucked around with this message at Feb 20, 2012 around 16:03

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast


Corvettefisher posted:

Hmm I will check on that I am sure it can use more, maybe I was using 4

http://www.vmware.com/products/vsph...rvisor/faq.html

VMware posted:

How much vRAM does a VMware vSphere Hypervisor license provide?

vSphere Hypervisor license provides a vRAM entitlement of 32GB per server, regardless of the number of physical processors. vSphere Hypervisor can be used on servers with maximum physical RAM capacity of 32GB.

Yeah, 4 lets you use more. In fact, 32GB is generous: originally, when the free version of 5 hit, they wanted to limit it to 8GB. Everyone cried, and they upped the limit.

Edit: for some reason I thought 4's free entitlement was 192GB, actually, it is 256:

HalloKitty fucked around with this message at Feb 20, 2012 around 16:01

FISHMANPET
Mar 3, 2007



BOOK REVIEW TIME:
I just finished this book:
http://www.amazon.com/Critical-VMwa...29753079&sr=1-1

I got it because Amazon said it was purchased often with the Mastering VSphere 5 book and the Clustering Deepdive. It's only ~100 pages so I read it cover to cover over the weekend.

As a disclaimer, I'm in no way an expert. I haven't read the two books that I bought this with yet, so I have no idea what wisdom it contains. I have however been running two ESX 4 servers (not in a cluster, each individually) for the past year and a half.

That being said, I didn't gain a whole lot from this book. The biggest point he harped on chapter after chapter was to not make VMs identical in specs (disk size, memory, number of CPUs) to your standard physical server build. I also learned about CPU %Ready, which he thought was a pretty important concept, and one he said most people weren't aware of.

The book was written in 2009, so he talks about ESX4 and barely mentions ESXi, though the same concepts apply so it's not that big a deal. The only really dated piece in the book was about storage. He said that FC is highest tier, iSCSI is middle tier, and NFS is low tier, and then went on to say that NAS and iSCSI SANs are all sold with the disk in one giant array, unlike a FC SAN, which lets you setup arrays as you choose.

He also claims to be laying out best practices, but a lot of them didn't really seem very relevant or possible when you start thinking about HA/DRS clustering, which he barely mentioned.

Overall, the book was $25 and 100 easily readable pages. While I didn't necessarily learn a whole lot from the book, I'd say there's a pretty good chance I can whip that bad boy out at a meeting and tell someone to shut up with it. It seems like the kind of book a manager should read, somebody that isn't hands on with VMWare all the time, so they can understand some of the recommendations you're making to them.

evil_bunnY
Apr 2, 2003



HalloKitty posted:

Edit: for some reason I thought 4's free entitlement was 192GB, actually, it is 256:

Isn't that for 2 CPUs?

Karthe
Jun 6, 2007

やらないか


We recently got into virtualization, but unfortunately neither I nor my cohort had any prior experience with it, aside from whatever personal dabbling we'd done in the past. As such, I feel like our VM environment is almost too simple - we've not dabbled in resource pools, I had no idea about not thin-provisioning SQL environments, we only have 10/100 switches and no NAS/SAN...Things have been Running Alright(tm), but there are just so many red flags in our setup that I hope to god we never have to deal with a serious virtual host catastrophe.

Aside from rolling back the clock and not implementing virtualization until we had the proper infrastructure in place, what are good resources for starters? I think I'll pick up the book recommended in the OP if it's good for beginners, but are there other "must read" sites/books/etc...?

markus876
Aug 18, 2002

I am a comedy trap.

Corvettefisher posted:

NFS - Network file share, one of the slower storage techniques but very cheap and easy to maintain, can be greatly improved by use of jumbo frames. TCP/IP based Speeds Vary on Network Speed, latency is a bit higher than others
iScsi - Block level access to data over the network, better performance than NFS by a long shot, gets a bump from jumbo frames but isn't as noticeable as NFS gains, TCP/IP based, Speeds from 10Mb/s-10Gb/s, latency is comparable to that of a normal disk if set up on a vlan, network, or subetnet made for storage. Cheap and easy to maintain, good if you have an existing ethernet network

I'm not sure that you should be generalizing these two options like this.

For NFS, I wouldn't necessarily say it is "cheaper" (or more expensive for that matter) than iSCSI - they both generally use similar switching and cabling architectures. In fact, iSCSI may involve additional costs with specialized HBAs, although many deployments are no longer bothering with HBAs, especially with ESXi-bootable SD/USB sticks that mean you can run without any local disks on your ESXi servers and the performance of software initiators improving.

You could say that for the most part NFS (and iSCSI) both tend to have lower hardware costs than traditional FC deployments since you don't have to purchase FC switches and HBAs.

I would definitely not say that iSCSI has "better performance than NFS by a long shot".

See this TR from (admittedly-possibly-biased) Netapp: http://media.netapp.com/documents/tr-3697.pdf Yes, it's from 2008, but as far as I know things are pretty similar today still.

See also http://media.netapp.com/documents/tr-3749.pdf that goes into describing NFS best practices, etc.

To summarize - I would say that NFS is typically not noticeably faster or slower (in most cases) than iSCSI for most workloads.

I believe that using NFS is far simpler from the VMware side (adding datastores, having the possibility to move or copy around files by just mounting the NFS share on another host (this is especially great for the shared "media" datastore that you can mount NFS read-only on your ESX hosts, and mount rw on a unix workstation and copy ISOs to it for booting VMs), not worrying about block sizes, and VMFS extents, being able to resize an NFS share and instantly see the new size/capacity in VMware).

The biggest factor in choosing if you want to use NFS or iSCSI should probably just be what kind of support your storage has for NFS. If you are running on a platform that provides good NFS support, I highly recommend exploring and trying it out.

complex
Sep 16, 2003



Indeed, a vast majority of the VMworld 2011 labs were run on NFS. See http://virtualgeek.typepad.com/virt...ios-served.html. I agree with Markus that NFS should not be considered lower performing than iSCSI.

For Martytoof, from the last thread: Hit F4 in the DCUI to switch to a high contrast scheme. See if that fixes your iLO color problems.

Dilbert As FUCK
Sep 8, 2007

Stay dandy baby....


markus876 posted:

I'm not sure that you should be generalizing these two options like this.

(snip)

To summarize - I would say that NFS is typically not noticeably faster or slower (in most cases) than iSCSI for most workloads.

The biggest factor in choosing if you want to use NFS or iSCSI should probably just be what kind of support your storage has for NFS. If you are running on a platform that provides good NFS support, I highly recommend exploring and trying it out.

Going off this http://www.vmware.com/files/pdf/sto...otocol_perf.pdf (they only used gigabit, and some of the charts cap at 125MB/s):
Talking vanilla iSCSI vs NFS (no jumbo frames, TOE, etc.), iSCSI will give you better performance. It isn't much (thought it was better than that, to be honest), but it is there, and latency is generally lower. Since both are really easy to install and configure, I would still say iSCSI is a better option over NFS due to the lower latency and higher throughput. Of course, it totally depends on the budget and resources you have to work with as well.
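Worth spelling out where that 125MB/s ceiling comes from - it's just the line rate of a single gigabit link, before protocol overhead. A quick back-of-the-envelope (the 10% overhead figure below is a rough assumption, not a measurement):

```python
# Theoretical throughput ceiling of a single gigabit link.
# Real iSCSI/NFS numbers land lower once TCP/IP and protocol
# overhead are paid; the 10% figure is purely illustrative.

GIG_E_BITS_PER_SEC = 1_000_000_000

def line_rate_mb_per_sec(bits_per_sec: int) -> float:
    """Raw line rate in megabytes per second (1 MB = 10**6 bytes)."""
    return bits_per_sec / 8 / 1_000_000

raw = line_rate_mb_per_sec(GIG_E_BITS_PER_SEC)   # 125.0 MB/s
usable = raw * 0.90                              # assume ~10% overhead

print(f"raw: {raw:.0f} MB/s, usable: ~{usable:.0f} MB/s")
```

Which is why every chart in a gigabit-only test flatlines around the same place regardless of protocol.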

I see what you are saying about NFS vs. iSCSI - NFS is very simple, I just work with iSCSI much more than NFS now, so iSCSI seems rather easy to manage. FreeNAS and OpenFiler are the two main NAS OS's I use; I can say FreeNAS 7 offered great iSCSI performance but always had trouble with NFS - might have been something I was doing. OpenFiler seems neck and neck, but I notice lower latency with iSCSI, which is one reason I generally tend to use it over NFS.

But yeah I'll give the part in the OP a rework tonight.

Karthe posted:

We recently got into virtualization, but unfortunately neither I nor my cohort had any prior experience with it, aside from whatever personal dabbling we'd done in the past. As such, I feel like our VM environment is almost too simple - we've not dabbled in resource pools, I had no idea about not thin-provisioning SQL environments, we only have 10/100 switches and no NAS/SAN...Things have been Running Alright(tm), but there are just so many red flags in our setup that I hope to god we never have to deal with a serious virtual host catastrophe.

Aside from rolling back the clock and not implementing virtualization until we had the proper infrastructure in place, what are good resources for starters? I think I'll pick up the book recommended in the OP if it's good for beginners, but are there other "must read" sites/books/etc...?
Resource pools are usually implemented in places where resources are scarce and you need fine-grained tweaking of the environment; generally I don't set them up unless it is clear that I can't get another box and am pushing my host usage up into the upper 80% range for RAM/CPU. Small SQL deployments (SQL Express) in a thin-provisioned environment can be done - I know a few SharePoint servers running on thin provisioning just fine, but they have slower writes when uploading documents or changing files. You don't have to have shared storage in a virtual environment, but it makes life a lot simpler for you in the event of a host hardware failure or the like.

This thread is good to ask questions in - no one here knows it all, so ask anything regardless of how trivial it may seem. The Scott Lowe book is good for ESXi 5; there is also a 4.1 edition.
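On the thin-provisioning side, the risk to watch is overcommit - the sum of what you've promised your VMs versus what the datastore actually holds. A quick sanity check, with made-up VM names and numbers:

```python
# Sketch: how overcommitted is a thin-provisioned datastore?
# VM names and disk sizes are hypothetical examples.

datastore_capacity_gb = 500

vmdk_provisioned_gb = {"sharepoint": 200, "sql-express": 100, "dc01": 60}
vmdk_used_gb        = {"sharepoint": 80,  "sql-express": 35,  "dc01": 25}

provisioned = sum(vmdk_provisioned_gb.values())   # space promised to guests
used = sum(vmdk_used_gb.values())                 # space actually written

overcommit_ratio = provisioned / datastore_capacity_gb
print(f"provisioned {provisioned} GB on a {datastore_capacity_gb} GB "
      f"datastore ({overcommit_ratio:.0%} committed), {used} GB in use")
```

As long as "provisioned" can outgrow the datastore, you need alerting on actual usage, because the guests think they have the full 360 GB.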

Dilbert As FUCK fucked around with this message at Feb 20, 2012 around 19:23

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast


evil_bunnY posted:

Isn't that for 2 CPUs?

Even the cheapest pay-for VMware package covers only 2 physical CPUs, but any number of cores (off the top of my head).
So yes, as far as I'm aware, the free licence for 4 and 5 is as shown - 2 sockets, up to 6 cores.

Megiddo
Apr 27, 2004

Unicorns bite, but their bites feel GOOD.

I asked this in the enterprise SAN thread, too, but I figured someone here might be able to help:

Are there any good 4-post 42u or 44u open frame racks that do not need to be bolted to the floor?

Sorry for the off-topic question.

FISHMANPET
Mar 3, 2007



We've got a couple of HyperV hosts, and I'd like to install some software on them to see how much horsepower is actually being used, with the idea of probably using fewer machines in the future, and also moving to VMWare anyway.

So, recommendations for a HyperV performance/utilization monitoring tool?

Syano
Jul 13, 2005


One change that probably needs to be made to the OP: while Hyper-V surely isn't in the same vein as VMware or Xen as far as bare-metal hypervisors go, it does not pass through the OS layer

Dilbert As FUCK
Sep 8, 2007

Stay dandy baby....


FISHMANPET posted:

We've got a couple of HyperV hosts, and I'd like to install some software on them to see how much horsepower is actually being used, with the idea of probably using fewer machines in the future, and also moving to VMWare anyway.

So, recommendations for a HyperV performance/utilization monitoring tool?

http://vtcommander.com/Products/vtCommander

That seems to be buzzing around the TechNet forums.

Syano posted:

One change that probably needs to be made to the OP: While Hyper-v surely isnt in the same vein as vmware or xen as far as bare metal hypervisors go, it does not pass through the OS layer

You sure about that? I was pretty sure Hyper-V was a type 2 hypervisor - it has to go through the OS to get to the hardware.

http://www.maximumpc.com/files/u176...per_vm_full.jpg

Dilbert As FUCK fucked around with this message at Feb 20, 2012 around 21:42

Dilbert As FUCK
Sep 8, 2007

Stay dandy baby....


snip

FISHMANPET
Mar 3, 2007



Corvettefisher posted:

http://vtcommander.com/Products/vtCommander

that seems to be buzzing around the technet forums

That doesn't look like it does much for monitoring.

I'm looking for something that will tell me what % of CPU is used, and how much memory is being used. I guess maybe it doesn't even need to be a "HyperV" tool, because I can just install it on each of the guests to see what's going on.

Pantology
Jan 16, 2006



Corvettefisher posted:



You sure about that? I was pretty sure Hyper-V was a type 2 hypervisor - it has to go through the OS to get to the hardware.


It's type 1. Here's a good description of how it works:

http://www.virtuatopia.com/index.ph..._Virtualization

After installing the Hyper-V role, that instance of Windows is somewhat comparable to the ESX Service Console.

Rhymenoserous
May 23, 2008

I can also sing.


evil_bunnY posted:

Heh. Have you looked at the prices of decent 10GBE switches? I mean 10GBE is a good idea anyway, but it's not exactly commodity-priced yet.

You know, this is the only time I've been really happy with my Dell 65** switches. I spent about 1/5th the cost of a dedicated 10GbE switch on a couple of plug-in boards and transceivers, created a new VLAN, and voila - I have a 10GbE iSCSI network.

Dilbert As FUCK
Sep 8, 2007

Stay dandy baby....


Pantology posted:

It's type 1. Here's a good description of how it works:

http://www.virtuatopia.com/index.ph..._Virtualization

After installing the Hyper-V role, that instance of Windows is somewhat comparable to the ESX Service Console.

Ah, it works differently than I thought - I was under the impression it did a different type of install, good to know, thanks! It really doesn't feel that way when I install Hyper-V from the add-roles wizard; didn't realize it went that deep.

Dilbert As FUCK fucked around with this message at Feb 20, 2012 around 22:52

evil_bunnY
Apr 2, 2003



Corvettefisher posted:

Ah, it works differently than I thought - I was under the impression it did a different type of install, good to know, thanks! It really doesn't feel that way when I install Hyper-V from the add-roles wizard; didn't realize it went that deep.
Yeah it basically jumps from its chair and swiftly slides a virtual pillow under its butt before landing again.

Misogynist
Jul 14, 2003



Pantology posted:

After installing the Hyper-V role, that instance of Windows is somewhat comparable to the ESX Service Console.
Or the Xen dom0.

balakadaka
Jun 30, 2005

robot terrorists WILL kill you

This may be a silly question, but does anyone have recommendations for good 3rd-party Citrix books? They always seem like the 3rd runner-up, especially since Hyper-V came onto the scene. The community can be so-so, and it seems like you really have to be careful about incompatibilities even on point releases.

We're looking at trading our Xen environment out at work, since MS licenses are super cheap for us (compared to any other vendors)

madsushi
Apr 19, 2009

Baller.

Corvettefisher posted:

Going off this http://www.vmware.com/files/pdf/sto...otocol_perf.pdf (they only used GIG and some of the charts all cap at 125MB/s)

You're quoting an ESX 3.5 PDF as proof that iSCSI>NFS? NFS performance was greatly improved in 4.x, and in fact, that was around the time VMWare started recommending NFS for new deployments.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

True; though if you're running gigabit you're more likely to get consistently better performance out of iSCSI over NFS. This is mostly due to how ESX 4.X+ handles MPIO to iSCSI LUNs.

That said, if you're not really hitting the limits of gigabit, it's a wash anyway.
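To illustrate the MPIO point: with a round-robin path selection policy, IOs alternate across the paths to a LUN, so two gigabit paths both carry traffic at once. A toy model (path names are made up, and this is nothing like the real PSP code):

```python
from itertools import cycle

# Toy model of round-robin multipathing: IOs alternate across
# the available iSCSI paths, so aggregate throughput can exceed
# what a single gigabit link delivers. Path names are invented.

paths = ["vmhba33:C0:T0:L0", "vmhba33:C1:T0:L0"]
rr = cycle(paths)                      # round-robin path selector

io_log = [next(rr) for _ in range(6)]  # which path each IO took
per_path = {p: io_log.count(p) for p in paths}
print(per_path)                        # each path carries half the IOs
```

NFS (pre-4.1 at least) pins a datastore to one TCP session, which is why it can't spread load the same way.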

madsushi
Apr 19, 2009

Baller.

1000101 posted:

True; though if you're running gigabit you're more likely to get consistently better performance out of iSCSI over NFS. This is mostly due to how ESX 4.X+ handles MPIO to iSCSI LUNs.

That said, if you're not really hitting the limits of gigabit, it's a wash anyway.

Here's the way I see it (in today's data center):

If you're not that good at storage, NFS wins. It's just easy.

If you're good enough to configure MPIO the right way and you actually have redundant links and everything, you're probably in a big enough environment that you can just spring for a 10Gb NIC per server and then you can completely forget that your storage network even exists. No single host can touch 10Gb under 99.9% of circumstances, and it's not like most SANs can even serve up data that fast.

joe944
Jan 31, 2004

What does not destroy me makes me stronger.


Install went rather smoothly, I've stayed up way too late playing around with this thing already. The other 8GB of RAM should be here tomorrow.

Not sure why it didn't pick up the motherboard manufacturer. It's an ASUS M5A97, and DirectPath is enabled, although I've yet to test it.

Prof_Beatnuts
Jul 29, 2004
I used to be bad but now I'm good

So I might have done something stupid, but I'm not sure. I just bought an HP ProLiant server to use for my home lab, and when I installed ESXi 4.1 on it, it comes up with localhost.hsd1.nj.comcast.net as the hostname I'm trying to log into. Obviously I have Comcast for internet. Weird thing is that it already has a root account set up and I can't do anything. WTF did I do? Anyone know of a way to fix this?

sanchez
Feb 26, 2003


Prof_Beatnuts posted:

So I might have done something stupid, but I'm not sure. I just bought an HP ProLiant server to use for my home lab, and when I installed ESXi 4.1 on it, it comes up with localhost.hsd1.nj.comcast.net as the hostname I'm trying to log into. Obviously I have Comcast for internet. Weird thing is that it already has a root account set up and I can't do anything. WTF did I do? Anyone know of a way to fix this?

The root pw should be blank on a new install

Prof_Beatnuts
Jul 29, 2004
I used to be bad but now I'm good

Ok, thanks. I didn't know that it came with a blank pw.

mpeg4v3
Apr 8, 2004
that lurker in the corner

I've got a general virtualization question that I haven't really been able to find a clear answer to. I'm trying to spec out parts for my first ESXi server, but I'm having trouble figuring out what sort of CPU power I should be going for. This server is going to be running the following VMs:
  • FreeBSD to manage a ZFS storage pool
  • CentOS (or Ubuntu, or something else, whatever) web development server & dedicated XBMC updater.
  • Ubuntu video transcoding server, to handle AirVideo, Emit, and Subsonic
  • Server 2008 R2 for Active Directory
I already understand that RAM is of paramount importance for VMs, and I'm planning to go at least 32GB no matter what. The issue I have is with determining how much power I need for the video transcoding VM. AirVideo (iOS) and Emit (Android) are video transcoding and streaming servers, to transcode video to a format that their respective mobile device clients can play. What I'm looking for is to put together a server that could handle streaming at least two, preferably three, movies in 1080p. That means, for each movie, decoding the movie from 1080p in real time, and then transcoding it to the proper format at 1080p (probably 8-10mbit/sec) in real time.

I plan to compile multithreaded-aware versions of ffmpeg and possibly mplayer (along with possibly even buying a dedicated pass-through'd GPU to handle decoding rather than relying on CPU), but even with those, encoding 1080p in real time for at least two, preferably three, different movies is quite a huge task to handle.

The thing is, I just don't know how well video encoding works in VMs. I don't have a huge budget to work with (~$1500-1750), and I have to devote quite a bit to the "file server requirements" portion of the parts list, so I can't go all out with some 12 core Xeon dual processor monstrosity. I'm okay with non-ECC RAM as this is not mission critical stuff, and am okay with consumer level parts. I've been looking at the following options:
  • A consumer-level X79 motherboard with an Intel i7-3930K 6-core 3.2ghz CPU
  • The same as above, just with a temporary i7-3820 4-core 3.6ghz CPU, as a holdover until Intel releases their 8-core i7.
  • A dual processor Socket C32 motherboard with two 6-core AMD Opteron 4180 2.6ghz CPUs
  • Wait for a dual processor Socket 2011 motherboard and buy two 4-core Xeon E5's
Traditional "as many cores as possible!" logic makes me want to side with the dual 6-core Opteron setup, but my mind keeps coming back to the fact that the processors are potentially going to need high clock speeds in order to handle decoding and reencoding in real time. Or maybe I'm just overthinking this, and the 6-core 3930K, or even the 4-core 3820 would be enough for what I want to do. I just can't seem to find anyone else that runs video transcoding in VMs to ask.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Generally something like video transcoding or rendering or anything of that sort will use 100% of the CPU resources you throw at it.

That said, if you went with the 6 core i7 you could give 4 vCPUs to the Ubuntu VM and run it 24x7 and still have enough CPU power to run your remaining 3 VMs (which I'd generally recommend you do with 1 vCPU each.)
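Sketching out that vCPU budget (the allocation below just mirrors the suggestion above - nothing official, and VM names are made up):

```python
# vCPU budget for the proposed 6-core i7 build (illustrative).
# A little overcommit is fine when the non-transcoding VMs idle.

physical_cores = 6
vcpu_alloc = {"transcode": 4, "freebsd-zfs": 1, "webdev": 1, "ad-dc": 1}

total_vcpus = sum(vcpu_alloc.values())
ratio = total_vcpus / physical_cores

print(f"{total_vcpus} vCPUs on {physical_cores} cores "
      f"= {ratio:.2f}:1 overcommit")
```

A 7:6 ratio is mild; the scheduler only has to juggle when the three single-vCPU VMs get busy at the same time as the transcoder.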

What kind of load are you expecting on the remaining VMs? What kind of disks and how many?

quote:

(along with possibly even buying a dedicated pass-through'd GPU to handle decoding rather than relying on CPU)

What are you looking at to provide this functionality? Be wary of VMdirectPath and understand that not every device works perfectly with pass-through. It was generally intended to let you hook things like network cards, HBAs and disk controllers directly to virtual machines.

I do see here: http://vm-help.com/esx40i/esx40_vmd...hitebox_HCL.php

that someone has managed to get a couple Radeon boards to work with 4.X though. I'd presume it'll still work in 5.X.

Also, make sure you shop the HCL or research hardware before you buy. To boot ESXi you need a supported disk controller and a supported NIC or it will PSOD. If your NIC isn't supported it'll PSOD with an LVM error and you'll spend all your time beating your head against the wall troubleshooting your disk controller.

BnT
Mar 10, 2006



mpeg4v3 posted:

[*]FreeBSD to manage a ZFS storage pool

...

I'm okay with non-ECC RAM as this is not mission critical stuff, and am okay with consumer level parts. I've been looking at the following options:


I'd make sure that whatever you're getting supports VT-d (or AMD's equivalent, AMD-Vi/IOMMU), especially if you're planning on running a guest with ZFS direct access. For this budget you could easily build a pretty beefy system centered around a Sandy Bridge Xeon E3 and the Intel C20x series chipsets, but that would require ECC (which I would recommend anyway); also, this would only get you four cores.

Another thing to be aware of is that the free vSphere 5 entitlement only supports a single socket CPU (unlimited cores) and 32GB of RAM.

BnT
Mar 10, 2006



Oh yeah, the reason I'm here: does anyone know of a way to quickly get a list of VMs that have snapshots from vCenter? vSphere 4.1u2, if it matters.


evil_bunnY
Apr 2, 2003



Something like this?

http://www.virtu-al.net/2009/06/22/...i-snapreminder/
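Scripts like that one ultimately just walk each VM's snapshot tree, since vCenter exposes snapshots as a tree per VM. A rough pure-Python sketch over mock data (VM and snapshot names invented, and the nested-dict shape is a stand-in - with pyVmomi the same recursion would run over vm.snapshot.rootSnapshotList):

```python
# Sketch: recursively flatten each VM's snapshot tree into a report.
# The mock_vms structure and names are invented stand-ins for what
# vCenter's API actually returns.

def walk_snapshots(snapshots, vm_name, chain=()):
    """Depth-first walk; returns (vm, 'parent / child / ...') rows."""
    found = []
    for snap in snapshots:
        path = chain + (snap["name"],)
        found.append((vm_name, " / ".join(path)))
        found.extend(walk_snapshots(snap.get("children", []), vm_name, path))
    return found

mock_vms = {
    "sql01": [{"name": "pre-patch",
               "children": [{"name": "post-sp1"}]}],
    "web01": [],   # no snapshots -> contributes nothing to the report
}

report = [row for vm, snaps in mock_vms.items()
          for row in walk_snapshots(snaps, vm)]
print(report)
```

In PowerCLI the whole report is roughly `Get-VM | Get-Snapshot`; the sketch above is just what that does under the hood.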
