|
virt-manager as a frontend to KVM still uses QEMU, FYI. QEMU/Bochs is probably unbearably slow on Windows and there's no reason to use it (KVM is an acceleration driver for QEMU -- it's possible to use KVM by itself, but then you need to implement your own virtual hardware from scratch, whereas QEMU provides the e1000, etc.). VirtualBox was (is?) open source, other than the add-ons.
|
# ? May 31, 2015 17:45 |
|
Thanks for the advice. My plan was to install a headless Linux server--Ubuntu Server, I guess, I don't know dick about Linux nowadays--and install Plex and other media server stuff on it via SSH and web browser interfaces. This way I don't have to spend time reconfiguring the drat thing every time I have to nuke my PC; I can just reinstall the VM manager and reboot the VM. Does the Hyper-V client let me create virtual machines as I need? That's the only reason I didn't mention VMware Player in my original post; I figured it couldn't make them at all. Ciaphas fucked around with this message at 19:01 on May 31, 2015 |
# ? May 31, 2015 18:48 |
|
Yes, Hyper-V on Windows 8 allows you to create VMs.
|
# ? May 31, 2015 19:32 |
|
Can it go headless without shutting down the machine, unlike VirtualBox? I finished the media server under VirtualBox already, but I have to shut down and restart the server to switch between headless and head...ed... and it's getting kind of old while I fix crashes and stuff.
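A common way around the headless/head...ed restart dance in VirtualBox, as a sketch (the VM name "mediaserver" and the Oracle Extension Pack for VRDP are assumptions): always start the VM headless and enable the built-in VRDE remote display server, then attach an RDP client only when you actually want a console, so the VM itself never has to restart.
code:
# One-time setup while the VM is powered off: enable the VRDE server (needs the Extension Pack)
VBoxManage modifyvm "mediaserver" --vrde on
# Always start headless; point any RDP client at the host (default VRDE port 3389) when a console is needed
VBoxManage startvm "mediaserver" --type headless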
|
# ? Jun 1, 2015 02:13 |
|
Internet Explorer posted:Yes, Hyper-V on Windows 8 allows you to create VMs. It'll even do Gen2 VMs, which work with most Linux and newer Windows versions. It is pretty nice and really, really fast.
|
# ? Jun 1, 2015 06:00 |
|
Anyone running Intel x710 NICs and having purple screens under any kind of actual load with ESXi 5.5? These loving things are on the HCL, but everyone seems to be pointing fingers around and Intel seems to be a giant unresponsive roadblock. I found one forum post suggesting that turning off some of the offload features magically makes things happier, but we can't seem to reproduce this reliably with any kind of artificial load to test that out.
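For anyone hitting the same thing: the offload features that forum post was talking about are usually TSO and LRO. A minimal sketch of that workaround on an ESXi 5.5 host -- the advanced option names here are assumptions from memory, so verify them against your build (and note a host reboot may be needed) before relying on this:
code:
# Disable hardware TSO and LRO host-wide (assumed option names -- verify first)
esxcli system settings advanced set -o /Net/UseHwTSO -i 0
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
# Confirm the values took
esxcli system settings advanced list -o /Net/UseHwTSO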
|
# ? Jun 1, 2015 20:57 |
|
This may be well known here, but I had no clue until last week. VMUG offers an "EVALExperience" subscription for $200 that gives you licenses for a number of VMware products (3 hosts + vCenter) for a year. I was looking for something like this for a while but couldn't find it. Just finished setting up my R610 and I've got a bunch of shiny toys to play with now. Great for the home lab!
|
# ? Jun 2, 2015 03:14 |
|
Maneki Neko posted:Anyone running Intel x710 NICs and having purple screens under any kind of actual load with ESXi 5.5? These loving things are on the HCL, but everyone seems to be pointing fingers around and Intel seems to be a giant unresponsive roadblock. Post or PM your purple screens (maybe the latter if you want to share your case number) and I can give it a gander.
|
# ? Jun 2, 2015 04:55 |
|
Kachunkachunk posted:Post or PM your purple screens (maybe the latter if you want to share your case number) and I can give it a gander. Sent a PM with the case #; we've also got a case open with our hardware vendor.
|
# ? Jun 2, 2015 17:18 |
|
I'm doing this wrong...should I continue? I have an HP MicroServer N54L running ESXi, which runs a Windows Server 2012 VM that shares storage, runs SABnzbd, does DHCP and DNS for my local network, is a CrashPlan target, and sends everything important off to Amazon Glacier each night. The problem is the storage is all local, and I'm mightily pissed off that one of the disks has decided to stop mounting. The disk itself seems fine, no SMART errors, so I think something fishy happened with the datastore. I have the important stuff mirrored in Windows using DrivePool and the other drive is fine (and I've backed its contents up to another disk). Should I bin this idea and just run Windows Server on the bare metal? When I first installed ESXi it was because it was a nice way of migrating the existing installation of 2012 off lovely hardware, and I thought I'd put more stuff on it. I haven't.
|
# ? Jun 2, 2015 19:50 |
|
If you don't intend to run any other virtual machines, you could get rid of the vSphere overhead. You should look further into your problem, though, because the lack of SMART errors does not prove the disk is good. This is a good reminder that no matter what you do, poo poo will happen, and you do need backups.
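One place to start digging, as a sketch: ESXi can read a local disk's SMART counters directly, which tells you more than the absence of an alert does. The device identifier below is a placeholder; list your devices first to find the real one.
code:
# Find the naa./t10. identifier of the suspect local disk
esxcli storage core device list
# Dump its SMART counters (device name below is a placeholder)
esxcli storage core device smart get -d t10.ATA_____EXAMPLE_DISK_SERIAL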
|
# ? Jun 2, 2015 19:57 |
|
I finally made the jump from the free ESXi stuff to the paid ESXi / vSphere and vCenter offerings from VMware. Now, what's the best way to monitor hardware RAID with this? Just have vCenter send email alerts when the controller has an issue? I tried Nagios w/ "check_esxi_hardware.py", but it just gives me all the drive stats on one line.
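For reference, the plugin is typically pointed at the host's CIM interface like this (hostname and credentials below are placeholders); the everything-on-one-line output is just how it reports multiple subsystem states:
code:
# Query the ESXi host's CIM hardware status (placeholder host/credentials)
./check_esxi_hardware.py -H esxi01.example.com -U root -P 'secret'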
|
# ? Jun 2, 2015 23:25 |
|
Xenomorph posted:I finally made the jump from the free ESXi stuff to the paid ESXi / vSphere and vCenter offerings from VMware. It depends on what you want and the hardware platform. vCenter will work, but it isn't particularly robust. If you have HP ProLiants and the management agents (as an example), low-level issues with storage/disks will report into vCenter and you can trigger SNMP traps or email alerts off that alarm action. Nagios will work too, but you have to write it yourself. I suspect there are scripts out there for the different major hardware vendors that poll CIM. I prefer scripting calls to the iLO/DRAC, which will usually get you what you are looking for, and it is direct from the hardware. Other tools like Insight Manager for HP can do this too.
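A sketch of the script-the-BMC-directly approach, assuming IPMI-over-LAN is enabled on the iDRAC (address and credentials are placeholders; the same idea works against an iLO):
code:
# Read the hardware System Event Log straight from the iDRAC (placeholders)
ipmitool -I lanplus -H 10.0.0.120 -U root -P 'calvin' sel elist
# Dell's racadm can pull the same log remotely
racadm -r 10.0.0.120 -u root -p 'calvin' getsel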
|
# ? Jun 3, 2015 01:10 |
|
Well, when working with newer PowerEdge "Gen12" systems that have iDRAC 7, I get great notifications. Pull a disk to test = instant email saying which disk has an issue. Unfortunately, I just had to set up a bunch of stuff on older PowerEdge Gen11 servers, which use iDRAC 6, and it is just terrible compared to iDRAC 7. It's pretty much useless for notifications (you need Windows or Linux running with full OpenManage installed to get notifications for everything). I want to monitor disks, but I'm finding myself in this situation:
* iDRAC 6 doesn't send storage notifications. Not acceptable.
* Full OpenManage doesn't install on ESXi.
* The version of OpenManage that does install on ESXi says this for alerts: quote:This feature is not available on this system due to operating system or hardware limitations.
* Nagios w/ "check_esxi_hardware.py" spits this out when I pulled a single drive to test:
quote:[2015-06-02 12:10:50] SERVICE ALERT: esxi1;Hardware;CRITICAL;HARD;4;CRITICAL : Disk Drive Bay 1 Drive 5: In Critical Array CRITICAL : Disk Drive Bay 1 Drive 4: In Critical Array CRITICAL : Disk Drive Bay 1 Drive 3: In Critical Array CRITICAL : Disk Drive Bay 1 Drive 2: In Critical Array WARNING : Disk Drive Bay 1 Drive 2: Rebuild In Progress WARNING : RAID 10 Virtual Disk 1 Logical Volume 5842B2B044B88E00_1 on controller 5842B2B044B88E00, Drives( - DEGRADED - Server: Dell Inc. PowerEdge R710 s/n: xxxxxxx System BIOS: 6.4.0 2013-07-23
What? Disks 5, 4, 3, and 2 have errors? Critical? The RAID is degraded and currently rebuilding. Why does it give so many errors? If Disk 2 fails, I'd expect a notice about Disk 2 failing, the RAID being degraded, etc. Not "critical" alerts for every disk in the array. I've been using ESXi for a while, as well as the "VMware Server" program before ESX was a thing. I'm not used to working with notifications. I'm also not used to working with SNMP.
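If you do end up going the SNMP route, enabling the ESXi 5.x agent and pointing traps at a receiver only takes a few commands (community string and trap target below are placeholders):
code:
# Configure and enable the ESXi SNMP agent (placeholder community/target)
esxcli system snmp set --communities public
esxcli system snmp set --targets 10.0.0.50@162/public
esxcli system snmp set --enable true
# Fire a test trap to verify the receiver sees it
esxcli system snmp test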
|
# ? Jun 3, 2015 03:40 |
|
Doesn't Dell have a vendor version with the openmanage bits baked in?
|
# ? Jun 3, 2015 04:08 |
|
evol262 posted:Doesn't Dell have a vendor version with the openmanage bits baked in? HP and Dell both have pre-customized ESXi images to use.
|
# ? Jun 3, 2015 04:23 |
|
evol262 posted:Doesn't Dell have a vendor version with the openmanage bits baked in? Sure, with a $495.00 subscription: http://www.dell.com/learn/us/en/555/virtualization/management-plug-in-for-vmware-vcenter The free option has all the good stuff, like alerts, disabled on ESXi. This isn't a huge problem on Gen12 and Gen13 servers, as the free iDRAC 7 and 8 have tons of functionality built in (negating the need for OpenManage or the OS to do anything). For older servers (such as Gen11 with iDRAC 6), it seems like you have to pay for an OpenManage subscription to get that missing functionality. Gyshall posted:HP and Dell both have pre-customized ESXi images to use And the pre-customized image Dell provides is the same deal: if you pay for their OpenManage subscription, then you get "monitoring and alerting". Xenomorph fucked around with this message at 04:35 on Jun 3, 2015 |
# ? Jun 3, 2015 04:28 |
|
If you install the Dell customized image you can get hardware alerts directly from VMware. No iDRAC needed.
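If you'd rather bolt the agents onto a stock image instead, Dell also ships the OpenManage bits as an offline bundle that installs with esxcli -- the bundle filename below is illustrative, so grab the correct one for your ESXi version from Dell's support site:
code:
# Install the OpenManage offline bundle on the host, then reboot (filename is a placeholder)
esxcli software vib install -d /vmfs/volumes/datastore1/OM-SrvAdmin-Dell-Web-ESX55i.zip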
|
# ? Jun 3, 2015 05:56 |
|
We license all our servers with iLO Advanced or OpenManage whatever. Cost of doing business?
|
# ? Jun 3, 2015 06:15 |
|
imo it really depends on your business. A virtualization/server-heavy shop with lots of infra to monitor, something like a hosting provider? Sure. A small shop where you have to fight tooth and nail for every MB of RAM and every GB of storage? Not so much. You're already shelling out 400 or so bucks for iDRAC Enterprise on Dell servers; licensing that doubles your price for remote management can be a deal breaker.
|
# ? Jun 3, 2015 09:32 |
|
I've got a CentOS 7 VM that I've prepped as a template. I've updated it, installed open-vm-tools, added the VMware Tools yum repo, and installed the vm-tools package from it that every Google guide recommends. I've changed the CentOS redhat-release to RHEL like Google recommends, killed the udev rules, etc. My problem is that when I go to customize the image, the VM that results is identical to the template: same hostname, IP still set to DHCP, etc. Anyone know where to even begin troubleshooting this? I was hoping to speed up creation of VMs by deploying from a single master template, but I just can't seem to get it working. For the sake of time, let's assume I followed VMware's official CentOS 7 guide word for word, since that's exactly what I did, along with a few tweaks I googled.
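A place to begin, as a sketch: check the guest customization log inside a deployed clone, and make sure the template has no per-machine network identity baked in. Paths assume the stock CentOS 7 network-scripts layout, and the interface name is a placeholder.
code:
# Inside a deployed clone: did the customization engine run at all?
cat /var/log/vmware-imc/toolsDeployPkg.log
# Inside the template, before converting: strip hardware-specific network state
rm -f /etc/udev/rules.d/70-persistent-net.rules
sed -i '/^HWADDR/d;/^UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0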
|
# ? Jun 3, 2015 14:08 |
|
Did you 'touch /.unconfigured'?
|
# ? Jun 3, 2015 14:30 |
|
Are you running virt-sysprep against the template when you're done with the image?
|
# ? Jun 3, 2015 14:50 |
|
I'm gonna feel real stupid if that's listed in VMware's guide because I swear I read that thing head to toe. I haven't done either and will try that today, thanks.
|
# ? Jun 3, 2015 15:05 |
|
Martytoof posted:I'm gonna feel real stupid if that's listed in VMware's guide because I swear I read that thing head to toe. I haven't done either and will try that today, thanks. Derp, I missed that you said it was for VMware. It probably won't fix the problem of the customizations not working, but it WILL at least simplify making your base template generic by doing all that udev rule removal stuff for you.
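For the curious: virt-sysprep comes from libguestfs and operates on a disk image rather than a hypervisor, so even for a VMware template you can point it at the VMDK (path below is hypothetical). This is a sketch; test against a copy of the image first.
code:
# See everything virt-sysprep can scrub (udev rules, SSH host keys, logs, ...)
virt-sysprep --list-operations
# Run the default scrub against a copy of the template's disk image (hypothetical path)
virt-sysprep -a /path/to/centos7-template.vmdk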
|
# ? Jun 3, 2015 16:21 |
|
hey you guys, I inherited this Buffalo TeraStation. Anyone have experience using one of these for NFS shares for shared storage in VMware? As in, should I just toss it and get something else? This is for a really small environment without a lot of IOPS requirements. I just want to know that it's reliable enough.
|
# ? Jun 3, 2015 18:28 |
|
Could someone post a link to the guide? Or go into a little more detail about this practice? I've just used Easy Install with VMware Tools and left it as-is.
|
# ? Jun 3, 2015 18:33 |
|
NevergirlsOFFICIAL posted:hey you guys, I inherited this Buffalo TeraStation. Anyone have experience using one of these for NFS shares for shared storage in VMware? As in, should I just toss it and get something else? This is for a really small environment without a lot of IOPS requirements. I just want to know that it's reliable enough. Use iSCSI if you can; most of those will support iSCSI.
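For reference, wiring the ESXi software iSCSI initiator up to a NAS is roughly this (adapter name and target address below are placeholders; yours will differ):
code:
# Enable the software iSCSI adapter, point it at the NAS target, and rescan
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
esxcli storage core adapter rescan --all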
|
# ? Jun 3, 2015 19:11 |
|
Gyshall posted:Use iSCSI if you can; most of those will support iSCSI. Yeah, there is iSCSI. Is the iSCSI protocol more reliable on the Buffalo? I feel like I've used smaller NASes before where NFS just worked better.
|
# ? Jun 3, 2015 19:32 |
|
Tab8715 posted:Could someone post a link to the guide? There's really not that much more TO do; I just have an aversion to Easy Install mode because I don't create local users, but use LDAP instead, etc. None of that is covered in the guide, though. http://partnerweb.vmware.com/GOSIG/CentOS_7.html Basically the guide says next to nothing and I've had to google everything else.
|
# ? Jun 4, 2015 00:01 |
|
NevergirlsOFFICIAL posted:Yeah, there is iSCSI. iSCSI is block-level storage (operating on raw disk blocks) while most NAS is file-level. In file-level storage, a file needs to be locked while a client is writing to it, meaning no one else can do the same until it's done. That makes block-level storage much better for shared virt or database environments, since simultaneous access happens all the time. Roargasm fucked around with this message at 02:33 on Jun 4, 2015 |
# ? Jun 4, 2015 02:28 |
|
Roargasm posted:iSCSI is block-level storage (operating on raw disk blocks) while most NAS is file-level. In file-level storage, a file needs to be locked while a client is writing to it, meaning no one else can do the same until it's done. That makes block-level storage much better for shared virt or database environments, since simultaneous access happens all the time. Block access requires locking too. Until fairly recently, VMFS had problems scaling LUN-based datastores to large VM counts because the VMFS metadata updates required an exclusive lock on the device, which meant certain VM operations would have to wait until the host that owned the lock released it. NFS never had that issue because locks occur at the file (or byte-range) level. The ATS VAAI primitive is meant to address this. You can happily run VMware or Oracle on NFS, and plenty of places (including Oracle, internally) do just that. Likewise, Hyper-V and SQL can run on CIFS these days. The lack of need for a SCSI-3 reservation in a cluster when using file-based access is a nice bonus, on top of the added simplicity. There's no shared-access scheme that doesn't require a lock to update data, though, be it block or file or object (or main memory or cache).
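You can check per device whether your host and array actually negotiated ATS (and the other VAAI primitives) from the ESXi shell:
code:
# Shows VAAI primitive support (ATS, Clone, Zero, Delete) for each attached device
esxcli storage core device vaai status get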
|
# ? Jun 4, 2015 03:57 |
|
NippleFloss posted:Block access requires locking too. Until fairly recently, VMFS had problems scaling LUN-based datastores to large VM counts because the VMFS metadata updates required an exclusive lock on the device, which meant certain VM operations would have to wait until the host that owned the lock released it. NFS never had that issue because locks occur at the file (or byte-range) level. The ATS VAAI primitive is meant to address this. If by "fairly recently" you mean "almost 4 years ago", then yeah, the dependency on SCSI reservations used to be an issue. The enhanced ATS primitives were introduced in VMFS-5, which made the vSphere 5.0 GA release. Not really intending to nitpick; it's crazy how time flies. Edit: on the subject of datastores, is anyone here using the NetApp NFS plug-in for VAAI with NFSv4 exports? I've always been interested in how well it really works. Pile Of Garbage fucked around with this message at 13:20 on Jun 4, 2015 |
# ? Jun 4, 2015 13:17 |
|
cheese-cube posted:If by "fairly recently" you mean "almost 4 years ago", then yeah, the dependency on SCSI reservations used to be an issue. The enhanced ATS primitives were introduced in VMFS-5, which made the vSphere 5.0 GA release. Well, storage-side support is also required, so if you were still using legacy storage you were often SOL until you did a refresh or at least a code upgrade. It's probably only been in the last couple of years that a majority of customers have been using ATS.
|
# ? Jun 4, 2015 13:53 |
|
NippleFloss posted:Block access requires locking too. Until fairly recently had problems with scaling LUN based datastores to large VM counts because the VMFS metadata updates required an exclusive lock to the device which meant certain VM operations would have to wait until the host that owned them could get a lock. NFS never had that issue because locks occur at the file (or byte range) level. The ATS VAAI primitive is meant to address this. OK so should I do NFS or iSCSI?
|
# ? Jun 4, 2015 20:32 |
|
Neither: FC! But seriously, it depends on what NAS/SAN you're using (e.g. NetApp will usually recommend NFS), the kind of workloads you're running, what you're comfortable with supporting, and whether there are any specific requirements from whatever you're using to back up the environment. Check with Buffalo or consult their documentation to see what they recommend. If they don't provide any specific recommendations, then you're really just going to have to evaluate both to see what fits best. It's not really an easy question to answer without much more information about your environment.
|
# ? Jun 4, 2015 20:51 |
|
ok thank you!
|
# ? Jun 4, 2015 22:31 |
|
NevergirlsOFFICIAL posted:OK so should I do NFS or iSCSI? I would go with NFS, all other things being equal, because it's a lot harder to do it wrong.
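And for completeness: mounting an NFS export as a datastore is a one-liner, which is part of why it's hard to get wrong (NAS address and export path below are placeholders):
code:
# Mount an NFS export from the NAS as a datastore (placeholder host/share)
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/array1/vmds --volume-name=buffalo-nfs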
|
# ? Jun 4, 2015 23:37 |
|
NevergirlsOFFICIAL posted:OK so should I do NFS or iSCSI? I'd find it on the VMware HCL before using it for VMware shared storage at all. Support is listed there by ESX version and storage protocol.
|
# ? Jun 4, 2015 23:42 |
|
KS posted:I'd find it on the VMware HCL before using it for VMware shared storage at all. Support is listed there by ESX version and storage protocol. right, so Buffalo is definitely not a VMware "partner"
|
# ? Jun 5, 2015 17:09 |