|
Generally no, there's not too much interest in prosecuting consumers.
|
# ? Apr 1, 2018 20:15 |
|
|
# ? Apr 26, 2024 04:43 |
|
Yeah not enough ROI on the cost of the lawyers and such.
|
# ? Apr 1, 2018 20:28 |
|
If you’re gonna spend $20 for a discounted key that is most surely sketchy based on the fact that it’s 10% the price of a regular key, then why not just go all the way and get a $4.32 key instead?
|
# ? Apr 2, 2018 05:08 |
|
Sorry for such a dumb question, but for some reason someone created additional VMkernel ports for management on one of our hosts, so I cleaned them up, but now the other hosts have the error: Where do I fix this so these hosts don't care about these networks? edit: never mind, it was under the cluster settings.
|
# ? Apr 4, 2018 15:18 |
|
A few weeks ago I virtualized my three-year-old CentOS 7 install when I upgraded my file server from a literally 10-year-old motherboard to a new dual-socket Xeon Silver-based setup and installed VMware ESXi 6.5 build 7967591. The conversion was mostly easy enough, but I've noticed that I cannot add USB hard drives (for a primitive and to-be-overhauled local backup process) to the VM while said VM is powered on; if I attempt to do so, it says "Unable to perform while powered on." I have to power down the VM, add the drive, then power it back up, and the same goes for removing it. This is obviously very annoying and I'm just curious if anyone has ever run into anything like this. I've confirmed that a brand-new CentOS 7.4 VM and a Windows Server 2012 R2 VM have no issues adding and removing USB drives while they remain powered on. The USB controllers for both the virtualized and brand-new VMs are standard USB 3 controllers, not passthrough, and the open-vm-tools package is installed on both VMs.
|
# ? Apr 5, 2018 23:49 |
|
CHEF!!! posted:A few weeks ago I virtualized my three-year-old CentOS 7 install when I upgraded my file server from a literally 10-year-old motherboard to a new dual socket Xeon Silver-based setup and installed VMWare ESXi 6.5 Build 7967591. The conversion was mostly easy enough, but I've noticed that I cannot add USB hard drives (for a primitive and to-be-overhauled local backup process) to the VM while said VM is powered on; if I attempt to do so, it says "Unable to perform while powered on." I have to power down the VM, add it, then power it back up, as well as if I want to remove it. This is obviously very annoying and I'm just curious if anyone has ever run into anything like this. Does the guest VM have VMware Tools installed and updated? What's the guest VM's hardware level?
|
# ? Apr 6, 2018 00:14 |
|
Try adding it to the quirks for the VM. Lots of USB enclosures are cheap junk that don't arbitrate correctly.
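For reference, per-VM USB quirks go in the .vmx file (or via Edit Settings > VM Options > Advanced > Configuration Parameters). A minimal sketch; the 0x1234:0x5678 vendor:product ID is a made-up placeholder, so substitute the enclosure's real IDs as shown by lsusb:

```
# Hypothetical example: tell the USB arbitrator to skip the device reset
# and related steps for a flaky enclosure. Replace 0x1234:0x5678 with the
# real vendor:product IDs of your device.
usb.quirks.device0 = "0x1234:0x5678 skip-reset, skip-refresh, skip-setconfig"
```

The quirk only applies to the VM whose .vmx you edit, so it's safe to experiment with on one guest first.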
|
# ? Apr 6, 2018 10:52 |
|
anthonypants posted:Does the guest VM have VMware Tools installed and updated? What's the guest VM's hardware level? Yes, both: open-vm-tools (10.1.5-3.el7, from the base CentOS repo), and then I removed it and installed VMware Tools 10.1.15-6677369; both yielded the same result of requiring a reboot. The hardware version of both VMs is VM version 13 (ESXi 6.5 and up). I have not had time to try this quirks thing, which I did not even know was a thing until evol262 mentioned it, but I'll try to find the time. I can't help but wonder, "If a given USB external HDD has a cheap USB enclosure, how come the fresh VM picks it up and the virtualized one doesn't?"
|
# ? Apr 10, 2018 04:54 |
|
What is the guest OS set to in the VM settings?
|
# ? Apr 10, 2018 21:59 |
|
CHEF!!! posted:Yes, both open-vm-tools (10.1.5-3.el7, base CentOS repo), then I removed it and installed VMWare Tools 10.1.15-6677369, both of which yielded the same result of requiring a reboot. Hardware version of both VMs is VM version 13 (ESXi 6.5 and up). I have not had time to try this quirks thing, which I did not even know was a thing until evol262 mentioned it, but I'll try to find the time. I can't help but wonder "If a given USB external HDD has a cheap USB enclosure, how come the fresh VM picks it up and the virtualized one doesn't?" Check dmesg when you pass it through without rebooting. I would guess that it's failing arbitration. In the same way that flaky/dying flash drives sometimes need to be replugged multiple times, cheap Chinese enclosures sometimes don't do a full bus reset. By rebooting, you are not relying on hotplug
|
# ? Apr 11, 2018 12:55 |
Does anyone know of any good courses on Udemy.com or Lynda.com (or similar) that are basically:
Virtualization 101 - There's a bazillion of these out there, I know...
VMWare 201 - The fundamentals of how to use VMWare, including how to perform basic tasks in their GUI
Hyper-V 201 - Ditto, just for Hyper-V
The company wants to have 10-20 of our tier 1 techs take a course along these lines. Virtualization has become big enough that their ignorance is hurting our ability to support our stuff. Too many simple things get escalated. They don't need to be full virtualization admins. They really just need to know how to get around, and know some high-level do-this/don't-do-that stuff.
|
|
# ? Apr 13, 2018 15:13 |
|
For Hyper-V I think one of the MVA courses will be more than adequate
|
# ? Apr 13, 2018 15:29 |
|
ConfusedUs posted:Does anyone know of any good courses on Udemy.com or Lynda.com (or similar) that are basically Packt Publishing has a number of videos which can be purchased directly from them or are included with a Safari Books Online subscription (which is a great thing for people who need access to lots of IT books but might be overkill for tier 1 folks)
|
# ? Apr 13, 2018 17:59 |
|
Dev Null I could loving kiss you
|
# ? Apr 17, 2018 19:47 |
|
Potato Salad posted:Dev Null I could loving kiss you Wait, what did I do?
|
# ? Apr 17, 2018 21:56 |
|
DevNull posted:Wait, what did I do? Asked the vcenter guys to make the next patch include a win/vcsa migration. I'm going to pretend your mention was critical for the decision to include it.
|
# ? Apr 17, 2018 22:04 |
|
Potato Salad posted:Asked vcenter guys to make the next patch include a win/vcsa migration Oh, yeah. I think I actually had a more solid yes to the answer before, but couldn't promise it because it wasn't released yet. Glad it worked out for you.
|
# ? Apr 17, 2018 22:17 |
|
Is that with the 6.7 announcement?
|
# ? Apr 18, 2018 00:31 |
|
yeah buddy it zoggin is
|
# ? Apr 18, 2018 02:44 |
|
I'm adding a new ESXi host to a cluster, and I can't vMotion to it. I also can't vmkping any of the other hosts from that adapter, but they all can ping each other. It's on the same VLAN and subnet as my other vMotion hosts, and all the virtual switch and port group settings match the other hosts. Any suggestions for what to look at? At this point it's a toss-up between "you're missing something simple in the host configuration" and "you're missing something simple in the switchport / VLAN configuration."
|
# ? Apr 21, 2018 16:35 |
|
Can you ping your gateway from that VMkernel stack? Are you using a nonstandard MTU? LACP?
|
# ? Apr 21, 2018 23:24 |
|
There were two problems: 1. The MTU was set on the vswitch but not on the physical adapter. 2. There was some kind of network pathing problem with the other three hosts' active uplink adapter (which implies to me that it was actually the fourth host that was set up wrong - the one I did vs. the three my predecessor did). I switched the active and standby uplinks and now everything works. Slightly concerned, but we're also rolling out new storage and networking equipment in less than a month, so as long as it stays working that long we will be fine.
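For anyone chasing the same thing, a rough checklist run from the ESXi shell (the vmk interface name and peer IP below are placeholders): the MTU has to match on the vSwitch, the VMkernel port, and the physical switchports, and a don't-fragment vmkping at jumbo size will catch a mismatch anywhere on the path.

```
# MTU as configured on standard vSwitches and on VMkernel interfaces
esxcli network vswitch standard list
esxcli network ip interface list

# End-to-end jumbo-frame test with don't-fragment set:
# 8972-byte payload = 9000 MTU minus 28 bytes of IP/ICMP headers
vmkping -I vmk1 -d -s 8972 10.0.0.2
```

If the 8972-byte ping fails while a plain vmkping works, something in the path is still at 1500.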
|
# ? Apr 24, 2018 19:54 |
|
Hi SH/SC, I am hoping to get some tips on how I might want to run my new home server. Specs: Dell PowerEdge T620 - 32GB ECC RAM - 2x Intel Xeon E5-2609 (2.4GHz quad-core) - 10x 300GB 15K SAS - iDRAC 7 Enterprise - PERC H710. This is for home media usage, playing with VMs, and running various crypto project nodes. It should be arriving in the next few days. My existing CJ skills are pretty bare bones (a few years of playing with my solo Ubuntu media server), but I would like to extend them. I am thinking of setting up 8 drives in a RAID 6 with two hot spares. My storage needs are not great; my current box is running with 1 TB, so 1.8 TB should be plenty for me. I would like to virtualise, but I have no idea what the best configuration is for this kind of duty. ESXi free tier? Proxmox? Install to a partition of the main drive, or to vFlash? Will any extra drivers be required to support the RAID controller? Also taking tips on VM and image/storage management. I would like to have some secure-at-rest volumes, loaded manually with an externally supplied key. catbread.jpg fucked around with this message at 01:51 on May 4, 2018 |
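For what it's worth, the ~1.8 TB figure checks out; a quick sketch of the RAID 6 arithmetic for the layout described (10 bays, 8 drives in the set, 2 hot spares):

```python
# Usable capacity of a RAID 6 set: two drives' worth of parity, plus any
# hot spares, come out of the raw drive count.
def raid6_usable_gb(total_drives: int, drive_gb: int, hot_spares: int = 0) -> int:
    data_drives = total_drives - hot_spares - 2  # RAID 6 spends 2 drives on parity
    if data_drives < 1:
        raise ValueError("not enough drives for RAID 6")
    return data_drives * drive_gb

# 10 bays: 2 hot spares, 8 drives in the RAID 6 set -> 6 data drives
print(raid6_usable_gb(10, 300, hot_spares=2))  # -> 1800 GB, i.e. the ~1.8 TB above
```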
# ? Apr 29, 2018 00:21 |
|
Why the gently caress does the X710 not support RSS?
|
# ? May 1, 2018 12:34 |
|
Because they are bad cards with bad drivers.
|
# ? May 2, 2018 14:55 |
|
Our SAN is going EOL so I'm trying to price out some solutions. Dell is trying to pitch me a 4-node VxRAIL setup that has 4x 4TB 7.2k disks along with a SSD cache disk in each node. Does this seem, uhh, bad to anyone else? Is there any good reason they're trying to pitch me 7.2k disks in TYOOL 2018? Our current VMware environment is 4x newish HPE hosts and a v7000 SAN with like 17TB of space, with a number of 15k disks reserved for MSSQL DBs.
|
# ? May 2, 2018 19:08 |
|
Spring Heeled Jack posted:Our SAN is going EOL so I'm trying to price out some solutions. Dell is trying to pitch me a 4-node VxRAIL setup that has 4x 4TB 7.2k disks along with a SSD cache disk in each node. Does this seem, uhh, bad to anyone else? Is there any good reason they're trying to pitch me 7.2k disks in TYOOL 2018? 15k disks are pretty much a dying breed. If you need the I/O, you go SSD. If you need the space, you go 7.2k drives. In between, the question is just how much of one or the other you need. I would take a more careful look at your I/O needs (DPACK may be a good start) and then get Dell to loan you one of the units for testing. I don't know enough about VxRAIL to comment on that. I'd pay close attention to how those nodes work together, as that will be the most important part. If each node is so small, I'd want to make sure that the way they work together is solid and makes sense. [Edit: Relevant - https://www.servethehome.com/seagate-launches-final-15k-rpm-hard-drive-rip-15k-hdds/ ] Internet Explorer fucked around with this message at 19:23 on May 2, 2018 |
# ? May 2, 2018 19:17 |
|
Spring Heeled Jack posted:Our SAN is going EOL so I'm trying to price out some solutions. Dell is trying to pitch me a 4-node VxRAIL setup that has 4x 4TB 7.2k disks along with a SSD cache disk in each node. Does this seem, uhh, bad to anyone else? Is there any good reason they're trying to pitch me 7.2k disks in TYOOL 2018? I know WD has a consumer level 10k rpm 4tb drive, but is anyone really selling an Enterprise 4tb drive at 10k RPM? That market would be very niche.
|
# ? May 2, 2018 19:21 |
|
If you're at the enterprise grade, read-optimized SSDs have overtaken 10k SAS on price/GB for a few years now. 7.2k still has some edge for large archival storage, but it's getting to the point that accessing bulk data on a 4TB+ spindle is slow enough that it's basically competing with tape. Specifically with the hybrid SSD cache/spindle-backed architectures, you need to be very careful with your cache sizing. Once the cache is exhausted and can't keep up, you're going to swamp the spindles, and then the whole thing falls over and catches fire super fast. Having just 4x 7.2k spindles ultimately supporting the whole thing still means you could realistically have around 100+ MB/s of data commit coming in once the ops have been through cache in SSD and merged into big fat 1MB write ops. Monitoring of cache health is going to be critical to know when you are approaching the limits of the hardware, and Storage I/O Control will be your friend to cut VM queue depths and push I/O back pressure to the guests so they start backing off when poo poo hits the wall. Extremely random workloads against the whole array beyond what the cache can house are going to cause heads to thrash on what should normally be a steady stream of writes on the platters and compound the problem of a thing already on fire. Even full backup jobs sweeping across all blocks on the storage could cause problems depending on how they are scheduled and how fast they run. BangersInMyKnickers fucked around with this message at 19:41 on May 2, 2018 |
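The destage ceiling above is easy to put numbers on. A hedged back-of-envelope sketch, where the per-spindle IOPS figure, the merged write size, and the mirroring factor are all assumptions rather than measurements; it lands in the same ballpark as the 100+ MB/s figure:

```python
# Back-of-envelope destage throughput for one hybrid node, assuming:
# ~75 seek-bound ops/s per 7.2k spindle, cache destaging in ~1 MB merged
# writes, and mirrored writes doubling the back-end load.
SPINDLES = 4
OPS_PER_SPINDLE = 75      # assumed random-IOPS figure for a 7.2k drive
DESTAGE_WRITE_MB = 1.0    # assumed merged write size coming out of cache

raw_destage_mb_s = SPINDLES * OPS_PER_SPINDLE * DESTAGE_WRITE_MB
sustained_ingest_mb_s = raw_destage_mb_s / 2  # mirroring writes everything twice

print(raw_destage_mb_s, sustained_ingest_mb_s)  # 300.0 150.0
```

Once incoming commit exceeds that sustained figure for longer than the cache can absorb, the array is writing faster than the platters can drain it, which is the falls-over-and-catches-fire scenario described above.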
# ? May 2, 2018 19:32 |
|
VxRail is just VSAN with some additional pieces to simplify the initial configuration. We had a meeting with VMware a couple of months ago where we were told to not sell VxRail due to some initial issues. Perhaps that’s changed. If you don’t care about good snapshots or good clones or good replication or deduplication or compression or better redundancy options than mirroring, and you don’t mind that sometimes disk groups break or alarms trigger for seemingly no reason then VSAN is fine. But it wouldn’t be my first choice for replacing a production SAN.
|
# ? May 2, 2018 19:39 |
|
Buy yourself a nice SSD appliance, vsan licensing is expensive That said, vsan is really fucken nice when your redundancy is application level, restore is from your backup appliances, and frankly idk what random alarms yolo is talking about other than perhaps incorrect storage device firmware incompatibility warnings You tell the client "put your loving data here" and they really can't gently caress it up
|
# ? May 3, 2018 02:34 |
|
Potato Salad posted:Buy yourself a nice SSD appliance, vsan licensing is expensive Even with HCL-compatible hardware and firmware, we've ended up with customers who have phantom health alarms on VSAN components despite everything working fine. I was with one a couple of weeks ago who was running all-flash VSAN on Dell FX hardware. Disk groups and VM objects all healthy, but disk health checks failed for mysterious and unexplained reasons. They'd never even noticed. Setting FTT to 2, which you really should be doing, also eats into storage space real quick on hybrid configs. One thing I will say about it is that it performs well, and we haven't seen any data loss.
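To put numbers on how fast FTT eats space under mirroring (the 48 TB raw pool below is a made-up example, and this ignores slack space and other vSAN overheads):

```python
# vSAN usable capacity under mirroring (RAID 1): tolerating FTT failures
# means keeping FTT + 1 full copies of every object.
def usable_tb(raw_tb: float, ftt: int) -> float:
    return raw_tb / (ftt + 1)

print(usable_tb(48.0, 1))  # 24.0 -> FTT=1 halves the raw pool
print(usable_tb(48.0, 2))  # 16.0 -> FTT=2 leaves only a third usable
```

Erasure coding can claw some of that back, but it's all-flash only, which is part of why FTT=2 hurts so much on hybrid configs.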
|
# ? May 3, 2018 02:48 |
|
Thanks for the feedback, guys. I also have calls with Starwind, Tegile, and CDW (our usual VAR) about a few of the storage products they are offering. Our HPE hosts are only a year old (compared to ~7 for the SAN); otherwise we could probably swing throwing more money at a hyperconverged solution. My manager's concern with getting another SAN is the upside-down pyramid we get with all of our data sitting on this one appliance, no matter how much redundancy is built into the unit itself. It's been a while since I've looked at the current technologies, but it seems there are some solutions out there to alleviate this without breaking the bank.
|
# ? May 3, 2018 13:39 |
|
Spring Heeled Jack posted:My managers concern with getting another SAN is the upside-down pyramid we get with all of our data sitting on this one appliance, no matter how much redundancy is built into the unit itself. It's been a while since I've looked at the current technologies but it seems there are some solutions out there to alleviate this without breaking the bank. It shouldn't, all that data should also be sitting on whatever you're using for backups
|
# ? May 3, 2018 14:26 |
|
Spring Heeled Jack posted:Thanks for the feedback buys, I also have calls with Starwind, Tegile, and CDW (our usual VAR) on a few of the storage products they are offering. Our HPE hosts are only a year old (compared to ~7 for the SAN) otherwise we could probably swing throwing more money at a hyper converged solution. If you guys are evaluating starwind, add DataCore to the mix, they are not cheap but my experience with them has been flawless.
|
# ? May 3, 2018 14:43 |
|
Vulture Culture posted:It shouldn't, all that data should also be sitting on whatever you're using for backups Oh you mean our tapes that live at Iron Mountain?
|
# ? May 3, 2018 16:38 |
|
Make Tapes Great Again!
|
# ? May 3, 2018 19:27 |
|
Spring Heeled Jack posted:Oh you mean our tapes that live at Iron Mountain?
|
# ? May 3, 2018 21:02 |
|
Spring Heeled Jack posted:Thanks for the feedback buys, I also have calls with Starwind, Tegile, and CDW (our usual VAR) on a few of the storage products they are offering. Our HPE hosts are only a year old (compared to ~7 for the SAN) otherwise we could probably swing throwing more money at a hyper converged solution. All your equipment is probably sitting in one room, which is a more likely failure domain than a storage array. Even "shared nothing" distributed storage systems like Starwind, VSAN, etc. can still fail globally via bugs in the data or control plane, network outages, etc. Modern arrays are built to provide five-to-six nines of availability. Unless you're a hospital or financial institution, that's probably plenty, and you probably don't have the budget to purchase a solution that can beat that number. Talk to Pure or Nimble. Tegile is ZFS with a new coat of paint and developers that don't know what they're doing. Starwind has like a dozen customers.
|
# ? May 4, 2018 00:03 |
|
|
Still liking my Nimble arrays a lot. Unsure about the future with HP, though.
|
# ? May 4, 2018 00:35 |