BlankSystemDaemon
Mar 13, 2009



Since either my forum-searching ability isn't good enough, it hasn't been asked, or I'm just plain stupid: I'm planning on building a home server with ESXi and want to run FreeNAS and OpenELEC virtualized. Here's my question: can I use VT-d in ESXi to pass the graphics card directly through to OpenELEC (as FreeNAS obviously doesn't need it)?

The reason I'm not just straight-up installing FreeBSD and putting XBMC on a MythTV backend connected to an HDHomeRun is that I want to play around with virtualization more than I have in the past (VMware Player/VMware Fusion/Hyper-V in Windows 8 Pro). I just don't fancy going out and buying new hardware (I do plan on buying new hardware eventually, as my current HP N36L-based NAS is getting too small) only to learn that I can't do this.
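
Just to sanity-check my own plan, this is roughly what I'd expect the host side to look like: identify the card's PCI address with esxcli, then toggle passthrough for it in the vSphere client. The grep pattern is obviously a placeholder for whatever GPU ends up in the box, so treat it as a sketch rather than a recipe.

code:
# Sketch only - list PCI devices on the ESXi host to find the GPU's address (grep pattern is a placeholder)
esxcli hardware pci list | grep -B 2 -A 10 -i "VGA"
# Passthrough is then enabled per-device in the vSphere/host client and the device
# is added to the VM as a "PCI device"; cards that don't reset cleanly can have
# their reset method overridden in /etc/vmware/passthru.map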


BlankSystemDaemon
Mar 13, 2009



Well, as I've understood it, View 5.2 and whatever else you guys are talking about is meant to make the GPU accessible as a resource to all the guest OSes - what I want to do is just point it at one, since FreeNAS doesn't need it at all. I wish I had some lab hardware to test this on, but unfortunately I don't.
Also, it turns out that I have a friend who has a setup quite like the one I'll have, so when I get hold of him I'll have to ask what setup he's running and how it's configured. I'll return with more information when I have it.

BlankSystemDaemon
Mar 13, 2009



I noticed that the USB 3.0 PCIe daughterboard I've got passed through to my Windows 10 VM via VMDirectPath I/O wasn't picking up the USB 2.0 device I plugged in, and when I added a USB 2.0 controller through VMDirectPath I/O as well, something went absolutely apeshit: whenever I started something with audio, playback wouldn't begin for the first minute and then stuttered wildly.
I'm thinking DPC latency issues caused by lack of MSI-X or something else along those lines, but haven't really got any solution other than using USB 3.0, so my question is:
Shouldn't the USB 3.0 controller be capable of picking up USB 2.0 devices, so I don't have to use a separate USB 2.0 controller?

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Yes, USB 3.0 is fully backwards compatible. Sounds like there was an interrupt issue.
Yeah, I'm almost convinced it's an interrupt issue - I'm occasionally seeing DPC latency for the NT kernel process spike to a few thousand milliseconds in LatencyMon.
However, I can play X4 perfectly fine (albeit at only 10% better FPS than my former workstation, but that's entirely expected, since I was both CPU- and GPU-bottlenecked there).

Anyway, I'm pretty sure I just need to test a bit more with the HDMI-CEC adapter, as I just had my HOTAS connected to play X4, and that uses USB 2.0 too.
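
If it does turn out the controller is stuck on line-based interrupts, the usual workaround is the MSISupported registry knob on the device inside the Windows guest - the device path below is a placeholder that has to be looked up in Device Manager first, so treat this as a sketch.

code:
:: Sketch only - the PCI instance path is a placeholder; find the real one under the
:: controller's Details tab (Device instance path) in Device Manager before touching anything
reg add "HKLM\SYSTEM\CurrentControlSet\Enum\PCI\VEN_xxxx&DEV_xxxx&SUBSYS_xxxxxxxx&REV_xx\<instance>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f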


Also, from the Linux thread:

SamDabbers posted:

I look forward to your posts about bhyve :)
In most exciting news, bhyve now has proper UCL-like configuration management.
It's not using libucl, so it may be refactored again before long - I need to ask jhb@ about it. It'll likely be ready by 13.1, and will probably be in 12.3 as well, given the MFC.

Also, even though 13.0-RELEASE isn't out yet (a bug was found, and OpenSSL just announced that a high-severity vulnerability has been fixed), the release notes are worth reading through for things like VirtIO V1 (read: Q35 chipset support), 9pfs, snapshots (not live snapshots, yet), as well as a PCI HDAudio device and COM ports 3 and 4.
The VNC protocol support has also received a bump, so you can now attach macOS' Screen Sharing feature to it.
Oh, and in case you have more memory than should be allowed, you can now use 57-bit virtual addressing and five-level nested page tables.
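
For anyone who hasn't poked at it, the VNC bit is just the fbuf device - something along these lines, where the slot numbers, zvol path, and resolution are all arbitrary examples:

code:
# Sketch only - slot numbers, zvol path and resolution are placeholders
bhyve -c 2 -m 4G -H -w \
  -s 0,hostbridge \
  -s 2,virtio-blk,/dev/zvol/zroot/vm/guest0 \
  -s 29,fbuf,tcp=0.0.0.0:5900,w=1280,h=720,wait \
  -s 30,xhci,tablet \
  -s 31,lpc -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  guest0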

BlankSystemDaemon fucked around with this message at 02:41 on Mar 25, 2021

BlankSystemDaemon
Mar 13, 2009



BlankSystemDaemon posted:

In most exciting news, bhyve now has proper UCL-like configuration management.
It's not using libucl, so it may be refactored again before long, I need to ask jhb@ about it. It'll likely be ready by 13.1, and will probably be in 12.3 as well, given the MFC.
I was wrong about this: what bhyve has is an OID-like configuration interface for a MIB, not unlike what ESXi has, although it's not OID-compatible with ESXi.

This makes it possible to map UCL compatible files onto those OIDs, and it seems to me that it's a pretty excellent GSoC project in case anyone's looking for something to do over the summer.
Send me a PM or an email to debdrup@freebsd.org if you're interested.
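
To give an idea of what the dotted names look like in practice, here's a rough sketch of the kind of key=value file the new interface consumes - treat the key names as illustrative and check bhyve_config(5) for the real ones.

code:
# Sketch only - key names are illustrative; see bhyve_config(5)
name=guest0
cpus=2
memory.size=4G
pci.0.0.0.device=hostbridge
pci.0.2.0.device=virtio-blk
pci.0.2.0.path=/dev/zvol/zroot/vm/guest0
pci.0.31.0.device=lpc
lpc.com1.path=stdio
# started with: bhyve -k guest0.conf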

BlankSystemDaemon
Mar 13, 2009



:sigh:

Gaming in Windows on ESXi was a fun adventure, but it doesn't really work out.

A combination of a slightly janky but memory-hungry game (X4) and the EFI boot process in ESXi handling PCI BARs larger than 4GB poorly means that the game tends to freeze when saving.
It involved at least two PSODs, and a lot of frustration.
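
For reference, these are the .vmx options VMware documents for letting an EFI-booted VM map PCI BARs above 4GB - the MMIO size is just an example, and clearly they don't make the whole experience pleasant:

code:
firmware = "efi"
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"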

BlankSystemDaemon
Mar 13, 2009



SamDabbers posted:

Try KVM with a relatively recent kernel and qemu. I have had good results passing through an Nvidia card and USB3 controller to a Windows VM for this use case under Fedora.
I have just about zero desire to do the care and feeding of Linux, as I really don't want to deal with systemd et al.

I can pass through the GPU and USB3 daughterboard just fine, that's not the problem.
The problem is all the ancillary issues that crop up.

BlankSystemDaemon
Mar 13, 2009



SysV init is not the only way to do things, nor am I especially fond of it, and I'm certainly not recommending anyone go back to it.

The suggestion was to not use ESXi and instead use KVM, which means Linux - and unless I go out of my way, that also means systemd et al. That's fine right up until something inevitably breaks (because it always does; Murphy makes sure of it) and I want to root-cause it (because that's just the kind of geek I am). I've been there before, and I end up having to use something other than Linux to dig into it, at which point little things like log files won't be available to me, because of systemd.

Nowhere did I mention running Windows as a bare-metal hypervisor; Windows is just the de facto gaming platform. Even though Linux gaming exists, I don't own X4 on Steam (and the other games I play aren't available on Steam for Linux), and even if I did, I wouldn't know where to start finding a Linux appliance with Steam that actually works reliably - and it would probably also involve systemd et al, so we're back to square one.
Well, with the exception that at least Windows has dtrace. :haw:

And speaking of Windows as a bare-metal hypervisor, I wouldn't touch that with a ten-foot pole, because even such a simple thing as IOMMU passthrough is an absolute loving pain in the neck to set up and requires messing around in PowerShell.

BlankSystemDaemon fucked around with this message at 16:08 on Apr 9, 2021

BlankSystemDaemon
Mar 13, 2009



The question I want answered is: why are you rebooting your systems often enough that the speed of the reboot matters?

BlankSystemDaemon
Mar 13, 2009



Well, systemd is popular for a few reasons.
Partly it's because RedHat is well-positioned to influence a lot of other distributions, since many of them take their cues from how RedHat does things, if they don't outright consume most of what's on offer and just alter a few bits and bobs.
Partly it's because a lot of the non-RedHat-influenced distributions adopted systemd early on, pretty much through unilateral decisions by a few people (Debian was one of the few places where there was any substantive debate, and even then it still decided to adopt it - and even more things are based on Debian than on RedHat).

There's also a very substantial userbase of Linux users who happily act like Windows users and just reboot when they encounter problems instead of root-causing them, so when problems do crop up, they won't know what the cause is (although this doesn't mean systemd is always at fault, naturally).
This, ironically, is probably also the userbase that's most convinced by the "it boots faster" argument, although that one has arguably been shown to be largely untrue for newer versions, if it ever was faster.

Someone more clever than me can probably also comment on how something doesn't have to be good to be popular.

I think a lot of it comes down to the fact that it's a compound thing for the people who don't like systemd; ie. it's not just systemd itself, it's also Lennart Poettering, RedHat, and a bunch of other factors.

To add to the other link, there's also an (archived) list of articles critical of systemd as well as some (archived) arguments against systemd.

EDIT: For reference, I'm not saying that nothing systemd does is good - it's just that everything good it does was done earlier, and better, by launchd and SMF. launchd (introduced in Mac OS X, and still in macOS) is an on-demand, unit-structured startup handler that replaced BSD init: it only starts things when there's actual demand for them, using sockets much like inetd launches daemons on demand, and it handles units in a way that'd be familiar to any systemd user. SMF (the Service Management Facility from Solaris) is one of the only startup systems that's managed to produce a dependency graph that can't get you into a failed state.
Had I the money for it, I wouldn't mind paying someone to make something for FreeBSD out of BSD init, rc, devd, daemon, and all the utilities that're already in base and could be used to achieve all of this, without making a monolithic beast that's consuming seemingly all of the Linux userland.
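
For anyone who's only seen socket activation in systemd, the on-demand part that launchd built on is literally this old: a single inetd.conf(5) line per service, and the daemon only gets spawned when a connection actually arrives (ftpd here is just the classic example).

code:
# Sketch only - a classic inetd.conf(5) entry; ftpd is spawned on demand when a connection comes in
ftp     stream  tcp     nowait  root    /usr/libexec/ftpd       ftpd -l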

BlankSystemDaemon fucked around with this message at 23:02 on Apr 10, 2021

BlankSystemDaemon
Mar 13, 2009



If it's objectively better, how can it get into a state where the system is unbootable without the configuration having been touched by the user?

Let me guess, "it works for me" - which is the exact response a lot of people get when they report real bugs and Lennart doesn't wanna bother with something.

BlankSystemDaemon
Mar 13, 2009



jaegerx posted:

Elaborate
There are a lot of bugs that've been closed on the official bug tracker without any resolution given, despite plenty of root-causing having been done by the bug reporter (and even more bugs that were never filed as such, but have just been blogged about, because the people who blog about them know Lennart's attitude).

As a personal anecdote: the last time I was running a Linux distribution on my entire network, it managed to stop booting properly, and when I checked things with zfs-diff(8) there were no changes related to systemd or its configuration - nor had anything else changed, apart from a couple of files of user data.
That was pretty much the last straw, for me.
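
For reference, the check amounts to something like this (the dataset and snapshot names are made up):

code:
# Sketch only - compare the root dataset against the last snapshot taken while it still booted
zfs diff rpool/ROOT/default@last-good rpool/ROOT/default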

BlankSystemDaemon fucked around with this message at 23:34 on Apr 10, 2021

BlankSystemDaemon
Mar 13, 2009



Eletriarnation posted:

I appreciate the insight here. I guess some of it just opens up more questions though - Red Hat, being responsible for countless actual paid-support production systems, surely had good reasons for choosing systemd over... whatever they were using before, right? If launchd and SMF are better and older, why did systemd choose a different path for what it's doing and why did everyone (in Linux-land, at least) go along with that? I don't expect you to educate me on the history of all this, but it feels like there's more going on here than the Linux userbase not knowing a good init system from a bad one.
Well, launchd is not opensource (it's not part of Darwin, the small open-source part of macOS that consists of some FreeBSD bits, the CMU Mach kernel, and some MIT/ISC/Apache-licensed utilities), and SMF is distributed under the same CDDL license that makes some lawyers say ZFS is incompatible with the GPL.

RedHat was looking to replace SysV init, because it needed replacing, and Lennart Poettering was in the right place at the right time.

It also doesn't hurt that RedHat makes money selling support contracts, and that a non-zero number of companies moved to paid support because of systemd.

Eletriarnation posted:

If nothing changed related to systemd, what made you think that the problem was caused by it?

Mr. Crow posted:

So systemd was the cause of this how? Or did you just assume it was at fault?
Because it stopped booting reliably - intermittently, it would attempt to boot but fail in ways that were never consistent. To me this suggests that the graph that systemd creates isn't verified or even verifiable.

If it'd been FreeBSD, I'd have run rcorder /etc/rc.d/* /usr/local/etc/rc.d/* a whole bunch of times and looked for differences, but systemd-analyze only reports on the last successful boot; it doesn't simulate a new one.
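
In other words, something along these lines:

code:
# Sketch only - capture the computed boot order and diff it against a previously-saved known-good run
rcorder /etc/rc.d/* /usr/local/etc/rc.d/* > /var/tmp/rcorder.now
diff -u /var/tmp/rcorder.known-good /var/tmp/rcorder.now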

The system in question had ECC memory, which I verified was reporting correctly by using a known-bad DIMM. I replaced the PSU and did a bunch of other testing, including moving the install to a completely different system (virtualized on known-good hardware), none of which isolated the problem.

BlankSystemDaemon fucked around with this message at 10:02 on Apr 11, 2021

BlankSystemDaemon
Mar 13, 2009



Eletriarnation posted:

Sure, I know that they couldn't just wholesale lift closed source software - what I'm saying is that if Poettering set out to emulate those examples knowing how they work, it doesn't really make sense that what he developed would just be worse for no reason instead of working in the same way as the examples. At the very least, we have to claim that he didn't know what he was doing or that he made bad decisions on how to make necessary-for-Linux changes instead of just replicating what was in front of him.

I don't even know the specifics of how they are different because I've never gone into the weeds that far on OSX or used BSD at all, so I'm not going to speculate on why those decisions were made - if you want to say that he made a mistake because he has bad opinions on how things should work, OK. But... like you say, he was in the right place at the right time to solve a problem that needed solving. Not only was his employer who paid for the solution happy to use it, but most of the rest of the Linux world was too.

Even if I believe that Red Hat created a deliberately flawed solution to sell more support contracts (or for the less conspiracy minded, spent a while developing something lovely and fell victim to sunk cost fallacy), for me it doesn't really pass the laugh test that the rest of the open source world would say "well, ok" and get in line and stay there for ten years instead of forking or starting over from scratch.
I'm pretty sure he's said he set out to emulate launchd, so assuming that he didn't seems to be pretty pointless.

The point I'm obviously failing to make is that if systemd is still as full of holes as it obviously is, and if it's still seeing scope creep that doesn't fix the existing issues but adds new ones, then he hasn't solved anything.

I'm not saying they did it deliberately, I'm not into conspiracy theories. I'm saying they unintentionally benefited from him being in a particular place at a particular time. Whether it's right is debatable.

EDIT: Also, don't be so quick to assume RedHat won't replace systemd - they're already replacing PulseAudio (which was also written by Lennart Poettering) with PipeWire.

BlankSystemDaemon fucked around with this message at 15:30 on Apr 11, 2021

BlankSystemDaemon
Mar 13, 2009



Anywho, I think enough :words: have been said to exhaust this.

The battle lines have been drawn pretty clearly on this topic for a long time, there's no changing anyone's mind - so let's just have a drink and shitpost about something else.

BlankSystemDaemon
Mar 13, 2009



What overhead?
I'd love to see some numbers that prove which one has more overhead, because unless you're looking at something like OccamBSD (just an example, because it's what I know: FreeBSD stripped to its absolute minimum so it runs bhyve and nothing else, not even networking), I'd be surprised if there's a measurable difference on comparable workloads.

BlankSystemDaemon
Mar 13, 2009



IOMMU passthrough in Hyper-V is embarrassingly bad.

BlankSystemDaemon
Mar 13, 2009



Perplx posted:

Microsoft is working on using Linux as the root partition for Hyper-V: https://www.theregister.com/2021/02/17/linux_as_root_partition_on_hyper_v/. Eventually Azure will be running Linux on bare metal like every other cloud provider.
Stripped-down bare-metal hypervisor, you say?

BlankSystemDaemon
Mar 13, 2009



The question that prompted this got me thinking: I do wish there was an abstraction framework that could multiplex several hypervisors on the same hardware when they all use hardware-accelerated virtualization (i.e. VMENTER/VMEXIT with SLAT).
As it is, when you've got one type-2 hypervisor running, you can only launch additional instances of that hypervisor. You can't, for example, run bhyve, Xen, and VirtualBox interchangeably, nor can you live-migrate between them.
In that sense it seems like there's still a lot of work to be done, but I think there's very little hope for an open standard for either the abstraction or the live-migration API.

BlankSystemDaemon fucked around with this message at 07:44 on Jun 30, 2021

BlankSystemDaemon
Mar 13, 2009



You're thinking of Windows Defender Application Guard, which makes use of Hyper-V to sandbox quite a few things, including the Edge browser - it also underpins Windows Sandbox, I think.

BlankSystemDaemon
Mar 13, 2009



Multi-seat configurations (invented for IRIX by SGI, if memory serves) aren't natively supported in Windows, but there are third-party solutions that make it possible.

BlankSystemDaemon
Mar 13, 2009



SlowBloke posted:

It's supported natively up to Windows Server 2016 as MultiPoint Server; it was EoL'd as of 2019 due to the introduction of Azure desktops and Windows 10 Enterprise multi-session (formerly EVD). The biggest hurdle is that the 3D support might not be the best for games.
Well colour me surprised.

BlankSystemDaemon
Mar 13, 2009



You can get used Ivy-Bridge Xeon-based servers with ECC memory for the same price as NUCs, if not cheaper.

BlankSystemDaemon
Mar 13, 2009



Clark Nova posted:

I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIE slot with no luck :argh:
I have an HPE ProLiant DL380p Gen8 with 2x Xeon E5-2667 v2 at 3.3GHz (boosting to 4GHz on a single core, and configured properly so the CPUs go to the C2 state when idle, because I don't wanna lose cache coherency), 264GB of memory, 8x 10k RPM 300GB drives, a couple of HP-branded LSI SAS HBAs, and an HP-branded Intel X520-DA2 10G SFP+ NIC.
It idles at ~160W, and while it's not exactly quiet, a single closed wooden door between me and it is enough to muffle it so I can't hear it when sitting 2m from the door. Mind you, this is only achievable because of the HP-branded SAS controllers and NIC, as otherwise iLO will automatically turn the fans up to 40%.

BlankSystemDaemon fucked around with this message at 21:42 on Aug 19, 2021

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

My main NAS is an R720 with 2 x 12 core Xeons and 128GB of RAM + Spinning disks and under load maybe hits ~350-400 watts.

It hosts the storage for my M1000e Bladecenter
That's not under full load, right? I've seen my DL380p Gen8 hit over 1000W during full load using both PSUs, but that was with a GPU that I'm no longer using.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Nah. I just have personal issues.
Heck, :same:

BlankSystemDaemon
Mar 13, 2009



Pile Of Garbage posted:

I assume most servers these days come with M.2 NVMe SSD sockets so should be simple enough to use those instead whilst still avoiding the need for a SATA/SAS drive.
You'll be hard-pressed to find server boards without any SATADOM connectors, and two of them is by far the norm, for the exact reason that it's the easiest way to add two SSDs in a mirror for ESXi or another type-1 hypervisor to boot from.
ESXi is designed to load its OS into memory and run entirely from there, only writing to disk when the configuration is changed and saved, so there's hardly any point in using NVMe.
The marginal boot-time gains aren't going to mean anything, because any company that's serious about nines will be doing live migration.

SlowBloke posted:

Well the writing was on the wall, hopefully they will not start requiring stupidly big local disks to run.
You can't live-migrate VMs that use local storage, and live migration is one of the big selling points of virtualization - so I don't think there's any need to worry about that.

BlankSystemDaemon
Mar 13, 2009



Number19 posted:

Lol how does something like that make it to production?
That's pretty simple to explain: Their testing environment is incomplete.
Of course, that's not something that's unique to VMware, because there's no such thing as too many tests - even if you have unit tests and integration tests for everything, there are combinations of edge cases that can only be hit by fuzzing, and even then you're not guaranteed 100% coverage.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Man, say what you want about Xen, but I've never had an update "break" the hypervisors or vms yet.
Same with bhyve and nvmm for me.

BlankSystemDaemon
Mar 13, 2009



I'd like to point out that NFS is supported in Windows, nowadays - so maybe that's worth looking into?
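
Once the Client for NFS feature is enabled, it's a one-liner (server and export names are made up):

code:
:: Sketch only - requires the "Client for NFS" Windows feature; server/export are placeholders
mount -o anon \\fileserver\export Z: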

Mr. Crow posted:

Are you trying to make it accessible to the host? If you're just trying to do a full passthrough, you'll want to use virtio. This is how I'm doing it on one of my machines but there are a dozen ways to do it.

code:
<disk type='block' device='lun'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/wwn-0x5000cca250ef1816'/>
  <backingStore/>
  <target dev='sda' bus='scsi'/>
  <alias name='scsi0-0-0-0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
  <alias name='scsi0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</controller>
Important lines are <driver name='qemu' type='raw' cache='none' io='native'/> and model='virtio-scsi' in the controller.
VirtIO isn't full passthrough, though - it's very explicitly peripheral paravirtualisation, with all the downsides associated with that.
With the added "benefit" that the block device you're suggesting adds even more buffering that keeps the data away from the disks in order to manipulate it, on top of what Linux does on its own - which risks data loss.

BlankSystemDaemon
Mar 13, 2009



Slack3r posted:

Hmm.. I forgot about NFS. I have a Samba share set up from sdc1 as an ext4 fs. All my test Windows instances can see that share just fine and are mapped to it. Last time I messed with NFS, I was able to gain root access on a university BSD server in the mid-90s. lol. Was able to mount / on my Linux box via dialup. Since I was root locally.... boom, done. Good times. I'm still wary of NFS.
Just because you can mount as the root user using NFS doesn't mean you have root access; in fact, that's very explicitly the point of the maproot function that's been part of the BSDs' NFS implementation since 1990 (if not before).
More than likely, you had access as the 'nobody' user, since that's historically the user that's been used for that, and the default UID of -2 means you get errors when trying to mount as root.
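
For the curious, that's this bit of exports(5) on FreeBSD (the path and network are examples):

code:
# Sketch only - map remote root to the unprivileged nobody user and restrict the export to one network
/export/data    -maproot=nobody -network 192.168.1.0 -mask 255.255.255.0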

NFS has changed considerably since then, because NFSv4 no longer sends UIDs/GIDs over the wire and instead uses usernames (well, unless you set vfs.nfsd.enable_stringtouid (or its equivalent on not-FreeBSD), which exists to fix interoperability with NFSv4 implementations that still use UIDs/GIDs over the wire).

Also, Rick Macklem, who documented the thing about mapping the UID of root above, is still the maintainer of FreeBSD's NFS implementation to this day.

BlankSystemDaemon fucked around with this message at 15:14 on Nov 5, 2021

BlankSystemDaemon
Mar 13, 2009



GreenBuckanneer posted:

How is the performance of setting up a linux distro as the main system, firing up some sort of virtualbox on that, installing windows 10 on that, and then giving it as much resources as possible, and gaming on the VM?

I haven't tried it but I've been thinking about doing it for years.
When I did some testing on my setup, I found a 4-5% difference (median ~4.42% if memory serves) at 95% confidence over 10 runs with a complete reboot between each run, so it's measurable but not important unless you're already resource-starved.
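
If anyone wants to do the same kind of comparison, ministat(1) in FreeBSD base is built for exactly this: one measurement per line per file, and it tells you whether the difference is significant at the confidence level you pick.

code:
# Sketch only - e.g. average FPS per run, one number per line in each file
ministat -c 95 baremetal.txt virtualized.txt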

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

I'm setting up my NAS with TrueNAS Scale, which uses Kubernetes.

I never worked with it, and it probably has a reason why it is the way it is, but holy hell is it convoluted. If I do kubectl describe nodes, the Memory Requests column is the actual memory the container is using right now, right? It doesn't just plain echo the requests: value in the resources: section in the deployment YAML, does it?

i.e. this:



Also, what's k3s-server and why is it using 20% CPU while the Kubelets are doing practically nothing?
I'm just gonna quote some nerd from another thread:

BlankSystemDaemon posted:

kubernetes is made for hyperscalers that need massive scale-out orchestration
nobody else should ever touch it, ever, on penalty of being tickled

Methanar posted:

It's incredible we've progressed from containers being good because they're lightweight and portable to just fuckin shipping entire single-node kubernetes stacks as a distribution mechanism.
i mean, that's what universal binaries, flatpaks, rust, go and basically anything with vendored dependencies et cetera are
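
That said, to answer the actual question: the Requests column in kubectl describe nodes is just the sum of the requests: values from the pod specs scheduled on that node, not live usage - live usage comes from the metrics API, assuming metrics-server (or the equivalent that k3s bundles) is running.

code:
# Sketch only - <nodename> is a placeholder
kubectl describe node <nodename> | grep -A 8 "Allocated resources"   # requests/limits summed from pod specs
kubectl top node <nodename>                                          # live usage via the metrics API
kubectl top pods -A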

BlankSystemDaemon fucked around with this message at 01:12 on Dec 9, 2021

BlankSystemDaemon
Mar 13, 2009



Keito posted:

Lightweight compared to running a whole VM, you mean. Everything is relative.
If anything, since the introduction of containers on Linux, they've grown more heavy-weight rather than light-weight:
Service jails on FreeBSD, where you have a single binary executable and its configuration file(s) in a container, have been a thing since 1999.
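
For a sense of how small that is, a service jail is basically this much jail.conf(5) - the daemon, paths, and address are placeholders:

code:
# Sketch only - a minimal single-daemon service jail; names and paths are placeholders
mydaemon {
        path = "/jails/mydaemon";
        host.hostname = "mydaemon.example.net";
        ip4.addr = "192.0.2.10";
        mount.devfs;
        exec.start = "/usr/local/sbin/mydaemon";
        persist;
}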

BlankSystemDaemon
Mar 13, 2009



Mr Shiny Pants posted:

It's a poor man's Erlang. :)
That's the best description of kubernetes I've ever heard.

BlankSystemDaemon
Mar 13, 2009



Mr Shiny Pants posted:

Clustering physical machines and running microservices on top of them is Erlang in a nutshell. Erlang is just one VM (BEAM) that can run actors (microservices that pass messages) with supervisors (if you use OTP). You need to program in Erlang but from a technical standpoint they are roughly comparable at what they accomplish. Erlang is elegant though (like really elegant), Kubernetes is not IMHO.

All that is old is new again.

It was more of a reply to: This Kubernetes thing, at least outside of the alluded "hyperscaling setting", feels like someone's pulling a huge practical joke on me/us.
I got that feeling as well. :)
This is just rumour, but with Facebook having moved WhatsApp from the FreeBSD+Erlang setup that Jan Koum originally built to their own in-house PHP-on-Linux solution with Twine (their scale-out container-management system), they've supposedly had to throw 100-200x the number of machines at it to handle the same level of traffic.

BlankSystemDaemon
Mar 13, 2009



Bob Morales posted:

It's like Microsoft buying HoTMaiL all over again
You're not wrong.
But wait, does this mean Facebook is going to have an apparent face turn like Microsoft did, before doing a slow heel turn like Microsoft is doing?

BlankSystemDaemon
Mar 13, 2009



The only legitimate reason to restart a computer nowadays is when the kernel binary executable has been updated.

BlankSystemDaemon
Mar 13, 2009



Broadcom is so bad that they managed to infect Avago: Avago sold off its networking business to Intel, acquired Broadcom, and then renamed itself Broadcom.


BlankSystemDaemon
Mar 13, 2009



I don't know Cohesity, but if a backup system doesn't let you practice restoring the data easily and programmatically, so that you can assure yourself that 1) you can do it when the poo poo hits the fan and 2) it actually works and isn't just a theoretical backup, then I'm not sure it's worth paying for.
If it does both, it's better than a lot of other software you could spend money/time on.
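
To put that in concrete terms, the bar is being able to script something like the following against it on a schedule - 'backuptool' is a made-up stand-in, since I don't know Cohesity's CLI:

code:
# Sketch only - 'backuptool' is a hypothetical stand-in for whatever restore CLI the product ships
backuptool restore --snapshot latest --target /restore-test
# verify against a checksum manifest generated at backup time, so the check
# doesn't depend on the tool's own idea of success
sha256sum -c /backup-manifests/latest.sha256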
