Buttcoin purse
Apr 24, 2014

I run CentOS 7 on my desktop but occasionally need to do things in Windows. Currently I've just got a "slightly dated" Windows XP VirtualBox VM I keep isolated from the network. Since Microsoft really really wants me to upgrade to Windows 10 before the end of the month, and I have a spare Windows 8 license, I was thinking of installing Windows 10 in a VM using the Windows 8 key so at least I have Windows 10 at no cost (well, other than my time and my hair) if I can stand using it.

So far my experience with Windows 10 as a guest has been:

VirtualBox 4.3: seems awfully slow to do things like install Visual Studio, but the UI felt responsive
VirtualBox 5.1: blue (or some color, it went by so fast I don't know) screens quite often
KVM (the stock qemu-kvm-1.5.3, not the virt-preview version): I think it was faster, but the UI felt frustratingly laggy, like there was a slight but annoying delay between moving the mouse and the cursor moving on the screen

I guess it doesn't help that my PC is pretty old, I can throw plenty of RAM at the VM but I don't have any more MHz :v:

Does anyone have any suggestions about which hypervisor I should use or ways to improve performance?


Buttcoin purse
Apr 24, 2014

Varkk posted:

What display protocol are you using for the VM? I have always found that running the VM and then connecting to it by RDP gave the best results for a Windows virtual.

I always used to use RDP for VMs, but then a few years ago I used VMware Workstation for a while and the console worked so well I didn't bother setting it up, and then I moved to VirtualBox and KVM at different times and they were both fine with Windows XP and 7 guests too.

With this Windows 10 testing I've been using KVM's and VirtualBox's VM consoles. In the case of KVM it's virt-manager, and I think it should be using SPICE.

I tried RDP with a VM on KVM and it certainly was a lot nicer. I mean everything is still slow because of my slow PC, but the mouse cursor could keep up. I think that was just the killer for me and RDP fixes it. Thanks for reminding me about how things used to work for me, I seem to remember using the console always used to be a terrible idea in the past, maybe that was back in a time when my PC's specs relative to the OS I was trying to run were bad again.

:siren: However, I haven't had a chance to play around with this fully, but is xfreerdp on CentOS 7 defaulting to redirecting any USB device that gets plugged in?? It sure seems like that was what was happening, and that doesn't seem like a nice default.

fletcher posted:

Do you have the "use host i/o cache" and "solid-state drive" checkboxes ticked in the VirtualBox settings for your VM?

No and no. Thanks, I'll look those up!

quote:

How many CPU cores, RAM, and VRAM did you give it and what's your host machine specs?

The host is very old :v: Core 2 Duo 2.13GHz with 8GB of RAM. At various points I gave the VM 2 or 3 GB of RAM, but it never seemed to get close to using 2, and I don't think I tried giving it more than 1 core under VirtualBox. As for the display settings, with VirtualBox 4.3 I have it set to 128MB right now. One interesting thing I noticed: when I had a Command Prompt open behind the Start Menu (which has transparency) and it was scrolling a directory listing, there was really bad flicker (it seemed like some part of the screen was being cleared and then redrawn). I'd also seen similar issues elsewhere randomly, so at one point I turned off 3D acceleration and that seemed to help.

quote:

I've never used Windows as the guest OS, but on my Linux VMs I hardly notice any performance difference from my host machine for 99% of what I need to do.

I guess I can't say I don't notice a slowdown, but it's not normally annoying. In this case I haven't tried Windows 10 directly on this machine, but I have a laptop with similar specs, and Windows 10 seems to be much more responsive on there even though it's using the fallback graphics driver.

Buttcoin purse
Apr 24, 2014

DeaconBlues posted:

So do the likes of VirtualBox pass motherboard information through to the guest? My guess is that it'd be generic information and you'd be throwing money away using a license key for a 10 VM, but what the conglomerational telemetry do I know?

Good question. I assume some unique-enough things are passed through to the guest, like the CPU type, but then KVM lets me specify that and I gather I need to use Core 2 Duo for Windows 10. On Windows XP running under VirtualBox, I noticed that the VirtualBox version number was showing up in some parts of the device information for things like IDE controllers, so I guess that might be one area where you'd have differences between different people's VMs. I guess like you say there's a good chance someone has already activated a VM with similar enough virtual hardware to mine that mine will activate automatically? :shrug: I assume if they notice multiple systems seem to be fetching updates for that single license at the same time they might decide to deactivate that system though? I suppose those are questions for a Windows thread.

Back to something we can actually know answers to rather than just guess:

so loving future posted:

Yeah, agreed. If you have real longer term needs maybe think harder about this choice. If you're mostly trying to avoid writing a shell script that calls ssh a bunch, use Ansible.

Ansible looks fairly nice and simple to me from the quick look and play I've had, I assume it probably wouldn't be hard to migrate whatever I do in Ansible to Salt or something later? By migrate, I mean copy-and-paste, reformat configuration, etc.

Buttcoin purse
Apr 24, 2014

DeaconBlues posted:

So do the likes of VirtualBox pass motherboard information through to the guest? My guess is that it'd be generic information and you'd be throwing money away using a license key for a 10 VM, but what the conglomerational telemetry do I know?

I installed Windows 10 on a KVM VM, didn't tell it a license key, and it said it couldn't activate because I didn't have a "digital entitlement" or whatever, so I guess nobody had already activated with the same hardware :shrug: I didn't particularly want it to work because I don't want to end up using someone else's license key.

Buttcoin purse
Apr 24, 2014

bobfather posted:

So I’m curious as to what people think about selinux? I’m running CentOS 7 trying to learn about Linux, and there’s a shocking number of things that selinux appears to gently caress with. To name a few:
...
As a know-nothing Linux beginner, am I in for a world of pain with CentOS? Is there a more friendly distro I should try instead?

Just to add to/reinforce some of the other comments (including your own):

- I think the only pain you might have with CentOS is it will lack some packages, and have older versions of some packages. To offset that pain, you don't have the pain of having to upgrade so often and having things get broken by regular updates like you do with Fedora. I chose to move from Fedora to CentOS because I no longer want to spend all my time fighting with the OS, I actually want to do things with it. And I also had a proven track record of not getting around to doing distribution upgrades when Fedora stopped supporting whatever version I was on. At least with CentOS I can leave those upgrades for quite a few years instead of having to do them once a year with Fedora.

- Lots of people I know think SELinux is too hard and they just turn it off. I do find it painful but I think it's worth the pain, so I leave it on. In CentOS 7, I've only had minimal issues with SELinux.

- If you're still considering other distributions, I gather that Ubuntu uses some alternative to SELinux which anecdotally might be less annoying (maybe some other distributions use it too), but I have no idea how it compares to SELinux in terms of effectiveness.

Buttcoin purse
Apr 24, 2014

telcoM posted:

Or maybe you can bring that subnet to your host. You could use a second physical network interface for that, or set up a VLAN on top of your existing physical interface. Then you can bridge your guest to that interface.

You can still just attach two IP addresses to a single interface too, without using VLANs, right? I mean if there are two separate IP networks on the same physical network.
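For reference, CentOS 7's network scripts even support this directly with numbered IPADDRn/PREFIXn entries - a config fragment along these lines (addresses made up):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (fragment)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR0=192.168.1.10
PREFIX0=24
IPADDR1=10.0.0.10
PREFIX1=24
```

For a one-off test you can do the same thing at runtime with "ip addr add 10.0.0.10/24 dev eth0".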

Buttcoin purse
Apr 24, 2014

Kaluza-Klein posted:

Yeah separate broadcast domains are for scrubs :rolleyes:.

Yeah I was kind of overwhelmed by the number of things I could say about why people don't normally do that, and gave up trying to think of what to write :shrug: I don't think we ever figured out what the OP was trying to do did we?

DeaconBlues posted:

Brilliant! I've installed smartmontools and I'll have a play with smartctl. Thanks.

smartmontools comes with a daemon, smartd, which you might want to take a look at. I don't think it will actually do offline scans (which is what I think you were asking for), but it does do periodic checks of the SMART attributes and sends out mail when they change, for example.
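To give a concrete idea, here's a minimal /etc/smartd.conf line in the spirit of that (the device name and mail recipient are just examples):

```
# /etc/smartd.conf - monitor /dev/sda: track all SMART attributes (-a),
# mail root when something looks wrong (-m), and also schedule a long
# self-test every Sunday at 2am (-s, optional).
/dev/sda -a -m root -s L/../../7/02
```

The -m option obviously assumes the box can actually deliver mail somewhere you'll read it.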

TimWinter posted:

I want to 3d render a new home I'm finalizing the sale of now. It's both for figuring out what furniture and paint colors make sense, and possibly designing expansions or additions.

I'm honestly thinking of breaking out HL2's source tools and doing basic map making through that, as I'm a programmer by day and for some reason I'd like something I can hack on a bit. Also, as a programmer, I have no idea how to scope projects and think this will be easy.

I know the feeling, you're used to some game mapping tool and these 3D tools aren't easy to learn :v: I once came across BRL-CAD, which is open source, cross-platform, and uses constructive solid geometry (CSG) modelling like we're used to from our game engines (at least I think that's what HL uses, from a quick web search), although I never actually got beyond installing it so I can't tell you if it's any good.

Buttcoin purse
Apr 24, 2014

alienhunter3 posted:

I could run apache on a computer at my house and control it via ssh from anywhere.

You can do that with Windows, although there are a lot fewer people doing it so it's harder to get help. I shouldn't have wasted so much time with Cygwin :suicide:

Buttcoin purse
Apr 24, 2014

ToxicFrog posted:

I set up cygwin sshd on windows 7 and 10 machines recently, and it was really easy -- install cygrunsrv, run ssh-host-config, start it as a windows service. What went wrong when you did it?

If I recall correctly I also ran named/bind, ircd, tftpd and some other rubbish, and most importantly it was on Windows 95 :v: Windows 95 didn't have services, so while Windows was showing the login prompt, a console would appear below it running a script where all my daemons would start. The time I invested in that could have been spent learning Linux better, but oh well I was young and having fun and didn't have anything else important to do.

Buttcoin purse
Apr 24, 2014

Saukkis posted:

I also suspected that the adapter would be limiting the speed, but the adapter fuf linked supposedly supports 500 Mbps, so it would be an unusually sad scam if it only had 100 Mbps ethernet.

[edit] And now that I've scrolled the product further down I notice the feature comparison table and see that this model actually does not have a gigabit port.

I've got another brand which claims 400Mbps but only has 100Mbps ports, but each adapter has two ports, and I suppose if I got another pair then I could bond 4 ports to kinda get 400Mbps? Otherwise I don't know how their math works.

I'm not planning to actually try bonding them though.

Buttcoin purse
Apr 24, 2014

Toalpaz posted:

I was wondering if there were other [Linux distributions] that would be regarded as more hard than ubuntu but less hard then gentoo and arch-linux?

I think it would be fair to say that the answer is "all of them".

evol262 posted:

It's notable, though, that still only a minority of people doing "real" stuff with Linux (development, :yaycloud:, etc) use those distros.

Yeah, I like things to not break frequently so I can get on with things I want to do, so I use CentOS. Sure, I might not get a new feature or bugfix for 5-10 years but I'll probably have gotten used to the bug or developed a workaround in that time. I'd rather have the same bug for 5 years than a new one every month :v:

Which is not to say there aren't ways of getting some newer stuff on CentOS, e.g. newer compilers and development tools are available via SCL/DevToolsets.

Buttcoin purse
Apr 24, 2014

Toalpaz posted:

sorry I'm using Arch and making GBS threads up the thread with my problems.

You never need to apologize for posting comedy on the comedy forums.

Sorry for laughing at you though. This is also educational, and not just in a "reasons not to use Arch" way - I never realized "doesn't work well with Wayland yet" was a thing as I got sick of being on the bleeding edge. I expect all the problems with it will be ironed out by the time Red Hat unleashes it on us CentOS users :flashfap:

Buttcoin purse
Apr 24, 2014

I keep hearing about how great ZFS is because it has checksums of some sort, but I'd prefer to stick to a mainstream filesystem, preferably one supported by Red Hat, so I think I'll stick with XFS because I'm a coward. Are there any better options than going into every directory, running sha256sum * > sha256sums.txt, and then using sha256sum --check sha256sums.txt later? For example, a tool that can do that recursively for me, and maybe tell me if there are any files that don't have checksums stored, or automagically work out which ones need to be recalculated?
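In case it helps anyone, here's roughly what I mean, done recursively with find - a sketch using a throwaway directory (swap in your real tree):

```shell
# One checksum file at the top of the tree, covering everything below it.
dir=$(mktemp -d)                        # throwaway demo tree
mkdir -p "$dir/sub"
echo "hello" > "$dir/a.txt"
echo "world" > "$dir/sub/b.txt"
cd "$dir"
find . -type f ! -name sha256sums.txt -exec sha256sum {} + > sha256sums.txt
sha256sum --check --quiet sha256sums.txt && echo OK   # prints OK
```

--quiet only prints failures; finding files that have no checksum stored would still need something extra, like comparing find's output against the paths listed in sha256sums.txt.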

Buttcoin purse
Apr 24, 2014

When I saw that somebody called apropos man had replied to my post, I was worried that there was a lmatfy.com!

I do have semi-regular backups (it only took a few decades), but I was thinking that checksums would help me work out which copy was the correct one if I have two accessible copies but one is bad. I suppose this optimistically assumes I don't just have a total disk failure where I can't read anything at all like I usually do. Also, in lots of cases, applications can tell you when the file is corrupt, so that's a good way to work out which file is the correct one, but this isn't always the case, or isn't always that easy to check.

Buttcoin purse
Apr 24, 2014

ToxicFrog posted:

First of all, yes there is a program that does this already, called hashdeep. It does both the "compute checksums for everything in a directory tree" part and the "compare results against known good checksums" part.

I saw that, but it sounded to me more like the kind of software you use to work out "C:\WINDOWS\SYSTEM\BLAH.DLL is BLAH.DLL from Windows XP SP3" with a big set of known checksums you get from the Internet, rather than comparing each file against the checksum for that particular file. Does it also do the latter?

quote:

The biggest flaw in this plan that I see is: how do you tell the difference between a file that fails to checksum because of silent corruption, and a file that fails to checksum because you modified or replaced it and forgot to regenerate the checksum file?

I was only planning to use this for files that I don't expect to change, so yeah not exactly 100% coverage of my filesystem :v:

quote:

It also seems kind of weird to call ZFS "not mainstream" when it's been available on Solaris and BSD (and battle tested in datacenters) for a decade; it's only linux support that's relatively new. It's even officially supported by Ubuntu. :v: (It's not supported by Red Hat, but the ZFSonLinux project does release RHEL 6 and 7 packages.)

Oh yeah, sorry, I meant ZFS on Linux isn't mainstream, or at least I didn't think so, but I guess if Ubuntu supports it, then it's getting there?

Buttcoin purse
Apr 24, 2014

ToxicFrog posted:

Yes.


Oh, that's a different sack of ferrets. (And you might also want to look into something like Tripwire.)

Thanks, and I can't believe I didn't think of Tripwire.

Buttcoin purse
Apr 24, 2014

fletcher posted:

This was on RHEL7. I'm not using a package based install though, I'm installing nginx from source.

Are you aware that there are a few options for getting various versions of nginx from Red Hat or at least EPEL, which is Fedora-based? EPEL has 1.10.2, and there are SCL packages for 1.6 and 1.8.

Buttcoin purse
Apr 24, 2014

anthonypants posted:

Is there any reason I shouldn't be using Continuous Release packages for CentOS 7?

I don't really know, but since nobody answered (as far as I noticed):

I'm guessing that since they're just rebuilding Red Hat packages, there shouldn't be too many issues, and any issues that do exist are probably glaring ones, like things not installing due to broken packages. That's just a guess, though.

I recently got tired of waiting for the Firefox security fix which was blocked behind them releasing CentOS 7.3, so I installed Firefox or maybe everything (I can't remember exactly) from CR and didn't have any problems.

I haven't actually enabled the CR repository permanently, but I'd consider it, even though I really don't like things to be broken on my machines (which is why I use CentOS in the first place), because I have so few problems with CentOS generally.

Buttcoin purse
Apr 24, 2014

Sheep posted:

Trying to do a CentOS install via kickstart on some wonky machines with NVIDIA cards; the text installer (it's all automated) seems to be dying at some point in the install on these machines but I can't tell because the screen is garbled. First time I've seen something like this in text mode.

Maybe you could try getting it to use a serial console (e.g. console=ttyS0,115200 on the kernel command line) - perhaps it won't make the install work, but you'll at least be able to see where it went wrong?

my bitter bi rival posted:

code:
{
  package => 'package',
  version => '1.0.0',
  commands => [
    ['jslib', '.'],

For the record, that looks like Perl, but since you already have a working solution there's probably no point in suffering through turning it into a slightly smaller one in Perl :v:


This is what I came here and caught up on the last few pages of the thread for. It's true, there's no KDE :suicide:

I really don't think I want to get to know how GNOME 3 works. Are there any other options that don't involve moving to another distribution?

Using KDE on CentOS in the past meant I didn't always have all the config tools, but I didn't mind running some GNOME/GTK-based config tools under KDE, and I wouldn't mind having to run some of them under some other window manager or desktop environment on CentOS 8.x.

I suppose by the time CentOS 7.x is no longer supported, EPEL might include KDE for CentOS 8.x if I'm lucky. Or maybe IBM will prevent CentOS 8.x somehow.

Buttcoin purse
Apr 24, 2014

RFC2324 posted:

Don't use RHEL/CentOS on the desktop, use Fedora.

But I don't want to be a beta tester and I don't want everything to be broken once a year, I want stable software and to only have a major upgrade once a decade or so.

Maybe I should learn Debian and try out their LTS releases, I gather they're supported for 5 years?

Perhaps to make up for the sin of not joining in the collective beta testing effort, I could run Fedora in a VM and do... something with it. I already run some things under Fedora, but I really want important stuff like the machine booting and my email and web browser running to just work.

Salt Fish posted:

What is the execute bit actually used for in linux? I learned in school that it's required to have the system run a binary, but what is the practical use case for it? I was thinking today that a user who can write to any file with the +x bit can just copy binary contents and then run the code they want as if it originally had the +x bit:

One thing you can perhaps do is prevent the user from writing anywhere other than their home directory by not putting them into the necessary groups and not having any directories outside /home that are world-writable, and then have /home mounted with the "noexec" flag:

code:
noexec Do  not  allow  direct  execution of any binaries on the mounted
       filesystem.  (Until recently it was  possible  to  run  binaries
       anyway  using a command like /lib/ld*.so /mnt/binary. This trick
       fails since Linux 2.4.25 / 2.6.0.)
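If you go that route, the /etc/fstab side of it would look something like this (device name is an example; nosuid and nodev are commonly thrown in alongside noexec):

```
# /etc/fstab - /home with direct execution of binaries disabled
/dev/sdb1  /home  xfs  defaults,noexec,nosuid,nodev  0 0
```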

Buttcoin purse
Apr 24, 2014

I remember when I used to use tcsh, you had to run the command rehash for it to see new binaries in the PATH. Is that still a thing in any shells? I've been using bash for a long time now.
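For anyone curious, bash keeps a similar cache of command locations, and hash -r is its equivalent of tcsh's rehash - it just rarely bites because bash falls back gracefully in most cases. A little demo with throwaway scripts (paths and names made up):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/a" "$dir/b"
printf '#!/bin/sh\necho first\n'  > "$dir/a/hello"
chmod +x "$dir/a/hello"
PATH="$dir/b:$dir/a:$PATH"
hello              # bash finds $dir/a/hello and caches it: prints "first"
printf '#!/bin/sh\necho second\n' > "$dir/b/hello"
chmod +x "$dir/b/hello"
hello              # cache hit: still "first", despite $dir/b coming earlier in PATH
hash -r            # flush the cache (the tcsh equivalent is rehash)
hello              # now "second"
```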

Buttcoin purse
Apr 24, 2014

Mr.Radar posted:

The issue I think is you have two different package managers which don't (can't) talk to each other: Cargo and apt-get. apt-get is the Debian/Ubuntu system package manager, which manages the kernel and all the system packages (pretty much everything under /bin, /lib, etc) and knows how they all depend on each other. Cargo is the Rust package manager which knows everything about Rust packages and how those depend on each other. The disconnect is that Cargo may install packages that depend on packages managed by apt-get, but since they're two separate package managers Cargo can't automatically install those dependencies. The reason Cargo can't just call apt-get is that you may be running it on a system with a different package manager (dnf on RedHat, pacman on Arch, brew on MacOS, etc.), and packages also tend to have different names/naming conventions under different system package managers. You'll run into this issue pretty much any time you use a programming language-specific package manager to install a package with system dependencies.

I've seen and used one sort-of solution to this type of problem, http://cpanspec.sourceforge.net/, which will take a package from Perl's CPAN packaging system and generate an RPM .spec file you can use to build an .rpm for the Perl package. Unfortunately it's not perfect and doesn't always get the dependencies right, but you can always go and fix up the dependencies in the .spec manually if you find out that they're insufficient.

Perhaps there's something like this for taking Cargo packages and generating .deb packages (or their source files)?

Buttcoin purse
Apr 24, 2014

Mr. Crow posted:

I'm new to KVM and trying to get libvirt+qemu up and running and am having a weird issue; was hoping someone could shed some light?

I have several NICs on the same machine, two are trunked and one is not. The not-trunked port and subsequent stack / VMs are great fine and wonderful.

The other two are identical and the network stack is basically:

eth0 --> eth0.1 --> br0 --> (vnet0 --> VM)

My libvirt network is a simple bridge.

I've run tcpdump and all the packets looks fine as far as vlan tagging goes.

While in the VM when pinging anything but br0, I get no response; however if I am running tcpdump on my router on the relevant interface, communication and everything is fine? The gently caress? Why would running tcpdump impact this at all? What am I looking over (is this a routing issue)? I can post configs as necessary.

What is this router you're running tcpdump on - another physical machine connected to eth0?

Running tcpdump will by default put the interface in promiscuous mode, meaning that at layer 2 it will receive frames that aren't addressed to your host (without promiscuous mode it will only receive frames with the interface's MAC address, or broadcast or multicast frames). So it makes sense that it could work around whatever this problem is.

Maybe if you tell tcpdump not to enable promiscuous mode (the -p / --no-promiscuous-mode option), it will help you to figure out where the packets are getting lost.

e: vvv :psyduck:

Buttcoin purse fucked around with this message at 00:49 on Jan 10, 2019

Buttcoin purse
Apr 24, 2014

Shaocaholica posted:

Can't seem to find any static builds for MPlayer for RHEL 6 and RHEL 7. Is there a better tool to play back rando videos?

My solution to this is to run Fedora in a chroot (on CentOS 7, not tried with 6) so I can install all the stuff like that which you can't otherwise get (but have to create a new chroot every year due to the Fedora lifecycle).

It's like a "this is why people don't use Linux on the desktop" project though. If you want I can write up some instructions. Basically it involves running rpm and yum with options to make them chroot and install the Fedora packages you need for a base install (straightforward, the challenge is just in knowing that you can do this), setting up schroot so you can get into the chroot to do other things (not too hard), configuring schroot to set up some additional things so that PulseAudio works inside the chroot (somewhat obscure), and at least in my case getting the right nvidia libraries in the chroot.
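To give a flavour of the bootstrap step, here's a sketch only - it has to run as root, and the release number, path, and repo setup are assumptions (you need Fedora repo definitions visible to yum, e.g. from the target version's fedora-release package):

```shell
CHROOT=/srv/fedora                      # hypothetical location
mkdir -p "$CHROOT"
rpm --root "$CHROOT" --initdb           # create an empty RPM database inside the chroot
yum --installroot="$CHROOT" --releasever=28 install -y @core
chroot "$CHROOT" /bin/bash              # or configure schroot for day-to-day use
```

The schroot, PulseAudio, and nvidia parts are the fiddly bits I mentioned and don't fit in a few lines.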

On the plus side, just about everything I've ever wanted to do in the chroot has worked, other than installing Steam. With various Fedora versions, I've used this to run kdenlive and other video-related tools, various games, Amarok (also available on CentOS, but not with MP3 support), and other things I've forgotten.

I should probably start using Docker though :v:

Buttcoin purse
Apr 24, 2014

nem posted:

Supposedly it’s in EPEL?

Actually the article says it's in the repo "nux-dextop" (which I've never heard of before).

e: While trying to install mpv, I noticed that mplayer itself is available from the rpmfusion-free-updates repo. I can't remember why I don't use that - I think it might be because not all of the codec packages are available, or it's too old. I guess I should try it again and report back.

Buttcoin purse fucked around with this message at 03:18 on Feb 6, 2019

Buttcoin purse
Apr 24, 2014

Yeah, I know I'm not using a desktop distribution, and it was a dumb way to say that it was a lot of screwing around.

Buttcoin purse
Apr 24, 2014

Major Ryan posted:

Nux is a guy who's done a lot of stuff for CentOS with extra packages. The repo or some version of it has been around for years. It's definitely unofficial, but it's one of the good ones.

I've used it in production for packages where someone's been desperate for a GUI on a CentOS box; I can't remember it ever causing any problems.

Thanks, nice to know.

Looking forward to seeing whether any of these repos include KDE for RHEL 8 when the time comes :ohdear:

Buttcoin purse
Apr 24, 2014

https://mintty.github.io/ "is based on code from PuTTY 0.60" (and PuTTY itself is definitely a good terminal emulator). It's the default terminal included with Cygwin, where I've used it a reasonable amount and it seems nice enough, though I haven't really been a Cygwin power user since they started including it. Apparently you don't need to use it with Cygwin, though - it says you can use it with WSL. So you'd essentially be using the same terminal emulation as Computer viking suggests, but without SSH in the middle.

For actual Windows applications though "mintty is not a full replacement for the Windows Console window (by default running the Windows Command Processor / command prompt / cmd.exe). While native console programs with simple text output usually work fine, interactive programs often have problems ...".

Oh also it has "SIXEL graphics display support" since apparently that's a thing someone upthread cares about :v:

Buttcoin purse
Apr 24, 2014

apropos man posted:

Keep a text file containing the partition layout.

I should do this in my backup system too. https://unix.stackexchange.com/questions/12986/how-to-copy-the-partition-layout-of-a-whole-disk-using-standard-tools has some information about getting partitioning tools to generate a machine-readable dump of the partition tables.

Buttcoin purse
Apr 24, 2014

BaseballPCHiker posted:

I really foobared my first install by going sudo apt-get crazy and not knowing what I am doing so I'm going to format and try again.

It sounds like this was probably a one-off and people have explained how you can get Steam in a better way, but if you ever want to do any more crazy experimentation, you might want to consider installing in a VM (under VirtualBox or something) and then taking a snapshot of the entire VM before trying anything crazy, then if you break stuff there's basically one big undo button to go back to a previous state of the VM.

You most likely wouldn't want to be playing with Steam in a VM though, unless you're just playing some old DOS games or something :v:

apropos man posted:

5. The volume containing backed up files is much smaller than making a copy of a 20G or 40G qcow for each VM.

I'm not sure that that should be the case. Ignoring the effect of VM snapshots and files that have been deleted/shrunk/etc. in the VM, if you did a sparse copy of the .qcow file shouldn't it take up about as much space as the sum of its contents (files)? Sure, there will be some overheads, but they shouldn't be too drastic. And there are tools which can zero out the unused space to account for files that have been deleted/shrunk/etc. in the VM so that a sparse copy won't copy old data (or at least not very much of it).

Of course you'd need a backup tool that could support doing sparse copies, and unless it's a VM-specific tool it's probably not going to do any kind of useful incremental backups of .qcow files. Copying the entire .qcow file every day when you really just want the changed files copied would probably be slower than you want.

The ideal solution would be if Veeam supported .qcow, but it only supports VMware ESXi and Hyper-V as I understand it. I think it does stuff like changed block tracking and/or looking at the filesystem to work out which blocks are actually referenced and not reading the rest regardless of whether they're full of zeros or have old unreferenced data, but it still leaves you with just a disk image on the backup target which is just as good as if you'd done a full backup. If there's a solution out there which can be used for .qcow files, VirtualBox disk images, etc., I'd love to hear about it!
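On the sparse-copy point, it's easy to convince yourself with a throwaway file - a sparse copy only spends real disk on nonzero blocks (and tools like virt-sparsify exist to re-zero a guest's unused space first):

```shell
truncate -s 100M image.raw                  # 100M apparent size, no blocks allocated
dd if=/dev/urandom of=image.raw bs=1M count=1 conv=notrunc 2>/dev/null
cp --sparse=always image.raw copy.raw
du -h --apparent-size copy.raw              # reports the full ~100M
du -h copy.raw                              # only ~1M actually allocated
```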

gourdcaptain posted:

Inverse with Linux. NTFS has pretty widely distributed Linux drivers that are more mature (ntfs-3g, most distros include them by default); exFAT has some, but they're not as common and a legal mess.

I just want to second this in case anyone had any doubt. I can't speak to the legal mess part, but I seem to recall Microsoft having some fun with patents in that area.

Buttcoin purse
Apr 24, 2014

Some more nerdy information about the DISPLAY variable which is probably more relevant for the tech relics thread:

On CentOS 7 (not exactly a tech relic yet) with KDE, the plain old "Switch user" feature starts a new X server, so you end up with DISPLAY=:1 in the new one.

I never used dual-head setups until well after they were mainstream and worked without any effort on video cards with multiple outputs, but I found this old HOWTO which says one solution to dual heads with separate video cards involved multiple X servers and hence different DISPLAY settings. I guess you wouldn't be dragging your windows between them without some extra effort - I gather Xinerama was what eventually let you combine physical displays into one desktop.

This HOWTO seems to be about using multiple video cards and keyboards so you can have multiple people using consoles on the one Linux machine. I couldn't really be bothered reading something so crazy beyond the first few lines :v:

Buttcoin purse
Apr 24, 2014

Hollow Talk posted:

nobody uses emacs in non-x11-mode on servers.

Why not? I'm not looking at images and I can just use the terminal's mouse support, so who needs X? I spend a lot of my time using Emacs under screen (the terminal multiplexer). If it's okay to use vi in non-X mode, why not Emacs?

VostokProgram posted:

E: actually / is also extremely useful and you can use it in less and when reading manpages too

Or learn Emacs, and then use the info command instead of man, or just read man/info documentation and all text files from within Emacs :smuggo: (don't try that with really large text files though, and to be honest I still run man frequently too)

Buttcoin purse
Apr 24, 2014

VostokProgram posted:

Does anyone besides GNU actually use info?

The command? I doubt it, it's too hard to use unless you know Emacs, or at least Emacs's keystrokes (which do overlap a bit with bash's). I definitely prefer the info-format documentation - such as for bash and GCC - over plain man pages, but I read it in Emacs itself. For non-Emacs users, KDE's KHelpCenter can read that info documentation too. I suspect in the end that most people would just find that documentation online somewhere though.

Buttcoin purse
Apr 24, 2014

kiwid posted:

No I don't have to use 80.

Yet another option is to grant a "capability" to the php binary, stored in an extended attribute in the filesystem, allowing it to use low port numbers: https://stackoverflow.com/a/414258 I admit I've never followed this advice: "Read this long and hard if you're going to use capabilities in a production environment. There are some really tricky details of how capabilities are inherited across exec() calls that are detailed here."

That SO link probably has some other options not mentioned here too.
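For the curious, the mechanics look roughly like the sketch below. The binary path is a hypothetical stand-in, and setcap needs root, so the privileged command is shown as a comment rather than run; the runnable part just demonstrates that getcap on a binary with no capabilities set prints nothing.

```shell
# Sketch only -- granting CAP_NET_BIND_SERVICE to a (hypothetical)
# php binary so it can bind ports below 1024 without root:
#
#   setcap 'cap_net_bind_service=+ep' /usr/local/bin/php-server
#   getcap /usr/local/bin/php-server   # shows the stored capability
#
# getcap reads the capability back out of the security.capability
# extended attribute; on a plain binary it prints nothing:
{ command -v getcap >/dev/null 2>&1 && getcap /bin/ls; } > /tmp/getcap-demo.txt 2>&1 || true
echo "wrote /tmp/getcap-demo.txt"
```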

Buttcoin purse
Apr 24, 2014

xzzy posted:

So do the command line ones. On our compute servers that are running full tilt 24/7, yum corrupts the rpmdb on a steady percentage of nodes every week. It's single digit percentages, but still enough to be annoying.

Do you run 'yum update' all the time or something? I work with test infrastructure that often can't be updated, so yum doesn't get run much, and I've never experienced a corrupt yum or RPM database (probably just jinxed myself).

Buttcoin purse
Apr 24, 2014

General_Failure posted:

Quick question. on people's personal machines, where do you like to store all your big stuff, like source trees, downloads etc?

I tend to have a separate drive mounted, but some software seems adamant that the source and builds should live in my home directory. Personally I don't like my home to be clogged up with tens / hundreds of GB of source and misc. poo poo. Thoughts?

I tend to have a separate drive or directory outside of /home for that kind of thing too. In fact I have one for pristine download tarballs and Git repository clones, and if I'm actually building I untar/clone to a temporary location and build there. The only time I build out of my home directory is if it's something I'm going to modify and hence I want it to be backed up.

I don't recall ever encountering anything that cared that it wasn't in my home directory, neither open source stuff I've downloaded, nor crappy build systems I've used in various workplaces!

Can you just fool these weird build systems using symlinks that live in your home directory and point to where you actually want the build to take place?
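To spell out what I mean, the trick itself is just this (both paths are hypothetical stand-ins):

```shell
# Real tree lives on the big drive; the picky build system gets the
# home-directory path it insists on via a symlink
mkdir -p /tmp/bigdisk/src/myproject
ln -sfn /tmp/bigdisk/src/myproject "$HOME/myproject"
readlink "$HOME/myproject"   # -> /tmp/bigdisk/src/myproject
```

Whether a given build system resolves the link with realpath and complains anyway is another matter, of course.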

Buttcoin purse
Apr 24, 2014

The Dell XPS laptops from 5-10 years ago that I touched at work tended to overheat a lot, to the point that some models actually destroyed themselves. They all tended to be lower build quality and end up with bits of plastic broken off them. I'd much prefer their business laptops like the Latitude or Precision lines, which seem to be better quality and still work after 10 years. I assume you get less performance for your dollar, though I wouldn't actually know how much they cost.

I hadn't heard of System76 before, so I did a search for "system76 rebranded" (just on a hunch) and came across https://news.ycombinator.com/item?id=17039414 which says that at least in 2018 they sold a lot of rebranded Clevo laptops.

I've had experience with exactly one rebranded (not by System76) Clevo laptop and it's okay but definitely low build quality. It's my wife's, and admittedly she doesn't take great care of tech stuff, but a few things are broken on it now (the power button needs to be pressed in just the right way, one USB port needs the plug at just the right angle) and she's never broken anything before, even on the cheapest Toshiba laptop available (the Toshiba cost about $350, the Clevo over $2K).

Also, she had someone playing games on it while it wasn't plugged in to the charger, and it drained its battery extremely fast, made the laptop almost too hot to touch, dropped to 0%, and refused to ever charge again, so I suspect their battery charging circuit isn't great and is possibly bordering on unsafe. This particular system is running Windows, so it's not like the battery problem is due to having the "wrong", "unsupported" software on it.

Anyway I didn't fully read up about System76 using Clevo, you might want to do your own research. I found when I was buying the laptop for my wife that you could sometimes find equivalence information online saying that company X model Y of laptop is a rebranded Clevo model Z, and then you can look around for what people have to say about Clevo model Z.

Buttcoin purse
Apr 24, 2014

Computer viking posted:

The setting: A modern Fedora install, KDE, and a ruby rake buildscript started in a terminal. The ruby process was started as normal (so niceness 0), then made slightly less nice with "sudo renice -3" shortly after starting - and on checking, it is indeed nice -3.
It spawns a lot of child processes to do imageMagick things. Those child processes spawn with niceness -1. Shouldn't they inherit the -3 of the parent ruby process, or in the worst case the 0 it had before the renice?

The build script isn't doing nice -n 2 mogrify ... or something? Because that would probably explain it: the value you pass to nice isn't an absolute value, it's added to the current niceness, and -3 + 2 = -1, which matches what you're seeing.
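The relative behaviour is easy to demonstrate from any shell, no root needed, since you're only ever increasing niceness here (nice with no arguments just prints the current value):

```shell
# nice's -n argument is an adjustment added to the current niceness,
# not an absolute value -- nesting it makes that obvious:
nice -n 2 nice            # inner nice prints the current value: 2
nice -n 2 nice -n 3 nice  # adjustments accumulate: prints 5
```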

Buttcoin purse
Apr 24, 2014

cage-free egghead posted:

I use two mice, one regular gaming one and another ergonomic one. Obviously I'd rather not use the ergo one for gaming but is there a way to disable a specific device for games or something to that effect? Don't really want to unplug the dongle each time to disable it. With it plugged in, it basically pretends it's a controller? I'm not quite sure but I cannot play with it.

In addition to what others have suggested above, here's some basic low-level stuff which works for me, at least on CentOS 7: if you run xinput it should give you a list of all the input devices, which will hopefully include your ergonomic mouse. For example I can see this line with my mouse:

⎜ ↳ Microsoft Microsoft® Classic IntelliMouse® id=11 [slave pointer (2)]

You can run some really basic commands, passing in that full name string (which seems to be the vendor and name concatenated, where the name also includes the vendor name, so the vendor name appears twice):

xinput disable "Microsoft Microsoft® Classic IntelliMouse®"
xinput enable "Microsoft Microsoft® Classic IntelliMouse®"

When my mouse is disabled in that way, all input from it is ignored. If you can't find a better option maybe you can use those commands, possibly in a script wrapped around your games or something. Maybe some of the other options will give you some magic where different windows have different input devices, though, so this might be the last resort!
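If you go the wrapper-script route, an untested sketch is below (the device name is the one from my machine, so substitute your mouse's name from xinput). Since xinput needs a live X session, this just writes the wrapper out and syntax-checks it:

```shell
# Write out a wrapper that disables the mouse, runs whatever command
# you pass it, and re-enables the mouse afterwards (via the EXIT
# trap, so it fires even if the game crashes)
cat > /tmp/no-ergo-mouse <<'EOF'
#!/bin/sh
MOUSE="Microsoft Microsoft® Classic IntelliMouse®"
xinput disable "$MOUSE"
trap 'xinput enable "$MOUSE"' EXIT
"$@"
EOF
chmod +x /tmp/no-ergo-mouse
sh -n /tmp/no-ergo-mouse && echo "wrapper parses OK"
```

Then you'd run something like /tmp/no-ergo-mouse steam (command name hypothetical, obviously).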
