|
peepsalot posted:Does anyone have experience with video editing software on linux? I want to make youtubes, and I have not tried anything yet. Looking for recommendations. Kdenlive is the most feature complete and easy to use linux editor I've tried but it can be a bit crashy sometimes
|
# ? Feb 2, 2018 06:32 |
|
Boris Galerkin posted:I want to set up ssl on my website with the caveat that I want a different ssl certificate for a specific subdomain. So, I'd like example.com and www.example.com to use the same certificate, while test.example.com uses one specific to that subdomain. I'm going to be requesting these from Let's Encrypt and I was a bit confused about the part where I generate an account.key. As I understand it, this account key represents something like a user account at Let's Encrypt, so it makes sense to me that I only want one account.key that's shared between these two certificates. I'll generate two domain keys: example.com.key and test.example.com.key. Is this correct or should I be using two different account keys? Maybe the tool that gives your certs the rating thinks that using subject alternative names make it slightly less secure? I could imagine that if you used wildcards, but you didn’t. Not sure if LE even supports wildcards.
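As I understand LE, the layout described above only needs the one account key, shared across both certs. A quick openssl sketch of the three keys (paths and sizes are just examples; tools like certbot will generate these for you):

```shell
# One ACME account key shared by both certs, one private key per cert.
openssl genrsa -out /tmp/account.key 4096
openssl genrsa -out /tmp/example.com.key 2048       # covers example.com + www via SAN
openssl genrsa -out /tmp/test.example.com.key 2048  # separate cert for the subdomain
```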
|
# ? Feb 2, 2018 20:05 |
|
peepsalot posted:Does anyone have experience with video editing software on linux? I want to make youtubes, and I have not tried anything yet. Looking for recommendations. DaVinci Resolve recently (last year) got a Linux version. From what I've heard it's far better than the open source alternatives, on par with Adobe Premiere. Haven't tried it myself though. I can tell you Kdenlive has been janky as hell any time I've tried that. Maybe you'll have better luck with it.
|
# ? Feb 2, 2018 20:19 |
|
LochNessMonster posted:Maybe the tool that gives your certs the rating thinks that using subject alternative names make it slightly less secure? End of February, LE supports wildcard in production, right now it's only for staging. Soooooo looking forward to it.
|
# ? Feb 2, 2018 20:20 |
|
insularis posted:End of February, LE supports wildcard in production, right now it's only for staging. Soooooo looking forward to it. It’s not the most secure way of setting things up but I can understand why people are doing it.
|
# ? Feb 2, 2018 22:21 |
|
LochNessMonster posted:Maybe the tool that gives your certs the rating thinks that using subject alternative names make it slightly less secure? I never figured it out, but after thinking about it I also realized I was trying to make something way more complicated than it should be. I just have a single Let's Encrypt cert now for the entire domain and decided to stop fussing over A+ vs A on an arbitrary scale. LE's website says they are doing wildcard certs in "January 2018".
|
# ? Feb 3, 2018 10:05 |
|
I have some CentOS machines that are periodically "losing" their sshd config somehow. They stop recognizing authorized_keys key auth and don't recognize passwords either. During these events, it looks like cron may not be executing either (possibly; can't get in to test). I've never seen this on any of the Debian or Ubuntu boxes I have. Any ideas? edit: I'm really guessing it's an sshd problem somehow, so when I can get back into that machine I guess I will start checking the log files. Paul MaudDib fucked around with this message at 16:45 on Feb 7, 2018 |
# ? Feb 7, 2018 16:34 |
|
In my experience, when cron stops working you have a failed disk or hung nfs mount. I would guess sshd is barfing trying to open files too.
|
# ? Feb 7, 2018 16:49 |
|
The cron thing is unrelated (bad crontab line), I thought I had an issue on one server that had appeared on another server too. I'm perplexed about the SSH thing though. When this bug manifests, I can open an SSH connection to the server, but it won't accept key or password auth, it just always says 'incorrect password'. When this happens, the logs are still running (I see systemd creating and removing sessions exactly once an hour in /var/log/messages) but there are no messages from logind about the connection attempts. When I reboot the server I see logind logging connection attempts normally. Given that I'm connecting to something, presumably something that handshakes like an SSH server, my operating assumption is this is an sshd problem somehow. I have etckeeper, and the only commit that changed sshd_config over the life of the server was adding this line: AddressFamily inet. I don't see why that would be doing it (especially as we're not IPv6 internally, shame, etc) but I'll roll that back, and I guess hope that it doesn't recur. Regardless, I don't see why it would be rejecting authentication instead of just not listening on those interfaces, and why it wouldn't be logging it. Paul MaudDib fucked around with this message at 20:20 on Feb 7, 2018 |
# ? Feb 7, 2018 20:15 |
|
I would get some kind of serial access configured so if it happens again, you still have a way into the system. Or plug a crash cart into it. Or leave an ssh session open 24/7 so you have an active connection to investigate with when things go sour. Or set up sshd on the server to run in debug mode (look into the -E option if it's CentOS 7) and you should have lots of logs to dig through the next time it happens. Usually once you can get an error message out of the sshd side of things it's pretty easy to figure out what blew up.
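A sketch of that debug-sshd idea; the flags are standard OpenSSH (-d foreground debug, -E log file), but the port and paths here are my own picks:

```shell
# Stage a throwaway debug-sshd launcher on a spare port; the primary
# daemon keeps serving logins while this one narrates every auth attempt.
cat > /tmp/debug-sshd.sh <<'EOF'
#!/bin/sh
# -d: stay in foreground with debug output; -p: alternate port;
# -E: also append the debug log to a file (OpenSSH 6.3+, fine on CentOS 7)
exec /usr/sbin/sshd -d -p 2222 -E /var/log/sshd-debug.log
EOF
chmod +x /tmp/debug-sshd.sh
# Run it as root the next time the box wedges, then from another machine:
#   ssh -p 2222 user@server
# and read /var/log/sshd-debug.log for the exact rejection reason.
```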
|
# ? Feb 7, 2018 20:28 |
|
Paul MaudDib posted:The cron thing is unrelated (bad crontab line), I thought I had an issue on one server that had appeared on another server too. What about /var/log/secure ? Also, crazy theory: a duplicate IP.
|
# ? Feb 7, 2018 21:39 |
|
other people posted:What about /var/log/secure ? I checked secure as well. Same deal, it doesn't show any login attempts this morning.
|
# ? Feb 7, 2018 21:45 |
|
I have an Ubuntu postfix mystery. Apparently a bunch of servers were keeping a few thousand connections open to the SMTP host. mail.log showed they were all addressed to root@machinename and the SMTP server was rejecting them as domain not recognized or something similar. Changing the hostname to the FQDN and restarting postfix seems to have solved the problem. A) Shouldn't these mails be addressed to root@localhost, and what could I do to make that happen? B) Is this just how Ubuntu works, or is our VM image or provisioning script gibbled?
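For A), my understanding is postfix builds root's address from myhostname/myorigin, so a bare kernel hostname gets you root@machinename. A sketch of the main.cf settings involved (hostnames are examples, staged in /tmp rather than written to /etc directly; not a verified fix):

```shell
# The real file is /etc/postfix/main.cf; these can also be set one at a
# time with `sudo postconf -e 'myhostname = box1.example.com'` etc.
cat > /tmp/main.cf.fragment <<'EOF'
myhostname = box1.example.com
mydomain = example.com
myorigin = $myhostname
append_dot_mydomain = yes
EOF
# After merging into main.cf: sudo systemctl reload postfix
```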
|
# ? Feb 8, 2018 02:04 |
|
How would I completely disable detection of my second video card in Arch Linux? I have a 2500K with an Intel iGPU that my monitors are hooked up to, and a dedicated video card unsupported by Linux (AMD 7870 Tahiti). I just want it to completely ignore the Tahiti; it's not important to me to have 3D in Linux and the Intel HD 3000 is more than enough for composition. Not sure if I'd have to recompile the kernel, blacklist some kernel modules, or something else. I just want it to basically pretend my card isn't even there, without having to physically remove the card.
|
# ? Feb 8, 2018 04:47 |
|
Alpha Mayo posted:How would I completely disable detection of my second video card in Arch Linux? Blacklist the modules. Linux will still find that device but it won’t work if the module isn’t loaded.
|
# ? Feb 8, 2018 04:54 |
|
There might be a way to disable it in the BIOS, too.
|
# ? Feb 8, 2018 05:03 |
|
anthonypants posted:There might be a way to disable it in the BIOS, too. True. If it’s integrated I’m sure there is.
|
# ? Feb 8, 2018 07:00 |
|
I can disable the integrated GPU in the BIOS but not the dedicated one. I'm actually wanting to use integrated with Linux, since Intel graphics are well supported as opposed to the red-headed stepchild Radeon 7870 Tahiti.
|
# ? Feb 8, 2018 07:06 |
|
Alpha Mayo posted:I can disable the integrated GPU in the BIOS but not the dedicated one. I'm actually wanting to use integrated with Linux, since Intel graphics are well supported as opposed to the red-headed stepchild Radeon 7870 Tahiti. You could PCI-stub it; for all intents and purposes it might as well not be there anymore.
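A sketch of the pci-stub route on Arch. 1002:6818 should be the PCI vendor:device ID for a 7870 (Pitcairn XT), but verify with lspci -nn; the file is staged in /tmp here rather than written to /etc directly:

```shell
# Claim the Radeon with pci-stub before the radeon driver can bind it.
cat > /tmp/pci-stub.conf <<'EOF'
options pci-stub ids=1002:6818
softdep radeon pre: pci-stub
EOF
# Then: sudo cp /tmp/pci-stub.conf /etc/modprobe.d/pci-stub.conf
#       sudo mkinitcpio -P   # rebuild initramfs so it applies at early boot
# Alternative: add pci-stub.ids=1002:6818 to the kernel command line.
```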
|
# ? Feb 8, 2018 09:11 |
|
Mr Shiny Pants posted:You could PCI-stub it; for all intents and purposes it might as well not be there anymore. Thanks, that worked perfectly. Xorg finally started with no issues and LXQt is running. I intend to do all my web development on Linux so that I don't poo poo where I eat. I just wish my CPU supported VT-D (2500K), is gaming with GPU bypass and Windows running in a VM actually doable? Might be the first reason I've had to upgrade my CPU/Mobo if it does, since I have a GTX 1070Ti coming next week.
|
# ? Feb 10, 2018 13:22 |
|
Alpha Mayo posted:I just wish my CPU supported VT-D (2500K), is gaming with GPU bypass and Windows running in a VM actually doable? Might be the first reason I've had to upgrade my CPU/Mobo if it does, since I have a GTX 1070Ti coming next week. Gaming with a passed-through GPU is very doable. You can expect ~95% performance compared to bare metal with minimal fiddling.
|
# ? Feb 10, 2018 16:03 |
|
Any particular reason why Mutter's compositing framerate goes to poo poo whenever there's more than four windows open, or when there's something rendering at some random framerate, like a Youtube video in a browser? I gave Fedora another stab a few days ago, and I can do 144Hz at 1440p just fine, so long as it's just two terminal windows or whatever. Play something in mpv, and bye-bye 144Hz. I've also tried the EGLStreams version of Mutter (NVIDIA, by the way) for giggles, and it doesn't even get near 144Hz regardless of what's running. Although I think there might be a framerate cap configured somewhere that I couldn't find and change (then again, the same Mutter binaries on Xorg did render at full framerate, if nothing else was messing with things). Also, it seems like Mutter renders frames even when nothing's happening, because my card kept being pegged at maximum performance unless I dropped the refresh rate and therefore the framerate.
|
# ? Feb 10, 2018 17:21 |
|
Alpha Mayo posted:I just wish my CPU supported VT-D (2500K), is gaming with GPU bypass and Windows running in a VM actually doable? Might be the first reason I've had to upgrade my CPU/Mobo if it does, since I have a GTX 1070Ti coming next week. It's more than doable; I've been able to play twitch shooters without noticing that I'm in a VM. It can take a while to figure out all the steps necessary to get it working, however. For instance, with NVIDIA you have to hide KVM from the NVIDIA driver to get it to install properly. You also need to be sure to enable MSI interrupts. Once you get it working you'll be chasing all the little things to get near-native performance. The only incompatible game I've run into is CS:GO, which used to detect that it was running virtualized and would kick you from any online match. They've relaxed this recently though; rumor is they're supporting cloud-based gaming providers.
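The "hide KVM" part looks roughly like this in libvirt domain XML, if that's what you end up using (a sketch from memory of libvirt's schema; the vendor_id string is an arbitrary 12-character value):

```xml
<features>
  <hyperv>
    <!-- arbitrary string; the NVIDIA driver also sniffs the Hyper-V vendor id -->
    <vendor_id state='on' value='whatever1234'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature from the guest, which the driver checks for -->
    <hidden state='on'/>
  </kvm>
</features>
```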
|
# ? Feb 10, 2018 18:42 |
|
^^ I still want to do this, but gently caress memory prices at the moment. I already have a little dongle that tricks your videocard into thinking a monitor is attached.
|
# ? Feb 11, 2018 16:35 |
|
I just want to make sure I am not doing something stupid. I have the Django dev server running on an Ubuntu Linux VPS with the command python manage.py runserver 0.0.0.0:9000 >/dev/null 2>&1 & Port 9000 is not forwarded on the firewall (DigitalOcean) to this VPS, just 80, 443 and 23, so it shouldn't be accessible from the Internet. Instead I set up an SSH tunnel to forward local 9000 to mydomain.com:9000. I edited /etc/ssh/sshd_config and added: AllowAgentForwarding yes, AllowTcpForwarding yes, GatewayPorts yes. SSHD is also set to only allow key-based login, and now I access the Django admin by going to localhost:9000/admin on my local PC. Is this a safe setup? No one should be able to access port 9000 on the remote Linux box unless they have my SSH keys, right? My intent is for this to let me stage the app on the VPS while only being accessible through the SSH tunnel (which only I have the keys to).
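One nitpick: as far as I know a plain local forward (-L) doesn't need GatewayPorts or AllowAgentForwarding at all; AllowTcpForwarding covers it, and that's on by default. A sketch of the client side, using -G as a dry run (user/host are placeholders):

```shell
# -G prints the effective config, including the forward, without connecting;
# drop it (and add -N to skip a remote shell) to open the real tunnel.
ssh -G -L 9000:127.0.0.1:9000 deploy@mydomain.com | grep -i localforward
```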
|
# ? Feb 12, 2018 06:51 |
|
Is there software that will let me run a cluster of PCs but have it appear as a single NUMA computer to app software rather than a cluster that uses special scheduling software? I realize that performance will probably tank but I'm just curious what is possible.
|
# ? Feb 12, 2018 22:55 |
|
taqueso posted:Is there software that will let me run a cluster of PCs but have it appear as a single NUMA computer to app software rather than a cluster that uses special scheduling software? I realize that performance will probably tank but I'm just curious what is possible. The term you're looking for here is "distributed shared memory". OpenSSI would be one implementation of this, but your performance is going to suck for most tasks because essentially you will only be able to access/write data at network speed and latency: you'll be getting 1 Gbit/s of throughput (vs 40+ GB/s of memory bandwidth on a single system) with milliseconds of latency per access, and any threading or interprocess communication is going to suck. It would be a little better if you had a fast low-latency interconnect like InfiniBand, but it's still a lot more efficient if applications are aware of the processor on which they're running. You can also get load-balancing software which simplifies the task of scheduling, so you can simply say "run this task on the least-loaded computer with at least 4 cores and at least 1 GB of memory free" and the scheduler does it for you.
|
# ? Feb 12, 2018 23:24 |
|
Thanks, seems like it might be fun to try out this weekend.
|
# ? Feb 12, 2018 23:51 |
|
Paul MaudDib posted:The term you're looking for here is "distributed shared memory".
|
# ? Feb 12, 2018 23:58 |
|
I’ve got a docker compose file running a bunch of services, with nginx sitting at the front doing URL-prefix routing to the other containers. I’ve also got a watchtower container auto-updating the services as new images are pushed to docker.io. But when a container is updated by watchtower, nginx’s proxy_pass directive can no longer talk to that service. I’m guessing because the container’s internal IP has changed. Is there a way to get nginx to not cache DNS entries?
|
# ? Feb 13, 2018 09:52 |
|
Looks like if you use a variable to set the hostname rather than doing it directly, there is a “resolver” setting that can override nginx’s handling of TTLs. You could set it to something super low, assuming this isn’t a high-traffic site. It should also flush its cache on SIGHUP if you have a way of issuing that automatically when a container reboots. https://www.jethrocarr.com/2013/11/02/nginx-reverse-proxies-and-dns-resolution/ https://www.nginx.com/blog/dns-service-discovery-nginx-plus/
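For reference, the variable trick from the first link looks something like this (a sketch; 127.0.0.11 is Docker's embedded DNS on user-defined networks, and "app" stands in for your compose service name):

```nginx
resolver 127.0.0.11 valid=5s;   # re-resolve at most every 5 seconds

location /app/ {
    # using a variable forces nginx to resolve the name per-request
    # instead of caching the IP it looked up at startup
    set $upstream http://app:8000;
    proxy_pass $upstream;
}
```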
|
# ? Feb 13, 2018 13:42 |
|
Weird sshd issue resolved: IT allocated the same IP to another server on the network
|
# ? Feb 13, 2018 21:34 |
|
Paul MaudDib posted:Weird sshd issue resolved: IT allocated the same IP to another server on the network you are welcome
|
# ? Feb 13, 2018 21:59 |
|
Paul MaudDib posted:Weird sshd issue resolved: IT allocated the same IP to another server on the network In the future if you suspect that you've got an IP address conflict, arpwatch is a good tool to find it
|
# ? Feb 13, 2018 22:03 |
|
Methanar posted:In the future if you suspect that you've got an IP address conflict, arpwatch is a good tool to find it In a similar vein, I just used nmap against my management VLAN today because I couldn't remember a management port for an XMPP server.
|
# ? Feb 14, 2018 02:54 |
|
other people posted:you are welcome when you eliminate the improbable, only the idiotic remains. OK, so lesson learned here, and if I run into this set of symptoms again, I'll know. Is there a way to easily glue arpwatch to an alert system so I can get an email if the idiotic ever happens again?
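Classic arpwatch already mails its flip-flop/new-station reports via sendmail; on CentOS the recipient goes in /etc/sysconfig/arpwatch (the -e flag is from the RHEL package as I remember it, so double-check your distro's man page). Staged sketch:

```shell
# -u: drop privileges to the arpwatch user; -e: where reports get mailed
cat > /tmp/arpwatch-sysconfig <<'EOF'
OPTIONS="-u arpwatch -e alerts@example.com -s 'root (Arpwatch)'"
EOF
# Then: sudo cp /tmp/arpwatch-sysconfig /etc/sysconfig/arpwatch
#       sudo systemctl enable --now arpwatch
```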
|
# ? Feb 14, 2018 04:48 |
|
In Linux, there is vmtouch to pin something into cache. Is there a zfstouch that lets you immediately force something into l2arc?
|
# ? Feb 14, 2018 08:00 |
|
I'm having a weird issue, affecting only Linux installs on my Thinkpad X230. Occasionally, the laptop will just turn off. Boom, click - off. No errors, nothing. I thought it was temperature related, but temperatures are normal (~40C) when the system boots back up. I thought it was the distro/kernel I was running, but no, it happens across Debian, Ubuntu, Fedora, etc. The system journal doesn't say a thing, just that the system shut off and turned back on. On a whim, I installed Windows on the laptop, and it worked a-OK for about a month until I went back to Linux. The week I was back, it happened again. What the gently caress?
|
# ? Feb 14, 2018 12:41 |
|
How would I delete the swapfile on shutdown, then automate the creation of it on startup (with dd if=/dev/zero)? The reason is, I am using DigitalOcean droplets and I don't want to include the droplet's swapfile in snapshots, because I get charged for it. Essentially, somewhere in the startup, after drives are mounted, I need to run: dd if=/dev/zero of=/swapfile bs=1M count=1024; mkswap /swapfile; swapon /swapfile. Then on shutdown: rm -f /swapfile. Just not sure where these commands should go on Ubuntu Linux 16.04.
|
# ? Feb 14, 2018 13:36 |
|
You don't need to copy zeroes; you can use the fallocate command. It should use much less IO.
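Putting the two halves together, a oneshot systemd unit handles both boot and shutdown. A sketch staged in /tmp (size and names are examples; I'd also double-check fallocate'd swapfiles on your filesystem, though they're fine on ext4):

```shell
cat > /tmp/mkswapfile.service <<'EOF'
[Unit]
Description=Create and enable a throwaway swapfile
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# create, lock down, format, and enable the swapfile at boot
ExecStart=/bin/sh -c 'fallocate -l 1G /swapfile && chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile'
# tear it down at shutdown so snapshots never see it
ExecStop=/bin/sh -c 'swapoff /swapfile; rm -f /swapfile'

[Install]
WantedBy=multi-user.target
EOF
# Then: sudo cp /tmp/mkswapfile.service /etc/systemd/system/
#       sudo systemctl enable --now mkswapfile.service
```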
|
# ? Feb 14, 2018 16:02 |