GobiasIndustries
Dec 14, 2007

Lipstick Apathy
so this should be an easy one: I've got a tiny device running a branch of OpenWRT that I'm using as an AirPlay server. I've got everything working, except I'd like it to auto-run the AirPlay server program on startup. When I SSH in, all I need to do is type 'shairport sync' and it starts up; how do I get this to happen without my intervention?
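
A minimal procd init script is the usual way to do this on recent OpenWRT (older branches use a plain start()/stop() rc.common script instead). A sketch, assuming the binary resolves as /usr/bin/shairport-sync; adjust the path and name to whatever you actually type. Save it as /etc/init.d/shairport and make it executable:
code:
#!/bin/sh /etc/rc.common
# hypothetical procd init script for shairport-sync
START=95
USE_PROCD=1

start_service() {
    procd_open_instance
    procd_set_param command /usr/bin/shairport-sync
    procd_set_param respawn    # restart it if it crashes
    procd_close_instance
}
Then /etc/init.d/shairport enable && /etc/init.d/shairport start.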


mobby_6kl
Aug 9, 2009

by Fluffdaddy
I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect:
[screenshot: parted/gparted error output]
I'm just trying to completely wipe it and make a big FAT32 partition...

CaptainSarcastic
Jul 6, 2013



mobby_6kl posted:

I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect:



I'm just trying to completely wipe it and make a big FAT32 partition...

This might be redundant, but if it is a full-size SD card did you check to make sure it doesn't have a physical switch to make it read-only that has been turned on?

Aside from that I'd clear the flags on the hidden partition before deleting/formatting.
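
With parted that would be something like this sketch (device and partition number are placeholders):
code:
parted /dev/sdX print              # show partitions and their flags
parted /dev/sdX set 1 hidden off   # clear e.g. the 'hidden' flag on partition 1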

Horse Clocks
Dec 14, 2004


mobby_6kl posted:

I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect:



I'm just trying to completely wipe it and make a big FAT32 partition...

Zero the disk out with
code:
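# note: /dev/sdcard is a placeholder -- check lsblk or dmesg for the real device first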
dd if=/dev/zero of=/dev/sdcard bs=1M
sync
Then try again

Volguus
Mar 3, 2009
You are probably fine if you just zero out the first 512 bytes, since that's where the partition data is stored. To be on the safe side, wipe a few kilobytes. Otherwise you're gonna wait forever for dd to zero out a 30 GB SD card.
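
An alternative sketch: wipefs (from util-linux) erases just the known signatures, which on recent versions includes the backup GPT header at the end of the disk that a few-kilobyte dd would miss (device name assumed):
code:
wipefs -a /dev/sdX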

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

mobby_6kl posted:

I hosed this up with Linux so I'm trying to fix it with Linux as well. Basically I used an SD card to make a bootable recovery system and when I needed some storage on it, created a new partition in the empty space. This worked perfectly and I got the data off, but I now can't unfuck the card to make it usable elsewhere. Basically my camera and phone fail to format it, and every Windows tool fails to do anything too. I finally got back into a Linux system but parted and gparted basically have the same effect:



I'm just trying to completely wipe it and make a big FAT32 partition...

You could just write a new partition table and then make your FAT32 partition in gparted. Even Windows disk management should be able to do this.
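
In non-GUI terms that's roughly this sketch (the device name is a placeholder; triple-check it before running, since this destroys the card's contents):
code:
parted /dev/sdX --script mklabel msdos mkpart primary fat32 1MiB 100%
mkfs.vfat -F 32 /dev/sdX1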

Horse Clocks
Dec 14, 2004


Volguus posted:

You are probably fine if you just zero out the first 512 bytes, since that's where the partition data is stored. To be on the safe side, wipe a few kilobytes. Otherwise you're gonna wait forever for dd to zero out a 30 GB SD card.

Good point...

Add
code:
count=1
to the end of the dd command.

mike12345
Jul 14, 2008

"Whether the Earth was created in 7 days, or 7 actual eras, I'm not sure we'll ever be able to answer that. It's one of the great mysteries."





Eletriarnation posted:

Yeah, I went back and tried both methods listed in the wiki (setting KVM to hidden and adding vendorID to Hyper-V extensions, as well as just disabling Hyper-V extensions entirely) and after each I'm still seeing Code 43 in Device Manager. I also tried some script from GitHub that modifies the drivers downloaded from Nvidia and claims that they won't trigger the issue if installed after modification, but it doesn't help either.

It's academic at this point, the RX 460 is in the mail. I am sure that Nvidia doesn't give a poo poo anyway but I like the idea of sending some kind of message with the purchase.

just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox.

*graphics and all, not connect to it via ssh or cli means

Horse Clocks
Dec 14, 2004


mike12345 posted:

just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox.

*graphics and all, not connect to it via ssh or cli means

In a month or two, yes. It's not ready yet, but some guy is working on a render-to-RAM driver for Windows.

The guest dumps frames to RAM, and the host then copies the contents of that buffer to its own display.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

mike12345 posted:

just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox.

*graphics and all, not connect to it via ssh or cli means

You can set it up with the normal Spice configuration that virt-manager attaches to a new VM by default, which lets you use the KVM console. Then, once you have Windows installed and Remote Desktop enabled, you can remove those components and just roll with Remote Desktop for access. I was able to find a script which disconnects me from Remote Desktop and opens a local session; that's required for Steam streaming, since Steam can't unlock a locked screen for you and I'm running the box headless. You also need something that makes your GPU think there's actually a monitor of the appropriate resolution attached, for which there are little HDMI/DP dongles available if you don't want to leave a monitor attached.

I don't think you're actually required to remove the Spice bits at any point as they're just another display controller and monitor logically, but I expect that there's a performance impact to leaving them in.

Eletriarnation fucked around with this message at 15:23 on Dec 4, 2017
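
The disconnect-to-console script mentioned above is presumably some variant of the well-known tscon trick; a hypothetical one-liner, run elevated inside the Windows guest:
code:
tscon %sessionname% /dest:console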

evol262
Nov 30, 2010
#!/usr/bin/perl

mike12345 posted:

just curious, if you do that pci-passthrough thing, is there any way to display your vm on the host desktop? basically is it possible to interact* with your vm via a window, like you do with virtualbox.

*graphics and all, not connect to it via ssh or cli means

There is no reason not to just use RDP for this.

Eletriarnation posted:

Yeah, I checked into that and apparently at this point to get it to work with current drivers you have to lock out some Hyper-V extensions too that commenters say actually affect performance. gently caress that noise, the 1050 is still well within the return period and Amazon has an RX 460 for $85 so I'll switch teams.

I can post my config if you want, which works fine with a 970 on Fedora 27. Performance is basically native.
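
For reference (not necessarily his exact config), the standard libvirt workaround for Nvidia's Code 43 check is this domain-XML fragment, edited via virsh edit; the vendor_id value is arbitrary:
code:
<features>
  <hyperv>
    <vendor_id state='on' value='123456789ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>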

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
If it's no trouble, I'd be interested to see it. Right now I have the 460 installed and am trying to figure out why my VM keeps going into Paused (same as sleep, I assume?) state instead of booting up properly. I'll probably find something once I dig into the logs, but don't think I have the motivation to do it today.
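
Worth noting: libvirt usually pauses a guest for a concrete, queryable reason (an I/O error or a full storage pool are the classics). A quick sketch, with the domain name assumed:
code:
virsh domstate win10 --reason
tail /var/log/libvirt/qemu/win10.log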

Mr Shiny Pants
Nov 12, 2012

Eletriarnation posted:

You can set it up with the normal Spice configuration that virt-manager attaches to a new VM by default, which lets you use the KVM console. Then, once you have Windows installed and Remote Desktop enabled, you can remove those components and just roll with Remote Desktop for access. I was able to find a script which disconnects me from Remote Desktop and opens a local session; that's required for Steam streaming, since Steam can't unlock a locked screen for you and I'm running the box headless. You also need something that makes your GPU think there's actually a monitor of the appropriate resolution attached, for which there are little HDMI/DP dongles available if you don't want to leave a monitor attached.

I don't think you're actually required to remove the Spice bits at any point as they're just another display controller and monitor logically, but I expect that there's a performance impact to leaving them in.

I have one of these for that: https://www.megamac.com/products/newertech-hdmi-headless-video-accelerator-nwtadp4khead

It runs 1920 x 1080, which is ideal for Steam Link.

The problem I have with passthrough is that my Mac Pro does not like PCIe devices resetting themselves, which makes KVM poo poo the bed, it seems.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Horse Clocks posted:

Good point...

Add
code:
count=1
to the end of the dd command.
All right, this seems to complete fine:
code:
# dd if=/dev/zero of=/dev/sdb bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.21378 s, 864 kB/s
But it didn't actually do anything; everything was exactly as before... Eventually, while dicking around with this, I noticed some I/O errors pop up in the console. Turns out it's this:

[screenshot: I/O errors in the console]

Finally I also tried doing it in Windows with the special SD Card Formatter tool, which failed at 92% :v:
This was in the Windows log: The IO operation at logical block address 0x2493 for Disk 3 (PDO name: \Device\000002cc) was retried.
So yeah, I guess the card is just hosed physically. It's just strange that it never had any issues until this exact moment, but I suppose it could be a coincidence.
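
A read-only badblocks pass over the raw device would confirm whether those blocks reproducibly fail; a sketch (non-destructive by default, but slow):
code:
badblocks -sv /dev/sdb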

Double Punctuation
Dec 30, 2009

Ships were made for sinking;
Whiskey made for drinking;
If we were made of cellophane
We'd all get stinking drunk much faster!
SD cards are notoriously lovely. They can die if you look at them funny.

Volguus
Mar 3, 2009
It could also be that the blocks were bad from before, and it only actually poo poo the bed when you created the recovery system, wrote something to the new partition, etc. They are better than floppies, but that isn't saying much.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

fletcher posted:

I've got two machines running Ubuntu 16.04 and Logwatch 7.4.2, both with identical logwatch.conf files that have MailTo and MailFrom set appropriately. Both also have identical postfix configs to relay mail to Amazon SES.

One machine sends the logwatch email just fine, the other fails with:
code:
554 Message rejected: Email address is not verified. The following identities failed the check in region US-WEST-2: root@ubuntu
Where is it getting root@ubuntu for the MailFrom address??

Bumping this one...any ideas?

Volguus
Mar 3, 2009

fletcher posted:

Bumping this one...any ideas?

I am not familiar with Logwatch, but surely they can't have the same config files since one works and one doesn't. root@ubuntu is just the default email address of a user in a *NIX system: user@host.
So one instance takes the email address of the user it's running under, while the other doesn't. I'd re-check the conf files.
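
One Logwatch-specific thing worth knowing: the file in /etc isn't the whole story, since Logwatch merges /usr/share/logwatch/default.conf/logwatch.conf, /etc/logwatch/conf/logwatch.conf, and whatever flags the cron job passes. A sketch of what to compare on the two boxes (paths per the usual Debian/Ubuntu layout):
code:
grep -ri mailfrom /usr/share/logwatch/default.conf /etc/logwatch /etc/cron.daily/00logwatch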

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Didn't run newaliases on that one?

xzzy
Mar 5, 2009

Double Punctuation posted:

SD cards are notoriously lovely. They can die if you look at them funny.

Always buy sandisk, everyone else has poo poo reviews.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Volguus posted:

I am not familiar with Logwatch, but surely they can't have the same config files since one works and one doesn't. root@ubuntu is just the default email address of a user in a *NIX system: user@host.
So one instance takes the email address of the user it's running under, while the other doesn't. I'd re-check the conf files.

I was thinking the same but I've triple checked them, and they are identical :(


anthonypants posted:

Didn't run newaliases on that one?

/etc/aliases is the same for both as well:
code:
# See man 5 aliases for format
postmaster:    root
Tried running newaliases but didn't seem to fix it

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
What's your /etc/hosts value for your edge-facing IP address, plus the output from "postconf myhostname"? If it isn't spitting out an FQDN, check that mydomain is properly set. Otherwise, does sending an email via "mail" work?
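
Concretely, those checks might look like this sketch (the recipient address is a placeholder, and "mail" assumes mailx/mailutils is installed):
code:
postconf myhostname mydomain myorigin
hostname -f
echo test | mail -s test you@example.com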

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it".

The disk is pretty small, so I'd prefer to image it nightly rather than worrying about which folders I need to back up. Is there a simple tool that will take an image of a running system and back it up to a network share?

tl;dr: is there a Macrium Reflect or DriveImage XML for Linux?

thebigcow
Jan 3, 2001

Bully!

NihilCredo posted:

I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it".

The disk is pretty small, so I'd prefer to image it nightly rather than worrying about which folders I need to back up. Is there a simple tool that will take an image of a running system and back it up to a network share?

tl;dr: is there a Macrium Reflect or DriveImage XML for Linux?

I think LVM snapshots can do in Linux what VSS does for Windows, but I don't know of any software that wraps this all up neatly.

An alternative would be getting some configuration management set up that could restore all the software you use, and then backup the data with more conventional means.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

NihilCredo posted:

I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it".

The disk is pretty small, so I'd prefer to image it nightly rather than worrying about which folders I need to back up. Is there a simple tool that will take an image of a running system and back it up to a network share?

tl;dr: is there a Macrium Reflect or DriveImage XML for Linux?

Gitlab has specific guidance for how to do backups: https://docs.gitlab.com/ee/raketasks/backup_restore.html

RFC2324
Jun 7, 2012

http 418

thebigcow posted:

I think LVM snapshots can do in Linux what VSS does for Windows, but I don't know of any software that wraps this all up neatly.

An alternative would be getting some configuration management set up that could restore all the software you use, and then backup the data with more conventional means.

Worst case scenario, you could just shut down the VM and copy the image to backup storage with a script, if there is no other good (presumably free) option available.

Never not roll your own :v:

thebigcow
Jan 3, 2001

Bully!
It didn't sound like a VM

RFC2324
Jun 7, 2012

http 418

thebigcow posted:

It didn't sound like a VM

my bad, you're right. i just assumed it was because putting things like that in a vm seems to be standard even for home labs nowadays

Mr Shiny Pants
Nov 12, 2012

NihilCredo posted:

I have an Ubuntu Server 16.04 machine where I host a Gitlab instance and play around with various CI/CD tools. It's starting to move from "don't care if it dies and I lose all the data on it" to "would be kind of annoying if it died and I lost all the data on it".

The disk is pretty small, so I'd prefer to image it nightly rather than worrying about which folders I need to back up. Is there a simple tool that will take an image of a running system and back it up to a network share?

tl;dr: is there a Macrium Reflect or DriveImage XML for Linux?

Don't know your exact setup, but for my Gitlab instance I run an LXC on ZFS and just snapshot the pool. Works a treat.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Currently I'm running everything on that server as Docker containers. So yes, I could rsync the volume folders and have all my data, but in case of a screwup of some kind I'd still have to restore everything to the right place, start the right images back up, etc. And if I run something outside of Docker, I'd need to remember to add the right extra stuff to the backup.

Since the entire system is using a little over 100 GB of disk space, I'd much prefer to image the whole thing and restore it with one click.

Mr Shiny Pants
Nov 12, 2012

NihilCredo posted:

Currently I'm running everything on that server as Docker containers. So yes, I could rsync the volume folders and have all my data, but in case of a screwup of some kind I'd still have to restore everything to the right place, start the right images back up, etc. And if I run something outside of Docker, I'd need to remember to add the right extra stuff to the backup.

Since the entire system is using a little over 100 GB of disk space, I'd much prefer to image the whole thing and restore it with one click.

That's why I decided to just snapshot the pool; I have all my VMs on it and some other containers as well. In case of a mess-up I just revert to the latest snapshot and all is well again. Snapshots are made automatically every 15 minutes by a script.

You could go a step further and have all the containers in their own ZVOL or dataset and do per-dataset snapshots; this way you can easily revert a single container or VM. As you pointed out, you might have dependencies on other containers/VMs, so it might be handy to just snapshot the whole lot and treat it as logically being one system.
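
The per-dataset version is a one-liner each way; a sketch with an assumed pool/dataset layout:
code:
zfs snapshot -r tank/ct@$(date +%Y%m%d-%H%M)    # recursive snapshot of every container dataset
zfs rollback tank/ct/gitlab@20171204-1500       # revert just the gitlab container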

telcoM
Mar 21, 2009
Fallen Rib

thebigcow posted:

I think LVM snapshots can do in Linux what VSS does for Windows, but I don't know of any software that wraps this all up neatly.

No, LVM snapshots alone are useless as backups. However, they can be a useful tool for avoiding downtime when making an actual backup.

LVM snapshots work at the block level, using chunks of 4 KB by default. It's an implementation of copy-on-write at the LVM level: once a snapshot is set up for a particular logical volume, the first time each chunk is written to in the original of the snapshot, the old data is first copied into the disk space allocated for the snapshot, and only then the write operation is allowed to complete. Any subsequent writes into the same chunk will proceed normally. As a result, the snapshot will appear as an alternate "view" into the original LV as it existed at the moment of snapshot creation.

However, an LVM snapshot is just a view, not a true copy: if the original LV is destroyed, the snapshot LV will contain only old versions of those chunks you have modified since the creation of the snapshot; any data that hasn't been changed is only stored on the original LV, and will be lost.

LVM snapshots can be useful when you have a large filesystem or database, and not enough downtime to make a backup of it. You'll just need enough extra disk space to cover the amount of changes expected to happen in the time the backup will take (+ some percentage extra, for safety). You get a short amount of downtime, during which you'll stop applications/put the database in backup mode/do the needful to ensure the filesystem/database is in a good state for backups, then create the snapshot and resume regular service. Now, you can mount the snapshot at some convenient location and let the backup take its sweet time on it, while the actual filesystem/database keeps receiving new data. Once the backup is complete, you just delete the snapshot (no need to sync anything to the original, so it will be a quick and easy operation).

Yes, you can allocate less disk space for the snapshot than its original has if you don't expect the original to receive too many changes during the time you'll need the snapshot. If your guesstimate is wrong and the snapshot space becomes full while the snapshot is still in use, the original LV keeps working just fine, while the snapshot LV disables itself. Then your backup operation will fail and you'll need to do it all over again...
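
As a concrete sketch of that workflow (VG/LV names, sizes, and paths are all placeholders):
code:
lvcreate --snapshot --size 5G --name data_snap /dev/vg0/data
mount -o ro /dev/vg0/data_snap /mnt/snap
tar czf /backup/data-$(date +%F).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/data_snap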

thebigcow
Jan 3, 2001

Bully!

telcoM posted:

No, LVM snapshots alone are useless as backups. However, they can be a useful tool for avoiding downtime when making an actual backup.

LVM snapshots work at the block level, using chunks of 4 KB by default. It's an implementation of copy-on-write at the LVM level: once a snapshot is set up for a particular logical volume, the first time each chunk is written to in the original of the snapshot, the old data is first copied into the disk space allocated for the snapshot, and only then the write operation is allowed to complete. Any subsequent writes into the same chunk will proceed normally. As a result, the snapshot will appear as an alternate "view" into the original LV as it existed at the moment of snapshot creation.

However, an LVM snapshot is just a view, not a true copy: if the original LV is destroyed, the snapshot LV will contain only old versions of those chunks you have modified since the creation of the snapshot; any data that hasn't been changed is only stored on the original LV, and will be lost.

LVM snapshots can be useful when you have a large filesystem or database, and not enough downtime to make a backup of it. You'll just need enough extra disk space to cover the amount of changes expected to happen in the time the backup will take (+ some percentage extra, for safety). You get a short amount of downtime, during which you'll stop applications/put the database in backup mode/do the needful to ensure the filesystem/database is in a good state for backups, then create the snapshot and resume regular service. Now, you can mount the snapshot at some convenient location and let the backup take its sweet time on it, while the actual filesystem/database keeps receiving new data. Once the backup is complete, you just delete the snapshot (no need to sync anything to the original, so it will be a quick and easy operation).

Yes, you can allocate less disk space for the snapshot than its original has if you don't expect the original to receive too many changes during the time you'll need the snapshot. If your guesstimate is wrong and the snapshot space becomes full while the snapshot is still in use, the original LV keeps working just fine, while the snapshot LV disables itself. Then your backup operation will fail and you'll need to do it all over again...

Almost exactly like the Volume Shadow Copy Service and your backup software of choice. But I don't know of any backup software for Linux designed for that.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

I use rsnapshot for that. It makes an LVM snapshot, copies everything that has changed using rsync (and makes links to anything that hasn't), and rotates the backups so I end up with unlimited weekly, 7 daily, and 24 hourly snapshots.
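
The relevant rsnapshot.conf lines are only a handful (a sketch; fields must be tab-separated, vg0/root is a placeholder, and the retention counts are examples); cron then runs 'rsnapshot hourly', 'rsnapshot daily', and 'rsnapshot weekly' on matching schedules:
code:
retain	hourly	24
retain	daily	7
retain	weekly	52
linux_lvm_snapshotsize	2G
backup	lvm://vg0/root/	localhost/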

jre
Sep 2, 2011

To the cloud ?



Mr Shiny Pants posted:

That's why I decided to just snapshot the pool, I have all my VMs on it and some other containers as well. In case of a mess-up I just revert to the latest snapshot and all is well again. Snapshots are made automatically every 15 minutes by some script.

You could go a step further and have all the containers in their own ZVOL or Dataset and do per dataset snapshots, this way you can easily revert a single container or VM. As you pointed out you might have dependencies on other containers/VMs so it might be handy to just snapshow the whole lot and treat it as logically being one system.

Where do you copy the snapshots to?

Ashex
Jun 25, 2007

These pipes are cleeeean!!!
Anyone have some advice on how to handle mass converting a bunch of videos in nested folders with HandbrakeCLI? I'm squeezing a bunch onto a memory card for a tablet so I'm running them all through handbrake but am having issues with files that have spaces in them.

I almost got it with this command using find:

code:
find /videos -name "*.mkv" -exec bash -c '~/temp/HandBrake-1.0.7/build/HandBrakeCLI -Z "Android 576p25" --input "{}" --output "/out/$(basename {}).mp4" --two-pass --turbo' \;
But the output filename always comes out as '.mp4'.

Tigren
Oct 3, 2003

Ashex posted:

Anyone have some advice on how to handle mass converting a bunch of videos in nested folders with HandbrakeCLI? I'm squeezing a bunch onto a memory card for a tablet so I'm running them all through handbrake but am having issues with files that have spaces in them.

I almost got it with this command using find:

code:
find /videos -name "*.mkv" -exec bash -c '~/temp/HandBrake-1.0.7/build/HandBrakeCLI -Z "Android 576p25" --input "{}" --output "/out/$(basename {}).mp4" --two-pass --turbo' \;
But the output filename always comes out as '.mp4'.

Totally unhelpful answer:

Set up a kubernetes cluster with custom golang tools and dockerized Handbrake to automatically convert all your files

http://carolynvanslyck.com/blog/2017/10/my-little-cluster/

RFC2324
Jun 7, 2012

http 418

Ashex posted:

Anyone have some advice on how to handle mass converting a bunch of videos in nested folders with HandbrakeCLI? I'm squeezing a bunch onto a memory card for a tablet so I'm running them all through handbrake but am having issues with files that have spaces in them.

I almost got it with this command using find:

code:
find /videos -name "*.mkv" -exec bash -c '~/temp/HandBrake-1.0.7/build/HandBrakeCLI -Z "Android 576p25" --input "{}" --output "/out/$(basename {}).mp4" --two-pass --turbo' \;
But the output filename always comes out as '.mp4'.

this may help
https://stackoverflow.com/questions/19562785/handbrakecli-bash-script-convert-all-videos-in-a-folder

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Ashex posted:

But the output filename always comes out as '.mp4'.

code:
find /videos -name "*.mkv" -exec ~/convert.sh '{}' \;
convert.sh:
code:
#!/bin/sh
~/temp/HandBrake-1.0.7/build/HandBrakeCLI -Z "Android 576p25" --input "$1" --output "/out/${1##*/}.mp4" --two-pass --turbo
Subshell is processed before the variable substitution takes place in find's pipeline? :iiam:
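
The usual explanation: when {} is embedded inside the -exec shell string, the filename becomes part of the code bash parses, so spaces get word-split (and a hostile filename could even inject commands). Passing the path as a positional parameter sidesteps both; a sketch along those lines (untested, same preset assumed):
code:
find /videos -name '*.mkv' -exec bash -c '
    for f in "$@"; do
        ~/temp/HandBrake-1.0.7/build/HandBrakeCLI -Z "Android 576p25" \
            --input "$f" --output "/out/$(basename "$f" .mkv).mp4" --two-pass --turbo
    done
' _ {} +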


xzzy
Mar 5, 2009

There's a practical limit to how much you can do with find before you're spending all your time adjusting for quirks with subshells and substitutions.

I usually pipe it into a while loop so I have a little more control.
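
For example, a null-delimited read loop (a sketch, reusing the preset from above) keeps names with spaces intact:
code:
find /videos -name '*.mkv' -print0 |
while IFS= read -r -d '' f; do
    HandBrakeCLI -Z "Android 576p25" --input "$f" --output "/out/$(basename "$f" .mkv).mp4"
done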
