General_Failure
Apr 17, 2005
Question. I'm fine with the answer being "No." or equivalent by the way.
A portable ARM based linux device would be extremely useful to me right now. The most useful thing I can find is the Pinebook, but they have a system which involves giving them an email address and waiting. Perhaps forever for my turn in the build to order queue.
The only really usable things I dug up in the house were my ASUS TF700T and Ainol Elf II. The trouble with both is that they would seem to have a Linux version circa 2012. The Elf II is a bit easier to work with. Ubuntu Quantal vs. Gentoo with some real tinkering on the TF700T.

I shoved the boot stuff and fs on an SD card last night for the Elf II. Worked first go. Set up Wifi. That worked too. Touch screen, no. It needs a package. Quantal is EOL. I know it's on the ubuntu archive server, old-releases or something like that, but the current entries in sources.list are something like ports.ubuntu.com. Sorry I don't have it in front of me. I'm unfamiliar with it. Is it something magical or just deprecated? I.e. if I point sources.list towards the normal archive, will it still fetch functional armhf packages?

Really I couldn't give a drat about the integrity of the package based nature of Ubuntu. I just want to drag as much as I can up to more modern versions while keeping the kernel.
I'm using a crappy 7" USB tablet case keyboard with it, but I need touchscreen for the mouse. Most of what I need it for I'll be building from source anyway, but some dependencies do need to be filled.

Why ARM based? Well, it's either that or QEMU. The only portable thing I have besides my phone is my lovely old netbook, which an RPi B would probably outperform. Not suitable clearly.

mystes
May 31, 2006

General_Failure posted:

Question. I'm fine with the answer being "No." or equivalent by the way.
A portable ARM based linux device would be extremely useful to me right now. The most useful thing I can find is the Pinebook, but they have a system which involves giving them an email address and waiting. Perhaps forever for my turn in the build to order queue.
The only really usable things I dug up in the house were my ASUS TF700T and Ainol Elf II. The trouble with both is that they would seem to have a Linux version circa 2012. The Elf II is a bit easier to work with. Ubuntu Quantal vs. Gentoo with some real tinkering on the TF700T.
Maybe you should look into whether you can install linux on an ARM based chromebook? I think people prefer using intel based chromebooks for this purpose so I don't know what the state of support for arm chromebooks is, but it might be worth looking into.

General_Failure
Apr 17, 2005

mystes posted:

Maybe you should look into whether you can install linux on an ARM based chromebook? I think people prefer using intel based chromebooks for this purpose so I don't know what the state of support for arm chromebooks is, but it might be worth looking into.

I'm just doing this as a short term solution.
Just had a quick look on eBay to see if the Australian Chromebook situation has improved. Short answer is "no". Looking at AUD$150+ for working ones, and they are x86 based anyway. That's a shame.

Quick update on what I'm doing. I tried changing ports.ubuntu.com to old-releases.ubuntu.com and ubuntu-ports (I think it was, anyway) to ubuntu, and it seemed to work. Didn't get the touchscreen working yet, but I did get vim installed in place of vi, thankfully, and right now I'm letting it attempt to update all the packages to the latest ones in the archive. If it shits itself, no big deal. Just dump the files back over to the SD card and try again.
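For anyone following along, the edit was basically just retargeting sources.list. Roughly this, with the exact suites from memory, so treat it as a sketch rather than gospel:

code:
# /etc/apt/sources.list -- the gist of the change
# old:  deb http://ports.ubuntu.com/ubuntu-ports/ quantal main restricted universe multiverse
# new:
deb http://old-releases.ubuntu.com/ubuntu/ quantal main restricted universe multiverse
# ...plus the matching quantal-updates / quantal-security lines on the same server
Then an apt-get update followed by apt-get dist-upgrade is what's churning away now.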

At the risk of causing some aneurysms, I'll explain what all this is about.

I'm working on porting RISC OS 5 (currently 5.25) to the Allwinner H3. There are some hurdles with this. One being that I can't always use the spot where my RPi etc. is set up. I need something running RISC OS to build it. Now, some clever people did a Linux port of it. It uses virgin sacrifices and demonic summoning to function. Their build system is even clever enough to use QEMU on non-native architectures, which is necessary because a large part of the OS is written in ARM assembly. On aarch32 and aarch64 (I've tested both) it runs at comparable-to-native speeds. Graphics are slower and the port is incomplete, but it's enough to run the build system.

My netbook just isn't up to the job of running it. I've tried a few times. I've also tried RISC OS in RPCEmu. Some things aren't worth it.
There are some performance hits for using older Linux kernels unfortunately, but they are still bearable.

My reason for considering the Pinebook at all is that it uses an aarch64 SoC from the same family as the one I'm working on. Later I'd like to get it running on those too, in aarch32 mode.

So, yeah using linux for not linux.

e: I have to say it's kind of nice to have a use for the old tablet again. Shame the old projects like Opie(?) and that other one... err... aren't really around anymore.

Varkk
Apr 17, 2004

Maybe a Pi-Top? It's a laptop shell with a keyboard and screen which you then put your own Raspberry Pi into to power it.

General_Failure
Apr 17, 2005

Varkk posted:

Maybe a Pi-Top? It's a laptop shell with a keyboard and screen which you then put your own Raspberry Pi into to power it.

Yeah. That was my other idea. Sadly I saw one in an affordable range on eBay shortly before I thought of taking this line of attack. I think it was about AUD$120. The others I've seen have been $400+. Yeah, no. This is just a hobby. Certainly not investing that in something with no returns.

The tablet just finished updating all the packages with the ones in the ubuntu archive and still boots somehow! This is interesting.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Re: Pinebook, two things are going on: the $99 version is too expensive to produce at the moment, and they are trying to build a black 1080p IPS version at some point (but no news on that in a while).



Also re Pinebook/SOPine, a lot of things are super broken while stuff gets into mainline, and the community is more focused on the Rockchip stuff at the moment. It sucks; I have a Clusterboard that is barely functional.

General_Failure
Apr 17, 2005

freeasinbeer posted:

Re: Pinebook, two things are going on: the $99 version is too expensive to produce at the moment, and they are trying to build a black 1080p IPS version at some point (but no news on that in a while).
Also re Pinebook/SOPine, a lot of things are super broken while stuff gets into mainline, and the community is more focused on the Rockchip stuff at the moment. It sucks; I have a Clusterboard that is barely functional.

Ah man. That suuuucks. Kind of related: there's someone who just got a port of RISC OS up to a decent state on a Rockchip board. Can't remember which board, though.

I got my Linux experiment up to Saucy. I'm making an image of the SD card right now. Going to strip out the stuff I don't want and try for the next upgrade.

e: Trying for Trusty now. The difference in layout of the apt repository directories caught me off guard, but it's looking good so far. Hopefully a supported release will actually work.

General_Failure fucked around with this message at 08:55 on May 20, 2018

evol262
Nov 30, 2010
#!/usr/bin/perl
I'm honestly curious why you don't just use a hosted build server, or bring an 18650->5V USB battery pack plus a Pine board with you. It'd be significantly more efficient than an EOL Chromebook...

General_Failure
Apr 17, 2005

evol262 posted:

I'm honestly curious why you don't just use a hosted build server, or bring an 18650->5V USB battery pack plus a Pine board with you. It'd be significantly more efficient than an EOL Chromebook...

Excellent question. I'm using an Orange Pi PC, but it's neither here nor there. The SoC is the important part.

Alright... It's a multi tiered issue.

RISC OS unfortunately needs to be built with the DDE toolchain, which only runs on RISC OS (with some minor caveats, this also includes the Linux port of RISC OS). A year or two ago my ill-fated first attempt used a terrifying combination of things. It almost worked too, but for some reason (or several) the landing points were missing their mark after linking. So I caved and tried again the normal way.

My netbook just can't seem to do the Linux port for some reason. It's also too slow to effectively run RPCEmu. It's a half decayed piece of poo poo. Thanks for nothing, Acer.

I have power banks. Even an unwieldy 20000mAh one. With a solid power connection and a decent capacitor on the board it seems to work okay. But that still leaves me with no way to build. I mean, my port is almost far enough along to self build, but it's not practical. No UI, lovely supervisor prompt and no text editor.

There are a few other options that are kind of unwieldy too, but potentially okay.

Build server... not so easy because of what the toolchain runs on.

In other news, I got the tablet updated to Ubuntu Trusty and got the touch screen working. It's working in relative mode, but it's better than nothing. Especially for an ancient Android BSP kernel from a Chinese manufacturer.

e: All that pretty much means the Linux port is about the only thing I can get away with.
Changing the sources to old-releases.ubuntu.com, then going through the motions until the next distribution is a supported one. From then on, the only way seems to be to edit sources.list back to ports.ubuntu.com and change the dist name before running do-release-upgrade. Otherwise it either tries to fetch the new package list from old-releases.ubuntu.com even though it knows about the newer releases, or it falls over simply because the server has changed.
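In case anyone else ever needs to do something this daft, the shape of each hop is roughly this. Sketch only; the dist names are examples and the exact incantations varied a bit per hop:

code:
# hop between two EOL releases: stay on old-releases, bump the dist name, dist-upgrade
sed -i 's/quantal/raring/g' /etc/apt/sources.list
apt-get update && apt-get dist-upgrade

# hop to a still-supported release: point back at the ports archive and rename the
# dist by hand *before* do-release-upgrade, since it won't switch servers on its own
sed -i 's|old-releases.ubuntu.com/ubuntu|ports.ubuntu.com/ubuntu-ports|g; s/saucy/trusty/g' /etc/apt/sources.list
do-release-upgrade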

So that's that question answered, I guess.

General_Failure fucked around with this message at 00:05 on May 21, 2018

ewe2
Jul 1, 2009

General_Failure posted:

Excellent question. I'm using an Orange Pi PC, but it's neither here nor there. The SoC is the important part.

Alright... It's a multi tiered issue.

RISC OS unfortunately needs to be built with the DDE toolchain, which only runs on RISC OS (with some minor caveats, this also includes the Linux port of RISC OS). A year or two ago my ill-fated first attempt used a terrifying combination of things. It almost worked too, but for some reason (or several) the landing points were missing their mark after linking. So I caved and tried again the normal way.

My netbook just can't seem to do the Linux port for some reason. It's also too slow to effectively run RPCEmu. It's a half decayed piece of poo poo. Thanks for nothing, Acer.

I have power banks. Even an unwieldy 20000mAh one. With a solid power connection and a decent capacitor on the board it seems to work okay. But that still leaves me with no way to build. I mean, my port is almost far enough along to self build, but it's not practical. No UI, lovely supervisor prompt and no text editor.

There are a few other options that are kind of unwieldy too, but potentially okay.

Build server... not so easy because of what the toolchain runs on.

In other news, I got the tablet updated to Ubuntu Trusty and got the touch screen working. It's working in relative mode, but it's better than nothing. Especially for an ancient Android BSP kernel from a Chinese manufacturer.

e: All that pretty much means the Linux port is about the only thing I can get away with.
Changing the sources to old-releases.ubuntu.com, then going through the motions until the next distribution is a supported one. From then on, the only way seems to be to edit sources.list back to ports.ubuntu.com and change the dist name before running do-release-upgrade. Otherwise it either tries to fetch the new package list from old-releases.ubuntu.com even though it knows about the newer releases, or it falls over simply because the server has changed.

So that's that question answered, I guess.

Uh, you tried apt pinning in preferences?

General_Failure
Apr 17, 2005

ewe2 posted:

Uh, you tried apt pinning in preferences?

...no. Remember, I'm dragging someone's ancient, Android-kernel-based, unsupported-for-six-years Linux port up to a supported release. The repos that came with it were dead, so I had to shift it over to the old-releases server. Due to some failing somewhere, the updater fails to hop to a different server even though it is able to deduce the existence of a newer release, so I had to get a little heavy-handed. It was just a total gamble what would break and what wouldn't. Due to a known bug, which I can fix with an entry in a script that I think was clobbered, xorg is a touch wonky. Something to do with the colour depth being wrong initially. I just get around it for now by switching to a terminal and back.

Right now I have an ssh session open to it, because I'm using the PC. It's currently building the Linux port of RISC OS. Should be interesting to see if the UI works.

On that whole building / setting up a build server thing, there is another option. It's a little complex but could be made to work. RO Linux essentially self-builds. The important thing here is that it's running a standard-ish build tree in the CLI. Possibly I could change the target and find where the good stuff starts, to utilise the CLI-based build environment, which honestly I've never touched. Somehow it's calling these things from a Linux CLI. It's fascinating and bizarre.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
That last part is really common. Lots of things are built from a terminal.

code:
./configure
make install
Or

code:
mkdir build
cd build && cmake ..
make install

Boris Galerkin fucked around with this message at 05:18 on May 21, 2018

General_Failure
Apr 17, 2005

Boris Galerkin posted:

That last part is really common. Lots of things are built from a terminal.

code:
./configure
make install
Or

code:
mkdir build
cd build && cmake ..
make install

True that.
Here's a link to the RO Linux port. Remember, I didn't do this. I just use it.

https://github.com/TimothyEBaldwin/RISC_OS_Dev/tree/Linux2

The main makefile definitely holds secrets about how it achieves things, but beyond that it's a little hazy to me.

These lines from the makefile are definitely the key to it, though. They're from the self-build.
TARGET could either be the target OS, or the name of the build environment.
METHOD would possibly be native, or RPCEmu.
PHASES is the truly useful one. The values correspond to the build phases of the OS.
If I could tease these out and run them from scripts, or maybe even a makefile of my own, I'd be sorted (rough guess sketched below the variables).
code:

TARGET=Linux
METHOD=Linux
PHASES=export_hdrs export_libs resources rom install_rom join
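To be clear, this bit is pure guesswork on my part, but if those really are just make variables then the teased-out invocation ought to look something like this (variable values lifted from the snippet above; everything else hypothetical and untested):

code:
# hypothetical wrapper -- assumes the top-level makefile takes these on the command line
make TARGET=Linux METHOD=Linux PHASES="export_hdrs export_libs resources rom install_rom join"
# or one phase at a time, e.g.
make TARGET=Linux METHOD=Linux PHASES=rom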

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

SoftNum posted:

well click some boxes in the GUI it's that simple always

Yeah, I already tried that. There's nothing I can find in the GUI options, and I literally didn't change any configuration; I just upgraded from Fedora 27 to Fedora 28. I found this post where someone seems to be having the same problem, but I've tried one of the things it suggested and it didn't work.

Uncommented in /etc/systemd/logind.conf:

code:
HandleLidSwitch=hibernate
IdleAction=hibernate
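Though now that I think of it, I don't believe logind rereads that file on its own; my understanding is it needs a restart (or a reboot) before the change even applies, something like:

code:
sudo systemctl restart systemd-logind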
The other thing that he mentioned was

quote:

Ok after a fresh installation of Fedora 28 the other day (after UPDATE 4), resume is now added by default to the appropriate line in /etc/default/grub

Except he suggests that a resume parameter should be added to /etc/default/grub by default but mine doesn't have it, even though I haven't touched it.

quote:

sodapop:~ $ cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora_sodapop/root rd.luks.uuid=luks-[UUID] rd.lvm.lv=fedora_sodapop/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

The instructions for this are unclear, so I'm hesitant to try because I don't wanna end up in a situation where I can't boot. It's unclear because, for one, this reply:

quote:

> Then add it to GRUB_CMDLINE_LINUX in /etc/default/grub, e.g. add the
> following (replace /dev/dm-0 with your path):
>
> GRUB_CMDLINE_LINUX="resume=/dev/dm-0"

Suggests that I replace my line completely with what he wrote, but other posts say to prepend/append it to the line. All of the examples are also not dealing with FDE/LUKS and I don't know if that will be an issue.
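For the record, my best guess at the intended end state is appending rather than replacing, roughly like the below. The resume path is a placeholder; I haven't confirmed the right device for my LVM-on-LUKS setup, so this is a sketch, not something I've actually run:

code:
# /etc/default/grub -- existing options kept, resume= appended (placeholder device path)
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora_sodapop/root rd.luks.uuid=luks-[UUID] rd.lvm.lv=fedora_sodapop/swap resume=/dev/mapper/fedora_sodapop-swap rhgb quiet"

# then regenerate the config (BIOS path shown; EFI installs use /boot/efi/EFI/fedora/grub.cfg)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg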

Yaoi Gagarin
Feb 20, 2014

RFC2324 posted:

no i mean any internal-only network with no access to the outside network at all. the only issue i can see is that i don't know how that would handle being clustered, if that's happening.

essentially you can set up a network that only communicates with the vms or the host, no external communication at all. it works functionally just like mounting a drive locally but you issue the commands to mount it as nfs

I've spent some time trying to look up how to do this but couldn't really get anywhere, do you mind providing more details?

RFC2324
Jun 7, 2012

http 418

VostokProgram posted:

I've spent some time trying to look up how to do this but couldn't really get anywhere, do you mind providing more details?

The network definition, and the thing to select in virt-manager while you set it up.
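If you'd rather skip the GUI, the same network can be defined straight through virsh. This is roughly the shape of it; the name, bridge and DHCP range here are illustrative rather than copied from my actual config:

code:
cat > internal.xml <<'EOF'
<network>
  <!-- no <forward> element = isolated: guests and host only, no outside access -->
  <name>internal</name>
  <bridge name='virbr1'/>
  <ip address='10.10.220.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.10.220.100' end='10.10.220.250'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define internal.xml
virsh net-start internal
virsh net-autostart internal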


Add a second NIC device to the VM, and put it on the internal network:


Once you set up the network, define the virtual host (which is at 10.10.220.1 in my case) via the hosts file, and share the NFS mount out like normal.

my hosts file:
code:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.220.1     makhost makhost.makult
10.10.220.191   transmission    transmission.makult
10.10.220.246   samba   samba.makult
10.10.220.213   puppet  puppet.makult
10.10.220.206   git     git.makult
10.10.220.128   home    home.makult
10.10.220.138   pxeboot pxeboot.makult
10.10.220.245   plex    plex.makult
10.10.220.163   minecraft       minecraft.makult
10.10.220.175   dhcp    dhcp.makult
and my fstab:
code:
/dev/mapper/centos-root /       xfs     defaults        0       0
UUID=f0b7b5ac-1720-4d7a-9300-4f44e2416d50       /boot   xfs     defaults        0       0
/dev/mapper/centos-swap swap    swap    defaults        0       0
10.10.220.1:/data       /data   nfs     rsize=8192,wsize=8192,timeo=14,intr     0       0
makhost:/data   /mnt    nfs     rsize=8192,wsize=8192,timeo=14,intr     0       0
And I manage any changes I need through puppet.

libvirtd will automagically do dhcp on that internal only network, so it ends up stupid easy once you wrap your head around the concept.
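The host side of the share is just an ordinary NFS export, something along these lines (options here are the generic ones, not necessarily exactly what I run):

code:
# /etc/exports on the host
/data   10.10.220.0/24(rw,sync,no_subtree_check)
# then reload the export table
exportfs -ra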

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

All of the following on my simple home network:

On Machine A I run nginx as a reverse proxy in front of different web apps like sonarr and plex as well as serving some php sites. This has worked fine for a long time accessing the stuff via http://whatever.com/something

I recently added Machine B which handles my lets encrypt cert and dynamic dns updating. I don't really want to move lets encrypt to Machine A right now for reasons.

How do I configure nginx on Machine A to let me access the services on that machine via https? Or is it not possible?

I don't really know nginx at all, but all the ssl config examples I see assume that the certificates are local to the nginx machine.

Volguus
Mar 3, 2009

Thermopyle posted:

All of the following on my simple home network:

On Machine A I run nginx as a reverse proxy in front of different web apps like sonarr and plex as well as serving some php sites. This has worked fine for a long time accessing the stuff via http://whatever.com/something

I recently added Machine B which handles my lets encrypt cert and dynamic dns updating. I don't really want to move lets encrypt to Machine A right now for reasons.

How do I configure nginx on Machine A to let me access the services on that machine via https? Or is it not possible?

I don't really know nginx at all, but all the ssl config examples I see assume that the certificates are local to the nginx machine.

Stupid idea: make the certs on Machine B local to Machine A. Without actually making them local, you can network mount the location where the certs are and then tell Machine A nginx to just get them from there (simply follow the guides, just change the paths: /mnt/machineb/certs instead of /etc/certs or wherever the gently caress they usually put them).
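The relevant chunk of the nginx config on Machine A would end up looking something like this; paths, hostname and upstream port are made up for illustration:

code:
server {
    listen 443 ssl;
    server_name whatever.com;

    # certs live on the network mount from Machine B
    ssl_certificate     /mnt/machineb/certs/fullchain.pem;
    ssl_certificate_key /mnt/machineb/certs/privkey.pem;

    location /something/ {
        proxy_pass http://127.0.0.1:8080/;   # whatever the app actually listens on
    }
}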

Edit: or, like nem says, add Machine C to the mix. Proof, yet again, that any problem in CS can be solved by adding another level of indirection.

Volguus fucked around with this message at 00:27 on May 26, 2018

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
Store the certificates on a network mount that's shared between machine A and machine B (nfs, cifs), then on machine A point your Nginx config to use those SSL files from LE?

Or, comedy option: rsync them over after you finish. If you're tied down by Python or other library issues on machine A, getssl is a bash implementation that'll work without issue.
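Comedy option spelled out, roughly; hostnames and paths are invented, and -L is there because the live/ directory is full of symlinks:

code:
rsync -aL /etc/letsencrypt/live/whatever.com/ machinea:/etc/nginx/certs/ \
  && ssh machinea 'systemctl reload nginx'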

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Probably a question that I could've Googled, but what the hell:

When I copy a file from my laptop to my cifs mount, such as this:

code:
 rsync -avhP Downloads/Fedora-Workstation-Live-x86_64-28-1.1.iso /cifs/Programs_etc/system-ISO/Fedora/ 
It copies and then stalls for a few seconds at the end of the transfer, presumably flushing the cache used by rsync and/or the cifs share.

I want to see exactly how fast my local transfers are, without any caching enabled, so that when rsync gets to 100% it really means the sync is finished.

Is there a flag to disable caching, so I can get a realtime appreciation of how quick my transfers are?

I haven't got any network issues or anything, I just want to see how fast the hardware is blazing :ninja:

evol262
Nov 30, 2010
#!/usr/bin/perl
It's waiting for sync. Change the mount options on your CIFS mount and/or server

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

evol262 posted:

It's waiting for sync. Change the mount options on your CIFS mount and/or server

Ah! You mean specify the cifs mount with 'sync' to negate async in fstab? I hadn't thought of that one.

I'm in work at the moment but when I get home I'll try that. Thanks!

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
The share I was trying to configure as synchronous is an autofs cifs mount, so I tried changing the auto.foo file.

From this:

code:
 foo   -fstype=cifs,rw,noperm,credentials=/etc/cifscreds.txt   ://192.168.0.6/fooshare 
To this:


code:
 foo   -fstype=cifs,sync,rw,noperm,credentials=/etc/cifscreds.txt   ://192.168.0.6/fooshare 
And also this:


code:
 foo   -fstype=cifs,rw,noperm,credentials=/etc/cifscreds.txt	-o sync   ://192.168.0.6/fooshare 
But neither would mount. Never mind. It wasn't important anyway. I was just loving around to see how fast my local net is.

I have another question:

I've set up WOL on my gaming rig so that I don't have to go to the effort of getting up off the couch and walking 3 paces to tap the conveniently placed and effortlessly waist-heighted power button on my rig.

It's working a treat and I can turn it on from the shell on another machine, or via the pfSense WOL interface.

But I'm wondering about the security behind WOL. Is the 'magic packet' generated cryptographically when you first set it up, either by your firmware or by the OS or something? Are there measures in place to make sure that an intruder couldn't just spoof a WOL packet and turn stuff on?

SoftNum
Mar 31, 2011

apropos man posted:


But I'm wondering about the security behind WOL. Is the 'magic packet' generated cryptographically when you first set it up, either by your firmware or by the OS or something? Are there measures in place to make sure that an intruder couldn't just spoof a WOL packet and turn stuff on?

No. WOL packets aren't (typically) routable, so the security is the same as all the other security preventing people from getting onto your broadcast (layer 2) domain (Wi-Fi security, physical switch access, etc.)
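There's no crypto anywhere in it: the whole "magic" packet is just six bytes of 0xFF followed by the target's MAC address repeated 16 times, usually thrown onto the wire as a broadcast UDP datagram. Which is why the little sender tools need nothing but the MAC, e.g. (placeholder MAC and broadcast address):

code:
wakeonlan -i 192.168.0.255 aa:bb:cc:dd:ee:ff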

SoftNum fucked around with this message at 14:17 on Jun 2, 2018

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Easiest way to run a scheduled job to mirror two directories on a Windows box with a Linux box?

I've tried WinSCP and Cygwin but I want something that detects changes on both ends. Like having a small Dropbox without resorting to using Dropbox.

xzzy
Mar 5, 2009

Do you want something rsync style, or btsync style?

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

xzzy posted:

Do you want something rsync style, or btsync style?

Actually, btsync wouldn't be a bad idea. I'd like to sync about 6 devices, two running Windows 10 and the rest running either commandline or desktop Linux.

Just for sharing useful files and stuff. Keepass for work and snippets of stuff. It doesn't have to be a large amount of data.

I tried Syncthing a couple of times and found it a bit convoluted. I'll have a look at btsync, because one or two of my Linux boxes are on 24/7.

xzzy
Mar 5, 2009

btsync is a kickass tool until you run into their attempts at monetization. Great tech, silly strategy.

Unfortunately none of the open source options are anywhere near as stable. Syncthing was garbage last time I tried it.

mystes
May 31, 2006

All file synchronization programs are really terrible. Are you sure you definitely need automatic two way synchronization and can't use samba, scp, git or something?

ToxicFrog
Apr 26, 2008


I've been using syncthing between my server, desktop, laptop, and phone for ages and it's great. I moved away from btsync for this and have no regrets. :shrug:

Horse Clocks
Dec 14, 2004


As of when I last looked (2017?), the one perk btsync has over syncthing is the encrypted share utility.

As I usually only ever have one machine on at a time, the effectiveness of p2p sync is minimal.

But with access to a shared server that's running btsync I can have the latest copy available all the time. Plus, nobody else on the machine, even with root access, can see the contents of the files.

mystes posted:

All file synchronization programs are really terrible. Are you sure you definitely need automatic two way synchronization and can't use samba, scp, git or something?

I used to use unison and often found I forgot to sync, or it didn’t sync.

With an automated system, I’ve had 0 “I could have sworn I copied this file there” scenarios.
Once or twice, I’ve had “I could have sworn I deleted this poo poo”, but less often than I had with a manual/scripted setup.

Horse Clocks fucked around with this message at 21:45 on Jun 4, 2018

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Horse Clocks posted:

As of when I last looked (2017?), the one perk btsync has over syncthing is the encrypted share utility.

As I usually only ever have one machine on at a time, the effectiveness of p2p sync is minimal.

But with access to a shared server that's running btsync I can have the latest copy available all the time. Plus, nobody else on the machine, even with root access, can see the contents of the files.


I used to use unison and often found I forgot to sync, or it didn’t sync.

With an automated system, I’ve had 0 “I could have sworn I copied this file there” scenarios.
Once or twice, I’ve had “I could have sworn I deleted this poo poo”, but less often than I had with a manual/scripted setup.

How well does it work for detecting file changes if you had, say, 5 systems and one of them was a permanent 24/7 Linux server? If you turn one of the four other machines on and, say, open a file, make a quick edit and save, is it synced pretty effectively? btsync doesn't get confused about which is the 'right' version?

ToxicFrog
Apr 26, 2008


apropos man posted:

How well does it work for detecting file changes if you had, say, 5 systems and one of them was a permanent 24/7 Linux server? If you turn one of the four other machines on and, say, open a file, make a quick edit and save, is it synced pretty effectively? btsync doesn't get confused about which is the 'right' version?

It's been a while since I used btsync, but when I did use it my use case involved an always-on server and a laptop that was constantly dropping in and out of connection due to being put to sleep and/or moved out of wifi range. As long as you didn't edit the same file on both systems while they were out of communication it was fine. If you do, whichever file was edited most recently wins and the changes from the other version(s) are discarded.

Syncthing also handles this case fine, if you turn on inotify support¹. Wake up the laptop and it grabs the latest version from the server (if it's been edited there while the laptop was asleep); edit the file and syncthing sees the edit and sends it back to the server. It handles conflicts slightly better than btsync: rather than just throwing out all but one version of the file, the most recent one wins and the others are put next to it with a .sync-conflict-<DATETIME> suffix. Noticing that this happened and merging the various versions (if necessary) is on you, though.

¹ Without this the default behaviour is to scan for changes every hour, which means it probably won't pick them up if the laptop is only on briefly, and if you're using multiple systems at once you're much more likely to generate conflicts. It's also useless for the "edit file on laptop, immediately test changes on server" use case. Until recently you had to run a separate program for inotify support; in 0.14 it was merged into syncthing itself, and in 0.15 it will probably be on by default.
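If you want to check or force it, the knobs live on each <folder> element in Syncthing's config.xml; attribute names from memory, so double-check against your version:

code:
<folder id="default" path="/home/you/Sync" rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="10">
  ...
</folder>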

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Yeah. The .sync-conflict-<DATETIME> thing happened to me a few times when I tried Syncthing.

I've tried btsync/resilio this morning but I can't even get two devices to sync with an approval request.

Might just try and use Gitlab/SSH/PuTTY :(

xzzy
Mar 5, 2009

Maybe look up Unison too. It's a manual file sync tool but is more fire and forget than something like rsync.

https://www.cis.upenn.edu/~bcpierce/unison/

ToxicFrog
Apr 26, 2008


apropos man posted:

Yeah. The .sync-conflict-<DATETIME> thing happened to me a few times when I tried Syncthing.

Yeah, that happens whenever you modify a file on one machine, then modify it on another before the two have a chance to sync. The default (scan every hour) settings make it easy to hit this.

BTSync has it happen more rarely because it uses inotify by default, but when it does happen it's much worse because it just silently throws away all but one version of the file with no indication this has happened.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
OK. Thanks. Looks like I might be trying syncthing again. With some tweaked scan times.

BoyBlunder
Sep 17, 2008
:shrug: I use syncthing and it's great. Only thing I've modified was the scan times (made them more aggressive) and it's been stable for years for me.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Yeah. I think I'm happier with the idea that if I use syncthing and it gets confused, then at least there's a sync-conflict file for me to deal with. Then I don't lose anything and can also debug my own working processes by using the datestamp on the conflict file.

I'm guessing that the fact that I was getting conflict files was in part due to bad working practices (having the same file open on two PCs) rather than purely syncthing getting confused.

I'm back on syncthing now. :)

ToxicFrog
Apr 26, 2008


apropos man posted:

OK. Thanks. Looks like I might be trying syncthing again. With some tweaked scan times.

It looks like as of quite recently the filesystem watcher is enabled by default, so you may not even need to adjust the scan times -- it'll detect changes as soon as they happen, then start syncing them after ten seconds (configurable in advanced per-folder settings) without further changes.

If you're using qsyncthingtray or similar you should see the tray icon start spinning ten seconds after you make a change if the watcher is enabled.

  • Reply