|
evol262 posted:Can you ping out with static configuration? I'd guess that ip forwarding is off or -m physdev --physdev-is-bridged got unset I haven't had a chance to mess with tcpdump yet, but I can ping out when I set a static ip. I cannot ping the vm from my LAN. I'm not sure what either of those things you mentioned is referring to. I'll go Google around and see what I can find.
|
# ? Feb 15, 2017 20:48 |
|
|
|
Thermopyle posted:I haven't had a chance to mess with tcpdump yet, but I can ping out when I set a static ip. I cannot ping the vm from my LAN. I'm not sure what either of those things you mentioned is referring to. I'll go Google around and see what I can find. Did your VMs get set to a NAT network with no DHCP instead of the actual bridge?
|
# ? Feb 15, 2017 21:48 |
|
Thermopyle posted:I haven't had a chance to mess with tcpdump yet, but I can ping out when I set a static ip. I cannot ping the vm from my LAN. I'm not sure what either of those things you mentioned is referring to. I'll go Google around and see what I can find.
|
# ? Feb 15, 2017 22:02 |
|
I'd just try this for a start: https://docs.fedoraproject.org/en-U...th_libvirt.html
|
# ? Feb 15, 2017 22:16 |
|
Just do the tcpdumps. Until you identify where the flow of traffic is disrupted you are just guessing. >:(
|
# ? Feb 15, 2017 22:44 |
|
other people posted:Just do the tcpdumps. Until you identify where the flow of traffic is disrupted you are just guessing. Yessir! Unfortunately for everyone who is waiting with bated breath, I won't get time to work on it until tomorrow. Well, I've got time now but I'm tired and I'd rather post on SA and look at kitten videos.
|
# ? Feb 15, 2017 22:55 |
|
So I'm looking at exporting all my mails from Google to my self-hosted mail server. The Dovecot Gmail migration mentions that Gmail uses virtual dirs, which tends to duplicate emails. What's the best method to migrate everything without duplicating emails? Untag everything, delete all tags/folders, then migrate?
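For what it's worth, imapsync is one commonly used tool for this, and it has an option aimed at exactly the Gmail label-duplication problem. A hedged sketch — host names, users, and passwords are placeholders, and the flags are from memory, so check `imapsync --help` before trusting it:

```shell
# Hypothetical invocation; everything host/user/password-related is a placeholder.
# --exclude skips Gmail's "All Mail" virtual dir, which mirrors every message;
# --skipcrossduplicates copies each cross-labelled message only once.
imapsync \
  --host1 imap.gmail.com --ssl1 --user1 you@gmail.com --password1 'app-password' \
  --host2 mail.example.com --ssl2 --user2 you --password2 'secret' \
  --exclude '\[Gmail\]/All Mail' \
  --skipcrossduplicates
```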
|
# ? Feb 16, 2017 00:58 |
|
Could you use POP instead of IMAP?
|
# ? Feb 16, 2017 01:00 |
|
So this is all probably very basic but I'm new to this (and I hope this is the right thread): I've got a linux website with sftp access and just set up a local ubuntu vm running server 16.04.1 (installed lamp from the prompt). What I'd like is to use my local vm as a testing ground (running the same versions of php, ruby, etc.) and push updates to my server. I'd be doing development on my mac laptop (Sublime Text) and have a github account, is there an easy workflow for this?
|
# ? Feb 16, 2017 06:19 |
|
I just wasted an entire evening pulling out my hair because I thought UEFI keeping me from installing Arch was too simple/dumb to be the reason.
|
# ? Feb 16, 2017 06:29 |
|
other people posted:
OK, so I did this. eth0 does not see any traffic. br0 and vnet0 both do see traffic. So, for some reason my bridge isn't...bridging?
|
# ? Feb 16, 2017 17:34 |
|
UEFI is awful. Parts of it are OK, perhaps necessary even, but the whole NVRAM boot order shenanigans and the inconsistency across vendors is horrible. The one thing BIOS got right is that, if you attach a bootable disk to a machine, the machine will boot. With UEFI you often have to open the boot menu, and select the correct one of the three "Disk 0" entries, which is never the first one. Or futz with the UEFI shell until you can get the machine booted and run grub-install again.
|
# ? Feb 16, 2017 17:38 |
|
Thermopyle posted:OK, so I did this. Did you set the sysctls from the link above (Fedora docs)? This is probably iptables. The sysctls will stop it from mucking with bridge traffic. ExcessBLarg! posted:UEFI is awful. Parts of it are OK, perhaps necessary even, but the whole NVRAM boot order shenanigans and the inconsistency across vendors is horrible. Yes, it's something new to learn. That's ok. And UEFI 2 is remarkably consistent unless you have a Lenovo consumer-grade laptop. Having a consistent boot setup across architectures and being able to write binaries in a native format is just icing on the cake. ExcessBLarg! posted:The one thing BIOS got right is that, if you attach a bootable disk to a machine, the machine will boot. With UEFI you often have to open the boot menu, and select the correct one of the three "Disk 0" entries, which is never the first one. Or futz with the UEFI shell until you can get the machine booted and run grub-install again. You actually don't. UEFI will attempt to find an EFI system partition on the first disk. This is no different from legacy boot. Except you can actually boot from other disks by setting nvram instead of hitting the BIOS menu and swapping disks around so Windows doesn't clobber grub/etc.
|
# ? Feb 16, 2017 19:12 |
|
Speaking as someone who owns one, what's wrong with Lenovo UEFI?
|
# ? Feb 16, 2017 19:22 |
|
evol262 posted:Did you set the sysctls from the link above (Fedora docs)? Yeah I did. Doesn't seem to have made any difference. For shits and giggles I just tried adding the iptables rule as mentioned on that page. And it fixed the problem! I'm a little nervous and/or bewildered as to why the sysctl stuff didn't actually disable iptables for bridged traffic...
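For anyone hitting this thread later, the two mechanisms being discussed boil down to roughly this (a sketch; the `net.bridge.*` sysctls only exist once the bridge netfilter module is loaded, and distros differ on where to persist them):

```shell
# Either stop netfilter from seeing bridged frames at all...
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-ip6tables=0

# ...or explicitly accept bridged traffic in the FORWARD chain
# (the rule the Fedora page suggests):
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
```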
|
# ? Feb 16, 2017 19:41 |
|
evol262 posted:As someone who works with it, it's much less awful than legacy boot. evol262 posted:The mess of BIOS boot partitions on large disks is hell. Sure you might need a separate boot partition because of LBA offset restrictions, or because your bootloader doesn't understand the fancy root volume, but at least you were relatively free to partition things however you wanted subject to the offset restrictions. Now you need a GPT, a FAT32 (but not really FAT32) EFI partition, and hope you got your boot paths right. evol262 posted:Troubleshooting booting is hell (especially vs an EFI shell). On half the UEFI machines I use GRUB won't display anything unless you add EFI graphics support to grub.cfg, although maybe that's a GRUB issue. I also regularly encounter UEFI display problems on 4K monitors, making any kind of troubleshooting impossible without a monitor "downgrade". Serial is much simpler, but console redirection on PCs has always been a vendor-specific option and rarely works well. evol262 posted:And UEFI 2 is remarkably consistent unless you have a Lenovo consumer-grade laptop. evol262 posted:Having a consistent boot setup across architectures and being able to write binaries in a native format is just icing on the cake. evol262 posted:UEFI will attempt to find an EFI system partition on the first disk. This is no different from legacy boot. ExcessBLarg! fucked around with this message at 20:08 on Feb 16, 2017 |
# ? Feb 16, 2017 20:01 |
|
ExcessBLarg! posted:As someone who "casually" works with UEFI machines and GRUB it's still awful. ExcessBLarg! posted:No worse than EFI partitions. In the old days all you needed to boot a disk was a valid MBR signature and the bootloader took care of the rest. Things got messy when BIOSes started peering into the MBR partition table and looked for a bootable partition, but this wasn't an issue before 2008. ExcessBLarg! posted:Sure you might need a separate boot partition because of LBA offset restrictions, or because your bootloader doesn't understand the fancy root volume, but at least you were relatively free to partition things however you wanted subject to the offset restrictions. Now you need a GPT, a FAT32 (but not really FAT32) EFI partition, and hope you got your boot paths right. The EFI SP also doesn't need to be FAT32, but it almost always is. Some vendors don't do it, but don't blame EFI because people don't follow the specs. ExcessBLarg! posted:On half the UEFI machines I use GRUB won't display anything unless you add EFI graphics support to grub.cfg, although maybe that's a GRUB issue. I also regularly encounter UEFI display problems on 4K monitors, making any kind of troubleshooting impossible without a monitor "downgrade". Serial is much simpler, but console redirection on PCs has always been a vendor-specific option and rarely works well. grub actually can do console redirection, too. You'd just be missing it from the firmware menu. ExcessBLarg! posted:It's not. On Intel UEFI I can install a bootloader to "EFI/BOOT/BOOTX64.EFI" and it will automatically boot with the default NVRAM configuration, which is great. But Dell UEFI (not sure the actual vendor) won't boot unless I select it from the boot menu. That means I can't just clone an image on a disk and have it automatically boot on arbitrary UEFI hardware--it either requires boot-menu intervention or setting NVRAM variables (which itself requires boot intervention).
I'd bet a lot of money that Dell would tell you "upgrade your firmware to fix this", but that's a common problem with vendor firmwares anyway. Usually not something so common as booting, but still. ExcessBLarg! posted:Open Firmware? ExcessBLarg! posted:Sure it will find an EFI system partition, but it won't (automatically) boot "EFI/ubuntu/grubx64.efi", which is the default Ubuntu bootloader location, until I launch it from a shell and set it in NVRAM. Kind of a dumb argument to have in general. Some people like it, some don't. But, like systemd, it has a number of technical advantages even if some implementations are flawed, and it is the future. You can miss sysvinit and legacy boot until the end of the world, but they're dead, baby. They're dead.
|
# ? Feb 16, 2017 22:38 |
|
ExcessBLarg! posted:The one thing BIOS got right is that, if you attach a bootable disk to a machine, the machine will boot. What kind of charmed life have you been living? There's a shitload of BIOS machines out there (I have two in my basement!) that have ports you can attach mass storage devices to that nonetheless won't boot from them; laptops that have SD card slots but won't boot from SD, in particular, are common, although as late as 2012 I was still seeing machines with USB ports that wouldn't boot from USB. On top of that, you have machines that will boot but have a really narrow definition of "bootable disk". The most prevalent of these are probably systems that support booting from USB, but only if the USB disk "looks like" a Zip disk, meaning it needs to conform to fairly specific restrictions on partition layout. Doesn't matter that the disk would boot fine if it tried executing the MBR; if you can't convince the BIOS to try that in the first place, it's not bootable. My experiences with UEFI haven't been good, but they have at least been bad in ways that are more consistent and easier to debug than BIOS. gently caress bootloaders, in hell, forever.
|
# ? Feb 17, 2017 00:22 |
|
Thermopyle posted:Yeah I did. Doesn't seem to have made any difference. How did you apply the sysctl changes? If you run "sysctl net.bridge.bridge-nf-call-iptables" what is the result?
|
# ? Feb 17, 2017 03:37 |
|
evol262 posted:EFI framebuffers are different from legacy. grub should handle this seamlessly when the config is generated. evol262 posted:I don't know if you're deploying a single image over and over again (with a fixed grub configuration), evol262 posted:Kind of a dumb argument to have in general. Some people like it, some don't. But, like systemd, it has a number of technical advantages even if some implementations are flawed, and it is the future. You can miss sysvinit and legacy boot until the end of the world, but they're dead, baby. I really want to like UEFI. I knew it would be different, but it's a more powerful platform than legacy BIOS with a lot of advantages. However, since UEFI is just a spec to which the actual implementations have varying degrees of non-compliance or variation in behavior, it's resulted in practical headaches that I didn't have to deal with previously. This always happens with complicated specs with insufficient compliance checking. See ACPI, USB-C, etc. ExcessBLarg! fucked around with this message at 14:48 on Feb 17, 2017 |
# ? Feb 17, 2017 14:45 |
|
I need to set up something to get remote access to my work box (Ubuntu MATE 16.04). I'm no Linux guru, so bear with me if I use the wrong terms or whatever.

I would like to be able to connect to my existing session and be able to handle the differences between my work display (3 monitors with different resolutions and orientations) and whatever computer I'm connecting from (typically a single 1920x1080 display). This would probably be something like being able to choose to only display one of the displays, and resize it so I don't have to scroll on my lower res client monitor. A Windows client would be nice for when I'm away from home at my Mom's place, but not entirely necessary.

I had tried NoMachine, which was a mixed bag. I cannot get any client to connect when the server computer is using an Nvidia driver. When I use the nouveau driver, I can connect, but that's not really a practical solution as my non-remote experience is much better with the Nvidia driver. My best guess based on some searching is that to get it to work with the Nvidia driver, I have to do something related to H.264. There's some stuff I've read that says I can either install the NoMachine AVC Pack ($5) or build/install libx264. I've done (or at least think I've done) the latter, to no positive effect. I've also tried setting it to use VP8 or MJPEG, to no effect as well.

NoMachine allows me to connect to an existing session, and I can choose to just display my main remote display and resize it to fit my client screen. Aside from the above problems, I find the visual quality and responsiveness (including after messing with various quality options) to be sub-par relative to my experience with TigerVNC. I will admit I may have no idea what I'm doing. Given the combination of the price for commercial use ($125 for a license), the difficulty in getting it working (so far I've been completely unable to get it to work with the Nvidia driver), and the not-great quality, I don't think it is a viable option.

After trying NoMachine, I have since tried TigerVNC. When running vncserver and connecting and starting a new session, everything is great. The visual quality is great and very responsive. I haven't yet had luck connecting to an existing session with x0vncserver, and even if I did, I'm not sure there is anything that would allow displaying it nicely in a 1920x1080 client.

There are so many remote access options, it is hard to sort through and know what to try. So is there something out there that will allow me to connect to my existing session, handle differences in resolution between server computer and client, and is either free for commercial use or has a license for a single instance that my boss won't balk at? venutolo fucked around with this message at 15:11 on Feb 17, 2017 |
# ? Feb 17, 2017 15:08 |
|
ExcessBLarg! posted:Even without a grub.cfg, GRUB is supposed to drop into its command line where you can manually prod stuff. It's hard to use when the video doesn't also come up though. That's never been a problem with BIOS, but it's a problem in some UEFI environments. The inconsistency is killer. ExcessBLarg! posted:Yes, it's common to use disk duplicators to blat an image onto an SSD build, and have a script that runs on first-boot to do all the hardware-specific configuration. Doing so without intervention requires that UEFI have reasonable boot order defaults, which, true, is the same as BIOS, but it seems that non-sane default configurations are more prevalent in UEFI implementations. Again, though, UEFI does have reasonable defaults. Don't blame the spec for vendors. ExcessBLarg! posted:I actually like systemd. It solves a lot of long-standing problems, some of which were previously addressed by daemontools and upstart, but the difference is that everyone has adopted the one implementation of it. So while systemd operates differently from sysvinit or upstart, once you learn it, it pretty much works everywhere. ExcessBLarg! posted:I really want to like UEFI. I knew it would be different, but it's a more powerful platform than legacy BIOS with a lot of advantages. However, since UEFI is just a spec to which the actual implementations have varying degrees of non-compliance or variation in behavior, it's resulted in practical headaches that I didn't have to deal with previously. This always happens with complicated specs with insufficient compliance checking. See ACPI, USB-C, etc.
|
# ? Feb 17, 2017 23:09 |
|
GobiasIndustries posted:So this is all probably very basic but I'm new to this (and I hope this is the right thread): I've got a linux website with sftp access and just set up a local ubuntu vm running server 16.04.1 (installed lamp from the prompt). What I'd like is to use my local vm as a testing ground (running the same versions of php, ruby, etc.) and push updates to my server. I'd be doing development on my mac laptop (Sublime Text) and have a github account, is there an easy workflow for this? This post got kinda buried. But imo you want to use Vagrant and a "box" file that matches the OS you're running in production. And a simple shell provisioner that installs all the same configs and packages you are running on your site. This will give you a dev VM on your local machine that matches prod. Pass your git repo through to the VM as a shared folder. When your tests pass locally, push to prod.
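To make that concrete, a minimal Vagrantfile along those lines might look like this (box name, paths, and the provisioning script are illustrative, not taken from your setup):

```ruby
# Vagrantfile (sketch): an Ubuntu 16.04 box to mirror the production host.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"             # match your prod OS
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.synced_folder ".", "/var/www/site"  # your git checkout, shared in
  config.vm.provision "shell", path: "provision.sh"  # installs LAMP + configs
end
```

Then `vagrant up` builds the VM, and you browse the site at localhost:8080 while editing in Sublime on the Mac side.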
|
# ? Feb 17, 2017 23:42 |
|
Docjowles posted:This post got kinda buried. But imo you want to use Vagrant and a "box" file that matches the OS you're running in production. And a simple shell provisioner that installs all the same configs and packages you are running on your site. This will give you a dev VM on your local machine that matches prod. Thank you! Like I said I wasn't sure if this was the right thread but my main server is on linux and it involves so much stuff I figured I'd start here. I'm working this weekend but Monday I'm off so I'll give it a shot. If anyone else has any recs I'd love to hear them, it seems so simple in my mind (use my mac to dev stuff on a local vm, push to prod/save to github) but nothing is ever simple.
|
# ? Feb 18, 2017 05:32 |
|
Honestly, I'd use a container for this. It's pretty much the ideal case for containers.
|
# ? Feb 18, 2017 07:58 |
|
Why is this bash script ignoring the 'for' loop when I add #!/bin/bash to the start?

code:
I'm doing something stupid, aren't I? Also, I haven't managed to get the 'history -d $linenumber' part to work yet. I'm doing something stupid there also, aren't I?
|
# ? Feb 18, 2017 10:13 |
|
Is the shell you use day to day bash? If not, I suspect you won't have any history to remove, so it loops through nothing.
|
# ? Feb 18, 2017 14:40 |
|
apropos man posted:Why is this bash script ignoring the 'for' loop when I add #!/bin/bash to the start? When you call /bin/bash it's initializing a new shell, so if what you are trying to grep out is in the current shell's history and hasn't been written to disk, it won't be found by the script.
|
# ? Feb 18, 2017 16:29 |
|
I use byobu for my terminal from day to day. I don't think that it's a shell as such, rather a jazzy interface for bash. If I disable byobu and use 'plain bash' instead then all my history is still there. They both seem to use the same .bash_history file, therefore the history command in that script works the same whether I use byobu or not. I also tried changing my terminal to /usr/bin/sh; when I close all terminals and open a /usr/bin/sh terminal I still have the same history, so the shell 'sh' is also using the same .bash_history file. I still don't see why specifying a shell in the first line of my script causes it not to find any instances of the string 'magnet'.
|
# ? Feb 18, 2017 17:38 |
|
What happens if you open the script with history -cr? What's the value of $HISTFILE in your interactive shell? In the script? What happens if you define it as a function, source the function definition, and call it rather than making it a stand-alone script? Writing a function is generally a more reliable way to write stuff that's meant to interact with your current shell session in any case.
|
# ? Feb 18, 2017 17:43 |
|
It seems that I didn't realise running history commands in a script (or non-interactive shell) results in having no history file enabled. I have modified it and it's currently in this messy state: code:
|
# ? Feb 18, 2017 18:19 |
|
Do you need the `set -o history`? I think the `history` command itself will still work without it.
|
# ? Feb 18, 2017 18:42 |
|
To be honest, my idea of grepping the word 'magnet' is not working either. Enabling history within the script causes the 'for xxxxxxx magnet xxxxxxx' line itself to be added to history, and then the next time I run the script it picks that up and messes with the grep command because of the extra carriage returns in that line. I just wanted a simple script to clean up instances of where I've been downloading torrents from the terminal, but I give up.
|
# ? Feb 18, 2017 18:59 |
|
Try this: rm -f .bash_history Problem solved, you don't really need to preserve all the other poo poo in your history do you?
|
# ? Feb 18, 2017 19:03 |
|
grep -v magnet .bash_history > temphistory; mv temphistory .bash_history I'm sure that doesn't work for some snowflake bash history reason, but whatever.
|
# ? Feb 18, 2017 19:15 |
|
sed '/magnet/d' .bash_history
|
# ? Feb 18, 2017 20:41 |
|
evol262 posted:Honestly, I'd use a container for this. Docker?
|
# ? Feb 18, 2017 21:36 |
|
jaegerx posted:sed '/magnet/d' .bash_history Ding. Sed is perfect. Wondered why it was a script to start with. GobiasIndustries posted:Docker? Sure. Set up a docker container with whatever base image you want, add on dependencies (ruby/Python/whatever). Check out the git repo. Install nginx. Start the webserver and export the port. Wanna host it on a server? Dockerfile. New laptop? Dockerfile. Vagrant is sweet if you need a "real" VM for some reason. Webdev really doesn't need one.
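For what it's worth, the kind of Dockerfile being suggested might look something like this; everything here is a placeholder sketch, not your actual stack:

```dockerfile
# Sketch only -- base image and package names are placeholders; match them
# to whatever the production host actually runs.
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y nginx php7.0-fpm git && \
    rm -rf /var/lib/apt/lists/*
# Bake the site in (or bind-mount your checkout with `docker run -v` for dev)
COPY . /var/www/site
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

`docker build -t mysite .` then `docker run -p 8080:80 mysite` gives you the same environment on laptop or server.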
|
# ? Feb 19, 2017 06:03 |
|
xzzy posted:Try this: Heh. I usually delete bash_history about once a month when it gets cluttered, that's with erasedups enabled. VOTE YES ON 69 posted:grep -v magnet .bash_history > temphistory; mv temphistory .bash_history Why didn't I think of some kind of inverse grepping? Because I can be incredibly forgetful/stupid. It's not like I haven't used inverse grep before, either. jaegerx posted:sed '/magnet/d' .bash_history Bingo! Although I haven't got a clue how the syntax works here. When I see a regex I want to use I usually copy and paste and think "I dunno how it works, but it works". Cheers!
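Since you asked about the syntax: in sed, `/magnet/` is an address (a regex choosing which lines the command applies to) and `d` is the command, delete. One caveat: without `-i` it only prints the result and leaves the file alone. A self-contained sketch on a throwaway file:

```shell
histfile=$(mktemp)
printf 'ls -la\nmagnet:?xt=urn:btih:abc\ncd /tmp\n' > "$histfile"

# /magnet/ = address (regex), d = delete; other lines pass through to stdout.
sed '/magnet/d' "$histfile"       # previews the cleaned history

# GNU sed's -i rewrites the file in place instead of printing:
sed -i '/magnet/d' "$histfile"
cat "$histfile"                   # the magnet line is gone
rm -f "$histfile"
```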
|
# ? Feb 19, 2017 09:03 |
|
|
|
Yet more annoying bash stuff. Bash is getting on my nerves this weekend. I keep my scripts in /opt. I refined the script to trim my history and popped it onto my server in the /opt directory and then logged out. I then ran scp to put a copy on my laptop. I ran scp without sudo, thus: code:
If I try and copy a scrap file into /opt on my laptop it won't allow me without sudo, so why was scp allowed to do it?
|
# ? Feb 19, 2017 09:27 |