|
MrPablo posted:Use OUTPUT instead of PREROUTING, since it's traffic coming from your local machine: Just wanted to thank you for this, worked perfectly. I'm now trying to set up a cron job to run this every minute to check for IP changes. Right now it runs the following script every minute, which resolves the DDNS address, sets the iptables NAT rule, and outputs the changes to a text file. The only problem is that my text file basically looks like:
2015/08/26 - 00:00:00: IP set to X.X.X.X
2015/08/26 - 00:01:00: IP set to X.X.X.X
2015/08/26 - 00:02:00: IP set to X.X.X.X
Is there any simple way of only executing the rule change + output when the IP address changes? Edit: I think I might have figured it out myself: code:
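One way to sketch that change detection (the hostname, log path, and the iptables rule below are placeholders, not the poster's actual values):

```shell
#!/bin/sh
# Sketch only: HOST, STATE, LOG, and the rule itself are placeholders.
HOST="${HOST:-myhost.ddns.example}"
STATE="${STATE:-/var/tmp/ddns_last_ip}"
LOG="${LOG:-/var/log/ddns-nat.log}"

check_and_update() {
    new_ip=$(dig +short "$HOST" 2>/dev/null | tail -n1)
    old_ip=$(cat "$STATE" 2>/dev/null || true)
    # Only rewrite the rule (and append to the log) when the address changed
    if [ -n "$new_ip" ] && [ "$new_ip" != "$old_ip" ]; then
        iptables -t nat -R OUTPUT 1 -p tcp --dport 1234 \
            -j DNAT --to-destination "$new_ip"   # placeholder rule
        printf '%s: IP set to %s\n' \
            "$(date '+%Y/%m/%d - %H:%M:%S')" "$new_ip" >> "$LOG"
        printf '%s\n' "$new_ip" > "$STATE"
    fi
}

check_and_update
```

Run from a crontab line like `* * * * * /usr/local/bin/ddns-nat.sh`; the rule and the log are only touched when the resolved address differs from the cached one.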
dpkg chopra fucked around with this message at 20:01 on Aug 26, 2015 |
# ? Aug 26, 2015 19:37 |
|
jre posted:Is there no specific error from the mount command in /var/log/messages or /var/log/glusterfs/MOUNTPOINT.log ? Nothing useful in /var/log/messages. (Literally nothing related to this, honestly.) The Gluster mountpoint logs have ~100 lines for each failed mount, but about 90 of them are basically the same as for a successful mount. Here are some snippets from a failed mount: code:
If I remove the security context from the mount options, the file system will mount at boot time (though obviously without the intended security context). I've run the entirety of my audit.log for the last week through audit2allow, and it only outputs two suggestions -- the one I posted before, and one related to httpd and log file renaming, almost certainly not germane to this problem. And the timestamps match up. From audit.log: code:
|
# ? Aug 26, 2015 20:17 |
|
Please don't laugh too hard (unless you really want to). I'm a Windows guy, and had some spare time at work today, so I decided to take a spare PC and install Arch on it. Surprisingly, everything went well through the base install, and I even have a base install of Xorg on it. The problem is that the pointer won't move in X. I have the base cursor, but it won't budge. Two-button MS USB basic mouse. I've done a fair bit of RTFM trying to figure out why there might be an issue, but it's hard when I don't have a clue as to where to even start looking. Feel free to say "dumbass, why did you install Arch" or something similar; I probably deserve it for the sheer audacity of the effort.
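For reference, the usual places to look for a dead pointer under the circa-2015 evdev stack (paths assume Arch defaults; these need a live X session and hardware, so they're a checklist rather than a runnable script):

```shell
pacman -Q xf86-input-evdev                  # is the input driver even installed?
grep -iE 'evdev|mouse|input' /var/log/Xorg.0.log   # did X probe the device?
xinput list                                 # does the running server see it?
evtest /dev/input/by-id/*event-mouse*       # raw kernel events, bypassing X
```

If evtest shows events but xinput doesn't list the device, the problem is in X's configuration rather than the kernel or the hardware.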
|
# ? Aug 27, 2015 00:48 |
|
Do you have xf86-input-evdev installed?
|
# ? Aug 27, 2015 01:15 |
|
Suspicious Dish posted:Do you have xf86-input-evdev installed? Yes. Problem solved. I had an image of WinXP on it, and wiped it for Arch. Mouse and KB worked fine. After the wipe, X wouldn't work, so I assumed all the hardware was working properly. I decided to move the mouse to a different USB port, and all is now well. Not sure why Arch disliked the original port the mouse was plugged into (no, not on a shared hub). Thanks for the help; I'm now going to try to put i3-gaps on, which is in the AUR.
|
# ? Aug 27, 2015 01:50 |
|
jre posted:Stick this at the top of the crontab Ahh, success! For some reason I thought I had already tried this. Thanks.
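The fix being quoted is presumably the usual environment preamble: cron runs jobs with a minimal PATH that misses the /sbin directories where tools like iptables live, so something like this goes at the top of the crontab:

```
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```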
|
# ? Aug 27, 2015 04:01 |
|
What's a decent rolling-release distro based off Debian/Ubuntu/whatever? edit: if it has PPA compatibility, even better! goose willis fucked around with this message at 01:26 on Aug 28, 2015 |
# ? Aug 28, 2015 01:12 |
|
goose fleet posted:What's a decent rolling-release distro based off Debian/Ubuntu/whatever? Aptosid. Or just use debian testing
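If you go the Debian testing route, the whole "rolling" setup is one sources.list line, tracking the suite by name rather than by codename so it keeps rolling across releases (mirror URL is just an example):

```
# /etc/apt/sources.list
deb http://ftp.debian.org/debian testing main contrib non-free
```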
|
# ? Aug 28, 2015 05:05 |
|
evol262 posted:just use debian testing
|
# ? Aug 28, 2015 07:04 |
|
I've got all the fun networking problems. Today the infrastructure I manage had an issue related to a yet-undiagnosed networking condition. Since I don't have access to the switches directly to pull any better diagnostics personally, that's all I can say right now -- things were working strangely, and connectivity between servers was intermittently fidgety. During this state, some of our systems (presumably) became so severely backed up that attempting to telnet to a local service on 127.0.0.1 would result in the sending socket going into SYN_SENT state. tcpdump of lo showed that the SYN packet to initiate the connection was hanging and not being sent at all. Likewise, sometimes trying to ping a server resulted in errors for a few send iterations while the system tried to open the raw ICMP socket. Our rmem/wmem are tuned very high relative to the network I/O of the systems, nf_conntrack's backlog was nowhere near full, and I'm out of my depth in how to dig any further into what's getting backed up so I can follow that back up to the network infrastructure. Anyone have any ideas on where to start looking?
|
# ? Aug 28, 2015 18:21 |
|
Those symptoms are very reminiscent of a network loop, but I don't want to believe that that could really be a possibility... (unless maybe they just installed some new HP switches which come with loving Spanning-Tree DISABLED BY DEFAULT )
|
# ? Aug 28, 2015 18:42 |
|
ChubbyThePhat posted:Those symptoms are very reminiscent of a network loop, but I don't want to believe that that could really be a possibility... (unless maybe they just installed some new HP switches which come with loving Spanning-Tree DISABLED BY DEFAULT ) I think I'm more interested in figuring out what part of the network stack is bottlenecking on the host, so I can definitively start tracking it back up the pipeline.
|
# ? Aug 28, 2015 19:02 |
|
Route handler? Proto handler? Prerouting? I'd probably start walking backwards from whatever you can dig out of ss, and go from there, trying tcp_limit_output_bytes, TCP fastopen, tcp_tw_reuse, and tcp_tw_recycle. Of course, none of those really apply to ICMP, and they're all predicated on the same root problem. If it's the proto handler or route handler, I don't even know how to start troubleshooting that without systemtap. And you could theoretically adjust thash_entries, but it's really unlikely that that's a problem on modern systems with lots of memory. Thomas Graf did an interesting talk on it last year, and the slides are available, which gives a good 10,000 ft view of how everything passes through everywhere, but I can't find video of the session or the Q&A afterwards. Not even close to my area of expertise either, but a lot of the guys who know this stuff in and out and work on it are very active on freenode/#openvswitch, and they may be able to point you in the right direction.
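A concrete starting checklist along those lines, all read-only, assuming a reasonably modern Linux with iproute2 installed:

```shell
# Where are things piling up? All of these are read-only.
ss -s 2>/dev/null || true                    # socket totals; anything in SYN-SENT?
ss -tan state syn-sent 2>/dev/null || true   # the stuck connections themselves
nstat -az 2>/dev/null | grep -iE 'drop|overflow|prune|fail' || true
# softnet_stat: 2nd hex column per CPU is packets dropped off the backlog
cat /proc/net/softnet_stat 2>/dev/null || true
tc -s qdisc show dev lo 2>/dev/null || true  # qdisc-level drops on loopback
```

Non-zero drop counters narrow down which layer to chase before reaching for systemtap or dropwatch.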
|
# ? Aug 28, 2015 19:49 |
|
Any thoughts on best practices for creating a partition vs. using the raw disk for an LVM PV? I typically create an 8e-type partition on /dev/sdX1 and then pvcreate that, but I've been seeing some tutorials that just use the raw /dev/sdX device rather than a subset partition.
|
# ? Aug 28, 2015 21:39 |
|
Martytoof posted:Any thoughts on best practices on creating a partition vs using the raw disk for an lvm PV? I haven't read a compelling case for partitioning a disk and then pvcreating that. The only thing I read was that an idiot admin may mistakenly see an unpartitioned block device and reformat an in-use block device without thinking first, but that's more of a what-if scenario if you work with braindead co-workers. I find it's much cleaner - especially when working with remote storage or a vmdk - to just pvcreate the raw block device. This is especially true when you go to resize it. No having to futz with kpartx to re-read the new partition boundaries, just echo 1 > rescan, pvscan, and lvextend.
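The resize workflow being described, spelled out (device, VG, and LV names are examples; the point is there's no partition table to re-read):

```
# after growing the virtual disk on the hypervisor side:
echo 1 > /sys/block/sdb/device/rescan   # have the kernel pick up the new size
pvresize /dev/sdb                       # grow the PV to fill the disk
lvextend -r -L +10G vg0/data            # -r grows the filesystem too
```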
|
# ? Aug 28, 2015 21:45 |
|
I think I started doing the partition just because it was mentioned in some centos document but now you make a really compelling argument for not using a partition to the point where I want to reconsider.
|
# ? Aug 28, 2015 21:48 |
|
There's a stackexchange discussion that disagrees with me, though. However, I find most of the reasons to be... not very good, and certainly not worth the headache of having to deal with partitions when you're dicking with drive resizing.
|
# ? Aug 28, 2015 22:30 |
|
Martytoof posted:Any thoughts on best practices on creating a partition vs using the raw disk for an lvm PV? Depends on whether your coworkers are incompetent and whether you wish to reboot your VM every time you resize it. In some moron's infinite wisdom, they patched the kernel to block partition table reloads for disks that have in-use partitions (for RedHat/CentOS at least). This means that if you resize a partition on, say, CentOS/RHEL 6, you have to reboot afterwards before you can do a pvresize & lvextend. Not sure if this applies to other distros. Using the full disk rather than partitioning means you can do live resizes, which is preferable to me, and the people I work with are not dumb enough to gently caress it up. theperminator fucked around with this message at 00:46 on Aug 29, 2015 |
# ? Aug 29, 2015 00:19 |
|
This may be better posted in the Virtualization thread, but it's all Linux-based so I'll start here. What would be the best filesystem to use for a partition that I want to share as a block device to KVM guests? To elaborate, I have one drive that is a shared data drive, which the host can read/write, as well as multiple guests who are unable to communicate with the host over the network due to a bridged network setup. I am looking at Gluster right now, but from what I can see it wants to work over the network, which might make it a no-go.
|
# ? Aug 29, 2015 01:40 |
|
You're looking at traditional clustered shared filesystems. ocfs2, gfs, etc. But none of them are going to be happy with concurrency without a way to communicate. There are janky solutions involving locking files from all the nodes placed in a particular portion of the disk (or another partition on it), and clvm can do some of those, but seriously consider adding a private NATed cluster network to synchronize
|
# ? Aug 29, 2015 03:10 |
|
evol262 posted:You're looking at traditional clustered shared filesystems. ocfs2, gfs, etc. But none of them are going to be happy with concurrency without a way to communicate. There are janky solutions involving locking files from all the nodes placed in a particular portion of the disk (or another partition on it), and clvm can do some of those, but seriously consider adding a private NATed cluster network to synchronize So do 2 interfaces on the guests, one bridged and one using just an internal only virtual network?
|
# ? Aug 29, 2015 03:28 |
|
Yep -- separate cluster networks used to be pretty common for this crap 5+ years ago (and to negotiate who held the service address, and cluster heartbeats, and a bunch of other stuff) in the days before haproxy and cloud services ruled the world. If you want to be able to share a disk across a bunch of systems and have them all able to write it, it's still something you can do. Of course, you'd do it for something like vcs/vxfs shared disks and rac and ocfs and rhcs with a scsi array cabled to both or the same fc/iscsi lun mapped to both. Locally, you may as well just export it over nfs unless you need "real filesystem" semantics for some reason, or you want to learn about clustered filesystems
|
# ? Aug 29, 2015 03:34 |
|
evol262 posted:Yep -- separate cluster networks used to be pretty common for this crap 5+ years ago (and to negotiate who held the service address, and cluster heartbeats, and a bunch of other stuff) in the days before haproxy and cloud services ruled the world. This is on my home server/lab, so it's mostly for learning, hopefully without upsetting the delicate balance of torrents and other stupid poo poo too badly. I'm trying to set it up so all the host is doing is running libvirtd and holding the data, while having a dedicated VM for each different service I am running (samba, transmission, etc). I wanted to have a central data store mainly for flexibility, and just to know how to do it for when I need to deal with it in a production environment, preferably without having to deal with vxfs. I have worked with vxfs, but don't want to have to pay for something like that in my home lab, and thought it was mainly for SAN storage.
|
# ? Aug 29, 2015 04:29 |
|
You can use a macvtap interface on the host, but NM doesn't have a friendly way to manage this. You can use ovs bridges, which don't have that problem at all, but neither virt-manager nor NM (last time I looked) have friendly ways to manage this, and you need to edit guest xml and interface confs by hand. Or use a plain Linux bridge, which is also OK with this, but it takes an iptables rule and a couple of sysctls.

vxfs is used for shared block storage. Like ocfs2 and gfs. ocfs2 is pretty painless to set up, actually, though gfs2 is arguably more widely used (and less painful than it used to be, since pacemaker takes over a lot of the bullshit cluster.xml used to). If you want to share a block device, you need some clustered filesystem. Alternatively, libvirt can just pass poo poo in directly ala virtualbox, which maybe sounds like what you want if the host won't need to access samba/etc. evol262 fucked around with this message at 05:15 on Aug 29, 2015 |
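On the "pass it in directly" option: libvirt's 9p filesystem passthrough is a single `<filesystem>` element in the guest XML (directory and tag name below are examples); many distro kernels ship the 9p modules, so no custom kernel build is needed in that case:

```xml
<!-- goes inside <devices>; mount in the guest with:
     mount -t 9p -o trans=virtio,version=9p2000.L shared /mnt/shared -->
<filesystem type='mount' accessmode='mapped'>
  <source dir='/srv/shared'/>
  <target dir='shared'/>
</filesystem>
```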
# ? Aug 29, 2015 05:05 |
|
evol262 posted:Alternatively, libvirt can just pass poo poo in directly ala virtualbox, which maybe sounds like what you want if the host won't need to access samba/etc I was trying this one originally, but everything I saw said you need to use the 9p filesystem, which requires custom compiling it into the kernel. Thanks for the pointers
|
# ? Aug 29, 2015 06:04 |
|
Something spooky is happening here and I don't know how to fix it. Summary: traffic that appears identical in tcpdump is being treated differently by iptables. Details below. Test setup:
* device 0 - a Linux machine sitting on IP 192.168.0.121
* device 1 - a dumb device that just sends packets to port 4000 on 192.168.0.121. The IP address of this device is set with the command sudo arp -s 192.168.0.27 MAC_ADDRESS, because it does not make DHCP requests and can't be made to do anything other than its job.
* device 2 - a Linux machine sending packets to port 4000 on 192.168.0.121 with the command: code:
code:
code:
The weird junk is from the sensor, and the date is from the above command. Note that the destination IP/port of this traffic is exactly the same according to tcpdump. Now if I forward all traffic from UDP port 4000 to 2700: code:
code:
code:
Anyone have some insight into what might be happening? The Gay Bean fucked around with this message at 06:47 on Sep 3, 2015 |
# ? Sep 3, 2015 06:11 |
|
Hey, it's me again. I installed Korora on a ThinkPad Yoga 12, and thanks to my experience in this thread, everything went off without a hitch. But now I want to get touch support working and I can't seem to find any relevant documentation. For instance, where is the Korora driver source? I can't seem to find xorg.conf, or any udev rules, or whatever. All the config files seem to be completely missing. Anyway, I'll cut to the chase: I was hoping to get started with Linux (driver/kernel?) development. Does anyone know where I can learn more and get started? Also, does anyone know where I can find Korora-specific code?
|
# ? Sep 3, 2015 07:14 |
|
Re: iptables. Can you post your entire ruleset, please?

Storgar posted:Hey, it's me again. I installed Korora on a Thinkpad Yoga 12, and thanks to my experience in this thread, everything went off without a hitch. But now I want to get touch support working and I can't seem to find any relevant documentation. For instance, where is the Korora driver source? I can't seem to find xorg.conf, or any udev rules, or whatever. All the config files seem to be completely missing? Korora is a fedora remix. Less than 1/1000 odds they've written any drivers, and almost no chance that they haven't upstreamed them if they have. Install kernel-devel.

Storgar posted:Anyway, I'll cut to the chase. I was hoping to get started with linux (driver/kernel?) development. Does anyone know where I can learn more and get started? Also, does anyone know where I can find Korora specific code? kernel-devel. For Korora-specific stuff, I'd look at korora-release and their kickstarts, but I bet it's renaming in anaconda, some repos added, some additional packages, and swapping branding, with no significant engineering/code.
|
# ? Sep 3, 2015 16:08 |
|
Oh I see. I'm taking a look at Fedora and I realized that they support KDE Plasma 5 too. You're right about the Korora differences I think.
|
# ? Sep 3, 2015 17:18 |
|
evol262 posted:Re: iptables. Can you post your entire ruleset, please? code:
* Disabling rp_filter
* Enabling logging of martians and looking at logs
* Checking for invalid checksums or problems in Wireshark
* Writing a C UDP listener; when I did this, the packets from host1 did not come through to recv_from calls, but the packets from host2 did (without the firewall rule active).
The data from the sensor is visible in Wireshark, tcpdump, and socat but invisible to my C program and iptables. The super crazy thing is that a default boost::asio socket is capable of seeing the data, so it seems like there's some socket option that can be set to make it magically visible. I just haven't found it yet.
|
# ? Sep 3, 2015 18:19 |
|
The Gay Bean posted:
code:
|
# ? Sep 3, 2015 23:24 |
|
Seems like "-p all" and "-p udplite" both don't take the "--dport" option. Alright, I'm convinced that this is actually a bug and not something I'm doing wrong, so I'll report it to the iptables guys.
|
# ? Sep 4, 2015 01:03 |
|
The Gay Bean posted:Seems like "-p all" and "-p udpite" both don't take the "--dport" option. netstat -s? Does a log rule in front of your NAT rule match both packets? Does one of the packets have a VLAN tag? If all else fails, try dropwatch. It's a pretty simple rule; I have to believe there is something wonky with your packet...
|
# ? Sep 4, 2015 03:42 |
|
Okay, I have a better idea of what is going on now. There are two devices on the network sending UDP/IP packets with:
source = 192.168.0.27
destination = 192.168.0.121
Only the MAC address of the source is different. This appears to be confusing iptables. It seems like "-m mac --mac-source" can only apply to ACCEPT/DROP rules, so I'm still a bit stumped on what to do.
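For what it's worth, iptables-extensions documents the mac match as usable in the PREROUTING, INPUT, and FORWARD chains of any table (not just filter), so a per-sensor redirect might still be expressible as something like this (interface and MAC are placeholders):

```
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 4000 \
         -m mac --mac-source aa:bb:cc:dd:ee:01 \
         -j REDIRECT --to-ports 2700
```

One rule per sensor MAC, each redirecting to its own local port, would give each sensor its own listener without extra NICs.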
|
# ? Sep 4, 2015 06:04 |
|
The Gay Bean posted:Okay, I have a better idea of what is going on now. Huh? You have two devices with the same IP address?
|
# ? Sep 4, 2015 13:19 |
|
It's a sensor (not a polished, ready-for-market product; it's from one of our partner companies) with an Ethernet interface. When connected to a network, it has a very specific set of behaviors it can perform: wait for UDP packets on port 4000 containing a startup key, send a response to 192.168.0.121, then start sending data to 192.168.0.121:4002. All packets it sends have the sender IP set to 192.168.0.27 in the IP header. It was designed to be connected to a dedicated NIC, but it's much more convenient to just hook it up to the wall. We also want to connect multiple sensors to the same computer, but we don't want to install 4 NICs to connect 3 sensors, especially since we want to use it on site with laptops. So I have 3 choices: get it to work with iptables, ask our partner company if they can redesign their product for us, or resign myself to having every deployment require more hardware and either stick to desktop boxes or buggy USB3 network adapters. Right now I'm writing a tee program in libpcap to listen on ports 4000/4002 and forward to a different local port depending on the incoming packet's source MAC address. If there's a better way to do it, I'm willing to give it a shot. The Gay Bean fucked around with this message at 14:26 on Sep 4, 2015 |
# ? Sep 4, 2015 14:22 |
|
I'm running FreeNAS, and I have NFS enabled for my Linux machines. When I share /mnt/foo read-only, it works; when I share /mnt/foo/bar as read-write, it works; but when I do both, it gives me an error:
can't change attributes for /mnt/foo/bar: MNT_DEFEXPORTED already set for mount 0xfffffe0037f869a8
bad exports list line /mnt/foo/bar -mapall
And then of course when I try to mount I get:
mount request denied from 192.168.1.106 for /mnt/foo/bar
Is there a way around this? Am I just being stupid about my mounting? I thought you could mount a parent directory and a subdirectory in NFS. Or is NFS not that sophisticated?
|
# ? Sep 4, 2015 16:03 |
|
If ARP who-has shows multiple systems holding the same IP, you're gonna have a bad time with iptables. Have you looked at arptables? Megaman posted:I'm running Freenas, and I have NFS enabled for my Linux machines. When I share /mnt/foo read only, it works, when I share /mnt/foo/bar as read write it works, but when I do both it gives me an error: You should set up another export if you want to mount a subdirectory with different options.
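On the NFS side: FreeBSD-derived mountd (which FreeNAS uses) requires that all export lines for the same filesystem and the same client set carry identical flags, which is what the MNT_DEFEXPORTED error is complaining about. /mnt/foo and /mnt/foo/bar can only carry different options if bar is its own filesystem, i.e. its own ZFS dataset. A sketch of a working exports pair under that assumption (network and user are examples):

```
# /etc/exports (FreeBSD syntax); requires /mnt/foo/bar to be a separate dataset
/mnt/foo      -ro            -network 192.168.1.0 -mask 255.255.255.0
/mnt/foo/bar  -mapall=nobody -network 192.168.1.0 -mask 255.255.255.0
```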
|
# ? Sep 4, 2015 17:31 |
|
Re: iptables. off the top of my head, I'd suggest putting a trace on the packet as it goes through iptables so you can see exactly what rules are getting hit. It looks something like: code:
It's a long shot and probably wrong, but it seems iffy to me that (a) you're putting your rules in the NAT table when NAT (in the traditional sense) doesn't apply to UDP because it's connectionless, and (b) you're having to specify the IP address of the destination, rather than just the new port. But I doubt those things are causing your problem.
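The trace setup referred to above is usually along these lines (a sketch; the exact log-backend module name varies by kernel version):

```
modprobe nf_log_ipv4                      # or ipt_LOG on older kernels
iptables -t raw -A PREROUTING -p udp --dport 4000 -j TRACE
iptables -t raw -A OUTPUT     -p udp --dport 4000 -j TRACE
# every rule the packet traverses is then logged; watch dmesg or syslog
```

Since the raw table sees packets before anything else, this shows exactly which chains and rules each of the two "identical" packets actually hits.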
|
# ? Sep 4, 2015 23:37 |
|
evol262 posted:If ARP who has has multiple systems holding the same IP, you're gonna have a bad time with iptables. Have you looked at arptables? I'm confused, I already have two exports. One for the parent directory, and one for the subdirectory. The subdirectory should have r/w, and the parent r/o, but this does not work, or am I misunderstanding your comment?
|
# ? Sep 6, 2015 20:13 |