|
ExcessBLarg! posted:OS bugs. Imagine releasing a laptop, especially 15 years ago, that has to work with a site-licensed Ghost image of Windows 2000 but also do XP and maybe even Windows 98. I mean, sure, it's not going to have the latest graphics driver but it still needs to boot in some VESA mode at least so the IT guy can say "oh, it needs a new graphics driver". Newer ACPI versions actually standardized a list of OS-side power management feature names the firmware can query using the _OSI mechanism (instead of, or in addition to, the "OS name" string), and the OS is required to answer positively if and only if it supports the named feature. You may have seen this in the "dmesg" output: code:
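On a typical Linux system you can check which _OSI interface strings your kernel advertises to the firmware with a quick grep of the boot log; the exact strings vary by kernel version:

```
# List the _OSI feature strings this kernel registers; "Module Device"
# and "Processor Device" are common examples on modern kernels.
dmesg | grep -i '_OSI'
```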
Another example of a sort-of-similar process: when you make an SSH connection, the first thing the sshd daemon at the remote end sends is its version string: something like "SSH-2.0-OpenSSH_6.7p1", for example. The SSH client will likewise send a similar description of its own version to the server before proceeding further in the connection negotiation. It has an actual purpose: if the client is newer than the server and "knows", for example, that offering a particular new protocol feature to a particular old version of the server will cause a connection failure, the client can tailor its protocol negotiation to work around the problem. And if the server is newer than the client, it can happen the other way around too. If either end has no special knowledge regarding the version string returned by the other end, it will just follow the protocol standard as usual. I think OpenSSH 6.x introduced a number of new negotiable protocol options, and it caused a problem with some switches and other devices with SSH management access built into their firmware: the buffer reserved for the SSH protocol options packet was too small for all the new options. A workaround was to use command-line options to disable enough of the new features that the total size of the options packet fit within the buffer of the firmware-based SSH implementation. But that was inconvenient, and required that users keep track of the problematic devices and the options required for each. Some of those devices got a firmware update, but others were so old that the manufacturer was not likely to develop any new firmware versions for them any more. No problem: once the OpenSSH developers were made aware of the problem and the version strings returned by those problematic old firmware implementations, newer versions could apply the required workaround automatically, completely transparently to the user.
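As a sketch of that manual workaround (the host name and exact algorithm lists here are made up; which options you actually need to trim depends on the device), an ssh_config stanza can shrink the negotiation packet by offering fewer algorithms:

```
Host old-switch
    # Offer a reduced algorithm set so the key-exchange init packet
    # stays small enough for the device's fixed-size buffer.
    KexAlgorithms diffie-hellman-group14-sha1
    Ciphers aes128-ctr,aes128-cbc
    MACs hmac-sha1
```

The version-string-based fix described above does the same thing, just automatically, keyed off the banner the old device sends.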
|
# ? Feb 5, 2017 00:55 |
|
|
Are there any decent up-to-date guides regarding painless installation of the latest ELK stack (5.x)? I set it up over the last couple of days, but it started using way too much RAM/CPU and not playing nicely behind an nginx reverse proxy. I was only feeding it nginx logs, so I'm not sure why it consistently shat itself.
|
# ? Feb 7, 2017 22:09 |
|
Odette posted:Are there any decent up-to-date guides regarding painless installation of the latest ELK stack (5.x)? It's probably better if we try to figure out what the issue is. Are you able to post your configs?
|
# ? Feb 7, 2017 22:13 |
|
I encountered several issues in 5.0 and 5.0.1; 5.0.2 and 5.1 ran fine for me. You could try installing 5.2, which was released this week or last.
|
# ? Feb 7, 2017 22:20 |
|
LochNessMonster posted:I encountered several issues in 5.0 and 5.0.1. 5.0.2 and 5.1 ran fine for me. You could try installing 5.2 which got released this or last week I'm on the 5.2 packages of all 3. Vulture Culture posted:It's probably better if we try to figure out what the issue is. Are you able to post your configs? /etc/logstash/conf.d/nginx.conf code:
code:
code:
code:
|
# ? Feb 7, 2017 22:52 |
|
Odette posted:
Can you try setting network.host to 0.0.0.0 and restart Elasticsearch?
|
# ? Feb 7, 2017 23:31 |
|
LochNessMonster posted:Can you try setting network.host to 0.0.0.0 and restart Elasticsearch? No change.
|
# ? Feb 8, 2017 00:08 |
|
In production scenarios, what are the pros and cons of kdump to local disk versus a network endpoint like NFS?
|
# ? Feb 8, 2017 01:34 |
|
Well, if you send to NFS you're dependent on the network stack still functioning. Then there's the speed thing... you've got to sit around and wait for an image the size of system memory to be transferred over a wire. I can't think of any normal situation where dumping to a network share will help forensics where a local dump would fail.
|
# ? Feb 8, 2017 01:40 |
|
Well, most people are discarding userspace pages so the dump should be quite a bit smaller. If you have lots of systems to manage then having a central storage location for dumps can have some advantages. Also, if the system doesn't have much local storage to begin with....
|
# ? Feb 8, 2017 01:44 |
|
other people posted:Well, most people are discarding userspace pages so the dump should be quite a bit smaller. Kdump filled my drives and now my system won't boot!
|
# ? Feb 8, 2017 01:48 |
|
Kdump uses swap by default. Also, network kdumps kexec a new kernel on trap, so you shouldn't need to rely on networking being up (in the crashed kernel), since it'll re-init. I'd probably use SSH in general over NFS. But I suppose the only advantage to kdump is reporting kernel bugs. I've never seen a real-world core that wasn't a hardware failure or an intentional core to capture the state of some driver...
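For reference, an SSH dump target is only a few lines in /etc/kdump.conf on RHEL-style systems (the host name and key path here are placeholders):

```
# /etc/kdump.conf: send compressed, filtered dumps to a remote host over SSH
ssh kdump@dumpserver.example.com
sshkey /root/.ssh/kdump_id_rsa
path /var/crash
# -d 31 drops zero, cache, and userspace pages to shrink the dump
core_collector makedumpfile -l --message-level 1 -d 31
```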
|
# ? Feb 8, 2017 05:56 |
|
I've only ever used kdump to placate fussy users into thinking we're working real hard to figure out why their lovely code keeps crashing the server. Which usually ends up being no more than telling them what function it was in from the stack trace and moving on with my day. I ain't a kernel developer and never will be so if they want more than that, they can crack the dump open.
|
# ? Feb 8, 2017 06:04 |
|
evol262 posted:But I suppose the only advantage to kdump is reporting kernel bugs. I've never seen a real world core that isn't a hardware failure or an intentional core to capture the state of some driver... You're pretty much right. Every time we have used a dump, it's just been helpful to confirm or point to underlying issues, usually in our case hypervisor-related drivers. https://access.redhat.com/solutions/2056743 Was a fun issue!
|
# ? Feb 8, 2017 06:07 |
|
Ubuntu/apt-get question: is there a way to ban a package (and any otherwise-unrequired dependencies) from a meta-package without forcing the whole meta-package to be uninstalled? I'll be goddamned if I'll have HP's lovely print drivers installed if there's not a really good reason for it. I don't even own an HP printer, it just comes along with lubuntu-desktop. I made the mistake of buying an HP printer to keep in my cube, when I saw a small-office color laser unit at a university surplus sale for $10. I regretted it until the day I graduated. I actually legitimately don't even want it on my PC anymore. Is there any reason that uninstalling a meta-package once it's installed is a problem anyway? (maybe dist-upgrades?) I've run into this situation before when I remove Firefox and install Chromium. Paul MaudDib fucked around with this message at 07:02 on Feb 8, 2017 |
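One approach would be a negative apt pin, which blocks a package without uninstalling the meta-package (the package name here assumes HP's drivers ship as hplip; check dpkg -l for what's actually installed):

```
# /etc/apt/preferences.d/no-hplip
# A Pin-Priority below 0 prevents the package from ever being installed.
Package: hplip
Pin: release *
Pin-Priority: -1
```

If the meta-package hard-Depends on it rather than Recommends it, apt will still refuse; the usual trick then is building a dummy replacement package with equivs.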
# ? Feb 8, 2017 06:58 |
|
Vulture Culture posted:In production scenarios, what are the pros and cons of kdump to local disk versus a network endpoint like NFS? Are there possibly any security considerations to think of?
|
# ? Feb 8, 2017 07:28 |
|
I'd like to use a terrible low-powered laptop as a remote programming terminal. What I have is basically an x86 Raspberry Pi v1 Model B with a screen (P3-based Celeron 650 with 256 MB of RAM). It's gonna strugglebus on anything serious, so I guess just pretend it's a Raspberry Pi client since building would be difficult on it anyway. Ideally I'd like to work in Eclipse for C++ or Java, and possibly IntelliJ or NetBeans for smaller programs. Are there any optimizations that would allow me to push the heavy lifting off to a build server? So I can work on a filesystem that is cached locally, but with write-through to the remote filesystem (SSH, NFS, etc)? Ideally I could also tell it to compile and have my IDE hook a build server or something like that, with the output files getting pushed to my local cache. And since we're wishing for a pony, I could also transparently hook a CUDA or Java instance that was running a debug server. Obviously I can do everything on a build server, but I'd like to have nice integration with my IDE and stuff. Is what I'm thinking of remotely similar to anything that exists? I used to work on an OpenMPI cluster and it had some of this kind of functionality in terms of being able to push stuff between front-end and processing nodes. Could I pretend that (multiple) local machines are actually all front-ends on an OpenMPI instance that I serve? Or is there anything I could glue together with a FUSE filesystem to get closer? My advantages over a Raspberry Pi are that I do have a real system architecture, I have an mSATA SSD in an IDE adapter, and my network's not happening over USB either. Swap or fast local disk is not a problem; I have 8 GB in there right now. It's not going to be as fast as IDE but it'll be a lot better than over USB. Paul MaudDib fucked around with this message at 08:35 on Feb 8, 2017
# ? Feb 8, 2017 08:22 |
|
Also, I've never done Emacs before but I guess I'm willing to try anything once
Paul MaudDib fucked around with this message at 09:19 on Feb 8, 2017 |
# ? Feb 8, 2017 09:14 |
|
evol262 posted:Kdump uses swap by default. /var/crash is the default location, isn't it? I work in kernelspace, so I guess I am biased towards the utility of vmcores.
|
# ? Feb 8, 2017 12:15 |
|
Yeah, /var/crash is the location, but it dumps to swap on a core, then swap is scanned at the next boot and copied to /var/crash (if there's space), IIRC.
|
# ? Feb 8, 2017 12:55 |
|
Paul MaudDib posted:Are there any optimizations that would allow me to push the heavy lifting off to a build sever? So I can work on a filesystem that was cached locally, but with write-through to the remote filesystem (SSH, NFS, etc) ? Ideally also I can tell it to compile, and then have my IDE hook a build server or something like that, with the output files getting pushed to my local cache. And since we're wishing for a pony, I could also transparently hook a CUDA or Java instance that was running a debug server. Anything you can ssh into you can mount as a filesystem using sshfs or do bulk file transfers to/from using rsync, so that may be the best place to start; keep your code on the build server, mount it over sshfs, edit it locally with whatever editor or IDE you prefer. Configure your IDE to do builds by sshing into the remote server and running the build command, or just keep a shell open and do it yourself.
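A minimal version of that workflow might look like this (host names, user, and paths are made up for illustration):

```
# Mount the project directory from the build server locally over SSH.
mkdir -p ~/project
sshfs builduser@buildserver:/home/builduser/project ~/project

# Edit locally with whatever you like, then run the build remotely
# where the horsepower is.
ssh builduser@buildserver 'cd ~/project && make -j4'

# Unmount when done.
fusermount -u ~/project
```

Most IDEs can be pointed at a "remote build" by configuring the ssh invocation as an external build tool, so the edit/build loop stays inside the IDE.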
|
# ? Feb 8, 2017 13:03 |
|
evol262 posted:Kdump uses swap by default. Come on man, I know what kdump is for.
|
# ? Feb 8, 2017 14:59 |
|
I tried setting up a raspberry as a thin client a year or so ago and it was pretty miserable. Working in a terminal was sluggish, but technically doable. Web browsing was worthless and nothing could be done about it. The only way I could comfortably do work on it was run a vnc session on a real computer and display it on the raspberry.
|
# ? Feb 8, 2017 15:09 |
|
Odette posted:I'm on the 5.2 packages of all 3. I only run Kibana 4.6, so I'm shooting in the dark a bit here. But maybe try anchoring the locations you're proxy_passing with a ^ at the beginning, in case something is overlapping and messing it up. Also read through kibana.yml in the config directory and see if something jumps out. What happens when you do some curl tests from another machine, for URLs like https://api.domain.tld/elasticsearch and http://api.domain.tld:9200/_cluster/health?pretty ? Anything interesting in the Elasticsearch logs? Maybe the service is not coming up cleanly.
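That anchoring suggestion would look something like this (the path and upstream address are made up for illustration; note nginx does not allow a URI part on proxy_pass inside a regex location):

```
# Regex location anchored at the start of the URI, so an overlapping
# prefix match elsewhere in the config can't shadow it.
location ~ ^/elasticsearch {
    proxy_pass http://127.0.0.1:9200;
}
```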
|
# ? Feb 8, 2017 17:40 |
|
Vulture Culture posted:Come on man, I know what kdump is for. I was clarifying for others. The "ssh" recommendation was for you. Sorry if it wasn't clear. Genuinely curious how often you see cores, though... kdump is always one of those things I've set up in environments and never actually needed.
|
# ? Feb 8, 2017 18:40 |
|
Docjowles posted:Nothing in what you've posted looks obviously insane. Solved the issue by reverting to the default configuration & incrementally changing things. It was a combination of:

- logrotate setting nginx logs to the wrong group & permissions
- the nginx kibana config being overly zealous

Now I just have to add more logs (php/mail/etc).
|
# ? Feb 9, 2017 00:47 |
|
What is a good way to organize an NFS share that contains software installations used by multiple Unix platforms? Is it necessary to silo binaries and libraries for every Linux distribution or should it be enough to just organize them by architecture? I'd like to avoid backing myself into a corner that requires reorganizing the directory structure in the future.
|
# ? Feb 9, 2017 20:45 |
|
Odette posted:Solved the issue by reverting to default configuration & incrementally changing things. Thanks for the update; I couldn't find anything strange about the elastic/kibana setup, as docjowles already said. I'm more familiar with logstash and the dashboarding side of Kibana than the infra side of it. Still curious what the issue was, though. What exactly went wrong with the nginx setup?
|
# ? Feb 9, 2017 21:02 |
|
Charles Mansion posted:What is a good way to organize an NFS share that contains software installations used by multiple Unix platforms? Is it necessary to silo binaries and libraries for every Linux distribution or should it be enough to just organize them by architecture? Not that I recommend going down this dark road, but I've seen it done where they organize directories by OS and kernel version. And when we were doing the 32->64 bit transition, architecture as well. So there would be directories like:

/mnt/software/IRIX_6_5
/mnt/software/SunOS_5_10
/mnt/software/Linux_2_2
/mnt/software/Linux_2_4
/mnt/software/Linux_2_4_2_32
/mnt/software/Linux_2_4_2_64

And so on. Then there were scripts that would read the output of uname and build a path and tweak $PATH to add the appropriate directory. As you can see there was a hierarchy in place so that if there was some oddball release that needed a specific version it could be used but a less restrictive system could use something more globally usable. As for whether it's necessary anymore, it depends. If the package is available in the distribution's database, don't bother. If you have users building code that targets specific versions of libraries you might need it.
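A sketch of the sort of wrapper script described above (the /mnt/software prefix and naming scheme are assumptions matching the example directories):

```shell
#!/bin/sh
# Build a platform-specific software directory name from uname output,
# e.g. Linux + kernel 4.15.x -> /mnt/software/Linux_4_15, and prepend
# its bin directory to PATH if it exists.
os=$(uname -s)
rel=$(uname -r | cut -d. -f1-2 | tr . _)
dir="/mnt/software/${os}_${rel}"
[ -d "$dir/bin" ] && PATH="$dir/bin:$PATH"
echo "$dir"
```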
|
# ? Feb 9, 2017 21:38 |
|
LochNessMonster posted:Thanks for the update, I couldn't find anything strange about the elastic/kibana setup, like docjowles already said. I' more familiar with logstash and the dashboarding side of Kibana than the infra side of it. Was still curious what the issue was. Something to do with this particular line and how kibana expects particular URLs to work. code:
|
# ? Feb 10, 2017 10:25 |
|
God drat, I'm having a heck of a time here... I've got a rarely used KVM VM on a server. Today someone went to use it and couldn't access it. Turns out that for some reason VMs on this server are no longer getting DHCP from my router. This did work fine at some point but because it's so rarely used I don't know what happened. The particular guest OS is XP, but I've tried an ubuntu guest with no success. My ifconfig: http://termbin.com/j1y2 /etc/network/interfaces: http://termbin.com/sdsh The XML for the VM: http://termbin.com/2g13 Anyone have any ideas? edit: also, I just found this termbin.com...pretty neato! Thermopyle fucked around with this message at 22:46 on Feb 13, 2017 |
# ? Feb 13, 2017 22:14 |
|
Thermopyle posted:My /etc/network/interfaces: http://termbin.com/j1y2 That's your ifconfig output or something. What does fgrep -i 'dhcp' /var/log/syslog look like? edit: If two of your VMs are not getting DHCP now, I'd bet it's a setting on your router or, more likely, in KVM's networking; bridge perhaps. Bob Morales fucked around with this message at 22:47 on Feb 13, 2017
# ? Feb 13, 2017 22:42 |
|
Bob Morales posted:That's your ifconfig output or something

Yeah, I fixed it, sorry. It might be a router setting, but I don't have a problem with any actual machines on my network.

pre:
> fgrep -i 'dhcp' /var/log/syslog
Feb 13 11:26:32 ehud dhclient: DHCPREQUEST of 192.168.1.2 on br0 to 192.168.1.1 port 67 (xid=0x131cf724)
Feb 13 11:26:32 ehud dhclient: DHCPACK of 192.168.1.2 from 192.168.1.1
Feb 13 12:19:39 ehud kernel: [   35.518203] audit: type=1400 audit(1487009978.437:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=4362 comm="apparmor_parser"
Feb 13 12:19:39 ehud kernel: [   35.518351] audit: type=1400 audit(1487009978.437:6): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=4359 comm="apparmor_parser"
Feb 13 12:19:41 ehud dhclient: Internet Systems Consortium DHCP Client 4.2.4
Feb 13 12:19:41 ehud dhclient: For info, please visit https://www.isc.org/software/dhcp/
Feb 13 12:19:41 ehud dhclient: DHCPDISCOVER on br0 to 255.255.255.255 port 67 interval 3 (xid=0x674ced6e)
Feb 13 12:19:44 ehud dhclient: DHCPDISCOVER on br0 to 255.255.255.255 port 67 interval 3 (xid=0x674ced6e)
Feb 13 12:19:44 ehud dhclient: DHCPREQUEST of 192.168.1.2 on br0 to 255.255.255.255 port 67 (xid=0x6eed4c67)
Feb 13 12:19:44 ehud dhclient: DHCPOFFER of 192.168.1.2 from 192.168.1.1
Feb 13 12:19:44 ehud dhclient: DHCPACK of 192.168.1.2 from 192.168.1.1
Feb 13 12:19:44 ehud /proc/self/fd/9: DEBUG: ADDRFAM='inet'#012IFACE='br0'#012IFS=' #011#012'#012LOGICAL='br0'#012METHOD='dhcp'#012OPTIND='1'#012PATH='/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin'#012PPID='1'#012PS1='# '#012PS2='> '#012PS4='+ '#012PWD='/'#012TERM='linux'#012UPSTART_EVENTS='local-filesystems net-device-up'#012UPSTART_INSTANCE=''#012UPSTART_JOB='eleventy'
Feb 13 12:19:44 ehud kernel: [   41.751987] audit: type=1400 audit(1487009984.673:11): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=5103 comm="apparmor_parser"
Feb 13 12:19:46 ehud dnsmasq[6014]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth
Feb 13 12:19:46 ehud dnsmasq-dhcp[6014]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Feb 13 12:19:46 ehud dnsmasq-dhcp[6014]: DHCP, sockets bound exclusively to interface virbr0
Feb 13 12:19:46 ehud dnsmasq-dhcp[6014]: read /var/lib/libvirt/dnsmasq/default.hostsfile
|
# ? Feb 13, 2017 22:51 |
|
So your winxp guest nic is vnet0. It and eth0 should be members of bridge br0. Does brctl confirm that? Otherwise, I would do a pcap on eth0 to confirm the guest dhcp requests are leaving the host (filter on bootp). If not, back up and capture on br0 and vnet0 to see where the traffic disappears. If the request is getting out and receiving a reply, check vnet0 to ensure the response is making its way back to the VM.
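As a sketch, the captures described above (interface names taken from the post; needs root) would be along these lines:

```
# Watch for DHCP (bootp) traffic on each hop of the bridge path.
# If requests show up on vnet0 and br0 but never on eth0, the bridge
# is eating them; if they leave eth0 with no reply, the problem is
# outside the host.
tcpdump -n -e -i vnet0 port 67 or port 68
tcpdump -n -e -i br0 port 67 or port 68
tcpdump -n -e -i eth0 port 67 or port 68
```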
|
# ? Feb 13, 2017 23:03 |
|
Also, if some loop has screwed up the bridge MAC filter, "brctl setageing br0 0" will turn it off (not persistent) to confirm. This turns your bridge into a dumb hub. 300 seconds is the standard ageing time, btw.
|
# ? Feb 13, 2017 23:05 |
|
other people posted:So your winxp guest nic is vnet0. It and eth0 should be members of bridge br0. Does brctl confirm that? Ok, this is a bit out of my comfort zone. I guess the best way to do this is using tcpdump to create the pcap file and then look at that with wireshark on a gui-having machine?
|
# ? Feb 14, 2017 18:46 |
|
Thermopyle posted:Ok, this is a bit out of my comfort zone. I guess the best way to do this is using tcpdump to create the pcap file and then look at that with wireshark on a gui-having machine? You could record a binary pcap file (-w filename.pcap), but all you really want to do for now is verify which interfaces see the dhcp request (and possibly the reply). So you can just have it print to the screen: code:
The current path from VM to the physical nic is: vnet0 -> br0 -> eth0 ... and the path back is obviously the reverse. If the DHCP request hits eth0 then it almost certainly made it onto the wire. And so if you don't see a response, then the problem is external to the hypervisor.
|
# ? Feb 14, 2017 23:03 |
|
Can you ping out with a static configuration? I'd guess that IP forwarding is off or an iptables -m physdev --physdev-is-bridged rule got unset.
|
# ? Feb 15, 2017 00:40 |
|
If I want to learn Arch by poking around in versions that are already complete would I be better off with Antergos or Manjaro? e:Going to go with Manjaro. Looks like it is a bit more baby friendly. Robo Reagan fucked around with this message at 07:13 on Feb 15, 2017 |
# ? Feb 15, 2017 06:38 |
|
|
other people posted:You could record a binary pcap file (-w filename.pcap), but all you really want to do for now is verify which interfaces see the dhcp request (and possibly the reply). So you can just have it print to the screen: There's also tshark which is kind of a tcpdump/wireshark hybrid. It's a CLI app but presents the packets in an easier to read format (imo) than raw tcpdump.
|
# ? Feb 15, 2017 20:43 |