telcoM
Mar 21, 2009
Fallen Rib

ExcessBLarg! posted:

OS bugs. Imagine releasing a laptop, especially 15 years ago, that has to work with a site-licensed Ghost image of Windows 2000 but also do XP and maybe even Windows 98. I mean, sure, it's not going to have the latest graphics driver but it still needs to boot in some VESA mode at least so the IT guy can say "oh, it needs a new graphics driver".

I agree that it's a regrettable part of the ACPI spec. But Intel and Microsoft have both put major effort into backwards compatibility in the PC space; it was one of the main reasons for the success of Wintel in the 90s. When the ACPI spec was being drafted, they knew OEMs would need the OS query as an escape hatch.

Newer ACPI versions actually standardized a list of OS-side power management feature names the firmware would query using the _OSI mechanism (instead of, or in addition to, the "OS name" string), and the OS would be required to answer positively if and only if it supported the named feature. You may have seen this in the "dmesg" output:
code:
 ACPI: Added _OSI(Module Device)
 ACPI: Added _OSI(Processor Device)
 ACPI: Added _OSI(3.0 _SCP Extensions)
 ACPI: Added _OSI(Processor Aggregator Device)
With 20/20 hindsight, this makes much more sense than trying to give the firmware knowledge of every possible OS.

Another example of sort-of-similar process:
When you make an SSH connection, the first thing the sshd daemon at the remote end sends is its version number: something like "SSH-2.0-OpenSSH_6.7p1", for example. The SSH client likewise sends a similar description of its own version to the server before proceeding further in the connection negotiation. This has an actual purpose: if the client is newer than the server and "knows", for example, that offering a particular new protocol feature to a particular old server version will cause a connection failure, the client can tailor its protocol negotiation to work around the problem. And if the server is newer than the client, it can happen the other way round too. If either end has no special knowledge regarding the version string returned by the other end, it just follows the protocol standard as usual.
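The identification string has a fixed shape ("SSH-protoversion-softwareversion", per RFC 4253), so either end can pick the other's banner apart trivially. A minimal sketch in shell, using a sample banner rather than a live connection:

```shell
# Parse an SSH identification string of the form
# "SSH-protoversion-softwareversion" (sample value, not a live banner).
banner='SSH-2.0-OpenSSH_6.7p1'

proto=${banner#SSH-}       # strip the leading "SSH-"
proto=${proto%%-*}         # keep everything before the next "-": 2.0
software=${banner#SSH-*-}  # strip through the second "-": OpenSSH_6.7p1

echo "protocol=$proto software=$software"
```

This is the string a peer would then match against its table of known-problematic versions.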

I think OpenSSH 6.x introduced a number of new negotiable protocol options, and it caused a problem with some switches and other devices with SSH management access built into their firmware: the buffer reserved for the SSH protocol options packet was too small for all the new options. A workaround was to use command-line options to disable enough of the new features that the total size of the options packet fit within the buffer of the firmware-based SSH implementation. But that was inconvenient, and required users to keep track of the problematic devices and the options required for each.

Some of those devices got a firmware update, but others were so old that the manufacturer was unlikely to ever develop new firmware for them. No problem: once the OpenSSH developers were made aware of the problem and of the version strings returned by those problematic old firmware implementations, newer versions applied the required workaround automatically, completely transparently to the user.
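Conceptually the client-side quirk table is just pattern-matching on the peer's banner; a toy sketch (the firmware name and flag here are made up for illustration, and OpenSSH's real table lives in C, in compat.c):

```shell
# Match a peer's version banner against known-buggy patterns and pick
# workaround flags accordingly (hypothetical firmware name and flag).
banner='SSH-2.0-buggy_fw_1.2'

case "$banner" in
  *buggy_fw_1.*) compat='trim-proposal' ;;  # shrink the options packet
  *)             compat='' ;;
esac

echo "compat=${compat:-none}"
```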


Odette
Mar 19, 2011

Are there any decent up-to-date guides regarding painless installation of the latest ELK stack (5.x)?

I set it up over the last couple of days, but it started using way too much RAM/CPU and not playing nice behind an nginx reverse proxy. I was only feeding it nginx logs, so I'm not sure why it consistently shat itself.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Odette posted:

Are there any decent up-to-date guides regarding painless installation of the latest ELK stack (5.x)?

As I set it up over the last couple of days, but it started using way too much RAM/CPU and not playing nice behind a nginx reverse proxy. I was only feeding it nginx logs, so I'm not sure why it consistently shat itself.
It's probably better if we try to figure out what the issue is. Are you able to post your configs?

LochNessMonster
Feb 3, 2005

I need about three fitty


I encountered several issues in 5.0 and 5.0.1; 5.0.2 and 5.1 ran fine for me. You could try installing 5.2, which got released this or last week.

Odette
Mar 19, 2011

LochNessMonster posted:

I encountered several issues in 5.0 and 5.0.1. 5.0.2 and 5.1 ran fine for me. You could try installing 5.2 which got released this or last week

I'm on the 5.2 packages of all 3.

Vulture Culture posted:

It's probably better if we try to figure out what the issue is. Are you able to post your configs?

/etc/logstash/conf.d/nginx.conf
code:
input {
  file {
    path => "/var/log/nginx/**/*access.log"
    type => "nginx-access"
  }

  file {
    path => "/var/log/nginx/**/*error.log"
    type => "nginx-error"
  }
}

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
      add_tag => [ 'access' ]
    }
  }

  if [type] == "nginx-error" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
      add_tag => [ 'error' ]
    }
  }

  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}

/etc/elasticsearch/elasticsearch.yml
code:
cluster.name: randomstring
network.host: localhost
http.port: 9200
/etc/nginx/sites-available/api.domain.tld.conf
code:
server {
        listen         80;
        server_name    api.domain.tld;
        return         301 https://$host$request_uri;
}

server {
        listen         443 ssl http2;
        server_name    api.domain.tld;

        include snippets/ssl-domain.tldconf;

        access_log      /var/log/nginx/domain.tld/api.access.log;
        error_log       /var/log/nginx/domain.tld/api.error.log;

        root /var/www/api;

        index index.html index.htm;

        auth_basic "Restricted";
        auth_basic_user_file /path/to/file;

        location ~ (/app/kibana|/bundles|/kibana4|/status|/plugins|/elasticsearch) {
                proxy_pass              http://localhost:5601;
                proxy_set_header        Host $host;
                proxy_set_header        X-Real-IP $remote_addr;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header        X-Forwarded-Proto $scheme;
                proxy_set_header        X-Forwarded-Host $http_host;
        }

        location ~ /.well-known {
                allow all;
                root /var/www/html;
        }
}

Currently getting these errors with Kibana. I solved this before, but the "solution" meant that I wasn't getting anything displayed from Elasticsearch in Kibana. I could connect to Elasticsearch from outside localhost, so I have no loving idea what's wrong.

code:
INFO: 2017-02-07T21:47:12Z
 Adding connection to https://api.domain.tld/es_admin

kibana.bundle.js:13:23378
INFO: 2017-02-07T21:47:12Z
 Adding connection to https://api.domain.tld/elasticsearch

kibana.bundle.js:13:23378
failed to load API 'es_5_0': <html>

<head><title>404 Not Found</title></head>

<body bgcolor="white">

<center><h1>404 Not Found</h1></center>

<hr><center>nginx</center>

</body>

</html>

kibana.bundle.js:133:18278
Error: Not Found
ErrorAbstract@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:13:30086
StatusCodeError@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:645
respond@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:6928
checkRespForFailure@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:6156
AngularConnector.prototype.request/<@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:24563
processQueue@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:23621
scheduleProcessQueue/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:23888
$eval@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:4607
$digest@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:2343
$apply@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:5026
done@https://api.domain.tld/bundles/commons.bundle.js?v=14695:37:25016
completeRequest@https://api.domain.tld/bundles/commons.bundle.js?v=14695:37:28702
createHttpBackend/</xhr.onload@https://api.domain.tld/bundles/commons.bundle.js?v=14695:37:29634
EventHandlerNonNull*createHttpBackend/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:37:29409
sendReq@https://api.domain.tld/bundles/commons.bundle.js?v=14695:37:26505
serverRequest@https://api.domain.tld/bundles/commons.bundle.js?v=14695:37:23333
processQueue@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:23621
scheduleProcessQueue/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:23888
$eval@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:4607
$digest@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:2343
$evalAsync/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:4751
completeOutstandingRequest@https://api.domain.tld/bundles/commons.bundle.js?v=14695:36:4607
Browser/self.defer/timeoutId<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:36:7706
setTimeout handler*Browser/self.defer@https://api.domain.tld/bundles/commons.bundle.js?v=14695:36:7650
$evalAsync@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:4706
$QProvider/this.$get</<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:22786
scheduleProcessQueue@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:23868
then@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:24896
$http@https://api.domain.tld/bundles/commons.bundle.js?v=14695:37:23803
AngularConnector.prototype.request@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:24256
CustomAngularConnector/this.request<@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:19071
wrapper@https://api.domain.tld/bundles/commons.bundle.js?v=14695:2:5892
sendReqWithConnection@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:5345
_.applyArgs@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:26899
wrapper@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:10:5924
Item.prototype.run@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:23199
drainQueue@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:22354
setTimeout handler*runTimeout@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:21339
process.nextTick@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:23145
_.nextTick@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:27388
ConnectionPool.prototype.select@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:12922
Transport.prototype.request@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:8787
exec@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:23234
action@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:14:20083
getIds@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:26:15466
module.exports/<@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:91:18304
invoke@https://api.domain.tld/bundles/commons.bundle.js?v=14695:36:1271
invokeEach/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:64:4949
Promise.try@https://api.domain.tld/bundles/commons.bundle.js?v=14695:75:22351
Promise.map/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:75:21710
Promise.map@https://api.domain.tld/bundles/commons.bundle.js?v=14695:75:21675
invokeEach@https://api.domain.tld/bundles/commons.bundle.js?v=14695:64:4907
value@https://api.domain.tld/bundles/commons.bundle.js?v=14695:64:5422
invoke@https://api.domain.tld/bundles/commons.bundle.js?v=14695:36:1271
__prep__@https://api.domain.tld/bundles/commons.bundle.js?v=14695:63:24377
invoke@https://api.domain.tld/bundles/commons.bundle.js?v=14695:36:1271
commitRoute/</<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:69:3114
forEach@https://api.domain.tld/bundles/commons.bundle.js?v=14695:35:1706
commitRoute/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:69:3014
processQueue@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:23621
scheduleProcessQueue/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:38:23888
$eval@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:4607
$digest@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:2343
$apply@https://api.domain.tld/bundles/commons.bundle.js?v=14695:39:5026
doBootstrap/<@https://api.domain.tld/bundles/commons.bundle.js?v=14695:35:12391
invoke@https://api.domain.tld/bundles/commons.bundle.js?v=14695:36:1271
doBootstrap@https://api.domain.tld/bundles/commons.bundle.js?v=14695:35:12282
bootstrap@https://api.domain.tld/bundles/commons.bundle.js?v=14695:35:12764
chrome.bootstrap@https://api.domain.tld/bundles/commons.bundle.js?v=14695:1:2765
@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:489
__webpack_require__@https://api.domain.tld/bundles/commons.bundle.js?v=14695:1:211
window.webpackJsonp@https://api.domain.tld/bundles/commons.bundle.js?v=14695:1:862
@https://api.domain.tld/bundles/kibana.bundle.js?v=14695:1:1
commons.bundle.js:38:11454

LochNessMonster
Feb 3, 2005

I need about three fitty


Odette posted:


/etc/elasticsearch/elasticsearch.yml
code:

cluster.name: randomstring
network.host: localhost
http.port: 9200

Can you try setting network.host to 0.0.0.0 and restart Elasticsearch?

Odette
Mar 19, 2011

LochNessMonster posted:

Can you try setting network.host to 0.0.0.0 and restart Elasticsearch?

No change.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
In production scenarios, what are the pros and cons of kdump to local disk versus a network endpoint like NFS?

xzzy
Mar 5, 2009

Well, if you send to NFS you're dependent on the network stack still functioning. Then there's the speed thing... you gotta sit around and wait for an image the size of system memory to be transferred over a wire.

I can't think of any normal situations where dumping to a network share will help forensics where a local dump would fail.

other people
Jun 27, 2004
Associate Christ
Well, most people are discarding userspace pages so the dump should be quite a bit smaller.

If you have lots of systems to manage then having a central storage location for dumps can have some advantages.

Also, if the system doesn't have much local storage to begin with....

RFC2324
Jun 7, 2012

http 418

other people posted:

Well, most people are discarding userspace pages so the dump should be quite a bit smaller.

If you have lots of systems to manage then having a central storage location for dumps can have some advantages.

Also, if the system doesn't have much local storage to begin with....

Kdump filled my drives and now my system won't boot!

evol262
Nov 30, 2010
#!/usr/bin/perl
Kdump uses swap by default.

Also, network kdumps kexec a new kernel on trap, so you shouldn't need to rely on the crashed kernel's networking still being up, since the kexec'd kernel re-inits it.

I'd probably use SSH in general over NFS. But I suppose the only advantage to kdump is reporting kernel bugs. I've never seen a real world core that isn't a hardware failure or an intentional core to capture the state of some driver...
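For context, on RHEL-style systems the choice between those targets is a line or two in /etc/kdump.conf; a sketch of the variants being compared (hostnames and paths are placeholders, not from this thread):

```
# Default: write the dump under /var/crash on the local filesystem
path /var/crash

# Or send it over the network instead -- uncomment one:
#nfs nfs-server.example.com:/export/crash
#ssh kdump@collector.example.com

# Filter the dump: -d 31 drops zero, cache, private, free, and
# userspace pages, which keeps network dumps reasonably small
core_collector makedumpfile -l --message-level 1 -d 31
```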

xzzy
Mar 5, 2009

I've only ever used kdump to placate fussy users into thinking we're working real hard to figure out why their lovely code keeps crashing the server. Which usually ends up being no more than telling them what function it was in from the stack trace and moving on with my day.

I ain't a kernel developer and never will be so if they want more than that, they can crack the dump open. :v:

Viktor
Nov 12, 2005

evol262 posted:

But I suppose the only advantage to kdump is reporting kernel bugs. I've never seen a real world core that isn't a hardware failure or an intentional core to capture the state of some driver...

You're pretty much right. Every time we have used a dump, it's just been helpful to confirm or point to underlying issues, usually in our case hypervisor-related drivers.

https://access.redhat.com/solutions/2056743 Was a fun issue!

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Ubuntu/apt-get question: is there a way to ban a package (and any otherwise-unrequired dependencies) from a meta-package without forcing the whole meta-package to be uninstalled? I'll be goddamned if I'll have HP's lovely print drivers installed if there's not a really good reason for it. I don't even own an HP printer, it just comes along with lubuntu-desktop.

I made the mistake of buying an HP printer to keep in my cube, when I saw a small-office color laser unit at a university surplus sale for $10. I regretted it until the day I graduated. I actually legitimately don't even want it on my PC anymore.

Is there any reason that uninstalling a meta-package once it's installed is a problem anyway? (maybe dist-upgrades?) I've run into this situation before when I remove Firefox and install Chromium.

Paul MaudDib fucked around with this message at 07:02 on Feb 8, 2017

theperminator
Sep 16, 2009

by Smythe
Fun Shoe

Vulture Culture posted:

In production scenarios, what are the pros and cons of kdump to local disk versus a network endpoint like NFS?

Is there possibly any security considerations to think of?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I'd like to use a terrible low-powered laptop as a remote programming terminal. What I have is basically an x86 Raspberry Pi v1 Model B with a screen (P3-based Celeron 650 with 256 MB of RAM). It's gonna strugglebus on anything serious, so I guess just pretend it's a Raspberry Pi client, since building would be difficult on it anyway.

Ideally I'd like to work in Eclipse for C++ or Java, and possibly IntelliJ or NetBeans for smaller programs.

Are there any optimizations that would allow me to push the heavy lifting off to a build server? So I could work on a filesystem that's cached locally, but with write-through to the remote filesystem (SSH, NFS, etc.)? Ideally I could also tell it to compile and have my IDE hook a build server or something like that, with the output files getting pushed to my local cache. And since we're wishing for a pony, I could also transparently hook a CUDA or Java instance that was running a debug server.

Obviously I can do everything on a build server, but I'd like to have nice integration with my IDE and stuff.

Is what I'm describing remotely similar to anything that exists? I used to work on an OpenMPI cluster and it had some of this kind of functionality in terms of being able to push stuff between front-end and processing nodes. Could I pretend that (multiple) local machines are actually all front-ends on an OpenMPI instance that I serve?

Or is there anything I could glue together with a FUSE filesystem to get closer?

My advantages over a Raspberry Pi are that I do have a real system architecture, I have an mSATA SSD in an IDE adapter, and my network's not happening over USB either. Swap or fast local disk is not a problem; I have 8 GB in there right now. It's not going to be as fast as IDE, but it'll be a lot better than over USB.

Paul MaudDib fucked around with this message at 08:35 on Feb 8, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Also, I've never done Emacs before but I guess I'm willing to try anything once :gay:

Paul MaudDib fucked around with this message at 09:19 on Feb 8, 2017

other people
Jun 27, 2004
Associate Christ

evol262 posted:

Kdump uses swap by default.

Also, network kdumps kexec a new kernel on trap, so you shouldn't need to rely on networking being up (on the kernel), since it'll re-init.

I'd probably use SSH in general over NFS. But I suppose the only advantage to kdump is reporting kernel bugs. I've never seen a real world core that isn't a hardware failure or an intentional core to capture the state of some driver...

/var/crash is the default location, isn't it?

I work in kernelspace, so I guess I am biased towards the utility of vmcores. :ohdear:

evol262
Nov 30, 2010
#!/usr/bin/perl
Yeah, /var/crash is the location, but it dumps to swap on a core, then swap is scanned at the next boot and copied to /var/crash (if there's space), IIRC.

ToxicFrog
Apr 26, 2008


Paul MaudDib posted:

Are there any optimizations that would allow me to push the heavy lifting off to a build sever? So I can work on a filesystem that was cached locally, but with write-through to the remote filesystem (SSH, NFS, etc) ? Ideally also I can tell it to compile, and then have my IDE hook a build server or something like that, with the output files getting pushed to my local cache. And since we're wishing for a pony, I could also transparently hook a CUDA or Java instance that was running a debug server.

Anything you can ssh into you can mount as a filesystem using sshfs or do bulk file transfers to/from using rsync, so that may be the best place to start; keep your code on the build server, mount it over sshfs, edit it locally with whatever editor or IDE you prefer. Configure your IDE to do builds by sshing into the remote server and running the build command, or just keep a shell open and do it yourself.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Kdump uses swap by default.

Also, network kdumps kexec a new kernel on trap, so you shouldn't need to rely on networking being up (on the kernel), since it'll re-init.

I'd probably use SSH in general over NFS. But I suppose the only advantage to kdump is reporting kernel bugs. I've never seen a real world core that isn't a hardware failure or an intentional core to capture the state of some driver...
Come on man, I know what kdump is for. :downs:

xzzy
Mar 5, 2009

I tried setting up a raspberry as a thin client a year or so ago and it was pretty miserable. Working in a terminal was sluggish, but technically doable. Web browsing was worthless and nothing could be done about it.

The only way I could comfortably do work on it was run a vnc session on a real computer and display it on the raspberry.

Docjowles
Apr 9, 2009

Odette posted:

I'm on the 5.2 packages of all 3.
Nothing in what you've posted looks obviously insane.

I only run Kibana 4.6 so shooting in the dark a bit here. But maybe try anchoring the locations you're proxy_passing with a ^ at the beginning? In case something is overlapping and messing it up. Also read through kibana.yml in the config directory and see if something jumps out.

What happens when you do some curl tests from another machine, for URLs like

https://api.domain.tld/elasticsearch
http://api.domain.tld:9200/_cluster/health?pretty

Anything interesting in the Elasticsearch logs? Maybe the service is not coming up cleanly.

evol262
Nov 30, 2010
#!/usr/bin/perl

Vulture Culture posted:

Come on man, I know what kdump is for. :downs:

I was clarifying for others. The "ssh" recommendation was for you. Sorry if it wasn't clear.

Genuinely curious how often you see cores, though... kdump is always one of those things I've set up in environments and never actually needed.

Odette
Mar 19, 2011

Docjowles posted:

Nothing in what you've posted looks obviously insane.

I only run Kibana 4.6 so shooting in the dark a bit here. But maybe try anchoring the locations you're proxy_passing with a ^ at the beginning? In case something is overlapping and messing it up. Also read through kibana.yml in the config directory and see if something jumps out.

What happens when you do some curl tests from another machine, for url's like

https://api.domain.tld/elasticsearch
http://api.domain.tld:9200/_cluster/health?pretty

Anything interesting in the Elasticsearch logs? Maybe the service is not coming up cleanly.

Solved the issue by reverting to default configuration & incrementally changing things.

Was a combination of:

logrotate setting nginx logs as the wrong group & permissions
nginx kibana config being overly zealous

Now I just have to add more logs (php/mail/etc). :v:

dodecahardon
Oct 20, 2008
What is a good way to organize an NFS share that contains software installations used by multiple Unix platforms? Is it necessary to silo binaries and libraries for every Linux distribution or should it be enough to just organize them by architecture?

I'd like to avoid backing myself into a corner that requires reorganizing the directory structure in the future.

LochNessMonster
Feb 3, 2005

I need about three fitty


Odette posted:

Solved the issue by reverting to default configuration & incrementally changing things.

Was a combination of:

logrotate setting nginx logs as the wrong group & permissions
nginx kibana config being overly zealous

Now I just have to add more logs (php/mail/etc). :v:

Thanks for the update. I couldn't find anything strange about the elastic/kibana setup, like docjowles already said. I'm more familiar with logstash and the dashboarding side of Kibana than the infra side of it. Was still curious what the issue was.

What exactly went wrong with the nginx setup?

xzzy
Mar 5, 2009

Charles Mansion posted:

What is a good way to organize an NFS share that contains software installations used by multiple Unix platforms? Is it necessary to silo binaries and libraries for every Linux distribution or should it be enough to just organize them by architecture?

I'd like to avoid backing myself into a corner that requires reorganizing the directory structure in the future.

Not that I recommend going down this dark road, but I've seen it done where they organize directories by OS and kernel version. And when we were doing the 32->64 bit transition, architecture as well. So there would be directories like

/mnt/software/IRIX_6_5
/mnt/software/SunOS_5_10
/mnt/software/Linux_2_2
/mnt/software/Linux_2_4
/mnt/software/Linux_2_4_2_32
/mnt/software/Linux_2_4_2_64

And so on. Then there were scripts that would read the output of uname, build a path, and tweak $PATH to add the appropriate directory. As you can see there was a hierarchy in place, so if some oddball release needed a specific version it could get one, while a less restrictive system could use something more globally usable.

As for whether it's necessary anymore, it depends. If the package is available in the distribution's repositories, don't bother. If you have users building code that targets specific versions of libraries, you might need it.
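The uname-driven lookup described above can be sketched roughly like this (the function name and the OS_kernel_arch directory naming are assumptions, based on the example paths earlier in the post):

```shell
# Sketch of the uname-based directory selection described above.
# select_software_path ROOT prints the best-matching software dir under ROOT,
# preferring the most specific (OS_kernel_arch) and falling back to generic.
select_software_path() {
    root=$1
    os=$(uname -s)                             # e.g. Linux
    rel=$(uname -r | cut -d. -f1-2 | tr . _)   # kernel major.minor, dots -> underscores
    arch=$(uname -m)                           # e.g. x86_64

    for dir in "$root/${os}_${rel}_${arch}" \
               "$root/${os}_${rel}" \
               "$root/${os}"; do
        if [ -d "$dir/bin" ]; then
            printf '%s\n' "$dir"
            return 0
        fi
    done
    return 1
}

# Typical use: prepend the match to PATH.
# dir=$(select_software_path /mnt/software) && PATH="$dir/bin:$PATH"
```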

Odette
Mar 19, 2011

LochNessMonster posted:

Thanks for the update. I couldn't find anything strange about the elastic/kibana setup, like docjowles already said. I'm more familiar with logstash and the dashboarding side of Kibana than the infra side of it, so I was still curious what the issue was.

What exactly went wrong with the nginx setup?

Something to do with this particular line and how kibana expects particular URLs to work.

code:
        location ~ (/app/kibana|/bundles|/kibana4|/status|/plugins|/elasticsearch) {
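(For anyone hitting the same thing: since that regex is unanchored, it also matches those strings anywhere in the URL. An anchored variant like the one below is less greedy. Illustrative only; the exact path list depends on your Kibana version and base path.)

```
location ~ ^/(app/kibana|bundles|kibana4|status|plugins|elasticsearch)(/|$) {
    # ... same proxy_pass settings as before ...
}
```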

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

God drat, I'm having a heck of a time here...

I've got a rarely used KVM VM on a server. Today someone went to use it and couldn't access it.

Turns out that for some reason VMs on this server are no longer getting DHCP from my router. This did work fine at some point but because it's so rarely used I don't know what happened. The particular guest OS is XP, but I've tried an ubuntu guest with no success.

My ifconfig: http://termbin.com/j1y2

/etc/network/interfaces: http://termbin.com/sdsh

The XML for the VM: http://termbin.com/2g13

Anyone have any ideas?

edit: also, I just found this termbin.com...pretty neato!

Thermopyle fucked around with this message at 22:46 on Feb 13, 2017

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Thermopyle posted:

My /etc/network/interfaces: http://termbin.com/j1y2

That's your ifconfig output or something

What does fgrep -i 'dhcp' syslog look like?

edit: If two of your VM's are not getting DHCP now I'd bet it's a setting on your router or more likely in KVM's networking, bridge perhaps

Bob Morales fucked around with this message at 22:47 on Feb 13, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Bob Morales posted:

That's your ifconfig output or something

What does fgrep -i 'dhcp' syslog look like?

edit: If two of your VM's are not getting DHCP now I'd bet it's a setting on your router or more likely in KVM's networking

Yeah, I fixed it sorry.

It might be a router setting, but I don't have a problem with any actual machines on my network.

pre:
> fgrep -i 'dhcp' /var/log/syslog
Feb 13 11:26:32 ehud dhclient: DHCPREQUEST of 192.168.1.2 on br0 to 192.168.1.1 port 67 (xid=0x131cf724)
Feb 13 11:26:32 ehud dhclient: DHCPACK of 192.168.1.2 from 192.168.1.1
Feb 13 12:19:39 ehud kernel: [   35.518203] audit: type=1400 audit(1487009978.437:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=4362 comm="apparmor_parser"
Feb 13 12:19:39 ehud kernel: [   35.518351] audit: type=1400 audit(1487009978.437:6): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=4359 comm="apparmor_parser"
Feb 13 12:19:41 ehud dhclient: Internet Systems Consortium DHCP Client 4.2.4
Feb 13 12:19:41 ehud dhclient: For info, please visit https://www.isc.org/software/dhcp/
Feb 13 12:19:41 ehud dhclient: DHCPDISCOVER on br0 to 255.255.255.255 port 67 interval 3 (xid=0x674ced6e)
Feb 13 12:19:44 ehud dhclient: DHCPDISCOVER on br0 to 255.255.255.255 port 67 interval 3 (xid=0x674ced6e)
Feb 13 12:19:44 ehud dhclient: DHCPREQUEST of 192.168.1.2 on br0 to 255.255.255.255 port 67 (xid=0x6eed4c67)
Feb 13 12:19:44 ehud dhclient: DHCPOFFER of 192.168.1.2 from 192.168.1.1
Feb 13 12:19:44 ehud dhclient: DHCPACK of 192.168.1.2 from 192.168.1.1
Feb 13 12:19:44 ehud /proc/self/fd/9: DEBUG: ADDRFAM='inet'#012IFACE='br0'#012IFS=' #011#012'#012LOGICAL='br0'#012METHOD='dhcp'#012OPTIND='1'#012PATH='/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin'#012PPID='1'#012PS1='# '#012PS2='> '#012PS4='+ '#012PWD='/'#012TERM='linux'#012UPSTART_EVENTS='local-filesystems net-device-up'#012UPSTART_INSTANCE=''#012UPSTART_JOB='eleventy'
Feb 13 12:19:44 ehud kernel: [   41.751987] audit: type=1400 audit(1487009984.673:11): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=5103 comm="apparmor_parser"
Feb 13 12:19:46 ehud dnsmasq[6014]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth
Feb 13 12:19:46 ehud dnsmasq-dhcp[6014]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Feb 13 12:19:46 ehud dnsmasq-dhcp[6014]: DHCP, sockets bound exclusively to interface virbr0
Feb 13 12:19:46 ehud dnsmasq-dhcp[6014]: read /var/lib/libvirt/dnsmasq/default.hostsfile

other people
Jun 27, 2004
Associate Christ
So your winxp guest nic is vnet0. It and eth0 should be members of bridge br0. Does brctl confirm that?

Otherwise, I would do a pcap on eth0 to confirm the guest dhcp requests are leaving the host (filter on bootp). If not, back up and capture on br0 and vnet0 to see where the traffic disappears.

If the request is getting out and receiving a reply, check vnet0 to ensure the response is making its way back to the VM.
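If brctl isn't handy, bridge membership can also be checked straight from sysfs; a minimal sketch (the function name is mine, and the SYSFS override exists only so the check is illustratable outside a real host):

```shell
# bridge_has_port BRIDGE PORT -> exit 0 if PORT is enslaved to BRIDGE.
# Reads the kernel's bridge port list from sysfs (/sys/class/net/<br>/brif/).
bridge_has_port() {
    [ -d "${SYSFS:-/sys/class/net}/$1/brif/$2" ]
}

# Expected on the setup in this thread:
#   bridge_has_port br0 eth0 && bridge_has_port br0 vnet0 && echo "br0 ok"
```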

other people
Jun 27, 2004
Associate Christ
Also, if some loop has screwed up the bridge's MAC forwarding table, "brctl setageing br0 0" will turn learning off (not persistent) to confirm. This turns your bridge into a dumb hub. 300 seconds is the default ageing time btw.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

other people posted:

So your winxp guest nic is vnet0. It and eth0 should be members of bridge br0. Does brctl confirm that?

Otherwise, I would do a pcap on eth0 to confirm the guest dhcp requests are leaving the host (filter on bootp). If not, back up and capture on br0 and vnet0 to see where the traffic disappears.

If the request is getting out and receiving a reply, check vnet0 to ensure the response is making its way back to the VM.

Ok, this is a bit out of my comfort zone. I guess the best way to do this is using tcpdump to create the pcap file and then look at that with wireshark on a gui-having machine?

other people
Jun 27, 2004
Associate Christ

Thermopyle posted:

Ok, this is a bit out of my comfort zone. I guess the best way to do this is using tcpdump to create the pcap file and then look at that with wireshark on a gui-having machine?

You could record a binary pcap file (-w filename.pcap), but all you really want to do for now is verify which interfaces see the dhcp request (and possibly the reply). So you can just have it print to the screen:

code:

# tcpdump -nn -e -i <interface> port 67 or port 68

(replace <interface> as needed)

The current path from VM to the physical nic is:

vnet0 -> br0 -> eth0

... and the path back is obviously the reverse.

If the dhcp request hits eth0 then it almost certainly made it onto the wire. And so if you don't see a response then the problem is external to the hypervisor.

evol262
Nov 30, 2010
#!/usr/bin/perl
Can you ping out with a static configuration? I'd guess that ip forwarding is off or an iptables -m physdev --physdev-is-bridged rule got unset

Robo Reagan
Feb 12, 2012

by Fluffdaddy
If I want to learn Arch by poking around in versions that are already complete would I be better off with Antergos or Manjaro?

e:Going to go with Manjaro. Looks like it is a bit more baby friendly.

Robo Reagan fucked around with this message at 07:13 on Feb 15, 2017


Docjowles
Apr 9, 2009

other people posted:

You could record a binary pcap file (-w filename.pcap), but all you really want to do for now is verify which interfaces see the dhcp request (and possibly the reply). So you can just have it print to the screen:

code:
# tcpdump -nn -e -i <interface> port 67 or port 68

(replace <interface> as needed)

There's also tshark which is kind of a tcpdump/wireshark hybrid. It's a CLI app but presents the packets in an easier to read format (imo) than raw tcpdump.
