RFC2324
Jun 7, 2012

http 418

Tab8715 posted:

That's interesting that you're able to have a Linux OS join an AD domain, but once it's joined... what exactly do you gain out of this?

How does user auth work? Permissions? Are you able to do any GPO-like things?

Kerberos 5 auth against AD for users, which also makes mapped Samba drives easier. Probably other things, but that's all I have worked with.


evol262
Nov 30, 2010
#!/usr/bin/perl

Tab8715 posted:

That's interesting that you're able to have a Linux OS join an AD domain, but once it's joined... what exactly do you gain out of this?

How does user auth work? Permissions? Are you able to do any GPO-like things?

AD is LDAP + Kerberos + DNS + DHCP. Joining a Linux box gets you single sign-on (through Kerberos), AD user mapping (through LDAP), Kerberos principals for the computer object, etc. AD users get the same UID and GID on every Linux system (because, again, AD is backed by LDAP).

GPOs (policies in general) are LDAP objects which are queried by Windows clients. Linux doesn't do anything with these, but you can still register a Linux system, say "OK, this is now a managed computer," and specify all the normal AD stuff through AD, like which groups can log on.
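
If you want to see what the join side actually looks like, here's a minimal sketch using realmd/SSSD (the domain, admin account, and group are placeholders; package names vary by distro):
code:
realm discover example.com                        # sanity-check that the domain is visible
sudo realm join --user=Administrator example.com  # creates the computer object and keytab
id someuser@example.com                           # AD users now resolve through SSSD
sudo realm permit -g linux-admins@example.com     # limit which AD groups may log in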

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'm messing around with Docker for the first time and using someone else's Dockerfile on my home server.

So, I've got ZoneMinder running in a container on my server...how can I access the exposed ports from other computers on my LAN? To be clear, when I'm SSH'ed into my server, I can ssh -p 32769 zoneminder@localhost and access the container fine. If I try to do it from my desktop, that doesn't work.

evol262
Nov 30, 2010
#!/usr/bin/perl
Can you show us docker ps and iptables? It may be bound to localhost (I haven't looked at the dockerfile for EXPOSE).

If you'll be doing this a lot, I strongly suggest flannel, calico, kubernetes, or a combination thereof (kubernetes uses flannel by default, but I think calico is somewhat nicer)

fatherdog
Feb 16, 2005
Running into a weird problem with X forwarding on ssh.

Users foo and bar are both members of the group foobar:
code:
ssh -X foo@server
chmod 640 .Xauthority
su - bar
xauth merge /home/foo/.Xauthority
xterm

On RHEL5 and RHEL6, this Just Works. On OEL7, it fails complaining about the display. After some poking, I realized that doing the xauth merge on RHEL5 and 6 also sets the $DISPLAY variable properly for bar, but on RHEL7 the $DISPLAY variable remains blank. After manually setting DISPLAY to the correct value, it works fine, so xauth is correctly getting the encryption information and such, just not setting the DISPLAY. I can't figure out why the behavior differs.
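
If anyone else hits this, the workaround is just to carry DISPLAY across the su by hand (a sketch; the localhost:10.0 value is whatever your forwarded session actually got):
code:
echo $DISPLAY                      # as foo: e.g. localhost:10.0
su - bar
xauth merge /home/foo/.Xauthority
export DISPLAY=localhost:10.0      # su - resets the environment, so set it manually
xterm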

other people
Jun 27, 2004
Associate Christ

pseudorandom name posted:

Try connecting it using DisplayPort, it (or your GPU) might not support full-res 60Hz over HDMI.

Using a DisplayPort cable gets me the 60Hz :o and now the mouse movement is nice and smooth. Woo hoo.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Thermopyle posted:

So, I've got ZoneMinder running in a container on my server...how can I access the exposed ports from other computers on my LAN? To be clear, when I'm SSH'ed into my server, I can ssh -p 32769 zoneminder@localhost and access the container fine. If I try to do it from my desktop, that doesn't work.

Be sure to do the docker run with the -p option to expose the ports onto your external interface, e.g. -p 12345:80 will bind external port 12345 to the container's internal port of 80. If that doesn't work, it's probably a firewall issue.

Alternatively, try --net=host which will bind your container's ports directly to your host's.
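
As a quick sketch (the image name and host port here are placeholders, not ZoneMinder's real ones):
code:
docker run -d -p 12345:80 someuser/zoneminder   # LAN clients hit host:12345, the container sees :80
docker run -d --net=host someuser/zoneminder    # container shares the host's network stack outright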

HPL
Aug 28, 2002

Worst case scenario.
I love Linux as much as the next guy, but Jesus Christ, just because I'm from Canada doesn't mean I speak French or have a French keyboard.

RFC2324
Jun 7, 2012

http 418

HPL posted:

I love Linux as much as the next guy, but Jesus Christ, just because I'm from Canada doesn't mean I speak French or have a French keyboard.

Pretend you are from America like all right thinking people.

HPL
Aug 28, 2002

Worst case scenario.
And what the heck is a Cherokee keyboard supposed to look like?

Mr Shiny Pants
Nov 12, 2012

HPL posted:

And what the heck is a Cherokee keyboard supposed to look like?

It's a bow and arrow.......

kujeger
Feb 19, 2004

OH YES HA HA

HPL posted:

And what the heck is a Cherokee keyboard supposed to look like?

https://en.wikipedia.org/wiki/Cherokee_syllabary :)

Hyvok
Mar 30, 2010
Can someone explain why filling my SSD with zeros is so slow? I booted from an Arch Linux USB stick, and copying my SSD to a new one progressed at ~225 MB/s (I only have SATA II, so that's why it's so "slow"). Filling the old drive with zeros starts at over 1 GB/s for a couple of seconds, then slows down exponentially: within a few seconds it's around 150 MB/s, and within a few minutes under 60 MB/s... In both cases I'm using dd with the same block size. For filling with zeros I'm using "dd if=/dev/zero of=/dev/sdX bs=1M count=66666666 status=progress". Can't really understand how this is possible. I've tried playing with the block size and such, but I can mainly make it worse with a small block size; other than that, no luck. Copying from /dev/zero to /dev/null seems to be plenty fast (depends on block size and count, but at least ~700 MB/s).

Hyvok fucked around with this message at 17:43 on Feb 6, 2016

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Why are you doing that?

Anyway, the result you're seeing is the garbage collection and wear leveling having a shitfit because you're flooding the drive with data.

pseudorandom name
May 6, 2007

Because that's how SSDs work.

Note that writing zeroes to your old disk won't actually erase everything because the actual capacity of the SSD is larger than the advertised capacity.

You need to issue an ATA Secure Erase command to the drive, which as a bonus will be much faster than attempting to zero it.
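
For reference, the usual dance with hdparm looks something like this (a sketch; the drive must report "not frozen", and a suspend/resume cycle will sometimes unfreeze it):
code:
hdparm -I /dev/sdX | grep -A8 Security            # confirm "supported" and "not frozen"
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
On drives with TRIM support, blkdiscard /dev/sdX is another fast way to dump the contents.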

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

evol262 posted:

Can you show us docker ps and iptables? It may be bound to localhost (I haven't looked at the dockerfile for EXPOSE).

If you'll be doing this a lot, I strongly suggest flannel, calico, kubernetes, or a combination thereof (kubernetes uses flannel by default, but I think calico is somewhat nicer)

code tag breaks tables posted:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a57251f81d8 kylejohnson/release-1.27 "/bin/sh -c \"/tmp/sta" 17 hours ago Up 17 hours 0.0.0.0:32769->22/tcp, 0.0.0.0:32768->80/tcp sad_thompson

code tag breaks tables posted:

~/tmp>> sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain DOCKER (1 references)
target prot opt source destination

Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere



minato posted:

Be sure to do the docker run with the -p option to expose the ports onto your external interface, e.g. -p 12345:80 will bind external port 12345 to the container's internal port of 80. If that doesn't work, it's probably a firewall issue.

Alternatively, try --net=host which will bind your container's ports directly to your host's.

I have yet to try either of these, and I will, but I thought using the -P option negates the need for using something like -p 12345:80.

Admittedly, I'm just this side of completely ignorant about this right now. I'll be doing a deep dive into wrapping my head around docker later this month, I was just hoping to get this container running in the meantime.

evol262
Nov 30, 2010
#!/usr/bin/perl
It looks like it's mapped fine, but I don't see any DNAT rules for Docker, which it still needs IIRC (it's been a long time since I've worked with plain docker). Localhost won't hit iptables at all.

Try:
code:
iptables -t nat -A PREROUTING ! -i docker0 -p tcp -m tcp --dport 32769 -j DNAT --to-destination $docker_ip:22
(DNAT is only valid in the nat table, which is why this goes in PREROUTING rather than one of the filter chains above.)

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Thermopyle posted:

I have yet to try either of these, and I will, but I thought using the -P option negates the need for using something like -p 12345:80.

Oh, it's 100% essential; the ports are not exposed at all otherwise. -P will expose the ones it knows about (80 and 22, as specified in your Dockerfile) on random ports that it will display during the run, -p <outer port>:<inner port> will expose one on a specific port, and --net=host just bypasses all this by not giving the container its own network namespace.
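
Concretely (hypothetical image and ports):
code:
docker run -d -P myimage           # EXPOSEd ports land on random high host ports
docker port mycontainer            # shows what -P picked, e.g. 80/tcp -> 0.0.0.0:32768
docker run -d -p 8080:80 myimage   # pins container port 80 to host port 8080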

ToxicFrog
Apr 26, 2008


Does anyone have recommendations for simple monitoring software? Specifically, I'm looking for something with a configurable web-based dashboard that can collect metrics from a home network of 4 Linux machines -- ideally via SSH, without needing to install a monitoring daemon or similar on each machine. If it can send me alerts via IRC or gtalk when things are on fire, that would be even better.

telcoM
Mar 21, 2009
Fallen Rib

Thermopyle posted:

~/tmp>> sudo iptables -L

Without the -v option, iptables -L won't tell you if a rule is tied to a particular network interface, which might be rather important.
For example:
code:
$ sudo iptables -L
Chain INPUT (policy DROP)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere          
[...]
If the first rule accepts everything, there is no point in having any other rules at all. So what is going on?
code:
$ sudo iptables -L -v
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination 
65158 5392K ACCEPT     all  --  lo     any     anywhere             anywhere  
[...]
Oh, the first rule accepts everything from the loopback interface only. That makes way more sense.

Also, without the -t option, it defaults to displaying the filter table only.
To get a full understanding of your iptables rules, you'll usually need two commands:
code:
$ sudo iptables -L -v         # -t filter = default table 

$ sudo iptables -L -v -t nat
There are three other tables: mangle, raw and security. However, the filter and nat tables are the most commonly used ones.

Varkk
Apr 17, 2004

ToxicFrog posted:

Does anyone have recommendations for simple monitoring software? Specifically, I'm looking for something with a configurable web-based dashboard that can collect metrics from a home network of 4 Linux machines -- ideally via SSH, without needing to install a monitoring daemon or similar on each machine. If it can send me alerts via IRC or gtalk when things are on fire, that would be even better.

Complete overkill for just 4 machines, but Nagios does this.

RFC2324
Jun 7, 2012

http 418

Varkk posted:

Complete overkill for just 4 machines, but Nagios does this.

And is useful to know for employment purposes.

HPL
Aug 28, 2002

Worst case scenario.
Does Webmin do monitoring?

Hyvok
Mar 30, 2010

Combat Pretzel posted:

Why are you doing that?

Anyway, the result you're seeing is the garbage collection and wear leveling having a shitfit because you're flooding the drive with data.

How come copying from drive to drive (/dev/sda to /dev/sdb) doesn't cause the same thing?

Anyway, I just figured I'd clean the drive since I'm giving it away, and I assumed it would take almost no time since SSDs are so fast... I'd never heard of secure erase; I'll look into it for next time.

ToxicFrog
Apr 26, 2008


Varkk posted:

Complete overkill for just 4 machines, but Nagios does this.

I've heard of Nagios, but was hoping there was something a bit less huge and terrifying I could look at first. :)

Also, wow, last night's post was kind of vague and useless. I shouldn't post when jetlagged. Here's something a bit more concrete:
  • 7-computer network, but two are Windows machines used only for gaming and one is an HTPC, so I only really care about monitoring four of them, all running OpenSUSE Tumbleweed.
  • I want to monitor a different subset of things on each machine.
  • Some of the things I want to monitor are pretty common and I'd expect them to be supported out of the box (disk fullness for specific filesystems, number of pending updates, mail queue length, whether certain daemons are running).
  • Others I do not expect to be supported out of the box (per-tag age in bup, load on the DoomRL server), so being able to easily add new monitoring inputs (e.g. by writing a small amount of bash or python, as opposed to compiling and installing a custom version of the monitoring software) is important.
  • I want pretty graphs I can look at in my browser and/or the terminal. Ideally I want the httpd built in. I might be convinced to run nginx or lighttpd, but anything heavy like Apache is a nonstarter.
  • I need some kind of configurable-per-{machine,metric} alerting when poo poo catches fire.

Horse Clocks
Dec 14, 2004


Hyvok posted:

How come copying from drive to drive (/dev/sda to /dev/sdb) doesn't cause the same thing?

Write buffering.

evol262
Nov 30, 2010
#!/usr/bin/perl

ToxicFrog posted:

I've heard of Nagios, but was hoping there was something a bit less huge and terrifying I could look at first. :)

Also, wow, last night's post was kind of vague and useless. I shouldn't post when jetlagged. Here's something a bit more concrete:
  • 7-computer network, but two are Windows machines used only for gaming and one is an HTPC, so I only really care about monitoring four of them, all running OpenSUSE Tumbleweed.
  • I want to monitor a different subset of things on each machine.
  • Some of the things I want to monitor are pretty common and I'd expect them to be supported out of the box (disk fullness for specific filesystems, number of pending updates, mail queue length, whether certain daemons are running).
  • Others I do not expect to be supported out of the box (per-tag age in bup, load on the DoomRL server), so being able to easily add new monitoring inputs (e.g. by writing a small amount of bash or python, as opposed to compiling and installing a custom version of the monitoring software) is important.
  • I want pretty graphs I can look at in my browser and/or the terminal. Ideally I want the httpd built in. I might be convinced to run nginx or lighttpd, but anything heavy like Apache is a nonstarter.
  • I need some kind of configurable-per-{machine,metric} alerting when poo poo catches fire.

You've just described a full-blown monitoring system. Almost anything which is this modular and extensible is going to be huge and terrifying.

Cockpit gets you most of what you want (pretty web graphs, requires basically no setup, multiple server support, adding new clients is easy, polling over SSH/websockets, built-in http server -- though I strongly disagree with calling Apache "heavy", even if the configuration can be), and it's supported on SuSE.

Others (alerting, custom metrics) call for Zabbix or Nagios, though any kind of pretty interface to Nagios means you should just learn ELK and dump Nagios alerts to Grafana/Graphite.

jre
Sep 2, 2011

To the cloud ?



ToxicFrog posted:

I've heard of Nagios, but was hoping there was something a bit less huge and terrifying I could look at first. :)

Also, wow, last night's post was kind of vague and useless. I shouldn't post when jetlagged. Here's something a bit more concrete:
  • 7-computer network, but two are Windows machines used only for gaming and one is an HTPC, so I only really care about monitoring four of them, all running OpenSUSE Tumbleweed.
  • I want to monitor a different subset of things on each machine.
  • Some of the things I want to monitor are pretty common and I'd expect them to be supported out of the box (disk fullness for specific filesystems, number of pending updates, mail queue length, whether certain daemons are running).
  • Others I do not expect to be supported out of the box (per-tag age in bup, load on the DoomRL server), so being able to easily add new monitoring inputs (e.g. by writing a small amount of bash or python, as opposed to compiling and installing a custom version of the monitoring software) is important.
  • I want pretty graphs I can look at in my browser and/or the terminal. Ideally I want the httpd built in. I might be convinced to run nginx or lighttpd, but anything heavy like Apache is a nonstarter.
  • I need some kind of configurable-per-{machine,metric} alerting when poo poo catches fire.

evol262 posted:

You've just described a full-blown monitoring system. Almost anything which is this modular and extensible is going to be huge and terrifying.

Why do you need any of this? It seems like total overkill for a small network, particularly if you find Nagios huge and terrifying.

If you don't mind using a web-based service, why not something like New Relic?

quote:

any kind of pretty interface to Nagios means you should just learn ELK and dump Nagios alerts to Grafana/Graphite.

This doesn't make sense

ToxicFrog
Apr 26, 2008


evol262 posted:

You've just described a full-blown monitoring system. Almost anything which is this modular and extensible is going to be huge and terrifying.

It doesn't seem like it should be huge and terrifying! At least in my head, it would work something like this:
- some command you can run on the nodes that looks at the system and horfs up a bunch of JSON or similar
- server periodically runs that command on all nodes, collects the results in a time-series database for graphing
- server emits alerts if any of the metrics go outside of configured bounds
- frontend shows the contents of the DB and lets you slice by metric or system

...and as I write that, I realize that I'm glossing over enough fiddly implementation details that I probably have just described Nagios or something like it. :sigh:
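
For what it's worth, the agentless core of that design really is just a few lines of shell (a toy sketch; the host list, metric, threshold, and alert command are all made up):
code:
#!/bin/bash
# Poll each node over SSH, append a timestamped metric, alert past a threshold.
for h in mail htpc laptop desktop; do
    pct=$(ssh "$h" 'df --output=pcent / | tail -1 | tr -dc 0-9')
    echo "$(date +%s) $h root_disk_pct $pct" >> ~/homemon.log
    [ "$pct" -gt 90 ] && echo "disk on $h at ${pct}%" | mail -s ALERT me@example.com
done
The fiddly parts are exactly the ones I'm glossing over: retention, graphing, and per-metric alert config.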

quote:

Cockpit gets you most of what you want (pretty web graphs, requires basically no setup, multiple server support, adding new clients is easy, polling over SSH/websockets, built-in http server -- though I strongly disagree with calling Apache "heavy", even if the configuration can be), and it's supported on SuSE.

It doesn't appear to be, although they offer Fedora and RHEL RPMs, and I've sometimes had success getting RHEL RPMs to install on SuSE.

And yeah, "heavy" here is with respect to the amount of bullshit I have to deal with to get it installed and working, not the resource footprint.

quote:

Others (alerting, custom metrics) call for Zabbix or Nagios, though any kind of pretty interface to Nagios means you should just learn ELK and dump Nagios alerts to Grafana/Graphite.

I've never even heard of Graphite. I'll poke at it.

jre posted:

Why do you need any of this? It seems like total overkill for a small network, particularly if you find Nagios huge and terrifying.

Because I'm really bad at keeping an eye on my machines "by hand" and other family members depend, sometimes critically, on them. I want something that can yell at me on IRC "hey, (wife's laptop hasn't been backed up in a week|the mail server is about to run out of disk space|you forgot to start syncthing after kernel updates on your laptop|smartd is losing its poo poo about one of your disks, should probably look at that)".

The graphs are not a must-have but would be nice for when I'm setting it up and those times when I do remember to check on things manually.

(The specific incident that prompted this was the mail server actually running out of room for /var/spool/imap and all mail delivery hanging for nearly a day as a result before someone noticed and IMed me.)

ToxicFrog fucked around with this message at 14:52 on Feb 7, 2016

jre
Sep 2, 2011

To the cloud ?



ToxicFrog posted:

I've never even heard of Graphite. I'll poke at it.

Graphite is purely for recording and graphing metrics; most setups that use it install an agent like Diamond on the monitored machines. It doesn't do alerting; for that you'd need something like Seyren or Bosun. Also, its graphing is a bit dated, so most folk now stick Grafana in front of it.

Configuring and maintaining all these services is not trivial, particularly for home use, unless you want to learn this stuff to help with work things. :shrug:

ToxicFrog
Apr 26, 2008


jre posted:

Graphite is purely for recording and graphing metrics; most setups that use it install an agent like Diamond on the monitored machines. It doesn't do alerting; for that you'd need something like Seyren or Bosun. Also, its graphing is a bit dated, so most folk now stick Grafana in front of it.

Configuring and maintaining all these services is not trivial, particularly for home use, unless you want to learn this stuff to help with work things. :shrug:

None of this is likely to be at all useful at work unless I decide I want a career change -- I'm primarily a developer, and our monitoring is all deployed and configured by dedicated teams who Aren't Me. The extent of my involvement is knowing how to add/remove datacenters from our monitoring, and occasionally writing a few lines of configuration for the monitoring system telling it to record/alert on a new metric/service that we've just made available.

I inherited a lot of infrastructure from my dad, and a lot of my work since has been trying to simplify it. I've replaced dovecot with cyrus, a disorganized and mostly redundant collection of rsync backups with a centralized bup cron, apache with nginx, and an ad hoc collection of barely-documented, rarely-versioned config files and system configurations with versioned Ansible configs. In a few weeks I'm going to replace sendmail and postfix with a simple exim setup, too. I was kind of hoping there was something similarly straightforward for monitoring.

evol262
Nov 30, 2010
#!/usr/bin/perl

jre posted:

This doesn't make sense

Nagios is ugly, but you can tie a display of the number of alerts into graphite easily.

ToxicFrog posted:

It doesn't appear to be, although they offer Fedora and RHEL RPMs, and I've sometimes had success getting RHEL RPMs to install on SuSE.

Are you sure? Cockpit is under very heavy development right now, and the Fedora packages (or building your own) are probably preferable anyway.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Is there a way I can configure XFCE4 to fall back on another icon set when the current one is missing icons? Right now I have plenty of red X'es all over the place.

ToxicFrog
Apr 26, 2008


evol262 posted:

Are you sure? Cockpit is under very heavy development right now, and the Fedora packages (or building your own) are probably preferable anyway.

Aah, I didn't look at software.opensuse because I assumed by "supported" you meant "by the devs", and I didn't see it listed on the Cockpit site.

At any rate, after poking at the Cockpit site some, it looks like it's more management than it is monitoring. Zabbix honestly looks very much like what I want; from skimming the docs, it can do a great deal, but just setting it up with the bits I want shouldn't be too hard and it's very easy to add new data sources to using zabbix_sender or similar.

...except that you need to install and administer Apache+PHP for the frontend, and a full-power database server for the backend. You can't get away with SQLite because the frontend and server run as separate processes, so using the frontend would corrupt the database. :sigh:

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

jre posted:

Graphite is purely for recording and graphing metrics; most setups that use it install an agent like Diamond on the monitored machines. It doesn't do alerting; for that you'd need something like Seyren or Bosun. Also, its graphing is a bit dated, so most folk now stick Grafana in front of it.

Configuring and maintaining all these services is not trivial, particularly for home use, unless you want to learn this stuff to help with work things. :shrug:
Graphite's not bad if you just want to play with it. You can grab Vagrant or Docker images or whatever you need that has all the stuff pre-installed and you just need to wire it to your clients. Trying to get it to scale is another matter entirely, though. Even the backends like OpenTSDB or InfluxDB don't do all that well under huge write load.

evol262
Nov 30, 2010
#!/usr/bin/perl

ToxicFrog posted:

Aah, I didn't look at software.opensuse because I assumed by "supported" you meant "by the devs", and I didn't see it listed on the Cockpit site.

At any rate, after poking at the Cockpit site some, it looks like it's more management than it is monitoring. Zabbix honestly looks very much like what I want; from skimming the docs, it can do a great deal, but just setting it up with the bits I want shouldn't be too hard and it's very easy to add new data sources to using zabbix_sender or similar.

...except that you need to install and administer Apache+PHP for the frontend, and a full-power database server for the backend. You can't get away with SQLite because the frontend and server run as separate processes, so using the frontend would corrupt the database. :sigh:

Zabbix is really easy to set up. And administering apache takes almost no work, since zabbix is packaged almost everywhere.

Cockpit is monitoring+management, but not alerting. Seeing available package updates, disk usage, network usage, etc is really easy, even for remote servers. But Zabbix is much better for what you want if you're up for it

jre
Sep 2, 2011

To the cloud ?



Vulture Culture posted:

Graphite's not bad if you just want to play with it. You can grab Vagrant or Docker images or whatever you need that has all the stuff pre-installed and you just need to wire it to your clients. Trying to get it to scale is another matter entirely, though. Even the backends like OpenTSDB or InfluxDB don't do all that well under huge write load.

Graphite won't scale for poo poo because it uses a file per metric, so doing any kind of aggregation requires massive globs on the filesystem. People have tried to jam things like Cassandra underneath to make it less poo poo, but that doesn't really scale well either.

Influx is still not mature, so who knows how it will work if they ever stop loving about with the storage layer.
Configured properly, OpenTSDB will do a crazy write load, e.g. 1M data points per second on fairly meh boxes.

Or 100M per second on this kind of hardware.

jre fucked around with this message at 21:30 on Feb 7, 2016

ToxicFrog
Apr 26, 2008


evol262 posted:

Zabbix is really easy to set up. And administering apache takes almost no work, since zabbix is packaged almost everywhere.

Cockpit is monitoring+management, but not alerting. Seeing available package updates, disk usage, network usage, etc is really easy, even for remote servers. But Zabbix is much better for what you want if you're up for it

After reading the docs it's not so much setting up Zabbix that I'm worried about as setting up Apache (which I've done before, and it was a pain in the rear end -- but that was years and years ago, maybe it's better now?) and Postgres (which I have zero experience with apart from the general reputation database servers have of being scary to set up and administer).

I may try setting it up on my laptop while I'm on vacation and see how it goes, though.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I barely know anything about databases, but unless you need to tune for massive requirements, it's easy to get Postgres working just fine.

Install the package, set up your users, and then follow the directions for whatever database-using software you're installing.

There's tons of guides out there, and they're easy to follow because it's an easy process. There's not much to it.
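
For most distros the whole thing is something like this (a sketch; package and service names vary, and some distros want an explicit initdb first):
code:
sudo apt-get install postgresql           # or the zypper/yum equivalent
sudo systemctl enable --now postgresql
sudo -u postgres createuser --pwprompt myapp
sudo -u postgres createdb -O myapp myapp_db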


evol262
Nov 30, 2010
#!/usr/bin/perl

ToxicFrog posted:

After reading the docs it's not so much setting up Zabbix that I'm worried about as setting up Apache (which I've done before, and it was a pain in the rear end -- but that was years and years ago, maybe it's better now?) and Postgres (which I have zero experience with apart from the general reputation database servers have of being scary to set up and administer).

I may try setting it up on my laptop while I'm on vacation and see how it goes, though.

This is the idea behind good packaging.

Zabbix's setup scripts already handle apache for you (it'll create a vhost), but most distros I've seen already have Apache ready to go with Zabbix once Zabbix is installed. No configuration necessary.

Postgres is also really simple to get going. Again, it mostly comes configured out of the box, and all you need to do is su to postgres (the user) and do whatever you want. Zabbix comes with database setup scripts.
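
For Zabbix specifically, the database half is roughly this (a sketch; the schema path varies by distro and Zabbix version):
code:
sudo -u postgres createuser --pwprompt zabbix
sudo -u postgres createdb -O zabbix zabbix
zcat /usr/share/doc/zabbix-server-pgsql*/create.sql.gz | psql -U zabbix zabbix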
