|
The fun part with the Super Nintendo feature is how they do it: it's not a library parsing a file format the way mp3 or something is handled. They literally fire up a Super Nintendo emulator and feed the file through it to produce the audio stream. You can play old Amiga music files the same way. It's both crazy and awesome at the same time.
|
# ¿ Dec 16, 2016 22:35 |
|
|
X forwarding with ssh is "the" way to do it; there's no special magic. If it's laggy there's not much you can do. Maybe check that the droplet isn't memory or CPU starved, but other than that you're pretty much stuck. 20ms is actually quite a bit of latency for X11; you'll definitely feel it. Forwarded programs can be choppy even on a LAN, depending on the load of the redraws.
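One thing worth trying before giving up is ssh's compression flag, which can help on slow links. A sketch (the hostname is made up; `-G` just prints the resolved client config without connecting, so you can sanity-check the flags):

```shell
# -X enables X11 forwarding, -C turns on compression.
# Output should include the lines "forwardx11 yes" and "compression yes".
ssh -G -X -C user@droplet.example.com | grep -E '^(forwardx11|compression) '
```

Drop the `-G` to actually connect.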
|
# ¿ Dec 18, 2016 01:48 |
|
Nope, the X11 protocol hasn't changed in any meaningful way in 20+ years. If anything it's gotten slower, because they keep bolting new features onto it and programs are cramming more icons and animations into everything. VNC is probably the smoothest way to run a GUI on a remote server, just because it's easier to optimize what's basically a stream of screenshots than it is to pass all the X11 inputs/events over the wire one by one.
|
# ¿ Dec 18, 2016 01:56 |
|
Linux device enumeration is just the best. Without knowing the specifics of your system there's really only general advice: try blacklisting the module the new drive needs to stop the kernel from probing it, put the old drive in the new drive's slot, or update all configs (grub/fstab/etc) to reference devices by UUID. To get the UUIDs, run 'blkid' on the devices in question.
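A sketch of the UUID approach in fstab (the UUID below is made up; substitute the real one from blkid):

```
# /etc/fstab: mount by UUID instead of /dev/sdX so enumeration order
# doesn't matter. Get the real UUID from 'blkid /dev/sdb1' or similar.
UUID=6c2a1b3e-9d4f-4e0a-8c7d-2b5e9f0a1c3d  /data  ext4  defaults  0 2
```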
|
# ¿ Dec 20, 2016 21:37 |
|
Recycle it and buy a Raspberry Pi if you want low-power computing.
|
# ¿ Dec 22, 2016 05:41 |
|
syslog is designed to be a fire-and-forget system; I think it even still relies on UDP for sending over the network, so you're probably not going to be able to pull it off with plain syslog. rsyslog probably allows the kind of control you need, but don't hold me to it.
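For what it's worth, rsyslog can forward over TCP and buffer to disk when the remote is down. A rough sketch in the legacy config syntax (the hostname is made up):

```
# "@" forwards over UDP (fire and forget), "@@" forwards over TCP.
# The queue settings spool messages to disk while the remote is unreachable.
$ActionQueueType LinkedList
$ActionQueueFileName fwdq
$ActionResumeRetryCount -1
*.* @@logs.example.com:514
```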
|
# ¿ Dec 22, 2016 21:10 |
|
It's poo poo like that that makes me glad I swore off dual-booting years ago and keep multiple computers. (Now that we have Raspberry Pis and NUCs it's not a bad way to go.)
|
# ¿ Dec 22, 2016 23:53 |
|
I was a moderate hoarder until earlier this year when I realized I was never going to use all that poo poo ever again. Feels good man.
|
# ¿ Dec 23, 2016 00:35 |
|
iptables to lock down every port except the ones you really need, and run a software update once a month or so; that should pretty much cover the bases. Some kind of fail2ban on the ssh port is handy too. It isn't really "security," but it will keep brute-force attempts out of your logs and maybe punt script kiddies who don't have a bunch of IPs to attack from. No WordPress is a good plan too.
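A minimal default-deny ruleset sketch in iptables-restore format (the open ports are just examples; load with `iptables-restore < rules.v4`):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
COMMIT
```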
|
# ¿ Dec 29, 2016 02:55 |
|
Yes, rsync is super sensitive about the trailing slash and completely silent about what it's doing. I've always felt that was a lovely feature: when we're migrating disks someone always fucks it up at least once. But hey, that's unix!
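You can demo the trailing-slash gotcha safely in a scratch directory (all paths here are throwaway):

```shell
cd "$(mktemp -d)"
mkdir -p src a b
touch src/file.txt

rsync -a src/ a/   # trailing slash: copies the *contents* of src -> a/file.txt
rsync -a src  b/   # no slash: copies src itself -> b/src/file.txt

ls a b
```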
|
# ¿ Jan 4, 2017 15:31 |
|
Well again, if rsync would just print a "hey, I'm about to create a directory called /remoteserver/destination/destination" prompt and ask for confirmation, that would fix a lot of issues. Just have it check whether the new subdirectory has the same name as its parent and a billion errors would be prevented.
|
# ¿ Jan 4, 2017 16:32 |
|
I force my jenkins users to pull from git and push artifacts if they want their binaries, which means I don't have to do jack poo poo. There's also the matrix plugin for handling branches/architectures. (https://wiki.jenkins-ci.org/display/JENKINS/Building+a+matrix+project)
|
# ¿ Jan 6, 2017 22:10 |
|
It means the log file got a new entry while it was being rotated. Unless you care a lot about your logs I'd just ignore it.
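If you do care, logrotate's copytruncate is the usual workaround. A sketch (the file name is made up):

```
# /etc/logrotate.d/myapp: copytruncate copies then truncates in place instead
# of renaming, which avoids the rotation race at the cost of possibly losing
# a few lines written during the copy
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    copytruncate
}
```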
|
# ¿ Jan 9, 2017 14:41 |
|
I assume the maintainers realized their mission was irrelevant around the time someone started selling a credit card sized computer with 512MB for $35.
|
# ¿ Jan 11, 2017 17:29 |
|
You always have the option of building your own Satellite install from its various upstream components. (We actually had someone working on that until it was finally decided it was madness and they coughed up the money for a subscription.)
|
# ¿ Jan 12, 2017 18:25 |
|
From what I've heard from the people running a Satellite install, the biggest problem is that version updates so far have been a "rm -rf the fucker and reinstall from scratch" style process, which seems suboptimal. (I am not a fan of Satellite and am just venting.)
|
# ¿ Jan 12, 2017 18:36 |
|
Hey at least we learned to stop putting "xhost +" in our init scripts.
|
# ¿ Jan 12, 2017 18:42 |
|
Odette posted:

> I just setup my own mail server with Dovecot/Postfix using virtual hosts/domains/etc. Have installed ClamAV, SpamAssassin, OpenDKIM & setup a few extras (SPF/DKIM/DMARC)

postgrey is really good too.
|
# ¿ Jan 14, 2017 19:20 |
|
For the ultimate challenge, drop $10/month on a DigitalOcean droplet and run email under your own domain.
|
# ¿ Jan 15, 2017 00:31 |
|
SPF and DKIM are not all that hard; just take notes as you set them up so you can remember the details later. It helps that there are a billion tutorials floating around out there these days.
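For reference, the DNS side is just a few TXT records. A sketch with a placeholder domain, selector, and key:

```
; example.com, the "mail" selector, and the truncated key are all placeholders
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIB..."
_dmarc.example.com.           IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
```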
|
# ¿ Jan 15, 2017 01:02 |
|
You need to find out which part of the php script is failing. Is the ssh successful? Is it running with the correct permissions (check selinux too)? Is the data coming back in a sensible format? Basically, sprinkle your php script with echo statements and figure out where it's crapping out.
|
# ¿ Jan 18, 2017 23:13 |
|
RFC2324 posted:

> Here is the thing... It works. For one server it works every time, for others it works sometimes. For one it only appears to work when manually invoked (even invoked as non-root), but never when invoked by cacti itself. It TECHNICALLY works. It just isn't returning the expected values (results from free -wb on various servers).

My first guess is that the credentials are wrong: an ssh-agent exits and it can't log in anymore, a kerberos ticket is being purged, known_hosts is getting wiped, something like that. Without knowing your environment it's just guessing.
|
# ¿ Jan 18, 2017 23:28 |
|
Isn't gparted just a frontend for parted? Learn the CLI version.
|
# ¿ Jan 19, 2017 20:34 |
|
I've been typing fdisk for so many years now it's the first thing I try too. But it just don't cut the mustard anymore given how large disks have grown.
|
# ¿ Jan 19, 2017 20:38 |
|
Well so it does.. at least in actually modern distributions. We're still stuck on RHEL6 and the fdisk it provides flips out on gpt disks. Maybe someday when we're allowed to move to RHEL7 I'll get to try it!
|
# ¿ Jan 19, 2017 20:51 |
|
Well, if you send to NFS you're dependent on the network stack still functioning. Then there's the speed thing: you've got to sit around and wait for an image the size of system memory to be transferred over the wire. I can't think of any normal situation where dumping to a network share would help forensics when a local dump would fail.
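A sketch of a local-dump configuration, assuming RHEL-style kdump (the directives below are the stock ones, the values are examples):

```
# /etc/kdump.conf: dump to the local disk; makedumpfile's -d 31 strips
# free/zero/cache pages so the image is much smaller than system memory
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31
```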
|
# ¿ Feb 8, 2017 01:40 |
|
I've only ever used kdump to placate fussy users into thinking we're working real hard to figure out why their lovely code keeps crashing the server. Which usually ends up being no more than telling them what function it was in from the stack trace and moving on with my day. I ain't a kernel developer and never will be so if they want more than that, they can crack the dump open.
|
# ¿ Feb 8, 2017 06:04 |
|
I tried setting up a Raspberry Pi as a thin client a year or so ago and it was pretty miserable. Working in a terminal was sluggish but technically doable. Web browsing was worthless and nothing could be done about it. The only way I could comfortably do work on it was to run a vnc session on a real computer and display it on the Pi.
|
# ¿ Feb 8, 2017 15:09 |
|
Charles Mansion posted:

> What is a good way to organize an NFS share that contains software installations used by multiple Unix platforms? Is it necessary to silo binaries and libraries for every Linux distribution or should it be enough to just organize them by architecture?

Not that I recommend going down this dark road, but I've seen it done where they organized directories by OS and kernel version, and, when we were doing the 32->64 bit transition, by architecture as well. So there would be directories like

    /mnt/software/IRIX_6_5
    /mnt/software/SunOS_5_10
    /mnt/software/Linux_2_2
    /mnt/software/Linux_2_4
    /mnt/software/Linux_2_4_2_32
    /mnt/software/Linux_2_4_2_64

and so on. Then there were scripts that would read the output of uname, build a path, and tweak $PATH to add the appropriate directory. As you can see there was a hierarchy in place, so an oddball release that needed a specific version could get it while a less restrictive system could use something more globally usable. As for whether it's necessary anymore, it depends. If the package is available in the distribution's repositories, don't bother. If you have users building code that targets specific versions of libraries, you might need it.
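The uname trick can be sketched in a few lines of shell (the /mnt/software layout is the hypothetical one above, not anything real):

```shell
OS=$(uname -s)                                   # e.g. Linux
REL=$(uname -r | cut -d. -f1-2 | tr . _)         # e.g. 2_4
ARCH=$(uname -m | grep -q 64 && echo 64 || echo 32)

SWDIR=/mnt/software/${OS}_${REL}
# prefer the arch-specific dir if it exists, else fall back to the generic one
[ -d "${SWDIR}_${ARCH}" ] && SWDIR=${SWDIR}_${ARCH}

PATH=$SWDIR/bin:$PATH
echo "$SWDIR"
```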
|
# ¿ Feb 9, 2017 21:38 |
|
Try this: rm -f ~/.bash_history. Problem solved; you don't really need to preserve all the other poo poo in your history, do you?
|
# ¿ Feb 18, 2017 19:03 |
|
drat that's eye-opening. I had no idea that universe existed, and I'm not sure I like it. I mean, I can see the value of making scripting easier, but a major benefit of shell scripts is that they have no dependencies. Making a framework a dependency for basic server administration seems like it could be risky.
|
# ¿ Feb 20, 2017 16:33 |
|
emacs is a text editor where some programmer decided it should be able to do everything and then spent 40 years hacking features into it.
|
# ¿ Feb 22, 2017 15:33 |
|
No one builds software anymore; it's all software config these days. If you really insist on getting your hands dirty, do one of the more bare installs like Arch or Slackware. Otherwise just pick one of the big ones like Ubuntu or one of the Red Hat derivatives.
|
# ¿ Feb 24, 2017 00:58 |
|
Airgap everything, get rid of all speakers and microphones, run only code you yourself have written. Don't even C&P snippets from stackexchange.
|
# ¿ Feb 28, 2017 04:50 |
|
~/t is my default test destination. Or ~/u if ~/t already exists. And so on until I run out of letters. Usually I clean up the mess before I get to z.
|
# ¿ Mar 15, 2017 22:31 |
|
Prefix is the superior option because it allows rpm's --prefix option to work. The only part I don't like is the %files section: you still have to specify the full path (if you set the prefix to /bar you still have to put /bar/etc/myconfig.conf in %files). The build process isn't smart enough to build the path for you.
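A fragment of what that looks like in a spec file (the paths besides the /bar example from above are made up):

```
# With Prefix: set, 'rpm --prefix /other/dir' can relocate the install at
# install time, but %files still needs the full default paths spelled out.
Prefix: /bar

%files
/bar/etc/myconfig.conf
/bar/bin/mytool
```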
|
# ¿ Mar 16, 2017 20:22 |
|
Pfft, we set up a thing in our ticketing system where issues involving ssh connections don't make it to us until after they've stepped through a FAQ. Job security ain't worth that kind of ticket volume.
|
# ¿ Mar 23, 2017 03:19 |
|
Some people are neatniks and like to have things super tidy; why do you gotta be grumpy about how people use their personal property? I know I've had phases like that, usually after something complex goes haywire or some random piece of software I don't give a poo poo about starts bitching in the logs.
|
# ¿ Apr 3, 2017 01:43 |
|
gnome (or kde) and words like "minimal" are mutually exclusive. As soon as you bring in either desktop environment it's gonna pull in a hundred packages. You can get pretty small if you just install the xorg packages and run one of the standalone window managers, but you'll accumulate gnome/kde packages over time anyway, because eventually you'll want to try some software that's built against one of them and it forces your hand.
|
# ¿ Apr 3, 2017 06:18 |
|
|
Ganglia is my go-to. "Oh look, that thing you're complaining about has been trending upward for the last six months and now you're overloading your machine; go away and buy new hardware."
|
# ¿ Apr 4, 2017 04:20 |