LochNessMonster
Feb 3, 2005

I need about three fitty


Jerk McJerkface posted:

You could probably set up an Elasticsearch cluster; there's probably an RSS application that can grab the content and send it to the Elasticsearch HTTP port.

There’s a logstash plugin for rss input. Haven’t tried it myself though.


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Unfortunately, I need something with a web ui that is at least somewhat user friendly.

I can probably come up with something with elasticsearch, which I didn't even consider as I kept trying to think of something more purpose-built for document storage and management.

Da Mott Man
Aug 3, 2012


Thermopyle posted:

Unfortunately, I need something with a web ui that is at least somewhat user friendly.

I can probably come up with something with elasticsearch, which I didn't even consider as I kept trying to think of something more purpose-built for document storage and management.

https://tt-rss.org/ might be what you're looking for.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Jerk McJerkface posted:

You could probably set up and elastic search cluster, there's probably and rss application that can grab the content and send it to the elastic search http Port.
There's a pretty good chance you could do the ES import with just Logstash using the http_poller input and the xml filter.
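A pipeline along those lines might look roughly like this. To be clear, this is an untested sketch: the feed URL, index name, and xpath mapping below are all placeholders you'd swap for your own.

```
input {
  http_poller {
    urls => { "feed" => "https://example.com/feed.xml" }
    schedule => { "every" => "10m" }
  }
}

filter {
  xml {
    source => "message"
    store_xml => false
    xpath => { "/rss/channel/item/title/text()" => "item_title" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "rss-items"
  }
}
```

You'd still want to split items into individual events before indexing, but that's the general shape of the http_poller + xml approach.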

Tad Naff
Jul 8, 2004

I told you you'd be sorry buying an emoticon, but no, you were hung over. Well look at you now. It's not catching on at all!
:backtowork:
I'm using this thing: http://selfoss.aditu.de/

xzzy
Mar 5, 2009

selfoss is pretty cool but eventually you'll find flaws with the importers and end up writing php yourself.

Fortunately it supports mods really well, once you sort out the interface.

DOG AT THE DOOR
Aug 29, 2007

bwha
Ukrainians, or possibly someone in the relative geographic area of the Ukraine, are spamming me or at least trying really hard to. I'm on Debian 8 for reasons too complicated to explain. Partially it's my fault: I run a super-low-traffic phpbb forum that previously had no user authentication on signup whatsoever, they put up a canary in mid-December and I didn't deal with it because gently caress it, year's ending. Shortly before the turn of the year, they spun up their workers and when I looked back there were 7k new threads offering me fabulous prizes and credit approvals and estate sales and so on. Well, deleted all that poo poo, banned all the accounts, etc, end of story, right? No. Despite banned accounts and killed sessions they continued to send requests at a pretty brisk rate, between 15-30k requests per day going by statistics.

So, the question. I iptables dropped the offending IPs and things returned to normal for a few days. Now, there's 3 new IPs, also geoip'd to Ukraine, also doing exactly the same thing. They're slightly intelligent as they cycle useragents so I can't just alter the webserver config to 403 those or use robots.txt. The web stats application I run that let me identify all this in the first place can export to JSON. I think I can parse down the top 10 hosts by requests using jq in a shell script, checking that they meet my definition of attempted spammer (Ukraine, uses a buncha different useragents) and drop those addresses once a day or so automated by cron. Does this seem like a reasonable course of action? Has someone already made this thing? Is there a simpler way that I'm completely oblivious to?

I know there's an iptables patch that integrates geoip to let you drop whole countries but that seems extreme, also I don't want to be patching and recompiling the kernel for every security update. Mostly I only care about this because it's distorting the poo poo out of my web statistics and wasting a tiny bit of bandwidth.
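For what it's worth, the jq half of that plan only takes a few lines. Everything below is hypothetical: the JSON shape (`{"hosts": [...]}`) is invented since the actual stats export format isn't known, and the iptables rule is only echoed rather than applied so it can be dry-run safely.

```shell
#!/bin/bash
# Hypothetical stats export -- adjust the jq paths to whatever your stats app emits.
stats='{"hosts":[{"ip":"203.0.113.7","requests":21000},{"ip":"198.51.100.2","requests":12}]}'

# Top 10 hosts by request count, keeping only ones over a spam threshold.
offenders=$(echo "$stats" |
  jq -r '.hosts | sort_by(-.requests) | .[:10][] | select(.requests > 1000) | .ip')

for ip in $offenders; do
  # echo instead of running iptables directly while testing the script
  echo "iptables -A INPUT -s $ip -j DROP"
done
```

Drop the `echo` (and add your geoip check on each candidate) once you trust the filter, then cron it.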

DOG AT THE DOOR fucked around with this message at 20:42 on Jan 16, 2018

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

DOG AT THE DOOR posted:

Ukranians, or possibly someone in the relative geographic area of the Ukraine, are spamming me or at least trying really hard to. I'm on Debian 8 for reasons too complicated to explain. Partially it's my fault, I run a super-low-traffic phpbb forum that previously had no user authentication on signup whatsoever, they put up a canary in mid-December and I didn't deal with it because gently caress it, year's ending. Shortly before turn of the year, they spun up their workers and when I looked back there were 7k new threads offering me fabulous prizes and credit approvals and estate sales and so on. Well, deleted all that poo poo, banned all the accounts, etc, end of story, right? No. Despite banned accounts and killed sessions they continued to send requests at a pretty brisk rate, between 15-30k requests per day going by statistics.

So, the question. I iptables dropped the offending IPs and things returned to normal for a few days. Now, there's 3 new IPs, also geoip'd to Ukraine, also doing exactly the same thing. They're slightly intelligent as they cycle useragents so I can't just alter the webserver config to 403 those or use robots.txt. The web stats application I run that let me identify all this in the first place can export to JSON. I think I can parse down the top 10 hosts by requests using jq in a shell script, checking that they meet my definition of attempted spammer (Ukraine, uses a buncha different useragents) and drop those addresses once a day or so automated by cron. Does this seem like a reasonable course of action? Has someone already made this thing? Is there a simpler way that I'm completely oblivious to?

I know there's an iptables patch that integrates geoip to let you drop whole countries but that seems extreme, also I don't want to be patching and recompiling the kernel for every security update. Mostly I only care about this because it's distorting the poo poo out of my web statistics and wasting a tiny bit of bandwidth.
Your server is low-hanging fruit, and you should fix your server.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

DOG AT THE DOOR posted:

Ukranians, or possibly someone in the relative geographic area of the Ukraine, are spamming me or at least trying really hard to. I'm on Debian 8 for reasons too complicated to explain. Partially it's my fault, I run a super-low-traffic phpbb forum that previously had no user authentication on signup whatsoever, they put up a canary in mid-December and I didn't deal with it because gently caress it, year's ending. Shortly before turn of the year, they spun up their workers and when I looked back there were 7k new threads offering me fabulous prizes and credit approvals and estate sales and so on. Well, deleted all that poo poo, banned all the accounts, etc, end of story, right? No. Despite banned accounts and killed sessions they continued to send requests at a pretty brisk rate, between 15-30k requests per day going by statistics.

So, the question. I iptables dropped the offending IPs and things returned to normal for a few days. Now, there's 3 new IPs, also geoip'd to Ukraine, also doing exactly the same thing. They're slightly intelligent as they cycle useragents so I can't just alter the webserver config to 403 those or use robots.txt. The web stats application I run that let me identify all this in the first place can export to JSON. I think I can parse down the top 10 hosts by requests using jq in a shell script, checking that they meet my definition of attempted spammer (Ukraine, uses a buncha different useragents) and drop those addresses once a day or so automated by cron. Does this seem like a reasonable course of action? Has someone already made this thing? Is there a simpler way that I'm completely oblivious to?

I know there's an iptables patch that integrates geoip to let you drop whole countries but that seems extreme, also I don't want to be patching and recompiling the kernel for every security update. Mostly I only care about this because it's distorting the poo poo out of my web statistics and wasting a tiny bit of bandwidth.
just put up a loving captcha

Love Stole the Day
Nov 4, 2012
Please give me free quality professional advice so I can be a baby about it and insult you
I heard about a thing that will auto-ban any IP that fails to login or whatever on the first try until you manually unlock it. Maybe that would be good in your case?

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

insularis posted:

Anyone have any ideas on a (all things being relative) "slow" CIFS mount in Ubuntu 17.04/17.10?

Situation: Several VMs accessing a separate FreeNAS array via a point-to-point network link of 10GbE. Works great, no problems, a Windows VM can read at 500-600MB/sec and write at 900MB/sec (weird, but whatever). Close-ish to line speed, at least for write. So that kinda eliminates the ESXi portion as far as I'm concerned. The Ubuntu VMs are using an fstab cifs mount with vers=3.02 specified and verified on the FreeNAS side with smbstatus ... that's the protocol level they're connecting at. iperf tests to one of those VMs shows 10GbE speeds. But an actual large file copy never goes above 220MB/sec. I've got these additional parameters set in smb.conf:

socket options = TCP_NODELAY SO_RCVBUF=524288 SO_SNDBUF=524288 IPTOS_LOWDELAY

But that didn't seem to change anything. Anything else I could look into? I'm just baffled as to why it's 2x faster than gigabit, but no better, and CPU/memory/disk isn't the issue, either. All the Ubuntu VMs behave this way in my setup.

No one had anything, but I figured this out. It was related to NFS queue depth in ESXi, even though it presented as an SMB issue. Setting it to <64 on ESXi solved it. I had the same point to point network mounted as an NFS storage share, and traffic over the vmkernel NIC and guest mounted storage was causing timeouts and huge latency.
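For anyone who hits the same thing: the knob in question is the NFS.MaxQueueDepth advanced setting on the ESXi host. Something like the following should do it (value to taste, and the host generally needs a reboot for it to take effect) -- untested sketch:

```
# Lower the NFS queue depth on the ESXi host
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 32
# Confirm the new value
esxcli system settings advanced list -o /NFS/MaxQueueDepth
```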

insularis fucked around with this message at 05:55 on Jan 18, 2018

jre
Sep 2, 2011

To the cloud ?



insularis posted:

No one had anything, but I figured this out. It was related to NFS queue depth in ESXi, even though it presented as an SMB issue. Setting it to <64 on ESXi solved it. I had the same point to point network mounted as an NFS storage share, and traffic over the vmkernel NIC and guest mounted storage was causing timeouts and huge latency.

Thanks for posting the solution, that's really interesting

Da Mott Man
Aug 3, 2012


DOG AT THE DOOR posted:

Ukranians, or possibly someone in the relative geographic area of the Ukraine, are spamming me or at least trying really hard to. I'm on Debian 8 for reasons too complicated to explain. Partially it's my fault, I run a super-low-traffic phpbb forum that previously had no user authentication on signup whatsoever, they put up a canary in mid-December and I didn't deal with it because gently caress it, year's ending. Shortly before turn of the year, they spun up their workers and when I looked back there were 7k new threads offering me fabulous prizes and credit approvals and estate sales and so on. Well, deleted all that poo poo, banned all the accounts, etc, end of story, right? No. Despite banned accounts and killed sessions they continued to send requests at a pretty brisk rate, between 15-30k requests per day going by statistics.

So, the question. I iptables dropped the offending IPs and things returned to normal for a few days. Now, there's 3 new IPs, also geoip'd to Ukraine, also doing exactly the same thing. They're slightly intelligent as they cycle useragents so I can't just alter the webserver config to 403 those or use robots.txt. The web stats application I run that let me identify all this in the first place can export to JSON. I think I can parse down the top 10 hosts by requests using jq in a shell script, checking that they meet my definition of attempted spammer (Ukraine, uses a buncha different useragents) and drop those addresses once a day or so automated by cron. Does this seem like a reasonable course of action? Has someone already made this thing? Is there a simpler way that I'm completely oblivious to?

I know there's an iptables patch that integrates geoip to let you drop whole countries but that seems extreme, also I don't want to be patching and recompiling the kernel for every security update. Mostly I only care about this because it's distorting the poo poo out of my web statistics and wasting a tiny bit of bandwidth.

Set up fail2ban and geoiplookup, assuming they're all coming from Ukraine and you know no legit traffic comes from that area. Although I think if they're persistent they'll just use owned boxes to hit you instead. Captcha and moderator approval would prob help also.
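A jail for this might look roughly like the following. The jail name, log path, and thresholds here are all made up, and you'd still need to write a filter regex that matches the spammy requests in your access log:

```
# /etc/fail2ban/jail.d/phpbb-spam.local (hypothetical)
[phpbb-spam]
enabled   = true
port      = http,https
filter    = phpbb-spam
logpath   = /var/log/apache2/access.log
maxretry  = 50
findtime  = 600
bantime   = 86400
banaction = iptables-multiport
```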

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Hello, I'm on Linux Mint 18.3 and I've started experiencing some weird stuttering from the Cinnamon desktop. It's very periodic: the whole UI basically pauses for 2 seconds, then works OK for 2 or 3 seconds, then the cycle repeats forever.

Using Alt-F2 and then "r" to restart cinnamon does not alleviate the problem. In gnome-system-monitor or htop I can see that the CPU load for process "cinnamon" spikes to 115% or so every time it lags, then drops down into the 10-20% range.

I'm not sure what triggers it to start doing this but it doesn't do it on a fresh boot. I have been experimenting with some tutorials on tensorflow lately and it seems like the pauses are roughly correlated to when I start working in tensorflow. The problem is once the stuttering starts it doesn't seem to matter if I close out every application, jupyter notebook, chrome, etc. Only a reboot fixes it. Does anyone have an idea why my computer is doing this or how I might fix it?

e: One other thing is that the freezing makes editing text loving infuriating, however the mouse cursor repainting works fine. So I can move my mouse smoothly during a stutter, but if I were to try to click anything I would have to wait till the end of the stutter before the click is registered.

e2: After observing it some more, I no longer think there is any correlation with tensorflow as I thought before. It seems that the stuttering just starts out less noticeable and the stutters become longer as the uptime increases.

peepsalot fucked around with this message at 20:38 on Jan 23, 2018

effika
Jun 19, 2005
Birds do not want you to know any more than you already do.

peepsalot posted:

Hello, I'm on Linux Mint 18.3 and I've started experiencing some weird stuttering from cinnamon desktop. Its very periodic and the whole UI basically pauses for 2 seconds, then works ok for 2or 3 seconds, then the cycle repeats forever.

Using Alt-F2 and then "r" to restart cinnamon does not alleviate the problem. In gnome-system-monitor or htop I can see that the CPU load for process "cinnamon" spikes to 115% or so every time it lags, then drops down into the 10-20% range.

I'm not sure what triggers it to start doing this but it doesn't do it on a fresh boot. I have been experimenting with some tutorials on tensorflow lately and it seems like the pauses are roughly correlated to when I start working in tensorflow. The problem is once the stuttering starts it doesn't seem to matter if I close out every application, jupyter notebook, chrome, etc. Only a reboot fixes it. Does anyone have an idea why my computer is doing thi s or how I might fix it?

e: One other thing is that the freezing makesediting text loving infuriating, however the mouse cursor repainting works fine. So I can move my mouse smoothly during a stutter, but if I were to try to click anything I would have to wait till the end of the stutter before the click is registered.

I had that too back in the Mint 14-17 days but never saw it again after I switched to Fedora with the Cinnamon desktop. I think it might be a Mint problem.

Mellifluenza
Jan 8, 2018
Is there a command to determine if DDR4 RAM is in dual-channel mode?

I'm currently running a single stick in my Thinkpad T570 but have just ordered an identical Hynix module.

Googling around, the general suggestion is to run dmidecode with root privs and grep for "Interleaved Data Depth" and "Interleave Position", but neither field is showing at all. I'm guessing that dmidecode only shows interleave on older DDR3 systems.

The BIOS screen on this thing doesn't have much of a diagnostic section as far as RAM goes, apart from showing the total amount installed.
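One thing that might be worth a look anyway: even where the interleave fields are gone on DDR4, the per-DIMM "Locator"/"Bank Locator" fields from dmidecode usually reveal which channel each module sits in (the exact strings vary by vendor):

```
sudo dmidecode --type memory | grep -E 'Locator|Size'
# On many laptops the locators read something like ChannelA-DIMM0 / ChannelB-DIMM0;
# one populated module per channel string = dual channel.
```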

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Mellifluenza posted:

Is there a command to determine if DDR4 RAM is in dual-channel mode?

I'm currently running a single stick in my Thinkpad T570 but have just ordered an identical Hynix module.

Googling around, the general suggestion is to run dmidecode with root privs, but the suggestion is to grep for "Interleaved Data Depth" and "Interleave Position" but it's not showing at all. I'm guessing that dmidecode only shows interleave on older DDR3 systems.

The BIOS screen on this thing doesn't have much of a diagnostic section as far as RAM goes, apart from showing the total amount installed.
Does it show up during POST?

Horse Clocks
Dec 14, 2004



I *think* memtest shows you memory configuration.

Mellifluenza
Jan 8, 2018

anthonypants posted:

Does it show up during POST?

POST is just a big red Lenovo splash. I'll see if there's a more 'geek' option than telling me that I bought a Lenovo laptop, just in case I hadn't noticed what I was buying when I paid over £1000 for something :-)

SoftNum
Mar 31, 2011

Mellifluenza posted:

POST is just a big red Lenovo splash. I'll see if there's a more 'geek' option than telling me that I bought a Lenovo laptop, just in case I hadn't noticed what I was buying when I paid over £1000 for something :-)

Sorry, detailed POST screens do not test well in market research.

Mellifluenza
Jan 8, 2018

SoftNum posted:

Sorry, detailed POST screens do not test well in market research.

Actually, I'm being an idiot. When you interrupt the Lenovo splash by pressing enter there's an option to go into BIOS by pressing F1 and 'diagnostics' by pressing F10.

I'd never noticed the diagnostics section before. At the moment it shows a single 8GB stick in the diagnostics menu, so I'm hoping when the other 8GB arrives it'll show as being in dual-channel mode.

Failing that, I've got a bootable USB with memtest86 on it, so I'll use that when I get the RAM upgraded.

EDIT: Just as an aside, when I removed the rear cover to check the model number of the module that's installed it was surprisingly easy to undo the screws and prize the rear cover off the laptop without leaving any marks and then refitting it. Top marks to Lenovo for actually allowing you to open this thing up and put it back together as if it had never been tampered with. :-)

EDIT2: Should be 'prise'. Pedantry.

Mellifluenza fucked around with this message at 19:30 on Jan 24, 2018

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

effika posted:

I had that too in the 14-17 days but never saw it again after I switched to Fedora with the Cinnamon desktop. I think it might be a Mint problem.
That's weird, I didn't know cinnamon on fedora was a thing. I would expect cinnamon to run best on mint since that's basically its reference implementation. Years ago I ran cinnamon on ubuntu and it had a lot of buggy integration, like confusing duplicated apps for various settings dialogs, network manager, etc. That's why I eventually switched to cinnamon on mint. Anyways, I'm not much of a fedora fan, and I really don't want to have to reinstall and configure another OS along with every application I use.

Looks like it's something in the latest cjs (Cinnamon's JavaScript engine) being basically unusable with some (all?) applets. Also the Mint maintainer in the thread is a real dickhead about the whole thing.
https://github.com/linuxmint/Cinnamon/issues/6850
I had recently added two applets to my top panel: a CPU temp and a GPU temp monitor. I removed the applets and restarted cinnamon, and the problem seems gone for now.

Apparently the problem doesn't happen in earlier versions of cjs, so installing version 3.2 instead of 3.6 allegedly fixes it, but I'm not sure I can do that without completely breaking apt
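If you do want to try holding cjs back without fighting apt by hand, an apt pin is the usual mechanism. The package name and version pattern below are assumptions -- check what `apt-cache policy cjs` actually reports before pinning:

```
# /etc/apt/preferences.d/pin-cjs (hypothetical)
Package: cjs
Pin: version 3.2.*
Pin-Priority: 1001
```

A priority over 1000 allows apt to downgrade to the pinned version; whether Cinnamon 3.6 tolerates a cjs 3.2 downgrade is a separate question.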

xzzy
Mar 5, 2009

Is this a good place to chat about AWS?

Idea floating around here is shipping user batch jobs (data analysis) into the cloud. Which I assume is a pretty typical workload, but my concern is the scheme being floated for accounting the work people are running. We're cheap as gently caress here and they want to penny pinch, so the proposal I've heard is: fire up an instance with influxdb, collect some numbers, package them up somehow, ship them to an on-site server, then shut the instance down, saving on both cpu and bandwidth.

I don't have any experience with AWS but this sets off little red alarms in my head, so my question is what the convention for this type of monitoring would be or if it's actually a good idea.

Risket
Apr 3, 2004
Lipstick Apathy
Google is failing me. I have a question about return codes, or rather why if/then/else isn't handling them correctly.

For instance, I have a bash script that is running an application and that application is failing as expected and has a return code of 255.

code:
#!/bin/bash

some_app $path_to_file/file

if [ $? -eq 0 ]
then
	echo "The audio comparison test failed"
	exit 1
else
	echo "The audio comparison test passed"
fi
For whatever reason the output of this is "The audio comparison test passed"

However, when I echo the return code immediately after running the application:
code:

#!/bin/bash

some_app $path_to_file/file
echo $?
if [ $? -eq 0 ]
then
	echo "The audio comparison test failed"
	exit 1
else
	echo "The audio comparison test passed"
fi
The script echoes the return code and exits with the output "The audio comparison test failed" as it should.

Any ideas as to why this is happening?

Sheep
Jul 24, 2003
In the second one 'echo $?' itself returns 0 (the 'echo $?' command completed successfully) before your if clause, negating the previous command's return.

Also don't forget that 0 is the normal return code for 'all good', a non-zero value (1, for example) indicates whatever.
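The usual fix is to grab $? into a variable immediately, before anything else can clobber it. Minimal sketch, with `false` standing in for some_app:

```shell
#!/bin/bash

false                 # stands in for some_app; exits with status 1
status=$?             # capture immediately -- the very next command resets $?

echo "exit status was $status"

if [ "$status" -ne 0 ]; then
  echo "The audio comparison test failed"
else
  echo "The audio comparison test passed"
fi
```

That way you can echo, log, or test the status as many times as you like.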

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Risket posted:

Google is failing me. I have a question return codes, or why if/then/else isn't handling them correctly.

For instance, I have a bash script that is running an application and that application is failing as expected and has a return code of 255.

code:
#!/bin/bash

some_app $path_to_file/file

if [ $? -eq 0 ]
then
	echo "The audio comparison test failed"
	exit 1
else
	echo "The audio comparison test passed"
fi
For whatever reason the output of this is "The audio comparison test passed"

However, when I echo the return code immediately after running the application:
code:

#!/bin/bash

some_app $path_to_file/file
echo $?
if [ $? -eq 0 ]
then
	echo "The audio comparison test failed"
	exit 1
else
	echo "The audio comparison test passed"
fi
The script echoes the return code and exits with the output "The audio comparison test failed" as it should.

Any ideas as to why this is happening?
No one knows what return codes some_app is supposed to return, but in your second example, echo is returning 0, because it exited successfully, and your if/then statement says that if something returns 0, it should report that the audio comparison test failed.

Risket
Apr 3, 2004
Lipstick Apathy

Sheep posted:

In the second one 'echo $?' itself returns 0 (the 'echo $?' command completed successfully) before your if clause, negating the previous command's return.

Also don't forget that 0 is the normal return code for 'all good', a non-zero value (1, for example) indicates whatever.
Ah poo poo, I meant to put -ne instead of -eq. gently caress, 15 wasted minutes...

Thanks

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Risket posted:

Ah poo poo, I meant to put -nq instead of -eq. gently caress, 15 wasted minutes...

Thanks
You're looking for -ne.

Risket
Apr 3, 2004
Lipstick Apathy

anthonypants posted:

You're looking for -ne.
Yup, typo.

ToxicFrog
Apr 26, 2008


Risket posted:

Ah poo poo, I meant to put -ne instead of -eq. gently caress, 15 wasted minutes...

Thanks

BTW, if you're targeting bash, you should be using [[ ]] instead of [ ] -- but for this you don't even need to use [[ ]]/test:

code:
#!/bin/bash

if some_app $path_to_file/file; then
  echo "The audio comparison test passed."
else
  echo "The audio comparison test failed."
  exit 1
fi

Risket
Apr 3, 2004
Lipstick Apathy

ToxicFrog posted:

BTW, ff you're targeting bash, you should be using [[ ]] instead of [ ] -- but for this you don't even need to use [[ ]]/test:

code:
#!/bin/bash

if some_app $path_to_file/file; then
  echo "The audio comparison test passed."
else
  echo "The audio comparison test failed."
  exit 1
fi
Oh, I didn't know you could do that in bash, nice to know, thanks.

What do you mean by [[ ]] vs [ ], what's the difference?

Sheep
Jul 24, 2003
This might be worth a read if you want an in-depth answer.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

xzzy posted:

Is this a good place to chat about AWS?

Idea floating around here is shipping user batch jobs (data analysis) into the cloud. Which I assume is a pretty typical workload, but my concern is the scheme being floated for accounting the work people are running. We're cheap as gently caress here they want to penny pinch, and the proposal I've heard is fire up an instance with influxdb, collect some numbers, package them up somehow, ship them to an on-site server, then shut the instance down.. saving on both cpu and bandwidth.

I don't have any experience with AWS but this sets off little red alarms in my head, so my question is what the convention for this type of monitoring would be or if it's actually a good idea.
I wish I had half as much free time as the people at your company who apparently have nothing more valuable to contribute than operating this thing

If I had this stupid constraint and couldn't just use Datadog or Librato or something, I might fire the data at a Lambda function and store it in S3 for on-demand analysis via Athena, RedShift (via Spectrum), or whatever other thing you can basically pay by the query

Vulture Culture fucked around with this message at 06:02 on Jan 25, 2018

thebigcow
Jan 3, 2001

Bully!

xzzy posted:

Is this a good place to chat about AWS?

Idea floating around here is shipping user batch jobs (data analysis) into the cloud. Which I assume is a pretty typical workload, but my concern is the scheme being floated for accounting the work people are running. We're cheap as gently caress here they want to penny pinch, and the proposal I've heard is fire up an instance with influxdb, collect some numbers, package them up somehow, ship them to an on-site server, then shut the instance down.. saving on both cpu and bandwidth.

I don't have any experience with AWS but this sets off little red alarms in my head, so my question is what the convention for this type of monitoring would be or if it's actually a good idea.

There's an AWS thread in the Cavern of Cobol.

ToxicFrog
Apr 26, 2008


Risket posted:

Oh, I didn't know you could do that in bash, nice to know thanks.

What do you mean by [[ ]] vs [ ], what's the difference?

The tl;dr is that [...] is an external program (an alias for test), which means you have to be careful about quoting variables, output redirection, differences on different platforms, etc, while [[...]] is part of the bash syntax and is thus somewhat more robust. It also supports a few features that test doesn't.

The long answer is Sheep's link, which I recommend reading.
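A small illustration of the word-splitting difference (run under bash; the comments are the point):

```shell
#!/bin/bash
var="two words"

# [ is a command, so the shell word-splits an unquoted $var into two
# arguments ("too many arguments" error); quoting is mandatory:
if [ "$var" = "two words" ]; then single_ok=yes; fi

# [[ ]] is bash syntax, not a command: no word splitting happens inside it,
# so the unquoted form is safe (though quoting never hurts):
if [[ $var = "two words" ]]; then double_ok=yes; fi

echo "single=$single_ok double=$double_ok"   # prints: single=yes double=yes
```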

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

ToxicFrog posted:

The tl;dr is that [...] is an external program (an alias for test), which means you have to be careful about quoting variables, output redirection, differences on different platforms, etc, while [[...]] is part of the bash syntax and is thus somewhat more robust. It also supports a few features that test doesn't.

The long answer is Sheep's link, which I recommend reading.
Being even more anal-retentive: [ is a builtin in bash and most other shells that emulates test in accordance with the POSIX standard. (Everything you said about quoting is, of course, correct.)

An Enormous Boner
Jul 12, 2009

You made me wonder about overriding shell builtins and at least in bash you can do the following for no good reason:

code:
[() { $(which [) ; }

Alpha Mayo
Jan 15, 2007
hi how are you?
there was this racist piece of shit in your av so I fixed it
you're welcome
pay it forward~
I don't really game anymore and want to go back to Linux, how is Arch Linux these days? I remember trying all the big ones in 2006 or so and Arch was my favorite since it was like the flexibility and optimization of Gentoo without the constant compiling because a minor security patch for Open Office came out.

xzzy
Mar 5, 2009

Arch is still Arch. If you liked it then you'll like it now. If you hated it then, you'll hate it now.


jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


Fedora is probably the best desktop now.
