jre
Sep 2, 2011

To the cloud ?



evol262 posted:

We use Jenkins everywhere, for everything.

You should:
  • Set up a web-based code review system. Gerrit, barkeep, whatever.
  • Developers push a commit to gerrit. It automatically kicks off a jenkins job which runs linters, unit tests, and spins up a new VM for functional tests (you may be able to get away without the VM, but you should definitely be testing).
  • If any of these fail, it automatically -1s the patch because it's broken.
  • Go through code review
  • Once it passes code review and gets merged into the actual repo, jenkins builds again. Tests. Functional tests not optional here. Deploy to a dedicated test environment if you need to. Vagrant, Cloudformations, Heat, or some other tool to spin up multiple clean test servers with the same config every time not optional.
  • If it passes tests, Jenkins deploys. Ideally by tearing down existing VMs and building new ones from puppet/salt/whatever. Again, you don't have to tear down, but the deployment should be automatic, and through config management.

I'm a bit further along with this now. I have jenkins and gitorious VMs up and running and jenkins is correctly picking up pushes to the git repo.

The bit I'm unsure of is how to manage sending changes to the production servers. What I was thinking was:

Devs will develop with their local git repo and VMs and push changes that are ready for testing to the 'staging' remote, which is the one linked to jenkins. This triggers all the unit tests, linters, etc., then sends the files to our staging webservers for manual testing.

What I was unsure about was the best way to get the changes which are ready for deployment onto the production web servers. Most of the DeployOverProtocolX jenkins plugins appear to send every file in the codebase instead of just the changes, if I'm understanding it correctly. Is it better to push changes to a 'production' remote git repo instead, e.g. like this?

Or is there a better way which is more tightly integrated into jenkins?
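For what it's worth, the usual shape of the "push to a production remote" variant is a bare repo on the web server with a post-receive hook that checks the pushed branch out into the docroot. A minimal sketch, with made-up paths and temp dirs standing in for the real server so it can be run anywhere (nothing here is from jre's actual setup):

```shell
# A bare repo on the "production" box plus a post-receive hook that checks
# the pushed branch out into the web root. All paths and the branch name
# "master" are illustrative stand-ins.
DEPLOY_ROOT=$(mktemp -d)                 # stand-in for e.g. /var/www/app
REPO=$(mktemp -d)/site.git
git init --bare -q "$REPO"

cat > "$REPO/hooks/post-receive" <<EOF
#!/bin/sh
# Deploy: check the pushed master branch out into the docroot.
GIT_WORK_TREE=$DEPLOY_ROOT git checkout -f master
EOF
chmod +x "$REPO/hooks/post-receive"

# A dev working copy pushing to that remote fires the hook:
WORK=$(mktemp -d)
git init -q "$WORK"
git -C "$WORK" symbolic-ref HEAD refs/heads/master   # pin the branch name
echo "hello" > "$WORK/index.php"
git -C "$WORK" add index.php
git -C "$WORK" -c user.email=dev@example.com -c user.name=dev commit -q -m "deploy"
git -C "$WORK" push -q "$REPO" master
ls "$DEPLOY_ROOT"                        # the checked-out tree, e.g. index.php
```

This gets you whole-tree checkouts rather than "just the changes", but git only transfers deltas over the wire, so that part is cheap.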


nescience
Jan 24, 2011

h'okay

evol262 posted:

You don't have a virus. Most Linux exploits are problems with code which become known shortly before or after fixes are released, at which point systems which haven't been updated are vulnerable (Debian/Ubuntu had a problem with breakable encryption keys a few years ago). Viruses exist, but are very rare these days (there were a few in the earlier days of the internet on various UNIX flavors).

Do use rootkit checkers just to be safe, though.

It's almost certainly a bad password. Or simply a misconfigured mail server which was an open relay. There's almost no reason to run your own mail, and I'd suggest you avoid it as a beginner. This requires no cleanup other than not running your own mail.

Check /var/log/secure or auth.log (depending on distro) instead of "last".

Thank you for the tips! I didn't see any unrecognized IPs in auth.log, and rkhunter & chkrootkit both came up clean, so I guess I'm safe? I deleted my dummy account and so far it doesn't seem to be sending out more spam. Just to be safe I stopped postfix and dovecot.

I only made the mail server to learn, I don't use it as a main email or anything, but after this I'll probably move the mailboxes to a paid provider.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I almost posted this in the packrats thread, but as I'm guessing this is more of a linux question, I'll ask here instead.

How can I get a list of drives not in a zfs pool? It's extremely tedious to compare the list of all disk ids in the system to the list of disk ids already in a zfs pool when I add a bunch of new drives that I want to create a new zfs pool with.

spankmeister
Jun 15, 2008






Thermopyle posted:

I almost posted this in the packrats thread, but as I'm guessing this is more of a linux question, I'll ask here instead.

How can I get a list of drives not in a zfs pool? It's extremely tedious to compare the list of all disk ids in the system to the list of disk ids already in a zfs pool when I add a bunch of new drives that I want to create a new zfs pool with.

It would probably involve listing all drives, then grepping -v in that list with the output of the list of drives that are part of the pool.
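Sketched out, that looks like the below. The two lists are faked with printf here so the example is self-contained; on a real box the first would come from `ls /dev/disk/by-id` and the second from parsing `zpool status` output (the disk names are invented):

```shell
# All disk ids in the system (stand-in for: ls /dev/disk/by-id)
all_disks=$(printf '%s\n' ata-WDC_A ata-WDC_B ata-WDC_C ata-WDC_D)
# Disk ids already in a zfs pool (stand-in for parsing: zpool status)
pool_disks=$(printf '%s\n' ata-WDC_A ata-WDC_C)

# -v inverts the match, -x matches whole lines only, -F takes patterns
# literally; a newline-separated pattern string acts as a pattern list.
echo "$all_disks" | grep -vxF "$pool_disks"   # ata-WDC_B and ata-WDC_D
```

`comm -23 <(list-all | sort) <(list-in-pool | sort)` is the same idea if you prefer set subtraction to grep.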

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki
For a routed packet (i.e. coming in interface A and out interface B, not destined for a listening socket), are there any situations where iproute2 will reassemble a packet before routing it? For instance, if A has MTU 1500 and B has MTU 9000, would a 6000 byte frame ever leave B intact, or would fragments always be delivered as received?

evol262
Nov 30, 2010
#!/usr/bin/perl

jre posted:

I'm a bit further along with this now. I have jenkins and gitorious VMs up and running and jenkins is correctly picking up pushes to the git repo.

The bit I'm unsure of is how to manage sending changes to the production servers. What I was thinking was:

Devs will develop with their local git repo and VMs and push changes that are ready for testing to the 'staging' remote, which is the one linked to jenkins. This triggers all the unit tests, linters, etc., then sends the files to our staging webservers for manual testing.

What I was unsure about was the best way to get the changes which are ready for deployment onto the production web servers. Most of the DeployOverProtocolX jenkins plugins appear to send every file in the codebase instead of just the changes, if I'm understanding it correctly. Is it better to push changes to a 'production' remote git repo instead, e.g. like this?

Or is there a better way which is more tightly integrated into jenkins?

Sending every file is fine, because you're gating commits with gerrit or similar, right? So bad commits don't ever make it into the actual repo? Just unmerged patchsets?

Normally I'd say "build a package, build a fresh VM with that package, that's your deployment", but I don't know how :cloud: you are. Definitely build packages and use those for deployment and versioning

Prince John
Jun 20, 2006

Oh, poppycock! Female bandits?

NOTinuyasha posted:

Ok, I'm down to this: a 5GHz 802.11n USB adapter that works with Linux. I can't find any confirmed reports of any working. Is the state of wireless on Linux really this terrible?

Apologies in advance for the price.

Note that I don't actually own this, but it looks kosher and is pretty specific about its Linux credentials.

I did a heck of a lot of research a couple of years ago, asked around in various places and got no further than you did - this is literally the only dual band USB adapter I've found that has any evidence of Linux support. I'm sure the support is there in the kernel, but I agree it's really hard to find the right 5GHz models as a consumer.

Prince John fucked around with this message at 01:20 on Aug 8, 2014

CaptainSarcastic
Jul 6, 2013



Prince John posted:

Apologies in advance for the price.

Note that I don't actually own this, but it looks kosher and is pretty specific about its Linux credentials.

I did a heck of a lot of research a couple of years ago, asked around in various places and got no further than you did - this is literally the only dual band USB adapter I've found that has any evidence of Linux support. I'm sure the support is there in the kernel, but I agree it's really hard to find the right 5GHz models as a consumer.

I use a Buffalo router as a wireless bridge to get my desktop on my home network. Sure, it ain't ideal for portable use, but at home it's nice to have the machine think it is just running ethernet. If 5GHz is important, and mobility isn't, then a cheap second router set up as a bridge would be a more cost-effective way of getting wireless network access.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Thermopyle posted:

I almost posted this in the packrats thread, but as I'm guessing this is more of a linux question, I'll ask here instead.

How can I get a list of drives not in a zfs pool? It's extremely tedious to compare the list of all disk ids in the system to the list of disk ids already in a zfs pool when I add a bunch of new drives that I want to create a new zfs pool with.

I use this on Solaris:
http://hardforum.com/showthread.php?t=1566966

I don't think there's a direct equivalent of Solaris' format command, but that thread might be a useful start.

jre
Sep 2, 2011

To the cloud ?



evol262 posted:

Normally I'd say "build a package, build a fresh VM with that package, that's your deployment", but I don't know how :cloud: you are. Definitely build packages and use those for deployment and versioning

We're moderately :yaycloud:, but for this web app I don't have access to the load balancer so config changes have to be requested via email which stops me spinning up new VMs and switching over. Current deployment method is via rsync and sticky tape.

It's a huge PHP monstrosity, so would something like this be sensible:

commits to testing branch get checked and sent to in house testing cluster,

commits to master branch get packaged with phing and deployed via publishoverssh to remote production webservers.

the
Jul 18, 2004

by Cowcaster
edit: I guess I figured it out, thanks

the fucked around with this message at 03:33 on Aug 9, 2014

the
Jul 18, 2004

by Cowcaster
How can I boot to USB from Ubuntu? I have a formatted bootable USB for Windows 8. I went into the UEFI and "USB Bootable" or whatever is turned on. But Ubuntu seems to supersede every time I reboot and launch itself.

spankmeister
Jun 15, 2008






the posted:

How can I boot to USB from Ubuntu? I have a formatted bootable USB for Windows 8. I went into the UEFI and "USB Bootable" or whatever is turned on. But Ubuntu seems to supersede every time I reboot and launch itself.

If your BIOS/UEFI is set up correctly and your USB stick is set up correctly Ubuntu will have zero influence on this.

the
Jul 18, 2004

by Cowcaster

spankmeister posted:

If your BIOS/UEFI is set up correctly and your USB stick is set up correctly Ubuntu will have zero influence on this.

Hmm, I'm not sure what's happening then, Thanks though.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

the posted:

How can I boot to USB from Ubuntu? I have a formatted bootable USB for Windows 8. I went into the UEFI and "USB Bootable" or whatever is turned on. But Ubuntu seems to supersede every time I reboot and launch itself.

Try another USB drive. Some don't like to boot for some reason.

the
Jul 18, 2004

by Cowcaster

Bob Morales posted:

Try another USB drive. Some don't like to boot for some reason.

I hope not, because I went out and bought this 16gb one because I didn't have any :(

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

the posted:

I hope not, because I went out and bought this 16gb one because I didn't have any :(

They are like $5 at Walmart

Hz so good
Jan 25, 2014
I tried posting this here originally, but stupid me poo poo-posted it to the main forum, and the mods moved it to comedy.

I've got Linux Mint 17 running in Virtualbox (works fine), and I think I've got atftpd and tacacs+ installed correctly (at least, Software Manager says so).

What I don't know how to do is get atftpd running and copy CME-FULL-7.1.0.0.tar to its root (/tftpboot, supposedly).

I've got this running in GNS3 (loopback communications only, on purpose), and am trying to install CME to a C3745 ipvoice router, so I can test iphones (IP Blue MultiLab) for an upcoming cert exam.

I'm a total linux newb, and have only managed to get Mint17 and Haiku running (Windows 2008R2 that I'm running PRTG and radius on WILL NOT install a tftp server, just a client)

Can some kind soul please hold my hand, and walk me through this? I really need to get this working for practice, and lack a physical C3745 and Cisco IP phones.

I've already placed the needed tar file on the desktop. Now what do I do?

I tried following someone's advice for doing it in ubuntu, but I somehow hosed it up, and needed to do a fresh re-install of Mint 17 (where I am now).

Thank you in advance!
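For the copy step, something like the sketch below is roughly the shape of it on a Debian-family distro like Mint. The package/service commands are shown as comments because they need root and a real box; the tftp root varies by install (check /etc/default/atftpd), so a temp dir stands in for /tftpboot here:

```shell
# Rough shape of the atftpd setup (comments only -- needs root):
#   sudo apt-get install atftpd     # install the tftp server
#   cat /etc/default/atftpd         # shows the served root, e.g. /srv/tftp or /tftpboot
#   sudo service atftpd restart
#
# The copy itself, demonstrated against temp dirs. The tar name is from the
# post; the paths are illustrative stand-ins.
TFTP_ROOT=$(mktemp -d)                        # stand-in for /tftpboot
SRC_DIR=$(mktemp -d)                          # stand-in for ~/Desktop
touch "$SRC_DIR/CME-FULL-7.1.0.0.tar"

cp "$SRC_DIR/CME-FULL-7.1.0.0.tar" "$TFTP_ROOT/"
chmod 644 "$TFTP_ROOT"/*.tar                  # tftp clients need world-readable files
ls "$TFTP_ROOT"
```

On the real system the cp and chmod would need sudo too, since /tftpboot is usually root-owned.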

db franco
Jul 14, 2014
just installed ubuntu on an xps 15 notebook.

everything runs lovely, except the fact that everything on the screen is super teeny tiny.

i've tried scaling icons and window borders, but things like terminal and even default chrome scaling is just awful.

any experience using ubuntu with high-dpi displays? any/all suggestions appreciated.

evol262
Nov 30, 2010
#!/usr/bin/perl

db franco posted:

just installed ubuntu on an xps 15 notebook.

everything runs lovely, except the fact that everything on the screen is super teeny tiny.

i've tried scaling icons and window borders, but things like terminal and even default chrome scaling is just awful.

any experience using ubuntu with high-dpi displays? any/all suggestions appreciated.

Not Ubuntu specifically, but change the minimum font sizes and default zoom level for apps and browsers, which mostly fixes this. Icons may still be small depending on the windowing toolkit.

Newer versions of gnome are much better.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
How come the vagrant-berkshelf plugin expects my Berksfile to be in the same folder as the Vagrantfile, but berkshelf expects it to be in the cookbook folder?

edit: Ah, I was thinking my Vagrantfile should be in my project root (chef repo). Berkshelf generates a Vagrantfile in the cookbook directory itself with berks cookbook <name> which makes more sense when I thought about it.

fletcher fucked around with this message at 01:00 on Aug 11, 2014

reading
Jul 27, 2013

loose-fish posted:

I assume you are using the standard xfce terminal. Just start a new terminal like this
pre:
xfce4-terminal --show-menubar

Thanks! As it turns out I had to edit the xfce4/terminal/terminalrc file to get a blinking underline cursor rather than the block cursor, because for some reason Xubuntu takes those options out of the GUI terminal preferences interface. But now it works and looks great.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I couldn't figure out why I would get an infinite username/password prompt trying to proxy subsonic through nginx with basic http auth. I finally fixed it by adding:

code:
proxy_set_header Authorization $upstream_http_authorization;
I don't even remember why I thought to try this, I'm really surprised that a google search doesn't seem to bring up anything meaningful about it. Am I missing something here?

netcat
Apr 29, 2008
Don't know if there's a Fedora thread so I'll ask here.

I just put my new computer together with an Asus H97-PRO motherboard with i218-v ethernet. So I figured I'd try to install Fedora since I don't have any free 64 bit Windows license available at the moment. Everything seems to work fine except the network; the LED is flashing correctly and Linux detects my hardware but the ethernet driver (e1000e) is apparently not supported by the kernel Fedora 20 uses (I think 3.11.*, I saw some forum post claiming 3.14.* should work but that doesn't really help me when I don't have a network connection).

I've googled a bit but haven't really found anything that has helped me so far. The source for the driver can be downloaded and built from intel but requires the kernel headers which of course aren't available in the default installation so that's quite the catch 22.

Anyone have any ideas? Or should I just give up and try another distro (Ubuntu I guess)?

spankmeister
Jun 15, 2008






e1000e is supported and has been since forever so your issue is probably something else.

How about you explain what exactly the issue is you're running into.

evol262
Nov 30, 2010
#!/usr/bin/perl

netcat posted:

Don't know if there's a Fedora thread so I'll ask here.

I just put my new computer together with an Asus H97-PRO motherboard with i218-v ethernet. So I figured I'd try to install Fedora since I don't have any free 64 bit Windows license available at the moment. Everything seems to work fine except the network; the LED is flashing correctly and Linux detects my hardware but the ethernet driver (e1000e) is apparently not supported by the kernel Fedora 20 uses (I think 3.11.*, I saw some forum post claiming 3.14.* should work but that doesn't really help me when I don't have a network connection).

I've googled a bit but haven't really found anything that has helped me so far. The source for the driver can be downloaded and built from intel but requires the kernel headers which of course aren't available in the default installation so that's quite the catch 22.

Anyone have any ideas? Or should I just give up and try another distro (Ubuntu I guess)?

I would suggest using nightlies. You can certainly try Ubuntu, but I think 14.04.1 is still 3.13, so it may not help.

spankmeister posted:

e1000e is supported and has been since forever so your issue is probably something else.

How about you explain what exactly the issue is you're running into.
e1000e is not homogeneous. i218-v is not the same.

netcat
Apr 29, 2008

spankmeister posted:

e1000e is supported and has been since forever so your issue is probably something else.

How about you explain what exactly the issue is you're running into.

My issue is that ethernet doesn't work, at all. I guess it's wrong to say that e1000e is not supported with this kernel but, yeah, what evol262 said. My hardware is detected fine so it should be a driver issue.

evol262 posted:

I would suggest using nightlies. You can certainly try Ubuntu, but I think 14.04.1 is still 3.13, so it may not help.

e1000e is not homogeneous. i218-v is not the same.

Ok, I'll try the nightlies tomorrow I guess, thanks

spankmeister
Jun 15, 2008






Ok so the e1000e driver in your kernel doesn't support your chipset yet.

JHVH-1
Jun 28, 2002
At a place I used to work we would use kmod packages from elrepo.org for e1000 stuff.

They have one that is pretty recent, but they are based off RHEL/CentOS kernels (they do have SRPMS though):
kmod-e1000e-3.1.0.2-1.el7.elrepo.x86_64.rpm 03-Aug-2014 12:02 133K

http://elrepo.org/linux/elrepo/el7/x86_64/RPMS/


And here is the source for the newer module code:
http://sourceforge.net/projects/e1000/files/e1000e%20stable/

Between those options and the nightly fedora, something is bound to work.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Having some more trouble with my nginx config: https://gist.github.com/fletchowns/3a7507b1c834f520ed0a

When I access https://mycoolsite.com/rtgui/ the browser makes a request to https://mycoolsite.com/rtgui/submodal/common.js but it 404s:

code:
2014/08/13 06:19:39 [error] 16572#0: *2 open() "/var/www/rtgui/current/rtgui/images/downarrow.gif" failed (2: No such file or directory), client: 10.0.2.2, server: mycoolsite.com, request: "GET /rtgui/images/downarrow.gif HTTP/1.1", host: "mycoolsite.com", referrer: "https://mycoolsite.com/rtgui/"
The correct path of the file is /var/www/rtgui/current/images/downarrow.gif - how come nginx is sticking rtgui in the path? Also, have I done anything else stupid in my config?

supermikhail
Nov 17, 2012


"It's video games, Scully."
"Video games?"
"He enlists the help of strangers to make his perfect video game. When he gets bored of an idea, he murders them and moves on to the next, learning nothing in the process."
"Hmm... interesting."
Oh, hello.:shobon:

Apparently sometime recently a new version of Ubuntu was released... or maybe not very recently, but I have notifications only for LTS. And I'm considering opting out completely until the LTS runs out, because I have mixed experience with upgrading if my configuration is not standard. Mostly it's the Xubuntu flavor, and the fact that I've removed a bunch of standard apps. Does upgrading mess it all up, or am I misremembering or something?

BoyBlunder
Sep 17, 2008
How would I grep the following, if I want the first 24 hosts?

Host name is as follows: hostXY01, hostXY02 ... hostXY47, hostXY48

I'm brainfarting hard, and can't come up with the regex to do it.

code:
grep 'hostXY[01-24]' foo
makes it so that any host with any instance of 0, 1, 2, or 4 prints, instead of the first 24.

YouTuber
Jul 31, 2004

by FactsAreUseless
You should be fine upgrading. The real problem is the fact you're using Ubuntu, where half the ends don't meet and you get random loving errors for no apparent reason. I've heard of no glaring failures of 14.04 or upgrading to it, provided you're not stupidly far back on the version curve. A jump from 12.04 to 14.04 should be fairly seamless.

Weird Uncle Dave
Sep 2, 2003

I could do this all day.

Buglord
Anyone have any experiences, good bad or otherwise, with Oracle Linux? At first glance, it looks like another RHEL clone, only one that gets updates out faster than CentOS, and with a custom kernel (that you don't even have to use if you really want 100% RHEL compatibility). I'm looking for the downside to suggesting it over CentOS for a new project at work, and not finding it.

evol262
Nov 30, 2010
#!/usr/bin/perl

fletcher posted:

Having some more trouble with my nginx config: https://gist.github.com/fletchowns/3a7507b1c834f520ed0a

When I access https://mycoolsite.com/rtgui/ the browser makes a request to https://mycoolsite.com/rtgui/submodal/common.js but it 404s:

The correct path of the file is /var/www/rtgui/current/images/downarrow.gif - how come nginx is sticking rtgui in the path? Also, have I done anything else stupid in my config?

Because the reference is relative. The correct path of the file is the one nginx is looking for unless you've modified the rtgui source to use absolute paths.

Weird Uncle Dave posted:

Anyone have any experiences, good bad or otherwise, with Oracle Linux? At first glance, it looks like another RHEL clone, only one that gets updates out faster than CentOS, and with a custom kernel (that you don't even have to use if you really want 100% RHEL compatibility). I'm looking for the downside to suggesting it over CentOS for a new project at work, and not finding it.

It's fine, except that it's Oracle. You're basically choosing between a known project with a long lineage (and the long delay when EL6 came out was due to project leadership issues that won't come up again) and trusting Oracle. Do you trust Oracle to continue producing OEL in the future? Are you ok not having even community support? If so, go for it.

BoyBlunder posted:

How would I grep the following, if I want the first 24 hosts?

Host name is as follows: hostXY01, hostXY02 ... hostXY47, hostXY48

I'm brainfarting hard, and can't come up with the regex to do it.

code:
grep 'hostXY[01-24]' foo
makes it so that any host with any instance of 0, 1, 2, 4 print instead of first 24.
hostXY([2][0-4]|[01][0-9])

Or with \d

hostXY([2][0-4]|[01]\d)

use "grep -E" (alternation and groups need extended regexes; the \d version needs "grep -P")

If you want to range like that, you need to use restrictive groups combined with ors.
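A quick demonstration of why the bracket expression fails and the alternation works, with the host list generated by seq instead of read from the file foo in the post:

```shell
hosts=$(seq -w 1 48 | sed 's/^/hostXY/')    # hostXY01 .. hostXY48

# [01-24] is a character class (the characters 0, 1-2, and 4), not a
# numeric range, so it matches any host whose first digit is 0, 1, 2, or 4:
echo "$hosts" | grep -c 'hostXY[01-24]'     # 38, far more than wanted

# Alternation with -E expresses the actual range 01-24:
echo "$hosts" | grep -cE 'hostXY(2[0-4]|[01][0-9])'   # 24
```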

BoyBlunder
Sep 17, 2008

evol262 posted:

hostXY([2][0-4]|[01][0-9])

Or with \d

hostXY([2][0-4]|[01]\d)

use "grep -E" (alternation and groups need extended regexes; the \d version needs "grep -P")

If you want to range like that, you need to use restrictive groups combined with ors.

Thanks!

spankmeister
Jun 15, 2008






Weird Uncle Dave posted:

Anyone have any experiences, good bad or otherwise, with Oracle Linux? At first glance, it looks like another RHEL clone, only one that gets updates out faster than CentOS, and with a custom kernel (that you don't even have to use if you really want 100% RHEL compatibility). I'm looking for the downside to suggesting it over CentOS for a new project at work, and not finding it.

I've used it and it's fine. It has a few Oracle specific things that are useful if you want to run Oracle database software on it but it's nothing you can't also do with RHEL or CentOS.

Question is: do you want support and if so do you want to pay Oracle for it?

Also, if you're not an Oracle shop why would you choose it over RHEL or CentOS? I see no reason to use OEL if you're not an Oracle shop.

And I'm not convinced their patches are quicker than CentOS; the "CentOS is slow releasing updates" thing hasn't been true for years.

spankmeister fucked around with this message at 17:13 on Aug 13, 2014

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Weird Uncle Dave posted:

Anyone have any experiences, good bad or otherwise, with Oracle Linux? At first glance, it looks like another RHEL clone, only one that gets updates out faster than CentOS, and with a custom kernel (that you don't even have to use if you really want 100% RHEL compatibility). I'm looking for the downside to suggesting it over CentOS for a new project at work, and not finding it.

The reason not to use it is that you would be supporting Oracle. Oracle basically rebadges RHEL and sells it for more money while calling it "Unbreakable". The reason we had to close off our knowledge base to customers only was that over 50% of our traffic was coming from Oracle call centers. The reason we had to ship all our kernel patches in one big bundle was to stop Oracle from being able to support them.

evol262
Nov 30, 2010
#!/usr/bin/perl

Suspicious Dish posted:

The reason not to use it is that you would be supporting Oracle. Oracle basically rebadges RHEL and sells it for more money while calling it "Unbreakable". The reason we had to close off our knowledge base to customers only was that over 50% of our traffic was coming from Oracle call centers. The reason we had to ship all our kernel patches in one big bundle was to stop Oracle from being able to support them.

I guess by this, a little clarification:

The idea that Oracle can support Linux (and basically RHEL) better than the engineers who write it is laughable, but they tell people this, then trawled our knowledgebase looking for support articles when customers they poached called them for support.

Patches come in a bundle available separately so Oracle actually has to backport their own stuff instead of trying to tweak around patches for backported fixes in the kernel SRPM. Incidentally, this is also why their "Unbreakable" kernel is no longer compatible -- they'd actually have to put in engineering effort to get it working on the EL kernel and shipping upstream is easier for them.


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Geeze.

I know almost nothing about Oracle, but everything I ever hear about them makes them sound like real shitheels.
