|
How does anyone justify using PHP for a new project?
|
# ? May 5, 2012 18:13 |
|
pokeyman posted:How does anyone justify using PHP for a new project? If you have 5 guys that only know PHP, it's pretty much a no-brainer. I've been at companies where they started in one language, then switched to a different language that no one on the team was familiar with. It was a mess.
|
# ? May 5, 2012 18:22 |
|
Ithaqua posted:If you have 5 guys that only know PHP, it's pretty much a no-brainer.
|
# ? May 5, 2012 18:36 |
|
pokeyman posted:How does anyone justify using PHP for a new project?
|
# ? May 5, 2012 20:16 |
|
pokeyman posted:How does anyone justify using PHP for a new project? There are people who still think it's a great language.
|
# ? May 5, 2012 20:23 |
|
Real-world example: Giant Bomb and Comic Vine were sold to CBS Interactive, but Distillery (their Django-based CMS/wiki) was sold to Berman Braun, so they have to rewrite the sites from scratch. And they have to use PHP, because CBSi is a PHP shop.
pseudorandom name fucked around with this message at 20:51 on May 5, 2012 |
# ? May 5, 2012 20:28 |
|
pokeyman posted:How does anyone justify using PHP for a new project? I'm still a student, but PHP gets used for all the web stuff in group projects cos it's what people get taught and it's what people tend to know. Sure, some people might know Python as well, but they still have to learn a funny complicated framework and gently caress there's no time this is due in like a day just use PHP.
|
# ? May 5, 2012 20:31 |
|
When starting a project you also might have to deal with:
* Business decisions and policy
* Outsourcing development to idiots
* Legacy code
And many, many, many more reasons. That said, if you do PHP right, most of the security issues do not apply to your project. Key phrases being "doing it right" and "most". I personally start every project in Python, but that's just because I love it so much and know it well enough. All hosting companies hate me for it, though. Do note that Python also had the hash-collision vulnerability, like PHP.
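(Editor's note: the hash-collision vulnerability mentioned above is easy to demonstrate in miniature. When every key hashes to the same bucket, dict inserts degrade from amortized O(1) to O(n), so building the table becomes quadratic. This sketch uses a contrived class rather than a real attack payload — the actual vulnerability involved crafting colliding strings for the language's built-in string hash.)

```python
import timeit

class Colliding:
    """Every instance lands in the same hash bucket."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 0  # worst case: all keys collide
    def __eq__(self, other):
        return self.n == other.n

def build(keys):
    # Each insert must compare against every prior colliding key,
    # so inserting n colliding keys costs O(n^2) comparisons.
    d = {}
    for k in keys:
        d[k] = True
    return d

normal = list(range(1000))
evil = [Colliding(i) for i in range(1000)]

t_normal = timeit.timeit(lambda: build(normal), number=3)
t_evil = timeit.timeit(lambda: build(evil), number=3)
print(f"normal keys: {t_normal:.4f}s  colliding keys: {t_evil:.4f}s")
```

This is why both PHP and Python eventually shipped randomized string hashing: an attacker who can predict the hash function can send a POST body full of colliding parameter names and pin a CPU with a single request.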
|
# ? May 5, 2012 20:35 |
|
Scaramouche posted:Are you talking about CGI 'classic'? Out of process? Single threaded? The bug at issue here is clearly a PHP bug, not a CGI one. Gazpacho fucked around with this message at 23:22 on May 5, 2012 |
# ? May 5, 2012 20:49 |
|
Gazpacho posted:While the overhead of CGI is considerable Assuming you're using an interpreted language.
|
# ? May 5, 2012 21:01 |
|
Zombywuf posted:Assuming you're using an interpreted language. It's also large if you use C.
|
# ? May 5, 2012 21:26 |
|
Ithaqua posted:If you have 5 guys that only know PHP, it's pretty much a no-brainer. If you have five guys that only know PHP you seriously need to rethink your hiring practices because you are apparently hiring high school freshmen or something.
|
# ? May 5, 2012 22:05 |
|
shrughes posted:It's also large if you use C. Less than a single garbage collection sweep.
|
# ? May 5, 2012 22:18 |
|
Zombywuf posted:Less than a single garbage collection sweep. HTTP servers that do everything in-process for the sake of performance are a Thing That Exists, you know.
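(Editor's note: the "do everything in-process" pattern being argued about looks roughly like this — expensive setup happens once at startup instead of on every request, which is exactly the work a CGI process must repeat per hit. A minimal stdlib-only sketch; the "template" and "db" entries are stand-ins for real setup work.)

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Done once at process startup, not per request. Under CGI this
# dict would be rebuilt on every single hit.
EXPENSIVE_SETUP = {"template": "<h1>hello %s</h1>", "db": "connected"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = (EXPENSIVE_SETUP["template"] % "world").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(resp)
server.shutdown()
```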
|
# ? May 5, 2012 22:27 |
|
I think it's been made clear that the issue re: CGI is not whether such things exist, but whether they introduce problems of their own.
|
# ? May 5, 2012 22:37 |
|
shrughes posted:HTTP servers that do everything in-process for the sake of performance are a Thing That Exists, you know. Yup, and how many people then think that callback driven, low latency architecture based web servers in a dynamic garbage collected language are a good idea? If you really need that kind of performance then go ahead, but most people don't. Hell, most don't even know how to actually make use of the performance in-process servers give them. p.s. you do know that the overhead of fork+exec is almost nil these days right?
|
# ? May 5, 2012 22:40 |
|
Zombywuf posted:Yup, and how many people then think that callback driven, low latency architecture based web servers in a dynamic garbage collected language are a good idea? "Low latency" is not the only use case, you also have overall CPU overhead, also (rarely important nowadays) overall memory usage. Zombywuf posted:p.s. you do know that the overhead of fork+exec is almost nil these days right? It's not. You are a crazy person (if you think you are refuting the fact that HTTP servers that do everything in process for the sake of performance is a Thing That Exists). Or are you making some vacuous claim like that it's not enough overhead for certain applications where exec is fast enough? Even then that's still wrong, because exec is not the only source of overhead.
|
# ? May 5, 2012 23:16 |
|
shrughes posted:"Low latency" is not the only use case, you also have overall CPU overhead, also (rarely important nowadays) overall memory usage. quote:Or are you making some vacuous claim like that it's not enough overhead for certain applications where exec is fast enough? Even then that's still wrong, because exec is not the only source of overhead. I'm saying the overwhelming majority of applications would benefit by several orders of magnitude from having fast code (usually in the db) over having in-process HTTP handlers.
|
# ? May 6, 2012 01:15 |
|
Zombywuf posted:Overall CPU overhead, this is something that dynamic languages excel at minimising is it? Also, high memory usage means slow code, if we're in the realms where fork and exec is too slow for your app. So.. you are not contradicting anything I'm saying? Why are you posting?
|
# ? May 6, 2012 01:27 |
|
shrughes posted:So.. you are not contradicting anything I'm saying? Because node.js is the real horror.
|
# ? May 6, 2012 01:42 |
|
Zombywuf posted:Overall CPU overhead, this is something that dynamic languages excel at minimising is it? Also, high memory usage means slow code, if we're in the realms where fork and exec is too slow for your app. node.js has nothing to do with this conversation.
|
# ? May 6, 2012 02:02 |
|
|
# ? May 6, 2012 02:30 |
|
Can we please make a rule that whoever talks about performance being fast or slow needs to bring actual numbers? We should be scientists, justifying our hypotheses with observations. Just throwing out words without weight behind them is only of use to fellow charlatans. EDIT: didn't see your post Zombywuf, carry on
|
# ? May 6, 2012 02:34 |
|
http://www.reddit.com/r/perl6/comments/r44do/is_perl6_code_unreadable/ quote:I was talking about Perl 6 on IRC the other day and I sent the following line of code:
|
# ? May 6, 2012 03:13 |
|
shrughes posted:So.. you are not contradicting anything I'm saying? Not everyone needs to savagely optimize their web server, given a significant chunk of web applications are written in interpreted languages
|
# ? May 6, 2012 03:21 |
|
tef posted:Not everyone needs to savagely optimize their web server, given a significant chunk of web applications are written in interpreted languages Or compiled languages that recompile on every page load.
|
# ? May 6, 2012 03:24 |
|
I knew before I clicked that reddit link that someone would make a comparison to APL.
|
# ? May 6, 2012 03:26 |
|
I wrote a quick benchmark which compares the cost of calling an empty function ("null"), forking a child which immediately exits ("fork"), and forking then execing /bin/true ("fork-exec"). Here's the numbers when running on my desktop (Ubuntu 11.10, 64-bit, Intel i7-2600k):code:
In other words, a server using CGI must burn nearly a full millisecond of time just to enter the request handler. Also, once you're in your handler, you'll have to spend several more milliseconds establishing database/rpc connections, loading/parsing templates, and generally doing setup work that a single-process server only needs to do once. Altogether, it might take more than 5ms from the time when the request hits your server to when you can return even the most basic response. Assuming you care at all about performance, then this is obviously unacceptable. For any stats geeks in the audience, here's the full benchmark output: code:
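(Editor's note: the benchmark numbers didn't survive quoting, but the measurement is easy to reproduce. A rough POSIX-only Python equivalent of the three cases — an empty call, a fork of a child that exits immediately, and a fork followed by exec of /bin/true. Absolute numbers will differ by machine and interpreter; the relative ordering is the point.)

```python
import os
import timeit

def null():
    pass

def fork():
    pid = os.fork()
    if pid == 0:
        os._exit(0)          # child exits immediately
    os.waitpid(pid, 0)       # parent reaps the child

def fork_exec():
    pid = os.fork()
    if pid == 0:
        # /bin/true exists on Linux; adjust the path elsewhere
        os.execv("/bin/true", ["/bin/true"])
    os.waitpid(pid, 0)

N = 200
for name, fn in [("null", null), ("fork", fork), ("fork-exec", fork_exec)]:
    per_call = timeit.timeit(fn, number=N) / N
    print(f"{name:10s} {per_call * 1e6:9.1f} us/call")
```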
|
# ? May 6, 2012 03:27 |
|
Janin posted:Assuming you care at all about performance, then this is obviously unacceptable. Not as much as you do, obviously.
|
# ? May 6, 2012 03:45 |
|
Zombywuf posted:Really? That 4ms (most of which I suspect is taken up by bash) matters to you so much when you're shoving HTTP down a network connection. Which high frequency trading platform are you working on? There are many different answers to this question. One answer is that I don't write high frequency trading platforms that communicate with HTTP requests(??), I work on a database engine which I can neither confirm nor deny has an HTTP API. Another answer is that in the past I've implemented an HTTP proxy for modifying web content coming into a corporate network, and there it's not 1 or 2ms latency that's a concern, it's total CPU usage. There's a difference between a customer having to plop 1 box to run your product versus 10 boxes, or 4 versus 40. The initial fork+exec is not the only source of slowness that would be introduced there. Another answer is that if you have a web service, people might need to make a whole bunch of requests one after another (because your API doesn't support arbitrary SQL, or sending arbitrary code to run sandboxed server-side). A sad example of this is MAPI as used by certain proprietary C# MAPI libraries. Low latency is extremely important in situations like that. It's the only important measure of performance. A system that uses 0.1 core-milliseconds and 10 ms of wall-clock time for a request is usually much worse than one that uses 3 core-milliseconds and 5 ms of wall-clock time. For applications like these, not context switching is pretty important. In addition to fork+exec delays, there's the fact that now you've got a scheduler romping around deciding when things get to run, and the cost of piping data back (which is at its theoretical best a tricky and fragile game of zero-copy networking). tef posted:Not as much as you do, obviously. Blah blah blah tef you're not saying anything. Nobody is claiming that you do any real programming. Edit: real programming shrughes fucked around with this message at 04:05 on May 6, 2012 |
# ? May 6, 2012 04:03 |
|
It's me, I'm the guy for whom the 4ms matters so much when he's shoving HTTP down a network connection. I'm actually working on a system that does HTTP requests and needs to answer as fast as possible, doing real time bidding for online advertisement. There, HTTP is kind of a standard given we serve ads and do cookie-matching over HTTP anyway (although we have keep-alive connections to avoid auctions setting up connections each time). Our average serving time is below 5ms, worst times averaging 9-10ms, and we can usually afford to get up to 35ms. Nonetheless, taking 3-5ms more on each request is a performance hit of ~30% to ~100%. Moreover, because we're receiving a constant stream of such requests, reducing our performance by 30% in terms of time means we will have fewer resources available at any single point in time to handle everything at once, which likely means more hardware for the same job. MononcQc fucked around with this message at 07:11 on May 6, 2012 |
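(Editor's note: the arithmetic in that last point checks out. Against the post's ~5ms average and ~10ms worst-case serving times, an extra 3-5ms per request is a 30-100% hit; the figures below are just the ones from the post.)

```python
avg_ms, worst_ms = 5.0, 10.0   # serving times quoted in the post
for extra in (3.0, 5.0):        # claimed per-request CGI overhead
    print(f"+{extra}ms: {extra / worst_ms:.0%} of worst case, "
          f"{extra / avg_ms:.0%} of average")
# +3ms against the 10ms worst case is 30%; +5ms against the 5ms average is 100%
```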
# ? May 6, 2012 07:09 |
|
Janin posted:Also, once you're in your handler, you'll have to spend several more milliseconds establishing database/rpc connections, loading/parsing templates, and generally doing setup work that a single-process server only needs to do once.
|
# ? May 6, 2012 07:37 |
|
This just in: terrible performance scales for trivial use cases. I love this. Yes, the problem people have with this code must be that they dislike concise and expressive notations. Ugh, I'm going to treat this as an exercise in language design. Simple ways to improve the legibility:
tl;dr: In a better-designed language, this would be: code:
|
# ? May 6, 2012 09:15 |
|
The full paste is too long (~200 lines) but I was going through my groupmate's contribution to our senior project, and have been constantly banging my head on the desk reading it. code:
For added fun: the files themselves are duplicated completely for two different projects in the solution. Or rather, they should be the same files, but comparing diffs seems to imply that one was updated when the other wasn't. I suppose it's almost not fair mocking students, but the person who wrote this is a CS senior.
|
# ? May 6, 2012 10:42 |
|
rjmccall posted:I love this. Yes, the problem people have with this code must be that they dislike concise and expressive notations. Have you considered learning GolfScript, or APL (and derivatives)? quote:tl;dr: In a better-designed language, this would be: a better-designed language wouldn't have the use case of 'all Project Euler problems should be one-liners'
|
# ? May 6, 2012 11:10 |
|
tef posted:a better designed language wouldn't have the use case of 'all project euler problems should be one-liners' If it were a better-designed language, it wouldn't be Perl.
|
# ? May 6, 2012 11:14 |
|
MononcQc posted:It's me, I'm the guy for whom the 4ms matters so much when he's shoving HTTP down a network connection. Thus demonstrating that the only reason to not use CGI is if you are on the side of evil. Janin posted:In other words, a server using CGI must burn nearly a full millisecond of time just to enter the request handler. Then it's probably best not to enter the CGI handler at all. Most web apps would benefit most from a better design revolving around cache performance.
|
# ? May 6, 2012 11:54 |
|
Zombywuf posted:Thus demonstrating that the only reason to not use CGI is if you are on the side of evil.
|
# ? May 6, 2012 12:36 |
|
Zombywuf posted:Thus demonstrating that the only reason to not use CGI is if you are on the side of evil. yes, much better to have slower ads all the time, people love slow ads. Also, what a poo poo argument.
|
# ? May 6, 2012 14:56 |
|
MononcQc posted:yes, much better to have slower ads all the time, people love slow ads. Also, what a poo poo argument. 1) Block ads, problem solved 2) I'm saying you shouldn't use CGI in that very specific (evil) case.
|
# ? May 6, 2012 15:06 |