|
Monkeyseesaw posted:Lambdas are awesome, I can't imagine why anyone would think otherwise. The alternative is the old 2.0 anonymous delegate syntax which was ugly as hell and hard to read. Unless you've been in bracket land your entire programming career, lambdas read very naturally. Congrats, you don't know the difference between strong/weak typing and static/dynamic. Have a cookie.
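The post is about C#, but since most of the code talk in this thread ends up in Java, the same contrast can be sketched in Java: an anonymous class playing the role of the verbose C# 2.0 anonymous-delegate syntax, and a lambda doing the same job. A hypothetical illustration, not anything from the thread:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaContrast {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("banana", "fig", "apple");

        // Verbose style: an anonymous class, analogous to the
        // C# 2.0 anonymous-delegate syntax the post calls hard to read.
        words.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Lambda style: the same comparison in one line.
        words.sort((a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(words); // [fig, apple, banana]
    }
}
```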
|
# ? Jan 28, 2010 07:11 |
|
Ugh, more horrors. Honestly, this was written by someone who has worked in the industry for a while. I am a co-op student, so I don't have any say. But gently caress. Note: the code I posted above is not this person's, and the following morning he actually was like "what the gently caress did I do?" and corrected it. I see too much poo poo like this: code:
code:
code:
|
# ? Jan 28, 2010 07:27 |
|
Lambda looks like the least of your problems. Have you tried drinking before work?
|
# ? Jan 28, 2010 10:24 |
|
So, yesterday I ran into yet another of those "C++ rocks, Java sucks" benchmark shootouts, and while I don't even care to comment on the benchmarks themselves beyond the fact that this guy apparently doesn't understand the JIT process, JVM warmup, GC tuning, the specifics of the Java API, or how to microbenchmark in general, his tests have the redeeming quality that he has released all the related source code right here. Or, well. Umm, this is what he thinks is a good way to create an array with a size given by the user in heapsort.java: code:
code:
Going into really small things, almost every number is output like this, even during the tests: code:
Granted, these are just small things I found with a quick glance, and I really shouldn't even care about arbitrary microbenchmarks, but it still irks me to see C code directly ported to Java.
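The quoted snippet didn't survive the archiving, but the construction being mocked (the reflective `Array.newInstance` call, brought up again later in the thread) contrasts with a plain allocation like this. A sketch of the general pattern, not the benchmark author's actual heapsort.java:

```java
import java.lang.reflect.Array;

public class ArrayAlloc {
    public static void main(String[] args) {
        int n = 1000; // stand-in for the user-supplied size

        // Reflective construction: a Reflection API call that returns
        // Object and needs a cast back to int[].
        int[] viaReflection = (int[]) Array.newInstance(int.class, n);

        // Idiomatic construction: a plain allocation, a couple of
        // bytecode instructions, no reflection involved.
        int[] direct = new int[n];

        System.out.println(viaReflection.length + " " + direct.length); // 1000 1000
    }
}
```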
|
# ? Jan 28, 2010 11:45 |
|
http://arstechnica.com/business/news/2010/01/how-a-stray-mouse-click-choked-the-nyse-cost-a-bank-150k.ars quote:On November 14, 2007 at 3:20pm, one of Credit Suisse's trading algorithms suddenly went haywire and, in a few moments, sent hundreds of thousands of bogus requests to the exchange.
|
# ? Jan 28, 2010 13:50 |
|
UberJumper posted:Ugh more horrors: Off topic, but are you in Ottawa? Your username is familiar.
|
# ? Jan 28, 2010 16:22 |
|
Parantumaton posted:So, yesterday I ran into yet another of those "C++ rocks, Java sucks" benchmark shootouts, and while I don't even care to comment on the benchmarks themselves beyond the fact that this guy apparently doesn't understand the JIT process, JVM warmup, GC tuning, the specifics of the Java API, or how to microbenchmark in general, his tests have the redeeming quality that he has released all the related source code right here. Butthurt much? Perhaps you could explain to those of us who are Java-illiterate what is wrong with those code snippets.
|
# ? Jan 28, 2010 16:37 |
|
king_kilr posted:Congrats, you don't know the difference between strong/weak typing and static/dynamic. Have a cookie. You're right, I mixed the two up. I know the difference, I just don't keep the terms straight in my head since I don't jump between those different worlds very often. Regardless, I don't think that has anything to do with my particular criticisms/support for lambdas, var, and dynamic in C#.
|
# ? Jan 28, 2010 18:50 |
|
Janin posted:Why does this matter? Presumably if the variable is later passed to a procedure that accepts IEnumerable, its type will be inferred to be IEnumerable, right? I couldn't tell you, but it's a sentiment I've heard from other people stuck firmly in the static typing world.
|
# ? Jan 28, 2010 19:14 |
|
Free Bees posted:I couldn't tell you, but it's a sentiment I've heard from other people stuck firmly in the static typing world. Most of the big type inference systems prior to C# used the Hindley-Milner algorithm, which chooses most-general types. All of the programmers used to it have grown to expect type inference to work in a certain way, as a result. So when C# introduced its alternative type inference algorithm, it managed to simultaneously anger the non-type-inference crowd who think it's spooky witchcraft that is somehow "weak" or "dynamic" typing AND anger the type-inference crowd who get irritated that they still have to include some type annotations to make their program typecheck, and they don't have a good intuition for why those should be required. The end result seems to be a feature that is only used in relatively few simple cases, as neither group really trusts it to do what they want.
|
# ? Jan 28, 2010 19:39 |
|
ShoulderDaemon posted:The end result seems to be a feature that is only used in relatively few simple cases, as neither group really trusts it to do what they want. The impression I got when C# 3.0 was just coming out was that var was originally intended for a pretty narrow case. It arose out of LINQ's need to generate anonymous types, where the programmer simply can't specify the type since it's unknown. The fact that it's only valid in local scope and must be declared and initialized in the same statement seems to support the idea that they weren't going out of their way to introduce type inference all over the language. But then I see tools like Resharper suggest you type all your local variables as var if you're initializing them, and I can't discern if that's what MS intended or if the community just went crazy with the usage because it was new and shiny and they didn't have to type Dictionary<string, int> anymore or whatever. Either way I think its overuse leads to less readable code. I tend to keep it within the scope of LINQ expressions and use explicit typing everywhere else. Dr Monkeysee fucked around with this message at 20:36 on Jan 28, 2010 |
# ? Jan 28, 2010 20:33 |
|
Zombywuf posted:Butthurt much? I'm no fan of Java, but I cannot possibly take anyone who thinks this is an acceptable presentation of data seriously. Edit: (1/(Speed/Fastest Speed)) Zhentar fucked around with this message at 23:32 on Jan 28, 2010 |
# ? Jan 28, 2010 23:30 |
|
Fib posted:http://arstechnica.com/business/news/2010/01/how-a-stray-mouse-click-choked-the-nyse-cost-a-bank-150k.ars That's the best thing I've read all day.
|
# ? Jan 29, 2010 00:33 |
|
Zhentar posted:Edit: (1/(Speed/Fastest Speed)) More = better than.
|
# ? Jan 29, 2010 07:50 |
|
Zhentar posted:I'm no fan of Java, but I cannot possibly take anyone who thinks this is an acceptable presentation of data seriously. Looks clear enough to me. More = better consistently, all relevant info is present in an easy-to-read way (how long did the gcc version take at -O1, for example). Fastest Time/Time is much less intuitive. Or, considering Speed as Programs per Second, i.e. 1/Time, then it's just Speed/Fastest Speed.
|
# ? Jan 29, 2010 15:00 |
|
Zombywuf posted:Looks clear enough to me. On the plus side, zhentar won't have to take zombywuf seriously any more.
|
# ? Jan 29, 2010 16:03 |
|
tef posted:
He didn't have to in the first place. What do you think this is? srs bzns?
|
# ? Jan 29, 2010 16:11 |
|
Zombywuf posted:Butthurt much? Not butthurt by the test, I can easily agree that good C++ code is faster than good Java code. Anyways: - Microbenchmarks shouldn't measure all the code, just the part they're focusing on. For example, when comparing sorting algorithms it's completely irrelevant to include VM startup time and whatnot in the actual test time, since you're not going to start the VM every single time when you're actually sorting - at least I hope so! - The JIT process in Java is iterative: the more you use some class/method, the more it gets optimized. The first run is pretty much a Just-In-Time compilation, a first draft of bytecode which works but isn't necessarily that fast; that's not important when running for the first time. After subsequent calls the bytecode is optimized (loop unrolling, inlining and all the other fancy ASM optimizations you can think of) over and over again until the VM reaches a point where additional optimizations to the piece of code become irrelevant. Usually this means about 20-50 calls for a method of average complexity, so when benchmarking Java code the actual code to be benchmarked should be looped through a couple of dozen times before actually performing the benchmark. This is what's commonly known as "JVM warmup". - The Garbage Collector may sometimes "get in the way" when running the test. Java has several Garbage Collectors available and each VM has its own, so tuning them can be considered an art in itself; one of the goals of GC tuning is to get subsequent iterations evened out, so that instead of having a piece of code which runs through itself in 40ms to 60ms it actually runs in 45ms to 47ms, which obviously is better on average even though the fastest run is slower than initially. - Array.newInstance(smth); is actually a Java Reflection API call. 
Reflection calls which haven't been JITted yet usually run at about 5% of the speed of a direct call, and in some cases that may be abysmally slow because instantiating new objects directly in Java takes only 2 or 3 instructions. On top of this, the version of Java he used (Java 6 update 3) isn't really known for its fast Reflection performance; the most recent Java 6 is a lot faster with it, and if you're feeling lucky, you can get an alpha of JDK7 and see how it's faster still. - System.in (usually) means the keyboard or a piped stream from some external source, byte by byte. That's why I mentioned that he at least bothered to wrap it with Java's native buffering, which reads (I think) in data in 8 kilobyte chunks. However, that could be improved by increasing the default buffer size to, say, 32 kilobytes. There are also two variants of IO APIs in Java, IO and NIO ("New IO"); the latter is the faster one in certain situations (there are - if I remember correctly - 16 ways to read in a file in Java at the moment, uhh...). When using the old API, you actually get a lot better results for benchmarking by reading in the file completely once, discarding it and then reading it again. This is because on the first run-through the file is checked by Java's SecurityManager for access rights etc., and that can be annoyingly slow at times, but on the second time even these checks are both JITted and partly cached so it's even faster. - When printing to the console with System.out.println() it (usually!) blocks, because Java sends some sort of request to a console buffer thingamajigger and basically sits down to wait for its turn to spam the console for a bit. Interestingly, multithreading .println() calls is (also usually!) faster because the threads yield, allowing the other threads to find an optimal spot for this pause, or something like that. I hope this clarifies a bit why I felt annoyed by those tests. EDIT: Me type bad! Me am play gods with dem words! 
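The "JVM warmup" bullet above can be sketched as a bare-bones harness. Everything here is a stand-in (the workload, the iteration counts); a real harness would also do more to defeat dead-code elimination and use many more measured runs:

```java
public class Warmup {
    // Stand-in workload, not the benchmark author's sort.
    static long work() {
        long acc = 0;
        for (int i = 0; i < 100_000; i++) acc += i * 31L;
        return acc;
    }

    public static void main(String[] args) {
        long sink = 0; // consume results so the JIT can't discard the work

        // Warmup: give the JIT its 20-50 calls to optimize the method
        // before any measurement is taken.
        for (int i = 0; i < 50; i++) sink += work();

        // Measured runs, after warmup, timed with System.nanoTime
        // rather than wall-clock time.
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 20; i++) {
            long t0 = System.nanoTime();
            sink += work();
            long t1 = System.nanoTime();
            best = Math.min(best, t1 - t0);
        }
        System.out.println("best run: " + best + " ns (sink=" + sink + ")");
    }
}
```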
Parantumaton fucked around with this message at 10:26 on Jan 30, 2010 |
# ? Jan 29, 2010 19:21 |
|
That graph owns. A 1.3 sec diff is over twice as wide a gap on the graph compared to a 2.3 sec diff. I remember in school our prof had us talking about factors people could use when benchmarking 2 different algorithms. Identifying the easy stuff like the algorithm's big-O classification went quickly, but then he pushed for other more subtle ways. That brought up talks about implementation, language choice, running on system setups (OS, other software, etc) that will favor one approach vs. another, and constructing "sample" data either to torpedo one algorithm's weak spot or give an algorithm a known best-case scenario. I brought up slyness with reporting results, ambiguity with words like "average", and generally measuring in metrics that are biased/meaningless. He acknowledged that but was dubious of its impact, and tried to focus us on more technical tricks. I want to find his contact info and email him that loving image. Bhaal fucked around with this message at 01:42 on Jan 30, 2010 |
# ? Jan 30, 2010 01:39 |
|
Parantumaton posted:There are also two variants of IO APIs in Java, IO and NIO ("New IO"); the latter is the faster one in certain situations (there are - if I remember correctly - 16 ways to read in a file in Java at the moment, uhh...). Not enough! Say hello to NIO.2.
|
# ? Jan 30, 2010 03:20 |
|
MrMoo posted:Not enough! Say hello to NIO.2.
|
# ? Jan 31, 2010 13:11 |
|
code:
|
# ? Jan 31, 2010 20:50 |
|
Lexical Unit posted:I bet there's a smaller horror inside!
|
# ? Jan 31, 2010 21:14 |
|
Well, you know, maybe the VB6 interpreter will just go ahead and unroll that "loop", inline what PD(1), PD(2) and PD(3) evaluate to, and strip out all the dead code blocks... right, guys?
|
# ? Jan 31, 2010 21:53 |
|
Vanadium posted:If you really want something that approaches a REPL for C++, why not go the whole distance and turn it into an IRC bot. if one of you assbutts attaches #include </dev/tty> to an irc bot, I'm so going to own your box.
|
# ? Jan 31, 2010 22:55 |
|
*whoosh*
|
# ? Feb 1, 2010 00:12 |
|
Parantumaton posted:- Microbenchmarks shouldn't measure all the code, just the part they're focusing on. For example when comparing sorting algorithms it's completely irrelevant to include VM startup time and whatnot to the actual test time since you're not going to start the VM every single time when you're actually sorting - at least I hope so! Dunno, what if you want to implement a sort command? quote:I hope this clarifies a bit why I did feel annoyed by those tests. I think you've clarified why Java is the horror.
|
# ? Feb 1, 2010 02:18 |
|
Zombywuf posted:Dunno, what if you want to implement a sort command? If you want a CLI command then use the -client rather than the -server command line option instead of complaining that the runtime environment uses more RAM and takes longer to start. If you want a language that lets you write fast code while making gratuitous system calls, it's down aisle 7 between the pixie dust and unicorn farts, beneath the functional compilers that actually generate parallel code.
|
# ? Feb 1, 2010 07:26 |
|
Isn't the whole point of Erlang to be a functional language that goes massively parallel? Are you saying it's still stuck on a single thread anyway?
|
# ? Feb 1, 2010 18:38 |
|
Zombywuf posted:Dunno, what if you want to implement a sort command? Collections.sort() with a custom Comparator implementation. Oh, you mean entirely? Well, the code structure for both implementing and benchmarking it would be something like this: code:
The code above outputs the following on my lousy computer: code:
I had to cut the scale rather a lot since the difference between the first run and subsequent runs was huge. The scale on the left is a bit hard to read; it's in milliseconds. Quick analysis: - JIT clearly is iterative; after iteration #22 the JIT/JVM clearly did some sort of magic trick and optimized the code substantially. - Not that it isn't clear already, but as the exponential trendline shows, performance increases as the method is called more often. Zombywuf posted:I think you've clarified why Java is the horror. Yes, it's horrible that it requires you to know what you're doing - which I really don't, I can't be assed to tune the GC for best performance or even think of proper test data right now. --- I feel that I've put way too much effort into this whole debacle. Argh. Parantumaton fucked around with this message at 19:24 on Feb 1, 2010 |
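The code blocks in this post were lost in archiving; as a rough sketch of the Collections.sort()-with-custom-Comparator approach named above (my example data and ordering, not the original benchmark):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortSketch {
    public static void main(String[] args) {
        List<String> lines = new ArrayList<>(Arrays.asList("banana", "Apple", "cherry"));

        // A custom Comparator: case-insensitive ordering, one possible
        // implementation of the "custom Comparator" named in the post.
        Comparator<String> byLowerCase = new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.toLowerCase().compareTo(b.toLowerCase());
            }
        };
        Collections.sort(lines, byLowerCase);

        System.out.println(lines); // [Apple, banana, cherry]
    }
}
```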
# ? Feb 1, 2010 19:13 |
|
jandrese posted:Isn't the whole point of Erlang to be a functional language that goes massively parallel? Are you saying it's still stuck on a single thread anyway? He means compilers that produce parallel programs out of arbitrary code. The Erlang compiler doesn't automatically parallelize anything.
|
# ? Feb 1, 2010 19:25 |
|
Dijkstracula posted:Well, you know, maybe the VB6 interpreter will just go ahead and unroll that "loop", inline what PD(1),PD(2) and PD(3) evaluate to, and strip out all the dead code blocks...right, guys? VB6 can compile to C object code which is then compiled to binary by VC6, so it will get optimized as well as the VC6 compiler can manage.
|
# ? Feb 1, 2010 19:36 |
|
Mustach posted:He means compilers that produce parallel programs out of arbitrary code. The Erlang compiler doesn't automatically parallelize anything. Isn't that kind of like asking "Why can't we make a compiler that takes a statement in the form of 'I want a calculator that looks like a TI-85 but operates in base 11', and make it for me? Why do I have to write all of this 'code' stuff?"
|
# ? Feb 1, 2010 19:47 |
|
jandrese posted:Isn't that kind of like asking "Why can't we make a compiler that takes a statement in the form of 'I want a calculator that looks like a TI-85 but operates in base 11', and make it for me? Why do I have to write all of this 'code' stuff?" Haskell compilers seem to manage it nowadays, although they don't scale quite as well as you might hope because there's still a fair amount of contention inside the runtime. Frankly there just isn't as much parallelism in most algorithms as some people think before you have to go speculative anyway. The programmer still has to be intentionally writing parallel algorithms if they want to scale up well. They just don't have to bother with creating threads or passing messages or all that muck.
|
# ? Feb 1, 2010 21:38 |
|
ShoulderDaemon posted:Haskell compilers seem to manage it nowadays, although they don't scale quite as well as you might hope because there's still a fair amount of contention inside the runtime. Haskell compilers don't parallelize automatically; the programmer has to provide annotations to describe which sections should be parallel. Apparently when they tried to do it automatically, it was really tricky to figure out what computations took a lot of time, so GHC would spawn way more threads than is useful.
|
# ? Feb 2, 2010 02:27 |
|
Janin posted:Haskell compilers don't parallelize automatically; the programmer has to provide annotations to describe which sections should be parallel. You can provide annotations, but the modern GHC runtime will create sparks on its own if you don't. Janin posted:Apparently when they tried to do it automatically, it was really tricky to figure out what computations took a lot of time, so GHC would spawn way more threads than is useful. Yes, it is really tricky, and GHC does not do an amazing job. The modern implementation uses a fixed number of threads, typically equal to the number of cores on the system, and every thread performs the highest unclaimed strict block it can on the code graph, resulting in speculative parallelism that preserves sharing and benefits well enough from strictness analysis and fusion, which we generally believe we can do well. It has real contention issues, and probably needs to move to something like work stealing with a queue per thread, but it's slowly getting better. Recently, for example, garbage collection was restructured to be a mostly per-thread activity instead of taking a global lock every time.
|
# ? Feb 2, 2010 02:44 |
|
ShoulderDaemon posted:You can provide annotations, but the modern GHC runtime will create sparks on its own if you don't. How modern do you mean? The GHC 6.12.1 docs indicate that parallel code must still be explicitly annotated using the par and pseq combinators, and I haven't heard anything about it being included in 6.14. I'd like to play around with it, but the only links I can find are basically "neat idea; doesn't work yet".
|
# ? Feb 2, 2010 03:06 |
|
Janin posted:How modern do you mean? The GHC 6.12.1 docs indicate that parallel code must still be explicitly annotated using the par and pseq combinators, and I haven't heard anything about it being included in 6.14. As I recall, you need to be running PARALLEL_HASKELL and not just the threaded GHC runtime, which means you may need to do a little source-diving and recompile to get your toolchain ready. At that point, any legal thunk should be a potential spark. You'll need to ask the RTS to use more than one thread (obviously) to get actual parallelism. And yeah, it doesn't really work yet. Last time I seriously played with it was right before the 6.10 release when I was trying (and failing) to make the locking overhead marginally lower. The sparks are really small unless your algorithm is massively-serial, and if it is they don't help anyway because they tend to speculate randomly and just waste memory. At the time I was observing <10% speedups at -N2, and no gains for -N3 and higher, although that was before the garbage collector was fixed.
|
# ? Feb 2, 2010 03:24 |
|
jandrese posted:Isn't the whole point of Erlang to be a functional language that goes massively parallel? Are you saying it's still stuck on a single thread anyway? Erlang's parallelism on multicore is "incidental" at best. The language was defined to be concurrent (independent tasks and actors), but not necessarily parallel (tasks performed simultaneously). When the language was used in the 80s and 90s, any form of parallelism came from distribution, which was mainly there for reliability. To make things short, concurrency was implemented by having a bunch of Erlang processes (VM virtual processes) scheduled in a run queue. Erlang only got SMP support in 2006 (R11B, if memory serves me right). The way it was done was by having a scheduler represented as an OS thread. You would have one of these threads per core, which would share a common run queue and get Erlang processes from it. Some tasks could scale quite well over multiple cores, but anything that was remotely demanding would be slower with SMP than without it. Since R13B (2009), Erlang has had a run queue per scheduler, with a lot less lock contention and context switching needed. Since then, Erlang really did get good at SMP. I don't really know where all the beliefs that Erlang was great at parallelism originated from. Maybe misguided definitions of concurrency vs parallelism, or the fact Erlang really made it easy to adapt (you didn't need to change any code you had before, just upgrade VMs), but usually, e-people are more wary of non-implemented stuff. Now to answer the question, the main point of Erlang is to be really reliable and fault-tolerant. Good parallelism is only a benefit of the message-passing and isolation principles necessary to have high reliability. For more details on SMP in Erlang, read http://www.erlang.org/euc/08/euc_smp.pdf MononcQc fucked around with this message at 03:30 on Feb 2, 2010 |
# ? Feb 2, 2010 03:26 |
|
Saw this gem from a coworker today. IPs have been changed to protect the innocent. Spacing modified to hopefully protect the innocent forum tables. php:
<?
foreach ($hello->sign as $helloSign) {
    // Incorrect IP address - reboot router
    if ($helloSign->ip != "74.11.11.11" and $helloSign->ip != "98.11.11.12"
        and $helloSign->ip != "206.11.11.13" and !stristr($helloSign->ip, '10.11.11')
        and $helloSign->id != "28" and $helloSign->id != "3654"
        and $helloSign->id != "3660" and $helloSign->id != "3662") {
        echo "ID: $helloSign->id -- IP: $helloSign->ip<br/>\n";
        // Create socket
        $socket = socket_create(AF_INET, SOCK_STREAM, 0);
        // Connect to remote host
        if (socket_connect($socket, $helloSign->ip, 23)) {
            //echo "Socket connected<br>";
            // Receive prompt for password
            $buf = "";
            if (($bytes = socket_recv($socket, $buf, $buffSize, MSG_WAITALL)) !== false) {
                //echo "Received: $buf<br>";
                // Send password
                $send = "(password)\r\n";
                if (($bytes = socket_write($socket, $send, strlen($send))) !== false) {
                    // Receive main menu
                    $buf = "";
                    if (($bytes = socket_recv($socket, $buf, $buffSize, MSG_WAITALL)) !== false) {
                        //echo "Received: $buf<br><br>";
                        // Go to 'System Maintenance' menu
                        $send = "24\r\n";
                        if (($bytes = socket_write($socket, $send, strlen($send))) !== false) {
                            // Receive menu
                            $buf = "";
                            if (($bytes = socket_recv($socket, $buf, $buffSize, MSG_WAITALL)) !== false) {
                                // Select 'Command Interpreter Mode'
                                $send = "8\r\n";
                                if (($bytes = socket_write($socket, $send, strlen($send))) !== false) {
                                    // Receive command prompt
                                    $buf = "";
                                    if (($bytes = socket_recv($socket, $buf, $buffSize, MSG_WAITALL)) !== false) {
                                        // Send reboot command
                                        $send = "sys reboot\r\n";
                                        if (($bytes = socket_write($socket, $send, strlen($send))) !== false) {
                                            echo $str = " - Sent reboot command to: $helloSign->id - $helloSign->ip<br />\n";
                                            fwrite($fp1, date('H:i:s') . $str);
                                            //socket_close($socket);
                                        } else echo "Problem sending reboot command\n";
                                    } else echo "Problem receiving command prompt\n";
                                } else echo "Problem selecting Command Interpreter Mode\n";
                            } else echo "Problem receiving menu\n";
                        } else echo "Problem selecting System Maintenance\n";
                    } else echo "Problem receiving Main Menu\n";
                } else echo "Problem sending password\n";
            } else echo "Problem receiving password prompt\n";
            socket_close($socket);
        } else fwrite($fp, date('H:i:s') . " - Couldn't connect to sign: $helloSign->id with ip: $helloSign->ip\n");
    }
}
?>
|
# ? Feb 2, 2010 04:11 |