leftist heap
Feb 28, 2013

Fun Shoe
I half agree, half disagree with the notion that learning monads outside of Haskell is less useful. On one hand, I can't imagine monads being as useful in any other language. Haskell's onerous type system makes monads all but indispensable. On the other hand, Haskell's pervasive use of monads, weird monad instance syntax (>>=, return, and the haphazard mix of prefix and infix notation), and do-notation sugar kind of obscure how simple a concept monads actually are.

Monads themselves are probably easier to understand than you think they are, and you probably already understand the mechanics of them. It just takes time and experience to understand why and how they're useful. That said, I do agree that if you want to fully understand them and their significance, it's better to learn them in Haskell. At least Haskell is very upfront about the types and type signatures involved.

(It took a lot of effort not to type out my own mini-tutorial of monads. The internet does not need another monad tutorial.)


Toekutr
Dec 9, 2008

Plorkyeran posted:

Yet another reason not to listen to Crockford!

Trying to grok monads without using Haskell is like trying to grok OOP by implementing your own object system from scratch in R5RS without ever having used a language with built in support for OO. It's certainly not impossible, but it's not the route anyone would recommend.

This isn't really hard at all though. SICP covers it in exactly this way.

raminasi
Jan 25, 2005

a last drink with no ice
I learned monads without knowing Haskell, and if you're anything like me, when you finally "get" monads you'll feel incredibly cheated: almost every tutorial you find starts by barfing category theory at you, but monads are basically just a gang-of-four-esque design pattern (albeit a more complicated one than the actual gang-of-four patterns). The concept absolutely makes sense outside of Haskell, but monads aren't nearly as helpful when you have to roll your own every time, and Haskell's the only language I know of that comes with a bunch included. (F# has two and C# has one, but that's all I'm aware of outside of Haskell.)
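As a rough illustration of what "rolling your own" looks like, here's a minimal Maybe-style monad sketched in Common Lisp (the Lisp that comes up later in the thread). The unit/bind names and the NIL-for-failure representation are just illustrative choices, not any particular library:
Lisp code:
(defun unit (x)
  "return/unit: wrap a plain value."
  (list x))

(defun bind (mv f)
  "Apply F to the wrapped value, or short-circuit on NIL."
  (if mv (funcall f (first mv)) nil))

(defun safe-div (x y)
  "A computation that can fail."
  (if (zerop y) nil (unit (/ x y))))

;; Chaining: the first NIL turns the whole pipeline into NIL.
;; (bind (safe-div 10 2)
;;       (lambda (a) (bind (safe-div a 0)
;;                         (lambda (b) (unit (+ a b))))))
;; => NIL
That's the entire mechanical core; everything else is picking a useful wrapper and keeping the types straight.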

DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!
Even a simple error monad seems pretty useful in Clojure. This is a good showcase imho.

SavageMessiah
Jan 28, 2009

Emotionally drained and spookified

Toilet Rascal

GrumpyDoctor posted:

I learned monads without knowing Haskell, and if you're anything like me, when you finally "get" monads you'll feel incredibly cheated: almost every tutorial you find starts by barfing category theory at you, but monads are basically just a gang-of-four-esque design pattern (albeit a more complicated one than the actual gang-of-four patterns). The concept absolutely makes sense outside of Haskell, but monads aren't nearly as helpful when you have to roll your own every time, and Haskell's the only language I know of that comes with a bunch included. (F# has two and C# has one, but that's all I'm aware of outside of Haskell.)

Yeah for me the biggest impediment to understanding monads was monad tutorials. The LYAH explanation was the only one that did anything for me because it didn't do either a) category theory!!! or b) stupid metaphor involving boxes/robots/machines.

pgroce
Oct 24, 2002
Having accidentally implemented a monad at work today (in Python, of all things), I'll walk back my snark earlier. But things like static typing and TCO make monads much more viable, and compensate for some of monads' drawbacks. (I find them much harder to debug in languages without static typing to validate that what I built is at least plausible.)

shrughes
Oct 11, 2008

(call/cc call/cc)

DreadCthulhu posted:

loving monads. If someone has a good tutorial or path to truly grokking them in Clojure, please share it.

Somebody on #cobol a couple of years ago appreciated this tutorial: http://shrughes.com/p/monad-tutorial/

But it's not for Clojure.

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

shrughes posted:

Somebody on #cobol a couple of years ago appreciated this tutorial: http://shrughes.com/p/monad-tutorial/

But it's not for Clojure.

There's a minor bug in that; it has getLine >> (\x -> putStrLn ("Your string is " ++ x)) at one point where it probably means >>= instead of >>.

shrughes
Oct 11, 2008

(call/cc call/cc)
Oh, thanks.

weird
Jun 4, 2012

by zen death robot

SavageMessiah posted:

Do they work everywhere or do you need to use destructuring-bind explicitly?

Replying to a really old post because I reminded myself of it, no one else responded, and it's a nice example of some of the cool stuff Lisp can do.

A quick unrelated example first:

Say you wanted to add a while loop to Lisp. Obviously it couldn't be defined as a function, because Lisp uses strict evaluation, so the body part would be evaluated immediately and only once, no matter what the predicate was. You could just wrap the arguments in lambdas and funcall them, but that would be awkward, and for more complex examples there would be more and more syntactic noise. For situations like this, Lisp gives us defmacro, a tool for syntactic abstraction. Macros take Lisp forms rather than values as input and return a Lisp form as output, and are evaluated at compile time. One of the key points to understand is that for some macro m that takes one argument x, if we write (m (+ 1 2)), x will be bound to the list (+ 1 2) in the body of m, not what it would evaluate to.
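To see why that's awkward, here's a sketch of the lambda-wrapping version (while-fn is a made-up name); every argument has to be thunked by hand at each call site:
Lisp code:
(defun while-fn (pred-thunk body-thunk)
  "A while 'function': both arguments must already be wrapped in lambdas."
  (loop while (funcall pred-thunk)
        do (funcall body-thunk)))

;; Usage is noisy compared to real syntax:
;; (let ((i 0))
;;   (while-fn (lambda () (< i 10))
;;             (lambda () (incf i))))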

So while is easy to define as a macro. &body works exactly the same as &rest, but it marks that what it will be bound to is something equivalent to a function body, both as a semantic hint to the reader, and as an indentation hint to the editor.
Lisp code:
CL-USER> (defmacro while (pred &body body)
           (list* 'do ()
                  (list (list 'not pred))
                  body))
WHILE
CL-USER> (macroexpand-1 '(while t (write-line "True")))
(DO () ((NOT T)) (WRITE-LINE "True"))
And that works as we want, but the definition is cumbersome and hard to read. For situations like this, Lisp gives us backquote. Backquote is not restricted to macro definitions, but it is most often used there. Backquote works just like normal quote, in that it suppresses evaluation, but inside backquotes we are also allowed to unquote forms with a comma, or to use comma-at to evaluate a form and splice its result into the surrounding list (it is an error if that result is not a list). Backquote makes "templating" situations much easier to read. It's important to keep in mind, though, that backquote is just shorthand for the appropriate explicit conses. Macros can make use of the normal list functions when operating on Lisp programs.
Lisp code:
(defmacro while (pred &body body)
  `(do ()
       ((not ,pred))
     ,@body))
Now that you've been introduced to macros, let's move on to why I thought to write this post.

The thing you have to understand about Common Lisp, in my opinion at least, is that a lot of the language is specifically geared toward making it easier to extend the language. The Common Lisp philosophy is all about giving you the tools to transform Lisp into a language specifically tailored for whatever problem you're trying to solve. destructuring-bind is one of those tools that falls really easily into that category.*

I was writing some Lisp today and I wanted to loop over a list and destructuring-bind each element. It's easy enough to call destructuring-bind at the head of the loop body, but I wanted to use that functionality in a few places (unnecessarily repeating yourself is bad), and more importantly, breaking up the binding into two steps (one for the list element, and one for destructuring the list element) turns what in my mental model is one step into two steps in Lisp. To solve that problem, I defined the following macro. If you didn't know, destructuring is allowed in the lambda list of a macro.

Lisp code:
(defmacro destructuring-dolist ((var list &optional result) &body body)
  "Like DOLIST, but if VAR is a list then DESTRUCTURING-BIND LIST's elements"
  (if (listp var)
      (let ((var* (gensym)))
        `(dolist (,var* ,list ,result)
           (destructuring-bind ,var ,var*
             ,@body)))
      `(dolist (,var ,list ,result)
         ,@body)))
The only new thing in this definition is gensym, which returns a symbol that is guaranteed not to be in use. If it weren't for gensym, there would be the problem of variable capture. Whatever name I chose for the variable that is bound to the full list element might already be in use by whoever is using the macro, and they probably wouldn't appreciate me re-binding their symbol. Remember that macros are expanded inline. gensym prevents this problem. Anyway, our macro works as advertised, and if it's handed a non-destructuring form it expands into a normal dolist call.
Lisp code:
CL-USER> (destructuring-dolist ((k . v) '((a . 1) (b . 2) (c . 3)))
           (format t "~a => ~a~%" k v))
A => 1
B => 2
C => 3
As one final example, let us consider with-slots. with-slots is a standard macro provided by CLOS that allows destructuring of objects, letting us refer to their slots as though they were normal variables, optionally giving them names different from the slot names.
Lisp code:
CL-USER> (defclass point ()
           ((x :initarg :x
               :accessor x-of)
            (y :initarg :y
               :accessor y-of)))
#<STANDARD-CLASS POINT>
;; This isn't how you define a print-object method, but it works for this purpose
CL-USER> (defmethod print-object ((point point) stream)
             (format stream "(~a, ~a)" (x-of point) (y-of point)))
#<STANDARD-METHOD PRINT-OBJECT (POINT T) {1006F71E93}>
CL-USER> (defvar *p* (make-instance 'point :x 3 :y 5))
*P*
CL-USER> *p*
(3, 5)
CL-USER> (with-slots (x y) *p*
           (incf x) (incf y))
6
CL-USER> *p*
(4, 6)
We can define with-slots fairly easily as a macro (and in fact all of CLOS is defined as a collection of macros). The only new tool we need for this is symbol-macrolet, which creates symbol-macros--single symbols that expand into arbitrary Lisp code. Note too how we are careful to make sure the object is only evaluated once in our expansion. If the object were not passed directly, but instead as a result of a function that might have side effects, this would be an important consideration.
Lisp code:
(defmacro our-with-slots (binds instance &body body)
  (let ((obj (gensym)))
    `(let ((,obj ,instance))
       (symbol-macrolet
           ,(mapcar #'(lambda (x)
                        (if (listp x)
                            `(,(first x) (slot-value ,obj ',(second x)))
                            `(,x (slot-value ,obj ',x))))
                    binds)
         ,@body))))
This definition is a bit more complicated because it does more work at compile time, but if you remember what parts are quoted and which aren't, the logic is simple. See also how we used mapcar to process the list of bindings--code is data. Our version works the same as the standard one, and even expands into virtually the same code:
Lisp code:
CL-USER> (macroexpand-1 '(with-slots (x (the-y y)) p
                           (incf x) (incf the-y)))
(LET ((#:G1338 P))
  (DECLARE (IGNORABLE #:G1338))
  #:G1338
  (SYMBOL-MACROLET ((X (SLOT-VALUE #:G1338 'X)) (THE-Y (SLOT-VALUE #:G1338 'Y)))
    (INCF X) (INCF THE-Y)))
CL-USER> (macroexpand-1 '(our-with-slots (x (the-y y)) p
                           (incf x) (incf the-y)))
(LET ((#:G1339 P))
  (SYMBOL-MACROLET ((X (SLOT-VALUE #:G1339 'X)) (THE-Y (SLOT-VALUE #:G1339 'Y)))
    (INCF X) (INCF THE-Y)))
And that's all for now :) For the people who didn't already know all this, hopefully it was an interesting look at some of the more unique and powerful features of Lisp. If you want to learn more about macros, Paul Graham's book On Lisp (available free here) is the canonical macro book. Even if you aren't necessarily interested in macros, but just Common Lisp, On Lisp is an excellent guide to the Common Lisp way of doing things. That book is what made me first feel like I kind of understood CL; I can't recommend it highly enough.

* This page discusses why destructuring-bind was introduced into the standard; it's kind of interesting.

e: It's debatable whether or not defining destructuring-dolist is in good taste. The same functionality is available in loop (which I don't like) and iter (which is nonstandard, and I didn't want to use a library for one function in the tiny thing I was writing).

weird fucked around with this message at 15:28 on Nov 6, 2013

SavageMessiah
Jan 28, 2009

Emotionally drained and spookified

Toilet Rascal
So I guess the answer to my question is "yes you need to use it explicitly except in lambda lists for defmacro and loop". And by explicitly I mean "write a macro". I like the fact that destructuring just works in every assignment form in clojure. It bothers me when I have to write macros for basic control structures that should already exist.

weird
Jun 4, 2012

by zen death robot

SavageMessiah posted:

So I guess the answer to my question is "yes you need to use it explicitly except in lambda lists for defmacro and loop". And by explicitly I mean "write a macro". I like the fact that destructuring just works in every assignment form in clojure. It bothers me when I have to write macros for basic control structures that should already exist.

Yes, it works more nicely in Clojure. It's not that big of a deal in CL, though, because the macros you have to write to add it to most places are really simple. The way Lisps in general make extending the language like that so easy is probably the most unique thing about the Lisp family today, so I figured I'd give a couple examples of it for anyone who didn't already know.

Toekutr
Dec 9, 2008

VanillaKid posted:

(and in fact all of CLOS is defined as a collection of macros)

Just going to point out that this isn't really true. Only the top-level interface of CLOS (defclass, defmethod and such) is defined with macros.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

DreadCthulhu posted:

loving monads. If someone has a good tutorial or path to truly grokking them in Clojure, please share it.

This paper really helped me get monads: it explains the math and types behind what you're writing instead of trying to use a silly analogy. It requires that you know basic types (and CPS), but nothing too difficult.

Monads aren't bad once you understand how the types flow, but until you do it seems like spaghetti magic.

Edit: Also, writing monads without do notation is like writing C without -> notation.

QuantumNinja fucked around with this message at 04:53 on Nov 30, 2013

DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!
Ugh, kind of sad there isn't a Haskell thread around here. I've been really curious recently about the kind of benefits I'd hit by switching to a super-anal statically typed language for the kind of work we do. I wrote all of our backend services in Clojure, which has been a real blast, but at a certain scale you really start to hit issues with refactoring and debugging.

You have to unit test 100% of your codebase if you want to make sure that what you just put down is even remotely sane, unless you enjoy debugging runtime errors. Which btw, you can't do very well, since there's not a decent step debugger in the entire ecosystem. Deftrace + nrepling in is nice and all, but you're effectively printfing for your life. I think preventing 90% of error cases due to type mismatches is worth looking into.

ToxicFrog
Apr 26, 2008


DreadCthulhu posted:

Ugh, kind of sad there isn't a Haskell thread around here. I've been really curious recently about the kind of benefits I'd hit by switching to a super-anal statically typed language for the kind of work we do. I wrote all of our backend services in Clojure, which has been a real blast, but at a certain scale you really start to hit issues with refactoring and debugging.

You have to unit test 100% of your codebase if you want to make sure that what you just put down is even remotely sane, unless you enjoy debugging runtime errors. Which btw, you can't do very well, since there's not a decent step debugger in the entire ecosystem. Deftrace + nrepling in is nice and all, but you're effectively printfing for your life. I think preventing 90% of error cases due to type mismatches is worth looking into.

Have you looked at core.typed at all? I've only used it for small things but it seems to work pretty well.

shrughes
Oct 11, 2008

(call/cc call/cc)

DreadCthulhu posted:

Ugh, kind of sad there isn't a Haskell thread around here. I've been really curious recently about the kind of benefits I'd hit by switching to a super-anal statically typed language for the kind of work we do.

Lazy evaluation is kind of awful. Maybe an OCaml thread. How about Scala, or even just Java? There are plenty of statically typed languages -- Haskell doesn't add anality so much as flexibility (in a different fashion than you'd see in Java or C#).

DreadCthulhu
Sep 17, 2008

What the fuck is up, Denny's?!
My understanding is that enforcing strict evaluation is a second sense you just develop with time in Haskell, and that in practice it turns out not to be as much of a hassle. I don't have much experience in it yet, so I'm just regurgitating.

In any case, I tend to favor languages that are extremely terse, composable, and low on boilerplate, preferably with no compromises in place to facilitate OO. Clojure has been incredible from that perspective, with Java on the very opposite side of that spectrum.

shrughes
Oct 11, 2008

(call/cc call/cc)

DreadCthulhu posted:

My understanding is that enforcing strict evaluation is a second sense you just develop with time in Haskell, and that in practice it turns out not to be as much of a hassle. I don't have much experience in it yet, so I'm just regurgitating.

That's not really how it turns out. You can induce a memory leak by not being strict enough, or by being too strict. This is the sort of thing where people who write the Haskell compilers and have been doing lazy functional programming for decades still struggle. Also, compiler optimizations can make code that doesn't have memory leaks (as compiled) fragile with respect to small changes. I remember one Haskell web server benchmark where some servers would just fail to complete on the high workload because they were leaking memory so much.

Katalize
Dec 30, 2013

shrughes posted:

Lazy evaluation is kind of awful. Maybe an OCaml thread.

Yes please. I played a bit with Haskell some years ago. It was fun, and lazy evaluation is cool when all you do is solve Project Euler problems. Then you try to do something (semi)serious in Haskell and it all falls apart, because lazy evaluation is not that cool and useful in real-world programming and 90% of your functions end up being IO (). OCaml feels much more practical after that. At least it has real strings :)

leftist heap
Feb 28, 2013

Fun Shoe
Man, I want to give Clojure a fair shake, but I keep getting stuck on dumb things, like:

- Really terrible run time and compile time error reporting compared to CL. CL's error reporting isn't always spectacular, but it's usually easy to understand how the error relates to your code. Clojure's errors tell you more about the underlying language sausage-making than your code. It gets easier the more you learn, but it makes the learning process a little difficult.

- Bad debug tooling. The whole Clojure community really seems to be spinning its wheels on this one. I remember checking out Clojure 3-4 years ago and being really excited because it seemed like it was on track to having tooling that came close to matching CL/Emacs/SLIME and since then... nada. If anything it seems like things have gone backwards. Even the most recent articles on debugging Clojure are already out of date. Ritz looked promising, but now looks abandoned. I get the feeling that the particulars of Clojure's Java implementation make debugging and creating debug tools difficult.

- Vector literals. It's silly, but I'm bugged by the use of vector literals for various things that would just be lists in CL, like function params, let blocks, for clauses, etc.

On top of that I'm not particularly sold on functional/lazy/immutable by default.

There are a bunch of things I like though. Lein is a great build tool/packaging system and everything surrounding packages in Clojure beats the pants off of CL (although Lein is slow as gently caress). Obviously it's nice to code in a language that isn't saddled with 50-year-old cruft.

Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS

rrrrrrrrrrrt posted:

On top of that I'm not particularly sold on functional/lazy/immutable by default.

Clojure didn't choose functional/immutable just for the heck of it. Once you study Clojure's concurrency model(s), you'll see why immutability and functional ideas are a very, very good thing. I recently sped up some code by over a factor of 10 just by replacing "map" with "pmap" in one place - and if I'd been coding with shared, mutable data, I couldn't have made that optimization nearly as easily.
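That swap works precisely because the data is immutable and the mapped function is side-effect-free. For the CL readers in the thread, the third-party lparallel library offers a close analogue; a minimal sketch, assuming lparallel is loaded, with an illustrative worker count:
Lisp code:
;; (ql:quickload :lparallel)
(setf lparallel:*kernel* (lparallel:make-kernel 4)) ; pool size is illustrative

(defun slow-square (x)
  (sleep 0.01) ; stand-in for real work
  (* x x))

;; Serial:   (map 'list #'slow-square list-of-numbers)
;; Parallel: (lparallel:pmap 'list #'slow-square list-of-numbers)
;; Same shape, different name; safe here because slow-square is pure.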

Once you get used to functional programming, you'll learn again how to build a method from the ground up, using little blocks of code which you already tested at the REPL. This reduces your need to debug complex functions.

(The error messages do completely suck, no argument there - in fact, they only become less useful as you move on to macros.)

pgroce
Oct 24, 2002

rrrrrrrrrrrt posted:

- Bad debug tooling. The whole Clojure community really seems to be spinning its wheels on this one. I remember checking out Clojure 3-4 years ago and being really excited because it seemed like it was on track to having tooling that came close to matching CL/Emacs/SLIME and since then... nada.

You know about Cider, right? Not that you're wrong about debugging, but Cider (and ac-nrepl) will give you a lot of the other SLIMEy goodness, like testing at a REPL, C-x C-e evaluation, macro expansion, etc. (And with a lot less pain keeping things up-to-date, in my experience.)

I wish there was a debugger sometimes (more often, I wish there were better error messages than Java stack traces), but when I'm not debugging a runtime problem it's a pretty fun language.

leftist heap
Feb 28, 2013

Fun Shoe

Tequila Bob posted:

Clojure didn't choose functional/immutable just for the heck of it. Once you study Clojure's concurrency model(s), you'll see why immutability and functional ideas are a very, very good thing. I recently sped up some code by over a factor of 10 just by replacing "map" with "pmap" in one place - and if I'd been coding with shared, mutable data, I couldn't have made that optimization nearly as easily.

Once you get used to functional programming, you'll learn again how to build a method from the ground up, using little blocks of code which you already tested at the REPL. This reduces your need to debug complex functions.

I'm reasonably well versed in functional programming at this point, and I think that functional and immutable data structures are great and useful; I'm just not sold on them as language defaults. I guess it depends on the types of domains you work in. Your pmap example is cool, but I can count on one hand the times in my life when speed has been an issue for me. The times I have written heavily parallel code, it's been planned up front and expressly designed to reduce or eliminate shared state.

pgroce posted:

You know about Cider, right? Not that you're wrong about debugging, but Cider (and ac-nrepl) will give you a lot of the other SLIMEy goodness, like testing at a REPL, C-x C-e evaluation, macro expansion, etc. (And with a lot less pain keeping things up-to-date, in my experience.)

I wish there was a debugger sometimes (more often, I wish there were better error messages than Java stack traces), but when I'm not debugging a runtime problem it's a pretty fun language.

I'm using Cider, and it's great, but it obviously lacks all the great stuff in the SLIME debugger -- not necessarily Cider's fault, since Clojure/nREPL don't have the same debug facilities out of the box as a CL implementation.

Anyway, Clojure is a great language. I'm using it for a small project right now and it really is nice to be using something more modern and comfortable. I just miss the whole "turtles all the way down" environment that CL provides.

Meiwaku
Jan 10, 2011

Fun for the whole family!

rrrrrrrrrrrt posted:

... I can count on one hand the times in my life when speed has been an issue for me.

If it isn't for speed, what do you want mutable data structures for? That's the ONLY use case I can think of for wanting non-immutable.

Also, immutable structures offer parallelization without having to do all that upfront planning and design; that is why they're so valuable.

shrughes
Oct 11, 2008

(call/cc call/cc)

Meiwaku posted:

If it isn't for speed, what do you want mutable data structures for? That's the ONLY use case I can think of for wanting non-immutable.

Mutable data structures are often much, much easier to write algorithms for. Not in terms of performance, but just in terms of getting it working. For example, try doing programming contest problems (like those of Google Code Jam) using mutable data structures and then immutable data structures. It's generally easier to write and patch algorithms that are mutating things.

Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS

shrughes posted:

Mutable data structures are often much, much easier to write algorithms for. Not in terms of performance, but just in terms of getting it working. For example, try doing programming contest problems (like those of Google Code Jam) using mutable data structures and then immutable data structures. It's generally easier to write and patch algorithms that are mutating things.

I find that the more you practice mutable programming, the easier it is to think in terms of mutable programming techniques, and the more you practice immutable programming, the easier it is to think in terms of immutable programming techniques. As an example, since you mentioned the Google Code Jam, I took a look at the first two problems. Without giving the solutions away, here are my thoughts:

1. All Your Base: you need to know the number of distinct elements in each string. This is not a mutating operation. You also need to substitute characters with their possible digit values and get the number this represents; Clojure (and Haskell, etc.) have built-in functions to do both of these things immutably and easily.

2. Rotate: treat each row of the board as a separate list; map "sort-by" over the collection of rows, using a key function which only differentiates between '.' and not-'.'. This gets you the rotated boards without a single mutation, and we should agree that determining a victor is a non-mutating operation.

Do you have any counter-examples you'd like to share?

shrughes
Oct 11, 2008

(call/cc call/cc)
Your post is not even wrong.

You do not give examples of how practicing (im)mutable programming makes it easier to think about problems with (im)mutable programming techniques (which are practically tautologies, by the way). You give examples of how you might solve the trivial qualification problems with a functional algorithm. Your request that I give a counter-example to the claim that practicing immutable programming makes it easier to think about problems with immutable programming techniques, or that practicing mutable programming makes it easier to think about mutable programming techniques, is patently insane, because that claim is not counter to what I said. Is this misdirection intentional or by accident?

Go reread my post:

shrughes posted:

Mutable data structures are often much, much easier to write algorithms for. Not in terms of performance, but just in terms of getting it working. For example, try doing programming contest problems (like those of Google Code Jam) using mutable data structures and then immutable data structures. It's generally easier to write and patch algorithms that are mutating things.

Are you disputing that mutable data structures are often much easier to write algorithms for with? I meant to say "with." (This is clearly true, it's such a low standard, all you need to do is pick one of the problems that's easier to do with mutable data structures. The suggestion to do Code Jam problems was not meant to be a suggestion that you do two trivial qualification-level Code Jam problems, or that you imagine how they might be done instead of actually doing them and seeing for yourself, while implementing non-trivial Code Jam problems, how imperative programming constructs make the annoyances of functional programming constructs go away.) (On its face this is practically a tautology too, of course, because imperative algorithms are a superset of purely functional algorithms, so all it takes is for imperative techniques to occasionally come in handy. But let's go with the stronger claim that avoiding functional programming language constructs like higher-order functions is often a desirable thing, because they make editing code a pain.)

Are you disputing that it's generally easier to modify algorithms that are mutating things? (With a for loop you can turn a map into a fold by adding a variable; you can turn it into some other traversal by adding recursive calls; you can add fields to the result set without touching any other code by adding a parallel array; and the edit distance between buggy and non-buggy versions of the algorithm is thus much, much smaller.)

If you're going to argue with my post, it would be nice if you disputed the claims it actually makes, maybe by saying which you disagree with, inviting a response with more detail. These beliefs are not formed in a vacuum of functional programming experience.

Tequila Bob
Nov 2, 2011

IT'S HAL TIME, CHUMPS
You keep insisting on the existence of problems which are objectively easier to solve with mutability than without. You keep failing to provide actual examples, while you tell me that the examples I provide are not good enough.

(You say: The problems in Google Code Jam prove that mutability is easier to code.
I said: Here's some examples of easy immutability from Google Code Jam.
You say: I didn't mean THOSE problems! *moves goal posts*)

When I asked you for examples, I wanted you to show me something that is easier to code with mutability than without. The one (1) example you allude to in either of your posts is a for loop, which I've never once felt the need for in a functional language. The whole point of a for loop is to repeat a piece of code in the context of a predictably-changing loop variable, which is easily done by mapping over a given range, or by using tail recursion.
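For instance, a counting loop with no mutable loop variable, sketched in Common Lisp (recur is just a local name, and note that CL itself doesn't guarantee tail-call elimination, though most implementations provide it):
Lisp code:
(defun sum-to (n)
  "Sum the integers 1..N; the 'loop variable' I is a parameter, never mutated."
  (labels ((recur (i acc)
             (if (> i n)
                 acc
                 (recur (1+ i) (+ acc i)))))
    (recur 1 0)))

;; (sum-to 100) => 5050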

Now, to address the questions you asked:
1. Am I disputing that mutable data structures are often much easier to write algorithms for with? Yes, I am. Lacking any examples from you to work with, I would bring up Rotate again. I've solved a problem very much like Rotate (involving falling game pieces) in both imperative and functional languages, and I can tell you from experience that trying to make the pieces "fall" by mutating the array took more code, more time, and left more room for errors than the sort-by approach I outlined here.

2. Am I disputing that it's generally easier to modify algorithms that are mutating things? Yes. Mutation permits the corruption of data, and must be carefully reasoned about - especially in the case of multi-threaded situations. Immutability helps to enforce referential transparency, which by definition guarantees that you know what the result of the code will be for a given set of parameters.

But really, to answer both of your questions at once, I would say that any claim of one style being "often" or "generally" easier than another is impossible to prove or disprove in the large. I would gladly admit that any individual may say that one style is easier for himself/herself, but it would be a mistake for an individual to assume that his/her experience is "generally" true for everyone else.

Finally:

shrughes posted:

Let's go with the stronger claim that avoiding functional programming language constructs like higher-order functions is often a desirable thing, because they make editing code a pain.

That claim is utterly baseless. If you want to share a time when you had trouble with a higher-order function, please feel free, but you've got an uphill battle trying to convince me that map, filter, reduce, and sort-by are actually making my code harder to work with.

Deus Rex
Mar 5, 2005

rrrrrrrrrrrt posted:

- Vector literals. It's silly, but I'm bugged by the use of vector literals for various things that would just be lists in CL, like function params, let blocks, for clauses, etc.

This was intentional. Hickey felt, from his own experience, that list literals are overloaded as syntactic forms in other Lisps (I haven't written or read enough code in other Lisps to judge this claim). Introducing a couple more literals into the syntax at least superficially makes the code easier to understand. For example, you can relatively quickly guess when you see something like [x y z a] in some unfamiliar Clojure code that it's a sort of binding form. Let, function definitions, destructuring forms, list comprehensions, loops, and more consistently use [] to mark binding forms. Map literals ({}) are used as Clojure syntax too, like in associative destructuring forms.

leftist heap
Feb 28, 2013

Fun Shoe

Meiwaku posted:

If it isn't for speed, what do you want mutable data structures for? That's the ONLY use case I can think of for wanting non-immutable.

Also, immutable structures offer parallelization without having to do all that upfront planning and design; that is why they're so valuable.

Simplicity of code? I find you generally have to jump through a few more hoops when writing against immutable data structures. Anyway, it really isn't that big a deal. On the project I'm working on now, Clojure's immutable data structures turned out to be super useful just because you get read-only copies pretty much "for free", which made keeping state history way simpler.

Deus Rex posted:

This was intentional. Hickey felt, from his own experience, that list literals are overloaded as syntactic forms in other Lisps (I haven't written or read enough code in other Lisps to judge this claim). Introducing a couple more literals into the syntax at least superficially makes the code easier to understand. For example, you can relatively quickly guess when you see something like [x y z a] in some unfamiliar Clojure code that it's a sort of binding form. Let, function definitions, destructuring forms, list comprehensions, loops, and more consistently use [] to mark binding forms. Map literals ({}) are used as Clojure syntax too, like in associative destructuring forms.

Yeah, I get that it was purposeful. I'm just not entirely sold on it, but it's really not a big deal. You get used to it pretty quickly.

aerique
Jul 16, 2008
In any case, it is nice to hear something non-hypish about Clojure from someone who has used other Lisps as well.

(And who has the same reservations about some of the choices that have been made :shobon:.)

leftist heap
Feb 28, 2013

Fun Shoe

aerique posted:

In any case, it is nice to hear something non-hypish about Clojure from someone who has used other Lisps as well.

(And who has the same reservations about some of the choices that have been made :shobon:.)

I like Clojure a lot, but working in it doesn't have the same comfortable, turtles-all-the-way-down feeling as CL (or, say, Smalltalk). You're never really far away from the JVM machinery underneath it. Obviously that isn't a criticism of the language itself, which is really good.

Funnily enough, I feel like most of my Clojure code would be more easily ported to Haskell than to CL.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

Tequila Bob posted:

You keep insisting on the existence of problems which are objectively easier to solve with mutability than without. You keep failing to provide actual examples, while you tell me that the examples I provide are not good enough.

Since it looks like shrughes bailed without giving you an answer, I have a few problems that are actually easier with a bit of state. I'm a CS PL grad student and I write a lot of FP (I actually spent today adding FFIs to my Scheme->x86 compiler to implement garbage collection), so I'm going to try to give you problems that I've encountered on that side of the fence.

- Unique variable IDs: if I want to generate unique variable names for, say, a compiler, I need something that I can increment that will stay persistent. It's really easy to do with a single, mutable variable storing an integer, and a function that returns the value and increments it with each call (see the sketch just after this list).

- Traversing a large tree and collecting all of the, say, leaves with a certain type, where duplicates might exist, while also returning the tree. In Haskell I'd give this the type traverseCull :: Tree a -> (Tree a, [a]). The problem, of course, is that every set of recursions needs to call union or remove-duplicates or something, but if you can define a list inside of the function (like in Scheme) that will be guaranteed empty every call, you can just set! that list. It becomes much cleaner syntactically than pulling that pattern apart and calling union every time, and you don't lose efficiency.

However, the focus was on data structures, so here are some of those, too:

- A graph! Adding two-way edges requires two updates for every edge in an immutable structure, and a single one for mutable structures

- You mentioned parallelism, but any parallelism that shares data that needs to be changed requires either much more fine-grained synchronization or stateful operation.
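To make the first item concrete, here's a minimal sketch of that mutable counter in Common Lisp (fresh-var is a hypothetical name; the counter lives in a closure):
Lisp code:
(let ((counter 0))
  (defun fresh-var (&optional (prefix "v"))
    "Return a fresh name like \"v1\"; each call bumps the hidden counter."
    (format nil "~a~d" prefix (incf counter))))

;; (fresh-var)     => "v1"
;; (fresh-var "t") => "t2"
The immutable alternative is to thread the counter through every function that might ever need a fresh name, which is exactly the plumbing being complained about here.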

Anyway...

rrrrrrrrrrrt posted:

I like Clojure a lot, but working in it doesn't have the same comfortable, turtles-all-the-way-down feeling as CL (or, say, Smalltalk). You're never really far away from the JVM machinery underneath it. Obviously that isn't a criticism of the language itself, which is really good.

Funnily enough, I feel like most of my Clojure code would be more easily ported to Haskell than to CL.

Clojure bothers me a lot in both syntax and semantics. The Hickey comment about vector literals sort of hits home for me: the overloading of list operations in Scheme is one of my favorite parts of the language, and one of the things that keeps me coming back. When everything's a list, and everything is wrapped in parentheses, you get used to writing code that is precisely the AST you're using.

The Haskell I write looks a lot like direct math. The Scheme I write looks a lot like the lambda calculus. The Clojure I've written looks like it really, really wants to be Haskell but can't get the rest of the way there due to old trappings. I feel like Clojure took the Lisp model, tried and true, and tried to make a new language out of it without really paying attention to the modern functional languages in existence and without a lot of the experience that PL developers have.

For example, someone claimed Clojure can't provide true tail recursion, which is simply untrue: all you need is CPS and a trampoline, and everything becomes tail recursive.
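For the trampoline half of that claim, here's a minimal Common Lisp sketch (names are illustrative, and a real implementation would tag its thunks instead of treating every function value as one): tail calls return thunks, and a driver loop forces them, so the control stack stays flat:
Lisp code:
(defun trampoline (f &rest args)
  "Keep forcing returned thunks until a non-function value appears."
  (let ((result (apply f args)))
    (loop while (functionp result)
          do (setf result (funcall result)))
    result))

(defun even-p* (n)
  (if (zerop n) t (lambda () (odd-p* (1- n)))))

(defun odd-p* (n)
  (if (zerop n) nil (lambda () (even-p* (1- n)))))

;; (trampoline #'even-p* 1000000) => T, in constant stack space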

That said, I absolutely love the Clojure community and the raw spirit they all have for their work and their language. For all the disdain I have for the language itself, the community makes me want to love it.

SavageMessiah
Jan 28, 2009

Emotionally drained and spookified

Toilet Rascal

QuantumNinja posted:

For example, someone claimed Clojure can't provide true tail recursion, which is simply untrue: all you need is CPS and a trampoline, and everything becomes tail recursive.

Rich Hickey never said that he couldn't do full tail calls; he said he can't do it without a cost on the JVM. The cost would be in performance and difficulty in interoperating with Java. And since the reason Clojure exists is to take advantage of stuff on the JVM, that would be a terrible tradeoff. Just because you don't know or understand the reasoning behind things in Clojure doesn't mean Rich didn't spend the better part of a decade thinking about them.

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.

SavageMessiah posted:

Rich Hickey never said that he couldn't do full tail calls; he said he can't do it without a cost on the JVM. The cost would be in performance and difficulty in interoperating with Java. And since the reason Clojure exists is to take advantage of stuff on the JVM, that would be a terrible tradeoff. Just because you don't know or understand the reasoning behind things in Clojure doesn't mean Rich didn't spend the better part of a decade thinking about them.

This isn't true in any sense of the word. You should really watch this video; the first minute pretty much sets up the straw-man argument you just made, and the rest of the video proceeds to knock it down.

The only "cost" is that you don't get nice JVM stack-traces for big recursions and using heap space can blow up (but no worse than stack space will). And it doesn't break interoperability with Java at all. All you'd have to do is teach your compiler how to identify when you're making calls out to Java and use the non-CPSed version of the function in that case.

SavageMessiah
Jan 28, 2009

Emotionally drained and spookified

Toilet Rascal

QuantumNinja posted:

All you'd have to do is teach your compiler how to identify when you're making calls out to Java and use the non-CPSed version of the function in that case.

That's awful. Boy, I'd sure love it if the performance characteristics of my code changed dramatically if I called a Java function. What if that happens in a macro? If you can't rely on TCO being there all the time then you can't really rely on it being there at all. That's why Scheme makes it part of the language.

Also this doesn't solve the problem of interop at all. Interop goes both ways! How are you calling Clojure code naturally from Java if Clojure has a totally different calling convention? How the hell is the compiler going to know if you intend to call into it from Java later?

Watching the video now and, content aside, the speaker is a douche.

EDIT: So he explains TCO and then shows a trivial, naive implementation of TCO that is totally unsuitable for most code because it would be slow as poo poo. Also, all the thunks would require generating a fuckload of classes, even more than Clojure already makes. This would also basically completely destroy any chance of HotSpot being able to do anything with Clojure code. And the stupid trick with "fixing" arity overloading by checking the metadata of the last argument wouldn't even work in general, because not everything has metadata, and passing the thunk in place of another normal argument would also gently caress up the ability of the compiler to elide reflection, which is hugely important for performant Clojure code. Seriously, did you even watch the video?

SavageMessiah fucked around with this message at 02:31 on Feb 12, 2014

QuantumNinja
Mar 8, 2013

Trust me.
I pretend to be a ninja.
You're right, it would be really terrible to make everything tail-recursive unless you call Java, because having everything blow up the stack is far preferable to having only Java calls blow up the stack.

Maybe it wasn't clear from the video, or maybe you just didn't watch it, but it leaves the original-arity definition in place, replacing its body with an invocation of the tail-recursive function, so someone using your all-TCO library never has to know.

Your complaints about speed might be well-founded for this naive implementation, but if you were going to implement a compiler to do this (which is done the world over), there are good ways to do it. Furthermore, you can actually perform a large number of optimizations on CPSed code that regular code won't allow, such as implicit loop unrolling and continuation elimination. The crux of your argument is that the HotSpot JIT optimizer can't inline closures when executing Clojure code, which seems more like an argument against using functions in Clojure than an argument against CPSing.

I don't know enough Clojure to comment on the metadata trick, but a compiler (as opposed to a macro) that performs this transformation can easily generate an additional function called fname-cps that's invoked for the CPS cases.

Maybe consider reading:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.2304&rep=rep1&type=pdf#page=7
http://matt.might.net/articles/cps-conversion/
http://research.microsoft.com/pubs/64044/compilingwithcontinuationscontinued.pdf
http://dl.acm.org/citation.cfm?id=155113
http://www.amazon.com/gp/product/05...ASIN=052103311X
http://dl.acm.org/citation.cfm?id=13333

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

shrughes posted:

Lazy evaluation is kind of awful. Maybe an OCaml thread. How about Scala, or even just Java? There are plenty of statically typed languages -- Haskell doesn't add anality so much as flexibility (in a different fashion than you'd see in Java or C#).

Lazy evaluation is perfectly fine if you understand how forcing works. It's what allows many of the aggressive program transformations that simply do not work in strict languages.

shrughes posted:

Mutable data structures are often much, much easier to write algorithms for. Not in terms of performance, but just in terms of getting it working. For example, try doing programming contest problems (like those of Google Code Jam) using mutable data structures and then immutable data structures. It's generally easier to write and patch algorithms that are mutating things.

Yes, because time-constrained, single-person programming contests are reasonable examples of how real-world engineering works.

When you have to deal with multiple programmers, and ancient code, and have to actually reason about how your program works, immutable data structures are a godsend.

Try sharing mutable state between threads and let me know how that goes for you.

Immutable structures (and non-strict semantics) allow for composition. For example, the law map f . map g = map (f . g) simply does not hold in the presence of seq (the strictness primitive). The entire technique of stream fusion relies on laws like this.

Are there programs where state is very useful? Sure. Does that state have to be mutable? Nope. The State monad hides that plumbing (replacing old state with new state). Now, mutably updating a reference as opposed to creating/destroying memory is probably (not always!) faster, but it's an optimization that may destroy other guarantees. (And you can use the ST monad in Haskell to get it.)
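For the curious, here is roughly the plumbing the State monad hides, transliterated into Common Lisp (s-unit/s-bind are illustrative names; each step is a function from a state to a result plus a new state):
Lisp code:
(defun s-unit (x)
  "Wrap X without touching the state."
  (lambda (s) (values x s)))

(defun s-bind (m f)
  "Run M, then feed its result and the new state to F."
  (lambda (s)
    (multiple-value-bind (x s2) (funcall m s)
      (funcall (funcall f x) s2))))

(defun s-get ()  (lambda (s) (values s s)))
(defun s-put (v) (lambda (s) (declare (ignore s)) (values nil v)))

;; Increment a counter "statefully" with no mutation anywhere:
;; (funcall (s-bind (s-get) (lambda (n) (s-put (1+ n)))) 0)
;; => NIL, 1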

Basically: mutable state is the root of all evil and you should think and measure very carefully before introducing it. Global mutable state is Satan himself.


Malcolm XML
Aug 8, 2009

I always knew it would end like this.

SavageMessiah posted:

Rich Hickey never said that he couldn't do full tail calls; he said he can't do it without a cost on the JVM. The cost would be in performance and difficulty in interoperating with Java. And since the reason Clojure exists is to take advantage of stuff on the JVM, that would be a terrible tradeoff. Just because you don't know or understand the reasoning behind things in Clojure doesn't mean Rich didn't spend the better part of a decade thinking about them.

Yes, the same applies to Rust and C.

Tail calls are necessary for functional programming but completely gently caress with the stack-based view of subroutines.

Rich made the right call (he made a lot of right calls with Clojure). The only reason to be on the JVM is to have Java interop!
