# Functional Thinking in Clojure: Part 2

Welcome to part 2 of my Functional Thinking in Clojure series. Today we’ll discuss how to optimize the Number Classifier we implemented last time, the way the original Java implementation was optimized, and how to write closures in Clojure.

## The Number Classifier (Optimized and Optimizable)

If you remember, the requirements for the Number Classifier are as follows:

> … given any positive integer greater than 1, you must classify it as either perfect, abundant, or deficient. A perfect number is a number whose factors (excluding the number itself as a factor) add up to the number. Similarly, an abundant number’s sum of factors is greater than the number, and a deficient number’s sum of factors is less.

Our original implementation looked like this:
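Reconstructed from part 1, it was a sketch along these lines (the exact names may differ from the original post):

```clojure
;; Naive classifier: trial division over the whole range.
(defn factor?
  "True when candidate divides number evenly."
  [number candidate]
  (zero? (rem number candidate)))

(defn factors
  "Every factor of number, found by checking 1 through number."
  [number]
  (filter #(factor? number %) (range 1 (inc number))))

(defn proper-factors
  "All factors except the number itself."
  [number]
  (remove #{number} (factors number)))

(defn aliquot-sum [number]
  (reduce + (proper-factors number)))

(defn classify [number]
  (let [sum (aliquot-sum number)]
    (cond (= sum number) :perfect
          (> sum number) :abundant
          :else          :deficient)))
```

So `(classify 6)` gives `:perfect`, `(classify 12)` gives `:abundant`, and so on.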

For a full discussion of my reasoning behind that implementation, see part 1 of this series. For now, all I’ll point out is that it’s a naive and terribly slow implementation for one very good reason.

The implementation of `aliquot-sum` relies on finding the proper factors of a given number. The implementation of that function in turn relies on finding all of the factors of a number and then filtering them. That means that I have to check every number from 1 to `number` and see if it’s a factor.

It’s correct, but man alive is it slow.

So let’s correct that.

Neal points out in part 2 of his series (and mentions in part 1) that there is a very simple optimization: if you have the factors of a number up to its square root, you can quickly find the rest of the factors by dividing the number by each of those factors in turn.

So what we’ll do is open up our implementation to our users and give them the ability to provide their own implementation of our logic.
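Here is a sketch of that shape (a reconstruction, not the post’s exact code; names like `aliquot-sum-optimized` follow the notes below, and I show the square-root step in its working form):

```clojure
(defn factor? [number candidate]
  (zero? (rem number candidate)))

(defn proper-factors [number]
  (filter #(factor? number %) (range 1 number)))

(defn aliquot-sum
  "Slow default: trial division over the whole range."
  [number]
  (reduce + (proper-factors number)))

(defn aliquot-sum-optimized
  "Names each step with let: find factors up to the square root,
  recover the rest by division, then combine."
  [number]
  (let [sqrt        (Math/round (Math/sqrt (double number)))
        below-sqrt  (filter #(factor? number %) (range 1 (+ 1 sqrt)))
        above-sqrt  (map #(/ number %) below-sqrt)
        all-factors (set (concat below-sqrt above-sqrt))]
    (reduce + (disj all-factors number))))

(defn classify
  ([number] (classify number aliquot-sum))
  ([number sum-fn]
   (let [sum (sum-fn number)]
     (cond (= sum number) :perfect
           (> sum number) :abundant
           :else          :deficient))))

(comment
  ;; C-x C-e these in Emacs to compare timings:
  (time (classify 9000))
  (time (classify 9000 aliquot-sum-optimized)))
```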

What have we done?

1. I added a comment form at the bottom of `core.clj` that contains calls to `time` that can be executed from Emacs using `C-x C-e` if you’re using `clojure-jack-in`. They’re a quick and easy way to time how long a given call to `classify` takes to execute. This gives us the ability to see if our optimizations are actually making a difference.

2. As you can see, if there’s a part of your algorithm that you think might be worth swapping a different implementation in for, you just make that part an argument. Functions are first-class citizens in Clojure, so we simply pass the algorithm step into our function and it gets used.

3. I use the idiom of arity overloading to provide defaults for arguments all the time in my own code. This allows us to call `aliquot-sum` naked and get the un-optimized version but at the same time provide a different function if we so choose. Obviously, by default you would want the optimized version but I left it this way for clarity.

4. `aliquot-sum-optimized` uses a very common Clojure idiom: naming the steps of a computation with `let` bindings. Remember, the core optimization is to explicitly check only the numbers from 1 to the square root and then use those results to derive the other factors. This is easy to do by naming each step and then combining them later.

5. I also demonstrate here just how easy it is to use Java functions (read ‘static methods’) from Clojure. The Java Math class implements all sorts of goodness for mathematical operations and they implement it all as static methods. We love static methods because static methods are (or should be!) functional and thus we can call them directly by pretending that the class is a namespace.

6. Notice that because the base abstraction for our system is a function, I can easily reuse the same function for finding factors that I used with the `proper-factors` implementation. I didn’t wrap that up in a class that I would then have to reference elsewhere, so I just get that logic for free.

That’s enough for that.

Now we move on to the ugliness of state…

## `make-counter` With A Closure

I’ll admit outright that I don’t get closures. I think I understand them technically: you have a block of code that implicitly captures bindings to the variables of its containing scope, so each block carries its own references to those variables even after that scope has returned, independently of other blocks that close over the same scope.

I’ve also used them to great effect in Gradle et al., so I get their usefulness for library and framework construction. But maybe I just don’t see the big deal. What do they offer that’s a bigger deal than first-class functions? I don’t know.

But that won’t stop me from showing how I’d implement it in Clojure!
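A minimal sketch of what I mean:

```clojure
;; Each call to make-counter creates a fresh atom; the returned
;; function closes over it.
(defn make-counter []
  (let [incremented (atom 0)]
    (fn [] (swap! incremented inc))))

(def c1 (make-counter))
(def c2 (make-counter))

(c1) ;; => 1
(c1) ;; => 2
(c2) ;; => 1  (independent state from c1)
```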

1. Yuk! Look at that state! I have no tests written for this guy because it’s hard to test functions that rely on state.

2. `make-counter` relies on a let-bound `atom` named `incremented`, which is set to 0 at the start. Every anonymous function returned from `make-counter` gets its own locally-bound reference to `incremented`.

3. The `defn` form is actually a macro that does nothing but expand into a different form: `(def function-name (fn [args] body))`. That’s why I can `(def c1 (make-counter))` and then call `c1` as if it’s a function. It is a function, because `make-counter` returns a function, which then gets assigned to the name `c1`.

4. `atom` is one of the state-management primitives provided by Clojure. One of Clojure-on-the-JVM’s killer features is its STM and explicit state management. Everything is an immutable value in Clojure by default; if you need state, you use one of the reference types, which force you to update their contents through controlled operations. It’s overall quite a brilliant system, but documenting it is well beyond the scope of this post. For now, if you use `atom`s, you just need to know that the way to update their state is with `swap!`. Read up if you’re interested!

Hopefully you’ve gleaned some useful tips from this post. Stay tuned till next time when I cover the code samples provided in the third article of Neal Ford’s Functional Thinking series.

# Functional Thinking in Clojure: Part 2.0.0.1

Who knew I’d need 3 `.`s to number these posts!

OK, I really don’t need all those dots, but whatever. I needed some clever way to let you know that I got caught with my pants down.

Turns out that my reading challenges have continued and here I am to say that I poorly implemented the square-root factorization optimization documented in Neal Ford’s Functional Thinking Part 2. I caught it only when I moved on to working on part 3 in this series as I removed the old non-optimized code. Hurray for unit tests!

Here is the correct implementation:
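In sketch form (reconstructed; the name and `let` structure follow the earlier notes), the fixed version looks like this:

```clojure
;; Corrected approach: test only the candidates up to the square
;; root, then divide to recover each factor's partner.
(defn aliquot-sum-optimized [number]
  (let [sqrt        (Math/round (Math/sqrt (double number)))
        below-sqrt  (filter #(zero? (rem number %)) (range 1 (+ 1 sqrt)))
        above-sqrt  (map #(/ number %) below-sqrt)
        all-factors (set (concat below-sqrt above-sqrt))]
    (reduce + (disj all-factors number))))
```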

My problem was my understanding of how you utilized the square root. I thought you had to take the factors of the square root and then find which of them also factor the target number. Not so!

Instead, you only need to check whether the numbers up to the square root are factors of the target number, which suggests the use of `range`. It’s a simple change, and I’ll be cleaning it up even further in the next post, but for now this will do.

/me blushes…

# Functional Thinking in Clojure: Part 3

## Refactored Number Classifier

Neal starts his latest article by demonstrating what he would have done if he’d been programming in a functional language like Scala from the beginning. Seems like the perfect place for me to do the same in Clojure.

Voila!
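Here is a sketch of my refactored version (a reconstruction; note the `inc`, which I keep so the root itself still gets tested when the number is a perfect square):

```clojure
(defn factor?
  "True when candidate divides number evenly."
  [number candidate]
  (zero? (rem number candidate)))

(defn aliquot-sum [number]
  (let [limit      (long (Math/ceil (Math/sqrt (double number))))
        below-sqrt (filter (partial factor? number) (range 1 (inc limit)))
        above-sqrt (map #(/ number %) below-sqrt)]
    (reduce + (disj (set (concat below-sqrt above-sqrt)) number))))

(defn classify
  ([number] (classify number aliquot-sum))
  ([number sum-fn]
   (let [sum (sum-fn number)]
     (cond (= sum number) :perfect
           (> sum number) :abundant
           :else          :deficient))))
```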

Some notes about what I’ve done:

1. When I originally optimized `aliquot-sum`, I slavishly followed (or thought I did) his implementation and thus mimicked his use of `Math/round` and `Math/sqrt`. That works, but then you have to add 1 to the result before you start checking for factors. Why not just use the equally serviceable `Math/ceil` function? Instead of deciding which way to round based on the fractional part of the floating-point number, we just always round up.

2. The use of `Math/ceil` allows us to get rid of the `(+ 1 …` code, which makes the whole step slightly more readable, IMHO.

3. I pulled `factor?` out into its own function, and now my `filter`s read very nicely.

4. I’ve removed the slow implementation of `aliquot-sum` but left `classify` as a function whose guts can be replaced. If someone else wants to come along with some sort of elliptic-curve factorization algorithm that I can’t even begin to understand, they are officially my guest.

My implementation thinks about the problem a little differently than Neal’s apparently does, but like I said, I find his implementation a bit repetitious.

## Functional Testing

Neal begins by documenting some of the changes he thinks functional programming can bring to your testing.

I started off by copying his initial implementation as directly as I could.
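Something like this (a sketch; `classify` is inlined so the snippet stands alone, and the perfect-number set is the well-known list under 10,000):

```clojure
(require '[clojure.test :refer [deftest is run-tests]])

;; classify repeated here so the test file is self-contained
(defn- aliquot-sum [n]
  (let [below (filter #(zero? (rem n %))
                      (range 1 (inc (long (Math/sqrt n)))))]
    (reduce + (disj (set (concat below (map #(/ n %) below))) n))))

(defn- classify [n]
  (let [s (aliquot-sum n)]
    (cond (= s n) :perfect
          (> s n) :abundant
          :else   :deficient)))

(def perfect-numbers #{6 28 496 8128})

(deftest classify-every-number
  ;; map instead of a for loop; doall forces the lazy seq so the
  ;; assertions actually run
  (doall
   (map #(if (perfect-numbers %)
           (is (= :perfect (classify %)))
           (is (not= :perfect (classify %))))
        (range 2 10000))))

(comment
  (run-tests))
```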

1. I use `map` instead of a for loop, because for loops suck.

2. The `#(...)` construct is a reader macro that allows for very succinct anonymous function definition. I’m avoiding it more and more now that I know about `partial` and `comp`.

3. Clojure supports branching!

4. This will result in over 10,000 tests running, which is a little slower than necessary.

So I refactored immediately.

1. I didn’t like that `if` form in the original code. An easy way to get around that is to use the sequence manipulation functions to get a slightly more declarative solution. Now, I classify all of what I expect to not be perfect in the range 1 to 10000 and verify that none of them turned out to be perfect.

2. `#{...}` is the notation for creating a hash-set literal. I also `apply` `hash-set` to the range so that I can `disj` them.
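The refactored test, sketched out (with `classify` inlined again so it stands alone):

```clojure
(require '[clojure.test :refer [deftest is]])

(defn- aliquot-sum [n]
  (let [below (filter #(zero? (rem n %))
                      (range 1 (inc (long (Math/sqrt n)))))]
    (reduce + (disj (set (concat below (map #(/ n %) below))) n))))

(defn- classify [n]
  (let [s (aliquot-sum n)]
    (cond (= s n) :perfect (> s n) :abundant :else :deficient)))

(deftest nothing-unexpected-is-perfect
  ;; build the whole range as a set, disj the known perfects, and
  ;; verify none of the remainder classifies as perfect
  (let [candidates (disj (apply hash-set (range 2 10000)) 6 28 496 8128)]
    (is (not-any? #(= :perfect (classify %)) candidates))))
```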

Turns out that Neal wasn’t super happy with his implementation either, specifically because of that for loop. He transformed his thinking as well and here’s my implementation of that:
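In sketch form (same inlined `classify` as before):

```clojure
(require '[clojure.test :refer [deftest is]])

(defn- aliquot-sum [n]
  (let [below (filter #(zero? (rem n %))
                      (range 1 (inc (long (Math/sqrt n)))))]
    (reduce + (disj (set (concat below (map #(/ n %) below))) n))))

(defn- classify [n]
  (let [s (aliquot-sum n)]
    (cond (= s n) :perfect (> s n) :abundant :else :deficient)))

(deftest only-the-expected-perfects
  ;; filter the whole range and compare against the expected seq
  (is (= [6 28 496 8128]
         (filter #(= :perfect (classify %)) (range 2 10000)))))
```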

This translates pretty directly into Clojure. We’re verifying that the only perfect numbers in that range are the ones we expect via `filter`ing. Perhaps interesting to note that `=` works despite `filter` producing a `seq`.

Moving on to currying.

What the heck is currying? Or, where I attempt to appear like I have the ability to ascend far above my own head.

When I got to writing this section, I realized that I have no clue what currying is or why it’s so important. I’ve heard Rich and other Clojurians say that they explicitly decided not to follow Haskell down the full-currying path, but not why. I’ve heard Haskell people say that currying is so fundamental that you can’t write anything in their language without it.

So when two really smart groups of people disagree, what’s a relatively dumb one like me supposed to do?

Improvise!

As a side note, it seems much more common to misunderstand currying than to understand it. Neal doesn’t seem to understand currying, at least going by his article. For that matter, some of the other authors over at Developer Works don’t seem to understand it either. And in brief discussions with developer friends, the most common reaction from those who did understand currying was, ‘So what?’.

For my purposes, and so this article can at the very least be internally consistent even if it’s externally wrong: currying is taking a function of more than one argument and splitting it into a chain of one-argument functions, each returning another one-argument function, until the last argument is supplied and the result is produced.
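That definition can be written out by hand in Clojure (a toy example of my own):

```clojure
;; currying by hand: each call takes exactly one argument
(defn curried-add [a]
  (fn [b]
    (fn [c]
      (+ a b c))))

(((curried-add 1) 2) 3) ;; => 6
```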

Partial application, in contrast, is not currying, though many of the authors at IBM Developer Works seem to think it is. Partial application takes a function of multiple arguments (i.e., a function that could not exist in Haskell, where all functions take exactly one argument) and ‘fixes’ some of the leading arguments, returning a new function that takes fewer arguments (or the same number, in the case of truly variadic functions like `+`).
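In Clojure, `partial` does exactly this:

```clojure
;; partial application: fix the leading arguments of an existing fn
(def add-ten (partial + 10))

(add-ten 5)     ;; => 15
(add-ten 1 2 3) ;; => 16 -- + is variadic, so the arity stays open
```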

The question in my head is why currying matters beyond partial application, and you can see just how much I enjoy partial application by reading the code samples in this series.

One of the main advantages noted by Haskell folks in `#haskell` is that currying allows for automatic partial application. This is actually a good, albeit debatable, deal. It’s a question of verbosity. `(comp (partial a b c) (partial d e f) (partial g h i))` is a heck of a lot more verbose than `a b c . d e f . g h i`. Which do you find to be more readable? How ‘bout more writable?

Notably, Scala supports a form of this: supply fewer arguments than the function definition requires (marking the rest with `_`) and you get back a new function with the first `n` arguments fixed that accepts the rest. Again, this isn’t currying, but it does address the initial point brought up by the Haskell folks. I have no idea why Clojure doesn’t do this. Why isn’t it possible to `(comp (a b c) (d e f) (g h i))`?

Greater minds than mine will have to explain.

I’m not happy with that argument because it comes down to an aesthetic decision, not a technical one. Doesn’t feel like that should be correct. On the other hand, I do believe strongly in aesthetics driving thought so I can see the point. I’m definitely on the side that my lovely AST is more important than complete succinctness but I understand the other side.

Another possibility is something hinted at in Learn You A Haskell, namely, that you can partially apply arguments out of order, which is something I’ve been tempted to do a lot in my own source. Turns out this isn’t really valid, as this is more a product of supporting infix notation than it is of currying your functions. Obviously in a prefix language like Clojure, infix is irrelevant and thus functions don’t have ‘sides’ that you can apply an argument to. You can get the same effect by defining an anonymous function but it’s much more verbose.

To wrap up this little tour of currying, 2 points that I understand almost not-at-all. One, several people mentioned that pointfree programming rocks. I read through the wiki page and this again seems to come down to an aesthetic decision. Two, several people in #haskell continually asked me what ‘type’ a variadic function would have, and how you would define the argument type constraints of your function in the case of variadic arguments. As a Clojure programmer, I’m not used to caring about types (unless I pass the wrong one in!), but this is apparently very important to the Haskell folks. It seems like this is for the same reason that most people argue for static type checking, namely, compiler level function call verification. I’ve obviously been won over by the dynamic language siren since I’m using Clojure, but I can see the point. This is the case where Haskell is more verbose than Clojure. It’s up to you whether that’s a good or bad thing.

Enough theory, let’s get to the code!

## Neal’s ‘Currying’ examples

Now that I’ve thoroughly destroyed Neal’s notion of currying, let’s get on to exploring what he does through partial function application, which Groovy does indeed support, albeit through a method, not automagically like Scala.

I think the only thing terribly interesting here, as in new to show off, is the recursion construct that Clojure supplies. Everything else is vanilla function composition, as I’ve shown off in my other posts.

If you’re not familiar with recursion, the good news is that you don’t have to be! In Clojure, at least, explicit recursion is pretty uncommon, because you can accomplish much of it through list comprehension in its various flavors. It’s so uncommon that whenever I’m forced to use it, for a programming challenge or something like this, I always have to look it up again.

Basically, Clojure supports plain recursion just like any other language: call yourself within your own definition. The badness here is that there is no tail-call optimization on the JVM. So rather than solving that problem partially and off-loading onto you the overhead of reasoning about whether this or that call will blow the stack, Clojure simply says you never get it implicitly; you must explicitly use the `recur` form, which gives you the benefits of tail-call optimization on the JVM.

A couple of points:

1. `recur` must be used in the tail position of your function.

2. `recur` outside of a `loop` uses the function definition as its loop point.

3. If that doesn’t work for you, as in this case, you can supply a `loop` form, which gives it its own bindings, as in `let`.
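A toy example of my own (not Neal’s code) showing all three points:

```clojure
(defn sum-to
  "Adds the integers from 1 through n in constant stack space."
  [n]
  (loop [i 1 acc 0]                ;; loop introduces bindings, like let
    (if (> i n)
      acc
      (recur (inc i) (+ acc i))))) ;; recur must sit in tail position

(sum-to 100) ;; => 5050
```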

I think that’ll do for this iteration of my Functional Thinking in Clojure series. Hopefully you’re continuing to get good stuff out of it and you’ll stay tuned for next time.