{-# LANGUAGE FlexibleInstances, FlexibleContexts, UndecidableInstances #-}
module MonadNum where

instance (Show (m n), Eq (m n), Monad m, Num n) => Num (m n) where
    a + b = do
      a' <- a
      b' <- b
      return $ a' + b'
    a - b = do
      a' <- a
      b' <- b
      return $ a' - b'
    a * b = do
      a' <- a
      b' <- b
      return $ a' * b'
    negate = (>>= return . negate)
    abs = (>>= return . abs)
    signum = (>>= return . signum)
    fromInteger = return . fromInteger

This little hack lets monads that wrap Nums work like Nums directly. So, for example, you can do fun stuff like:

*MonadNum> [1,2,3] * 3
[3,6,9]
*MonadNum> [1,2,3,4] * [5,6]
[5,6,10,12,15,18,20,24]
*MonadNum> [[1,2],[3,4]] * [5,6]
[[5,10],[6,12],[15,20],[18,24]]
*MonadNum> Just 3 + Just 4
Just 7
*MonadNum> Just 3 + Nothing
Nothing

And, if you don’t mind delving into the ill-advised, you can add these instances:

import System.IO.Unsafe

instance (Show a) => Show (IO a) where
    show x = show $ unsafePerformIO x
    {-# NOINLINE show #-}

instance (Eq a) => Eq (IO a) where
    m == n = unsafePerformIO $ do
                               m' <- m
                               n' <- n
                               return $ m' == n'
    {-# NOINLINE (==) #-}

And that allows you to do really, really dumb stuff like this:

*MonadNum Control.Monad> b <- (liftM read getLine) + (liftM read getLine)
*MonadNum Control.Monad> b
*MonadNum Control.Monad> getLine == getLine

Woo, I’m exhausted from having all this fun…

Clojure Fn of the day: juxt

Takes a set of functions and returns a fn that is the juxtaposition of those fns. The returned fn takes a variable number of args, and returns a vector containing the result of applying each fn to the args (left-to-right).

For example:
((juxt a b c) x) => [(a x) (b x) (c x)]
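If you don’t have a Clojure REPL handy, here’s a rough JavaScript sketch of what juxt does (the implementation and names are mine, not Clojure’s):

```javascript
// A rough sketch of Clojure's juxt in JavaScript: take some functions,
// return one function that applies each of them to the same arguments.
function juxt() {
  var fns = Array.prototype.slice.call(arguments);
  return function() {
    var args = arguments;
    return fns.map(function(f) {
      return f.apply(null, args);
    });
  };
}

var inc = function(x) { return x + 1; };
var dec = function(x) { return x - 1; };
var sq  = function(x) { return x * x; };

console.log(juxt(inc, dec, sq)(4)); // [5, 3, 16]
```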

Mind Blown at how useful that could be!

It’ll surely make an appearance or two in my dissertation.

Simplifying Series Generation in JavaScript

Series generation for things like IDs can be annoying, requiring manual loops and ID bookkeeping. But by applying a more functional style of code, this generation can become quite short:

var times = function(n, callback) {
  return Array.apply(null, Array(n)).map(callback);
};

var uniqueId = (function() {
  var number = 0;
  uniqueId = function(prefix) {
    return prefix + (number += 1);
  };
  return uniqueId;
})();

console.log(times(6, uniqueId.bind(this, 'ball_')));
//[ 'ball_1', 'ball_2', 'ball_3', 'ball_4', 'ball_5', 'ball_6' ]

The uniqueId can be replaced with _.uniqueId and times with _.times from Lodash, making the above operation one line. The unique ID uses a Lazy Function, which I have written about in a previous post; in this instance the function is immediately invoked so it can be used with bind.

Github Location:


Brian Beckman: Don’t Fear the Monad

  • A monoid is ({stuff}, operation), where the operation is associative and has a unit.

  • So a monoid is a semigroup with unit. (a +0 or a ×1 or a 1_C ⟳)
  • Brian Beckman in this video expresses a monoid with two functions f:a->a and g:a->a (same type, or domain).

  • His example is addition mod 12 (integral hours on a clock).

  • Then the monoidal category happens when you allow f:a->b and g:b->c. (linking together different types or domains)
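The clock example is easy to check mechanically; here’s a tiny JavaScript sketch (my own naming) of addition mod 12 as a monoid:

```javascript
// Addition mod 12 (clock arithmetic) as a monoid: an associative
// operation plus a unit element (0).
var clockAdd = function(a, b) { return (a + b) % 12; };
var unit = 0;

// Associativity: (a + b) + c === a + (b + c), spot-checked.
console.log(clockAdd(clockAdd(7, 8), 9) === clockAdd(7, clockAdd(8, 9))); // true
// Unit laws: adding 0 on either side changes nothing.
console.log(clockAdd(unit, 5) === 5 && clockAdd(5, unit) === 5); // true
```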

(by jasonofthel33t)

Successive Pattern Matching and the Happy Path


Successive (nested) pattern matching with exhaustive matching at each point, 16 LOC in Scala.

    case req @ DELETE(Path(Seg("users" :: name :: "bookmarks" :: uri))) => {
      val uriString = uri mkString "/"
      logger.debug("DELETE /users/%s/bookmarks/%s".format(name, uriString))
      userRepository findByName name match {
        case Some(user) => req match {
          case BasicAuth(u, p) if verify(u, p, user) =>
            user.bookmarks.remove(uriString) match {
              case Some(_) => NoContent
              case _ => NotFound
            }
          case _ => NotFound
        }
        case _ => NotFound
      }
    }


Successive happy-path style pattern matching with non-exhaustive patterns and MatchError exception handling, 9 LOC in Scala.

    case req @ DELETE(Path(Seg("users" :: name :: "bookmarks" :: uri))) => try {
      val uriString = uri mkString "/"
      logger.debug("DELETE /users/%s/bookmarks/%s".format(name, uriString))
      val Some(user) = userRepository findByName name
      val BasicAuth(u, p) = req
      val true = verify(u, p, user)
      val Some(_) = user.bookmarks.remove(uriString)
      NoContent
    } catch { case _: MatchError => NotFound }

Using for-comprehensions

A similar happy-path style with similar conciseness but without exceptions can be achieved using for-comprehensions with the Option monad. 8 LOC plus 4 LOC abstracted into the custom extractor.

    case req @ DELETE(Path(Seg("users" :: name :: "bookmarks" :: uri))) => {
      val uriString = uri mkString "/"
      logger.debug("DELETE /users/%s/bookmarks/%s" format (name, uriString))
      (for {
        MatchingBasicAuth(_) <- Some(req)
        _ <- repository.removeBookmark(name, uriString)
      } yield NoContent) getOrElse NotFound
    }


  • The nested style is ugly because it is verbose and includes multiple branches with the same result. The happy-path style is much more concise and easy to read.
  • The happy-path style “misuses” exceptions (in this case, MatchError) for non-erroneous control flow. This needs to be weighed against the conciseness and readability gains.
  • Why not an assertion instead of using true as a pattern? Because this is a functional concern, whereas assertions are a testing concern that might be disabled in production use.
  • The for-comprehension style using the Option monad is the most clean and concise. It also integrates beautifully with custom extractors.
  • Other thoughts?

An introduction to functional programming

Many functional programming articles teach abstract functional techniques. That is, composition, pipelining, higher order functions. This one is different. It shows examples of imperative, unfunctional code that people write every day and translates these examples to a functional style.

This is pretty much the most accessible intro to Functional Programming I’ve found so far - it is Super Useful & Really Cool to think about!!

Lazy Function Definitions In JavaScript

This strategy makes it possible to cache the results of complex calculations without using conditional statements such as if. To demonstrate, I have simulated a complex calculation by simply incrementing to a really high number:

var complexCalculation = function() {
    var value = 0;
    while (value < 1000000000) {
        value += 1;
    }
    complexCalculation = function() {
        return value;
    };
    return complexCalculation();
};

elapsedTime(complexCalculation);// should probably be > 1000ms
elapsedTime(complexCalculation);// should probably be < 1ms

This avoids conditionals, so it is a very efficient strategy, which is useful when the caching function needs to be called many times in a row, for example when determining how to get the current page scroll in a cross-browser way, as Peter Michaux describes. The elapsed-time function:

var elapsedTime = function(callback) {
    var start = new Date().getTime();
    callback();
    var end = new Date().getTime() - start;
    console.log('Elapsed Time', end + 'ms');
};

Github Location:

The Y combinator is a very elegant and useful formula. :)

If this all seems gibberish to you (which it probably does), look up the lambda calculus. This article is a short introduction, and mainly an explanation of the Y combinator and its use. For a more complete, but lengthy and academic, explanation there’s also the Stanford Encyclopedia of Philosophy entry on the lambda calculus, or this one, which explains it using Python. It’s definitely worth learning, especially if you’re into programming and computer science.
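For the curious, here’s one way the combinator looks in JavaScript (strictly speaking this is the eta-expanded Z combinator, since JavaScript is a strict language; the helper names are mine):

```javascript
// Z combinator: fixed-point combinator usable in a strict language.
// Y itself would loop forever under eager evaluation, so each self-application
// is wrapped in a function(v) {...} to delay it.
var Y = function(f) {
  var g = function(x) {
    return f(function(v) { return x(x)(v); });
  };
  return g(g);
};

// Anonymous recursion: factorial without naming itself internally.
var fact = Y(function(self) {
  return function(n) { return n === 0 ? 1 : n * self(n - 1); };
});

console.log(fact(5)); // 120
```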

RSA encryption in Haskell

It feels good to be back in the swing of learning. Pretty much my entire Summer was spent in Germany, doing an internship at Technische Universität Ilmenau. It was fun, but after three months of gluing together OpenCV and Android using JNI, I’m quite happy to be studying again, in an environment where I can choose my own tools and languages (most of the time).

Let me get one thing straight before I go any further: I luuuurve functional programming. My first foray into FP was about 6 years ago, when I was helping my girlfriend at the time with her first year university coursework. At Edinburgh University, the first language they teach their undergrads is Haskell, and in my opinion there is a strong case for this to be encouraged at other universities. Learning Haskell was an eye-opening experience for me, as a young programmer. This was the first time I had tried my hand at any non-imperative paradigm; I had never so much as glanced at a Java tutorial back then, and had only used Python and C up to that point. Nevertheless, from my first toy factorial program, I was hooked, and though my love affair with Haskell has been a turbulent one, it has endured longer than the romantic relationship that spawned it.

I think a lot of green-eared first year CS undergrads are in a similar position to the one I was in when I first learned Haskell. I often take prospective students on tours of St Andrews University’s CS department, and I always make a point of asking them what languages they’ve learned. Invariably, most of them learned Visual Basic at school, there are always a couple of Java or Python, but there has only been one prospective student I’ve spoken to in the last 2 years who has even heard of Haskell. So we see, Haskell is common to newbie CS students in its absence. No matter their imperative, object-oriented, or event-driven experience, the majority of new CS students have never programmed functionally, or even declaratively (though this is changing with the rise in popularity of JSON) before. Teaching Haskell as a first language to students sets a common denominator which brings almost everyone to the same level at the beginning of a course. Hopefully, it will also instil some good habits in the way they write programs too, such as thinking about the problem rather than the code. I could write accounts of some of my contemporaries in 3rd year CS to illustrate why this would be useful: the person who has all the skills to solve problems algorithmically, but cannot translate this into the simplest Java code; the person who creates unnecessary variables for everything; the person who writes one big messy chunk of spaghetti code. I’m sure anyone reading this, whether a student or working in the industry, will recognise these descriptions. I would argue that coding everything in a pure FP language such as Haskell for half a year can eliminate these habits in young programmers.

I’ve gone on a bit of a tangent here from what I was originally going to talk about, namely a project I had to do as part of the coursework for the Data Encoding course I’m doing this semester: RSA encryption. Upon a tentative “yes” from the lecturer when asked whether I could use Haskell for coursework, I set about getting back into the FP mindset. It had been about a year since I last used Haskell, and my memory for syntax is terrible, so I had my trusty copy of The Craft of Functional Programming open by my side. It’s a really nice book, but I’ll admit, it’s the only Haskell book I’ve read; other options are Real World Haskell and Learn You a Haskell, both freely available to read online.

Incidentally, I came across a good paper the other day about RSA in Haskell, which likely explains it much better than I could. Instead of an in-depth explanation of RSA and how I implemented it, I would like to use my code to illustrate some of the things I like (and don’t like) about Haskell. The first is something I love: the ease with which you can go from thinking algorithmically or mathematically to having working code. It’s often said that writing in a language like Python or Ruby is like writing executable pseudocode, and I would argue that Haskell displays this quality as well. Take, for example, the Extended Euclidean Algorithm, one of the most implemented algorithms in computer science, and one that is necessary for RSA. From the Wikipedia article, the pseudocode for a recursive implementation is:

function extended_gcd(a, b)
    if b = 0
        return (1, 0)
    else
        (q, r) := divide (a, b)
        (s, t) := extended_gcd(b, r)
        return (t, s - q * t)

As you can probably see, this fits Haskell’s programming model so well that the code practically writes itself! Here is the code for this algorithm from my RSA implementation:

-- Extended Euclidian algorithm using the recursive method, returning (gcd, x, y)
eea :: (Integral a) => a -> a -> (a, a, a)
eea a b
    | b == 0    = (a, 1, 0)
    | otherwise = (d, t, s - q * t)
    where
        (q, r) = a `divMod` b
        (d, s, t) = eea b r

In fact, the translation between algorithm and executable code is so straightforward, I was using the code shown here to explain the algorithm to a fellow student who had not seen a single line of Haskell before. The only explanation I had to give of syntax was “the ’|’ character means ‘if’”.
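In fact the recursion is so simple it transcribes almost mechanically into other languages too; here’s a throwaway JavaScript version (mine, for illustration only):

```javascript
// Extended Euclidean algorithm, returning [gcd, x, y]
// such that a*x + b*y === gcd(a, b).
function eea(a, b) {
  if (b === 0) return [a, 1, 0];
  var q = Math.floor(a / b);          // quotient
  var r = a % b;                      // remainder
  var res = eea(b, r);
  var d = res[0], s = res[1], t = res[2];
  return [d, t, s - q * t];
}

console.log(eea(240, 46)); // [2, -9, 47]: 240*(-9) + 46*47 === 2
```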

So we’ve seen one example of the good, let’s move onto the bad. One of Haskell’s strengths may also be considered a weakness: a short mental distance between algorithm and code makes it easy to implement something badly. I guess this isn’t a failing of the language really – bad code is bad code, no matter what it’s written in – but the sheer number of ways of implementing a function in Haskell often leads to bad code through laziness (in a bad “I can’t be bothered to take the dog for a walk” way, rather than a good “tail call optimisation” way ;) ). Another example from my code, this time the part that handles modular exponentiation:

-- Modular exponentiation by squaring (using Montgomery's ladder). Return x^n (mod m)
mexp :: Integer -> Integer -> Integer -> Integer
mexp x n m
    | n == 0    = 1
    | otherwise = fst (foldl (mexp' m) (x, x ^ 2) [ testBit n (k - b - 2) | b <- [0 .. (k - 2)] ])
    where
        k = ceiling ( logBase 2 (fromIntegral (n + 1)) )

mexp' :: Integer -> (Integer, Integer) -> Bool -> (Integer, Integer)
mexp' m xs b
    | b == False    = ((x1 ^ 2) `mod` m, (x1 * x2) `mod` m)
    | otherwise     = ((x1 * x2) `mod` m, (x2 ^ 2) `mod` m)
    where
        x1 = fst xs
        x2 = snd xs

I look at this and I cringe. In my 20/20 hindsight, I can see much simpler ways of implementing this, with judicious use of (mod 2) arithmetic. Nevertheless, it illustrates my point perfectly: folding across a list comprehension yielding the binary representation of a number, in order to do one operation for a ‘1’ and another for a ‘0’, for every bit, is clunky. It gets the job done, but it was literally the first idea that came to my mind on reading a description of Montgomery’s Ladder. If/when I decide to clean this code up a bit, these two functions will be first in line for overhaul.
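For comparison, here’s what the simpler approach might look like as a plain right-to-left square-and-multiply, sketched in JavaScript rather than Haskell (and only safe for small numbers, since plain JavaScript numbers aren’t bignums):

```javascript
// Modular exponentiation by squaring: return x^n (mod m).
// Walks the bits of n from least to most significant, multiplying
// the result in whenever the current bit is 1.
function mexp(x, n, m) {
  var result = 1;
  var base = x % m;
  while (n > 0) {
    if (n % 2 === 1) result = (result * base) % m;  // bit is 1: multiply in
    base = (base * base) % m;                       // square every step
    n = Math.floor(n / 2);
  }
  return result;
}

console.log(mexp(5, 117, 19)); // 1
console.log(mexp(2, 10, 1000)); // 24  (1024 mod 1000)
```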

After such tomfoolery, let’s end on an upbeat note. There are a few things I was thinking of including here, but I’ll save them for another post. Testing in Haskell is great fun. As with other interpreted languages, calling individual functions is a doddle, but one advantage (as I hinted at earlier) is the tendency of functions written in Haskell to be short, simple, and specific. Having a chunky “do-everything” function just isn’t a sensible option. This makes testing easy, as functionality remains discrete among functions. At St Andrews University, a lot of the lecturers encourage a test-driven approach. A unit testing library such as the fantastic HUnit is great for TDD, and I can honestly say that the satisfaction in writing a handful of one-liner test functions beats the hell out of tedious method testing in Java.

Anyhoos, that’s all I really want to say about this for now. Maybe a follow-up will, er, follow up, but for now here’s my code. And yes, before you say anything, I know that encoding letter-by-letter is crap. Pro-tip: don’t leave coursework until the last minute ;).

import Data.Bits
import System.Random
import Data.Char

--Keys, in the form n, k (where k is i for the public key, and j for the private key)
data Key = Public Integer Integer | Private Integer Integer
	deriving (Eq, Ord, Show)

-- Extended Euclidian algorithm using the recursive method, returning (gcd, x, y)
eea :: (Integral a) => a -> a -> (a, a, a)
eea a b
    | b == 0    = (a, 1, 0)
    | otherwise = (d, t, s - q * t)
    where
        (q, r) = a `divMod` b
        (d, s, t) = eea b r

-- Modular multiplicative inverse for a (mod m)
mminv :: (Integral a) => a -> a -> a
mminv a m
    | gcd /= 1  = error "Number doesn't have a multiplicative inverse for this modulus!"
    | otherwise = x `mod` m
    where
        (gcd, x, _) = eea a m

-- Modular exponentiation by squaring (using Montgomery's ladder). Return x^n (mod m)
mexp :: Integer -> Integer -> Integer -> Integer
mexp x n m
    | n == 0    = 1
    | otherwise = fst (foldl (mexp' m) (x, x ^ 2) [ testBit n (k - b - 2) | b <- [0 .. (k - 2)] ])
    where
        k = ceiling ( logBase 2 (fromIntegral (n + 1)) )

mexp' :: Integer -> (Integer, Integer) -> Bool -> (Integer, Integer)
mexp' m xs b
    | b == False    = ((x1 ^ 2) `mod` m, (x1 * x2) `mod` m)
    | otherwise     = ((x1 * x2) `mod` m, (x2 ^ 2) `mod` m)
    where
        x1 = fst xs
        x2 = snd xs

-- Generate public and private keys using the multiplicative inverse of i, mod phi
generateKeys :: Integer -> Integer -> Integer -> (Key, Key)
generateKeys p q i
    | gcd /= 1  = error "Public exponent i is not coprime with phi!"
    | otherwise = (Public n i, Private n j)
    where
        n = p * q
        phi = (p - 1) * (q - 1)
        (gcd, _, _) = eea i phi
        j = mminv i phi

-- Code or decode an integer, given a public/private key
rsaCoder :: Key -> Integer -> Integer
rsaCoder (Public n k) x = mexp x k n
rsaCoder (Private n k) x = mexp x k n

-- Primality tester adapted from an online source, but using my own mexp function instead of theirs
-- BEGIN --
-- (eq. to) find2km (2^k * n) = (k,n)
find2km :: Integral a => a -> (a,a)
find2km n = f 0 n
    where
        f k m
            | r == 1 = (k,m)
            | otherwise = f (k+1) q
            where (q,r) = quotRem m 2

-- n is the number to test; a is the (presumably randomly chosen) witness
millerRabinPrimality :: Integer -> Integer -> Bool
millerRabinPrimality n a
    | a <= 1 || a >= n-1 = 
        error $ "millerRabinPrimality: a out of range (" 
              ++ show a ++ " for "++ show n ++ ")" 
    | n < 2 = False
    | even n = False
    | b0 == 1 || b0 == n' = True
    | otherwise = iter (tail b)
    where
        n' = n-1
        (k,m) = find2km n'
        b0 = mexp a m n -- modified this line
        b = take (fromIntegral k) $ iterate (squareMod n) b0
        iter [] = False
        iter (x:xs)
            | x == 1 = False
            | x == n' = True
            | otherwise = iter xs

squareMod :: Integral a => a -> a -> a
squareMod a b = (b * b) `rem` a
-- END --

-- Use the Miller-Rabin method of primality testing, with a witness of 100 (i.e. a non-prime probability of 2^(-100))
primeTest :: Integer -> Bool
primeTest x = millerRabinPrimality x 100

-- Generate an n-bit random prime number
getPrime :: Integer -> IO Integer
getPrime n = do
    r <- randomRIO (2 ^ n, (2 ^ (n + 1)) - 1)
    if primeTest r
        then return r
        else getPrime n

-- Encode a string byte-wise as a list of RSA-encrypted integers (this is not a good way of doing it, as frequency
-- analysis can easily be performed for frequent characters)
encode :: String -> Key -> [Integer]
encode s k = [rsaCoder k (toInteger $ ord i) | i <- s]

-- Decode a list of RSA-encrypted integers byte-wise to a string
decode :: [Integer] -> Key -> String
decode s k = [chr $ fromInteger $ rsaCoder k i | i <- s]

mane :: IO()
mane = do
	p <- getPrime 256 
	q <- getPrime 256
	i <- getPrime 256
	putStr $ "p: " ++ (show p) ++ " " ++ show (primeTest p) ++ "\n"
	putStr $ "q: " ++ (show q) ++ " " ++ show (primeTest q) ++ "\n"
	putStr $ "i: " ++ (show i) ++ " " ++ show (primeTest i) ++ "\n"
	let keys = generateKeys p q i
	let pub = fst keys
	let priv = snd keys
	putStr $ show pub
	putStr "\n"
	putStr $ show priv
	putStr "\nType the text to encode:\n"
	plaintext <- getLine
	putStr "\n"
	let encrypted = encode plaintext pub
	putStr $ "Encrypted:\n" ++ (show encrypted) ++ "\n"
	let decrypted = decode encrypted priv
	putStr $ "Decrypted:\n" ++ (show decrypted) ++ "\n"

Commutativity is an easy property to render into English: it means order doesn’t matter.


For example “three groups of five rocks” totals the same as “five groups of three rocks”. In fact a general proposition is true: “L groups of R rocks” totals the same as “R groups of L rocks” for any L,R. This is surprising if you think about arraying the stones in piles or spirals or circles, but less so if you think about arraying them in grids.


But what about associativity? It’s a basic assumption of category theory: every monoid or semigroup is associative. Since functional programming (F#, Haskell, etc.) is based on these composition-friendly mathematics of braids, string diagrams, monads, and monoids, the associativity assumption will come up in functional programming as well.


If the property holds, then we can regroup a chain of operations however we like without changing the result. But what does it mean for associativity to hold? Just like I didn’t understand why the classical poetry I read in school was considered good until I read some truly bad poetry, I need some examples of non-associative things before I can understand what it means to assume some process is associative.

The sedenions aren’t associative


and neither are the octonions.


But these two algebras are unusual, so in order to explain why moving the evaluation parentheses changes their results, you have to first figure out how they work. Same with Okubo algebras, Jordan algebras, Poisson algebras, and vector cross-products. How about a more commonly understood subject matter?

Here is one: arithmetic of exponents.

The Berkeley calculator evaluates 2^2^2^2^2 right-to-left.

To access the Berkeley Calculator: Type bc -l at the terminal in Linux or Mac. In Windows get PuTTY and ssh

If the evaluation order doesn’t matter (associativity), then the square root of 2^2^2^2^2 should be the same as the base-two log of 2^2^2^2^2, since one cancels “from the bottom” and one of them cancels “from the top”.


But guess what?

The square root of 65536 is 256, but the log_2 of 65536 is 16. Since they’re not the same, exponentiation is non-associative.

Playing around with exponents of a few two’s and different evaluation orders can be done either in bc -l or on paper.



65536^2 = 4294967296
2^65536 = ridiculous

(2^3)^2 = 8^2 = 64
2^(3^2) = 2^9 = 512
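Those last two lines are easy to script; a quick JavaScript check:

```javascript
// Exponentiation is non-associative: the grouping changes the answer.
var left  = Math.pow(Math.pow(2, 3), 2); // (2^3)^2 = 8^2 = 64
var right = Math.pow(2, Math.pow(3, 2)); // 2^(3^2) = 2^9 = 512

console.log(left, right); // 64 512
```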

And now that I’ve played around I have a plain-English description of what associativity means: “Order of evaluation doesn’t matter”

Best Posts For 2015

This year Obscure JavaScript had a bunch of interesting posts with a more practical focus than 2014’s posts. The most popular posts are already listed to the right, so I want to note 5 posts that are not on that list but still demonstrate interesting things. In chronological order:

  • 4 Line Node.js HTTPS Server
    This is a block of code I use everywhere. Instead of having to configure a server to point to a directory and do other setup for a simple client-side code test, I just put this in the directory whose files need to be served. Then I run npm install and start the server on a port. Every file in the server’s directory or below is now accessible.

  • Function Decompilation
    The source code of any JavaScript function can be printed out at run time which opens up a lot of possibility. For example, a new syntax can be introduced that uses the names of arguments to do dependency injection.

  • Maps 3rd Argument
    An example of how an extra, little-known parameter can add a whole new dimension of programming. It even makes it possible to use map to compute Fibonacci numbers.

  • Generic Chaining
    One of the simplest ways to implement chaining is to use a chaining object that returns a generated version of itself each time chaining is done. It sounds confusing, but the implementation is very straightforward.

  • MultiCore JavaScript
    This demonstrates how to run JavaScript using Node.js in parallel. It always amazes me how far JavaScript has come, from simple scripts in a browser to full parallel programming.

Getting started with Functional Programming

For the last 6 months, I’ve been hearing more and more about functional programming and how it can magically improve the way you write code. My first thoughts on hearing about functional programming were rather naive; they were along the lines of “Aren’t all programming languages functional?” (in the operational sense), obviously not really knowing what is actually meant by the term functional.

I’m currently learning Haskell, which is a purely functional language. It is fundamentally different from most object-oriented languages, both structurally and in its level of abstraction.

Here are a few links I am finding quite handy and should help you on your way to a zen mastery of the functional way of programming.

  • Introduction to Haskell - Interactive lecture slides for an introduction to Haskell from the University of Virginia.
  • Learn you a Haskell - A great and well paced book, which has also been published online for free for all!
  • List of Lambdas - A really well written blog post by Steve Losh. Takes you through a functional way of creating lists out of JavaScript functions. This is what originally sparked my interest in functional programming.
  • Functional Programming in 5 Minutes - Another set of slides, posted recently, which explain function currying through some examples written in JavaScript.
  • Functors, Applicatives, And Monads in Pictures - This is a slightly more complicated article regarding some more abstract concepts in functional programming, but it is well written.
  • Haskell Amuse-Bouche - A Google Tech Talk from Mark Lentczner; this talk does a good job of explaining why people are really getting excited about Haskell & functional programming.
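Since currying comes up in a couple of those links, here’s the idea in a few lines of JavaScript (a minimal hand-rolled sketch, not a library implementation):

```javascript
// Currying: turn a two-argument function into a chain of
// one-argument functions, so it can be partially applied.
function curry(f) {
  return function(a) {
    return function(b) {
      return f(a, b);
    };
  };
}

var add = curry(function(x, y) { return x + y; });
var add2 = add(2);         // partially applied: waiting for the second argument

console.log(add(2)(3));    // 5
console.log(add2(10));     // 12
```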

Anyways, good luck, and hopefully you’ll find these links useful.

1 Line Random Text Code Generator in JavaScript

It is possible to write a one-line random string generator in JavaScript using only native functions. This one generates a string of length 5:

var code = (Math.random() + 1).toString(36).substr(2, 5);
// e.g. 7ggz7

I got the idea from combining the modifications in the comments to a Stack Overflow answer. Firstly, a random number is generated. The +1 is to deal with rare cases where the random number generator returns 0, 0.5, and so on:

console.log(Math.random() + 1); // e.g. 1.2071540202014148

Then the number is converted to base 36 (26 letters + 10 digits):

console.log((Math.random() + 1).toString(36)); // e.g. 1.7ggz7g07jv1dzpvi

The substr just selects 5 digits after the decimal point. This specific tactic can only generate random codes up to around 16 characters long. Various combinations of Math.random and other Math functions can be mixed and matched to create more digits to choose from. Finally, this is technically a pseudo-random text generator.
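If you need more than 16 characters, one simple workaround (my own sketch, not from the Stack Overflow answer) is to concatenate several chunks:

```javascript
// Build an arbitrarily long random base-36 code by concatenating
// 5-character chunks, then trimming to the requested length.
function randomCode(len) {
  var s = '';
  while (s.length < len) {
    // each pass contributes up to 5 base-36 digits
    s += (Math.random() + 1).toString(36).substr(2, 5);
  }
  return s.substr(0, len);
}

console.log(randomCode(12)); // e.g. '7ggz7x9k2mqa'
```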

Github Location: