
In addition to shamelessly running around naked and drinking from tiny coffee cups, squirrels spend a great deal of their time collecting and hiding (or caching) food. When in the presence of other squirrels, they’ve even been known to pretend to hide their food in one spot only to then go actually hide it somewhere else in hopes of preventing the competition from stealing their cache.

In this hilariously awesome video we meet Shannon Apple’s pet squirrel Wally, who is busy trying to bury a large nut in the fur of an extraordinarily patient Bernese Mountain dog named Jax. Wally makes a few valiant attempts to take advantage of Jax’s silky coat before hopping off to try his luck elsewhere. Good luck with that Wally!

Visit Shannon Apple’s YouTube channel for more videos of Wally’s antics.

[via io9]


There’s this thing called FeaturePoints: basically, you play apps, and for each app FeaturePoints gives you points, which you can redeem for Amazon or iTunes gift cards, or PayPal.

And for every app that you download, you get an extra chance to win an iPad mini.

I don’t use Amazon or iTunes, so I got PayPal instead, see!


To sign up, go to on your iPhone/Android phone to start with 0 points, or use this link to get 50 bonus points when you sign up

OR use the referral code CS9QNP to get 50 bonus points if you don’t use the link above

++ If you did this, please like this post so I can check out your blog! 

and if you have any questions ask me here

Caching In

When a programmer is willing to live by certain restrictions, it is often the case that they will gain something in return. In 1957, when John Backus developed Fortran, he understood that a higher-level language would allow programmers to produce more complicated programs more quickly, at the cost of a few extra machine instructions and the time it took to run the compiler.

Programmers adapted, machines got faster, and programming languages became more mature, supported new abstractions, and came with new sets of restrictions to overcome. In some ways, things did not change at all: every new language had a different set of trade-offs, a trend that still hasn’t changed over 50 years later.

Programming language designers, however, are not the only people who struggle with finding an acceptable set of trade-offs in order to provide an adequate solution. No, API developers suffer from the same ailments, and in some ways might even be in a more difficult position.

Think about it. How many programming language developers do you work with? Now, how many programmers contribute in some fashion to your product’s API?

The answer to the first question almost certainly averages close to 0, and to the second? That is most likely much closer to the number of programmers in your organization than it is to 0.

One API that we have at Meetup is no different—our ORM. Our ORM is home-grown, and is code-generated straight from MySQL, a set of Mako templates, and a Python script. It has lots of niceties built into it, which makes the fact that you are likely using it with Java not seem all that bad. One aspect of the ORM that is worth discussing is something whose name itself describes its purpose—AutoCache.

The ORM, or Entities, as it’s internally called, has built-in support for the Memcache pattern, but sometimes that just isn’t enough. Large portions of code space must be allocated in order to manage invalidation and other dependencies. Essentially, the ORM does the easy part and forces a developer to struggle with the hard part.
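The “easy part” here is the classic read-through pattern. A minimal Python sketch of the idea (all names below are hypothetical, not Entities’ actual API; the memcache client is faked with a dict):

```python
import json


class FakeMemcache:
    """Stand-in for a memcache client (get/set/delete)."""

    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value, expire=0):
        self.store[key] = value

    def delete(self, key):
        self.store.pop(key, None)


cache = FakeMemcache()


def cached_query(key, run_query, expire=300):
    """Read-through: return the cached result set, or run the
    query and cache its result under the given key."""
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    result = run_query()
    cache.set(key, json.dumps(result), expire)
    return result
```

The hard part, which AutoCache automates, is knowing which keys to delete when the underlying rows change.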

AutoCache attempts to fix that, but it is not without its own set of trade offs.

The system works by first annotating queries with metadata related to the primary and foreign key columns that make up the query. For example, when writing a query that involves RSVPs, the keys that will help invalidate the cache are the RSVP’s primary key, rsvp_id; the foreign key to the events table, event_id; and the foreign key to the chapters table, chapter_id.

When an RSVP is modified, a key template is looked up in a map, filled in with the details of the modification, and a delete request is sent to memcache with the constructed key.
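That lookup-and-delete step can be sketched in Python (the map layout, template syntax, and names here are illustrative, not the ORM’s real internals):

```python
class FakeCache:
    """Stand-in for a memcache client; records deletes."""

    def __init__(self):
        self.deleted = []

    def delete(self, key):
        self.deleted.append(key)


# Map of entity -> key column -> cache-key templates registered
# by @AutoCache annotations.
TEMPLATE = "EventHelper.MemberRSVPS:chapter_id={chapter_id}:member_id={member_id}"
invalidation_map = {
    "Rsvp": {
        "chapter_id": [TEMPLATE],
        "member_id": [TEMPLATE],
    },
}


def invalidate(entity, row, cache):
    """On a write to `row`, fill in every registered template for the
    entity's key columns and delete the resulting memcache keys once."""
    keys = set()
    for column, templates in invalidation_map.get(entity, {}).items():
        if column in row:
            for template in templates:
                keys.add(template.format(**row))
    for key in keys:
        cache.delete(key)
    return sorted(keys)
```

Calling `invalidate("Rsvp", {"chapter_id": 42, "member_id": 24}, cache)` would delete EventHelper.MemberRSVPS:chapter_id=42:member_id=24, matching the worked example later in the post.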

As an example, suppose we have this query:

@AutoCache(ns="EventHelper.MemberRSVPS", keys="Rsvp.chapter_id,Rsvp.member_id")
List eventsWithRsvpInfo = Event.Query().
    equal(Event.CHAPTER_ID, chapter.getId()).
    sql("status in (?,?)", Status.Event.PAST, Status.Event.ACTIVE).
    key(Rsvp.CHAPTER_ID, chapter.getId()).
    key(Rsvp.MEMBER_ID, member.getId());

This query retrieves a list of event ids and responses for a specific member in a specific chapter. We then end up with an entry in the invalidation table (using a shorthand notation):

invalidationMap[Rsvp]['chapter_id'] = "EventHelper.MemberRSVPS:chapter_id={chapter_id}:member_id={member_id}"
invalidationMap[Rsvp]['member_id'] = "EventHelper.MemberRSVPS:chapter_id={chapter_id}:member_id={member_id}"

Now, if we change an RSVP for chapter_id 42, member_id 24, we’ll look up all of the relevant key templates, fill them out, and execute memcache deletes on each key created. In this example, only one key is deleted, EventHelper.MemberRSVPS:chapter_id=42:member_id=24, but you can see how this number would increase with more queries being entrusted to AutoCache.

So, what trade-offs exist here?

Perhaps the biggest trade-off is also the most important one—you must only run updates to these tables via the ORM, and it is easy to see why.

First and foremost, AutoCache is an ORM-layer device. It’s this way because the ORM’s job is to build a query from metadata that is supplied to it. It is this metadata that can then be interrogated in order to build substituted cache key templates that can store/delete result sets at predictable locations.

As it turns out, though, all this means in practice is that we can’t cache AutoCache-managed queries indefinitely—we must provide an expiration, because a write that bypasses the ORM will never trigger an invalidation, and expiry bounds how stale such an entry can get.

Other trade-offs exist as well, mostly related to the types of queries that can actually be managed.

Looking to the future

To rectify the problem of not having enough metadata in "raw", non-ORM queries, it might make sense to parse the query into an AST and pull out the relevant data from that. Parsing every query that is made, however, would be quite expensive.

The approach we’ve started to take is even simpler—produce the AST instead of writing raw queries to begin with. Using our expressive SQL builder, it’s easy to turn:

SELECT e.event_id, r.response
FROM event e
INNER JOIN rsvp r ON e.event_id = r.event_id
WHERE e.status IN (?, ?)
  AND e.chapter_id = ?
  AND r.member_id = ?

into the equivalent

.join(JOIN("rsvp", $EQ("event.event_id", "rsvp.event_id")))
.where(AND(IN("event.status", Status.Event.PAST, Status.Event.ACTIVE),
           KEY("event.chapter_id", chapter.getId()),
           KEY("rsvp.member_id", member.getId())))

which is 1000x more useful for the purpose of analysis.

We’ve been letting AutoCache manage queries since last October, and are using it more and more as the weeks go by. We might never have a system without trade-offs, but in all honesty, is that even possible?

Swamps, sheep, and cats, oh my! - 25, August, 2011

Today was our day to geocache. Peter and I (because I still owe Tinus money) split the cost of a GPS, and this was the first time we were going to use it.


Tinus and Peter mostly had control over the GPS, so they know better how it all works, but it seemed to do just fine. Today, we were on the hunt for two caches in Asperen. One was a multicache, and the other was labeled a “Letter box hybrid”, but it seemed to work like a multicache. A multicache is a geocache that has several steps involved. The coordinates that you get online are only for the starting point. After that, there are two options: 1. You must solve questions or riddles using information from your current surroundings to find the numbers for the coordinate of the cache, or 2. You must find small cache-clues that give you hints to where the end cache is.

We decided to try to find the cache that was supposed to be in or near a swamp first. We had a very rough time with this one, and I will tell you right now we didn’t find it.
The first set of coordinates given were for a starting point, where you can park your bike. We accidentally skipped this one, and ended up riding our bikes close to where the first part of the cache was supposed to be. We were up on the road, and the cache was somewhere in a thicket of trees on the other side, down an embankment. We saw a spot at the side of the road which looked like a good place to put our bikes. I went to park my bike first.

There was a lot of tall grass and nettle here, which gave the illusion of a slightly sloping area, completely hiding the fact that this was a steep, prickly hill of doom. I started to wheel my bike off to the side, and before I knew what was going on, started careening down the hill uncontrollably, crashing through all that lovely nettle. Mind you, I was wearing shorts today, because it was hot and humid and I decided that I would rather deal with a few possible scratches than die of heat exhaustion. At this point, I was really really really wishing I had chosen to wear pants, because oh my lord, did this sting. Tinus asked why I didn’t use the handbrakes or, you know, let go of my bike. One, I didn’t think to actually use my bike’s handbrakes until the very end, and two, I was practically running, and I think that if I had let go of my bike then, I probably would have tripped over it, and then would have really had something to complain about.

As I mentioned before, this was not the bike friendly starting point we were supposed to be at, so we had to then drag our bikes out of this horrid place (Peter and Tinus ended up putting their bikes here too, just not as flashily as I - "you mean as retarded as you?" - Tinus). Tinus offered to get mine for me, what a gent.


So we walked into the town, then back to the area we had been before. We walked into a grassy area with sheep, which were much bigger than the ones me and Tinus saw on our last caching attempt. Supposedly, some of these sheep were aggressive, chasing cachers and in one case, a bird. They were nicknamed “The Fighting Sheep”. I wasn’t afraid though, I would love for one of them to come at me … as long as I didn’t end up in all the crap that was lying around. That would definitely kill me.
Eventually, we found the entrance into the “swamp”, so in we went.


That’s Peter’s game face for sure.

There are only two insects in this world that I absolutely, without a doubt, cannot stand: flies, and mosquitoes. This area was a perfect breeding ground for mosquitoes, and my worst nightmare. They were EVERYWHERE! And no matter how many times I brushed my hands over my arms and legs, or smashed one that had already bitten me, within milliseconds there was another one on me. I told Tinus, “If I slap you randomly, don’t worry, I am just killing a mosquito”. Overall, this place was awful. I was still itching terribly from my nettle incident before, and now I was not only getting stung by more nettle, I was getting bit to high heaven by mosquitoes. Peter, who was wearing pants, just turned to me and Tinus (who was also wearing shorts) and said “Seriously you guys, what were you thinking, IT’S A SWAMP!!”

We spent maybe 15 minutes searching for a hint or whatever this was supposed to be. Oddly enough, it wasn’t like there were many places to actually search. On the right of us was a small inlet that seemed pretty stagnant, and to the left was overgrown brush. We had to walk through the area on some boards that had been placed there, but there were still plenty of tall plants narrowing this already narrow walking space, and it got worse the farther we went in. We decided to look for the second hint.

This was somewhere in a field, with more sheep. We found nothing around here either. I was starting to feel a little disenchanted, I was sweaty, sticky, bleeding in spots from scratching so much, and now it started to rain. I didn’t even have enough pep in me to try and play with the sheep. We called it quits on this one and decided to give the second cache we picked a go.

This one seemed more friendly. We needed to walk around the town of Asperen, finding specific mailboxes. The goal was to find the mailbox that corresponded with photographs posted on the cache listing, and then find the house number that the mailbox belonged to. Each mailbox was given a letter, A-R. Once you find the mailbox and its corresponding house number, you write down the letter associated with that mailbox on a chart with numbers 1-88 (the numbers represent the house numbers). For example, you find the mailbox from the picture labeled A. Its house number is 7, so you go to the chart, find the number 7, and write down A. Each number has a direction, for example: go left, go back, go through the door, pet a cat, etc. When you have written down all the letters in the correct place, you then follow the directions in alphabetical order which should lead you to the cache.
Golly, I hope that made sense.

We spent quite a few hours walking around, and I admit, I was being a bit of a cheese hobo. I have mentioned before that there are lots of cats roaming around Holland, and today, they were out in droves! I can’t even count how many kitties I saw, but I knew I had to pet them all, and I did. Black ones, orange ones, tabby and tortie, and all so friendly! Sooo fluffy, soooo beautiful! I really miss having a kitty cat.

Ahem. We had to go back and run through the area twice, because after following the directions, we never found the cache. We knew there were some we got wrong, and some we just didn’t find. We already gave up on one cache, and it would be a really disappointing thing to have gone through all that effort just to not find another. This entire time we were walking on foot. “Why on earth have we not been riding our bikes!?” we wondered. So, we started to retrace our steps, but on bikes this time, which was so much easier than walking.
Everything seemed to be adding up until we got to the letter O. We never actually found it the first time. On our second run through though, we came across it! What a sneaky sneaky bastard this mailbox was! The owners of the house had been doing some renovating, so there was a large blue sheet covering the front of the house, thus blocking the view of the mailbox. Also, I think Tinus said something about the mailbox having been removed or something.
Sure enough, this particular set of directions was the one that was going to make us or break us. Without them, we were sent in a completely different direction, which was why we never found the cache.

So, after following the complete set of directions, we did find it. The directions took us to some sort of mill, or something like that. It just looked like a tower to me, and the boys never did much explaining. We started poking around places, and I saw a mailbox on the side of the building. I began to open it and Tinus said “Maria, now you’re invading other people’s mail boxes.”
"Actually, this is the cache, look," said Peter.



:D And here I thought I was just looking in people’s mail boxes! My mind was too tired to consider this might have been an unethical/illegal thing to do. Besides, we were geocaching, and everything is fair game when you’re geocaching. 15 minutes later it clicked for me though. It’s fitting that the final cache was a mailbox, since this entire thing was centered around mailboxes. I finally dropped off that geocoin I brought from Colorado. One tiny problem though: I forgot to write down the tracking code that is on the coin. So now I need to go back there and grab the code so I can report that I dropped the coin off. Oi vey.

Overall, an exhausting day. I have a slight migraine and my skin is all sticky. I would take a bath, but it is 2:46 PM in Colorado, which means it is late here. Now it is time to relax, and probably check myself for ticks.

A Slightly Longer Guide to Setting up a Caching / Forwarding Name Server

Setting up a caching name server on RHEL6 works out of the box for RHEL-type distributions. From there, it is just a matter of adding changes to turn it into a forwarding, master, or slave name server.

Of course, in our case, there were glitches.

Reviewing Michael Jang’s RHCE prep book, the instructions for setting up the caching name server are as follows:

1) Install bind-*

2) Enable named with “chkconfig named on”

3) Start up named with “service named start”

That worked.

What did not work was this:

rndc status

That returned with the following message:

  [root@localhost ~]# rndc  status

  rndc: neither /etc/rndc.conf nor /etc/rndc.key was found

As it turns out, rndc.key was not generated when we installed bind. So let’s resolve this. While we are at it, we will also set up some ACLs and forwarding.

First, fixing the immediate problem involves running a command to generate a new key:

  rndc-confgen -a

which, by default, will write a new rndc.key file into /etc.

However, even though it defaults to a key length of 128 bits, generating the key on a virtual machine took a long while. So, in the interest of time, and to try something new, we bypassed the blocking entropy pool by using the OS’s non-blocking random generator:

  rndc-confgen -r /dev/urandom -a

(Note: you probably wouldn’t want to do this in a mission-critical environment.)

With the file generated:

  [root@localhost etc]# cat /etc/rndc.key

  key "rndc-key" {

       algorithm hmac-md5;

       secret "WxQEDQ8KY+SII48TfeV92w==";

  };

We will next need to give it the appropriate permissions and ownership:

  chown root:named rndc.key 

  chmod 644 rndc.key 

  chcon -u system_u -t etc_t rndc.key

What we have done here is put the group ownership under the named group and made the file readable so the named user can read it. Afterward, we used chcon to give it the appropriate SELinux security context for named.

With that done, we insert the following line at the end of the named.conf file.

  include "/etc/rndc.key";

We will also need to set up the forwarders somewhere in the options { } section.

  forwarders {; };

And disable the following lines as well, since we are not using DNSSEC on our network; otherwise, we will not get a response back for our internal domains:

  # dnssec-enable yes;

  # dnssec-validation yes;

  # dnssec-lookaside auto;

  # bindkeys-file "/etc/named.iscdlv.key";

(In real life, DNSSEC is highly recommended for managing DNS in order to avoid DNS poisoning. We’ll work on that another time.)

While we are at it, let’s add some ACLs to named.conf:

  controls {

         inet allow {localhost;};

  };
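Put together, the named.conf additions from this section look roughly like the following. Note this is a sketch: the forwarder address is a placeholder from the documentation range (substitute your own upstream resolver), and the controls statement is written in the fuller form BIND expects, with an explicit listen address and key:

```
options {
    // ... existing options ...
    forwarders {; };   // placeholder; use your upstream resolver's IP

    // DNSSEC disabled for this internal setup:
    // dnssec-enable yes;
    // dnssec-validation yes;
    // dnssec-lookaside auto;
    // bindkeys-file "/etc/named.iscdlv.key";

controls {
    inet port 953 allow { localhost; } keys { "rndc-key"; };

include "/etc/rndc.key";
```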


Now we restart named and run the original rndc status command:

[root@localhost etc]# rndc status

version: 9.7.3-P3-RedHat-9.7.3-2.el6_1.P3.2

CPUs found: 1

worker threads: 1

number of zones: 19

debug level: 0

xfers running: 0

xfers deferred: 0

soa queries in progress: 0

query logging is OFF

recursive clients: 0/0/1000

tcp clients: 0/100

server is up and running

To ensure that our caching server works, we added the following as the first line in /etc/resolv.conf


If your server is on a DHCP network and you are using dhclient (like mine), you can have the entry added to the resolver file by adding the following line to dhclient.conf (in my case, /etc/dhcp/dhclient-eth0.conf):

prepend domain-name-servers;

With that done, let’s do a test. We will run the following command to query for an internal domain:

dig @

While on the DNS server, we will monitor DNS traffic from the client:

The result? On the client:

[root@localhost data]# dig @

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-2.el6_1.P3.2 <<>> @

; (1 server found)

;; global options: +cmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45507

;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1


; IN A




;; Query time: 0 msec


;; WHEN: Sun Sep 11 14:11:56 2011

;; MSG SIZE  rcvd: 86

On the DNS server:

[root@centos log]# tcpdump -i eth0 port 53 and src host

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

14:19:31.219035 IP >  10966+ [1au] A? (48)

So we know that the client has queried the DNS server and retrieved the correct IP. Now let’s see if it has been cached. We run the command again:

[root@localhost data]# dig @

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-2.el6_1.P3.2 <<>> @

; (1 server found)

;; global options: +cmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5672

;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1


; IN A




;; Query time: 0 msec


;; WHEN: Sun Sep 11 14:15:56 2011

;; MSG SIZE  rcvd: 86

Again, we got the correct IP returned. Did it work?

Yes, because on the DNS server, we did not receive a query from the client:

  [root@centos log]# tcpdump -i eth0 port 53 and src host

  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

  listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

  14:19:31.219035 IP >  10966+ [1au] A? (48)


In fact, we did not see any output until we retried the same dig command a few minutes later, presumably after the cached record’s TTL had expired:

  [root@centos log]# tcpdump -i eth0 port 53 and src host

  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

  listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

  14:19:31.219035 IP >  10966+ [1au] A? (48)

  14:21:07.437358 IP >  22604+ [1au] A? (48)

At this point, we have verified that the caching server works. We are done here.

I love fragment caching in Rails

Phil Karlton said there are only two hard things in computer science: cache invalidation and naming things.

While the exact quote can be debated, nobody can argue that these two things aren’t extremely difficult (and for the record, I tend to agree with Phil). This post is about the first one: caching and the expiration of that cache.

While caching can become a headache as the data model gets more complex, at a rudimentary level, a little caching in Rails can not only be done extremely easily, but it can go a LONG way.

FourthSegment had a dashboard action that was performing less than ideally, so I decided to spend some time seeing if I could speed things up.

I sat down for about 20 minutes, made a couple changes, and pushed them live.  The results — recorded over 7 days by New Relic —  speak for themselves:



All I did here was implement some simple fragment caching around the view of the conversation table, along with the matching cache expiration when the underlying data changes. Note that the cache fragments are keyed by a prefix and the site ID. If I left the site ID out, then all users would see the dashboard from whoever was the last one to refresh the fragment. Not ideal :)
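Sketched in Python rather than Rails (hypothetical names throughout; Rails’ real helpers are `cache` in the view and `expire_fragment`), the keyed-fragment idea is just this:

```python
# In-memory stand-in for the memcached store backing the fragments.
fragment_cache = {}


def fetch_fragment(prefix, site_id, render):
    """Return the cached fragment for (prefix, site_id); render on a miss."""
    key = f"{prefix}/{site_id}"
    if key not in fragment_cache:
        fragment_cache[key] = render()
    return fragment_cache[key]


def expire_fragment(prefix, site_id):
    """Drop the fragment so the next request re-renders it."""
    fragment_cache.pop(f"{prefix}/{site_id}", None)
```

Leaving site_id out of the key would collapse every site’s dashboard into one shared fragment, which is exactly the bug described above.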

FourthSegment runs on Heroku, so I used their free memcached add-on to facilitate the actual caching.  It was super easy to set up.

There’s something very interesting in this graph that might not be obvious right away.  Notice the dark green on the left side.  That’s request queuing time.  It’s the amount of time Heroku’s nginx layer had to hold onto the request before my application was ready to process it.  It doesn’t show up nearly as much after the fix was put in place.  In this case, the dashboard was taking so long that, despite several concurrent processing threads, requests were piling up and blocking other requests.

The bottom line here is that you shouldn’t let the fact that caching is hard stop you from spending a little bit of time on your worst endpoints.  It can go a long way towards speeding up your entire application.

Lazy programming.

Apple’s iOS provides programming utilities called “CoreLocation” and “Background Location Updates” specifically to allow developers access to your GPS or other location data. That some of this information might be cached for later use shouldn’t be too surprising.

But it’s just poor technique to allow information to accumulate indefinitely… as that consumes storage space unnecessarily. To do so with personally identifiable information is negligent and Apple should remedy the problem immediately. 

iOS already prompts the user to Allow/Decline the use of his current location. There should be a user setting to control the caching of historical data, too… and for how long.