Getting to alpha in under 4 weeks part 2

Part 2 post on the practices we used to get the site out in under 4 weeks. Read Part 1 here.

Reuse as much as you can

We’ve built our application with Rails and tools from the Ruby community. When we needed to process our feeds, we chose Superfeedr so that we could concentrate our time on what makes our product better for users reading information on the web.

We didn’t roll our own backend processing either: we use Resque, which is powered by Redis and was born at GitHub.
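For context, a Resque job is just a class with a queue name and a class-level `perform` method; a minimal sketch (the class name, queue name, and return value are made up for illustration):

```ruby
# Sketch of a Resque background job. In the real app this file would sit
# alongside `require "resque"`; a worker started with
# `rake resque:work QUEUE=feeds` pulls jobs off Redis and runs them.

class FetchFeed
  @queue = :feeds  # the Redis-backed queue this job is pushed onto

  # Resque workers call this with the args given to Resque.enqueue
  def self.perform(feed_url)
    "fetched #{feed_url}"  # fetching and parsing would happen here
  end
end

# Enqueued from a controller or model, e.g.:
#   Resque.enqueue(FetchFeed, "http://example.com/feed.xml")
```

The enqueue call returns immediately, so the web request never waits on feed processing.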

Instead of writing my own CSS, I reviewed a couple of CSS kits that came with simple styles and a good grid, eventually settling on Foundation CSS. That made the site presentable enough to use and let me concentrate on what mattered: the reader content page.


We didn’t practise a full agile process, but we made sure we wrote stories, then reviewed and prioritized them. I made sure we spent time asking “why” about what was being implemented, which gave us a pause to review the scope of a story.

We made sure stories were small enough to deliver in under a day; otherwise we broke them up so we could better manage and prioritize them.

There was no concept of an iteration because of our short time frame and our juggled schedules, but we made sure to do a weekly review and theme the week ahead.

Zero Bug Policy

Something I have practised with my colleagues at GoFreeRange, and that was first suggested to us by Ben Griffiths: if there was a bug, we put it at the top of the backlog and fixed it quickly, making sure not to backfill the backlog with bugs.

I’ve found this encourages you not to bother with small bugs: you accept them as limitations of the current product and improve it as you go; only when a bug is important (a showstopper) do you add it and fix it. That keeps the focus on delivering features and improvements that matter, instead of bogging yourself down in things you just haven’t had the time to sort out.


A big debate among developers in startup mode is how much testing to do on your application. What I did with RSS Hero was:

  • Used outside-in testing with Cucumber for all areas of the app, making sure the user experience is tested.
  • Added tests for complex areas of the site: the queue, importing/parsing of articles, teleport test cases, the feed checking, etc.
  • Left common areas untested, like “an error should be displayed when I send no parameters to the login page”, but when these relaxed areas caused an error I added tests.

Some people say, “Why bother adding tests? Everything is changing all the time, why waste the time?” That itself is a damn good reason to write tests: precisely because everything is changing, you want a feature you implemented on day 1 to still work when things change further on. Don’t let your users see that error and have it publicly tweeted out.

I’m not saying that our app has no bugs because we’re writing tests, but we are working hard to keep the quality and experience consistent, so it will only get better, not worse.

Continuous deployment

For most developers this is a no-brainer, but spending time on continuous deployment was important. I made sure that I could run one command and it would update everything on the server and restart any background processors.

This allowed us to try new things, fix issues, and get feedback quickly.
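As a sketch, that one deploy command can just run a fixed list of remote steps over SSH; everything here (paths, host, service names) is illustrative, not our actual setup:

```ruby
# Hypothetical one-command deploy: build the list of remote steps,
# then run each over SSH. Keeping the steps as data makes them easy
# to review and to extend (e.g. adding an asset-precompile step).
def deploy_steps(app_dir: "/var/www/app")
  [
    "cd #{app_dir} && git pull origin master",      # update the code
    "cd #{app_dir} && bundle install --deployment", # install gems
    "cd #{app_dir} && bundle exec rake db:migrate", # run migrations
    "sudo service unicorn restart",                 # restart the app server
    "sudo service resque-worker restart",           # restart background processors
  ]
end

# A deploy is then a single command, e.g. `rake deploy` wrapping:
#   deploy_steps.each { |cmd| system("ssh deploy@example.com '#{cmd}'") }
```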

Kippt as an RSS Reader

Last weekend I felt like working on a hack, but what? With the recent news of Google Reader being retired in July 2013, I thought there was something cool to be done with RSS. As I was looking for ideas in my Clips, I realized that with Kippt’s latest update, a ground-up redesign with tons of new features, you could store your RSS feeds’ new entries, actually read articles just fine, and have them stored and categorized however you want with Lists.

So I already had a formula for something to hack on: RSS + Kippt.


Removing Inactive Accounts

For the past couple of months we’ve been working hard migrating our infrastructure to EC2 and updating our core feed-processing pipeline. We now have a full integration with Superfeedr. Superfeedr acts as our RSS backend, turning RSS/Atom feeds into messages, and it works faster and has better feed coverage than what we had. This will be better for everyone: you will be served faster updates, and we will no longer need to spend our resources on a backend service that has no user visibility.

Moving to Superfeedr is our first step towards turning this into an inexpensive subscription service. @sdether and I are committed to stabilizing and advancing the service before charging for it.

Our first step in moving to Superfeedr is to remove accounts that are no longer used.

Today we are sending out emails to thousands of inactive accounts, letting them know that in about one week they will be removed. We have many abandoned accounts that are taking up valuable resources. For instance, we process about 500k IM messages a day but deliver only about 50k; the other 450k are tossed because the IM account is not logged in.

If you received an email and no longer need the service, you don’t have to do anything. We will delete your account in about one week.

If you received an email and you would still like to use the service, you simply need to start receiving notifications again via email or IM. To update your account, go here.

If you need support, go here.

Thank you,

Jason Wieland


When PuSH Comes to Shove

Google is shutting down Google Reader on July 1st. With it goes a building block that was central to the feed/RSS/Atom ecosystem for many years. So what happens, from that point on, to PubSubHubbub (PuSH), or more precisely to the hub operated by Google, another central building block of this ecosystem?

A reminder: PuSH is an open protocol that significantly speeds up the delivery of RSS and Atom feeds, i.e. blog posts show up in a feed reader immediately after publication. PuSH was developed in 2009 by Brad Fitzpatrick and Brett Slatkin at Google as part of a 20% project.

The protocol is designed to be decentralized: anyone can run a hub that handles the communication between publisher (e.g. a blog owner) and subscriber. Google provided its own hub back then, and it is still running today. However, this paragraph has been on the project page since the very beginning:

PubSubHubbub is just a protocol, not a service, but we’re running this as an open test server for anybody to use to help bootstrap the protocol. Feel free to publish to or subscribe from this hub. You can migrate in the future when you want to run your own hub server, or you can just keep using this one.

A test server, to help bootstrap the protocol!

Besides the Google hub, the only other public, freely accessible hub I know of is the one run by Superfeedr. It can be used for free up to a limit of 10,000 notifications; beyond that, notifications cost (not much) money. As far as I know, Tumblr also uses the Superfeedr hub; other blog platforms run their own hubs, e.g. For self-hosted blogs, however, Google and Superfeedr are really the only options left, and both are supported out of the box by the best plugin on the topic.

To be clear: so far, Google has not announced that it is giving up the PuSH hub. But given the developments at Google around feeds/RSS/Atom (see also the shutdown of the Feedburner API), one should at least keep an eye on the alternative(s).

Update: Also read the comment by Julien Genestoux of Superfeedr.

Poko goes PubSubHubBub!

Did you know that all of you together, while browsing the site, use several times less internet traffic than we use checking blogs’ RSS? Yes, RSS checking is a serious amount of work. Since server capacity is unfortunately limited, checking RSS feeds has been quite a headache for us from the very beginning. Everyone wants their posts to appear on Poko as soon as they are written, but the only way for us to find out that a post exists is to check the RSS feed. So until now we got by like this: the most frequently updated blogs were checked every 10 minutes, the less frequently updated ones every hour, and the rest every 3 hours.

But Poko has a beta badge on its logo for a reason: from time to time the system didn’t work the way it should. For RSS checking we used the SimplePie library, which worked very well at first. Later, though, we started noticing bugs poking their ears out here and there; some of them we managed to fix, but some remained somewhere beyond the lines of code…

Meanwhile, we noticed more and more often that a pile of unchecked blogs was accumulating in the system. So we started looking for solutions. One option was XML-RPC pings. Although most blog engines support them, the drawback of this technology is that blog owners have to enter our XML-RPC URL in their blog settings themselves. We didn’t advertise this option widely, but we also didn’t expect that only 2 people would do it. We had to drop all other development work and take on implementing PubSubHubBub.

PubSubHubBub is a technology whose essence is this: when a blogger writes a post, their system (the Publisher) notifies a special server (the Hub) about the new post. That server fetches the post (i.e. checks the RSS feed) and notifies the servers that subscribe to that blog (the Subscribers). This way, services like Poko no longer need to check RSS feeds themselves; they receive the content of new posts straight from the hub. In theory, anyone can run their own PubSubHubBub hub. But probably because the PubSubHubBub protocol was invented by a Google employee, Feedburner also acts as a hub, so anyone who uses Feedburner doesn’t have to do anything. And among the blogs Poko follows, there are almost 400 of those.
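The publisher side of that flow is just a single HTTP ping to the hub. A minimal sketch in Ruby (the hub URL and helper name are placeholders for illustration):

```ruby
require "net/http"
require "uri"

# The publisher's notification is one form-encoded POST telling the hub
# which feed changed; the hub then fetches the feed and fans the new
# entries out to all subscribers.
def publish_params(feed_url)
  { "hub.mode" => "publish", "hub.url" => feed_url }
end

# e.g. right after a blog post is saved:
#   Net::HTTP.post_form(URI("https://hub.example.com/"),
#                       publish_params("http://example.com/feed.xml"))
```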

So there we go, 40% of the problem solved. The remaining 60% we solved with the Superfeedr service. It checks RSS feeds at least every 15 minutes and uses the same PubSubHubBub technology; in short, it does all the dirty work. At the same time it acts as a hub itself, so we get notifications about new posts straight from there.

So from today, all the blogs Poko follows are closely watched through PubSubHubBub glasses. This means new posts should appear on Poko within a few minutes of being written, and Poko’s latest-posts page becomes a pulse meter of the blogosphere!

Illustration: Noise to Signal.


Ernesta’s note: I’m the person who checks the suggested blogs every day, adds them to the system, and writes to everyone who left their email address. The most boring part of this job is waiting for a newly added blog’s posts to appear in the system for the first time; only then can the submitter be given the good news (the posts can be fetched by hand, but that’s not the most exciting work). Depending on how well the system behaved, the whole process used to take anywhere from 15 minutes to a good half a day. But since yesterday, posts from newly added blogs appear instantly. And I still can’t believe it. Every time posts appear instantly, I feel like jumping up and down a little. So I suspect that after this system update I’m the happiest person around. But if you write a blog, you’re probably happy too.

Adding PubSubHubbub support

Recently I added PubSubHubbub support to my site. It’s pretty simple to add, so I thought I’d share it here.

PubSubHubbub is a pub/sub model for the internet: publishers update their feeds and then inform a central hub; subscribers register with the hub and get pinged whenever the publisher has posted new data. For a more thorough explanation, refer to the PubSubHubbub site.

I will explain how to add support for an RSS feed.

Step 1:

You first need to find a hub (or maybe host your own); I used the hub at Superfeedr. Once you register, you will receive your hub URL, something like:

Step 2:

You need to augment your RSS feed with the URL of the hub that the publisher will ping once new data appears. For good measure, also add it to the response headers. Here’s what I added to my RSS feed:

<!-- PubSubHubbub Discovery -->
<link rel="hub" href="https://{{ meta.hub }}" xmlns="" />
<link rel="self" href="{{ meta.self }}" xmlns="" />
<!-- End Of PubSubHubbub Discovery -->

Here’s how I added new headers (assuming a node/express type framework):

res.links({
  hub: cfg.hub,
  self: cfg.transportRequire + '://' + cfg.apiHost + '/feeds/rss2'
});

Thus, the first time anyone accesses the feed, the links in the feed (or alternatively in the header) will inform them that the PubSubHubbub protocol is supported and that they can register with the specified hub URL in order to get updates.

Step 3:

Subscribe with the hub and provide an appropriate callback to be called each time the publisher updates its feed.
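The subscription itself is a single form-encoded POST to the hub. Here is a minimal sketch in Ruby for brevity (the same POST works from node/express); the feed, callback, and hub URLs are placeholders. Note that the hub will verify the subscription by calling your callback with a `hub.challenge` token that must be echoed back:

```ruby
require "net/http"
require "uri"

# Build the standard PubSubHubbub subscription request body.
def subscribe_params(topic, callback)
  {
    "hub.mode"     => "subscribe",
    "hub.topic"    => topic,     # the feed being subscribed to
    "hub.callback" => callback,  # where the hub POSTs new entries
    "hub.verify"   => "async",   # hub verifies the callback out of band
  }
end

# e.g. with the hub URL you received from Superfeedr:
#   Net::HTTP.post_form(URI("https://hub.example.com/"),
#                       subscribe_params("http://example.com/feeds/rss2",
#                                        "http://example.com/push/callback"))
```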

Step 4:

The publisher needs to ping the hub any time its feed is updated. Here is a sample gist of a function written in JavaScript to achieve that.

You can see it in action on my site at PackageIndex

A few Superfeedr API tricks

Bloggers love being promoted inside apps like FanPulse!  We also love being able to depend on some awesome people writing great sports posts that help people keep up with their favorite teams and games.  The problem is that a lot of infrastructure needs to be put together on the backend to support the synchronization of our algorithms with the premium content from blogs and news sources.  Fortunately, Superfeedr has some tricks that let me automate the process of adding and removing sources on the fly.

When a new source is added to our database, I fire off an after filter that subscribes Superfeedr to that feed.  When the source is destroyed, I also unsubscribe with Superfeedr.  The API makes it super easy.  Here are some examples from FanPulse’s Rails app:

Adding a feed to Superfeedr to subscribe to:
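Here is a minimal sketch of that subscribe call, reconstructed from the description below; the Superfeedr endpoint URL, the helper name, and the constant values are assumptions, not FanPulse’s actual code:

```ruby
# Placeholder values; the real ones are app-specific (see the list below).
SUPERFEEDR_LOGIN    = "my_superfeedr_login_name"
SUPERFEEDR_PASSWORD = "my_superfeedr_password"
SUPERFEEDR_CALLBACK = "http://example.com/superfeedr/callback"

# Standard PubSubHubbub subscribe parameters for one feed.
def superfeedr_params(feed_url)
  {
    "hub.mode"     => "subscribe",          # we want to add this feed
    "hub.topic"    => feed_url,             # the feed's URL
    "hub.callback" => SUPERFEEDR_CALLBACK,  # webhook that receives entries
    "hub.verify"   => "sync",               # verified in the same request
  }
end

# With the RestClient gem, fired from an after-save callback on Source:
#   RestClient.post(
#     "https://#{SUPERFEEDR_LOGIN}:#{SUPERFEEDR_PASSWORD}@superfeedr.com/hubbub",
#     superfeedr_params(source.feed_url))
```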

I have a few constants in there that are specific to our app.  

  • SUPERFEEDR_LOGIN = my_superfeedr_login_name
  • SUPERFEEDR_PASSWORD = my_superfeedr_password
  • SUPERFEEDR_CALLBACK = my_servers_callback_method

I’m using the RestClient gem by adamwiggins to make the calls; as you can see, it’s pretty darn easy.  "hub.topic" is the URL of the feed, "hub.callback" is the URL I want future entries to be pushed to via webhooks (as well as subscribing internally), "hub.verify" should be set to sync since we’re not doing an asynchronous call, and lastly "hub.mode" is set to subscribe since we want to add this new feed.

There are a few subtleties I did not mention, including the fact that your callback has to implement the basic PubSubHubbub subscribe/unsubscribe spec as usual.  That will have to wait for another discussion though.

Lastly, if you want to destroy the source from your database, just make sure you hit the Superfeedr API to unsubscribe the feed as well.
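Unsubscribing is the same request with "hub.mode" flipped; a minimal sketch (the helper name and endpoint are assumptions, mirroring the subscribe call described above):

```ruby
# Standard PubSubHubbub unsubscribe parameters for one feed.
def unsubscribe_params(feed_url, callback)
  {
    "hub.mode"     => "unsubscribe",  # only this differs from subscribing
    "hub.topic"    => feed_url,
    "hub.callback" => callback,       # same callback used when subscribing
    "hub.verify"   => "sync",
  }
end

# e.g. in the model's after-destroy callback, with the RestClient gem:
#   RestClient.post("https://LOGIN:PASSWORD@superfeedr.com/hubbub",
#                   unsubscribe_params(source.feed_url, SUPERFEEDR_CALLBACK))
```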

Hope that helps!  Big thanks to Julien for all the support and help getting FanPulse running smoothly with the awesome Superfeedr.

Let The News Find You: Msgboy Reads the Web For You In Real Time [Invites]

Keeping up with the news is pretty much a full-time job these days. Thankfully, recommendation services like Zite and Flipboard have figured out some ways of keeping their users informed without overloading them with information. Ideally, though, a recommendation service wouldn’t just learn about the articles you read in a certain app and what you or your friends share on Twitter or Facebook, but would also look at what you read in your browser throughout the day. Msgboy is trying to do just that. It reads what you read as you browse the internet and automatically subscribes you to the sites you regularly visit. It then ranks new stories based on how interesting they are likely to be for you and notifies you whenever it detects a new and potentially interesting story. Msgboy only works in Chrome right now, but will soon support other browsers as well.

Your Ticket to Paris!

Posted by Merrin

In preparation for their upcoming participation in the startup competition at the LeWeb event in Paris next month, Superfeedr has launched a competition to create “the coolest app” using their Comet API. All contending applications must be submitted by December 4th, and the winner will be awarded a free ticket to LeWeb on December 9th and 10th (worth $2,230).

The theme for this year’s LeWeb program is the real-time web, and Superfeedr is one of 16 startups that will have the opportunity to present themselves to the jury. The competition is based around one of their rivers (freely accessible streams of data): a global stream that you can get to from here.

According to Superfeedr, there is “no rule other than cool!” They are looking for an app that’s fun, profitable, useful, downloadable, open-source, and web-based. All apps must be visible by December 4th at midnight PST and must be accompanied by a blog post or doc page explaining how it works.