pir8grl said:

You can tell I'm losing it...I'm imagining scenes onto other people's fics, the way I imagine my own fics onto episodes... I have this image in my head now of Wilf having finagled his way into Ian's room by saying he was his granddad and reading him Winnie the Pooh (or maybe Paddington). Of course, he'd brought a big book for when Ian is feeling better, but who doesn't love a story about a favorite bear when they're sick? (Crawling back under my rock now...)

NO BUT THAT’S ADORABLE!!!  ALL THE WILF BEING ADORABLE SHOULD HAPPEN.

I don’t know if it will, since the next chapter is still sort of tenuous in my mind, but STILL.

Watch on toastyhat.tumblr.com

Mumford and Sons—Sister

One of my favorite songs of theirs that isn’t on an album.  Hopefully they can include it on the next one!

Tumblr's Finagle Redis Implementation

Tumblr uses redis as a cache/store for a number of our internal applications. Some of the things that depend on redis include:

  • dashboard notifications (via staircar)
  • Tiny URLs (via gob)
  • gearman persistent storage (via george)
  • note counts (also via staircar, coming soon)
  • dashboard data (coming soon via ira)

Over the past six months we’ve started shifting internal services from being C/libevent based to being Scala/Finagle based. Finagle is a wonderful little library for building asynchronous RPC servers and clients. We’ve built a number of services (besides the ones above) with it; I talked about some of our usage here.

Back in October we created a native redis implementation for finagle, and as of yesterday Marius (one of the finagle leads at Twitter) pulled it into master. Although I’m sure this will take a slightly different shape over the coming months as Twitter engineers have a chance to work with it, I’m proud to have helped shape a lot of that code along with Bennett, Dallas and Wiktor.
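
If you’re curious, using it from Scala looks roughly like the sketch below. This is a minimal, from-memory sketch of the client as it was merged; the helper names (Client, StringToChannelBuffer, CBToString) and the key used are assumptions on my part, so check them against the finagle-redis sources.

import com.twitter.finagle.redis.Client
import com.twitter.finagle.redis.util.{CBToString, StringToChannelBuffer}

// Build a client against a single redis instance (host:port is an example).
val redis = Client("localhost:6379")

// Writes and reads return Futures, so nothing here blocks the calling thread.
redis.set(StringToChannelBuffer("note_count:123"), StringToChannelBuffer("42"))

redis.get(StringToChannelBuffer("note_count:123")) onSuccess { value =>
  println(value map (CBToString(_)))   // Some("42") if the key exists
} onFailure { e =>
  println("redis call failed: " + e)
}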

If you’re passionate about massively scalable services like this, we’re hiring.

Now that the Pathfinder RPG Advanced Class Guide has been out for a few days, and people are loving Shardra and all the other new iconic characters, I thought I’d draw a bit of attention to the one thing I wrote for the book. It’s just one wondrous item that appears without illustration on page 229. I wrote a version of this on the Paizo message boards months ago and managed to finagle a revision into this book with the blessing of the entire creative staff and great edits by amazonchique, jessicalprice, Logan Bonner, Judy Bauer, and others.

As far as game rules go, it doesn’t do much—you can certainly find more min-maxy items—but I wanted to make sure players and GMs had concrete rules backup for including such backgrounds and stories in their games. It’s not meant to suggest that this is the only way characters can change their sex in the Pathfinder RPG or that any character needs to take this, but it’s an option for folks who might want it and a rules departure point for any related effects you might want to create.

Beyond that, though, we always say the Pathfinder RPG rules let us tell the stories we want to tell, so I wanted to make it clear that stories about amazing individuals overcoming challenges and being exactly the people they want to be are absolutely among the stories we want to tell.

This text currently appears in the Advanced Class Guide but will also soon be included along with all the rest of the Pathfinder RPG rules, for free, on the Pathfinder RPG Reference Document.

I hope folks dig this little potion and find it helpful for creating exactly the sorts of characters they want to play and for telling even more awesome stories!

~W

okay but how often do you think Boromir pulls the “this is absolutely traditional in Gondor trust me” card in order to disguise his romantic gestures

"no no it’s traditional for a husband to stay six months with his wife after returning from a campaign, even if he might be needed in Minas Tirith"

"I didn’t bring you flowers for no reason it’s traditional for a husband to bring his wife on—near—some weeks after the first day of spring”

"Yes, I know I could have gotten one of the guards to accompany you in the market but it is most seemly for a lord to accompany his lady"

"I promise, this is a very traditional way for a Gondorian husband to please his wife, just hook your legs over my shoulders"

and Zinat is like “what quaint customs you have!” because she likes watching him blush, pink and pleased, but she’s mostly charmed with how totally utterly transparent he is

The Twitter stack

For various reasons, including performance and cost, Twitter has poured significant engineering effort into breaking down the site backend into smaller JVM-based services. As a nice side effect we’ve been able to open source several of the libraries and other useful tools that came out of this effort.

While there is a fair amount of information about these projects available as docs or slides, I found no simple, high-level introduction to what we can unofficially call the Twitter stack. So here it is. It’s worth noting that all this information is about open source projects, that it is public already and that I am not writing this as part of my job at Twitter or on their behalf.

Now, granted, these were not all conceived at Twitter, and plenty of other companies have similar solutions. However, I think the software mentioned below is quite powerful, and with most of it released as open source it is a fairly compelling platform to base new services on.

I will describe the projects from a Scala perspective, but quite a few are useful in Java programs as well. See the Twitter Scala school for an intro to the language, although that is not required to understand this post.

Finagle

At the heart of a service lies the Finagle library. By abstracting away the fundamental underpinnings of an RPC system, Finagle greatly reduces the complexity that service developers have to deal with. It allows us to focus on writing application-specific business logic instead of dwelling on lower-level details of distributed systems. Ultimately the website itself uses these services to perform operations or fetch data needed to render the HTML. At Twitter the internal services use the Thrift protocol, but Finagle supports other protocols too, such as Protocol Buffers and HTTP.

Setting up a service using Finagle
A quick dive into how you would set up a Thrift service using Finagle.

  1. Write a Thrift file defining your API. It should contain the structs, exceptions and methods needed to describe the service functionality. See the Thrift Interface Description Language (IDL) docs, in particular the examples at the end, for more info.
  2. Use the Thrift file as input for a code generator that spits out code in your language. For Scala- and Finagle-based projects I would recommend Scrooge.
  3. Implement the Scala trait generated from your Thrift IDL. This is where the actual functionality of your service goes.
  4. Provide the Finagle server builder with an instance of the implementation above, a port to bind to, and any other settings you might need, then start it up. A rough sketch of what that looks like follows below.
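
A minimal sketch of step 4, assuming Scrooge generated a YourService.FinagledService wrapper from the IDL and that YourServiceImpl is your implementation from step 3 (the names and the port below are placeholders, not real Twitter code):

import java.net.InetSocketAddress
import org.apache.thrift.protocol.TBinaryProtocol
import com.twitter.finagle.builder.ServerBuilder
import com.twitter.finagle.thrift.ThriftServerFramedCodec

// Wrap the trait implementation (step 3) in the generated Thrift service glue.
val processor = new YourService.FinagledService(
  new YourServiceImpl,                  // your implementation of the generated trait
  new TBinaryProtocol.Factory())

// Step 4: bind it to a port and start it up.
val server = ServerBuilder()
  .codec(ThriftServerFramedCodec())
  .bindTo(new InetSocketAddress(8080))  // example port
  .name("your_service")
  .build(processor)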


That looks pretty similar to just using plain Thrift without Finagle. However, there are quite a few improvements, such as excellent monitoring support and tracing, and Finagle makes it easy to write your service in an asynchronous fashion. More about these features later.

You can also use Finagle as a client. It takes care of all the boring stuff such as timeouts, retries and load balancing for you.
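
A rough client sketch, reusing the hypothetical YourService from above (its FinagledClient and doSomething also appear in the Iago example later in this post); the timeout and retry values are arbitrary examples, not recommendations:

import org.apache.thrift.protocol.TBinaryProtocol
import com.twitter.conversions.time._
import com.twitter.finagle.builder.ClientBuilder
import com.twitter.finagle.thrift.ThriftClientFramedCodec

// The transport handles connection pooling, load balancing across the
// listed hosts, retries and timeouts for us.
val transport = ClientBuilder()
  .codec(ThriftClientFramedCodec())
  .hosts("host1:8080,host2:8080")
  .hostConnectionLimit(1)
  .retries(2)
  .requestTimeout(500.milliseconds)
  .build()

// Wrap it in the Scrooge-generated client; every call returns a Future.
val client = new YourService.FinagledClient(transport, new TBinaryProtocol.Factory())
client.doSomething("hello") onSuccess { result => println(result) }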

Ostrich

So let’s say we have a Finagle Thrift service running. It’s doing very important work. Obviously you want to make sure it keeps doing that work and that it performs well. This is where Ostrich comes in.

Metrics
Ostrich makes it easy to expose various metrics from your service. Let’s say you want to count how many times a particular piece of code is run. In your service you’d write a line of code that looks something like this:

Stats.incr("some_important_counter")

As simple as that. The counter named some_important_counter will be incremented by 1.

In addition to just straight up counters you can get gauges that report on the value of a variable:

Stats.addGauge("current_temperature") { myThermometer.temperature }

or you can time a snippet of code to track its performance:

Stats.time("translation") {
  document.translate("de", "en")
}


Those and other examples can be found in the Ostrich readme.

Export metrics
Ostrich runs a small http admin interface to expose these metrics and other functionality. To fetch them you would simply hit http://hostname:port/stats.json to get the current snapshot of the metrics as JSON. At Twitter the stats from each service will be ingested from Ostrich by our internal observability stack, providing us with fancy graphs, alerting and so on.

To tie this back to our previous section: If you provide a Finagle client or server builder with an Ostrich-backed StatsReceiver, it’ll happily splurt out tons of metrics: how the service is performing, the latencies of the RPC calls, and the number of calls to each method, to name a few.
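
Wiring that up is a one-liner on the builder. Continuing the server sketch from the Finagle section, and assuming the OstrichStatsReceiver name from the finagle-ostrich module (from memory), it looks roughly like this:

import com.twitter.finagle.stats.OstrichStatsReceiver

// Same server as in the Finagle section, now reporting Finagle's own metrics
// (request latencies, failures, per-method call counts) through Ostrich.
val server = ServerBuilder()
  .codec(ThriftServerFramedCodec())
  .bindTo(new InetSocketAddress(8080))
  .name("your_service")
  .reportTo(new OstrichStatsReceiver)
  .build(processor)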

Ostrich can also deal with configuring your service, shutting down all the components gracefully and more.



This is an example of what a dashboard could look like with stats gathered from Ostrich by our observability stack. Screenshot from @raffi’s presentation deck.


Zipkin

Ostrich and Finagle combined give us good service-level metrics. However, one downside of a more service-oriented architecture is that it’s hard to get a high-level performance overview of a single request throughout the stack.
Perhaps you are a developer tasked with improving performance of a particular external API endpoint. With Zipkin you can get a visual representation of where most of the time to fulfill the request was spent. Think Firebug or Chrome developer tools for the back end. Zipkin is an implementation of a tracing system based on the Google Dapper paper.

Finagle-Zipkin
So how does it work? There’s a finagle-zipkin module that will hook into the transmission logic of Finagle and time each operation performed by the service. It also passes request identifiers down to any services it relies on; this is how we can tie all the tracing data together. The tracing data is logged to the Zipkin backend, and finally we can display and visualize that data in the Zipkin UI.

Let’s say we use Zipkin to inspect a request and we see that it spent most of its time waiting for a query to a MySQL database. We could then also see the actual SQL query sent and draw some conclusions from it. Other times perhaps a GC pause in a Scala service was at fault. Either way, the hope is that a glance at the trace view will reveal where the developer should spend effort improving performance.

Enabling tracing for Finagle services is often as simple as adding

.tracerFactory(ZipkinTracer())

to your ClientBuilder or ServerBuilder. Setting up the whole Zipkin stack is a bit more work, though; check out the docs for further assistance.
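
In context, continuing the server sketch from the earlier sections, that comes out to something like the snippet below; the ZipkinTracer import path is my assumption of where the finagle-zipkin module keeps it.

import com.twitter.finagle.zipkin.thrift.ZipkinTracer

// Same server as before, now recording a trace for each request it serves.
val server = ServerBuilder()
  .codec(ThriftServerFramedCodec())
  .bindTo(new InetSocketAddress(8080))
  .name("your_service")
  .reportTo(new OstrichStatsReceiver)
  .tracerFactory(ZipkinTracer())
  .build(processor)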



Trace view, taken from my Strange Loop talk about Zipkin.

Mesos

Mesos describes itself as “a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks”. I’ll try to go through this section without using buzzwords such as “private cloud”, although technically I just did.

The core Mesos project is an open source Apache incubator project. On top of it you can run schedulers that deal with more specific technologies, for example Storm and Hadoop. The idea is that the same hardware can be used for multiple purposes, reducing wasted resources.

In addition to using Storm on top of Mesos we deploy some of our JVM-based services to internal Mesos clusters. With the proper configuration it takes care of concerns such as rack diversity, rescheduling if a machine goes down and so on.

The constraints imposed by Mesos have the positive side effect of enforcing adherence to various good distributed systems practices. For example:

  • Service owners shouldn’t make any assumptions about jobs’ lifetimes, as the Mesos scheduler can move jobs to new hosts at any time.
  • Jobs shouldn’t write to local disk, since persistence is not guaranteed.
  • Deploy tooling and configs shouldn’t use static server lists, since Mesos implies deployment to a dynamic environment.

Iago

Before putting your new service into production you might want to check how it performs under load. That’s where Iago (formerly Parrot) comes in handy. It’s a load testing framework that is pretty easy to use.

The process might look something like this:

  1. Collect relevant traffic logs that you want to use as the basis for your load test.
  2. Write a configuration file for the test. It contains the hostnames to send load to, the number of requests per second, the load pattern and so on.
  3. Write the actual load test. It receives a log line, which you transform into a request sent via a client.
  4. Run the load test. At Twitter this will start up a few tasks in a Mesos cluster, send the traffic and log metrics.


Example
A load test class could be as simple as this:

class LoadTest(parrotService: ParrotService[ParrotRequest, Array[Byte]])
  extends ThriftRecordProcessor(parrotService) {

  // `service` comes from ThriftRecordProcessor, which wraps the parrotService passed in above.
  val client = new YourService.FinagledClient(service, new TBinaryProtocol.Factory())

  def processLines(job: ParrotJob, lines: Seq[String]) {
    lines foreach { line => client.doSomething(line) }
  }
}


This class will feed each log line to your service’s doSomething method, according to the parameters defined in the configuration of parrotService.

ZooKeeper

ZooKeeper is an Apache project that is handy for all kinds of distributed systems coordination.

One use case for ZooKeeper within Twitter is service discovery. Finagle services register themselves in ZooKeeper using our ServerSet library, see finagle-serversets. This allows clients to simply say they’d like to communicate with “the production cluster for service a in data centre b” and the ServerSet implementation will ensure an up-to-date host list is available. Whenever new capacity is added the client will automatically be aware and will start load balancing across all servers.
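
To make that concrete, here is a rough client-side sketch pieced together from memory of finagle-serversets and the Twitter commons ZooKeeper library. The constructors and builder calls below (ZooKeeperClient, ServerSetImpl, ZookeeperServerSetCluster, .cluster) are assumptions about that era’s API, and the ZooKeeper host and path are made up, so verify everything against the finagle-serversets docs.

import java.net.InetSocketAddress
import com.twitter.common.quantity.{Amount, Time}
import com.twitter.common.zookeeper.{ServerSetImpl, ZooKeeperClient}
import com.twitter.finagle.builder.ClientBuilder
import com.twitter.finagle.thrift.ThriftClientFramedCodec
import com.twitter.finagle.zookeeper.ZookeeperServerSetCluster

// Connect to ZooKeeper and look up the path the service registered under
// (both the host and the path here are invented examples).
val zk = new ZooKeeperClient(
  Amount.of(1, Time.SECONDS),
  new InetSocketAddress("zookeeper.example.com", 2181))
val serverSet = new ServerSetImpl(zk, "/service/a/prod/b")

// The cluster keeps the host list current as members join and leave, so the
// client automatically load balances over whatever capacity is registered.
val cluster = new ZookeeperServerSetCluster(serverSet)

val transport = ClientBuilder()
  .cluster(cluster)
  .codec(ThriftClientFramedCodec())
  .hostConnectionLimit(1)
  .build()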


Scalding

From the Scalding github page: “Scalding is a Scala library that makes it easy to write MapReduce jobs in Hadoop. Instead of forcing you to write raw map and reduce functions, Scalding allows you to write code that looks like natural Scala”.

As it turns out, services that receive a lot of traffic generate tons of log entries. These can provide useful insights into user behavior, or you might need to transform them into suitable input for an Iago load test.

I have to admit I was a bit sceptical about Scalding at first. It seemed there were already plenty of ways to write Hadoop jobs: Pig, Hive, plain MapReduce, Cascading and so on. However, when the rest of your project is in Scala it is very handy to be able to write Hadoop jobs in the same language. The syntax is often very close to the one used by Scala’s collection library, so you feel right at home, the difference being that with Scalding you might process terabytes of data with the same lines of code.

A simple word count example from their tutorial:

  TextLine(args("input"))
    .read
    .flatMap('line -> 'word) { line: String => line.split("\\s") }
    .groupBy('word) { group => group.size }
    .write(Tsv(args("output")))


jvmgcprof

One of the well-known downsides of relying on the JVM for time-sensitive requests is that garbage collection pauses could ruin your day. If you’re unlucky, a GC pause might hit at the wrong time, causing some requests to perform poorly or even time out. Worst case, that might have knock-on effects that lead to downtime.

As a first line of defence against GC issues you should of course tweak your JVM startup parameters to suit the kind of work the service is undertaking. I’ve found these slides from Twitter alumnus Attila Szegedi extremely helpful.

Of course, you could minimize GC issues by reducing the amount of garbage your service generates. Start your service with jvmgcprof and it’ll help you reach that goal. If you already use Ostrich to track metrics in your service you can tell jvmgcprof which metric represents the work completed. For example, you might want to know how many kilobytes of garbage are generated per incoming Thrift request. The jvmgcprof output for that could look something like this.

2797MB w=101223 (231MB/s 28kB/w)
50.00%  8   297
90.00%  14  542
95.00%  15  572
99.00%  61  2237
99.90%  2620    94821
99.99%  2652    95974

On the first line you can see that the number of requests (or units of work) was 101223 for the period monitored, with 231MB/s of garbage generated, or 28kB per request. The garbage per request can easily be compared after changes have been made, to see whether they had a positive or negative impact on garbage generation. See the jvmgcprof readme for more information.

Summary

It’s no surprise, but it turns out that having a common stack is very beneficial. Improvements and bug fixes made by one team will benefit others. There is of course another side to that coin: sometimes bugs are introduced that might only be triggered in your service. However, as an example, when developing Zipkin it was immensely helpful to be able to assume that everyone used Finagle. That way they would get tracing for free once we were done.

I have left out some of the benefits of the Twitter stack and how we use Scala, such as the very convenient way Futures allow you to deal with results from asynchronous requests. I hope to write a more in-depth post on how to set up a Twitter-style service that would cover the details omitted in this article. In the meantime you can check out the Scala school for more information.

Thanks to everyone who worked on the projects mentioned in this article, too many to name but you know who you are.

Mercury 5

Previously: Helena absolutely blew it because she should not have known Emily’s last name. Myka is about to absolutely lose it because she does not like the idea that Helena, even as Emily, ever had any kind of relationship with anybody else ever. Pete would absolutely like to run interference here but geez, how? And this blond girl absolutely had it bad for Emily-of-the-sharp-chin. Who absolutely dug H.G. Wells. Who absolutely might be standing right here. Also, bunnies will absolutely be returning soon, though not quite yet in this part. Previously, in detail: part 1, part 2, part 3, and part 4.

Mercury 5

Helena narrowed her eyes, and Myka recognized that look. She’d seen it directed often enough at herself, for it had something to do with “there are right words that can be said, and if I can find them instantly, they will fix everything.”

The problem, of course, was that those right words almost never existed. Even when Helena thought she’d come up with them, had defused the argument or mitigated some thoughtlessness or even just provided reassurance, she was almost always wrong. So, now, what Myka wanted to tell her was, “Just don’t try.”

But Helena didn’t have a chance to try, for the woman was pointing at her again—not poking her in the chin this time, just pointing with a tense finger, a clenched fist. “You know perfectly well I never said it. Jesus, Emily, if you would just tell me! I swear I’m not trying to… I mean, I don’t think we… well, obviously, because you left and you didn’t come back, but what am I supposed to think? What did you expect me to do?”


Basvaraad pt 2: Worthy

So I went to bed determined to go to sleep but instead I read fanfiction and then got inspired to write a sequel to that thing I wrote earlier. This may become a series.

Part 1

___________

She didn’t tell Mother about Ketojan. Contrary to what everyone else seemed to think, she did have some sense of self-preservation, and she’d really rather not deal with whatever reaction her mother would have to her daughter harboring an escaped Qunari mage.

It took some finagling with a few old and less than reputable contacts, but she managed to scrape enough coin together to get him a house (if you could call it that) in Lowtown, just a few blocks from Gamlen’s. It was shoved away in a corner and had only one room with a small curtained-off washroom, but it was surprisingly low on rodents and it was the only place she could convince the landlord to let to a Qunari. She used a good chunk of her Deep Roads fund to pay for furniture (Lirene’s Ferelden Imports came through there—cheap, if worn-out, furniture, plus she was helping her fellow Fereldans), but Varric assured her he could hold Bartrand until she could recoup the loss.

Didn’t stop Carver from bitching about it, though. Though she supposed not even Andraste herself descending from on high to tell him to put a cork in it would keep her dear baby brother from bitching about anything.

 


Turns out anyone can buy an MCU!Star-Lord-style leather jacket online for $150 plus shipping.

Buy the mask and guns from Meijer’s and tweak the paint because mass-produced plastic. Halloween costume complete.

(Hey deweybundren, Tori suggested I wear a nametag that says “HI MY NAME IS” and write “GROOT” and walk around saying “I am Groot” and you are totally welcome to be Groot if you have no costume for Tori’s reception.)

Argh, I really need to find better reference pictures of him. Perfectly angled profiles and full front-face pics with neutral expressions are really hard to find, surprisingly! 

Also really wish there were slider options. Having to guess as to how to do this or that becomes increasingly frustrating. (Also frustrating? Absolutely no easy way to just make eyebrows closer to the eyes! It takes an absurd amount of finagling to make them look not 50 feet away.)

I’m writing a fic right now that literally no one outside of my best friend will ever read, but I don’t care, because I love it. Unless, of course, I can finagle my sister into beta-ing it for me. In which case, two people will have read it.
