As you may have heard in the news, the Internet will be “rebooted” tomorrow and is expected to be down for around one minute. A global consortium led by Verizon, Deutsche Telekom, and TeliaSonera has called for the action following several periods of instability over the past few years.

A spokesperson for the group commented:

People forget that the Internet has been running continuously since the 1970s. This reboot will provide greater stability for years to come.

The plans have been meticulous and all information will be backed up to the cloud. No one should lose data as a result of the reboot, but everyone is advised to:

  1. Convert 1 April 00.00 UTC to their local time using a tool such as The World Clock Time Zone Converter.
  2. Shut down all Internet-connected devices such as laptops, tablets, and smartphones shortly before the reboot.
  3. Wait at least a minute before restarting any devices.
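
For the programmatically inclined, the conversion in step 1 can also be done in a few lines of Python (the year and target time zone below are only examples; substitute your own IANA zone name):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# The reboot moment in UTC (the year here is just for illustration).
reboot_utc = datetime(2014, 4, 1, 0, 0, tzinfo=timezone.utc)

# Convert to a local zone; swap in your own zone, e.g. "Europe/Berlin".
local = reboot_utc.astimezone(ZoneInfo("America/New_York"))
print(local.strftime("%Y-%m-%d %H:%M %Z"))
```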

The process is especially hazardous for the Internet technicians handling the restart. One Google specialist – who wished to remain anonymous – told us:

Some of the equipment down there is nearly 50 years old. There are missing fuses, exposed wires and unterminated cables: it will be dangerous.

Anyone browsing the web or uploading large files such as videos at 00.00 UTC could easily electrocute one of my team.

How will your company handle the reboot downtime?

How to parse and dump a sitemap

When dealing with website migrations, you sometimes need to map out the old content so that you can create redirects to the new pages.

While doing this, I ran across this helpful little snippet.

Just modify the URL to the sitemap and the script will print out all the pages. You will need BeautifulSoup 4 and Requests.
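
A minimal version of such a script, using Requests and BeautifulSoup 4 as mentioned above, might look like this (the sitemap URL is a placeholder — point it at your own):

```python
import requests
from bs4 import BeautifulSoup

def sitemap_urls(xml_text):
    """Extract every <loc> URL from sitemap XML."""
    soup = BeautifulSoup(xml_text, "html.parser")
    return [loc.get_text(strip=True) for loc in soup.find_all("loc")]

def dump_sitemap(url):
    """Fetch a sitemap and print each page it lists."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    for page in sitemap_urls(resp.text):
        print(page)

# Example (placeholder URL — replace with the sitemap you want to dump):
# dump_sitemap("https://example.com/sitemap.xml")
```

From there it is easy to pipe the output into a spreadsheet or a redirect map.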

Empathy: The Essence of DevOps

© 2014 Jeff Sussna, Ingineering.IT

I first encountered empathy as an explicit design principle in the context of design thinking. You can’t design anything truly useful unless you understand the people for whom you’re designing. Customer satisfaction is more than just an intellectual evaluation. Understanding users requires understanding not just their thoughts, but also their emotional and physical needs.

I was surprised to encounter empathy again in the context of cybernetics. This rediscovery happened thanks to a Twitter exchange with @seungchan​. Cybernetics tells us that, in order for any one or any thing to function, it must have a relationship with other people and/or things. That relationship takes place through the exchange of information, in the form of a conversation. The thermostat converses with the air in the room. The brand converses with the customer. The designer converses with the developer. The developer converses with the operations engineer. Information exchange requires (and can contribute to) mutual understanding; i.e., empathy.

I had another Twitter exchange, this one with @krishnan, on the question of whether Platform-as-a-Service needs DevOps. I think the question actually misses the point. Software-as-service offers customers inseparable functionality and operability. Development delivers functionality and experience; operations ensures the operational integrity of that experience. At some point, the service will inevitably break. Uncertainty and failure are part of the nature of software-as-service. They are, to use @seungchan’s term, part of its “materiality”, just as flexibility or brittleness are part of the materiality of the wood or metal or plexiglass used to make a piece of furniture.

When a service does break, someone has to figure out where and why it broke, and how to fix it. Did the application code cause the failure? The PaaS? An interaction between them? Or something at a layer below them both? Regardless of how many abstraction layers exist, it’s still necessary both to make things and to run them. It doesn’t matter whether or not different people, or teams, or even companies take responsibility for the quality of the making and the operating. In order for a software service to succeed, both have to happen, in a unified and coherent way.

The confluence of these two Twitter exchanges led me to reflect on the true essence of DevOps. It occurred to me that it’s not about making developers and sysadmins report to the same VP. It’s not about automating all your configuration procedures. It’s not about tipping up a Jenkins server, or running your applications in the cloud, or releasing your code on Github. It’s not even about letting your developers deploy their code to a PaaS. The true essence of DevOps is empathy.

We say that, at its core, DevOps is about culture. We advise IT organizations to colocate Dev and Ops teams, to have them participate in the same standups, go out to lunch together, and work cheek by jowl. Why? Because it creates an environment that encourages empathy. Empathy allows ops engineers to appreciate the importance of being able to push code quickly and frequently, without a fuss. It allows developers to appreciate the problems caused by writing code that’s fat, or slow, or insecure. Empathy allows software makers and operators to help each other deliver the best possible functionality+operability on behalf of their customers.

Dev and Ops need to empathize with each other (and with Design and Marketing) because they’re cooperating agents within a larger software-as-service system. More importantly, they all need to empathize, not just with each other, but also with users. Service is defined by co-creation of value. Only when a customer successfully uses a service to satisfy their own goals does its value become fully manifest. Service therefore requires an ongoing conversation between customer and provider. To succeed, that conversation requires empathy.

Using map for upstream configuration on NGINX

Hi guys, sorry for not posting any content here for such a long time.

Some of you may already have noticed that I like using map for a lot of things.

Another neat use of map is to simplify your configuration; let me give you a quick example.

Imagine that you have the following configuration:
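
For instance, a server with several near-identical location blocks, each proxying to its own upstream (the hostnames and upstream names here are hypothetical):

```nginx
upstream app_a { server 127.0.0.1:8001; }
upstream app_b { server 127.0.0.1:8002; }

server {
    listen 80;

    location /a/ {
        proxy_set_header Host $host;
        proxy_pass http://app_a;
    }

    location /b/ {
        proxy_set_header Host $host;
        proxy_pass http://app_b;
    }
}
```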

To simplify your configuration you can use map with the following configuration:
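
A sketch of that approach, mapping the request URI to an upstream name so a single location block can serve every backend (the names are hypothetical; since the map values match upstream groups defined in the config, no resolver is needed for the variable proxy_pass):

```nginx
map $uri $backend {
    ~^/a/   app_a;
    ~^/b/   app_b;
    default app_a;
}

server {
    listen 80;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://$backend;
    }
}
```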

This helps reduce the clutter in your configuration caused by many location { … } blocks with similar settings.

MongoDB Management Service Re-imagined: The Easiest Way to Run MongoDB


We consistently hear that getting started with MongoDB is easy, but scaling to large configurations that include replication and sharding can be challenging. With MMS, it is now much easier.

Today we introduced major enhancements to MongoDB Management Service (MMS) that make it significantly easier to run MongoDB. MMS is now centered around the experience of deploying and managing MongoDB on the infrastructure of your choice. You can now deploy a cluster through MMS and then monitor your deployment. You can also optionally back up your MongoDB deployment directly to MongoDB, Inc. Once deployed, you can upgrade or scale a cluster in just a few clicks.

How It Works

MMS works by communicating with an automation agent on each server. The automation agent contacts MMS and gets instructions on the goal state of your MongoDB deployment.

MMS can deploy MongoDB replica sets, sharded clusters and standalones on any Internet-connected server. The servers need only be able to make outbound TCP connections to MMS.

MMS Backup is built directly into MMS. You can enable continuous backup in just a few clicks as you deploy a cluster.

The Infrastructure of Your Choice

By “infrastructure of your choice” we mean that MMS can run and control MongoDB in public cloud, private data center or even your own laptop. For AWS users, we can control virtual machine provisioning directly from MMS.

For example, you might start 20 servers at Google Compute, put the MMS Automation Agent on each server, and then launch a new sharded cluster on those servers through MMS.

If you use AWS, you can insert your AWS keys directly into MMS, and MMS will provision EC2 servers for you and start the MMS automation agent. Hence, with AWS, deploying MongoDB is even simpler.

Bringing your own infrastructure has some advantages. The database is not an island. It must interact with your application. With MMS, you can put your database servers in security zones that you design and be assured that the different pieces of the architecture are in the right places for fault tolerance. For example, if you use AWS, deploying MongoDB across availability zones is now a single click away.

Why We’re Excited

We believe MMS is a quantum leap forward for MongoDB developers and operators. Developers can get MongoDB running much more quickly without understanding the vagaries of installation. Ops can confidently create scalable, fault-tolerant, backed-up, monitored deployments with a small fraction of the work.

Those who have been using MMS for a long time will know that it was previously a free monitoring and paid backup service. That classic version of MMS is closed to new users, but if you have a classic account it will continue to work the same way it always has.

As much as we are releasing today, we have barely begun to scratch the surface of what is possible with MMS, so expect even more in the future.

We hope you find MMS useful in running MongoDB at scale. You can open a free account and get started at


The new MMS is free for up to eight servers, so most users won’t need to pay anything to run MongoDB through MMS. Full pricing details are available at

Last year I struggled with learning some more focused programming (primarily with the help of Codecademy’s Python curriculum), and I was thinking about putting together a list of resources I have tried so far:

Using online resources as a way of learning computer science and programming has never been easier… I guess? That’s a somewhat sad development, as I see a lot of merit in organized studies (i.e., courses in school-like settings), but on the other hand it’s awesome if a horde of autodidact get-shit-doners who learned by just doing floods the IT business.

Microservices, Have You Met…DevOps?

© 2015 Jeff Sussna, Ingineering.IT

Numerous commentators have remarked that microservices trade code complexity for operational complexity. Gary Oliffe referred to it as “services with the guts on the outside”. It is true that microservices potentially confront operations with an explosion of moving parts and interdependencies. The key to keeping ops from being buried under this new-found complexity is understanding that microservices represent a new organizational model as much as a new architectural model. The organizational transformation needed to make microservices manageable can’t be restricted to development; instead, it needs to happen at the level of DevOps.

Microservices work by reducing the scope of concern. Developers have to worry about fewer lines of code, fewer features, and fewer interfaces. They can deliver functional value more quickly and often, with less fear of breaking things, and rely on higher-order emergent processes to incorporate their work into a coherent global system.

In order for microservices to work, though, ops needs a similar conceptual framework. Trying to manage an entire universe of microservices from the outside increases the scope of concern instead of reducing it. The solution is to take the word “service” seriously. Each microservice is just that: a software service. The team that builds and operates it need only worry about it and its immediate dependencies. Dependent services are customers; services upon which a given microservice depends are vendors.  

How do you ensure robustness, and manage failure, when you restrict your operational scope to local concerns? The reason we try to operate microservice architectures monolithically in the first place is because we think “but it all has to work”. The answer is to treat them as the complex systems they are, instead of the complicated systems they’re replacing. If each microservice is a first-class service, its dependencies are third-parties, just like Google Maps, or Twilio, or any other external service. Third-party services fail, and systems that depend on them have no choice but to design for failure. 
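
One minimal illustration of that design-for-failure stance, sketched in Python with entirely hypothetical service names: a dependent service retries its “vendor” a few times and then degrades gracefully rather than propagating the failure.

```python
import time

def call_with_fallback(primary, fallback, attempts=3, delay=0.1):
    """Try a flaky dependency a few times, then degrade gracefully."""
    for i in range(attempts):
        try:
            return primary()
        except Exception:
            if i < attempts - 1:
                time.sleep(delay)  # brief pause before retrying
    return fallback()  # e.g. cached data, a default, or a stub response

# Hypothetical downstream call that is currently unreachable:
def geocode():
    raise ConnectionError("maps service unreachable")

result = call_with_fallback(geocode, lambda: "cached-location")
print(result)  # the caller degrades instead of failing outright
```

The same pattern generalizes to circuit breakers and timeouts; the point is that each microservice owns its response to a neighbor’s failure, just as it would with an external third-party API.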

Microservices promise increased agility combined with improved quality. To achieve both goals, organizations have to shift their definition of system-level quality from stability to resilience. If they can do that, they can reflect that definition of quality in their organizational structure. Just as a SaaS company as a whole would use DevOps to deliver service quality and agility, so too each microservice would do the same.