For anarchists, though, free software is attractive not because of the legal provisions of its production process, but primarily because it provides gratis, high-quality alternatives to the proprietary and monopolist software economy. The latter, as an early critique already noted, represents “a special form of the commodification of knowledge…the special properties of knowledge (its lack of material substance; the ease with which it can be copied and transmitted) mean that it can only acquire exchange value where institutional arrangements confer a degree of monopoly power on its owner” (Morris-Suzuki 1984) — i.e. intellectual property rights. One may add that these are more than mere “institutional arrangements”, since they can be encoded into the technology itself as access codes for software packages or online content. From this perspective, the collaborative development of free software like the Linux operating system and applications such as OpenOffice clearly approximates an informational anarchist communism. Moreover, for anarchists it is precisely the logic of expropriation and electronic piracy that enables a radical political extension of the cultural ideals of the free manipulation, circulation and use of information associated with the “hacker ethic” (Himanen 2001). The space of illegality created by P2P (peer-to-peer) file-sharing opens up the possibility not only of the open circulation of freely given information and software, as exists on the Internet today, but also of conscious copyright violation. The Internet, then, enables not only communist relations around information, but also the militant contamination and erosion of non-communist regimes of knowledge — a technological “weapon” that equalizes access to information, eating away at intellectual property rights by rendering them unenforceable.

Poppy Project: Time lapse of Poppy’s assembly

The Poppy project aims at building an open source humanoid robot and an interdisciplinary community to promote Science, Art and Education.
This video was shot during the assembly of our last Poppy. The actual duration of this assembly was around 7 hours.

The final choreography was created during the “Êtres et numérique” residency. The code that makes the robot move is available on GitHub: bit.ly/TJOpGS
You can watch the performance video on vimeo.com/92281019; more information is available on our forum (forum.poppy-project.org/t/artist-residency-etres-et-numerique).

More info on
poppy-project.org
forum.poppy-project.org
github.com/poppy-project

Music credit:
Four Tet - Moma : soundcloud.com/four-tet/four-tet-moma
Buy it on iTunes (bit.ly/1igIb6u) or Amazon (amzn.to/1mmOnOr).

Open sourcing memkeys

We rely on memcache pretty heavily at Tumblr, with over 10TB of cache memory available across the stack. One thing we’ve historically had a challenging time with at Tumblr is finding hot keys. A hot key is a memcache key that gets dramatically more activity than other keys, which can have a significant performance impact on your cache backend.
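memkeys itself parses keys out of sniffed memcache traffic in C++; as a rough illustration of what a “hot key” means, here is a hypothetical Python sketch (names and the ratio heuristic are invented for the example) that flags keys whose request count is far above the mean:

```python
from collections import Counter

def find_hot_keys(key_stream, ratio=3.0):
    """Flag keys whose request count exceeds `ratio` times the mean count.

    `key_stream` is an iterable of memcache key names; a real tool such
    as mctop or memkeys obtains these by sniffing packets off the wire.
    """
    counts = Counter(key_stream)
    mean = sum(counts.values()) / len(counts)
    return {key: n for key, n in counts.items() if n > ratio * mean}

# One key dominating the traffic, three quiet keys:
stream = ["user:42"] * 500 + ["post:1", "post:2", "post:3"] * 5
hot = find_hot_keys(stream)
```

The hard part in practice is doing this at line rate without dropping packets, which is what the lock-free queue work below is about.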

We spent the past few days working on a C++ implementation of mctop*, which we’re happy to release today as memkeys. We do some pretty interesting stuff in memkeys to keep from dropping packets, some of which is documented in the wiki. I’m particularly proud of the striped lock-free queue implementation. In some basic benchmarks I found that memkeys dropped less than 2% of packets when seeing 1Gb/s of traffic. Additionally, the latency from a packet being picked up to being parsed, processed, and reported averages less than 1ms. Here is a screenshot of memkeys in action.

Screenshot

Interested in stuff like this? We’re hiring.

Footnote: Etsy created the excellent mctop tool, which aims to be like Unix top for memcache, showing you which keys are getting the most activity. Unfortunately (as noted in its known issues), mctop drops packets. It drops a lot of packets. This can be really problematic: depending on which packets are dropped, you get a really incomplete view of your cache story.

First ever genetic pathway mapped for MDSC cancer progression.

Thoughts, health innovators? http://bit.ly/1wXV3XR

An international collaboration led by Johns Hopkins University successfully established a visual map of the molecular pathway of myeloid-derived suppressor cell (MDSC) cancer by way of an interactome. The damage and immune suppression the cells cause are not fully understood; however, this is a major stepping stone in creating the necessary transparency. The illustrative map will allow scientists…


Open Source: Fast Image Cache

To the engineers in the crowd: we’ve open sourced a bit of code that we use heavily in Path and that you might find useful in your own projects.

Fast Image Cache is an Objective-C library that—you guessed it—caches images fast. Use it to cache like-sized images (e.g. profile pictures, photo thumbnails, etc.), persist them to disk, and display them without the overhead of opening files, decoding image formats, or even really copying memory. Images are mapped directly from file to screen. In Path, this allows us to display and scroll lots of images on screen more or less at 60FPS, even on older devices.
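Fast Image Cache is Objective-C and built on Apple’s frameworks, but the core trick described above, mapping uncompressed, like-sized pixel records straight from a flat file into memory, can be sketched with nothing but the Python standard library. The record size and file layout below are illustrative assumptions, not the library’s actual format:

```python
import mmap
import os
import tempfile

# Write like-sized raw "pixel" records into one flat file, then map it.
RECORD = 64 * 64 * 4          # one 64x64 RGBA thumbnail, uncompressed
path = os.path.join(tempfile.mkdtemp(), "cache.bin")

with open(path, "wb") as out:
    for i in range(3):        # three dummy images, filled with byte i
        out.write(bytes([i]) * RECORD)

f = open(path, "r+b")
m = mmap.mmap(f.fileno(), 0)  # map the whole file into memory

def image_bytes(index):
    # No read() call and no decode step: slicing the map pages data in
    # directly from disk, which is the effect Fast Image Cache is after.
    return m[index * RECORD:(index + 1) * RECORD]
```

Because every record is the same size and already in a display-ready pixel format, lookup is pure arithmetic and there is no decompression on the scroll path.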

For code, documentation, and comparisons, check out the project’s GitHub page: http://github.com/path/FastImageCache

So, why open source? Well, because this library is awesome! Also because, like most small software companies, we rely heavily on the open source community. Giving back to a community we take so much from is the right thing to do. And very simply, as users of iOS software, we hope our little contribution will improve the experiences of other products we love.

SageMath is a free open-source mathematics software system licensed under the GPL. It builds on top of many existing open-source packages: NumPy, SciPy, matplotlib, SymPy, Maxima, GAP, FLINT, R and many more. Access their combined power through a common, Python-based language or directly via interfaces or wrappers.
Mission: Creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab.

Working In Public From Day 1

By Eric Mill

In the wide world of software, maybe you’ve heard someone say this, or maybe you’ve said it yourself: "I’ll open source it after I clean up the code; it’s a mess right now."

Or: "I think there are some passwords in there; I’ll get around to cleaning it out at some point."

Or simply: "No way, it’s just too embarrassing."

These feelings are totally natural, but they keep a lot of good work closed that could easily have been open. The trick to avoiding this is simple: open source your code from day 1. Don’t wait for a milestone, don’t wait for it to be stable — do it from the first commit.

Here are a few reasons why you should feel good about working in the open from the moment your shovel goes in the ground:

No one’s going to read your code. Your code is almost certainly boring. Most code is. Instead, people will evaluate your work based on how they’d interact with it. Is it easy to learn how to use it from the README? Is development active? Have many GitHub users starred it? And none of that will matter until your project is far enough along that it’s useful. You will not be in the spotlight until you deserve to be.

You will make better decisions. At the most basic level, you will be vastly less likely to accidentally commit a password to an open source project than a closed one. But more than that: even though no one is reading your code, you’ll still feel a bit of natural pressure to make better decisions. You’ll hardcode less, and move more into configuration files. You’ll make things slightly more modular. You’ll comment more. You’ll catch security holes more quickly. That’s a healthy pressure.
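As a tiny illustration of the “hardcode less, move more into configuration” habit, here is a sketch in Python that reads a secret from the environment instead of committing it to the repository (the variable name is made up for the example):

```python
import os

# Hardcoded, the kind of line that slips into a closed repo:
# DB_PASSWORD = "hunter2"

# Open-source-friendly: read the secret from the environment,
# failing loudly and clearly when it is missing.
def db_password():
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("Set the DB_PASSWORD environment variable")
    return password
```

The open repository then holds only the code and a note in the README about which variables to set, never the credentials themselves.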

It will not waste your time. It may feel like some of those “better decisions” take more time. But even if you’re the only person who will ever work on this project, you have to live there. You’ll greatly and immediately appreciate having made those decisions the minute you return to your own code after taking a month off. And when making better decisions becomes routine, they stop taking more time — and you become a better coder.

You might just help people. And people might just help you! The internet is a big place and a small world, and GitHub has a way of making unexpected connections. If your work is even a little bit useful to someone else, there’s a good chance they’ll find their way to your door, start poking around, and find a way to be useful right back. Even if you’re working on what you think is the most niche project that no one else would ever use: leave the door open for providence.

Once you get used to beginning your work in public, it stops feeling like performance art and starts feeling like breathing. It’s a healthy routine that produces better work and personal growth, and opens the door to spontaneous contribution and engagement. When your default is open, everyone wins.

La Giobia on Flickr.

La Giobia is a witch, often thin, with very long legs and red stockings. She lives in the woods and, thanks to her long legs, never sets foot on the ground, moving instead from tree to tree. In this way she watches everyone who enters the woods and frightens them, especially the children.

18F: An Open Source Team

By Raphael Majma and Eric Mill

At 18F, we place a premium on developing digital tools and services in the open. This means contributing our source code back to the community, actively repurposing our code across projects, and contributing back to the open source tools we use. For a variety of reasons, we believe that doing so improves the final product we create. It is because of this that our policy is to:

  1. Use Free and Open Source Software (FOSS) in our projects and to contribute back to the open source community;
  2. Create an environment where any project can be developed in the open; and
  3. Publish all source code created or modified by 18F publicly.

FOSS is software that does not charge users a purchase or licensing fee for modifying or redistributing the source code. There are many benefits to using FOSS, including allowing for product customization and better interoperability between products. Citizen and consumer needs can change rapidly. FOSS allows us to modify software iteratively and to quickly change or experiment as needed.

Similarly, openly publishing our code creates cost savings for the American people by producing a more secure, reusable product. Code that is available online for the public to inspect is open to a more rigorous review process that can assist in identifying flaws in the source code. Developing in the open, when appropriate, opens the project up to that review process earlier and allows discussions to guide the direction of a product’s development. This creates a distinct advantage over proprietary software, which undergoes a less diverse review, and provides 18F with an opportunity to engage our stakeholders in ways that strengthen our work.

The use of open source software is not new in the Federal Government. Agencies have been using open source software for many years to great effect. What fewer agencies do is publish developed source code or develop in the open. When the Food and Drug Administration built out openFDA, an API that lets you query adverse drug events, they did so in the open. Because the source code was being published online to the public, a volunteer was able to review the code and find an issue. The volunteer not only identified the issue, but provided a solution to the team that was accepted as a part of the final product. Our policy hopes to recreate these kinds of public interactions and we look forward to other offices within the Federal Government joining us in working on FOSS projects.

In the next few days, we’re excited to publish a contributor’s guide about reuse and sharing of our code and some advice on working in the open from day one.

Are you up to date on the largest human studies coming through on the genetics of epilepsy? New genes have been identified with a key role in the development of severe childhood epilepsies. Thoughts, health innovators? http://bit.ly/1sxyjzE

In the largest collaborative study so far, an international team of researchers from the European EuroEPINOMICS consortium, including scientists from VIB and Antwerp University, identified novel causes of severe childhood epilepsies.

LiveFrost: Fast, Synchronous UIView Snapshot Convolving

LiveFrost is a new project that Nicholas and I spent half an evening on. It gives you fast, synchronous UIView snapshot convolution by providing LFFrostView, a blurring view for UIKit that you can drop into any superview you want blurred. When the app runs, LFFrostView is filled with a convolved image drawn from a snapshot of its superview.

LiveFrost is released under the MIT license and comes with a sample app.

Other Solutions

There are many competing implementations available; FXBlurView and ios-realtimeblur are the top two hits.

iOS-blur is another one that warrants special mention. It’s an amazingly brilliant hack for iOS 7+ which simply steals a UIToolbar and has that view do the blurring.

It earns that mention because it relies entirely on Apple’s kindness and generosity to work. If you try to run it on an iPhone 4, where LiveFrost works smoothly, it refuses to blur. However, if you just need a blurring view for an iOS 7+ application that does not target the iPhone 4, and you’re not keen on customization or compatibility, this library obviously does the blurring with the least amount of code. :)

General Workflow

The general idea of such a blurring view is pretty simple:

  • Draw the contents of its superview into a bitmap context, such as a CGBitmapContextRef if you are using Core Graphics.
  • Blur the bitmap algorithmically (for example with GPUImage’s GPUImageGaussianBlurFilter, or the Accelerate framework’s vImageConvolve_ARGB8888).
  • Send the bitmap back onto the screen in some way.
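As a language-agnostic sketch of those three steps (in Python, with a plain list-of-lists standing in for the bitmap and a naive 3x3 box kernel standing in for the real convolution), the pipeline might look like:

```python
def downscale(px, factor=2):
    """Step 1: draw at reduced scale by averaging factor x factor blocks."""
    h, w = len(px), len(px[0])
    return [[sum(px[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

def box_blur(px):
    """Step 2: convolve with a 3x3 box kernel, clamping at the edges."""
    h, w = len(px), len(px[0])
    def at(y, x):
        return px[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    return [[sum(at(y + dy, x + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
             for x in range(w)]
            for y in range(h)]

# Step 3 ("send the bitmap back onto the screen") is platform-specific;
# here the blurred grid is simply the result.
frame = [[0] * 4, [0] * 4, [255] * 4, [255] * 4]   # tiny 4x4 grayscale "view"
blurred = box_blur(downscale(frame))
```

Real implementations differ mainly in how fast they can run this loop per display frame, which is exactly where the trouble starts.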

Not so simple in practice. The first thing you notice when running samples of these implementations on a real device is the sluggishness: low frame rates, or out-of-sync blurring results lagging a few frames behind the main view.

Slow Drawing Explanations

In greater detail, the jankiness (in which you lose frames) is usually caused by doing too much on the main thread (1 second / 60 frames ≈ 0.0167 seconds per frame). If you’ve ever profiled such solutions, they usually spend a lot of time drawing into a large image buffer. Once you’ve solved that by bringing the scale factor down (the result will get convolved anyway), you’ll find the solution still spending a lot of time creating single-use image buffers.

That isn’t right, nor necessary. If the blurring view has not been resized — in other words, if its bounds size has not changed — it should not have to waste time throwing away and then reclaiming memory. Reusing the context gives you much more time for actual work.
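A minimal sketch of that reuse rule, in Python rather than Core Graphics (the class and sizes are invented for illustration): allocate the backing buffer only when the bounds change, and hand back the same buffer every other frame.

```python
class ReusableBuffer:
    """Keep one scratch bitmap; reallocate only when the size changes."""

    def __init__(self):
        self._size = None
        self._buf = None
        self.reallocations = 0

    def context_for(self, width, height):
        size = (width, height)
        if size != self._size:                       # bounds changed: recreate
            self._size = size
            self._buf = bytearray(width * height * 4)  # RGBA backing store
            self.reallocations += 1
        return self._buf                             # otherwise reuse as-is

buf = ReusableBuffer()
for _ in range(60):              # 60 frames at the same bounds: one allocation
    ctx = buf.context_for(320, 480)
```

Sixty frames at unchanged bounds cost exactly one allocation instead of sixty, which is the whole point.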

Once jankiness is understood, we can work through the causes of frame lag as well. The developer, faced with the problem of things taking too long, may try rendering asynchronously — off the main thread — to ease the burden. Now they have precisely two problems: frame lag and threads.

First of all, putting rendering on the background means the bitmap has to (conceptually) travel to the background thread, be operated on, and at a later stage be re-committed back to the layer, sometimes in the form of -[CALayer setContents:] with a CGImageRef. As all drawing is done by the Render Server (backboardd), the actual image may be committed several frames past its originating frame, resulting in visible lag.

Rendering views off the main thread may also not work out as intended. Some collection views driving multiple cells, usually one per represented object, compute layout in one lump sum. They usually hold an internal layout map that correlates objects with their presentation items, deriving bounds and other attributes from the same source. (This is exactly why infinite scrolling is so difficult to achieve with UITableView: the class expects you to know everything, because it wants to compute a complete layout up front, or at least have something it can get layout information from.) Views that prefer to build interim layout states as they go still need to constantly mutate their layer trees, updating subviews to reflect content at the new offset. Even though CALayer is thread-safe, you might catch the view in the middle of mutating its own subviews as you attempt to render it from a background thread.

Practically, this results in missing cells in the final images. If you scroll really fast on an implementation that throttles the number of frames, you’ll see this happen a lot, given a long enough collection view to draw from.

If the drawing or convolving itself still takes too long, the developer will have to manually drop frames. They might decide to have a demigod object which listens to CADisplayLink and implements the callback handler like this (I first learned of this technique from Brad Larson’s answer to “CADisplayLink OpenGL rendering breaks UIScrollView behaviour”):

- (void) refresh {
    if (dispatch_semaphore_wait(_renderSemaphore, DISPATCH_TIME_NOW) == 0) {
        dispatch_async(_renderQueue, ^{
            for (UIView<LFDisplayBridgeTriggering> *view in self.subscribedViews) {
                [view refresh];
            }
            dispatch_semaphore_signal(_renderSemaphore);
        });
    }
}

Using this technique, the developer effectively clamps the depth of the dispatch queue to one block at most. When the callback fires, it invokes dispatch_semaphore_wait with an immediate timeout, so the call returns immediately if the semaphore is not available — that is, if the previously queued block has not yet finished. This throttles the number of frames processed by the blurring code without slowing down the main thread.
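The same throttling pattern can be sketched in Python, with a non-blocking semaphore acquire standing in for dispatch_semaphore_wait with an immediate timeout (the function and names are illustrative, not part of LiveFrost):

```python
import threading

render = threading.Semaphore(1)   # counterpart of _renderSemaphore

def on_display_link_fired(do_render):
    """One tick of the display-link callback, in the same shape as the
    Objective-C snippet: try the semaphore without blocking, and drop
    this frame if the previous render has not finished."""
    if not render.acquire(blocking=False):
        return False                 # frame dropped; main thread stays free
    try:
        do_render()                  # stand-in for the queued render block
    finally:
        render.release()             # dispatch_semaphore_signal
    return True
```

If a tick arrives while the semaphore is still held by an in-flight render, the acquire fails instantly and the frame is simply skipped rather than queued.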

Unfortunately, fancy procrastination can’t save you from being late. You need to draw things fast on the main thread.

Fast Synchronous Drawing

It’s possible that you’ve spotted the #1 time sink: disposable single-use contexts. This approach is really clean, because no streams ever cross, and really slow, because you’re constantly deallocating and reallocating. Larger images need larger chunks of memory, and larger chunks of memory are harder to find when you’re in a tight spot.

You should therefore reuse the bitmap contexts. Create or re-create them only when the working size of the bitmap has changed — for example, when you’ve got new bounds with a different size. Otherwise, just draw into the context you have and don’t throw the memory away.

It turns out that -[CALayer renderInContext:] is really fast when drawing into a context with a 0.5f scale factor (instead of 2.0f on a Retina display), and it’s also much faster to convolve a smaller image.

LiveFrost obtains a pretty stable, high frame rate by following these simple rules.

Timing Sources

By default, LiveFrost uses CADisplayLink to drive update notifications. Unlike an NSTimer, which fires at fixed intervals, CADisplayLink lets you synchronize drawing with the refresh rate of the display. With CADisplayLink you can be sure that on every invocation you get to draw and update exactly the right frame, in exactly the run loop mode you specified. Not so with NSTimer, which is also scheduled on a run loop but knows nothing about the screen.

The only weakness is that, by default, CADisplayLink does not pause: LiveFrost will convolve the same image over and over even if the underlying view has not been updated. This is, generally speaking, a design tradeoff to avoid exposing more interface than necessary, but you can always take the LFFrostView off screen when you’re done.

If you’re trying to do something with OpenGL ES, you can look into the LFDisplayBridgeTriggering protocol:

@protocol LFDisplayBridgeTriggering <NSObject>
- (void) refresh;
@end

By default, interfacing with the display link is done through LFDisplayBridge, which holds a mutable, unretained, unsafe set of pointers to LFFrostView instances. If you pause the display link within LFDisplayBridge, you can still control actual refreshes by calling [[LFDisplayBridge sharedInstance] refresh]. However, if you’re not overlaying UIKit content on your OpenGL ES view, you might consider convolving things with OpenGL ES directly, without touching LiveFrost.

Like this, if you’re feeling adventurous:

LFDisplayBridge *displayBridge = [LFDisplayBridge sharedInstance];
CADisplayLink *displayLink = nil;
// object_getInstanceVariable writes the ivar's value through its out-parameter
object_getInstanceVariable(displayBridge, "displayLink", (void **)&displayLink);
displayLink.paused = YES;

Hardware Compatible Coding

This is pretty much a side note.

CGImageRef is a versatile wrapper, which means its contents may have to be dynamically decoded when displayed. If you’ve ever profiled an app displaying a JPEG file fetched from the Internet, you’ll see lots of time spent decoding and converting the image to the GPU’s native format.

Fortunately, if you’re already drawing into a bitmap buffer, you have full control and the result does not require additional transcoding. It’ll be fast.

New GUI Processing sketch on github

image

I’ve just released my first sketch on GitHub; click to go!

It is a simple GUI for controlling graphics stuff (not the finest you may expect, as you will see). You can also enable/disable a Syphon server and save the PGraphics you’re drawing into!

Since I’ve used (and re-used) it for a while, I thought it could be useful for others, and we all know how annoying it is to build an interface from code. You just have to change the names of the variables to customize it!

Changes are encouraged, and you may find it useful for more than just graphics!

I’m planning some changes, and the TODO list for this sketch is long, but please send me your suggestions.
