Michigan Tech course to build your own 3D printer 

“Last fall, Michigan Tech offered a new course: Open Source 3D Printing. Students pay an additional $500 course fee for the components and tools necessary to build their own MOST Delta RepRap 3D printer, which they then use for the course. At the end of the semester, each student keeps the printer they built and modified. The 50 seats for the class filled immediately.


The course essentially distilled the RepRap ethos and formalized it as an introduction to distributed additive manufacturing. We used “Open Source Lab: How to Build Your Own Hardware and Reduce Research Costs” as the textbook to cover the material from an engineering scientist perspective. The course covered the hardware, firmware, slicing, and printer controller software for operating and maintaining the device—all of which are free and open source.


Next, we got into the nitty-gritty of the class: designing low-cost 3D printable scientific equipment, using the methods outlined in the textbook. As previously covered, labs can save enormous sums of money by 3D printing equipment. Students formed teams with at least one graduate student per team so that they had access to campus labs. Then each team did a commissioned assignment for another professor, designing everything from vortex mixers to shadow masks for semiconductor research. We used the NIH 3D printable repository and GitHub (as NIH only supported publishing the STL, not the source). Again, the abilities students demonstrated when they were given the freedom to innovate in open source space were impressive. You can see their work and many more examples here. Consider, for example, this customizable face plate designed in OpenSCAD, which a student group created for an electrical engineering professor. Now even novices can choose their ports, then position and rotate them into place.”




We’ve spent some serious time looking into Google’s #deepdream and creating our own images, and the discoveries we’ve made are no more complex than those of other obsessed media outlets, but astonishing nonetheless. First, a quick recap of what Google’s #deepdream is:

Basically, Google has employed the most brilliant minds in engineering, and they wondered what it would be like for a computer to dream, because those brilliant minds are capable of actually making something like that possible. First, they figured out enough about artificial neural networks to spur “remarkable recent progress in image classification and speech recognition,” then they looked inward to see what these networks were capable of. They call it “inceptionism,” and they do a very good job of breaking down, more or less, what it means for the rest of our stupid brains.

Basically, Google trains a program to recognize images within a picture by feeding it millions of pictures containing a specific object. After much input, the program is asked to emphasize these objects when it notices them. Layers are built on top of one another in a network, with the highest layers being the most specific, and a “decision” is made after a picture is run through all of these layers. A much more expansive but surprisingly simple read can be found from Google’s developers themselves in their blog post.

Where it gets really interesting is when they go to the program with a specific idea in mind but feed it nothing except what it’s already learned. Taking pictures of random visual noise, or completely unrelated pictures, and feeding them into this network of layers yields some revolutionary results, which can be seen above or in this video of how the process works:

It started with developers asking the software to detect something specific, but it moved into them giving an arbitrary image and asking it to interpret what it saw, and the results were nightmarish. This is how they described it:

We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.

After that, they restricted the software’s judgment to one layer only – the highest and most potent, which detects whole objects within an image – and said “give me more of whatever you see!”

This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.

Thus was born the dreaming of AI.

The response to this blog post by Google’s developers was as otherworldly as the images themselves, and people started throwing all sorts of images into the software’s infinite feedback loop, at times zooming in and starting over (like the video above) or sticking with one image and applying the filter many times over (like the computers dreaming of porn above; try to decipher what you’re looking at).

They ended up deciding to open-source the code, because their first question once they discovered this process was about creativity.

It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.

Since then, tons of people have taken up creating their own ways to #deepdream, tagging their results so Google can see, and still others have created websites where you can upload your own images, including a recent one from Psychic VR Lab (h/t: Prosthetic Knowledge).

So hit up these websites and Google’s blog posts about the software, and begin taking yourself out of your art and injecting the subconscious of your computer. It can dream now, and create: two activities which used to be reserved for us alone. Tag them with #deepdream.

A reddit list of places to Deep Dream, including some that take under 15 seconds to produce:

The Hardest Button to Button

“Hey kids, you’re gonna make a game all by yourself using RedWire! The topic’s ‘machines’, so have fun.”

*starts hyperventilating*

*but not in the good way*

Code. All by myself. Even though RedWire is all about cannibalizing other people’s games, if you just can’t get your head around what is syntactically possible in code, no amount of drag and drop may help you.

My pessimistic self is adorable.

In the end I decided not to aim for a game with a) exaggeratedly nice graphics, since this would have been nothing but high-quality procrastination, or b) complex gameplay. First, I wouldn’t know how to do it, and secondly, it would have led to robots. And c) I totally didn’t want to make a robot game.

After some thinking I found the perfect primitive candidate for this game. It even involves a machine. It has two lights, red and green, that flash on and off in random order. The player has to button the only button of the machine (see what I did there?) whenever both lights are lit – as often and as accurately as possible throughout a set amount of time.

Amaze. Such complexity.

I will aim for pixel art once more, since I’ve got a crush on them square cuties and I want to level up my game. I need two animated lights (red and green, both on and off) and one animated button (idle and pushed). I want to add a machinery kind of background, but this will be the treat once the coding is done. Otherwise I’ll spend my days pushing pixels instead of fighting the real fight. Concerning sounds, I will probably – if time allows it – just make stupid sounds and tune them using Audacity. Simple. Primitive. Beautiful.

Now for the fun part. Not.


With help from the RedWire authorities I obtained the secret power to randomize the light switching. The Boolean values for light on/off depend on the randomly determined values

lightA (red): Math.random() > 0.5
lightB (green): Math.random() < 0.5

and are switched every 600 ms (for now) via Limit Rate, saved as lastRefreshTime in RedWire’s memory.
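The light logic above can be sketched in plain JavaScript (my own approximation of what RedWire wires up declaratively; the function and state names are hypothetical, not RedWire’s):

```javascript
// Plain-JS sketch of the light switching: re-roll both lights at most
// once every 600 ms, mirroring RedWire's Limit Rate + lastRefreshTime.
const REFRESH_MS = 600;

function updateLights(state, now, rand = Math.random) {
  // Too soon since the last refresh: keep the current state untouched.
  if (now - state.lastRefreshTime < REFRESH_MS) return state;
  return {
    lightA: rand() > 0.5, // red: independent 50/50 draw
    lightB: rand() < 0.5, // green: independent 50/50 draw
    lastRefreshTime: now,
  };
}
```

Because each light gets its own random draw, both can be on at once – which is the whole winning condition of the game.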

Now for the even funnier To Do part.

I need to relate the Boolean values to the right sprites (false = off, true = on) for both lights and display them according to their values

I additionally want to create an output after the time runs out to give player feedback on clicking accuracy (kinda like a high score, since a friend of mine is very insistent about high scores and how they improve every game 8947581%)


(aka “Parts I stole from others”. Long live open source.)

I’ve got a counter that adds 1 to the score on each mouse up. Eaaaaand I’ve got a timer that counts down for an as-yet-undetermined number of seconds (atm it’s 20 s, might be less in the end).

Of course I still have a couple of To Dos for these hijacked body parts, too.

I need to tweak the score counter to count only if the winning condition (lightA = true && lightB = true) is fulfilled; otherwise it shall not increase the score. Though my basic idea also included counting the wrong clicks, to determine a percentage for reflexes, attention, and perfect clicking.
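That planned rule can be sketched in plain JavaScript (my own approximation, not RedWire’s actual drag-and-drop wiring; all names here are hypothetical):

```javascript
// Sketch of the scoring rule: a click counts only when both lights are on;
// wrong clicks are tallied too, for the accuracy-percentage idea.
function makeScorer() {
  let hits = 0;
  let misses = 0;
  return {
    click(lightA, lightB) {
      if (lightA && lightB) hits++; // winning condition fulfilled
      else misses++;                // clicked at the wrong moment
    },
    accuracy() {
      const total = hits + misses;
      return total === 0 ? 0 : Math.round((hits / total) * 100);
    },
    get hits() { return hits; },
  };
}
```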

And I would like the countdown to result in displaying some panicky “Badaaaammm! YOUR TIME IS OVER!” (relevance level 0.63)

Nah, well. At least I managed to change the colour of both score board and timer to the same blue the button has (yeah, that’s how good I am.)




The Hardest Button to Button on RedWire

Paralyzed men move legs with new non-invasive spinal cord stimulation.

Thoughts, health innovators?

Five men with complete motor paralysis were able to voluntarily generate step-like movements thanks to a new strategy that non-invasively delivers electrical stimulation to their spinal cords, according to a new study from University of California, Los Angeles; University of California, San Francisco; and the Pavlov Institute. The researchers state that these encouraging results provide continued…

The Process: Oscilloscope Music
An oscilloscope is a device used to measure the frequency of electrical signals and display waveforms of those signals against a graph. If that sounds boring, it's because you haven't considered the creative capacity of this kind of tool. Jer…

I did a little walkthrough for the Kickstarter blog. Check it out and try for yourself with the provided Pure Data patch!


AKER is an open source, modular urban agriculture system. We share tools that help build ecologically resilient, healthy communities.

What I've found after digging into Facebook's Github Contributions

It all started when I had a look at Facebook’s Github org, and what I found is this:

What I can see is that approximately 15 repositories got updated within a timeframe of four hours – that means a lot of code per day, per week, per month…

And then I thought it would be sensible to dig into all of Facebook’s org repositories, with every commit from 2004–2015, and pull out some stats.

So I started off with a bot to do this. These are the steps it performs:

  1. Calls Github’s API, fetches the list of repositories, and merges it into a single JSON object
  2. Filters that JSON object down to a minimal JSON object
  3. Parses the minimal object and clones all the repositories
  4. Runs a ‘git log’ (with a lot of custom flags) to fetch the commit log for a given time frame
  5. Parses the raw git log into lists of commits per year and repository, creating files (YEAR.log.json) that hold all the details for each year
  6. Merges those objects into a single minimal JSON file
  7. Parses that JSON and creates custom objects to integrate with Google Charts
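The core of steps 4–5 can be sketched in a few lines of JavaScript (my own minimal version, not the bot’s actual code; it assumes the git log was produced with one ISO-8601 author date per line, e.g. via a `--pretty` format flag):

```javascript
// Turn raw `git log` output (one ISO date per line, e.g. "2014-07-11T12:00:00Z")
// into per-year commit counts, ready to feed into a charting library.
function commitsPerYear(rawLog) {
  const counts = {};
  for (const line of rawLog.split("\n")) {
    if (!line.trim()) continue;        // skip blank lines
    const year = line.slice(0, 4);     // ISO dates start with the year
    counts[year] = (counts[year] || 0) + 1;
  }
  return counts;
}
```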

So what did I find?

Millions of lines of code, thousands of commits each year. In 2014 alone there were 35K commits. And it’s really hard to imagine that there are 1.5 million lines of C++ code and 1.1 million lines of C code.

After a while I was able to create visualizations of the data, and made four of them:

  1. Overview – visualization with all repos, commits, and years
  2. Programming Languages – visualization of lines of code per programming language
  3. Contribution Repos – visualization categorized by repo and total commits
  4. Contribution Years – visualization categorized by year and total commits

Some cool stats:

  1. Facebook created its first repository, CodeMod, on April 2, 2009
  2. As of now (June 12, 2015) there are a total of 125 repositories
  3. HHVM alone has 15K commits, putting it in first place by number of commits
  4. 24 repositories were created in 2015
  5. There is only one repository (xhp-lib) written in the Hack programming language
  6. There are only 7 forked repositories (forked by Facebook)
  7. 41 repositories have a dedicated home page
  8. React was forked by 3334 Github users, the most of any repository
  9. Facebook uses 22 different programming languages
  10. All 125 repositories were updated in 2015
  11. HHVM alone has 781 open issues, the highest number of open issues, while C3D has 0
  12. In 2014 there were 35K commits, compared to 15K commits in 2013
  13. As of June 10, 2015, there were already 15K commits this year, equal to the 2013 total

I’ve open-sourced all the logs and visualizations, which can be found here:

I will update this post if I find something cool, hope you enjoyed reading.

Researchers have visualised changes made to RNA in the brain by administered drugs. Thoughts, health innovators?

A group of researchers from Kyoto University have successfully visualized RNA behaviour and its response to drugs within the living brain tissue of live mice by labeling specific RNA molecules with fluorescent probes.