opensource


Michigan Tech course to build your own 3D printer 

“Last fall, Michigan Tech offered a new course: Open Source 3D Printing. Students pay an additional $500 course fee for the components and tools necessary to build their own MOST Delta RepRap 3D printer, which they then use for the course. At the end of the semester, each student keeps the printer they built and modified. The 50 seats for the class filled immediately.

(…)

The course essentially distilled the RepRap ethos and formalized it as an introduction to distributed additive manufacturing. We used “Open Source Lab: How to Build Your Own Hardware and Reduce Research Costs” as the textbook to cover the material from an engineering scientist perspective. The course covered the hardware, firmware, slicing, and printer controller software for operating and maintaining the device—all of which are free and open source.

(…)

Next, we got into the nitty-gritty of the class: designing 3D printable replacements for hyper-expensive scientific equipment, using the methods outlined in the textbook. As previously covered on Opensource.com, labs can save enormous sums of money by 3D printing equipment. Students formed teams with at least one graduate student per team so that they had access to campus labs. Then they did a commissioned assignment for another professor, designing everything from vortex mixers to shadow masks for semiconductor research. We used the NIH 3D printable repository and GitHub (as NIH only supported publishing the STL, not the source). Again, the abilities students demonstrated when they were given the freedom to innovate in open source space were impressive. You can see their work and many more examples here. Consider, for example, this customizable face plate designed in OpenSCAD, which a student group created for an electrical engineering professor. The students designed all of it, and now even novices can choose their ports, position them, and rotate them into place.”

~ opensource.com


WHAT GOOGLE’S #DEEPDREAM IS AND WHAT IT MEANS TO YOU (AN ARTIST)

We’ve spent some serious time looking into Google’s #deepdream and creating our own images. The discoveries we’ve made are no more complex than those made by other obsessed media outlets, but they are astonishing nonetheless. First, a small recap of what Google’s #deepdream is:

Basically, Google has employed the most brilliant minds in engineering, and those brilliant minds wondered what it would be like for a computer to dream, because they are capable of actually making something like that possible. First, they figured out enough about artificial neural networks to spur “remarkable recent progress in image classification and speech recognition,” then they looked inward to see what those networks were capable of. They call it “inceptionism,” and they do a very good job of breaking it down, more or less, for the rest of our stupid brains.

In short, Google trains a program to recognize objects within a picture by feeding it millions of pictures containing a specific object. After much input, it is asked to emphasize these objects when it notices them. Layers are built on top of one another in a network, with the highest layers recognizing the most complex things (up to whole objects), and a “decision” is made after a picture has passed through all of these layers. A much more expansive but surprisingly approachable read can be found from Google’s developers themselves in their blog post.

Where it gets really interesting is when they go to the program with a specific idea in mind but feed it nothing except what it has already learned. Taking pictures of random visual noise, or completely unrelated pictures, and feeding them into this network of layers yields some revolutionary results, which can be seen above or in this video of how the process works:

It started with developers asking the software to detect something specific, but it moved on to them giving it an arbitrary image and asking it to interpret what it saw, and the results were nightmarish. This is how they described it:

We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.

After that, they restricted the software’s judgment to one layer only – the highest and most potent, which detects whole objects within an image – and said “give me more of whatever you see!”

This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.

Thus was born the dreaming of AI.
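For the more code-minded, that feedback loop boils down to something like the sketch below. The helper name enhanceLayer is made up purely for illustration; Google’s released code does this step with gradient ascent in Python, not with a one-line JavaScript call.

// Rough sketch of the #deepdream feedback loop described above.
// enhanceLayer() is a hypothetical placeholder for "nudge the image so the
// chosen layer's activations get stronger" (gradient ascent in the real code).
function deepdream(image, layer, passes) {
  let dream = image;
  for (let i = 0; i < passes; i++) {
    dream = enhanceLayer(dream, layer); // whatever the layer sees, make more of it
    // optionally zoom in slightly and go again, like in the video above
  }
  return dream;
}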

The response to this blog post by Google’s developers was otherworldly, like the images themselves, and people started throwing all sorts of images into the software’s infinite feedback loop, at times zooming in and starting over (like the video above) or just staying with one image and applying the filter many times over (like the computers dreaming of porn above; try to decipher what you’re looking at).

They ended up deciding to open source the code, because their first question once they discovered this process was about creativity.

It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.

Since then, tons of people have taken up creating their own ways to #deepdream, tagging their results so Google can see them, and still others have created websites where you can upload your own images, including a recent one from Psychic VR Lab (h/t: Prosthetic Knowledge).

So hit up these websites and Google’s blog posts about the software, and begin taking yourself out of your art and injecting the subconscious of your computer. It can dream now, and create: two activities that used to be reserved for us alone. Tag them with #deepdream.

A reddit list of places to Deep Dream, including some that produce results in under 15 seconds:

https://www.reddit.com/r/deepdream/comments/3cawxb/what_are_deepdream_images_how_do_i_make_my_own/

The Hardest Button to Button

“Hey kids, you gonna make a game tout seul (all by yourself) using RedWire! Topic’s ‘machines’, so have fun.”

*starts hyperventilating*

*but not in the good way*

Code. Tout seul. Even though RedWire is all about cannibalizing other people’s games, if you just can’t get your head around what is syntactically possible in code, no amount of drag and drop may help you.

My pessimistic self is adorable.

In the end I decided not to aim for a game with a) exaggeratedly nice graphics, since this would have been nothing but high-quality procrastination, or b) complex gameplay. First, I wouldn’t know how to do it; secondly, it would have led to robots. And c) I totally didn’t want to make a robot game.

After some thinking I found the perfect primitive candidate for this game. It even involves a machine. It has two lights, red and green, that flash on and off in random order. The player has to button the only button of the machine (see what I did there?) whenever both lights are lit, as often and as accurately as possible within a set amount of time.

Amaze. Such complexity.

I will aim for pixel art once more, since I’ve got a crush on them square cuties and I want to level up my game. I need two animated lights (red and green, both on and off) and one animated button (idle and pushed). I want to add a machinery kind of background, but this will be the treat once the coding is done. Otherwise I’ll spend my days pushing pixels instead of fighting the real fight. Concerning sounds, I will probably – if time allows it – just make stupid sounds and tune them using Audacity. Simple. Primitive. Beautiful.

Now for the fun part. Not.

CODING.

With help from the RedWire authorities I obtained the secret power to randomize the light switching. The Boolean values for light on/off depend on the randomly determined values

lightA (red): Math.random() > 0.5
lightB (green): Math.random() < 0.5

and are switched every 600 ms (for now) via Limit Rate, saved as lastRefreshTime in RedWire’s memory.
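If you want to see the same idea outside of RedWire, here is a minimal plain-JavaScript sketch of that light-switching step. The variable and function names are mine, not RedWire’s; the 600 ms interval stands in for the Limit Rate / lastRefreshTime setup described above.

// Standalone sketch of the randomized light switching (not RedWire's API).
let lightA = false; // red
let lightB = false; // green

function refreshLights() {
  lightA = Math.random() > 0.5;
  lightB = Math.random() < 0.5;
}

// Re-roll both lights every 600 ms, mirroring the Limit Rate setting above.
setInterval(refreshLights, 600);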

Now for the even funnier To Do part.

I need to relate the Boolean values to the right sprites (false = off, true = on) for both lights and display them according to their values (see the sprite-mapping sketch below).

I additionally want to create an output after the time runs out, to give the player feedback on clicking accuracy (kinda like a high score, since a friend of mine is very insistent about high scores and how they improve every game by 8947581%).
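A rough sketch of the sprite mapping from the first to-do, with placeholder file names that are not the actual assets:

// Hypothetical sprite lookup: false = off sprite, true = on sprite.
const lightSprites = {
  red:   { on: "red_on.png",   off: "red_off.png" },
  green: { on: "green_on.png", off: "green_off.png" },
};

function spriteFor(colour, isOn) {
  return isOn ? lightSprites[colour].on : lightSprites[colour].off;
}

// Example: pick the sprites to draw for the current light states.
const redSprite = spriteFor("red", lightA);
const greenSprite = spriteFor("green", lightB);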

FRANKENSTEINING YOUR GAME.

(aka “Parts I stole from others”. Long live open source.)

I’ve got a counter that adds 1 to the score on each mouse up. Eaaaaand I’ve got a timer that counts down for an as-yet-undetermined number of seconds (at the moment it’s 20 s, might be less in the end).

Of course I still have a couple of To Dos for these hijacked body parts, too.

I need to tweak the score counter to only count if the winning condition (lightA && lightB) is fulfilled; otherwise it shall not increase the score. Though my basic idea also included counting the wrong clicks, to determine a percentage for reflexes, attention, and perfect clicking.
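In standalone JavaScript (again, not RedWire’s actual drag-and-drop chips), the tweak I have in mind would look roughly like this:

// Only count a click when both lights are on; otherwise log it as a wrong click.
let score = 0;
let wrongClicks = 0;

function onMouseUp() {
  if (lightA && lightB) {
    score += 1;        // winning condition met: the click counts
  } else {
    wrongClicks += 1;  // off-beat click: kept for the accuracy feedback idea
  }
}

// Possible end-of-round stat for the high-score-obsessed friend:
function accuracyPercent() {
  const total = score + wrongClicks;
  return total === 0 ? 0 : Math.round((score / total) * 100);
}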

And I would like the countdown to result in displaying some panicky “Badaaaammm! YOUR TIME IS OVER!” (relevance level 0.63)

Nah, well. At least I managed to change the colour of both the score board and the timer to the same blue the button has (yeah, that’s how good I am).

THE ART.

Booyah.

THE GAME SO FAR.

The Hardest Button to Button on RedWire

kickstarter.com
The Process: Oscilloscope Music
An oscilloscope is a device used to measure the frequency of electrical signals and display waveforms of those signals against a graph. If that sounds boring, it's because you haven't considered the creative capacity of this kind of tool. Jer…

I did a little walkthrough for the Kickstarter blog. Check it out and try for yourself with the provided Pure Data patch!
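If you don’t have Pure Data handy, here is a toy sketch of the same idea in browser JavaScript with the Web Audio API; it is my own illustration, not the patch from the post. On an oscilloscope in X-Y mode the left channel drives X and the right channel drives Y, so a sine on one channel against a cosine on the other traces a circle while you hear a plain tone.

// Generate two seconds of a 220 Hz sine/cosine pair: a circle on an X-Y oscilloscope.
const ctx = new AudioContext();
const seconds = 2;
const freq = 220; // the pitch you hear, and the rate the shape is retraced
const buffer = ctx.createBuffer(2, ctx.sampleRate * seconds, ctx.sampleRate);
const left = buffer.getChannelData(0);   // X
const right = buffer.getChannelData(1);  // Y
for (let i = 0; i < left.length; i++) {
  const t = i / ctx.sampleRate;
  left[i] = Math.sin(2 * Math.PI * freq * t);
  right[i] = Math.cos(2 * Math.PI * freq * t);
}
const source = ctx.createBufferSource();
source.buffer = buffer;
source.connect(ctx.destination);
source.start();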

Paralyzed men move legs with new non-invasive spinal cord stimulation.

Thoughts, health innovators?

Five men with complete motor paralysis were able to voluntarily generate step-like movements thanks to a new strategy that non-invasively delivers electrical stimulation to their spinal cords, according to a new study from University of California, Los Angeles; University of California, San Francisco; and the Pavlov Institute. The researchers state that these encouraging results provide continued…


youtube

AKER is an open source, modular urban agriculture system. We share tools that help build ecologically resilient, healthy communities.

http://www.aker.me/

Researchers have visualised changes made to RNA in the brain by administered drugs. Thoughts, health innovators?

A group of researchers from Kyoto University have successfully visualized RNA behaviour and its response to drugs within the living brain tissue of live mice by labeling specific RNA molecules with fluorescent probes.