3-D Printed Prosthetics: Crowdsourcing a Solution for Disabled Kids
Johns Hopkins Medicine hosts event for 3-D printing enthusiasts who provide kids with affordable and durable prosthetic hands.
Most kids take swinging a baseball bat for granted. For children missing a hand or fingers due to congenital disabilities, that simple act can feel like reaching for the stars. Prosthetic limbs are expensive and quickly outgrown, leaving many families without options. But recently, a group of volunteers and professionals joined forces to put more durable, less constrictive and much less expensive prosthetic hands within the grasp of thousands of children — all for free.
On Sept. 28, 2014, Johns Hopkins Medicine hosted a symposium titled Prosthetists Meet Printers: Mainstreaming Open Source 3-D Printed Prosthetics for Underserved Populations. The event included workshops on strategy, techniques and policy regarding 3-D prosthetics. Johns Hopkins trauma surgeon Albert Chi, the e-NABLE organization, the Kennedy Krieger Institute and other leaders in medicine and industry donated 3-D printed prosthetics to children with upper limb differences.
The event brought 21st century practices and technologies to almost 500 prosthetists, printer owners, parents, kids and wounded warriors. It provided a forum for 3-D printer owners who donate free prosthetic limbs, allowing them to share specs and meet with the professionals and families who can benefit from their work.
How did we get computers to have a “memory”? I mean, if computers are just made up of chips of metal and electricity, how can they store information?
Asked by anonymous
Computers have been around for quite some time, perhaps not in the way we typically think of them, but they have been there. At first, it was easy enough to conceive of mechanical ways to store information; the problems came when we began demanding more of our computers and switched to electronic and magnetic components.
The main principle is storing information in one of two states: either 1 or 0. In terms of electrical components, this is simple: you either have a component in the “on” state or “off” state. The ways to process that information, save it, optimize the process, and make it fully automated vary immensely.
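To make that two-state principle concrete, here is a minimal Python sketch (the function names are ours, purely for illustration) showing that ordinary text is nothing more than a long run of on/off states, and back again:

```python
# A bit is just one of two states. Here we model "on"/"off" as 1/0
# and show that ordinary text is nothing more than a long run of them.

def to_bits(text):
    """Encode a string as a list of 0s and 1s (8 bits per character)."""
    bits = []
    for byte in text.encode("utf-8"):
        for position in range(7, -1, -1):          # most significant bit first
            bits.append((byte >> position) & 1)    # isolate one on/off state
    return bits

def from_bits(bits):
    """Decode a list of 0s and 1s back into the original string."""
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit               # rebuild the byte bit by bit
        data.append(byte)
    return data.decode("utf-8")

bits = to_bits("Hi")
print(bits)              # [0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1]
print(from_bits(bits))   # "Hi"
```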
Back in the good old days of computing, memory worked through purely mechanical means. How exactly did we achieve this? Well, one fairly well-known method was punched cards, or Hollerith cards. These were pieces of stiff paper with holes punched in predefined positions, allowing early computers (and I mean 1800s computers, not your grandma's computers) to process data and run automated processes. Note how the concept is fundamentally the same as our modern system: you still have a set of two distinct states, hole or no hole. Several other mechanical ways of accessing and storing information also arose during the early periods of computing, including methods built on valves and gears, but these processes were still slow and tedious.
Eventually, we began to need faster, more efficient, and less bulky ways for storing and accessing information.
The first attempt was electrical valves (vacuum tubes), basically circuits wired so that one valve could be turned on while the other was off. This posed several problems: it was inefficient in terms of space, incredibly expensive, and a huge drain on energy. Another concern was how to make these systems "non-volatile", so that you could restart your machine and still have your information there.
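Here is a toy model of what those cross-wired valve pairs did, sketched in Python as a set/reset latch. This is our own simplified illustration, not a faithful circuit diagram; the point is that the bit lives only in a feedback loop between two switching elements, which is exactly why this kind of memory is volatile:

```python
# Toy model of a set/reset (SR) latch: two cross-coupled NOR gates.
# Each "gate" could be a vacuum tube (valve) or a transistor; the bit
# lives only in the feedback loop, which is why this memory is volatile.

def nor(a, b):
    return 0 if (a or b) else 1

class SRLatch:
    def __init__(self):
        self.q = 0          # stored bit
        self.q_bar = 1      # its complement

    def step(self, set_line, reset_line):
        # Let the feedback loop settle (a few iterations is plenty here).
        for _ in range(4):
            new_q = nor(reset_line, self.q_bar)
            new_q_bar = nor(set_line, self.q)
            self.q, self.q_bar = new_q, new_q_bar
        return self.q

latch = SRLatch()
print(latch.step(set_line=1, reset_line=0))  # 1 -- bit written
print(latch.step(set_line=0, reset_line=0))  # 1 -- bit held with no input
print(latch.step(set_line=0, reset_line=1))  # 0 -- bit cleared
# Cut the power (i.e. recreate the object) and the bit is gone:
print(SRLatch().q)                           # 0
```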
Another idea was delay-line memory: a long tube of mercury with a transducer, essentially a loudspeaker, at one end. Waves would travel through the tube, and pulses could be detected at the far end. The problem was that you had to constantly recirculate these waves, and you could only detect a pulse for a very brief period, right as the wave was "bouncing back".
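The delay-line trick is easy to model: the "tube" is just a fixed-length queue, and the bits survive only because something at the far end keeps feeding them back in. A rough sketch, with invented names:

```python
from collections import deque

# Toy model of mercury delay-line memory: bits travel down the "tube"
# and must be re-injected at the far end, or they are lost forever.

class DelayLine:
    def __init__(self, length):
        # An empty tube: no pulses travelling yet, just quiet mercury.
        self.tube = deque([0] * length, maxlen=length)

    def tick(self, recirculate=True):
        """One time step: a pulse (or silence) arrives at the far end."""
        arriving = self.tube.popleft()
        # The pulse exists only at this instant; re-feed it or lose it.
        self.tube.append(arriving if recirculate else 0)
        return arriving

    def write(self, bit):
        """Inject a pulse at the near end of the tube."""
        self.tube[-1] = bit

line = DelayLine(length=8)
line.write(1)                                 # store a single pulse
history = [line.tick() for _ in range(16)]
print(history)  # the stored 1 comes back around every 8 ticks (indices 7, 15)
```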
Eventually, we got to the point where we managed to create "cores": small magnetic rings threaded on wires. Bits of information were stored in the direction of each core's magnetization. The first core memories were huge (storing a single megabyte required the space of a small car), but we got around to making them smaller and more efficient.
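A core plane is easy to picture as a grid: each ring sits at the crossing of one X wire and one Y wire, and energizing that pair flips just that ring's magnetization. Here is a rough sketch under that picture, with invented names:

```python
# Toy model of a core memory plane: a grid of magnetic rings, each
# addressed by the crossing of one X wire and one Y wire. The bit is
# the *direction* of magnetization, so it survives a power cycle.

class CorePlane:
    def __init__(self, rows, cols):
        # False/True stand in for the two magnetization directions.
        self.cores = [[False] * cols for _ in range(rows)]

    def write(self, x, y, bit):
        # Energizing the X and Y wires together flips only core (x, y).
        self.cores[y][x] = bool(bit)

    def read(self, x, y):
        # Real core reads were destructive (reading reset the ring, so
        # the value had to be written back); we skip that detail here.
        return int(self.cores[y][x])

plane = CorePlane(rows=4, cols=4)
plane.write(x=2, y=1, bit=1)
print(plane.read(x=2, y=1))   # 1 -- and it would still be there after power-off
```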
To further optimize our computers, we shifted from magnetized cores toward purely electronic components, namely transistor circuits. Applying a voltage to a transistor controls whether or not it conducts current, and that conducting or non-conducting state encodes a pattern of 1's and 0's.
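Put another way, reading electronic memory comes down to comparing a sensed voltage against a threshold: everything above it counts as a 1, everything below as a 0. A minimal sketch (the voltage values and threshold are made up, not a real chip spec):

```python
# Reading bits electronically: compare each sensed voltage against a
# threshold, so that noisy analog levels collapse into clean 1s and 0s.

THRESHOLD_VOLTS = 2.5  # illustrative value only

def sense(voltages):
    return [1 if v >= THRESHOLD_VOLTS else 0 for v in voltages]

# Slightly noisy readings from eight imaginary memory cells:
readings = [4.9, 0.2, 0.4, 4.7, 5.1, 0.1, 4.8, 0.3]
print(sense(readings))   # [1, 0, 0, 1, 1, 0, 1, 0]
```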
Nowadays, chances are your computer has either a hard disk drive (HDD) or a solid-state drive (SSD). HDDs are the most common way of storing information on your average computer. They're basically metal platters with a magnetic coating that stores your information; the platters spin rapidly in an enclosed space while a read/write arm accesses the data. SSDs are a bit more of a novelty for your average PC user. Instead of storing your data in a magnetic coating, an SSD stores it in interconnected flash memory chips, much like a USB flash drive. Since they rely on neither magnetic coatings nor moving mechanical parts (like the read/write arm), SSDs are faster and more reliable, but the drawback is that they are, at least for now, more expensive than HDDs.
In the end, the history of computers revolves around the same central theme: how do we make information readily available and easy to process? Over time, we’ve been demanding more and more out of our computers. As we do so, we of course face increasingly difficult challenges and are forced (or encouraged, if you like) to reinvent our ways in order to keep up with the demand for power and efficiency.
So how did we do it? We say: ingenuity, that’s how.
US ITER researchers based at the Department of Energy's Oak Ridge National Laboratory (ORNL) are leading the development of a disruption mitigation system to reduce the effects of plasma disruptions on ITER. The US Domestic Agency for ITER signed a formal arrangement with the ITER Organization for the work on 29 July. During the week of 8 September, the ITER Fuelling & Wall Conditioning Section leader, So Maruyama, visited Oak Ridge to assess US progress and plan for an upcoming design review of the ITER disruption mitigation technologies.
Google gave us a few more details on Project Ara, the modular mobile phone concept scheduled for early 2015. Specifically, it will be sold via "a new online store" as well as Google Play once available, according to the Ara developer blog, Phonebloks.
Project Ara phones will also run a modified version of Google's upcoming Android L operating system, which the company has developed in conjunction with developers of the Linaro Linux-based software.
While there has been little news about Project Ara in recent months, the device, which is supposed to allow users to swap out various components according to their design and hardware preferences, has been an ongoing topic of interest for some time. The device includes an "endoskeleton" frame to which consumers can connect different screens, cameras, batteries and other block-shaped components called modules. Google describes Project Ara as a phone for 6 billion people, citing affordability and ease of hardware customization as reasons it should appeal to many consumers.
“The smartphone is one of the most empowering and intimate objects in our lives,” says the Project Ara website. “Yet most of us have little say in how the device is made, what it does, and how it looks.”