Stereo-Vision

Samsung Galaxy Note 8 Could Come With a Useful Freebie

The Samsung Galaxy Note 8 is sure to be a very expensive phone, but you might get more than just the handset for your money, with a report saying that the upcoming phablet will come with a free case thrown in. That’s according to ETNews, which says the case will be made of a transparent plastic material, designed to be as thin as possible while covering the back and sides of the Galaxy Note 8.

It sounds like a basic case, then, but one which should offer some protection without obscuring your phone or adding too much bulk to it. It is perhaps not the most exciting freebie, and many buyers will probably either choose not to use it or use it only until they can get a better case, but it could still be useful, keeping your phone protected without you having to spend anything extra.

“When we did our first game you’d have to take the little balls like the ones that are used on the body and stick them on the face of the actor. I can’t remember how many exactly, but I think you had to have around 80 or 90 every morning, and it would take two or three hours to glue on every single one.”

When we asked how using real-time facial capture made the process of making Hellblade different from that for other games, Antoniades told us: “In most games and animated films they capture face and body separately. They have actors doing the body, animators doing the face and someone else doing the voice.”

The problem with this is that it’s hard to get a cohesive performance, and often a director is left unable to tell if a body performance is completely right because they have no facial expressions to accompany it. “At the end of the day you’re doing a scene with Melina performing and me as a cameraman,” explained Antoniades.

“You need to capture body, face and voice to get a full performance. For us, the suits with the markers capture the body, and the cameras all around the room are just triangulating the position of the dots so that you can move around. It captures with sub-millimeter precision, so it’s super accurate.” What this means is that the game’s lead actress can put on all the equipment, perform a scene from the game in its entirety with her body, face and voice all together, and see it rendered in real time into the game world.

“Usually when we’re shooting we have wireless microphones attached to the headpiece to capture the voice. The facial capture head rig has two computer vision cameras that capture the face in stereo, and using computer vision tech it can read where the eyes are. It captures the mouth, the eyes, the facial expressions, everything, just by looking at the face. It translates that in real time, so the face in the game is a 3D digital scan of Melina.”

anonymous asked:

eyeballs replaced with two hummingbirds, which could leave at any time and allow you stereo vision from two sources at once, from any angle

No | rather not | I dunno | I guess | Sure | Yes | FUCK yes | Oh god you don’t even know 

Now we’re just getting silly, Anon…

Playlist Q&A

As requested on Twitter, I’ll do another one of these before I pass out, and probably again in the morning.

Basically, you know the WicDiv playlist? It’s on Spotify, and I’ll put it all beneath the cut.

Go to my ask box. Name a track. I’ll say who it’s connected to or why it’s on there.

Some are clearly huge spoilers, so I won’t answer. Others I won’t answer just because I haven’t much to say. Others I won’t answer as I’m very tired.

Go.



I was going through some of my old photos and realized I had two that were taken at slightly different angles.  Then, I wondered if they could be made into a stereoscopic image.  Granted, I know nothing about stereo images other than liking to look at them, but the first one actually worked!  The last one here is a little weird because the foreground flower is blurry, but for my first try with some random old photographs, I have found something new to play with.

TO VIEW:  Pull image up full-size.  Cross your eyes until a third image appears in the middle.  Relax your eyes, and the image becomes 3D.  Kind of like those Magic Eye things.  :)

Aerial Stereo/Motion Analysis (Lab 5)

Because this fifth and final lab was open-ended in terms of topic, type of analysis, and software, I took the liberty of applying my programming skills to stereo analysis of aerial photographs. The following images are aerial views of the Pentagon. The Pentagon - the headquarters of the United States Department of Defense - measures 41 acres in area and stands roughly 24 meters (77 feet, five floors above ground) tall. It lies in Arlington County, VA at 38.8712° N, 77.0563° W.

STEREO VISION: two 2-D images → one 3-D image.

In humans and other animals, stereopsis is the brain’s process of combining a left-eye image and a right-eye image into a single 3-D image in which depth is perceived automatically. Binocular vision and the corresponding offset between the two eyes provide enough disparity between the images to perceive depth - and in real time, at that.

Similarly, two side-by-side cameras can be used, taking two photos simultaneously, to distinguish depth with mathematical and logical image processing. Computer stereo vision is analogous to the biological process of stereopsis. The ever-growing field of computer vision uses these techniques in real-time for innovative technologies such as the Microsoft Kinect.
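As a concrete link between disparity and depth (a standard rectified pinhole-stereo relation, assumed here rather than taken from the lab): for two cameras with focal length f separated by a baseline B, a feature seen with pixel disparity d lies at depth Z = f·B/d, which is why nearer objects produce larger disparities.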

Images can also be taken with one camera at two different moments to interpret motion. A stereo disparity map and a motion optical-flow map both give a 2-D vector at every image point, showing the correspondence between the two images. Two aerial photographs taken at two different locations (for example, at two differing moments of an airplane flight, or by two nearby satellites) can be used to distinguish depth - the altitude of the images’ components.
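This lab takes the stereo route; for the motion route, a minimal sketch using MATLAB’s Lucas-Kanade optical-flow estimator (the toolbox calls are real; the frame filenames and noise threshold are assumptions) might look like this:

```matlab
% Sketch of an optical-flow map between two frames (Computer Vision
% Toolbox). Frame filenames and the noise threshold are assumptions.
of = opticalFlowLK('NoiseThreshold', 0.01);
f1 = rgb2gray(imread('frame1.png'));
f2 = rgb2gray(imread('frame2.png'));
estimateFlow(of, f1);           % prime the estimator with the first frame
flow = estimateFlow(of, f2);    % flow.Vx, flow.Vy: 2-D vector per pixel
quiver(flow.Vx(1:10:end, 1:10:end), flow.Vy(1:10:end, 1:10:end));
```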

The following aerial views act as the left-eye and right-eye images of stereopsis.

LEFT:

RIGHT:

Although the offsets are small and difficult to see, we can distinguish depth from these images.

The Canny edge detection function in Matlab is used to find the edges in the image:

We can see the Pentagon, roads, the parking lot, and other buildings defined as edges in the image.
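For reference, a minimal MATLAB sketch of this step (the filename is hypothetical; the lab’s actual script isn’t shown):

```matlab
% Canny edge detection on one aerial view; 'left.png' is a hypothetical
% filename, and rgb2gray can be dropped if the image is already grayscale.
I = rgb2gray(imread('left.png'));
E = edge(I, 'canny');   % binary edge map
imshow(E);              % Pentagon outline, roads, and parking lots as edges
```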

Matlab’s built-in ‘disparity’ function takes a left and right image as input and outputs a depth map as a result. Closer (higher) components show up as light pixels in the depth map:
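A sketch of that call (filenames and the disparity range are assumptions; newer MATLAB releases replace disparity with disparityBM and disparitySGM):

```matlab
% Built-in stereo matching (Computer Vision Toolbox). Filenames and the
% disparity range are assumptions.
L = rgb2gray(imread('left.png'));
R = rgb2gray(imread('right.png'));
D = disparity(L, R, 'DisparityRange', [0 32]);
imshow(mat2gray(D));    % lighter pixels = larger disparity = closer (higher)
```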

Recalling that the Pentagon stands roughly 24 meters high, it is pleasing to see most of it show up white while other parts show up black; however, the left fourth of the image comes out completely black and, quite frankly, the depth map has very little detail.

We turn to C++. I created a C++ class called DepthMapMaker and used its methods to create a depth map from a left and a right image. I used GIMP to convert the bitmap images to the class’s input format, ASCII PPM. It uses an O(N²) “greedy algorithm” to find matching pixels and measure disparity pixel by pixel: each pixel in the left image, together with its surrounding 5x5 pixel grid, is compared against each pixel in the corresponding row of the right image to find the best match. Here is the depth map output, converted from the program’s PGM output to JPG:
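Since the DepthMapMaker source itself isn’t shown here, the following is a minimal MATLAB transcription of the same greedy 5x5 row-search idea; every filename and parameter value is an assumption:

```matlab
% Greedy 5x5 block matching along epipolar rows (sum of absolute
% differences). Filenames and the search range are assumptions; the
% original C++ program searched the entire row.
L = im2double(rgb2gray(imread('left.png')));
R = im2double(rgb2gray(imread('right.png')));
[h, w] = size(L);
half = 2;                           % 5x5 window -> half-width of 2
maxDisp = 32;                       % assumed search range in pixels
dmap = zeros(h, w);
for r = 1+half : h-half
    for c = 1+half : w-half
        patchL = L(r-half:r+half, c-half:c+half);
        bestCost = inf; bestD = 0;
        for d = 0 : min(maxDisp, c-half-1)           % stay inside the image
            patchR = R(r-half:r+half, c-d-half:c-d+half);
            cost = sum(abs(patchL(:) - patchR(:)));  % SAD cost
            if cost < bestCost, bestCost = cost; bestD = d; end
        end
        dmap(r, c) = bestD;         % larger disparity = closer (higher) point
    end
end
imwrite(mat2gray(dmap), 'depth_map.png');
```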

I cleaned up the noisy artifacts with GIMP:

I experimented with various Gaussian blurs in Matlab, GIMP, and Paint.NET, but the best output for this purpose came from applying a Gaussian blur to the original depth-map output using the OpenCV library for C++. It exposes several adjustable parameters, including the size of the kernel used for the blur, to produce different outputs. OpenCV is an excellent open-source, multi-platform computer vision library widely used for image/video processing and real-time video-stream processing. (Computers are catching up to the animal kingdom in the ability to perceive depth in real time.) Here is the blurred depth map:
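The blur itself was done with OpenCV’s cv::GaussianBlur in C++; as a MATLAB analogue of that step (the sigma, kernel size, and filename are assumptions):

```matlab
% Gaussian blur of the depth map; sigma and kernel size are assumptions,
% and 'depth_map.png' is a hypothetical filename.
dmap = im2double(imread('depth_map.png'));
smoothed = imgaussfilt(dmap, 2, 'FilterSize', 7);   % sigma 2, 7x7 kernel
imwrite(smoothed, 'depth_map_blurred.png');
```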

I also coded my own Matlab program to compute stereo correspondence. Its broad structure was very similar to that of my C++ program, but to find a pixel match it uses Matlab’s normalized cross-correlation function (normxcorr2) to compare 11x11 pixel grids in the left image against the corresponding row of the right image. Unlike the effectively binary (black-or-white) output I got from Matlab’s disparity function, this output is grayscale and provides more detail.
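A rough sketch of that matcher (the 11x11 grid size is from the text; the filenames and the full-row search are assumptions, and it is slow):

```matlab
% Row-wise matching with normalized cross-correlation (normxcorr2).
% Assumes textured patches; normxcorr2 errors on constant templates.
L = im2double(rgb2gray(imread('left.png')));
R = im2double(rgb2gray(imread('right.png')));
[h, w] = size(L);
half = 5;                                  % 11x11 comparison grid
dmap = zeros(h, w);
for r = 1+half : h-half
    band = R(r-half:r+half, :);            % corresponding row band on the right
    for c = 1+half : w-half
        tmpl = L(r-half:r+half, c-half:c+half);
        xc = normxcorr2(tmpl, band);       % correlation surface
        [~, peak] = max(xc(2*half+1, :));  % row of full vertical overlap
        matchCol = peak - 2*half;          % template's left edge in the band
        dmap(r, c) = (c - half) - matchCol;  % signed disparity
    end
end
imwrite(mat2gray(abs(dmap)), 'ncc_depth_map.png');  % grayscale, not binary
```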

While my self-programmed depth-map algorithms provide more depth distinction at a fine scale (the C++ program providing the most), the Matlab disparity function provides distinction at a coarse scale. Nevertheless, stereo computer vision has proven capable of deciphering altitude from aerial photographs in much the same way that we humans decipher depth with our eyes.

A simple chemical circumstance led to a great moment in the history of our planet. There were many molecules in the primordial soup. Some were attracted to water on one side and repelled by it on the other. This drove them together into a tiny enclosed spherical shell like a soap bubble, which protected the interior. Within the bubble, the ancestors of DNA found a home and the first cell arose.

It took hundreds of millions of years for tiny plants to evolve, giving off oxygen. But that branch didn’t lead to us. Bacteria that could breathe oxygen took over a billion years to evolve. From a naked nucleus, a cell developed with a nucleus inside. Some of these amoeba-like forms led eventually to plants. Others produced colonies, with inside and outside cells performing different functions.

Some became polyps attached to the ocean floor, filtering food from the water and evolving little tentacles to direct food into a primitive mouth. This humble ancestor of ours also led to spiny-skinned armored animals with internal organs, including our cousin, the starfish. But we don’t come from starfish.

About 550 million years ago filter feeders evolved gill slits which were more efficient at straining food particles. One evolutionary branch led to acorn worms. Another led to a creature which swam freely in the larval stage but, as an adult, was still firmly anchored to the ocean floor. Some became living hollow tubes.

But others retained the larval form throughout the life cycle and became free-swimming adults with something like a backbone. Our ancestors, now 500 million years ago, were jawless filter-feeding fish a little like lampreys. Gradually, those tiny fish evolved eyes and jaws.

Fish then began to eat one another. If you could swim fast, you survived. If you had jaws to eat with, you could use your gills to breathe in the water. This is the way modern fish arose. During the summer, swamps and lakes dried up. Some fish evolved a primitive lung to breathe air until the rains came. Their brains were getting bigger.

If the rains didn’t come, it was handy to be able to pull yourself to the next swamp. That was a very important adaptation. The first amphibians evolved, still with a fish-like tail. Amphibians, like fish, laid their eggs in water where they were easily eaten.

But then a splendid new invention came along: the hard-shelled egg, laid on land, where there were as yet no predators. Reptiles and turtles go back to those days. Many of the reptiles hatched on land never returned to the waters. Some became the dinosaurs.

One line of dinosaurs developed feathers, useful for short flights. Today, the only living descendants of the dinosaurs are the birds. The great dinosaurs evolved along another branch. Some were the largest flesh-eaters ever to walk the land. But 65 million years ago they all mysteriously perished.

Meanwhile, the forerunners of the dinosaurs were also evolving in a different direction: small, scurrying creatures with the young growing inside the mother’s body. After the extinction of the dinosaurs, many different forms developed. The young were very immature at birth. In the marsupials (the wombat, for example) and in the mammals, the young had to be taught how to survive. The brain grew larger still.

Something like a shrew was the ancestor of all the mammals. One line took to the trees, developing dexterity, stereo vision, larger brains and a curiosity about their environment. Some became baboons, but that’s not the line to us.

Apes and humans have a recent common ancestor. Bone for bone, muscle for muscle, molecule for molecule, there are almost no important differences between apes and humans. Unlike the chimpanzee, our ancestors walked upright, freeing their hands to poke and fix and experiment. We got smarter. We began to talk.

Many collateral branches of the human family became extinct in the last few million years. We, with our brains and our hands, are the survivors. There’s an unbroken thread that stretches from those first cells to us.

Those are some of the things that molecules do, given 4 billion years of evolution. We sometimes represent evolution as the ever-branching ramifications of some original trunk, each branch pruned and clipped by natural selection.

Every plant and animal alive today has a history as ancient and illustrious as ours.

Humans stand on one branch. But now we affect the future of every branch of this 4-billion-year-old tree.

- Carl Sagan, Cosmos