eulerian

Reader isotropicposts writes:

Hi, I’m taking a fluids class and I’m not sure I understand the whole Lagrangian-versus-Eulerian distinction for measuring velocity and acceleration. Could you explain this?

This is a really great question because the Eulerian versus Lagrangian distinction is not obvious when you first learn about it. If you think about a fluid flowing, there are two sensible reference frames from which we might observe. The first is the reference frame in which we are still and the fluid rushes by. This is the Eulerian frame. It’s what you get if you stand next to a wind tunnel and watch flow pass. It’s also how many practical measurements are made. The photo above shows a Pitot tube on a stationary mount in a wind tunnel. With the air flow on, the probe measures conditions at a single stationary point while lots of different fluid particles go past.

The other way to observe fluid motion is to follow a particular bit of fluid around and see how it evolves. This is the Lagrangian method. While this is reasonably easy to achieve in calculations and simulations, it can be harder to accomplish experimentally. To make these kinds of measurements, researchers will do things like mount a camera system to a track that runs alongside a wind tunnel at the mean speed of the flow. The resulting video will show the evolution of a specific region of flow as it moves through time and space. The video below has a nice example of this type of measurement in a wave tank. The camera runs alongside the wave as it travels, making it possible to observe how the wave breaks.

In the end, both reference frames contain the same physics (Einstein would not have it any other way), but sometimes one is more useful than the other in a given situation. For me, it’s easiest to think of the Eulerian frame as a laboratory-fixed frame, whereas the Lagrangian frame is one that rides alongside the fluid. I hope that helps! (Photo credit: N. Sharp; video credit: R. Liu et al.)
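
Since the question was specifically about velocity and acceleration, it may also help to see the one formula that ties the two frames together: the material derivative, which expresses the acceleration of a fluid particle (a Lagrangian quantity) in terms of the velocity field measured at fixed points (Eulerian quantities). Written for a velocity field u(x, t):

```latex
\frac{D\vec{u}}{Dt}
  = \underbrace{\frac{\partial \vec{u}}{\partial t}}_{\text{local change seen by a fixed probe}}
  + \underbrace{(\vec{u}\cdot\nabla)\,\vec{u}}_{\text{change from being carried into new regions}}
```

The first term is what a stationary probe like the Pitot tube above reports over time; the second accounts for the particle being swept into places where the velocity is different, which is why a fluid particle can accelerate even in a steady (time-independent) flow.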


Amazing Mathematics Teaching Activity
from: jdh.hamkins.org

A mathematical investigation of graph coloring, chromatic numbers, map coloring, and Eulerian paths and circuits. By the end, each child will have compiled a mathematical “coloring book” containing the results of their explorations.

We begin with vertex coloring, where one colors the vertices of a graph in such a way that adjacent vertices get different colors, starting with some easy examples and then moving on to more complicated graphs.

The aim is to use as few colors as possible; the chromatic number of a graph is the smallest number of colors that suffice for a coloring.
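
If you want to experiment beyond pencil and paper, here is a minimal greedy-coloring sketch in Python. The adjacency-dictionary representation and the name greedy_coloring are illustrative choices, and greedy coloring always produces a valid coloring but not necessarily one that achieves the chromatic number:

```python
def greedy_coloring(adjacency):
    """Give each vertex the smallest color not used by an already-colored neighbor.

    `adjacency` maps each vertex to the set of its neighbors.
    The result is a valid coloring, though not always an optimal one.
    """
    colors = {}
    for vertex in adjacency:
        used = {colors[n] for n in adjacency[vertex] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[vertex] = color
    return colors

# A 5-cycle needs 3 colors (odd cycles are not 2-colorable); greedy happens to find 3 here.
cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(greedy_coloring(cycle5))
```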

Map coloring, where one colors the countries on a map in such a way that adjacent countries get different colors, is of course closely related to graph coloring.

Next, we consider Eulerian paths and circuits, where one traces through all the edges of a graph without lifting one’s pencil and without tracing any edge more than once. We start with some easy examples, then consider more difficult cases.

An Eulerian circuit starts and ends at the same vertex, but an Eulerian path can start and end at different vertices.

You can discuss the fact that some graphs have no Eulerian path or circuit. If there is a circuit, then every time you enter a vertex, you leave it on a fresh edge; and so there must be an even number of edges at each vertex. With an Eulerian path, the starting and ending vertices (if distinct) will have odd degree, while all the other vertices will have even degree.
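
That parity argument turns directly into a quick computer check. Here is a small sketch in Python, using an edge list so that multigraphs (graphs with repeated edges, like the bridge example below) are handled; the function name is just an illustrative choice:

```python
from collections import Counter

def eulerian_type(edges):
    """Classify a connected (multi)graph given as a list of (u, v) edges.

    Returns "circuit" if every vertex has even degree,
    "path" if exactly two vertices have odd degree, and None otherwise.
    (Assumes the graph is connected; connectivity is not checked here.)
    """
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = [v for v, d in degree.items() if d % 2 == 1]
    if not odd:
        return "circuit"
    if len(odd) == 2:
        return "path"
    return None
```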

It is a remarkable fact that amongst connected finite graphs, those necessary conditions are also sufficient. One can prove this by building up an Eulerian path or circuit (starting and ending at the two odd-degree nodes, if there are such); every time one enters a new vertex, there will be an edge to leave on, and so one will not get stuck. If some edges are missed, simply insert suitable detours to pick them up, and again it will all match up into a single path or circuit as desired.
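
The construction sketched in this paragraph is essentially Hierholzer’s algorithm, and it fits in a few lines. A rough Python version under the same edge-list convention as above, assuming the graph is connected and already satisfies the degree conditions:

```python
from collections import defaultdict

def eulerian_trail(edges):
    """Hierholzer's construction: walk until stuck, then splice in detours.

    Assumes a connected (multi)graph, given as a list of (u, v) edges,
    with zero or two odd-degree vertices. Returns the trail as a vertex list.
    """
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))

    odd = [v for v, nbrs in adj.items() if len(nbrs) % 2 == 1]
    start = odd[0] if odd else next(iter(adj))

    used = [False] * len(edges)
    stack, trail = [start], []
    while stack:
        v = stack[-1]
        # Discard edges at v that were already traversed from the other endpoint.
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)            # keep walking along unused edges
        else:
            trail.append(stack.pop())  # stuck at v: back up, recording v in the trail
    return trail[::-1]
```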

This is an excellent opportunity to talk about the Seven Bridges of Königsberg: is it possible to tour the city while crossing each bridge exactly once?
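
With the eulerian_type sketch from above, the famous answer falls out of the degree count (the labels A and B for the river banks, C for the Kneiphof island, and D for the eastern land mass are the conventional ones):

```python
# One edge per bridge: A and B are the river banks, C the Kneiphof island, D the eastern land mass.
koenigsberg = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
print(eulerian_type(koenigsberg))  # None: all four land masses have odd degree, so no such tour exists
```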

Read the full report: http://jdh.hamkins.org/math-for-seven-year-olds-graph-coloring-chromatic-numbers-eulerian-paths/

Get the resource: Andrej Bauer has assembled the images into a single PDF file, with the colors converted to black and white for better printing: https://drive.google.com/file/d/0B7eG5PHUDcmZX1FnOHRhSU9ubUU/edit?usp=sharing

Support it with an in-class drawing app from the Apple App Store, or with One Touch Draw for Windows Phone: http://www.windowsphone.com/en-us/store/app/one-touch-draw/2c1b6ab4-d8c0-45f8-a2ac-b94fa721b39e

(Most amazingly, this was a second-grade class!)

John Venn’s 180th Birthday

Today’s Google Doodle celebrates the guy who popularized what we now call Venn diagrams.

Wikipedia:

Venn diagrams were introduced in 1880 by John Venn (1834–1923) in a paper entitled On the Diagrammatic and Mechanical Representation of Propositions and Reasonings in the “Philosophical Magazine and Journal of Science”, about the different ways to represent propositions by diagrams. The use of these types of diagrams in formal logic, according to Ruskey and M. Weston, is “not an easy history to trace, but it is certain that the diagrams that are popularly associated with Venn, in fact, originated much earlier. They are rightly associated with Venn, however, because he comprehensively surveyed and formalized their usage, and was the first to generalize them”.

Venn himself did not use the term “Venn diagram” and referred to his invention as “Eulerian Circles.”

Filed under: diagrams

REVEALING THE IMPERCEPTIBLE

It’s midterm season, which means that your heart has likely raced through at least one tough math problem or puzzling Ec10 question during the past few weeks. For most people, the only way to know this would be to physically take your pulse. But what if someone could actually see your rushing heartbeat, displayed in vivid color on your face? Scientists at MIT’s Computer Science and Artificial Intelligence Laboratory recently devised a way to do just that, by amplifying subtle changes in the color or motion of video footage.

The program, called Eulerian Video Magnification, works by analyzing video sequences at the pixel level. First it decomposes the video into bands of different spatial frequencies, then applies a temporal filter to each band to extract the temporal frequencies of interest. Finally, these filtered signals are magnified by a specified factor and added back into the original video frames.
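
To make those three steps concrete, here is a heavily simplified sketch in Python/NumPy. It is not the authors’ implementation: it replaces the spatial pyramid with simple block averaging, uses an ideal FFT bandpass in place of their temporal filters, and assumes grayscale floating-point frames; all parameter names are illustrative.

```python
import numpy as np

def magnify(frames, fps, f_lo, f_hi, alpha, pool=8):
    """Toy Eulerian magnification on a (T, H, W) float array of grayscale frames.

    1. Spatial pooling: average over pool x pool blocks (stand-in for the pyramid).
    2. Temporal bandpass: keep only temporal frequencies in [f_lo, f_hi] Hz via FFT.
    3. Amplify the filtered signal by alpha and add it back to the original frames.
    """
    t, h, w = frames.shape
    h2, w2 = h - h % pool, w - w % pool  # crop so the blocks divide evenly
    pooled = frames[:, :h2, :w2].reshape(t, h2 // pool, pool, w2 // pool, pool).mean(axis=(2, 4))

    spectrum = np.fft.rfft(pooled, axis=0)
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0  # ideal temporal bandpass
    filtered = np.fft.irfft(spectrum, n=t, axis=0)

    # Upsample the amplified band back to full resolution and add it in.
    out = frames.copy()
    out[:, :h2, :w2] += alpha * np.repeat(np.repeat(filtered, pool, axis=1), pool, axis=2)
    return out
```

For a pulse-style example one would set f_lo and f_hi to bracket plausible heart rates (roughly 0.8–1.7 Hz) and choose a large alpha, on the order of 100 as in the paper’s face example.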

In other words, by tracking and amplifying the imperceptible flush of blood in your face each time your heart beats, the program could transform a simple video to make you appear to turn crimson in conjunction with your pulse. Talk about school spirit.

Of course, the algorithm has many other more practical applications. It was initially created to help monitor the vital signs of newborns, but Professor William T. Freeman and the rest of the team have since brainstormed countless potential uses, from search-and-rescue and lie detection, to safeguarding in construction and engineering. Now you can even come up with your own, by uploading a video and applying the program at this site (thanks to Quanta Research Cambridge).

Video: Revealing Invisible Changes In The World (by Miki Rubinstein)

This is really scary technology. 

Our goal is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner. Our method, which we call Eulerian Video Magnification, takes a standard video sequence as input, and applies spatial decomposition, followed by temporal filtering to the frames. The resulting signal is then amplified to reveal hidden information. Using our method, we are able to visualize the flow of blood as it fills the face and also to amplify and reveal small motions. Our technique can run in real time to show phenomena occurring at temporal frequencies selected by the user.

Video: Eulerian Video Magnification Technology

Video: Eulerian Video Magnification

[Ref.] Eulerian video magnification for revealing subtle changes in the world.

Abstract
Our goal is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner. Our method, which we call Eulerian Video Magnification, takes a standard video sequence as input, and applies spatial decomposition, followed by temporal filtering to the frames. The resulting signal is then amplified to reveal hidden information. Using our method, we are able to visualize the flow of blood as it fills the face and also to amplify and reveal small motions. Our technique can run in real time to show phenomena occurring at temporal frequencies selected by the user.

CR Categories: I.4.7 [Image Processing and Computer Vision]: Scene Analysis—Time-varying Imagery

Keywords: video-based rendering, spatio-temporal analysis, Eulerian motion, motion magnification

1 Introduction
The human visual system has limited spatio-temporal sensitivity, but many signals that fall below this capacity can be informative. For example, human skin color varies slightly with blood circulation. This variation, while invisible to the naked eye, can be exploited to extract pulse rate [Verkruysse et al. 2008; Poh et al. 2010; Philips 2011]. Similarly, motion with low spatial amplitude, while hard or impossible for humans to see, can be magnified to reveal interesting mechanical behavior [Liu et al. 2005]. The success of these tools motivates the development of new techniques to reveal invisible signals in videos. In this paper, we show that a combination of spatial and temporal processing of videos can amplify subtle variations that reveal important aspects of the world around us. 

Our basic approach is to consider the time series of color values at any spatial location (pixel) and amplify variation in a given temporal frequency band of interest. For example, in Figure 1 we automatically select, and then amplify, a band of temporal frequencies that includes plausible human heart rates. The amplification reveals the variation of redness as blood flows through the face. For this application, temporal filtering needs to be applied to lower spatial frequencies (spatial pooling) to allow such a subtle input signal to rise above the camera sensor and quantization noise. 
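
For a sense of scale, “plausible human heart rates” of roughly 50–100 beats per minute correspond to temporal frequencies of about 0.8–1.7 Hz. Here is a toy illustration of picking out that band from the mean intensity of a spatially pooled region; this is not the paper’s automatic band-selection procedure, and the names and defaults are illustrative:

```python
import numpy as np

def dominant_pulse_hz(region_means, fps, lo=0.8, hi=1.7):
    """Pick the strongest temporal frequency in the heart-rate band.

    `region_means` is the mean intensity of a pooled face region in each frame;
    spatial pooling averages away much of the sensor and quantization noise.
    Assumes the clip is long enough that the band contains at least one FFT bin.
    """
    spectrum = np.abs(np.fft.rfft(region_means - np.mean(region_means)))
    freqs = np.fft.rfftfreq(len(region_means), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spectrum[band])]
```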

Our temporal filtering approach not only amplifies color variation, but can also reveal low-amplitude motion. For example, in the supplemental video, we show that we can enhance the subtle motions around the chest of a breathing baby. We provide a mathematical analysis that explains how temporal filtering interplays with spatial motion in videos. Our analysis relies on a linear approximation related to the brightness constancy assumption used in optical flow formulations. We also derive the conditions under which this approximation holds. This leads to a multiscale approach to magnify motion without feature tracking or motion estimation. 
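
The linear approximation is worth spelling out, since it is what lets temporal amplification stand in for motion magnification. In one dimension, and restated loosely from the paper’s analysis (assuming the displacement δ(t) falls entirely inside the temporal passband):

```latex
I(x,t) = f\bigl(x + \delta(t)\bigr) \approx f(x) + \delta(t)\,\frac{\partial f}{\partial x},
\qquad
\tilde I(x,t) = I(x,t) + \alpha\,B(x,t)
  \approx f(x) + (1+\alpha)\,\delta(t)\,\frac{\partial f}{\partial x}
  \approx f\bigl(x + (1+\alpha)\,\delta(t)\bigr)
```

Here B(x, t) = δ(t) ∂f/∂x is the bandpassed signal, so the small motion δ(t) appears magnified by a factor of (1 + α). The approximation breaks down once (1 + α)δ(t) is no longer small compared with the spatial wavelength of f, which is exactly the restriction to small displacements and lower spatial frequencies noted later in the paper.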

Previous attempts have been made to unveil imperceptible motions in videos. [Liu et al. 2005] analyze and amplify subtle motions and visualize deformations that would otherwise be invisible. [Wang et al. 2006] propose using the Cartoon Animation Filter to create perceptually appealing motion exaggeration. These approaches follow a Lagrangian perspective, in reference to fluid dynamics where the trajectory of particles is tracked over time. As such, they rely on accurate motion estimation, which is computationally expensive and difficult to make artifact-free, especially at regions of occlusion boundaries and complicated motions. Moreover, Liu et al. [2005] have shown that additional techniques, including motion segmentation and image in-painting, are required to produce good quality synthesis. This increases the complexity of the algorithm further.

In contrast, we are inspired by the Eulerian perspective, where properties of a voxel of fluid, such as pressure and velocity, evolve over time. In our case, we study and amplify the variation of pixel values over time, in a spatially-multiscale manner. In our Eulerian approach to motion magnification, we do not explicitly estimate motion, but rather exaggerate motion by amplifying temporal color changes at fixed positions. We rely on the same differential approximations that form the basis of optical flow algorithms [Lucas and Kanade 1981; Horn and Schunck 1981]. 

Temporal processing has been used previously to extract invisible signals [Poh et al. 2010] and to smooth motions [Fuchs et al. 2010]. For example, Poh et al. [2010] extract a heart rate from a video of a face based on the temporal variation of the skin color, which is normally invisible to the human eye. They focus on extracting a single number, whereas we use localized spatial pooling and bandpass filtering to extract and reveal visually the signal corresponding to the pulse. This primal domain analysis allows us to amplify and visualize the pulse signal at each location on the face. This has important potential monitoring and diagnostic applications to medicine, where, for example, the asymmetry in facial blood flow can be a symptom of arterial problems. 

Fuchs et al. [2010] use per-pixel temporal filters to dampen temporal aliasing of motion in videos. They also discuss the high-pass filtering of motion, but mostly for non-photorealistic effects and for large motions (Figure 11 in their paper). In contrast, our method strives to make imperceptible motions visible using a multiscale approach. We analyze our method theoretically and show that it applies only for small motions. 

In this paper, we make several contributions. First, we demonstrate that nearly invisible changes in a dynamic environment can be revealed through Eulerian spatio-temporal processing of standard monocular video sequences. Moreover, for a range of amplification values that is suitable for various applications, explicit motion estimation is not required to amplify motion in natural videos. Our approach is robust and runs in real time. Second, we provide an analysis of the link between temporal filtering and spatial motion and show that our method is best suited to small displacements and lower spatial frequencies. Third, we present a single framework that can be used to amplify both spatial motion and purely temporal changes, e.g., the heart pulse, and can be adjusted to amplify particular temporal frequencies—a feature which is not supported by Lagrangian methods. Finally, we analytically and empirically compare Eulerian and Lagrangian motion magnification approaches under different noisy conditions. To demonstrate our approach, we present several examples
….

Read more in:
http://people.csail.mit.edu/mrub/papers/vidmag.pdf


Wu, H. Y., Rubinstein, M., Shih, E., Guttag, J. V., Durand, F., & Freeman, W. T. (2012). Eulerian video magnification for revealing subtle changes in the world. ACM Trans. Graph., 31(4), 65.