The Encyclopedia of Mathematics (2002) defines ergodic theory as the “metric theory of dynamical systems. The branch of the theory of dynamical systems that studies systems with an invariant measure and related problems.” This modern definition implicitly identifies the birth of ergodic theory with the proofs of the mean ergodic theorem by von Neumann (1932) and the pointwise ergodic theorem by Birkhoff (1931). These early proofs have had significant impact on a wide range of modern subjects. For example, the notions of invariant measure and metric transitivity used in the proofs are fundamental to the measure-theoretic foundation of modern probability theory (Doob 1953; Mackey 1974). In the years immediately following Kolmogorov's seminal contribution to probability theory (Kolmogorov 1933), it was recognized that the ergodic theorems generalize the strong law of large numbers. Similarly, the equality of ensemble and time averages – the essence of the mean ergodic theorem – is necessary to the concept of a strictly stationary stochastic process. Ergodic theory is the basis for the modern study of random dynamical systems, e.g., Arnold (1988). Within mathematics, ergodic theory connects measure theory with the theory of transformation groups. This connection is important in motivating the generalization of harmonic analysis from the real line to locally compact groups.
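
For reference, a standard statement of the mean ergodic theorem in its Hilbert-space form (not quoted in the passage above): if $U$ is a unitary operator on a Hilbert space $H$ and $P$ is the orthogonal projection onto the subspace of $U$-invariant vectors, then

\[
\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}U^{n}f \;=\; Pf \qquad \text{for every } f \in H,
\]

with convergence in the norm of $H$. Applied to the Koopman operator of a measure-preserving transformation, this is the precise sense in which time averages converge to an ensemble (phase) average.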

CSA: The Ergodicity Exhibition

Developed from the students' Evolo skyscraper competition entries, Ergodicity, an exhibition hosted by the Canterbury School of Architecture, presented thesis work from eleven Graduate Diploma students.

With over 70 percent of the world's growing population soon expected to live in major cities, the exhibition reconsiders the effects of increasing density. The projects developed their research and design around a variety of issues affecting our urban areas today, including population increase, the rising demand for resources, pollution, waste management, and the digital revolution.

The projects shown covered a wide range of locations and programmatic responses, but as a collective they all asked: ‘what role can the skyscraper play in improving our urban areas?’

Responses included Tiny Tokyo by Carma Masson, a mixed-use community micro-scraper set in the business district of central Tokyo. Tiny Tokyo re-evaluates the approach to designing skyscrapers, treating them as a tool for reviving local heritage and culture and making them relevant to the people they are designed for, rather than as corporate instruments.

The future of our history is explored in Luke Hill's project Dis.Assemble, in which a complex network of six miles of disused rail tunnels buried deep beneath London's streets becomes a subterranean industrial waste facility whose sole intention is to ‘Dis.Assemble’ materials produced by the metropolis above.

Unused space is also explored in Jake Mullery's SYMCITY thesis, which describes an architectural construct occupying the ‘dead’ space between existing skyscrapers.

A comedic thesis by Paul Sohi told the story of one man growing up and living in a world of 10 billion people, where 90% of society lives in urbanised cities. The comic explores what such a world would be like.

The launch night was well attended, with special guest Peter Wynne Rees, chief planner for the City of London. The exhibition was an opportunity to showcase the work of students at the Canterbury School of Architecture ahead of the end-of-year summer show, which starts on the 31st of May.

– Text and photography by Taylor Grindley

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori in any relevant sense timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies. […] Why is the difference between ensemble and time averages of such importance? Well, basically, because when you assume the processes to be ergodic, ensemble and time averages are identical. Let me give an example even simpler than the one Peters gives:

Assume we have a market with an asset priced at 100€. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100€ – because here we envision two parallel universes (markets): in one universe (market) the asset price falls by 50% to 50€, and in the other it rises by 50% to 150€, giving an average of 100€ ((150+50)/2). The time average for this asset would be 75€ – because here we envision one universe (market) in which the asset price first rises by 50% to 150€ and then falls by 50% to 75€ (0.5*150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity, there would have been no difference at all.
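
A minimal numerical sketch of the example above (Python, not from the source; the only inputs are the ±50% moves and the 100€ starting price):

```python
import math
import random

random.seed(0)
UP, DOWN = 1.5, 0.5      # +50% and -50% price moves, equally likely
P0 = 100.0               # starting price in euros

# Ensemble average: many parallel "universes", each making one move.
paths = [P0 * (UP if random.random() < 0.5 else DOWN) for _ in range(200_000)]
print(sum(paths) / len(paths))       # ~100: (150 + 50) / 2

# Time average: one universe making successive moves.
print(P0 * UP * DOWN)                # 75: up 50%, then down 50%

# Over a long single history the price decays, because what matters
# through time is the average of the log returns, not of the returns:
print(0.5 * (math.log(UP) + math.log(DOWN)))   # ~ -0.144 per step (< 0)
```

The multiplicative setting is the simplest case in which the ensemble expectation (flat at 100€) and the typical single trajectory (decaying) part company, which is exactly the non-ergodicity the passage points to.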

Is ergodicity at the root of all macroeconomic opinions?

Schools of macroeconomic thought differ widely in their policy preferences for achieving social optima. A broad chasm exists between Keynesians and neoclassical economists with respect to monetary and fiscal policy preferences. While the following description is a summary, it suffices to illustrate how different views on ergodicity explain the differences between these schools of thought.

Keynesians and allies believe that there are economic conjunctures in which monetary intervention can generate real growth (situations where the output gap is significant and inflation is below target, for example). Neoclassical economists and their monetarist allies believe that the gravity of market forces is so powerful that monetary surprises cannot yield real economic benefits.

In the monetary debate, neoclassical economists and monetarists believe that economies are ergodic: market forces ensure price adjustments that keep the economy at potential most of the time, so any gains from a monetary surprise today will be offset by a price change that annihilates those nominal gains. Keynesians and allies believe that a short-term gain can permanently alter the development path of an economy, hence initial conditions matter. Depending on the perspective, the economy either has a long-run steady state or a path that can be altered at each short-term juncture. While neoclassical economics believes in the ergodicity of economic systems, Keynesians and associates believe in path dependence.
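
A toy contrast between the two views (a hedged sketch, not drawn from any of the schools' models; the AR(1) and random-walk specifications, the persistence parameter 0.9, and the one-off 'policy shock' are illustrative assumptions): an ergodic, mean-reverting output gap forgets a shock, while a path-dependent, unit-root process carries it forever.

```python
import random

def simulate(rho, shock, noise, shock_at=50):
    """Output-gap path x[t] = rho * x[t-1] + noise[t] + one-off shock."""
    x, path = 0.0, []
    for t, eps in enumerate(noise):
        x = rho * x + eps + (shock if t == shock_at else 0.0)
        path.append(x)
    return path

random.seed(1)
noise = [random.gauss(0.0, 0.1) for _ in range(500)]

for rho in (0.9, 1.0):
    with_shock = simulate(rho, shock=1.0, noise=noise)
    baseline = simulate(rho, shock=0.0, noise=noise)
    print(rho, round(with_shock[-1] - baseline[-1], 4))
# rho = 0.9: the shock's long-run effect has decayed to ~0 (the economy
#            returns to its steady state regardless of initial conditions)
# rho = 1.0: the full shock is still in the level hundreds of periods later
#            (initial conditions and interventions matter forever)
```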

With respect to the fiscal debate, neoclassical economists believe that changes in government expenditure cannot efficiently modulate economic activity or change potential output, because agents' behaviour is altered by the expectation of a balancing fiscal change in the future. Since the government must keep a reasonable balance over time, a tax cut that leads to a deficit heralds higher future taxes and leads agents to save the tax cut (Ricardian equivalence). Keynesians, on the other hand, hold that short-term stimuli may create a boost to the economy's growth path whose value exceeds the amount of the stimulus.

Whom should we believe? Both schools of thought have a point. Unlike in natural systems, ergodicity does not apply always and everywhere with the same force in economics. The challenge of wise economic management lies in the ability to distinguish, with a reasonable degree of confidence, the instances where a change in expected policy can yield positive results from those where a change in policy merely shifts the timing of economic consequences.

The anti-black swan: is attaching too much significance to unlikely events and large deviations as dangerous as attaching too little?

Time for a Change: Introducing irreversible time in economics, a lecture by Ole Peters: http://www.youtube.com/watch?v=f1vXAHGIpfc

An exploration of the remarkable consequences of using Boltzmann’s 1870s probability theory and cutting-edge 20th Century mathematics in economic settings. An understanding of risk, market stability and economic inequality emerges.

The lecture presents two problems from economics: the leverage problem (“by how much should an investment be leveraged?”) and the St Petersburg paradox. Neither can be solved with the concepts of randomness prevalent in economics today. However, owing to 20th-century developments in mathematics, these problems have complete formal solutions that agree with our intuition. The theme of risk features prominently, presented as a consequence of irreversible time.
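
One way to make the time-average treatment of the St Petersburg gamble concrete (a sketch under stated assumptions, not a transcription of the lecture: it assumes the common payout convention of 2^(k-1)€ when the first head lands on toss k, and evaluates the expected change in log-wealth per round for a player of wealth w paying fee c, truncating the sum at a large k):

```python
import math

def expected_log_growth(wealth, fee, max_k=200):
    """Expected per-round change in log-wealth for the St Petersburg gamble.

    Assumed convention: the first head on toss k pays 2**(k - 1), which
    happens with probability 2**(-k). The player pays `fee` out of `wealth`
    before the coin is tossed."""
    return sum(
        2.0 ** (-k) * (math.log(wealth - fee + 2.0 ** (k - 1)) - math.log(wealth))
        for k in range(1, max_k + 1)
    )

def max_acceptable_fee(wealth, step=0.01):
    """Largest fee with non-negative time-average growth of wealth."""
    fee = 0.0
    while expected_log_growth(wealth, fee + step) >= 0:
        fee += step
    return fee

for w in (10, 100, 1000):
    print(w, round(max_acceptable_fee(w), 2))
# The acceptable fee is finite and grows only slowly with wealth, matching
# intuition, even though the ensemble expectation of the payout diverges.
```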

Our conceptual understanding of randomness underwent a silent revolution in the late 19th century. Prior to this, formal treatments of randomness consisted of counting favourable instances in a suitable set of possibilities. But the development of statistical mechanics, beginning in the 1850s, forced a refinement of our concepts. Crucially, it was recognised that whether possibilities exist is often irrelevant — only what really materialises matters. This finds expression in a different role of time: different states of the universe can really be sampled over time, and not just as a set of hypothetical possibilities. We are then faced with the ergodicity problem: is an average taken over time in a single system identical to an average over a suitable set of hypothetical possibilities? For systems in equilibrium the answer is generally yes; for non-equilibrium systems it is no. Economic systems are usually not well described as equilibrium systems, so the novel techniques are appropriate. However, having used probabilistic descriptions since the 1650s, economics retains its original concepts of randomness to the present day.

The solution of the leverage problem is well known to professional gamblers under the name of the Kelly criterion, famously used by Ed Thorp to beat blackjack. The solution can be phrased in many different ways, in gambling typically in the language of information theory. Peters pointed out that it is an application of the ergodicity problem and has to do with our notion of time. This conceptual insight changes the appearance of Kelly's work, Thorp's work and that of many others. Their work, fiercely rejected by leading economists in the 1960s and 1970s, is not an oddity, a single special case of an unsolvable problem that happened to be solved. Instead, it reflects a deeply meaningful conceptual shift that allows the solution of a host of other problems.
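
A minimal sketch of the leverage question in these terms (illustrative assumptions throughout: a single risky asset reduced to a two-outcome multiplicative bet and no interest on the unleveraged remainder; the Kelly answer is the leverage that maximises the time-average growth rate, i.e. the expected log return per period):

```python
import math

def time_avg_growth(leverage, up=0.5, down=-0.4, p_up=0.5):
    """Expected log-growth per period with a fraction `leverage` of wealth
    exposed to a risky return of +50% or -40% (a toy distribution)."""
    g = 0.0
    for prob, r in ((p_up, up), (1.0 - p_up, down)):
        factor = 1.0 + leverage * r
        if factor <= 0:                 # ruin: growth rate is minus infinity
            return float("-inf")
        g += prob * math.log(factor)
    return g

# The ensemble-average return rises without bound as leverage increases,
# but the time-average growth rate peaks and then collapses.
best = max((time_avg_growth(l / 100), l / 100) for l in range(0, 251))
print("optimal leverage ~", best[1], "growth rate ~", round(best[0], 4))

# Closed-form Kelly fraction for this two-outcome bet: p/|down| - (1 - p)/up
print("Kelly formula    =", 0.5 / 0.4 - 0.5 / 0.5)
```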

The transcript and downloadable versions of the lecture are available from the Gresham College website:


Biochemists uphold law of physics (EurekAlert)

Experiments by biochemists at the University of California, Davis show for the first time that a law of physics, the ergodic theorem, can be demonstrated by a collection of individual protein molecules — specifically, a protein that unwinds DNA. The work will be published online by the journal Nature on July 14.

The ergodic theorem, proposed by mathematician George Birkhoff in 1931, holds that if you follow an individual particle over an infinite amount of time, it will go through all the states that are seen in an infinite population at an instant in time. It’s a fundamental assumption in statistical mechanics — but difficult to prove in an experiment.
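
In modern notation (not part of the press release), the pointwise ergodic theorem says that for an ergodic measure-preserving transformation $T$ of a probability space $(X,\mu)$ and any integrable observable $f$,

\[
\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1} f\!\left(T^{n}x\right) \;=\; \int_X f \, d\mu \qquad \text{for } \mu\text{-almost every } x \in X,
\]

which is the precise sense in which following one molecule for long enough reproduces the statistics of a whole population observed at one instant.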

Using technology invented at UC Davis for watching single enzymes at work, Bian Liu, a graduate student in the Biophysics Graduate Group, and Professor Steve Kowalczykowski of the Department of Microbiology and Molecular Genetics and the UC Davis Cancer Center found that when they paused and restarted a single molecule of the DNA-unwinding enzyme RecBCD, it could restart at any speed achieved by the whole population of enzymes.

Caption: RecBCD enzymes unwinding DNA at different speeds. The bright ball at left is a bead; the bright strand is a stretch of DNA that shortens as it is unwound by the enzyme. The enzymes show ergodic behavior, supporting an important theory in statistical physics.

Credit: Bian Liu, UC Davis

[Teaching materials / Statistical physics: the Liouville theorem (Hamiltonian)]
An introduction to ergodic theory, which builds dissipative systems expressed by a probability density function p, and to the kinetics of irreversible processes, as the statistical-mechanical starting point of Ilya Prigogine's exposition of dissipative structures.
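
As a small illustration of the Liouville theorem mentioned above (a sketch under simple assumptions, not taken from the teaching materials: a one-dimensional harmonic oscillator with unit mass and frequency, integrated with the symplectic leapfrog scheme), Hamiltonian flow preserves phase-space volume, so the area of a small parallelogram of initial conditions stays constant as it is carried along:

```python
def leapfrog(q, p, dt, n_steps, omega=1.0):
    """Symplectic (leapfrog) integration of a unit-mass harmonic oscillator,
    H(q, p) = p**2 / 2 + omega**2 * q**2 / 2."""
    for _ in range(n_steps):
        p -= 0.5 * dt * omega**2 * q
        q += dt * p
        p -= 0.5 * dt * omega**2 * q
    return q, p

def area(a, b, c):
    """Area of the parallelogram spanned by (b - a) and (c - a) in (q, p)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

# Three nearby initial conditions spanning a small phase-space parallelogram.
pts0 = [(1.0, 0.0), (1.0 + 1e-3, 0.0), (1.0, 1e-3)]
pts1 = [leapfrog(q, p, dt=0.01, n_steps=10_000) for q, p in pts0]

print(area(*pts0))   # initial area: 1e-6
print(area(*pts1))   # essentially unchanged: the flow preserves volume
```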

Simply put, there is no theory to help us understand works in the interactive fiction form directly. Several applicable theories and concepts exist, such as Espen Aarseth’s formulation of ergodic literature and the Oulipo’s concept of potential literature, both of which help to explain how narratology can be used to understand these objects that are not, in fact, narratives, but that produce narratives when a person interacts with them. But there is still much to do to develop a strong theory that is specific to the form of interactive fiction.
—  Montfort, Nick. Twisty Little Passages: An Approach to Interactive Fiction. Cambridge, MA: The MIT Press, 2003.