ergodicity


CSA: The Ergodicity Exhibition

Ergodicity, an exhibition hosted by the Canterbury School of Architecture, presented thesis work from eleven Graduate Diploma students, developed from their entries to the eVolo skyscraper competition.

With over 70 percent of the world's growing population soon expected to live in major cities, the exhibition reconsidered the effects of increasing density. Projects developed research and design responses to a range of pressures affecting urban areas today, including population growth, the rising demand for resources, pollution, waste management, and the digital revolution.

The projects shown covered a wide range of locations and programmatic responses, but as a collective they all asked the same question: ‘what role can the skyscraper play in improving our urban areas?’

Responses included Tiny Tokyo by Carma Masson, a mixed-use community micro-scraper set in the business district of central Tokyo. Tiny Tokyo re-evaluates the way skyscrapers are designed, using them as a tool for reviving local heritage and culture and making them relevant to the people they serve, rather than treating them as corporate instruments.

The future of our history is the concept explored in Luke Hill’s project, Dis.Assemble. The project proposes a subterranean industrial waste facility within a complex network of six miles of disused rail tunnels buried deep beneath London’s streets, its sole intention to ‘Dis.Assemble’ the materials produced by the metropolis above.

Unused space is also explored in Jake Mullery’s SYMCITY thesis, which describes an architectural construct occupying the ‘dead’ space between existing skyscrapers.

A comedic thesis by Paul Sohi told the story of one man growing up and living in a world of 10 billion people, where 90% of society lives in cities. The comic explores what such a world would be like.

The launch night was well attended, with special guest Peter Wynne Rees, chief planner for the City of London. The exhibition was an opportunity to showcase the work of students at the Canterbury School of Architecture ahead of the end-of-year summer show, which opens on the 31st of May.

-Text+photography by Taylor Grindley

The Encyclopedia of Mathematics (2002) defines ergodic theory as the “metric theory of dynamical systems. The branch of the theory of dynamical systems that studies systems with an invariant measure and related problems.” This modern definition implicitly identifies the birth of ergodic theory with proofs of the mean ergodic theorem by von Neumann (1932) and the pointwise ergodic theorem by Birkhoff (1931). These early proofs have had significant impact in a wide range of modern subjects. For example, the notions of invariant measure and metric transitivity used in the proofs are fundamental to the measure-theoretic foundation of modern probability theory (Doob 1953; Mackey 1974). In the years immediately following Kolmogorov's seminal contribution to probability theory (Kolmogorov 1933), it was recognized that the ergodic theorems generalize the strong law of large numbers. Similarly, the equality of ensemble and time averages – the essence of the mean ergodic theorem – is necessary to the concept of a strictly stationary stochastic process. Ergodic theory is the basis for the modern study of random dynamical systems, e.g., Arnold (1988). In mathematics, ergodic theory connects measure theory with the theory of transformation groups. This connection is important in motivating the generalization of harmonic analysis from the real line to locally compact groups.
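For concreteness, Birkhoff's pointwise ergodic theorem can be stated compactly (a standard textbook formulation, not a quotation from the Encyclopedia entry). For a measure-preserving transformation $T$ of a probability space $(X, \Sigma, \mu)$ and any $f \in L^1(\mu)$, the time average

\[
\bar{f}(x) \;=\; \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f\bigl(T^k x\bigr)
\]

exists for $\mu$-almost every $x$; and when $T$ is ergodic (every invariant set has measure 0 or 1), the time average equals the ensemble average:

\[
\bar{f}(x) \;=\; \int_X f \, d\mu \qquad \text{for } \mu\text{-almost every } x.
\]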
Is ergodicity at the root of all macroeconomic opinions?

Schools of macroeconomic thought differ widely in their policy preferences for achieving social optima. A broad chasm exists between Keynesians and neoclassical economists with respect to monetary and fiscal policy preferences. While the following description is a summary, it suffices to illustrate how different views on ergodicity explain the differences between these schools of thought.

Keynesians and their allies believe that there are economic conjunctures in which monetary intervention can generate real growth (for example, situations where the output gap is significant and inflation is below target). Neoclassical economists and their monetarist allies believe that the gravity of market forces is so powerful that monetary surprises cannot yield real economic benefits.

On the monetary debate, neoclassical economists and monetarists believe that economies are ergodic: market forces ensure price adjustments that keep the economy at potential most of the time, so any gains from a monetary surprise today will be balanced by a price change that annihilates those nominal gains. Keynesians and their allies believe that a short-term gain can forever alter the development path of an economy, and hence that initial conditions matter. Depending on the perspective, the economy either has a long-run steady state or a path that can be altered at each short-term juncture. While neoclassical economics believes in the ergodicity of economic systems, Keynesians and their associates believe in path dependence.

With respect to the fiscal debate, neoclassical economists believe that changes in government expenditure cannot efficiently modulate economic activity or change potential output, because agents’ behaviour is altered by the expectation of an offsetting fiscal change in the future. Since the government must keep a reasonable balance over time, a tax cut that leads to a deficit heralds higher future taxes and leads agents to save the tax cut (Ricardian equivalence). Keynesians, on the other hand, hold that a short-term stimulus may create a boost to the economy’s growth path whose value exceeds the amount of the stimulus.

Who should we believe? Both schools of thought have a point. Unlike in natural systems, ergodicity does not apply always and everywhere with the same force. The challenge of wise economic management lies in the ability to distinguish, with a reasonable degree of confidence, the instances where a change in expected policy can yield positive results from those where a change in policy merely alters the timing of economic consequences.

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – and a fortiori, in any relevant sense, timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies. […] Why is the difference between ensemble and time averages of such importance? Well, basically, because when you assume the processes to be ergodic, ensemble and time averages are identical. Let me give an example even simpler than the one Peters gives:

Assume we have a market with an asset priced at 100€. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100€, because here we envision two parallel universes (markets): in one the asset price falls by 50% to 50€, and in the other it rises by 50% to 150€, giving an average of 100€ ((150+50)/2). The time average for this asset would be 75€, because here we envision one universe (market) in which the asset price first rises by 50% to 150€ and then falls by 50% to 75€ (0.5*150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.
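The arithmetic is easy to make explicit in code. A minimal Python sketch, using exactly the numbers from the example above:

```python
# Ensemble average vs. time average for the 100 EUR asset example.
price = 100.0
up, down = 1.5, 0.5  # +50% and -50% moves

# Ensemble average: two parallel markets, one move each.
ensemble_avg = (price * up + price * down) / 2
print(ensemble_avg)  # 100.0 -> on average, nothing happens

# Time average: one market, both moves in sequence.
time_path = price * up * down
print(time_path)  # 75.0 -> a real 25% loss over time
```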

The anti-black swan: oversignifying unlikely events and large deviations is as dangerous as undersignifying them?

Time for a Change: Introducing Irreversible Time in Economics, a lecture by Ole Peters: http://www.youtube.com/watch?v=f1vXAHGIpfc

An exploration of the remarkable consequences of using Boltzmann’s 1870s probability theory and cutting-edge 20th-century mathematics in economic settings. An understanding of risk, market stability, and economic inequality emerges.

The lecture presents two problems from economics: the leverage problem (‘by how much should an investment be leveraged?’) and the St Petersburg paradox. Neither can be solved with the concepts of randomness prevalent in economics today. However, owing to 20th-century developments in mathematics, these problems have complete formal solutions that agree with our intuition. The theme of risk features prominently, presented as a consequence of irreversible time.

Our conceptual understanding of randomness underwent a silent revolution in the late 19th century. Prior to this, formal treatments of randomness consisted of counting favourable instances in a suitable set of possibilities. But the development of statistical mechanics, beginning in the 1850s, forced a refinement of our concepts. Crucially, it was recognised that whether possibilities exist is often irrelevant – only what really materialises matters. This finds expression in a different role of time: different states of the universe can really be sampled over time, and not just as a set of hypothetical possibilities. We are then faced with the ergodicity problem: is an average taken over time in a single system identical to an average over a suitable set of hypothetical possibilities? For systems in equilibrium the answer is generally yes; for non-equilibrium systems, no. Economic systems are usually not well described as equilibrium systems, so the novel techniques are appropriate. However, having used probabilistic descriptions since the 1650s, economics retains its original concepts of randomness to the present day.
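A quick Monte Carlo experiment shows how the answer can be ‘no’. The sketch below uses a hypothetical repeated ±50% coin-flip gamble (my own toy numbers, not taken from the lecture): the ensemble average is flat, while the typical single trajectory decays round after round.

```python
# Ensemble vs. time behaviour of a repeated +-50% coin-flip gamble.
import math
import random

random.seed(0)
rounds, n_paths = 10, 100_000

finals = []
for _ in range(n_paths):
    wealth = 100.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.5
    finals.append(wealth)
finals.sort()

print(sum(finals) / n_paths)  # ensemble mean: stays near 100 (E[factor] = 1)
print(finals[n_paths // 2])   # median single path: ~23.7 = 100 * 0.75**5

# Per-round time-average growth factor: exp of the mean log-return.
print(math.exp((math.log(1.5) + math.log(0.5)) / 2))  # ~0.866 < 1
```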

The solution of the leverage problem is well known to professional gamblers under the name of the Kelly criterion, famously used by Ed Thorp to beat blackjack. The solution can be phrased in many different ways, in gambling typically in the language of information theory. Peters pointed out that it is an application of the ergodicity problem and has to do with our notion of time. This conceptual insight changes the appearance of Kelly’s work, Thorp’s work, and that of many others. Their work, fiercely rejected by leading economists in the 1960s and 1970s, is not an oddity, a single solved instance of an otherwise unsolvable problem. Instead, it reflects a deeply meaningful conceptual shift that allows the solution of a host of other problems.
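As a concrete illustration (a toy sketch of the idea, not Peters’s or Thorp’s own calculation): for a binary bet paying net odds of b-to-1 with win probability p, the Kelly criterion prescribes staking the fraction f* = p − (1 − p)/b of current wealth, which is exactly the fraction that maximizes the time-average (logarithmic) growth rate rather than the expected payoff.

```python
# Kelly criterion for a binary bet: choose the stake that maximizes
# the time-average (logarithmic) growth rate, not the expected payoff.
import math

def kelly_fraction(p: float, b: float) -> float:
    """Optimal wealth fraction for win probability p and net odds b-to-1."""
    return p - (1 - p) / b

def log_growth(f: float, p: float, b: float) -> float:
    """Expected log-growth per bet when staking fraction f."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.55, 1.0                 # 55% chance to double the stake
f_star = kelly_fraction(p, b)
print(f_star)                    # 0.10: bet 10% of wealth per round
print(log_growth(f_star, p, b))  # ~ +0.005: wealth compounds
print(log_growth(0.50, p, b))    # ~ -0.089: over-betting destroys wealth
```

Note that the expected payoff is maximized by betting everything every round, a strategy that ruins the gambler with probability one over time; the time perspective is what rules it out.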

The transcript and downloadable versions of the lecture are available from the Gresham College website.

phys.org
Exploring gambles reveals foundational difficulty behind economic theory (and a solution)
In the wake of the financial crisis, many started questioning different aspects of the economic formalism. Among them were Ole Peters, a Fellow at the London Mathematical Laboratory in the U.K. and an external professor at the Santa Fe Institute in New Mexico, and Murray Gell-Mann, a physicist who was awarded the 1969 Nobel Prize in Physics for his contributions to the theory of elementary particles, including the introduction of quarks, and who is now a Distinguished Fellow at the Santa Fe Institute.

Abstract: 

Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic “utility functions” appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These invalidate a commonly cited argument for bounded utility functions.
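The abstract’s central technical claim – that the logarithm generates an ergodic observable for multiplicative dynamics – is easy to check numerically. In the hypothetical sketch below (my own toy verification, not the paper’s code), per-step changes in log-wealth have a time average along one long trajectory that matches the ensemble expectation of a single change:

```python
# For multiplicative dynamics, log-wealth increments are ergodic:
# the time average along one trajectory converges to the ensemble
# expectation of a single increment.
import math
import random

random.seed(1)
up, down = 1.5, 0.5
steps = 100_000

# Time average of log increments along one long trajectory.
time_avg = sum(math.log(up if random.random() < 0.5 else down)
               for _ in range(steps)) / steps

# Ensemble expectation of a single log increment.
ensemble_avg = 0.5 * math.log(up) + 0.5 * math.log(down)

print(time_avg, ensemble_avg)  # both ~ -0.144: the observable is ergodic
```

Wealth itself fails this test: its ensemble expectation is constant (the mean per-step factor is 1), while almost every trajectory decays, which is precisely the discrepancy the time-average approach resolves.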


Ergodic lit: from ergon + hodos (‘work’ + ‘path’); traversing the text requires active, non-trivial effort.

Cybertext: machine for expression, definitive aesthetic process, medium, user expression.

Role of reader: part of the creation process.

February Reception Desk Sale 2016


Today on the Office Furniture Deals Blog we’re highlighting best-selling reception desks from brands like Mayline, Offices To Go, Cherryman Industries, and OFM. These affordable reception stations are on sale with free shipping this month. In addition to the hot bargains listed here, please find our active coupons below.

Active OfficeFurnitureDeals.com Coupons For February 2016:

Code: ERGODEAL
Sav…


Cybertext and The Adventure Game

Aarseth begins by defining cybertext as a perspective that focuses on the mechanical organization of a text. The reader has more control and interaction when reading a cybertext. It is also known as ergodic text, in which the reader must make a non-trivial effort to traverse the text. Cybertext forces readers to make choices about the path they will read along, which necessarily means they will miss some parts of the text on each read-through. Rather than mere readers, cybertext readers become players and participants. Aarseth then goes on to give examples of cybertext, such as the Chinese I Ching from around 1100 BC. He emphasizes that cybertext is neither new nor exclusive to digital literature, but is a way of looking at literature that has been marginalized by literary critics. He says that the text is just a machine for getting operators to understand verbal signs.

Turning to the adventure game, Aarseth begins by giving the genre’s history, including oral adventure games like Dungeons and Dragons. He states that the adventure text is made up of four functional layers: data, processing engines, the interface, and the user. He then discusses how the genre hasn’t been taken seriously, in part because it is so new and in part because critics don’t know how to respond to it. Some see adventure texts as having gaps which the reader fills in by choosing certain paths, but Aarseth sees it as a narrowing of paths, with each choice a reader makes limiting an ever-decreasing pool of story outcomes. In his terms, the intrigue is the scheme-like framework of the story, the intriguee is its target (the position occupied by the user or player), and the intrigant is the agent behind the intrigue, analogous to an implied author. He then uses an example story to illustrate these terms.

The term cybertext was originally hard for me to understand, because it sounds like something digital, which is not necessarily the case. It became easier once I realized that an adventure game is a cybertext. Thinking of adventure games as literature is something I had never done before, but it is interesting to think of them that way. I always end up thinking about video games rather than paper or electronic choose-your-own-adventure stories, because many video games are just more high-tech choose-your-own-adventures. Some of the terms, like intrigue, intriguee, and intrigant, were harder for me to wrap my mind around, but I think they will become easier to understand over time, after reading an adventure story and putting the terms to use.

Discussion question: are all cybertexts adventure stories and are all adventure stories cybertexts?

arxiv.org
[1602.01849] Quantum nonergodicity and fermion localization in a system with a single-particle mobility edge

Authors: Xiaopeng Li, J. H. Pixley, Dong-Ling Deng, Sriram Ganeshan, S. Das Sarma

Abstract:
We study the many-body localization aspects of single-particle mobility edges in fermionic systems. We investigate incommensurate lattices and random disorder Anderson models. Many-body localization and quantum nonergodic properties are studied by comparing entanglement and thermal entropy, and by calculating the scaling of subsystem particle number fluctuations, respectively. We establish a nonergodic extended phase as a generic intermediate phase (between purely ergodic extended and nonergodic localized phases) for the many-body localization transition of non-interacting fermions where the entanglement entropy manifests a volume law (‘extended’), but there are large fluctuations in the subsystem particle numbers (‘nonergodic’). We argue such an intermediate phase scenario may continue holding even for the many-body localization in the presence of interactions as well. We find for many-body states in non-interacting 1d Aubry-Andre and 3d Anderson models that the entanglement entropy density and the normalized particle-number fluctuation have discontinuous jumps at the localization transition where the entanglement entropy is sub-thermal but obeys the “volume law”. In the vicinity of the localization transition we find that both the entanglement entropy and the particle number fluctuations obey a single parameter scaling. We argue using numerical and theoretical results that such a critical scaling behavior should persist for the interacting many-body localization problem with important consequences. Our work provides persuasive evidence in favor of there being two transitions in many-body systems with single-particle mobility edges, the first one indicating a transition from the purely localized nonergodic many-body localized phase to a nonergodic extended many-body metallic phase, and the second one being a transition eventually to the usual ergodic many-body extended phase.
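The single-particle Aubry-André model the authors build on is simple enough to reproduce at small sizes. The numpy sketch below (a minimal illustration of the underlying model, not the authors’ code) diagonalizes the 1d Hamiltonian with hopping t and quasiperiodic potential strength λ, and tracks the inverse participation ratio across the self-dual localization transition at λ = 2t. Note that the pure Aubry-André model has no single-particle mobility edge (all states localize together), which is why the paper studies generalized models.

```python
# Single-particle Aubry-Andre model: eigenstates are extended for
# lambda < 2t and localized for lambda > 2t. Minimal sketch of the
# underlying model, not the paper's code.
import numpy as np

def mean_ipr(lam: float, L: int = 610, t: float = 1.0, phi: float = 0.0) -> float:
    beta = (np.sqrt(5) - 1) / 2  # incommensurate (inverse golden ratio)
    onsite = lam * np.cos(2 * np.pi * beta * np.arange(L) + phi)
    H = np.diag(onsite) - t * (np.eye(L, k=1) + np.eye(L, k=-1))
    _, vecs = np.linalg.eigh(H)
    # Inverse participation ratio: ~1/L for extended states, O(1) when localized.
    return float(np.mean(np.sum(np.abs(vecs) ** 4, axis=0)))

for lam in (1.0, 2.0, 3.0):
    print(lam, mean_ipr(lam))
# The mean IPR jumps by orders of magnitude between lam = 1 and lam = 3.
```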