i love it when things I’ve already learned in physics come up in m1 and m2 mechanics, a little less to learn and revise

1. Lab Partners - Dave/Jade College AU

First finished work for my self-imposed AU list challenge! Based on the prompt “Wait, I actually have a competent lab partner?” from this college AU post.

Introductory physics.

It was every Biology major’s worst nightmare - a hard-as-hell science class (with an equally difficult lab component) that was a requirement in order to major in any science, no matter how tangentially related. Dave knew there was a purpose for it - the pre-meds would need to know this stuff for the MCAT, which they’d need to get into a good medical school.

But Dave just wanted to be a paleontologist. He wanted to find cool dead things. He sincerely doubted that knowing the equations for the laws of motion would help him discover and name his very own dinosaur species.

(He had a name ready and picked out and everything. Hella jeffinius would be the next T. rex.)

Hamiltonian Mechanics


For a Hamiltonian *H*, given by

$$H = T + U \tag{1}$$

where *T* and *U* are the total kinetic energy and total potential energy of the system, respectively; *q* is a generalised position (such as *x*, *y*, or *r*); and *p* is a generalised momentum. Using this notation, Hamilton’s equations of motion are

$$\dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial q}.$$

Notice that *q* with a ‘dot’ represents a generalised velocity and *p* with a ‘dot’ represents a generalised force.

We know that the kinetic energy is classically expressed as

$$T = \frac{1}{2}mv^2$$

where

$$v = \frac{\mathrm{d}q}{\mathrm{d}t} = \dot{q}$$

is the velocity. Recalling that *p* = *mv*, we find that

$$T = \frac{p^2}{2m}.$$

Let’s now consider the partial derivative of the Hamiltonian (in equation 1) with respect to this generalised momentum *p*:

$$\frac{\partial H}{\partial p} = \frac{\partial T}{\partial p} + \frac{\partial U}{\partial p} = \frac{\partial T}{\partial p}.$$

Clearly, *U* doesn’t depend on *p*, so it does not factor into this equation. Now we may evaluate the partial derivative of the kinetic energy to find

$$\frac{\partial T}{\partial p} = \frac{\partial}{\partial p}\left(\frac{p^2}{2m}\right) = \frac{p}{m};$$

recalling that

$$p = mv$$

we have

$$\frac{\partial T}{\partial p} = v = \dot{q}.$$

Finally,

$$\dot{q} = \frac{\partial H}{\partial p},$$

which we recognise as the Hamiltonian equation of motion for generalised velocity!

Now, we have a potential energy, given by

$$U = -W$$

where *W* is the work done. Given that

$$F = \frac{\mathrm{d}p}{\mathrm{d}t} = \dot{p},$$

*i.e*. Newton’s Second Law of Motion, we find that

$$U = -\int F\,\mathrm{d}q = -\int \dot{p}\,\mathrm{d}q.$$

Thus, our Hamiltonian is

$$H = \frac{p^2}{2m} - \int \dot{p}\,\mathrm{d}q,$$

which has an infinitesimal change

$$\mathrm{d}H = \frac{p}{m}\,\mathrm{d}p - \dot{p}\,\mathrm{d}q.$$

In the following steps we divide through by the d*q* element, keeping in mind that *p* and *q* are linearly independent and therefore do not depend on one another:

$$\frac{\partial H}{\partial q} = -\dot{p}, \qquad \text{i.e.} \qquad \dot{p} = -\frac{\partial H}{\partial q}.$$

We recognise this as the Hamiltonian equation of motion for generalised force!
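As a quick sanity check on the two results, here’s a small numerical sketch (my own addition; the harmonic oscillator \(H = p^2/2m + \frac{1}{2}kq^2\) and all constants are chosen purely for illustration): stepping \(\dot{q} = \partial H/\partial p\) and \(\dot{p} = -\partial H/\partial q\) forward in time should conserve the Hamiltonian.

```python
# Integrate Hamilton's equations for H = p**2/(2m) + k*q**2/2
# with the (symplectic) Euler method and check energy conservation.
m, k = 1.0, 4.0        # mass and spring constant: arbitrary illustrative values
q, p = 1.0, 0.0        # initial generalised position and momentum
dt = 1e-4

def hamiltonian(q, p):
    return p**2 / (2 * m) + 0.5 * k * q**2

E0 = hamiltonian(q, p)
for _ in range(100_000):     # integrate up to t = 10
    p += -k * q * dt         # dp/dt = -dH/dq = -k*q  (generalised force)
    q += (p / m) * dt        # dq/dt = +dH/dp = p/m   (generalised velocity)

print(abs(hamiltonian(q, p) - E0) < 1e-3 * E0)  # → True: H is conserved
```

The update order (momentum first, then position) makes this the symplectic variant of Euler’s method, which is why the energy error stays bounded instead of drifting.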

For those of us who are Earthbound, it’s easy to think of liquids and gases as being the most common fluids. But plasma–the fourth state of matter–is a fluid as well. Plasmas are essentially ionized gases, which, thanks to their freely flowing electrons, are electrically conductive and sensitive to magnetic fields. Their motions are described by a combination of the Navier-Stokes equations–the usual equations of motion for a fluid–and Maxwell’s equations–the equations governing electricity and magnetism. Studies of plasma motion often fall under the subject of magnetohydrodynamics and can include topics like planetary auroras, sunspots, and solar flares. (Video credit: SciShow)

Breaking Newton’s Law

Intriguing oscillatory back-and-forth motion of a quantum particle

A ripe apple falling from a tree is said to have inspired Sir Isaac Newton to formulate a theory describing the motion of objects subject to a force. Newton’s equations of motion tell us that a moving body keeps moving in a straight line unless a force changes its path. The impact of Newton’s laws is ubiquitous in our everyday experience, ranging from a skydiver falling in the earth’s gravitational field, over the inertia one feels in an accelerating airplane, to the earth orbiting around the sun.

In the quantum world, however, our intuition for the motion of objects is strongly challenged and may sometimes even fail completely. What about imagining a marble falling through water oscillating up and down rather than just moving straight downwards? Sounds strange. Yet, that’s what experimental physicists from Innsbruck, in collaboration with theorists from Munich, Paris and Cambridge, have discovered for a quantum particle. At the heart of this surprising behavior is what physicists call ‘quantum interference’, the fact that quantum mechanics allows particles to behave like waves, which can add up or cancel each other.

Approaching absolute zero temperature

To observe the quantum particle oscillating back and forth the team had to cool a gas of Cesium atoms to just above absolute zero temperature and to confine it to an arrangement of very thin tubes realized by high-power laser beams. By means of a special trick, the atoms were made to interact strongly with each other. At such extreme conditions the atoms form a quantum fluid whose motion is restricted to the direction of the tubes. The physicists then accelerated an impurity atom (an atom in a different spin state) through the gas. As this quantum particle moved, it was observed to scatter off the gas particles and to reflect backwards. This led to an oscillatory motion, in contrast to what a marble would do when falling in water. The experiment demonstrates that Newton’s laws cannot be used in the quantum realm.

Quantum fluids sometimes act like crystals

The fact that a quantum wave may get reflected into certain directions has been known since the early days of the development of the theory of quantum mechanics. For example, electrons reflect off the regular pattern of solid crystals, such as a piece of metal. This effect is termed ‘Bragg-scattering’. However, the surprise in the experiment performed in Innsbruck was that no such crystal was present for the impurity to reflect off. Instead, it was the gas of atoms itself that provided a type of hidden order in its arrangement, a property that physicists dub ‘correlations’. The Innsbruck work has demonstrated how these correlations, in combination with the wave nature of matter, determine the motion of particles in the quantum world and lead to novel and exciting phenomena that run counter to our everyday experience.

Understanding the oddity of quantum mechanics may also be relevant in a broader scope, and help to understand and optimize fundamental processes in electronics components, or even transport processes in complex biological systems.

IMAGE: Innsbruck physicists have observed an intriguing oscillatory back-and-forth motion of a quantum particle in a one-dimensional atomic gas. Credit: Florian Meinert

What IS the canonical momentum?

This post is going to try and explain the concepts of Lagrangian mechanics, with minimal derivations and mathematical notation. By the end of it, hopefully you will know what my URL is all about.

In 1687, Isaac Newton became the famousest scientist jerk in Europe by writing a book called *Philosophiæ Naturalis Principia Mathematica*. The book gave a framework of describing motion of objects that worked just as well for stuff in space as objects on the ground. Physicists spent the next couple of hundred years figuring out all the different things it could be applied to.

(Newton’s mechanics eventually got downgraded to ‘merely a very good approximation’ when quantum mechanics and relativity came along to complicate things in the 1900s.)

In 1788, Joseph-Louis Lagrange found a different way to put Newton’s mechanics together, using some mathematical machinery called Calculus of Variations. This turned out to be a very useful way to look at mechanics, almost always much easier to operate, and also, like, the basis for all theoretical physics today.

We call this **Lagrangian mechanics**.

The way we think of it these days is, whatever we’re trying to describe is a **physical system**. For example, this cool double pendulum.

The physical system has a **state** - “the pieces of the system are arranged this way”. We can describe the state with a list of numbers. The double pendulum might use the angles of the two pendulums. The name for these numbers, in Lagrangian mechanics, is **generalised coordinates**.

(Why are they “generalised”? When Newton did his mechanics to begin with, everything was thought of as ‘particles’ with a position in 3D space. The *coordinates* are each particle’s \(x\), \(y\) and \(z\) position. Lagrangian mechanics, on the other hand, is cool with *any* list of numbers that can be used to distinguish the different states of the system, so its coordinates are “generalised”.)

Now, we want to know what the system does as time advances. This amounts to knowing the state of the system for every single point in time.

There are lots of possibilities for what a system might do. The double pendulum might swing up and hold itself horizontal forever, for example, or spin wildly. We call each one a **path**.

Because the generalised coordinates tell apart all the different states of the system, a path amounts to a value of each generalised coordinate at every point in time.

OK. The point of mechanics is to find out *which* of the many *imaginable* paths the system/each coordinate *actually* takes.

To achieve this, Lagrangian mechanics says the system has a mathematical object associated with it called the **action**. It’s almost always written as \(S\).

OK, so here’s what you do with the action: you take one of the paths that the system might take, and feed it in; the action then spits out a number. (To mathematicians, it’s an object called a functional: a function from functions to numbers.)

So every path the system takes gets a number associated with it by the action.

The actual numbers associated with each path are not, in themselves, that useful. Rather, we want to compare ‘nearby’ paths.

We’re looking for a path with a special property: **if you add any tiny little extra wiggle to the path, and feed the new path through the action, you get the same number out**. We say that the path with this special property is the one the system actually takes.

This is called the **principle of stationary action**. (It’s sometimes called the “principle of least action”, but since the path we’re interested in is not necessarily the path for which the action is lowest, you shouldn’t call it that.)

Does the action *explain* why the system takes that path? The answer is sort of, because we pick out an action which produces a stationary path corresponding to our system. Which might sound rather circular and pointless.

If you study quantum field theory, you find out the principle of stationary action falls out rather neatly from a calculation called the Path Integral. So you could say that’s “why”, but then you have the question of “why quantum field theory”.

A clearer question is why is it *useful* to invent an object called the action that does this thing. A couple of reasons:

- the general properties of actions frequently make it possible to work out the action of a system just by looking at it, and it’s easier to calculate things this way than the Newtonian way.
- the action gives us a mathematical object that can be thought of as a ‘complete description of the behaviour of the system’, and conclusions you draw about this object - to do with things like symmetries and conserved quantities, say - are applicable to the system as well.

So, OK, let’s crack the action open and look at how it’s made up.

So “inside the action” there’s another object called the Lagrangian, usually written \(L\). (As far as I know it got called that by Hamilton, who was a big fan of Lagrange.) The Lagrangian takes a state of the system and a measure of how quickly it’s changing, and gives you back a number.

The action crawls along the path of the system, applying the Lagrangian at every point in time, and adding up all the numbers.

Mathematically, the action is the integral of the Lagrangian with respect to time. We write that like $$S[q]=\int_{t_0}^{t_1} L(q,\dot{q},t)\,\mathrm{d}t$$
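To make the ‘feed in a path, get out a number’ idea concrete, here’s a small numerical sketch (my own, with a free particle chosen for simplicity, so \(L=\frac{1}{2}m\dot{q}^2\)): discretise a path, approximate \(\dot{q}\) by finite differences, and sum \(L\,\mathrm{d}t\). A straight-line path gets a smaller action than a wiggly path with the same endpoints.

```python
import math

def action(path, dt, m=1.0):
    """Approximate S = integral of L dt for a free particle (L = m*qdot**2/2),
    given path samples spaced dt apart in time."""
    qdots = [(path[i + 1] - path[i]) / dt for i in range(len(path) - 1)]
    return sum(0.5 * m * v * v * dt for v in qdots)

dt = 1e-3
t = [i * dt for i in range(1001)]                      # times 0..1
straight = t                                           # q(t) = t: straight line
wiggly = [u + 0.1 * math.sin(math.pi * u) for u in t]  # same endpoints, wiggly

print(action(straight, dt) < action(wiggly, dt))  # → True
```

For the free particle the straight line is the true (stationary) path, so any small wiggle can only raise the action.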

So what do you do with the Lagrangian? Lots and lots of things.

The main thing is that you use the Lagrangian to figure out what the stationary path is.

Using a field of maths called *calculus of variations*, you can show that the path that stationaryises the action can be found from the Lagrangian by solving a set of differential equations called the Euler-Lagrange equations. If you’re curious, they look like $$\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_i}\right) = \frac{\partial L}{\partial q_i}$$but we won’t go into the details of how they’re derived in this post.

The Euler-Lagrange equations give you the **equations of motion** of the system. (Newtonian mechanics would also give you the same equations of motion, eventually. From that point on - *solving* the equations of motion - everything is the same in all your mechanicses).

The Lagrangian has some useful properties. Constraints can be handled easily using the method of Lagrange multipliers, and you can add Lagrangians for components together to get the Lagrangian of a system with multiple parts.

These properties (and probably some others that I’m forgetting) tell us what a Lagrangian made of multiple Newtonian particles looks like, if we know the Lagrangian for a single particle.

In the old, Newtonian mechanics, the world is made up of *particles*, which have a position in space, a number called a mass, and not much else. To determine the particles’ motion, we apply things called *forces*, which we add up and divide by the mass to give the acceleration of the particle.

Forces have a direction (they’re objects called *vectors*), and can depend on any number of things, but very commonly they depend on the particle’s position in space. You can have a *field* which associates a force (number and direction) with every single point in space.

Sometimes, forces have the special property of being *conservative*. A conservative force

- depends on where the particle is, but not how fast it’s going
- sums to zero around a loop: if you move the particle in a loop, and add up the force times the distance moved at every point around the loop, you get zero

This is great, because now your force can be found from a **potential**. Instead of associating a *vector* with every point, the potential is a *scalar field* which just has a *number* (no direction) at each point.

This is great for lots of reasons (you can’t get very far in orbital mechanics or electromagnetism without potentials) but for our purposes, it’s handy because we might be able to use it in the Lagrangian.

So, suppose our particle can travel along a line. The state of the system can be described with only one generalised coordinate - let’s call it \(q(t)\). It’s being acted on by a conservative force, with a potential defined along the line which gives the force on the particle.

With this super simple system, the Lagrangian splits into two parts. One of them is $$T=\frac{1}{2}m\dot{q}^2$$which is a quantity which Newtonian mechanics calls the kinetic energy (but we’ll get to energy in a bit!), and the other is just the potential \(V(q)\). With these, the Lagrangian looks like $$L=T-V$$and the equations of motion you get are $$m\ddot{q}=-\frac{\mathrm{d}V}{\mathrm{d}q}$$exactly the same as Newtonian mechanics.
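That last step can be checked symbolically. A sketch using sympy (my choice of tool, not from the post), with the concrete potential \(V(q)=\frac{1}{2}kq^2\) picked purely for illustration: build \(L=T-V\) and let sympy’s `euler_equations` grind out the equation of motion.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, k = sp.symbols('t m k')
q = sp.Function('q')(t)

# L = T - V with the illustrative potential V(q) = k*q**2/2
L = sp.Rational(1, 2) * m * q.diff(t)**2 - sp.Rational(1, 2) * k * q**2

eom, = euler_equations(L, q, t)   # the Euler-Lagrange equation for q
# The equation is -k*q - m*q'' = 0, i.e. m*q'' = -dV/dq, as claimed:
print(sp.simplify(eom.lhs + m * q.diff(t, 2) + k * q) == 0)  # → True
```

Swapping in any other \(V(q)\) (or more coordinates) is a one-line change, which is rather the point of doing mechanics this way.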

As it turns out, you can use that idea really generally. When things get relativistic (such as in electromagnetism), it gets squirrellier, but if you’re just dealing with rigid bodies acting under gravity and similar situations? \(L=T-V\) is all you need.

This is useful because it’s usually a lot easier to work out the kinetic and potential energy of the objects in a situation, then do some differentiation, than to work out the forces on each one. Plus, constraints.

The canonical momentum in and of itself isn’t all that interesting, actually! Though you use it to make Hamiltonian mechanics, and it hints towards Noether’s theorem, so let’s talk about it.

So the Lagrangian depends on the state of the system, and how quickly it’s changing. To be more specific, for each generalised coordinate \(q_i\), you have a ‘generalised velocity’ \(\dot{q}_i\) measuring how quickly it is changing in time at this instant. So for example at one particular instant in the double pendulum, one of the angles might be 30 degrees, and the corresponding velocity might be 10 radians per second.

The **canonical momenta** \(p_i\) can be thought of as a measure of how responsive the Lagrangian is to changes in the generalised velocity. Mathematically, it’s the partial derivative (keeping time, the generalised coordinates, and the other generalised velocities fixed): $$p_i=\frac{\partial L}{\partial \dot{q}_i}$$They’re called momenta by analogy with the quantities *linear momentum* and *angular momentum* in Newtonian mechanics. For the example of the particle travelling in a conservative force, the canonical momentum is exactly the same as the linear momentum: \(p=m\dot{q}\). And for a rotating body, the canonical momentum is the same as the angular momentum. For a system of particles, the canonical momentum conjugate to the centre-of-mass position is the sum of the linear momenta.

But be careful! In situations like motion in a magnetic field, the canonical momentum and the linear momentum are different. Which has apparently led to no end of confusion for Actual Physicists with a problem involving a lattice and an electron and somethingorother I can no longer remember…

OK a little maths; let’s grab the Euler-Lagrange equations again: $$\frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{\partial L}{\partial \dot{q}_i}\right) = \frac{\partial L}{\partial q_i}$$Hold on. That’s the canonical momentum on the left. So we can write this as $$\frac{\mathrm{d}p_i}{\mathrm{d}t} = \frac{\partial L}{\partial q_i}$$Which has an interesting implication: suppose \(L\) does not depend on a coordinate directly, but only its velocity. In that case, the equation becomes $$\frac{\mathrm{d}p_i}{\mathrm{d}t}=0$$so the canonical momentum corresponding to this coordinate does not change ever, no matter what.

Which is known in Newtonian mechanics as *conservation of momentum*. So Lagrangian mechanics shows that momentum being conserved is equivalent to the Lagrangian not depending on the absolute positions of the particles…
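Here’s a minimal sympy sketch of that statement (my own example, not from the post): a projectile, where the Lagrangian depends on the height \(y\) through the potential but never on \(x\) itself, so the canonical momentum \(p_x\) is conserved.

```python
import sympy as sp

t, m, g = sp.symbols('t m g')
x = sp.Function('x')(t)   # horizontal position: a 'cyclic' coordinate
y = sp.Function('y')(t)   # height: appears in the potential

# L = T - V for a projectile in uniform gravity
L = sp.Rational(1, 2) * m * (x.diff(t)**2 + y.diff(t)**2) - m * g * y

p_x = sp.diff(L, x.diff(t))   # canonical momentum for x: m*x'(t)
# By the Euler-Lagrange equation, dp_x/dt = dL/dx, and dL/dx vanishes,
# so p_x never changes: conservation of (horizontal) momentum.
print(sp.diff(L, x))          # → 0
```

The same check with \(y\) fails (\(\partial L/\partial y = -mg \neq 0\)), which is why vertical momentum is *not* conserved for a projectile.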

That’s a special case of a **very very important theorem** proved by Emmy Noether.

The canonical momenta (or in general, the *canonical coordinates*) are central to a closely related form of mechanics called **Hamiltonian mechanics**. Hamiltonian mechanics is interesting because it treats the ‘position’ coordinates and ‘momentum’ coordinates almost exactly the same, and because it has features like the ‘Poisson bracket’ which work almost exactly like quantum mechanics. But that can wait for another post.

Lagrangian mechanics may be a useful calculation tool, but the reason it’s *important* is mainly down to something that Emmy Noether figured out in 1915. This is what I’m talking about when I refer to Lagrangian mechanics forming the basis of all modern theoretical physics.

[OK, I am a total Noether fangirl. I think I have that in common with most vaguely theoretical physicists (the fan part, not the girl one, sadly). To mathematicians, she’s known for her work in abstract algebra on things like “rings”, but to physicists, it’s all about Noether’s Theorem.]

Noether’s theorem shows that there is a very fundamental relationship between **conserved quantities** and **symmetries** of a physical system. I’ll explain what that means in lots more detail in the next post I do, but for the time being, you can read this summary by quasi-normalcy.

leo(+ n) singing love equation 150524

*WHAT IS CHARGE & WHY IS IT IMPORTANT?*

One of [if not THE] most fundamental properties of physics, it remains elusive to description and explanation in modern physics today.

Often described simply as the attractive [or repulsive] vector force experienced at a distance between two or more particles of Matter, and mathematically formulated in 1785, a complete and satisfactory explanation of the quantum source and role of Charge in physics has remained stubbornly elusive to all who have sought it, until now…

Tetryonic theory’s equilateral Planck quanta reveal charge to be an inherent geometric property of electromagnetic energy arising from their quantum field geometries; and while electrical energies themselves are inherently non-polarised, the magnetic moments of each side of a Planck quantum quoin each possess a distinct magnetic vector that in turn determines the weak force interactions of fields and particles on all scales of physics.

I.e. it is the magnetic vector, not the electric field, that determines whether a quantum field of EM energy [a boson or photon] is ‘positive’ or ‘negative’; and it is the equilateral asymmetry of many combined quantum quoins of EM energy momenta that in turn creates fields of electrostatic and electromagnetic energies that go on to create the charged 3D Matter topologies of the particles of the Standard Model.

**CHARGE is a measure of the inherent geometric magnetic dipole moments of all equilateral Planck energy momenta that comprise all the 2d fields, 3D particles and 1d forces of our Universe.**

Tetryonic theory’s new geometric model of electromagnetic charge quanta advances our understanding of the quantum source and role of charge in the physics of our Universe in new and exciting ways.

While the Standard Model posits spherical sub-atomic particles, Tetryonics reveals quantum charge to be an inherent and essential geometric property of the elementary Platonic topologies of all particles, with each particle [charged or neutral] having a distinct number of charged fascia [Higgs bosons] comprising the final topology that makes each and every elementary particle unique…

Special Relativity, developed and advanced as an explanation for the constant velocity of light irrespective of the motion of the source, posited that Lorentz corrections apply equally to mass & Matter alike through Einstein’s mass-energy equivalence formula, and that the contraction of spherical Matter in motion was the source of magnetic moments and observed emfs in magnet-conductor experiments…

*Tetryonics shows that all particles have rigid 3D Matter topologies and that their magnetic moments are the result of the asymmetric distribution of Planck mass-energy quanta [E=hν] in their 2d kEM fields of motion [mv^2/c^2], not any physical distortion in their material topologies, thus undermining a central and foundational assumption of relativity theory.*

This simple and elegant explanation of charge wrt material particle topologies and the associated velocity related magnetic moments of their kEM fields of motion also does away with the virtual particles and infinities of QED that have so long plagued its formulation and reconciliation with experimental results to date… totally removing renormalisation and inherent quantum indeterminacy from its equations and thus restoring deterministic motion and mechanics to physics.

With SR & QED having been corrected from this foundational geometric understanding of EM charge geometries, attention can turn back to reconciling the mathematical similarity of Newton’s formula of Gravitation and Coulomb’s formula of Charge interaction – at which point Tetryonic field geometries come to the rescue yet again and show that between any two charged particles two equilateral fields of EM energy momenta exist, reaching out from each particle to create a field of interaction irrespective of time and proportional to the charge topology and motion of each particle respectively… [Instantaneous interaction-at-a-distance]

Likewise the mass-Matter content of celestial bodies and their Gravitational attraction to each other can be similarly modelled, but in this case the extended fields of EM energy and resultant force vectors are based on the strictly attractive vector forces between the bodies first, and mathematically formulated from this field geometry second…

The end result is an identical mathematical formulation [save the force constants k & G] where one produces an interactive vector force while the other produces a strictly attractive force only, irrespective of time.

Einstein, with his concepts of relativistic motion, no differentiation between charge, mass or Matter, and a time-limited speed of propagation [c], required a different explanation and formulation of gravity in order to account for the observed perihelion advance of Mercury in its motion about the Sun.

His failure to differentiate 2d mass and 3D Matter from an understanding of the source and role of charge at the quantum level of physics is understandable, as quantum theory had yet to be invented when he formulated his ideas of relativistic uniform motion [SR] & accelerated motion [GR], leading to his much-vaunted formula of mass-energy equivalence [E=mc^2], which completely neglects to define and equate 3D Matter in its formulation [m/c^2=E/c^4=?]. It stands in obvious disagreement with both Newton and Coulomb as to the possibility, let alone the mechanics, of the observed instantaneous action of Gravity over cosmological distances irrespective of time, leading to an exhaustive search for ‘gravity waves’ since GR was released publicly.

Many other well-known ‘proofs’ of GR exist, such as the ‘bending’ of light near massive bodies such as stars and the ‘time dilation’ of photons of EM mass-energy in GPS satellite communications etc. – but all of these ‘tests’ also suffer from the same foundational problem that beset Einstein in the first place: there is no differentiation of and between planar 2d fields of mass-energy momenta and material bodies of 3D Matter.

If science cannot [or will not] define and differentiate between 2d mass & 3D Matter and provide an understanding of the source and mechanics of charged energy over space and time in physics, then any explanation developed from such a limited understanding may at best mirror reality but offer no real explanation of the real mechanics at work in Nature – at least Sir Isaac Newton was honest about his failings in this respect in terms of his mathematical solution.

As always, there exists a ‘kernel of truth’ in both formulations, but only Tetryonic theory with its charged quantum geometries offers a complete ‘picture’ of the mechanics at work: why the maths formulates the way it does, and how two wildly different formulations of universal gravitation can both mirror reality despite the inclusion or exclusion of time as a factor…

**NATURE AT HEART IS GEOMETRIC IN FORM – mathematics is simply our human attempt to describe the geometric properties of EM energies at work, energy forms we could never hope to see, until now.**

In short Tetryonic theory both explains and unites the maths of classical, quantum and relativistic mechanics through the power of equilateral quantum energy momenta geometries & Matter topologies.

So which is real, the Ptolemaic or the Copernican system? Although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true. As in the case of our normal view versus that of the goldfish, one can use either picture as a model of the universe, for our observations of the heavens can be explained by assuming either the earth or the sun to be at rest. Despite its role in philosophical debates over the nature of our universe, the real advantage of the Copernican system is simply that the equations of motion are much simpler in the frame of reference in which the sun is at rest.

—
Stephen Hawking (The Grand Design, 2010, pp. 41, 42)

*7.05.15* physics notes on equations of projectile motion from today’s revision lesson.

Standing in the shower, water running down my face, pretending to be in a music video n shit. Every movement you make has to be slow to equate to being in slow motion. Next level shit.

The image is not mine. It is a fantastic creation by *bigblueboo* that has caught some attention outside of the usual math tumblverse. You should definitely check out eir blog and if you like this post you should (also?) reblog the original. With that out of the way:

**Linear Algebra. Linear algebra is the study of vector spaces, which are “flat” structures that have a notion of addition and dimension. **

**L^2(S^1) is the space of “square integrable functions on the circle”. Every continuous function on the closed interval [-π,π] is in L^2(S^1), but the space also includes some functions which have discontinuities, so long as they are not too extreme.**
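A numerical sketch of what “discontinuous but not too extreme” means (my own illustration, not part of the original post): the square wave on [-π,π] jumps at 0 and ±π, yet the integral of its square is finite (2π), so it belongs to L^2(S^1), and the L² distance to its Fourier partial sums shrinks as more terms are added.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200_001)
dx = x[1] - x[0]
f = np.sign(np.sin(x))            # square wave: discontinuous, but bounded

norm_sq = np.sum(f**2) * dx       # ≈ 2π: finite, so f is square integrable

def partial_sum(x, n_terms):
    """Fourier partial sum of the square wave: (4/π) Σ sin((2k+1)x)/(2k+1)."""
    s = np.zeros_like(x)
    for j in range(n_terms):
        n = 2 * j + 1
        s += (4 / np.pi) * np.sin(n * x) / n
    return s

errs = [np.sqrt(np.sum((f - partial_sum(x, n))**2) * dx) for n in (1, 10, 100)]
print(errs[0] > errs[1] > errs[2])   # → True: convergence in the L² norm
```

Note the partial sums never converge *pointwise* at the jumps (the Gibbs phenomenon), which is exactly why L² convergence, not pointwise convergence, is the right notion for this space.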

bigblueboo’s image happens to be an excellent source of mathematical content: this post is the final post in a series in which I discover some of its secrets. In the first post you can see a derivation of its symbolic equations of motion, and part two contains a sweet characterization. The third post explains that the non-constant speed in the gif is not (entirely) a result of the viewing angle, and the fourth post quantifies the variation.

I have some more questions about HTPs; there are certainly natural questions to look at. Therefore, I am planning on doing an epilogue post which will lay out some of the questions I have. If you have any questions you’d like me to share, please reblog and I’ll (probably) include them! However, that post will not be written in fancy images like usual but I’ll just do the best I can in plaintext so that it can be a legitimate reblog of the OP (hopefully he’ll see some of the work that his wonderful piece inspired!)