Pyrargyrite, Ag3SbS3 (top) and proustite, Ag3AsS3 (bottom).

These are isomorphous minerals, meaning the atoms have the same arrangement in the two compounds, but where pyrargyrite has antimony, proustite has arsenic.

Antimony and arsenic are in the same column of the periodic table, so we would correctly expect them to make this swap without much fuss.

This little gem is dedicated to Frédéric Vanhove, our graph theory assistant, who passed away yesterday. He had a passion for graphs, so I hope he’d like this beautiful theorem, in my opinion one of the most elegant in all of mathematics.

Draw n points and connect them without creating any “loops”; in the result, every point should be accessible from every other point by exactly one path. Such a configuration (a graph) is called a tree. You can see all possible trees on four points in the image. Lots of these trees are essentially the same (isomorphic), but we label the points to distinguish between them.

Carl Wilhelm Borchardt found an elegant formula for the total number of possible trees on n labeled points, but nowadays the result is named after Arthur Cayley. In our example (with n=4) we find 16 trees, and in general, the formula states that this number is exactly n^(n-2).

In short, n^(n-2) is the number of spanning trees of the complete graph Kn.
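As a sanity check, Cayley’s formula can be verified by brute force: enumerate all (n-1)-edge subsets of the complete graph and keep the acyclic ones. A sketch in Python (the function name `count_labeled_trees` is just an illustrative choice):

```python
from itertools import combinations

def count_labeled_trees(n):
    """Count labeled trees on n points by brute force: a tree on n
    points has exactly n-1 edges and contains no loops (cycles)."""
    vertices = range(n)
    all_edges = list(combinations(vertices, 2))
    count = 0
    for edges in combinations(all_edges, n - 1):
        parent = list(vertices)  # union-find to detect cycles

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        acyclic = True
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False  # this edge would close a loop
                break
            parent[ru] = rv
        # n-1 acyclic edges on n points force connectivity: a tree
        if acyclic:
            count += 1
    return count

print(count_labeled_trees(4))  # 16, matching 4**(4-2)
```

For n=4 there are C(6,3)=20 three-edge subsets; the four triangles-plus-isolated-point configurations are discarded, leaving the 16 trees in the image.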

Group Theory, Isomorphisms & Permutations

I’m not feeling social interaction today so I learned a bunch of group theory, and it just got too cool, so I had to post some of it on here!

Now, before I go on to talk about the main things which are Isomorphisms and Permutations, I should properly define groups.

Taking the axiomatic approach, a group is a set with an operation for combining elements which passes the following tests:

Groups are closed under the operation, meaning if you combine any two elements using the operation (the symbols + and * are often used), you will never be able to create an element which is not in the group.

The operation is associative, meaning A*(B*C)=(A*B)*C, so we may write A*B*C unambiguously. This extends to any string of n elements.

There exists an element “e” such that x*e=x and e*x=x for every x. This element is called the unit, and if we are talking about multiplication with ordinary numbers, it corresponds to the number one.

For each element of the group, there is an inverse. When we are using multiplication-like operations like we are above, we usually write it a^(-1) and call it “a inverse”; for addition we use -a and call it “the negative of a”. The inverse of the unit is the unit again, and any element combined with its inverse becomes the unit.
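The four axioms above can be checked mechanically for any small finite set. A sketch in Python (the helper name `is_group` and the example of the integers mod 5 under addition are my own illustrations, not from the post):

```python
def is_group(elements, op):
    """Check the four group axioms for a finite set with operation op."""
    elems = list(elements)
    # Closure: combining any two elements stays inside the set.
    if any(op(a, b) not in elems for a in elems for b in elems):
        return False
    # Associativity: a*(b*c) == (a*b)*c for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elems for b in elems for c in elems):
        return False
    # Unit: exactly one e with e*x == x == x*e for all x.
    units = [e for e in elems if all(op(e, x) == x == op(x, e) for x in elems)]
    if len(units) != 1:
        return False
    e = units[0]
    # Inverses: every a has some b with a*b == e == b*a.
    return all(any(op(a, b) == e == op(b, a) for b in elems) for a in elems)

add_mod5 = lambda a, b: (a + b) % 5
print(is_group(range(5), add_mod5))  # True
```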

Notice that the commutative axiom, A*B=B*A, is not listed; it is not always true, as we will see in a second. Take the group consisting of the elements




under the operation of matrix multiplication, which works like this:

Now, if we do the operation A*B, we get the matrix C, but if we do the operation B*A, we get the matrix K, meaning that this operation is not commutative. Matrix multiplication is not the only operation which behaves this way, so in general we have to assume that the operation does not commute. If it does, we call the group an Abelian group.
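Since the original matrices from the images aren’t reproduced here, a hedged stand-in: any two 2×2 matrices like the row-swap and shear below already show that matrix multiplication need not commute (these particular matrices are hypothetical examples, not the A and B of the post):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1],
     [1, 0]]  # swaps the two rows
B = [[1, 1],
     [0, 1]]  # a shear

print(matmul(A, B))  # [[0, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 0]] -- different, so A*B != B*A
```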

Now that that’s out of the way, we can define an Isomorphism. 

An isomorphism is a one-to-one function from one group onto another which preserves the operation. Basically, if G is a group under the operation *, denoted <G,*>, and <H,+> is a group, and

then an isomorphism f(x) will take f(a)=a’ and f(b)=b’ and make f(a*b)=a’+b’, or, entirely in function notation, f(a*b)=f(a)+f(b).

To prove that two groups are isomorphic, you need to prove that the function you are using is one-to-one, that is, both injective and surjective. For injectivity, you have to prove that each element of the range is the image of at most one element of the domain, which can be done by showing that it fits this definition

You then have to prove that the function is surjective by showing that for every element y in the range, there exists an element whose image is y. Together, these two things prove that the function is bijective, which means that you can turn elements of G into elements of H and vice versa.

From there, all you have to prove is that f(a*b)=f(a)+f(b) and you’re done.
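A classic concrete instance: the exponential map turns the real numbers under addition into the positive reals under multiplication. A quick numerical sketch:

```python
import math

# f(x) = e**x maps <R, +> onto <R_{>0}, *>: it is injective, surjective
# onto the positive reals, and turns addition into multiplication,
# i.e. f(a+b) = f(a)*f(b).
f = math.exp

a, b = 1.5, 2.25
print(f(a + b))
print(f(a) * f(b))  # the same number, up to floating-point rounding
```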

Now why are isomorphisms important? Because they can be used to prove things like Cayley’s theorem, which says that every group is isomorphic to a group of permutations. Quite a bit is known about permutations, which makes this theorem very strong in terms of its use.

We can prove this in a second, but first, we have to look at permutations. 

A permutation is a function on a set S which rearranges the elements of S. By definition this must be a bijective function mapping S to itself. Taking individual permutations to be elements of a set, we can create a set of n! elements, where n is the number of objects being permuted. For instance, the set of all permutations of three numbers is:





and has 3!=6 elements. To combine two elements, we define the composition of two permutations: a function which takes the elements as they are arranged by the inner permutation and then rearranges them under the outer permutation. For instance:

To show that this combining of permutations is closed under the operation above, we can make a chart like the one below, where each entry of the first column is multiplied on the left by each entry of the first row,

but there are infinitely many groups of permutations, so we must prove it in general. We can actually take it a step further into abstraction, and prove that the composite of any two bijective functions yields another bijective function. Why are we proving the combinations of permutations are bijective? Because we started with the assumption that we had made a list of all the possible permutations of n objects, so showing that this new function is also a bijective permutation means that it was on the list in the first place.
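Before the proofs, the closure claim behind the chart can be checked directly for the six permutations of three objects. A sketch (encoding each permutation as a tuple p with p[i] the image of i is one convention among several):

```python
from itertools import permutations

# The six permutations of {0, 1, 2}.
S3 = list(permutations(range(3)))

def compose(f, g):
    """The composition f*g: apply the inner permutation g first."""
    return tuple(f[g[x]] for x in range(3))

# Closure: every entry of the 6x6 composition chart is again in S3.
table = [[compose(f, g) for g in S3] for f in S3]
print(len(S3))                                        # 6, i.e. 3!
print(all(entry in S3 for row in table for entry in row))  # True
```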

Assume f(x) and g(x) are injective functions; then we have to prove that [f*g][x]=[f*g][y] implies x=y. Suppose [f*g][x]=[f*g][y]; then f(g(x))=f(g(y)). Because f(x) is injective, g(x)=g(y), and because g(x) is injective, x=y, so [f*g][x] is injective.

Assume g:A ->B and f:B ->C are surjective; then we have to prove that every element of C is f*g of some element in A. Assume z is an element of C. Because f is surjective, f(y)=z for some y in B, and because g is surjective, g(x)=y for some x in A, so f(g(x))=z. Thus for any element z of C, there is an element of which z is the image under f*g, so [f*g][x] is surjective.

Now, assuming the functions f and g are both bijective, it follows from what was just proven that f*g is bijective as well.

By this, the group of permutations under composition is closed.

There exist inverses for each element, which return each permutation to the permutation

which also happens to be the identity element. Finally, combining permutations is associative, so we have a group.

Now we can prove Cayley’s theorem which again states that each group is isomorphic to a group of permutations.  


Begin by assuming we have a group G; we wish to prove that it is isomorphic to a group of permutations. Permutations require a set, and the only set we have around is the one G is built on, so we use it. We then define a function

which is defined as:

and gives one permutation of G per element a: it changes each element x of G into ax as x ranges over all the elements of G.

This function is injective, that is 

and surjective, as for any y in G,

so y is the image of the element a^(-1)*y.

So we have a bijective function which turns an element a of G into a permutation which turns x_1 into a*x_1, x_2 into a*x_2 and so on. So we let G* represent the set of permutations of G created by pi_a(x) as a ranges through the elements of G. 

Notice now that the set of permutations created is not necessarily the set of all permutations of G, but a permutation corresponding to each element. We now have to prove that it is a group, that is

and that for any permutation contained in G*

Where pi_e is the identity permutation, given by multiplying each x in G by the identity. Associativity can be taken for granted, since composition of functions is always associative.

To begin, we have to prove that 

We can say that 

Because ab is a member of G, we can say that G* is closed with respect to composition. 

Finally, because G is a group with inverses, 

So we have proved that G* is in fact a group.
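The whole construction can be traced on a small concrete group. Taking G to be the integers mod 4 under addition (my own choice of example), each pi_a is the permutation x -> a+x of G, and composing two of them gives exactly pi_{a+b}:

```python
# Cayley's theorem on a concrete group: G = {0,1,2,3} under addition mod 4.
G = range(4)
op = lambda a, x: (a + x) % 4

def pi(a):
    """The permutation of G given by combining with a on the left."""
    return tuple(op(a, x) for x in G)

# G* collects one permutation per element a of G.
G_star = {a: pi(a) for a in G}

def compose(f, g):
    return tuple(f[g[x]] for x in G)

# pi_a * pi_b = pi_{a*b}: G* is closed under composition, and the map
# a -> pi_a preserves the operation, which is the heart of the proof.
for a in G:
    for b in G:
        assert compose(G_star[a], G_star[b]) == G_star[op(a, b)]

print(G_star[1])  # (1, 2, 3, 0)
```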

Now all we have to do is prove that G is isomorphic to G*. We already have the function ready, so we use it as the isomorphism. 

It is injective,

and surjective, every element pi_a of G* is the image of some f(a)


Thus, f is an isomorphism, and

That’s about all I have left in me for today, but I’ll try to get more up on here tomorrow! I thought Cayley’s theorem would be a good start!


I just have lots of Tron feels… xD I did this comic a while back when I found out what a NAVI Bit is. The concept of ISOs being undirected and lost got to me. I think Quorra’s NAVI would’ve gotten very cross with her for always sneaking off into places where she shouldn’t go—which would’ve been funny to make into a comic, buuut this happened instead. Cuz feels. xD Enjoy! <3

The Rado graph

The Rado graph is the unique (up to isomorphism) countable graph R such that for every finite graph G and every vertex v of G, every embedding of G−v as an induced subgraph of R can be extended to an embedding of G into R. This implies R contains all finite and countable graphs as induced subgraphs.

Rado gave the following construction: identify the vertices of the graph with the natural numbers. For every x and y with x<y, an edge connects vertices x and y in the graph if the xth bit of y's binary representation is nonzero.

Thus, for instance, the neighbors of vertex 0 consist of all odd-numbered vertices, while the neighbors of vertex 1 consist of vertex 0 (the only vertex whose bit in the binary representation of 1 is nonzero) and all vertices with numbers congruent to 2 or 3 modulo 4.
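Rado’s bit construction is easy to play with in code. A sketch (the helper name `adjacent` is my own):

```python
def adjacent(x, y):
    """Rado's construction: for x < y, x ~ y iff bit x of y is nonzero."""
    if x > y:
        x, y = y, x
    return (y >> x) & 1 == 1

# Neighbors of 0 are exactly the odd numbers (bit 0 set):
print([v for v in range(1, 10) if adjacent(0, v)])  # [1, 3, 5, 7, 9]

# Neighbors of 1 among small numbers: vertex 0, plus the vertices
# congruent to 2 or 3 mod 4 (bit 1 set):
print([v for v in range(10) if v != 1 and adjacent(1, v)])  # [0, 2, 3, 6, 7]
```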

Striking Back: A Retrospective

[Crossposted from a FB post I sent to some of my friends.]

Sixteen years ago, I watched a particular children’s movie.

Almost exactly ten years ago, I started making a webcomic.

Four years ago, I ended that webcomic and started writing a novel based on that movie.

As of the last few days, I’ve finished that novel.

Perhaps I should explain.

Over the last few years, I’ve been working on a giant novel-length fanfiction set in the world of Pokémon, based heavily on the movie Mewtwo Strikes Back. It’s a retelling of the life of the film’s chief figure, Mewtwo, in his own words, interspersed with dreamlike poetic visions from the perspective of another of the film’s major figures, Mew. Within can be found ethical debates, asexuality, towering ambitions, attempted genocide, dark humor, religious speculation, animism, godlike powers, metafictional memoir, isomorphisms between author and character, and the search for the meaning of life. And as of Wednesday, this story—all four hundred and seventy pages of it—is complete…

Read More

ASIDE 1.47: Although various classes of field, for example, number fields and function fields, had been studied earlier, the first systematic account of the theory of abstract fields was given by Steinitz in 1910 (Algebraische Theorie der Körper, J. Reine Angew. Math., 137:167–309). Here he introduced the notion of a prime field, distinguished between separable and inseparable extensions, and showed that every field can be obtained as an algebraic extension of a purely transcendental extension. He also proved that every field has an algebraic closure, unique up to isomorphism. His work influenced later algebraists (Noether, van der Waerden, Artin, …) and his article has been described by Bourbaki as “… a fundamental work which may be considered as the origin of today’s concept of algebra”.
—  Milne, Field Theory
When the goal of self interest is seen to be perfectly isomorphic with universal well-being, bad people will do what it takes to get universal well-being…But even if the bad but smart people do general good for their own sakes, there are still foolish people who won’t recognize this one-to-one isomorphy, and some foolish people will be bad too, and they will f$%# things up.
—  from “2312” by Kim Stanley Robinson
In mathematics a stack or 2-sheaf is, roughly speaking, a sheaf that takes values in categories rather than sets. Stacks are used to formalise some of the main constructions of descent theory, and to construct fine moduli stacks when fine moduli spaces do not exist.

Descent theory is concerned with generalisations of situations where geometrical objects (such as vector bundles on topological spaces) can be "glued together" when they are isomorphic (in a compatible way) when restricted to intersections of the sets in an open covering of a space. In a more general setup the restrictions are replaced with general pull-backs, and fibred categories form the right framework to discuss the possibility of such "glueing". The intuitive meaning of a stack is that it is a fibred category such that "all possible glueings work".

The specification of glueings requires a definition of coverings with regard to which the glueings can be considered. It turns out that the general language for describing these coverings is that of a Grothendieck topology. Thus a stack is formally given as a fibred category over another base category, where the base has a Grothendieck topology and where the fibred category satisfies a few axioms that ensure existence and uniqueness of certain glueings with respect to the Grothendieck topology.

Stacks are the underlying structure of algebraic stacks (also called Artin stacks) and Deligne–Mumford stacks, which generalize schemes and algebraic spaces and which are particularly useful in studying moduli spaces. There are inclusions: schemes ⊆ algebraic spaces ⊆ Deligne–Mumford stacks ⊆ algebraic stacks ⊆ stacks. Edidin (2003) and Fantechi (2001) give brief introductory accounts of stacks; Gómez (2001), Olsson (2007) and Vistoli (2005) give more detailed introductions; and Laumon & Moret-Bailly (2000) describes the more advanced theory.


Homotopical bits

It’s not a big thing; but I’ve been musing on this point for embarrassingly long, and quite by accident (as it seems) stumbled on the right argument on Monday.

Recall that the mod-$k$ Moore-Peterson spaces $P^{j+1}_{(k)}$ are defined as cofibers
$$ \xymatrix{ \sph^j \ar[r]^{\underline{k}} \ar[d] & \sph^j \ar[d] \\ * \ar[r] & P^{j+1}_{(k)} } $$ and that the degree maps compose multiplicatively (cf. Hurewicz isomorphism), so that a perfectly natural thing to consider is the relative cofiber sequence of the composite
$$ \xymatrix{ \sph^j \ar[r]^{\underline{k}} \ar[d] & \sph^j \ar[r]^{\underline{k}} \ar[d] & \sph^j \ar[d] \\
* \ar[r] & P^{j+1}_{(k)} \ar[r]\ar[d] & P^{j+1}_{(k^2)} \ar[d]\\
& * \ar[r] & P^{j+1}_{(k)} } $$ Now (isn’t it always the way) this isn’t the thing of the day, but taking wedges with $P^2_{(k)}$ will give … something… If the divisor of $k$ at $2$ isn’t exactly $2$, the resulting picture isn’t helpful much, because the $\underline{k}$ maps smash to zero, but if $k=2$, we have the Very Interesting Picture
$$ \xymatrix{ P^{j+2}_{(2)} \ar[r]^{\underline{2}} \ar[d] & P^{j+2}_{(2)} \ar[r]^{\underline{2}} \ar[d] & P^{j+2}_{(2)} \ar[d] \\
* \ar[r] & P^2_{(2)} \wedge P^{j+1}_{(2)} \ar[r]\ar[d] & P^2_{(2)} \wedge P^{j+1}_{(4)} \ar[d]\\
& * \ar[r] & P^2_{(2)}\wedge P^{j+1}_{(2)} } $$ which is made only More Interesting by the observation that $P^2_{(2)} \wedge P^{j+1}_{(4)} $ is just as well as $ P^2_{(4)} \wedge P^{j+1}_{(2)} $, which is the cofiber in
$$ \xymatrix{
P^{j+1}_{(2)} \ar[r]^{\underline{4}} \ar[d] & P^{j+1}_{(2)} \ar[d] \\ * \ar[r] & P^2_{(4)} \wedge P^{j+1}_{(2)} } $$ except that, now, the degree-$4$ map is trivial, so we have an equivalence
$$ P^2_{(4)} \wedge P^{j+1}_{(2)} \simeq P^{j+2}_{(2)} \vee P^{j+3}_{(2)}.$$

Using this to clarify the previous diagram, and continuing a few more squares, and specializing to the minimal $j=1$ for full effect,
$$ \xymatrix{ P^{3}_{(2)} \ar[r]^{\underline{2}} \ar[d] & P^{3}_{(2)} \ar[r]^{\underline{2}} \ar[d] & P^{3}_{(2)} \ar[d] \ar[r] & * \ar[d] \\
* \ar[r] & P^2_{(2)} \wedge P^{2}_{(2)} \ar[r]\ar[d] & P^3_{(2)} \vee P^4_{(2)} \ar[d] \ar[r] & P^4_{(2)} \ar[d]^{\pm \underline{2}} \\
& * \ar[r] & P^2_{(2)}\wedge P^{2}_{(2)} \ar[r] & P^4_{(2)} } $$ and once we have drawn that, the key observation to make is that the downwards map $P^3 \to P^3\vee P^4$ has a retract, and the rightward map $P^3\vee P^4\to P^4$ has a section. That is, the doubling maps $ P^{j+2} \to P^{j+2}$ factor through both $ (P\wedge P)^{j+2}$ and $ (P\wedge P)^{j+3}$!
$$ \xymatrix{ P^4 \ar[r] \ar[d] \ar[dr]|{\underline{2}} & P^2\wedge P^2 \ar[d] \\ P^2\wedge P^3 \ar[r] & P^4 
} $$ (all $P$’s here being $P_{(2)}$s)

Admittedly, this isn’t the fullness of what I was hoping to have settled (about which you can guess if you like, or point to references in case I’m being obtuse about something), but it’s a start, anyways!

A brief linguistic note:

A subcategory is full if its hom-objects are isomorphic to their ambient hom-objects.

A functor is full if its induced map between hom-objects is epic.

The stipulation that a functor preserves identity maps can be dropped (and recovered) if the functor is full (by the uniqueness of identity, since \(\mathrm{id}_A = \mathrm{id}_A \mathrm{id}_A' = \mathrm{id}_A'\)).

(In particular, \(\mathcal{F}(\mathrm{id}_A)\) acts like an identity inside the image of \(\mathrm{Hom}(A,A)\) under \(\mathcal{F}\), so once you have fullness you get \(\mathcal{F}(\mathrm{id}_A) = \mathrm{id}_{\mathcal{F}(A)}\).)
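One way to spell the recovery argument out, assuming only that \(\mathcal{F}\) preserves composition: by fullness choose some \(g : A \to A\) with \(\mathcal{F}(g) = \mathrm{id}_{\mathcal{F}(A)}\); then

\[ \mathrm{id}_{\mathcal{F}(A)} = \mathcal{F}(g) = \mathcal{F}(g \circ \mathrm{id}_A) = \mathcal{F}(g)\,\mathcal{F}(\mathrm{id}_A) = \mathcal{F}(\mathrm{id}_A). \]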