It might seem like we're pretty far in the weeds by now, but there's a point to make here which is essential to my work. And we may as well introduce some of that noxious stuff — mathematical jargon. The collection of symmetries we've been talking about, which can act in sequence or be done and then undone, is usually called a *group* in mathematics. Maybe because it's sort of like a group that gets together to play ball (though here it might be with a line). By the way — not everyone in the group actually knows how to play. There is always one character called *the identity* that just leaves the object as it is, moving nothing. This is still a symmetry though, in the same way 0 is a number, which is worth thinking about some.

An interesting question to ask is: where can points on the line go as the group bobbles it around? To simplify, how about: where does a particular point, say zero, go? Where it ends up depends on the particular symmetry we apply, but considering all possibilities we see that zero can really go anywhere on the line. How? Well, if we want to take 0 to some value $t$, we actually have two options: either apply the translation that adds $t$ to every point, or apply the reflection through the value $t/2$. Since the point 0 can go anywhere, we say that its *orbit* under the group is the whole line.

We've seen that we can think of the members (elements) of our group as constituting two copies of the line. On the first line, each point is a translation. On the second line, each point is a reflection (which can actually be realized as a translation combined with the reflection through 0). The fact that we have two ways to take 0 to, say, $t$ mirrors the fact that the group looks like two copies of the line.

So here's a key idea: probably you would agree that the line is a natural enough geometric object. We need lines to get notions of distance and angle going in geometry! On the other hand, this group which is two copies of the line seems a bit funny and abstract. But it is still intimately attached to the line by keeping track of its symmetries. So what if there were a way to have both? That is, to have just the line in hand as an object, but manifest in a way that also keeps track of the symmetry. This is what *homogeneous spaces* are for.

To explain how we get a homogeneous space here, I need to try to explain something in group theory called the *orbit-stabilizer theorem*. Now, I don't really know how to do this without either bringing in a little notation or being extremely verbose (and likely confusing). But there is power in notation, so let's try that. We've observed that all reflections can be considered as the reflection through zero together with a translation. Let's call the reflection through zero "flip," and label translations by the value $t$ that is added to every point. Then every element of our group is a pair of the form $(t, \text{flip})$ or $(t, \text{no flip})$ according to whether it is a reflection or a translation.

Notice that the reflection through zero, $(0, \text{flip})$, "fixes" zero, that is, leaves it put, and other than *the identity*, which can be thought of as $(0, \text{no flip})$, it is the only element that does this. In group theory terms, these two elements form a *subgroup* which consists of the elements that fix 0. The orbit-stabilizer theorem says that if we clump these two elements together and think of them as a single object, and then do the same for all of the pairs $(t, \text{flip})$ and $(t, \text{no flip})$, then the resulting collection will be the same as the orbit of the point 0. But this is just the line again! This should make some sense; in our group, we had two copies of the line, and all we've done is glue those copies together at points that line up.
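To make all this concrete, here is a small sketch in Python (the names `act` and `compose` are my own, not standard notation), modeling a group element $(t, \text{flip})$ as acting on the line by $x \mapsto t + (-1)^{\text{flip}} x$:

```python
from fractions import Fraction

def act(g, x):
    """Apply the symmetry g = (t, flip) to the point x on the line."""
    t, flip = g
    return t - x if flip else t + x

def compose(g1, g2):
    """The single element acting as 'first g2, then g1'."""
    t1, f1 = g1
    t2, f2 = g2
    # t1 + (-1)^f1 (t2 + (-1)^f2 x) = (t1 + (-1)^f1 t2) + (-1)^(f1 XOR f2) x
    return (t1 - t2 if f1 else t1 + t2, f1 ^ f2)

identity = (0, False)

# The orbit of 0 is everything: (t, flip) sends 0 to t, for either flip value.
assert all(act((t, f), 0) == t for t in range(-5, 6) for f in (False, True))

# The stabilizer of 0 is exactly the two-element subgroup {identity, (0, flip)}.
stab = [g for t in range(-5, 6) for f in (False, True)
        if act(g := (t, f), 0) == 0]
assert stab == [identity, (0, True)]

# Composition really is "one symmetry after another" on points of the line.
for g1 in [(2, True), (-1, False)]:
    for g2 in [(3, False), (5, True)]:
        for x in [Fraction(-7, 2), 0, 4]:
            assert act(compose(g1, g2), x) == act(g1, act(g2, x))
```

The "clumping" in the orbit-stabilizer theorem is visible in the code: the two elements $(t, \text{flip})$ and $(t, \text{no flip})$ both send 0 to the same point $t$, and the two elements fixing 0 form the stabilizer subgroup.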

This "clumping" of objects according to a subgroup is called taking a *quotient* in group theory, and a homogeneous space is really just a quotient. It's useful because the coordinates on the line now come from the coordinates on the group, which facilitates studying how the symmetries act, especially in more complicated examples. A lot of my work is studying how these quotient spaces break up and fit together in pieces when you restrict your symmetry set. We'll try to come back to this in the next post.

*Response to a prompt from a non-mathematician friend.*

What is symmetry? Usually we think of symmetry as how something looks — it's the same on both sides of an axis, or you can fold an image along a crease but you only need one side to know how to complete the picture. You achieve this by reflecting the half you see across the axis.

The word "symmetry" in the vernacular usually refers to this kind of reflective symmetry. In its origin, the word means something like "agreement in measure or form." But mathematicians have deranged this word to turn it from a property something *has* into something you can *do*. This means instead of just observing the symmetry across an axis, we think of the symmetry as actually making the reflection happen. In the end, the resulting image is the same of course, but thinking of symmetries as actions is an important and useful step.

Now that symmetry is made into something kinetic, we can reconsider the forms symmetry can take. What is the fundamental result of applying a symmetry? Well, it's really what we said already: ultimately the image or object to which the symmetry is applied appears the same. Taking this as our defining property allows us to include a few other kinds of symmetry we haven't yet considered. In addition to *reflective* symmetry, there is also *rotational* symmetry — like taking a wheel and spinning it on its axle — and *translational* symmetry.

Translational symmetry takes us immediately into the infinite. It is what it sounds like: you pick something up, slide it over, plunk it back down, and then it appears exactly the same as it did before. But I don't mean it looks like the same object just in a different place. Rather, the whole montage looks exactly the same! Like if you had two photos of the scene where this happened side by side, a "before" and an "after," they would look exactly the same. The thing is that no "finite" or "bounded" objects (as we are accustomed to) can have this symmetry. This is because bounded objects have extremities — a northernmost point, a southeasternmost point, etc. If you take one of these and then drag it further in that direction, you'll always be cutting a new path, so the extremity can't wind up back in a position occupied by the original object. Necessarily, the picture will look different.

But if you take something that extends infinitely — say an abstracted line, the continuum, our model of the real numbers — all of a sudden every point has somewhere back on the line to go when you slide. Here's another funny thing that begins to emerge: the set of symmetries begins to look like the object it's acting on. Considering the real line, every real number $t$ corresponds to a symmetry of translation. That is, for a fixed real number $t$, we can send each number $x$ in the line to $x + t$ and have the whole line slide over (to the left or the right depending on whether $t$ is negative or positive) and land back on top of itself. So the line has a line's worth of symmetries! Does it have any others?

Well, again we have reflection. Pick a point in the line, and reflect the line about it. It’s not hard to see that everything goes back to somewhere else on the line. Moreover, points that were close together end up the same distance apart after applying the reflection. Symmetries that satisfy this property are called *isometries*, as they preserve the intrinsic geometry of the line.

So do we have another line's worth of reflections to add to our list of symmetries? Well, you could say that, if you want to think of these symmetries in isolation, as objects of a kind that are unable to interact with each other. But that would run counter to the notion we are developing of symmetries not as things that are but as things that *do*! A symmetry *acts on* the line, and once it has acted, another symmetry can act on the result, and so on, kicking the points on the line this way and that.

To make the next point, let's consider an example. Take the reflection through the number 1. Under this reflection, 1 stays put and 0 goes to 2. Maybe you can convince yourself that this is sufficient information to completely determine this symmetry. But here's another way to do the same thing: reflect through 0, and then slide everything 2 to the right. With a bit more thought, you can probably convince yourself that the reflection through any point can be obtained similarly by a reflection through 0 followed by a slide, or a slide and then a reflection through 0. What we say is that the reflections and translations altogether are *generated* by just the reflection through 0 together with the translations.
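As a quick sanity check (a throwaway sketch, with function names of my own choosing), one can model these symmetries as functions on the rationals and confirm that reflecting through a point $p$ is the same as reflecting through 0 and then sliding by $2p$:

```python
from fractions import Fraction

def reflect_through(p):
    """Reflection of the line through the point p: x -> 2p - x."""
    return lambda x: 2 * p - x

def translate(t):
    """Slide the whole line by t: x -> x + t."""
    return lambda x: x + t

reflect0 = reflect_through(0)

# Reflection through 1 fixes 1 and sends 0 to 2, as in the example.
assert reflect_through(1)(1) == 1 and reflect_through(1)(0) == 2

# Reflection through p = (reflect through 0, then slide 2p to the right),
# checked on a handful of sample points.
samples = [Fraction(n, 3) for n in range(-9, 10)]
for p in [1, -2, Fraction(5, 2)]:
    assert all(reflect_through(p)(x) == translate(2 * p)(reflect0(x))
               for x in samples)
```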

Pandemic life has found me, like many, drinking much more coffee made at home, and also exploring new hobbies and ventures. One of these new ventures is experimenting with mushroom cultivation. There are myriad techniques at all levels of sophistication for the many varieties of edible and medicinal mushrooms, but I was instantly excited when I learned that one can grow oyster mushrooms on used coffee grounds with virtually no technical set-up.

I mean for real, cheap coffee that turns into free food?! This is like a math graduate student’s dream.

So I saved my spent coffee grounds in the freezer for a few weeks, got some oyster mushroom spawn, then inoculated the grounds in a sanitized plastic container and waited. Among cultivated mushrooms, oysters are known for being particularly vigorous and adaptable, growing on anything from logs to old clothes to coffee and tea. The rapidity with which they colonized the grounds was astounding; already the next day the mass was glowing with patches of white mycelium waking up and stretching its legs. Within a week, a fluffy cushion of the stuff coated the whole surface. A few more weeks go by (maintaining high moisture in a cool, dark place) and voila!

One day the fruiting bodies start to “pin” and then they double in size for a few more days until they are ready to harvest. The next night, mushroom fried rice for dinner. : D

As the myco-aware are eager to tell you, this is a large untapped resource that society in general and the math community in particular have been sleeping on. In our department, another graduate student used to collect used grounds from the coffee machine for compost, but with very little effort every department could put its spent theorem-precursor to good use by setting up its own mushroom growing operation (myceliated substrate makes great compost after the fact).

Growing food is a radical community-building and liberatory act. The origin of the word university is the combination of *uni* (one) and *versus* (turned), "turned into one": basically, *community*. Subsidizing graduate student grocery budgets is one thing, but whenever we get back to being able to share pots of coffee, this would be a great project for pushing universities to be a little more like communities and less like bureaucracies.

I basically followed the instructions here, and I recommend the source book (in the link) to anyone interested in learning more.

We, the undersigned members of the Mathematics Department and Tulane University community, call upon the senior university administration, President Mike Fitts, and his appointed Naming Review Task Force to undertake the renaming of

(1) Gibson Hall,

(2) the online university platform called Gibson, and

(3) any other monuments to Randall Lee Gibson within the university’s purview.

We make this petition with the facts in mind that Randall Lee Gibson

(1) was a slaver and sugar plantation owner in Terrebonne Parish,^{1}

(2) adhered to hardline beliefs in racial inequality, authored and published pro-slavery essays, ran for political office as a secessionist before the Civil War to preserve slavery in an independent South,^{1}

(3) enlisted as a Confederate soldier upon outbreak of the Civil War, eventually rising to the rank of Brigadier General,^{3}

(4) helped to restore former Confederates to political power in the backlash to Reconstruction and benefited from violent white terror campaigns meant to suppress black voters,^{4} and

(5) convinced Paul Tulane to “confine his bequest [to the university] to white persons.”^{4}

We support the efforts of the Naming Review Task Force and President Fitts' message that "racism has no place" on our campus, and so we insist upon the removal of all monuments to individuals aligned with slavery, racial segregation, and other forms of oppression. **It is unacceptable that Tulane University continues to honor the name of a person who profited from and fought to protect chattel slavery.**

Our purpose is not to deny history, but rather to recognize it and connect its meaning to our present so that we may move beyond the moral deficiencies of our forebears. Gibson Hall was named in honor of Randall Lee Gibson’s role as the first president of the Administrators of the Tulane Educational Fund, in which he oversaw the transformation of the public University of Louisiana into the private, exclusively white Tulane University of Louisiana. This conversion was made with explicit racialized intent through Paul Tulane’s act of donation.^{2}

The university is much different now than it was in Gibson's time, but its entanglement with white supremacy remains. The removal of monuments to oppressors is essential to our university's project to become a more inclusive and equitable institution. With this aim in mind, **we assert the necessity of renaming Gibson Hall.**

Sincerely,

[names]

[1] Allardice, Bruce S., and Lawrence Lee Hewitt, eds. *Kentuckians in Gray: Confederate Generals and Field Officers of the Bluegrass State*. University Press of Kentucky, 2015.

[2] Dyer, John Percy. *Tulane: The biography of a university, 1834-1965*. Harper & Row, 1966.

[3] United States Congress. *Memorial Addresses on the Life and Character of Randall Lee Gibson (a Senator from Louisiana): Delivered in the Senate and House of Representatives, March 1, 1893, and April 21, 1894*. Washington: Govt. Print. Off., 1894.

[4] Sharfstein, Daniel J. *The invisible line: A secret history of race in America*. Penguin, 2011.

For those who may find it useful, I made the puzzle from scratch in LaTeX with TikZ. There are actually dedicated LaTeX packages for making crosswords, but I was determined to homebrew.

Here’s the post!

**Theorem (Rabinowitsch, 1913):** The ring of integers of the field $\mathbb{Q}(\sqrt{1-4m})$ is a unique factorization domain (UFD) if and only if $f(x)$ is prime for all $x = 0, 1, \ldots, m-2$, where $f(x) = x^2 + x + m$.

The paper was published in German in Crelle’s journal over a century ago, and was somewhat hard to find on its own. We could not locate any other source where the content has been rewritten since then, so we translated and typeset the article (with the aid of various online translation tools). To preserve the spirit and style of the writing, some outdated and perhaps idiosyncratic (to author or translator, as the case may be) jargon has been allowed to survive. These designations are hopefully all made clear within the article so it is readable. Some notation has been modified for clarity.

Perhaps one day this material can be condensed and fully modernised. The article consists mostly of pleasant and clean elementary number theory, situated near the headwaters of one of the important achievements of 20th century number theory. It seems worthy of further propagation, so we post here our translation (no warranty included).
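The most famous instance of the criterion is $m = 41$ (where $1 - 4m = -163$), with Euler's prime-generating polynomial $x^2 + x + 41$. A quick sketch in Python of the check (the `is_prime` helper is just trial division, nothing standard assumed):

```python
def is_prime(n):
    """Trial-division primality check; fine for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

m = 41
values = [x * x + x + m for x in range(m - 1)]  # x = 0, 1, ..., m - 2
assert all(is_prime(v) for v in values)  # all 40 values are prime

# Just past the stated range the pattern breaks: f(40) = 40*41 + 41 = 41^2.
assert not is_prime(40 * 40 + 40 + m)
```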

McCleary introduces the concept of a differential graded algebra in section 1.3 (Definition 1.6, p. 11). These are algebras (over a field $k$), which tend to be $\mathbb{N}$-graded, and importantly carry with them a map $d$ called a differential which is $k$-linear, shifts the degree of elements (in the grading) up by one, and satisfies a "Leibniz rule:"

$d(a \cdot b) = d(a) \cdot b + (-1)^{\deg a}\, a \cdot d(b)$

for $a, b$ in our algebra $A$. This is a twisted version of what is usually called Leibniz's rule in calculus (which is basically just the product rule), and it coincides with how the differential works in the algebra of differential forms.

This idea is easily extended to the notion of a differential *bigraded* algebra $(A^{*,*}, d)$, where now the elements are graded by pairs $(p, q)$ (for the time being $p, q \geq 0$; later we'll have $p, q \in \mathbb{Z}$), but $d$ remains a total-degree 1 mapping. That is,

$d \colon A^{p,q} \longrightarrow \bigoplus_{s+t\,=\,p+q+1} A^{s,t},$

and $d$ still satisfies the Leibniz rule

$d(a \cdot b) = d(a) \cdot b + (-1)^{p+q}\, a \cdot d(b) \qquad (1)$

where $a \in A^{p,q}$.

A standard construction is to form a bigraded algebra by tensoring two graded algebras together. This would work with just component-wise multiplication, but to get a working differential that satisfies our version of the Leibniz rule 1 as well, we introduce an extra sign. We mean: supposing $(A^*, d_A)$ and $(B^*, d_B)$ are differential graded algebras, then we can assign $(A \otimes B)^{p,q} = A^p \otimes B^q$, and furthermore

$(a \otimes b) \cdot (a' \otimes b') = (-1)^{\deg b \cdot \deg a'}\, (a \cdot a') \otimes (b \cdot b'). \qquad (2)$

Then if we define a differential on $A \otimes B$ by

$d(a \otimes b) = d_A(a) \otimes b + (-1)^{\deg a}\, a \otimes d_B(b), \qquad (3)$

then $d$ satisfies the Leibniz rule 1. It is clarifying to check this, so we'll record it here. Switching notation a bit, we will write $|a|$ instead of $\deg a$. To satisfy 1 we need

$d\big((a \otimes b)(a' \otimes b')\big) = d(a \otimes b)(a' \otimes b') + (-1)^{|a|+|b|}\,(a \otimes b)\, d(a' \otimes b');$

we then apply 3 to the individual terms on the right side above to get

$\big(d_A(a) \otimes b + (-1)^{|a|}\, a \otimes d_B(b)\big)(a' \otimes b') + (-1)^{|a|+|b|}(a \otimes b)\big(d_A(a') \otimes b' + (-1)^{|a'|}\, a' \otimes d_B(b')\big).$

Now applying the multiplication rule 2 and distributing, we find

$(-1)^{|b||a'|}\, d_A(a)a' \otimes bb' + (-1)^{|a| + (|b|+1)|a'|}\, aa' \otimes d_B(b)b' + (-1)^{|a|+|b| + |b|(|a'|+1)}\, a\,d_A(a') \otimes bb' + (-1)^{|a|+|b|+|a'| + |b||a'|}\, aa' \otimes b\,d_B(b'). \qquad (4)$

To check the rule holds, we perform this computation by instead multiplying first and then applying the differential. That calculation looks like

$d\big((-1)^{|b||a'|}\, aa' \otimes bb'\big) = (-1)^{|b||a'|}\big(d_A(aa') \otimes bb' + (-1)^{|a|+|a'|}\, aa' \otimes d_B(bb')\big)$
$= (-1)^{|b||a'|}\, d_A(a)a' \otimes bb' + (-1)^{|b||a'| + |a|}\, a\,d_A(a') \otimes bb' + (-1)^{|b||a'| + |a|+|a'|}\, aa' \otimes d_B(b)b' + (-1)^{|b||a'| + |a|+|a'|+|b|}\, aa' \otimes b\,d_B(b').$

Finally, remarking that $(-1)^{(|b|+1)|a'|} = (-1)^{|b||a'| + |a'|}$ and $(-1)^{|b|(|a'|+1)} = (-1)^{|b||a'| + |b|}$ shows that the terms of the last line above match with those of 4, so everything checks out and $(A \otimes B, d)$ becomes a *differential bigraded algebra*.
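Since it is easy to drop a sign in bookkeeping like this, here is a throwaway Python check (entirely my own accounting, not from the book) that the sign exponents produced by the two orders of operations agree mod 2 for every combination of parities of $|a|, |b|, |a'|, |b'|$:

```python
from itertools import product

# Only the parities of |a|, |b|, |a'|, |b'| matter for the signs.
for a, b, a2, b2 in product((0, 1), repeat=4):
    # Differentiate-then-multiply: exponents of the four terms, in the order
    # d(a)a'⊗bb',  aa'⊗d(b)b',  a d(a')⊗bb',  aa'⊗b d(b').
    way1 = [b * a2,
            a + (b + 1) * a2,
            a + b + b * (a2 + 1),
            a + b + a2 + b * a2]
    # Multiply-then-differentiate: exponents of the same four terms,
    # listed in the same order as way1.
    way2 = [b * a2,
            b * a2 + a + a2,
            b * a2 + a,
            b * a2 + a + a2 + b]
    assert [e % 2 for e in way1] == [e % 2 for e in way2]
```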

Given the length and detail of section 1.3, we surprisingly find no glaring errors in this section, but the use of the differential becomes somewhat muddled in the calculations of section 1.4. Again, perhaps as an undesirable side effect of the fact that we remain at the "informal stage," it's always difficult to keep track of which assumptions we're working with in each example. Case in point: example 1.H, p. 20. The paragraph preceding Definition 1.11 seems to indicate that all graded algebras are assumed to be graded commutative — at least for the rest of the section, one guesses, though the language is vague. Let's try this here with a bit more force.

**Assumption: All graded algebras are graded commutative for the rest of the post.** This is to say, for all $a, b$ in any $A^*$, we have $a \cdot b = (-1)^{\deg a \cdot \deg b}\, b \cdot a$. Now let's have a look at the example. We suppose a spectral sequence of algebras with $E_2^{*,*} \cong E_2^{*,0} \otimes E_2^{0,*}$, converging to the graded algebra $H^*$ which is $k$ in degree 0 and $\{0\}$ in all others. The example asserts that if $E_2^{*,0}$ is a graded commutative polynomial algebra in one generator/variable, then $E_2^{0,*}$ is a graded commutative exterior algebra in one generator, and vice versa.

The first confusion appears in a restatement of the Leibniz rule near the bottom of page 20, except this time there are tensors involved. This appears to be a mixed use/abuse of notation, which was slightly different in the first edition of the book, but not more consistent. The idea is as follows. $A^*$ and $B^*$ embed into $A \otimes B$ under the maps $a \mapsto a \otimes 1$ and $b \mapsto 1 \otimes b$. Then one can also write an element $x \otimes y$ (mind the inexplicable inconsistent choice of letters) as

$x \otimes y = (x \otimes 1)(1 \otimes y) \qquad (5)$

since the degree of 1 is zero in each graded algebra. Note that this also allows us to regard $A \otimes B$ as graded commutative with the tensor product as multiplication between pure $A$ and pure $B$ elements, writing

$(1 \otimes y)(x \otimes 1) = (-1)^{\deg x \cdot \deg y}\, x \otimes y.$

One can apply the Leibniz rule to the product in 5, so that if $A \otimes B$ comes with a differential $d$, we get

$d(x \otimes y) = d(x \otimes 1)(1 \otimes y) + (-1)^{\deg x}(x \otimes 1)\, d(1 \otimes y).$

The thing is, we really need not write the tensor product $x \otimes y$; it is just as correct to write $xy$ on its own, as we often do with polynomial algebras and so on. Then the above can be written instead as

$d(xy) = d(x)y + (-1)^{\deg x}\, x\, d(y),$

as McCleary does near the bottom of page 20. What makes this confusing is that up to this point we had only seen differentials acting on tensors via the bigraded differential obtained by tensoring two differential graded algebras together, seen above. In that context, the differential of the bigraded algebra must act on an element of the algebra coming from $A \otimes B$; it cannot act on just one side of the tensor. What's different here is that *the tensor product is actually the multiplication operation* on each page of the spectral sequence. Thus, the restatement of the familiar rule with new notation.

Nevertheless, the next equality is also a bit confounding at first, partly because McCleary goes back to writing the extra $1$ in the tensor, suggesting that we need to pay attention to its effect. He says that if $y \in E_r^{0,*}$, then

$d\big((1 \otimes y)^n\big) = n\,(1 \otimes y)^{n-1}\, d(1 \otimes y) \qquad (6)$

which looks sort of reasonable, as it resembles something like a chain rule, $(y^n)' = n y^{n-1} y'$. It is presented as if it should follow immediately from the Leibniz rule stated before. But this seems weird when the degree of $y$ is odd. To be totally transparent about this, let's illustrate the case where $n = 2$, suppressing the subscript on the differential again, but maintaining the tensorial notation.

$d\big((1 \otimes y)^2\big) = d(1 \otimes y)(1 \otimes y) + (-1)^{\deg y}(1 \otimes y)\, d(1 \otimes y) = (-1)^{\deg y (\deg y + 1)}(1 \otimes y)\, d(1 \otimes y) + (-1)^{\deg y}(1 \otimes y)\, d(1 \otimes y) = \big(1 + (-1)^{\deg y}\big)(1 \otimes y)\, d(1 \otimes y),$

where the last line follows since $d(1 \otimes y)$ has total degree $\deg y + 1$, so the sign inside the sum there has exponent $\deg y(\deg y + 1)$, which is even. We see that if $y$ has odd degree, then these terms cancel and we get 0. So you say "wait a minute, that's not right, I want my chain-rule-looking thing" until you eventually realize that if $y$ has odd degree, since it's sitting in a graded commutative algebra, $y^2$ is actually zero! And the same goes for all higher powers of $y$. Then $d(y^2) = 0$ makes complete sense. Meanwhile, if $y$ has even degree, the terms will pile up with positive sign and we get the chain-rule-looking thing that was claimed. So the statement 6 is in fact true, though it really breaks down into two distinct cases.
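A tiny Python check of the sign bookkeeping above (my own throwaway script): the exponent $\deg y\,(\deg y + 1)$ is always even, and the resulting coefficient $1 + (-1)^{\deg y}$ vanishes exactly in odd degree:

```python
for deg_y in range(10):
    # Sign exponent from commuting d(1⊗y) past (1⊗y): deg(dy) * deg(y).
    exponent = (deg_y + 1) * deg_y
    assert exponent % 2 == 0          # a product of consecutive integers is even
    coefficient = 1 + (-1) ** deg_y   # coefficient of (1⊗y)·d(1⊗y)
    assert coefficient == (0 if deg_y % 2 == 1 else 2)
```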

Going forward in the example, McCleary only really seems to use the chain rule (liberally mixing in the described sort of abuse of notation) on terms of even degree, so it’s tempting to think that it only applies there, but it is sort of “vacuously true” in odd degree as well. Oh well. Onwards.

The next thing to address in McCleary is an apparent mistake on p. 9 of section 1.2. Here we again assume a first quadrant spectral sequence converging to a graded vector space $H^*$. This is mentioned at the beginning of the section, but it's easy to forget when a bold-faced and titled example (1.D) seems to be presenting a reset of assumptions rather than building upon prior discussion. Furthermore, in this example, McCleary seems to be working again with the assumption from example 1.A that $F^pH^n = \{0\}$ for $p > n$. On the other hand, this can be seen as a consequence of the fact that our spectral sequence is limited to the first quadrant, provided the filtration is finite in the sense of Weibel's *Homological Algebra*, p. 123 ($F^sH = \{0\}$ for some $s$). But then it would be unclear why McCleary took this as an additional assumption rather than as a consequence of prior assumptions in the first place. : /

The new part of this example is the assumption that $E_2^{p,q} = 0$ unless $q = 0$ or $q = n$, so all terms of the spectral sequence are to be found in just two horizontal stripes. In particular, $E_\infty^{p,q}$ is only possibly non-zero in these stripes, and since these terms correspond to filtration quotients, the filtration takes a special form.

First, we might look at the filtration on $H^k$ where $k < n$. Note that the spectral sequence terms that give information about $H^k$ are those along the diagonal line where $p + q = k$. Since $k < n$, the only place where anything interesting might happen is when this line crosses the $p$-axis, i.e. when $q = 0$. This forces $p = k$, so the only possible nonzero filtration quotient is

$F^kH^k/F^{k+1}H^k = E_\infty^{k,0},$

working with the assumption that $F^{k+1}H^k = \{0\}$. So on the one hand, we get no interesting filtration of $H^k$ for $k < n$, but on the other hand we can see exactly what it is from the spectral sequence limit: $H^k \cong E_\infty^{k,0}$.

Now we treat the case of $H^k$, where $k \geq n$. I find this awkward notation again, preferring to reserve $n$ for a pure arbitrary spectral sequence index, but since we are trying to address the mistake in this notation, we should keep it for now. The filtration of this vector space/cohomology is interesting when $p = k$ and $p = k - n$, where the quotients are given by

$F^kH^k/F^{k+1}H^k = E_\infty^{k,0} \quad\text{and}\quad F^{k-n}H^k/F^{k-n+1}H^k = E_\infty^{k-n,n}.$

Everywhere else, successive quotients are 0, meaning the filtration looks like

$H^k = F^0H^k = \cdots = F^{k-n}H^k \supseteq F^{k-n+1}H^k = \cdots = F^kH^k \supseteq F^{k+1}H^k = \{0\}.$

In the filtration on page 9, McCleary puts one of the (possibly) non-trivial quotients one filtration index away from where it should be. That's all I'm saying.

This situation is modeled on a spectral sequence for sphere bundles, i.e. bundles where the fibers are spheres of a given dimension. The stripes coincide with the fact that a sphere $S^n$ has nontrivial cohomology only in degrees 0 and $n$. This sort of computation is famous enough that it has a name: the Thom-Gysin sequence (or just Gysin sequence).

As a final remark on section 1.2, McCleary says that the sequence in example 1.C is the Gysin sequence. Example 1.C doesn't exist; he means example 1.D : )


The first comment worth making is regarding some confusing notation, largely an overuse of the letter $E$. The first use comes on p. 4 (Section 1.1), given a graded vector space $H^*$ and a filtration $F^*$ of $H^*$, by defining

$E_0^{p}(H^*) = F^pH^*/F^{p+1}H^*.$

Here, the symbol $E_0$ seems to designate an endofunctor on (graded) vector spaces — it eats one and gives back another; transporting morphisms through the filtration and quotient shouldn't be a problem either. It isn't really clear what the subscript 0 is supposed to indicate at this point, but the reader sits tight expecting the truth to be revealed.

However, on the very same page, McCleary twists things by making the assignment

$E_0^{p,q} = F^pH^{p+q}/F^{p+1}H^{p+q}, \qquad (1)$

where $F^pH^{p+q} = F^pH^* \cap H^{p+q}$, the degree-$(p+q)$ graded piece of the filtration. Now, with the extra index $q$, $E_0^{p,q}$ is a vector space on its own. The notation doesn't indicate reference to $H^*$, though in this case it really depends on $H^*$. For instance, McCleary indicates that we should write something like

$E_0^{p,q}(H^*, F) = F^pH^{p+q}/F^{p+1}H^{p+q}.$

The definition immediately afterwards (Definition 1.1) indicates that $E_r^{p,q}$ is to be used to designate a vector space in a spectral sequence, defined irrespective of any $H^*$, for all $r \geq 1$. The typical way to relate a spectral sequence $\{E_r^{p,q}\}$ to a graded vector space $H^*$ is the situation of convergence (Definition 1.2, p. 5), where instead

$E_\infty^{p,q} \cong E_0^{p,q}(H^*, F).$

The right hand side above has nothing to do with the spectral sequence (since we take $r = 0$ in our definition); it is just an instance of the definition from equation 1… but with a distinct use of notation… oh. So on the one hand, $E_\infty^{p,q}$ should be a standalone vector space, like the other $E_r^{p,q}$'s, but also it needs to come from an $H^*$, so one should really write $E_0^{p,q}(H^*, F)$ as in Definition 1.2. Wha? Shoot. Couldn't we have used, like, a different letter instead or something?

Perhaps there is good reasoning for all of this to be discovered once we get further in. Also, it seems so far that initial terms are usually $E_2$. Why not $E_1$? And why don't we allow $E_0$- and $E_1$-pages? In these cases the differentials would be vertical and horizontal (resp.) instead of diagonal, which feels less interesting somehow, though this doesn't seem like it would be totally frivolous… TBD.

Finishing out the first section, we address what seems to be a typo in example 1.A (p. 6). McCleary’s expository style consists of many statements which are not obvious, though usually not difficult to work out. This is perhaps for the best, as the community seems to indicate that the only real way to learn spectral sequences (make that: all math?) is by working them out. Nevertheless, it is a bit discouraging to find yourself at odds with the author at the first example…

We have assumed a first quadrant spectral sequence with initial term $E_2^{*,*}$ converging to $H^*$, with a filtration satisfying $F^pH^n = \{0\}$ for all $p > n$. Then we have a filtration on $H^1$ in particular, given by

$H^1 = F^0H^1 \supseteq F^1H^1 \supseteq F^2H^1 = \{0\},$

since, by the assumption, $F^2H^1 = \{0\}$, etc., and $F^0H^1 = H^1$ by definition. By convergence, then,

$F^1H^1 = F^1H^1/F^2H^1 \cong E_\infty^{1,0},$

so $E_\infty^{1,0}$ is a submodule of $H^1$. But also, because $(1,0)$ lies on the $p$-axis (depicted as what is usually the $x$-axis) and our spectral sequence has only first quadrant terms, $d_r$ on $E_r^{1,0}$ must be the zero map for all $r$. Furthermore, $(1,0)$ is too close to the $q$-axis to get hit by any differential $d_r$, thus $E_2^{1,0}$ survives as the kernel of every $d_r$, mod the image of a $d_r$ from a zero vector space in the second quadrant. This is all to say

$E_\infty^{1,0} = E_2^{1,0}.$

We then have part of the short exact sequence McCleary gives for $H^1$ as

$0 \to E_2^{1,0} \to H^1 \to H^1/F^1H^1 \to 0.$

How can we describe the third term using the spectral sequence? Well, from our definitions, $H^1/F^1H^1 = F^0H^1/F^1H^1 = E_\infty^{0,1}$. The book seems to be indicating that $E_\infty^{0,1} = E_2^{0,1}$, but **this is not necessarily the case!** It also doesn't make sense with how the short exact sequences are spliced later on.

Let's address the first claim first. Because $(0,1)$ lies on the $q$-axis, and the differentials point "southeast" towards the empty fourth quadrant, $d_r$ on $E_r^{0,1}$ is the zero map for any $r \geq 3$, but it can't be hit by anything, so we have now

$E_\infty^{0,1} = E_3^{0,1} = \ker\!\big(d_2 \colon E_2^{0,1} \to E_2^{2,0}\big) \,/\, \operatorname{im}\!\big(d_2 \colon E_2^{-2,2} \to E_2^{0,1}\big).$

The denominator is the image of a map from a zero vector space, so it is zero, and thus $E_\infty^{0,1} = \ker(d_2)$ is a subspace of $E_2^{0,1}$, but this latter space can be larger! This is all to say, the short exact sequence for $H^1$ is misprinted, and should go

$0 \to E_2^{1,0} \to H^1 \to E_3^{0,1} \to 0. \qquad (2)$

One can confirm this by examining the exact sequence given just below, where we see $E_3^{0,1}$ injecting into $E_2^{0,1}$:

$0 \to E_3^{0,1} \to E_2^{0,1} \xrightarrow{\;d_2\;} E_2^{2,0} \to E_3^{2,0} \to 0. \qquad (3)$

This is a standard decomposition of the map in the middle: for any morphism $f \colon A \to B$ (in an abelian category at least, we suppose) there is an exact sequence

$0 \to \ker f \to A \xrightarrow{\;f\;} B \to \operatorname{coker} f \to 0.$

It remains to see that $E_3^{2,0} = E_\infty^{2,0}$. Because of where $(2,0)$ sits on the $p$-axis, $E_r^{2,0}$ is again the kernel of $d_r$ for all $r$. Further, it can only possibly be hit by $d_2$, so $E_3^{2,0}$ in fact survives through all further terms to give the desired equality

$E_3^{2,0} = E_\infty^{2,0} = F^2H^2/F^3H^2 = F^2H^2 \subseteq H^2.$

To splice all this together, we recall that given exact sequences

$0 \to A \to B \xrightarrow{\;g\;} C \to 0 \quad\text{and}\quad 0 \to C \xrightarrow{\;h\;} D \to E,$

we can connect them as

$0 \to A \to B \xrightarrow{\;h \circ g\;} D \to E,$

where exactness is maintained since $\ker(h \circ g) = \ker g$ (as $h$ is injective) and $\operatorname{im}(h \circ g) = \operatorname{im} h$ (as $g$ is surjective).

Performing this surgery on sequences 2 and 3 yields the main exact sequence claimed by the example, namely

$0 \to E_2^{1,0} \to H^1 \to E_2^{0,1} \xrightarrow{\;d_2\;} E_2^{2,0} \to H^2. \qquad (4)$

Stay tuned for more clarifications from Chapter 1.
