Mushrooms into corollaries

OK this is gonna be a quick oddball post, but I’m so excited about this that I can’t not tell the world.

Pandemic life has found me, like many, drinking much more coffee made at home, and also exploring new hobbies and ventures. One of these new ventures is experimenting with mushroom cultivation. There are myriad techniques at all levels of sophistication for the many varieties of edible and medicinal mushrooms, but I was instantly excited when I learned that one can grow oyster mushrooms on used coffee grounds with virtually no technical set-up.

I mean for real, cheap coffee that turns into free food?! This is like a math graduate student’s dream.

Colonizing mycelium

So I saved my spent coffee grounds in the freezer for a few weeks, got some oyster mushroom spawn, then inoculated the grounds in a sanitized plastic container and waited. Among cultivated mushrooms, oysters are known for being particularly vigorous and adaptable, growing on anything from logs to old clothes to coffee and tea. The rapidity with which they colonized the grounds was astounding; already the next day the mass was glowing with patches of white mycelium waking up and stretching its legs. Within a week, a fluffy cushion of the stuff coated the whole surface. A few more weeks go by (maintaining high moisture in a cool, dark place) and voila!

One day…

One day the fruiting bodies start to “pin” and then they double in size for a few more days until they are ready to harvest. The next night, mushroom fried rice for dinner. : D

… and the next! Look at this monster!!

As the myco-aware are eager to tell you, this is a large untapped resource that society in general and the math community in particular have been sleeping on. In our department, another graduate student used to collect used grounds from the coffee machine for compost, but with very little effort every department could put its spent theorem-precursor to good use by setting up its own mushroom growing operation (myceliated substrate makes great compost after the fact).

Growing food is a radical community-building and liberatory act. The origin of the word university is the combination of uni (one) and versus (turned), “turned into one”: basically, community. Subsidizing graduate student grocery budgets is one thing, but whenever we get back to being able to share pots of coffee, this would be a great project for pushing on universities to be a little more like communities and less like bureaucracies.

I basically followed the instructions here, and I recommend the source book (in the link) to anyone interested in learning more.

Renaming the math building at T. U. of Louisiana

**The following letter was delivered (Sep. 2020) to the board and senior administrators of the university on behalf of a group of graduate students and faculty in the math department, and other concerned community members.**

We, the undersigned members of the Mathematics Department and Tulane University community, call upon the senior university administration, President Mike Fitts, and his appointed Naming Review Task Force to undertake the renaming of

(1) Gibson Hall,

(2) the online university platform called Gibson, and

(3) any other monuments to Randall Lee Gibson within the university’s purview.

We make this petition with the facts in mind that Randall Lee Gibson

(1) was a slaver and sugar plantation owner in Terrebonne Parish,[1]

(2) adhered to hardline beliefs in racial inequality, authored and published pro-slavery essays, ran for political office as a secessionist before the Civil War to preserve slavery in an independent South,[1]

(3) enlisted as a Confederate soldier upon outbreak of the Civil War, eventually rising to the rank of Brigadier General,[3]

(4) helped to restore former Confederates to political power in the backlash to Reconstruction and benefited from violent white terror campaigns meant to suppress black voters,[4] and

(5) convinced Paul Tulane to “confine his bequest [to the university] to white persons.”[4]

We support the efforts of the Naming Review Task Force and President Fitts’ message that “racism has no place” on our campus, and so we insist upon the removal of all monuments to individuals aligned with slavery, racial segregation, and other forms of oppression. It is unacceptable that Tulane University continues to honor the name of a person who profited from and fought to protect chattel slavery.

Our purpose is not to deny history, but rather to recognize it and connect its meaning to our present so that we may move beyond the moral deficiencies of our forebears. Gibson Hall was named in honor of Randall Lee Gibson’s role as the first president of the Administrators of the Tulane Educational Fund, in which he oversaw the transformation of the public University of Louisiana into the private, exclusively white Tulane University of Louisiana. This conversion was made with explicit racialized intent through Paul Tulane’s act of donation.[2]

The university is much different now than it was in Gibson’s time, but its entanglement with white supremacy remains. The removal of monuments to oppressors is essential to our university’s project to become a more inclusive and equitable institution. With this aim in mind, we assert the necessity of renaming Gibson Hall.

Sincerely,

[names]

[1] Allardice, Bruce S., and Lawrence Lee Hewitt, eds. Kentuckians in Gray: Confederate Generals and Field Officers of the Bluegrass State. University Press of Kentucky, 2015.

[2] Dyer, John Percy. Tulane: The Biography of a University, 1834–1965. Harper & Row, 1966.

[3] United States Congress. Memorial Addresses on the Life and Character of Randall Lee Gibson (a Senator from Louisiana), Delivered in the Senate and House of Representatives, March 1, 1893, and April 21, 1894. Washington: Government Printing Office, 1894.

[4] Sharfstein, Daniel J. The Invisible Line: A Secret History of Race in America. Penguin, 2011.

Rabinowitsch in Translation

One of our projects in number theory led us to thinking about the class number problem, which has a story too long and interesting to recount here. See the survey of Goldfeld from an old AMS Bulletin to get an overview. Briefly, the question is about the degree to which unique factorization holds in quadratic extensions of the rational numbers. In any case, one of the first important results in the program is an old theorem of G. Rabinowitsch (Rabinowitz), which gives a testable criterion for whether the number field \Q(\sqrt{D}), with D a negative integer, possesses unique factorization (in its ring of integers). In modernish language (cf. Theorem 6 in the linked document), and keeping this framework, we have:

Theorem (Rabinowitsch, 1913): The ring of integers of \Q(\sqrt{D}) is a unique factorization domain (UFD) if and only if x^2-x+m is prime for all x\in \{1,2, \dots, m-1\}, where D=1-4m.
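This criterion is easy to play with on a computer. Here is a minimal sketch (our own illustration, not from the paper or its translation), assuming a Python environment with sympy available for primality testing; the helper name rabinowitsch_test is just ours:

    # Rabinowitsch criterion: is x^2 - x + m prime for every x = 1, ..., m-1?
    from sympy import isprime

    def rabinowitsch_test(m):
        # True exactly when x^2 - x + m is prime for all x in {1, ..., m-1}
        return all(isprime(x * x - x + m) for x in range(1, m))

    for m in range(2, 50):
        if rabinowitsch_test(m):
            print(m, 1 - 4 * m)  # print m together with the discriminant D = 1 - 4m

Running this prints m = 2, 3, 5, 11, 17, 41, i.e. D = -7, -11, -19, -43, -67, -163, matching the imaginary quadratic fields of class number one with D\equiv 1 \pmod 4.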

The paper was published in German in Crelle’s journal over a century ago, and was somewhat hard to find on its own. We could not locate any other source where the content has been rewritten since then, so we translated and typeset the article (with the aid of various online translation tools). To preserve the spirit and style of the writing, some outdated and perhaps idiosyncratic (to author or translator, as the case may be) jargon has been allowed to survive. These designations are hopefully all made clear within the article so it is readable. Some notation has been modified for clarity.

Perhaps one day this material can be condensed and fully modernised. The article consists mostly of pleasant and clean elementary number theory, situated near the headwaters of one of the important achievements of 20th century number theory. It seems worthy of further propagation, so we post here our translation (no warranty included).

Rabinowitsch

Spectral Sequences III

Bigraded Algebrae

McCleary introduces the concept of a differential graded algebra in section 1.3 (Definition 1.6, p. 11). These are algebras (over a field k), typically \N-graded, which importantly carry with them a k-linear map d, called a differential, that shifts the degree of elements (in the grading) up by one and satisfies a “Leibniz rule”:

    \[d(a\cdot a')=d(a)\cdot a'+(-1)^{\deg(a)} a\cdot d(a')\]

for a, a' in our algebra A^*. This is a signed version of what is usually called Leibniz’s rule in calculus (which is basically just the product rule), and it coincides with how the differential works in the algebra of differential forms.
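For instance (a standard special case we add here for concreteness, not spelled out by McCleary at this point), in the algebra of differential forms with \alpha and \beta both 1-forms the rule reads

    \[ d(\alpha\wedge\beta)=d\alpha\wedge\beta-\alpha\wedge d\beta, \]

the minus sign being (-1)^{\deg\alpha}=(-1)^1.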

This idea is easily extended to the notion of a differential bigraded algebra (E^{*,*}, d), where now the elements are \N^2-graded (for the time being; later we’ll allow \Z^2), but d remains a mapping of total degree 1. That is,

    \[d: \bigoplus_{p+q=n} E^{p,q} \lra \bigoplus_{r+s=n+1}E^{r,s},\]

and d still satisfies the Leibniz rule

(1)   \begin{equation*} d(e\cdot e')=d(e)\cdot e'+(-1)^{p+q}e\cdot d(e') \end{equation*}

where e\in E^{p,q}.

A standard construction is to form a bigraded algebra by tensoring two graded algebras together. This would work with just component-wise multiplication, but to get a differential that also satisfies our version of the Leibniz rule (1), we introduce an extra sign. That is, supposing (A^*,d) and (B^*, d') are differential graded algebras, we can assign E^{p,q}:=A^p\ox B^q, and furthermore

(2)   \begin{equation*}  (a_1\ox b_1 )\cdot (a_2\ox b_2):= (-1)^{(\deg a_2)(\deg b_1)}a_1a_2\ox b_1 b_2.\end{equation*}

Then if we define a differential d_\ox on E^{*,*} by

(3)   \begin{equation*}  d_\ox(a\ox b)=d(a)\ox b + (-1)^{\deg a} a \ox d'(b), \end{equation*}

then d_\ox satisfies the Leibniz rule (1). It is clarifying to check this, so we’ll record it here. Switching notation a bit, we will write (-1)^{\ab{a}} instead of (-1)^{\deg a}. To satisfy (1) we need

    \begin{align*}d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2))& =  d_\ox (a_1\ox b_1)\cdot (a_2 \ox b_2) + \\ &(-1)^{\ab{a_1}+\ab{b_1}} (a_1\ox b_1) \cdot d_\ox(a_2\ox b_2) \end{align*}

We then apply (3) to the individual terms on the right-hand side above to get

    \begin{align*} d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2)) = [d(a_1)\ox b_1+(-1)^{\ab{a_1}} a_1 \ox d'(b_1)]\cdot (a_2\ox b_2)+\\ (-1)^{\ab{a_1}+\ab{b_1}} (a_1\ox b_1) \cdot [d(a_2)\ox b_2+(-1)^{\ab{a_2}} a_2\ox d'(b_2)]. \end{align*}

Now applying the multiplication rule (2) and distributing, we find

(4)   \begin{align*}  d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2)) &= (-1)^{\ab{a_2}\ab{b_1}}d(a_1)a_2\ox b_1b_2 +(-1)^{\ab{a_1}+\ab{d'(b_1)}\ab{a_2}}a_1a_2\ox d'(b_1)b_2 \\ &\quad + (-1)^{\ab{a_1}+\ab{b_1}+\ab{d(a_2)}\ab{b_1}} a_1 d(a_2)\ox b_1b_2 + (-1)^{\ab{a_1}+\ab{a_2}+\ab{b_1} +\ab{b_1}\ab{a_2}} a_1a_2 \ox b_1d'(b_2). \end{align*}

To check the rule holds, we perform this computation by instead multiplying first and then applying the differential. That calculation looks like

    \begin{align*} d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2)) &= d_\ox((-1)^{\ab{a_2}\ab{b_1}} a_1 a_2 \ox b_1b_2) \\ &=(-1)^{\ab{a_2}\ab{b_1}}[d(a_1a_2)\ox b_1b_2+ (-1)^{\ab{a_1}+\ab{a_2}}a_1a_2\ox d'(b_1b_2)] \\ &=(-1)^{\ab{a_2}\ab{b_1}}[(d(a_1)a_2+(-1)^{\ab{a_1}}a_1d(a_2))\ox b_1b_2 +\\ &(-1)^{\ab{a_1}+\ab{a_2}}a_1a_2\ox(d'(b_1)b_2+(-1)^{\ab{b_1}}b_1d'(b_2))]  \qquad \\ &=(-1)^{\ab{a_2}\ab{b_1}}d(a_1)a_2\ox b_1b_2+(-1)^{\ab{a_1}+\ab{a_2}\ab{b_1}} a_1d(a_2)\ox b_1b_2+ \\ &(-1)^{\ab{a_1}+\ab{a_2}+\ab{a_2}\ab{b_1}} a_1 a_2\ox d'(b_1) b_2 +\qquad\\ & (-1)^{\ab{a_1}+\ab{a_2}+\ab{b_1} +\ab{a_2}\ab{b_1}}a_1a_2\ox b_1d'(b_2). \end{align*}

Finally, remarking that \ab{d'(b_1)}=\ab{b_1}+1 and \ab{d(a_2)}=\ab{a_2}+1, so that the sign exponents agree modulo 2, shows that the terms of the last lines above match those of (4). Everything checks out, and (A^*\ox B^*, d_\ox) becomes a differential bigraded algebra.
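As a quick sanity check of the sign in the multiplication rule (2) (a toy example of our own): take a_1=b_2=1 and let a_2 and b_1 both have degree one. Then

    \[ (1\ox b_1)\cdot (a_2\ox 1)=(-1)^{1\cdot 1}\, a_2\ox b_1 = -\, a_2\ox b_1, \quad\text{while}\quad (a_2\ox 1)\cdot (1\ox b_1)= a_2\ox b_1, \]

so the two degree-one elements anticommute, which is exactly the Koszul sign convention one wants for graded commutativity.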

A Chain Rule

Given the length and detail of section 1.3, we surprisingly find no glaring errors in it, but the use of the differential becomes somewhat muddled in the calculations of section 1.4. Again, perhaps as an undesirable side effect of the fact that we remain at the “informal stage,” it’s always difficult to keep track of which assumptions we’re working with in each example. Case in point: example 1.H, p. 20. The paragraph preceding Definition 1.11 seems to indicate that all graded algebras are assumed to be graded commutative — at least for the rest of the section, one guesses, though the language is vague. Let’s try this here with a bit more force.

Assumption: All graded algebras are graded commutative for the rest of the post. This is to say, for all x,y in any A^*, we have x\cdot y =(-1)^{\ab{x}\ab{y}} y\cdot x. Now let’s have a look at the example. We suppose a spectral sequence of algebras (E_r^{*,*}, d_r) with E_2^{*,*}\cong V^*\ox W^*, converging to the graded algebra H^* which is \Q in degree 0 and \{0\} in all others.  The example asserts that if V^* is a graded commutative polynomial algebra in one generator/variable, then W^* is a graded commutative exterior algebra in one generator, and vice versa.

The first confusion appears in a restatement of the Leibniz rule near the bottom of page 20, except this time there are tensors involved. This appears to be a mixed use/abuse of notation, which was slightly different in the first edition of the book, but not more consistent. The idea is as follows. V^* and W^* embed into V^*\ox W^* under the maps v \mapsto v \ox 1 and w \mapsto 1\ox w.  Then one can also write an element w\ox z \in V^*\ox W^* (mind the inexplicably inconsistent choice of letters) as

(5)   \begin{equation*}  w \ox z = (-1)^0 (w\ox 1)\cdot (1 \ox z)= (w\ox 1) \cdot (1 \ox z) \end{equation*}

since the degree of 1 is zero in each graded algebra. Note that this also allows us to regard V^*\ox W^* as graded commutative with the tensor product as multiplication between pure V^* and pure W^* elements, writing

    \[z\ox w:=(1\ox z) \cdot (w \ox 1)=(-1)^{\ab{z}\ab{w}} (w\ox z). \]

One can apply the Leibniz rule to the product in (5), so that if V^*\ox W^* comes with a differential d, we get

    \[d(w\ox z) = d((w\ox 1) (1\ox z)) =d(w\ox 1)(1\ox z)+(-1)^{\ab{w}}(w\ox 1) d(1\ox z).\]

The thing is, we really need not write the tensor product w\ox 1; it is just as correct to write w on its own, as we often do with polynomial algebras and so on. Then the above can be written instead as

    \[d(w\ox z) = d(w)\ox z +(-1)^{\ab{w}} w \ox d(z) \]

as McCleary does near the bottom of page 20. What makes this confusing is that up to this point we had only seen differentials acting on tensors through the bigraded differential obtained by tensoring two differential graded algebras together, as above. In that context, the differential of the bigraded algebra must act on an element of the algebra coming from V^*\ox W^*; it cannot act on just one side of the tensor. What’s different here is that the tensor product is actually the multiplication operation on each page of the spectral sequence. Thus, the restatement of the familiar rule in new notation.

Nevertheless, the next equality is also a bit confounding at first, partly because McCleary goes back to writing the extra 1 in the tensor, suggesting that we need to pay attention to its effect. He says that if d_i(1\ox u) =\sum_j v_j\ox w_j, then

(6)   \begin{equation*}  d_i(1\ox u^k) = k \lt(\sum_j v_j \ox ( w_ju^{k-1})\rt) \end{equation*}

which looks sort of reasonable as it resembles something like a chain rule, d(u^k)=k u^{k-1} d(u). It is presented as if it should follow immediately from the Leibniz rule stated before. But this seems weird when the degree of u is odd. To be totally transparent about this, let’s illustrate the case where k=2, suppressing the subscript on the differential again, but maintaining the tensorial notation.

    \begin{align*} d(1\ox u^2) & =d((1\ox u)(1\ox u)) \\ & = d(1\ox u) (1\ox u) +(-1)^{\ab{u}} (1\ox u) d (1\ox u) \\ &= \sum_j (v_j \ox w_j)(1\ox u) + (-1)^{\ab{u}} \sum_j (1\ox u) (v_j \ox w_j) \\ &=\sum_j v_j\ox w_j u + (-1)^{\ab{u}} \sum_j (-1)^{\ab{u}\ab{v_j}} v_j \ox u w_j \\ &=\sum_j v_j\ox w_j u + (-1)^{\ab{u}} \sum_j (-1)^{\ab{u}\ab{v_j}+\ab{u} \ab{w_j}} v_j \ox  w_j u \\ &=\sum_j v_j\ox w_j u + (-1)^{\ab{u}} \sum_j  v_j \ox  w_j u \\ \end{align*}

where the last line follows since v_j\ox w_j has total degree \ab{u}+1, so the sign inside the sum there has exponent \ab{u}(\ab{u}+1), which is even. We see that if u has odd degree, then these terms cancel and we get 0. So you say “wait a minute, that’s not right, I want my chain-rule-looking thing,” until you eventually realize that if u has odd degree, since it’s sitting in a graded commutative algebra, u^2 is actually zero! And the same goes for all higher powers of u. Then d(u^2)=d(0)=0 makes complete sense. Meanwhile, if u has even degree, the terms pile up with positive sign and we get the chain-rule-looking thing that was claimed. So the statement (6) is in fact true, though it really breaks down into two distinct cases.
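To spell out the even-degree case (our own completion of the argument, not in McCleary): suppose \ab{u} is even and, inductively, that d(1\ox u^{k-1})=(k-1)\sum_j v_j\ox w_j u^{k-2}. Since \ab{u} is even, every Koszul sign that appears is +1 and u commutes with each w_j, so

    \begin{align*} d(1\ox u^k) &= d(1\ox u)\,(1\ox u^{k-1}) + (1\ox u)\, d(1\ox u^{k-1}) \\ &= \sum_j v_j\ox w_j u^{k-1} + (k-1)\sum_j v_j\ox w_j u^{k-1} = k\sum_j v_j\ox w_j u^{k-1}, \end{align*}

which is exactly the statement (6).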

Going forward in the example, McCleary only really seems to use the chain rule (liberally mixing in the described sort of abuse of notation) on terms of even degree, so it’s tempting to think that it only applies there, but it is sort of “vacuously true” in odd degree as well. Oh well. Onwards.

Spectral Sequences II

Two Stripes

The next thing to address in McCleary is an apparent mistake on p. 9 of section 1.2. Here we again assume a first quadrant spectral sequence converging to a graded vector space H^*. This is mentioned at the beginning of the section, but it’s easy to forget when a bold-faced and titled example (1.D) seems to present a reset of assumptions rather than build upon the prior discussion. Furthermore, in this example McCleary seems to be working again with the assumption from example 1.A that F^{p+k}H^p=0 for k>0. On the other hand, this can be seen as a consequence of the fact that our spectral sequence is limited to the first quadrant, provided the filtration is finite in the sense of Weibel’s Homological Algebra, p. 123 (F^mH^s=0 for some m). But then it would be unclear why McCleary took this as an additional assumption rather than as a consequence of prior assumptions in the first place. : /

The new part of this example is the assumption that E^{p,q}_2=0 unless q=0 or q=n, so all terms of the spectral sequence are to be found just in two horizontal stripes. In particular E_\infty^{p,q} is only possibly non-zero in these stripes, and since these correspond to filtration quotients, the filtration takes a special form.

First, we might look at the filtration on H^s where 0\leq s \leq n-1. Note that the spectral sequence terms that give information about H^s are those along the diagonal line where p+q=s. Since s\leq n-1, the stripe q=n misses this line entirely (it would require p=s-n<0), so the only place where anything interesting might happen is where the line crosses the p-axis, i.e. where q=0. This forces p=s, so the only possible nonzero filtration quotient is

    \[E_\infty^{s,0}=F^sH^s/F^{s+1}H^s=F^sH^s=F^0H^s=H^s \]

working with the assumption that F^{s+1}H^s=0. So on the one hand, we get no interesting filtration of H^s for s<n, but on the other hand we can see exactly what it is from the spectral sequence limit.

Now we treat the case of H^{n+p}, where p\geq 0. I find this awkward notation again, preferring to reserve p for a pure arbitrary spectral sequence index, but since we are trying to address the mistake in this notation, we should keep it for now. The filtration of this vector space/cohomology is interesting when q=n and q=0, where the quotients are given by

    \[E^{p,n}_\infty=F^pH^{p+n}/F^{p+1}H^{p+n}\quad\text{and}\quad E^{p+n,0}_\infty=F^{p+n}H^{p+n}/0.\]

Everywhere else, successive quotients are 0, meaning the filtration looks like…

    \[0\sus F^{n+p}H^{n+p}= \dots =F^{p+1}H^{n+p} \sus F^pH^{n+p}=\dots=F^1H^{n+p}=F^0H^{n+p}=H^{n+p}.\]

In the filtration on page 9, McCleary puts one of the (possibly) non-trivial quotients at F^nH^{n+p} instead of at F^pH^{n+p} where it should be.  That’s all I’m saying.

This situation is modeled on a spectral sequence for sphere bundles, i.e., bundles whose fibers are spheres of a given dimension. The stripes reflect the fact that a sphere \mathbb{S}^n has nontrivial cohomology only at H^n and H^0. This sort of computation is famous enough that it has a name: the Thom-Gysin sequence (or just the Gysin sequence).
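For reference (our paraphrase of the standard statement, not a quotation from McCleary), for an oriented sphere bundle \mathbb{S}^n\hookrightarrow E\to B the two-stripe spectral sequence collapses into a long exact sequence of the shape

    \[ \cdots \lra H^k(E)\lra H^{k-n}(B) \overset{\cup\, e}{\lra} H^{k+1}(B)\lra H^{k+1}(E)\lra \cdots, \]

where e\in H^{n+1}(B) is the Euler class of the bundle.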

As a final remark on section 1.2, McCleary says that the sequence in example 1.C is the Gysin sequence. Example 1.C doesn’t exist; he means example 1.D : )

Spectral Sequences I

A goose chase through homological algebra etc. has led us to start reading McCleary’s A User’s Guide to Spectral Sequences. The book seems like a nice introduction for those who know their way around graduate topology and geometry, but haven’t yet encountered cause to pull out this extra machinery to compute (co)homology. The book was first published by Spivak’s Publish or Perish Press in the 1980s; a second edition was released by Cambridge University Press in 2000. Though it sounds like the second edition is rather improved, there seem to be a number of mistakes remaining which may frustrate those trying to learn a notoriously complicated subject for the first time. Pending an official list of errata, we may as well collect some of them here.

Section 1.1 – Notation

 

The first comment worth making regards some confusing notation, largely an overuse of the letter E. The first use comes on p. 4 (Section 1.1), where, given a graded vector space H^* and a filtration F^* of H^*, one defines

    \[E_0^p(H^*) := F^p(H^*)/F^{p+1}(H^*). \]

Here, the symbol E_0^p seems to designate an endofunctor on (graded) vector spaces — it eats one and gives back another; transporting morphisms through the filtration and quotient shouldn’t be a problem either. It isn’t really clear what the 0 subscript is supposed to indicate at this point, but the reader sits tight expecting the truth to be revealed.

However, on the very same page, McCleary twists things by making the assignment

(1)   \begin{equation*}   E^{p,q}_0 := F^pH^{p+q} / F^{p+1} H^{p+q} , \end{equation*}

where F^pH^r := F^pH^*\cap H^r, the r\ts{th} graded piece of the p\ts{th} filtration. Now, with the extra index q, E^{p,q}_0 is a vector space on its own. The notation doesn’t indicate reference to H^*, though in this case it really depends on H^*. For instance, McCleary indicates that we should write something like

    \[ E^p_0(H^*)=\bigoplus_q E^{p,q}_0.\]

The definition immediately afterwards (Definition 1.1) indicates that E^{p,q}_r is to be used to designate a vector space in a spectral sequence, defined without reference to any H^*, for all r\geq 1. The typical way to relate a spectral sequence \{E^{*,*}_r, d_r\} to a graded vector space H^* is the situation of convergence (Definition 1.2, p. 5), where instead

    \[E_\infty^{p,q} \cong E_0^{p,q}(H^*).\]

The right hand side above has nothing to do with the spectral sequence E^{*,*}_r (since we take r\geq 1 in our definition); it is just an instance of the definition from equation (1)… but with a distinct use of notation… oh. So on the one hand, E^{p,q}_0 should be a standalone vector space, like the other E_r^{p,q}‘s, but also it needs to come from an H^*, so one should really write E_0^{p,q}(H^*) as in Definition 1.2. Wha? Shoot. Couldn’t we have used like an A instead or something?

Perhaps there is good reasoning for all of this to be discovered once we get further in. Also, it seems so far that initial terms are usually E_2^{*,*}. Why not E_1^{*,*}? And why don’t we allow 0-pages? In these cases the differentials would be horizontal and vertical (resp.) instead of diagonal, which feels less interesting somehow, though this doesn’t seem like it would be totally frivolous… TBD.

Splicing Short Exact Sequences

Finishing out the first section, we address what seems to be a typo in example 1.A (p. 6). McCleary’s expository style consists of many statements which are not obvious, though usually not difficult to work out. This is perhaps for the best, as the community seems to indicate that the only real way to learn spectral sequences (make that: all math?) is by working them out. Nevertheless, it is a bit discouraging to find yourself at odds with the author at the first example…

We have assumed a first quadrant spectral sequence with initial term E_2^{*,*} converging to H^* with a filtration satisfying F^{p+k}H^p=\{0\} for all k> 0. Then we have a filtration on H^1 in particular, given by

    \[ \{0\}\sus F^1H^1\sus H^1, \]

since, by the assumption, F^{2}H^1=\{0\} etc., and F^0H^1=H^1 by definition. By convergence, then,

    \[E_\infty^{1,0}=F^1H^1/F^2H^1=F^1H^1,\]

so E_\infty^{1,0} is a subspace of H^1. But also, because E_r^{1,0} lies on the p-axis (depicted as what is usually the x-axis) and our spectral sequence has only first quadrant terms, d_r(E_r^{1,0}) must be the zero map for all r. Furthermore, E_r^{1,0} is too close to the q-axis to get hit by any differential d_r; thus E_2^{1,0} survives as the kernel of every d_r, mod the image of a d_r from a zero vector space in the second quadrant. This is all to say

    \[E_2^{1,0}=E_\infty^{1,0}\cong F^1H^1.\]
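(A reminder we add here for convenience, consistent with the d_2: E_2^{0,1}\to E_2^{2,0} used below: on the r\ts{th} page of a cohomological spectral sequence the differential has bidegree (r, 1-r), that is,

    \[ d_r: E_r^{p,q}\lra E_r^{p+r,\,q-r+1}, \]

so the differentials leaving E_r^{1,0} land at (1+r, 1-r), below the p-axis for r\geq 2, while any differential into E_r^{1,0} would have to start at (1-r, r-1), at negative p; in a first quadrant spectral sequence both of these vanish.)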

We then have part of the short exact sequence McCleary gives for H^1 as

    \[ 0 \lra E_2^{1,0}\lra H^1 \lra H^1/F^1H^1 \lra 0 .\]

How can we describe the third term using the spectral sequence? Well, from our definitions, H^1/F^1H^1=F^0H^1/F^1H^1\cong E_\infty^{0,1}. The book seems to be indicating that E_\infty^{0,1}=E_2^{0,1} but this is not necessarily the case! It also doesn’t make sense with how the short exact sequences are spliced later on.

Let’s address the first claim first. Because E_r^{0,1} lies on the q-axis, and the differentials point “southeast” towards the empty fourth quadrant, d_r(E_r^{0,1}) is the zero map for any r\geq 3; moreover it can’t be hit by anything, so we now have

    \[E_\infty^{0,1}=E_3^{0,1} = \frac{\ker d_2 : E_2^{0,1}\to E_2^{2,0} }{\im d_2: E_2^{-2,2}\to E_2^{0,1}}. \]

The denominator is the image of a map from a zero vector space, so it is zero, and thus E_\infty^{0,1} is a subspace of E_2^{0,1}, but this latter space can be larger! This is all to say, the short exact sequence for H^1 is misprinted, and should go

(2)   \begin{equation*} 0 \lra E_2^{1,0}\lra H^1 \lra E_\infty^{0,1} \lra 0 . \end{equation*}

One can confirm this by examining the exact sequence given just below, where we see E_\infty^{0,1} injecting into E_2^{0,1}:

(3)   \begin{equation*} 0 \lra E_\infty^{0,1}\lra E_2^{0,1}\overset{d_2}{\lra} E_2^{2,0}\lra E_\infty^{2,0}\lra 0. \end{equation*}

This is a standard decomposition of the map d_2 in the middle: for any morphism \phi:A\to B (in an abelian category at least, we suppose) there is an exact sequence

    \[0\lra\ker\phi\lra A \overset{\phi}{\lra} B\lra B/\im \phi \lra 0 .\]

It remains to see that E_\infty^{2,0}\cong E_2^{2,0}/\im d_2. Because of where E_r^{2,0} sits on the p-axis, it is again the kernel of d_r for all r. Further, it can only possibly be hit by d_2, so in fact E_3^{2,0} survives through all further terms to give the desired equality

    \[E_\infty^{2,0}=E_3^{2,0} = E_2^{2,0}/\im d_2.\]

To splice all this together, we recall that we can connect

    \[\dots\oset{s}{\lra} L\overset{\alpha}{\lra} M\lra 0 \quad \text{and} \quad 0 \lra M \overset{\beta}{\lra} N \oset{t}{\lra} \dots \]

as

    \[\dots \oset{s}{\lra} L \overset{\gamma}{\lra}  N \oset{t}{\lra} \dots \]

where \gamma=\beta\circ\alpha. We maintain exactness since \ker \gamma=\ker \alpha=\im s and \ker t = \im \beta = \im \gamma.
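In the case at hand, the spliced map \gamma: H^1\lra E_2^{0,1} is just the composite of the quotient map onto H^1/F^1H^1\cong E_\infty^{0,1} with the inclusion of E_\infty^{0,1} into E_2^{0,1} found above:

    \[ H^1 \lra H^1/F^1H^1 \cong E_\infty^{0,1} \lra E_2^{0,1}. \]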

Performing this surgery on sequences (2) and (3) yields the main exact sequence claimed by the example, namely

(4)   \begin{equation*} 0 \lra E_2^{1,0}\lra H^1\lra E_2^{0,1}\overset{d_2}{\lra} E_2^{2,0}\lra E_\infty^{2,0}\lra 0. \end{equation*}

Stay tuned for more clarifications from Chapter 1.