Spectral Sequences III

Bigraded Algebras

McCleary introduces the concept of a differential graded algebra in section 1.3 (Definition 1.6, p. 11). These are algebras (over a field k), typically \N-graded, which importantly carry a k-linear map d, called a differential, that raises the degree of homogeneous elements by one and satisfies a “Leibniz rule”:

    \[d(a\cdot a')=d(a)\cdot a'+(-1)^{\deg(a)} a\cdot d(a')\]

for homogeneous a, a' in our algebra A^*. This is a twisted version of what is usually called Leibniz’ rule in calculus (basically just the product rule), and it coincides with how the exterior derivative acts on the algebra of differential forms.
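For instance, for a p-form \alpha and a q-form \beta on a smooth manifold, the exterior derivative obeys exactly this rule:

    \[ d(\alpha\wedge\beta)=d\alpha\wedge\beta+(-1)^{p}\,\alpha\wedge d\beta. \]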

This idea extends easily to the notion of a differential bigraded algebra (E^{*,*}, d), where now the elements are \N^2-graded (for the time being; later we’ll allow \Z^2), but d remains a map of total degree 1. That is,

    \[d: \bigoplus_{p+q=n} E^{p,q} \lra \bigoplus_{r+s=n+1}E^{r,s},\]

and d still satisfies the Leibniz rule

(1)   \begin{equation*} d(e\cdot e')=d(e)\cdot e'+(-1)^{p+q}e\cdot d(e') \end{equation*}

where e\in E^{p,q}.

A standard construction is to form a bigraded algebra by tensoring two graded algebras together. Plain component-wise multiplication would already give an algebra, but to get a differential that satisfies our version of the Leibniz rule 1 as well, we introduce an extra sign. Supposing (A^*,d) and (B^*, d') are differential graded algebras, we set E^{p,q}:=A^p\ox B^q and define

(2)   \begin{equation*}  (a_1\ox b_1 )\cdot (a_2\ox b_2):= (-1)^{(\deg a_2)(\deg b_1)}a_1a_2\ox b_1 b_2.\end{equation*}
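As a quick sanity check on the sign (this small example is ours, not McCleary’s): if a\in A^1 and b\in B^1 are both of degree one, then rule 2 gives

    \[ (a\ox 1)\cdot(1\ox b)=a\ox b, \qquad (1\ox b)\cdot(a\ox 1)=(-1)^{1\cdot 1}\,a\ox b=-a\ox b, \]

so degree-one elements coming from the two factors anticommute, just as graded commutativity would demand.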

Then if we define a differential d_\ox on E^{*,*} by

(3)   \begin{equation*}  d_\ox(a\ox b)=d(a)\ox b + (-1)^{\deg a} a \ox d'(b), \end{equation*}

then d_\ox satisfies the Leibniz rule 1. It is clarifying to check this, so we’ll record it here. Switching notation a bit, we will write (-1)^{\ab{a}} instead of (-1)^{\deg a}. To satisfy 1 we need

    \begin{align*}d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2))& =  d_\ox (a_1\ox b_1)\cdot (a_2 \ox b_2) + \\ &(-1)^{\ab{a_1}+\ab{b_1}} (a_1\ox b_1) \cdot d_\ox(a_2\ox b_2) \end{align*}

We then apply 3 to the individual terms on the right-hand side above to get

    \begin{align*} d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2)) = [d(a_1)\ox b_1+(-1)^{\ab{a_1}} a_1 \ox d'(b_1)]\cdot (a_2\ox b_2)+\\ (-1)^{\ab{a_1}+\ab{b_1}} (a_1\ox b_1) \cdot [d(a_2)\ox b_2+(-1)^{\ab{a_2}} a_2\ox d'(b_2)]. \end{align*}

Now applying the multiplication rule 2 and distributing, we find

(4)   \begin{align*}  d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2)) &= (-1)^{\ab{a_2}\ab{b_1}}d(a_1)a_2\ox b_1b_2 +(-1)^{\ab{a_1}+\ab{d'(b_1)}\ab{a_2}}a_1a_2\ox d'(b_1)b_2 \\ &\quad + (-1)^{\ab{a_1}+\ab{b_1}+\ab{d(a_2)}\ab{b_1}} a_1 d(a_2)\ox b_1b_2 + (-1)^{\ab{a_1}+\ab{a_2}+\ab{b_1} +\ab{a_2}\ab{b_1}} a_1a_2 \ox b_1d'(b_2). \end{align*}

To check the rule holds, we perform this computation by instead multiplying first and then applying the differential. That calculation looks like

    \begin{align*} d_\ox((a_1\ox b_1)\cdot (a_2 \ox b_2)) &= d_\ox((-1)^{\ab{a_2}\ab{b_1}} a_1 a_2 \ox b_1b_2) \\ &=(-1)^{\ab{a_2}\ab{b_1}}[d(a_1a_2)\ox b_1b_2+ (-1)^{\ab{a_1}+\ab{a_2}}a_1a_2\ox d'(b_1b_2)] \\ &=(-1)^{\ab{a_2}\ab{b_1}}[(d(a_1)a_2+(-1)^{\ab{a_1}}a_1d(a_2))\ox b_1b_2 \\ &\quad +(-1)^{\ab{a_1}+\ab{a_2}}a_1a_2\ox(d'(b_1)b_2+(-1)^{\ab{b_1}}b_1d'(b_2))] \\ &=(-1)^{\ab{a_2}\ab{b_1}}d(a_1)a_2\ox b_1b_2+(-1)^{\ab{a_1}+\ab{a_2}\ab{b_1}} a_1d(a_2)\ox b_1b_2 \\ &\quad +(-1)^{\ab{a_1}+\ab{a_2}+\ab{a_2}\ab{b_1}} a_1 a_2\ox d'(b_1) b_2 +(-1)^{\ab{a_1}+\ab{a_2}+\ab{b_1} +\ab{a_2}\ab{b_1}}a_1a_2\ox b_1d'(b_2). \end{align*}

Finally, remarking that \ab{d'(b_1)}=\ab{b_1}+1 and \ab{d(a_2)}=\ab{a_2}+1, so that the relevant exponents agree modulo 2, shows that the terms of this last expansion match those of 4. Everything checks out, and (A^*\ox B^*, d_\ox) becomes a differential bigraded algebra.
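If you enjoy letting a machine double-check sign bookkeeping, here is a minimal Python sketch (ours, not McCleary’s) that runs over all degree parities of a_1, a_2, b_1, b_2 and confirms that the sign exponents in 4 agree mod 2 with those from the multiply-first computation, using \ab{d(a)}=\ab{a}+1 and \ab{d'(b)}=\ab{b}+1.

    # Verify that the sign exponents in equation 4 match those from multiplying
    # first, modulo 2, for every choice of degree parities of a1, a2, b1, b2.
    from itertools import product

    for a1, a2, b1, b2 in product((0, 1), repeat=4):
        da2 = (a2 + 1) % 2   # parity of |d(a_2)|
        db1 = (b1 + 1) % 2   # parity of |d'(b_1)|
        # (exponent from equation 4, exponent from multiplying first), term by term
        pairs = [
            (a2 * b1,                a2 * b1),                 # d(a1)a2 (x) b1 b2
            (a1 + db1 * a2,          a1 + a2 + a2 * b1),       # a1 a2 (x) d'(b1) b2
            (a1 + b1 + da2 * b1,     a1 + a2 * b1),            # a1 d(a2) (x) b1 b2
            (a1 + a2 + b1 + b1 * a2, a1 + a2 + b1 + a2 * b1),  # a1 a2 (x) b1 d'(b2)
        ]
        assert all((lhs - rhs) % 2 == 0 for lhs, rhs in pairs)
    print("all sign exponents agree mod 2")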

A Chain Rule

Given the length and detail of section 1.3, we surprisingly find no glaring errors in that section, but the use of the differential becomes somewhat muddled in the calculations of section 1.4. Again, perhaps as an undesirable side effect of remaining at the “informal stage,” it is difficult to keep track of which assumptions are in force in each example. Case in point: example 1.H, p. 20. The paragraph preceding Definition 1.11 seems to indicate that all graded algebras are assumed to be graded commutative — at least for the rest of the section, one guesses, though the language is vague. Let’s state the assumption here with a bit more force.

Assumption: All graded algebras are graded commutative for the rest of the post. That is to say, for all homogeneous x, y in any A^*, we have x\cdot y =(-1)^{\ab{x}\ab{y}} y\cdot x. Now let’s have a look at the example. We suppose a spectral sequence of algebras (E_r^{*,*}, d_r) with E_2^{*,*}\cong V^*\ox W^*, converging to the graded algebra H^* which is \Q in degree 0 and \{0\} in all other degrees. The example asserts that if V^* is a graded commutative polynomial algebra on one generator, then W^* is a graded commutative exterior algebra on one generator, and vice versa.

The first confusion appears in a restatement of the Leibniz rule near the bottom of page 20, except this time tensors are involved. This appears to be a mixed use/abuse of notation, which was slightly different in the first edition of the book, but no more consistent. The idea is as follows. V^* and W^* embed into V^*\ox W^* under the maps v \mapsto v \ox 1 and w \mapsto 1\ox w. Then one can also write an element w\ox z \in V^*\ox W^* (mind the inexplicably inconsistent choice of letters) as

(5)   \begin{equation*}  w \ox z = (-1)^0 (w\ox 1)\cdot (1 \ox z)= (w\ox 1) \cdot (1 \ox z) \end{equation*}

since the degree of 1 is zero in each graded algebra. Note that this also allows us to regard the tensor symbol itself as the multiplication between pure V^* elements and pure W^* elements, in a way consistent with graded commutativity, writing

    \[z\ox w:=(1\ox z) \cdot (w \ox 1)=(-1)^{\ab{z}\ab{w}} (w\ox z). \]

One can apply the Leibniz rule to the product in 5, so that if V^*\ox W^* comes with a differential d, we get

    \[d(w\ox z) = d((w\ox 1) (1\ox z)) =d(w\ox 1)(1\ox z)+(-1)^{\ab{w}}(w\ox 1) d(1\ox z).\]

The thing is, we really need not write the tensor product w\ox 1; it is just as correct to write w on its own, as we often do with polynomial algebras and the like. Then the above can be written instead as

    \[d(w\ox z) = d(w)\ox z +(-1)^{\ab{w}} w \ox d(z) \]

as McCleary does near the bottom of page 20. What makes this confusing is that, up to this point, we had only seen differentials act on tensors through the bigraded differential obtained by tensoring two differential graded algebras together, as above. In that context, the differential of the bigraded algebra acts on a whole element of V^*\ox W^*; it cannot act on just one side of the tensor. What’s different here is that the tensor product is actually the multiplication operation on each page of the spectral sequence. Hence the restatement of the familiar rule in new notation.

Nevertheless, the next equality is also a bit confounding at first, partly because McCleary goes back to writing the extra 1 in the tensor, suggesting that we need to pay attention to its effect. He says that if d_i(1\ox u) =\sum_j v_j\ox w_j, then

(6)   \begin{equation*}  d_i(1\ox u^k) = k \lt(\sum_j v_j \ox ( w_ju^{k-1})\rt) \end{equation*}

which looks reasonable enough, as it resembles something like a chain rule, d(u^k)=k u^{k-1} d(u). It is presented as if it should follow immediately from the Leibniz rule stated before. But this seems strange when the degree of u is odd. To be totally transparent about this, let’s illustrate the case k=2, suppressing the subscript on the differential again but maintaining the tensorial notation.

    \begin{align*} d(1\ox u^2) & =d((1\ox u)(1\ox u)) \\ & = d(1\ox u) (1\ox u) +(-1)^{\ab{u}} (1\ox u) d (1\ox u) \\ &= \sum_j (v_j \ox w_j)(1\ox u) + (-1)^{\ab{u}} \sum_j (1\ox u) (v_j \ox w_j) \\ &=\sum_j v_j\ox w_j u + (-1)^{\ab{u}} \sum_j (-1)^{\ab{u}\ab{v_j}} v_j \ox u w_j \\ &=\sum_j v_j\ox w_j u + (-1)^{\ab{u}} \sum_j (-1)^{\ab{u}\ab{v_j}+\ab{u} \ab{w_j}} v_j \ox  w_j u \\ &=\sum_j v_j\ox w_j u + (-1)^{\ab{u}} \sum_j  v_j \ox  w_j u \end{align*}

where the last line follows since v_j\ox w_j has total degree \ab{u}+1, so the sign inside the second sum has exponent \ab{u}(\ab{u}+1), which is even. We see that if u has odd degree, these terms cancel and we get 0. So you say “wait a minute, that’s not right, I want my chain-rule-looking thing,” until you eventually realize that if u has odd degree, then since it sits in a graded commutative algebra over \Q we have u^2=-u^2, so u^2 is actually zero! The same goes for all higher powers of u. Then d(u^2)=d(0)=0 makes complete sense. Meanwhile, if u has even degree, the terms pile up with positive sign and we get the chain-rule-looking statement that was claimed. So statement 6 is in fact true, though it really breaks down into two distinct cases.
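For the record (this argument is ours; the book does not spell it out), when \ab{u} is even, statement 6 for general k follows by induction: all the Koszul signs that appear are +1 because \ab{u} is even, so assuming d(1\ox u^{k-1})=(k-1)\sum_j v_j\ox w_j u^{k-2}, the Leibniz rule gives

    \begin{align*} d(1\ox u^k) &= d(1\ox u)\cdot(1\ox u^{k-1}) + (1\ox u)\cdot d(1\ox u^{k-1}) \\ &= \sum_j v_j\ox w_j u^{k-1} + (k-1)\sum_j v_j\ox w_j u^{k-1} = k\sum_j v_j\ox w_j u^{k-1}, \end{align*}

which is exactly the claimed chain rule.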

Going forward in the example, McCleary only really seems to use the chain rule (liberally mixing in the sort of abuse of notation described above) on terms of even degree, so it’s tempting to think that it only applies there, but it is sort of “vacuously true” in odd degree as well. Oh well. Onwards.
