# Math Circle Resources

In preparation for a piece of writing I did about starting a math circle in county jails in Boulder and New Orleans, I collected the activities we’ve developed (based on many other math circle resources available online and in print) to share broadly via Google Drive. The piece got long and may turn into a journal article rather than the blog post originally intended, but when it is available I will link to it here. It describes the issues, obstacles, and joys of engaging in this kind of outreach, which I’ve been organizing on a weekly basis since April 2023. Look out for more coming soon!

# New paper!

My friend and academic brother Néstor Díaz and I started talking about working on a project together probably a bit over a year ago. I pitched the idea of proving shellability of Bruhat order for some of the symmetric varieties I was familiar with, figuring that it would be a fun way to learn more about poset topology, and that it wouldn’t be terribly difficult to get some results building off of past work of Incitti, Can, Cherniavsky, Twelbeck, Wyser and others.

The results we set out to prove turned out to have some fight in them! We thought we had something several times and then found problems and counterexamples in the course of writing things more carefully. Eventually, we shifted gears slightly to working with the sects of the (p,q)-clans. The idea here was that (1) the sects are slightly simpler and smaller than arbitrary intervals of clans, as they group “like” clans together, and (2) we know a nice bijection of sects and collections of rook placements. This problem was also not without challenges and subtleties, but after working out a way to associate a partial permutation to a clan such that covering moves could be coherently labelled, the argument came together.

We had pretty much worked this out by the time we met up at the Schubert Summer School at UIUC in June, but for various reasons it took us a while to write up the details. We’re excited that we were finally able to post a pre-print to arXiv.org last week just in time for the holidays. Enjoy!

# Talks

Recently I gave a personal record of three (3!) talks in one day in two different languages. I was honored to be invited to speak in the online colloquium of the Universidad de El Salvador, but the date available happened to be on the same day that I had already agreed to speak in the Rocky Mountain Algebraic Combinatorics Seminar (RMACS) at Colorado State in nearby Fort Collins. The topics were similar, so fortunately it wasn’t too much work to prepare, but on top of two Calculus classes in the morning, seldom have I spoken so much in one day.

I enjoyed the format of RMACS, which gave the opportunity for an introductory talk before a regular research talk to try to create an open environment for graduate students and folks from other fields. It reminded me of a concept I’m still eager to try to make happen in math of talks-as-dialogues. That is, I think it would be interesting to have math talks operate as conversations between two people where one is trying to explain something to the other and the “explainee” can interrupt, question, elaborate or make connections as much as they like.

I think the most enjoyable math talks I’ve attended have more-or-less functioned in this way already, as conversations between the attendees and the presenter. This is often to the expositor’s credit, having selected and introduced their topic (or themselves) in a way that invites this sort of interaction. But building a conversational flow into the structure of our talks I think would frequently force the rendering of ideas in ways that are, if not more clear, at least more diverse and therefore accessible to an audience.

Good discussion is part of good exposition. One can observe this in the way radio programs, podcasts, or video media that interview experts (like, say, Numberphile) are structured. Radiolab also comes to mind as a program that exploits this technique. An expert isn’t just invited on the show to explain their theory and known results for an hour. There are constant interruptions for clarifications, “what ifs”, auditory “illustrations,” philosophical tangents, and expressions of wonder and amusement. You could also see parallels in the idea of masterclasses from music performance. Though the tone and politics are more authoritarian than what I am imagining, a masterclass (in which a musician performs, receives coaching from a “master,” and responds and adjusts before an audience) appeals to our interest in the dialogic element of engaging with art.

For a math talk, this would put some pressure on the “explainee,” but it would also alleviate the pressure on other audience members to feel guilty for interrupting the speaker or asking questions at the wrong time, etc. Interaction from the rest of the audience would hopefully be encouraged in this set-up, and we could democratize research talks in the ways we are beginning to do with our classrooms. The audience would be freer to dictate what it is they want to get out of the talk. Multiple perspectives could be heard. Knowledge would be shared, and interpreted.

I’m not saying all traditional math talks should go away. There’s always a time and place for a well-delivered lecture. But I think every mathematician’s experience with research talks is uneven enough for us to suspect that maybe there should be other ways for us to communicate our research.

# Machines for Math

I came across this article today while looking for information on how Jensen and Williamson implemented an algorithm for computing the p-canonical basis. Not recognizing any of the co-authors, I assumed it was a group of students until coming across this paragraph.

In this work we prove a new formula for Kazhdan-Lusztig polynomials for symmetric groups. Our formula was discovered whilst trying to understand certain machine learning models trained to predict Kazhdan-Lusztig polynomials from Bruhat graphs. The new formula suggests an approach to the combinatorial invariance conjecture for symmetric groups.

Turns out the co-authors are Google, I mean DeepMind, people. This seemed kind of wild to me — to throw machine learning at something like computing KL polynomials, and I guess it is sort of radical. At least enough that they published another paper in Nature describing this and other efforts at using machine learning to formulate or solve conjectures. And U. Sydney is going on a PR push about it.

Reading the Nature article, it is clear that this isn’t really the doomsday scenario we worry about where machines can prove all the theorems and we mortals are useless to them. It sounds like there’s still a high degree of specialized knowledge and intuition that goes into training the supervised learning model. At least that’s my sense from this paragraph:

We took the conjecture as our initial hypothesis, and found that a supervised learning model was able to predict the KL polynomial from the Bruhat interval with reasonably high accuracy. By experimenting on the way in which we input the Bruhat interval to the network, it became apparent that some choices of graphs and features were particularly conducive to accurate predictions. In particular, we found that a subgraph inspired by prior work may be sufficient to calculate the KL polynomial, and this was supported by a much more accurate estimated function.

I don’t know much about machine learning, and the second sentence doesn’t give a clear sense of how the “experimenting on” inputting went, but I would be surprised if the machine was able to actually pull out hypercube decompositions and diamond completeness as the features it needed to really predict the KL polynomials. That is, it sounds like ML could be a useful interactive tool for discovering and verifying ideas, like interactive theorem proving seeks to be, but they’re still not doing the math for us. Not to mention that the proof of the main formula of the paper involves some layers of categorification and pretty heavy geometry.

Still, all in all, it looks like we’re all gonna have to learn how machines learn sooner or later.

# Study Music

I often listen to music while doing math. I’ve never been one to pay strict attention to lyrics, so it doesn’t usually matter too much whether the music is instrumental or not. I remember listening to a lot of Kanye West while studying for quals for instance, which seems kind of weird in retrospect but I guess it worked out. Usually I get stuck in music ruts, which is kind of fine for doing work to because you don’t necessarily want to be challenged by unfamiliar music while you’re busy doing heavy mental lifting.

Gnawa music has origins in Sufi mysticism, and is apparently supposed to be for exorcising jinns, possessive spirits. It later caught on with some American jazz musicians and is now identified with Morocco as sort of a “national music” (though there is more socio-political subtext to that story than I realized). In any case, it is great math music because it has good impulse but is also kind of drone-y so it keeps you sort of cruising along in the zone. I think this album from Mahmoud Guinia was my first exposure, and one I still come back to regularly.

Whenever I’m really stuck but I want some study music, Zuckerzeit from early German electronic duo Cluster is another album I go for regularly. The title means “sugar time,” which, coincidentally, is often math time as well. Bring on the cookies.

# Knuth on P v NP, god(s)

I don’t remember exactly how I got there, but reading through a short profile on Don Knuth I was glad to learn that my worldview roughly coincides with his.

Taking the interview for this article as a mini-instance of “All Questions Answered,” this reporter asked about the question of P versus NP. “It’s probably true that P equals NP, but we will never know why,” Knuth answered. The question has two aspects, he explained. The first: Given a computational problem, does there exist a polynomial-time algorithm for its solution? And the second: Is that algorithm knowable—that is, can we actually write it down? “What I suspect is that there is some algorithm, it’s out there, but it’s so complicated that for practical purposes, it makes no difference because nobody will ever know what it is,” he said. A suggestive example comes from the Robertson-Seymour theorem, which says that for any minor-closed family of graphs, there exists a polynomial-time algorithm to recognize whether a given graph belongs to the family. But “almost never do we know what the algorithm is.”

But just because something is hard doesn’t mean we should give up!

Also worth noting that this appears in the middle of a piece which characterizes the early 21st century mainstream North American academic position on diversity pretty well. Knuth is praised for contributing to diversity by giving money that allowed MathSciNet to display authors’ names typeset in their native languages instead of just transliterated into English. I am for this, but the tenor hews a little too close to the willful ignorance of claiming mathematics is a diverse and inclusive field because it is international. How eager we were to “celebrate diversity.” Oops!

Also, if you read further, it talks about Knuth’s lectures on religion and coming out as a Christian. I think all I can say about that is that I appreciate the acknowledgement that “God” is really just a catch-all for mystery. It makes more sense to me as a metaphor, though.

So how much more complicated can this game get? What motivation do we have to study other homogeneous spaces of higher dimensions? How is combinatorics involved?

I think the first and second questions need to be answered together, as there is constant tension between the impulse to add additional complexity to mathematical structure and the need to explain to other people why they should care. Mathematicians have for the most part chosen a sort of middle path which is to build the tools that would enable the study of arbitrarily complicated spaces by classifying the components of which they can be built. Then, whenever motivation to study a particular case of these things comes along from physics, economics, computing, or other parts of mathematics (most often), the motivated researcher has some footing to make progress by putting together the pieces.

You could think of it sort of like being a plumber going to the hardware store looking for the right fittings for some pipes. Maybe you’re a physicist and you’re trying to study a system of particles that obey certain symmetries. So what do you do? You go to the group store of course, and see if they have the right group. Sometimes they don’t have the perfect one, but they have the right parts and documentation so that you can assemble the right one yourself without too much difficulty. Now that you have a sensible way to keep track of symmetries in your system, you can go back to worrying about predicting how the universe behaves.

It is on the one hand wonderfully remarkable that the business of classifying the simple pieces of which geometric spaces can be built is a tractable pursuit at all. On the other hand, it is a mathematician’s job to make it tractable. If some definition of what constitutes a “simple piece” leads to an impossible classification, then it is not the right definition. While some people might have you believe that theorems like the classification of finite simple groups or semi-simple Lie algebras are miracles from gods, they have also come through people toying with and massaging definitions until the classification task began to seem sensible (though still possibly monumental).

I don’t mean to demean the examples above; I too am enamored of them, and most of my published research uses the classification of simple Lie algebras as sort of a starting point. But I think of integer factorization as the proto-problem for these deconstructive classification quests. The fact that natural numbers all break down into prime factors (they are “classified” by their prime divisors) really does seem like a miraculous piece of order in the universe, or at least an inescapable part of how human consciousness interprets it. That we don’t have a very efficient way to take a number and break it down into factors is astonishing given how fundamental this problem is.

There is a dilemma tied to the practicalities of trying to have a career. We mathematicians are incentivized not to work on old, very difficult problems, but rather to invent new theories (or, more likely, work on the pet theories of the prior generation), asking questions possibly nobody else is asking and solving problems that aren’t necessarily super difficult — just no one’s been around to bother with them yet. This is a cynical view of what could also be romanticized as a tremendous freedom and creativity afforded to the profession by the fact that we’ve managed to convince the rest of the scientific and engineering fields that they need us to teach them calculus, statistics, and the like. (See: Mackey’s lecture on what it is to be a mathematician.) But there are also cases where solutions to old, difficult problems have eventually come through extended scenic detours.

Moreover, I don’t want to diminish the amount of work that has gone into building up the vast amount of theory that exists today. It is not that it comes so easily, but rather that it comes at all that appeals to the hungry mathematician. It can be a joyous experience to sit down with a problem, toy with it for hours, weeks or months, and then to finally resolve it. And it is sort of incredible that human intellect is suited to the process of assimilating abstract information, experimenting, ordering it, and manipulating it in a way that reveals an underlying structure. The pleasure that comes with this experience is, I guess, probably the main force behind the proliferation of mathematical theory. It is a good thing that we will never run out of problems.

But now I haven’t said anything about combinatorics.

It might seem like we’re pretty far in the weeds by now, but there’s a point to make here which is essential to my work. And we may as well introduce some of that noxious stuff — mathematical jargon. The collection of symmetries we’ve been talking about that can act in sequence or be done and then undone is usually called a group in mathematics. Maybe because it’s sort of like a group that gets together to play ball (though it might be with a line). By the way — not everyone in the group actually knows how to really play. There is always one character called the identity that just leaves the object as it is, moving nothing. This is still a symmetry though, in the same way 0 is a number, which is worth thinking about some.

An interesting question to ask is: where can points on the line go as the group bobbles it around? To simplify, how about where does a particular point, say zero, go? Where it ends up depends on the particular symmetry we apply, but considering all possibilities we see that zero can really go anywhere on the line. How? Well, if we want to take 0 to some value a, we actually have two options: either apply the translation that adds a to every point, or apply the reflection through the value a/2. Since the point 0 can go anywhere, we say that its orbit under the group is the whole line.
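As a small illustration, here is a minimal Python sketch of these two options (the names `translate` and `reflect_through`, and the target value `a`, are my own choices, not notation from the post):

```python
# Symmetries of the real line as functions: translations and reflections.

def translate(a):
    """The translation that adds a to every point: x -> x + a."""
    return lambda x: x + a

def reflect_through(p):
    """The reflection through the point p: x -> 2p - x."""
    return lambda x: 2 * p - x

a = 5.0
# Two different symmetries that both send 0 to a:
assert translate(a)(0) == a            # slide everything over by a
assert reflect_through(a / 2)(0) == a  # reflect through the midpoint a/2
```

Reflecting through a/2 works because a reflection through p sends x to 2p - x, so 0 goes to 2(a/2) = a.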

We’ve seen that we can think of the members (elements) of our group as constituting two copies of the line. On the first line, each point is a translation. On the second line, each point is a reflection (which can actually be realized as a translation and a reflection through 0). The fact that we have two ways to take 0 to, say, the value a mirrors the fact that the group looks like two copies of the line.

So here’s a key idea: probably you would agree that the line is a natural enough geometric object. We need lines to get notions of distance and angle going in geometry! On the other hand, this group which is two copies of the line seems a bit funny and abstract. But it is still intimately attached to the line by keeping track of its symmetries. So what if there was a way to have both? That is, to have just the line in hand as an object but manifest in a way that also keeps track of the symmetry. This is what homogeneous spaces are for.

To explain how we get a homogeneous space here, I need to try to explain something in group theory called the orbit-stabilizer theorem. Now, I don’t really know how to do this without either bringing in a little notation or being extremely verbose (and likely confusing). But there is power in notation so let’s try that. We’ve observed that all reflections can be considered as reflections through zero together with a translation. Let’s call the reflection through zero “flip,” and label each translation by the value a that it adds to every point. Then every element of our group is a pair of the form (a, flip) or (a, no flip) according to whether it is a reflection or a translation.
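To make this concrete, here is a hedged Python sketch encoding group elements as such pairs and composing them (the encoding and function names are my own, not from the post):

```python
# Each group element is a pair (a, flip): it acts on the line by
# x -> a + x (no flip) or x -> a - x (flip).

def act(elem, x):
    a, flip = elem
    return a - x if flip else a + x

def compose(g, h):
    """The single pair whose action is 'apply h, then g'."""
    a, g_flip = g
    b, h_flip = h
    # g(h(x)) = a +/- (b +/- x) collapses back to one pair:
    return (a - b if g_flip else a + b, g_flip != h_flip)

g = (3, True)   # reflection: x -> 3 - x
h = (2, False)  # translation: x -> x + 2
for x in (-1.0, 0.0, 2.5):
    assert act(compose(g, h), x) == act(g, act(h, x))
```

The key point the code checks is closure: composing any two pairs gives another pair of the same form, which is what makes this set of pairs a group.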

Notice that the reflection through zero, (0, flip), “fixes” zero, that is, leaves it put, and other than the identity, which can be thought of as (0, no flip), it is the only element that does this. In group theory terms, these two elements form a subgroup, which consists of the elements that fix 0. The orbit-stabilizer theorem says that if we clump these two elements together and think of them as a single object, and then do the same for all of the pairs (a, flip) and (a, no flip), then the resulting collection will be the same as the orbit of the point 0. But this is just the line again! This should make some sense; in our group, we had two copies of the line and all we’ve done is glued those copies together at points that line up.
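This clumping can be checked directly in a small Python sketch (the pair encoding and names are my own): both members of each clump send 0 to the same point, so each clump corresponds to exactly one point of the line.

```python
# (a, flip) acts by x -> a - x; (a, no flip) acts by x -> a + x.
# Both members of the clump {(a, flip), (a, no flip)} send 0 to a.

def act(elem, x):
    a, flip = elem
    return a - x if flip else a + x

for a in (-2, 0, 7):
    # The clump labelled by a corresponds to the orbit point a:
    assert act((a, True), 0) == act((a, False), 0) == a
```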

This “clumping” of objects according to a subgroup is called taking a quotient in group theory, and a homogeneous space is really just a quotient. It’s useful because the coordinates on the line now come from the coordinates on the group, which facilitates studying how the symmetries act, especially in more complicated examples. A lot of my work is studying how these quotient spaces break up and fit together in pieces when you restrict your symmetry set. We’ll try to come back to this in the next post.

Response to a prompt from a non-mathematician friend.

What is symmetry? Usually we think of symmetry as how something looks — it’s the same on both sides of an axis, or you can fold an image along a crease but you only need one side to know how to complete the picture. You achieve this by reflecting the half you see across the axis.

The word “symmetry” in the vernacular usually refers to this kind of reflective symmetry. In its origin, the word means something like “agreement in measure or form.” But mathematicians have deranged this word to turn it from a property that a thing has into something you can do. This means instead of just observing the symmetry across an axis, we think of the symmetry as actually making the reflection happen. In the end, the resulting image is the same of course, but thinking of symmetries as actions is an important and useful step.

Now that symmetry is made into something kinetic, we can reconsider the forms symmetry can take. What is the fundamental result of applying a symmetry? Well, it’s really what we said already: ultimately the image or object to which the symmetry is applied appears the same. Taking this as our defining property allows us to include a few other kinds of symmetry we haven’t yet considered. In addition to reflective symmetry, there is also rotational symmetry — like taking a wheel and spinning it on its axle — and translational symmetry.

Translational symmetry takes us immediately into the infinite. It is what it sounds like: you pick something up, slide it over, plunk it back down, and then it appears exactly the same as it did before. But I don’t mean it looks like the same object just in a different place. Rather, the whole montage looks exactly the same! Like if you had two photos of the scene where this happened side by side, a “before” and an “after,” they would look exactly the same. The thing is that no “finite” or “bounded” objects (as we are accustomed to) can have this symmetry. This is because bounded objects have extremities — a northernmost point, a southeasternmost point, etc. If you take one of these and then drag it further in that direction, you’ll always be cutting a new path, so the extremity can’t wind up back in a position occupied by the original object. Necessarily, the picture will look different.

But if you take something that extends infinitely — say an abstracted line, the continuum, our model of the real numbers — all of a sudden every point has somewhere back on the line to go when you slide. Here’s another funny thing that begins to emerge: the set of symmetries begins to look like the object it’s acting on. Considering the real line, every real number corresponds to a symmetry of translation. That is, for a fixed real number a, we can send each number x in the line to x + a and have the whole line slide over (to the left or the right depending on whether a is negative or positive) and land back on top of itself. So the line has a line’s worth of symmetries! Does it have any others?
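A tiny Python sketch of this sliding (with `a` as my name for the slide amount):

```python
# Translation by a: every point x goes to x + a, and the whole line
# lands back on itself, since any target x is hit by the point x - a.

def slide(a):
    return lambda x: x + a

left = slide(-3)        # negative a slides the line to the left
assert left(5) == 2

a = 7
assert slide(a)(10 - a) == 10  # the point 10 - a lands on 10
```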

Well, again we have reflection. Pick a point in the line, and reflect the line about it. It’s not hard to see that everything goes back to somewhere else on the line. Moreover, points that were close together end up the same distance apart after applying the reflection. Symmetries that satisfy this property are called isometries, as they preserve the intrinsic geometry of the line.
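A quick Python check of the isometry property for a reflection (the function name is mine):

```python
# Reflection through p sends x -> 2p - x; distances are preserved because
# |r(x) - r(y)| = |(2p - x) - (2p - y)| = |y - x|.

def reflect_through(p):
    return lambda x: 2 * p - x

r = reflect_through(1.5)
for x, y in ((0, 2), (-3, 4), (1.5, 1.5)):
    assert abs(r(x) - r(y)) == abs(x - y)
assert r(1.5) == 1.5  # the chosen point stays put
```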

So do we have another line’s worth of reflections to add to our list of symmetries? Well, you could say that if you want to think of these symmetries in isolation, as objects of a kind that are unable to interact with each other. But that would run counter to the notion we are developing of symmetries not as things that are but as things that do! A symmetry acts on the line, and once it has acted, another symmetry can act on the result and so on, kicking the points on the line this way and that.

To make the next point, let’s consider an example. Take the reflection through the number 1. Under this reflection, 1 stays put and 0 goes to 2. Maybe you can convince yourself that this is sufficient information to completely determine this symmetry. But here’s another way to do the same thing: reflect through 0, and then slide everything 2 to the right. With a bit more thought, you can probably convince yourself that the reflection through any point can be obtained similarly by a reflection through 0 followed by a slide, or a slide and then a reflection through 0. What we say is that the reflections and translations altogether are generated by just the reflection through 0 together with the translations.
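The example can be verified in a few lines of Python (names are my own):

```python
# Reflection through 1 is x -> 2 - x. The same map factors as:
# reflect through 0 (x -> -x), then slide 2 to the right (x -> x + 2).

def reflect0(x):
    return -x

def slide(a):
    return lambda x: x + a

def reflect_through_1(x):
    return 2 - x

assert reflect_through_1(1) == 1   # 1 stays put
assert reflect_through_1(0) == 2   # 0 goes to 2
for x in (-1, 0, 1, 3.5):
    assert slide(2)(reflect0(x)) == reflect_through_1(x)
```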