In this episode there was unfortunately little mathematics, so we'll talk about a mathematical object that was mentioned in passing: Grobner bases ("bases" is the plural of "basis").
Grobner bases are fairly technical objects in the field of commutative algebra, so we'll only describe a simplified version, and we'll try to use simple examples to demonstrate their use. They are used to ease computations with polynomials in multiple variables.
Let's start with an example of polynomial division in one variable. This works just like the long division you learned in grade school. Let's divide f = x^3 + x^2 + 2x + 1 by g = x^2 + 1. The first step is to cancel the first term of f by multiplying g by the appropriate term (here x) and subtracting the result from f. When we do that we get x^2 + x + 1. Doing this again (this time multiplying g by 1) we get x. Now we won't be able to continue this process any further, because if we multiply g by anything, even a constant, the result will still have a higher degree than x. So the remainder when we divide f by g is x. In general, it is important to note that the degree of the remainder is always less than the degree of g. Now what would happen if we wanted to divide f by two polynomials, g_1 and g_2? To be more precise, what we want to do is write f = q_1 g_1 + q_2 g_2 + r, where r has degree less than both g_1 and g_2. With some careful thought, one can show that you can find such an r by dividing f by g_1 and then dividing the remainder of that by g_2. You might hope that the result does not change if you switch the order in which you do the divisions, but in general it can, and this problem gets much worse once several variables are involved, as we'll see.
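To check a division like this by machine, sympy's div function carries out one-variable polynomial division (the polynomials below are just an illustrative choice):

```python
from sympy import symbols, div

x = symbols('x')

# Divide f = x^3 + x^2 + 2x + 1 by g = x^2 + 1 (an illustrative example).
f = x**3 + x**2 + 2*x + 1
g = x**2 + 1

q, r = div(f, g, x)
print(q)  # x + 1
print(r)  # x, which has degree less than the degree of g
```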
Now you might wonder: "is it possible to extend polynomial division to polynomials in multiple variables in such a way that the remainder term is unique?" The answer is yes, but it is much more complicated than you might think. First let's look at the division algorithm in one variable in more detail. Here's a list of the steps we do to divide f by g, written in pseudo-code.
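One way to write those steps (a sketch, phrased recursively; DIVIDE is just a label for this post):

```
To DIVIDE f BY g:
    IF f < g, return f
    h = (leading term of f) divided by (leading term of g)
    replace f with f - h*g
    go back to the second line
```

(Here "f < g" just means the degree of f is less than the degree of g.)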
Now how could we extend this algorithm to polynomials with multiple variables? Well, in the second line it looks like we need a way to say whether one polynomial is less than another. The way to properly generalize this condition to multiple variables is subtle. What we actually want is a way to say whether 1 term (called a monomial) is less than another term. (Here we don't mean "less than" like "3 < 6", we mean a more general kind of "less than" called a "total ordering," which is described in the tangent.) Then to a polynomial f we assign the term in f which is the largest with respect to this ordering, which we call the leading term, or Lf. Then what we actually mean in the second line is "if Lg does not divide any term in f, return f." We'll get back to how to generalize the other lines after we discuss total orderings a bit.
Now how do we assign a total order to monomials? With just one variable, a monomial is just a constant times x^n, for some nonnegative integer n, and it is easy to assign a total order: we just say x^n < x^m if n < m. However, if there are multiple variables, this becomes a little trickier. For example, which is "smaller," x or y? They both have the same degree, which is 1. Therefore, we must make some arbitrary choice, so for our examples we'll say x < y. Then we'll say x^a y^b < x^c y^d if either (b < d) or (b = d and a < c); in other words, we compare the exponents of the "bigger" variable y first, and use x to break ties. So now we can compare monomials, which means we fully know how to do the second line.
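This ordering is easy to implement. Here's a small sketch in Python, where a monomial x^a y^b is represented as the exponent pair (a, b) (an encoding chosen just for this example); with the convention x < y, we compare the exponent of y first:

```python
# Compare two monomials x^a y^b, given as exponent pairs (a, b),
# in the ordering where x < y: compare the exponent of y first.
def monomial_less(m1, m2):
    a, b = m1
    c, d = m2
    return (b, a) < (d, c)  # tuple comparison: b < d, or b == d and a < c

# The leading term Lf corresponds to the largest monomial in this order.
def leading_monomial(monomials):
    return max(monomials, key=lambda m: (m[1], m[0]))

print(monomial_less((1, 0), (0, 1)))  # x < y: True
print(monomial_less((2, 1), (1, 2)))  # x^2 y < x y^2: True
print(leading_monomial([(2, 1), (1, 2), (0, 0)]))  # (1, 2), i.e. x y^2
```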
If the algorithm does not stop at the second line, then we know there is a term in f, which we'll call t, such that Lg divides t. Let h = t / Lg; then we can make f "smaller" by subtracting hg (because this removes the term t). Now we are in a position to state the generalized algorithm, which, as you might guess, is significantly more complicated than our first algorithm.
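One version of it looks like this (a sketch; Lg_i means the leading term of g_i, following the L notation above):

```
To DIVIDE f BY g_1, ..., g_n:
    WHILE f is not 0 DO
        IF no Lg_i divides any term of f
            return f
        pick a term t of f and a g_i such that Lg_i divides t
        replace f with f - (t / Lg_i) * g_i
    return 0
```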
A bit of explanation is in order. The "WHILE C DO" command interacts with the indentation. It means that if the condition C is true, then you go to the next line, which is indented. Once you reach the end of the lines that are indented more than the WHILE, you go back to the beginning and check whether C is true again. If C is false, then you skip to the next line that is indented the same as the WHILE command. The IF command is similar, except that once you reach the line that has the same indentation as it, you don't go back, you just keep going forward. This next activity will be more difficult than the previous one: take a polynomial f and two polynomials g_1 and g_2, and run the algorithm twice, dividing by the g's in each of the two possible orders.
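To see the order-dependence concretely by machine, sympy's reduced function runs this kind of division. The polynomials below are an illustrative choice; listing the generators as y, x makes y the bigger variable, matching the convention x < y above:

```python
from sympy import symbols, reduced

x, y = symbols('x y')

# An illustrative polynomial and two divisors.
f = x*y**2 + x**2*y + x**2
g1 = x*y - 1
g2 = x**2 - 1

# Divide f by the same two polynomials, in the two possible orders.
_, r1 = reduced(f, [g1, g2], y, x, order='lex')
_, r2 = reduced(f, [g2, g1], y, x, order='lex')

print(r1)
print(r2)
print(r1 == r2)  # the two remainders differ!
```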
If you followed the algorithm correctly, you got different answers depending on the order in which you divided. This would cause significant problems in computations, which are a bit too theoretical to discuss here. However, Grobner bases are a way to fix this problem. Given a monomial ordering and input polynomials f_1, ..., f_n, one can compute a new set of polynomials g_1, ..., g_m, called a Grobner basis, such that the set where all the g polynomials are equal to 0 at once is the same as the set where all the f polynomials are equal to 0 at once (in fact the g's generate the same ideal as the f's, which is a very important condition in commutative algebra, but a bit too theoretical to explain here). Also, if you divide any polynomial h by the g's using the above algorithm, you will always get the same remainder, no matter the order in which you use them.
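sympy can compute Grobner bases directly. Here is a small sketch (the polynomials are an illustrative choice, with the generators listed as y, x so that y is the bigger variable, matching x < y):

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Two illustrative input polynomials.
f1 = x*y - 1
f2 = x**2 - 1

# Compute Grobner bases from the inputs given in both orders.
G1 = groebner([f1, f2], y, x, order='lex')
G2 = groebner([f2, f1], y, x, order='lex')

# Dividing a polynomial by a Grobner basis gives a unique remainder.
h = x*y**2 + x**2*y + x**2
_, r1 = G1.reduce(h)
_, r2 = G2.reduce(h)

print(r1)
print(r1 == r2)  # True: the remainder no longer depends on the order
```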
In summary, Grobner bases are fairly technical, which is probably why they were only mentioned in passing on the show. They were invented in 1965, so they are a fairly recent invention as far as mathematics is concerned, and they have had a very important effect on both theoretical and practical mathematics. For theoretical math, they have made it much easier to compute explicit examples, which makes it easier to discover new theoretical phenomena. They have also greatly improved the ability of computers to do computations with polynomials, which is important for various engineering applications, like tracking the movement of robot arms.