
The principle of **mathematical induction** is a simple but powerful technique for proving mathematical results. In its most basic form, it says the following:

If something’s true for an integer $a$, and if it being true for the number $n$ implies it is true for the number $n+1$, then it is true for all integers $n \ge a$.

This is a more-or-less “obvious” statement. It has two components: the **base case** and the **inductive step**. The base case establishes truth for one number. The inductive step does not actually establish truth of the statement for any particular value. Instead, it establishes the truth of a logical implication. In formal terms, the above box reads:

If $P(n)$ is a proposition for the integer $n$, then if $P(a)$ is true and $P(n)$ implies $P(n+1)$ for every $n \ge a$, then $P(n)$ is true for all integers $n \ge a$.

Think of induction as a string of dominos. Establishing the base case is like knocking down the first domino. Establishing the inductive step is like setting up the dominos so that if the $n$-th domino falls, it knocks over the $(n+1)$-th domino. Thinking of it this way, it should be obvious that once these two things are established, then all the dominos will fall. This is one of the concepts that is best learned through examples.
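As a concrete illustration (a standard textbook example, not one drawn from the post itself), here is induction proving the formula for the sum of the first $n$ positive integers:

```latex
% Claim: for all integers n >= 1,  1 + 2 + ... + n = n(n+1)/2.

\textbf{Base case} ($n = 1$): the left side is $1$ and the right side is
$\tfrac{1 \cdot 2}{2} = 1$, so the claim holds.

\textbf{Inductive step:} assume $1 + 2 + \cdots + n = \tfrac{n(n+1)}{2}$. Then
\[
1 + 2 + \cdots + n + (n+1)
  = \frac{n(n+1)}{2} + (n+1)
  = \frac{n(n+1) + 2(n+1)}{2}
  = \frac{(n+1)(n+2)}{2},
\]
which is exactly the claim with $n$ replaced by $n+1$.
By induction, the formula holds for all integers $n \ge 1$.
```

Note how the two dominos show up: the base case knocks over $n = 1$, and the inductive step guarantees each case topples the next.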

Consider the following game: I write down a random real number between 0 and 1, and ask you to guess it. What’s the probability that you guess it correctly? The answer is zero. You might wonder: “But it’s *possible* for me to guess the correct answer! That means that the probability has to be more than zero!” and you would be justified in wondering, but you’d be wrong. It’s true that events that are impossible have zero probability, but the converse is not true in general. In the rest of this post, we show why the answer above was in fact zero, and why this doesn’t need to do irreparable damage to your current worldview.
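A quick sketch of why the answer is zero (a standard squeeze argument; the post may develop it differently): if $X$ is uniformly distributed on $[0,1]$, the probability of landing in an interval is its length, so for any fixed guess $x$ and any $\varepsilon > 0$,

```latex
\[
P(X = x) \;\le\; P\bigl(x - \varepsilon \le X \le x + \varepsilon\bigr)
\;\le\; 2\varepsilon .
\]
% This holds for every eps > 0, so letting eps -> 0 forces
\[
P(X = x) = 0,
\]
% even though the event {X = x} is not impossible.
```

So a possible event can still carry zero probability: “probability zero” and “impossible” come apart once uncountably many outcomes are in play.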

In mathematics and science, it’s often too difficult to get precise formulas for things, so researchers instead *estimate* the growth. For example, in a previous post on the Fibonacci sequence, we found that the Fibonacci numbers grow roughly exponentially; that is, they are close to an exponential function, but there is a tiny error term. Often, we don’t actually care about the exact formula but just want an idea of how quickly something grows. In order to capture this vague notion precisely, we use **asymptotic notation**, which captures all the long-run information contained in the original function.
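As a quick numerical sketch of this idea (my own illustration, not code from the post): the Fibonacci numbers satisfy $F(n) \sim \varphi^n/\sqrt{5}$, where $\varphi$ is the golden ratio, and the ratio of the two sides tending to 1 is precisely the “long-run information” that asymptotic notation records while the tiny error term is thrown away.

```python
import math

def fib(n):
    """n-th Fibonacci number, built up by iteration (F(0) = 0, F(1) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2  # golden ratio

# F(n) is asymptotic to phi**n / sqrt(5): the ratio approaches 1 as n grows,
# even though the exact values differ by a small error term.
for n in (10, 20, 30):
    print(n, fib(n) / (phi**n / math.sqrt(5)))
```

Running this shows the ratio creeping toward 1, which is what a statement like $F(n) = \Theta(\varphi^n)$ is capturing.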

Sometimes sequences of numbers are defined recursively, so that given the previous terms of the sequence we can find the next term. A classic example of this is the sequence of **Fibonacci numbers**, where every two consecutive terms determine the next term, and so if we are given the first two terms, we can calculate the whole sequence. However, if we wanted to, say, calculate the 1000th Fibonacci number, we’d have to start out with the first two, add them to compute the 3rd, add the second and third to compute the 4th, and so on, slowly building up every single Fibonacci number before the 1000th in order to get there. It’d be much nicer if we had a formula for the Fibonacci sequence. That way, if we wanted the 1000th Fibonacci number, all we’d have to do is plug 1000 into the formula to compute it. Of course, the formula is only useful if it takes less time to crunch it out than it does to do it the brute-force way. A method of finding closed formulas for sequences defined by recurrences is the use of **generating functions**. Generating functions are functions which “encapsulate” all the information about a sequence, except you can define them without knowing the actual terms of the sequence. The power of generating functions comes from the fact that you can do things like add and multiply them together to create generating functions of other sequences, or write them in terms of themselves to find an explicit formula. Once you have an explicit form for a generating function, you can use some algebra to “extract” the information from the function, which usually means you can find a formula for the sequence in question.
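For Fibonacci, the closed formula that the generating-function method produces is Binet’s formula, $F(n) = (\varphi^n - \psi^n)/\sqrt{5}$, where $\varphi$ and $\psi$ are the two roots of $x^2 = x + 1$. A small sketch (my own, not from the post) comparing it against the brute-force recurrence:

```python
import math

def fib_recurrence(n):
    """Build up the sequence term by term -- the 'brute-force' way."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed_form(n):
    """Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5),
    where phi and psi are the roots of x**2 = x + 1."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi**n - psi**n) / sqrt5)

# Both produce the same sequence 0, 1, 1, 2, 3, 5, 8, 13, ...
print([fib_recurrence(n) for n in range(10)])
print([fib_closed_form(n) for n in range(10)])
```

The closed form answers “what is $F(1000)$?” in one evaluation rather than 999 additions, though in floating point it eventually loses precision for very large $n$, which is a practical caveat to the “plug 1000 into the formula” picture.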

Most high school students in the United States learn about matrices and matrix multiplication, but they often are not taught *why* matrix multiplication works the way it does. Adding matrices is easy: you just add the corresponding entries. However, matrix multiplication does not work this way, and for someone who doesn’t understand the theory behind matrices, this way of multiplying matrices may seem extremely contrived and strange. To truly understand matrices, we view them as representations of part of a bigger picture. Matrices represent *functions* between spaces, called vector spaces, and not just any functions either, but **linear** functions. This is in fact why **linear algebra** focuses on matrices. The two fundamental facts about matrices are that *every matrix represents some linear function*, and *every linear function is represented by a matrix*. Therefore, there is in fact a one-to-one correspondence between matrices and linear functions. We’ll show that multiplying matrices corresponds to composing the functions that they represent. Along the way, we’ll examine what matrices are good for and why linear algebra sprang up in the first place.
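The key fact can be checked directly in a few lines of code (a sketch of my own, using plain lists rather than any library): applying the product matrix $AB$ to a vector gives the same answer as applying $B$ first and then $A$, i.e. matrix multiplication is function composition.

```python
def apply(matrix, v):
    """Apply the linear function represented by `matrix` to the vector `v`."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]

def matmul(A, B):
    """Row-by-column product: entry (i, j) is the dot product of
    row i of A with column j of B -- the 'contrived' rule in question."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
v = [5, 7]

composed = apply(A, apply(B, v))   # apply B, then A (function composition)
product = apply(matmul(A, B), v)   # apply the single matrix A*B
print(composed, product)           # prints [31, 69] [31, 69]
```

The seemingly strange row-by-column rule is exactly what makes these two computations agree for every vector, which is the correspondence the post goes on to establish.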