
The principle of mathematical induction is a simple but powerful technique for proving mathematical results. In its most basic form, it says the following:

If something’s true for an integer n_0, and if its being true for an integer k \ge n_0 implies it is true for k+1, then it is true for all integers n \ge n_0.

This is a more-or-less “obvious” statement. It has two components: the base case and the inductive step. The base case establishes truth for one number. The inductive step does not actually establish truth of the statement for any particular value. Instead, it establishes the truth of a logical implication. In formal terms, the statement above reads:

Suppose P(n) is a proposition about the integer n. If P(n_0) is true, and P(k) \Rightarrow P(k+1) for every integer k \ge n_0, then P(n) is true for all integers n \ge n_0.

Think of induction as a string of dominos. Establishing the base case is like knocking down the first domino. Establishing the inductive step is like setting up the dominos so that if the k-th domino falls, it knocks over the (k+1)-th domino. Viewed this way, it should be obvious that once these two things are established, all the dominos will fall. This is one of the concepts that is best learned through examples.
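
To see the machinery in action, here is the classic first example (a standard illustration, not tied to any particular post): let P(n) be the statement that 1 + 2 + \cdots + n = n(n+1)/2. Base case: P(1) holds, since 1 = 1(2)/2. Inductive step: suppose P(k) holds for some k \ge 1. Then 1 + 2 + \cdots + k + (k+1) = k(k+1)/2 + (k+1) = (k+1)(k+2)/2, which is exactly P(k+1). Both dominos are in place, so P(n) holds for all integers n \ge 1.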


Consider the following game: I pick a real number between 0 and 1 uniformly at random, and ask you to guess it exactly. What’s the probability that you guess it correctly? The answer is zero. You might wonder: “But it’s possible for me to guess the correct answer! That means that the probability has to be more than zero!” and you would be justified in wondering, but you’d be wrong. It’s true that events that are impossible have zero probability, but the converse is not true in general. In the rest of this post, we show why the answer above was in fact zero, and why this doesn’t need to do irreparable damage to your current worldview.
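
(For a taste of the argument, assuming the number is drawn uniformly from the interval [0, 1]: for any fixed guess x and any \epsilon > 0, the probability that I picked exactly x is at most the probability that I picked something within \epsilon of x, which is at most 2\epsilon. Since 2\epsilon can be made as small as we like, the probability of guessing exactly right cannot be any positive number, so it must be 0.)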


Hi all! If you’re really interested in math and science and feel like digging a little deeper, there are various blogs out there maintained by college students, grad students, and professors on technical subjects. Most recently, I joined a group blog called Concrete Nonsense, which is maintained by grad students and focuses primarily on math. I hope to contribute posts connecting mathematics and computer science by discussing topics in algorithms, complexity theory, artificial intelligence, and quantum computing. The posts there are considerably more technical and precise. Many topics that I find really interesting but feel are too advanced for this blog will end up over there, until I figure out how to explain them without too much technical jargon.

Digital information is transferred as electric currents through wires that may contain impurities that alter the currents, or as electromagnetic waves through space where ambient radiation can alter the waves, yet digital information depends entirely on precision. Unlike analog information, digital information is ruined by even a little noise. For example, say I shouted “Hello!” to you from far away on a windy day. The sound waves in the air constitute analog information, and the wind along the way can alter the waves and therefore perturb the information. You might hear something distorted, but it would probably still resemble the original “Hello!” message.

Now suppose I sent the text message “Hello!” to you via instant messaging. In decimal, the ASCII string “Hello!” (excluding the quotation marks) is 072 101 108 108 111 033, which is actually sent in bytes (blocks of 8 bits) in binary as 01001000 01100101 01101100 01101100 01101111 00100001. Now, imagine if some noise (perhaps radiation) altered even one of the bits. Suppose in the byte containing the letter “o”, the bits changed from 01101111 to 00101111 (the first 1 was changed to a 0). Then the message received would be “Hell/!”, a completely different message! And that’s just from one altered bit!

If billions of messages are transferred every day, shouldn’t there be an overwhelming number of garbled messages all the time? And that’s just the tip of the iceberg. The messages transmitted could be in machine language telling some server to do a specific thing, and an altered bit could change its actions entirely. How come our computer-controlled devices work properly most of the time? Is it that the radiation in the air and the imperfections in the wires we use to transmit digital information simply leave our messages alone? Not at all. Instead, we use error correcting codes to ensure that our messages are received as intended.
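
To make this concrete, here is a small sketch in Python (an illustration of the ideas, not the scheme any real messaging system uses). It reproduces the “Hell/!” corruption above, then shows one of the simplest error correcting codes, the 3-fold repetition code, recovering from the same kind of single-bit error:

```python
def to_bits(message):
    """Encode an ASCII string as a list of bits (8 per character)."""
    return [int(b) for ch in message for b in format(ord(ch), "08b")]

def from_bits(bits):
    """Decode a list of bits (8 per character) back into a string."""
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int("".join(map(str, c)), 2)) for c in chunks)

bits = to_bits("Hello!")
bits[33] ^= 1           # flip one bit in the byte for "o": 01101111 -> 00101111
print(from_bits(bits))  # Hell/!

# A 3-fold repetition code: send every bit three times and take a
# majority vote on the receiving end, so any single flip per triple is fixed.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    triples = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

coded = encode(to_bits("Hello!"))
coded[99] ^= 1                   # the same kind of single-bit error
print(from_bits(decode(coded)))  # Hello! -- the error is corrected
```

The repetition code fixes any single flipped bit per triple at the cost of tripling the message length; real codes such as Hamming and Reed–Solomon codes achieve far better trade-offs.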


In mathematics and science, it’s often too difficult to get precise formulas for things, so researchers settle for estimating growth. For example, in a previous post on the Fibonacci sequence, we found that the Fibonacci numbers grow roughly exponentially; that is, they are close to an exponential function, up to a tiny error term. Often we don’t actually care about the exact formula; we just want an idea of how quickly something grows. In order to capture this vague notion precisely, we use asymptotic notation, which records the long-run growth of the original function while ignoring details like constant factors.
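
The workhorse here is big-O notation. The standard definition: f(n) = O(g(n)) means there are constants C > 0 and N such that f(n) \le C g(n) for all n \ge N. As a quick example (a standard one, not from the Fibonacci post itself): since each Fibonacci number satisfies F_n = F_{n-1} + F_{n-2} \le 2F_{n-1}, an easy induction gives F_n = O(2^n), an exponential upper bound obtained without ever writing down an exact formula.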


Algorithms lie at the heart of computing. When you surf the web, you open up a web browser. When you view a website, your web browser takes the HTML file, which contains code, and uses an algorithm to convert that code into a page that actually makes sense to humans. When you listen to music on iTunes, the music player takes the MP3 file, which encodes the sound as 0’s and 1’s, and uses an algorithm to read the 0’s and 1’s and tell your computer speakers what sound to output. When you’re looking for a particular song in your music library, you might want to search alphabetically, so you click on the Name column to sort your songs alphabetically. Your computer doesn’t magically know how to sort your library. It follows a precise set of instructions every time it’s told to sort something. That set of instructions is an algorithm!
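
For a taste of what such a set of instructions looks like, here is one simple sorting algorithm, insertion sort, written in Python and applied to a tiny made-up song library (your music player almost certainly uses a faster algorithm; this is just a sketch):

```python
def insertion_sort(songs):
    """Sort a list of song names alphabetically, in place.

    Repeatedly take the next unsorted song and slide it left
    until it sits after everything that comes before it.
    """
    for i in range(1, len(songs)):
        current = songs[i]
        j = i - 1
        while j >= 0 and songs[j] > current:
            songs[j + 1] = songs[j]  # shift larger names one slot right
            j -= 1
        songs[j + 1] = current
    return songs

library = ["Yesterday", "Bohemian Rhapsody", "Hey Jude", "All Star"]
print(insertion_sort(library))
# ['All Star', 'Bohemian Rhapsody', 'Hey Jude', 'Yesterday']
```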


Sometimes sequences of numbers are defined recursively, so that given the previous terms of the sequence we can find the next term. A classic example of this is the sequence of Fibonacci numbers, where every two consecutive terms determine the next term, and so if we are given the first two terms, we can calculate the whole sequence. However, if we wanted to, say, calculate the 1000th Fibonacci number, we’d have to start out with the first two, add them to compute the 3rd, add the second and third to compute the 4th, and so on, slowly building up every single Fibonacci number before the 1000th in order to get there. It’d be much nicer if we had a formula for the Fibonacci sequence. That way, if we wanted the 1000th Fibonacci number, all we’d have to do is plug 1000 into the formula to compute it. Of course, the formula is only useful if it takes less time to crunch out than the brute-force way does. One method of finding closed formulas for sequences defined by recurrences is the use of generating functions. Generating functions are functions which “encapsulate” all the information about a sequence, yet can be defined without knowing the actual terms of the sequence. The power of generating functions comes from the fact that you can do things like add and multiply them together to create generating functions of other sequences, or write them in terms of themselves to find an explicit form. Once you have an explicit form for a generating function, you can use some algebra to “extract” the information from the function, which usually means you can find a formula for the sequence in question.
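
As a preview of where this leads in the Fibonacci case (using the convention F_1 = F_2 = 1): if F(x) = F_1 x + F_2 x^2 + F_3 x^3 + \cdots, then the recurrence F_n = F_{n-1} + F_{n-2} lets us write F(x) = x + x F(x) + x^2 F(x), so F(x) = x / (1 - x - x^2). Expanding this by partial fractions yields the closed formula F_n = (\phi^n - \psi^n)/\sqrt{5}, where \phi = (1 + \sqrt{5})/2 and \psi = (1 - \sqrt{5})/2.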


Most high school students in the United States learn about matrices and matrix multiplication, but they often are not taught why matrix multiplication works the way it does. Adding matrices is easy: you just add the corresponding entries. However, matrix multiplication does not work this way, and for someone who doesn’t understand the theory behind matrices, this way of multiplying matrices may seem extremely contrived and strange. To truly understand matrices, we view them as representations of part of a bigger picture. Matrices represent functions between spaces, called vector spaces, and not just any functions either, but linear functions. This is in fact why linear algebra focuses on matrices. The two fundamental facts about matrices are that every matrix represents some linear function, and every linear function between finite-dimensional vector spaces (with chosen bases) is represented by a matrix. There is therefore a one-to-one correspondence between matrices and linear functions. We’ll show that multiplying matrices corresponds to composing the functions that they represent. Along the way, we’ll examine what matrices are good for and why linear algebra sprang up in the first place.
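
Here is a tiny numerical sketch of that punchline (a made-up 2-by-2 example in Python): applying two matrices one after the other gives the same result as applying their product once.

```python
def apply(m, v):
    """Apply the linear function represented by 2x2 matrix m to vector v."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def multiply(a, b):
    """Multiply 2x2 matrices a and b with the usual row-times-column rule."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, -1], [1, 0]]   # rotation by 90 degrees
B = [[2, 0], [0, 3]]    # scaling x by 2 and y by 3
v = (1, 1)

print(apply(A, apply(B, v)))     # first scale, then rotate: (-3, 2)
print(apply(multiply(A, B), v))  # one application of the product AB: (-3, 2)
```

The row-times-column rule is exactly what makes the second line agree with the first for every input vector.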


My name is Alan and I will be a contributing author to this blog. My background is in mathematics and computer science. I graduated with a Bachelor of Science in Mathematics and a minor in Computer Science from Duke University in 2011, and beginning in the fall of 2011 I will be a graduate student in the EECS department at MIT. My primary areas of interest in mathematics are combinatorics and commutative algebra, and my primary areas of interest in computer science are algorithms, complexity theory, and artificial intelligence.

My academic homepage can be found here.

This blog serves a dual purpose. First, it is an attempt to write about technical topics in a super-accessible way so that any interested, educated layman will be able to gain a deeper understanding and appreciation of the subjects. Often, readers pick up a technical book, say, a book on algebraic topology, only to put it back down after quickly drowning in waves of symbols and equations. Our goal is to provide a bridge. We want to provide friendly, gentle introductions to various topics we find interesting, so that anyone, regardless of background, may enjoy them and find them equally interesting. To be honest, we hate getting bogged down in equations just as much as anyone else; it’s the underlying idea that pulls us in. For those of you who don’t have the time or patience to dig through the symbols and equations, or to take tons of classes in mathematics to get to the underlying idea, this blog is for you.

The second purpose of this blog is to help the writers. Clear writing goes hand-in-hand with clear understanding. By writing about technical topics in such a way that a layman can understand them, we help ourselves understand the topics better. Often, one gains a better grasp of something by internalizing it as intuition. Once this intuition exists, one no longer needs to waste time re-deriving the underlying facts and can move on to pondering more complex problems.

In terms of content, we currently have writers with backgrounds in mathematics and computer science, so naturally the posts starting out will be on topics in these fields. However, our hope is that we can pull in more writers who have backgrounds in other technical fields as well, such as physics, economics, statistics, etc. The goal of this blog is to raise interest in, and appreciation of, the underlying ideas in technical fields which might otherwise go unnoticed. The reader we have in mind is a curious, high school- or college-educated individual who wishes to learn more about these technical fields but does not have the time to peruse tomes and textbooks in order to do so. Of course, we will not be a substitute for formal education; our goal is only to provide that bridge so that the reader may cross it if he or she so chooses.