“To know recursion, you must first know recursion.”

(Original author unknown)

Much of the fun of writing code is learning about all of the various algorithms and techniques that help get the job done. Some of these are relatively straightforward: if (this), then (do that); while (this is true), keep (doing that); and so on.

It’s what makes the magic work.

One of the more interesting techniques is recursion. Analogous to mathematical induction, recursion involves using the function being written in part of its own definition. This sounds not only unintuitive but unsound — but it turns out to be mathematically rigorous.

Computing factorials is a straightforward task that can easily be done either recursively or non-recursively. X factorial, written as X!, is (for positive integers) the product of all integers from 1 up to X. So, 5! == 5 * 4 * 3 * 2, or 120. (Multiplying by 1 changes nothing, so we can start at 2. Factorials get large very quickly.)

Writing a FOR loop is one way of computing the factorial (and probably the best way, since recursion doesn’t buy you much here). To implement this, you would simply multiply the integers between 2 and X, inclusive:

```
int factorial(int num){
    int n, result = 1;
    if (num < 2) { return 1; }                  /* 0! and 1! are both 1 */
    for (n = 2; n <= num; n++) { result *= n; } /* multiply 2..num */
    return result;
}
```

This works well and doesn’t have the overhead associated with recursive calls, so it’s efficient, as well. A recursive approach, while less efficient, is instructive — especially since it’s so concise:

```
int factRecurs(int num){
    if (num < 2) { return 1; }            /* base case: 0! == 1! == 1 */
    return num * factRecurs(num - 1);     /* N! == N * (N-1)! */
}
```

You could even make it a one-liner, using the ternary operator:

`int factR(int num){return((num<2) ? 1 : num*factR(num-1));}`

That’s it. That’s the algorithm. We tell it how to handle the factorial of 1 (or less), then how to handle N!, given that we know how to compute (N-1)!. It feels like cheating, but it’s sound.

If factRecurs(3) were called, for instance, it would call factRecurs(2), which would call factRecurs(1), which would return a value of 1. factRecurs(2) would then multiply this by 2 and return it to factRecurs(3), which would multiply it by 3 and return the correct answer: 6.

The “Towers of Hanoi” puzzle is probably the best-known “poster child” problem for recursion. There is a nonrecursive formula, but it’s unintuitive.

“Towers of Hanoi” is a puzzle consisting of a stack of discs of varying diameter, and three piles in which they can be placed. The puzzle starts out with the discs stacked in one pile, in order from largest (on the bottom) to smallest; the goal is to move them all to another pile, by moving only one disc at a time and without ever stacking a larger disc on a smaller one.

The optimal solution, produced by the recursive algorithm, takes 2^N-1 moves for a pile of N discs. (One move with one disc; three moves with two; seven moves with three, etc.) Surprisingly, a pile of any size can be moved according to the rules, given enough time.

The recursive algorithm may not be obvious at first, but once it is understood, it almost feels like cheating — yet it works. The base case is, of course, moving one disc. If our function is called to move one disc from one pile to another, it simply prints out that move. (We could include a move legality check, but as it turns out, we won’t need to.)

The question arises of what to do when presented with more than one disc to move. The recursive Hanoi algorithm’s answer is that *moving N-1 discs is just a smaller instance of the same problem we’re trying to solve*, so we will simply assume we already know how to solve it!

The recursive case of moving N discs from A to B goes like this:

* Move N-1 discs from A to C (using B as the spare pile);
* Move one disc from A to B;
* Move N-1 discs from C to B (using A as the spare pile).

It seems too simple to work — somewhat like Douglas Adams’ story of how the Infinite Improbability Drive was created by someone simply figuring out how improbable it would be for one to appear, then giving a Finite Improbability Drive a cup of really hot tea. It shouldn’t work — but it does. (And it’s mathematically correct.)

And it turns out the Towers of Hanoi solution is even related to Sierpinski triangles.