Recursion, how does it work?

Today I am writing about recursion, a very powerful tool programmers use to solve problems involving repetition. Sometimes a problem is too big or too complex to solve all at once. If the problem can be broken down into smaller versions of itself, we may be able to solve one of those smaller versions and then build up to a solution for the entire problem. This is the idea behind recursion: a recursive algorithm breaks a problem into smaller pieces that we either already know the answer to, or can solve by applying the same algorithm to each piece, and then combines the results. But how would we define it?

What is Recursion?

One of the definitions I like is from the Oxford Dictionary: "the repeated application of a recursive procedure or definition." It has to be one of the most illustrative definitions, since the definition uses recursion itself by calling it a "recursive procedure". Brilliant. Recursion is the process in which a function calls itself, directly or indirectly, and the corresponding function is called a recursive function.
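
As a minimal sketch of direct recursion (the factorial function here is just an illustration, not the example we will work through later):

#include <stdio.h>

/* Direct recursion: the function calls itself with a smaller value. */
unsigned long factorial(unsigned int n)
{
    if (n == 0) /* 0! is 1, so we can stop here */
        return (1);
    return (n * factorial(n - 1)); /* the function calls itself directly */
}

int main(void)
{
    printf("%lu\n", factorial(5)); /* prints 120 */
    return (0);
}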

Recursion has two main parts: the base case, or exit condition, and the iteration. Both are needed to avoid a circular function, or infinite loop (after all, recursion is a kind of loop).

The Base Case

The base case, or exit condition, of a function is the problem we already know the answer to, the one that can be solved without any more recursive calls. The base case is what stops the recursion from continuing on forever, or until we reach a stack overflow (the computer tells us it won't give us any more memory to play with).
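
For instance, a tiny sketch with the base case marked (the countdown function is hypothetical, just to show the idea):

#include <stdio.h>

void countdown(int n)
{
    if (n == 0) /* BASE CASE: we know the answer here, no more calls needed */
    {
        printf("Liftoff!\n");
        return;
    }
    printf("%d\n", n);
    countdown(n - 1); /* every call gets one step closer to the base case */
}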

It is one of the two most important parts of recursion. The other is the iteration: the condition that changes on each call.

Iteration

As in any loop, if we never change the initial value we will keep testing the same case and end up in another infinite loop. The same happens with recursion.

Let's see both in an example to make it a little clearer:

float _pow_recursion(float x, float y)
{
    if (y == 0) // BASE CASE
        return (1);
    if (y < 0)
        return (_pow_recursion(x, y + 1) / x); // ITERATION

    return (_pow_recursion(x, y - 1) * x); // ITERATION
}

In this example we can see that the base case is when y is equal to zero, so we end the function, return 1 and start clearing the stack. Then we have two iterations: when y is lower than 0 we add one to y on each call, and when y is higher than 0 we subtract one, until it reaches the base case.

In order to deeply understand recursion we need to understand two important concepts in programming, the stack and the heap.

The Stack and the Heap

The stack is used for static memory allocation and the heap for dynamic memory allocation; both are stored in the computer's RAM.

Variables allocated on the stack are stored directly in memory, and access to this memory is very fast; their allocation is dealt with when the program is compiled. When a function calls another function, which in turn calls another function, and so on, the execution of all those functions remains suspended until the very last function returns its value. This is what happens in recursion: until the base case is met, every call is kept on the stack. The stack has a LIFO order, which means Last In First Out: the first thing we take out is the last thing we put in, just like in a pile of books. Adding something to the stack is called a "Push" and removing something from the stack is called a "Pop".
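
To picture that LIFO order, here is a minimal sketch of an array-based stack (the names push, pop and the fixed size are mine, just for illustration):

#define STACK_SIZE 100

int stack[STACK_SIZE]; /* the pile of books */
int top = -1;          /* -1 means the stack is empty */

void push(int value) /* put a new value on top of the pile */
{
    if (top < STACK_SIZE - 1)
        stack[++top] = value;
}

int pop(void) /* take back the last value we pushed */
{
    if (top >= 0)
        return (stack[top--]);
    return (-1); /* the stack is empty, nothing to pop */
}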

As we saw before, if we don't iterate, or don't have a base or exit condition, we can fill the stack and cause a stack overflow, where we run out of space to keep pushing things onto the stack.
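
For example, a recursive function with no base case and no iteration, shown only to illustrate the problem (don't actually run it):

void forever(int n)
{
    /* no base case and n never changes, so each call normally pushes a new
     * frame onto the stack until no memory is left: a stack overflow */
    forever(n);
}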

Variables allocated on the heap have their memory allocated at run time, and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. Elements of the heap have no dependencies on each other and can be accessed randomly at any time. You can allocate a block at any time and free it at any time.
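
A minimal sketch of heap allocation in C, using the standard malloc and free:

#include <stdlib.h>

int main(void)
{
    int *numbers = malloc(1000 * sizeof(int)); /* allocated on the heap at run time */

    if (numbers == NULL) /* malloc can fail, so always check */
        return (1);
    numbers[0] = 42;     /* use the block whenever we want... */
    free(numbers);       /* ...and free it whenever we want */
    return (0);
}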

Let's take another look at the example and illustrate how the recursive function works on our stack.

Ok, this is our function:


float _pow_recursion(float x, float y)
{
    if (y == 0)
        return (1);
    if (y < 0)
        return (_pow_recursion(x, y + 1) / x);

    return (_pow_recursion(x, y - 1) * x);
}

In this example we want to find the power 2^4, which we all know is 16, so we enter x = 2 and y = 4.

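In text form, the calls pile up on the stack and then unwind like this (the values come straight from the function above):

_pow_recursion(2, 4) = _pow_recursion(2, 3) * 2
    _pow_recursion(2, 3) = _pow_recursion(2, 2) * 2
        _pow_recursion(2, 2) = _pow_recursion(2, 1) * 2
            _pow_recursion(2, 1) = _pow_recursion(2, 0) * 2
                _pow_recursion(2, 0) = 1   <- BASE CASE (y == 0)
            _pow_recursion(2, 1) = 1 * 2 = 2
        _pow_recursion(2, 2) = 2 * 2 = 4
    _pow_recursion(2, 3) = 4 * 2 = 8
_pow_recursion(2, 4) = 8 * 2 = 16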

Here we see the function calling itself and changing the value of "y" until it meets the exit condition; then that value is returned up through each function call, multiplied by "x" at every step, until the last return is done and we get the final value, in this case 16.

What are the disadvantages of recursive programming over iterative programming?

Note that both recursive and iterative programs have the same problem-solving power: every recursive program can be written iteratively, and vice versa. A recursive program has greater space requirements than an iterative one, since every call remains on the stack until the base case is reached. It also has greater time requirements because of the overhead of function calls and returns.
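
As a rough sketch, here is the same power function written iteratively (the name _pow_iterative is mine, just for comparison):

float _pow_iterative(float x, float y)
{
    float result = 1;

    while (y > 0) /* positive exponent: multiply y times */
    {
        result *= x;
        y--;
    }
    while (y < 0) /* negative exponent: divide y times */
    {
        result /= x;
        y++;
    }
    return (result); /* a few variables, no extra stack frames */
}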

What are the advantages of recursive programming over iterative programming?

Recursion provides a clean and simple way to write code. Some problems are inherently recursive, like tree traversals, the Tower of Hanoi, factorials, etc. For such problems it is often preferable to write recursive code. We can also write such code iteratively with the help of a stack data structure.
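
For example, a minimal sketch of an in-order binary tree traversal, a problem that is naturally recursive (the struct and function names are just for illustration):

#include <stdio.h>

struct node
{
    int value;
    struct node *left;
    struct node *right;
};

void in_order(struct node *tree)
{
    if (tree == NULL) /* base case: an empty subtree, nothing to visit */
        return;
    in_order(tree->left);        /* visit the left subtree first */
    printf("%d\n", tree->value); /* then the node itself */
    in_order(tree->right);       /* then the right subtree */
}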

Thank you for reading, I hope you enjoyed it, and see you in my next article.
