Mastering Dynamic Programming
Huzaifa Asif
Engineering Lead | Solution Architect | Cloud Engineer | FinTech | SaaS | PaaS | AWS | Azure | GCP
Introduction
Dynamic Programming is a powerful technique that can revolutionize your algorithms by making them more efficient and blazingly fast. Whether you’re a seasoned coder or just starting your programming adventure, dynamic programming is a game-changer you’ll want to add to your toolbox.
Understanding the Essence of Dynamic Programming
Dynamic programming might sound intimidating, but it’s actually a straightforward concept with profound implications. In essence, dynamic programming is all about optimizing computations by storing intermediate results. Instead of re-calculating the same values repeatedly, dynamic programming allows us to save time and resources by relying on previously computed solutions. This technique is particularly valuable when an algorithm involves redundant calculations.
One of the classic examples to illustrate dynamic programming is the Fibonacci sequence. This sequence begins with two ones, and each subsequent number is the sum of the two preceding ones. For example, the third Fibonacci number is two (1 + 1), the fourth is three (1 + 2), and so on, ad infinitum.
Recursive Technique
Now, let’s dive into how we can utilize dynamic programming to efficiently solve the Fibonacci sequence problem. Our objective is to find the nth Fibonacci number through a function called fib(n). For instance, fib(3) should return 2, and fib(5) should return 5.
Initially, we might implement a naive recursive solution to compute the Fibonacci numbers. While this approach works fine for small values of n, it becomes highly inefficient as n grows: the recursion repeats the same calculations over and over, giving it an exponential time complexity of roughly O(2^n).
def fib_naive(n):
    #                [n]
    #               /   \
    #          [n-1]     [n-2]
    #          /   \     /   \
    #      [n-2] [n-3] [n-3] [n-4]
    #       ...   ...   ...   ...
    if n == 1 or n == 2:
        return 1  # base case
    return fib_naive(n - 1) + fib_naive(n - 2)  # recursive case

# This ASCII art shows that each function call branches into two new calls (n-1 and n-2),
# each of those calls branches out again, and so on, forming a binary tree structure.
# The '...' indicates that this branching continues until reaching the base case.
Here's a step-by-step explanation of how the code works: fib_naive(n) first checks the base case and returns 1 when n is 1 or 2. For any larger n, it calls itself twice, once with n - 1 and once with n - 2, and returns the sum of those two results. Because every call spawns two more calls, the same subproblems (such as fib(n - 3)) get recomputed many times over.
While the naive recursive approach is easy to implement, it's not suitable for large values of n as it leads to an exponential increase in the number of function calls and redundant calculations. For better performance, it's recommended to use the memoization or bottom-up approach to efficiently calculate Fibonacci numbers.
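To see this exponential growth concretely, here is a small, purely illustrative sketch. The fib_naive_counted helper and its counter dictionary are additions for this demo, not part of the original function; they simply count how many recursive calls the naive approach makes:

# Illustrative only: a copy of the naive recursion that also counts its own calls.
def fib_naive_counted(n, counter):
    counter["calls"] += 1  # record one more function call
    if n == 1 or n == 2:
        return 1
    return fib_naive_counted(n - 1, counter) + fib_naive_counted(n - 2, counter)

for n in (10, 20, 30):
    counter = {"calls": 0}
    fib_naive_counted(n, counter)
    print(f"n = {n}: {counter['calls']} calls")

# The printed call counts grow exponentially as n increases, which is exactly
# the redundancy that memoization eliminates.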
Memoization Technique
To overcome the inefficiency of the naive recursive approach, we introduce the concept of memoization. Memoization is a process of storing already computed values to avoid redundant calculations in the future. In our Fibonacci case, we use a cache (e.g., memo) to save the values of fib(n) for various n. This way, we can quickly look up the result when needed and eliminate unnecessary computation.
The beauty of memoization lies in its simplicity and significant performance improvement. By storing and reusing intermediate results, we reduce the time complexity to a linear O(n) for finding the nth Fibonacci number.
def fib_memo(n, memo={}):
    # Check if the value for this function call is already cached/memoized:
    #
    #   {1: 1, 2: 1, 3: 2, ...}
    #        (memoization)
    #             |
    #            [n] --------> return memo[n]
    #           /   \
    #      [n-1]     [n-2]
    #      /   \     /   \
    #    [?]   [?] [?]   [?]  <-- if not computed yet (not in memo)
    #
    # Each function call checks the 'memo' dictionary to see if the result for that
    # specific 'n' is already there (which would mean we have already calculated it
    # during a previous function call) to avoid unnecessary calculations.
    if n in memo:
        return memo[n]  # return cached result
    if n == 1 or n == 2:
        memo[n] = 1  # base case
    else:
        # recursive case with memoization:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
Here's a step-by-step explanation of how the code works: fib_memo(n, memo) first looks up n in the memo dictionary and returns the cached value if it is already there. Otherwise it computes the answer, either the base case of 1 when n is 1 or 2, or the sum of the two recursive calls, stores it in memo[n], and returns it. Each distinct n is therefore computed only once.
By using the recursive approach with memoization, this function can efficiently compute Fibonacci numbers for reasonably large values of n without redundant calculations, making it a more optimized solution compared to the naive recursive approach. The memoization technique allows the function to avoid recalculating Fibonacci numbers for the same n, thus significantly improving the performance.
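One Python-specific detail is worth calling out: because memo={} is a mutable default argument, the same dictionary is reused across separate calls to fib_memo, so results cached by one call are still available to the next. That is harmless (even helpful) here, but if you ever want an isolated cache you can pass a fresh dictionary explicitly. A minimal sketch:

print(fib_memo(10))      # 55  -- fills the shared default cache up to n = 10
print(fib_memo(12))      # 144 -- reuses the entries cached by the previous call
print(fib_memo(12, {}))  # 144 -- same answer, computed with its own empty cache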
Bottom-Up Technique
If recursion isn’t your cup of tea, dynamic programming still has you covered with the bottom-up approach. This approach eschews recursion altogether and explicitly builds the memoization array iteratively, from the bottom up. Starting from the base cases (fib(1) and fib(2)), we progressively calculate and store the Fibonacci numbers until we reach the desired nth number.
Embracing the bottom-up approach not only simplifies the code but also avoids potential recursion depth limitations. It ensures a faster and more reliable solution, particularly for larger values of n.
def fib_bottom_up(n):
    # Bottom-Up Approach to the Fibonacci sequence:
    #
    #   memo  = [0, 1, 1, 0, 0, ..., 0]
    #   index    0  1  2  3  4       n
    #
    # Starting from the base cases at memo[1] and memo[2], iterate upwards:
    #
    #   for i in range(3, n + 1):
    #
    #          (i-2) (i-1) (i)
    #            |     |    |
    #   [0, 1,   1,  (2),   0, 0, ..., 0]  -> i=3, fill memo[3] as memo[1] + memo[2]
    #   [0, 1,   1,   2,  (3),    ..., 0]  -> i=4, fill memo[4] as memo[2] + memo[3]
    #   .................................  continue updating memo[i] until i=n
    #   [0, 1,   1,   2,   3,  ..., (fn)]  -> i=n, memo[n] is the answer
    #
    # Each iteration fills in the next Fibonacci number until the nth is reached.
    if n == 1 or n == 2:
        return 1
    memo = [0] * (n + 1)
    memo[1] = memo[2] = 1  # base cases
    for i in range(3, n + 1):
        memo[i] = memo[i - 1] + memo[i - 2]  # fill in the ith Fibonacci number
    return memo[n]  # the nth Fibonacci number
Here's a step-by-step explanation of how the code works: for n of 1 or 2 the function returns 1 immediately. Otherwise it allocates a memo list of size n + 1, sets the base cases memo[1] and memo[2] to 1, and then loops from 3 up to n, filling each memo[i] with the sum of the two previous entries. When the loop finishes, memo[n] holds the nth Fibonacci number.
By using the bottom-up approach and memoization (storing calculated results to avoid redundant calculations), this function can efficiently compute Fibonacci numbers for reasonably large values of n without repetitive calculations, making it a more optimized solution compared to the naive recursive approach.
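Because the bottom-up version never recurses, it is also not constrained by Python's recursion limit (1000 frames by default in CPython), which the memoized recursive version can run into for very large n. A quick sanity-check sketch:

print(fib_bottom_up(5))    # 5
print(fib_bottom_up(10))   # 55

# fib(1000) is a huge integer, so we just report how many digits it has;
# the iterative loop handles it without any recursion-depth concerns.
print(len(str(fib_bottom_up(1000))))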
Putting Dynamic Programming to the Test with Python and Jupyter Notebook
To demonstrate the remarkable difference in performance, let's compare the three Fibonacci functions (naive recursive, memoized, and bottom-up) using Python and a Jupyter Notebook. The code is given above, so you can try this out yourself.
Running Fibonacci with n = 5:
Running Fibonacci with n = 35:
Running Fibonacci with n = 1000:
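If you want to reproduce these runs yourself, here is a minimal timing sketch using time.perf_counter. The time_it helper is just one simple way to do it; for n = 1000 only the bottom-up version is timed, since the naive version would take astronomically long and the memoized recursive version may exceed Python's default recursion limit.

import time

def time_it(label, func, n):
    """Run func(n) once and report how long it took."""
    start = time.perf_counter()
    func(n)
    elapsed = time.perf_counter() - start
    print(f"{label}(n = {n}) took {elapsed:.6f} seconds")

for n in (5, 35):
    time_it("fib_naive", fib_naive, n)
    time_it("fib_memo", fib_memo, n)
    time_it("fib_bottom_up", fib_bottom_up, n)

# Only the bottom-up version is practical at n = 1000 (see the note above).
time_it("fib_bottom_up", fib_bottom_up, 1000)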
Dynamic Programming — A Paradigm Shift in Algorithm Design
As you can see, dynamic programming brings a whole new level of efficiency to algorithm design. By incorporating memoization or the bottom-up approach, we can transform inefficient algorithms into highly optimized solutions. Moreover, dynamic programming fosters a deeper understanding of algorithmic complexities, making it a crucial skill for every programmer.
The applications of dynamic programming extend far beyond the Fibonacci sequence, enabling you to tackle a wide range of complex computational problems with elegance and speed.
Conclusion
Dynamic programming is a true game-changer in the world of algorithms. By storing intermediate results using memoization or opting for the bottom-up approach, you can turn inefficient algorithms into blazingly fast solutions. Whether you’re a novice coder or an experienced developer, embracing dynamic programming will undoubtedly level up your programming prowess.