A Not-so-scary Introduction to DP Mechanisms in Kotlin: Maximum Subarray

Understanding the Fibonacci mechanism in 1D dynamic programming

Amanda Hinchman
6 min readDec 15, 2022
A picture of an abacus, also called a counting frame, which is a calculating tool with beads.

Dynamic programming (DP) can be an intimidating topic to learn. Over a few job changes, I’ve arranged my notes on the topic into a digestible guide for quicker ramp-up. Less mathy, more pseudo-sciency.

At a high-level, dynamic programming follows a similar strategy across the board:

  1. Break down a complex problem into simpler sub-problems
  2. Find an optimal solution to each sub-problem
  3. Store the results of the sub-problems (memoization)
  4. Reuse them so that the same sub-problem is not recalculated
  5. Finally, calculate the result of the complex problem

Understanding the mechanics of 1D Dynamic Programming

Today, we’ll be navigating this mechanism for 1D DP by first examining one LeetCode problem to death, 53. Maximum Subarray. Then we’ll compare simplified 1D DP forms that are easy to remember and apply later on in harder problems.

No need to navigate to a site that incites fight-or-flight mode — the problem is stated below:

Problem Description
Given an integer array nums, find the subarray
which has the largest sum and return its sum.

Example:
Input: nums = [5, 4, -10, 7, 8]
Output: 15
Explanation: [7, 8] has the largest sum = 15.

The key to recognizing DP problems is spotting patterns/keywords in the problem statement, such as overlapping subproblems, contiguous subarrays, and maximum/minimum result values.

The following words in the problem statement hint that a 1D dynamic programming algorithm applies:

  • subarray — contiguous ranges allow for comparative iteration, which, in itself, defines the “current” subarray aggregate value. This is how DP reduces the need for repeated calculations: it accesses stored data via some structure you’ve built yourself, or via the call stack itself.
  • largest sum — the conditional of each comparison.
  • return its sum — since we need to return the sum of the aggregated subarray, we always need to keep track of the calculated number.

The actual problem includes additional sample sets that are useful for testing edge cases; for clarity and brevity, we’ll work with a more agreeable set of numbers to better illustrate the mechanism of the algorithm we come up with together.

Why is DP cool?

Under the right conditions, dynamic programming lets us cut down an exponential amount of computation while only having to solve the first two values of a set, f(0) and f(1), directly.

When we solve for f(0) and f(1), we can make additional computations with what was already calculated to reduce repeated computations.

A binary tree of functions representing the number of calculations made while traversing through a set of inputs. Level-by-level, the traversal reads:
Level 1: f(4)
Level 2: f(3), f(2)
Level 3: f(2), f(1), f(1), f(0)
Level 4: f(1), f(0)
With DP, we can “cross out” the already calculated functions so the tree reads like so:
Level 1: f(4)
Level 2: f(3)
Level 3: f(2), f(1)
Level 4: f(1), f(0)
The Fibonacci pattern in some DP problems refers to sets of calculations that have already been made once being calculated all over again.
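To put rough numbers on that picture, here is a small sketch of my own (the helper names are purely illustrative) that counts how many calls the naive recursion makes for f(4), versus how many sub-problems actually need to be expanded once results are cached:

fun countNaiveCalls(n: Int): Int {
    // Naive recursion: every f(n) re-expands f(n - 1) and f(n - 2).
    if (n <= 1) return 1
    return 1 + countNaiveCalls(n - 1) + countNaiveCalls(n - 2)
}

fun countMemoizedExpansions(n: Int, seen: MutableSet<Int> = mutableSetOf()): Int {
    // A sub-problem already in `seen` is reused instead of recomputed.
    if (n in seen) return 0
    seen.add(n)
    if (n <= 1) return 1
    return 1 + countMemoizedExpansions(n - 1, seen) + countMemoizedExpansions(n - 2, seen)
}

fun main() {
    println(countNaiveCalls(4))         // 9 calls in the full tree
    println(countMemoizedExpansions(4)) // 5: each of f(0)..f(4) is expanded only once
}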

To better appreciate how this can work, let’s examine the brute-force way of evaluating the input nums = [5, 4, -10, 7, 8]. First, a maximum sum is compared at subarray length 1, where by the end of the first pass, 8 is saved as the current maximum. Then another pass is made for subarrays of length 2, such as [5, 4], [4, -10], and so on.

Visual diagram of brute-force recursion for the maxSumSubarray problem. Given an array nums of size 5, the diagram shows the total number of comparisons made for every calculated sum of a subarray. nums = [5, 4, -10, 7, 8]
Level 1, subarray length 1: [5], [4], [-10], [7], [8]
Level 2, subarray length 2: [5, 4], [4, -10], [-10, 7], [7, 8]
Level 3, subarray length 3: [5, 4, -10], [4, -10, 7], [-10, 7, 8]
Level 4, subarray length 4: [5, 4, -10, 7], [4, -10, 7, 8]
Level 5, subarray length 5: [5, 4, -10, 7, 8]
The red values represent the current sum of a calculation.

Each iteration increases the subarray length up to the maximum, and every sum is recomputed from scratch, so the amount of work grows much faster than the input size. Perhaps you noticed certain calculations being unnecessarily repeated, like the sum of the subarray [4, -10] being recalculated within larger subarrays such as [5, 4, -10], [4, -10, 7], [5, 4, -10, 7], and [5, 4, -10, 7, 8]. This pattern of re-solving the same sub-problems over and over is what brute-force recursion produces.
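To see the repeated work in code, here is a minimal sketch of that brute-force enumeration (bruteForceMaxSubArray is my own illustrative name, not part of the LeetCode problem, and it assumes a non-empty array like the example):

// Brute force: try every (start, end) pair and recompute the sum each time.
fun bruteForceMaxSubArray(nums: IntArray): Int {
    var maxSum = nums[0]
    for (start in nums.indices) {
        for (end in start until nums.size) {
            var sum = 0
            // This inner loop re-adds values like [4, -10] over and over,
            // once for every larger subarray that contains them.
            for (i in start..end) sum += nums[i]
            maxSum = maxOf(maxSum, sum)
        }
    }
    return maxSum
}

fun main() {
    println(bruteForceMaxSubArray(intArrayOf(5, 4, -10, 7, 8))) // 15
}

The same repeated-work problem shows up in its purest form in the classic Fibonacci recursion below.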

Brute Force Recursion

Time complexity: O(2^n) | Space Complexity: O(n)

fun recursionFib(n: Int): Int {
    if (n <= 1) return n
    return recursionFib(n - 1) + recursionFib(n - 2)
}

This is where dynamic programming really shines, because the algorithm stores already computed results. Let’s revisit the maximum sum subarray problem again:

Problem Description
Given an integer array nums, find the subarray
which has the largest sum and return its sum.

Example:
Input: nums = [5, 4, -10, 7, 8]
Output: 15
Explanation: [7, 8] has the largest sum = 15.

For DP, only the first two calculations are needed — f(0), the base case, and f(1), which is the conditional. These two calculations determine how the rest of the values are computed.

The strategy we use is called Kadane’s algorithm, where the max/min aggregate of the current contiguous subarray is saved for tracking.

The base case is nums[0] (assuming the array is non-empty). We store this value in a result table dp to keep track of the current maximum. Now we look at the next value in nums and ask ourselves: is 4 greater than 4 + 5? It is not, so we store the sum of nums[0] + nums[1] in dp[1].

dp[0] = nums[0]
dp[1] = maxOf(dp[0] + nums[1], nums[1]) // maxOf(5 + 4, 4)
dp[1] = 9

If we apply the same logic to the third element (index 2), we evaluate:

dp[2] = maxOf(dp[1] + nums[2], nums[2]) // maxOf(9 - 10, -10)
dp[2] = -1

dp[2] is now -1, which is less than dp[1]; this means the current contiguous subarray has reached its possible maximum.

That’s it, we’ve figured out the mechanism. We can use this formula to calculate every aggregate index after.

dp[i] = maxOf(dp[i-1] + nums[i], nums[i])
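If we translate that formula directly into code, we get a dp-table version of Kadane’s algorithm. This is a sketch of my own (maxSubArrayWithTable is not from the problem) that assumes a non-empty input, just like the walkthrough above:

// Kadane's algorithm with an explicit dp table:
// dp[i] holds the largest sum of any subarray that ends at index i.
fun maxSubArrayWithTable(nums: IntArray): Int {
    val dp = IntArray(nums.size)
    dp[0] = nums[0] // base case
    var maxSum = dp[0]

    (1 until nums.size).forEach { i ->
        dp[i] = maxOf(dp[i - 1] + nums[i], nums[i]) // extend the current run or start fresh
        maxSum = maxOf(maxSum, dp[i])
    }
    return maxSum
}

On nums = [5, 4, -10, 7, 8], the table fills up as dp = [5, 9, -1, 7, 15] and the function returns 15, matching the expected output.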

With dynamic programming, we can iterate through the array just once for a lean O(n) time, storing already computed results so there is no need to recalculate values we have already seen as we move through the array.

The red values represent the current max sum of a calculation.

We don’t actually have to store these results in another table. We can reuse the existing nums array to store results, since we only move forward through it once and we only want the maxSum, while all the other “current maximum sums” are stored at each index.

fun maxSubArray(nums: IntArray): Int {
    if (nums.isEmpty()) return 0
    if (nums.size == 1) return nums[0]

    var maxSum = nums[0]

    (1 until nums.size).forEach { i ->
        nums[i] = maxOf(nums[i], nums[i] + nums[i - 1])
        maxSum = maxOf(nums[i], maxSum)
    }
    return maxSum
}
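As a quick sanity check (and a caveat worth calling out): because this version reuses nums as its dp table, the input array is mutated in place.

fun main() {
    val nums = intArrayOf(5, 4, -10, 7, 8)
    println(maxSubArray(nums))   // 15
    println(nums.joinToString()) // 5, 9, -1, 7, 15 -- nums now holds the running maxima
}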

Now that we’ve looked at how to create an algorithm for a 1D DP problem, we turn to a quick guide of comparative forms of 1D dynamic programming that might better match certain conditions for a problem.

Tabulation (Bottom-Up)

Time complexity: O(n) | Space Complexity: O(n)

If all sub-problems must be solved at least once, a bottom-up algorithm usually outperforms a top-down memoized algorithm. This is similar to the form we used in our previous example!

fun tabulationBottomUp(n: Int): Int {
    if (n <= 1) return n

    val dp = IntArray(n + 1) { 0 }
    dp[1] = 1

    (2..n).forEach { i ->
        dp[i] = dp[i - 1] + dp[i - 2]
    }
    return dp[n]
}
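As a quick check, tabulationBottomUp(6) fills dp with [0, 1, 1, 2, 3, 5, 8] and returns 8, the 6th Fibonacci number.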

Memoization (Top-Down)

Time complexity: O(n) | Space Complexity: O(n)

Memoization is better when not every sub-problem needs to be solved: the top-down approach only computes the sub-problems that the recursion actually reaches.

fun memoizationTopDownFib(n: Int, dp: IntArray = IntArray(n + 1) { -1 }): Int {
    if (n <= 1) return n
    if (dp[n] != -1) return dp[n] // already computed: reuse the stored result
    dp[n] = memoizationTopDownFib(n - 1, dp) + memoizationTopDownFib(n - 2, dp)
    return dp[n]
}
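The detail that makes the top-down version work is that the dp table has to be shared across the recursive calls: here it is created once on the outermost call and passed along (a default parameter is just one convenient way to do that in Kotlin), with -1 marking entries that have not been computed yet.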

Space-Optimized

Time complexity: O(n) | Space Complexity: O(1)

fun spaceOptimization(n: Int): Int {
    if (n <= 1) return n

    var one = 1 // f(1): the latest value
    var two = 0 // f(0): the value before it

    (2..n).forEach { _ ->
        val tmp = one
        one += two // advance: the latest value moves one step forward
        two = tmp
    }
    return one
}
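Since each Fibonacci value only depends on the two values before it, the two variables one and two are all the state we need; spaceOptimization(6) still returns 8, the same answer as the tabulated version.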

Additional resources

This information was compiled from hours of sifting through YouTube resources and other helpful articles. I totally recommend diving into these problem sets yourself, as a lot of folks find great ways to explain this concept just a little differently every time. Each one of these people deserves a medal for providing in-depth knowledge on the subject.


Amanda Hinchman

Kotlin GDE and Android engineer. Co-author of O'Reilly's "Programming Android with Kotlin: Achieving Structured Concurrency with Coroutines"