
One Problem, Four Paradigms

Explore with me how one simple problem, taken from Advent of Code, can be solved in four different languages using four different ways of thinking.


Let’s see how one problem can be solved with four different ways of thinking. The problem is as follows: given a file with numbers in this layout:

3   4
4   3
2   5
1   3
3   9
3   3

This represents two lists of numbers written vertically. You’re required to pair up the numbers from the two lists so that the smallest number in the left list is paired with the smallest number in the right list, and find the absolute difference between each pair. Then sum all of those differences up; that’s our final answer. For the example above, the sorted pairs are (1, 3), (2, 3), (3, 3), (3, 4), (3, 5) and (4, 9), the differences are 2, 1, 0, 1, 2 and 5, and the sum is 11.

Well, you probably already imagined some simple solution here: read the two lists, sort them, then calculate the difference between each index pair.

Object-Oriented Programming (Java)

Let’s apply some of the key concepts of OOP here: encapsulation and mutable objects.

public class OOPMain {

  public static void main(String[] args) {
    NumberReader reader = new NumberReader("../input.txt");
    reader.read();

    NumberList left = reader.getLeft();
    NumberList right = reader.getRight();

    PairList pairList = left.pair(right);
    System.out.println(pairList.sumOfDifferences());
  }

}

I only copied the most relevant piece of code here, but it already highlights the two OOP ideas I mentioned: encapsulation, because main never touches a raw list, array, or file handle, it only asks objects to do things through their public methods (reader.read(), left.pair(right), pairList.sumOfDifferences()) and has no idea how they store their numbers internally; and mutable objects, because reader.read() doesn’t return anything, it changes the reader’s internal state in place, and we then pull the two lists out of it.
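To make that concrete, here is one possible shape for NumberList and PairList. The internals are purely my own guess, and that’s the point: main never needs to know them. (NumberReader, which would fill the two lists from the file, is left out; the small SketchDemo class stands in for it.)

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// One hypothetical implementation of NumberList and PairList.
// The caller only ever sees the public methods, never the backing lists.
class PairList {
  private final List<int[]> pairs = new ArrayList<>();

  void add(int left, int right) {
    pairs.add(new int[] { left, right });
  }

  int sumOfDifferences() {
    int sum = 0;
    for (int[] p : pairs) {
      sum += Math.abs(p[0] - p[1]);
    }
    return sum;
  }
}

class NumberList {
  private final List<Integer> numbers = new ArrayList<>();

  void add(int value) {
    numbers.add(value); // mutates this object's internal state in place
  }

  PairList pair(NumberList other) {
    List<Integer> a = new ArrayList<>(numbers);
    List<Integer> b = new ArrayList<>(other.numbers);
    Collections.sort(a); // smallest left pairs with smallest right
    Collections.sort(b);
    PairList result = new PairList();
    for (int i = 0; i < a.size(); i++) {
      result.add(a.get(i), b.get(i));
    }
    return result;
  }
}

class SketchDemo {
  public static void main(String[] args) {
    NumberList left = new NumberList();
    NumberList right = new NumberList();
    int[][] rows = { { 3, 4 }, { 4, 3 }, { 2, 5 }, { 1, 3 }, { 3, 9 }, { 3, 3 } };
    for (int[] row : rows) {
      left.add(row[0]);
      right.add(row[1]);
    }
    System.out.println(left.pair(right).sumOfDifferences()); // prints 11
  }
}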

Exercise for the Reader

Can you reimplement the Java example, using the exact same methods I declared here? As long as the main caller doesn’t get to know your internals, you’ve understood OOP very well.

Final Results:

11

Not bad, right? I’m sure some of you are saying this is a stupid amount of code for a simple problem, and that you could just use a simple loop to read the file, then sort and print the result. You’d be correct, but then you’d simply be using a different paradigm with Java. Here I want to highlight the core of OOP on a simple problem, which inevitably requires a bit of overengineering.

Functional Programming (Haskell)

Functional programming gives us a few things I want to highlight in the following piece of Haskell: data immutability, function composition, and pure functions. The Haskell example is also short enough that I decided to paste the entire thing in.

import Data.List (partition, sort)

stoi :: String -> Int
stoi s = read s :: Int

parse :: String -> [Int]
parse = map stoi . words

split :: [Int] -> [[Int]]
split xs = [map fst left, map fst right]
  where
    (left, right) = partition (even . snd) $ zip xs [0 ..]

mySum :: [Int] -> Int
mySum [] = 0
mySum [x] = x
mySum (x : xs) = x + mySum xs

computeDifferences xs = mySum . map abs $ zipWith (-) left right
  where
    [left, right] = map sort xs

main :: IO ()
main = do
  contents <- readFile "../input.txt"

  let result = computeDifferences $ split . parse $ contents
  print result

Okay, there’s a lot going on here, so let’s go through it piece by piece. We declare a total of six functions: stoi, parse, split, mySum, computeDifferences, and main. You can probably guess what the last one does.

stoi is as simple as it gets, so I won’t delve into it: it simply converts a String into an Int, that’s it. The more interesting things are the other functions.

parse

Let’s check parse:

parse :: String -> [Int]
parse = map stoi . words

Here we use one of Haskell’s bread-and-butter features: function composition. It looks somewhat cryptic, but it essentially translates to map(stoi, words(arg)) in other languages. Function composition comes straight from maths: f . g creates another function h such that h(x) = f(g(x)). So parse breaks a String down into whitespace-separated words (like a .split on whitespace) and then runs stoi on each piece.
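If composition is new to you, here is a rough Java analog (the class and the names words and stoiEach are mine, just to mirror the Haskell): f.compose(g) builds exactly the h(x) = f(g(x)) described above.

import java.util.Arrays;
import java.util.function.Function;

// A rough Java analog of the composition `map stoi . words`:
// f.compose(g) builds h where h(x) = f(g(x)), just like f . g.
public class ComposeDemo {
  public static void main(String[] args) {
    Function<String, String[]> words = s -> s.trim().split("\\s+");
    Function<String[], int[]> stoiEach =
        ws -> Arrays.stream(ws).mapToInt(Integer::parseInt).toArray();

    Function<String, int[]> parse = stoiEach.compose(words); // stoiEach . words

    System.out.println(Arrays.toString(parse.apply("3   4\n4   3"))); // [3, 4, 4, 3]
  }
}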

split

Next, let’s move on to the split function. This function’s job is to take an integer list and split it into two lists: one with the values at even indices and one with the values at odd indices. (Example: [1, 2, 3, 4] would become [1, 3] and [2, 4].)

split :: [Int] -> [[Int]]
split xs = [map fst left, map fst right]
  where
    (left, right) = partition (even . snd) $ zip xs [0 ..]

First, we need to define left and right, since the expression in split xs = ... uses them. You can define “variables”, or bindings, inside a function using the where clause in Haskell. Here I defined left and right as the two halves of the tuple produced by the expression on the right. Let’s break that expression down, since it contains a new token: $.

The token $ in Haskell essentially means “do everything on the right of me, then use that result as an argument for the thing on the left of me”. That means Haskell computes zip xs [0 ..] first, which some Python users may have already guessed: zip [1, 2, 3] [0 ..] becomes [(1, 0), (2, 1), (3, 2)], where the second value of each tuple is the index of that element. Okay, now we take this list and pass it as the argument to the partition function.

partition simply takes a “predicate function” (one that yields either true or false) and partitions the argument list into two lists: the first holds the elements for which the predicate yielded true, and the second holds the rest. Our predicate here is even . snd, which is also a function composition. Can you guess what it does?

Pop Quiz!

What does `even . snd` do?

Okay, now you can deduce which elements left and right contain. But there’s still a problem! Every element in both lists is still a TUPLE, so we have to get the original values back: map fst runs the fst function (“first”) on every element of both lists, basically stripping off the indices we zipped in in the first place.

Phew! There’s a lot going on in 4 lines.
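For comparison, here is the same splitting step sketched in Java (the class name and variables are mine): Collectors.partitioningBy plays the role of partition, and the mapping collector plays the role of map fst.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// A rough Java analog of split: pair each value with its index,
// partition on "index is even", then keep only the values.
public class SplitDemo {
  public static void main(String[] args) {
    List<Integer> xs = List.of(3, 4, 4, 3, 2, 5, 1, 3, 3, 9, 3, 3);

    Map<Boolean, List<Integer>> parts = IntStream.range(0, xs.size())
        .boxed()
        .collect(Collectors.partitioningBy(
            i -> i % 2 == 0,                                    // like even . snd
            Collectors.mapping(xs::get, Collectors.toList()))); // like map fst

    System.out.println(parts.get(true));  // left column:  [3, 4, 2, 1, 3, 3]
    System.out.println(parts.get(false)); // right column: [4, 3, 5, 3, 9, 3]
  }
}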

mySum

mySum :: [Int] -> Int
mySum [] = 0
mySum [x] = x
mySum (x : xs) = x + mySum xs

Based on the name, you can probably guess that this function computes the sum of a list. But why is it called mySum? Haskell already provides a function called sum that does exactly this, but I wanted to highlight another quirk of declarative and functional programming, so I re-implemented my own version of summation.

This uses Haskell’s pattern matching and recursion to compute the sum. You probably already know for loops, but in purely functional languages recursion is the usual way to loop; there is no for.

We split the mySum function into three cases: an empty list sums to 0, a single-element list sums to just that element, and for anything longer we take the head x and add it to the sum of the tail xs.

Exercise for the Reader

Can you reimplement this mySum in any of your favorite OOP languages in a recursive manner?
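If you want to compare notes afterwards, here is one possible Java version, mirroring the same three cases with if/else instead of pattern matching:

import java.util.List;

// The same three cases as the Haskell mySum, written recursively in Java.
public class MySum {
  static int mySum(List<Integer> xs) {
    if (xs.isEmpty()) return 0;                          // mySum []       = 0
    if (xs.size() == 1) return xs.get(0);                // mySum [x]      = x
    return xs.get(0) + mySum(xs.subList(1, xs.size()));  // mySum (x : xs) = x + mySum xs
  }

  public static void main(String[] args) {
    System.out.println(mySum(List.of(2, 1, 0, 1, 2, 5))); // prints 11
  }
}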

computeDifferences

Okay, the problem-solving function. Let’s take it on.

computeDifferences xs = mySum . map abs $ zipWith (-) left right
  where
    [left, right] = map sort xs

Again we see the where clause, but this time it looks a lot less intimidating. It just applies the sort function to each of the input lists, then destructures them into a left and a right list. This is what lets us pair “the smallest number on the left” with “the smallest number on the right”.

Then we call zipWith. It’s similar to zip, but we get to tell Haskell which function to zip with, and we can use that to compute the difference between each pair of elements right here! So I passed in - (subtraction) as the function.

As you probably already imagined, some of those differences are negative! That’s what map abs is there to deal with. And finally, mySum! Can you now build the invocation tree in your head?
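If it helps, here is the whole computeDifferences pipeline as a rough Java analog (my own hypothetical class, using streams instead of composition):

import java.util.List;
import java.util.stream.IntStream;

// Sort both lists, subtract pairwise (zipWith (-)),
// take absolute values (map abs), then sum (mySum).
public class ComputeDifferences {
  static int computeDifferences(List<Integer> left, List<Integer> right) {
    List<Integer> a = left.stream().sorted().toList();
    List<Integer> b = right.stream().sorted().toList();
    return IntStream.range(0, a.size())
        .map(i -> Math.abs(a.get(i) - b.get(i)))
        .sum();
  }

  public static void main(String[] args) {
    System.out.println(computeDifferences(
        List.of(3, 4, 2, 1, 3, 3), List.of(4, 3, 5, 3, 9, 3))); // prints 11
  }
}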


Pure functions

Did you notice it? Every single function apart from main has no side effects (printing to the screen, reading files) and doesn’t rely on any external state! They also guarantee that if you provide the same input, you get the exact same output, every single time. These are called pure functions!

A lot of other languages take note of this too, and you may hear that the core of your program should always be pure, with all the “dirty” work (the I/O) pushed out to a small part of your program. Pure functions behave deterministically, so they’re very easy to unit-test.

Final Results:

[1 of 2] Compiling Main             ( main.hs, main.o ) [Source file changed]
[2 of 2] Linking main [Objects changed]
11

Same thing!

Procedural Programming (C)

#include <stdio.h>
#include <stdlib.h> /* abs */

/* struct list and the list_* helpers come from the dynamic-array
 * implementation that was snipped out of this listing. */

void fparse(FILE *fp, struct list *left, struct list *right) {
  int a;
  int b;
  while (fscanf(fp, "%d %d", &a, &b) > 0) {
    list_add(left, a);
    list_add(right, b);
  }
}

int main() {
  FILE *fp = fopen("../input.txt", "r");

  struct list *left = list_init();
  struct list *right = list_init();
  fparse(fp, left, right);

  list_sort(left);
  list_sort(right);

  int sum = 0;
  for (int i = 0; i < left->len; i++) {
    sum += abs(left->arr[i] - right->arr[i]);
  }

  printf("%d\n", sum);
  fclose(fp);
  return 0;
}

Here’s my C solution in procedural style. I snipped out a large part where I implemented a dynamic vector of elements, since C doesn’t have one built in. But I want to highlight a few things about C here: the way you call functions follows a different way of thinking than OOP, which you might be confusing it with.

In this example, everything, including the structs, is completely visible to the caller, which means there is no encapsulation here. And you call functions by dictating what goes where: for example, you pass in which list to sort, but a list can’t sort itself. OOP revolves around objects; PP revolves around procedures that take arguments to carry out an action. That’s the key difference.

Every single line in a PP program is an instruction to accomplish an action, and there are no hidden-away values. Objects don’t mutate themselves; YOU are the one calling procedures to mutate them.

Usually, a PP program will give you the fastest result, and often the simplest one too. The Haskell version is shorter by line count, but not everyone can just read the Haskell solution and understand it, while virtually everyone can read this C solution and understand it immediately.

Array Programming (APL)

The most awkward child of all of them! But I adore it!

Instead of bombing you with the whole thing like I did with Haskell, let’s go step by step through how I would construct the solution in Dyalog APL. A warning though: APL uses a different script, with symbols instead of names for its primitive functions.

First, let’s read in the input file. This uses the APL function named ⎕NGET along with the name of the input data file. Not that complicated so far, right?

      ⎕NGET 'input.txt'
┌─────┬───────────┬──┐
│3   4│UTF-8-NOBOM│10│
│4   3│           │  │
│2   5│           │  │
│1   3│           │  │
│3   9│           │  │
│3   3│           │  │
│     │           │  │
│     │           │  │
└─────┴───────────┴──┘

Oh no, the result looks kinda weird: it’s an array of 3 elements! Shocker, what is this paradigm again?

We only care about the data though, so we’d like to do something that gets us the first element of this array.

      ⊃⎕NGET'input.txt'
┌─────┐
│3   4│
│4   3│
│2   5│
│1   3│
│3   9│
│3   3│
│     │
│     │
└─────┘

Okay, there’s another character here: ⊃. In APL this is called a right shoe, because it looks like the tip of a shoe pointing right, and in its “monadic form” it is the function first.

APL Evaluation

APL is different from a lot of languages. It uses right-to-left evaluation. In that last example, we read the data using ⎕NGET on the right, then apply the first function on the left. This is how APL flows.

APL also has two kinds of functions: monadic and dyadic. As the names suggest, they are functions that either take one argument (conventionally named ⍵, omega) or two arguments (conventionally named ⍺ and ⍵, alpha and omega). Each APL symbol usually has a different meaning when used monadically versus dyadically.

For example, the right shoe has two forms: its monadic form is first, whereas its dyadic form is pick (selecting an element).

      ⊃1 2 3
      2⊃1 2 3

The first statement takes the first element of the array, yielding 1 as the result. The second statement picks the 2nd element of the array, yielding 2 as the result.

Oh! Another surprise! APL counts from 1, not 0!

Okay, what’s next? We have the input right there, but it’s actually just one normal string! It only looks like a table because of the newline characters.

      ⊃⎕NGET'input.txt'1
┌─────────────────────────────────────────────────────┐
│ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌⊖┐ │
│ │3   4│ │4   3│ │2   5│ │1   3│ │3   9│ │3   3│ │ │ │
│ └─────┘ └─────┘ └─────┘ └─────┘ └─────┘ └─────┘ └─┘ │
└∊────────────────────────────────────────────────────┘

This time we added a second argument to ⎕NGET. Usually a second argument, called ⍺, would go on the left of the function name, but for system functions like this one the extra values act more like directives than ordinary arguments. The 1 here tells APL to split the lines for us instead of returning one long string. What’s that dumb empty array doing at the end though? Let’s remove it with drop (↓).

      ¯1↓⊃⎕NGET'input.txt'1
┌─────────────────────────────────────────────────┐
│ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │3   4│ │4   3│ │2   5│ │1   3│ │3   9│ │3   3│ │
│ └─────┘ └─────┘ └─────┘ └─────┘ └─────┘ └─────┘ │
└∊────────────────────────────────────────────────┘

Okay, now it looks great. You’re seeing a dyadic function in action! The little bar up high (¯) is called a high minus. In APL, - also has two meanings, so the high minus marks a literal negative number rather than a call to the negate function. As you know, APL evaluates right to left, so writing -1↓ would drop one element and THEN negate the entire result, instead of dropping from the end with ¯1.

      ⍎¨ ¯1↓⊃⎕NGET'input.txt'1
┌─────────────────────────────────────┐
│ ┌───┐ ┌───┐ ┌───┐ ┌───┐ ┌───┐ ┌───┐ │
│ │3 4│ │4 3│ │2 5│ │1 3│ │3 9│ │3 3│ │
│ └~──┘ └~──┘ └~──┘ └~──┘ └~──┘ └~──┘ │
└∊────────────────────────────────────┘

Next, we run an “each”, or map (¨, the two dots up high, also called a diaeresis). It applies the function on its left to each element of the argument on its right. The hydrant function (⍎) is named execute, and it evaluates a string as an APL expression. We can take advantage of that because APL is array-oriented: an expression like 2 3 automatically evaluates to the array [2, 3], and it doesn’t matter how many spaces are between the numbers!

Now we have a list of lists of numbers. Let’s turn it into a 2D matrix, which is where APL excels (N-dimensional arrays).

      ↑⍎¨ ¯1↓⊃⎕NGET'input.txt'1
┌───┐
│3 4│
│4 3│
│2 5│
│1 3│
│3 9│
│3 3│
└~──┘

The up arrow (↑) is named mix! It converts a list of lists into a matrix of elements if possible, basically “ranking up” an array. An array’s rank describes its number of dimensions: a rank-1 array is exactly what you expect an array to be, and a rank-2 array is what you’d call a 2D matrix! But remember, APL primitives generally work along rows, not columns.

We have the left list in the left column and the right list in the right column, exactly like in our input file. We need to sort each column, but APL doesn’t make that easy! We do have a tool at our disposal though: transpose (⍉).

      ⍉ ↑⍎¨ ¯1↓⊃⎕NGET'input.txt'1
┌───────────┐
│3 4 2 1 3 3│
│4 3 5 3 9 3│
└~──────────┘

Now we have rows, but sorting needs an array, not a matrix! Do we have to split (the opposite of mix) here? Or is there a way to sort each row directly?

      {⍵[⍋⍵]}⍤1 ⍉↑⍎¨ ¯1↓⊃⎕NGET'input.txt'1
┌───────────┐
│1 2 3 3 3 4│
│3 3 3 4 5 9│
└~──────────┘

Okay, a lot just appeared. Let’s understand it. The first new symbol, ⍤, is called jot diaeresis, because it’s a big circle under a diaeresis. Here we’re using it as the rank operator: it applies the function on its left to the rank-⍵ subarrays of the argument, and here ⍵ is 1. So it applies the left function to every rank-1 array of our matrix, that is, to each row.

What’s that in {} though? {} denotes a user-defined function in APL. You can use two parameters in such a function: ⍺ (alpha) and ⍵ (omega). ⍵ is always the right argument, i.e. the monadic argument. Here we take the grade up (⍋) of the argument, which returns the indices of the array’s elements in ascending order of value.

      ⍋5 3 4 1 2
┌─────────┐
│4 5 2 3 1│
└~────────┘

If we select the 4th element, then the 5th, and so on, we get the sorted array. That’s what grade up gives us. The ⍋ symbol doesn’t have a separate name lol, it’s also just called grade up. The [] brackets are the “selecting” or “indexing” syntax: they take an arbitrary array of indices and select elements in that shape.

      5 3 4 1 2[⍋5 3 4 1 2]
┌─────────┐
│1 2 3 4 5│
└~────────┘

If you index an array with its own grade, you basically have a sort function! You now understand what the function in {} does to each rank-1 array, and why the rows come out sorted. We are almost done.
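If the grade-up trick still feels foreign, here is the same idea sketched in Java: an argsort, under a hypothetical class name. Keep in mind Java indices are 0-based while the APL ones here are 1-based.

import java.util.Arrays;
import java.util.Comparator;

// Grade up in Java: compute the indices that would sort the array,
// then index the array with them to get the sorted result.
public class GradeUpDemo {
  public static void main(String[] args) {
    int[] xs = { 5, 3, 4, 1, 2 };

    Integer[] grade = new Integer[xs.length];
    for (int i = 0; i < xs.length; i++) grade[i] = i;
    Arrays.sort(grade, Comparator.comparingInt(i -> xs[i]));
    System.out.println(Arrays.toString(grade));  // [3, 4, 1, 2, 0], APL (1-based): 4 5 2 3 1

    int[] sorted = new int[xs.length];
    for (int i = 0; i < xs.length; i++) sorted[i] = xs[grade[i]];
    System.out.println(Arrays.toString(sorted)); // [1, 2, 3, 4, 5]
  }
}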

      -/[1] {⍵[⍋⍵]}⍤1 ⍉↑⍎¨ ¯1↓⊃⎕NGET'input.txt'1
┌────────────────┐
│¯2 ¯1 0 ¯1 ¯2 ¯5│
└~───────────────┘

Next, we use a slash (/), named reduce, to collapse the rows onto each other using the function on its left (-, subtract). You’re wondering what the [1] is for? As I told you, APL acts on rows by default, which means that if you don’t specify anything, the reduction happens within EACH ROW, but we want to collapse the two rows onto each other (down the columns). The [1] tells APL which axis to reduce along: 1 is the leading axis (collapsing the rows onto each other), while the default is the trailing axis (reducing within each row).

Of course, you could also transpose once more and reduce without the [1]; that does the same thing.

      -/⍉ {⍵[⍋⍵]}⍤1 ⍉↑⍎¨ ¯1↓⊃⎕NGET'input.txt'1
┌────────────────┐
│¯2 ¯1 0 ¯1 ¯2 ¯5│
└~───────────────┘

We’re on the last stretch! Same problem as in every other paradigm: there are negative numbers! We only want the absolute values!

      |-/⍉ {⍵[⍋⍵]}⍤1 ⍉↑⍎¨ ¯1↓⊃⎕NGET'input.txt'1
┌───────────┐
│2 1 0 1 2 5│
└~──────────┘

The |, called a stile, has a monadic form named magnitude (better known as absolute value). You can definitely guess what it does. But can you guess what the last operation is?

      +/|-/⍉ {⍵[⍋⍵]}⍤1 ⍉↑⍎¨ ¯1↓⊃⎕NGET'input.txt'1

11

There we go! We’re at the end! This is what an APL solution looks like! In case you didn’t catch that last step, it was another reduce, this time with + (sum) instead of subtract. That’s it.

It was DEFINITELY a lot to take in. But I hope this post gives you some insight into other paradigms, and an appreciation for how elegant APL can be in the right hands.