An excerpt from the Advent of Code website:
"Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets
and skill levels that can be solved in any programming language you like. People use them as a
speed contest, interview prep, company training, university coursework, practice problems, or
to challenge each other."
Working through the Advent of Code while learning Kotlin. It acts as a nice format for personal development. I'm treating the codebase a little more enterprise-esque, for lack of a better term, meaning I'm focusing more on design, readability, and ensuring the code is test-driven with full coverage. This doesn't mean, however, that I'm not considering performance. Any solutions that don't meet a respectable order of runtime complexity will be refactored and improved in a later pass over the days.
If you want to clone the repository and run the solutions, you need only clone/download it, import it as a Gradle project in your IDE, and run the respective solution file.
Annotation Processing must be enabled in your IDE for the JMH tests to run.
In IntelliJ:
Preferences -> Build, Execution, Deployment -> Compiler -> Annotation Processors
Task | Description |
---|---|
test | Runs the unit tests for the implementation and common sub-projects. |
testCoverage | Runs the unit tests, calculates the coverage and verifies that it is > 90%. |
benchmark | Runs the JMH tests for the implementation sub-project. |
detekt | Runs detekt with the custom configuration rules defined in `detekt-config.yml`. |
The package structure was something that changed almost every day. My goal was "package-by-feature". For the first few days it seemed like I'd just end up with 25 different packages whose names were relevant keywords from that day. However, towards the latter half of the days, consistencies in the naming emerged that started to make the language a little more ubiquitous. This allowed me to group packages together and abstract out common code.
I created two Gradle root projects, `implementation` and `solutions`. The former has two sub-projects: `common`, for behaviour that is commonly used across multiple days, and `test-support`, for unit test utility classes.
```
implementation
├── src
├── common
└── test-support
solutions
```
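Hypothetically, the wiring for that layout in the implementation root's `settings.gradle.kts` might look something like the sketch below; the repository's actual build scripts may differ.

```kotlin
// settings.gradle.kts for the implementation root project (hypothetical
// sketch; the real build wiring may differ).
rootProject.name = "implementation"
include("common")
include("test-support")
```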
Design Patterns
I used the detekt Gradle plugin to perform static code analysis on the codebase. It produces a report containing all the code smells that it found, based on the set configuration. The custom configuration is defined in `detekt-config.yml` and can be found in the `implementation` resources.
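For context, wiring the plugin to a custom configuration file typically looks something like this in a `build.gradle.kts`; the version and path below are placeholders, not taken from the repository.

```kotlin
// build.gradle.kts (hypothetical sketch; version and path are placeholders).
plugins {
    id("io.gitlab.arturbosch.detekt") version "1.1.1"
}

detekt {
    // Point detekt at the custom rule set instead of its bundled defaults.
    config = files("src/main/resources/detekt-config.yml")
}
```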
Due to the nature of the puzzle inputs, many of the second parts scaled the input size significantly. The sheer size of these inputs made the solutions impossible to brute-force, or even to wait for with a runtime complexity of O(n). I used VisualVM to manually investigate crippling performance issues, but I wanted an automated solution that could run across the whole codebase.
From the OpenJDK website:
"JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks
written in Java and other languages targeting the JVM."
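A JMH benchmark is just an annotated class that the harness discovers via annotation processing. As a rough, illustrative example (the class and method names are made up, not one of the repository's actual suites):

```kotlin
import org.openjdk.jmh.annotations.Benchmark
import org.openjdk.jmh.annotations.BenchmarkMode
import org.openjdk.jmh.annotations.Mode
import org.openjdk.jmh.annotations.OutputTimeUnit
import org.openjdk.jmh.annotations.Scope
import org.openjdk.jmh.annotations.State
import java.util.concurrent.TimeUnit

// JMH generates code around benchmark classes, so they must be open in Kotlin.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
open class ExampleSolutionBenchmark {
    private val masses = (1..100_000).toList()

    // Returning the result stops the JIT eliminating the work as dead code.
    @Benchmark
    fun fuelRequirements(): Int = masses.sumOf { it / 3 - 2 }
}
```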
The JUnit 5 Jupiter API exposes some really nice functionality in the form of annotations, and offers callback interfaces for each of the life-cycle events. Utilising `@ParameterizedTest` can significantly reduce the quantity of test methods in your unit test suites, while `@Nested` allows you to organise your tests in nested classes.
AssertK is an assertion library inspired by AssertJ but for Kotlin. I'm familiar with AssertJ and prefer the naming conventions and variety of assertions over the default JUnit offering.
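As a minimal sketch of both in action (the test class is illustrative, borrowing Day 1's published fuel examples for its parameters):

```kotlin
import assertk.assertThat
import assertk.assertions.isEqualTo
import org.junit.jupiter.api.Nested
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.CsvSource

class FuelCalculatorTest {
    @Nested
    inner class PartOne {
        // One parameterised method replaces four near-identical test methods.
        @ParameterizedTest
        @CsvSource("12, 2", "14, 2", "1969, 654", "100756, 33583")
        fun `calculates fuel for a module of the given mass`(mass: Int, expected: Int) {
            assertThat(mass / 3 - 2).isEqualTo(expected)
        }
    }
}
```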
I'm a huge advocate of testing in general, but particularly test-driven development. Some days were easy to test-drive as they provided lots of examples that generally covered all branches of the algorithm and sometimes even edge cases. Given I was taking a proper object-orientated approach, and not just hacking it into a single function, I still had to write more tests than just the example inputs, but they were a nice baseline to run against while writing a solution.
During the first iterations of the graphing algorithms, runtime performance was an issue. There were several cases where the first rough implementation passed all the example tests, but didn't suffice for the scaled puzzle input.
This is where VisualVM came in handy. The aforementioned desktop client, in tandem with the IntelliJ plugin, made it easy to run a given unit test with a VisualVM instance attached to the JVM process. This allowed me to sample and profile classes of my choice to find the source of the performance issues.
One problem I ran into (and seemingly many others online did too) was that JUnit 5 bootstrapped and ran my test so quickly that it finished before VisualVM even had a chance to boot up. A common solution I'd seen was to simply add a `Thread.sleep(x)` line at the start of the test method. Although this is the solution I technically went with, I abstracted it into a `@WaitForVisualVM` annotation and created a custom implementation of the Jupiter API's `BeforeTestExecutionCallback` interface called `SupportsVisualVM.kt`, which can be added to a test-suite class using the `@ExtendWith` annotation.
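A minimal sketch of what that extension can look like (the configurable wait duration is my own illustrative addition; the repository's implementation may differ):

```kotlin
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback
import org.junit.jupiter.api.extension.ExtensionContext

// Marks a test that should pause so a profiler can attach first.
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
annotation class WaitForVisualVM(val seconds: Long = 10)

class SupportsVisualVM : BeforeTestExecutionCallback {
    override fun beforeTestExecution(context: ExtensionContext) {
        context.testMethod
            .map { it.getAnnotation(WaitForVisualVM::class.java) }
            .ifPresent { waitFor ->
                // Give VisualVM time to boot and attach before the test body runs.
                Thread.sleep(waitFor.seconds * 1000)
            }
    }
}
```

A suite then opts in with `@ExtendWith(SupportsVisualVM::class)` on the class and `@WaitForVisualVM` on the test in question.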
This kept things in line with the 'enterprise-style' aspect of my codebase as it:

- Wrapped the dubious `Thread.sleep()` call with a self-explanatory (and documented) annotation.
- Removed noise from the test method and ensured that the wait always runs before test execution.
- Allowed developers to easily disable all waiting for tests in a suite by simply removing the support extension.
- Made it easier to refactor in the future, as the implementation specifics are encapsulated in the annotation.
Although it might seem over-engineered, it's really just a trivial example to demonstrate the concept and its benefits.
This day was an odd one. Usually the implementation is the 'easy' bit, and figuring out the theory behind the solution is the harder part that can take a while. However, Day 18's theory was simple. It was a maze filled with doors and their corresponding keys, and you had to find the shortest path that collects all the keys. So it was just a case of using an exhaustive DFS (Depth-First Search) algorithm to graph all the keys and their respective weights to any other accessible ones, then running Dijkstra's algorithm on the weighted graph to find the shortest path. I just couldn't figure out how to implement it. I spent many hours trying different things until I eventually managed to get 4 of the 5 examples working. Example 5, however, just ran indefinitely, and so did my puzzle input. I needed to improve the performance of my graphing algorithm so that my solution input didn't take a literal eternity to run.
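For reference, the Dijkstra half of that approach can be sketched as below. This is an illustrative version over a plain weighted graph keyed by `Char`; Day 18's real search state also has to track which keys are currently held, which is exactly what makes the implementation hard.

```kotlin
import java.util.PriorityQueue

// Illustrative Dijkstra: shortest distance from start to every reachable node.
fun shortestPaths(graph: Map<Char, List<Pair<Char, Int>>>, start: Char): Map<Char, Int> {
    val distances = mutableMapOf(start to 0)
    val queue = PriorityQueue<Pair<Char, Int>>(compareBy { it.second })
    queue.add(start to 0)
    while (queue.isNotEmpty()) {
        val (node, distance) = queue.poll()
        if (distance > distances.getValue(node)) continue // stale queue entry
        for ((neighbour, weight) in graph[node].orEmpty()) {
            val candidate = distance + weight
            if (candidate < (distances[neighbour] ?: Int.MAX_VALUE)) {
                distances[neighbour] = candidate
                queue.add(neighbour to candidate)
            }
        }
    }
    return distances
}
```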
Day 22, Slam Shuffle, started off in part 1 as a nice and easy puzzle with some interesting card-shuffling strategies that work nicely with behavioural design patterns. However, part 2 threw a spanner in the works with what some have called 'Advent of Math'. The second part increased the deck size from roughly 1x10^4 to 1x10^14, and the number of shuffles from 1 to an order of magnitude similar to that of the deck size. The solution, in short, was to:
- Represent the shuffling algorithms as linear functions of the form f(x) = ax + b (see the table below).
- Create a single aggregate function by composing all the converted linear functions together.
- Use exponentiation by squaring with modular arithmetic in order to reduce the number of shuffles required (sketched in code below the table).
Shuffling Technique | Linear Transformation Function |
---|---|
Deal Into New Stack | f(x) = m - x - 1 |
Cut N | f(x) = (x - n) mod m |
Deal Increment N | f(x) = nx mod m |
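A rough sketch of the last two steps (the class below is illustrative, not the repository's implementation): because the composition of two linear functions mod m is again linear, a whole shuffle pass collapses to one (a, b) pair, and repeating it can be done in logarithmically many compositions.

```kotlin
import java.math.BigInteger

// Illustrative: a linear congruential function f(x) = (ax + b) mod m.
data class LinearFunction(val a: BigInteger, val b: BigInteger, val m: BigInteger) {
    // g(f(x)) = (g.a * (ax + b) + g.b) mod m, which is again linear.
    fun andThen(g: LinearFunction) =
        LinearFunction((g.a * a).mod(m), (g.a * b + g.b).mod(m), m)

    // Compose the function with itself n times via exponentiation by
    // squaring, turning ~10^14 shuffles into ~47 compositions.
    fun repeated(n: Long): LinearFunction {
        var result = LinearFunction(BigInteger.ONE, BigInteger.ZERO, m) // identity
        var base = this
        var k = n
        while (k > 0) {
            if (k and 1L == 1L) result = result.andThen(base)
            base = base.andThen(base)
            k = k shr 1
        }
        return result
    }

    fun apply(x: BigInteger): BigInteger = (a * x + b).mod(m)
}
```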
This day was the most fun because I didn't understand what was actually going on until I'd successfully implemented a solution. It was only shortly after re-reading the documentation that I understood what was happening. The output below is the result of flattening a three-dimensional array of integers that represented pixel colours. After traversing the array vertically and exposing the upper-most opaque pixels, it produced a bitmap that represented an image of the spaceship's registration number.
```
1 0 0 1 0 0 1 1 0 0 1 1 1 0 0 0 1 1 0 0 1 1 1 1 0
1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 0 0
1 1 1 1 0 1 0 0 0 0 1 1 1 0 0 1 0 0 0 0 1 1 1 0 0
1 0 0 1 0 1 0 1 1 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0
1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 0 0
1 0 0 1 0 0 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 0 0
```
After I realised what my output was, I parsed the black pixels as octothorps and the transparent ones as spaces, producing the output below. This was interesting to me as it meant you couldn't programmatically test your solution, and it therefore required human interpretation. I suppose you could use OCR (Optical Character Recognition) software.
```
# # # # # # # # # # # # #
# # # # # # # # #
# # # # # # # # # # # #
# # # # # # # # #
# # # # # # # # #
# # # # # # # # # # #
```
The moment it clicked and I saw the ASCII-style text printed out, it put a huge grin on my face, as this was the first time I'd seen such a niche puzzle in Advent of Code.
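For illustration, the flattening and rendering described above can be sketched like this. It's not the repository's code; it assumes pixels are encoded as 0 (black), 1 (white) and 2 (transparent), layers ordered top-most first, with `#` marking whichever value forms the letters.

```kotlin
// The visible pixel at each position is the upper-most non-transparent one.
fun flatten(layers: List<List<Int>>): List<Int> =
    layers.first().indices.map { i ->
        layers.map { layer -> layer[i] }.first { it != 2 }
    }

// Renders one character per pixel; '#' marks the letter-coloured pixels.
fun render(pixels: List<Int>, width: Int): String =
    pixels.chunked(width).joinToString("\n") { row ->
        row.joinToString(" ") { if (it == 1) "#" else " " }
    }
```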
A friend of mine had been raving about Kotlin for months at work, and I finally decided to start learning it when I started the Advent of Code. Having written in nothing but Java 10 hours a day for 2 years, I was amazed by how easy it was to pick the language up. The first thing I noticed was how succinct the language was syntactically. Most simple functions could be one-liners; return types were mostly implicit; and features such as data classes removed a lot of the boilerplate code that is prevalent in enterprise Java code.
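As a small, hypothetical illustration of that brevity (the names here are my own, not from the repository):

```kotlin
import kotlin.math.abs

// A data class generates equals(), hashCode(), toString() and copy() for free.
data class Point(val x: Int, val y: Int)

// A one-line function with an inferred return type.
fun manhattanDistance(a: Point, b: Point) = abs(a.x - b.x) + abs(a.y - b.y)
```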
Graphs, trees and mazes were a recurring theme, which meant that path-finding algorithms came up again and again.
Where there are Data Structures & Algorithms, there are always performance considerations...
Implementing the aforementioned data structures and algorithms meant that some solutions needed refactoring for performance optimisation due to the vast puzzle-input sizes. Careful consideration was needed when choosing data structures and designing the algorithms. In some cases, a brute-force approach was simply impossible because it would take an eternity to run (looking at you, Day 12... simulating the repetition of time).
For the most part, only runtime complexity was a concern; there were only a few occasions where space complexity was an issue.
Day | Part 1 | Part 2 | Name | Documentation |
---|---|---|---|---|
01 | 3184233 | 4773483 | The Tyranny of the Rocket Equation | Link |
02 | 10566835 | 2347 | 1202 Program Alarm | Link |
03 | 529 | 20386 | Crossed Wires | Link |
04 | 466 | 292 | Secure Container | Link |
05 | 5044655 | 7408802 | Sunny with a Chance of Asteroids | Link |
06 | 314702 | 439 | Universal Orbit Map | Link |
07 | 21860 | 2645740 | Amplification Circuit | Link |
08 | 2904 | HGBCF | Space Image Format | Link |
09 | 3100786347 | 87023 | Sensor Boost | Link |
10 | 263 | 1110 | Monitoring Station | Link |
11 | 1564 | RFEPCFEB | Space Police | Link |
12 | 7722 | 292653556339368 | The N-Body Problem | Link |
13 | 247 | 12954 | Care Package | Link |
14 | 1037742 | 1572358 | Space Stoichiometry | Link |
15 | 212 | 358 | Oxygen System | Link |
16 | 77038830 | 28135104 | Flawed Frequency Transmission | Link |
17 | 7404 | 929045 | Set and Forget | Link |
18 | 5262 | 2136 | Many-Worlds Interpretation | Link |
19 | 181 | 4240964 | Tractor Beam | Link |
20 | 526 | 6292 | Donut Maze | Link |
21 | 19350258 | 1142627861 | Springdroid Adventure | Link |
22 | 2604 | 79608410258462 | Slam Shuffle | Link |
23 | 23815 | 16666 | Category Six | Link |
24 | 32506764 | 1963 | Planet of Discord | Link |
25 | 196872 | 49 Stars | Cryostasis | Link |