
Testing PostgreSQL Extensions

You have written a useful PostgreSQL extension. It works perfectly, includes some tests and has excellent documentation. But can it be tested properly and automatically for multiple PostgreSQL versions?

Single-machine build

First, the best way to build an extension on a single machine is to use the standard PGXS build system. It allows writing a simple declarative Makefile that includes a lot of PGXS machinery:

PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

Only a few Makefile variables with extension metadata must be added, and PGXS does the rest. Just type make, and the platform-specific binary and SQL scripts for the extension are built. The make install target installs the extension into the PostgreSQL installation currently pointed to by pg_config.
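For a hypothetical extension my_ext, the whole Makefile could look like this minimal sketch (all names are illustrative; the test layout matches the next paragraph):

# extension name; PGXS expects a matching my_ext.control file
EXTENSION = my_ext
# SQL install script and the C module (my_ext.c) to compile
DATA = my_ext--1.0.sql
MODULES = my_ext
# regression tests live in test/sql and test/expected
REGRESS = test1
REGRESS_OPTS = --inputdir=test

# ?= lets PG_CONFIG be overridden from the environment
PG_CONFIG ?= pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)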

PGXS also provides regression testing with the pg_regress tool and Makefile wrappers. It’s enough to put SQL inputs into test/sql/test1.sql and the expected text output into test/expected/test1.out. Invoking make installcheck runs all regression tests and provides a report and a status code.
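For illustration, a trivial test pair could look like this; note that the expected file contains the statements echoed by psql along with their output:

-- test/sql/test1.sql
CREATE EXTENSION my_ext;
SELECT 1 + 1 AS two;

-- test/expected/test1.out
CREATE EXTENSION my_ext;
SELECT 1 + 1 AS two;
 two
-----
   2
(1 row)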

It’s an easy and powerful system.

Testing at scale

But what about testing against multiple PostgreSQL versions? Typical package managers like Homebrew and APT don’t allow installing a large selection of PostgreSQL binaries because older versions are quickly replaced with newer ones. A specific Debian release supports only one PostgreSQL version at any moment, chosen by the Debian community. Homebrew allows having multiple servers installed side by side. To select a specific version, pass a PG_CONFIG variable before calling make:

PG_CONFIG=/usr/local/opt/postgresql@14/bin/pg_config make

Another way is to build multiple PostgreSQL versions from source and install them side by side. Usually, this works well because the build system behaves fine on a range of operating system versions.
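A sketch of such a build with a per-version prefix (paths are illustrative):

# build and install PostgreSQL into its own prefix
./configure --prefix=$HOME/pgsql/14
make -j$(nproc)
make install

# then build the extension against that installation
PG_CONFIG=$HOME/pgsql/14/bin/pg_config make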

But we don’t want to build everything from scratch. Okay, let’s use official Docker images:

docker run --rm -it postgres:14-bullseye find / -name postgres.h

Oops, no headers. The image is not suitable for building extensions.

How can we avoid building code by ourselves?

It turns out that there are official APT repositories provided not by Debian but by the PostgreSQL community itself. The APT and APT FAQ wiki pages describe them in detail. You can install postgresql-14 and postgresql-server-dev-14 (any version here) on any Debian distribution. The packages provide all needed headers and development files.
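For example, once the repository is configured (the exact steps are shown in the Dockerfile below), installing a version and checking for the headers is trivial:

sudo apt-get install postgresql-14 postgresql-server-dev-14
# server headers are now in place
/usr/lib/postgresql/14/bin/pg_config --includedir-server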

Test harness for images

Unfortunately, there aren’t any prebuilt Docker images with these packages. But with prebuilt server binaries available from the repository, making such an image is a much simpler task.

All CI-related files for my parray_gin extension live inside this directory.

First of all, the image contains everything needed to run actual tests.

Here we specify the build argument with a version.

FROM debian:bullseye

# PostgreSQL version (like 15)
ARG PGVERSION
ARG PGCHANNEL=bullseye-pgdg

We use the same compiler (gcc 10 in bullseye) for all the different PostgreSQL versions:

ENV DEBIAN_FRONTEND=noninteractive

# Install core packages
RUN apt-get update && \
    apt-get install --no-install-recommends -y sudo curl gnupg tzdata \
    locales lsb-release ca-certificates \
    make gcc libssl-dev libkrb5-dev libicu-dev libdpkg-perl

The APT repository for PostgreSQL must be configured in a modern way; do not use apt-key anymore because it is deprecated. Remember to raise the priority of the new packages, because otherwise Debian’s own packages could be pulled in instead of the community’s.

# Configure repository
RUN mkdir -p /etc/apt/keyrings && \
    curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
    gpg --dearmor -o /etc/apt/keyrings/postgres.gpg && \
    echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/postgres.gpg] http://apt.postgresql.org/pub/repos/apt/ ${PGCHANNEL} main ${PGVERSION}" | \
    tee -a /etc/apt/sources.list.d/postgres.list

# Prefer packages from the Postgres repository
RUN echo "Package: *\nPin: release o=apt.postgresql.org\nPin-Priority: 900" | \
    tee -a /etc/apt/preferences.d/pgdg.pref

# Install PostgreSQL
RUN apt-get update && \
    apt-get install --no-install-recommends -y postgresql-${PGVERSION?} \
      postgresql-server-dev-${PGVERSION?} && \
    rm -rf /var/lib/apt/lists/*

Finally, bootstrap the cluster and prepare the environment for building the extension:

# Create cluster
ENV PGBIN=/usr/lib/postgresql/${PGVERSION}/bin
ENV PGDATA="/var/lib/postgresql/${PGVERSION}/test"
ENV PATH="${PATH}:${PGBIN}"

RUN echo "postgres ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

COPY docker/entrypoint.sh /entrypoint.sh
COPY docker/test.sh /test.sh
RUN chmod a+x /entrypoint.sh /test.sh

RUN mkdir /src && chown postgres /src

USER postgres

RUN initdb

WORKDIR /src

ENTRYPOINT [ "/entrypoint.sh" ]

The entrypoint launches the cluster with pg_ctl start, so the command run in the container executes in an environment with an already running cluster:

#!/bin/sh
set -e
pg_ctl start
exec "$@"

Images can be prebuilt in advance:

docker build -t pgx:14 -f docker/Dockerfile --build-arg PGVERSION=14 .

The container builds and runs tests for an extension whose source code is in a given directory. The directory is mounted as a read-only volume; everything from it is copied to an intermediate directory and built there.

docker run --rm -v `pwd`:/workspace:ro pgx:14 /test.sh
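The script itself is just a handful of lines; here is a minimal sketch of what such a test.sh can do (my actual script also waits for the server, as discussed next):

#!/bin/sh
set -e
# copy sources out of the read-only volume and build them with PGXS
cp -r /workspace /src/build
cd /src/build
make
sudo make install
# run pg_regress tests against the running cluster
make installcheck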

There is one tricky part. Sometimes on slow CI machines the cluster has not started yet when the tests begin, so we need to wait until Postgres becomes ready. For newer versions the pg_isready command can be used, but for older versions (9.x) a fallback with a blind timeout is used.

if command -v pg_isready >/dev/null ; then
  timeout 600 bash -c 'until pg_isready; do sleep 10; done'
else
  sleep 30
fi

So the typical local workflow for testing against arbitrary versions is a one-liner:

export PGVERSION=14 && \
  docker build -t pgx:$PGVERSION -f docker/Dockerfile --build-arg PGVERSION=$PGVERSION . && \
  docker run --rm -v `pwd`:/workspace:ro pgx:$PGVERSION /test.sh
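To sweep over several versions locally, the same commands can be wrapped in a loop:

for PGVERSION in 12 13 14 15 ; do
  docker build -t pgx:$PGVERSION -f docker/Dockerfile --build-arg PGVERSION=$PGVERSION . && \
    docker run --rm -v `pwd`:/workspace:ro pgx:$PGVERSION /test.sh
done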

Testing on nightly

If you are going to test an extension against unreleased PostgreSQL, there are packages with development nightly snapshots. You don’t need to build a database from the trunk!

My scripts allow specifying a development channel PGCHANNEL=bullseye-pgdg-snapshot instead of standard PGCHANNEL=bullseye-pgdg for the image so the nightly package version will be used for tests.
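For example (assuming the in-development major version at the time is 16):

docker build -t pgx:snapshot -f docker/Dockerfile \
  --build-arg PGVERSION=16 --build-arg PGCHANNEL=bullseye-pgdg-snapshot .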

Wrapping up in GitHub Actions

Since all tests are parametrised with a database version and channel, it’s easy to wrap them in any CI that supports matrix builds.

Here we go with GitHub Actions matrix build:

name: test

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      matrix:
        pg-version: [ "9.1", "9.5", "9.6", "10", "11", "12", "13", "14", "15" ]

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Build the Docker image
        run: docker build . --file docker/Dockerfile --tag "pgxtest:${{ matrix.pg-version }}" --build-arg "PGVERSION=${{ matrix.pg-version }}"

      - name: Test
        run: docker run --rm -v `pwd`:/workspace "pgxtest:${{ matrix.pg-version }}" /test.sh

The snapshot workflow is more straightforward and contains only a single job for the snapshot channel.

Summary

The build and testing system for PostgreSQL extensions is stable and convenient for local development. More effort is needed to test an extension against multiple PostgreSQL versions automatically. The code and approaches introduced in this post can greatly simplify the development experience, both for local development and in public CIs.

Why Riak matters

Riak is one of the first and most prominent key-value databases implementing the famous Amazon Dynamo paper. The paper was unveiled in 2007, sixteen years ago; it could even get a passport!

The most widespread database that follows the paper is Apache Cassandra. Funnily enough, AWS DynamoDB has walked away from some of the original ideas (leaderless replication), and all of these systems have different storage engines.

Riak differs by sticking to the original ideas from Dynamo. So if you’d like to explore those principles and design a database system, or maybe even enhance a database, Riak is a very good candidate, with one tiny peculiarity: it is written in Erlang and OTP, which raises the barrier to entry.

In this post, I’ll describe the architecture of Riak, map-reduce pipeline development in Erlang and the Bitcask storage engine.

Dynamo foundations

[Image: attempts to speak with the Riak ring in Erlang, as shown in the Arrival movie]

Maybe Riak’s glory days have passed. After Basho’s departure and the handover of its projects to the community, it lacks support. For example, you cannot download binary releases for macOS since the bucket with artifacts has some permission errors, so you have to build it from source. And it’s impossible to build Riak with the latest Erlang 25; I only managed to do it with Erlang 22. There are a lot of dead links in the documentation. So, you have been warned.

Riak has a nice CLI to manage a cluster. Each node is accessible via HTTP and Protobuf protocols. You can even fire up an Erlang shell and work with a node directly because it is an OTP application. Riak really shines in distributed deployments since it is fully decentralised and uses leaderless replication. Membership and state are replicated across the cluster with gossip protocols. Because Riak is leaderless, a client can connect to any node and perform operations; depending on the request type, a node may interact with other nodes to execute the request. Regarding CAP, Riak is usually called an AP system, but I’ll talk more about consistency guarantees later.

Data is partitioned between nodes with consistent hashing on the ring. The ring of ordered hashes represents the partitioning of the keyspace into multiple contiguous partitions identified by start and end keys. Each partition on the ring is handled by a vnode, and each Riak node is mapped to multiple vnodes. To fetch or store an object in the cluster, Riak first hashes the key, finds a small sequence (depending on availability requirements) of hashes on the ring, finds the vnodes, maps them onto physical nodes, performs the reads/writes and waits for a read/write quorum before returning an answer to the client. The idea of consistent hashing is faster rehashing when a node joins or leaves the cluster, because only a small part of the ring is affected instead of the whole ring. If a node leaves the cluster temporarily (i.e., network problems), rehashing is still unnecessary because this situation can be handled by hinted handoff. If a failure is permanent, differences in keys between nodes can be found using anti-entropy procedures.
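To make the routing concrete, here is a toy model of the ring in Python. This is not Riak’s actual code, just the idea: hash the bucket/key pair with SHA-1 and map the 160-bit hash onto a fixed number of partitions (64, as in the demo cluster below):

import hashlib

NUM_PARTITIONS = 64   # ring size, as in the demo cluster below
RING_TOP = 2 ** 160   # SHA-1 produces a 160-bit hash space

def partition_for(bucket, key):
    """Map a bucket/key pair to a partition index on the ring."""
    h = int.from_bytes(hashlib.sha1(bucket + key).digest(), "big")
    return h * NUM_PARTITIONS // RING_TOP

def preference_list(bucket, key, n=3):
    """The n consecutive vnodes responsible for the key (its replicas)."""
    first = partition_for(bucket, key)
    return [(first + i) % NUM_PARTITIONS for i in range(n)]

# the key lands on three consecutive partitions of the ring
print(preference_list(b"cats", b"Snow White"))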

So many concepts from Dynamo are directly implemented in Riak without major changes. There is a good page in the documentation about how closely these concepts map to Riak’s.

Client interfaces

Riak is widely known for its good interface, and it thoroughly follows REST principles. According to the Richardson Maturity Model, which describes the adoption of REST concepts from Level 0 (HTTP transport) through Level 1 (resources) and Level 2 (HTTP verbs), Riak sits at the highest Level 3 (hypermedia controls). It provides walking across linked keys using standard Link headers. Multiple values are served with standard HTTP multipart/mixed responses. Different types of media are handled with MIME types.

I’ve seen such a powerful HTTP API in HashiCorp products too, for example in Consul. The ability to have a full-featured HTTP API to perform KV operations, resolve conflicts and even perform a leader election, without needing to write heavyweight Java clients (I’m looking at you, ZooKeeper), is fascinating.

Another available protocol is Protobuf. Curiously, it is not used over gRPC transport but instead with a very simple TCP transport.

Of course, many client libraries and drivers are written for Riak to interact with the store. For example, here is some Ruby code to create and put a value:

client = Riak::Client.new(:pb_port => 10017)
bucket = client.bucket("test")

obj = bucket.new('mykey')
obj.data = {'answer': 42}
obj.store()

val = bucket.get('mykey')
fail unless val.data['answer'] == 42

And an excerpt of querying the key over HTTP:

http http://localhost:10018/buckets/test/keys/mykey
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Link: </buckets/test>; rel="up"
X-Riak-Vclock: a85hYGBgzGDKBVI8R4M2cgds+X+VgUFdMIMpkTGPlSFaZ8tdviwA
{
    "answer": 42
}

The client libraries lack maintenance too and show their age. For example, the Python client cannot be used with Python newer than 3.7.

Tunable consistency

It’s well known that Dynamo databases are eventually consistent. Usually, they are put into the “AP” class, but Kleppmann disagrees and insists on just “P”. Let’s check why.

To handle conflicts, Riak provides a variety of options. I will describe four of them.

The simplest conflict resolution strategy is “last write wins” (Cassandra uses it too), based on the latest timestamp. The client does not need to address conflicts at all.

The recommended option is to use multi-version values. For this option, Riak uses dotted version vectors. This structure is an improvement over vector clocks that can handle many clients with conflicting writes. Two conflicting writes create siblings – different values identified by their vector clock values. So you need to handle siblings in the client application.

Let’s create a bucket type with version vectors.

riak-admin bucket-type create vv '{"props":{"allow_mult":true}}'

Create a conflict:

http PUT http://localhost:10018/types/vv/buckets/cats/keys/favourite name=Puffy
http PUT http://localhost:10018/types/vv/buckets/cats/keys/favourite name=Pirate

When we try to get the value, Riak returns both siblings:

http http://localhost:10018/types/vv/buckets/cats/keys/favourite
HTTP/1.1 300 Multiple Choices
...
Siblings:
4IgpOUc3rR7wPOKbgJTGfJ
5UX8mBVbx3Y9Vno6ncicuO

So to resolve the conflict, the next update must include a causal context (the X-Riak-Vclock value from a previous response). If no context is given, Riak cannot reconstruct the lineage.

http PUT http://localhost:10018/types/vv/buckets/cats/keys/favourite \
    name=Pirate X-Riak-Vclock:"hexadecimal-clock-value-from-response"

Libraries encapsulate sibling handling with a fetch-modify-write loop. For example, the equivalent code in Ruby:

bucket = client.bucket('cats')
# read from Riak
obj = bucket.get_or_new('favourite', type: 'vv')
# deal with siblings
fail if obj.siblings
# update object locally
obj.content_type = 'application/json'
obj.data = {"name"=>"Pirate"}
# write to Riak
obj.store

So to write a key, you always need to read it first, thereby obtaining a causal context that is included in the write operation.

The third option is a strong consistency mode. Strong consistency is specified at a per-bucket level. For such buckets, Riak guarantees that no siblings will be returned during read operations. Since the causal context is used for write operations, no siblings will be produced either. Client code is free from handling conflicts (the line fail if obj.siblings can be dropped).

The fourth option appeared in 2014 in version 2.0 with the introduction of CRDTs (conflict-free replicated data types) like flags, registers, counters, sets, and maps. It is a rare and complex feature for databases. CRDTs allow the developer to ignore all the eventual consistency quirks, vector clocks, and sibling handling because conflict resolution is fused into the data types themselves, not the operations. Of course, data modelling using CRDTs is more challenging than just dropping blobs into the KV store. And full-fledged sequence CRDTs were left out of scope.
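As a sketch of how this looks in practice (the bucket type, bucket and key names here are illustrative), a counter can be created and updated over HTTP without any sibling handling on the client side:

riak-admin bucket-type create counters '{"props":{"datatype":"counter"}}'
riak-admin bucket-type activate counters

# increment and read back; no causal context is needed from the client
echo '{"increment": 1}' | http POST http://localhost:10018/types/counters/buckets/stats/datatypes/visits
http http://localhost:10018/types/counters/buckets/stats/datatypes/visits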

With this broad choice of consistency modes and options, it’s hard to call Riak a “simply AP” system.

MapReduce

As with every good SQL and NoSQL database, Riak provides server-side computational primitives.

In past years, you could run MapReduce queries with JavaScript functions registered in Riak via the HTTP API:

echo 'function(v) {
  var parsed_data = JSON.parse(v.values[0].data);
  var data = {};
  data[parsed_data.style] = parsed_data.capacity;
  return [data];
}' | http PUT http://localhost:10018/riak/my_functions/map_capacity

And then call the MapReduce with some inputs:

echo '{
  "inputs":[["rooms","101"],["rooms","102"],["rooms","103"]],
  "query":[
    {"map":{
      "language":"javascript",
      "bucket":"my_functions",
      "key":"map_capacity"
    }}
  ]
}' | http POST http://localhost:10018/mapred

The function can be inlined in this mapred call. Unfortunately, the JavaScript interface is deprecated and doesn’t work reliably anymore.

Nowadays you should write your functions in Erlang, and it can be a little tricky. The official documentation on MapReduce is very good but still lacks a complete example. The general approach is similar to Hadoop and HBase ecosystem techniques, where you build JARs, push them to the cluster, register them with the runtime and then execute. Let me show how to do it.

Colouring all the cats

Let’s analyse cats’ colours by their names:

White Stripes
Black Ash
Grey Hound
Small White Dot
Orange Pirate
Snow White

As we know from the internet, the number of cats is enormous. We should do it in a massively parallel fashion!

First, fire up the cluster. The official documentation describes the process well enough. Let the nodes live locally at $CLUSTERDIR/{dev1,dev2,dev3}, and let the Riak libraries needed to build our extension be located at $SDKDIR.

To check the cluster, run riak-admin member-status for the cluster membership and riak-admin ring-status for the state of the ring.

riak-admin member-status
================================= Membership ========
Status     Ring    Pending    Node
-----------------------------------------------------
valid      34.4%      --      [email protected]
valid      32.8%      --      [email protected]
valid      32.8%      --      [email protected]
-----------------------------------------------------
Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
ok

The interesting thing is that each vnode is implemented as an Erlang process, specifically a finite state machine with the gen_fsm behaviour. For example, my three-node cluster has 64 partitions and 64 vnodes, so each physical node gets 21 or 22 vnodes (approximately 33% of all partitions). More than that, many per-node operations (storage maintenance, handoff) are also implemented as processes. Since Erlang processes are really lightweight, running such a number of them is not a problem.

% riak-admin ring-status
================================== Claimant ========
Claimant:  '[email protected]'
Status:     up
Ring Ready: true

============================== Ownership Handoff ===
No pending changes.

============================== Unreachable Nodes ===
All nodes are up and reachable

ok

Write some code in colorer.erl: loading cats into the database (function load/1), the map phase (colormap/3) and the reduce phase (coloreduce/2).

-module(colorer).

-export([load/1]).
-export([colormap/3]).
-export([coloreduce/2]).
-export([get_color/1]).

%% Determine a cat color from its name
get_color(Name) ->
    NameTokens = string:tokens(Name, " "),
    case lists:member("White", NameTokens) of
        true ->
            "White";
        false ->
            case lists:member("Black", NameTokens) of
                true ->
                    "Black";
                false ->
                    "Mixed"
            end
    end.

%% map phase
colormap(Value, _Keydata, _Arg) ->
    ObjKey = riak_object:key(Value),
    KeyStr = binary_to_list(ObjKey),
    Color = get_color(KeyStr),
    error_logger:info_msg("colorer mapped key ~p to ~p", [KeyStr, Color]),
    [{Color, 1}].

%% reduce phase
coloreduce(List, _Arg) ->
    error_logger:info_msg("colorer reduce list ~p", [List]),
    ListOfDicts = [dict:from_list([I]) || I <- List],
    Merged =
        lists:foldl(fun(Item, Acc) -> dict:merge(fun(_, X, Y) -> X + Y end, Item, Acc) end,
                    dict:new(),
                    ListOfDicts),
    % convert dict to list
    dict:to_list(Merged).

%% End of list
insert(_Client, []) ->
    ok;
%% Put a cat from head of list to the Riak
insert(Client, [Head | Tail]) ->
    % All cats are adorable, so it goes to the Riak value as a JSON
    Body = list_to_binary(mochijson2:encode(#{<<"adorable">> => true})),
    % Riak key is a cat name
    Key = Head,
    RawObj = riakc_obj:new(<<"cats">>, Key, Body),
    Obj = riakc_obj:update_content_type(RawObj, "application/json"),
    case riakc_pb_socket:put(Client, Obj) of
        ok ->
            io:format("Inserted key ~p~n", [Head]);
        {error, Error} ->
            io:format("Error inserting key ~p", [Error])
    end,

    insert(Client, Tail).

%% Load cats from text file
load(Filename) ->
    {ok, Client} = riakc_pb_socket:start_link("127.0.0.1", 10017),
    {ok, Data} = file:read_file(Filename),
    Lines = re:split(Data, "\r?\n", [{return, binary}, trim]),
    insert(Client, Lines).

Put cats in a bucket


First things first, load all our cats into Riak.

Compile the file with erlc colorer.erl. It produces a BEAM file colorer.beam in the current directory. Then run the loader:

erl -pa $SDKDIR/lib/riakc/ebin \
  $SDKDIR/lib/riak_pb/ebin $SDKDIR/lib/mochiweb/ebin \
  -noshell -eval 'colorer:load("cats.txt"), halt().'

Since we connect to the Riak cluster from the load/1 function via the Protobuf protocol, we can use a standalone Erlang virtual machine, not the one Riak runs on.

Now check the bucket cats:

http http://localhost:10018/buckets/cats/keys keys==true
HTTP/1.1 200 OK
Content-Type: application/json
Server: MochiWeb/3.0.0 WebMachine/1.11.1 (greased slide to failure)
Vary: Accept-Encoding

{
    "keys": [
        "Snow White",
        "White Stripes",
        "Grey Hound",
        "Small White Dot",
        "Black Ash",
        "Orange Pirate"
    ]
}

Check a random cat:

http http://localhost:10018/buckets/cats/keys/Snow%20White
HTTP/1.1 200 OK
Link: </buckets/cats>; rel="up"
Server: MochiWeb/3.0.0 WebMachine/1.11.1 (greased slide to failure)
Vary: Accept-Encoding
X-Riak-Vclock: a85hYGBgzGDKBVI8R4M2ckekvudgYIyfmMGUyJnHyjDl29a7fFkA

{
    "adorable": true
}

Great, all our cats are in a bucket.

Running MapReduce job

All Riak nodes must be able to load the compiled colorer.beam. Add the path to its directory to riak/etc/advanced.config for each node and restart the cluster:

[
 {riak_kv,
  [
   {add_paths, ["YOURDIR"]}
  ]},

 {riak_core,
  ...
 }
].

Each time you recompile the file, the cluster must reload the BEAM file. Since Erlang supports hot code reloading, this can be achieved without restarting the cluster!

erlc colorer.erl && for N in 1 2 3 ; do ${CLUSTERDIR}/dev$N/riak/bin/riak-admin erl-reload ; done

Now we can ask Riak to launch a job, as we did in the JavaScript example.

echo '{"inputs":"cats", "query":[
  {"map":{"language":"erlang","module":"colorer","function":"colormap"}},
  {"reduce":{"language":"erlang","module":"colorer","function":"coloreduce"}}]}' | http POST localhost:10018/mapred

And we get the answer:

{
    "Black": 1,
    "Mixed": 2,
    "White": 3
}

So we have just run a distributed colorer job on our cluster!

Let’s explore what is going on inside. You can actually test the map and reduce functions from the REPL with manually specified inputs, but for actual remote invocations let’s log the inputs:

error_logger:info_msg("colorer mapped key ~p to ~p", [KeyStr, Color]).

The output can be seen in $CLUSTER_DIR/dev*/riak/log/console.log: colorer mapped key "Black Ash" to "Black". So the mapper produces a list of tuples with the cat’s colour: for each cat we emit a colour name (an Erlang string here) and an initial count of one. The value is not analysed for simplicity, but it can easily be extracted with riak_object:get_value(Value) and converted to a dict using mochijson2.

Then all the outputs of the map phase are passed to the reduce phase (remember, we specified the function and module name in the HTTP invocation). The coloreduce/2 function gets the list of all tuples: colorer reduce list [{"Black",1},{"Mixed",1},{"White",1},{"White",1},{"White",1},{"Mixed",1}]. Then it converts them to dicts and folds them into a resulting dict:

[{"Black",1},{"Mixed",2},{"White",3}]

So the function returns a list of key-value pairs, and Riak renders the MapReduce output as JSON. Outputs and inputs between stages are still binary-serialised Erlang objects.

It isn’t difficult at all.

Bitcask

Riak has a pluggable backend architecture. The default storage engine is Bitcask. Another widely used backend is a customised version of LevelDB, an LSM-based (log-structured merge tree) key-value store. The main reason to prefer LevelDB over Bitcask is having too many keys to fit into the node’s memory, because Bitcask requires the entire keyspace to reside in memory.

A lot of literature is dedicated to LSM engines. I would like to describe the Bitcask architecture a bit (pun intended).

This famous storage engine is described in a really short paper that is worth reading. Bitcask is a key-value store. The original design covers the on-disk format, data flow, access protocols and the Erlang application. Some parts may remind you of an LSM tree, but it is a much simpler engine.

A database node handles a directory of data files exclusively. The directory contains multiple data files, and the most recent one is called the active file. A node can only append to the active file; all other files are read-only. So all writes are sequential and therefore blazing fast. When the active file grows over a limit, it is closed, and another one is created as the new active file.

The structure of a key/value entry is simple: a variable-sized record with the sizes of the key and the value specified in a header. The whole record is checksummed.

0           4           8           12           16
+-----------+-----------+-----------+------------+--------+--------+
|    CRC    | TIMESTAMP | KEY_SIZE  | VALUE_SIZE |  KEY   | VALUE  |
+-----------+-----------+-----------+------------+--------+--------+

Bitcask requires maintaining an in-memory structure called the “keydir”. It stores a mapping from each key to its location information: the file, the offset of the corresponding entry within that file, and the most recent timestamp.

Reads are simple. The node checks the keydir in constant time, then opens the file (if not opened yet) where the record is stored and seeks to the specified offset. Since the size information is stored in the entry header, the node reads value_size bytes containing the value. The total cost of a read operation is a single random disk seek.

To perform a write, the node just appends a new entry to the active file and updates the information in the keydir. If the key is already present, its previous appearances in the data files become orphaned (nothing points to them anymore). The cost of a write operation is a sequential write to disk without any seeks.
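To make the data flow concrete, here is a condensed sketch of both paths in Python, in the spirit of the CaskDB project mentioned below: a single data file, no merging or hint files, and the record layout from the figure above.

import os
import struct
import time
import zlib

# CRC, timestamp, key size, value size: four 32-bit header fields
HEADER = struct.Struct("<IIII")

class TinyBitcask:
    def __init__(self, path):
        self.f = open(path, "a+b")
        self.keydir = {}  # key -> (offset of the entry, value size)

    def put(self, key, value):
        # header without the CRC field, then key and value
        body = HEADER.pack(0, int(time.time()), len(key), len(value))[4:] + key + value
        crc = zlib.crc32(body)  # checksum covers everything after the CRC
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        self.f.write(struct.pack("<I", crc) + body)  # sequential append only
        self.f.flush()
        self.keydir[key] = (offset, len(value))      # update the in-memory keydir

    def get(self, key):
        offset, value_size = self.keydir[key]         # constant-time lookup
        self.f.seek(offset + HEADER.size + len(key))  # single seek to the value
        return self.f.read(value_size)

db = TinyBitcask("cats.db")
db.put(b"Snow White", b'{"adorable": true}')
assert db.get(b"Snow White") == b'{"adorable": true}'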

To prune orphaned records, Bitcask uses merging, which is much simpler than LSM merges. Bitcask knows all the keys because they are stored in the keydir. It iterates over all files in the directory and keeps only the latest entry for each key, writing those entries into a new file. The old files can then be removed, and the merge is complete.

The keydir is both the major advantage and the major disadvantage of Bitcask: it must store all the keys. If the keyspace is larger than the available memory, keys cannot be stored in a hybrid manner (partly in memory, partly on disk), and an LSM-based engine should be preferred. Also, when a node starts, it must populate the keydir with all keys from all files (not only the active file); this can be optimised by using hint files.

The interesting thing about Bitcask is the lack of a commit log or WAL: the data files are the log itself. Concurrency is not described in the paper and is addressed only in the implementation.

My implementation

Speaking of implementation, I stumbled upon a really fun project, CaskDB. It is a skeleton for designing your own Bitcask store in Python, with a lot of tests already written.

I have written an extremely simple storage engine; concurrency and a network protocol are out of scope. Surprisingly, it was a really fun project to code Bitcask in modern Python over a few evenings. If by any chance you are interested, it is on my GitHub. I made a few additions compared to the original task, primarily in tests. I used property-based testing with Hypothesis, which helped to pinpoint hard-to-find bugs; I think it is an essential technique for data engines. Other useful techniques are fuzzing and secure coding with invariants. Of course, a whole class of problems can be avoided by using safer languages with static typing like Rust, but that would be another project. Stay tuned!

Outcome

Implementing the Bitcask storage engine helps you understand how the engine works, where it shines and what the downsides are. It is definitely worth your Netflix weekend :)

Riak is a little outdated, and fifteen years seems like an eternity in the database world. But it is a pure elixir (pun intended) of design approaches: leaderless replication, tunable consistency, causal contexts, CRDTs, effective storage engines and other important concepts. It is also a beautiful example of a well-architected OTP application. Long live Riak!