Log structures

Now that we’ve implemented the filesystem-as-a-database approach to get some experience working with the filesystem, we’re going to backtrack a little. As I mentioned before, filesystem-as-a-database is pretty straightforward and works well on a small scale, but has serious limitations beyond that. Modern filesystems typically aren’t optimized for the usage pattern required to operate as a database directly.

We’re going to set aside the filesystem-as-a-database code and start implementing a log-structured system to persist our key-value store. We’ll start simple to get a good understanding of tradeoffs for various levels of complexity.

First we’ll just write our entries directly to a log on disk for set() and delete() operations, and scan everything to find the value for get() operations. Then we can run the benchmarks and see how much room there is for improvement (spoiler: there’ll be a lot).

Let’s cover some terminology first. In this section I’ll be talking about entries and logs.



Entry

For our purposes, we’ll define an entry as a key and value pair. For example, { name: 'Joe' } is an entry where name is the key and Joe is the value. An entry can be represented as a string by serializing it to JSON; e.g. JSON.stringify({ name: 'Joe' }) produces the JSON string {"name":"Joe"}.


Log

A log is a data structure where new data is only written to the end. In other words, it’s append-only. For example, if we only push new elements onto an array and don’t perform any other mutations, we can consider it a log structure.

Likewise, if we have a file that we only append data to at the end, it’s also a log. If we’re mutating an array or file, it’s generally not considered a log anymore.
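To make the append-only idea concrete, here’s a minimal sketch of an in-memory log. The appendEntry helper is just for illustration and isn’t part of our store:

```javascript
// An in-memory log: an array we only ever append to.
const log = []

// The only mutation we allow is pushing onto the end.
function appendEntry(entry) {
  log.push(entry)
}

appendEntry(['key-1', 'value-1'])
appendEntry(['key-1', 'value-2']) // An "update" is just another appended entry.

// Earlier entries are never modified; a reader finds the latest entry for a
// key by scanning from the end of the log.
console.log(log.length) // 2
```

Reading the latest value for a key means scanning backward from the end, which is exactly what our file-based get implementation will do.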

File format

Our file format is going to be a log of JSON strings, one on each line. The JSON strings will represent entries where the first element of the array on each line is the key, and the second element is the value.

We’ll use the .json file extension to make it easy to view the files in an editor, even though they will actually contain several JSON strings, separated by newlines (specifically, Line Feed characters, often designated as LF or \n).

Here’s an example of what one of our files might contain:

[["animal",1],{"name":"Hobgoblin","description":"An unsightly little elf"}]\n

We’ll typically use arrays for our data layout so that we have control over the way it serializes to JSON. Although modern JavaScript (ES6+) has predictable property ordering and likely would serialize consistently, I’d rather be explicit.

For example, the JavaScript object { key: 'key-1', value: 'value-1' } could serialize to JSON as either {"key":"key-1","value":"value-1"} or {"value":"value-1","key":"key-1"}. That’s why we’re going to use arrays for the entry and the key, e.g. ['key-1','value-1'], so we always get the same result when serializing. Later on, we’ll write code to normalize any keys that are JavaScript objects into consistently sorted arrays.
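As a rough sketch of that future normalization step, an object key can be converted into an array of [property, value] pairs sorted by property name. The normalizeKey helper here is hypothetical; we’ll write the real version later:

```javascript
// Hypothetical helper: turns a plain-object key into a deterministically
// ordered array of [property, value] pairs. Non-object keys pass through.
function normalizeKey(key) {
  if (typeof key !== 'object' || key === null) return key
  return Object.keys(key)
    .sort()
    .map(prop => [prop, key[prop]])
}

// Both objects normalize to the same array, so they serialize identically
// regardless of the order their properties were defined in.
const a = JSON.stringify(normalizeKey({ animal: 1, name: 'cat' }))
const b = JSON.stringify(normalizeKey({ name: 'cat', animal: 1 }))
console.log(a === b) // true
console.log(a) // [["animal",1],["name","cat"]]
```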

All whitespace in JSON outside of data values is optional, and special characters within JSON data values are escaped (with \n for Line Feeds, for example). Whitespace just allows JSON to be pretty-printed for readability. When we write JSON entries to disk, we won’t use any optional whitespace. This allows us to have a single, complete JSON payload on each line without doing extra work.
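A quick experiment shows both points: JSON.stringify escapes a raw Line Feed inside a value, so a serialized entry never spans more than one line, and parsing restores the original value:

```javascript
// A value containing a real Line Feed character.
const entry = ['key-1', 'line one\nline two', false]

const json = JSON.stringify(entry)

// The serialized string contains the two-character escape \n, not a raw
// newline, so the entry stays on a single line in our file.
console.log(json.includes('\n')) // false

// Parsing the JSON back restores the original value intact.
console.log(JSON.parse(json)[1] === 'line one\nline two') // true
```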


We don’t need to make any changes to the tests, since we’ve already handled setup for the database paths in the filesystem-as-a-database section. The tests should continue to pass with the new implementation.


In the repo, check out the key-value-store-disk_before branch if you haven’t already and install dependencies by running npm install.

Imports and constants

Open src/key-value-store.js and update the following code above the class export:

import path from 'path'
import fs from 'fs'
import shell from 'shelljs'

const KEY_INDEX = 0
const VALUE_INDEX = 1
const IS_DELETED_INDEX = 2

This brings in some packages we’ll be using and sets a few constants. These constants name the array indexes where we will be reading and writing parts of the entries we use to store data.

We don’t need to change the constructor() or init() methods for now.

set(key, value, isDeleted)

Now we get to the implementation for the set method. Any time this method is called, we’re going to append a new line to our store.json file with the JSON string representing the key and value for the entry, and a flag for whether or not the key has been deleted.

Update the set method so it looks like the following:

set(key, value, isDeleted = false) {
  // Creates the file if it does not exist, and appends a new line to the end
  // containing the JSON for the entry.
  fs.appendFileSync(
    path.resolve(this.dbPath, 'store.json'),
    JSON.stringify([key, value, isDeleted]) + '\n'
  )
}

Notice that we’re storing the key at index 0, the value at index 1, and the isDeleted flag at index 2. These indexes correspond to the constants we set at the top of the file. Recall that we are using arrays to store the parts of the entries so we can have a predictable serialization output. This will become important later when we start sorting the files to make searches more efficient.

Here’s an example of calling the set method and the corresponding store.json file contents that will be written to disk:

set('test-1', 'value-1')
set('test-2', 'value-2', false)
set('test-3', null, true)

After these calls, store.json contains:

["test-1","value-1",false]
["test-2","value-2",false]
["test-3",null,true]

Since new keys, updated keys, and deleted keys are always represented by appending a new line to the store.json file, it’s a log of entries. Because it contains a collection of strings, separated by newlines, it is also a string table.


delete(key)

Let’s skip ahead to the delete method, since it’s pretty simple. For this implementation, a delete will just write a new entry for the given key with the isDeleted flag set to true.

delete(key) {
  this.set(key, null, true)
}

We set the value to null because we don’t care what it is: any time an entry has been deleted, get will return undefined as the value. JSON doesn’t have an undefined value, so we arbitrarily use null.
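A quick check shows why: undefined isn’t representable in JSON, while null round-trips cleanly:

```javascript
// undefined is not representable in JSON: inside an array it becomes null.
console.log(JSON.stringify(['key-1', undefined, true])) // ["key-1",null,true]

// null survives a serialize/parse round trip unchanged.
console.log(JSON.parse(JSON.stringify(['key-1', null, true]))[1]) // null
```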


get(key)

The get method is going to be the most complex for this implementation. We have to read the store.json log file, parse each of its lines into an entry (an array), and then search the entries for the latest value for the given key. The search has to run in reverse so that we match the most recent entry for the key first.

get(key) {
  if (!fs.existsSync(path.resolve(this.dbPath, 'store.json'))) {
    return undefined // If the store doesn't exist, it can't contain the key.
  }

  // readFileSync returns a Buffer object that represents binary data.
  const buffer = fs.readFileSync(path.resolve(this.dbPath, 'store.json'))

  // Stringify the buffer so we can split it into lines.
  const bufferString = buffer.toString()

  // Split the buffer into lines.
  const lines = bufferString.split('\n')

  // Filter out empty lines--usually the last one since we always write a
  // newline after each set().  This leaves us with just JSON data.
  const jsonLines = lines.filter(line => line.length > 0)

  // Parse the JSON in each line into an array representing an entry.
  const entries = jsonLines.map(jsonLine => JSON.parse(jsonLine))

  // We want to search most recent entries first.  In JavaScript, .sort() and
  // .reverse() modify the array in-place.
  entries.reverse()

  for (const entry of entries) {
    if (entry[KEY_INDEX] === key) {
      return entry[IS_DELETED_INDEX]
        ? undefined // The isDeleted flag is set to true
        : entry[VALUE_INDEX] // Return the value for the key
    }
  }

  return undefined // The key was not found
}

First we handle an edge case that we have a test for in tests/key-value-store.js. If no values have been set yet, the store.json file won’t exist. In that case, we know there are no keys with a value, so we can just return undefined.

The next several assignments before the for loop read the store.json file and do several transformations on the data. Here’s a walkthrough of how these operations would affect an example store.json file’s content, after stringifying it.

You can open your browser’s developer tools (F12 in Chrome) and paste in these blocks to see what happens. Don’t worry about the extra spaces in the buffer string.

const bufferString = `
  ["test-1","value-1",false]
  ["test-2","value-2",false]
  ["test-3",null,true]
  ["test-2","new-value",false]
`

const lines = bufferString.split('\n')


/* Output:
  0: ""
  1: "  ["test-1","value-1",false]"
  2: "  ["test-2","value-2",false]"
  3: "  ["test-3",null,true]"
  4: "  ["test-2","new-value",false]"
  5: ""
*/

const jsonLines = lines.filter(line => line.length > 0)


/* Output:
  0: "  ["test-1","value-1",false]"
  1: "  ["test-2","value-2",false]"
  2: "  ["test-3",null,true]"
  3: "  ["test-2","new-value",false]"
*/

const entries = jsonLines.map(jsonLine => JSON.parse(jsonLine))


/* Output:
  0: "test-1", 1: "value-1",   2: false
  0: "test-2", 1: "value-2",   2: false
  0: "test-3", 1: null,        2: true
  0: "test-2", 1: "new-value", 2: false
*/

const reversedEntries = entries.reverse()


/* Output:
  0: "test-2", 1: "new-value", 2: false
  0: "test-3", 1: null,        2: true
  0: "test-2", 1: "value-2",   2: false
  0: "test-1", 1: "value-1",   2: false
*/

Now that all the key-value store’s entries have been read into memory, the rest of the get method searches for the passed-in key. We loop through the reversed entries, and the first time the key is found, we check its isDeleted flag: if the key was deleted, we return undefined; otherwise, we return the corresponding value.

Finally, if no entries matched the given key, we return undefined to indicate it was not found.

That’s it for the implementation. You might be wondering about performance, since we have to read the entire key-value store into memory each time we want to search for a single value. That’s true, but keep in mind that we’re not yet optimizing for performance. This first disk-based persistence implementation keeps things simple and will give us something to build on.

Let’s use our benchmarks to look at the performance and compare it to the previous implementations.


Benchmarks

The benchmarks should be ready to run after our changes in the filesystem-as-a-database section. You can now run the benchmarks with the command npm run benchmarks. Here’s the output when I run the benchmarks against the new implementation on my local machine:

> @learndb/learndb@0.1.1 benchmarks C:\projects\learndb
> node --require @babel/register src/benchmarks.js

Starting RSS memory usage: 63.109 MB
KeyValueStore#set x 2,137 ops/sec ±2.88% (64 runs sampled)
KeyValueStore#get x 62.86 ops/sec ±1.98% (63 runs sampled)
KeyValueStore#delete x 2,352 ops/sec ±0.66% (88 runs sampled)
Ending RSS memory usage: 83.066 MB
Difference: 19.957 MB

Compare this to the benchmark results for the simple in-memory implementation:

> @learndb/learndb@0.1.1 benchmarks C:\projects\learndb
> node --require @babel/register src/benchmarks.js

Starting RSS memory usage: 57.676 MB
KeyValueStore#set x 1,774,185 ops/sec ±2.53% (84 runs sampled)
KeyValueStore#get x 11,633,332 ops/sec ±3.46% (83 runs sampled)
KeyValueStore#delete x 11,518,884 ops/sec ±3.51% (85 runs sampled)
Ending RSS memory usage: 82.695 MB
Difference: 25.020 MB

And the benchmark results for the filesystem-as-a-database implementation:

> @learndb/learndb@0.1.1 benchmarks C:\projects\learndb
> node --require @babel/register src/benchmarks.js

Starting RSS memory usage: 47.426 MB
KeyValueStore#set x 1,106 ops/sec ±1.12% (86 runs sampled)
KeyValueStore#get x 9,631 ops/sec ±4.52% (77 runs sampled)
KeyValueStore#delete x 9,730 ops/sec ±4.82% (69 runs sampled)
Ending RSS memory usage: 76.145 MB
Difference: 28.719 MB

Big difference in performance! That’s expected, though: we’re just getting started with disk-based persistence, and we should see performance increase across the upcoming chapters. Do take note that set operations are faster when appending entries to a single file than when writing each entry to a separate file.

Remember, you can check out the corresponding key-value-store-disk_after branch to see the end result of the changes made in this section.