Showing off our fancy new K-sortable IDs

Giving everything a unique identity is even more complicated than it sounds. Here's how we built a system to help us do just that.
By Team member, 20/09/2018
5 minute read

It sounds simple on the surface: give everything a unique identity. However, in computer systems, as in the real world, it is never that simple. While in the real world humans can use common sense to decipher inconsistencies, a distributed system processing hundreds of transactions a second is less tolerant. As we grew, it became apparent that we needed to tackle this problem, and here we document how we solved it.

What we had

Cuvva's architecture is a distributed set of microservices, mostly communicating over HTTP. In a distributed system, being able to uniquely identify resources across services is very important, particularly given that our system uses a variety of different languages and databases.

All services at Cuvva used one of two schemes to assign unique identifiers: services with PostgreSQL as their primary database used a random UUID (RFC 4122 version 4), and services with MongoDB as their primary database used an ObjectID.

Problems with that

With both UUIDs and ObjectIDs, the client is responsible for creating a new identifier and sending it to the database. This introduces a rather concerning problem: collisions, where two or more clients can create the same identifier, leaving multiple resources identified by the same, supposedly unique, identifier.

A random UUID has no safeguard against collisions; it relies on trust that every system creating a UUID has a suitable source of random numbers (a pseudorandom number generator). A machine without a sufficient level of randomness (entropy), whether accidentally or through an attacker's intervention, can very conceivably create a UUID which collides with one already created elsewhere in the system. Additionally, a random UUID has no time component, which is useful in a distributed system for establishing the order of events across multiple processes.

An ObjectID does have a safeguard against collisions: each ObjectID embeds a 5-byte unique machine identifier from the client, composed of 3 bytes taken from the machine's network MAC address and 2 bytes for the process ID on the machine. This unique machine identifier means each client can safely create an incrementing sequential identifier, trusting that any other identifier created by a different client will be prefixed by, and so not collide with, a different machine identifier.
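
The layout described above can be sketched by splitting a 12-byte ObjectID into its fields. This is an illustrative decoder, not the MongoDB driver's own code; the sample bytes are made up.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeObjectID splits a 12-byte ObjectID into the fields described
// above: a 4-byte timestamp, a 3-byte machine identifier, a 2-byte
// process ID and a 3-byte counter.
func decodeObjectID(id [12]byte) (ts uint32, machine [3]byte, pid uint16, counter uint32) {
	ts = binary.BigEndian.Uint32(id[0:4])
	copy(machine[:], id[4:7])
	pid = binary.BigEndian.Uint16(id[7:9])
	counter = uint32(id[9])<<16 | uint32(id[10])<<8 | uint32(id[11])
	return
}

func main() {
	// fabricated example bytes, purely for illustration
	id := [12]byte{0x5b, 0xa3, 0x5f, 0x00, 0xaa, 0xbb, 0xcc, 0x12, 0x34, 0x00, 0x00, 0x2a}
	ts, machine, pid, counter := decodeObjectID(id)
	fmt.Println(ts, machine, pid, counter)
}
```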

However, the unique machine identifier in an ObjectID makes two incorrect assumptions: that a MAC address is guaranteed to be unique, and that a process ID will never be bigger than 65,535.

While the MAC addresses assigned by reputable manufacturers are reasonably assured to be unique, they are only unique among devices produced by that manufacturer. The device portion of the MAC address can (and does) collide between manufacturers.

In a computer operating system, every running process is assigned a number from an incrementing sequence. Traditionally this was represented using 2 bytes (a uint16), limiting a system to 65,535 processes. However, most modern operating systems have removed this limit: the highest process ID on macOS is 99,999, and on Linux, while the default limit is 32,768, it can be configured as high as 4,194,303. The behaviour we observed was that when an ObjectID was created with a process ID higher than the 2-byte maximum, it wrapped back around to zero, potentially colliding with other machine identifiers.
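
The wraparound can be demonstrated in a couple of lines: storing a process ID in a uint16 keeps only the low 16 bits, so two PIDs that are 65,536 apart become indistinguishable.

```go
package main

import "fmt"

// truncatePID mimics storing a process ID in 2 bytes, as ObjectID
// does: only the low 16 bits survive, i.e. pid % 65536.
func truncatePID(pid int) uint16 {
	return uint16(pid)
}

func main() {
	fmt.Println(truncatePID(70000))                             // 4464, not 70000
	fmt.Println(truncatePID(70000) == truncatePID(70000+65536)) // true: a collision
}
```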

What we want

Before choosing, or building, a new scheme for creating unique identifiers to replace all existing schemes, we needed to set out some desirable characteristics. After some collaboration on a GitHub issue, we had a list of what we wanted:

  • must be consistent across the languages, and compatible with the databases, used at Cuvva
  • must be identifiable as to what kind of resource an ID pertains to
  • an unordered list of IDs must be roughly sortable by time, to gauge the sequence of events across services
  • must not require any coordination, human or otherwise, at startup
  • must be URL-safe, so it can be placed in an HTTP request or HTML body
  • usability should be a factor, e.g. is the sequencing obvious, and does double-click-to-select capture the whole ID

We looked at several options, including Twitter Snowflake, Boundary Flake and Segment KSUID. However, we found that none of these satisfied our criteria above. At this point we began to collaborate on the design of a scheme which would take all of these criteria into account.

What we did

To begin, we defined what we wanted our encoded "binary" portion to contain. We settled on a 64-bit unix timestamp, 9 bytes of unique instance identifier per running instance, and a 32-bit incrementing sequence number, reset every second.

For the 9-byte unique instance identifier, we break it down into 1 byte classifying which type of instance identifier is encoded into the following 8 bytes. The 8 bytes must be unique to that instance and be created without requiring coordination at startup; our implementations include using partial Docker container IDs, which are already centrally coordinated.

To comply with our URL-safety requirement, we selected base 62 to encode the binary portion, as it is the most compact encoding available while remaining within the mixed-case alphanumeric character set.
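
A minimal base 62 encoder treats the raw bytes as one big-endian integer and repeatedly divides by 62. This is a simple sketch, not a performance-tuned implementation, and it assumes the `0-9A-Za-z` alphabet order, which keeps the encoded output sortable under byte-wise comparison.

```go
package main

import (
	"fmt"
	"math/big"
)

const alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

// encodeBase62 converts raw bytes to base 62 by treating them as a
// single big-endian integer and repeatedly dividing by 62.
func encodeBase62(raw []byte) string {
	n := new(big.Int).SetBytes(raw)
	base := big.NewInt(62)
	mod := new(big.Int)
	var out []byte
	for n.Sign() > 0 {
		n.DivMod(n, base, mod)
		out = append(out, alphabet[mod.Int64()])
	}
	if len(out) == 0 {
		return "0"
	}
	// digits were produced least-significant first; reverse them
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return string(out)
}

func main() {
	fmt.Println(encodeBase62([]byte{0xff})) // "47": 255 = 4*62 + 7
}
```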

Finally, to make identifiers distinguishable yet opaque throughout the system, we decided to prefix all identifiers with a short human-readable name, separated from the ID by an underscore. For use outside of production environments, a second prefix may be prepended with the name of the environment, or of the pull request which created that environment, to prevent cross-pollination between production and development environments.
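
The prefixing rule is straightforward string assembly. The function and example values below are illustrative, assuming the underscore-separated format described above.

```go
package main

import "fmt"

// formatID joins an optional environment prefix, the human-readable
// resource name and the base62-encoded payload with underscores,
// following the scheme described above. Names here are illustrative.
func formatID(env, resource, encoded string) string {
	if env != "" {
		return env + "_" + resource + "_" + encoded
	}
	return resource + "_" + encoded
}

func main() {
	fmt.Println(formatID("", "user", "000000ExampleEncodedPayload"))     // production
	fmt.Println(formatID("test", "user", "000000ExampleEncodedPayload")) // dev environment
}
```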

With the specification complete, we implemented it in Go, which became the reference implementation, and subsequently implemented it in Node.js, comparing its output with the Go output.

Problems

Overall the project was a success, but not without a few teething problems during development and migration.

Our first stumbling block was our choice of base 62 for encoding the binary portion of identifiers. It turns out there are very few high-quality, high-performance implementations of this encoding in any of our required languages. What ensued was a process of implementing base 62 encoding in one language, re-implementing it in another, then comparing the output of both, and of third-party implementations, to ensure consistency in their encoding.

While we were content with our mechanism for establishing a unique instance identifier on a bare host and in a Docker container, we also run a small number of functions on AWS Lambda and Lambda@Edge which need to generate identifiers too. The trouble is, very little unique information about the container is made available to our function: no trustworthy MAC address, no container ID. As a result, we had to use the invocation ID as a source of entropy to generate a "random" instance identifier.

As our stack primarily consists of services written in Go and Node.js, it is crucial that these implementations are at least compatible with each other. Unfortunately, while Go natively supports the 64-bit integers we decided on for the timestamp portion, Node.js was a different story. JavaScript does not differentiate between integers and decimals, representing all numbers as IEEE-754 double-precision floating-point numbers. While these do occupy 64 bits, they only offer integer precision up to 53 bits. As a result, large timestamps lose accuracy in the JavaScript implementation; while this isn't currently a problem, and remains unresolved for us, it will become one in the future.
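
Go's float64 is the same IEEE-754 double-precision type JavaScript uses for all numbers, so the precision cliff can be demonstrated directly in Go: above 2^53, consecutive integers are no longer distinguishable as floats.

```go
package main

import "fmt"

func main() {
	// 2^53 is the largest integer at which float64 (and therefore a
	// JavaScript number) can still represent every consecutive integer.
	n := uint64(1) << 53
	fmt.Println(float64(n) == float64(n+1)) // true: the float can't tell them apart
	fmt.Println(n == n+1)                   // false: a real 64-bit integer can
}
```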

Finally, as the majority of our migration to KSUID across our services was completing, we noticed that despite intensive testing, one of our required traits did not hold: sorting by time. When comparing the order of sorted IDs against the relevant timestamps, there were inconsistencies. While furiously debugging and planning how we would both fix the issue and retroactively apply the fix to already-generated IDs, we noticed the sorting problem only affected PostgreSQL. Sorting in Go, JavaScript, MongoDB and the terminal was all consistent with what we expected to see from PostgreSQL.

Sure enough, it turns out you can control the character set used to sort and collate rows in PostgreSQL; for us it defaulted to `en_GB.utf8`, which orders lowercase letters before uppercase, whereas everything else we used for testing used `C` ordering, which orders uppercase letters before lowercase. Not only can you change the collation after table creation, you can configure it per column, so we ran a migration across our ID columns and the problem was solved.

Summary

Uniquely identifying resources, it turns out, is not simple. A number of solutions exist, but it is far from a solved problem. We are happy with the result of this project, but we are keenly following developments in this space, and are happy to hear how other companies have tackled the same problem.

We intend to open source our KSUID implementation in the future, so keep an eye on our GitHub.

Check out the source here.

Team member