Joining Hands and Singing Merrily Part 2

In the last post of this series, we presented an overview of the XFIL project and a few of the security concerns it has to contend with. We also began dissecting our development process, describing the high-level steps we take to define requirements and explore solutions, and closed with a few words about the values the team holds with regards to documentation and testing. In this post, we're going to dive into the specifics of the technologies we use: our choices of programming languages, supporting tools, and infrastructure.

Programming languages

Choosing which programming language to use is an important and sometimes difficult decision. Each language brings with it its own set of risks that you have to be prepared to deal with. In Stratum's case, we have a wide assortment of needs across different domains, and we've chosen a few languages to cover them.

The above diagram roughly illustrates the applications of the three languages employed for most of what we've built so far. Each of the three large circles represents an application domain, and the overlaps between them represent interfaces between actual services.

JavaScript

The first language we adopted, applied mostly to backend services that act largely as behind-the-scenes APIs, was JavaScript. While there are plenty of valid criticisms to be made of JS, choosing it for web-based applications and services is uncontroversial. JavaScript allowed us to develop services quickly while taking advantage of the thriving communities behind Node.js and the language as a whole. By adhering to some strong design principles and focusing on the best the language has to offer, I'm happy to say our JS-based services are elegant and hum away doing their jobs very well.

Go

Google's Go programming language is perhaps the real workhorse of the XFIL architecture. While Go 1.0, the first stable release, only arrived four years ago, the language has a number of things going for it that have made it a sound choice for a lot of projects. For example,

  • Compilation times are incredibly short, and binaries are statically linked by default, so development and deployment are fast and easy
  • It compiles to native code but has a garbage collector, so it runs pretty fast but doesn't leave you to deal with memory management
  • It has a really nice concurrency story built into the language in the form of goroutines and channels (see the sketch just below this list)
  • It is an incredibly simple language by design, so it's easy for people who have never used it to pick it up
  • The standard library contains a very complete set of cryptographic primitives which have been developed in no small part by the legendary Adam Langley

It also has a silly mascot
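
As a quick illustration of that concurrency story, here's a minimal worker-pool sketch built from nothing but goroutines, channels, and the standard library. It's illustrative only, not code from our services:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)
	results := make(chan int)

	// Start a small pool of workers, each reading work from the jobs
	// channel and writing answers to the results channel.
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				results <- n * n // stand-in for real work
			}
		}()
	}

	// Close the results channel once every worker has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Feed the pool, then close the jobs channel to signal "no more work."
	go func() {
		for n := 1; n <= 10; n++ {
			jobs <- n
		}
		close(jobs)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```

The same few primitives scale naturally to the fan-out, fan-in patterns that networked ingestion services tend to need.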

Go has found itself a place in the hearts of many software engineers, especially those who deal with distributed, concurrent, networking-focused systems. In that vein, we have put it to work at Stratum to power the backend services for which security and simplicity are particularly important, and it makes up the overwhelming majority of our server implementations for data ingestion.

Rust

The latest addition to the roster is the Mozilla-backed Rust programming language. Rust is a systems programming language with a serious ambition to replace C and C++. While Rust is even younger than Go, not counting the six or seven years it spent in development before the 1.0 release, it has already seen some impressive applications. To name a few: it has been deployed to millions of users in Firefox, is being used to build a brand new browser rendering engine (Servo), and has high-profile users like Dropbox, and that's not counting the hobbyist operating systems and other things people are building.

Something that really sets Rust apart is its designers' assertion that programs can be both safe and performant. Rust makes this assertion a reality through its ownership system, which provides compile-time guarantees about memory safety, effectively eliminating entire classes of memory and data-race vulnerabilities, including the kind of out-of-bounds read behind Heartbleed. On top of all of that, Rust has a great community, a truly top-notch book, and an incredible package manager in Cargo.
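
To make the ownership point concrete, here's a contrived little sketch, illustrative only and not code from the XFIL Agent, of the compiler catching a use-after-move, the sort of mistake that in C or C++ could become a use-after-free:

```rust
fn main() {
    let secrets = vec![String::from("key material")];

    // Ownership of `secrets` moves into `consume` here...
    consume(secrets);

    // ...so touching it afterwards is a compile-time error, not a latent
    // use-after-free. Uncommenting the next line makes the build fail
    // because `secrets` has been moved.
    // println!("{:?}", secrets);
}

fn consume(data: Vec<String>) {
    // `data` is freed deterministically when it goes out of scope at the
    // end of this function -- no garbage collector involved.
    println!("handled {} item(s)", data.len());
}
```

The rejected line never makes it into a binary, which is exactly the point: the whole class of bug is gone before the code ever runs.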

Given that our other serious options for the XFIL Agent, code that we have to deploy to our clients' own machines, were C and C++, Rust is far and away the better choice. Its powerful type system, safety guarantees, and high-level features have allowed us to solve problems with more confidence than any of our other options would have afforded. We're very happy with our results, as well as with the efficiency and reliability of the software we've built with it so far.

Supporting Technologies

The development process involves a lot more than just writing code, and no developer worth their salt should ever be content saying "it works on my machine." To make our development and testing process buttery smooth, we've been using a few great tools to help us out.

Docker

Docker is a containerization platform with accompanying tools. In a nutshell, Docker packages up entire environments, from the operating system up, into images. Containers launched from those images can then be run and virtually networked by Docker to automatically spin up entire clusters of what are effectively lightweight virtual machines, each running a single part of our architecture, such as a database or a service. Using Docker, we've been able to develop in consistent environments that reflect the environment our software will be deployed to. Needless to say, Docker is pretty cool. Some even say that it's the future!

CodeShip

An important part of making sure our code runs as expected before deploying it anywhere has been a heavy reliance on continuous integration with CodeShip. We have the service set up to pull directly from our repositories' master branches and to send out notifications via Slack and email. We've even been admitted to CodeShip's Docker trial, so our usual Docker setup is run both locally and in testing. The result is that, at the major waypoints between development and production, our code is thoroughly tested as if everything were already in production, with none of the risk.

Amazon Web Services

Have you heard the one about the book store that became a leading cloud platform? AWS has taken the world by storm as part of a revolution in what is being called Infrastructure as a Service, or IaaS for short. AWS makes it dead simple to spin up and tear down servers to cope with changing loads, and the whole process can be automated and configured to fit your budget and setup requirements. In our development cycle, AWS is essentially the real-world realization of our "idealized" Docker setup: deploying our many services and configuring them to talk to each other becomes trivial.

Coming up…

That wraps up our discussion of our choice of technologies. Save for the handful of custom tools we’ve written for internal use, that covers just about everything we rely on, and it's for the best that we stick to a small number of technologies in these earlier stages. In the next post, we’ll be covering the challenge of authentication, looking specifically at how we approach authenticating users and agents. That subject will involve a lot of talk about cryptography, and will also give us a chance to introduce a helpful library that we have been able to open source.

Stratum