Perhaps there’s a planet with perfect software, but as Google’s Chris DiBona writes, that planet isn’t the one we live on. As such, developers are left with a trade-off: Tread cautiously and rigorously test your software to find all problems pre-deployment, or test less and ship faster with greater tolerance for bugs in production. The former camp is filled with developers working in regulated industries like healthcare and finance; the latter is populated by adherents to Werner Vogels’ famous “you build it, you run it” dictum.
This trade-off sits at the heart of one of the most nuanced debates in developer productivity.
Wherever developers sit on the testing spectrum, there is no one-size-fits-all solution for software testing, leading developers to constantly search for the right blend of testing approaches to suit their evolving needs. To complicate matters, for any testing approach to become habit, it must hit a sweet spot: solving a major pain point without being so prohibitively slow or complicated that developers won’t use it.
As I recently wrote, unit testing has found that sweet spot. As a software testing practice, it allows teams to test small, isolated pieces of code, which not only ensures that software is working according to its intended specs, but allows developers to reason about parts of the code base written by other developers. Unit testing has been around for decades, but it has only recently become ingrained, because automation has simplified the user experience to the point of real usability.
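To make the idea concrete, here is a minimal sketch of a unit test using JUnit 5; the applyDiscount method is a hypothetical example standing in for any small, isolated unit of logic.

// A minimal sketch of a unit test, assuming JUnit 5 is on the classpath.
// applyDiscount is a hypothetical unit under test: pure logic, no databases,
// no network, nothing outside the process.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    // The small, isolated piece of code being tested.
    static long applyDiscount(long cents, int percent) {
        return cents - (cents * percent / 100);
    }

    @Test
    void tenPercentOffTenDollarsIsNineDollars() {
        assertEquals(900L, applyDiscount(1_000L, 10));
    }
}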
Today there’s another form of testing that, similar to unit testing, is decades in the making but is only now finding its sweet spot in both addressing an essential problem and giving developers the right abstraction for a greatly simplified approach. I’m talking about integration testing.
A developer’s job is to glue things together
In conventional three-tier architecture systems, developers might have had one database and perhaps an API or two to interact with, and that was the extent of the third-party components they touched.
Nowadays developers tend to break a solution down into many different components: most of which they didn’t write, most of whose source code they have never seen, and many of which are written in a different programming language.
Developers are writing less logic and spending more time gluing things together. Today the average production system has interactions with multiple databases, APIs, and other microservices and endpoints.
Any time your software has to talk to a different piece of software, you can no longer make simple assumptions about how your system is going to behave. Every database, message queue, cache, and framework has its own particular states, rules, and constraints that determine its behavior. Developers need a way to test these behaviors in advance of deployment, and this class of testing is called integration testing.
“Integration tests determine if independently developed units of software work correctly when they are connected to each other,” writes Martin Fowler, who first learned about integration testing in the 1980s.
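To see why connecting real units matters, consider a hedged sketch of what a unit-level fake cannot catch: the in-memory store below happily accepts a duplicate email that a real database with a UNIQUE constraint would reject. The UserStore interface is a hypothetical example.

// A sketch of the gap that integration tests fill. The in-memory fake
// enforces none of the real dependency's rules, so a unit test built on it
// passes where production would fail. UserStore is hypothetical.
import java.util.ArrayList;
import java.util.List;

interface UserStore {
    void save(String email);
}

public class MockGapSketch {
    public static void main(String[] args) {
        // In-memory fake of the kind a typical unit test relies on.
        List<String> rows = new ArrayList<>();
        UserStore fake = rows::add;

        fake.save("ada@example.com");
        fake.save("ada@example.com"); // a real UNIQUE(email) constraint would reject this

        System.out.println("fake stored " + rows.size()
                + " rows; a real database would store 1 and throw");
    }
}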
Until recently, integration testing meant you needed a replica of your production environment. Creating that test environment by hand was extremely time-consuming and error-prone. There were penalties for discrepancies between test and production, and an ongoing burden of mirroring every production change in your test environment. Integration testing was so difficult to set up and use that for many developers it remained an obscure, inaccessible software testing discipline.
That was then. This is now.
Testcontainers: Improving integration testing
Richard North created Testcontainers in 2015 while he was chief engineer at Deloitte Digital. He observed that integration testing’s hopelessly complicated setup—everything from creating consistent local environments to configuring databases and managing countless other issues—was a constant source of thrashing for developer teams that needed a reliable way to test their code against real production-like dependencies.
North built Testcontainers as an open source library that lets developers “test with containers” against data stores, databases, or anything else that can run in a Docker container, including popular frameworks such as Apache Kafka. Testcontainers provides an ergonomic, code-based way for developers to harness containers for local and continuous integration testing, without forcing every developer to become an expert on the many nuances of containers.
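For example, here is a minimal sketch of what that looks like with Testcontainers’ JUnit 5 integration and its PostgreSQL module; the table, queries, and image tag are illustrative assumptions.

// A minimal sketch of a Testcontainers integration test, assuming the
// org.testcontainers:postgresql and org.testcontainers:junit-jupiter
// modules are on the classpath. The schema and data are assumptions.
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import static org.junit.jupiter.api.Assertions.assertEquals;

@Testcontainers
class UserRepositoryIntegrationTest {

    // Testcontainers starts a throwaway PostgreSQL container for this test
    // class and tears it down afterward; no hand-built test environment.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>(DockerImageName.parse("postgres:15-alpine"));

    @Test
    void talksToARealDatabase() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             Statement stmt = conn.createStatement()) {

            // Exercise real PostgreSQL behavior rather than a mock's
            // assumptions about it.
            stmt.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, email TEXT UNIQUE)");
            stmt.execute("INSERT INTO users (email) VALUES ('ada@example.com')");

            ResultSet rs = stmt.executeQuery("SELECT count(*) FROM users");
            rs.next();
            assertEquals(1, rs.getInt(1));
        }
    }
}

Because the container is created fresh for the test and exposed on a random port, getJdbcUrl() hands the test a connection string for that exact instance, so tests stay isolated from one another and from anything else running on the machine.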
Today Testcontainers is the most popular Docker-based integration testing library, used by thousands of companies including Spotify, Google, Instana, Oracle, and Zalando. Part of its popularity is its library of preconfigured modules, which covers just about every widely used database and many popular technologies, often contributed to the Testcontainers project and maintained directly by the database and technology vendors. Earlier this year, North and core Testcontainers maintainer Sergei Egorov raised $4 million in seed funding and launched AtomicJar to keep extending the ecosystem of supported Testcontainers modules.
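As a sketch of what those modules buy you, the Kafka module wraps the broker’s container setup behind a single class; the image tag below is an assumption.

// A sketch of a Testcontainers module in use, assuming the
// org.testcontainers:kafka module is on the classpath.
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

public class KafkaModuleSketch {
    public static void main(String[] args) {
        try (KafkaContainer kafka =
                new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))) {
            kafka.start();
            // Point any standard Kafka client at the throwaway broker.
            System.out.println("bootstrap.servers = " + kafka.getBootstrapServers());
        } // the container is stopped and removed automatically here
    }
}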
Fail faster is a winning pattern
There will always be impassioned debates about how best to balance speed against software quality. One reason for the great popularity of the Java compiler and similar technologies has been their ability to help developers find failures closer to the point of development, so they can fix them quickly.
There will always be diabolical bugs that evade your testing, but with the increasing ease of software unit testing and integration testing today, it’s getting harder to credibly argue against investing more cycles into testing your code and its integration surface before pushing to production.