Microsoft CEO Satya Nadella is fond of saying that “all companies are software companies,” and we all like to hum along to Andreessen’s familiar anthem that “software is eating the world,” but what exactly does this mean? In real life, that is, and not in blog posts or executive keynotes.
Maybe a better question is, “What software (and hardware) should we be building ourselves versus buying from others?”
‘Undifferentiated heavy lifting’
Spend enough time around any AWS employee and you’ll hear them talk about how AWS strives to take on the “undifferentiated heavy lifting” for its customers. That phrase originated in a 2006 talk given by then-CEO Jeff Bezos and has been repeated at least a trillion times since. The idea is that innovators should focus on innovating for their customers, not on “muck” like server hosting, Kubernetes cluster management, and the like.
It’s a great idea, but it’s not always easy to distinguish muck from essential, customer-facing innovation.
For example, are semiconductors undifferentiated heavy lifting that organizations could leave to Intel, Samsung, Nvidia, or others to handle for them? Consider the automotive industry. Yes, cars have basically become drivable computers, but it’s a huge ask for a traditional industry like this to magically become tech-savvy. Yet that’s precisely what some executives are arguing.
“In the transition to these digital electric vehicles, [effectively managing a semiconductor] supply chain could be one of the biggest advantages a particular company has or doesn’t have,” says Jim Farley, president and CEO of Ford Motor Company. I’ve had some very smart people tell me that companies like Ford would never build their own chips. Now it’s hard to get the CEO of one of the world’s largest automotive companies to stop talking about chips. “We need to design the [system-on-chip] ourselves,” Farley says. What was once undifferentiated heavy lifting has become essential to Ford. Perhaps the same is true for you.
It’s about people
Cutting against this argument, however, is the reality that every hour a company spends building chips is an hour it isn’t spending on software or other technology that improves the customer experience. The biggest asset (and biggest cost) almost any company has is its people. I wrote about this recently in an article about multicloud, quoting former AWS executive Tim Bray. Perhaps not surprisingly, given his years at AWS, Bray suggests that companies should consider going “all in” with a particular cloud provider to realize “pretty big payoffs” like dramatically better scale, reduced costs, improved security, and more.
As Bray puts it, “every time you reduce the labor around instance counts and pod sizes and table space and file descriptors and patch levels, you’ve just increased the proportion of your hard-won recruiting wins that go into delivery of business-critical customer-visible features.”
In such a scenario, companies would invest deeply in all the serverless offerings from the cloud providers, eschewing all the underlying, undifferentiated heavy lifting—at least until it became critical to build that infrastructure themselves. After all, just as Ford is finding with semiconductors, sometimes building your own infrastructure is essential to delivering a great customer experience.
The cost of rolling your own
Given the explosive growth in data during the past decade, we might expect global energy use associated with data centers to have spiked in kind, but it hasn’t. Why? As AWS’ Shane Miller and Carl Lerche detailed recently, “Cloud and hyperscale data centers have been implementing huge energy-efficiency improvements, and the migration to that cloud infrastructure has been keeping the total energy use of data centers in balance despite massive growth in storage and compute for more than a decade.”
One way that AWS, Google, Microsoft, and other hyperscalers have optimized their energy consumption is by building with energy-efficient languages such as Rust. Not content to simply build with Rust, however, these and other companies are investing in the development of Rust (and other energy-efficient software and hardware technologies).
Indeed, however skilled your own engineers may be at things like building energy-efficient infrastructure, they’re not likely to be better at it than those who do that work full time. The same goes for other areas like security and networking. There are times when a company may be able to out-cloud the clouds, but those cases will be relatively rare and fairly obvious.
This brings us back to the original question: When should we build versus buy? It makes sense to build when doing so is essential to crafting the customer experience, or when access to innovative technology is at risk due to supply chain or other issues (as in the case of Ford and chips). The rest of the time it’s almost certainly going to be easier, faster, and more cost-effective to buy from those in the business of “undifferentiated heavy lifting,” like managing compute, storage, databases, networking, etc.