As WebAssembly (Wasm) gains interest as a back-end technology, developers are moving from “Hmm, that sounds interesting” to “Let’s see what Wasm can really do beyond browsers, video gaming, and content streaming.”
At the same time, Wasm itself is starting to morph and shift. All of this makes it a good time to take another look at WebAssembly. As you evaluate Wasm for new uses, here are five things to keep in mind.
Interface
Wasm was originally designed for the browser, deliberately without a system interface, which improves its overall security stance. The authors of the original web-focused Wasm didn’t want applications to be able to request system resources, in much the same way that Java applets were restrained within the browser.
But back-end developers using Wasm want an interface so that they can port and use existing programs and programming paradigms (think Python, Ruby, web servers, etc.). Enter the WebAssembly System Interface extension, aka WASI, a set of POSIX-like APIs that provide OS-style functionality such as file systems, networking, and cryptography. WASI improves execution and portability for existing software as well as for new programs written with common, existing paradigms (using files, ports, and so on).
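To make that concrete, here is a minimal sketch of the kind of ordinary, standard-library code WASI makes possible. The file name, environment variable, and build commands in the comments are illustrative assumptions, not tied to any particular project:

```rust
// Build for WASI (assuming the wasm32-wasi target is installed):
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
// Run under a WASI runtime such as wasmtime, granting it the current directory:
//   wasmtime run --dir=. target/wasm32-wasi/release/<crate>.wasm
use std::env;
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Environment variables and file I/O go through WASI's POSIX-like APIs,
    // rather than being unavailable as they would be in a browser-only module.
    let user = env::var("USER").unwrap_or_else(|_| "unknown".to_string());
    let mut out = File::create("greeting.txt")?;
    writeln!(out, "hello from {user}")?;
    println!("wrote greeting.txt");
    Ok(())
}
```

The same source also compiles unchanged for a native target, which is exactly the point about reusing existing paradigms.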
There has been a lot of push and pull between those who think Wasm should remain pure and those who want a POSIX-like systems interface. In fact, it’s a hotly contested issue in the upstream community. Some in the back-end server community have proposed a kind of compromise, suggesting that those who want to use Wasm as it was originally intended should do so, but that the interface could be added on top for those who want it. Me? I think WASI is necessary for the server side to succeed.
Performance
In some benchmark testing, Wasm demonstrates impressive performance. Wasm is fast and efficient, no doubt, but benchmark numbers should be taken with a grain of salt. For example, in the recent Vercel benchmark testing, Wasm performance was excellent. In the e-digits test, a computationally intensive workload, Wasm was much faster than Java. But the dirty secret is that compiling the same Rust code natively and running it on bare metal is still something in the neighborhood of four times as fast as Wasm. Further, in some of the other Vercel subtests, Java is much faster than Wasm.
Granted, the full performance of an application will reflect a mix of many different workloads rather than a single benchmark, but it’s important to note that Wasm is not a slam dunk performance-wise. This will be especially true if more layers, such as WASI, are added on top of Wasm. Also, stay tuned for garbage collection, and how that might affect performance.
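If you want a feel for those tradeoffs on your own hardware, one rough approach is to compile a single CPU-bound piece of Rust both natively and to wasm32-wasi and compare wall-clock times. The prime-counting workload below is only an illustrative stand-in for digit-crunching tests like e-digits, not the Vercel suite:

```rust
// Build and time both ways (commands are illustrative):
//   cargo build --release                          # native
//   cargo build --release --target wasm32-wasi     # Wasm + WASI
//   ./target/release/<crate>
//   wasmtime run target/wasm32-wasi/release/<crate>.wasm
use std::time::Instant;

// Deliberately naive trial division, so the work is purely CPU-bound.
fn count_primes(limit: u32) -> u32 {
    let mut count = 0;
    for n in 2..limit {
        let mut is_prime = true;
        let mut d = 2;
        while d * d <= n {
            if n % d == 0 {
                is_prime = false;
                break;
            }
            d += 1;
        }
        if is_prime {
            count += 1;
        }
    }
    count
}

fn main() {
    let start = Instant::now();
    let primes = count_primes(2_000_000);
    // Instant works under WASI via the runtime's clock APIs.
    println!("{primes} primes below 2,000,000 in {:?}", start.elapsed());
}
```

Expect the numbers to vary with the runtime, compiler flags, and hardware; the point is to measure your own workloads rather than trust any single benchmark.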
Security
As noted earlier, Wasm is limited in scope for system security reasons. By making it less restrictive, such as by adding the WASI interface, you increase the attack surface. It’s likely that the more popular Wasm gets, the more will be added to it, which will lead to more avenues for human error or malicious actions. Multi-tenancy in particular is an area of concern. Is Wasm more secure than containers? Less secure than virtual machines? Does Wasm create a sweet security spot between the two? Maintaining this balance between functionality and security will be critical moving forward. Developers considering expanding their use of Wasm will need to be on top of (and part of) the debate.
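Part of the reason WASI isn’t a wide-open door is that it is capability-based: the host runtime explicitly grants the directories and resources a module may touch. Here is a minimal sketch of the idea, assuming you run it under wasmtime with only the current directory preopened (the paths are illustrative):

```rust
// Run with only the current directory granted to the guest:
//   wasmtime run --dir=. sandbox.wasm
use std::fs;

fn main() {
    // Allowed, because "." was preopened by the host
    // (assuming data/config.txt exists underneath it).
    match fs::read_to_string("data/config.txt") {
        Ok(text) => println!("read {} bytes from data/config.txt", text.len()),
        Err(e) => eprintln!("data/config.txt not readable: {e}"),
    }

    // Fails under the WASI sandbox, even though the same code compiled
    // natively would normally succeed.
    if let Err(e) = fs::read_to_string("/etc/hostname") {
        eprintln!("no capability for /etc/hostname: {e}");
    }
}
```

That per-module allow list is a big part of why people see a potential sweet spot between containers and VMs, but every capability you grant is attack surface you have to reason about.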
Portability
One of Wasm’s biggest draws is its cross-platform portability. Wasm is a neutral binary format that can be shoved in a container and run anywhere. This is key in our increasingly polyglot hardware and software world. Developers hate compiling to multiple different formats, because every additional architecture (x86, Arm, Z, Power, etc.) adds to your test matrix, and an exploding test matrix is a very expensive problem. Quality engineering is the bottleneck for many development teams.
With Wasm, you have the potential to write applications, compile them once, test them once, and deploy them on any number of hardware and software platforms that span the hybrid cloud, from the edge to your data center to public clouds. A developer on a Mac could compile a program into a Wasm binary, test it locally, and then confidently push it out to all of the different machines that it’s going to be deployed on.
All of these machines will already have a Wasm runtime installed on them, one that is battle tested for that particular platform, thereby making the Wasm binaries extremely portable, much like Java. And when you compile a program down to that Wasm binary you can ship it out to a container registry, pull it down on another machine that has a Wasm runtime, and then run it anywhere—whether the host is an M1 or M2 Mac, or an x86 system, or whatever.
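On the host side, “a machine that has a Wasm runtime” can be as small as a short embedding program. Here is a minimal sketch using the wasmtime crate, assuming a wasmtime 1.x-era API (these calls have shifted across releases) and a hypothetical app.wasm built for wasm32-wasi:

```rust
// Cargo dependencies assumed: wasmtime, wasmtime-wasi, anyhow
// (API shown follows the wasmtime 1.x-era examples and may differ in newer releases.)
use anyhow::Result;
use wasmtime::{Engine, Linker, Module, Store};
use wasmtime_wasi::sync::WasiCtxBuilder;

fn main() -> Result<()> {
    // The same app.wasm runs unchanged whether this host is x86, Arm, or anything else.
    let engine = Engine::default();
    let mut linker = Linker::new(&engine);
    wasmtime_wasi::add_to_linker(&mut linker, |ctx| ctx)?;

    // Grant the guest only stdio; nothing else is reachable by default.
    let wasi = WasiCtxBuilder::new().inherit_stdio().build();
    let mut store = Store::new(&engine, wasi);

    let module = Module::from_file(&engine, "app.wasm")?;
    linker.module(&mut store, "", &module)?;
    linker
        .get_default(&mut store, "")?
        .typed::<(), ()>(&store)?
        .call(&mut store, ())?;
    Ok(())
}
```

The embedding code itself compiles and runs on an M1 Mac, an x86 server, or an Arm edge box, and app.wasm never needs to be rebuilt for any of them.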
When you look at how Arm and RISC-V are taking off, you realize that our polyglot world is only going to become more polyglot in the next five years, if not sooner. Containers plus Wasm looks like a big cross-platform win.
Wasm and Kubernetes
Another area of debate around Wasm is whether Wasm binaries should be run natively, alongside containers, or within containers. The beauty is, it really doesn’t matter, as long as we all adopt the OCI container image format. Whether you run a Wasm binary natively on a Wasm runtime, or that runtime runs within an OCI container (remember, containers are just fancy processes), you can create one image that can then be deployed across multiple architectures.
A single image saves disk space and compile time and, as previously noted, keeps your test matrix from getting out of hand. The benefit of running Wasm within a container is that you get defense in depth with very little performance impact. The benefit of running Wasm binaries side by side with containers is still to be studied, but either way, we should be able to preserve the value of the Kubernetes ecosystem. If you want to schedule Wasm containers, it will be easy because they'll all live in an OCI registry and you’ll be able to pull them down in Kubernetes (or Podman or Docker) and run them.
Conclusion
We know Wasm works well in the browser. Now it’s time to get excited about how Wasm could work on the server side. I think we’re all still learning about what Wasm might become, but in particular, I’m most excited by the cross-platform potential. Could Wasm, combined with containers, truly deliver the promise of ultimate portability? I think it’s possible, but as technologists, we’ll have to wait and see, and guide it where we need it to go.
Wasm is still emerging, and mostly untested, on the back end. It will be important to keep an eye on Wasm’s progress and think about how it could benefit each of our organizations. Will performance really be as good as bare metal? Will Wasm retain enough security, even with a new systems interface, to enable multi-tenancy? Let’s find out together over the coming months and years!
At Red Hat, Scott McCarty is senior principal product manager for RHEL Server, arguably the largest open source software business in the world. Scott is a social media startup veteran, an e-commerce old timer, and a weathered government research technologist, with experience across a variety of companies and organizations, from seven-person startups to 12,000-employee technology companies. This has culminated in a unique perspective on open source software development, delivery, and maintenance.
—
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.