I like to observe patterns that emerge around me, especially where technology is involved. The rise of the Internet has pushed many applications from distributed to centralized. Back in the 1990s, we were all evolving from mainframes to local area networking and client/server development.
This was a paradigm change, but now we’re moving back to centralized computing again. If you’re keeping track, we’ve done mainframe (legacy) computing, then small distributed systems (client/server), and now cloud computing, which is sharing centralized resources.
With interest in edge computing, the Internet of Things (IoT), and 5G communications, we are moving from “centrally delivered” to “ubiquitous computing.” What the hell does that mean?
First, I get that cloud computing is also ubiquitous in architecture. However, we use these resources as if they are centrally located, at least virtually. Moving to a more ubiquitous model means we can leverage any connected platform at any time for any purpose. This means processing and storage occur across public clouds, your desktop computer, smartwatch, phone, or car. You get the idea—anything that has a processor and/or storage.
With a common abstracted platform, we push applications and data out into an abstracted space, and the platform finds the best and most optimized place to run them, whether on a single platform or across several as distributed applications.
For instance, we develop an application, design a database on a public cloud platform, and push it to production. The application and the data set are then pushed out to the best and most optimized set of platforms. This could be the cloud, your desktop computer, your car, or whatever, depending on what the application does and needs.
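To make that concrete, here is a minimal sketch in Python of what placement could look like. Everything in it is hypothetical: the Platform and Workload classes and the selection rule are illustrative assumptions, not any real product’s API.

```python
from dataclasses import dataclass

# Hypothetical descriptors; no real ubiquitous-computing API exists yet.
@dataclass
class Platform:
    name: str
    cpu_cores: int
    storage_gb: int
    online: bool

@dataclass
class Workload:
    name: str
    min_cores: int
    min_storage_gb: int

def place(workload: Workload, platforms: list) -> Platform:
    """Pick a platform that satisfies the workload's needs.
    A real scheduler would also weigh cost, latency, and reliability."""
    candidates = [p for p in platforms if p.online
                  and p.cpu_cores >= workload.min_cores
                  and p.storage_gb >= workload.min_storage_gb]
    if not candidates:
        raise RuntimeError(f"no platform can host {workload.name}")
    # Prefer the smallest platform that fits, leaving headroom elsewhere.
    return min(candidates, key=lambda p: (p.cpu_cores, p.storage_gb))

platforms = [
    Platform("public-cloud", cpu_cores=64, storage_gb=10_000, online=True),
    Platform("desktop", cpu_cores=8, storage_gb=500, online=True),
    Platform("car", cpu_cores=4, storage_gb=64, online=False),
]
app = Workload("inventory-service", min_cores=4, min_storage_gb=50)
print(place(app, platforms).name)  # -> "desktop"
```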
Of course, this is not revolutionary; we’ve been building complex distributed systems for years. What’s new is the mechanism that can support the abstraction of many heterogeneous platform types, from thermostats on your wall to processors and storage within your smartphone.
Aspects of a ubiquitous computing model
These are the critical aspects of ubiquitous computing, at least the way I see it:
Decentralization: Unlike cloud computing’s centralized architecture, ubiquitous computing distributes computing power to the edges of the network, reducing the need for a constant network connection. It also allows many other devices and platforms to serve as resources that process applications and store data. This is the core attribute of ubiquitous computing, and it’s worth watching how the approach and its supporting technology evolve from here.
Context-aware: Systems are designed to respond to the application’s and/or user’s requirements. For example, an intelligent home system adjusts the temperature and lighting based on the occupants’ preferences and presence, as well as where the system physically exists (see the sketch after this list).
Real-time interaction: Devices or platforms interact in real time, providing instant feedback and personalized experiences. This has been the primary reason to choose edge computing and IoT: placing the data and processing as close as possible to the entity interacting with the data. For instance, a factory robot runs generative AI processes locally for image-based quality control.
Enhanced user experience: Integrating technology seamlessly into daily life enhances user experiences by eliminating barriers between humans and machines. If you look at the objectives of digital transformation, the point of the technology is to provide a better customer experience. The companies that can improve their user experience are more likely to be successful, regardless of their product or service.
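Here is a minimal sketch of that context-aware idea in Python. The Context fields, thresholds, and settings are invented for illustration; a real system would read live sensors and drive an actual home-automation API.

```python
from dataclasses import dataclass

# Hypothetical context signals; a real system would read live sensors.
@dataclass
class Context:
    occupants_present: bool
    hour: int               # local time, 0-23
    preferred_temp_c: float

def adjust_home(ctx: Context) -> dict:
    """Derive device settings from context rather than direct commands."""
    if not ctx.occupants_present:
        return {"temp_c": 16.0, "lights": "off"}  # save energy when empty
    lights = "dim" if ctx.hour >= 21 or ctx.hour < 6 else "on"
    return {"temp_c": ctx.preferred_temp_c, "lights": lights}

print(adjust_home(Context(occupants_present=True, hour=22, preferred_temp_c=21.0)))
# -> {'temp_c': 21.0, 'lights': 'dim'}
```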
Benefits of ubiquitous computing
Based on the attributes of this model, we’ll likely see increased accessibility. Ubiquitous computing reduces reliance on constant internet connectivity. Our dependence on connectivity isn’t fading, but it becomes much easier to leverage platforms that won’t break when the Internet goes down.
Improved efficiency is core to why we’re doing this. Context-aware systems can optimize energy consumption and resource allocation. If you have spare MIPS (million instructions per second) on your smartwatch, why not use them? More realistically, we can put the applications and data on platforms that make the most sense in terms of purpose and optimization of resources. We can run on specific platforms that are faster as well as cheaper.
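In spirit, that optimization might look like this minimal sketch: pick the cheapest platform that still meets a latency requirement. The catalog, its numbers, and the field names are all invented for illustration.

```python
# Hypothetical catalog: hourly cost and observed latency per platform.
catalog = {
    "public-cloud": {"cost_per_hour": 0.40, "latency_ms": 80},
    "desktop":      {"cost_per_hour": 0.02, "latency_ms": 15},
    "smartwatch":   {"cost_per_hour": 0.00, "latency_ms": 120},
}

def cheapest_fast_enough(max_latency_ms: float) -> str:
    """Cheapest platform meeting the latency bound; ties go to lower latency."""
    ok = {name: spec for name, spec in catalog.items()
          if spec["latency_ms"] <= max_latency_ms}
    if not ok:
        raise RuntimeError("no platform meets the latency requirement")
    return min(ok, key=lambda n: (ok[n]["cost_per_hour"], ok[n]["latency_ms"]))

print(cheapest_fast_enough(20))   # -> "desktop"
print(cheapest_fast_enough(200))  # -> "smartwatch" (those spare MIPS are free)
```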
What’s new here?
Of course, widely distributed but loosely coupled systems are familiar. We’ve been building this type of architecture for years. What would have to be new is the ability to manage the distribution of applications to a widely heterogeneous set of platforms and allow these applications or components to operate successfully over a long period.
Much better application development, deployment, and operations mechanisms would need to exist. We would need a single logical platform that can map down to many different physical platforms, such as phones, smart garage doors, automobiles, and, of course, clouds and traditional hardware platforms that you may own.
This magical technology could profile the applications and connected data and place them on the proper physical platform for processing. It could even move them if things change, such as prices going up for a specific cloud provider or the reliability of remote platforms dropping. It could provide redundancy by running identical copies of the application and data on many different platforms.
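A minimal sketch of that re-evaluation loop, assuming invented metrics and a made-up scoring rule; nothing here reflects a real placement engine:

```python
# Hypothetical live metrics the placement engine would watch.
metrics = {
    "public-cloud": {"price_per_hour": 0.40, "reliability": 0.999},
    "desktop":      {"price_per_hour": 0.02, "reliability": 0.95},
    "phone":        {"price_per_hour": 0.00, "reliability": 0.90},
}

def score(platform: str) -> float:
    """Higher is better: reward reliability heavily, penalize price."""
    m = metrics[platform]
    return m["reliability"] * 10 - m["price_per_hour"]

def rebalance(current: str, replicas: int = 2) -> list:
    """Re-rank platforms; migrate if the current host is no longer best,
    and keep identical replicas on the next-best hosts for redundancy."""
    ranked = sorted(metrics, key=score, reverse=True)
    if ranked[0] != current:
        print(f"migrating from {current} to {ranked[0]}")
    return ranked[:replicas]

# Simulate a price spike at the current host, then re-evaluate.
metrics["public-cloud"]["price_per_hour"] = 4.00
print(rebalance("public-cloud"))  # migrates; -> ['desktop', 'phone']
```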
What we’ll likely see
Of course, this won’t happen overnight. We’re more likely to see new applications built on various platforms to meet specific requirements, such as autonomous driving systems. That’s been occurring for about 10 years now and is accelerating.
Once we have enough of those, we will likely demand better platform-to-platform integration. A single abstract platform will emerge, and once it includes enough device and computer types, we’ll be well on our way to ubiquitous computing.
Remember that this is a trend, not a new type of technology. It will involve many types of technologies, including cloud computing. I’m excited about it. How about you?