You’re at a multicloud strategy planning meeting. You see the network people, the cloud database group, the cloud security team, even the FinOps people, but no one charged with maintaining existing mainframes or other older systems. Why?
Enterprises focused on building next-generation cloud systems, which are largely multicloud deployments, don’t seem to want to include traditional systems. Typically, “traditional” means most of the systems currently in the data center, which usually run 60% to 80% of the core business systems, depending on the company.
I don’t think leadership is intentionally leaving people out of the process; this is more a reaction to the fact that this multicloud stuff is complex enough. It does not make sense to make it more complex by including the older systems in the planning.
By the way, enterprises likely intend to feed some data from the legacy systems into the new or ported cloud-based systems. These integrations are meant to be loosely coupled and will remain largely outliers to multicloud operations.
I see the need to remove some of the complexities from multicloud planning, considering that multicloud is already a complex distributed architecture. However, we’re missing a huge opportunity to gain better control over a layer of systems and data that could benefit from the net-new security, data management, operations, and governance infrastructure we’re building in and between the cloud-based systems that will make up the multicloud.
My argument is that if you’re already changing how core systems are managed in the cloud, it’s best to include legacy systems in those changes, too. That means updating and upgrading security, operations, governance, and so on, and extending these cross-cloud services over the legacy systems as well.
This does two important things.
First, it simplifies operations because you’re using the same approaches and tools for both cloud and legacy systems. For example, you can upgrade to an identity and access management (IAM) system with a directory service that spans all cloud and legacy systems, providing a single, consistent set of credentialing services. Instead of dealing with different security technology layers, you have one consistent layer that spans all applications, users, and data storage systems, which leads to more cost-optimized operations, better security, and better overall reliability across cloud and non-cloud systems alike.
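To make that concrete, here’s a minimal sketch (in Python, using the ldap3 library) of what a single credentialing layer can look like in practice. The directory host, DN layout, and the idea of a shared `authenticate` routine are hypothetical assumptions for illustration, not a prescription for any particular IAM product:

```python
# Hypothetical sketch: one enterprise directory, one credential check,
# shared by cloud-native and legacy-facing services alike.
from ldap3 import Server, Connection, ALL

DIRECTORY_HOST = "ldaps://directory.example.com"  # assumed central directory
BASE_DN = "ou=people,dc=example,dc=com"           # assumed DN layout

def authenticate(username: str, password: str) -> bool:
    """Validate credentials against the single enterprise directory.

    Both a cloud microservice and a mainframe gateway would call this
    same routine, so there is one credentialing layer to operate,
    audit, and secure.
    """
    server = Server(DIRECTORY_HOST, get_info=ALL)
    user_dn = f"uid={username},{BASE_DN}"
    conn = Connection(server, user=user_dn, password=password)
    try:
        return conn.bind()  # True only if the directory accepts the bind
    finally:
        conn.unbind()
```

The point isn’t the specific protocol; it’s that the cloud API handler and the legacy terminal front end gate access with the identical call, giving you one audit trail and one place to revoke credentials.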
Second, it allows you to right-size applications more easily in the future. If legacy systems are already functioning as just another cloud service, then moving a legacy application or its data sets to the cloud becomes a simpler, lower-risk process. That doesn’t mean it must move; you can keep running the legacy system where it is for as long as you need to. It does mean you can relocate applications and data sets with less cost and risk than if they were loosely coupled at best and treated as a different universe altogether.
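One hedged illustration of what “functioning as just another cloud service” can mean: put the legacy system behind the same service contract as its cloud counterpart. The interface and class names below are hypothetical; the pattern is the point:

```python
# Hypothetical sketch: the legacy system sits behind the same interface
# as its cloud counterpart, so relocating the workload later means
# swapping an implementation, not rewriting callers.
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """Service contract shared by legacy and cloud back ends."""

    @abstractmethod
    def get_order(self, order_id: str) -> dict: ...

class MainframeOrderStore(OrderStore):
    def get_order(self, order_id: str) -> dict:
        # Placeholder for a call into the existing mainframe
        # (e.g., via a message-queue bridge or REST gateway).
        return {"id": order_id, "source": "mainframe"}

class CloudOrderStore(OrderStore):
    def get_order(self, order_id: str) -> dict:
        # Placeholder for the ported cloud-native service.
        return {"id": order_id, "source": "cloud"}

def fulfill(store: OrderStore, order_id: str) -> None:
    order = store.get_order(order_id)
    print(f"Fulfilling {order['id']} from {order['source']}")

# Today the mainframe serves the data; a future migration only
# changes which implementation is injected here.
fulfill(MainframeOrderStore(), "A-1001")
```

Because callers depend only on the contract, a future migration swaps the implementation rather than rewriting every consumer, which is where the reduction in cost and risk comes from.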
This is a word of caution more than anything else. My fear is that many of you will head down the multicloud planning path and find that leaving older systems out of the process won’t get you where you really need to go.