Cloud Repatriation Accelerates

Massive data growth, rising costs of cloud services, and the need for greater flexibility are creating new momentum for hosting data and workloads on-premises, says Eric Bassier, Senior Director of Products at Quantum.

In general, the benefits of cloud computing are undisputed. Cloud usage has grown rapidly over the past decade, and especially over the last three years, driven by the need to rapidly modernize IT to enable remote work, which pushed many organizations toward the cloud. While some workloads are well served in the cloud, many organizations are now realizing that some data and workloads are better off in their own data centers.

Therefore, most organizations today adopt a hybrid or multi-cloud approach. Against this backdrop, cloud repatriation has become an increasingly popular trend over the last few years: organizations are moving workloads and data back to their own data centers. Why are we seeing this trend?

Cloud computing costs are rising.

Public clouds can be cost-effective for certain use cases, but costs tend to run high for organizations that moved data and workloads to the public cloud without first considering the best location for that data. There are understandable reasons why the price of cloud services is climbing: rising provider costs, increasing demand, and growing complexity. But one of the biggest reasons cloud budgets balloon disproportionately is that egress and other service charges make public cloud storage costs very unpredictable. As a result, many organizations have paid far more for cloud services than they expected or budgeted. To reduce costs and increase predictability, they are looking for other options.
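The unpredictability described above comes from the bill having a stable at-rest component and a usage-driven egress component. The following minimal sketch illustrates this with purely illustrative prices (not any provider's actual rates):

```python
# Illustrative sketch of why egress fees make cloud storage bills hard to
# predict. Both prices below are assumptions for the example, not real rates.

def monthly_cloud_cost(stored_tb, egress_tb,
                       storage_per_tb=23.0,   # assumed $/TB-month at rest
                       egress_per_tb=90.0):   # assumed $/TB transferred out
    """Estimate a monthly public-cloud storage bill."""
    return stored_tb * storage_per_tb + egress_tb * egress_per_tb

# The at-rest portion is fixed by how much you store, but the egress
# portion swings with how much data applications pull back out:
quiet_month = monthly_cloud_cost(stored_tb=100, egress_tb=2)   # 2480.0
busy_month  = monthly_cloud_cost(stored_tb=100, egress_tb=40)  # 5900.0

print(f"quiet month: ${quiet_month:,.2f}")
print(f"busy month:  ${busy_month:,.2f}")
```

With the same 100 TB at rest, the bill more than doubles in the busy month purely because of egress, which is exactly the budgeting surprise the article describes.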

Organizations need more flexibility in the cloud.

In a perfect cloud world, customers would be able to easily pick and mix their ideal setups and flexibly move data and workloads between each part of their chosen multi-cloud ecosystem. In practice, this is by no means easy. One reason is the public cloud vendors’ success at locking customers and their data into their own platforms: provider price lists are structured so that uploading data to the cloud is cheap, while downloading it again is incomparably more expensive. As data volumes have grown, it has become very costly to store and process that data in the cloud, especially with ever-larger files of unstructured data.

In addition to this, “data gravity” forces organizations to keep data and workloads close together. And given cloud storage limitations, data sovereignty, and security concerns, it’s becoming increasingly clear that some of an organization’s data belongs in its own data centers and shouldn’t be sent to the cloud at all. To solve these problems and improve their cloud setups, organizations are weighing all options for where their data and workloads can ideally reside. Inevitably, many conclude that some cloud workloads should “go back” on-premises, because doing so promises a higher ROI.

Full cloud repatriation makes little sense.

Several factors influence the decision to move from the cloud back to on-premises. At the storage level, the advent of relatively inexpensive object storage on tape offers companies a compelling business case for moving the right data back onto their own hardware. This tape renaissance coincides with a shift in the storage space away from disk-centric architectures toward a strategy with two primary tiers: a fast flash tier for high-performance workloads and a tape tier for low-cost, high-capacity storage.
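A two-tier strategy like this usually comes down to a simple placement policy: recently used data stays on flash, cold data moves to tape. The sketch below is a minimal illustration; the tier names and the 30-day threshold are assumptions for the example, not a specific product's behavior:

```python
# Minimal sketch of a two-tier placement policy: hot data on a fast flash
# tier, cold data on low-cost tape-based object storage. The 30-day
# cutoff and tier names are illustrative assumptions.

from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=30)  # assumed age at which data is demoted

def choose_tier(last_accessed: datetime, now: datetime) -> str:
    """Return the storage tier a file should live on."""
    return "tape" if now - last_accessed > COLD_AFTER else "flash"

now = datetime(2023, 6, 7)
print(choose_tier(datetime(2023, 6, 1), now))  # recently read -> "flash"
print(choose_tier(datetime(2023, 1, 1), now))  # cold archive  -> "tape"
```

In a real deployment the demotion threshold would be tuned per workload, and recalls from tape would be weighed against the latency they introduce.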

However, just as cloud-only strategies are neither economically nor practically ideal, fully repatriating data and workloads from the public cloud makes little sense for most organizations. Historically, enterprises have made the conscious decision to incur the additional costs of running applications in the public cloud in order to maintain flexibility and scalability. However, as organizations are now actively looking for ways to improve data portability, they are realizing the need for more flexibility at both the hardware and software level. Only when the overall system is flexible can you find the optimal combination of infrastructure, such as moving data and workloads back to on-premises hardware, resulting in better net ROI.

Conclusion: Flexibility in a “cloudy world”

To achieve higher ROI, enterprises need more flexibility in their data centers. This enables them to build hybrid and multi-cloud environments and put workloads and data where they fit best. For some data and workloads, this means moving data from the public cloud back to the data center, where it can be stored much more cheaply. The ability to utilize NVMe as a fast flash tier adds flexibility for workloads that require higher performance. And hyperscaler platforms may remain best suited for specialized use cases such as PaaS solutions.

To reap the benefits of cloud computing, today’s organizations need maximum flexibility to use the best infrastructure for every use case. Modern technologies such as software-defined data management solutions help organizations achieve that flexibility, and the emergence of fast NVMe and inexpensive tape storage gives them more options for placing data and workloads in the ideal combination of locations. So this isn’t a “cloud only” or “on-premises only” issue; it’s a combination of both, where the flexibility to change, shift, move, and migrate data and workloads at any time is key.

Tags: on-premises, public cloud, storage

Source: https://www.cloudcomputing-news.net/news/2023/jun/07/cloud-repatriation-is-picking-up-speed/
