MemVerge CEO says a high-capacity memory-based architecture will replace performance-tier storage.

“I think the future of infrastructure will be multi-cloud and memory-centric for data-centric applications,” says Charles Fan.


This week, MemVerge, a leading memory software company, announced the release of Memory Machine software version 1.2. Running on 3rd Generation Intel Xeon Scalable processors (codenamed Ice Lake), it leverages up to 40 cores and up to 6TB of Intel Optane Persistent Memory 200 Series capacity per socket to deliver both performance and capacity.

Charles Fan, CEO of MemVerge, said Memory Machine software version 1.2 is designed to help application vendors and end users take full advantage of Intel’s latest Xeon processors and Optane memory technology.

“We started by providing a new level of performance and access to capacity without changing the application,” he said in an interview.


Fan added that there is an "expanding world of new memory-centric applications that require a variety of different processor and memory tiers to scale and operate efficiently."

According to Fan, the CXL open interconnect standard will soon become the "defining technology" of the big-memory industry, serving as the center of a new ecosystem of processors sharing heterogeneous memory.

The release of Memory Machine software version 1.2 coincided with the launch of Intel's 3rd Generation Xeon Scalable processor, known across much of the industry by its codename, "Ice Lake."

"The product we created is called Memory Machine, and the first version, version 1.0, shipped about seven months ago, in September 2020. This latest version officially supports the new Intel Ice Lake platform," said Fan.

"Our belief is that data-centric applications are driving the rise of memory-centric infrastructure. Basically, we want an infrastructure that allows applications to process larger quantities of data, faster. Only a memory-centric infrastructure makes that possible, and we are helped by the advent of persistent memory, because persistent memory is large, low cost, and persistent. On top of that, you can develop interesting data services."

Fan explained that memory-centric infrastructure is more cost-effective and, among many other benefits, can address the bottlenecks many companies face. Features such as snapshots allow organizations to recover from crashes more seamlessly.

One cloud service provider working with MemVerge is using Memory Machine to essentially increase the memory available to its customers' compute and file services. Another company, which works in genome sequencing and analysis in the biosciences, is using MemVerge's tools essentially like a DVR.

"Pipeline processing has to go through many steps, so eliminating storage I/O can speed things up. But with the tools you can not only take snapshots, you can also immediately roll back to any of the previous stages, perform any analysis, and rerun things to reproduce the results."
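The DVR-style workflow Fan describes can be illustrated with a small sketch: snapshot the pipeline's state after each stage so any stage can be rolled back to and re-run without repeating the whole job. This is a hypothetical, disk-based toy in Python, not MemVerge's actual API (Memory Machine keeps its snapshots in persistent memory rather than in pickle files); the stage names are invented for illustration.

```python
import pickle
from pathlib import Path

class CheckpointedPipeline:
    """Toy DVR-style pipeline: snapshot state after each stage so any
    stage can be rolled back to and re-run. Hypothetical sketch only;
    not MemVerge's API, which snapshots in persistent memory."""

    def __init__(self, stages, ckpt_dir="checkpoints"):
        self.stages = stages              # list of (name, fn) pairs
        self.dir = Path(ckpt_dir)
        self.dir.mkdir(exist_ok=True)

    def run(self, state=None, start=0):
        if start > 0:
            # Roll back: restore the snapshot taken after stage start-1
            name = self.stages[start - 1][0]
            state = pickle.loads((self.dir / name).read_bytes())
        for name, fn in self.stages[start:]:
            state = fn(state)
            # Snapshot the state produced by this stage
            (self.dir / name).write_bytes(pickle.dumps(state))
        return state

# Two invented stages standing in for genomics pipeline steps
pipe = CheckpointedPipeline([
    ("align", lambda s: s + ["aligned"]),
    ("call",  lambda s: s + ["variants"]),
])
result = pipe.run([])        # full run -> ["aligned", "variants"]
rerun = pipe.run(start=1)    # roll back and re-run only the "call" stage
```

The point of the sketch is the `start` parameter: instead of replaying the whole pipeline, a later stage resumes from the last snapshot, which is the "immediately roll back to any of the stages" behavior described above.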

"I think this big-memory trend will have a major impact on data center architecture and the industry as a whole. I think data-centric applications will become memory-centric. The underlying resources will be more heterogeneous, consumed by applications through a software abstraction," he added.

"Over the next decade, high-capacity memory-based architectures will gradually replace today's performance-tier storage. There will be an increasing number of applications requiring real-time processing of large amounts of data. We provide software-defined memory for the multi-cloud future."

