Intel® Optane™ DC Persistent Memory

Intel® Optane™ DC Persistent Memory is a groundbreaking technology that addresses real problems and will unleash performance and capabilities that we've only dreamed about. As part of the keynote, Lisa Spelman, Vice President, Data Center Group, and General Manager, Intel® Xeon® Processors and Data Center Marketing at Intel, hears from Erin Chapple, Corporate Vice President of the Windows Server Group at Microsoft, who discusses a new performance record featuring Hyper-V and Storage Spaces Direct with Intel® Optane™ DC Persistent Memory: 13.7 million IOPS, more than double the record set in 2016. Lisa also introduces Rich Brunner, CTO of Server Platform Technologies at VMware, who demonstrates how VMware configured a system with 6 TB of total volatile memory using Intel® Optane™ DC Persistent Memory in Memory Mode. Additionally, Lisa announces the Intel® Optane™ DC Persistent Memory hardware beta program.

Transcript

[MUSIC PLAYING]

Hi. I'm Lisa Spelman. And I'm looking forward to introducing you to the newest member of the data center family. But before we get to that, I want to talk about data. We often discuss how fast it's growing, the challenges of managing it, and how to obtain rapid value from it. That can be a conversation with a bias toward data as a burden.

But at Intel, we truly see it as an opportunity. Why is data increasing at an exponential rate? It's customers' insatiable demand for personalization of services. Admit it. You've bought what Amazon recommended for you. It's new services built on top of user-generated data-- either delivering something new or simply reimagining what we already do, but better. How many of you use Waze to get directions to locations you know exactly how to get to? That's all data.

And we're just at the beginning of finding out what's possible. In fact, 90% of the world's data was created in the last two years. And only about 1% of it is being utilized and analyzed. Talk about opportunity. IT transformation is accelerating. And new usages built on top of analytics and artificial intelligence have the power to accelerate business results, impact societal change for good, and just flat out make life easier.

So whether it's to improve customer experiences, enhance your security, or optimize business operations and efficiencies, the race to transform data from a cost burden to an asset is on. And this race is not without its challenges, which we're all trying to navigate. Realistically, most businesses require a simultaneous focus on cost efficiency of services, ever-increasing data, security vigilance, and the fundamental fact that IT is core to scaling growth within any new business.

At Intel, we are deeply invested in data-centric infrastructure to help our customers overcome these challenges. Building upon our foundation of 20 years of Xeon platform leadership, we have invested in a product lineup that addresses the fundamental challenge and opportunity that data represents: how to move data faster, breaking through the bottlenecks that constrain your applications; store more data more efficiently; and process everything using our workload-optimized silicon portfolio.

Now I'd like to focus on a subset of the challenges we talked about. As an IT leader or service provider in the data era, your organization needs to manage massive data sets and process them even faster, and to scale services on your virtualized infrastructure rapidly to serve more customers and deliver more services, all while maintaining or even improving quality of service as you scale. These are the types of challenges that motivate us at Intel to innovate. We've spent the last 20 years delivering leadership performance with Xeon processors.

But as we looked holistically at customer challenges, there was clearly more we could do. How do we bring even massive data sets closer to the CPU for faster time to insight? How could we not just incrementally improve downtime, but fundamentally change the game by taking database restart times from hours and minutes down to seconds? How might IT increase the efficiency and utilization of their infrastructure if they could scale applications by adding affordable, large-capacity memory?

These types of questions led us on a years-long innovation journey to deliver Xeon Scalable plus Optane technology. We support fast caching and storage today with Optane SSDs. And our newest addition, Optane Data Center Persistent Memory, is enabling fast, persistent memory in a DIMM form factor. Let's look at what this means in terms of the traditional memory and storage hierarchy that has developed over the last several decades.

We all know that this hierarchy has gaps between performance and efficiency tiers large enough that trade-offs are required. Over the past several years, we have been systematically investing to remove those trade-offs. Intel 3D NAND SSDs deliver efficient, scalable storage at greater performance than hard disks. Intel Optane Data Center SSD advancements improve remote data access performance, filling an important gap in warm-tier storage. Optane Data Center Persistent Memory will improve hot-tier memory capacity to unleash large data set analytics.

When paired with our next-generation Xeon Scalable processor, users will see real performance, throughput, and persistence advantages on their most memory-bound workloads, and up to an 8x improvement on I/O-intensive queries in Apache Spark compared to DRAM. Due to the large-capacity persistence, restart times will drop to seconds. And on Cassandra, you can see up to 9x more read transactions or up to 11x more users per system versus a comparable system with DRAM and NVMe drives. These are just some of the examples we're starting to see as the ecosystem internalizes what Optane technology can do. But enough from me. I'd like to give you the opportunity to hear from Erin Chapple, Corporate Vice President of the Windows Server Group, on how Microsoft is using this breakthrough technology.

Microsoft has embraced Intel Optane DC Persistent Memory both in the cloud, through Azure, and in modern on-premises deployments through Windows Server 2019 and SQL Server, giving customers and developers the flexibility to choose the right mix of use cases for their multi-cloud environments. Dense persistent memory like Intel's delivers more memory capacity and non-volatile data without the latency impact of storage. This allows larger data sets to sit closer to compute, which allows solving larger, more complex problems. For users of Windows Server and Microsoft SQL Server, this will mean more VMs per server, faster databases as large data sets are stored entirely in memory, and storage capacity and performance that make hyper-converged infrastructure even more compelling.

Large persistent memory has the potential to be one of those rare, truly transformative technologies. And we want to give Microsoft customers the platforms to fully utilize it. Out of the box, Windows Server 2019 includes native support for Intel Optane DC Persistent Memory, which allows developers to start creating the next generation of applications today to take advantage of the performance and persistence made possible by Intel. The increase in performance and manageability with Intel's Xeon Scalable platforms, Intel Optane DC Persistent Memory, and Windows Server 2019 drives more innovation, lower TCO, and accelerated deployments within our customer data centers.

[SWOOSH]

At Microsoft Ignite, Erin and her team announced a new record for performance featuring Hyper-V and Storage Spaces Direct with Intel Optane Data Center Persistent Memory. They delivered performance of 13.7 million IOPS, more than doubling the record set in 2016. And I think we're just at the beginning. In addition to Microsoft, we are working with other leading software companies to ensure that we have a broad ecosystem ready to serve customers at launch. With optimizations in the operating system and applications, users can take advantage of the persistence in Application Direct mode, or App Direct.

Persistence at these capacity levels is a unique and differentiated capability compared to other solutions, but that's not enough. We want a broad set of applications to benefit from the larger capacity sizes. And so we're also offering what we call Memory Mode. This allows an application to view the persistent memory DIMM as part of volatile main memory. With increased capacity, you get greater virtual machine, container, and application density while increasing the utilization of your Xeon processors. I'm also pleased to let you know that with both modes, we offer full data encryption.
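To make the contrast between the two modes concrete: Memory Mode is transparent to software, while App Direct exposes the persistent media directly to applications through ordinary loads and stores. The following is a minimal, hypothetical sketch (not from the keynote) of what App Direct-style access can look like at the operating-system level, assuming a file on a DAX-mounted persistent-memory filesystem; the path /mnt/pmem/appdirect-demo and the 4 KB region size are illustrative, and production code would typically use a persistent-memory library rather than raw mmap and msync.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_LEN 4096  /* illustrative region size */

int main(void)
{
    /* Hypothetical file on a DAX-mounted persistent-memory filesystem. */
    int fd = open("/mnt/pmem/appdirect-demo", O_CREAT | O_RDWR, 0666);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ftruncate(fd, REGION_LEN) != 0) {
        perror("ftruncate");
        close(fd);
        return 1;
    }

    /* Map the region so the application can load/store it directly. */
    char *region = mmap(NULL, REGION_LEN, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Ordinary byte-addressable stores, made durable by an explicit flush
       rather than by block I/O. */
    strcpy(region, "App Direct: byte-addressable and persistent");
    if (msync(region, REGION_LEN, MS_SYNC) != 0)
        perror("msync");

    munmap(region, REGION_LEN);
    close(fd);
    return 0;
}
```

The only point of the sketch is the shape of the model: the application addresses the data directly, and durability comes from explicit flushes of the mapped range.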

It turns out we're not the only ones that think this is pretty cool. Improving data center total cost of ownership is top of mind for many of our customers running virtualized cloud environments. There's no company in the industry that knows more about virtualization requirements than VMware. To share with you how we are working together, I'd like to introduce Rich Brunner, the CTO of the Server Technology Platform Group at VMware. Rich, thank you for being here today.

Absolutely. It's a pleasure.

Great. Now, we've been working together for a long time. And I know in your role you are out constantly with VMware's customers, hearing about what they're working on and what their challenges are. And I was hoping you'd be able to share with us a little bit about the big challenges and pain points you're hearing from them.

Sure. Some of the challenges that they talk about-- some of the pain points, if you will-- relate to the fact that they are generating more and more data.

Yep.

But they're not able to pull all of that into a given server because of the limitations of the memory in the server, or the fact that there isn't a persistence point close enough, with low enough access latency, to really be used. And they're looking for an ability to change that game. And they want something that's transparent. They want a transparent boost, because going in there and changing each of those VMs is a real pain in the neck.

I think, you know, we share the same customers. And we hear a lot of the same challenges. And I know our teams-- our companies-- have been working together for several years now on this Intel Optane Data Center Persistent Memory. Can you talk a little bit about what you've found through that process of working together with us?

We evaluated this technology, of course. And we saw this was going to be a real game changer, both for increasing volatile memory and for allowing customers to start using a much faster, lower-latency persistent memory. And so there are a lot of use cases that map nicely to that. In the area of legacy VMs and applications, the simplest thing we were immediately able to see is that we can get effectively two to three times the amount of volatile memory using Intel Optane DC Persistent Memory. And that really makes a big difference for consolidation and virtual machine density.

That's good. And I know you've shared before how VMware's going to support this for the vSphere product line. Can you share a little bit about the specifics of what you're doing in that space?

So first of all, for their legacy VMs, customers will see that there is more volatile memory, so they can run more of them without over-committing the system. And secondly, they'll be able to run much larger VMs, because there is far more volatile memory. Our VMs can go all the way up to at least 8 terabytes. And now with this technology, you can get up to 6 terabytes in a two-socket server. That's pretty incredible.

Yeah.

The second thing we can do is take the virtual storage device in the virtual machine and map it to Intel Optane DC Persistent Memory, using the persistence that way. For some workloads, that will also cut down on storage latency. And then lastly is the new application area-- using the new SNIA application programming model, where an application now has byte-addressable persistent memory with DRAM-like access latency and granularity. That is the holy grail, if you will-- where we really want to end up in the future. And this technology gives customers a chance to dip their toe in while still getting all the benefits of the volatile memory boost. So that's pretty cool.
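As a rough illustration of the SNIA-style programming model Rich mentions, here is a hedged sketch using PMDK's libpmem, which wraps the map-and-flush pattern for persistent memory. The file path, size, and message are assumptions made for the example, not details from the demo, and the library must be installed (link with -lpmem).

```c
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

#define PMEM_LEN 4096  /* illustrative mapping size */

int main(void)
{
    char *pmemaddr;
    size_t mapped_len;
    int is_pmem;

    /* Create and memory-map a file on a persistent-memory-aware filesystem
       (hypothetical path). */
    pmemaddr = pmem_map_file("/mnt/pmem/snia-example", PMEM_LEN,
                             PMEM_FILE_CREATE, 0666, &mapped_len, &is_pmem);
    if (pmemaddr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary byte-addressable stores into the mapping... */
    strcpy(pmemaddr, "hello, persistent memory");

    /* ...made durable with a cache flush if this is real persistent memory,
       or msync otherwise. */
    if (is_pmem)
        pmem_persist(pmemaddr, mapped_len);
    else
        pmem_msync(pmemaddr, mapped_len);

    pmem_unmap(pmemaddr, mapped_len);
    return 0;
}
```

The design point, as a sketch, is that durability is achieved with user-space flushes of cache lines rather than going through the block-storage stack.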

I think it's a lot for customers to look forward to and to have the opportunity to start working with. So I love to hear about what we're doing. But I think what's even better is when we can actually see it. Do you have something you could show me today?

Yes, I do.

OK.

So come with me over here. This server we're walking over to has been configured with Intel Optane DC Persistent Memory in Memory Mode. OK, so what we've started with is 28 virtual machines. Each of those virtual machines is running a Redis database, and that Redis database is being driven by Memtier. That's sort of our baseline. We're running that now. And the good news is, of course, Intel Optane DC Persistent Memory-- love that term--

You're getting good at it, by the way.

Yeah, thank you. No problems-- it runs just fine-- no surprises, no performance issues, no issues with memory latency. So we start with that. Now, every time I hit the start button, 28 more virtual machines requiring an additional 1.5 terabytes of memory are allocated, and these VMs start running.

OK.

So you can see I've clicked once.

OK.

[INAUDIBLE]

You've got your 28 virtual machines, good to go. You've got your CPU utilization at about 24%. Oh, there you go again.

And so now we're at 56--

OK.

--using 3 terabytes of allocated memory.

But you can do more. Let's see more.

OK.

OK.

Yeah, absolutely. So we can take this, surprisingly, all the way up to 112 of these VMs running the Redis database. And that requires 6 terabytes of volatile memory being provided by the Optane memory in Memory Mode. And that is pretty cool. You cannot do that with a traditional server. Our demo would have stopped with the first 28.

Yeah.

It wouldn't have been able to go further.

That's really cool. And you also see that utilization-- getting your CPU up to full utilization means you're just fundamentally utilizing your whole infrastructure better. So each time you added 28 VMs, you used another 1.5 terabytes of memory. And you were doing that using our 512-gigabyte DIMM.

Funny you should mention that. I happen to have one of these here.

Oh, you brought one.

And we have a whole lot more of them at VMware that we are using to get ready for the final launch.

That's excellent.

We are really excited about that launch. And we are going to be there with you on that day.

That's great. That's awesome. You know, I just want to thank you so much for being here, not just today, but being here in partnership on bringing this exciting technology to market. I think it has a lot of opportunity not just for both our companies, but for our customers-- and that's where the magic's going to happen. So I really appreciate not only your personal involvement and commitment, but the commitment of VMware.

Thank you very much.

I was fortunate to have Erin and Rich participate today. And I want to take a moment to recognize the other leading ISVs that we are working with to optimize applications and bring Optane Data Center Persistent Memory value to you, our customers. That deep industry collaboration, of course, extends to our OEMs and cloud service providers as well. Today, I am delighted to announce the Intel Optane Data Center Persistent Memory hardware beta program, a collaboration to enable delivery of early-access systems and services for adoption of persistent memory-enabled solutions. This is an opportunity to get a jump-start on your deployment, working with the leading names in technology.

I invite you to explore the virtual site and see the content from our partners as they share their unique perspectives. I hope hearing from our ISVs, OSVs, OEMs, and CSPs today gives you a glimpse into the memory and storage revolution that is upon us. Our goal is to ensure that you, our customers, have an easy path from adoption to value with this new technology. To that end, we are actively working on a family of Intel Select Solutions that will be available through our OEMs and will offer pre-verified infrastructure stacks optimized for specific workloads. Stay tuned in the coming months for more details.

I encourage you to stay up to date on this exciting product through this virtual event site, both today and in the future as we continue to add content. I've been at Intel for 18 years. And being part of the Optane Data Center Persistent Memory team is definitely a career highlight for me. To work on groundbreaking technology that addresses real problems and will unleash performance and capabilities we've only been able to dream about is inspiring. And I've got to say, it's really fun to do something that most people thought was impossible. I look forward to seeing what you revolutionize with Xeon and Optane Data Center Persistent Memory. Thank you.

[MUSIC PLAYING]