Intel® Optane™ DC persistent memory represents a groundbreaking innovation for data-intensive applications and network architectures. Delivered with the next-generation 2nd Generation Intel® Xeon® Scalable processors, this workload-optimized technology will help businesses extract more actionable insights from data, from cloud and databases to in-memory analytics and content delivery networks.
Intel® Optane™ DC persistent memory is an innovative memory technology that delivers a unique combination of affordable large capacity and support for data persistence. The technology can help businesses get faster insights from their data-intensive applications, while delivering consistently improved service scalability with higher virtual machine and container density.
See how Intel® Optane™ DC Persistent Memory Module (DCPMM) delivers breakthrough restart times for in-memory databases and reduced wait times associated with fetching large data sets from system storage.
Intel® Optane™ DC Persistent Memory Module (DCPMM) allows customers to break through memory capacity barriers for unprecedented virtual machine, container, and application density. Deliver consistent QoS levels at scale to reach more customers and users while realizing improved total cost of ownership (TCO) across both hardware and operating costs.
Intel® Optane™ DC persistent memory will help transform content delivery networks, bringing greater memory capacity to deliver immersive content at the intelligent edge and provide compelling user experiences.
Read how Intel, SAP, and Accenture deliver a solution for high-performance, high-capacity, low-cost memory.
With Intel® Deep Learning Boost (Intel® DL Boost) and Intel® Optane™ DC persistent memory, TACC can now offer researchers new ways to solve enduring problems.
Ready to talk with Intel to help you accelerate your data center modernization journey?
Performance results are based on testing as of the dates shown in configurations and may not reflect all publicly available security updates. See configuration disclosure for details. No product or component can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.in/benchmarks.
Performance results have been estimated based on Microsoft internal tests as of 9/25/2018 using Microsoft* Storage Spaces Direct and Windows Server* 2019 with Intel® Optane™ DC persistent memory and may not reflect all publicly available security updates. No product can be absolutely secure. Configurations: VMFleet, 312 VMs (26 per node) on a 12-node cluster, random 4 KB read I/O to a 9.36 TiB active working set; 2x 28C future Intel® Xeon® Scalable processors (Cascade Lake), 384 GB DRAM (12x 32 GB), 1.5 TB of Intel® Optane™ DC persistent memory, 4x 8 TB Intel® P4510 NVMe* (Cliffdale) SSDs. The 2018 record was 13.7M IOPS; the 2016 record was 6.7M IOPS. Results have been estimated based on tests conducted by Microsoft on pre-production systems, and are provided to you for informational purposes.
Performance results have been estimated based on VMware* internal tests as of 10/17/2018 using a future version of VMware vSphere* running the Redis* memtier workload and may not reflect all publicly available security updates. No product can be absolutely secure. Configurations: 2x 28C future Intel® Xeon® Scalable processors (codenamed Cascade Lake) on the WolfPass platform with 12x 16 GB DDR4 RDIMMs @ 2666 MT/s and 12x 512 GB Intel® Optane™ DC persistent memory modules, running VMware vSphere*; 112 guest VMs running CentOS 7, Redis 4.0.11, memtier benchmark with an 80/20 GET/SET ratio. Result: at 112 guest VMs, 4.1 ms measured latency and 99% measured CPU utilization; at 28 guest VMs, 2.3 ms measured latency and 25% measured CPU utilization.