Intel® Movidius™ Myriad™ X Vision Processing Unit


Technical Specifications


Product Collection
Movidius™ Myriad™ VPU
Lithography
16 nm
Warranty Period
1 yr

Processor Base Frequency
700 MHz

Supplemental Information

Embedded Options Available

Package Specifications

Operating Temperature (Maximum)
105 °C
Operating Temperature (Minimum)
-40 °C
Sockets Supported
Operating Temperature Range
-40°C to 105°C
Package Size
14 mm x 14 mm x 0.9 mm



Intel® Movidius™ Myriad™ X VPU

The Intel® Movidius™ Myriad™ X VPU is Intel's first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for deep neural network inference. The Neural Compute Engine, in conjunction with the 16 powerful SHAVE cores and a high-throughput intelligent memory fabric, makes the Movidius Myriad X ideal for on-device deep neural network and computer vision applications.

The Movidius Myriad X VPU is programmable with the Intel® Distribution of OpenVINO™ toolkit, for porting neural networks to the edge, and via the Myriad Development Kit (MDK), which includes all necessary development tools, frameworks, and APIs to implement custom vision, imaging, and deep neural network workloads on the chip.
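As a rough sketch of that porting workflow, the snippet below loads a converted model and compiles it for the VPU using the OpenVINO Runtime Python API. The model path and helper names are illustrative assumptions; "MYRIAD" is the OpenVINO device plugin name for Myriad X-based accelerators.

```python
# Hedged sketch of deploying a network with the OpenVINO Python API.
# Assumes openvino is installed and "model.xml" is an IR model already
# converted from a framework such as TensorFlow or Caffe.
try:
    from openvino.runtime import Core  # OpenVINO >= 2022.1 Python API
except ImportError:
    Core = None  # OpenVINO not installed; the sketch degrades gracefully


def pick_device(available):
    """Prefer the Myriad X VPU plugin ("MYRIAD") when present, else CPU."""
    return "MYRIAD" if "MYRIAD" in available else "CPU"


def compile_for_vpu(model_path):
    """Illustrative helper (hypothetical name): compile an IR model for the VPU."""
    if Core is None:
        return None
    core = Core()
    model = core.read_model(model_path)       # load the IR (.xml/.bin pair)
    device = pick_device(core.available_devices)
    return core.compile_model(model, device)  # compile for the chosen target
```

Falling back to "CPU" keeps the same application code running on hosts without a VPU attached, which is a common pattern when prototyping before deploying to the edge.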



Dedicated Neural Compute Engine

The Movidius Myriad X is Intel's first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for running on-device deep neural network applications. Interfacing directly with other key components via the intelligent memory fabric, the Neural Compute Engine delivers outstanding performance per watt while avoiding the data-flow bottlenecks common in other architectures.

16 High Performance SHAVE Cores

These programmable processors, with an instruction set tailored for computer vision, can be used to run traditional computer vision workloads, or can complement the Neural Compute Engine by running custom layer types for CNN applications, thanks to extensive support for sparse data structures.
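To illustrate the kind of kernel that sparse-data support accelerates, here is a compressed sparse row (CSR) matrix-vector product in plain Python. This is an illustrative sketch of the data structure only, not SHAVE code.

```python
# Illustrative CSR (compressed sparse row) matrix-vector multiply, the kind
# of sparse kernel that benefits from hardware support for sparse data.
def csr_matvec(data, indices, indptr, x):
    """y = A @ x where A is stored as CSR arrays (data, indices, indptr)."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        # Only the stored (non-zero) entries of this row are visited.
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y


# Dense A = [[1, 0, 2],
#            [0, 3, 0]]  stored sparsely:
data, indices, indptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

The zeros are never stored or touched, which is why sparse-aware hardware can skip most of the arithmetic in a pruned CNN layer.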

Enhanced Vision Accelerator Suite

Intel has added a new suite of vision accelerators to the Movidius Myriad X VPU, including a new stereo depth block capable of processing dual 720p feeds. With this suite of vision accelerators, key vision workloads can be offloaded onto fixed-function hardware to improve power efficiency.
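For intuition about what a stereo depth block computes, the toy sketch below finds, for each pixel on a left scanline, the horizontal shift (disparity) that best matches the right scanline, using sum-of-absolute-differences over a small window. This is a pure-Python illustration of the principle only; the VPU performs this in fixed-function hardware on dual camera feeds.

```python
# Toy 1-D block-matching stereo: disparity per pixel via SAD matching.
def disparity_1d(left, right, max_disp=4, win=1):
    n = len(left)
    disps = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):
            # SAD cost over a (2*win+1)-pixel window, indices clamped at edges.
            cost = sum(
                abs(left[min(max(x + o, 0), n - 1)]
                    - right[min(max(x - d + o, 0), n - 1)])
                for o in range(-win, win + 1)
            )
            if cost < best_cost:
                best_d, best_cost = d, cost
        disps.append(best_d)
    return disps


# A bright feature shifted 2 pixels between the views yields disparity 2 there.
left = [0, 0, 0, 0, 9, 9, 0, 0]
right = [0, 0, 9, 9, 0, 0, 0, 0]
print(disparity_1d(left, right))
```

Disparity maps directly to depth (nearer objects shift more between the two cameras), which is why this search is worth dedicating silicon to.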

Flexible Image Processing and Encode

The Movidius Myriad X VPU features a fully tunable ISP pipeline for the most demanding image and video applications. It also features hardware-based encode for video at up to 4K resolution, making the VPU a single-chip solution for all imaging, computer vision, and CNN workloads.

Support for Multiple VPU Configurations

A PCIe interface allows the VPU to be used as an edge AI accelerator in an edge server: multiple Movidius Myriad X VPUs can be configured on a single PCIe add-in card, delivering even greater performance and flexibility.
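One simple way a host might exploit several VPUs on one card is to spread incoming inference requests across them round-robin. The sketch below is a hypothetical host-side illustration; the device labels and function names are invented for the example.

```python
# Hypothetical sketch: round-robin dispatch of inference requests across
# several accelerator devices, as a host might do with multiple Myriad X
# VPUs on one PCIe add-in card. Device handles here are just labels.
from itertools import cycle


def make_dispatcher(devices):
    ring = cycle(devices)

    def dispatch(request):
        return (next(ring), request)  # (device chosen, work item)

    return dispatch


dispatch = make_dispatcher(["vpu0", "vpu1", "vpu2"])
assignments = [dispatch(f"frame{i}")[0] for i in range(6)]
print(assignments)  # ['vpu0', 'vpu1', 'vpu2', 'vpu0', 'vpu1', 'vpu2']
```

Real deployments would typically also track per-device queue depth, but round-robin captures the basic idea of scaling throughput by adding VPUs.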

Neural Compute Engine: Hardware Based Acceleration for Deep Neural Networks

Intel's Movidius Myriad X VPU features the all-new Neural Compute Engine, a purpose-built hardware accelerator designed to dramatically increase the performance of deep neural networks without compromising the low-power characteristics of the Movidius VPU product line. Featuring an array of MAC blocks and interfacing directly with the intelligent memory fabric, the Neural Compute Engine can rapidly perform the calculations needed for deep inference without hitting the so-called "data wall" bottleneck encountered by other processor designs. On the Movidius Myriad X architecture, the Neural Compute Engine combined with the 16 SHAVE cores delivers up to 916 billion neural network inference operations per second, more than 10x the 80 billion operations per second achievable by the Myriad 2 VPU's SHAVE processors.
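The 10x claim follows directly from the two throughput figures above; a quick arithmetic check:

```python
# Quick check of the claim above: 916 billion inference operations/s
# (Myriad X NCE + 16 SHAVEs) vs. 80 billion (Myriad 2 SHAVEs).
myriad_x_ops = 916e9
myriad_2_ops = 80e9
speedup = myriad_x_ops / myriad_2_ops
print(f"{speedup:.2f}x")  # 11.45x, i.e. more than 10x
assert speedup > 10
```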

  • Native FP16 and fixed-point 8-bit support
  • End-to-end acceleration for many common deep neural networks
  • Rapid porting and deployment of neural networks in Caffe and TensorFlow formats
  • High power efficiency in terms of inferences/second/watt
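The two natively supported arithmetic formats from the first bullet can be demonstrated with the standard library alone; the quantization scale below is an illustrative choice, not a VPU parameter.

```python
# Illustration of the two arithmetic formats the engine supports natively:
# IEEE half precision (FP16) and 8-bit fixed point. Uses only the standard
# library; 'e' is Python's struct format code for binary16.
import struct


def to_fp16(x):
    """Round-trip a float through IEEE binary16, showing FP16 precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]


def quantize_int8(x, scale):
    """Map a real value to a signed 8-bit integer with a given scale."""
    return max(-128, min(127, round(x / scale)))


print(to_fp16(0.1))               # ~0.09998, the nearest FP16 value
print(quantize_int8(0.5, 1 / 128))   # 64
print(quantize_int8(100.0, 1 / 128)) # 127 (clamped to the int8 range)
```

Trading precision for narrower data types is what lets an accelerator fit more MAC operations into the same silicon and memory bandwidth.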

Related Offerings

Intel® FPGAs

Intel® FPGAs and SoCs, along with IP cores, development platforms, and a software developer design flow, provide a rapid development path with the flexibility to adapt to evolving challenges and solutions in each part of the video or vision pipeline for a wide range of video and intelligent vision applications.

Learn about Intel® FPGAs

Intel® Neural Compute Stick 2

Develop, fine-tune, and deploy convolutional neural networks (CNNs) on low-power applications that require real-time inferencing with Intel® Neural Compute Stick 2.

Discover Intel® Neural Compute Stick

Intel® Vision Accelerator Design

Edge AI accelerator cards that let you deploy power-efficient deep neural network inference for fast, accurate video analytics and computer vision applications.

Learn more about Intel® Vision Accelerator Design

Intel® DevCloud for the Edge

Prototype and experiment with AI workloads for computer vision on Intel hardware with Intel® DevCloud for the Edge.

Discover Intel® DevCloud for the Edge

Intel® Distribution of OpenVINO™ Toolkit

Harness the full potential of AI and computer vision across multiple Intel® architectures to enable new and enhanced use cases in health and life sciences, retail, industrial, and more.

Explore OpenVINO™

Notices and Disclaimers

Product and Performance Information

This feature may not be available on all computing systems. Please check with the system vendor to determine if your system delivers this feature, or reference the system specifications (motherboard, processor, chipset, power supply, HDD, graphics controller, memory, BIOS, drivers, virtual machine monitor-VMM, platform software, and/or operating system) for feature compatibility. Functionality, performance, and other benefits of this feature may vary depending on system configuration.

Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Performance results are based on testing as of the dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.


Intel® technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.