Use an optimized software stack to simplify and accelerate artificial intelligence (AI) success
AT A GLANCE
- Intel has worked closely with the AI ecosystem for a number of years, both optimizing and developing a broad range of software tools, frameworks, and libraries that satisfy the most demanding data science needs – from development to deployment and scaling.
- We recently launched the Gold version of the Intel® AI Analytics Toolkit, which provides optimized Python libraries, deep learning frameworks, and a lightweight parallel dataframe – all built using oneAPI libraries – to maximize performance for end-to-end data science and AI workflows on Intel platforms.
- This consolidated package makes it easier for developers and data scientists to obtain Intel’s latest analytics and AI optimizations in one place, and ensures the software components work seamlessly with one another as well as with other Intel® oneAPI toolkits, which support additional capabilities for Intel CPUs and discrete accelerators.
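The parallel dataframe mentioned above is designed as a drop-in replacement for pandas (the toolkit ships the Intel Distribution of Modin for this purpose). A minimal sketch of the drop-in pattern, with a graceful fallback for environments where neither package is installed:

```python
# Sketch of the "drop-in" pattern used by the toolkit's parallel
# dataframe (Modin): it mirrors the pandas API, so existing code is
# often parallelized by changing only the import line. The fallback
# chain below is illustrative, not part of the toolkit itself.
try:
    import modin.pandas as pd  # parallel, pandas-compatible dataframe
    backend = "modin"
except ImportError:
    try:
        import pandas as pd    # stock pandas, same downstream code
        backend = "pandas"
    except ImportError:
        pd, backend = None, "none"

print("dataframe backend:", backend)
```

Because the API is mirrored, the rest of the script (reads, joins, groupbys) stays unchanged regardless of which backend was imported.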
Artificial intelligence (AI) data scientists and application developers rely on their software. For over 50 years, we have worked with our customers to ensure that their applications work seamlessly on our hardware, taking into account every layer of the solution stack, including applications, orchestration and hardware. We have also worked closely with the AI ecosystem for a number of years, both optimizing and developing a broad range of software tools, frameworks, and libraries that will satisfy the most demanding data science needs.
As data volumes grow to petabyte scale, our aim is to ensure that our customers and partner ecosystem can easily build, run, and scale AI workloads on their existing Intel® architecture, without needing to make significant investments of time and money in building new software stacks.
Below, we outline some of the key AI software offerings that Intel helps optimize and that can help your organization accelerate time to insight with machine learning and deep learning on Intel® hardware.
Implement Deep Learning Faster
Many of the more complex AI use cases today, such as computer vision or speech recognition, depend on deep learning algorithms. Deep learning frameworks and libraries offer data scientists, developers, and researchers the ability to use higher-level programming languages, such as Python or R, to train and deploy algorithms based on deep neural networks.
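To make concrete what these frameworks abstract away, here is a deliberately tiny, framework-free sketch: a one-hidden-layer network trained on the XOR problem with plain gradient descent in standard Python. Real workloads would of course use an optimized framework rather than hand-written loops like these.

```python
import math
import random

# Minimal illustration of the mechanics a deep learning framework
# handles for you: forward pass, backpropagation, and weight updates.
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training data: (inputs, target)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def predict(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

def loss():
    return sum((predict(x) - t) ** 2 for x, t in data)

initial_loss = loss()

lr = 0.5
for epoch in range(5000):
    for x, t in data:
        # forward pass
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
        y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        # backward pass (squared-error loss, sigmoid derivatives)
        dy = (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = loss()
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

Even this toy example requires dozens of lines of error-prone arithmetic; frameworks provide the same mechanics as highly optimized, hardware-aware primitives behind a few lines of high-level code.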
Intel has worked with the AI ecosystem to contribute code to the most popular deep learning frameworks, such as TensorFlow, so that they are optimized for Intel architecture.
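As one concrete example, stock TensorFlow 2.x exposes its oneDNN-optimized kernels through a documented environment variable, `TF_ENABLE_ONEDNN_OPTS`, which must be set before TensorFlow is imported. A small sketch (default behavior varies by TensorFlow version and platform, so treat this as illustrative):

```python
import os

# TF_ENABLE_ONEDNN_OPTS toggles the oneDNN-optimized kernels in stock
# TensorFlow 2.x; it must be set before the first `import tensorflow`.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

try:
    import tensorflow as tf
    print("TensorFlow", tf.__version__, "loaded with oneDNN opts requested")
except ImportError:
    # TensorFlow is not installed in this environment; the flag is
    # simply in place for when it is.
    print("TensorFlow not installed; oneDNN flag set")
```

In recent TensorFlow releases the oneDNN optimizations are enabled by default on x86 Linux builds, so the flag is mainly useful for older versions or for explicitly disabling them (set to `0`) when comparing performance.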
A Shortcut to Optimized AI
Of course, building your own AI applications from scratch may not be the right choice for every organization. You may choose to implement a ready-made solution, working with a specialist system integrator (SI), independent software vendor (ISV), and/or original equipment manufacturer (OEM). In this case, a good place to start is the Intel® AI Builders ecosystem, which brings together expert solution providers from a wide range of industries and geographies. For example, a manufacturing organization looking to add intelligence to its existing equipment might use Asquared IoT’s Equilips 4.0, an embedded AI solution that works with virtually any manufacturing infrastructure – whether legacy or new – and eliminates the need for any network communication or external computing infrastructure. It is based on innovative non-touch, non-intrusive sensing methods, such as audio and visual analytics, so it can be easily and cost-effectively retrofitted to any machine. The result is smart infrastructure that generates the data for increased visibility into production lines, proactive maintenance, improved decision-making, and operational efficiency.
Explore the various options for achieving your AI goals in the article ‘Four Paths to AI’, or learn more about the technologies that underpin Intel architecture’s AI capabilities: Intel® Xeon® Scalable processors and Intel® Deep Learning Boost.
1 3.75x improvement with AI Inferencing Intel Select Solution. The solution was tested with KPI Targets: OpenVINO/ ResNet50 on INT8 on 02-26-2019 with the following hardware and software configuration:
Base configuration: 1 Node, 2x Intel® Xeon® Gold 6248; 1x Intel® Server Board S2600WFT; Total Memory 192 GB, 12 slots/16 GB/2666 MT/s DDR4 RDIMM; HyperThreading: Enable; Turbo: Enable; Storage(boot): Intel® SSD DC P4101; Storage(capacity): At least 2 TB Intel® SSD DC P4610 PCIe NVMe; OS/Software: CentOS Linux release 7.6.1810 (Core) with Kernel 3.10.0-957.el7.x86_64; Framework version: OpenVINO 2018 R5 445; Dataset: sample image from benchmark tool; Model topology: ResNet 50 v1; Batch Size: 4; nireq: 20. The solution was tested with KPI Targets: TensorFlow/ResNet50 on INT8 on 03-07-2019 with the following hardware and software configuration:
Base configuration: 1 Node, 2x Intel® Xeon® Gold 6248; 1x Intel® Server Board S2600WFT; Total Memory 192 GB, 12 slots/16 GB/2666 MT/s DDR4 RDIMM; HyperThreading: Enable; Turbo: Enable; Storage(boot): Intel® SSD DC P4101; Storage(capacity): At least 2 TB Intel® SSD DC P4610 PCIe NVMe; OS/Software: CentOS Linux release 7.6.1810 (Core) with Kernel 3.10.0-957.el7.x86_64; Framework version: intelaipg/intel-optimizedtensorflow:PR25765-devel-mkl; Dataset: Synthetic from benchmark tool; Model topology: ResNet 50 v1; Batch Size: 80
Notices and Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.