Industrial Machine Vision
Accelerating Intelligent Vision
Cameras and other equipment used in video surveillance and machine vision perform a variety of tasks, such as Image Signal Processing (ISP), video transport, format conversion, compression, and analytics. Because of frequent technology improvements to camera sensors, the trend to replace analog cameras with smart Internet Protocol (IP) cameras, and the advancement of artificial intelligence (AI) deep learning-based video analytics, FPGAs meet many of the key requirements of vision-based systems:
- High performance per watt.
Combined with Intel® CPUs, FPGA-based accelerator solutions are now available for the architectural redesign of next-generation vision-based equipment.
Machine vision (MV) uses a combination of high-speed cameras and computers to perform complex inspection tasks in addition to digital image acquisition and analysis. You can use the resulting data for pattern recognition, object sorting, robotic arm control, and more. Intel® FPGAs are ideal for MV cameras, allowing designs to accommodate a wide variety of image sensors as well as MV-specific interfaces. FPGAs can also be used as vision processing accelerators inside the edge computing platform to harness the power of artificial intelligence (AI) deep learning for analysis of the MV data.
With machine vision (MV) technology, you no longer need people to perform inspections for quality control. MV applications include:
- Defect detection.
- Guidance, part tracking, and identification.
- Optical character recognition and verification (OCR/OCV).
- Pattern recognition.
- Packaging, product, surface, and web inspections.
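At their core, many of the inspection tasks above reduce to comparing captured frames against expected results. Here is a minimal sketch of golden-template defect detection in plain Python; the images, tolerance value, and function name are illustrative assumptions, not taken from any Intel library:

```python
# Illustrative sketch: golden-template defect detection, a common MV task.
# Compares a captured grayscale frame against a known-good reference and
# flags pixels whose deviation exceeds a tolerance.

def find_defects(frame, golden, tolerance=16):
    """Return (row, col) coordinates where frame deviates from golden."""
    defects = []
    for r, (frow, grow) in enumerate(zip(frame, golden)):
        for c, (f, g) in enumerate(zip(frow, grow)):
            if abs(f - g) > tolerance:
                defects.append((r, c))
    return defects

golden = [[128] * 4 for _ in range(4)]   # known-good 4x4 patch
frame = [row[:] for row in golden]
frame[2][1] = 40                         # simulated scratch/defect
print(find_defects(frame, golden))       # -> [(2, 1)]
```

Production systems run equivalent per-pixel comparisons in FPGA fabric at line rate rather than in host software.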
Intel® FPGA Advantages—Performance, Flexibility, and Connectivity
As illustrated below, FPGAs, such as the Intel® MAX® 10 and Cyclone® IV device families, enable MV designers like you to:
- Achieve high-performance image preprocessing on frame grabber boards (using protocols such as Camera Link), approaching real-time frame rates.
- Integrate real-time functions into the camera system for pixel-oriented gain control, compensation of defective pixels, increased dynamic range, and more.
- Capitalize on the flexibility of FPGAs to support evolving camera interfaces.
- Implement various bus interfaces, such as PCI*, PCIe*, Gigabit Ethernet, USB, and others.
- Integrate a wide range of functions such as image capture, camera interfaces, preprocessing, and communication functions, all within a single FPGA.
- Using the Cyclone® V SoC, combine your image signal processing pipeline with machine vision algorithms executing on the Arm* Cortex*-A9 hard processor system to build complete machine vision systems on chip.
- Use Simulink and Embedded Coder from The MathWorks* to generate C/C++ code for Cyclone® V SoCs. When used in combination with Intel SoC support from HDL Coder, this solution can be utilized in a hardware/software workflow spanning simulation, prototyping, verification, and implementation on Intel SoCs. For more information, visit the MathWorks page.
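As a software illustration of two of the in-camera functions listed above (pixel-oriented gain control and compensation of defective pixels), here is a minimal sketch in plain Python; the data and function names are hypothetical:

```python
# Sketch of two per-pixel preprocessing steps the text attributes to in-camera
# FPGA logic: flat-field gain correction and dead-pixel replacement by
# neighbor averaging. Both operate on one scan line at a time, which is why
# they map well onto line-buffered FPGA pipelines.

def apply_gain(row, gains):
    """Per-column gain correction, clamped to the 8-bit range."""
    return [min(255, int(p * g)) for p, g in zip(row, gains)]

def fix_dead_pixel(row, idx):
    """Replace a known-bad pixel with the mean of its horizontal neighbors."""
    left = row[idx - 1] if idx > 0 else row[idx + 1]
    right = row[idx + 1] if idx < len(row) - 1 else row[idx - 1]
    fixed = row[:]
    fixed[idx] = (left + right) // 2
    return fixed

row = [100, 100, 0, 100]        # column 2 is a stuck-low pixel
row = fix_dead_pixel(row, 2)    # -> [100, 100, 100, 100]
print(apply_gain(row, [1.0, 1.1, 1.0, 2.0]))
```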
Flexibility—FPGAs Support Different Sensor and MV Interfaces
GigE Vision provides an open, high-performance, scalable framework for image streaming and device control over Ethernet networks. This interface standard provides an environment for networked machine vision systems based on switched client/server architectures, allowing you to connect multiple cameras to multiple computers.
It includes the following characteristics:
- Specification managed by the Automated Imaging Association (AIA).
- Protocol implemented over Ethernet/IP/UDP with data transfer rates up to 1 Gbps using Gigabit Ethernet, scalable to 10 Gbps with 10 Gigabit Ethernet.
- Data transfer length up to 100 m with copper.
- Use of switches, repeaters, or fiber optic converters to increase the data transfer length.
- Use of low-cost cables (CAT5e or CAT6), standard connectors, and hardware.
GigE Vision Application Example Using Multiple GigE Cameras
Courtesy of Pleora Technologies Inc.
You can obtain several key benefits by implementing GigE Vision applications using FPGAs such as Intel® MAX® 10 FPGA, Cyclone® IV, and Cyclone® V device families:
- Integration of image capture, camera interfaces, preprocessing, and communications within a single FPGA device.
- Flexibility to support different camera interfaces and bus interfaces as they evolve.
- Lower total cost of ownership (TCO) with reduced board size, reduced component count, and minimal hardware re-spins.
- Reduced risk of obsolescence due to long FPGA life cycles and easy migration to newer FPGA families.
For more information, please contact your local Intel® distributor sales office or visit our partners.
Frame grabbers link MV cameras to the host PC that runs the machine vision algorithms. Current versions typically use PCIe to transfer the video from the camera to the host processor in an industrial computer.
Camera Link is a serial communication protocol designed for point-to-point automated vision applications. It is based on the Texas Instruments* (formerly National Semiconductor) Channel Link interface which has been extended to support general-purpose LVDS data transmission.
Maintained by the Automated Imaging Association (AIA), the Camera Link specification standardizes the camera interface, cables, and frame grabbers used to convert and transmit camera data to a computer, usually across PCI* or PCIe* buses. You'll find the Camera Link interface used in applications like machine vision systems and smart cameras.
Camera Link supports several configurations:
- The base configuration uses up to 24 bits of pixel data (and 3 bits of video sync) to provide a video throughput of up to 255 megabytes per second (MB/s).
- The medium configuration adds another 24 bits of data to provide a video throughput of up to 510 MB/s.
- The full and extended configurations use 64 bits of data (or wider) to provide a video throughput of up to 680 MB/s (or greater).
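The throughput of each configuration follows directly from the parallel data width and the pixel clock; 85 MHz is the maximum pixel clock defined by the Camera Link specification. Expressed in megabytes per second:

```python
# Worked check of the Camera Link configuration throughputs:
# data width (bits) x pixel clock (Hz) / 8 -> bytes per second.

def camera_link_mbytes_per_s(data_bits, pixel_clock_hz=85_000_000):
    """Peak throughput in MB/s for a given parallel data width."""
    return data_bits * pixel_clock_hz / 8 / 1_000_000

print(camera_link_mbytes_per_s(24))  # base:   255.0 MB/s
print(camera_link_mbytes_per_s(48))  # medium: 510.0 MB/s
print(camera_link_mbytes_per_s(64))  # full:   680.0 MB/s
```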
Benefits of Using FPGAs in Camera Link Applications
As the following figure illustrates, you can take advantage of low-cost FPGAs—such as Cyclone® IV and Cyclone® V devices—to create high-performance Camera Link applications that lower your TCO and increase your return on investment (ROI). If you need an even higher level of performance, use the Arria® V GX or Stratix® V FPGAs from Intel.
Camera Link Application Example Using a Cyclone® IV FPGA from Intel
With FPGAs, you can do the following:
- Accelerate image preprocessing (such as pixel-oriented gain control, compensation for defective pixels, and increased dynamic range) to approach real-time frame rates.
- Integrate image capturing, camera interfacing, preprocessing, and communications on a single FPGA platform.
- Support different camera/bus/communication interfaces in your design, as they evolve (with no new hardware required).
- Reduce your board size and component count, minimize hardware re-spins, get your product to market faster, and keep your product in the market much longer to lower your TCO.
For more information, please contact your local Intel distributor sales office or visit our industrial partners.
USB 3 Vision
The benefits of using USB 3.0 are many: an abundance of USB 3 ports on current PCs, low cost, transfer rates up to 5 Gbps, low power and CPU overhead, and combined data and power on a single cable up to 5 meters without active repeaters.
CoaXPress can transmit up to 6.25 Gbps per cable over cable lengths up to 130 m. Quad-link cables and connectors enable up to 25 Gbps for very demanding, high-bandwidth connections to high-performance cameras. Combinations of single, dual, or quad camera support can be managed with cards supporting the quad-link CoaXPress interface.
Thunderbolt™ allows up to 10 Gbps transfers, and Thunderbolt™ 2 allows up to 20 Gbps. Initially deployed on Apple computers and laptops, it is becoming widely available as these high-performance links appear in more chipsets and PC motherboards. It offers benefits similar to USB 3.0, but with higher transfer rates and the option of copper or optical cables.
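Taken together, these interfaces span a wide bandwidth range. A small sketch for picking links with headroom for a given sensor data rate, using the nominal rates quoted in this article (the table layout, function name, and 4K example are illustrative):

```python
# Nominal peak bandwidths of the vision interfaces discussed in this article.
INTERFACES_GBPS = {
    "GigE Vision (1 GbE)": 1.0,
    "USB3 Vision": 5.0,
    "CoaXPress (single link)": 6.25,
    "Thunderbolt": 10.0,
    "Thunderbolt 2": 20.0,
    "CoaXPress (quad link)": 25.0,
}

def suitable_links(required_gbps):
    """Interfaces (slowest first) that can carry the required data rate."""
    ranked = sorted(INTERFACES_GBPS.items(), key=lambda kv: kv[1])
    return [name for name, bw in ranked if bw >= required_gbps]

# A 4K 8-bit sensor at 30 fps produces about 2 Gbps of raw pixel data:
print(suitable_links(3840 * 2160 * 8 * 30 / 1e9))
```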
Thunderbolt is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
Government, municipalities, financial institutions, and businesses are using video surveillance for more than image recording and after-the-fact analysis. Artificial intelligence (AI) is providing real-time, automated, and actionable insights for multiple live camera streams. These advances are powering innovation beyond crime prevention/security into new market business models such as retail asset management and more efficient industrial smart factories.
The challenge for camera manufacturers is where to insert the “smart/analytics” functions within the end-to-end video solution. Some 'simple' intelligence can be implemented in smart Internet Protocol (IP) cameras at low power and with low price points. More complex analytic functions can be implemented 'near the edge' with on-premise gateways or Network Video Recorders (NVRs) supporting multiple, live camera streams. Enterprise class analytics can be implemented using widely available cloud-based computing resources. In general, digital high-definition (HD) IP surveillance cameras are replacing analog cameras because of lower installation costs, scalability, and the ability to add intelligence. Intel® Vision Products provide solutions ranging from the camera to the cloud and include the latest in AI deep learning-based analytics.
Intel® FPGAs play a key role in these next-generation HD IP cameras and NVRs:
- Support for AI deep learning frameworks, models, and topologies to implement FPGA-based convolutional neural network (CNN) inferencing accelerators (read about the Intel® FPGA Deep Learning Acceleration Suite).
- Flexibility to interface to many types of image sensors.
- Fast processing that incorporates a full image signal processing (ISP) pipeline as intellectual property (IP), including techniques such as defect pixel correction, gamma correction, dynamic range correction, and noise reduction.
- Cost-effective solution that can incorporate functions, such as sensor interfacing, image compression, and even pan-tilt-zoom (PTZ) control.
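One of the ISP stages named above, gamma correction, is typically implemented as a precomputed lookup table, a form that maps naturally onto FPGA block RAM. A minimal software sketch; the gamma value of 2.2 is a common choice here, not a requirement:

```python
# Gamma correction via a 256-entry lookup table, the usual form for an
# 8-bit ISP stage. The table is computed once, then applied per pixel.

GAMMA = 2.2
LUT = [round(255 * (i / 255) ** (1 / GAMMA)) for i in range(256)]

def gamma_correct(row):
    """Apply the gamma LUT to one scan line of 8-bit pixels."""
    return [LUT[p] for p in row]

print(gamma_correct([0, 64, 128, 255]))  # endpoints map to themselves
```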