Ultra-Low Power AI Inference Accelerator
The IoT device market is growing steadily, and with it the demand for edge devices that run deep learning locally, performing inference on the device itself rather than in the cloud. Because no data is sent to the cloud, LeapMind's approach eliminates the need for a fast internet connection, avoids cloud round-trip latency, and strengthens security. Efficiera, the commercialized form of LeapMind's "extremely low bit quantization" technology, is an ultra-low-power AI inference accelerator IP. LeapMind's edge appliance is powered by Intel Edge-Centric FPGAs.
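To give a sense of what "extremely low bit quantization" means, the following is a minimal sketch, assuming a scheme with 1-bit (binary) weights and 2-bit activations; the specific functions and bit widths here are illustrative assumptions, not LeapMind's actual implementation:

```python
import numpy as np

def binarize_weights(w):
    # 1-bit weights: keep only the sign, scaled by the mean
    # absolute value so the dynamic range is roughly preserved.
    scale = np.mean(np.abs(w))
    return np.where(w >= 0, 1.0, -1.0) * scale

def quantize_activations_2bit(x, x_max=2.0):
    # 2-bit activations: clip non-negative (post-ReLU) values to
    # [0, x_max] and snap them to 4 evenly spaced levels.
    levels = 2 ** 2 - 1  # 3 intervals between 4 levels
    x = np.clip(x, 0.0, x_max)
    return np.round(x / x_max * levels) / levels * x_max

w = np.array([0.7, -0.3, 0.1, -0.9])
a = np.array([0.0, 0.5, 1.3, 2.6])
print(binarize_weights(w))            # each entry becomes +/- mean(|w|) = +/- 0.5
print(quantize_activations_2bit(a))   # values snapped to {0, 2/3, 4/3, 2}
```

Storing weights in 1 bit instead of 32 gives a 32x reduction in weight memory, which is the main lever behind the low power and small silicon footprint of such accelerators.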