A self-driving AI task running on a 0.9 GHz CPU delivers 78% of the AI computing power of an NVIDIA Titan X (or 1080 Ti) GPU, while achieving the same object detection accuracy.
Can running AI tasks without a GPU or dedicated AI chip actually be faster? For decades that seemed impossible, until 2019, when PQ Labs demonstrated a jaw-dropping AI solution that surprised every visitor at the CES tech show.
Running at 718 FPS on an Intel i7, without a GPU
3.5x faster than Tiny YOLO on a Titan V / 1080 Ti
(comparison based on the same mAP accuracy)
Artificial Intelligence vs Human Intelligence
For example, humans can move, navigate, and detect objects without extra effort or deep thinking. Such intelligence is "always on," even in idle mode, and it is universally available to the world's population of 7.7 billion people. By contrast, there are about 2 billion computers and tens of billions of mobile and IoT devices in the world. Most of them are dumb; very few can run even simple deep learning tasks such as object detection, which typically require a high-end graphics card to run well.
The story of a parallel tech universe
For historical reasons, the entire AI industry and academic research are built on the graphics-card programming model (specifically NVIDIA GPUs; AMD GPUs lack comparable software and algorithm support). Other AI chips follow suit, but they all use similar AI models and optimization strategies.
"The AI technology tree may have unfolded in the wrong way, or at least there is an alternative tech path to be explored in order to achieve better AI results. And this is where PQ Labs comes into play," says Frank Lu, PQ Labs CEO and inventor of MagicNet.
MagicNet is the Answer
The new technology is called MagicNet. It is unbelievably fast, running object detection at 718 FPS on an Intel i7 processor with no loss of accuracy compared to Tiny YOLO, which runs at 3.6 FPS on the same CPU, a 199x acceleration. It is even 3.5x faster than GPU-accelerated Tiny YOLO (207 FPS on a Titan X or 1080 Ti). MagicNet is designed and developed from the ground up, starting from the fundamentals of deep learning mathematics. All math operations are redesigned and re-implemented in a library called "Magic-Compute," which replaces NVIDIA CUDA, cuDNN, and Intel MKL and runs significantly faster. For example, "convolution" operations (the building block of all deep learning models) are replaced by Magic-Convolution for a significant performance boost.
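PQ Labs has not disclosed how Magic-Convolution works internally, so no implementation of it can be shown here. For readers unfamiliar with the operation being replaced, the sketch below is the textbook direct 2D convolution that libraries like cuDNN and MKL accelerate; any drop-in replacement must reproduce its output.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive direct 2D convolution (valid padding, stride 1).

    This is the standard operation that GPU/CPU math libraries
    accelerate; a faster replacement must match its output.
    """
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply-accumulate over one kernel-sized window.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A horizontal-difference kernel applied to a 5x5 ramp image:
# every output is -1, since the ramp increases by 1 per column.
img = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[0., 0., 0.], [1., -1., 0.], [0., 0., 0.]])
result = conv2d(img, k)
```

The two nested loops make the cost of the direct method obvious, which is why so much engineering effort (in any acceleration library) goes into this single operation.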
MagicNet's speedup also comes from its unique AI backbone model. The backbone runs faster than efficient models such as MobileNet V2 and ShuffleNet V2, with higher accuracy. By replacing the backbones of YOLO or SSD with MagicNet, the resulting networks, Magic-YOLO and Magic-SSD, can run 199x faster than the original versions.
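The backbone-swap idea above rests on a common modular pattern: a detector is a feature-extracting backbone plus a task head, and the backbone can be exchanged as long as the interface is preserved. The toy sketch below illustrates only that pattern; the classes are stand-ins invented for this example, not PQ Labs code.

```python
import numpy as np

class SlowBackbone:
    """Stand-in for an expensive feature extractor (e.g. a deep CNN)."""
    def features(self, x):
        return np.tanh(x)

class FastBackbone:
    """Stand-in for a cheaper extractor with the same output interface."""
    def features(self, x):
        return np.clip(x, -1.0, 1.0)

class Detector:
    """A detector = interchangeable backbone + fixed task head."""
    def __init__(self, backbone):
        self.backbone = backbone

    def detect(self, x):
        f = self.backbone.features(x)
        return (f > 0.5).astype(int)  # trivial "detection head"

# Swapping the backbone leaves the head, and here the detections, unchanged.
x = np.linspace(-2, 2, 9)
baseline = Detector(SlowBackbone()).detect(x)
swapped = Detector(FastBackbone()).detect(x)
```

In real detectors the swap also requires matching feature-map shapes and usually some fine-tuning, details the announcement does not cover.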
MagicNet is born to be different
The current AI industry and academic research still follow the same training procedure defined in the very early days of deep learning: heavy reliance on tuning and training an ImageNet classification model, followed by transfer learning on other tasks such as object detection. This is how deep learning 1.0 worked, and it worked well in the past. MagicNet, however, does things slightly differently. We believe that tuning ImageNet classifiers may not be the best way to get a good final result; sometimes a better classification result can even hurt accuracy on other tasks. MagicNet uses a different training procedure to address this, which further increases its accuracy and reduces deep learning computation cost.
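MagicNet's actual training procedure is not disclosed, so the numpy toy below only illustrates the general point the paragraph makes: features optimized for a proxy objective (here, an unrelated regression target standing in for ImageNet classification) can bottleneck a downstream task, while fitting directly on the target objective does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target task: predict y from x, where y is exactly linear in X.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w

# "Deep learning 1.0" recipe: learn features on a proxy task first,
# freeze them, then fit only a small head on the target task.
proxy_y = X @ rng.normal(size=8)                          # unrelated proxy objective
B = np.linalg.lstsq(X, proxy_y[:, None], rcond=None)[0]   # 8x1 frozen "backbone"
feats = X @ B                                             # 1-D frozen features
head = np.linalg.lstsq(feats, y, rcond=None)[0]
transfer_err = np.mean((feats @ head - y) ** 2)

# Direct recipe: fit the whole model end to end on the target task.
w = np.linalg.lstsq(X, y, rcond=None)[0]
direct_err = np.mean((X @ w - y) ** 2)

# The frozen proxy features discard directions the target task needs,
# so end-to-end training on the real objective gives lower error here.
```

This is only an analogy: it shows why proxy-optimized features can cap downstream accuracy, not what MagicNet's alternative procedure actually is.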
Compatible with existing models and frameworks
MagicNet is innovative and unbelievably fast, but what about existing models trained in TensorFlow, Caffe, PyTorch, etc.? Time and money have already been spent on these models, and people simply want to run them faster. MagicNet is backward compatible with existing models and makes them run faster without re-training or additional coding.
Visit our CES 2019 booth @ South Hall 25458 to see Live Demos of the unbelievably fast AI technology.
Interested in Demo, Evaluation Kit, Tech Specs, News Updates, etc?