MADISON, Wis. — How does a robo-car perceive the world around it — in real time — safely and accurately? If you think this is a solved problem, think again.
In an exclusive interview with EE Times, DeepScale (Mountain View, Calif.) has disclosed its unique approach to a “perception system” the startup is building for ADAS and highly automated vehicles.
DeepScale is developing perception technology that ingests raw sensor data, not object data, and accelerates its sensor fusion on an embedded processor.
“A good chunk of research on deep neural networks (DNNs) today is based on tweaks or modifications of off-the-shelf DNNs,” observed Forrest Iandola, DeepScale’s CEO. In contrast, over at DeepScale, “We’re starting from scratch in developing our own DNN by using raw data — coming from not just image sensors but also radars and lidars,” he explained.
Early fusion vs. late fusion
Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), called DeepScale’s approach “very contemporary,” representing “the latest thinking in applying AI to automated driving.”
How does the DeepScale approach — using raw data to train the neural network — differ from other sensor-fusion methodologies?
First off, “Today, most sensor fusion applications fuse the object data, not the raw data,” Magney stressed. Further, in most cases, smart sensors produce object data on the sensor itself, while other sensors send raw data to the main processor, where objects are extracted before they are ingested into the fusion engine, he explained. Magney called such an approach “late fusion.”
Late Fusion: Traditional approach to sensor fusion (Source: DeepScale)
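To make the late-fusion idea concrete, here is a minimal Python sketch of object-level fusion under simple assumptions. The Obj class, the fuse_objects helper, and the match_radius threshold are hypothetical illustrations, not any vendor's actual pipeline.

```python
# Minimal sketch of "late fusion": each sensor stack has already produced its
# own object list, and only those lists reach the fusion engine.
# All names and thresholds below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Obj:
    x: float          # position in the vehicle frame, meters
    y: float
    label: str        # e.g. "car", "pedestrian"
    confidence: float

def fuse_objects(camera_objs, lidar_objs, match_radius=1.0):
    """Associate objects from two sensors by proximity and merge them.
    The raw pixels and lidar points that produced these objects are gone,
    so disagreements between sensors are hard to resolve at this stage."""
    fused, used = [], set()
    for c in camera_objs:
        best, best_d = None, match_radius
        for i, l in enumerate(lidar_objs):
            d = ((c.x - l.x) ** 2 + (c.y - l.y) ** 2) ** 0.5
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            l = lidar_objs[best]
            used.add(best)
            # Simple merge: average positions, keep the higher-confidence label.
            fused.append(Obj((c.x + l.x) / 2, (c.y + l.y) / 2,
                             c.label if c.confidence >= l.confidence else l.label,
                             max(c.confidence, l.confidence)))
        else:
            fused.append(c)   # camera-only detection
    fused.extend(l for i, l in enumerate(lidar_objs) if i not in used)
    return fused
```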
Clearly, Iandola sees an inherent issue in late fusion.
Late fusion poses problems when object data must be combined with raw data, he said, especially when the fusion engine has to handle multiple types of sensory data. “Think about a 3D point cloud created by a lidar,” he said. “While you’re reconstructing the 3D point cloud in your sensor, you are also receiving data coming from cameras at a much different frame rate.”
In the process of creating objects, raw data that might have been relevant to the other sensors can be lost. Think about the moment when the sun shines directly into the vehicle camera’s lens, or when snow covers the radar, Iandola suggested. Or when the sensors’ data simply don’t agree with one another. In such cases, fusing object lists becomes challenging.
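The frame-rate mismatch Iandola mentions can be illustrated with a small buffering sketch: a camera running at roughly 30 Hz and a lidar at roughly 10 Hz never deliver samples at the same instants, so raw frames have to be paired by timestamp before any joint processing. The rates, the RawSensorBuffer class, and the nearest helper below are assumptions for illustration only.

```python
# Pair raw frames from sensors running at different rates by timestamp.
from bisect import bisect_left

class RawSensorBuffer:
    """Keeps (timestamp, raw_frame) pairs in arrival order."""
    def __init__(self):
        self.stamps = []
        self.frames = []

    def push(self, stamp, frame):
        self.stamps.append(stamp)
        self.frames.append(frame)

    def nearest(self, stamp):
        """Return the buffered frame whose timestamp is closest to `stamp`."""
        i = bisect_left(self.stamps, stamp)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.stamps)]
        j = min(candidates, key=lambda k: abs(self.stamps[k] - stamp))
        return self.stamps[j], self.frames[j]

# One second of lidar sweeps at 10 Hz...
lidar = RawSensorBuffer()
for k in range(10):
    lidar.push(k * 0.1, f"sweep_{k}")

# ...paired with one second of camera frames at 30 Hz.
pairs = []
for k in range(30):
    cam_stamp = k / 30.0
    pairs.append((cam_stamp, lidar.nearest(cam_stamp)))
```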
DeepScale's approach: Deep Neural Network Sensor Fusion
“That’s why we believe we must do raw data fusion early, not late, and do it closer to the sensors,” he said. “We think early fusion can help mitigate some of those problems.”
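For readers who want a feel for what early fusion can look like in a DNN, here is a hedged PyTorch sketch: raw camera pixels and a rasterized lidar grid are encoded by small per-sensor stems, the feature maps are concatenated, and one shared trunk produces the output. This illustrates the general idea only; it is not DeepScale's architecture, and the layer sizes and input shapes are arbitrary assumptions.

```python
# Illustrative early-fusion network: fuse features derived from raw inputs,
# rather than fusing per-sensor object lists. Not DeepScale's actual design.
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Per-sensor stems operate on raw(ish) inputs, not object lists.
        self.camera_stem = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU())
        self.lidar_stem = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU())
        # Shared trunk sees both modalities' features jointly.
        self.trunk = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes))

    def forward(self, camera_img, lidar_bev):
        cam = self.camera_stem(camera_img)      # (N, 16, H/2, W/2)
        lid = self.lidar_stem(lidar_bev)        # (N, 16, H/2, W/2)
        fused = torch.cat([cam, lid], dim=1)    # fuse features, not objects
        return self.trunk(fused)

# Example: a 128x128 camera crop and a 128x128 lidar bird's-eye-view grid.
net = EarlyFusionNet()
out = net(torch.randn(1, 3, 128, 128), torch.randn(1, 1, 128, 128))
```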