
π-BA Bundle Adjustment Acceleration on Embedded FPGAs with Co-Observation Optimization

1. Introduction


This is part of the PerceptIn technology blog on how to build your own autonomous vehicles and robots. The other technical articles on these topics can be found at https://www.perceptin.io/blog.


In this article we dive into a rather advanced topic, Bundle Adjustment (BA): the problem of refining a visual reconstruction to produce jointly optimal estimates of the 3D structure and the viewing parameters, including camera poses and calibrations.


Optimal means that the parameter estimates are found by minimizing a cost function that quantifies the model-fitting error; jointly means that the solution is simultaneously optimal with respect to both the structure and the camera parameters.


Given a set of measured image feature locations and correspondences, the goal of BA is to find the 3D point positions and camera parameters that minimize the reprojection error. This optimization problem is usually formulated as a non-linear least-squares problem, where the error is the squared L2 norm of the difference between the observed feature location and the projection of the corresponding 3D point onto the image plane of the camera. In essence, BA is a large, sparse geometric parameter estimation problem, the parameters being the combined 3D feature coordinates, camera poses, and calibrations.
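
To make this formulation concrete, here is a minimal, illustrative Python sketch (not PerceptIn's implementation): each camera is parameterized by an axis-angle rotation and a translation, the intrinsics K are assumed known and fixed, and SciPy's least_squares jointly refines camera poses and 3D points by minimizing the stacked reprojection residuals. All function names and the toy data are hypothetical.

```python
# Minimal BA sketch (illustrative only, not PerceptIn's implementation).
# Cameras: axis-angle rotation + translation (6 params each); landmarks: 3D points.
# Intrinsics K are assumed known and fixed; all toy data below is hypothetical.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, cam_params, K):
    """Project 3D world points into a camera given its 6-DoF pose and intrinsics K."""
    rot = Rotation.from_rotvec(cam_params[:3])
    p_cam = rot.apply(points_3d) + cam_params[3:6]   # world -> camera frame
    p_img = (K @ p_cam.T).T                          # pinhole projection
    return p_img[:, :2] / p_img[:, 2:3]              # perspective divide -> pixels


def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs, K):
    """Stacked 2D reprojection errors: projection minus observed feature location."""
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    errs = [project(pts[p:p + 1], cams[c], K)[0] - z
            for c, p, z in zip(cam_idx, pt_idx, obs)]
    return np.concatenate(errs)


K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
true_pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 6.0],
                     [0.0, 1.0, 5.5], [-1.0, -1.0, 6.5]])
true_cams = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],    # camera 0 at the origin
                      [0.0, 0.1, 0.0, -0.5, 0.0, 0.0]])  # camera 1 rotated and shifted
cam_idx = np.array([0, 0, 0, 0, 1, 1, 1, 1])              # which camera made each observation
pt_idx = np.array([0, 1, 2, 3, 0, 1, 2, 3])               # which landmark it observed
obs = np.vstack([project(true_pts[p:p + 1], true_cams[c], K)[0]
                 for c, p in zip(cam_idx, pt_idx)])

# Perturb the ground truth and jointly refine camera poses and structure.
x0 = np.hstack([true_cams.ravel(), true_pts.ravel()]) + 0.05 * np.random.randn(24)
sol = least_squares(residuals, x0, args=(2, 4, cam_idx, pt_idx, obs, K))
print("final reprojection cost:", sol.cost)
```

In real BA problems the Jacobian of these residuals is extremely sparse, since each residual depends on only one camera and one landmark; production solvers exploit that sparsity (for example via the Schur complement) rather than treating the parameters densely as this toy example does.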


Figure 1: bundle adjustment

2. Applications of BA


BA is widely used in many modern applications. First, BA is the core component of 3D scene reconstruction applications: Agarwal et al. present a system that can match and reconstruct 3D scenes from extremely large collections of photographs, such as those found by searching for a given city on Internet photo-sharing sites [1].


Figure 2: visual reconstruction

BA is crucial in robotic localization applications: Mur-Artal et al. developed a feature-based simultaneous localization and mapping (SLAM) system, ORB-SLAM. The system consists of four modules, including tracking, mapping, relocalization, and loop closing. BA is used in the mapping stage for optimizing the visual feature map such that the robot can better localize itself [2].


Figure 3: SLAM

BA is used heavily in autonomous driving applications, especially in the production of high-definition maps [3].


Figure 4: autonomous vehicles

BA is used in space exploration missions as well: across multiple Mars exploration missions, NASA has utilized BA to improve the localization accuracy of its Mars rovers [4].


Figure 5: Mars rover

BA is also used in commercial products, such as Google Street View, to optimize scene reconstruction [5].


Figure 6: Street View map

3. Problems of BA Computations

In both online real-time localization applications and offline visual reconstruction applications, BA remains the primary performance and energy-consumption bottleneck. For real-time localization systems (including mobile robots, autonomous vehicles, and space rovers) that perform local BA involving tens to hundreds of images, BA latency can be so high that the system fails to provide optimal localization updates in real time.


For offline visual reconstruction systems (including 3D scene reconstruction, street view maps, and high-definition maps) that perform global BA involving thousands to millions of images, the power consumption of BA can be extremely costly. Previous approaches to optimizing BA performance rely heavily on parallel processing or distributed computing, which trade higher power consumption for higher performance. To make both online and offline applications effective and efficient, we need a BA solution that optimizes performance and energy consumption simultaneously, and thus we explore hardware acceleration techniques.


4. PerceptIn’s π-BA


Aiming to achieve optimal performance and energy efficiency for BA, we present π-BA, the first hardware-software co-designed BA engine on an embedded FPGA-SoC [6]. We will present this work at the 2019 IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM).


The contribution of this work is threefold. First, this paper presents the first exploration of implementing a BA hardware accelerator, and the proposed π-BA implementation has proven effective. Second, based on our key observation that not all landmarks appear in all images of a BA problem, we developed a novel Co-Observation Optimization technique for designing BA hardware accelerators. Third, in addition to achieving performance and energy efficiency, we also demonstrate that the proposed π-BA optimizes computing and memory resource usage.
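
To illustrate the intuition behind co-observation (not the accelerator's actual implementation, which is described in the paper [6]), the following hypothetical sketch builds a camera-to-camera co-observation matrix from a list of observations: two cameras couple in the reduced camera system only if they share at least one observed landmark, so the fewer the co-observations, the sparser the computation an accelerator has to schedule.

```python
# Co-observation sketch (illustrative only; hypothetical data, not the paper's design).
# Two cameras contribute a non-zero block to the reduced camera system (the Schur
# complement of the BA normal equations) only if they observe a common landmark.
import numpy as np


def covisibility(cam_idx, pt_idx, n_cams, n_pts):
    """Boolean camera-camera co-observation matrix built from an observation list."""
    seen = np.zeros((n_cams, n_pts), dtype=int)
    seen[cam_idx, pt_idx] = 1                 # camera i sees landmark j
    return (seen @ seen.T) > 0                # (i, j) true iff a shared landmark exists


# Hypothetical observation lists: 4 cameras, 5 landmarks, each camera sees a subset.
cam_idx = np.array([0, 0, 0, 1, 1, 2, 2, 3])
pt_idx = np.array([0, 1, 2, 1, 3, 3, 4, 4])
co = covisibility(cam_idx, pt_idx, n_cams=4, n_pts=5)
print(co.astype(int))
# Only the camera pairs marked here need their Hessian blocks computed and stored;
# skipping the rest is the kind of sparsity a BA accelerator can exploit.
```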


Experimental results confirm that π-BA outperforms existing BA solutions in both performance and energy consumption. With π-BA, we can enable more robotic localization and visual reconstruction applications by allowing larger-scale online local BA on energy-constrained embedded devices and more efficient offline global BA that uses fewer computing resources and less power.


If you are interested in this work, please drop us an email, or talk to us directly at FCCM 2019 (https://www.fccm.org/). We are really excited about this work, and hopefully together we can enable highly efficient indoor and outdoor map reconstruction for autonomous vehicles and robots. Robots and vehicles should never get lost again!


References

1. S. Agarwal, N. Snavely, I. Simon, S. M. Seitz, and R. Szeliski, “Building Rome in a day,” in 2009 IEEE 12th International Conference on Computer Vision. IEEE, 2009, pp. 72–79.

2. R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, “ORB-SLAM: a versatile and accurate monocular SLAM system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.

3. S. Liu, L. Li, J. Tang, S. Wu, and J.-L. Gaudiot, “Creating Autonomous Vehicle Systems,” Synthesis Lectures on Computer Science, vol. 6, no. 1, pp. i–186, 2017.

4. M. Maimone, Y. Cheng, and L. Matthies, “Two years of visual odometry on the Mars exploration rovers,” Journal of Field Robotics, vol. 24, no. 3, pp. 169–186, 2007.

5. B. Klingner, D. Martin, and J. Roseborough, “Street View motion-from-structure-from-motion,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 953–960.

6. S. Qin, Q. Liu, B. Yu, and S. Liu, “π-BA: Bundle Adjustment Acceleration on Embedded FPGAs with Co-observation Optimization,” in FCCM, 2019.

