Embry-Riddle Aeronautical University - Daytona Beach Campus

Master of Science in Unmanned and Autonomous Systems Engineering

May 2014 - August 2016

  • Creating and enhancing computer vision algorithms to enable vision-based navigation in GPS-denied environments (e.g., Mars), allowing a robot or rover to navigate such environments with improved accuracy.
  • Using the Corobot robot as a testing platform, equipped with LiDAR, INS, IR, and camera sensors.
  • Integrating the LiDAR, camera, INS, and IR sensors on the Corobot platform.
  • Driving the robot around campus to test the integration and to collect data for simulation and post-processing.
  • The operating system is Ubuntu 12.04; ROS is used to control the motors and to record sensor data for post-processing.
  • Developed a Microstrain INS ROS node.
  • Developed a ROS Corobot bag reader.
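A minimal sketch of the kind of bag post-processing the list above describes. The topic names and file path are illustrative assumptions, not the actual Corobot topics, and the `rosbag` import is deferred so the counting helper runs without a ROS install:

```python
from collections import Counter

def summarize_messages(messages):
    """Count messages per topic from an iterable of (topic, msg, t) tuples,
    the shape yielded by rosbag.Bag.read_messages()."""
    return dict(Counter(topic for topic, _msg, _t in messages))

def read_bag(path, topics):
    # Deferred import: requires a ROS 1 install with the rosbag Python API
    import rosbag
    bag = rosbag.Bag(path)
    try:
        yield from bag.read_messages(topics=topics)
    finally:
        bag.close()

# Example (hypothetical bag file and topic names):
# stats = summarize_messages(read_bag('corobot.bag', ['/scan', '/imu/data']))
```

The split between a pure-Python summary step and the ROS-specific reader keeps the post-processing logic testable offline.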


In this thesis, a vision-aided navigation algorithm was developed that fuses data from an inertial measurement unit (accelerometers and rate gyros) with information extracted from a monocular vision sensor. The algorithm, which takes the form of an extended Kalman filter, uses the IMU data for the state-propagation step and vision-based information for the measurement-update step. This vision-based information corresponds to the frame-to-frame camera rotation and translation, computed from tracked feature points using the classical eight-point algorithm.

The vision-aided navigation filter was run on experimental data obtained from a ground vehicle and a quadcopter UAV. The navigation results were then compared with those of an IMU-only solution (i.e., using only the IMU data to estimate the vehicle motion) and a vision-only solution that used the eight-point algorithm alone. The experimental results show that, although the vision-aided filter partially solved some of the problems discussed (the scale ambiguity from vision and the drift error from the IMU), the randomness of feature selection in estimating the fundamental matrix meant the output was not guaranteed to be accurate in all cases considered, even after introducing an extra safeguard of distance checks to account for noisy movement. Out of 20 test runs, one run yielded inaccurate vision-based estimates of camera pose; the IMU fusion corrected the pose, but the position estimate was affected, causing the overall trajectory to drift away from the correct path. One possible remedy is to tune the Kalman filter parameters, but that requires considerable trial and error. Another observation from the results is that smoother movement yields better estimates, as can be seen clearly in the KITTI dataset results.
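The frame-to-frame geometry step described above can be sketched as follows. This is a generic normalized eight-point implementation in NumPy, not the thesis code (which was in Matlab); the Hartley point normalization is the standard choice, not necessarily the one used in the thesis:

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: move the centroid to the origin and scale
    so the mean distance from the origin is sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(pts1, pts2):
    """Estimate the fundamental matrix F satisfying x2^T F x1 = 0 from
    N >= 8 correspondences (each pts array is N x 2 pixel coordinates)."""
    x1, T1 = normalize_points(pts1)
    x2, T2 = normalize_points(pts2)
    # One row of the homogeneous system A f = 0 per correspondence
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization and fix the overall scale
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

Given the camera intrinsics K, the rotation and translation that feed the measurement update can then be recovered from the essential matrix E = K^T F K via its SVD, with translation known only up to scale (the scale ambiguity discussed above).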
Several feature detection algorithms were implemented for use in the vision-aided navigation filter: SURF, FAST, and the Harris corner detector. Overall, the best pose estimation was achieved with SURF. While the algorithm is not ready for real-time use, it offers a practical approach that can be tuned for semi-real-time operation given the increasing processing power of consumer-grade mobile devices. The algorithm was implemented in Matlab but can readily be ported to an embedded language such as Java or C++ and run on a mobile device similar to the one used in the tests. As a final note, this work addresses a monocular camera, in contrast to much of the prior work in this field, which used stereo vision.
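Of the detectors listed, the Harris corner detector is simple enough to sketch directly. This is a generic NumPy version, not the thesis implementation; the window radius and k = 0.04 are common defaults assumed here, not values from the thesis:

```python
import numpy as np

def window_sum(a, r):
    """Sum of each (2r+1) x (2r+1) neighborhood, via an integral image."""
    n = 2 * r + 1
    p = np.pad(a, r)
    S = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    S[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    return S[n:, n:] - S[:-n, n:] - S[n:, :-n] + S[:-n, :-n]

def harris_response(img, k=0.04, r=2):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor accumulated over a (2r+1) x (2r+1) window."""
    Iy, Ix = np.gradient(img.astype(float))  # central-difference gradients
    Sxx = window_sum(Ix * Ix, r)
    Syy = window_sum(Iy * Iy, r)
    Sxy = window_sum(Ix * Iy, r)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

The response is strongly positive at corners, negative along edges, and near zero in flat regions; in practice one keeps local maxima above a threshold as the tracked feature points.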
