Localisation and Mapping Projects
Localisation and Mapping at the ESL
One of the requirements for truly autonomous navigation is a robot’s ability to determine where it is in relation to its surroundings. For this to be achieved, the robot needs to be able to perform simultaneous localisation and mapping, more commonly known as SLAM. Solving the SLAM problem is critical for any robotic or unmanned vehicle applications that require navigating in an unknown environment. This includes applications in search and rescue, exploration tasks, and automated inspections.
A considerable amount of work has been done within the ESL on the SLAM problem. Current projects within the ESL are listed below.
Sensor integration to enable the detection/classification of targets with smart sensors
This project involves combining sensors operating in multiple frequency bands to assist collision avoidance on a moving platform. The outputs of the research are strategies and techniques to aid in the sensor input design and implementation of collision avoidance systems. Active radar will be employed along with optical sensors to survey the environment in the vicinity of the sensor platform. Sensor information will be fused to identify clear paths and obstacles, and to track moving targets that might pose a collision threat.
A low-cost 24 GHz FMCW radar will be acquired for the research. This sensor provides a stable RF front-end and antenna configuration, and a high-speed digital back-end card that allows flexible processing of images as required.
The strategy is to overlay the radar information with camera images and/or laser infra-red information to build an augmented target/environment image. The vehicle's own movement will be used to distinguish stationary objects from moving targets.
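The idea of using own-vehicle movement to separate stationary and moving targets can be sketched as follows. This is a minimal illustration, not the project's actual processing chain: it assumes the radar reports (range, azimuth, radial velocity) detections and that the platform moves straight ahead at a known speed, so a stationary target's Doppler return should match the radial component of the ego velocity at that azimuth.

```python
import math

def classify_detections(detections, v_ego, tol=0.5):
    """Label radar detections as stationary or moving by comparing the
    measured radial (Doppler) velocity against the radial velocity that
    the platform's own forward motion would induce on a static target
    at the same azimuth. `tol` (m/s) absorbs measurement noise."""
    labelled = []
    for rng, azimuth, v_radial in detections:
        # A stationary target appears to close at v_ego * cos(azimuth),
        # i.e. a negative radial velocity for a forward-moving platform.
        expected = -v_ego * math.cos(azimuth)
        moving = abs(v_radial - expected) > tol
        labelled.append((rng, azimuth, "moving" if moving else "stationary"))
    return labelled

# Platform moving forward at 10 m/s; three hypothetical detections:
dets = [
    (25.0, 0.0, -10.0),         # dead ahead, closing at exactly 10 m/s
    (40.0, math.pi / 3, -5.0),  # 60 deg off-axis, consistent with ego motion
    (30.0, 0.0, -18.0),         # closing much faster than ego motion alone
]
print(classify_detections(dets, v_ego=10.0))
```

In a fused system the surviving "moving" detections would then be handed to the camera/laser pipeline for confirmation and tracking.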
Autonomous mapping of indoor environments for unmanned vehicle applications
This project focuses on implementing a SLAM solution for GPS-denied situations such as indoor environments. Many applications require mobile robots to autonomously explore unknown indoor environments, which requires not only accurate localisation and mapping, but also mapping in real time. The project will investigate the use of imaging sensors as well as other on-board range sensors to develop an accurate and efficient autonomous mapping system. Measurements from a Kinect sensor will be used in conjunction with a 360-degree 2D laser scanner in the mapping process. Pose estimation will be performed using colour images and depth maps obtained from the Kinect, with accurate mapping taking place using the laser scanner data. The project will consist of a front end and a back end. The front end will be responsible for generating a local map, which can be used for path planning, while the back end will incorporate the local map into a globally consistent map using loop closure techniques.
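The front-end/back-end split described above can be illustrated with a deliberately simplified 2D sketch. This is not the project's actual implementation: the front end below just chains odometry increments into poses, and the "back end" spreads the drift revealed by a single loop closure linearly along the trajectory, whereas a real back end would solve a nonlinear least-squares pose-graph problem.

```python
import numpy as np

def integrate_odometry(start, increments):
    """Front end (sketch): chain body-frame (dx, dy, dtheta) odometry
    increments into a list of global (x, y, theta) poses."""
    poses = [np.array(start, dtype=float)]
    for dx, dy, dth in increments:
        x, y, th = poses[-1]
        # Rotate the body-frame step into the global frame.
        gx = x + dx * np.cos(th) - dy * np.sin(th)
        gy = y + dx * np.sin(th) + dy * np.cos(th)
        poses.append(np.array([gx, gy, th + dth]))
    return poses

def distribute_loop_closure(poses, closure_pose):
    """Back end (heavily simplified): a loop closure asserts what the final
    pose should be; spread the accumulated error linearly along the path."""
    error = np.asarray(closure_pose, dtype=float) - poses[-1]
    n = len(poses) - 1
    return [p + error * (i / n) for i, p in enumerate(poses)]

# Drive a 1 m square with a small heading drift on each leg; the loop
# closure says we ended back at the start (having turned a full 2*pi).
legs = [(1.0, 0.0, np.pi / 2 + 0.02)] * 4
raw = integrate_odometry((0.0, 0.0, 0.0), legs)
corrected = distribute_loop_closure(raw, closure_pose=(0.0, 0.0, 2 * np.pi))
```

The key property shown is that the first pose is left untouched while later poses absorb progressively more of the correction, which is roughly how drift is redistributed after a loop closure.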
Landmark-based mapping of the environment of an autonomous vehicle
Current techniques construct maps in a single inertial frame, i.e. with respect to the pose of the robot, and as a result, when the pose of the robot contains a large amount of uncertainty, these techniques struggle to map accurately and densely in 3D. To solve this problem, this project investigates a method of decoupling the uncertainty of the robot's pose from the map, by building a dense map with respect to persistent landmarks rather than the robot. The advantage of this method lies in the fact that the landmarks are stationary, and as such the iterative estimation of their corresponding poses will, over time, contain less uncertainty than that of dynamic objects. The project comprises two parts. Firstly, a dense mapping technique must be employed for 3D mapping, which allows for the iterative transformation of the map as the positions of the landmarks become more certain. Secondly, an algorithm must be implemented to build the dense map with respect to persistent landmarks.
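The core mechanism of landmark-relative mapping can be shown in a minimal 2D sketch (the project itself targets dense 3D maps). The point illustrated is that map points stored in a landmark's frame never change; when the landmark's pose estimate is refined, only the anchoring transform changes, and the whole sub-map moves with it.

```python
import numpy as np

def rot(theta):
    """2D rotation matrix for angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def to_landmark_frame(points_global, lm_pose):
    """Express map points relative to a landmark at pose (x, y, theta)."""
    x, y, th = lm_pose
    # Row-wise inverse transform: R^T @ (p - t) for each point p.
    return (points_global - np.array([x, y])) @ rot(th)

def to_global_frame(points_local, lm_pose):
    """Re-anchor landmark-relative points into the global frame. Calling
    this with an updated landmark pose moves the whole sub-map at once."""
    x, y, th = lm_pose
    return points_local @ rot(th).T + np.array([x, y])

# Map a few points while the landmark pose estimate is still rough...
points = np.array([[2.0, 1.0], [3.0, 1.5]])
rough_pose = (1.0, 1.0, 0.1)
local = to_landmark_frame(points, rough_pose)
# ...later the landmark pose is re-estimated; the stored local points are
# untouched and the global map is recovered from the refined anchor.
refined_pose = (1.05, 0.95, 0.12)
updated_map = to_global_frame(local, refined_pose)
```

Because the stored points are never rewritten, the pose uncertainty of the robot at mapping time is decoupled from the map itself, which is the decoupling the project description refers to.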
Accurate localisation of a quadcopter for antenna array measurements
In order to do near-field antenna propagation measurements on antennas in the Square Kilometre Array, it is necessary to locate the measuring instrument very accurately. This project focuses on accurate localisation of a quadcopter, which will carry the measurement equipment, through computer vision and SLAM techniques.
Landmark-based mapping with stereo vision
For accurate localisation and mapping (SLAM) on a mobile robot, landmarks should be uniquely identifiable, persistent and resistant to occlusion. Current landmark-based techniques in visual SLAM use image feature extractors to find points of interest in an image and then use these directly as landmarks. This generates large numbers of landmarks per image, which are prone to correspondence mismatches and occlusion, and are viewpoint dependent. These problems are usually addressed by simply removing unseen landmarks from the map and using outlier rejection strategies. In contrast, this project will investigate the viability of more complex landmark descriptions to improve the quality of the landmarks.
Algorithms that can deal with ambiguous measurements, such as Multiple Hypothesis Tracking and the Probability Hypothesis Density filter, require a low number of landmarks to remain tractable. Therefore, a reduction in the number of landmarks is critical when dealing with ambiguous environments. In addition, in-house research on landmark-relative mapping requires persistent landmarks to define the co-ordinate frames of local sub-maps. The aim of this project is therefore to develop landmark definitions that generate few, high-quality, persistent landmarks.
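One illustrative way to reduce many raw feature points to a handful of landmark candidates is spatial aggregation. The sketch below is a hypothetical example only, not the project's method: it greedily merges nearby 2D feature positions into cluster centroids, turning dense, redundant detections into a small set of candidate landmarks of the kind the ambiguity-handling filters above can afford to track.

```python
import numpy as np

def cluster_features(features, radius):
    """Greedily merge nearby feature points into a smaller set of landmark
    candidates: each feature joins the first cluster whose centroid lies
    within `radius`, otherwise it seeds a new cluster. Returns centroids."""
    clusters = []  # each entry is a list of member points
    for f in features:
        f = np.asarray(f, dtype=float)
        for members in clusters:
            if np.linalg.norm(np.mean(members, axis=0) - f) <= radius:
                members.append(f)
                break
        else:
            clusters.append([f])
    return np.array([np.mean(m, axis=0) for m in clusters])

# Eight raw detections around two physical structures collapse to two
# landmark candidates:
raw = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
print(cluster_features(raw, radius=1.0))
```

A real system would of course cluster in descriptor space as well as position, but the example shows the intended effect: far fewer, more persistent landmarks per scene.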