
Visual odometry for robot navigation using artificial landmarks

Motivation

The experimental validation of different control strategies, as well as of computer vision algorithms aimed at solving specific problems of mobile robot navigation, requires in general a reliable measure of the robot pose (location and orientation) with respect to some predefined reference frame. Robustness is required here in order to deal with the usual problems of variable illumination conditions and/or significant viewpoint changes.

General (brief) description

To solve the above-mentioned problems we propose to integrate multiple measures of the robot pose relative to a set of artificial landmarks distributed over the environment in which the robot moves. Such landmarks, also called "fiducial markers" in the literature, correspond to simple black-and-white planar patterns that are easily detectable in real time by standard image processing methods. Once they are detected, knowledge of the camera parameters (obtained off-line) and of the pattern's physical dimensions makes it possible to recover the pose of the robot relative to each landmark. These individual measures, together with their uncertainty estimates, will be used to compute the instantaneous pose of the robot in a robust way, e.g., using a non-linear Kalman filter.
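As an illustration of the fusion step, the static special case of that filter (combining simultaneous, independent per-landmark pose measurements by inverse-covariance weighting) can be sketched as follows; the function name and the (x, y, theta) pose parametrization are assumptions for this example, not part of the project code:

```python
import numpy as np

def fuse_pose_measurements(poses, covariances):
    """Combine independent measurements of the same pose (x, y, theta)
    by inverse-covariance weighting -- the static special case of the
    Kalman-filter update mentioned above.  Angles are assumed to be
    unwrapped (all close to each other), since a plain weighted mean
    is wrong across the +/-pi boundary."""
    info = np.zeros((3, 3))      # accumulated information matrix
    info_vec = np.zeros(3)       # accumulated information vector
    for z, P in zip(poses, covariances):
        W = np.linalg.inv(P)     # weight = inverse covariance
        info += W
        info_vec += W @ np.asarray(z, dtype=float)
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_vec, fused_cov
```

Measurements with smaller covariance (more certain landmarks) pull the fused estimate more strongly, and the fused covariance is never larger than the best individual one.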

Papers used:

+ Robust Pose Estimation from a Planar Target: [[http://www.emt.tugraz.at/publications/EMT_TR/TR-EMT-2005-01.pdf | pdf]]

(Gerald Schweighofer, Axel Pinz)

This paper presents an algorithm to estimate the pose of a camera and, further, solves the problem of pose ambiguity. This means that all the possible poses (usually there are two) are calculated, and the one with the lowest value of an error function is taken as the correct pose.

The algorithm needs at least four coplanar points of a planar target, i.e., points with z = 0. In our case the planar target is an artificial landmark. Furthermore, the corresponding image points of this landmark, as seen by the camera, must be provided. Since the calculation refers to the normalized image plane, with focal length f = 1, the image points have to be given with z = 1.

Both the camera and the model have a coordinate system with the origin at the center of the image plane and of the landmark, respectively. Moreover, the two coordinate systems have to coincide.

The algorithm itself transforms the problem so that the error function Eos(R, t) depends only on a rotation about the y-axis, R(β). This is achieved by changing the general transformation v = Rp + t step by step, where v, p, R, and t are the image points, the landmark points, the rotation matrix, and the translation vector, respectively.

First, an initial pose is estimated according to "Fast and Globally Convergent Pose Estimation from Video Images". The result is taken as input for the algorithm in this paper to compute the other poses, if they exist. All the calculated poses are then given back to the "Fast and Globally Convergent Pose Estimation from Video Images" algorithm to get the most exact estimate of each pose. Finally, the errors of all the estimated poses are compared, and the one with the lowest value is taken as the final pose.
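The final selection step can be sketched as follows. The candidate poses and point sets are placeholders, and using a plain reprojection error on the normalized image plane (instead of the paper's object-space error Eos) is a simplification for illustration:

```python
import numpy as np

def reprojection_error(R, t, model_pts, image_pts):
    """Sum of squared residuals between the observed points on the
    normalized image plane (given with z = 1, as described above) and
    the projections of the model points under the candidate pose."""
    err = 0.0
    for p, v in zip(model_pts, image_pts):
        q = R @ p + t                  # landmark point in camera coordinates
        err += np.sum((q / q[2] - v) ** 2)
    return err

def select_pose(candidates, model_pts, image_pts):
    """Pick the candidate pose (R, t) with the lowest error value."""
    return min(candidates,
               key=lambda Rt: reprojection_error(*Rt, model_pts, image_pts))
```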

So far the algorithm is implemented in Octave and the executed tests run without error. These tests consist of elementary, self-created scenes for which the expected result was known. An error-free run with the data provided by the robot's camera is still missing.

+ Fast and Globally Convergent Pose Estimation from Video Images [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.69.9354&rep=rep1&type=pdf | pdf]]

(Chien-Ping Lu, Gregory D. Hager, Eric Mjolsness)

The algorithm presented in this paper computes the exterior orientation (pose) from a set of 3D model points and the corresponding set of 2D points on the normalized image plane.

This algorithm starts with a similar transformation q = Rp + t, where q, p, R, and t are the image points (not on the normalized image plane), the model points, the rotation matrix, and the translation vector, respectively. The iteration minimizes an error function E(R, t); the minimizing rotation at each step is found using the Singular Value Decomposition (SVD).
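The SVD step solves, inside each iteration, an absolute-orientation subproblem: find the rotation that best aligns two point sets. A standalone sketch of that subproblem (the standard Kabsch solution, with hypothetical names, not the paper's full iteration) could look like:

```python
import numpy as np

def optimal_rotation(P, Q):
    """Rotation R minimizing sum ||R p_i - q_i||^2 over rotations, for
    point sets P and Q given as rows.  Both sets are centered first, then
    the optimum is read off the SVD of the 3x3 correlation matrix."""
    Pc = P - P.mean(axis=0)              # remove centroids
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # exclude reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T
```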

Former Papers:

+ CCD camera calibration based on natural landmarks [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.69.9354&rep=rep1&type=pdf | pdf]]

(Guoqing Zhou, Ethrog Uzi, Wenhao Feng, Baozong Yuan)

In this paper the position of the robot is estimated using a simple landmark. This landmark has its edges, here called straight lines, along the three coordinate axes, so that there are three pairs of straight lines, each pair parallel to each other. This parallelism is exploited to estimate the three Euler angles that describe the rotation about the coordinate axes. From the rotation angles we can compute the rotation matrix, which gives the orientation of the robot relative to the landmark. And knowing the rotation matrix, we can calculate the translation vector, and thus the distance between the robot and the landmark.

The problem in our case is that only two pairs of straight lines are available, because of the landmark's dimensions. We use planar rectangular landmarks, so only two angles can be computed. But since the robot moves only in a plane, this is not a big problem, because then only one angle needs to be determined. Furthermore, the rows of the rotation matrix are calculated separately, each from one pair of straight lines. Thus it is sufficient to estimate only those entries of the matrix that are needed to calculate the angle.

But to calculate the translation vector, and thus the distance to the robot, the complete rotation matrix must have been computed.
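The core geometric idea here (recovering a 3D edge direction, i.e., one column of the rotation matrix, from a pair of image lines known to be projections of parallel edges) can be sketched in normalized image coordinates; the helper names are hypothetical and this is not the paper's exact formulation:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous image line through two image points given as (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def direction_from_parallel_lines(la, lb):
    """Two image lines that are projections of parallel 3D edges meet at
    the vanishing point v = la x lb.  In normalized image coordinates,
    v is (up to scale and sign) the 3D direction of those edges in the
    camera frame -- i.e., one column of the rotation matrix when the
    edges run along a landmark axis."""
    v = np.cross(la, lb)
    return v / np.linalg.norm(v)
```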

+ Vision-based mobile robot localization with simple artificial landmarks [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.61.9557&rep=rep1&type=pdf | pdf]]

(Robert Baczyk, Andrzej Kasiński, Piotr Skrzypczyński)

In this paper the distance between the camera and the landmark, and the orientation of the camera, are calculated from the dimensions of the landmark image. The data required are the coordinates and the lengths of the left and right frame edges of the landmark image. Furthermore, the half-width of the landmark frame and the scale factor are needed.

This method has the advantage that the equations are quite simple and the values are easy to estimate from the image given by the camera.
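The kind of relation underlying this computation is the pinhole similar-triangles formula; a minimal sketch with hypothetical names and units, not the paper's exact equations:

```python
def landmark_distance(edge_length_px, edge_length_m, focal_px):
    """Pinhole similar-triangles estimate: an edge of physical length L
    (meters) seen at distance Z appears in the image with length
    l = f * L / Z (pixels, with focal length f in pixels), hence
    Z = f * L / l.  All inputs are assumed known from calibration and
    landmark detection."""
    return focal_px * edge_length_m / edge_length_px
```

For example, a 0.5 m edge imaged at 50 px by a camera with a 500 px focal length is 5 m away.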

+ A flexible new technique for camera calibration

(Zhengyou Zhang)

Here the so-called closed-form solution is used to determine both the intrinsic and the extrinsic parameters of the camera. The problem is that a unique solution for the intrinsic parameters requires at least three images. This can be circumvented by estimating the intrinsic parameters with some other algorithm, since it is not obligatory to compute them with the closed-form solution. Once the intrinsic values are known, the extrinsic parameters can be calculated easily.
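Once the intrinsic matrix K is known, the extrinsic recovery follows Zhang's decomposition of the plane-to-image homography H = λ K [r1 r2 t] (model plane at z = 0). A minimal sketch, where the final SVD re-orthogonalization is the usual practical fix for a noise-corrupted rotation:

```python
import numpy as np

def extrinsics_from_homography(K, H):
    """Recover R, t from a plane-to-image homography H = lambda*K*[r1 r2 t],
    assuming the model plane is z = 0 and K is already known."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # scale from the constraint ||r1|| = 1
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    if t[2] < 0:                          # pick the sign that puts the
        r1, r2, t = -r1, -r2, -t          # plane in front of the camera
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)           # snap to the nearest true rotation
    return U @ Vt, t                      # (noise makes R only approximate)
```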