Kok Seng CHONG
Institute of Microelectronics
11 Science Park Road
Singapore Science Park II, Singapore 117685
Lindsay KLEEMAN
Intelligent Robotics Research Centre (IRRC)
Dept of Electrical and Computer System Eng.
Monash University, Victoria 3168 Australia
Abstract
The objective of this work is to implement an autonomous mobile robot capable of navigating in an a priori unknown indoor environment using a sonar sensor. To this end, the robot requires the capability to build a map of the environment, which is a cyclic process of moving to a new position, sensing the environment, updating the map and planning subsequent motion. Map building and navigation is a complex problem because map integrity cannot be sustained by odometry alone due to errors introduced by wheel slippage and distortion. Exteroceptive sensing, such as sonar sensing as employed in this paper, is necessary, but any sensing is also subject to random errors. Hence, neither odometry nor matching sensory data to the map gives flawless estimation of the robot’s position, yet this position estimate becomes a reference for the integration of new features in the map. Consequently, with time, errors in robot position influence errors in the map and map errors influence the position estimation.
1. Introduction
This paper employs sonar sensing in the map building process for many reasons. Sonar has the property that the data is sparse and naturally selects useful landmarks, such as walls, wall mouldings and corners. This alleviates the data processing compared to dense ranging devices such as laser range finders and stereo vision systems. Sonar also offers a high degree of ranging and bearing accuracy in an array configuration as deployed in this paper. Since most robots today employ some form of sonar due to the cost and power consumption advantages, there is considerable interest in its application.
Sonar sensing has some important properties that need to be carefully understood in order to properly exploit the sensing data. Firstly, sonar transducers have a significant angular spread of energy known as the beamwidth. In many systems, the beamwidth gives rise to large angular uncertainty in measurement. Some researchers have attempted to deal with this uncertainty by employing grid based maps and repetitive measurements, as in the work of Moravec and Elfes (1985). Grid map update schemes range from Bayesian (Lim and Cho 1993) and evidential (Pagac, Nebot, and Durrant-Whyte 1996) to fuzzy (Oriolo, Vendittelli and Ulivi 1995), and rely on viewing targets from many locations. Localisation with a grid map can be complex. A grid map based localisation scheme has been developed by Gonzalez (1992), but this work is suitable for laser rangefinder systems only. Other researchers do not consider localisation necessary in their applications (Dudek, Freedman and Rekleitis 1996; Ohya, Nagashima, and Yuta 1995). Feature-based mapping schemes have become more commonplace (Leonard and Durrant-Whyte 1991; Rencken 1993) since Kuc and Siegel (1987) presented a method for discriminating planes, corners and edges using sonar data gathered at two positions. Two significant follow-ups to Kuc and Siegel (1987) are the work of Barshan and Kuc (1990), which differentiates planes and corners with multiple transducers at a single position, and the work of Bozma and Kuc (1991), which differentiates planes and corners with one transducer at two positions. Later, Kleeman and Kuc (1995) developed a sonar sensor which allows target discrimination at one position and target localisation with high precision. Hong and Kleeman (1995) have successfully demonstrated the localisation capability of a mobile robot with a sonar array in a known environment using 3D features. Data fusion methods associated with feature based mapping include the Kalman Filter (Ayache and Faugeras 1989; Moutarlier and Chatila 1989; Smith and Cheeseman 1986), maximum likelihood estimation (Lu and Milios 1995), evidential reasoning (Pagac, Nebot and Durrant-Whyte 1996), and heuristic rules (Dudek, Freedman and Rekleitis 1996).
The second important property of sonar systems is the appearance of phantom targets due to multiple specular reflections. For example, a sonar sensor will see a virtual image of a corner due to the reflection from the wall in the outward and return paths in Figure 1. A credibility count (Leonard and Durrant-Whyte 1991) has been used to identify these phantom targets; however, this approach fails when the phantom target appears consistently from different positions, as in the example of Figure 1. A physically based solution is presented in this paper. The third important property of sonar systems is that, when sensing a planar wall, the sensor can only see the part of the wall which is orthogonal to the line of sight; like phantom targets, this property results from specular reflection. Therefore, if the robot navigates along a wall, the robot sees the wall not as one entity but as a set of discrete, approximately collinear planar elements. Postulates need to be made about the relationships between various sonar features during map matching. Furthermore, to reduce the risk of wrongly associating two features, the robot must refrain from moving a long distance between successive scanning points during map building.
In the authors’ opinion, prolonged navigation can best be achieved when map feature errors are systematically generated from the sensor and odometry errors. It is convenient, and usually justifiable, to assume that errors are approximately Gaussian in their distribution, and to represent the errors with covariance matrices, since robust noise filtering tools use this form of representation. The authors employ the Kalman Filter because the basis of the Kalman Filter is the Bayesian formula and the principle of minimum mean square error (Bar-Shalom and Li 1993) that are well understood and physically acceptable. The Kalman Filter reduces the uncertainties of the parameters of interest by weighting the initial estimation of errors with the errors associated with the new information (known as observations) about the parameters. In the context of sonar map building, the parameters to be estimated are the robot’s position and the features in the map. The observations are the postulates about the relational constraints among the new features and the existing features. The Kalman Filter makes available knowledge about the uncertainties of map features that is important for path planning that avoids obstacles and localisation within the map.
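The weighting of the prior estimate against a new observation described above can be illustrated with a minimal one-dimensional sketch. All numbers here are hypothetical and the scalar form is only an illustration of the general matrix update used in the paper:

```python
# Minimal 1-D illustration of the Kalman update: a prior estimate is
# combined with a new observation, weighted by their respective
# variances (all numbers are hypothetical).

def kalman_update_1d(prior, prior_var, obs, obs_var):
    """Fuse a prior estimate with an observation of the same quantity."""
    gain = prior_var / (prior_var + obs_var)   # weight on the observation
    post = prior + gain * (obs - prior)        # updated estimate
    post_var = (1.0 - gain) * prior_var        # uncertainty always shrinks
    return post, post_var

# Example: odometry says x = 2.0 m (variance 0.04),
# sonar says x = 2.2 m (variance 0.01).
x, var = kalman_update_1d(2.0, 0.04, 2.2, 0.01)
# The fused estimate lies closer to the more certain sonar observation,
# and the posterior variance is smaller than either input variance.
```

The same principle, with matrices in place of scalars, underlies the map building formulation of section 4.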
The Kalman Filter is based on a linear system model. To overcome this limitation, the Extended Kalman Filter (EKF) can be employed and is founded on the assumption that for small noise, first order linearisation of the system model is sufficient for propagating the noise covariance. The problem with discarding higher order terms is that bias can accumulate after repeated estimation. Two approaches have been proposed to deal with this bias: The first approach, called the Iterated Extended Kalman Filter (IEKF), iteratively estimates the parameters of interest by repetitively linearising the system equations about the new estimates until little change results. The second approach, which will be referred to as the Julier-Uhlmann-Durrant-Whyte method (JUDKF) (Julier, Uhlmann and Durrant-Whyte 1995), is based on generating a set of data points using the error covariance of the input parameters, propagating the data points and computing the resulting error covariance, thus obviating the need to manually evaluate various Jacobian matrices. This paper compares the accuracy of the two methods.
The mapping strategy presented here is feature based and has the following attributes:
2. Robot Architecture
As shown in Figure 2, the communication backbone of the robot is an ISA AT bus with a 486DX2-66 MHz processor board controlling a custom sonar sensor card and a custom servo motion control card. The sensor control card sends transmit pulses and captures entire echoes from three receiving transducers. The transmit pulse is generated from a 10 µs 300 to 0 to 300 volt pulse, and the echo waveform is sampled with a 12 bit ADC at 1 MHz. The motion control card contains a MC1401 chip which provides PID control to the four DC motors, two in the pan-tilt mechanism and two for the drive wheels. For every motor, an encoder is mounted on the actuation shaft (i.e. after the gearbox) to generate feedback information that is not corrupted by backlash in the gearbox.
2.1. The Sonar Array
Figure 3: The sonar array configuration
The sonar array illustrated in Figure 3 has a multiple transducer configuration which makes it possible to classify common indoor features into planes, 90° concave corners and edges. The sonar array accurately estimates specular target ranges to within 0.2 mm and elevation and azimuth angles to within 0.02° for ranges up to 5 metres within the sensor beamwidth (Kleeman and Kuc 1995). At every scanning point, the sensor first fires TR1 while scanning anticlockwise at 90° per second to locate the directions of potential targets from the echoes on the three receivers. Then it pans clockwise at the same speed, slowing down only at the directions of the potential targets found earlier, and fires T0 followed by T2. If classification is unsuccessful, the target is tagged as unknown, but range and bearing are still recorded for unknown objects.
2.2. The Locomotion and Odometry System
The locomotion and odometry system shown in Figure 4 consists of drive wheels and separate encoder wheels that generate odometry measurements from optical shaft encoders. The encoder wheels are made with O-rings contacting the floor so as to be as sharp-edged as practically possible to reduce wheelbase uncertainty, and are independently mounted on linear bearings to allow vertical motion, and hence minimise problems of wheel distortion and slippage. This design greatly improves the reliability of odometry measurements. The odometry error model used to propagate error covariance and odometry benchmarking can be found in (Chong and Kleeman 1997).
3. Summary of the Iterated Extended Kalman Filter (IEKF) and the Julier-Uhlmann-Durrant-Whyte Kalman Filter (JUDKF)
This section begins by introducing the Kalman Filter in a general context. Before proceeding, the notation used will be explained. A circumflex above a random variable (e.g. Ŝ) indicates an estimator of that variable, whereas a bar above a random variable (e.g. S̄) indicates its mean. The partial derivative (Jacobian) operator is denoted by ∇ and is defined by (1).
Suppose the state vector S(k) contains all the randomly distributed parameters of interest, which evolve with discrete time according to the state transition equation

S(k+1) = F(S(k), U(k+1)) (2)

where U(k) is the input vector. At stage k+1, these random parameters can be observed with a set of measurements M(k+1) via the observation model

M(k+1) = H(S(k+1)) (3)
The estimation of S(k+1) with equations (2) and (3) is inherently imperfect because of the noise in S(k), U(k+1) and M(k+1). The goal of optimisation is to generate a new state estimate that minimises the mean square error of the parameters S(k+1) conditioned on all the past observations, which is equal to the mean of the parameters conditioned on all the past observations (Bar-Shalom and Li 1993). Let Z^{j} be all the observations gathered up to stage j, and Ŝ(i|j) be the minimum mean square error estimate of S(i) conditioned on Z^{j}; then

Ŝ(i|j) = E{S(i) | Z^{j}} (4)

and

P_{ss}(i|j) = E{[S(i) - Ŝ(i|j)][S(i) - Ŝ(i|j)]^{T} | Z^{j}} (5)

where E{.} is the expectation of a random variable.
A set of measurements M(k+1) about S(k+1) can also be gathered at stage k+1. Due to the noise in both M(k+1) and Ŝ(k+1|k), equation (3) does not hold exactly. A residual vector can be defined as

z(k+1) = M(k+1) - H(Ŝ(k+1|k)) (6)

With the residual vector, the Kalman Filter can be invoked to generate a better estimate of S(k+1), namely Ŝ(k+1|k+1), based on (Bar-Shalom and Li 1993)

Ŝ(k+1|k+1) = Ŝ(k+1|k) + P_{sz}(k+1|k)P_{zz}^{-1}(k+1|k)z(k+1) (7)

and the error covariance is also reduced to

P_{ss}(k+1|k+1) = P_{ss}(k+1|k) - P_{sz}(k+1|k)P_{zz}^{-1}(k+1|k)P_{sz}^{T}(k+1|k) (8)

where P_{sz}(k+1|k) is the cross-covariance between Ŝ(k+1|k) and z(k+1), and P_{zz}(k+1|k) is the covariance of z(k+1), defined in a similar fashion.
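The gain-and-correct structure of this update can be sketched in a few lines of numpy. The matrices below are hypothetical placeholders; W = P_{sz}P_{zz}^{-1} plays the role of the Kalman gain:

```python
import numpy as np

# Sketch of the generic update step: the gain W = P_sz P_zz^{-1}
# corrects the predicted state with the residual z and shrinks the
# state covariance. All matrices here are hypothetical placeholders.

def kf_update(S_pred, P_pred, z, P_sz, P_zz):
    W = P_sz @ np.linalg.inv(P_zz)   # Kalman gain
    S_new = S_pred + W @ z           # corrected state estimate
    P_new = P_pred - W @ P_sz.T      # reduced error covariance
    return S_new, P_new
```

In the map building formulation of section 4, S holds the robot pose and all feature parameters, and z is the residual of a collinearity constraint.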
In practice, the state transition equation and the observation equation are non-linear. Methods are therefore required to estimate the covariance and cross-covariance matrices needed by the Kalman Filter. The IEKF and the JUDKF are introduced for this purpose.
3.1. The IEKF Method
The IEKF method is an extension of the Extended Kalman Filter (EKF), which is discussed first. With the EKF method, the state prediction is

Ŝ(k+1|k) = F(Ŝ(k|k), U(k+1)) (9)

The noise associated with all random vectors is assumed small, so that applying a first order Taylor expansion about the estimator is reasonable for propagating the error covariance through non-linear equations. Suppose the error of Ŝ(k|k) is not correlated with U(k+1); then

P_{ss}(k+1|k) = ∇F_{S}P_{ss}(k|k)∇F_{S}^{T} + ∇F_{U}Cov(U(k+1))∇F_{U}^{T} (11)

where ∇F_{S} (respectively ∇F_{U}) is the Jacobian matrix of F() evaluated around Ŝ(k|k) (respectively U(k+1)), as indicated by the subscript. ∇F_{S} is also known as the state transition matrix. Cov(U(k+1)) is the error covariance of the input vector U(k+1). In a similar manner,

P_{zz}(k+1|k) = ∇H_{S}P_{ss}(k+1|k)∇H_{S}^{T} + Cov(M(k+1)) (12)
The IEKF method improves the performance of the Extended Kalman Filter by linearising the measurement equation about the new estimate, and attempts to iteratively draw closer to the true mean, in a way similar to solving a non-linear algebraic equation using the Newton-Raphson algorithm. Let η_{i} denote the i^{th} iterate, with η_{0} = Ŝ(k+1|k); the pseudo code is as follows:

set i ← 0
repeat {
i ← i+1
} while (‖η_{i} − η_{i−1}‖ > ε_{η})

where the notation ';η_{i}' means 'evaluated at the new estimator η_{i}', and ε_{η} is a threshold vector. Further details on the IEKF can be found in (Bar-Shalom and Li 1993; Jazwinski 1970).
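The relinearisation loop can be sketched as follows. This is the standard IEKF iterate from the cited texts, not the paper's exact formulation; the measurement function h, its Jacobian H_jac, and the scalar convergence threshold are illustrative assumptions:

```python
import numpy as np

# Sketch of the IEKF relinearisation loop: the measurement model is
# relinearised about each new iterate eta, as in a Newton-Raphson
# solve, until the iterate stops changing.

def iekf_update(S_pred, P_pred, M, h, H_jac, R, eps=1e-6, max_iter=10):
    eta = S_pred.copy()
    for _ in range(max_iter):
        H = H_jac(eta)                          # Jacobian at current iterate
        P_zz = H @ P_pred @ H.T + R             # residual covariance
        W = P_pred @ H.T @ np.linalg.inv(P_zz)  # gain relinearised at eta
        eta_new = S_pred + W @ (M - h(eta) - H @ (S_pred - eta))
        if np.linalg.norm(eta_new - eta) < eps: # threshold test
            eta = eta_new
            break
        eta = eta_new
    P_new = (np.eye(len(S_pred)) - W @ H) @ P_pred
    return eta, P_new
```

For a linear h the loop converges after one relinearisation and reduces to the ordinary Kalman update.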
3.2. The JUDKF Method
Julier, Uhlmann and Durrant-Whyte (Julier, Uhlmann and Durrant-Whyte 1995) have developed a method for accurately propagating a covariance matrix through non-linear equations while reducing the bias associated with the result. This section summarises and generalises the JUD method in the context of this paper. Examples of how this method can be used with the Kalman Filter (hence the JUDKF) are given at the end of this section.
Suppose that Ŝ(k|k), of size n_{s}×1, is an estimate of a particular random vector S(k), and associated with the estimate is an error covariance matrix P_{ss}(k|k) of size n_{s}×n_{s}. Then a set of sigma points s_{j} is generated from the 2n_{s} columns of

±L (14)

where L is the lower triangular matrix of the Cholesky Decomposition n_{s}P_{ss}(k|k) = LL^{T}. A set of 2n_{s} data points can be formed,

S_{j}(k|k) = Ŝ(k|k) + s_{j}, j = 1, …, 2n_{s} (15)
Let X and Y be non-linear n_{x}×1 and n_{y}×1 functions of S,

x = X(S), y = Y(S) (16)
The following quantities can be calculated with S_{j}(k|k):

x̄ = (1/2n_{s}) Σ_{j=1}^{2n_{s}} X(S_{j}(k|k)) (17)

ȳ = (1/2n_{s}) Σ_{j=1}^{2n_{s}} Y(S_{j}(k|k)) (18)

P_{xx} = (1/2n_{s}) Σ_{j=1}^{2n_{s}} [X(S_{j}(k|k)) - x̄][X(S_{j}(k|k)) - x̄]^{T} (19)

P_{yy} = (1/2n_{s}) Σ_{j=1}^{2n_{s}} [Y(S_{j}(k|k)) - ȳ][Y(S_{j}(k|k)) - ȳ]^{T} (20)

P_{xy} = (1/2n_{s}) Σ_{j=1}^{2n_{s}} [X(S_{j}(k|k)) - x̄][Y(S_{j}(k|k)) - ȳ]^{T} (21)
Equations (19) to (21) for obtaining the covariance and cross-covariance are considered suboptimal (Julier, Uhlmann and Durrant-Whyte 1995); this is the price of guaranteeing positive (semi)definiteness. To simplify subsequent discussion, the computational details are encapsulated into the following functions:
In both functions, k is the stage specifier for all the independent parameters.
The computation of the square root of a matrix is computationally expensive and simplification is desirable. For example, if P_{ss} has a block diagonal structure, that is, P_{ss} = diag(P_{i}), then

L = diag(L_{i}) where n_{s}P_{i} = L_{i}L_{i}^{T} (22)
The JUD method can now be applied to a Kalman Filtering problem:
where diag(.) here is the matrix formed by the argument matrices on the diagonal.
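The sigma-point propagation of equations (14) to (21) can be sketched in numpy as follows. This is a minimal sketch using 2n_{s} equally weighted points and a hypothetical nonlinearity; the paper's encapsulated functions additionally handle cross-covariances and stage bookkeeping:

```python
import numpy as np

# Sketch of the JUD sigma-point propagation: 2n points are generated
# from the columns of the Cholesky factor of n*P, pushed through the
# nonlinearity f, and the output mean and covariance recovered.

def jud_propagate(s_hat, P, f):
    n = len(s_hat)
    L = np.linalg.cholesky(n * P)            # n P = L L^T, as in (14)
    sigma = [s_hat + L[:, j] for j in range(n)] + \
            [s_hat - L[:, j] for j in range(n)]   # 2n points, as in (15)
    Y = np.array([f(s) for s in sigma])
    y_mean = Y.mean(axis=0)                  # as in (17)/(18)
    P_yy = sum(np.outer(y - y_mean, y - y_mean) for y in Y) / (2 * n)
    return y_mean, P_yy
```

For a linear f the recovered mean and covariance match exact first-order propagation, which is a convenient sanity check; no Jacobian is ever formed.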
4. Map Building Formalism
This section is further subdivided into nine parts. Section 4.1 discusses the environment model and pinpoints the parameters to be optimised. Section 4.2 details all mapping scenarios that must be considered in order to grow the map primitives. Section 4.3 presents map building as a statistical optimisation problem and formulates solutions in the context of the Kalman Filter. Two formulations are presented: the classical Global approach (Leonard, Durrant-Whyte and Ingemar 1992) and the Relocation-Fusion approach (Moutarlier and Chatila 1989). The authors' formulations bear resemblance to these precursors, but are more general in the sense that they extend beyond feature-to-feature matching in order to tackle the more complex scenarios faced by sonar mapping. Section 4.4 explains why a corner should be merged with two intersecting partial planes, not one. Section 4.5 describes how a collinearity constraint should be validated. Sections 4.6 to 4.9 discuss other important map management details, namely the discrimination of phantom targets, the incorporation of new measurements, and the merging and removal of existing primitives.
4.1. Map Primitives
The environmental model comprises two types of primitives:
An edge is similarly characterised by its Cartesian coordinates only; the sonar sensor provides no indication of its orientation.
4.2. Growing Map Primitives
A snapshot of the map building scenario at stage k+1 is depicted in Figure 6. The robot has just moved to a new position and sensed a few new features. It is now ready to use some features for localisation, and add the remaining features to the map.
Figure 6 : Status of map and data fusion process at stage k+1
Since the robot is operating indoors, discrete feature elements are assumed to come from a few planes, so that they can be merged using some collinearity constraint to give a more realistic representation of the environment.
Figure 7 : Conditions for growing map primitives with a plane measurement
Figure 8 : Conditions for growing map primitives with a corner measurement
A planar measurement is fused to a partial plane if it satisfies the conditions depicted in Figure 7. A corner measurement is fused to an existing corner feature if it is close enough to it; otherwise it is fused to two existing intersecting planes if it satisfies the conditions depicted in Figure 8. In a typical real environment, edges are produced by artefacts on the walls such as mouldings. While being excellent stationary landmarks for map building and localisation, they cannot be considered collinear with the nearby walls. Therefore an edge is only fused to an existing edge if they are in the proximity of each other. For all greyed condition boxes in the figures, χ² tests (to be described later) are applied. Every time a feature or relation is re-observed, the state of every map feature is updated because of their correlation. The unterminated endpoints of partial planes are projected onto the new gradient determined by the new state parameters, whereas the terminated endpoints are re-calculated from the intersections of all pairs of partial planes marked as terminated with each other.
4.3. The Kalman Filter Formulation of Map Building Problem
In this section, the map building problem is first formulated according to the classical Global approach. A few equations are then highlighted and modified according to the concept of the Relocation-Fusion approach introduced by Moutarlier and Chatila (1989). All this is done in the specific context of sonar mapping. After embedding the IEKF or the JUDKF, the result is more general than the original formulation.
The two dimensional coordinates and orientation (collectively known as the state) of the robot, as well as the speed of sound, at stage k are denoted by the random vector x_{0}(k) with respect to a global reference frame. Further assume that a partial map already exists, and that the random parameter vectors of the existing features x_{i}(k) are concatenated with x_{0}(k) to form the global state vector S(k). S(k) contains all the parameters to be optimised, and S is the set of all state vectors to be optimised.
(28)
(29)
At stage k+1, the robot travels to a new destination. The intermediate state of the robot can be predicted as a function of its preceding state and the input vector U(k+1) using the state transition equation (9) or (23). In this case U(k+1) is specified by the distances travelled by the left wheel and the right wheel. Strictly speaking, the time histories of the left and right wheel rotations, L(t) and R(t), are required to compute the intermediate state. In this experiment, the motion types are confined to linear translation and on-the-spot rotation only. If the motion is a translation, L and R should have the same sign; likewise, if the motion is a rotation, L and R should have opposite signs.
(30)
(31)
Since a new model has been developed in (Chong and Kleeman 1997) for propagating random odometry errors, and motion will only transform x_{0}(k|k), equations (9), (23), (11) and (25) can be simplified further. For the IEKF method,
(32)
(33)
(34)
and for the JUDKF method,
(35)
(36)
(37)
where Odom() represents the new odometry error model developed in (Chong and Kleeman 1997) that takes in the robot’s wheel covariance matrix Cov(U(k+1)) and wheel turns and outputs the propagated covariance matrix.
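The simplified state transition for the two permitted motion types can be sketched as follows. This is a minimal illustration, not the model of (Chong and Kleeman 1997); the wheelbase b and the sign-based motion-type test are illustrative assumptions:

```python
import math

# Sketch of the pose prediction for the two motion types used in the
# paper: straight translation (wheel distances dL, dR with the same
# sign) and on-the-spot rotation (opposite signs). The wheelbase b is
# a hypothetical value.

def predict_pose(x, y, theta, dL, dR, b=0.4):
    if dL * dR >= 0:                 # translation: wheels move together
        d = 0.5 * (dL + dR)
        return x + d * math.cos(theta), y + d * math.sin(theta), theta
    else:                            # rotation on the spot
        return x, y, theta + (dR - dL) / b
```

For general motions the full wheel time histories L(t) and R(t) would be required, as noted above.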
A measurement vector consists of a time of flight r_{i} and a direction ψ_{i} to a target, and is denoted by
(38)
Every new measurement is tested against all the possible collinearity constraints set out in section 4.2, in order to grow the map primitives. A typical constraint would take the form of (3). Its residual vector (also known as innovation) and its covariance can be computed for each measurement, using (10) and (11) for IEKF and (24) and (26) for JUDKF respectively.
Since the noise incurred on these residuals is not correlated, block processing is not necessary (Bar-Shalom and Li 1993) (that is, they can be processed one at a time). Each residual vector is just a function of x_{0}, the measurement and the corresponding 'matched' map features, therefore there are significant zero submatrices in the Jacobian matrix on which simplification can be made. The following simplifying formulation involves only one feature x_{j}. Formulation involving two states (for example, fusing a new corner measurement to two existing intersecting partial planes) is similar so will not be detailed. The modified residual vector is
(40)
and its error covariance,
where η_{r} is the r^{th} iterate generated by the IEKF; the iterator here includes x_{0} and x_{j} only.
The covariance of the measurement should account for the imperfect polygonal world assumption. For example, not all walls are strictly flat. It has a form depicted by equation (43) but more about the matrix values is presented later.
The Kalman Filter equations require that the cross-covariance between the observation matrix and the state matrix be evaluated with equations (13) and (27). After that, the state and the error covariance matrix of the map features, together with the robot's position, can be updated with equations (7) and (8). Once again, efficiency can be improved by processing the covariance matrix in disparate blocks. To update the state of x_{j},
(45)
For the IEKF, let η_{r} be the iterator for x_{j} only, after the r^{th} iteration, starting with the initial value
(46)
For the JUDKF, x_{j} is simply found with the following classical equation,
Both cases share the same covariance update formula. For all combinations of states m and n,
Extra care has been taken when forming the covariance matrices required by the JUDKF. For example, in equation (44), if j=0 or j=i, then the composite covariance matrix passed into the JUDKF function should include P_{00}, P_{0i}, P_{i0} and P_{ii} only. Otherwise, a redundant (hence singular, with zero determinant) covariance would be formed, which triggers a fatal run-time error.
The process is then repeated until all observations have been processed. Since the IEKF is also extremely computationally demanding, simplification becomes essential. The original algorithm is modified such that it terminates after exactly three iterations. Under this simplification, the iterator should only contain the states which affect all the matrix terms appearing in the IEKF algorithm, namely x_{0} and x_{i}.
After the fusion of all measurements associated with the reobserved features, the remaining features are considered new and are simply incorporated into the global state. More information about fusing new observations is contained in section 4.7.
The Relocation-Fusion approach formulated by Moutarlier and Chatila (1989) makes a minor variation on the Global approach. The measurement error is first used to update the robot's state x_{0} ONLY. The improved x_{0} is then used to re-calculate the residual vector and all the related Jacobian matrices, which are then used to update the remaining map features. Stepwise, after z_{i} is computed, x_{0} and P_{00} and the cross-covariance between x_{0} and all other map features x_{n} can be found using (47) by setting j=0 and (48) by setting m=0, respectively.
With the improved x_{0}, the quantities z_{i} (and, in the case of the IEKF, the iterator η_{r}), P_{zz} and P_{jz} can be re-generated, in this particular order, by applying equations (39) to (42). This is followed by the update of the states of all other map features and all covariances and cross-covariances, excluding x_{0} and P_{00}, using equations (47) to (48). To embed the IEKF algorithm, iteration is first performed on x_{0}. The matched feature x_{i} is included in the iteration after three runs. After this, the remaining states follow.
set r ← 0
repeat { /* Relocation with IEKF */
r ← r+1
…, evaluate P_{jz}
…, update P_{0j} and P_{j0}
}
set r ← 0
repeat { /* Fusion with IEKF */
r ← r+1
…, evaluate P_{jz}
…, update x_{j}
…, update P_{mn} (excluding P_{00})
}
4.4. Why Should a New Corner be Fused to Two Intersecting Partial Planes ?
If a corner measurement fails to be fused to one of the standalone corners, an attempt will be made to fuse it to two intersecting planes. The reason for fusing a corner to two intersecting planes is that a corner measurement vector [a_{i}(x_{0},M_{i}) b_{i}(x_{0},M_{i})]^{T} has size 2×1. If it sets up a collinearity constraint with one partial plane [a b]^{T} only, then the residual vector formed would have size 1×1, that is, z_{i}=[a_{i}a+b_{i}b-a^{2}-b^{2}]. After fusion, three options for dealing with the corner are available:
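The 1×1 residual above can be evaluated directly, as sketched below. The sketch assumes, consistent with the expression in the text, that a partial plane is parameterised by the foot [a b]^{T} of the perpendicular from the origin to its supporting line:

```python
# Sketch of the 1x1 residual z_i = a_i*a + b_i*b - a^2 - b^2: with the
# plane parameterised by the foot (a, b) of the perpendicular from the
# origin, the residual vanishes exactly when the corner (a_i, b_i)
# lies on the infinite line supporting the plane.

def corner_plane_residual(ai, bi, a, b):
    return ai * a + bi * b - a * a - b * b
```

A corner lying anywhere on the line x = 1 (plane foot (1, 0)) gives a zero residual, while a corner off the line gives a residual proportional to its perpendicular offset.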
4.5. Validity of Collinearity Constraint
To assess the validity of a constraint relationship, the Mahalanobis distance test (also known as the χ² test) is applied to the residual vector:

d² = z_{i}^{T}P_{zz}^{-1}z_{i} (49)

where d² is the normalised sum of squares of all the vector components. If the residual vector is assumed to be jointly Gaussian, then d² has a χ² distribution with the number of degrees of freedom determined by the rank of P_{zz}. A one-sided acceptance interval is chosen to establish a 90% probability concentration ellipsoid in the distribution. A new measurement whose d² falls in this acceptance interval is assumed to have satisfied the collinearity constraint set up with the existing feature(s). In this work, all residual vectors have size 2×1, so the number of degrees of freedom is 2, and the acceptance interval is d² < 5.991.
To improve computational efficiency, a residual z_{i} which is considerably different from 0 is rejected without going through the test, to avoid the series of matrix operations. At this stage, the issue of features falling into more than one validation gate has been temporarily put aside. This problem can arise either when the position covariance is too large, or when two existing map features are very close together but have not yet been merged. This difficult issue will be investigated in the future.
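The validation gate can be sketched in a few lines of numpy; the residual and covariance values below are hypothetical:

```python
import numpy as np

# Sketch of the Mahalanobis (chi-square) gate: a residual z with
# covariance P_zz passes if z^T P_zz^{-1} z falls below the 90%
# chi-square threshold for 2 degrees of freedom (5.991).

CHI2_90_2DOF = 5.991

def passes_gate(z, P_zz, threshold=CHI2_90_2DOF):
    d2 = float(z @ np.linalg.inv(P_zz) @ z)  # normalised squared residual
    return d2 < threshold
```

A residual that is small relative to its covariance is accepted as satisfying the constraint; a large one is rejected as a likely wrong association.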
4.6. Distinguishing Phantom Targets
Figure 9 : Example of treatment of phantom targets
Local maps are preserved. Each feature in the local map has a parameter indicating which state it has been fused to; therefore the knowledge of where a particular map primitive was observed is available. When the map is sufficiently complete, many phantom targets caused by specular reflection can be eliminated by checking whether the lines of sight from the positions where they were observed are blocked by some partial planes. If the phantom targets are too close to some partial planes, they are considered ambiguous and are not eliminated. The strategy stems from the experimental observation that in a cramped indoor environment, such as a narrow corridor, corners and edges are more likely to cause phantom targets than planes. This is a result of the property that corners (and edges) return (some) acoustic energy in the opposite direction to its arrival, while planes reflect energy away from the arrival direction except at normal incidence.
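In two dimensions, the line-of-sight check reduces to a segment intersection test between the sensor-to-target segment and each partial plane stored as a segment. The sketch below shows one standard way to do this; the paper's actual visibility test may differ in detail:

```python
# Sketch of the line-of-sight test used to eliminate phantom targets:
# the segment from the sensing position to the target is checked for a
# strict crossing against a partial plane stored as a 2-D segment.

def _cross(ox, oy, ax, ay, bx, by):
    # z-component of (a - o) x (b - o)
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def blocks_line_of_sight(sensor, target, plane_a, plane_b):
    """True if segment plane_a-plane_b strictly crosses sensor-target."""
    d1 = _cross(*sensor, *target, *plane_a)
    d2 = _cross(*sensor, *target, *plane_b)
    d3 = _cross(*plane_a, *plane_b, *sensor)
    d4 = _cross(*plane_a, *plane_b, *target)
    return d1 * d2 < 0 and d3 * d4 < 0
```

A phantom target whose line of sight is blocked by a partial plane in this sense is a candidate for elimination; near-degenerate (touching) cases are the ambiguous ones mentioned above.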
4.7. Fusion of the Remaining New Features
After localisation, the fusion of the remaining features makes use of the estimated robot position. Each new feature x_{i} is a function H() of the robot's position x_{0} and a measurement vector M_{i}. For each new feature, the error covariance can be calculated:
(50)
(51)
(52)
(53)
and all the cross-covariance among the new features and the existing features are also generated. Let j denote the objects already in the map,
(54)
(55)
As a reminder, if j=0, the composite matrix passed to the JUDKF function should comprise P_{00} and Cov(M_{i}) only.
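Initialising a new feature and its covariance can be sketched as below. The simple polar-to-Cartesian model is an illustrative stand-in for the paper's measurement model H(), and first-order (Jacobian) propagation stands in for either the IEKF or the JUDKF form:

```python
import numpy as np

# Sketch of initialising a new point feature x_i = H(x_0, M_i) from a
# range/bearing measurement (r, psi), with first-order covariance
# propagation. The robot pose is (x, y, theta); a simple
# polar-to-Cartesian model stands in for the paper's full H().

def init_feature(pose, P_pose, r, psi, R_meas):
    x, y, theta = pose
    c, s = np.cos(theta + psi), np.sin(theta + psi)
    feat = np.array([x + r * c, y + r * s])
    J_pose = np.array([[1.0, 0.0, -r * s],
                       [0.0, 1.0,  r * c]])   # d feat / d pose
    J_meas = np.array([[c, -r * s],
                       [s,  r * c]])          # d feat / d (r, psi)
    P_feat = J_pose @ P_pose @ J_pose.T + J_meas @ R_meas @ J_meas.T
    return feat, P_feat
```

Note that the feature covariance inherits both the pose uncertainty and the measurement noise, which is why cross-covariances with existing features must also be generated, as in equations (54) and (55).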
4.8. Simultaneous Encounter of Collinear Features
There will be occasions when two or more collinear features are encountered at the same stage. For instance, this situation occurs if the robot reaches a corner and observes, for the first time, both the corner and the walls that form it. When this situation arises, the planar feature is first incorporated into the global state vector as a new feature. The corner feature is then regarded as an observation of that new feature.
4.9. Removal of Redundant Primitives
Removal of redundant primitives would occur when
Figure 10 : Conditions for merging two existing partial planes in map
As a brief illustration, when two existing features, with states x_{i}(k) and x_{j}(k) respectively, are to form a collinearity constraint, redundancy can be removed by enhancing the estimation of x_{i}(k) with x_{j}(k), and discarding x_{j}(k) afterwards. After filtering, all x_{j} related covariance and cross covariances can be removed. The three-run IEKF algorithm is applied in a similar fashion so it will not be elaborated further. If a constraint involves three states, as in the case of fusing a corner to two intersecting partial planes, the formulation is once again similar.
It is also possible to exploit the orthogonality constraint among partial planes. However, not all intersecting walls in today's indoor environments are strictly perpendicular, so the idea has not been implemented even though it can be accommodated. If implemented, a χ² test would also be applied to assess orthogonality.
5. Experimental Results
Experiments have been carried out in four artificial environments erected with cardboard boxes; two of them are shown in Figure 11 and Figure 12. The odometry of the robot was calibrated to reduce systematic errors, and the parameters required by the non-systematic error model were obtained in (Chong and Kleeman 1997) prior to the experiments.
Since the cardboard boxes were lined up manually, taking the gridlines on the parquetry floor as reference, the variances associated with the time of flight measurement and the angular measurement were set larger than those achievable by the sonar sensor (Kleeman and Kuc 1995) in order for the collinearity constraints to hold. The initial value of the speed of sound was set to 342.5 m/s, the mean value at the time. For all four experiments, the following tentative values were used:
standard deviation of direction = 2.4°
initial standard deviation of c_{s} = 0.18 m/s
In the first environment, the robot was programmed to repetitively enter, make a 180° turn in, and exit an enclosure four times to investigate the long term performance of both filters. The maps generated with the JUDKF and the IEKF are very similar, but the JUDKF is significantly slower. Both filters remained consistent throughout the navigation. The JUDKF produces a map with all features correctly merged. The IEKF's performance closely matches that of the JUDKF, with only one corner not fused to two partial planes and two edge features not identified as belonging to the same physical edge; their fusions were hindered by the χ² tests. A comparison of the covariances generated by the IEKF and the JUDKF for a few features indicates that the JUDKF in general tends to generate larger covariances. As a result, the error ellipses for 'related' features are more likely to overlap and more merging can be observed. Despite the minor imperfection, both post-filtering maps show that only one partial plane is generated for each wall, and most of the corners have been successfully fused to two intersecting partial planes, so well defined intersections can be observed. Also, most repetitive observations of the same edge are successfully merged into one edge map feature. All unterminated endpoints of partial planes have also been properly projected onto the line parameters. Three phantom corners are retained in the raw data map, but are subsequently eliminated in the post-filtering maps by the partial planes blocking their lines of sight.
In the second experiment, the robot was programmed to follow a rectangular path four times. One side of the 'wall' was indented by about 0.5 metre. The observations made about this experiment are virtually the same as those made in the third experiment, so they are not repeated here. At first glance, it might seem wasteful not to extend the partial planes to the edge artefacts by exploiting another collinearity constraint. This idea is found to be impractical in the real world for two important reasons:
Figure 11: The first environment, with the robot navigating into and out of the enclosure four times. (a) Actual environment; (b) pre-filtering perception; (c) IEKF; (d) JUDKF.
Figure 12: The second environment, with the robot repeating a rectangular path four times. (a) Actual environment; (b) pre-filtering perception; (c) IEKF; (d) JUDKF.
Figure 13: Cumulative processing time at all sensing positions for the two experiments. (c) The first experiment; (d) the second experiment.
6. Conclusion
The capability of our mobile robot system to navigate autonomously by map building in some simple environments has been demonstrated. IEKF and JUDKF have been employed to deal with the problem of covariance propagation through nonlinear transformations, and their strengths and weaknesses with regard to accuracy and speed have been compared on simulated and real data. It has been shown that the accuracy demonstrated by IEKF is comparable to that of JUDKF and is in fact sufficient in practice. While eliminating the tedium of deriving Jacobian matrices, JUDKF is less efficient than IEKF. The algorithm is now being intensively upgraded to enhance its robustness and efficiency. Current research focal points include eliminating the storage and update of the covariance between two features when it is found to be small, in order to improve the speed and memory requirements of the algorithm. Also under investigation is a map matching strategy to re-establish the robot's position when its uncertainty is too large or when the accumulation of position bias becomes significant.
7. Acknowledgment
Mr. Greg Curmi's assistance in the design of the robot and sonar implementation, the funding of a large ARC grant, and the financial assistance provided by an MGS scholarship and an OPRS scholarship are all gratefully acknowledged.
Appendix: Some Implementation Examples
This appendix evaluates some of the equations given in the main body of this paper. Let B denote the effective wheelbase. After moving, the robot's new position can be computed from the following state update equations:
(56)
where
(57)
The Jacobian matrices of the state update can be found in (Chong and Kleeman 1997) and hence will not be reproduced here.
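Since equations (56) and (57) are not reproduced legibly above, the following sketch illustrates a generic differential-drive odometry update with effective wheelbase B. This is a common textbook model given as an assumption for illustration, not necessarily the authors' exact formulation (their calibrated error model is in Chong and Kleeman 1997).

```python
import math

def odometry_update(x, y, theta, d_left, d_right, B):
    """Propagate the robot pose from left/right wheel displacements.

    Generic differential-drive model: the translation is the mean wheel
    displacement, the rotation is the wheel difference divided by the
    effective wheelbase B, and the heading is evaluated mid-turn (a
    common second-order approximation for motion along an arc).
    """
    d = 0.5 * (d_right + d_left)       # translation of the robot centre
    dtheta = (d_right - d_left) / B    # change in heading
    x_new = x + d * math.cos(theta + 0.5 * dtheta)
    y_new = y + d * math.sin(theta + 0.5 * dtheta)
    theta_new = theta + dtheta
    return x_new, y_new, theta_new
```

For EKF-style covariance propagation, the Jacobians of this update with respect to the pose and the wheel displacements would be evaluated at the current estimate, as referenced in the text.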
To match a corner feature to two partial planes x_{i}=[a_{i}b_{i}]^{T} and x_{j}=[a_{j}b_{j}]^{T}, the residual vector and various Jacobian matrices are
(58)
(59)
(60)
(61)
(62)
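The residual in (58) is not reproduced legibly above; as an illustration only, assuming each partial plane is parameterized in normal form by an orientation a and a perpendicular distance b (so that points on the line satisfy x cos a + y sin a = b, an assumption about the parameterization), the constraint that a corner lies on both partial planes can be sketched as:

```python
import math

def corner_to_planes_residual(xc, yc, a_i, b_i, a_j, b_j):
    """Residual for matching a corner (xc, yc) to two partial planes.

    Each plane is taken in normal form: x*cos(a) + y*sin(a) = b.
    A corner lying exactly at the intersection of both lines gives a
    zero residual; the filter drives this residual towards zero when
    fusing the corner with the two partial planes.
    """
    r_i = xc * math.cos(a_i) + yc * math.sin(a_i) - b_i
    r_j = xc * math.cos(a_j) + yc * math.sin(a_j) - b_j
    return (r_i, r_j)
```

The Jacobians in (59)-(62) would then be the derivatives of this residual with respect to the corner coordinates and the two sets of line parameters.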
More constraint equations and Jacobian matrices can be found in (Chong and Kleeman 1997).
8. References
Bar-Shalom, Y. and Li, X.R. 1993. Estimation and tracking: principles, techniques and software. Boston, London: Artech House Inc.
Barshan, B. and Kuc, R. 1990. Differentiating sonar reflections from corners and planes employing an intelligent sensor. IEEE Trans. on Pattern Analysis and Machine Intelligence, 12(6):560-569.
Bozma, O. and Kuc, R. 1991. Building a sonar map in a specular environment using a single mobile sensor. IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(12):1260-1269.
Chong, K.S. and Kleeman, L. 1997. Accurate odometry and error modelling for a mobile robot. IEEE International Conference on Robotics and Automation, Albuquerque, pp. 2783-2788.
Chong, K.S. and Kleeman, L. 1996. Mobile robot map building from an advanced sonar array and accurate odometry. Technical Report MECSE-1996-10, Melbourne. Monash University. Department of Electrical and Computer Systems Engineering.
Dudek, G., Freedman, P. and Rekleitis, I. M. 1996. Just-in-time sensing: efficiently combining sonar and laser range data for exploring unknown worlds. IEEE International Conference on Robotics and Automation, Minneapolis, pp. 667-672.
Gonzalez, J. 1992. An iconic position estimator for a 2d laser rangefinder. IEEE International Conference on Robotics and Automation. pp. 2646-2651.
Hong, M.L. and Kleeman, L. 1995. A low sample rate 3d sonar sensor for mobile robots. IEEE International Conference on Robotics and Automation. Nagoya. pp. 3015-3020.
Jazwinski, A.H. 1970. Stochastic processes and filtering theory. New York: Academic Press.
Julier, S., Uhlmann, J. and Durrant-Whyte, H. 1995. A new approach for filtering nonlinear systems. Proceedings of the 1995 American Control Conference, Seattle, Washington, pp. 1628-1632.
Kleeman, L. and Kuc, R. 1995. Mobile robot sonar for target localization and classification. International Journal of Robotics Research, 14(4):295-318.
Kuc, R. and Siegel, M.W. 1987. Physically based simulation model for acoustic sensor robot navigation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 9(6):766-778.
Leonard, J.J. and Durrant-Whyte, H.F. 1991. Simultaneous map building and localisation for an autonomous robot. IEEE/RSJ International Workshop on Intelligent Robots and Systems. pp. 1442-1447.
Leonard, J.J., Durrant-Whyte H.F. and Ingemar J.C. 1992. Dynamic map building for an autonomous mobile robot. International Journal of Robotics Research. 11(4):286-298.
Lim, J.H. and Cho, D.W. 1993. Experimental investigation of mapping and navigation based on certainty grid using sonar sensors. Robotica 11(1):7-17.
Lu, F. and Milios, E.E. 1995. Optimal global pose estimation for consistent sensor data registration. IEEE International Conference on Robotics and Automation. pp. 93-100.
Moutarlier, P. and Chatila, R. 1989. Stochastic multisensory data fusion for mobile robot location and environment modeling. 5th International Symposium on Robotics Research. Tokyo. pp. 85-94.
Moravec, H.P. and Elfes, A. 1985. High resolution map from wide-angle sonar. IEEE International Conference on Robotics and Automation. pp. 116-121.
Neira, J. et al. 1996. Multisensor mobile robot localisation. IEEE International Conference on Robotics and Automation. pp. 673-679.
Ohya, A., Nagashima, Y. and Yuta, S. 1994. Exploring unknown environment and map construction using ultrasonic sensing of normal direction of walls. IEEE International Conference on Robotics and Automation. pp. 485-492.
Oriolo, G., Vendittelli M. and Ulivi, G. 1995. On-line map building and navigation for autonomous mobile robots. IEEE International Conference on Robotics and Automation. pp. 2900-2906.
Pagac, D., Nebot, E. M. and Durrant-Whyte, H. 1996. An evidential approach to probabilistic map-building. IEEE International Conference on Robotics and Automation. pp. 745-750.
Rencken, W.D. 1993. Concurrent localisation and map building for mobile robots using ultrasonic sensors. IEEE International Conference on Robotics and Automation. pp. 2192-2197.
Smith, R.C. and Cheeseman, P. 1986. On the representation and estimation of spatial uncertainty. International Journal of Robotics Research. 5(4):56-68.