CHINESE JOURNAL OF AERONAUTICS, 2016, Issue 6

A dimension reduced INS/VNS integrated navigation method for planetary rovers

Ning Xiaolina,b,Gui Mingzhena,*,Zhang Jiea,Fang Jianchenga,b

aSchool of Instrument Science & Opto-Electronics Engineering, Beihang University, Beijing 100083, China

bFundamental Science on Novel Inertial Instrument & Navigation System Technology Laboratory, Beijing 100083, China

Inertial navigation system/visual navigation system (INS/VNS) integrated navigation is a commonly used autonomous navigation method for planetary rovers. Since visual measurements are related to both the previous and current state vectors (position and attitude) of a planetary rover, the performance of the Kalman filter (KF) is challenged by this time-correlation problem. A state augmentation method, which appends the previous state to the state vector, is commonly used to deal with this problem. However, augmenting the state dimensions increases the computational load. In this paper, a state dimension reduced INS/VNS integrated navigation method based on the coordinates of feature points is presented; it utilizes the information obtained through INS/VNS integrated navigation at the previous moment to overcome the time-correlation problem and reduce the dimensions of the state vector. Equations of the extended Kalman filter (EKF) are used to demonstrate the equivalence of the calculated results of the proposed method and the traditional state augmented method. Results of simulations and experiments indicate that this method achieves similar accuracy with less computational load than the traditional method.

1.Introduction

The planetary rover, which is very useful for planetary surface exploration, image acquisition, sampling and analysis, is one of the major tools used in deep space exploration missions. Planetary rovers are intended to work in an unfamiliar environment; thus, effective navigation is a basic requirement for survival and for completing scheduled tasks. Traditional navigation is based on ground tracking. However, long communication delays between ground stations and rovers exist due to the large distance between target planets and the Earth. Taking a Mars explorer as an example, the average distance between Mars and the Earth is 225 million km, and the corresponding two-way communication time delay is about 14 min,1 which makes it hard to meet the requirements of real-time navigation. In addition, there exist periods and areas that cannot be measured or controlled by ground tracking due to the rotation and revolution of the Earth and other planets. Given these weaknesses of ground tracking, autonomous navigation has become a key technology for rovers owing to its superior real-time capability and reliability.

Inertial navigation systems (INS)2,3 (dead reckoning4) and visual navigation systems (VNS)5,6 are two commonly used autonomous navigation methods for rovers. An INS uses inertial sensors, consisting of gyroscopes and accelerometers, to measure angular rates and linear accelerations. The position, velocity and attitude can be obtained by integrating these measurements without the need for external references. A VNS, on the other hand, determines the change of a rover's position and attitude by sensing the external environment, and obtains the current position and attitude by integration.7 When the number of feature points in an image is too small, or the overlapping area of two adjacent images is too small, the accuracy of a VNS decreases.8

Although error accumulation problems exist in both INS and VNS, they have complementary characteristics in terms of information sources and the ways error accumulates. The error of an INS grows with time, while VNS error is directly proportional to distance traveled.9 An INS is usually used to track dynamic motion over short time durations, while a VNS is usually used to estimate the position displacement between consecutive views. An INS has high short-term attitude precision, whereas a VNS is more sensitive to relative displacement.10 It is difficult for either INS or VNS alone to complete long-term high-precision navigation tasks. Due to their complementary information sources and update characteristics, the accuracy and reliability of navigation can be improved by using VNS to aid INS. Navigation methods combining INS and VNS have been successfully applied to Spirit and Opportunity,11 which landed on Mars in 2004, and Curiosity,12 which landed on Mars in 2012.

INS/VNS integrated navigation methods can be divided into two categories according to measurement selection: tight integration and loose integration. Tight integration directly uses pixel coordinates or normalized image coordinates obtained from cameras as measurements,13-15 whereas loose integration uses 3D reconstruction and motion estimation to obtain relative motion parameters as measurements.16 In both cases, the measurements are relevant not only to the current state vector but also to previous state vectors. In order to solve this time-correlation problem, a state dimension augmented (DA) algorithm is used, in which the state vector is augmented by the previous state vector. However, the computational load increases with the augmentation of the state dimensions.17

In this paper, a state dimension reduced INS/VNS (DR-INS/VNS) integrated navigation method based on the coordinates of feature points is proposed, whose state model is established from the INS error equation. A new measurement model is deduced that utilizes the current position and attitude obtained from the INS combined with the previous state estimate of the INS/VNS integrated navigation, which overcomes the time-correlation problem between measurement and state. Using the proposed measurement model, the dimensions of the state vector can be reduced from 18D to 13D; thus, the computational load of the Kalman filter is reduced. The dimension reduced (DR) algorithm proposed in this paper achieves accuracy comparable to the traditional DA method in both simulations and experiments.

2.INS/VNS integrated navigation algorithm

The INS/VNS integrated navigation algorithm is mainly composed of three parts: INS calculation in a navigation frame, acquisition of visual measurements, and an integrated navigation algorithm. Fig. 1 shows the work flow of the INS/VNS integrated navigation method, where tk-1 and tk stand for the moments k-1 and k, respectively. The INS mainly consists of an inertial measurement unit (IMU) and the corresponding INS mechanics. An IMU typically consists of triaxial gyroscopes and triaxial accelerometers. The angular rates and linear accelerations of the rover relative to the inertial frame are directly measured and then transformed into the navigation frame to determine the position, velocity and attitude of the rover. A binocular camera acquires 3D image sequences of the external environment, and the pixel coordinates of matching feature points are obtained through an image processing algorithm. The 3D coordinates of feature points are then derived from frame transformation and 3D reconstruction, and are adopted as measurements. Finally, through the state model and measurement model of the established INS/VNS integrated navigation, the estimated position, velocity and attitude information can be obtained.

2.1.Principle of INS in navigation frame

The inertial navigation solution is obtained by integrating the measured angular velocities and accelerations in the navigation frame. The basic equations of a strapdown INS in the navigation frame are as follows:18

Ċb^n = Cb^n Ωnb^b
v̇^n = Cb^n f^b - (2ωim^n + ωmn^n) × v^n + g^n
L̇ = vN^n / Rm,  λ̇ = vE^n / (Rm cos L)

where Cb^n is the transformation matrix from the b-frame to the n-frame, whose transpose Cn^b, the transformation matrix from the n-frame to the b-frame, is the attitude matrix; r^n = [L, λ]^T and v^n = [vE^n, vN^n, vU^n]^T represent the position and velocity of the rover in the navigation frame, respectively, with L and λ the latitude and longitude, and vE^n, vN^n and vU^n the east, north and up velocities of the rover in the navigation frame; f^b represents the specific force measurement, which is measured by the accelerometers; g^n is the gravity acceleration expressed in the navigation frame; ωim^n represents the lunar rotation rate vector in the navigation frame, with ωim the lunar rotation rate, whose value is 2.6617×10^-6 rad/s; ωmn^n represents the angular velocity of the navigation frame relative to the Moon-fixed frame, expressed in the navigation frame; Rm is the mean radius of the Moon, whose value is 1737.5 km; Ωnb^b is the antisymmetric matrix of ωnb^b = [ωx, ωy, ωz]^T, the angular velocity of the body frame relative to the navigation frame expressed in the body frame, which is given by

Ωnb^b = [ 0  -ωz  ωy;  ωz  0  -ωx;  -ωy  ωx  0 ]
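The propagation above can be sketched numerically. The following is a minimal first-order Euler discretization in pure Python, assuming an ENU navigation frame and omitting the Coriolis term (2ωim^n + ωmn^n) × v^n for brevity; the function and variable names are illustrative, not the paper's implementation:

```python
import math

def skew(w):
    """Antisymmetric matrix [w x] of a 3-vector."""
    wx, wy, wz = w
    return [[0.0, -wz,  wy],
            [ wz, 0.0, -wx],
            [-wy,  wx, 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def ins_step(Cnb, v, L, lam, f_b, w_nb_b, g_n, Rm, dt):
    """One first-order strapdown step in the navigation (ENU) frame.
    Cnb: b-frame -> n-frame rotation; f_b: specific force in the b-frame.
    Coriolis terms are omitted for brevity (they are tiny for the Moon)."""
    # Attitude: Cdot = Cnb * [w_nb_b x]
    dC = mat_mul(Cnb, skew(w_nb_b))
    Cnb = [[Cnb[i][j] + dC[i][j] * dt for j in range(3)] for i in range(3)]
    # Velocity: vdot = Cnb f_b + g_n
    f_n = mat_vec(Cnb, f_b)
    v = [v[i] + (f_n[i] + g_n[i]) * dt for i in range(3)]
    # Position: latitude from north velocity, longitude from east velocity
    L = L + v[1] / Rm * dt
    lam = lam + v[0] / (Rm * math.cos(L)) * dt
    return Cnb, v, L, lam
```

For a stationary rover whose measured specific force exactly cancels gravity, the state is unchanged after a step, which is a quick sanity check on the signs above.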

Fig.1 Work flow of INS/VNS integrated navigation method.

2.2.Acquisition of visual measurement

When a pair of stereo images is acquired, feature extraction is conducted on each image, and stereo matching is conducted between the left and right images simultaneously. When a new pair of images is taken, feature tracking is performed between the two adjacent frames. The scale-invariant feature transform (SIFT) algorithm is used in this paper for feature extraction and tracking.19 3D reconstruction can then be performed to obtain the 3D coordinates of the matching feature points with respect to the camera frame, which are adopted as the visual measurements in this paper.

We can directly obtain the pixel coordinates of a feature point (u, v) through image processing algorithms. Then, the camera parameters are used to calculate the distorted normalized image coordinates xd. The normalized image coordinates xn can be obtained through iteration using xd and the calibrated camera parameters. 3D reconstruction can then be executed with the obtained normalized image coordinates, as shown in Fig. 2.
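The iterative recovery of xn from xd can be illustrated with a fixed-point iteration. This sketch assumes a radial model with a single distortion coefficient k1 (the paper does not specify its full camera model), and `undistort` is a hypothetical helper name:

```python
def undistort(xd, yd, k1, iters=20):
    """Recover normalized image coordinates (xn, yn) from distorted
    coordinates (xd, yd) by fixed-point iteration, assuming the radial
    model xd = xn * (1 + k1 * r^2) with r^2 = xn^2 + yn^2."""
    xn, yn = xd, yd                # initial guess: undistorted = distorted
    for _ in range(iters):
        r2 = xn * xn + yn * yn
        scale = 1.0 + k1 * r2      # forward radial distortion factor
        xn, yn = xd / scale, yd / scale
    return xn, yn
```

For the small distortion coefficients of calibrated cameras the iteration is a contraction, so a few iterations already recover xn to high accuracy.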

Fig.2 Diagram of triangulation theory.

In Fig. 2, O1 and O2 represent the optical centers of the left and right cameras, respectively. P represents a point in space. P1 and P2 are the projections of P on the left and right imaging planes, respectively. ouv and OXY are the image pixel frame and the physical image frame, respectively. The direction vectors of O1P and O2P in the corresponding camera frames can be obtained from the normalized image coordinates of the feature point in the left and right image planes, xLn = [xL, yL]^T and xRn = [xR, yR]^T, respectively, as follows.

The rotation matrix from the left camera frame to the right camera frame is denoted as CRL, and the translation vector as TRL. They can be obtained through calibration of the binocular stereo vision system. If we transform U1 into the right camera frame, then we can get

We can calculate the values of |PO1| and |PO2| with Eqs. (4) and (5). The 3D coordinates of P with respect to the left camera frame can then be expressed as

Taking error into consideration,we can average the above two equations and obtain

The 3D coordinates of P with respect to the right camera frame, PcR, can be obtained according to PcR = CRL PcL + TRL. Thus, the visual measurements are acquired.
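The triangulation of Fig. 2 can be sketched as a small least-squares problem. In the sketch below, the right-camera ray d2 and the right-camera offset t are assumed to be already expressed in the left camera frame (i.e., CRL and TRL have been applied), and the final averaging mirrors the two-estimate averaging described above; `triangulate` is an illustrative name:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(d1, d2, t):
    """Least-squares depths (s1, s2) such that P = s1*d1 from the left
    camera and P = t + s2*d2 from the right camera, with d2 and t already
    expressed in the left camera frame.  Returns the averaged estimate."""
    # Normal equations of min ||s1*d1 - s2*d2 - t||^2 over (s1, s2)
    a11, a12 = dot(d1, d1), -dot(d1, d2)
    a22 = dot(d2, d2)
    b1, b2 = dot(d1, t), -dot(d2, t)
    det = a11 * a22 - a12 * a12
    s1 = (b1 * a22 - a12 * b2) / det
    s2 = (a11 * b2 - a12 * b1) / det
    p1 = [s1 * x for x in d1]                      # point along the left ray
    p2 = [t[i] + s2 * d2[i] for i in range(3)]     # point along the right ray
    # Average the two estimates, as in the error-averaging step above
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]
```

When the two rays are consistent (noise-free), both depth estimates coincide and the averaged point is exact; with noise, the average splits the residual between the two rays.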

According to the relative motion relationship at adjacent moments, we can obtain

3.INS/VNS integrated navigation based on coordinates of feature points

In this section, the basic principles of the commonly used state DA-INS/VNS integrated navigation method based on the coordinates of feature points, including its state model and measurement model, are introduced first. Then, the DR-INS/VNS integrated navigation method is presented, followed by the algorithm of the extended Kalman filter (EKF).20,21

3.1.Traditional state dimension augmented method

3.1.1.State model

In order to obtain measurements related only to the current state, the misalignment angle and position at the previous moment are added to the state vector. Thus, the state vector of the DA method can be written as Xz = [φ, δv^n, δr^n, ε, ∇, ~φ, δ~r^n]^T, where φ = [φE, φN, φU]^T represents the misalignment angles at the current moment; δv^n = [δvE^n, δvN^n, δvU^n]^T represents the current velocity error with respect to the navigation frame; δr^n = [δL, δλ]^T represents the current position error with respect to the navigation frame; ε = [εx, εy, εz]^T represents the gyroscopic drifts; ∇ = [∇x, ∇y, ∇z]^T represents the accelerometer biases; ~φ and δ~r^n represent the misalignment angle and position error at the previous moment, respectively.

The INS error equation with respect to the navigation frame can be obtained from Eq.(1),which is used as a state model:

The state model shown above is linear and can be simplified as

where Fz(t) and Gz(t) are the coefficient matrices of Xz(t) and Wz(t), respectively; Wz(t) is the system noise of the DA method.

3.1.2.Measurement model

where Cb^c is the matrix transpose of Cc^b; Cbk^bk+1 is the rotation matrix of the body frame from the moment tk to tk+1, which can also be expressed as

where Cerr,k+1 is the attitude error matrix at the moment tk+1, which can be obtained from φ as

Cerr,k+1 ≈ I - [φ×]

where [φ×] is the antisymmetric matrix of φ.

Cerr,k is the attitude error matrix at the moment tk, which can be obtained from ~φ as

Cerr,k ≈ I - [~φ×]

where [~φ×] is the antisymmetric matrix of ~φ.
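The small-angle attitude error matrix can be formed directly from the misalignment vector. A minimal sketch, assuming the first-order approximation Cerr ≈ I - [φ×] (the helper names are illustrative):

```python
def skew(phi):
    """Antisymmetric matrix [phi x] of the misalignment angle vector phi."""
    px, py, pz = phi
    return [[0.0, -pz,  py],
            [ pz, 0.0, -px],
            [-py,  px, 0.0]]

def attitude_error_matrix(phi):
    """First-order attitude error matrix C_err ~= I - [phi x],
    valid for small misalignment angles."""
    S = skew(phi)
    return [[(1.0 if i == j else 0.0) - S[i][j] for j in range(3)]
            for i in range(3)]
```

Because the approximation is first order, the result is orthogonal only up to second-order terms in φ, which is why the filter re-normalizes the attitude after each correction.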

Fig.3 Diagram of relationship between measurement and state of DA method.

where Tbk^bk+1 is the translation vector of the body frame from the moment tk to tk+1, which can be expressed as

where hz(·) represents the measurement equation of the DA method; the dimension of the measurement vector is 3N × 1, with N the number of feature points and Pck+1,i the 3D coordinates of the ith feature point with respect to the left camera frame at the moment tk+1; V is the measurement noise.

3.2.State dimension reduced method

3.2.1.State model

The state vector of the DR method is denoted as X = [φ, δv^n, δr^n, ε, ∇]^T. The INS error equation with respect to the navigation frame can be obtained from Eq. (1) and is used as the state model:

The above state model can be abbreviated as

where F(t) and G(t) are the coefficient matrices of X(t) and W(t), respectively; W(t) is the system noise of the DR method.

3.2.2.Measurement model

The measurements chosen in the DR method are the same as those of the traditional DA method. Fig. 4 shows the relationship between the measurements and the states. The DR method substitutes the estimated information obtained from INS/VNS integrated navigation at the moment tk for the position and attitude at the previous moment used in the DA method, and thereby reduces the state vector dimensions from 18D to 13D.

Substituting the estimates obtained from INS/VNS integrated navigation at the moment tk for the position and attitude at the previous moment in the DA method, the measurement can be expressed as

Fig.4 Diagram of relationship between measurement and state of DR method.

From Eqs.(8),(27)and(29),we can establish the measurement model of the DR method,which can be abbreviated as

where h(·) represents the measurement equation of the DR method. It can be seen from the measurement model of the DR method that the measurement is relevant only to the current position and attitude, through utilizing the state estimated at the previous moment by the INS/VNS integrated navigation. The dimensions of the state vector decrease from the traditional 18D to 13D, thereby reducing the computational load of the Kalman filter.

3.3.Algorithm of extended Kalman filter

From the state equation (Eq. (10) or Eq. (25)) and the measurement equation (Eq. (23) or Eq. (30)), we can formulate the optimal filter. Since the measurement equation in this paper is nonlinear, the EKF is adopted as the navigation filter. The EKF expands the system about the optimal state estimate as a first-order Taylor series, thus achieving local linearization of the nonlinear system. Due to its easy implementation and small computational load, it has been widely used in dealing with nonlinear problems.22 The revised position and attitude can be obtained from the estimated misalignment angle and position error. The specific process of INS/VNS integrated navigation in this paper is shown in Fig. 5.

4.Comparison between DA method and DR method

4.1.Comparison of a Kalman filter

In this section,the equivalence of calculated results between the proposed DR method and traditional DA method is shown through EKF equations.

4.1.1.State estimate and estimate covariance of DR method

From Eq. (25), the state transition matrix of the DR method can be approximated as

Φk/k-1 ≈ I + F(tk-1)Δt

where Δt is the sampling period. The predicted state estimate and predicted estimate covariance are given by

X̂k/k-1 = Φk/k-1 X̂k-1
Pk/k-1 = Φk/k-1 Pk-1 Φk/k-1^T + Qk-1

where Pk-1 and Qk-1 are the covariance of the estimate error and the covariance of the system noise of the DR method at the previous moment, respectively.

Since the measurement equation is nonlinear, the measurement Jacobian matrix of the DR method can be obtained from Eq. (30) as follows:

Thus, the optimal Kalman gain of the DR method can be given by

Kk = Pk/k-1 Hk^T (Hk Pk/k-1 Hk^T + R)^-1

where R is the covariance of the measurement noise.

The updated state estimate of the DR method is

X̂k = X̂k/k-1 + Kk [Zk - h(X̂k/k-1)]

The updated estimate covariance of the DR method is

Pk = (I - Kk Hk) Pk/k-1
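The predict-gain-update cycle of this subsection can be condensed into a few lines. The sketch below runs a single EKF cycle on a toy two-state system in pure Python (2×2 matrices only); it is not the 13D navigation filter, and `ekf_cycle` is an illustrative name:

```python
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mT(A):
    return [list(r) for r in zip(*A)]

def madd(A, B, s=1.0):
    return [[A[i][j] + s * B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def minv2(M):
    # Inverse of a 2x2 matrix (enough for this two-measurement toy example)
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def ekf_cycle(x, P, Phi, Q, z, h, H, R):
    """One EKF cycle: predict, gain, update. x and z are column vectors."""
    # Predict: x = Phi x,  P = Phi P Phi^T + Q
    x_pred = mmul(Phi, x)
    P_pred = madd(mmul(mmul(Phi, P), mT(Phi)), Q)
    # Gain: K = P H^T (H P H^T + R)^-1
    S = madd(mmul(mmul(H, P_pred), mT(H)), R)
    K = mmul(mmul(P_pred, mT(H)), minv2(S))
    # Update: x = x + K (z - h(x)),  P = (I - K H) P
    innov = madd(z, h(x_pred), s=-1.0)
    x_new = madd(x_pred, mmul(K, innov))
    KH = mmul(K, H)
    I_KH = [[(1.0 if i == j else 0.0) - KH[i][j] for j in range(len(KH))]
            for i in range(len(KH))]
    P_new = mmul(I_KH, P_pred)
    return x_new, P_new
```

With identity dynamics, an identity measurement model and equal prior and measurement covariances, the update splits the innovation evenly and halves the covariance, which matches the closed-form Kalman result for that case.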

Fig.5 Flowchart of EKF algorithm.

4.1.2.State estimate and estimate covariance of DA method

The state vector of the traditional DA method can be written as

Xz = [X^T, ~X^T]^T

where ~X = [~φ, δ~r]. Since ~φ and δ~r are errors estimated by the integrated navigation filter at the previous moment, we can get ~φ̇ = δ~ṙ = 0. Then, we can obtain

Given the accurate navigation results of the filter at the previous moment, according to Eqs. (8), (17), (22), (27) and (29), the measurement equation of the DA method is equal to that of the DR method. Thus, the measurement Jacobian matrix of the DA method can be written as

Since the measurement of the DA method is equal to that of the DR method, the updated state estimate of the DA method is

From Eqs. (49) and (50), we can see that the values corresponding to the current moment in X̂k,z and Pk,z are nearly equal to X̂k and Pk. Thus, the proposed DR method, based on the previous state estimate, can achieve the same accuracy as the DA method while reducing the state dimensions.

4.2.Comparison of computation loads

The computation loads of two integrated navigation methods are analyzed to illustrate the efficiency of the DR method compared to the DA method.Assuming that a computer consumes equal time in running four basic floating-point operations(addition,subtraction,multiplication,and division),the computation loads for some common matrix operations are summarized in Table 1 according to Ref.23.

Assuming q and r are the state vector and measurement vector dimensions,respectively,the computation loads for each relevant code line can be given in Table 2.

The computational load of a Kalman filter in one cycle can be determined as

where T denotes the total computational load of a Kalman filter in one cycle. Table 3 shows the computation loads in one cycle of the two methods when the numbers of matching feature points are 50 and 100, respectively. When the number of matching feature points is 50, the measurement vector dimension amounts to 150, and the computational complexity of the DR-INS/VNS integrated navigation method is about 11.5% less than that of the DA-INS/VNS integrated navigation method. When the number of matching feature points is 100, the measurement vector dimension amounts to 300, and the computational complexity of the DR-INS/VNS method is reduced by about 6.2% compared to that of the DA-INS/VNS method.
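The trend can be reproduced with a rough flop model. The cost coefficients below are our own approximations (a multiply-accumulate counted as two flops, cubic-cost inversion), not exactly those of Tables 1 and 2, yet for state dimensions q = 18 versus q = 13 they yield reductions of a similar magnitude to those quoted above:

```python
def mult_cost(n, m, p):
    """Approximate flops to multiply an (n x m) matrix by an (m x p) matrix."""
    return n * p * (2 * m - 1)

def inv_cost(n):
    """Approximate flops to invert an (n x n) matrix (Gaussian elimination)."""
    return n ** 3

def kf_cycle_cost(q, r):
    """Rough flop count of one KF cycle: state dim q, measurement dim r."""
    c = 0
    c += 2 * mult_cost(q, q, q) + q * q                    # Phi P Phi^T + Q
    c += mult_cost(r, q, q) + mult_cost(r, q, r) + r * r   # S = H P H^T + R
    c += inv_cost(r)                                       # S^-1
    c += mult_cost(q, q, r) + mult_cost(q, r, r)           # K = P H^T S^-1
    c += mult_cost(q, r, 1) + q                            # state update
    c += mult_cost(q, r, q) + mult_cost(q, q, q)           # (I - K H) P
    return c

def reduction(q_da, q_dr, r):
    """Relative saving of the DR filter (dim q_dr) over the DA filter (q_da)."""
    da, dr = kf_cycle_cost(q_da, r), kf_cycle_cost(q_dr, r)
    return (da - dr) / da
```

Because the r³ inversion term is independent of the state dimension, the relative saving shrinks as the measurement dimension grows, which is exactly the behavior observed between the 150D and 300D cases.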

Table 1 Computation loads of some common matrix operations.

Table 2 Computation loads of a Kalman filter.

Table 3 Comparison of computation loads between two integrated navigation methods.

It can be seen from the above analysis that the proposed DR-INS/VNS integrated navigation method can achieve the same precision while effectively reducing the computation load compared to the DA-INS/VNS integrated navigation method. It is worth noting that the dimensions of the measurement vectors are much higher than those of the state vectors when the 3D coordinates of matching feature points are adopted as measurements, so the measurement-related terms account for a larger proportion of the computational load than the state-related terms. That is to say, the reduction in computational load is not obvious when the number of feature points is large, but the reduction is substantial when the dimensions of the measurement vectors are small. When a rover is running on the Moon, there are usually few feature points, so the proposed DR method has a better effect on computational load reduction.

5.Simulation and experiment

In order to demonstrate the effectiveness of the proposed DR-INS/VNS navigation method, we compare it with INS, VNS and the traditional DA-INS/VNS method through simulation and experimentation. A desktop with a 3.20 GHz Intel Core i5 processor and 4 GB of RAM is used for data processing.

5.1.Simulation

Experimenting on the Moon or Moon-like surfaces is unrealistic due to cost and experimental setup limitations. Simulations are therefore conducted to verify the feasibility of the proposed method. The simulation system is composed of INS, VNS and INS/VNS integrated navigation parts. The INS simulation system includes a path generator and an IMU noise generator. The path generator produces the ideal path of the rover, including the true position, velocity and attitude and the ideal IMU output. The IMU noise generator produces IMU noise matching a real IMU, which is then added to the ideal values to get the simulated IMU data. The VNS simulation system contains a 3DS MAX model of the lunar surface and a simulated binocular visual system. The 3DS MAX model provides virtual 3D terrain of the lunar surface based on images and a database from NASA, and the simulated binocular visual system produces binocular images based on the ideal trajectory, the 3DS MAX model and the camera parameters.

5.1.1.Simulation conditions

The gyroscopic drift is 1 (°)/h (1σ) and the accelerometer bias is 300 μg (1σ); the frequency of both sensors is 10 Hz.8 The binocular visual system is set up as a pair of parallel-mounted cameras with a baseline of 20 cm. The height of the cameras is 1.7 m, their resolution is 1024×1024, and the field of view is 45°×45°.24 The frequency is 0.1 Hz, and the rotation matrix and translation vector from the camera frame to the body frame are

The desired trajectory is a rectangle, as shown in Fig. 6(a). The length is 724 m and the average traveling velocity is 0.2 m/s; the total run time is 3620 s. Figs. 6(b) and (c) are typical images taken by the binocular camera.

5.1.2.Simulation results

Figs. 7–9 show the simulation results of INS, VNS, the traditional DA-INS/VNS integrated navigation method and the proposed DR-INS/VNS integrated navigation method under the above conditions. As can be seen from Fig. 7(a), the estimation errors of INS increase rapidly with time, caused by the continuous integration of the gyro and accelerometer biases. As can be seen from Fig. 7(b), the trajectory of VNS navigation is significantly better than that of INS navigation, but the error still gradually increases with distance. The trajectories of the DA-INS/VNS and DR-INS/VNS methods are relatively close, and both are better than that of VNS. Figs. 8 and 9 show the position and attitude errors of the four methods. It can be seen that the position errors of both INS and VNS are divergent. Although VNS achieves better position accuracy, its attitude error is large; INS achieves good attitude accuracy, but its position accuracy is poor. The position and attitude accuracy of the DR-INS/VNS integrated navigation method are very close to those of the DA-INS/VNS method, and both show significant improvement over INS and VNS.

Fig.6 The desired trajectory and typical simulated images of the Moon’s surface.

Fig.7 Simulation trajectories.

Fig.8 Position errors of simulation.

Fig.9 Attitude errors of simulation.

Table 4 Final position and attitude errors in simulation.

Table 4 shows the specific data on final position and attitude errors for the four navigation methods. The final position error of INS is 3858.1 m, which is 532.89% of the total length. The final position error of VNS is 4.5 m, which is 0.62% of the total length. However, the final attitude error of VNS is significantly greater than that of INS, especially the heading error, which is 14.5 times larger. The results of the DR-INS/VNS integrated navigation method are close to those of the DA-INS/VNS method: the final position error of both is 3.4 m, which is 0.47% of the total length, and the final attitude error is significantly better than that of VNS. The time consumed in one cycle of the DR-INS/VNS method is reduced by about 15% compared to that of the DA-INS/VNS method.

5.2.Experiment

Although the gravity, spin rate and surface of the Earth are different from those of the Moon, experiments on Earth still provide a useful reference. The KITTI Vision Benchmark Suite25,26 is a self-driving car project using visual navigation, conducted by the Karlsruhe Institute of Technology (KIT) and the Toyota Technological Institute at Chicago (TTIC). Raw data of autonomous driving in and around different traffic sites in Karlsruhe, Germany, can be downloaded from its Web site. This paper uses the 2011_09_30_drive_0028 raw data set; the length of the route is 4128.86 m, and the run time is 8.38 min.

5.2.1.Experimental conditions

The IMU used in the experiment was the OXTS RT 3003,26 whose frequency is 100 Hz. The gyroscopic drift was 0.01 (°)/s. The accelerometer bias was 1020.4 μg and the random drift was 50 μg. The baseline of the binocular camera was 54 cm, and its height was 1.7 m from the ground. The resolution was 1226×370 with a frequency of 10 Hz. The rotation matrix and translation vector from the camera frame to the body frame were

Fig.10 GPS/IMU trajectory and typical experiment images.

Fig.11 Experiment trajectories.

Fig.12 Position errors of experiment.

Fig.13 Attitude errors of experiment.

Table 5 Final position and attitude errors in experiment.

The trajectory of GPS/IMU was adopted as the ideal trajectory, as shown in Fig. 10, together with typical images from the data set.

5.2.2.Experiment results

Figs.11–13 show the experimental results of INS,VNS,the DA-INS/VNS integrated navigation method and the DR-INS/VNS integrated navigation method under the above conditions.Fig.11(a)shows the comparison between INS trajectory and ideal trajectory.Fig.11(b)shows the comparison between the VNS trajectory,two kinds of INS/VNS integrated navigation trajectories,and ideal trajectory.Figs.12 and 13 show the position and attitude error in the four kinds of navigation methods.As can be seen from Figs.11–13,the experimental results are broadly similar to simulation results;the results of the DR-INS/VNS integrated navigation method are basically the same as the DA-INS/VNS integrated navigation method.Both of them can achieve relatively high accuracy.

Table 5 shows the specific data on final position and attitude errors for the four navigation methods. The final position error of INS is 58420.7 m, which is 1414.89% of the total length. The final position error of VNS is 236.3 m, which is 5.72% of the total length. However, the final attitude error of VNS is significantly greater than that of INS, especially the heading error, which is 585.8 times larger. The final position error of the proposed DR-INS/VNS method is 18.3 m, which is 0.44% of the total travel. The final pitch error, roll error and heading error are 1.3693, 3.0897 and 0.2192, respectively, which are similar to those of the DA-INS/VNS method. The time consumed in one cycle of the DR-INS/VNS method is reduced by about 9.8% compared to that of the DA-INS/VNS method.

6.Conclusions

(1)A state dimension reduced algorithm based on state estimation is deduced utilizing the current position and attitude information obtained from INS combined with previous information obtained through INS/VNS integrated navigation.

(2)The measurement of the proposed method is only relevant to the current position and attitude(that is,the state vector)and reduces the dimensions of the state vector.It decreases the computational load of Kalman filters.

(3)The equivalence of results from the proposed DR method and the traditional DA method is shown in detail through EKF equations.

(4)Simulation and experimental results reveal that the accuracy of the two navigation methods is basically the same. An analysis of the computation loads demonstrates that the computational complexity of the DR-INS/VNS integrated navigation decreases by more than 11.5% compared to that of the DA-INS/VNS integrated navigation when the number of matching feature points is 50. The reduction in computational load is larger when the dimensions of the measurement vectors are smaller.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (Nos. 61233005 and 61503013), the National Basic Research Program of China (No. 2014CB744202), and the Beijing Youth Talent Program. The authors wish to express their gratitude to all members of the Science & Technology on Inertial Laboratory, the Fundamental Science on Novel Inertial Instrument & Navigation System Technology Laboratory and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1203) for their valuable comments.

1.Bhaskaran S.Autonomous navigation for deep space missions[Internet].Pasadena,CA:Jet Propulsion Laboratory,National Aeronautics and Space Administration;2006.Available from:http://arc.aiaa.org/doi/pdf/10.2514/6.2012-1267135.

2.Barshan B,Durrant-Whyte HF.Inertial navigation systems for mobile robots.IEEE Trans Robot Autom 1995;11(3):328–42.

3.Wang ZH,Chen XJ,Zeng QS.Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion.Chin J Aeronaut 2013;26(2):442–8.

4.Fuke Y,Krotkov E.Dead reckoning for a lunar rover on uneven terrain.In:Proceedings of 1996 IEEE international conference on robotics and automation;1996 Apr 22–28;Minneapolis(MN).Piscataway(NJ):IEEE Press;1996.

5.Zhou H,Zhang T.A vision-based navigation approach with multiple radial shape marks for indoor aircraft locating.Chin J Aeronaut 2014;27(1):76–84.

6.Goldberg SB,Maimone MW,Matthies L.Stereo vision and rover navigation software for planetary exploration.In:Aerospace conference proceedings;2002 Mar 9–16;Montana.Piscataway(NJ):IEEE Press;2002.

7.Ning XL,Gui MZ,Xu YZ,Bai XB,Fang JC.INS/VNS/CNS integrated navigation method for planetary rovers.Aerosp Sci Technol 2015;48:102–14.

8.Cheng Y,Maimone M,Matthies L.Visual odometry on the Mars exploration rovers.In:Proceedings of 2005 IEEE international conference on systems,man and cybernetics;2005 Oct 10–12;HI.Piscataway(NJ):IEEE Press;2005.p.903–10.

9.Tardif JP,George M,Laverne M,Kelly A,Stentz A.A new approach to vision-aided inertial navigation.In:Proceedings of 2010 IEEE/RSJ international conference on intelligent robots and systems(IROS);2010 Oct 18–22;Taipei.Piscataway(NJ):IEEE Press;2010.

10.Hesch JA,Kottas DG,Bowman SL,Roumeliotis SI.Consistency analysis and improvement of vision-aided inertial navigation.IEEE Trans Rob 2014;30(1):158–76.

11.Ali KS,Vanelli CA,Biesiadecki JJ,Maimone MW,Cheng Y,Alexander JW.Attitude and position estimation on the mars exploration rovers.In:Proceedings of 2005 IEEE international conference on systems,man and cybernetics;2005 Oct 10–12;HI.Piscataway(NJ):IEEE Press;2005.

12.Grotzinger JP,Crisp J,Vasavada AR,Anderson RC,Baker CJ,Barry R,et al.Mars science laboratory mission and science investigation.Space Sci Rev 2012;170(1–4):5–56.

13.Roumeliotis SI,Johnson AE,Montgomery JF.Augmenting inertial navigation with image-based motion estimation.In:Proceedings of IEEE international conference on robotics and automation;2002 May 11–15;Washington,D.C.Piscataway(NJ):IEEE Press;2002.

14.Panahandeh G,Jansson M.Vision-aided inertial navigation using planar terrain features.In:Proceedings of 2011 the 1st international conference on robot,vision and signal processing;2011 Nov 21–23;Kaohsiung.Piscataway(NJ):IEEE Press;2011.

15.Di Corato F,Innocenti M,Pollini L.An entropy-like approach to vision-aided inertial navigation.IFAC World Congr 2011;44(1):13789–94.

16.Bilodeau VS,Beaudette D,Hamel J-F,AlgerM,Iles P,MacTavish K.Vision-based pose estimation system for the lunar analogue rover ‘ARTEMIS’.In:International symposium on artificial intelligence,robotics and automation in space;2012 Sep;Turin,Italy;2012.p.4–6.

17.Mourikis AI,Roumeliotis SI.On the treatment of relative-pose measurements for mobile robot localization.In:Proceedings 2006 IEEE international conference on robotics and automation;2006 May 15–19;Orlando(FL).Piscataway(NJ):IEEE Press;2006.

18.Titterton D,Weston J.Strapdown inertial navigation technology.Aerospace Electron Syst Mag 2005;20(7):33–4.

19.Joglekar J,Gedam SS,Mohan BK.Image matching using SIFT features and relaxation labeling technique-A constraint initializing method for dense stereo matching.IEEE Trans Geosci Remote Sens 2014;52(9):5643–52.

20.Jazwinski AH.Stochastic processes and filtering theory.Pittsburgh:Academic Press;1970.p.162–93.

21.Jazwinski AH.Nonlinear and adaptive estimation in re-entry.AIAA J 1973;11(7):922–6.

22.Simon D.Optimal state estimation:Kalman,H infinity,and nonlinear approaches.New York:John Wiley & Sons;2006.p.395–409.

23.Karlsson R,Schön T,Gustafsson F.Complexity analysis of the marginalized particle filter.IEEE Trans Signal Process 2005;53(11):4408–11.

24.Maki JN,Bell JF,Herkenhoff KE,Squyres SW,Kiely A,Klimesh M,et al.Mars exploration rover engineering cameras.J Geophys Res:Planets(1991–2012)2003;108(E12),8071-1-12.

25.Geiger A,Lenz P,Urtasun R.Are we ready for autonomous driving?The KITTI vision benchmark suite.In:Proceedings of 2012 IEEE conference on computer vision and pattern recognition(CVPR);2012 Jun 16–21;Providence(RI).Piscataway(NJ):IEEE Press;2012.p.3354–61.

26.Geiger A,Lenz P,Stiller C,Urtasun R.Vision meets robotics:The KITTI dataset.Int J Robot Res 2013;32(11):1231–7.

Ning Xiaolin received her B.E.in computer science from Shandong Teachers University,Shandong,China,in 2001,and her Ph.D.in mechanical engineering from Beihang University,Beijing,China,in 2008.She is currently an Associate Professor at the School of Instrumentation Science and Opto-electronics Engineering,Beihang University.Her research interests include the guidance,navigation,and control systems of spacecraft and autonomous navigation of deep space explorers.

Gui Mingzhen is a Ph.D.student at the School of Instrumentation Science and Opto-electronics Engineering,Beihang University.He received his B.E.in measurement control technology and instruments from Northwestern Polytechnical University,Xi’an,China,in 2013.His research interests include autonomous navigation of deep space explorers and integrated navigation.

4 January 2016;revised 23 May 2016;accepted 20 June 2016

Available online 21 October 2016

Keywords: Computational complexity analysis; Inertial navigation system; INS/VNS integrated navigation; Planetary exploration rover; Visual navigation system


*Corresponding author.Tel.:+86 10 82339013.

E-mail addresses:ningxiaolin@buaa.edu.cn(X.Ning),guimingzhen@buaa.edu.cn(M.Gui).

Peer review under responsibility of Editorial Committee of CJA.


http://dx.doi.org/10.1016/j.cja.2016.10.009

1000-9361Ⓒ2016 Chinese Society of Aeronautics and Astronautics.Production and hosting by Elsevier Ltd.

This is an open access article under the CC BY-NC-ND license(http://creativecommons.org/licenses/by-nc-nd/4.0/).