Fig 3 visualizes our dynamic object filtering results. The filtered results are then sent to the graph factor module to obtain a globally consistent pose estimate. Compared to existing lidar odometry methods, our model obtains competitive results. Comparison of feature points and constraint relationships between frames: (a) edge feature points and loss le, (b) planar feature points and loss lp. Notably, argmax(*) is a function that returns the most likely label based on the predicted value of each point. Different operations to obtain the rotation and translation features. The inertial measurement unit (IMU) is placed 17 centimeters directly below the LiDAR. Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, People's Republic of China. Even if an object is identified as a vehicle, we cannot judge whether it is stationary or moving. ICP iteratively calculates the relationship between frames point by point until a stopping condition is satisfied. The length S of the IMU sequence is 15. Compared with it, RF-LIO can produce a purer map. Let Nc and Nr be the number of columns and rows of the semantic-assisted scan-context image. Our method uses the IMU and an LSTM network to estimate an initial relative pose, projects the vertex image and normal-vector image of the current frame, and then sends the projected images into the point cloud feature extraction network, so that the IMU not only has a direct connection with the final odometry estimation network but also brings the coordinate frames of two consecutive scans closer. We introduce a tightly coupled lidar-IMU fusion method in this paper. These methods often need to be combined with other traditional methods.
If you are using other lidar sensors, you may need to change the name of this time channel and make sure that it is the relative time within a scan. Table 5 shows the necessity of the two loss terms and of the strategy of searching for matching points in the entire point cloud. The point cloud is downsampled to about K points (the process is shown in Fig.). LIO-SAM: tightly-coupled lidar inertial odometry via smoothing and mapping. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Paris, France, 2020. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry. Abstract: This article presents FAST-LIO2: a fast, robust, and versatile LiDAR-inertial odometry framework. Through sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable estimates. We first define Fk+1 as the current keyframe scan and Mk as the corresponding submap, a point cloud map created by a sliding-window method around Fk+1. The urban dataset includes a wide variety of urban terrain: residential areas, an overpass, a construction area, etc. One set of experiments compares the original LIO-SAM with LIO-SAM based on semantic-assisted odometry (LIO-SAM-ODOM). The maximum horizontal and vertical angles of the vertex map are \(f_w=180^\circ \) and \(f_h=23^\circ \), and their densities are \(\eta _w = \eta _h = 0.5\). A dataset with few moving objects is defined as a low dynamic dataset. State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun, People's Republic of China. For example, DS-SLAM [8] uses a statistical method to calculate the motion consistency of feature points to separate moving points from static points. The result on sequence 05 is slightly worse than that of LIO-SAM-ODOM.
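The downsampling step described above can be sketched with a centroid voxel grid; the voxel size, the toy cloud, and the function name below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points that fall into the same voxel and average each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()  # flatten for robust fancy indexing
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Two tight clusters 10 m apart collapse to two centroids.
cloud = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0],
                  [10.0, 0.0, 0.0], [10.05, 0.0, 0.0]])
down = voxel_downsample(cloud, voxel_size=1.0)
```

Choosing the voxel size so that the surviving count lands near a target K is typically done by trial or a coarse search.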
As shown in Fig. 3, the pixel (i,j) value Ik+1,ij of the range image is defined as the distance of a point p in R3 to the (k+1)-th keyframe's local coordinate system Bk+1, where the range image size is determined by the given resolution and the lidar's horizontal and vertical FOV range. In addition to the matching-based approach described above, the feature-based method introduced in the LiDAR odometry and mapping (LOAM) method has become a popular front-end solution. We verify the effectiveness of the self-attention module by comparing against a model that replaces it with a single FC layer with an activation function (as in formula (12)). A learning-based model can provide a solution that only needs uniform downsampling of the point clouds, without manually selecting key points. For model-based methods, we show their lidar odometry results with and without mapping. The projection uses the azimuth angle i and the elevation angle j. Its input is 2D images, similar to ours. The moving object points are removed from the point cloud map. In this test, the removal rate of moving points is used to evaluate RF-LIO. Our method can outperform [5, 15]. We name it Ours-hard. To address this problem, moving objects need to be recognized against the background and removed. RF-LIO, however, needs to remove the dynamic points before scan-matching. Similarly, when the moving points are removed after scan-matching, our method is referred to as RF-LIO (After). https://doi.org/10.1007/978-3-031-02444-3_14. An updated lidar-inertial odometry package, LIO-SAM, has been open-sourced and is available for testing. The results are shown in Table III.
Even those approaches [12] that use normals as network input simply concatenate points and normals and feed them into the network; however, only the orientation between two point clouds relates to the normals, so normals should not be used to estimate translation. Edge points of the corresponding submap: {Mek}. PLoS ONE 16(12). Some works integrate semantic information into a LiDAR-based SLAM framework. It is summarized in Alg. Then the distance between a feature point and its corresponding edge or planar patch can be calculated using the following equations, where i, j, l, and m are the feature indices. First, a pair of siamese LSTMs is used to obtain the initial pose from the linear acceleration and angular velocity of the IMU. Our experiments show that our approach is effective. If the IMU features extracted through the network are directly merged with the point cloud features, the effect is limited (see Fig.). Another problem is that when moving objects completely block the FOV of our sensor, the method is not suitable. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping, https://github.com/TixiaoShan/LIO-SAM. DLO is a lightweight and computationally efficient front-end LiDAR odometry solution with consistent and accurate localization. It can also be transferred to object detection and achieves consistent improvement.
In this subsection, we introduce the experimental sensors and platform used in our experiments. Typically, light detection and ranging (LiDAR)-based SLAM methods collect real-time data from various sensors, such as LiDAR, inertial measurement units (IMUs), and global navigation satellite systems (GNSSs), to calculate the trajectory of the current vehicle and complete the mapping task. PMID: 34201217, PMCID: PMC8226800, DOI: 10.3390/s21123955. If a pixel coordinate has no matching 3D point, the pixel value is set to (0,0,0). [5] adopt the strategy of using the points at the same pixel in the last and current vertex maps as the matching points. Relative pose error of LIO-SAM-ODOM and LIO-CSI. The absolute trajectory error of all methods is shown in Table V. On the high dynamic datasets, RF-LIO yields a more prominent improvement in SLAM performance. The experimental results on the KITTI and JLU Campus datasets are shown and analyzed in Section 4. Different from other unsupervised lidar odometry methods, we additionally use the IMU to assist the odometry task. On the hard task, compared to the state-of-the-art method SelfVoxeLO [22], which applies 3D convolutions on voxels and consumes considerable GPU memory and training time, our method with IMU assistance also achieves comparable results. SelfVoxeLO: self-supervised LiDAR odometry with voxel-based deep neural networks. Hence, SLAM plays an important role in autonomous driving research, as it is the basis of map-based positioning, navigation, and planning. Normals of a point cloud indicate the relationship between a point and its surrounding points. There are already many IMU and lidar fusion algorithms for odometry in the traditional field, and using the information from both at the same time has become a trend. TPAMI 9(5), 698-700 (1987). The effect is more obvious in long-distance mapping tasks. FC denotes a fully connected layer.
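One standard way to obtain such normals, and a plausible stand-in for whatever estimator a pipeline like this uses, is PCA over each point's neighborhood; the 5-point patch below is a toy example:

```python
import numpy as np

def estimate_normal(neighbors):
    """Normal of a local patch = eigenvector of the smallest eigenvalue
    of the neighborhood covariance (classic PCA normal estimation)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least variance

# Points sampled on the z = 0 plane: the normal should be +/-(0, 0, 1).
patch = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [1, 1, 0], [0.5, 0.5, 0]], dtype=float)
n = estimate_normal(patch)
```

The sign of the recovered normal is ambiguous; pipelines usually orient it toward the sensor origin.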
The scan-context method [9] maps the 3D point cloud into a matrix through a bin encoding function and improves its efficiency using a two-step search strategy. Although we use the same loss functions and the same matching-point search strategy (nearest neighbor) as Nubert et al.
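A minimal sketch of the bin encoding idea behind scan-context: the cloud is partitioned into range rings and azimuth sectors, and each cell keeps the maximum point height. The bin counts and the max-height encoding below are illustrative assumptions rather than the exact parameters of [9]:

```python
import numpy as np

def scan_context(points, n_rings=20, n_sectors=60, max_range=80.0):
    """Encode an (N, 3) cloud into an (n_rings x n_sectors) matrix where
    cell (i, j) stores the maximum z of points in that (ring, sector) bin."""
    sc = np.zeros((n_rings, n_sectors))
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0]) % (2 * np.pi)
    ring = np.minimum((r / max_range * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * n_sectors).astype(int),
                        n_sectors - 1)
    for i, j, z in zip(ring, sector, points[:, 2]):
        sc[i, j] = max(sc[i, j], z)
    return sc

# Two points share one bin (max height wins); a third lands elsewhere.
pts = np.array([[4.0, 0.0, 1.5], [4.0, 0.0, 0.5], [0.0, 40.0, 2.0]])
sc = scan_context(pts)
```

Two such matrices can then be compared column-wise (cosine similarity), which is what makes the descriptor robust to yaw changes.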
FAST-LIO2 achieves consistently higher accuracy at a much lower computation load than other state-of-the-art LiDAR-inertial navigation systems, and ikd-Tree, an incremental k-d tree data structure that enables incremental updates and dynamic rebalancing, achieves superior overall performance while naturally supporting downsampling on the tree. We furthermore use the optimized semantic information to filter dynamic objects, which alleviates the impact of dynamic scenarios on mapping results. In addition to the above experiments, we verify the effectiveness of the semantic-assisted loop closure detection proposed in this paper by comparing it with LIO-SAM-ODOM. LiDAR Inertial Odometry Aided Robust LiDAR Localization System in Changing City Scenes. Wendong Ding, Shenhua Hou, Hang Gao, Guowei Wan, Shiyu Song. Abstract: Environmental fluctuations pose crucial challenges to a localization system in autonomous driving. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. If it converges, the final fine resolution is used after graph optimization to remove the remaining moving points in the current keyframe. The scan number and trajectory length of each sequence are shown in Table 1. When IMU measurements are used to infer the system motion, there will inevitably be a deviation from the ground truth, which makes the correspondence between query scan points and map points ambiguous. Removal-first means that the proposed RF-LIO first removes moving objects without an accurate pose and then performs scan-matching. Qin, C., Ye, H., Pranata, C.E., Han, J., Zhang, S., Liu, M.: LINS: a lidar-inertial state estimator for robust and efficient navigation. Therefore, some extracted local descriptors are not suitable for outdoor loop closure detection. Although removal-after also removes the moving points, the scan-matching has already been completed by that time.
First, these methods ignore the IMU, which often brings useful clues for accurate lidar odometry. Second, most of them are supervised methods [10, 12, 13, 20, 21]. We use a ResNet encoder (see Fig.), where the depth is \(d=\sqrt{{p_x}^2+{p_y}^2+{p_z}^2}\). Fig 6 shows the sensor suite we used to collect the JLU Campus dataset. We use the removal rate of moving objects and the absolute trajectory accuracy to evaluate RF-LIO on both self-collected datasets and the open UrbanLoco datasets [7]. The proposed approach is evaluated on the KITTI odometry benchmark and achieves comparable performance against other state-of-the-art methods. According to the above review and analysis, the current lidar-based SLAM methods suffer from the following limitations. First, the insufficient features extracted by the existing SLAM frameworks lead to poor accuracy of point cloud registration, especially in dynamic scenarios. To get the error of the current keyframe k+1, we use the error-transfer relation of the nonlinear system. We are only interested in the rotation and translation components of X; hence our approach can be written as follows. By predicting the IMU odometry localization error, we can obtain the initial resolution of the range image. To circumvent the disordered nature of point clouds, we project them into the 2D image coordinate system according to the horizontal and vertical angles. Finally, if there is a loop closure pair between the i-th and j-th frames, an observation edge can be added into the factor graph. Their calculation process is generally more complicated and runs at a lower frequency. Because objects with the same semantic label appear in blocks in the point cloud, we can recalculate the labels of points with low-confidence network predictions according to the semantic distribution of the points around them. Odometry information is normally obtained from sensors such as wheel encoders, IMUs (inertial measurement units), and LiDAR.
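The spherical projection described here can be sketched as follows; the FOV and resolution defaults loosely echo the values quoted elsewhere in the text (0.5-degree density, 720-column maps), and the exact index convention is an assumption:

```python
import math

def project_point(p, fov_h=360.0, fov_v=23.0, res=0.5):
    """Map a 3D point to a vertex-map pixel (w, h) via its azimuth and
    elevation angles; the pixel would store the point (or its depth
    d = sqrt(px^2 + py^2 + pz^2))."""
    px, py, pz = p
    d = math.sqrt(px * px + py * py + pz * pz)
    azimuth = math.degrees(math.atan2(py, px))      # in (-180, 180]
    elevation = math.degrees(math.asin(pz / d))     # in [-90, 90]
    w = int((azimuth + fov_h / 2) / res)            # column index
    h = int((fov_v / 2 - elevation) / res)          # row index
    return w, h, d

# A point straight ahead on the sensor's x-axis lands mid-image.
w, h, d = project_point((3.0, 0.0, 0.0))
```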
We use point features to estimate the translation and the features of both points and normals to estimate the orientation. Bescos, B., Fácil, J.M., Civera, J., Neira, J.: DynaSLAM: tracking, mapping, and inpainting in dynamic scenes. The difference between them is the strategy of loop closure detection. The feature extraction process in LOAM or LIO-SAM relies on geometric information to assess the environment and uses these geometric features to establish associations between scans. A LiDAR-inertial odometry with principled uncertainty modeling. To prevent the overall error from being too small and affecting the optimization quality, a threshold is set. The combination of semantic information and a SLAM framework can significantly improve the accuracy of the position and rotation estimates. However, such scenarios are not common in outdoor environments [4, 5]. To solve this problem, Palazzolo and Stachniss [18] proposed a window-based method. Both the ensemble of shape functions (ESF) [6] and the viewpoint feature histogram (VFH) [7] improve the matching ability and robustness of the algorithms by extracting a global descriptor represented in the form of histograms. In addition, we note that the runtime of RF-LIO (First) is significantly less than that of RF-LIO (After), and even the runtime of RF-LIO (FA) is less than that of RF-LIO (After). Even though these methods have excellent mapping accuracy, they are not applicable under some working conditions, especially in scenarios with many dynamic objects. Here Pfiltered = {(x,y,z) | Cx,y,z in D}, where Cx,y,z is the class of the point. Semantic-assisted scan-context image encoding process. The details and final point cloud map from RF-LIO are shown in Fig. Third, the existing loop closure detection cannot accurately recognize similar scenes.
state-of-the-art SLAM systems in high dynamic environments. We adopt a step scheduler with a step size of 20 and \(\gamma = 0.5\) to control the training procedure; the initial learning rate is \(10^{-4}\) and the batch size is 20. The main contributions of our work can be summarized as follows. The paper is organized as follows: Section 2 briefly introduces related work. The proposed adaptive range image moving points removal algorithm does not rely on any prior training data, nor is it limited by the category or number of moving objects. Convergence Score. Combining multiple LiDARs enables a robot to maximize its perceptual awareness of the environment. The performance of LION is benchmarked in perceptually degraded subterranean environments, demonstrating its high technology readiness level for deployment in the field. It is inevitable that some dynamic objects, such as pedestrians and vehicles, are recorded during the data collection process. These three limitations motivate us to propose a semantic-assisted SLAM framework (i.e., LIO-CSI) to overcome them. To reduce these errors, a clustering strategy based on Euclidean distance is used to correct the label information. Then the edge and plane features are extracted for each new lidar scan by evaluating the roughness of points over a local area. Moreover, a registration based on geometric features is vulnerable to the interference of dynamic objects, resulting in a decline of accuracy. Each 3D point \(\boldsymbol{p}=(p_x,p_y,p_z)\) in a point cloud \(P_k\) is mapped into the 2D image plane (w,h) as represented below. Section 3 presents the proposed Lidar Inertial Odometry with Loop Closure Combined with Semantic Information (LIO-CSI). In: Robotics: Science and Systems. This paper proposes a new removal-first tightly-coupled lidar inertial odometry framework. 2022 Springer Nature Switzerland AG.
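The quoted schedule (initial rate 1e-4, halved every 20 epochs) is the standard step decay rule, i.e. what torch.optim.lr_scheduler.StepLR implements; a dependency-free sketch:

```python
def step_lr(initial_lr, epoch, step_size=20, gamma=0.5):
    """Learning rate after `epoch` epochs: multiplied by `gamma`
    once every `step_size` epochs (standard step decay)."""
    return initial_lr * gamma ** (epoch // step_size)

# With the values quoted in the text: constant until epoch 20,
# then halved every 20 epochs.
lrs = [step_lr(1e-4, e) for e in (0, 19, 20, 40)]
```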
as discussed in Sec. III-D, but it still cannot guarantee the reliability of scan-matching. Similar to [15], we search in the entire point cloud space, and we keep the number of points in the search space from growing too large by removing most of the ground points and applying voxel-grid downsampling to the point clouds. Fig. 5(a) shows a snapshot of the suburban dataset with many moving vehicles. The campus dataset is collected from the XJTU campus with multiple pedestrians. The semantic-assisted LiDAR odometry and loop closure modules use the points processed by the previous module to calculate the relative pose and loop closure factors. We also take advantage of our Volkswagen Tiguan autonomous driving platform to develop a more challenging dataset with dynamic scenarios at Jilin University. Pfiltered represents the point cloud labeled as dynamic objects, which needs to be filtered. LO-Net: deep real-time lidar odometry.
Wang et al. The essence of loop closure detection is place recognition. In: Robotics: Science and Systems. We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map building. On these two datasets, RF-LIO (First) outperforms RF-LIO (After) by 34.0% and 3.6%, respectively. Normally, the closer the labels of the surrounding points are to point Pi, the larger the weight is. Our system consists of four modules: LiDAR inertial odometry (LIO), LiDAR global matching (LGM), pose-graph-based fusion (PGF), and environmental change detection (ECD). This method estimates the pose via the distribution of the point cloud approximated by voxels. Our method without IMU assistance (Fig. 4) is also competitive compared to Cho et al. For example, SuMa++ [6]. We conducted experiments on the KITTI dataset and the JLU Campus dataset to qualitatively and quantitatively compare the performance of our proposed method with LeGO-LOAM and LIO-SAM. The fan-shaped area is also formed in this way, as shown in Fig 5. When a new scan arrives, RF-LIO does not immediately perform scan-matching to obtain an accurate pose, as it is easily affected by the dynamic environment. Xiao, W., Vallet, B., Brédif, M., Paparoditis, N.: Street environment change detection from mobile laser scanning point clouds. Moving objects such as pedestrians and vehicles widely exist in real-world scenes. Relative pose error of LIO-SAM and LIO-SAM-ODOM. Provide the point ring number. The iterative closest point (ICP) method [20] was proposed by Besl et al. The points whose ci is smaller than a threshold represent the semantic planar features; the points whose ci is larger than the threshold are semantic edge features. The semantic information was then integrated into the front-end odometry and loop closure detection steps based on the LIO-SAM framework.
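The edge/planar split by the curvature value ci can be illustrated with a LOAM-style smoothness score; the window size, the synthetic wall-with-corner scan line, and the comparison thresholds are illustrative assumptions, not the paper's constants:

```python
import numpy as np

def roughness(scan_line, k=5):
    """LOAM-style smoothness: compare each point with its k neighbors on
    either side along the scan line. Small values flag planar candidates,
    large values flag edge candidates; NaN marks the unscored border."""
    n = len(scan_line)
    c = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = np.sum(scan_line[i - k:i + k + 1] - scan_line[i], axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan_line[i]))
    return c

# A straight wall segment with one sharp corner at index 10.
line = np.array([[x, 5.0, 0.0] for x in np.linspace(-5, 0, 11)]
                + [[0.0, y, 0.0] for y in np.linspace(4.5, 0.5, 9)])
c = roughness(line)
```

On this toy scan line the mid-wall point scores near zero (planar) while the corner scores high (edge), which is the split the ci threshold formalizes.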
Using such feature points to find the correspondence not only improves the accuracy, but also narrows down the potential candidates and improves the efficiency of matching. With robustness as our goal, we have developed a vehicle-mounted LiDAR-inertial odometry system suitable for outdoor use. Table 3 shows that our proposed LIO-CSI method achieves a great improvement on the KITTI dataset compared to LeGO-LOAM and LIO-SAM. 2) The JLU Campus dataset is available at https://doi.org/10.34740/kaggle/ds/1657610.
$$\begin{aligned} DP_{k}&=\Downarrow (\mathsf {RANSAC}(P_{k})), \\ NP_{k}&=\Downarrow (\mathsf {RANSAC}(\Phi (P_{k}))), \end{aligned}$$
$$\begin{aligned} \overline{NP}_{k+1}&=R_{k,k+1}NP_{k+1}, \\ \overline{DP}_{k+1}&=R_{k,k+1}DP_{k+1}+t_{k,k+1}. \end{aligned}$$
Estimating Initial Relative Pose from IMU. We present a self-supervised learning-based approach for robot pose estimation. The details of the JLU Campus dataset are shown in Table 2. Thanks to the development of deep semantic segmentation networks, we can conveniently obtain semantic information from point clouds. In the future, we will study how to incorporate mapping steps into our network framework and conduct online tests. Then, the similarity scores of these semantic-assisted scan-context images can be calculated, and the frame corresponding to the image with the highest score is detected as the loop closure frame. The trajectories estimated by our method are closest to the ground truth compared with other methods. At every timestamp \(k\in \mathbb {R}^+\), we obtain one point cloud \(P_k\) of \(N\times 3\) dimensions, and between every two timestamps we get an IMU sequence \(I_{k,k+1}\) of \(S\times 6\) dimensions, including the 3D angular velocity and 3D linear acceleration. UnDeepLIO: Unsupervised Deep Lidar-Inertial Odometry.
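The two equations above apply the full rigid transform to the downsampled points DP but only the rotation to the normals NP (directions are unaffected by translation); a small numeric check, with made-up rotation and translation values:

```python
import numpy as np

# Illustrative 90-degree yaw and a 1 m shift along x (made-up values).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

DP = np.array([[1.0, 0.0, 0.0]])   # a point: rotated AND translated
NP = np.array([[1.0, 0.0, 0.0]])   # a normal: rotated only

DP_bar = DP @ R.T + t              # DP_bar = R * DP + t
NP_bar = NP @ R.T                  # NP_bar = R * NP
```

This asymmetry is also why, as argued earlier in the text, normal features are informative for rotation but not for translation.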
If max(Li) is less than our set threshold, we recalculate the probability vector and reassign the label for the i-th point using Eqs (2) and (3), according to the distribution of the surrounding points. We denote the extracted features combining semantic and geometric information at time k as the set of all semantic planar features together with the set of all semantic edge features. Tightly-coupled lidar inertial odometry via smoothing and mapping. Segal, A., Haehnel, D., Thrun, S.: Generalized-ICP. Pagad, S., Agarwal, D., Narayanan, S., Rangan, K., Kim, H., Yalla, G.: Robust. The semantic losses, le and lp, between keyframes can be calculated as shown below. The proposed network and our unsupervised training scheme. The shapes of the input maps are \(H = 52\) and \(W = 720\). Note that sequences 00 and 03 in Table 3 have no results. Then, we use sequences 00-06 for training and 07-10 for testing, to compare with SelfVoxeLO, which uses sequences 00-06 as the training set. Moreover, a global descriptor combined with semantic information is proposed to improve the performance of loop closure detection. As mentioned earlier, the IMU can greatly improve the accuracy of odometry, but the role played by different IMU utilization methods also differs. Commun. ACM 18(9), 509-517 (1975). Cho, Y., Kim, G., Kim, A.: DeepLO: geometry-aware deep lidar odometry. For an intuitive display, the map of RF-LIO is overlaid on a satellite image. The proposed network is implemented in PyTorch [16] and trained with a single NVIDIA Titan RTX GPU. We select the open UrbanLoco datasets as the test dataset, which is collected in highly urbanized scenes with numerous moving objects.
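The low-confidence relabeling step at the start of this passage can be sketched as follows; the threshold value, neighbor lists, and function name are hypothetical, and averaging neighbor probability vectors stands in for whatever recalculation Eqs (2) and (3) actually specify:

```python
import numpy as np

def smooth_labels(probs, neighbor_idx, tau=0.8):
    """When a point's top class probability falls below tau, replace its
    probability vector by the mean over its neighbors before argmax."""
    labels = probs.argmax(axis=1)
    for i in range(len(probs)):
        if probs[i].max() < tau:
            labels[i] = probs[neighbor_idx[i]].mean(axis=0).argmax()
    return labels

# Three confident class-1 points surround one uncertain point.
probs = np.array([[0.1, 0.9], [0.05, 0.95], [0.1, 0.9], [0.55, 0.45]])
neighbors = [[1, 2], [0, 2], [0, 1], [0, 1, 2]]
labels = smooth_labels(probs, neighbors)
```

The uncertain fourth point, which argmax alone would label class 0, is corrected to class 1 by its confident neighborhood.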
The geometric features extracted in such dynamic scenarios carry increased uncertainty because their source cannot be confirmed, which increases the inaccuracy of the odometry pose estimation. As is shown, LIO-SAM renders a lot of ghost tracks in the point cloud map. Planetary Stereo, Solid-State LiDAR, Inertial Dataset. Next, a two-branch attention module is proposed to estimate the residual rotation and translation from the extracted vertex and normal features, respectively. Hence, these methods fall into a chicken-and-egg problem: moving object point removal relies on an accurate pose, while an accurate pose cannot be obtained when the point cloud map contains moving objects. In a very open environment, if there are no corresponding far points in the surroundings, the visibility-based range image method cannot remove moving points. The JLU Campus Dataset contains five sequences of data, providing point clouds, IMU data for mapping, and GNSS data for accuracy evaluation. The first is a fast keyframe-based LiDAR scan-matcher. Here \(\alpha \) and \(\lambda \) are balancing factors. However, humans always use semantic information to accomplish scene recognition in daily life. Let P = {P1, P2, ..., Pn} be the point cloud acquired by LiDAR, where Pi is a point in P. After the point cloud P is processed by SPVNAS, the label probability vector can be expressed as follows. Supervised methods appeared relatively early; LO-Net [12] maps the point clouds to a 2D image by spherical projection. These dynamic elements, which should not appear in the map, will not only cause the current observation to be incorrectly associated with the locally generated map, but also affect the accuracy of the map-based localization algorithm. The majority of data collection processes are exposed to dynamic scenarios where moving objects are unavoidable. Different from the local descriptors, the global descriptors do not need to detect key points.
Then the features are forwarded into the FC layer to estimate the initial relative translation or orientation. The results are shown in Table VI. In the ablation study, we validated the effectiveness of each component of our model. Only sequences 00-10 have an official public ground truth. We adopt similar operations to Cho et al. [15]. X. Chen, B. Zhou, J. Lin, F. Zhang and S. Shen. Here m = argmax(Li, Lu, Lv) and n = argmax(Li, Lu, Lv, Lw). The relationship between the two maps is shown in formula (4). The experimental results in high dynamic environments demonstrate that, compared with the alternatives, our method not only finds the consistency between the start and end points in the horizontal direction, but also finds global consistency in the vertical direction. Lidar can provide high-precision 3D measurements and also has no requirement for external light. The LIO subsystem (FAST-LIO) takes advantage of the measurements from LiDAR and inertial sensors and builds the geometric structure of the map. Koide et al. The overall framework is divided into seven modules. The comparison is shown in Table 4. Our Direct LiDAR-Inertial Odometry (DLIO) algorithm utilizes a hybrid architecture that combines the benefits of loosely-coupled and tightly-coupled IMU integration to enhance reliability and real-time performance while improving accuracy. Semantic visual SLAM towards dynamic environments. M.A. Fischler and R.C.
Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. In recent years, researchers have focused on pose estimation through geometric feature matching. They first obtain the accurate poses of multiple measurements by scan-matching and then iteratively update the state. We evaluate our RF-LIO via a series of experiments (Sec. III-E) and compare it with LOAM [1] and LIO-SAM [2]. More specifically, it consists of a Velodyne HDL-32E surround LiDAR sensor, a 3DM-GX5 inertial measurement unit (IMU), and a single-board Trimble BD982 GNSS receiver module. The average mean relative translational error of our proposed method is 2.6975%. Compared with LOAM and LIO-SAM, the absolute trajectory accuracy of the proposed RF-LIO is higher. Korea University, Seoul, Korea (Republic of). Tu, Y., Xie, J. Abstract: This paper presents a computationally efficient and robust LiDAR-inertial odometry framework. They encode the geometric relationship between points into a histogram or matrix to calculate the similarity between frames [6, 7, 9]. Unlike [5], who use pixel locations as the correspondence between two point clouds, we search for correspondences in the whole point clouds (see Sec. III-B for details). The loss function \(\mathcal {L}_{po2pl}\) is represented as shown below. Therefore, we only use the point features to estimate the translation. However, this premise is often not tenable in a high dynamic environment, because a large number of moving objects will make the existing scan-matching methods ineffective. Linear least-squares optimization for point-to-plane ICP surface registration.
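A hedged sketch of the kind of point-to-plane residual that \(\mathcal{L}_{po2pl}\) denotes: each transformed source point is penalized by its distance to the tangent plane of its matched target point. The point matching itself is assumed given here, and the data are toy values:

```python
import numpy as np

def point_to_plane_loss(src, tgt, tgt_normals):
    """Mean point-to-plane residual |(p_src - p_tgt) . n_tgt| over
    pre-matched point pairs."""
    residuals = np.einsum('ij,ij->i', src - tgt, tgt_normals)
    return np.mean(np.abs(residuals))

# Source points hover 0.2 m above a ground plane whose normals are +z;
# their in-plane offsets from the matched points do not contribute.
src = np.array([[0.0, 0.0, 0.2], [1.0, 0.0, 0.2]])
tgt = np.array([[0.3, 0.1, 0.0], [1.2, -0.1, 0.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
loss = point_to_plane_loss(src, tgt, normals)
```

Projecting the error onto the normal is what lets planar regions constrain the pose without penalizing sliding along the plane.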
We select a LiDAR frame as the keyframe when the number of feature points and the change of the pose exceed defined thresholds. We compare against various baselines using point clouds from the KITTI Vision Benchmark Suite [7], which are collected using a \(360^\circ \) Velodyne laser scanner. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). The experimental results demonstrate that our moving object removal method can effectively remove dynamic feature points and improve SLAM performance. ISPRS 1, 47-54 (2021). Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In this paper, we design a novel framework for unsupervised lidar odometry with the IMU, which has not been used in other deep methods. In Table 4, we find that LIO-SAM-ODOM has a significant improvement in average mRPE results compared to LIO-SAM. Medium dynamic datasets lie between low dynamic and high dynamic datasets. Unlike Cho et al. On the basis of LOAM and LIO-SAM, a naive solution for dynamic SLAM is to build a map containing only static objects. IMU Preintegration Error and Initial Resolution; Range Image Construction and Moving Points Removal; Results on Low and Medium Dynamic Datasets. J. Zhang and S. Singh, LOAM: lidar odometry and mapping in real-time. T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, LIO-SAM. Method for removing dynamic objects from point clouds: X. Chen, A. Milioto, E. Palazzolo, P. Giguere, J. Behley, and C. Stachniss. We use the IMU to assist odometry. http://www.cvlibs.net/datasets/kitti/raw_data.php, https://doi.org/10.34740/kaggle/ds/1657610. Corrections, Expressions of Concern, and Retractions. Localization in urban scenes: C. Yu, Z. Liu, X.-J.
Copyright: 2021 Wang et al. Here, \(p_{i,j}\) represents the probability of each class calculated after normalization, and \(P_{i,j}\) is the coordinate information corresponding to block (i,j). Loop closure detection calculates the similarity between the current frame and historical frames to judge whether the same place is being revisited. Moreover, most of these methods need multiple iterations with a large amount of computation, which makes it difficult to meet the real-time requirements of the system. The figure shows the network structure that learns translation and rotation features from concatenated vertex and normal features simultaneously (left) and the network structure without the normal feature (right); we name the latter Ours-easy. The problem of semantic label errors needs to be dealt with because it affects the accuracy of the subsequent point cloud registration process. We compare our method with the following methods, which can be divided into two types. The inspiration for our loss function comes from point-to-plane ICP [14].
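The semantic-weighted place-recognition comparison can be sketched as a column-wise cosine similarity between two \(N_r \times N_c\) semantic-assisted scan-context images, with each cell scaled by its semantic weight \(w(s_{i,j})\). The weighting scheme below is an illustrative assumption, not the paper's exact formula:

```python
import numpy as np

def scan_context_similarity(sc_query, sc_cand, weights):
    """Hypothetical semantic-weighted scan-context comparison: mean of
    column-wise cosine similarities between two Nr x Nc images, after
    scaling each cell by a semantic weight w(s_ij)."""
    a = sc_query * weights
    b = sc_cand * weights
    sims = []
    for c in range(a.shape[1]):                 # compare matching columns
        na, nb = np.linalg.norm(a[:, c]), np.linalg.norm(b[:, c])
        if na > 0 and nb > 0:
            sims.append(a[:, c] @ b[:, c] / (na * nb))
    return float(np.mean(sims)) if sims else 0.0
```

In practice one would also shift the candidate columns to find the best rotational alignment before scoring; that step is omitted here for brevity.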
Visual odometry is used in a variety of applications, such as mobile robots, self-driving cars, and unmanned aerial vehicles. Hence, RF-LIO can be applied robustly to various scenes. [15] report methods with similar models and loss functions, but they use a different way to calculate the normal of each point in the point cloud and to find matching points between two consecutive point clouds. Although LeGO-LOAM [3] does not adopt a deep learning strategy, it introduces label information into the odometry through a clustering method. D represents the set of dynamic object classes that need to be filtered. In works using graph SLAM [11], LeGO-LOAM [3], and LIO-SAM [12], or filter SLAM [13], the loop closure part still relies on the traditional Euclidean distance. We propose a novel simultaneous localization and mapping algorithm, R-LIO, which couples a rotating multi-line lidar with an inertial measurement unit. Here, w(si,j) is the semantic information weight corresponding to block (i,j); the comparison is not pixel-to-pixel but pixel-to-window. For this purpose, we collected three datasets with multiple moving objects in different environments.
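Filtering by the dynamic class set D can be sketched as a label mask over the point cloud. The class ids used in the test are illustrative assumptions (e.g. "car" or "pedestrian" ids depend on the segmentation network's label map):

```python
import numpy as np

def filter_dynamic_points(points, labels, dynamic_classes):
    """Hypothetical semantic dynamic-object filter: keep only points whose
    predicted semantic label is NOT in the set D of dynamic classes."""
    keep = ~np.isin(labels, list(dynamic_classes))
    return points[keep], labels[keep]
```

The remaining static points are then passed on to feature extraction and scan-matching.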
The mean relative rotational and translational errors of sequences 042800, 050500, and 050501, as well as the average results, are lower than those of LeGO-LOAM and LIO-SAM. This allows our method to effectively reduce mistakes in identifying moving points. We note that RF-LIO does not completely remove all moving points. Moreover, segmentation-based approaches rely heavily on supervised labels and training data, and are limited by the accuracy and categories of the segmentation. We test the runtime of RF-LIO (After), RF-LIO (First), and RF-LIO (FA) on all five datasets. Data Availability: The KITTI dataset and the JLU campus dataset are used in our manuscript. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The Direct LiDAR-Inertial Odometry (DLIO) algorithm utilizes a hybrid architecture that combines the benefits of loosely-coupled and tightly-coupled IMU integration to enhance reliability and real-time performance while improving accuracy. Gauss-Newton iteration methods have a long-standing history in the odometry task. Figure 2 shows the overall framework of RF-LIO, which consists of three major modules: IMU preintegration, feature extraction, and mapping. Therefore, this structure can extract features in our task as well. Building on a highly efficient tightly-coupled iterated Kalman filter, FAST-LIO2 has two key novelties that allow fast, robust, and accurate LiDAR navigation and mapping. The above problem prevents these SLAM methods from achieving good results in highly dynamic environments.
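The mean relative pose error (mRPE) used in the comparisons above can be sketched as follows. This is a generic consecutive-frame RPE under the standard definition, assuming 4x4 homogeneous pose matrices; it is not necessarily the exact segment-length protocol used in the paper's tables:

```python
import numpy as np

def relative_pose_error(T_gt, T_est):
    """Sketch of a relative pose error metric: for each pair of consecutive
    poses, compare the ground-truth relative motion with the estimated one
    and report (mean translational error in meters,
                mean rotational error in degrees)."""
    t_errs, r_errs = [], []
    for i in range(len(T_gt) - 1):
        d_gt = np.linalg.inv(T_gt[i]) @ T_gt[i + 1]
        d_est = np.linalg.inv(T_est[i]) @ T_est[i + 1]
        e = np.linalg.inv(d_gt) @ d_est          # residual transform
        t_errs.append(np.linalg.norm(e[:3, 3]))
        cos = np.clip((np.trace(e[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        r_errs.append(np.degrees(np.arccos(cos)))
    return float(np.mean(t_errs)), float(np.mean(r_errs))
```

For identical trajectories both errors are zero; any drift between consecutive frames accumulates into the means.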
The number of points is only about 1/3 of the points sampled by the 2D projection used in [15]. To address the above issues, we propose a method called lidar inertial odometry with loop closure combined with semantic information (LIO-CSI). Table 1 contains the details of the results: \(t_{rel}\) denotes the average translational RMSE (%) over lengths of 100 m to 800 m, and \(r_{rel}\) denotes the average rotational RMSE (\(^\circ \)/100 m) over the same lengths. LeGO-LOAM is not always more precise when the IMU is added; the traditional method is more sensitive to IMU accuracy (in sequence 00, some IMU data is missing), which is most likely the reason for its accuracy drop. Such approaches generally assume that the scene is static without great change and that most extracted features of a point cloud are fixed in space. In addition to extracting geometric features, we are also able to obtain the semantic information of each point after deep semantic network processing. In the last few years, the development of deep learning has greatly influenced state-of-the-art odometry estimation.
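Downsampling a scan to about K points, as mentioned above, can be sketched by uniform random selection. The fixed seed and the specific strategy (uniform rather than voxel-grid or farthest-point sampling) are illustrative assumptions:

```python
import numpy as np

def random_downsample(points, k, seed=0):
    """Hypothetical sketch: reduce a point cloud to about K points by
    uniform random selection without replacement; clouds already smaller
    than K are returned unchanged."""
    rng = np.random.default_rng(seed)
    if len(points) <= k:
        return points
    idx = rng.choice(len(points), size=k, replace=False)
    return points[idx]
```

Uniform sampling is the cheapest option; voxel-grid or farthest-point sampling would preserve spatial coverage better at higher cost.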