Evaluating 3D local descriptors and recursive filtering schemes for LIDAR‐based uncooperative relative space navigation

We propose a light detection and ranging (LIDAR)‐based relative navigation scheme that is appropriate for uncooperative relative space navigation applications. Our technique combines the encoding power of three‐dimensional (3D) local descriptors, matched through a correspondence grouping scheme, with the robust rigid transformation estimation capability of the proposed adaptive recursive filtering techniques. Trials evaluate several current state‐of‐the‐art 3D local descriptors and recursive filtering techniques on a number of real and simulated scenarios that involve various space objects, including satellites and asteroids. Results demonstrate that the proposed architecture affords a 50% odometry accuracy improvement over current solutions, while imposing a low computational burden. From our trials we conclude that the 3D descriptor histogram of distances short (HoD‐S), combined with adaptive αβ filtering, is the most appealing combination for the majority of the scenarios evaluated, as it combines high‐quality odometry with a low processing burden.

Utilizing sensors that exploit passive data, that is, visual or infrared (IR), can be an effective solution characterized by low hardware complexity, cost, and power consumption, since no transmitting device is required. Between the two, IR has several advantages over the visual domain, such as operating during day and night under harsh illumination conditions like eclipse and solar glare (Yılmaz et al., 2017). Despite these advantages, the performance of IR odometry depends on the temperature of the Target platform, which is affected by both internal parameters, for example, heat dissipation of the platform's components, and external parameters, for example, reflection of the sun's radiation. This temperature fluctuation can affect the robustness of the IR-based local feature detection and matching techniques presented in the current literature (Yılmaz et al., 2017). On the contrary, 3D LIDAR-based odometry outmatches its 2D counterparts (visual and IR) as it operates during day and night, is independent of the Target's thermal properties, is capable of revealing the underlying structure of an object (Guo, Sohel, Bennamoun, Lu, & Wan, 2013a), can provide full 6-DoF Target pose estimation, can discriminate the Target from the background, is robust in poor visibility conditions, and has a greater operating distance than the maximum range at which stereo odometry attains acceptable accuracy. Despite these advantages, LIDAR sensors, and the associated software that exploits the acquired 3D Target information, impose higher hardware requirements than visual and IR sensors, which in principle produce 2D data. However, given the advantages of LIDAR sensors over visual/IR sensors, and given that the former have already been flown on space platforms (Kornfeld et al., 2003), LIDAR-based navigation is an overall appealing and affordable option.
An open issue is the processing resources required for 3D data manipulation. Space platforms use space-graded field programmable gate arrays (FPGAs), and recent work (Estébanez Camarena, Feetham, Scannapieco, & Aouf, 2018) demonstrated that it is feasible for FPGA boards to perform 2D computer-vision-based navigation. Hence, there is the potential for FPGA boards to perform complex 3D navigation utilizing LIDAR data.
However, the aim of this study is to evaluate the conceptual validity and performance of several 3D local descriptors and recursive filtering schemes rather than suggesting a readily available navigation system.
On that basis, and driven by the advantages of LIDAR, the current literature suggests quite a few LIDAR-based relative navigation solutions (Galante et al., 2012; Gómez Martínez et al., 2017; Naasz & Moreau, 2012; Opromolla et al., 2014, 2015a; Opromolla, Di Fraia, et al., 2017; Sell et al., 2014; Song, 2017; Volpe et al., 2017; Woods & Christian, 2016). However, these present a number of deficiencies:
a. Registration of the two sequential point clouds mostly relies on the iterative closest point (ICP) method (Besl & McKay, 1992), which may settle in a local rather than a global optimum solution.
b. Current algorithms involve an off-line training process that uses a 3D model of the expected Target platform (A. B. Dietrich & McMahon, 2018;A. P. Rhodes, Christian, & Evans, 2017).
However, these models are not available for unknown Targets such as unexplored comets, asteroids, or space debris, or for known Target space platforms that have been damaged to an unknown extent.
c. Current space navigation solutions that rely on computer vision concepts involve regional 3D feature description rather than local description (A. Rhodes, Kim, Christian, & Evans, 2016; A. P. Rhodes et al., 2017), neglecting the state-of-the-art performance afforded by the latter type. It should be noted that A. Rhodes et al. (2016) made an attempt using the local feature descriptor spin images (Johnson & Hebert, 1998); however, that work concluded that this descriptor is not optimal for space navigation.
To the best of our knowledge, no other local descriptor has been used to date.
d. The majority of current proposals are evaluated on fully simulated scenarios (Gómez Martínez et al., 2017; Opromolla et al., 2014, 2015a; A. P. Rhodes et al., 2017; Song, 2017; Volpe et al., 2017; Woods & Christian, 2016), while only a few algorithms are tested on real but rather simplistic scenarios (Galante et al., 2012; Sell et al., 2014). Recently, orbit determination around small bodies using LIDAR has been proposed (A. B. Dietrich & McMahon, 2018); however, this architecture uses a shape model of the Target, restricting the technique to known Targets (deficiency (b)).
e. The space-related odometry literature does not involve any type of feature matching refinement scheme, such as a correspondence grouping technique (Yang, Xian, Xiao, & Cao, 2018).
Simply extending current algorithms designed for terrestrial LIDAR-based robotics odometry to the field of space robotics odometry is not an optimum solution for the following reasons:
a. The space environment lacks surrounding objects, forcing space odometry to rely on a limited number of vertices that belong to a single object. Typical terrestrial point cloud scenes comprise a few hundred thousand vertices and involve many objects, while a space odometry scene has at least one order of magnitude fewer vertices and encloses a single object.
b. Space objects are typically less complex than terrestrial scenes, further complicating the already challenging space odometry estimation. The advantage of complex scenes is that they afford sufficiently unique and non-degenerate geometries, helping odometry converge to acceptable solutions.
c. Methods that solely use LIDAR data may also not be appealing, as these exploit the hardware properties of terrestrial LIDAR sensors, which differ from spaceborne ones. For example, the method in Deschaud (2018) uses the continuous spinning effect of the LIDAR sensor, which is not applicable to spaceborne LIDAR sensors, as these are either scanning or flash devices.
d. Terrestrial odometry that exploits LIDAR data may involve fusing information from additional sensors such as visual or inertial data (Graeter et al., 2018; Neuhaus et al., 2019; Zhang & Singh, 2015b). However, a spaceborne sensor suite selection is constrained by the size, weight, and power consumption of each sensor, and thus involving two different sensor types, for example, LIDAR and a visual camera, may not be payload efficient.
Additional challenges preventing the direct use of algorithms for terrestrial applications in space navigation scenarios are the differences between earth-based and space-based point cloud data, which are linked to the device that generated them. Considering point clouds originating from LIDAR, as is the case in this study, an earth-based LIDAR point cloud is in general denser, as its mapped area is more focused and smaller than that of a space-based LIDAR point cloud. The latter, from a farther range, would map a large area on orbit to detect the satellite and the satellite subpart of interest. In terms of noise, space-based data would be less affected than their earth counterparts, since a space LIDAR has fewer noisy returns. This is because the background around the target satellite is empty, while in earth applications, for example, autonomous cars, the background contains several objects. In terms of range accuracy, space-based point cloud data would be more accurate than earth-based point cloud data.
Spurred by the advantages of 3D LIDAR odometry and the deficiencies of the current solutions for space navigation, we suggest a LIDAR-based relative navigation architecture appropriate for space applications. The contributions and innovations of this study can be summarized as follows:
a. We suggest an odometry architecture that combines the concepts of 3D local feature description, feature matching refinement, and recursive filtering for the rigid body transformation estimation.
b. The recursive filtering process is adaptive and linked to the quality of the matched features.
c. As opposed to current space-oriented odometry solutions, this architecture does not require any prior knowledge of the Target platform, significantly extending the usability of the proposed solution.
d. We evaluate the suggested technique on seven scenarios that involve one real and four simulated space objects of various complexity.
e. On each scenario, we evaluate six current 3D local feature descriptors and four recursive filtering schemes that are tested in all possible combinations.
f. We also evaluate the performance of the proposed architecture against five variations of ICP, which is widely used for space robotics odometry, and against two techniques used for registration applications in the computer vision domain. To the best of our knowledge, such an extensive evaluation is unique in the space-related odometry literature.
The proposed method is expected to achieve lower odometry errors than current ICP-based methods, because the feature matching and geometric correspondence grouping schemes provide the recursive filter with only well-established correspondences.
These correspondences, combined with the adaptive nature infused in the recursive filter, afford further improvement and optimization of the odometry performance.
The rest of the paper is organized as follows: Section 2 presents the suggested odometry architecture, while Section 3 compares the accuracy of the proposed pipeline, involving several 3D descriptor and recursive filtering combinations, against current registration methods on a variety of scenarios. Section 4 analyses the contribution of each module within our architecture, compares our method against a mainstream computer-vision-based registration method, and presents the interaction between the number of iterations and the odometry performance for each competitor ICP variant. Finally, Section 5 concludes this paper.

| LIDAR odometry
The suggested LIDAR relative navigation architecture considers a two-platform setup, that is, a Source platform that incorporates a 3D LIDAR sensor and an uncooperative Target platform. The aim of the proposed technique is to estimate the relative position of the Source platform to the Target platform.
Then, at instance u, the position of the moving Source platform relative to the uncooperative Target platform is obtained from the estimated rigid transformation (the rotation matrix in Equation (2)). Recent learning-based 3D descriptors include the point pair feature network (Deng, Birdal, & Ilic, 2018) and the compact geometric features descriptor (Khoury, Zhou, & Koltun, 2017). A downside of both methods is the requirement for offline training, prohibiting their use for odometry that involves an uncooperative and unknown Target platform.
In this study, we use within our odometry architecture several commonly used 3D feature descriptors, specifically SHOT, TriSI, HoD-S, HoD, FPFH, and RoPS. A common principle of all these 3D descriptors is encoding a spherical volume V of radius r that is centered on a keypoint p(x, y, z). For completeness, a short description of the 3D descriptors evaluated is presented.
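As an illustration of this common principle, the spherical support volume around a keypoint can be extracted as follows. This is a minimal sketch using SciPy, not the implementation used in this study; the toy cloud and radius are placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

def support_volume(cloud, keypoint, radius):
    """Return the vertices of `cloud` lying inside the spherical
    support volume V of radius `radius` centred on `keypoint`."""
    tree = cKDTree(cloud)
    idx = tree.query_ball_point(keypoint, r=radius)
    return cloud[idx]

# Toy cloud: keypoint at the origin, one near neighbour, one far point.
cloud = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.0, 0.0],
                  [5.0, 0.0, 0.0]])
V = support_volume(cloud, np.array([0.0, 0.0, 0.0]), radius=1.0)
print(len(V))  # 2: the keypoint itself and its near neighbour
```

Each descriptor below then encodes the vertices of V (normals, projections, or distances) into a histogram.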

SHOT
SHOT divides the support volume V into a predefined number of subvolumes along the azimuth, the elevation and the radius. For each subvolume, a 1D histogram is calculated based on the normal variation between the keypoint p(x, y, z) (including its surrounding vertices) and the vertices that lie in each subvolume. SHOT is robust to Gaussian noise and Shot noise, and sensitive to occlusion and clutter (Guo et al., 2016).

FPFH
FPFH establishes on V a Darboux LRF. Then for each point belonging to V, FPFH encodes the angular relationship between the keypoint p(x, y, z) and its neighbors as provided by the LRF. Finally, this angular relationship is transformed into a histogram. FPFH is robust to resolution variation and sensitive to Gaussian noise, Shot noise, occlusion and clutter (Guo et al., 2016).

RoPS
RoPS establishes on V a LRF. Then V is rotated around each axis of the LRF and is projected on each of the coordinate planes. Finally, each projection undergoes a statistical analysis based on low order moments and entropy, and all the results are concatenated into a histogram. RoPS is robust to Gaussian noise and sensitive to Shot noise, occlusion and clutter (Guo et al., 2016).

TriSI
TriSI is an extension of the 3D descriptor spin images (SI). For the latter, given a support volume V centered at point p(x, y, z), a local reference axis (LRA) is aligned with the normal vector of the plane fitted to the vertices within V. Then a 2D array accumulator of user-defined dimensions is placed on the LRA, and the SI descriptor is generated by accumulating the neighboring points into each bin of the 2D array as the array rotates around the LRA. TriSI uses the same technique as SI but substitutes the LRA with an LRF and calculates an SI descriptor for each axis of the reference frame. Finally, the three SI descriptors are concatenated to form a single descriptor.
TriSI is robust to Gaussian noise, shot noise, and resolution variation, while it is sensitive to occlusion and clutter (Guo et al., 2016).
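To make the SI accumulation concrete, the following is a minimal sketch of a single spin image using the standard radial/axial (alpha/beta) parameterization around the LRA. The bin count, image size, and toy data are our own illustrative choices, not the study's settings; TriSI would compute three such images, one per LRF axis, and concatenate them:

```python
import numpy as np

def spin_image(points, p, normal, bins=8, size=1.0):
    """Accumulate the neighbours of keypoint `p` into a 2D spin-image
    histogram. `normal` plays the role of the LRA.
    beta:  signed distance along the LRA;
    alpha: radial distance from the LRA."""
    d = points - p
    beta = d @ normal
    alpha = np.sqrt(np.maximum(np.sum(d**2, axis=1) - beta**2, 0.0))
    hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0, size], [-size, size]])
    return hist

rng = np.random.default_rng(0)
pts = rng.normal(scale=0.3, size=(200, 3))     # toy support volume V
si = spin_image(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(si.shape)  # (8, 8)
```

A TriSI-style descriptor would concatenate `spin_image` outputs computed with each of the three LRF axes in place of `normal`.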

| Local feature matching
Let F_k and F_k+1 be the feature sets output by a 3D descriptor for point clouds P_k and P_k+1, respectively. We match each feature f_i from F_k with its nearest feature f_j from F_k+1 based on an L2-norm nearest-neighbor metric (Mikolajczyk & Schmid, 2005), where i, j are the feature indexes and the threshold τ is set to 1 to reduce the dependency between the threshold value and the metric used. We speed up the feature correspondence process of Equation (5), and the resulting matches are refined with a geometric correspondence grouping (GCC) module. It is worth noting that we use a GCC module instead of the popular random sample consensus (RANSAC; Fischler & Bolles, 1981), because the latter has a longer execution time than GCC, which can be up to two orders of magnitude longer (Yang et al., 2018), and is thus inappropriate for the odometry applications examined in this study. For completeness, in Section 4 we demonstrate the efficiency of GCC by substituting the GCC module with RANSAC in the proposed odometry method.
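A minimal sketch of this L2-norm nearest-neighbor matching with threshold τ, using a k-d tree to avoid the brute-force search. This is illustrative only, not the study's code, and the toy feature vectors are placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(F_k, F_k1, tau=1.0):
    """Match each feature in F_k to its L2-nearest feature in F_k1.
    A match (i, j) is kept only if the nearest-neighbour distance
    is below the threshold tau."""
    tree = cKDTree(F_k1)
    dist, j = tree.query(F_k, k=1)
    return [(i, int(jj)) for i, (d, jj) in enumerate(zip(dist, j)) if d < tau]

# Toy 4D features: rows 0 and 1 correspond, row 2 has no close match.
F_k  = np.array([[0.0, 0, 0, 0], [1.0, 1, 1, 1], [9.0, 9, 9, 9]])
F_k1 = np.array([[0.1, 0, 0, 0], [1.0, 1, 1, 1.1]])
print(match_features(F_k, F_k1))  # [(0, 0), (1, 1)]
```

The retained correspondences would then be passed to the GCC module for geometric refinement.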

| Recursive filtering
The aim of the recursive filtering is to remove the noise from a signal while retaining the useful information. Hence, in the context of odometry, given the correspondences Ω, we solve Equation (2) recursively.

Adaptive H∞ filter
The H∞ filter is a recursive optimal state estimator, with x_k the state variable vector and y_k = [x, y, z]^T the measurement vector containing the 3D coordinates of the point correspondences belonging to P_k+1 that are included in Ω. In the adaptive H∞ filter recursion, Φ is the state transition matrix and H the measurement model matrix. We set R_k = M·I, with M an adaptive coefficient that adjusts the measurement noise covariance based on the quality of the matched features. In contrast to the typical H∞ filter (Simon, 2000), in this study we suggest an adaptive measurement model matrix H_k. The constant in Equation (11) is experimentally estimated to fine-tune the overall filter performance. The problem that the H∞ filter solves is subject to ‖G‖∞ < 1/γ, with Q being a weighting matrix and γ a small constant representing the required accuracy of the filter. In the H∞ filter equations solving Equation (12), Q = I·dt, with dt = 10^-5 and g = 0.1 being regulating parameters. The number of iterations of the H∞ filter equals the cardinality of Ω, and the final state estimate after all iterations is transformed into R*, which is input to Equation (4) to estimate the LIDAR odometry. The parameters of the adaptive H∞ filter, as well as those of the other adaptive filters evaluated in this study, are tuned based on Scenario 1.

Adaptive Kalman filter
Using the same notation as for the H∞ filter, the Kalman filter is given by (Simon, 2001), with B a matrix and q a known input to the system. In the Kalman filter equations, K is the Kalman gain and P the estimation error covariance, with σ_v = 1 and σ_w = 5 × 10^-3 experimentally defined to attain optimum odometry performance.
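For illustration, a minimal linear Kalman predict/update loop over noisy 3D point measurements, a simplified stand-in for iterating over the correspondences in Ω. The adaptive coefficient is omitted, the constant-position model and all parameter values here are our own illustrative choices:

```python
import numpy as np

def kalman_update(x, P, y, Phi, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state, P: error covariance, y: measurement."""
    # Predict
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    # Update
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Iterate over noisy measurements of a static 3D point.
n = 3
x, P = np.zeros(n), np.eye(n)
Phi, H = np.eye(n), np.eye(n)
Q, R = 5e-3 * np.eye(n), 1.0 * np.eye(n)     # sigma_w, sigma_v as in the text
rng = np.random.default_rng(1)
for _ in range(50):
    y = np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=n)
    x, P = kalman_update(x, P, y, Phi, H, Q, R)
print(np.round(x, 1))  # converges towards [1. 2. 3.]
```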
Adaptive αβ filter
Using the same notation as for the H∞ filter, the αβ filter is given by (Penoyer, 1993).
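A minimal αβ filter sketch follows; it is a two-parameter linear estimator of position and rate. The gains α and β below are illustrative placeholders, not the tuned values of this study:

```python
import numpy as np

def alpha_beta(measurements, alpha=0.5, beta=0.1, dt=1.0):
    """Classic alpha-beta filter: estimates position x and rate v
    from a stream of position measurements."""
    x, v = measurements[0], np.zeros_like(measurements[0])
    for y in measurements[1:]:
        x_pred = x + v * dt          # predict
        r = y - x_pred               # residual (innovation)
        x = x_pred + alpha * r       # position correction
        v = v + (beta / dt) * r      # rate correction
    return x, v

# Track a point moving at a constant [0.1, 0, 0] per step.
truth = np.array([[0.1 * k, 0.0, 0.0] for k in range(100)])
x, v = alpha_beta(truth)
print(np.round(v, 2))  # converges towards [0.1, 0, 0]
```

For a constant-velocity input the αβ filter has zero steady-state lag, which is consistent with its strong performance under the minor inter-frame motion reported later in the paper.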

Adaptive state-dependent Riccati equation (SDRE) filter
This is a nonlinear filter (Arnold & Laub, 1984) that solves the discrete-time algebraic Riccati equation A^T X A − X − A^T X B (B^T X B + R)^-1 B^T X A + Q = 0, where A = Q = σ_w · I and B = I · M.
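For illustration, the stated discrete-time algebraic Riccati equation can be solved numerically, for example with SciPy. Here M is taken as 1 and R as the identity purely for the sketch:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete-time algebraic Riccati equation:
#   A^T X A - X - A^T X B (B^T X B + R)^{-1} B^T X A + Q = 0
sigma_w = 5e-3
A = Q = sigma_w * np.eye(3)        # A = Q = sigma_w * I, as in the text
B = np.eye(3)                      # M taken as 1 for this illustration
R = np.eye(3)

X = solve_discrete_are(A, B, Q, R)

# Verify that X satisfies the equation (residual should be ~0).
res = (A.T @ X @ A - X
       - A.T @ X @ B @ np.linalg.inv(B.T @ X @ B + R) @ B.T @ X @ A + Q)
print(np.max(np.abs(res)))  # effectively zero (machine precision)
```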

| Experimental setup
We evaluate the proposed architecture for LIDAR-based uncooperative relative navigation on seven scenarios that consider as the Target platform one real space platform mock-up and four simulated space objects. We assume that the Target is not rotating in any significant way, and the reference frame in which odometry is estimated is fixed on the Source and rotates with it. However, for the scenarios involving real data, the Source platform exhibits minor tumbling during data acquisition.
The first two scenarios consider a moving Source platform and rely on real LIDAR data acquired with a VLP-16 sensor. This situation can be considered as a simulation of a low-resolution point cloud acquired by a space-graded LIDAR sensor at greater distances, or by a low-cost, low-resolution space-graded LIDAR device.
It is worth noting that the VLP-16 we used in our trials is not an optimum choice for spaceborne platforms, mainly due to its 360° horizontal field of view (FoV), its 16 vertical beams limiting the Target details per sweep in the vertical axis, and its relatively short operating range compared to the majority of currently available space-graded LIDAR sensors. In space, the close-range operation for this relative navigation problem could span from 100 m down to 2 m from the Target, and the space-graded LIDAR technology mentioned above would cover that range well. All the experiments for relative navigation available in the literature are of similar range (a few meters) to the one we adopt in our simulated and real data. We believe that the accuracy achieved by our algorithm in our tests would be maintained in real space scenarios, as in that case the sensor would be of better quality in operating range and accuracy (although with similar or lower density, as the LIDAR system used in our experiments is not of great quality in terms of density), and the noise would be much lower than in our experimental data, since in space the background is empty and no noisy returns are expected, in contrast to our lab testbed.
Scenario three studies the odometry performance of a simulated Ellipse of Inspection (Sim-EoI) trajectory of the Source platform around the Target. The latter space platform is developed by Thales Alenia Space (France) and is inspired from the Globalstar-2 and Iridium constellations. The Sim-EoI trajectory is a realistic space-oriented scenario that considers the influence of the Earth's mass, the Sun's sunlight power with respect to each spectral band and the typical physical size of the Target and the Source platform. In the Sim-EoI trajectory the Source platform performs a complete translational motion along the ellipsoid. Figure 3 presents the satellite model, the Sim-EoI trajectory and the point cloud cardinality per frame.
Scenario four, named Sim-Helical, is presented in Figure 4 and simulates a 3D helical trajectory of the Source platform approaching the Thales Alenia Space model acting as the Target. The remaining scenarios, which involve Sim-Bennu, are presented in Figure 5.
It should be noted that the CAD models we use to generate the Target point clouds are accurate, but the point clouds created are not ideal. This is because the method of Aldomà (2011) that we use to generate the point clouds from CAD models, and the HPR algorithm we employ to generate viewing-dependent point clouds, affect the point cloud accuracy during reconstruction.
The differences between the real and the simulated Target point clouds of our trials can be summarized as follows. First, the vertical cardinality of the real LIDAR data is fixed to 16 vertices, while the synthetic data have no such constraint. Second, the level of detail of the real data is lower than that of the simulated data. Third, the real data involve minor Source tumbling during acquisition. Fourth, due to the real data acquisition setup, that is, a high LIDAR refresh rate and a slow-moving Source platform, the inter-motion between P_k and P_k+1 is lower than in the simulated data, where the fictitious dynamics of the Source and Target platforms are greater. Apart from these differences, both real and simulated data are sparse, aiming at simulating large Source-Target distances or spaceborne LIDAR sensors that are less expensive and less accurate. Despite these differences, we believe that, in the context of conceptually validating our odometry method and comparing its performance against current solutions, both data modalities are acceptable. Figure 6 shows the ground truth trajectory of each scenario along with the most accurate trajectory obtained by the proposed method, that is, the best performing feature description and filtering combination.
In our trials we describe all vertices belonging to each point cloud P_k and P_k+1 with the 3D descriptors introduced in Section 2.
a. Encoding radius: We confirm that SHOT is fairly stable and is less affected by the encoding radius, while TriSI, FPFH, and RoPS reach a peak performance and then drop (Guo et al., 2015, 2016). During tuning, we identified this peak performance at 20 × Tr, with Tr the average P_k+1 resolution. Regarding HoD and HoD-S, as the encoding radius increases these tend to increase their encoding capability and provide more distinctive features. However, regardless of the descriptor, as the encoding radius increases, the processing time to compute the descriptor increases exponentially (Guo et al., 2016). Thus, the encoding radius of HoD and HoD-S is set to 80 × Tr to balance odometry performance with computational burden.
b. Geometric consistency threshold (Equation (7)): The threshold depends on the encoding quality of the 3D descriptor and on the characteristics of P_k, such as its sparsity. Hence, to increase the robustness of our method to sparse Target point clouds and to potential feature description mismatches, we use an adaptive threshold set to 2 × Tr, with Tr the average P_k+1 resolution. Due to its adaptive nature, the sensitivity of the geometric consistency threshold is relatively low.
c. Adaptive filtering regulating parameters: These define, for each iteration, how much the measurements, that is, the feature correspondence coordinates for our odometry architecture, affect the prediction step, that is, the updated R*. These require fine tuning, as the output of the filter is quite sensitive to them. Tuning is performed based on the Real-FB scenario.
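The average resolution Tr of P_k+1, on which the 2 × Tr threshold and the 20 × Tr / 80 × Tr encoding radii are based, can be sketched as the mean nearest-neighbor distance. This is an illustrative computation, not the study's code:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_resolution(cloud):
    """Average cloud resolution: mean distance of each vertex to its
    nearest neighbour (k=2 because the closest hit is the point itself)."""
    dist, _ = cKDTree(cloud).query(cloud, k=2)
    return dist[:, 1].mean()

# Regular 1D chain of points spaced 0.5 apart -> resolution 0.5.
cloud = np.array([[0.5 * k, 0.0, 0.0] for k in range(10)])
Tr = cloud_resolution(cloud)
print(Tr)                 # 0.5
print(2 * Tr, 20 * Tr)    # adaptive GCC threshold, peak encoding radius
```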
For completeness, the ICP point-to-point variant aims at aligning P_k and P_k+1, where P_k is kept fixed and P_k+1 is transformed via R* to match P_k. During each transformation, ICP iteratively revises R* (Equation (1)) to minimize the Euclidean point-pair distances between P_k and P_k+1. The point-to-point ICP, presented hereafter as ICP point, is a four-step process: a. Each point belonging to P_k+1 is matched to its closest point in P_k using a Euclidean distance metric. A match (inlier) is accepted if the absolute Euclidean translational distance is less than 0.01.
b. The matrices R and T of R* (Equation (1)) are estimated using a root mean square point-to-point distance metric minimization method that best aligns each matched point pair p(x, y, z). We use the code of Langlois (2018) with 20 iterations. For completeness, an analysis of the interaction between the number of iterations of all ICP variants and the odometry performance and processing time is presented in Section 4.
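The point-to-point ICP loop described above can be sketched as follows, with an SVD-based (Kabsch) alignment step. This is a simplified illustration, not the Langlois (2018) code; the inlier threshold is omitted for brevity:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(P, Q):
    """Least-squares rigid transform (R, t) aligning P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp_point(P_k, P_k1, iters=20):
    """Align P_k1 onto the fixed cloud P_k (point-to-point ICP)."""
    src = P_k1.copy()
    tree = cKDTree(P_k)                   # P_k stays fixed
    for _ in range(iters):
        _, idx = tree.query(src, k=1)     # closest-point matching
        R, t = best_rigid(src, P_k[idx])  # RMS-minimising transform
        src = src @ R.T + t               # apply and iterate
    return src

rng = np.random.default_rng(2)
P_k = rng.normal(size=(100, 3))
P_k1 = P_k + np.array([0.05, 0.0, 0.0])   # small known offset
aligned = icp_point(P_k, P_k1)
print(np.abs(aligned - P_k).max())        # near zero after convergence
```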
Furthermore, in our experimental section we also evaluate the x84 ICP variant (Fusiello et al., 2002). The difference between ICP and x84 ICP is that the former rejects outliers using a threshold based on the standard deviation of the point-pair distances, while x84 ICP uses the median absolute deviation. For the x84 ICP we use the code of Birdal (2015), properly modified to facilitate the x84 outlier rejection scheme.
The maximum number of iterations is set to 20. We also challenge our proposed method against S4PCS, an optimized version of the 4PCS global registration technique (Aiger, Mitra, & Cohen-Or, 2008).
4PCS is an iterative process that extracts from P_k+1 all coplanar 4-point sets that are approximately related by a rigid transformation to a given coplanar 4-point set from P_k. The operating principle of 4PCS is that the ratios defined by coplanar 4-point sets are invariant under affine transformations, and hence under the rigid motion between P_k and P_k+1.
Thus, 4PCS estimates R ⁎ between the randomly chosen coplanar 4-point sets from P k and P k+1 and retains the optimum R ⁎ based on a similarity score. S4PCS affords a speedup over 4PCS by eliminating redundant 4-point congruent sets, that is, sets that are related by a rigid transformation, and by indexing the coplanar 4-point sets for fast retrieval. In this study we use the S4PCS code of Mellado (2017). Based on the tuning scenario Real-FB, we set an estimated overlap ratio of 70% between P k and P k+1 and a registration accuracy δ = 0.01, while the rest of the parameters are the default ones.
Finally, we also challenge our technique against the global/local inlier voting method (Buch, Yang, Kruger, & Petersen, 2014), which involves a dual-layer correspondence check comprising GCC and RANSAC, followed by a singular value decomposition scheme. For better readability, this comparison is presented in compact form in Section 4.
Alternatively, the current literature also suggests employing computer vision concepts (A. Rhodes et al., 2016; A. P. Rhodes et al., 2017), and specifically proposes applying coarse Target pose estimation utilizing the oriented unique and repeatable clustered viewpoint feature histogram (OUR-CVFH) regional descriptor (Aldoma, Tombari, Rusu, & Vincze, 2012) or the local feature descriptor spin images (Johnson & Hebert, 1998). The impact of each of these nuisances is related to their severity and to the robustness of the 3D feature descriptors and the GCC module. For better readability, and to keep the paper at a reasonable length, we do not include an evaluation of robustness to nuisance factors.

| Evaluation criteria
Odometry performance metrics are based on the average and the maximum triaxial translational error between the ground truth (GT) position of the moving platform, as tracked by the Optitrack system, and the estimated position provided by the proposed architecture, where N is the number of point cloud instances per scenario, the estimated translation at instance k is transformed from the LIDAR reference frame to the Optitrack reference frame to make its comparison with the GT translation applicable, and avg(⋅) averages the triaxial translation into a single value. In addition to these metrics, we also calculate the drift, that is, the root mean square error between the estimated endpoint and the GT endpoint, the corresponding translational error T_error as a percentage of the distance travelled, and the average processing time t per Target scene.
Similarly to Equations (27) and (28), we calculate the average rotational error. Additionally, we use the pose-graph comparison method of Burgard et al. (2009), where trans(⋅) is the translational and rot(⋅) the rotational part of the error between the R* and R*_GT transformation matrices of P_k and P_k+1, respectively, and ∘ is the inverse motion composition operator as defined in Lu and Milios (1997). The advantage of using the e_RT metric is twofold. Results are reported in Table A1 (see Appendix), with the top performing method per error highlighted.
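The pose-graph comparison of Burgard et al. (2009) evaluates the error transform obtained by inverse motion composition of the estimated and GT relative motions. A minimal sketch for 4 × 4 homogeneous transforms follows; the example transforms are placeholders:

```python
import numpy as np

def relative_pose_error(T_est, T_gt):
    """trans(.) and rot(.) of the error transform E = inv(T_gt) @ T_est,
    i.e. the inverse-composition comparison of two relative motions."""
    E = np.linalg.inv(T_gt) @ T_est
    trans = np.linalg.norm(E[:3, 3])
    # Rotation angle recovered from the trace of the rotation block.
    rot = np.arccos(np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return trans, rot

def make_T(angle_z, t):
    """Homogeneous transform: rotation about z by angle_z, translation t."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

T_gt = make_T(0.10, [1.0, 0.0, 0.0])
T_est = make_T(0.12, [1.0, 0.1, 0.0])
trans, rot = relative_pose_error(T_est, T_gt)
print(round(trans, 3), round(rot, 3))  # 0.1 translational, 0.02 rad rotational
```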
Additionally, Figure A1 presents the corresponding odometry trajectory obtained per 3D descriptor and recursive filtering method along with all competitor methods. For better readability, methods with large errors are discarded and not presented in the corresponding plots.
From Table A1 it is evident that quite a few 3D local description and recursive filtering combinations are more accurate than the majority of the competitor methods. Top performance, affording the lowest e_RT at a low processing time, is provided by the combination of HoD-S with the adaptive αβ filter, that is, HoD-S/αβ. It should be noted that, in terms of error, SHOT/αβ is slightly more accurate than HoD-S/αβ.
However, the latter is three times faster to execute than SHOT/αβ, and thus HoD-S/αβ is considered an overall more appealing option. This is because HoD-S does not estimate an LRF/A and is thus more robust to sparse point clouds. Additionally, neglecting an LRF/A and relying on a short description length makes HoD-S the most processing-efficient descriptor among those evaluated. In terms of pure odometry accuracy, SHOT attains the lowest e_RT error, but interestingly, from Table A1 and Figure A1, it is clear that TriSI provides one of the largest translational errors, indicating that the e_RT metric is biased by the very low rotational errors of TriSI. In fact, the sparse nature of P_k and P_k+1 prevents TriSI from providing enough geometrically consistent feature matches to the recursive filtering module, and thus the latter does not iterate properly, forcing the estimated R* to preserve its initialization value.
In terms of processing efficiency, as expected, HoD-S and HoD are the fastest to execute, as both neglect LRF/A estimation, while RoPS and TriSI are the least processing efficient due to their complex LRF estimation process. It is worth noting that despite HoD-S and HoD being implemented in MATLAB, these are still faster to execute than FPFH and SHOT, which are implemented in C++/PCL. From Table A1, we also observe that the adaptive αβ and Kalman filters afford similar accuracy, with αβ being slightly more accurate, demonstrating that due to the minor motion between P_k and P_k+1, a linear motion estimation model with two parameters, as in the αβ filter, is sufficient for space odometry. For better visualization of which filter performs better, in Figure 7 we compare the best feature descriptor from each filter.
Regarding the competitor methods, the ICP point and plane variants are the fastest to execute and the most accurate among all competitor techniques evaluated. Despite that, in terms of accuracy these are still inferior to most of the proposed combinations. From Table A1 and Figures A1i and j, we observe that ICP point is slightly more accurate than ICP plane, despite P_k mostly comprising flat surfaces. An explanation is that P_k contains quite a few distinct groups of vertices. The x84 and both S-ICP variants fail to provide an accurate odometry. In addition, sparse point clouds force ICP to perform suboptimally. For the S-ICP variants, due to the vertex geometry, these in most cases provide an R* with zero rotation and translation, which is clearly presented in Figures A1i and j.

| Real-Curved trajectory
This scenario also considers real LIDAR data but is more challenging compared to the Real-FB scenario because, in addition to the poor structure and sparse Target point cloud, this trajectory is also highly curved. Hence, most of the descriptors attain a larger e RT error compared to the Real-FB scenario. From analyzing Table A2 and Figure A2, we conclude that HoD/Kalman attains the lowest error, with HoD-S/αβ following next. However, given that the latter requires half the processing time of the former, and that the e RT based performance difference between these two combinations is relatively small, we conclude that the overall optimum choice for this scenario is HoD-S/αβ. A commonality between the two real scenarios is that TriSI affords the lowest e RT error for a few filters, but again this metric is biased by the low rotational error of TriSI. In fact, TriSI has the highest e T avg odometry error, for the reasons already presented in Section 3.3.1.
For the adaptive Kalman and αβ filters, HoD-S is the optimum choice as it affords the lowest e RT error and simultaneously requires the lowest processing time. Finally, similarly to the Real-FB scenario, in this scenario FPFH also attains the largest errors, which are more evident in the Z-axis. In terms of processing efficiency, except for TriSI, the hierarchy is the same as for the Real-FB scenario.
Regarding the accuracy of the recursive filtering schemes, we notice from Table A2 and Figure A2 that the adaptive αβ filter attains the lowest e RT error, with the adaptive Kalman filter following. This is due to the minor motion between P k and P k+1 , which can be captured by a two-parameter linear model. To highlight which filter performs better, in Figure 8 we compare the best feature descriptor from each filter. A further analysis at a higher level, involving the overall performance of each descriptor and filtering method over all scenarios, is presented in Section 4.
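To illustrate why a two-parameter linear model suffices for the minor inter-frame motion, the sketch below implements a generic αβ tracker for a single scalar pose parameter (one translation component, say). The gain values, the per-parameter decoupling, and the synthetic data are illustrative assumptions, not the adaptive tuning used in the paper.

```python
import numpy as np

def alpha_beta_track(measurements, alpha=0.3, beta=0.05, dt=1.0):
    """Two-state (value/rate) alpha-beta filter with static gains."""
    x, v = measurements[0], 0.0              # state: value and its rate
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt                  # constant-rate prediction
        r = z - x_pred                       # innovation (residual)
        x = x_pred + alpha * r               # value correction
        v = v + (beta / dt) * r              # rate correction
        estimates.append(x)
    return np.array(estimates)

# Synthetic example: a pose parameter drifting at a constant rate,
# observed under additive noise (all values illustrative only)
rng = np.random.default_rng(0)
truth = 0.01 * np.arange(200)
noisy = truth + rng.normal(0.0, 0.05, truth.size)
smoothed = alpha_beta_track(noisy)
raw_err = np.mean(np.abs(noisy[1:] - truth[1:]))
flt_err = np.mean(np.abs(smoothed - truth[1:]))
```

Because the constant-rate prediction matches the (slow, smooth) true dynamics, the filtered track ends up markedly closer to the ground truth than the raw measurements, which is the regime exploited above.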
As expected, the processing burden of every 3D local feature descriptor and recursive filtering combination is higher than that of the competitor methods; detailed results are presented in Table A3 and Figure A3. Regarding the accuracy of the recursive filtering schemes, we notice from Table A3 that the adaptive αβ filter achieves the lowest average error, with the H∞ and SDRE filters following closely. Surprisingly, the adaptive Kalman filter fails to provide an appealing accuracy with any of the evaluated descriptors. In Figure 9 we compare the best feature descriptor from each filter. The Kalman filter constantly attains the highest errors for all feature descriptors, while the latter perform quite well with the rest of the filtering methods. This is because all modules within the proposed architecture are tuned based on sparse point clouds, whereas this scenario involves a considerably higher P k cardinality.

Concerning the performance of the competitor methods, both S-ICP variants provide a highly accurate odometry in an appealing execution time. The remaining competitor techniques attain an accuracy that is one order of magnitude larger compared to S-ICP. This is because, despite this trial having more vertices compared to the rest of the scenarios, the number of vertices is still too small for an ICP process to iterate properly.
In any case, it should be noted that the errors in this trial are larger. Table A4 presents the detailed performance metrics, while Figure A4 presents the error, that is, the difference between the GT trajectory and the estimated one per method, for each of the three axes. For this scenario we prefer presenting the corresponding errors per method rather than the estimated trajectory, due to the small errors attained by the majority of the feature descriptors and recursive filtering methods.
From Table A4 it is evident that the HoD-S descriptor attains the lowest e RT error among all descriptors evaluated. Additionally, HoD-S achieves the lowest error with any of the filtering methods that it is combined with. SHOT follows, posing the second-best choice for each filter. From Table A4 and Figure A4 it is clear that TriSI is the least accurate descriptor, as it achieves the highest e RT error for any of the filters evaluated. Given that RoPS and TriSI share the same LRF estimation method and that RoPS attains a better odometry accuracy, the encoding capability of TriSI on this scenario is limited. This is because the spin images descriptor that is included in the TriSI descriptor is sensitive to highly sparse structures, in combination with the large frame-to-frame motion between P k and P k+1 . Regarding processing efficiency, HoD-S is the fastest to compute because it neglects a LRF/A estimation process.
We further analyze the interplay between HoD-S, which is the descriptor offering the smallest errors, and the filtering methods evaluated in this study, and present the results in Figure 10. From this figure we observe that H∞, αβ, and SDRE have a very similar performance, with Kalman being less accurate especially in the Z-axis.
Considering the competitor methods, the top performing ICP plane has half the accuracy of HoD-S/αβ. As already mentioned in Section 1, the proposed method can present an accurate odometry due to the feature matching and the geometric correspondence grouping schemes that provide the recursive filter with only well-established correspondences.
These correspondences, combined with the adaptive nature infused in the recursive filter, afford an odometry trajectory with low errors. Despite that, ICP is highly processing efficient, requiring only 18 ms. Even though the literature suggests that ICP attains a low accuracy on sparse point clouds, and we confirmed this in the previous trials, in this trial, despite the point clouds being sparse, the frame-to-frame motion between P k and P k+1 is such that it allows ICP to settle to a more accurate solution.

The results on this scenario are presented in Table A5 and Figure A5. As expected, due to the long trajectory length evaluated here, the accumulated errors of all methods are substantially larger. However, as already stated, a direct cross-scenario performance comparison is biased toward the shortest trajectories. Hence, in Section 4, we normalize the performance of each method so as to make it independent of the distance travelled and thus make a cross-scenario performance comparison feasible. The performance achieved by each method is quite similar among the filters evaluated. This demonstrates that establishing correct feature matches is very important, as mismatches will negatively influence the odometry output of the filter. However, as demonstrated in Section 3.4.1, for the Kalman filter the training and testing scenarios should involve a Target with a similar level of sparsity. In terms of pure accuracy, the optimum scheme is SHOT/αβ, while the overall top performing descriptor is SHOT, with HoD following closely. However, due to the large computational burden of SHOT, and given that the next best performing combination, HoD-S/SDRE, is also 10 times faster to execute, we select the latter as the optimum method. Considering the performance of the filtering methods, the overall top performing is H∞, with αβ being the next optimum choice.
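The distance-travelled normalization referred to above can be sketched as dividing an accumulated error by the ground-truth path length; this is our reading of the idea, and the function names and sample trajectories below are illustrative, not necessarily the paper's exact formula.

```python
import numpy as np

def path_length(gt_positions):
    """Total distance travelled along the ground-truth trajectory."""
    steps = np.diff(gt_positions, axis=0)
    return float(np.sum(np.linalg.norm(steps, axis=1)))

def normalized_error(total_error, gt_positions):
    """Error per unit distance, enabling cross-scenario comparison."""
    return total_error / path_length(gt_positions)

# A short and a long straight trajectory with the same absolute error
short_gt = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
long_gt = np.array([[0., 0., 0.], [5., 0., 0.], [10., 0., 0.]])
```

With the same absolute error, the longer trajectory yields the smaller normalized error, which is exactly the bias toward short trajectories that the normalization removes.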

| Sim-Voyager trajectory
In terms of processing efficiency, once again HoD-S and HoD are the fastest to compute. The least accurate method is HoD-S/Kalman, posing large translational and rotational errors. Figure 11 presents the performance of the most appealing schemes per filtering method, where we observe that SHOT/αβ shows a substantial accuracy drop in the X-axis after frame 200, that is, after two complete traversals of the ellipsoidal trajectory. Additionally, Figure 11 highlights that all methods present a repetitive error that coincides with the corresponding relative Source-Target position.
Considering the competitor methods, ICP point attains an e RT error that is close but inferior to the well performing proposed combinations, with ICP plane following. However, as expected, both ICP variants are much faster to execute. Despite the average P k cardinality being only 190, both S-ICP variants are less accurate than their standard ICP counterparts because P k is not sparse. Interestingly, all errors follow a sinusoidal pattern that coincides with the relative Source-Target position.
This is more evident for the S4PCS, where the error amplitude is the highest presented in Figure A5 (m-o).

| Sim-Orion trajectory
This scenario is identical to the Sim-Voyager one, but the Voyager satellite is substituted with the Orion space capsule. The main characteristic of this trial is the flat surfaces of Orion, which include less distinctive features. The average P k cardinality is 288. In terms of performance, detailed results are presented in Figure A6. In Figure 12 we present the top performing descriptor per filtering method.

| Sim-Bennu trajectory
This is the same trajectory as Sim-Voyager and Sim-Orion but considers the Bennu asteroid as the Target object. The main challenge of this trial is the very small average point cloud cardinality, which is only 151. The results obtained are presented in Figure A7 and Table A7, where it is evident that for all filters but Kalman, HoD-S is the optimum choice. This is because, in contrast to the Sim-Orion scenario, the highly sparse P k favors HoD-S, which is robust to low density point clouds.

This is important because for odometry applications the trajectory accuracy and the computational burden are equally important. In Figure 14 the color of the marker is related to the scenario, while the shape to the descriptor. Identifying the overall most appealing descriptor is not an easy task because the performance of each descriptor varies for each scenario, depending on the characteristics of the trajectory, that is, Source-Target range and frame-to-frame Target motion, and on the complexity of the Target structure. Figure 14 demonstrates that the average normalized e RT for each descriptor is of the same order, with only minor differences. However, in terms of processing efficiency, the descriptors impose a different processing time that is governed by several parameters including the individual features of each scenario, that is, level of sparsity, the robustness of each descriptor, the number of geometrically consistent feature matches and, finally, the number of iterations for each filter. Overall, we conclude that HoD-S is the most appealing descriptor for the majority of the scenarios, that is, Real-FB, Sim-Orion, Sim-Helical, and Sim-Bennu, with FPFH being the most appealing for the Real-Curved and the Sim-EoI scenarios. These findings confirm that HoD and HoD-S are very robust to low density point clouds (Kechagias-Stamatis & Aouf, 2016), while in parallel they also afford a low processing time.

| Feature description performance analysis
Interestingly, for each scenario all descriptors attain errors of the same order, with the only major differentiation being the processing burden. This claim should not be confused with the individual findings for each scenario, as in Section 3 we presented the performance of each feature descriptor and filtering method per scenario, while here we evaluate the average performance of each descriptor over all filtering methods. From Figure 14 it is evident that scenarios exploiting real LIDAR data pose larger errors compared to the synthetic ones. This is important as it highlights that despite creating realistic synthetic scenarios as done in this study (simulating various noise sources, range-dependent P k resolution variation, and viewing-dependent P k creation from a complete 3D Target model), synthetic data cannot be as realistic as real data acquisition. However, the feature description hierarchy for the top performing descriptors is relatively stable, confirming the validity of the results and that, despite the disadvantages of synthetic versus real data, exploiting synthetic data is still valuable.
Regarding computational efficiency, in most cases HoD-S is at least one order of magnitude faster to compute than the rest of the descriptors. On the contrary, RoPS imposes the highest processing burden. This is because the former descriptor neglects estimating a LRF/A, while the latter involves a computationally expensive LRF. This is also evident from the TriSI descriptor, which exploits the same LRF estimation method as RoPS and thus is also processing inefficient. Interestingly, despite FPFH and SHOT being implemented in C++/PCL and executed via a MEX wrapper, in most of the scenarios these are less processing efficient than HoD-S and HoD that are entirely implemented in MATLAB.
It is worth noting that the Sim-EoI scenario imposes the highest processing burden among all scenarios. This is due to the large P k cardinality, demonstrating that in such cases, downsampling P k or exploiting a keypoint detection strategy rather than encoding all of P k should be considered. Figure 15 presents the interplay between the P k cardinality and the processing burden per descriptor. This figure highlights that FPFH is the least affected by the P k cardinality, while HoD and HoD-S are the most affected ones. This is because the processing efficiency of HoD and HoD-S, due to neglecting an LRF/A estimation, is not balanced in high cardinality point cloud scenarios, that is, the Sim-EoI scenario, due to their large description radius. Our findings in Figure 15 confirm Guo et al. (2016) with respect to the relative processing efficiency between SHOT, RoPS, TriSI, and FPFH. The minor processing time fluctuations between various P k cardinality values arise because the total processing time considered in this plot is affected not only by the cardinality of P k but also by the number of correspondences between P k and P k+1 .
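The downsampling option suggested above can be sketched as a voxel-grid reduction: points falling in the same voxel collapse to their centroid. The function name, voxel size, and synthetic cloud below are illustrative assumptions; in practice the voxel size would need tuning against the descriptor support radius.

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Collapse all points falling in the same voxel to their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)            # guard against shape quirks
    n_voxels = int(inverse.max()) + 1
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)         # accumulate per-voxel sums
    counts = np.bincount(inverse).astype(float)
    return sums / counts[:, None]            # per-voxel centroids

# Dense synthetic P_k in the unit cube (values illustrative)
rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, (5000, 3))
small = voxel_downsample(cloud, voxel=0.25)  # at most a 4x4x4 grid
```

The 5,000-point cloud shrinks to at most 64 centroids here, which is the kind of cardinality reduction that would rebalance HoD/HoD-S in the Sim-EoI regime.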

| Recursive filtering performance analysis
Next, we evaluate the performance of each filter per scenario, as an average over all feature descriptors evaluated. From Tables 19-25 we observe that the average processing time per scenario is almost independent of the recursive filtering method used, that is, the fastest filter has only a 2% speed-up over the slowest one. Therefore, in Figure 16 we decouple the average normalized e RT error from the total processing time per scenario and present only the error. The results clearly demonstrate the superiority of the adaptive Kalman filtering, as in five out of seven trials it affords a greater odometry accuracy than its competitor filters. Even for the Real-FB and Sim-Bennu scenarios where the adaptive Kalman filter is not the top performing one, its e RT error is only 2.5% and 8.3% greater compared to the top performing filter, respectively. This is because, unlike the αβ filter that is restricted to two states and uses static manually defined filter gains, the Kalman filter is not restricted and relies on a time-dependent, automatically updated estimate of the state covariance, which is based on user-defined covariance noise models. SDRE is a nonlinear filter and, since the Target motion between P k and P k+1 is small, the motion can be better approximated with linear models. However, as already mentioned, these observations consider the average performance per filter and scenario, and do not prohibit the case where a filter attains a low average performance but a very appealing odometry accuracy if combined with a specific descriptor.
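The contrast drawn above between static αβ gains and the Kalman filter's covariance-driven gain can be sketched for a single pose parameter as below; the process/measurement noise levels q and r and the synthetic data are illustrative placeholders, not the paper's tuned covariance models.

```python
import numpy as np

def kalman_cv(measurements, q=1e-5, r=2.5e-3, dt=1.0):
    """1-D constant-velocity Kalman filter.

    Unlike the alpha-beta filter's static gains, the gain K is
    recomputed each step from the propagated state covariance P.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements[1:]:
        x = F @ x                            # predict state
        P = F @ P @ F.T + Q                  # propagate covariance
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # time-varying gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Same kind of synthetic ramp as before (values illustrative only)
rng = np.random.default_rng(0)
truth = 0.01 * np.arange(200)
noisy = truth + rng.normal(0.0, 0.05, truth.size)
filtered = kalman_cv(noisy)
raw_err = np.mean(np.abs(noisy[1:] - truth[1:]))
kf_err = np.mean(np.abs(filtered - truth[1:]))
```

Because the gain shrinks automatically as the covariance settles, the filter adapts its smoothing over time instead of relying on hand-picked constants.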
Similarly to Figure 14, Figure 16 also highlights the difference between using real and synthetic data. In fact, scenarios involving real data impose errors that are at least one order of magnitude greater compared to synthetic data. This is mainly due to the poor structure of the P k acquired by the LIDAR sensor and due to the minor Source platform tumbling.
In Figure 17 we further analyze the performance of each filter by presenting the e RT , e T avg , and e R avg errors per filter and scenario. From this figure it is also obvious that the data modality, that is, real versus synthetic, has a great impact on the odometry accuracy. As already presented in Figure 16, the e RT error in the real scenarios is higher compared to the error in the simulated ones. Interestingly, in the real LIDAR data and in the Sim-EoI scenario, which has a high P k cardinality, the e T avg is smaller than the e R avg , while in the remaining synthetic scenarios with a low P k cardinality this is reversed. This effect is because the frame-to-frame motion of the real-data scenarios is less smooth compared to the synthetic ones, and also due to the high P k cardinality of the Sim-EoI scenario. Furthermore, given that the Sim-EoI and the Sim-Voyager, Sim-Orion, and Sim-Bennu scenarios adopt the same trajectory, the difference between the e R avg and e T avg errors can be related to the P k sparsity in combination with the amount of details/distinctive features in each P k . It should be noted that the calculation of the error metrics presented in Figure 17 is based on Equations (27), (29), and (31), respectively, with the former two equations being conceptually different compared to the third one. Therefore, the e R avg and e T avg errors can significantly differ from the e RT error, for example, the errors of the Kalman filter for the Sim-EoI scenario presented in Figure 17.
For completeness, we also investigate the importance of the recursive filtering module by substituting it in our odometry pipeline with the ICP point-to-point registration process, which is a commonly used registration method in the space navigation literature. Hence, in this trial we apply the ICP point-to-point on the correspondences produced by the GCC process, rather than on the entire P k as done in the trials of Section 3. The performance attained with and without the recursive filtering module per scenario is presented in Table 3, where it is clear that the recursive filtering process has a great impact on the odometry accuracy. However, exploiting a recursive filtering scheme increases the total computational time.

| Performance analysis of the top performing description-filtering combination
In Table 4 we summarize the most appealing module per scenario, that is, descriptor, filtering method, and the combination of the former two modules, along with the special features per scenario. From the results presented in Figure 14 and summarized in Table 4, it can be concluded that, overall, in terms of odometry accuracy and computational burden, HoD-S is the most appealing feature descriptor, with FPFH following.
Considering the best performing filtering scheme, from Figure 16 we observe that the adaptive Kalman is the optimum choice, with the adaptive αβ filter following. A summary of the top performing filtering method per scenario is presented in Table 4. Interestingly, the most appealing descriptors and recursive filtering methods per scenario, as concluded from the scenarios in Section 3, do not necessarily coincide with the most appealing combination of these two modules individually. This is because, in the former analysis on the optimum descriptor per scenario, we considered the overall performance of all recursive filtering methods evaluated in this study.
HoD-S is appealing because of its robustness to low and very low density point clouds along with its processing efficiency, confirming the findings of Kechagias-Stamatis and Aouf (2016). Despite that, in the Sim-Orion scenario SHOT combined with the adaptive Kalman filter presents the most appealing combination. This is because the Target involved in this scenario has a poor structure comprising mostly flat surfaces, indicating that for this type of scenario involving a LRF in the description process is crucial.
We further analyze the performance of the top performing combinations per scenario by comparing them against the corresponding top performing competitor solutions. Evaluation considers the odometry accuracy expressed via the normalized e RT error metric and the computational burden imposed by each solution. From Figure 18 it is obvious that the proposed method is more accurate, while ICP-based methods are generally faster to execute. It should be noted that the results of Figure 18 are normalized but not scaled, that is, not multiplied with a fixed constant. By combining the information presented in Figure 18 and the average P k cardinality per scenario shown in Figure 9, the following conclusions can be made: First, for highly sparse point clouds, for example, the Sim-Voyager and Sim-Bennu scenarios, the proposed method is more accurate than the top performing competitor method evaluated in this study and is faster to execute. For sparse P k , for example, Real-FB, Real-Curved, Sim-Helical, and Sim-Orion, identifying the most appealing solution depends on the nature of P k , that is, real versus simulated data.
Specifically, for the real LIDAR data case, the top performing ICP variant is one order of magnitude faster than the proposed method, with the e RT error being of the same order.
From the trials presented in this paper, it is clear that the proposed odometry is more accurate compared to current space-oriented navigation techniques. One limitation of our method is that for point clouds exceeding 200 vertices, its computational burden is higher than that imposed by the ICP variants. However, in this paper we focus on cases simulating a low-resolution point cloud that is acquired by a space-graded LIDAR sensor at greater distances or by a low-cost, low-resolution space-graded LIDAR device. Thus, this limitation concerns a broader usage of our architecture and not the cases examined here.
Regarding the computational burden, this is partially because our architecture is a blend of MATLAB and C++ and thus is not optimized in terms of processing efficiency. However, given that our odometry architecture comprises several processes, that is, feature description, matching, geometric consistency checks, and recursive filtering, implementing it in C++ shall still be more processing costly compared to ICP, but with a smaller time difference. However, as already stated, this paper focuses on the conceptual validity of the proposed method rather than on a readily available solution. An additional limitation of our method is its sensitivity to Targets with smooth surfaces, because 3D local feature descriptors are prone to mismatches affecting the odometry accuracy. Despite that, in scenarios that involve Targets with flat surfaces, the proposed odometry solution still manages to attain lower odometry errors compared to typical ICP techniques.

| GCC module performance analysis
We also investigate the influence of the GCC module by substituting it in our odometry pipeline with RANSAC, which is commonly used in computer vision applications to define the inliers between two data sets. Table 5 compares the odometry performance of our original architecture against the one using RANSAC. We implement the latter with a false alarm rate of 0.1, an inlier ratio of 99%, and at least 50,000 iterations.
From Table 5, it is evident that GCC is up to one order of magnitude more accurate than RANSAC, while in terms of normalized rotational accuracy it is up to two orders of magnitude more accurate. This performance difference is more evident in the real scenarios, highlighting that RANSAC is more sensitive to real LIDAR point clouds that suffer from minor noise and sensor tumbling during acquisition.
In terms of processing burden, we partially confirm Yang et al. (2018), which states that RANSAC imposes up to two orders of magnitude additional computational time. Our trials confirm this statement, but only for Target point clouds with a cardinality of less than approximately 700 vertices. For larger point clouds we observe that GCC and RANSAC require the same order of execution time, with RANSAC being the faster of the two. This is because in each iteration RANSAC exploits only a random sample of the corresponding vertices and then applies the estimated model to the entire input data, while GCC involves in each iteration all the corresponding vertices produced by the coarse matching process of Equation (5). Therefore, given that the inter-frame motion between P k and P k+1 is small, the number of corresponding vertices produced by Equation (5) is directly related to the point cloud cardinality. In simple words, GCC is faster to execute when P k and P k+1 have a small cardinality, because then Equation (5) produces fewer correspondences that need to be evaluated by the GCC module.
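The per-iteration cost contrast described above can be made concrete with a minimal RANSAC over putative rigid correspondences: each iteration fits a model to a 3-point random sample and only then scores it on all matches. This is a generic sketch (a Kabsch fit plus a fixed iteration budget and inlier tolerance, all illustrative), not the exact RANSAC configuration benchmarked in Table 5.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation/translation mapping P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    # Sign correction guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_rigid(src, dst, iters=200, tol=0.05, seed=0):
    """RANSAC over putative correspondences src[i] <-> dst[i]."""
    rng = np.random.default_rng(seed)
    best_R, best_t, best_inl = np.eye(3), np.zeros(3), -1
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal sample
        R, t = kabsch(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inl = int((resid < tol).sum())                 # score on all pairs
        if inl > best_inl:
            best_R, best_t, best_inl = R, t, inl
    return best_R, best_t, best_inl

# Synthetic correspondences with 10 corrupted pairs (values illustrative)
rng = np.random.default_rng(3)
th = 0.4
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.2, 0.1])
src = rng.uniform(-1.0, 1.0, (40, 3))
dst = src @ R_true.T + t_true
dst[:10] += 1.0                                        # outlier matches
R_est, t_est, n_inl = ransac_rigid(src, dst)
```

Note how the per-iteration fit touches only three correspondences, whereas the scoring step (and, analogously, each GCC iteration) touches all of them, which is the scaling behavior discussed above.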

| Comparison against the correspondence by local and global voting method
We further challenge the performance of the proposed pipeline against the correspondence by local and global voting (CLGV; Buch et al., 2014) method. This technique involves feature description, correspondence refinement via GCC and RANSAC, and finally singular value decomposition (SVD) to estimate the rigid transformation between P k and P k+1 . Table 6 highlights that the proposed solution is more appealing than CLGV, attaining an odometry accuracy that is one order of magnitude better while also being more processing efficient. This is because the SVD suffers from ambiguity in the orientation of the singular vectors (Tomasi, 2016), affecting the estimation of R* (Equation (1)). Regarding computational effort, CLGV imposes a large processing burden mainly due to the two iterative processes involved, that is, GCC and RANSAC.

| Interplay between performance and number of iterations for the ICP variants
In Section 3, we set the maximum number of iterations per ICP method, that is, 20 for ICP point, 1,000 for ICP plane, and 20 for ICP x84, while the iterations for both S-ICP variants are set to 20. To support this choice, we evaluate each ICP variant by setting the maximum number of iterations to 20 and to 1,000. Evaluation considers the scenarios presented in Section 3 and the corresponding results are presented in Figure B1.
For better readability, the best performance per normalized error is highlighted in red, if it is attained by setting to 20 iterations, and in blue if setting to 1,000 iterations.
From the results obtained it is clear that the chosen number of iterations used in Section 3 is the optimum one for each ICP variant.
However, from further analyzing the results of Figure B1, we observe that the ICP point variant settles within 20 iterations, for both the real and the simulated data scenarios. In contrast, the ICP plane variant requires more iterations to settle and thus setting the threshold up to 1,000 is mandatory. It is worth noting that for both variants, the normalized translational, rotational, and e RT errors benefit equally from the number of iterations set. However, interestingly, this is not the case for the ICP x84 variant, as the translational and e RT errors benefit from a low number of iterations, while the rotational error benefits from a large number.
This behavior is present in both the real and the synthetic data scenarios.
The S-ICP point variant presents overall a higher odometry accuracy when the iterations are limited to 20. However, this is only valid for the synthetic data scenarios, as the real data scenarios require more iterations.
Given that during the parameter setup we set the parameters of each method to be universal, that is, selected for an overall optimum odometry accuracy on both real and synthetic data scenarios, we set the iterations to 20. Finally, the S-ICP plane variant is quite stable, requiring up to 20 iterations. The results obtained confirm our previous finding that real and simulated data require a different parameter setup.
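To make the iteration-threshold discussion concrete, below is a minimal point-to-point ICP with both a maximum-iteration cap and an early-exit test for when the alignment error settles. It is a textbook sketch (brute-force nearest neighbours, SVD fit, illustrative synthetic clouds), not the specific ICP implementations benchmarked here.

```python
import numpy as np

def icp_point2point(src, dst, max_iter=20, tol=1e-9):
    """Basic ICP: alternate NN correspondence and SVD rigid fit."""
    R, t = np.eye(3), np.zeros(3)
    prev_err, err = np.inf, np.inf
    cur = src.copy()
    for _ in range(max_iter):                 # iteration cap, as discussed
        # Brute-force nearest neighbour in dst for every source vertex
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(1)]
        err = np.sqrt(d2.min(1)).mean()       # mean NN distance (pre-fit)
        # SVD (Kabsch) fit of the current correspondence set
        cs, cm = cur.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (match - cm))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dT = cm - dR @ cs
        cur = cur @ dR.T + dT
        R, t = dR @ R, dR @ t + dT            # accumulate the transform
        if abs(prev_err - err) < tol:         # error settled: stop early
            break
        prev_err = err
    return R, t, err

# Small synthetic misalignment (values illustrative only)
rng = np.random.default_rng(2)
model = rng.uniform(0.0, 1.0, (80, 3))
th = 0.03
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
t_z = np.array([0.01, -0.008, 0.012])
target = model @ Rz.T + t_z
R_icp, t_icp, final_err = icp_point2point(model, target, max_iter=30)
```

With a small inter-frame motion the loop settles well before the cap, which mirrors the ICP point behavior reported above, while a variant needing hundreds of iterations is exactly the case where a 1,000-iteration threshold matters.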

| CONCLUSION
LIDAR-based odometry for space relative navigation is challenging due to the absence of background, the limited structure, and the sparsity of the Target point cloud. As demonstrated, several ICP variants, S4PCS, and the CLGV method, which are currently widely used for point cloud registration and odometry applications, do not guarantee an accurate space odometry trajectory. Spurred by this, we suggest a robust architecture appropriate for space odometry that combines the concepts of 3D local feature description, geometric correspondence grouping for feature matching refinement, and adaptive recursive filtering.
The accuracy of the proposed pipeline is tested on seven scenarios that include both real and synthetic point cloud data and four space objects, comprising different satellites, a space capsule, and an asteroid. In our trials we evaluate several current 3D local descriptor and recursive filtering combinations and demonstrate that the proposed architecture is 50% more accurate compared to current solutions. Our trials highlight that HoD-S combined with the adaptive αβ filter poses the most appealing combination for the majority of the scenarios evaluated, affording a high quality odometry performance with a low processing burden. Additional advantages of the proposed architecture over current LIDAR space odometry architectures are its independence from an off-line training process and that it requires no a priori knowledge of the Target platform.

APPENDIX A
For better readability, this appendix presents the detailed performance metrics and odometry plots per method and scenario.

FIGURE A5: Sim-Voyager scenario odometry error plots of the 3D descriptors combined with the adaptive filters (a-c) H∞, (d-f) Kalman, (g-i) αβ, (j-l) SDRE, and (m-o) the competitor methods and the top performing proposed method HoD-S/SDRE (due to large errors and for better readability, we omit FPFH from the Z error plots, HoD-S from (d), and ICP x84 from (m-o)).

FIGURE A6: Sim-Orion scenario odometry plots of the 3D descriptors combined with the adaptive filters (a, b) H∞, (c, d) Kalman, (e, f) αβ, (g, h) SDRE, and (i, j) the competitor methods and the top performing proposed method SHOT/Kalman (due to large errors and for better readability, we omit HoD from (a, b, c, d, e, f, g, h, j, k), HoD-S from (c, i, l), TriSI from (a, b, c, d, g, h, j, k), and S4PCS, Sparse ICP point2point, and ICP x84 from (m-o)).

Abbreviations: FPFH, fast point feature histogram; GT, ground truth; HoD, histogram of distances; HoD-S, histogram of distances short; ICP, iterative closest point; RoPS, rotational projection statistics; SDRE, state-dependent Riccati equation; SHOT, signatures of histograms of orientations; TriSI, tri-spin image.

Italics and boldface font present the best performance per ICP variant and scenario.