References of "Sanchez Lopez, Jose Luis 50027149"
Full Text
Peer Reviewed
S-Nav: Semantic-Geometric Planning for Mobile Robots
Kremer, Paul UL; Bavle, Hriday UL; Sanchez Lopez, Jose Luis UL et al

Scientific Conference (2023, July 10)

Detailed reference viewed: 121 (0 UL)
Full Text
Peer Reviewed
Hierarchical Visual SLAM based on Fiducial Markers
Tourani, Ali UL; Bavle, Hriday UL; Sanchez Lopez, Jose Luis UL et al

Scientific Conference (2023, June 02)

Fiducial markers can encode rich information about the environment and aid Visual SLAM (VSLAM) approaches in reconstructing maps with practical semantic information. Current marker-based VSLAM approaches mainly utilize markers for improving feature detection in low-feature environments and/or incorporating loop-closure constraints, generating only low-level geometric maps of the environment that are prone to inaccuracies in complex environments. To bridge this gap, this paper presents a VSLAM approach utilizing a monocular camera and fiducial markers to generate hierarchical representations of the environment while improving the camera pose estimate. The proposed approach detects semantic entities from the surroundings, including walls, corridors, and rooms encoded within markers, and appropriately adds topological constraints among them. Experimental results on a real-world dataset demonstrate that the proposed approach outperforms a traditional marker-based VSLAM baseline in terms of accuracy, despite adding new constraints, while creating enhanced map representations. Furthermore, it shows satisfactory results when comparing the quality of the reconstructed map to that of a map reconstructed using a LiDAR SLAM approach.

Detailed reference viewed: 186 (5 UL)
Full Text
Peer Reviewed
From SLAM to Situational Awareness: Challenges and Survey
Bavle, Hriday UL; Sanchez Lopez, Jose Luis UL; Cimarelli, Claudio et al

in Sensors (2023), 23(10), 4849

The capability of a mobile robot to efficiently and safely perform complex missions is limited by its knowledge of the environment, namely the situation. Advanced reasoning, decision-making, and execution skills enable an intelligent agent to act autonomously in unknown environments. Situational Awareness (SA) is a fundamental capability of humans that has been deeply studied in various fields, such as psychology, military, aerospace, and education. Nevertheless, it has yet to be considered in robotics, which has focused on single compartmentalized concepts such as sensing, spatial perception, sensor fusion, state estimation, and Simultaneous Localization and Mapping (SLAM). Hence, the present research aims to connect the broad multidisciplinary existing knowledge to pave the way for a complete SA system for mobile robotics that we deem paramount for autonomy. To this aim, we define the principal components of a robotic SA and their areas of competence. Accordingly, this paper investigates each aspect of SA, surveying the state-of-the-art robotics algorithms that cover them, and discusses their current limitations. Remarkably, essential aspects of SA are still immature, since current algorithmic developments restrict their performance to specific environments. Nevertheless, Artificial Intelligence (AI), particularly Deep Learning (DL), has brought new methods to bridge the gap that keeps these fields from deployment in real-world scenarios. Furthermore, an opportunity has been discovered to interconnect the vastly fragmented space of robotic comprehension algorithms through the mechanism of the Situational Graph (S-Graph), a generalization of the well-known scene graph. Therefore, we finally shape our vision for the future of robotic situational awareness by discussing interesting recent research directions.

Detailed reference viewed: 142 (3 UL)
Full Text
Peer Reviewed
A Review of Radio Frequency Based Localisation for Aerial and Ground Robots with 5G Future Perspectives
Kabiri, Meisam UL; Cimarelli, Claudio UL; Bavle, Hriday UL et al

in Sensors (2022), 23(1), 188

Efficient localisation plays a vital role in many modern applications of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs), as it contributes to improved control, safety, power economy, etc. The ubiquitous 5G NR (New Radio) cellular network will provide new opportunities to enhance the localisation of UAVs and UGVs. In this paper, we review radio frequency (RF)-based approaches to localisation. We review the RF features that can be utilized for localisation and investigate the current methods suitable for unmanned vehicles under two general categories: range-based and fingerprinting. The existing state-of-the-art literature on RF-based localisation for both UAVs and UGVs is examined, and the envisioned 5G NR for localisation enhancement and future research directions are explored.
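As a minimal illustration of the range-based category surveyed above, the classic closed-form 2D trilateration from three anchors can be sketched as follows (the anchor layout and ranges are synthetic examples of our own, not taken from the paper):

```python
# Minimal 2D range-based localisation (trilateration) from three fixed
# anchors, illustrating the "range-based" category discussed above.
def trilaterate(anchors, ranges):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # subtracting the first range equation linearises the problem:
    # 2(xi - x1) x + 2(yi - y1) y = (xi^2 + yi^2 - x1^2 - y1^2) - (ri^2 - r1^2)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = (x2 * x2 + y2 * y2 - x1 * x1 - y1 * y1) - (r2 * r2 - r1 * r1)
    b2 = (x3 * x3 + y3 * y3 - x1 * x1 - y1 * y1) - (r3 * r3 - r1 * r1)
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("anchors are collinear")
    # solve the 2x2 system by Cramer's rule
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# a receiver at (1, 1) measured from anchors at (0, 0), (4, 0) and (0, 4)
est = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                  [2 ** 0.5, 10 ** 0.5, 10 ** 0.5])
```

With noisy RF ranges (and NLOS bias) one would instead use more anchors and solve the over-determined system in a least-squares sense.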

Detailed reference viewed: 121 (9 UL)
Full Text
S-Graphs+: Real-time Localization and Mapping leveraging Hierarchical Representations
Bavle, Hriday UL; Sanchez Lopez, Jose Luis UL; Shaheer, Muhammad UL et al

E-print/Working paper (2022)

In this paper, we present an evolved version of the Situational Graphs, which jointly models in a single optimizable factor graph a SLAM graph, as a set of robot keyframes containing their associated measurements and robot poses, and a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between those elements. Our proposed S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real-time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging the high-level information of the environment. To extract such high-level information, we present novel room and floor segmentation algorithms utilizing the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulations of distinct indoor environments, real datasets captured over several construction sites and office environments, and a real public dataset of indoor office environments. S-Graphs+ outperforms relevant baselines in the majority of the datasets while extending the robot's situational awareness with a four-layered scene model. Moreover, we make the algorithm available as a Docker file.
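The four-layered structure described above can be sketched as plain data classes (a hypothetical Python sketch: class and field names are ours, not the authors' API, and the actual system jointly optimizes these layers as a factor graph rather than storing them as static containers):

```python
# Illustrative container mirroring the keyframes / walls / rooms / floors
# layers of a situational graph; purely a data-shape sketch, not the paper's
# optimizable factor graph.
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    pose: tuple          # (x, y, z, yaw) robot pose estimate

@dataclass
class Wall:
    plane: tuple         # plane coefficients (a, b, c, d)

@dataclass
class Room:
    wall_ids: list       # indices of the wall planes enclosing the room

@dataclass
class Floor:
    room_ids: list       # rooms gathered on this floor level

@dataclass
class SituationalGraph:
    keyframes: list = field(default_factory=list)
    walls: list = field(default_factory=list)
    rooms: list = field(default_factory=list)
    floors: list = field(default_factory=list)

    def add_room(self, wall_ids):
        # a room node constrains the set of wall planes it encompasses
        self.rooms.append(Room(wall_ids))
        return len(self.rooms) - 1

g = SituationalGraph()
g.walls = [Wall((1, 0, 0, -2)), Wall((0, 1, 0, -3))]
r = g.add_room([0, 1])
g.floors.append(Floor([r]))
```

In the real system each layer would additionally carry factors (e.g. wall-to-keyframe observations, room-to-wall constraints) that the back-end optimizes jointly.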

Detailed reference viewed: 158 (4 UL)
Full Text
Peer Reviewed
Visual SLAM: What Are the Current Trends and What to Expect?
Tourani, Ali UL; Bavle, Hriday UL; Sanchez Lopez, Jose Luis UL et al

in Sensors (2022), 22(23), 9297

In recent years, Simultaneous Localization and Mapping (SLAM) systems have shown significant performance, accuracy, and efficiency gains. In this regard, Visual Simultaneous Localization and Mapping (VSLAM) methods refer to the SLAM approaches that employ cameras for pose estimation and map reconstruction, and are preferred over Light Detection And Ranging (LiDAR)-based methods due to their lighter weight, lower acquisition costs, and richer environment representation. Hence, several VSLAM approaches have evolved using different camera types (e.g., monocular or stereo), have been tested on various datasets (e.g., Technische Universität München (TUM) RGB-D or European Robotics Challenge (EuRoC)) and in different conditions (i.e., indoors and outdoors), and employ multiple methodologies to better understand their surroundings. These variations have made this topic popular for researchers and have resulted in a wide range of methods. The primary intent of this paper is therefore to assimilate the works in VSLAM and present their recent advances, along with discussing the existing challenges and trends. This survey provides a big picture of the current focuses of the robotics and VSLAM fields, based on the objectives of the state-of-the-art. The paper provides an in-depth literature survey of fifty impactful articles published in the VSLAM domain. These manuscripts have been classified by different characteristics, including the novelty domain, objectives, employed algorithms, and semantic level. The paper also discusses the current trends and contemporary directions of VSLAM techniques that may help researchers investigate them.

Detailed reference viewed: 97 (6 UL)
Full Text
Peer Reviewed
Unclonable human-invisible machine vision markers leveraging the omnidirectional chiral Bragg diffraction of cholesteric spherical reflectors
Agha, Hakam UL; Geng, Yong UL; Ma, Xu UL et al

in Light: Science and Applications (2022), 11, 309

The seemingly simple step of molding a cholesteric liquid crystal into spherical shape, yielding a Cholesteric Spherical Reflector (CSR), has profound optical consequences that open a range of opportunities for potentially transformative technologies. The chiral Bragg diffraction resulting from the helical self-assembly of cholesterics becomes omnidirectional in CSRs. This turns them into selective retroreflectors that are exceptionally easy to distinguish—regardless of background—by simple and low-cost machine vision, while at the same time they can be made largely imperceptible to human vision. This allows them to be distributed in human-populated environments, laid out in the form of QR-code-like markers that help robots and Augmented Reality (AR) devices to operate reliably and to identify items in their surroundings. At the scale of individual CSRs, unpredictable features within each marker turn them into Physical Unclonable Functions (PUFs), of great value for secure authentication. Via the machines reading them, CSR markers can thus act as trustworthy yet unobtrusive links between the physical world (buildings, vehicles, packaging, ...) and its digital twin computer representation. This opens opportunities to address pressing challenges in logistics and supply chain management, recycling and the circular economy, sustainable construction of the built environment, and many other fields of individual, societal and commercial importance.

Detailed reference viewed: 134 (7 UL)
Full Text
A Lightweight Universal Gripper with Low Activation Force for Aerial Grasping
Kremer, Paul UL; Rahimi Nohooji, Hamed UL; Sanchez Lopez, Jose Luis UL et al

E-print/Working paper (2022)

Soft robotic grippers have numerous advantages that address challenges in dynamic aerial grasping. Typical multi-fingered soft grippers recently showcased for aerial grasping are highly dependent on the direction of the target object for successful grasping. This study pushes the boundaries of dynamic aerial grasping by developing an omnidirectional system for autonomous aerial manipulation. In particular, the paper investigates the design, fabrication, and experimental verification of a novel, highly integrated, modular, sensor-rich, universal jamming gripper specifically designed for aerial applications. Leveraging recent developments in particle jamming and soft granular materials, the presented gripper produces a substantial holding force while being very lightweight and energy-efficient and requiring only a low activation force. We show that the holding force can be improved by up to 50% by adding an additive to the membrane's silicone mixture. The experiments show that our lightweight gripper can develop up to 15 N of holding force with an activation force as low as 2.5 N, even without geometric interlocking. Finally, a pick-and-release task is performed under real-world conditions by mounting the gripper onto a multi-copter. The developed aerial grasping system features many useful properties, such as resilience and robustness to collisions and an inherent passive compliance that decouples the UAV from the environment.

Detailed reference viewed: 112 (1 UL)
Full Text
Peer Reviewed
Advanced Situational Graphs for Robot Navigation in Structured Indoor Environments
Bavle, Hriday UL; Sanchez Lopez, Jose Luis UL; Shaheer, Muhammad UL et al

E-print/Working paper (2022)

Mobile robots extract information from their environment to understand their current situation and enable intelligent decision making and autonomous task execution. In our previous work, we introduced the concept of Situational Graphs (S-Graphs), which combine in a single optimizable graph the robot keyframes and a representation of the environment with geometric, semantic and topological abstractions. Although S-Graphs were built and optimized in real-time and demonstrated state-of-the-art results, they are limited to specific structured environments with hand-tuned dimensions of rooms and corridors. In this work, we present an advanced version of the Situational Graphs (S-Graphs+), consisting of a five-layered optimizable graph that includes: (1) a metric layer along with a graph of free-space clusters, (2) a keyframe layer where the robot poses are registered, (3) a metric-semantic layer consisting of the extracted planar walls, (4) a novel rooms layer constraining the extracted planar walls, and (5) a novel floors layer encompassing the rooms within a given floor level. S-Graphs+ demonstrates improved performance over S-Graphs, efficiently extracting the room information while simultaneously improving the pose estimate of the robot, thus extending the robot's situational awareness in the form of a five-layered environmental model.

Detailed reference viewed: 98 (2 UL)
Full Text
Peer Reviewed
A Hybrid Modelling Approach For Aerial Manipulators
Kremer, Paul UL; Sanchez Lopez, Jose Luis UL; Voos, Holger UL

in Journal of Intelligent and Robotic Systems (2022)

Aerial manipulators (AMs) exhibit particularly challenging, non-linear dynamics; the UAV and its manipulator form a tightly coupled dynamic system, mutually impacting each other. The mathematical model describing these dynamics forms the core of many solutions in non-linear control and deep reinforcement learning. Traditionally, the formulation of the dynamics involves Euler angle parametrization in the Lagrangian framework or quaternion parametrization in the Newton-Euler framework. The former has the disadvantage of introducing singularities, and the latter of being algorithmically complex. This work presents a hybrid solution combining the benefits of both, namely a quaternion approach leveraging the Lagrangian framework, connecting the singularity-free parameterization with the algorithmic simplicity of the Lagrangian approach. We do so by offering detailed insights into the kinematic modeling process and the formulation of the dynamics of a general aerial manipulator. The obtained dynamics model is validated experimentally against a real-time physics engine. A practical application of the obtained dynamics model is shown in the context of a computed torque feedback controller (feedback linearization), where we analyze its real-time capability with increasingly complex AM models.
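To illustrate why the quaternion parameterization is singularity-free, the standard attitude kinematics used in such formulations can be sketched and integrated numerically (a generic textbook sketch, not the paper's full Lagrangian derivation):

```python
# Singularity-free attitude kinematics: q_dot = 0.5 * q (x) (0, omega),
# where (x) is the Hamilton product. Unlike Euler angles, this has no
# gimbal-lock configuration; a textbook illustration, not the paper's model.
import math

def quat_mul(q, p):
    # Hamilton product of two quaternions (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_derivative(q, omega):
    # q_dot = 0.5 * q (x) (0, omega), omega in body frame
    wq = quat_mul(q, (0.0,) + tuple(omega))
    return tuple(0.5 * c for c in wq)

def integrate(q, omega, dt):
    # simple Euler step followed by renormalisation
    qd = quat_derivative(q, omega)
    q = tuple(a + dt * b for a, b in zip(q, qd))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

q = (1.0, 0.0, 0.0, 0.0)                      # identity attitude
for _ in range(100):
    q = integrate(q, (0.0, 0.0, 1.0), 0.01)   # 1 rad/s yaw for 1 s
```

After one second of 1 rad/s yaw, q should be close to (cos 0.5, 0, 0, sin 0.5), i.e. a 1 rad rotation about z.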

Detailed reference viewed: 82 (1 UL)
Full Text
From SLAM to Situational Awareness: Challenges and Survey
Bavle, Hriday UL; Sanchez Lopez, Jose Luis UL; Schmidt F, Eduardo et al

E-print/Working paper (2021)

Detailed reference viewed: 247 (6 UL)
Full Text
Peer Reviewed
Semantic situation awareness of ellipse shapes via deep learning for multirotor aerial robots with a 2D LIDAR
Sanchez Lopez, Jose Luis UL; Castillo Lopez, Manuel UL; Voos, Holger UL

in 2020 International Conference on Unmanned Aircraft Systems (ICUAS) (2020, September)

In this work, we present a semantic situation awareness system for multirotor aerial robots equipped with a 2D LIDAR sensor, focusing on the understanding of the environment and assuming that a drift-free, precise localization of the robot is available (e.g. given by GNSS/INS or a motion capture system). Our algorithm generates in real-time a semantic map of the objects of the environment as a list of ellipses represented by their radii, and their pose and velocity, both in world coordinates. Two different Convolutional Neural Network (CNN) architectures are proposed and trained using an artificially generated dataset and a custom loss function, to detect ellipses in a segmented (i.e. with one single object) LIDAR measurement. In cascade, a specifically designed indirect EKF estimates the ellipse-based semantic map in world coordinates, as well as the ellipses' velocities. We have quantitatively and qualitatively evaluated the performance of our proposed situation awareness system. Two sets of Software-In-The-Loop simulations using CoppeliaSim, with one and with multiple static and moving cylindrical objects, are used to evaluate the accuracy and performance of our algorithm. In addition, we have demonstrated the robustness of our proposed algorithm when handling real environments thanks to real laboratory experiments with static non-cylindrical objects (i.e. a barrel) and moving persons.
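A much-simplified stand-in for the estimation step described above is a constant-velocity Kalman filter on one object's centre coordinate (the paper's indirect EKF additionally estimates the ellipse radii and pose; all names and noise values here are illustrative):

```python
# Constant-velocity Kalman filter for one tracked coordinate of an object's
# centre; a toy version of the map-and-velocity estimation described above.
def kf_step(x, P, z, dt, q=1e-3, r=1e-2):
    # state x = [position, velocity]; z = measured position; q, r = noise vars
    # predict with F = [[1, dt], [0, 1]] and Q ~ diag(q, q)
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # update with position-only measurement, H = [1, 0]
    S = Pp[0][0] + r
    K = [Pp[0][0] / S, Pp[1][0] / S]
    y = z - xp[0]                      # innovation
    x_new = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    P_new = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
             [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return x_new, P_new

# track an object moving at 1 m/s, observed every 0.1 s for 5 s
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 51):
    x, P = kf_step(x, P, z=0.1 * k, dt=0.1)
```

With noise-free measurements on a straight line, the velocity estimate converges towards the true 1 m/s; the paper's filter does the analogous thing for each ellipse's centre in world coordinates.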

Detailed reference viewed: 524 (21 UL)
Full Text
Peer Reviewed
A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles
Cazzato, Dario UL; Cimarelli, Claudio UL; Sanchez Lopez, Jose Luis UL et al

in Journal of Imaging (2020), 6(8), 78

The spread of Unmanned Aerial Vehicles (UAVs) in the last decade revolutionized many application fields. Most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, maps, and labeling. To achieve such complex goals, a high-level module is exploited to build semantic knowledge, leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information concerning what is sensed. All in all, the detection of objects is undoubtedly the most important low-level task, and the most employed sensors to accomplish it are by far RGB cameras, due to costs, dimensions, and the wide literature on RGB-based object detection. This survey presents recent advancements in 2D object detection for the case of UAVs, focusing on the differences, strategies, and trade-offs between the generic problem of object detection and the adaptation of such solutions to UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by the works in the state of the art rather than by hardware, physical and/or technological constraints.

Detailed reference viewed: 457 (7 UL)
Full Text
Peer Reviewed
Trajectory Tracking for Aerial Robots: an Optimization-Based Planning and Control Approach
Sanchez Lopez, Jose Luis UL; Castillo Lopez, Manuel UL; Olivares Mendez, Miguel Angel UL et al

in Journal of Intelligent and Robotic Systems (2020), 100

In this work, we present an optimization-based trajectory tracking solution for multirotor aerial robots given a geometrically feasible path. A trajectory planner generates a minimum-time, kinematically and dynamically feasible trajectory that includes not only standard restrictions such as continuity and limits on the trajectory, constraints in the waypoints, and maximum distance between the planned trajectory and the given path, but also restrictions on the actuators of the aerial robot based on its dynamic model, guaranteeing that the planned trajectory is achievable. Our novel compact multi-phase trajectory definition, as a set of two different kinds of polynomials, provides a higher semantic encoding of the trajectory, which allows calculating an optimal solution that follows a predefined simple profile. A Model Predictive Controller ensures that the planned trajectory is tracked by the aerial robot with the smallest deviation. Its novel formulation takes as inputs all the magnitudes of the planned trajectory (i.e. position and heading, velocity, and acceleration) to generate the control commands, demonstrating through in-lab real flights an improvement of the tracking performance when compared with a controller that only uses the planned position and heading. To support our optimization-based solution, we discuss the most commonly used representations of orientations, as well as the difference and the scalar error between two rotations, in both the three-dimensional and two-dimensional rotation spaces $SO(3)$ and $SO(2)$. We demonstrate that quaternions and error-quaternions have some advantages when compared to other formulations.
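The multi-phase trajectory idea can be illustrated with a toy piecewise-polynomial evaluator (the concrete phase layout below — a constant-acceleration phase followed by a constant-velocity phase — is our own simplification, not the paper's profile or its two polynomial families):

```python
# Toy evaluation of a multi-phase (piecewise-polynomial) trajectory:
# each phase is (duration, poly_coeffs), coefficients in increasing power,
# with the local time restarting at 0 at each phase boundary.
def eval_trajectory(phases, t):
    """Return position at global time t, assuming the phases were
    constructed to be continuous at the boundaries."""
    for duration, coeffs in phases:
        if t <= duration:
            return sum(c * t ** k for k, c in enumerate(coeffs))
        t -= duration
    raise ValueError("t beyond trajectory end")

# accelerate at 1 m/s^2 for 2 s, then cruise at the reached 2 m/s for 3 s
phases = [
    (2.0, [0.0, 0.0, 0.5]),   # x(t) = 0.5 t^2
    (3.0, [2.0, 2.0]),        # x(t) = 2 + 2 t  (starts where phase 1 ended)
]
```

A planner like the one above would choose the phase durations and coefficients so that position, velocity, and acceleration remain continuous and within limits, which is what makes the compact representation optimizable.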

Detailed reference viewed: 556 (15 UL)
Full Text
Peer Reviewed
A Real-Time Approach for Chance-Constrained Motion Planning with Dynamic Obstacles
Castillo Lopez, Manuel UL; Ludivig, Philippe; Sajadi-Alamdari, Seyed Amin et al

in IEEE Robotics and Automation Letters (2020), 5(2), 3620-3625

Uncertain dynamic obstacles, such as pedestrians or vehicles, pose a major challenge for optimal robot navigation with safety guarantees. Previous work on motion planning has followed two main strategies to provide a safe bound on an obstacle's space: a polyhedron, such as a cuboid, or a nonlinear differentiable surface, such as an ellipsoid. The former approach relies on disjunctive programming, which has a relatively high computational cost that grows exponentially with the number of obstacles. The latter approach needs to be linearized locally to find a tractable evaluation of the chance constraints, which dramatically reduces the remaining free space and leads to over-conservative trajectories or even infeasibility. In this work, we present a hybrid approach that eludes the pitfalls of both strategies while maintaining the original safety guarantees. The key idea consists in obtaining a safe differentiable approximation for the disjunctive chance constraints bounding the obstacles. The resulting nonlinear optimization problem is free of chance-constraint linearization and disjunctive programming, and therefore it can be efficiently solved to meet fast real-time requirements with multiple obstacles. We validate our approach through mathematical proof, simulation and real experiments with an aerial robot using nonlinear model predictive control to avoid pedestrians.
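One textbook way to make a disjunctive "outside at least one face" obstacle condition differentiable — in the spirit of, though not identical to, the safe approximation derived in the paper — is a log-sum-exp lower bound on the maximum:

```python
# Smooth, conservative surrogate for the disjunction max_i(g_i) >= 0,
# i.e. "the point is beyond at least one face of a polyhedral obstacle".
# This is a standard log-sum-exp smoothing, not the paper's exact bound.
import math

def smooth_max_lower(values, k=20.0):
    # log-sum-exp satisfies max(v) <= LSE(v) <= max(v) + log(n)/k, so
    # LSE(v) - log(n)/k is a differentiable LOWER bound on max(v):
    # requiring it to be >= 0 therefore safely implies max(v) >= 0.
    m = max(values)
    lse = m + math.log(sum(math.exp(k * (v - m)) for v in values)) / k
    return lse - math.log(len(values)) / k

def safely_outside(p, lo, hi, margin=0.0):
    # signed distances past each face of an axis-aligned 2D box obstacle;
    # the point is outside the box iff at least one of them is positive
    g = [lo[0] - p[0], p[0] - hi[0], lo[1] - p[1], p[1] - hi[1]]
    return smooth_max_lower([v - margin for v in g]) >= 0.0
```

Because the surrogate is smooth and under-approximates the true maximum, it can be handed directly to a gradient-based NMPC solver without disjunctive programming, at the cost of a small extra margin controlled by the sharpness parameter k.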

Detailed reference viewed: 763 (38 UL)
Full Text
Peer Reviewed
Vision-Based Aircraft Pose Estimation for UAVs Autonomous Inspection without Fiducial Markers
Cazzato, Dario UL; Olivares Mendez, Miguel Angel UL; Sanchez Lopez, Jose Luis UL et al

in IECON 2019-45th Annual Conference of the IEEE Industrial Electronics Society (2019, October)

The reliability of aircraft inspection is of paramount importance to the safety of flights. Continuing airworthiness of aircraft structures is largely based upon the visual detection of small defects by trained inspection personnel, an expensive, critical and time-consuming task. To this aim, Unmanned Aerial Vehicles (UAVs) can be used for autonomous inspections, as long as it is possible to localize the target while flying around it and correct the position. This work proposes a solution to detect the airplane pose with regard to the UAV's position while flying autonomously around the airframe at close range for visual inspection tasks. The system works by processing images coming from an RGB camera mounted on board, comparing incoming frames with a database of natural landmarks whose position on the airframe surface is known. The solution has been tested in real UAV flight scenarios, showing its effectiveness in localizing the pose with high precision. The advantages of the proposed method are of industrial interest since we remove many constraints that are present in state-of-the-art solutions.

Detailed reference viewed: 348 (8 UL)
Full Text
Peer Reviewed
Real-Time Human Head Imitation for Humanoid Robots
Cazzato, Dario UL; Cimarelli, Claudio UL; Sanchez Lopez, Jose Luis UL et al

in Proceedings of the 2019 3rd International Conference on Artificial Intelligence and Virtual Reality (2019, July)

The ability of robots to imitate human movements has been an active research topic since the dawn of robotics. Obtaining a realistic imitation is essential in terms of perceived quality in human-robot interaction, but it is still a challenge due to the lack of an effective mapping between human movements and the degrees of freedom of robotic systems. While high-level programming interfaces, software and simulation tools have simplified robot programming, there is still a strong gap between robot control and natural user interfaces. In this paper, a system to reproduce on a robot the head movements of a user in the field of view of a consumer camera is presented. The system recognizes the presence of a user and its head pose in real-time by using a deep neural network, in order to extract head pose angles and command the robot's head movements accordingly, obtaining a realistic imitation. At the same time, the system represents a natural user interface to control the Aldebaran NAO and Pepper humanoid robots with head movements, with applications in human-robot interaction.

Detailed reference viewed: 442 (18 UL)
Full Text
Peer Reviewed
Deep learning based semantic situation awareness system for multirotor aerial robots using LIDAR
Sanchez Lopez, Jose Luis UL; Sampedro, Carlos; Cazzato, Dario UL et al

in 2019 International Conference on Unmanned Aircraft Systems (ICUAS) (2019, June)

In this work, we present a semantic situation awareness system for multirotor aerial robots, based on 2D LIDAR measurements, targeting the understanding of the environment and assuming a precise robot localization as an input of our algorithm. Our proposed situation awareness system calculates a semantic map of the objects of the environment as a list of circles represented by their radius, and the position and the velocity of their center in world coordinates. Our proposed algorithm includes three main parts. First, the LIDAR measurements are preprocessed and an object segmentation clusters the candidate objects present in the environment. Secondly, a Convolutional Neural Network (CNN), designed and trained using an artificially generated dataset, computes the radius and the position of the center of individual circles in sensor coordinates. Finally, an indirect EKF provides the estimate of the semantic map in world coordinates, including the velocity of the center of the circles. We have quantitatively and qualitatively evaluated the performance of our proposed situation awareness system by means of Software-In-The-Loop simulations using V-REP, with one and with multiple static and moving cylindrical objects in the scene, obtaining results that support our proposed algorithm. In addition, we have demonstrated that our proposed algorithm is capable of handling real environments thanks to real laboratory experiments with static non-cylindrical (i.e. a barrel) and moving (i.e. a person) objects.
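As a purely geometric baseline for the circle-regression step above, a circle's centre and radius can be recovered in closed form from three points of a segmented cluster (the paper trains a CNN on full LIDAR segments instead; this only illustrates the quantity the network regresses):

```python
# Circumcircle of three 2D points: the circle a noise-free LIDAR cluster of a
# cylindrical object would lie on. Real scans are noisy and partial, which is
# one motivation for the learned regressor described above.
def circle_from_3pts(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # twice the signed area of the triangle; zero means no unique circle
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    s1 = x1 * x1 + y1 * y1
    s2 = x2 * x2 + y2 * y2
    s3 = x3 * x3 + y3 * y3
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    radius = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), radius

# three points on the circle of centre (1, 2) and radius 3
centre, radius = circle_from_3pts((4.0, 2.0), (1.0, 5.0), (-2.0, 2.0))
```

With more than three (noisy) points, a least-squares circle fit over the whole cluster would be the natural geometric counterpart to the CNN's output.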

Detailed reference viewed: 523 (5 UL)