Hierarchical Visual SLAM based on Fiducial Markers
Tourani, Ali; Bavle, Hriday; Sanchez Lopez, Jose Luis; et al.
Scientific Conference (2023, June 02)

Fiducial markers can encode rich information about the environment and aid Visual SLAM (VSLAM) approaches in reconstructing maps with practical semantic information. Current marker-based VSLAM approaches mainly utilize markers to improve feature detection in low-feature environments and/or to incorporate loop-closure constraints, generating only low-level geometric maps that are prone to inaccuracies in complex environments. To bridge this gap, this paper presents a VSLAM approach that uses a monocular camera and fiducial markers to generate hierarchical representations of the environment while improving the camera pose estimate. The proposed approach detects semantic entities in the surroundings, including walls, corridors, and rooms encoded within markers, and adds appropriate topological constraints among them. Experimental results on a real-world dataset demonstrate that the proposed approach outperforms a traditional marker-based VSLAM baseline in accuracy, despite adding new constraints while creating enhanced map representations. Furthermore, the quality of the reconstructed map compares favorably with that of a map reconstructed using a LiDAR SLAM approach.

From SLAM to Situational Awareness: Challenges and Survey
Bavle, Hriday; Sanchez Lopez, Jose Luis; et al.
In Sensors (2023), 23(10), 4849
The capability of a mobile robot to efficiently and safely perform complex missions is limited by its knowledge of the environment, namely the situation. Advanced reasoning, decision-making, and execution skills enable an intelligent agent to act autonomously in unknown environments. Situational Awareness (SA) is a fundamental capability of humans that has been deeply studied in various fields, such as psychology, military, aerospace, and education. Nevertheless, it has yet to be considered in robotics, which has focused on single compartmentalized concepts such as sensing, spatial perception, sensor fusion, state estimation, and Simultaneous Localization and Mapping (SLAM). Hence, the present research aims to connect the broad existing multidisciplinary knowledge to pave the way for a complete SA system for mobile robotics, which we deem paramount for autonomy. To this end, we define the principal components that structure robotic SA and their areas of competence. Accordingly, this paper investigates each aspect of SA, surveying the state-of-the-art robotics algorithms that cover them, and discusses their current limitations. Remarkably, essential aspects of SA remain immature, since current algorithmic development restricts their performance to specific environments. Nevertheless, Artificial Intelligence (AI), particularly Deep Learning (DL), has brought new methods to bridge the gap that keeps these fields from deployment in real-world scenarios. Furthermore, an opportunity has been identified to interconnect the vastly fragmented space of robotic comprehension algorithms through the mechanism of the Situational Graph (S-Graph), a generalization of the well-known scene graph. Therefore, we finally shape our vision for the future of robotic situational awareness by discussing interesting recent research directions.
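The layered organization sketched in this abstract (low-level keyframes linked to higher-level semantic entities such as walls and rooms) can be illustrated with a toy data structure. The class and field names below are hypothetical illustrations for exposition only, not the authors' S-Graph implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A vertex in the situational graph: a keyframe or a semantic entity."""
    node_id: int
    layer: str             # e.g. "keyframe", "wall", "room", "floor"
    payload: dict = field(default_factory=dict)

@dataclass
class SituationalGraph:
    """Minimal hierarchical graph: nodes grouped by layer plus typed edges."""
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (src_id, dst_id, constraint_type)

    def add_node(self, node_id, layer, **payload):
        self.nodes[node_id] = Node(node_id, layer, payload)

    def add_edge(self, src, dst, constraint_type):
        self.edges.append((src, dst, constraint_type))

    def layer(self, name):
        """All nodes belonging to one layer of the hierarchy."""
        return [n for n in self.nodes.values() if n.layer == name]

# Build a toy graph: two keyframes observe a wall; the wall belongs to a room.
g = SituationalGraph()
g.add_node(0, "keyframe")
g.add_node(1, "keyframe")
g.add_node(2, "wall")
g.add_node(3, "room")
g.add_edge(0, 2, "observation")
g.add_edge(1, 2, "observation")
g.add_edge(2, 3, "topological")

print(len(g.layer("keyframe")))  # → 2
```

In a real system the edges would carry geometric constraints optimized jointly with the robot poses; here they only record the hierarchy's topology.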
Visual SLAM: What Are the Current Trends and What to Expect?
Tourani, Ali; Bavle, Hriday; Sanchez Lopez, Jose Luis; et al.
In Sensors (2022), 22(23), 9297

In recent years, Simultaneous Localization and Mapping (SLAM) systems have shown significant gains in performance, accuracy, and efficiency. In this regard, Visual Simultaneous Localization and Mapping (VSLAM) methods refer to SLAM approaches that employ cameras for pose estimation and map reconstruction; they are preferred over Light Detection And Ranging (LiDAR)-based methods due to their lighter weight, lower acquisition costs, and richer environment representation. Hence, several VSLAM approaches have evolved using different camera types (e.g., monocular or stereo), have been tested on various datasets (e.g., Technische Universität München (TUM) RGB-D or European Robotics Challenge (EuRoC)) and in different conditions (i.e., indoors and outdoors), and employ multiple methodologies to better understand their surroundings. These variations have made the topic popular among researchers and have resulted in a wide range of methods. The primary intent of this paper is therefore to assimilate the wide range of works in VSLAM and present their recent advances, along with a discussion of existing challenges and trends. This survey gives a big picture of current research focuses in the robotics and VSLAM fields, based on the objectives of the state of the art. The paper provides an in-depth literature survey of fifty impactful articles published in the VSLAM domain.
The surveyed manuscripts are classified by different characteristics, including novelty domain, objectives, employed algorithms, and semantic level. The paper also discusses current trends and directions in VSLAM techniques that may help researchers investigate them.

Unclonable human-invisible machine vision markers leveraging the omnidirectional chiral Bragg diffraction of cholesteric spherical reflectors
Agha, Hakam; Geng, Yong; Ma, Xu; et al.
In Light: Science and Applications (2022), 11, 309. doi: 10.1038/s41377-022-01002-4

The seemingly simple step of molding a cholesteric liquid crystal into a spherical shape, yielding a Cholesteric Spherical Reflector (CSR), has profound optical consequences that open a range of opportunities for potentially transformative technologies. The chiral Bragg diffraction resulting from the helical self-assembly of cholesterics becomes omnidirectional in CSRs. This turns them into selective retroreflectors that are exceptionally easy to distinguish, regardless of background, by simple and low-cost machine vision, while at the same time they can be made largely imperceptible to human vision. This allows them to be distributed in human-populated environments, laid out in the form of QR-code-like markers that help robots and Augmented Reality (AR) devices operate reliably and identify items in their surroundings. At the scale of individual CSRs, unpredictable features within each marker turn them into Physical Unclonable Functions (PUFs), of great value for secure authentication. Via the machines reading them, CSR markers can thus act as trustworthy yet unobtrusive links between the physical world (buildings, vehicles, packaging, ...)
and its digital twin computer representation. This opens opportunities to address pressing challenges in logistics and supply-chain management, recycling and the circular economy, sustainable construction of the built environment, and many other fields of individual, societal, and commercial importance.
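The PUF-based authentication mentioned above boils down to comparing an observed marker fingerprint against enrolled references and accepting the closest match within a noise tolerance. The sketch below uses bit-string fingerprints and Hamming distance purely as an illustrative stand-in: the actual CSR readout is an optical measurement, and the marker IDs and bit lengths here are hypothetical:

```python
def hamming(a: str, b: str) -> int:
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def authenticate(observed: str, enrolled: dict, max_dist: int = 2):
    """Return the ID of the closest enrolled fingerprint, or None.

    A small tolerance absorbs readout noise; anything farther away is
    treated as an unknown (potentially cloned) marker.
    """
    best_id, best_d = None, max_dist + 1
    for marker_id, fingerprint in enrolled.items():
        d = hamming(observed, fingerprint)
        if d < best_d:
            best_id, best_d = marker_id, d
    return best_id if best_d <= max_dist else None

# Enrolled fingerprints for two physical markers (hypothetical 12-bit reads).
db = {"crate-17": "101100111010", "crate-42": "010011000110"}

print(authenticate("101100111000", db))  # one bit flipped → "crate-17"
print(authenticate("111111111111", db))  # too far from both → None
```

The tolerance threshold trades off false rejects (noisy reads of genuine markers) against false accepts, which is the central design parameter in any PUF matching scheme.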