References of "Gaudilliere, Vincent"
Full Text
Peer Reviewed
A survey on deep learning-based monocular spacecraft pose estimation: Current state, limitations and prospects
Pauly, Leo UL; Rharbaoui, Wassim UL; Shneider, Carl UL et al

in Acta Astronautica (2023), 212

Estimating the pose of an uncooperative spacecraft is an important computer vision problem for enabling the deployment of automatic vision-based systems in orbit, with applications ranging from on-orbit servicing to space debris removal. Following the general trend in computer vision, a growing number of works have focused on leveraging Deep Learning (DL) methods to address this problem. However, despite promising research-stage results, major challenges still prevent the use of such methods in real-life missions. In particular, the deployment of such computation-intensive algorithms remains under-investigated, and the performance drop when training on synthetic images and testing on real ones has yet to be mitigated. The primary goal of this survey is to provide a comprehensive description of current DL-based methods for spacecraft pose estimation. The secondary goal is to characterise the limitations standing in the way of effectively deploying DL-based spacecraft pose estimation solutions for reliable autonomous vision-based applications. To this end, the survey first summarises the existing algorithms according to two approaches: hybrid modular pipelines and direct end-to-end regression methods. Algorithms are compared not only in terms of pose accuracy but also with a focus on network architectures and model sizes, keeping potential deployment in mind. The survey then discusses the monocular spacecraft pose estimation datasets currently used to train and test these methods, along with the data generation methods (simulators and testbeds), the domain gap and the performance drop between synthetically generated and lab/space-collected images, and potential solutions. Finally, the paper presents open research questions and future directions in the field, drawing parallels with other computer vision applications.

Full Text
Peer Reviewed
Space Debris: Are Deep Learning-based Image Enhancements part of the Solution?
Jamrozik, Michele Lynn UL; Gaudilliere, Vincent UL; Mohamed Ali, Mohamed Adel UL et al

in Proceedings International Symposium on Computational Sensing (2023)

The volume of space debris currently orbiting the Earth is reaching an unsustainable level at an accelerated pace. The detection, tracking, identification, and differentiation between orbit-defined, registered spacecraft and rogue/inactive space “objects” is critical to asset protection. The primary objective of this work is to investigate the validity of Deep Neural Network (DNN) solutions for overcoming the limitations and image artefacts most prevalent in images captured with monocular cameras in the visible light spectrum. In this work, a hybrid UNet-ResNet34 Deep Learning (DL) architecture pre-trained on the ImageNet dataset is developed. Image degradations addressed include blurring, exposure issues, poor contrast, and noise. The shortage of space-generated data suitable for supervised DL is also addressed. A visual comparison between the URes34P model developed in this work and the existing state of the art in deep learning image enhancement methods relevant to images captured in space is presented. Based upon visual inspection, it is determined that our UNet model is capable of correcting for space-related image degradations and merits further investigation to reduce its computational complexity.
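The encoder-decoder idea behind a UNet with a residual (ResNet-style) encoder can be illustrated with a minimal sketch. This is a toy stand-in, not the authors' URes34P model: the block counts, channel widths, and absence of ImageNet pre-training are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block loosely mirroring a ResNet-34 basic block."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(c_out)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(c_out)
        self.skip = (nn.Identity() if stride == 1 and c_in == c_out
                     else nn.Conv2d(c_in, c_out, 1, stride, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return self.relu(h + self.skip(x))

class TinyURes(nn.Module):
    """Toy UNet with a residual encoder: enhanced = model(degraded)."""
    def __init__(self):
        super().__init__()
        self.enc1 = ResBlock(3, 16)             # full resolution
        self.enc2 = ResBlock(16, 32, stride=2)  # 1/2 resolution
        self.enc3 = ResBlock(32, 64, stride=2)  # 1/4 resolution
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = ResBlock(64, 32)            # 32 upsampled + 32 skip
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = ResBlock(32, 16)            # 16 upsampled + 16 skip
        self.out = nn.Conv2d(16, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

model = TinyURes().eval()
with torch.no_grad():
    enhanced = model(torch.rand(1, 3, 64, 64))
print(enhanced.shape)  # torch.Size([1, 3, 64, 64])
```

The skip connections carry full-resolution detail past the bottleneck, which is what makes UNet-style models attractive for restoration rather than recognition.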

Full Text
Self-Supervised Learning for Place Representation Generalization across Appearance Changes
Musallam, Mohamed Adel; Gaudilliere, Vincent UL; Aouada, Djamila

E-print/Working paper (2023)

Full Text
Pose Estimation of a Known Texture-Less Space Target using Convolutional Neural Networks
Rathinam, Arunkumar UL; Gaudilliere, Vincent UL; Pauly, Leo UL et al

in 73rd International Astronautical Congress, Paris 18-22 September 2022 (2022, September)

Orbital debris removal and On-orbit Servicing, Assembly and Manufacturing (OSAM) are the main areas for future robotic space missions. To achieve intelligence and autonomy in these missions and to carry out robot operations, autonomous guidance and navigation, especially vision-based navigation, is essential. With recent advances in machine learning, state-of-the-art Deep Learning (DL) approaches for object detection and camera pose estimation have reached performance on par with classical approaches and can be used for target pose estimation in relative navigation scenarios. State-of-the-art DL-based spacecraft pose estimation approaches are suitable for any known target with significant surface texture. However, they are less applicable when the target is a texture-less and symmetric object like a rocket nozzle. This paper investigates a novel ellipsoid-based approach combined with convolutional neural networks for texture-less space object pose estimation. The paper also presents the dataset used for the study, featuring a new texture-less space target, an apogee kick motor; it includes synthetic images generated with a simulator developed for rendering space imagery.
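The geometric core of an ellipsoid-based formulation is that a pinhole camera projects an ellipsoid (a quadric) to an ellipse (a conic): with dual quadric Q* and projection matrix P, the dual image conic is C* = P Q* Pᵀ. A minimal numpy sketch of that projection, independent of the paper's actual network, using a sphere target so the result is easy to verify against the classical apparent-radius formula:

```python
import numpy as np

def dual_quadric(semi_axes, center):
    """4x4 dual quadric of an axis-aligned ellipsoid (defined up to scale)."""
    Q0 = np.diag([semi_axes[0]**2, semi_axes[1]**2, semi_axes[2]**2, -1.0])
    T = np.eye(4)
    T[:3, 3] = center                        # translate ellipsoid to `center`
    return T @ Q0 @ T.T

def project_to_conic(K, R, t, Qstar):
    """Project the dual quadric: C* = P Q* P^T, then invert to a point conic."""
    P = K @ np.hstack([R, t.reshape(3, 1)])  # 3x4 projection matrix
    Cstar = P @ Qstar @ P.T
    C = np.linalg.inv(Cstar)                 # point conic (up to scale)
    return C / C[0, 0]                       # normalise for readability

# Sphere of radius r at distance d on the optical axis, focal length f.
f, r, d = 100.0, 0.5, 5.0
K = np.diag([f, f, 1.0])
C = project_to_conic(K, np.eye(3), np.zeros(3),
                     dual_quadric([r, r, r], [0.0, 0.0, d]))

# The image conic is the circle x^2 + y^2 = -C[2,2] (in normalised form),
# whose radius matches the classical result f*r/sqrt(d^2 - r^2).
radius = np.sqrt(-C[2, 2])
print(radius, f * r / np.sqrt(d**2 - r**2))
```

A network predicting the projected conic (or its parameters) can therefore be supervised against this closed-form projection, which is one way such hybrid geometric/DL pipelines are built.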

Full Text
Peer Reviewed
CubeSat-CDT: A Cross-Domain Dataset for 6-DoF Trajectory Estimation of a Symmetric Spacecraft
Mohamed Ali, Mohamed Adel UL; Rathinam, Arunkumar UL; Gaudilliere, Vincent UL et al

in Proceedings of the 17th European Conference on Computer Vision Workshops (ECCVW 2022) (2022)

This paper introduces a new cross-domain dataset, CubeSat-CDT, that includes 21 trajectories of a real CubeSat acquired in a laboratory setup, combined with 65 trajectories generated using two rendering engines, Unity and Blender. The three data sources feature the same 1U CubeSat and share the same camera intrinsic parameters. In addition, we conduct experiments to show the characteristics of the dataset using a novel and efficient spacecraft trajectory estimation method that leverages the information provided by the three data domains. Given a video of a target spacecraft, the proposed end-to-end approach relies on a Temporal Convolutional Network that enforces the inter-frame coherence of the estimated 6-Degree-of-Freedom spacecraft poses. The pipeline is decomposed into two stages: first, spatial features are extracted from each frame in parallel; second, these features are lifted to the space of camera poses while preserving temporal information. Our results highlight the importance of addressing the domain gap problem when proposing reliable solutions for close-range autonomous relative navigation between spacecraft. Since the nature of the training data directly impacts the performance of the final solution, the CubeSat-CDT dataset is provided to advance research in this direction.
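The two-stage design described above (per-frame spatial features, then temporal lifting to poses) can be sketched as follows. The tiny backbone, channel sizes, and quaternion pose parameterisation are illustrative assumptions, not the paper's exact network:

```python
import torch
import torch.nn as nn

class TrajectoryNet(nn.Module):
    """Stage 1: per-frame CNN features. Stage 2: temporal convs -> 6-DoF poses."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in spatial feature CNN
            nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.tcn = nn.Sequential(                 # 1D convs along the time axis
            nn.Conv1d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv1d(feat_dim, 7, 3, padding=1)) # 3 translation + 4 quaternion

    def forward(self, frames):                    # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))    # (B*T, F), frames in parallel
        feats = feats.view(b, t, -1).transpose(1, 2)   # (B, F, T) for Conv1d
        poses = self.tcn(feats).transpose(1, 2)        # (B, T, 7)
        trans, quat = poses[..., :3], poses[..., 3:]
        quat = quat / quat.norm(dim=-1, keepdim=True)  # unit quaternion per frame
        return torch.cat([trans, quat], dim=-1)

model = TrajectoryNet().eval()
with torch.no_grad():
    out = model(torch.rand(2, 8, 3, 32, 32))  # 2 clips of 8 frames each
print(out.shape)  # torch.Size([2, 8, 7])
```

The temporal convolutions are what couple neighbouring frames, so each pose estimate is smoothed by its neighbours instead of being predicted independently.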

Image Enhancement for Space Surveillance and Tracking
Jamrozik, Michele Lynn UL; Gaudilliere, Vincent UL; Mohamed Ali, Mohamed Adel UL et al

in Jamrozik, Michele Lynn; Gaudilliere, Vincent; Musallam, Mohamed Adel et al (Eds.), Proceedings of the 73rd International Astronautical Congress (2022)

Images generated in space with monocular camera payloads suffer degradations that hinder their utility in precision tracking applications, including debris identification, removal, and in-orbit servicing. To address the substandard quality of images captured in space and make them more reliable for space object tracking, several Image Enhancement (IE) techniques are investigated in this work, and two novel space IE methods are developed. The first, called REVEAL, relies upon more traditional image processing enhancement techniques and assumes a Retinex image formation model. The second is based on a UNet Deep Learning (DL) model. Image degradations addressed include blurring, exposure issues, poor contrast, and noise. The shortage of space-generated data suitable for supervised DL is also addressed. Both techniques were visually compared against the current state of the art in DL-based IE methods relevant to images captured in space. This work determines that both the REVEAL and the UNet-based DL solutions are well suited to correcting the degradations most often found in space images. In addition, enhancing images in a pre-processing stage facilitates the subsequent extraction of object contours and metrics; from these metrics, object properties such as size and orientation that enable more precise space object tracking can be determined more easily.

Keywords: Deep Learning, Space, Image Enhancement, Space Debris
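The Retinex image-formation model assumed by REVEAL treats each pixel as illumination × reflectance, which motivates classic single-scale Retinex: subtract a smoothed (illumination) estimate from the image in the log domain. A minimal numpy sketch of that idea follows; the Gaussian scale and the final rescaling are illustrative choices, not the REVEAL method itself.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma=3.0):
    """Separable Gaussian blur (boundary effects ignored for brevity)."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, img, k, mode="same")

def single_scale_retinex(img, sigma=3.0, eps=1e-6):
    """log(I) - log(G * I): removes smooth illumination, keeps reflectance."""
    log_r = np.log(img + eps) - np.log(blur(img, sigma) + eps)
    lo, hi = log_r.min(), log_r.max()
    return (log_r - lo) / (hi - lo + eps)      # rescale to [0, 1] for display

# Dark, unevenly lit synthetic frame: a bright disc under a lighting gradient.
h = w = 64
yy, xx = np.mgrid[:h, :w]
reflectance = np.where((yy - 32)**2 + (xx - 32)**2 < 100, 0.9, 0.2)
illumination = np.linspace(0.05, 0.6, w)[None, :]   # strong exposure gradient
frame = reflectance * illumination
enhanced = single_scale_retinex(frame)
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```

Because the illumination gradient varies slowly, it survives the blur and is subtracted out, while the disc edges (reflectance) remain, which is exactly the pre-processing benefit for contour extraction described above.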

Full Text
Peer Reviewed
Leveraging Equivariant Features for Absolute Pose Regression
Mohamed Ali, Mohamed Adel UL; Gaudilliere, Vincent UL; Ortiz Del Castillo, Miguel UL et al

in IEEE Conference on Computer Vision and Pattern Recognition. (2022)

Pose estimation enables vision-based systems to refer to their environment, supporting activities ranging from scene navigation to object manipulation. However, end-to-end approaches, that have achieved state-of-the-art performance in many perception tasks, are still unable to compete with 3D geometry-based methods in pose estimation. Indeed, absolute pose regression has been proven to be more related to image retrieval than to 3D structure. Our assumption is that statistical features learned by classical convolutional neural networks do not carry enough geometrical information for reliably solving this task. This paper studies the use of deep equivariant features for end-to-end pose regression. We further propose a translation and rotation equivariant Convolutional Neural Network whose architecture directly induces representations of camera motions into the feature space. In the context of absolute pose regression, this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations. Therefore, directly learning equivariant features efficiently compensates for learning intermediate representations that are indirectly equivariant yet data-intensive. Extensive experimental validation demonstrates that our lightweight model outperforms existing ones on standard datasets.
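The equivariance property discussed above — transforming the input produces the correspondingly transformed features — is easy to verify for the translation case, since circular convolution is translation-equivariant by construction. A toy numpy check, unrelated to the paper's specific rotation-equivariant architecture:

```python
import numpy as np

def circular_conv2d(img, kernel):
    """2D circular cross-correlation: translation-equivariant by construction."""
    kh, kw = kernel.shape
    out = np.zeros_like(img, dtype=float)
    for u in range(kh):
        for v in range(kw):
            # each kernel tap reads a cyclically shifted copy of the image
            out += kernel[u, v] * np.roll(img, (-u, -v), axis=(0, 1))
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))
kernel = rng.random((3, 3))
shift = (5, 2)

# Equivariance: conv(shift(img)) == shift(conv(img)), exactly.
lhs = circular_conv2d(np.roll(img, shift, axis=(0, 1)), kernel)
rhs = np.roll(circular_conv2d(img, kernel), shift, axis=(0, 1))
print(np.allclose(lhs, rhs))  # True
```

The paper's contribution is extending this kind of guarantee to rotations as well, so that camera motions in the image plane act predictably on the learned feature space instead of having to be learned from augmented data.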

Full Text
Peer Reviewed
LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural Network
Garcia Sanchez, Albert UL; Mohamed Ali, Mohamed Adel UL; Gaudilliere, Vincent UL et al

in Proceedings of Conference on Computer Vision and Pattern Recognition Workshops (2021, June)

Being capable of estimating the pose of uncooperative objects in space has been proposed as a key asset for enabling safe close-proximity operations such as space rendezvous, in-orbit servicing and active debris removal. Usual approaches for pose estimation involve classical computer vision-based solutions or the application of Deep Learning (DL) techniques. This work explores a novel DL-based methodology, using Convolutional Neural Networks (CNNs), for estimating the pose of uncooperative spacecraft. Contrary to other approaches, the proposed CNN directly regresses poses without needing any prior 3D information. Moreover, bounding boxes of the spacecraft in the image are predicted in a simple yet efficient manner. Experiments show that this work competes with the state of the art in uncooperative spacecraft pose estimation, including works that require 3D information as well as works that predict bounding boxes through sophisticated CNNs.
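A direct-regression design of this kind — no 3D model, a shared CNN trunk feeding a pose head and a lightweight bounding-box head — can be sketched as below. The trunk depth, head widths, and quaternion pose parameterisation are assumptions for illustration, not LSPnet's actual layout:

```python
import torch
import torch.nn as nn

class DirectPoseNet(nn.Module):
    """Regress pose (3 translation + 4 quaternion) and a 2D box from one image."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.trunk = nn.Sequential(              # shared feature extractor
            nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose_head = nn.Linear(feat_dim, 7)  # translation + quaternion
        self.bbox_head = nn.Linear(feat_dim, 4)  # (cx, cy, w, h), normalised

    def forward(self, img):
        f = self.trunk(img)
        pose = self.pose_head(f)
        trans, quat = pose[:, :3], pose[:, 3:]
        quat = quat / quat.norm(dim=-1, keepdim=True)  # valid unit rotation
        bbox = torch.sigmoid(self.bbox_head(f))        # keep box inside image
        return torch.cat([trans, quat], dim=-1), bbox

model = DirectPoseNet().eval()
with torch.no_grad():
    pose, bbox = model(torch.rand(2, 3, 64, 64))
print(pose.shape, bbox.shape)  # torch.Size([2, 7]) torch.Size([2, 4])
```

Sharing the trunk means the localisation signal from the box head can help the pose head attend to the spacecraft, without the sophisticated detector stages the abstract contrasts against.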

Full Text
Peer Reviewed
Spacecraft Recognition Leveraging Knowledge of Space Environment: Simulator, Dataset, Competition Design, and Analysis
Mohamed Ali, Mohamed Adel UL; Gaudilliere, Vincent UL; Ghorbel, Enjie UL et al

in 2021 IEEE International Conference on Image Processing (ICIP) (2021)
