
AUTONOMOUS SPACE ROBOT
FOR CONSTRUCTING SATELLITES AND INTERSTELLAR COLONIES
Objectives and Scope of This Paper
This paper presents a forward-looking exploration of the transformative role that autonomous space robotics will play in the construction of orbital and planetary infrastructure. Specifically, it examines the current state of robotic technologies that enable autonomous in-space assembly, including their mechanical, computational, and algorithmic foundations. The discussion extends to an evaluation of both operational and experimental missions, such as those in low Earth orbit and prototype demonstrations for extraterrestrial application, that exemplify the capabilities and maturity of these systems. By comparing use cases across different gravitational and environmental conditions, the paper assesses the adaptability and scalability of autonomous robotic platforms. In addition, we identify and analyze the technological, operational, and policy challenges that must be addressed to transition from proof-of-concept demonstrations to fully functional interplanetary construction ecosystems. Ultimately, this study proposes a structured roadmap for the advancement and deployment of autonomous space robotics, with the goal of demonstrating how such systems will underpin the next generation of space infrastructure. By synthesizing existing research and anticipating future developments, the paper envisions a not-so-distant future in which intelligent machines autonomously lay the groundwork for a sustained human and robotic presence beyond Earth.
Visual Servoing and 3D Spatial Awareness

In the extreme conditions of space, visual feedback plays a crucial role in operations such as docking and assembling satellite components. Robust perception therefore requires integrating techniques such as image-based visual servoing (IBVS), augmented with deep learning, into traditional control systems so that they can cope with the dynamic lighting conditions encountered in orbit. One study demonstrated an IBVS scheme driven by neural networks to perform on-orbit capture tasks [25]. Earlier, Umeda et al. (1997) introduced a method that combines touch and vision, known as tactual-visual servoing, which allows robots to follow edges or make contact more precisely by fusing camera images with touch-sensor readings. It is especially useful for fine-grained tasks such as aligning connectors or tightening bolts during space construction [26].
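To make the control principle behind IBVS concrete, the sketch below implements the classic image-based servoing law v = -lambda * L^+ * e, in which the error e between tracked and desired image features is mapped to a six-degree-of-freedom camera velocity through the pseudo-inverse of the interaction matrix L. This is a minimal illustrative example under stated assumptions, not the neural-network approach of [25]; the feature coordinates, depth estimates, and gain are hypothetical values chosen only to show the computation.

import numpy as np

def interaction_matrix(points, depths):
    """Stack the standard 2x6 image Jacobian (interaction matrix) rows
    for each normalized image point (x, y) at an estimated depth Z."""
    rows = []
    for (x, y), z in zip(points, depths):
        rows.append([-1.0 / z, 0.0, x / z, x * y, -(1.0 + x ** 2), y])
        rows.append([0.0, -1.0 / z, y / z, 1.0 + y ** 2, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(current_pts, desired_pts, depths, gain=0.5):
    """Compute a 6-DoF camera twist v = -gain * pinv(L) @ e,
    where e is the stacked image-feature error."""
    error = (np.array(current_pts) - np.array(desired_pts)).ravel()
    L = interaction_matrix(current_pts, depths)
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical usage: four tracked fiducial corners on a grapple fixture,
# expressed as normalized image coordinates with rough depth estimates (m).
current = [(0.12, 0.05), (-0.10, 0.06), (-0.11, -0.08), (0.13, -0.07)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
depths = [2.0, 2.0, 2.0, 2.0]
v = ibvs_velocity(current, desired, depths)
print("Commanded camera twist [vx vy vz wx wy wz]:", v)

In an actual on-orbit system the depth estimates would come from stereo vision or LiDAR rather than fixed values, and the commanded twist would be passed to the spacecraft or manipulator controller; the sketch only illustrates how feature error is converted into a motion command.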





