Autonomous mobile robots are essential for advancing space exploration, performing tasks in environments that are inaccessible or hazardous to humans. This study presents the design and implementation of a two-wheel drive (2WD) autonomous robot for space-related missions. The robot prioritizes simplicity, lightweight construction, and energy efficiency—key factors for deployment in extraterrestrial settings. A hexagonal parachute was designed to ensure a safe landing. Position was obtained from a GPS module, and the distance between two points on the globe was calculated using the Haversine formula. Heading was derived from motor encoder values, and the system recalculated heading and distance every 30 seconds to fulfill the mission objectives. The electronic components were mounted on a copper board, and the wheels, main body, and parachute release mechanism were fabricated with a 3D printer to produce a functional prototype.
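The abstract names the Haversine formula for the distance between two GPS fixes and an encoder-derived heading that is recomputed every 30 seconds. The sketch below is an illustration only: it implements the Haversine distance and an initial-bearing calculation in Python, with placeholder coordinates; the 30-second navigation loop mentioned in the closing comment is described, not taken verbatim from the paper.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    """Bearing from the current fix toward the target, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

if __name__ == "__main__":
    # Example: current GPS fix and a hypothetical target point (placeholder values).
    current = (47.9184, 106.9177)
    target = (47.9200, 106.9300)
    print(f"distance: {haversine_distance(*current, *target):.1f} m")
    print(f"bearing:  {initial_bearing(*current, *target):.1f} deg")
    # On the rover, these two values would be recomputed every 30 seconds and the
    # bearing compared against the heading integrated from the motor encoders.
```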
With the rapid growth of industrial automation, estimating the 3D pose of objects has become a central challenge in random bin picking—particularly in cluttered environments with occlusions and a variety of object types. One major hurdle in developing and testing such systems is the reliance on industrial robot arms, which can cost up to $100,000 and are often unsuitable for research labs due to safety, space, and movement constraints. To overcome this, we built an open-source, 3D-printed robotic arm with six degrees of freedom. We integrated it into the ROS MoveIt framework using a URDF model, enabling inverse kinematics and motion planning. Our system uses a deep neural network trained entirely on synthetic data for 3D object pose estimation. This approach allows for fast and scalable generation of large, labeled training datasets. The network outputs belief maps and vector fields from input images; peaks are detected in the belief maps, keypoints are assigned via the vector fields, and the final object pose is computed using the Perspective-n-Point (PnP) algorithm. Using this method, we demonstrate a low-cost, real-time random bin-picking system that accurately estimates the poses of known assembly line objects.
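The arm is described by a URDF and driven through the ROS MoveIt framework, which supplies inverse kinematics and motion planning. Below is a minimal sketch, under stated assumptions, of how a grasp pose could be sent to such a setup through the ROS 1 moveit_commander Python interface; the planning-group name "arm" and the target pose values are assumptions for illustration, not details from the system.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

# Initialize MoveIt commander and a ROS node.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("bin_picking_planner", anonymous=True)

# "arm" is an assumed planning-group name defined by the URDF/SRDF.
arm = moveit_commander.MoveGroupCommander("arm")

# The target pose would normally come from the object pose estimator (PnP output)
# transformed into the robot base frame; these numbers are placeholders.
target = Pose()
target.position.x = 0.35
target.position.y = 0.00
target.position.z = 0.12
target.orientation.w = 1.0

arm.set_pose_target(target)
success = arm.go(wait=True)  # MoveIt solves IK, plans, and executes the motion
arm.stop()
arm.clear_pose_targets()
```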
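Once belief-map peaks have been detected and grouped into per-object keypoints via the vector fields, the final 6-DoF pose is recovered with Perspective-n-Point. A minimal sketch of that last step using OpenCV's cv2.solvePnP follows; the keypoint layout (a bounding-cuboid corner convention), the pixel coordinates, and the camera intrinsics are illustrative placeholders rather than values from the described system.

```python
import numpy as np
import cv2

# 3D keypoints of the known object in its own frame, e.g. the eight corners of its
# bounding cuboid in metres (placeholder values).
object_points = np.array([
    [-0.05, -0.03, -0.02], [ 0.05, -0.03, -0.02],
    [ 0.05,  0.03, -0.02], [-0.05,  0.03, -0.02],
    [-0.05, -0.03,  0.02], [ 0.05, -0.03,  0.02],
    [ 0.05,  0.03,  0.02], [-0.05,  0.03,  0.02],
], dtype=np.float32)

# 2D keypoints: peaks from the network's belief maps, assigned to this object
# instance via the predicted vector fields (placeholder pixel coordinates).
image_points = np.array([
    [312, 240], [388, 236], [395, 295], [318, 301],
    [322, 214], [398, 210], [405, 268], [328, 274],
], dtype=np.float32)

# Camera intrinsics and distortion coefficients from calibration (assumed values).
camera_matrix = np.array([[615.0, 0.0, 320.0],
                          [0.0, 615.0, 240.0],
                          [0.0,   0.0,   1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

# Solve for the object pose in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix of the object in the camera frame
    print("translation (m):", tvec.ravel())
    print("rotation:\n", rotation)
```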