Project Background

Human curiosity drives us to explore the unknown and push the boundaries of discovery. As we turn our gaze to Mars and other planets, the challenges of their rugged terrain demand new solutions beyond traditional rovers.

To explore the planet’s cliffs, caves, and vertical surfaces, we need climbing robots and agile systems capable of reaching these tough-to-access areas. These machines could reveal Mars’ secrets, help assess its potential for colonization, and even uncover signs of ancient life.

The following factors highlight why developing such versatile robotic systems is crucial:

  • Increasing interest in Mars exploration for scientific discovery and potential colonization
  • Need for versatile robotic systems to navigate challenging Martian terrain
  • Importance of vertical mobility in exploring cliff faces, lava tubes, and other geological features
  • Potential applications in future space habitats and orbital stations
  • Challenges of operating in low-gravity, low-pressure environments
  • Inspiration from NASA’s LEMUR (Limbed Excursion Mechanical Utility Robot) project

Project Goals

For our project, we plan to explore and develop technologies for a robotic climbing mechanism that operates on rough surfaces. This development includes:

  • An end-effector with exceptionally strong gripping force
  • A lightweight body to limit the torque required by the gripper and other joints
  • Strong joints for lifting the body
  • A smart climbing technique, perhaps using path planning

Problem Statement

Current robotic climbing systems lack sufficient gripping and motion-tracking capabilities, presenting a need for more versatile, lightweight systems with strong gripping end-effectors, robust joint connections, and smart path planning.

Meet the Team

Aaron Thomas: Electrical, ROS2 Implementation, Arm Design
Will Chen: Computer Vision, Path Planning, Autonomy
Jackson Erb: Mechanical Gripper Design
Sarah Glomski: Wall, Gripper, Chassis Design
Cameron Reid: Joint Design

Project Breakdown

In this section, we’ll break down the key design elements of our Cliffhanger climbing robot and provide a step-by-step guide on how to build it. We’ll cover everything from the structural components and sensor integration to the mechanisms that allow it to navigate vertical surfaces. Whether you’re a robotics enthusiast or an engineer, this detailed overview will help you understand our approach and give you the tools to replicate or adapt the design for your own projects.

Mechanical Explanation

Let’s dive into the mechanical design of our climbing robot. We’ll explore the key components, materials, and mechanisms that enable it to grip, climb, and maneuver effectively. Most of the required parts are 3D printed, along with a few small hardware parts. All 3D-printable designs and CAD files will soon be available on Thingiverse, making it easy for you to recreate or customize the parts for your own robot.

Climbing Wall

Before we could start designing the physical robot, we had to define the environment it would perform in. We decided to design a climbing wall with custom handholds spaced vertically on a board.

We started with a realistic handhold that you may find in a climbing gym, but quickly iterated to a jug hold, which is the easiest climbing hold to grip because it highly constrains the climber’s hand.

We 3D printed six of these jug holds in the same blue color to simplify our computer vision task. We screwed the jug holds into a plywood board, starting with standardized spacing of 8 inches vertically and 9 inches horizontally. After some success in initial testing, we moved to more variable spacing to test the robot’s flexibility in dynamic environments.

Gripper

For the gripper design, we were able to use a static hook due to the simplicity of the jug hold. We started with a very basic hook design and then added some complexity by having it self-guide into the hold. This took some iteration, as we had to make sure the gripper was constrained enough to support the full weight of the robot, while not getting lodged inside the jug hold.

We designed a modular interface between the gripper and the arm so that we could interchange different gripper attachments for fast iteration. Initial trials showed that the modular interface had to be 1) robust enough to support the tension and torsion of the robot’s dynamic movements, and 2) non-obstructive to the hook-jug hold interface. After some iteration, we achieved both.

The width of the gripper was modified several times to determine the optimal fit. In the end, a narrower grip was preferred because it gave the robot a more flexible range of reach.

Linear Actuator

To reach distant holds, we decided that a prismatic joint offers many benefits: it keeps our design simple yet strong and relatively lightweight. We found that this three-stage linear actuator concept from thang010146 helped us maximize strength and reach in a compact design.

Because it is a three-stage linear actuator, every additional inch added to the collapsed length adds roughly three extra inches of travel. This means that with an original 10-inch collapsed length, the arm can extend to around 34 inches of total length. These are incredible gains compared to typical linear actuators!
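
The travel arithmetic above can be sketched in a few lines. The two-inch per-stage overlap is an assumed value chosen to match the 10-to-34-inch figures, not a measured spec:

```python
def extended_length(collapsed_in: float, stages: int = 3, overhead_in: float = 2.0) -> float:
    """Approximate fully extended length of a telescoping screw actuator.

    Each stage contributes roughly (collapsed - overhead) inches of travel,
    where overhead_in (an assumed value) accounts for the thread overlap
    needed to keep each stage engaged.
    """
    travel_per_stage = collapsed_in - overhead_in
    return collapsed_in + stages * travel_per_stage

print(extended_length(10.0))  # 34.0, matching the figure above
```

Because travel scales with the stage count, even a modest increase in collapsed length pays off three times over in reach.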

Since the geometries of each part are somewhat difficult to manufacture, we took advantage of 3D printing to simplify the parts and eliminate the additional hardware from thang010146’s design. The entire actuator is 3D printed and very strong! We found that these parts print well on any Bambu Lab 3D printer (with mostly default settings), and the tolerances turn out great! The design is also parametric, so we can change the travel lengths, wall thicknesses, thread pitch, and more with just one variable.

Three-stage linear actuator with the top slider part hidden. This exposes how the screw parts synchronize in rotation while the sliders are constrained from rotating, converting the screws’ rotational motion into linear translation of the sliders.

Section view of the three-stage linear actuator 3D printed design. The original concept was adapted from thang010146’s design. Channels/keyways let the three screw-like parts spin synchronously while the slider parts are restricted from rotating, converting rotational motion to translation.

Section view of three-stage linear actuator 3D printed design, fully extended.

To drive this actuator, we used a 25GA370 12V DC encoded motor which runs at 150 RPM. This motor provides enough torque to lift the entire robot, and its encoder reports relative position. An additional limit switch can be added for absolute positioning as well.
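
As a rough sketch of how encoder counts map to extension: the counts-per-revolution and thread-lead values below are hypothetical, not from our final spec, and the assumption that each screw revolution advances all stages at once is illustrative of the telescoping design:

```python
def screw_position_mm(encoder_counts: int,
                      counts_per_rev: int = 1320,
                      lead_mm: float = 8.0,
                      stages: int = 3) -> float:
    """Convert motor encoder counts to actuator extension in millimeters.

    counts_per_rev and lead_mm are hypothetical values for illustration.
    In a telescoping design where every screw spins together, one
    revolution advances each of the `stages` sliders by one lead, so
    linear travel is multiplied by the stage count.
    """
    revolutions = encoder_counts / counts_per_rev
    return revolutions * lead_mm * stages

print(screw_position_mm(1320))  # 24.0 mm after one full revolution
```

A limit switch at the collapsed position lets you zero the count at startup, turning this relative reading into an absolute one.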

Actuated Gripper

We explored advanced designs for the gripper in parallel for more complex gripping requirements.

We started with kinematic and force analysis to determine the gripper’s technical requirements and then moved on to design.

During the design process, the gripper started as a tendon-actuated 2-DoF gripper meant to accommodate more complex jug holds, but as the design of the rock wall became more defined, we transitioned to a 1-DoF motor-driven gripper with a high-ratio worm gear to provide the static torque needed to hold the robot.
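
To see why a high-ratio worm gear works here, a back-of-envelope torque check helps. All numbers below (load, lever arm, gear ratio, efficiency) are hypothetical illustrations, not our measured specs:

```python
def required_motor_torque(load_n: float, lever_arm_m: float,
                          gear_ratio: float, efficiency: float = 0.4) -> float:
    """Torque the motor must supply through a worm gear to hold a load.

    Worm drives are inefficient (a hypothetical 40% here), but that same
    friction makes them self-locking, so the gripper can hold torque
    statically without any motor power.
    """
    joint_torque = load_n * lever_arm_m          # N*m required at the gripper joint
    return joint_torque / (gear_ratio * efficiency)

# e.g. a 20 N load on a 5 cm finger through a 30:1 worm drive:
print(required_motor_torque(20, 0.05, 30))  # ~0.083 N*m at the motor
```

The self-locking property is the real win: once the fingers close, the motor can be de-energized and the worm gear holds the robot’s weight on its own.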

3D CAD and prototype of the Actuated Gripper

Chassis

We wanted a chassis that would fit around our electronics and hold them firmly in place. We designed a clam shell that used tight tolerances to secure the electronics boards in place while still providing ventilation and access to the ports. The two halves of the clam shell are held together with bolts for easy access in the event that more detailed electrical work should be needed.

Below the main frame, we attached two caster wheels that run along the plywood part of the climbing wall. These wheels support and balance the robot’s body while reducing drag as it climbs.

Near the front of the chassis, the shoulder joints for the arms attach. This minimalistic interface lets the wires that power the motors flex with the movement of the robot arms.

Adding the Brains: Wiring and Software Setup

Now that we have gathered and assembled most of the mechanical parts of the robot, let’s make it smart! 

To add brains to a robot, we need electronics: a microcontroller or computer plus power distribution. The electrical components used for this robot are listed below. Additionally, a custom PCB has been designed to make the wiring much easier, cleaner, and less likely to destroy anything.

For this robot, we will also be using ROS2 Jazzy. This will facilitate communication between a host computer and the robot, distribute computational load, and seamlessly handle all the information needed to run the robot.

Bill of Materials

  • Personal (host) Computer running Ubuntu 24.04 Noble Numbat
  • Soldering Iron (with solder)
  • Wire strippers
  • Intel Realsense D435 Depth Camera
  • Raspberry Pi 4B (or 5) running Ubuntu 24.04 Noble Numbat
  • ESP-WROOM-32 Microcontroller
  • TB6612FNG Dual DC Motor Driver
  • 12V to 5V 5A Buck Converter
  • Two Micro Limit Switches
  • LTC3780 130W DC Buck-Boost Converter (Set to 6.8V Output)
  • At least 100 Male Header Pins
  • At least 70 Female Header Pins
  • Custom PCB (See below for files)
  • At least three 18650 Batteries w/ charger (or 12v power alternative)
  • 3 Sets of Spring Contacts for 18650 Batteries
  • Wire (~18 AWG should be fine)
  • 10K Ohm Resistors
  • (Optional) Nylon PCB Standoffs

All of these parts except the personal tools, depth camera, and PCB can be found at this wishlist: https://www.amazon.com/hz/wishlist/ls/1FURLT03YV34J?ref_=wl_share 

I designed my PCB through EasyEDA, which made it easy and cheap to order from JLCPCB! I just ordered the unassembled boards, which keeps the price near $20 including shipping. Not bad for custom boards that save hours or even days of breadboard wiring and soldering.

Top (left) and bottom (right) sides of printed circuit board used for Cliffhanger climbing robot. Header pins and manual wires are soldered on so components can be added or swapped easily.

Assembly

Soldering the PCB isn’t too hard if you know how to solder. I have also included outlines for each part to make assembly a bit easier. Simply solder the header pins as pictured in the photo, checking component placement as you go. For any components that don’t perfectly fit the holes, use wire to connect them. Don’t forget to solder the resistors and DC motors. Solder wire to the battery terminals to connect the batteries in series; with three 18650 batteries at full charge, the max voltage is around 12.6 V.

Fully-assembled PCB for Cliffhanger climbing robot.

Now with all the header pins, we can plug in each component including the servo motors (double check to make sure the pins are aligned) and then plug in the battery. If you don’t see “magic smoke” and lights turn on normally, you should be good to go! If you find later that a component is faulty, you can always replace it with the ease of these header pins.

Setting up the Computers

ROS2 & MoveIt 2 Installation

Both computers (the Raspberry Pi and your host computer) run Ubuntu 24.04 and need ROS2 Jazzy installed. Follow these instructions to install ROS2 on each computer.

The host computer will also need to run packages such as MoveIt 2 and other ROS2 Controllers. Run the following lines:

sudo apt update

sudo apt upgrade

sudo apt install ros-jazzy-ros2-control ros-jazzy-ros2-controllers ros-jazzy-moveit

An error has been commonly reported that prevents MoveIt 2 from launching properly. The error says “[ERROR] [launch]: Caught exception in launch (see debug for traceback): ‘capabilities'” If you get this error, you will need to go into the /opt/ros/jazzy directory and find the launches.py file and follow the instructions from this link.

Now that these ROS2 packages are installed, we will need to create a custom ROS2 workspace and install a couple more packages. The workspace can be created by running this line:

mkdir -p ~/ros2_ws/src 

Depth Camera Setup

Follow these instructions to install the Intel Realsense SDK on your host computer and follow the instructions from this git repo to set up the ROS2 wrapper. Then cd into ros2_ws/src and clone the wrapper there. This will let us run the Intel D435 Depth Camera later.

Intel Realsense D435 Depth Camera

Git Repo

To use the packages for this robot, clone this workspace on both computers. It may be useful to add the included packages into your ros2_ws if you want to make sure you only have to source from that workspace. Run colcon build to build the workspace and then source it again.

ESP32 Code

Before running anything, upload this code to the ESP32 using the Arduino IDE, PlatformIO, or other preferred option. You may need to install the necessary boards and packages. 

Running the Packages

Now to run the code, make sure both computers are on the same network and can ping each other. SSH into the Raspberry Pi from the host computer, and enter the terminal command: ros2 run esp_comms rpi_esp_comms

This should start communicating with the ESP32. If it can’t communicate with the ESP32, make sure it is using the correct USB port by running ls /dev/tty*
You may also need to add your user to the appropriate group (typically dialout) to allow access to the serial ports.
 
If everything is running properly on the Pi, stop it with Ctrl-C and let’s test MoveIt 2 on the host computer. Open a new terminal and source the workspace and ROS2 setup files. Run ros2 launch climbbot3_moveit demo.launch.py. RViz should open and load the robot. Play around with the joints to see that everything moves properly.
 

Visualization of running the MoveIt launch file for the Cliffhanger climbing robot.

 
Now if we go back to the terminal used to SSH into the Pi, let’s rerun the rpi_esp_comms node. This should set all the joints to 0.0 (or the current default configuration), and the servos may move. Now if you move the robot arms around in RViz and plan and execute paths, the robot joints should mimic RViz’s movements.
 
The way this is performed is by subscribing to the /joint_states topic to follow the fake robot from MoveIt and then sending those joints over serial to the ESP32 to control the robot.
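
A minimal sketch of the formatting step in that bridge: packing /joint_states data into a serial line the ESP32 can parse. The wire format and joint names here are assumptions for illustration, not our exact protocol:

```python
def pack_joint_states(names, positions):
    """Format JointState-style data into a single serial line.

    Example wire format (an assumption for illustration):
    "J shoulder_l:0.00 elbow_l:1.57\n" -- the ESP32 side would split
    on spaces and colons to recover each joint angle in radians.
    """
    fields = " ".join(f"{name}:{pos:.2f}" for name, pos in zip(names, positions))
    return f"J {fields}\n"

# Inside an rclpy subscriber callback on /joint_states, the line would be
# sent to the microcontroller with something like:
#   self.serial.write(pack_joint_states(msg.name, msg.position).encode())
print(pack_joint_states(["shoulder_l", "elbow_l"], [0.0, 1.5708]))
```

Keeping the protocol a plain newline-terminated text line makes it trivial to debug with a serial monitor before the ROS2 side is wired up.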
 
The more proper way to do this is to write a hardware interface, set up proper joint controllers, and even run micro-ROS on the ESP32. That approach, however, takes a good amount of time and debugging effort, so we took the slightly lazier route given the scope and timeline of this project.
 

Computer Vision Tutorial: Object Detection and Pose Estimation

This tutorial explains the computer vision system implemented for an autonomous rock climbing robot. We’ll cover four main components:

  1. Blue object detection for identifying climbing holds
  2. Converting 2D image coordinates to 3D world coordinates
  3. ArUco tag detection for robot pose estimation
  4. Path Planning and Execution

Object Detection

Our climbing robot needs to identify blue climbing holds on the wall. We use OpenCV’s color detection and image processing capabilities to accomplish this. Here’s how it works:

  1. Color Space Conversion
    • Convert the camera image from BGR to HSV color space
    • HSV is better for color detection as it separates color from brightness
  2. Color Thresholding
    • Define a range of blue colors in HSV space
    • Create a binary mask highlighting only the blue pixels
    • This effectively isolates the climbing holds from the background
  3. Image Processing
    • Apply morphological operations (opening and closing) to clean up noise
    • This removes small artifacts and fills small holes in the detection
  4. Hold Detection
    • Find contours in the cleaned binary mask
    • Filter contours by size to eliminate small detections
    • Calculate bounding boxes around the remaining contours
    • These boxes represent the locations of climbing holds

Github Link

2D to 3D Coordinate Conversion

To plan climbing movements, we convert the 2D pixel locations of holds into 3D world coordinates using depth information.

  1. Depth Reading
    • For each detected hold, get the corresponding depth value from the depth camera
    • This tells us how far the hold is from the camera
  2. Coordinate Transformation
    • Use the camera’s intrinsic parameters (focal length, optical center) to convert pixel coordinates to normalized camera coordinates
    • Combine with depth information to get 3D points in camera space
    • This gives us the real-world position of each hold
  3. Coordinate System Alignment
    • Transform the 3D points from camera coordinates to the robot’s coordinate system
    • This allows the robot to understand where holds are relative to its own position
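
The conversion in steps 1 and 2 is standard pinhole back-projection. A minimal sketch follows; the focal length and optical center values in the example are placeholders, since on the D435 they come from the camera’s actual intrinsics:

```python
def pixel_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a depth reading into 3D camera space.

    Standard pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy,
    z = depth. fx, fy are focal lengths in pixels and (cx, cy) is the
    optical center, all taken from the camera intrinsics.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A hold at the optical center maps straight down the z-axis
# (615 px focal length and 640x480 center are placeholder intrinsics):
print(pixel_to_camera_point(320, 240, 1.5, fx=615.0, fy=615.0, cx=320.0, cy=240.0))
```

The resulting camera-space point still needs the step-3 transform into the robot frame before it can be used for planning.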

Github Link

ArUco Tag Detection

ArUco tags on the robot’s base enable precise pose estimation. Here’s how the system tracks the robot’s position:

  1. Tag Detection
    • Convert camera image to grayscale
    • Use OpenCV’s ArUco module to detect marker corners and IDs
    • Calculate the center point of each detected marker
  2. Tag Processing
    • Convert tag centers to 3D coordinates using depth information
    • Track specific tag IDs (4, 5, 7) used for robot pose estimation
  3. Pose Estimation
    • Use tag 4 as the origin point
    • Calculate robot orientation using relative positions of tags 5 and 7
    • Establish the robot’s coordinate system for movement planning
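
Once the three tag centers are known in 3D, the pose-estimation step is pure geometry. In this sketch, the assumption that tags 5 and 7 lie along the robot’s x and y axes is illustrative, not necessarily our exact tag layout:

```python
import numpy as np

def robot_frame_from_tags(p4, p5, p7):
    """Build a robot coordinate frame from three tag centers (3D points).

    Tag 4 is the origin; the directions toward tags 5 and 7 (assumed to
    lie roughly along the robot's x and y axes) define the orientation.
    Returns (origin, 3x3 rotation matrix with axes as columns).
    """
    p4, p5, p7 = map(np.asarray, (p4, p5, p7))
    x_axis = (p5 - p4) / np.linalg.norm(p5 - p4)
    y_raw = p7 - p4
    # Gram-Schmidt step: remove the x-component so the axes are orthogonal
    y_axis = y_raw - np.dot(y_raw, x_axis) * x_axis
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)
    return p4, np.column_stack((x_axis, y_axis, z_axis))
```

With this frame in hand, every detected hold can be expressed relative to the robot rather than the camera.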

Github Link

Combining ArUco tag detection and blue object tracking, we get the following results:

Path Planning and Execution

The robot uses computer vision data to plan and execute climbing movements:

  1. Position Transformation
    • Convert all detected hold positions from camera space to robot space
    • Apply coordinate transformations using TF2 library
    • Ensure all positions are in the same reference frame
  2. Target Selection
    • Define goal positions for left and right arms
    • Filter out unreachable holds based on robot kinematics
    • Select the closest reachable hold to each arm’s goal position
    • Consider approach angles and gripping constraints
  3. Movement Planning
    • Plan separate trajectories for each arm
    • Include pre-grasp positioning with offsets
    • Account for robot’s current pose and joint limits
    • Ensure collision-free paths
  4. Execution Control
    • Coordinate movements between arms
    • Monitor position and force feedback
    • Implement safety checks and failure recovery
    • Verify successful grasp before proceeding
  5. Safety Features
    • Validate all transformations before execution
    • Check motion plans for collisions
    • Monitor execution progress
    • Implement emergency stop capabilities
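
The target-selection step above can be sketched with a simple spherical reachability bound standing in for the full kinematic check (the bound and the points are illustrative, expressed in the robot frame):

```python
import math

def pick_target_hold(holds, goal, reach_origin, max_reach):
    """Choose the reachable hold closest to an arm's goal position.

    holds, goal, and reach_origin are (x, y, z) tuples in the robot
    frame; max_reach is a crude spherical reachability bound standing
    in for the full kinematic filter described above. Returns None if
    no hold is within reach.
    """
    reachable = [h for h in holds if math.dist(h, reach_origin) <= max_reach]
    if not reachable:
        return None
    return min(reachable, key=lambda h: math.dist(h, goal))
```

In the real planner, this selection runs once per arm, and the chosen hold then seeds the pre-grasp offset and trajectory planning.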

Github Link

System Integration

These four components work together to create a functional climbing system:

  1. The blue object detection identifies potential holds
  2. 2D to 3D conversion provides spatial understanding
  3. ArUco tracking maintains robot pose awareness
  4. Path planning converts this information into actual climbing movements

The integrated system enables the robot to:

  • Find and evaluate climbing holds
  • Track its position and orientation
  • Plan efficient climbing routes
  • Execute safe climbing movements

Joint Design

In order for the two three-stage linearly actuating arms to fit properly onto the chassis without breaking, a robust joint design is required. To achieve this, we designed an elbow-like joint that fits two 35 kg servo motors stacked on top of each other.

Image of finalized joint design displayed in OnShape

Incorporating this joint system allows the chassis to connect to the arms appropriately without putting too much stress on the reaching components. As seen in the image, the two 35 kg servo motors fit into their inserts appropriately. The joint connects to the linearly actuating arm component via a plate with two screws, and connects similarly to the chassis.

Final Result

With everything assembled, the final result should look pretty tough! It features two 3-DoF arms, each with a three-stage linear actuator for long reach, ArUco tag localization, and ROS2 capabilities including handhold detection and path planning with a depth camera. This assembly also has the option to convert the static grippers into actuated grippers for more advanced surfaces.

Cliffhanger Climbing Robot, Fully Assembled