
Motivation
My name is Juan Lasso Velasco and I’m a Mechanical Engineering Master’s student at Duke University. My research focuses on adaptive and versatile robots that have the potential for deployment in harsh, high-risk environments.
My project in the Experimental Design and Research Methods course involves the development and augmentation of a robotic quadruped platform that is able to navigate and adapt to difficult terrain. The project consists of fabrication and assembly of an open source quadruped design [cite GitHub here], augmenting it to use LIDAR mapping to visualize its environment, and using sensor hardware to allow the robot to react to difficulties during navigation. This project is a continuation of the quadruped project started by Rebecca Schmitt. You can find her page on the project here.
Advancements and innovations in robotics have changed the way we look at our everyday tasks and how we solve problems both inside and outside the engineering field. With autonomous devices becoming more and more prevalent, the question arises: how many tasks do we (the humans) actually need to do? Until now, the assumption could be made that many complex tasks required the attentive and involved intervention of a person. Now the field is open to robots that can be made to traverse, navigate, and interact with the world around them. The applications are almost endless. Where once we needed to put a human being into harm’s way, we now have the chance to send an autonomous or remote-controlled vehicle to take the risk in their place. Robots made to navigate difficult terrain can be used in rescue and first responder operations. Natural disasters, city building collapses, and cave-ins can be handled more quickly and effectively with the use of autonomous mobile devices that can reach and map out areas that would otherwise be inaccessible to humans. The options are also open to discovery and exploration of new and remote areas. Transport of goods can also be done by robots capable of traversing difficult land.
With these opportunities in mind, the objective is to make a quadruped robotic platform that can map out and navigate an area. The idea is to use an open source quadruped design as a base and augment it to accommodate the hardware needed to achieve awareness of and adaptability to the surrounding environment. By making use of responsive and relatively accurate actuators, the robot would be designed with the hardware needed to respond well to obstacles and other inconsistencies it comes across. The proposed method for the robot to “see” its surroundings is through LIDAR and its accompanying software. ROS (Robot Operating System) will be used to control the robot and run the calculations needed for its kinematic model. FDM 3D printing (and possibly SLA) will be the proposed method of manufacturing the physical parts needed for the structure of the robot. With these elements in place, this project will be a good proof of concept for larger scale or more accessible robots and can be further used as a base for additional functions and improvements.
Project Overview
Needs Assessment
As described before, many casualties incurred in hazardous tasks and lines of work could be prevented by using autonomous devices. However, these devices and robots will only be able to perform these tasks if they have the ability to perceive their surroundings and adapt to them. Existing technology such as inertial measurement units, LIDAR modules, and pressure sensors can be used to grant a robot the information it needs to navigate certain environments and reach certain objectives. However, the equipment and technology needed to develop a high-end traversal robot is inaccessible to many at-home makers and even some researchers. In order to explore and eventually implement a solution to this problem, more accessible hardware and forms of fabrication must be used so that smaller scale and lower fidelity tests can be done. This will allow others who are interested in tackling this problem the ability to take on the project for themselves and perform their own augmentations and research.
Problem Statement
Design Criteria
The criteria for the robotic platform are:
- Light ( < 10lbs )
- Normal speed ( can travel approx. 166 mm/s)
- Responsive ( must be able to react to a new object placed approx. 5 inches in front of it)
- Adaptive ( can walk up inclinations of approx. 35 degrees, can walk across debris piled approx. 5 inches high)
Current State of Project
As of writing this article, a good portion of the project remains to be finished. All mechanical and hardware elements of the robot are complete. The only problem at the moment is the robot’s current software. Since the creators of the repository used a custom ROSSerial package from an earlier version of ROS to communicate with the robot’s microcontroller, there are a number of compatibility issues that cannot be resolved, which leave the main computer (the Raspberry Pi onboard the robot) unable to communicate with the main microcontroller (the Teensy 4.0) that controls the servo motors used to make the dog walk.
Below is a schematic of the current physical design.
The following is the current diagram for the robot. It needed to be heavily simplified due to issues with the software. The original plan was to have a number of other sensors, such as a LIDAR, a RealSense camera, and Hall-effect sensors, so that the robot could understand its surroundings. This is only possible, however, if data can be sent from the Teensy to the Raspberry Pi computer and vice versa.
In its current state, the robot cannot receive commands and cannot collect information about its surroundings due to this severed connection between the Teensy and the Raspberry Pi. This means the dog can only move if firmware is flashed to the Teensy with pre-set commands for the legs’ positions.
Since this is an integral part of the robot and essential for any further functionality to be added, it will need to be addressed in the future. The software for the robot will need to be rewritten entirely so that each segment of ROS used in the design can be individually tested and debugged.
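To give a sense of the kind of calculation the rewritten software will need to perform, below is a minimal sketch of two-link planar inverse kinematics, the math that turns a desired foot position into leg joint angles. The link lengths and frame conventions here are hypothetical placeholders, not the values or conventions used in the spot_mini_mini code.

```python
import math

def leg_ik(x, y, l1=0.1, l2=0.1):
    """Two-link planar inverse kinematics.

    x, y: foot position in the hip frame (metres).
    l1, l2: upper/lower leg link lengths (placeholder values).
    Returns (hip_angle, knee_angle) in radians.
    """
    r2 = x * x + y * y
    # Law of cosines gives the knee angle from the hip-to-foot distance
    c_knee = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c_knee = max(-1.0, min(1.0, c_knee))  # clamp for numerical safety
    knee = math.acos(c_knee)
    # Hip angle: direction to the foot minus the offset from the knee bend
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

def leg_fk(hip, knee, l1=0.1, l2=0.1):
    """Forward kinematics, used here to sanity-check the IK solution."""
    x = l1 * math.cos(hip) + l2 * math.cos(hip + knee)
    y = l1 * math.sin(hip) + l2 * math.sin(hip + knee)
    return x, y
```

Running the foot position back through the forward kinematics is a quick way to verify the solver before trusting it on hardware.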
If you are interested in replicating this project, be sure to look at the spot_mini_mini repository for the full bill of materials and see the project breakdown below to guide you through what’s been done so far.
Module Scaffold
Mechanical Parts
The mechanical assembly of the quadruped robot is a big part of the project. To work with the physical design of the dog, you will need to 3D print all of its mechanical parts. If you are unfamiliar with FDM (Fused Deposition Modeling) 3D printers and how to use them, you may want to look at this guide to familiarize yourself with the process.
Once you’ve done that, take a look at the files for the quadruped which can be found on Onshape here. Be sure to look at the spot mini repository by Maurice Rahme and Juan Miguel Jimeno. Their design and software are the basis for this project, so exploring its contents and becoming familiar with its structure will be extremely useful throughout the course of this project. Using Onshape is not the focus of this section, but if you have never used it before, there are a number of tutorials on Onshape’s official website that can get you started and teach you fundamental concepts of CAD as well. For this section, all you will need to know how to do is export parts as STL files.
To export a part from Onshape, right-click on the part in the inspector window on the left and select “Export”. A pop-up menu will then appear with a number of options. The only options you need to pay attention to are “Format”, “STL Format”, and “Units”. The export format should be “STL”; once you’ve selected that, the “STL Format” option will reveal itself. Be sure it is set to “Binary”. Pay the most attention to the “Units” option: it must match the units that the part or assembly was designed in. The spot mini assembly was designed in millimeters, so that is what you should set for this option. The rest of the options should be fine at their defaults. Once you’ve exported the files, you can begin to 3D print them so you have them ready for the next sections.
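If you want to double-check that a file really exported as binary STL (rather than ASCII), the binary STL layout is simple enough to inspect yourself: an 80-byte header, a little-endian uint32 triangle count, then 50 bytes per triangle. The helper below is just an illustrative sketch, not part of the project’s toolchain.

```python
import struct

def stl_triangle_count(data: bytes) -> int:
    """Read the triangle count from the contents of a binary STL file.

    Raises ValueError if the size doesn't match the binary STL layout,
    which usually means the file is ASCII STL or truncated.
    """
    if len(data) < 84:
        raise ValueError("too short to be a binary STL")
    (count,) = struct.unpack_from("<I", data, 80)  # count follows the header
    expected = 84 + 50 * count
    if len(data) != expected:
        raise ValueError(f"size mismatch: expected {expected} bytes, got {len(data)}")
    return count
```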
Raspberry Pi Preparation
While you’re waiting for your parts to print, you can begin setting up the Raspberry Pi that will serve as the main computing hub for the dog. If you have never booted a new Raspberry Pi, take a look at this tutorial, which will guide you through the steps of setting up an OS (operating system) on your Pi.
For this project, you will need to set up Ubuntu 20.04 on the Pi. If you’ve looked at the spot mini repo by now (which you should have), you will notice that the creators list ROS Melodic as one of its dependencies. Unfortunately, ROS and Ubuntu update their software every year, which means what is supported and distributed also changes. ROS Melodic is the ROS distribution made for Ubuntu 18.04. At the time of writing this section, Ubuntu has updated to 21.10 and no longer distributes a desktop or server version of 18.04 for ARM devices, and there is currently no ROS distribution that supports Ubuntu 21.10. This means you will need to download the Ubuntu 20.04 Server disk image for Raspberry Pi. You need the server version because, at the time of writing, Ubuntu does not distribute a desktop version of 20.04 for ARM devices either. As a result, installing ROS on the Pi will require a bit of manual editing, which will be covered in future sections. For now, just download the disk image and flash it to the SD card you will use for the Pi.
Assembly of Dog
Once you have the parts of the dog printed and have the belts, fasteners, and servos, you are ready to assemble the dog. Be sure to check out the assembly and calibration guide on the spot mini repo. This will be a useful reference, but this section will walk through the assembly portion of the guide anyway. It will also be helpful to have the Onshape assembly open.
As the guide states, before you begin assembling the dog, you must ensure that all the servo motors are powered. This is to ensure that the zero position of each motor is set during the assembly and the zero “dog position” is set by assembling the parts in the shape of the desired zero configuration. For this dog, you will want to assemble it so that all four legs are pointing straight down. This will be the default position on power-up. As you prepare the parts for assembly, notice that a majority of the parts are fastened using captive nuts, so be sure to insert nuts in their captive slots before putting the parts together.
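Since the servos’ zero positions matter so much here, it helps to see how a joint angle maps to the PWM pulse a hobby servo expects. The sketch below uses typical hobby-servo numbers (a 500–2500 µs pulse over a 180-degree range) as placeholder assumptions; your servos’ actual calibration values may differ, so check their datasheet.

```python
def angle_to_pulse_us(angle_deg, min_us=500, max_us=2500, range_deg=180):
    """Map a joint angle in degrees to a hobby-servo pulse width (microseconds).

    min_us, max_us, and range_deg are typical hobby-servo values
    (assumptions, not this robot's calibration).
    """
    # Clamp to the servo's mechanical range before mapping
    angle_deg = max(0.0, min(float(range_deg), float(angle_deg)))
    return min_us + (max_us - min_us) * angle_deg / range_deg
```

With these numbers, the mid-travel position (90 degrees) corresponds to a 1500 µs pulse, which is why powering the servos so they sit at their zero positions before assembly matters: the printed parts are fixed to the horns in the pose you want that pulse to mean.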

Slots in parts for captive nuts
When installing the pulleys and idler, be sure that the captive nut is in place before you insert the idler into its sliding rail. Also be sure that the idler is installed before the belt pulley, so that the belt rests on top of the idler and is pushed by the idler from underneath. The idler wheel should not be tightened on the small rail guide it’s mounted on and should be able to spin freely. Once the leg is in the right position (straight down), you can tighten the idler by using a screw in the outside hole on the other side of the captive nut you placed before inserting the pulley.
The rest of the assembly should be fairly straightforward as long as the end result looks like the image above.
Preparing The Pi
As mentioned previously, due to changes in software and supported distributions, we will need to use Ubuntu 20.04 to install ROS Noetic. There are still a few steps that need to be taken before we install ROS.
If you’ve already booted your Pi with the OS from the previous section, you can probably already tell why there’s still some work to do. Don’t panic at the fact that there is only a black screen with a flashing underscore; this is exactly what should happen when booting the server version of Ubuntu. Server OSs are distributed without a desktop environment, since they are generally run headless (i.e. without a monitor). This isn’t a problem, however, as we can install the desktop version of our Ubuntu distribution through the apt package manager.
Before we can do that, however, we need to gain access to the internet. From the terminal this isn’t such an easy task, as you will have to manually edit the network configuration files. First you will need to identify the name of your wireless network interface. To do this, simply type this at the command prompt:
ls /sys/class/net
Generally the wireless interface name is something like wlan0 or wlp3s0. Next you’ll need to find the name of your network configuration file in the netplan directory. In the command prompt type:
ls /etc/netplan/
You’re looking for a file name that looks similar to 01-network-manager-all.yaml or 50-cloud-init.yaml. Once you know the name of your file, you will need to edit it. Type:
sudoedit /etc/netplan/[filename_you_found].yaml
Within this file you will need to add the following:
wifis:
    wlan0:
        optional: true
        access-points:
            "SSID-NAME-HERE":
                password: "PASSWORD-HERE"
        dhcp4: true
Replace “SSID-NAME-HERE” with the name of your network and “PASSWORD-HERE” with your network’s password. Make sure that the wifis block is aligned with the ethernet and version blocks in the file (if they’re present). Then hit “Ctrl+X” and then “Y” to save the changes you made and exit the editor. You will now need to apply the changes you made. In the command prompt type:
sudo netplan apply
This will apply your changes. You should now be connected to the internet! Now you can install the desktop environment for your OS. In the command prompt type:
sudo apt update
to update the package manager’s list of available packages, and then:
sudo apt install ubuntu-desktop
to install the desktop environment. This might take several minutes. Once the installation is complete, restart your device and follow the instructions once your device boots up again.
You are now ready to install ROS Noetic; the details of how to do so are linked here.
Repo Dependencies
You will want to install the dependencies for both the robot firmware and the simulation. Most of these can be installed with the pip package manager. To use it, verify that you have python3 installed by running sudo apt install python3, then install pip itself with sudo apt install python3-pip. Once you’ve done that, you can install the following with pip:
- PyTorch: pip install torch
- PyBullet: pip install pybullet
- SciPy: pip install scipy
- NumPy: pip install numpy
- Gym: pip install gym
You will also need to install OpenCV for the PyBullet simulation to work. You should be able to install it using:
pip install opencv-python
but if the simulation ends up not working, you may have to build OpenCV from source.
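Once everything is installed, a quick way to confirm that all of the simulation dependencies import correctly is a small check script like the one below (an illustrative helper, not part of the repo):

```python
import importlib

def check_deps(names=("torch", "pybullet", "scipy", "numpy", "gym", "cv2")):
    """Return a dict mapping each module name to whether it imports cleanly."""
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

if __name__ == "__main__":
    for name, ok in check_deps().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```

Note that OpenCV installs as the package opencv-python but imports as cv2, which trips up a lot of first-time users.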
Additional Software for File Editing
Teensy Loader
This robot uses a Teensy 4.0 to control its actuators, so you will need the Teensy Loader software in order to flash the firmware to it. It is recommended that you download it on the Pi so that you can use it to flash the firmware. The instructions for setting up the Teensy Loader on your device can be found in the link, but as they are a tad confusing, we will go through them here.
Go to the link and click on “Download Teensy Program+Utils (Raspberry Pi)” to download the files for the software, and open the “Linux udev rules” link in a new tab; it contains the contents of a file you will need to create on your Raspberry Pi’s OS. After the download is complete, extract the files from the compressed folder. You might want to extract them somewhere other than your downloads folder (the instructions on the Teensy website show them extracting the files in their Desktop directory). Once extracted, open a terminal window and navigate to the directory where you extracted the files. Then run the following command:
chmod 755 teensy
This will make the extracted teensy file executable. Before you run it, however, you will need to make a new file in the rules.d directory. You can do so using nano by typing this at the command prompt:
sudo nano /etc/udev/rules.d/00-teensy.rules
This will create a new file in the rules.d directory called 00-teensy.rules. In the file, copy and paste the contents of the page you opened when you clicked on “Linux udev rules“. Save and exit the file by pressing “Ctrl+X” and hitting “Y”. The Teensy Loader should now be set up.
VSCode/Platformio
The firmware files for the Teensy were written, compiled, and uploaded using the PlatformIO IDE, which is an extension for VSCode. You are able to install the PlatformIO Core command line tools without installing VSCode, but it is recommended that you install PlatformIO through VSCode, as it will give you access to a number of other editing features you will want should you choose to augment the firmware. Instructions for how to install VSCode on the Linux OS your Pi is running can be found here.
For this project, I was able to install VSCode through the Snap Store by running:
sudo snap install --classic code
Once you have VSCode installed, you can search for and install the PlatformIO IDE from the Extensions menu.
After Platformio has finished installing, you will want to restart VSCode so that all the tools can be loaded.
You will still need to setup the command line tools for Platformio in order to flash the firmware to the Teensy. To do so simply run the following three commands in a new terminal window:
sudo ln -s ~/.platformio/penv/bin/platformio /usr/local/bin/platformio
sudo ln -s ~/.platformio/penv/bin/pio /usr/local/bin/pio
sudo ln -s ~/.platformio/penv/bin/piodebuggdb /usr/local/bin/piodebuggdb
You’re now ready with the tools you need for the project.
Cloning Git Repository into Catkin Workspace
Using the Robot Operating System (ROS) is no easy task. You will want to familiarize yourself with the general workflow of working with catkin packages within a catkin workspace. ROS has fairly useful tutorials on their wiki for you to go through. If you want to take a deep dive into ROS, definitely work through their tutorials, as they provide the majority of the groundwork you will need to program robots. For this section, however, you will only need to know how to create a catkin workspace so we can clone the spot_mini_mini repo into it. The ROS tutorial that covers this can be found here. The process is fairly simple: it consists of making a new directory and ensuring it has a src directory inside it:
mkdir -p spark_catkin_ws/src
You can name the workspace whatever you like, but it helps to name it something useful.
Next, enter the root of the new workspace (i.e. the spark_catkin_ws directory) and run the command catkin_make:
cd ~/spark_catkin_ws/
catkin_make
This will create and build the catkin workspace for you. You will be able to see the two new directories (build and devel) that were created for you in the workspace. You will need to source the setup file in the devel directory before you can use any of the packages you make inside the workspace.
source ~/spark_catkin_ws/devel/setup.bash
You will need to do this within every new terminal you open, so it may be helpful to add this command to your .bashrc file so that it’s run every time you open a new terminal:
echo "source ~/spark_catkin_ws/devel/setup.bash" >> ~/.bashrc
We are now ready to clone the Git repository into the workspace! If you have never used git or GitHub before, you will need to install git and make an account on GitHub before continuing. A tutorial on how to do that can be found here. You will also need to setup SSH on your GitHub account and make an SSH key for your Pi. You can take a look at how to do that here.
To clone the git repository into your Pi’s catkin workspace, first navigate to the src folder of the workspace:
cd ~/spark_catkin_ws/src/
and run the following command:
git clone git@github.com:OpenQuadruped/spot_mini_mini.git
You should verify that the url matches the SSH url on the spot_mini_mini repository. Go to the repo page link and click on the green “Clone” button. Select the SSH tab and check the url it gives you.
If it isn’t the same, hit the copy button next to the url box and paste it into the git clone command. Your device should then download all the files from the repository and create a local git repository for you. Once it’s finished, rebuild your workspace with catkin_make. Your environment should now be ready to run the quadruped’s software.
Running Spot Pybullet Simulation
A good way of checking whether you’ve set everything up correctly is to try running the PyBullet simulation the spot_mini creators made for their quadruped design. You can do this by navigating to the spot_bullet/src directory:
cd ~/spark_catkin_ws/src/spot_mini_mini/spot_bullet/src/
and running:
python3 env_tester.py
A pybullet simulation window should pop up and a model of the quadruped should load in.
You can now use the sliders to make the robot move in the simulation. Try moving the “Step Length” slider to see what happens. Play around with the simulation by adjusting the sliders. When you’re finished, you can simply exit out of the window.
Outsource Custom PCB
The spot_mini designers made a custom power distribution PCB to connect all key components together. The board supports an array of three-pin connectors for all of the servo motors as well as a socket for the Teensy 4.0 microcontroller.

Power distribution board (image from spot_mini_mini Github)
Since this is a custom board that isn’t sold anywhere, you will need to outsource the PCB design to a third-party manufacturer to have it made for you. If you have never outsourced a custom PCB before or would like a recommended manufacturer, go here for a guide on using JLCPCB’s manufacturing services. The board design can be found in the spot_real package of the spot_mini_mini GitHub repository, but you can also find it here.
Debugging ROSSerial
Throughout this project, I had multiple issues with the rosserial package and ended up discovering that the package used within the git repository was no longer compatible with current ROS distributions. I was able to test this using simple scripts on a different Raspberry Pi. If you would like to learn how to set up a simple publisher through ROSSerial, check out this tutorial.
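When debugging a link like this, it helps to know what bytes rosserial actually puts on the wire. As I understand the rosserial protocol (ROS Hydro and later), each frame is two sync bytes, a little-endian payload length with its own checksum, a topic ID, the payload, and a final checksum. The sketch below builds such a frame so you can compare it against what you capture on the serial port; treat the framing details as my reading of the protocol and verify them against the rosserial_python source.

```python
def rosserial_frame(topic_id: int, payload: bytes) -> bytes:
    """Build a rosserial frame (Hydro+ protocol, as I understand it).

    Layout: 0xFF 0xFE, uint16 LE payload length, checksum over the length
    bytes, uint16 LE topic ID, payload, checksum over topic ID + payload.
    Each checksum is 255 - (sum of the covered bytes % 256).
    """
    length = len(payload)
    len_bytes = bytes([length & 0xFF, (length >> 8) & 0xFF])
    len_cksum = 255 - (sum(len_bytes) % 256)
    id_bytes = bytes([topic_id & 0xFF, (topic_id >> 8) & 0xFF])
    msg_cksum = 255 - ((sum(id_bytes) + sum(payload)) % 256)
    return (b"\xff\xfe" + len_bytes + bytes([len_cksum])
            + id_bytes + payload + bytes([msg_cksum]))
```

A valid frame has the property that the covered bytes plus their checksum sum to 255 mod 256, which makes corrupted or version-mismatched traffic easy to spot in a raw serial dump.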
Future Work
There are many elements of this project that will continue to be developed. Due to the nature of ROS and the current software’s incompatibility with the latest distribution, further work will be done to allow the design to run with the current version of ROS. Other areas that will see further work in the future are:
- Implementation of lidar and area mapping
- Computer vision with use of 3D camera
- Jetson Nano integration for use with classification models
- Foot pressure sensor implementation with Hall-effect sensors
About the Author
This page was written by Juan Lasso Velasco. Check out his personal page here.