Motivation

My name is Juan Lasso Velasco and I’m a Mechanical Engineering master’s student at Duke University. My research focuses on adaptive and versatile robots that have the potential for deployment in harsh, high-risk environments.

My project in the Experimental Design and Research Methods course involves the development and augmentation of a robotic quadruped platform that is able to navigate and adapt to difficult terrain. The project consists of the fabrication and assembly of an open-source quadruped design [cite GitHub here], augmenting it to use LIDAR mapping to visualize its environment, and using sensor hardware to allow the robot to react to difficulties during navigation. This project is a continuation of the quadruped project started by Rebecca Schmitt. You can find her page on the project here.

Advancements and innovations in robotics have changed the way we look at our everyday tasks and how we solve problems both inside and outside the engineering field. With autonomous devices becoming more and more prevalent, the question arises: how many tasks do we (the humans) actually need to do? Until now, the assumption could be made that many complex tasks required the attentive and involved intervention of a person. Now the field is open to robots that can be made to traverse, navigate, and interact with the world around them. The applications are almost endless. Where once we needed to put a human being into harm’s way, we now have the chance to send an autonomous or remote-controlled vehicle to take the risk in their place. Robots made to navigate difficult terrain can be used in rescue and first-responder operations. Natural disasters, city building collapses, and cave-ins can be handled more quickly and effectively with the use of autonomous mobile devices that can reach and map out areas that would otherwise be inaccessible to humans. The options are also open for the discovery and exploration of new and remote areas, and the transport of goods can likewise be done by robots capable of traversing difficult land.

With these opportunities in mind, the objective is to make a quadruped robotic platform that can map out and navigate an area. The idea is to use an open-source quadruped design as a base and augment it to accommodate the hardware needed to achieve awareness of and adaptability to the surrounding environment. By making use of responsive and relatively accurate actuators, the robot would have the hardware needed to respond well to obstacles and other inconsistencies it comes across. The proposed method for the robot to “see” its surroundings is LIDAR and its accompanying software. ROS (Robot Operating System) will be used to control the robot and run the calculations needed for its kinematic model. FDM 3D printing (and possibly SLA) is the proposed method of manufacturing the physical parts needed for the structure of the robot. With these elements in place, this project will be a good proof of concept for larger-scale or more accessible robots and can further serve as a base for additional functions and improvements.
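To give a concrete sense of the kinematic calculations ROS would run for each leg, below is a minimal planar two-link inverse-kinematics sketch. The link lengths, frame convention, and function name are illustrative assumptions, not values taken from the actual design:

```python
import math

def leg_ik(x, y, l1=0.10, l2=0.10):
    """Planar 2-link inverse kinematics for one leg (a sketch).

    x, y: desired foot position in the hip frame (meters).
    l1, l2: upper- and lower-link lengths (assumed values).
    Returns (hip_angle, knee_angle) in radians.
    """
    r2 = x * x + y * y
    # Law of cosines gives the knee (interior) angle.
    c_knee = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c_knee <= 1.0:
        raise ValueError("target out of reach")
    knee = math.acos(c_knee)
    # Hip angle: direction to the foot, minus the offset caused
    # by the bent knee.
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee
```

A gait controller would call something like this once per leg per control tick, converting desired foot positions into servo angles.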

Project Overview

Needs Assessment

As described before, many of the casualties incurred in hazardous tasks and lines of work could be prevented by using autonomous devices. However, these devices and robots will only be able to perform such tasks if they can perceive their surroundings and adapt to them. Existing technology such as inertial measurement units, LIDAR modules, and pressure sensors can give a robot the information it needs to navigate certain environments and reach certain objectives. However, the equipment and technology to develop a high-end traversal robot is inaccessible to many at-home makers and even some researchers. In order to explore and eventually implement a solution to this problem, more accessible hardware and forms of fabrication must be used so that smaller-scale and lower-fidelity tests can be done. This will allow others who are interested in tackling this problem to take on the project for themselves and perform their own augmentations and research.

Problem Statement

To aid the research and development of autonomous robots, the aim of this project is to make an accessible robotic platform that is capable of understanding and traversing its surroundings effectively.

Design Criteria

The criteria for the robotic platform are:

  • Light (< 10 lbs)
  • Normal speed (can travel approx. 166 mm/s)
  • Responsive (must be able to react to a new object placed approx. 5 inches in front of it)
  • Adaptive (can walk up inclines of approx. 35 degrees and across debris piled approx. 5 inches high)

Current State of Project

As of writing this article, a good portion of the project remains to be finished. All mechanical and hardware elements of the robot are complete; the only outstanding problem is the robot’s software. Since the creators of the repository used a custom rosserial package from an earlier version of ROS to communicate with the robot’s microcontroller, there are a number of compatibility issues that cannot be resolved. These leave the main computer (the Raspberry Pi onboard the robot) unable to communicate with the main microcontroller (the Teensy 4.0) that drives the servo motors used to make the dog walk.
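One way around a broken rosserial bridge is to define a small custom serial protocol between the Pi and the Teensy. As a sketch only, the packet layout below (header byte, twelve little-endian int16 angles in centidegrees, XOR checksum) is an assumption I am inventing for illustration, not the repository’s actual protocol:

```python
import struct

HEADER = 0xA5  # hypothetical start-of-frame byte

def encode_leg_command(angles_deg):
    """Pack 12 servo angles (degrees) into a framed byte packet.

    Layout (assumed, not from the spot_mini_mini repo):
    1 header byte, 12 little-endian int16 angles in centidegrees,
    then 1 XOR checksum byte over the payload.
    """
    if len(angles_deg) != 12:
        raise ValueError("expected 12 servo angles")
    payload = struct.pack("<12h",
                          *(int(round(a * 100)) for a in angles_deg))
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([HEADER]) + payload + bytes([checksum])
```

On the Pi side this packet could be written to the Teensy over a USB serial port (e.g. with pyserial), with matching decode-and-verify code in the Teensy firmware; the checksum lets the firmware drop corrupted frames instead of moving a leg to a garbage angle.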

Below is a schematic of the current physical design.

The following is the current diagram for the robot. It needed to be heavily simplified due to issues with the software. The original plan was to include a number of other sensors, such as a LIDAR, a RealSense camera, and Hall-effect sensors, so that the robot could understand its surroundings. This is only possible, however, if data can be sent from the Teensy to the Raspberry Pi and vice versa.

In its current state, the robot cannot receive commands and cannot collect information about its surroundings due to this severed connection between the Teensy and the Raspberry Pi. This means the dog can only move if firmware is flashed to the Teensy with preset commands for the legs’ positions.
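The kind of preset foot path that could be baked into the firmware might be generated offline with a script like the one below. The trajectory shape (half-ellipse swing, straight stance) and all the parameter values are illustrative assumptions, not the gait actually used on the robot:

```python
import math

def preset_foot_trajectory(steps=20, stride=0.04, lift=0.02, y0=0.18):
    """Generate a fixed foot path (x, y pairs in meters) for one leg.

    Half-ellipse swing phase followed by a straight-line stance
    phase: the sort of static table that could be flashed into
    firmware as preset leg positions. All parameters are assumed
    illustrative values, not taken from the actual robot.
    """
    path = []
    half = steps // 2
    for i in range(half):  # swing phase: foot lifts and moves forward
        phase = math.pi * i / (half - 1)
        x = -stride / 2 * math.cos(phase)
        y = y0 - lift * math.sin(phase)
        path.append((x, y))
    for i in range(half):  # stance phase: foot on ground, sliding back
        x = stride / 2 - stride * i / (half - 1)
        path.append((x, y0))
    return path
```

Each (x, y) point would then be converted to servo angles (via the leg kinematics) and stored as a constant array in the Teensy firmware, which simply replays the table on a timer.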

Since this is an integral part of the robot and essential for any further functionality to be added, it will need to be addressed in the future. The software for the robot will need to be rewritten entirely so that each segment of ROS used in the design can be individually tested and debugged. 

If you are interested in replicating this project, be sure to look at the spot_mini_mini repository for the full bill of materials and see the project breakdown below to guide you through what’s been done so far.

Module Scaffold

Future Work

There are many elements of this project that will continue to be developed. Due to the nature of ROS and the current software’s incompatibility with the latest distribution, further work will be done to allow the design to run on the current version of ROS. Other areas that will see further work in the future are:

  • Implementation of LIDAR and area mapping
  • Computer vision using a 3D camera
  • Jetson Nano integration for use with classification models
  • Foot pressure sensing implemented with Hall-effect sensors

About the Author

This page was written by Juan Lasso Velasco. Check out his personal page here.