Introductory Video

Watch this video for a short introduction to my project.

Project Motivation, Needs, and Goals

My name is Juliet O’Riordan, and I am a first-semester master’s student studying mechanical engineering at Duke University. I am interested in autonomous driving through the use of computer vision, machine learning, and path optimization. All of these ideas come together in my maze-solving robot, which can learn its environment on its own and make decisions without human intervention. Once it has learned an entire maze, it can calculate the optimal path from entrance to exit and then take that path to solve the maze.

A robot that can solve a maze is not important in itself, but the ideas needed to do so apply to a wide variety of applications. A robot that can navigate a space on its own can take the place of humans in dangerous or tedious situations, saving lives and time. Instead of a human mapping out the layout of a space, a robot could do it and report back. This project is a fun way to learn how to use computer vision and live video to control movement and to store environmental information in various ways.

I made two versions of this robot, which differ in the microcontroller or microprocessor used as the brains of the car.

The simpler version, shown in the picture below on the left, uses an Arduino and a three-column matrix that stores turning information for each intersection. I named this version the Ardbot, and most of it is my own work.

The more complex version, shown below on the right, uses machine learning on an NVIDIA Jetson Nano. It is heavily based on the JetBot project developed by NVIDIA and thus keeps the name Jetbot. This version is more complicated because data is stored in neural networks instead of simple matrices. Because of this, the Jetbot is more powerful and suited to more applications than the Ardbot, but it is also harder to understand and program.

Both are explained in detail in the links below.

Arduino = Ardbot
Jetson Nano = Jetbot

Learning Objectives

The objective of this project is to build and code a motorized car that uses a camera to see so it can learn and solve a maze. While completing this project, you will learn the following:

  • Data storage
  • Motor control
  • Autonomous driving
  • Computer vision
  • Image processing

Project Vision and Narrative

How It Works

The Ardbot has two modes: learning and solving. In learning mode, the car always makes the rightmost turn in the maze, because any maze without loops can eventually be solved by making consistent decisions at every intersection. The video below shows the path the robot takes on an example maze: black lines show all possible paths, red shows dead ends, green shows the optimal path, and blue shows each step the car takes as it progresses.
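The rightmost-turn rule can be sketched in a few lines. This is an illustrative Python sketch, not the actual Arduino code; the function name and the open/blocked flags are my own:

```python
def learning_turn(right_open, straight_open, left_open):
    """During the learning run, take the rightmost open direction.
    Priority: right, then straight, then left. If every direction
    is blocked, the robot is at a dead end and must turn around."""
    if right_open:
        return "right"
    if straight_open:
        return "straight"
    if left_open:
        return "left"
    return "turn around"
```

Because the same rule is applied at every intersection, the robot is guaranteed to explore every branch of a loop-free maze eventually.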

The Ardbot stores the maze’s information in a three-column matrix, shown below. The first column corresponds to left turns, the middle column to going straight, and the right column to turning right. A one means the robot should go in that direction; a zero means it shouldn’t.
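As an illustration (in Python rather than Arduino C, with made-up intersection data), such a matrix and a lookup over it might look like:

```python
# Each row is one intersection: [left, straight, right].
# 1 = go in that direction, 0 = don't.
maze = [
    [0, 0, 1],  # intersection 0: turn right
    [1, 0, 0],  # intersection 1: turn left
    [0, 1, 0],  # intersection 2: go straight
]

def direction_at(maze, intersection):
    """Look up the stored direction for a given intersection index."""
    labels = ["left", "straight", "right"]
    return labels[maze[intersection].index(1)]
```

On the Arduino itself this would be a small fixed-size integer array, but the idea is the same: one row per intersection encountered in order.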

As the robot goes down different paths, it learns which intersections are important and which turns lead to dead ends. It constantly updates the intersection matrix to account for the new information gained. By the time the robot reaches the end, it has already calculated which turn to make at every intersection to minimize path length. When switched to solving mode, the robot goes through the maze taking the optimal path.
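One way to picture the update step, again as an illustrative Python sketch rather than the real firmware: when a chosen direction turns out to be a dead end, its entry is zeroed, and the lookup falls through to the next-rightmost direction still marked open:

```python
DIRS = {"left": 0, "straight": 1, "right": 2}

def prune_dead_end(maze, intersection, direction):
    """Zero out a direction that led to a dead end, so the
    solving run will skip it."""
    maze[intersection][DIRS[direction]] = 0

def best_direction(row):
    """Rightmost direction still marked 1, or None if every
    branch from this intersection is exhausted."""
    for name in ("right", "straight", "left"):
        if row[DIRS[name]] == 1:
            return name
    return None
```

If `best_direction` returns None, the intersection itself lies on a dead-end branch and can be pruned one level further back.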

How the Robot Sees

The robot’s eyes are a single camera at the front of the bot. The camera processes the images it gathers by examining the colors of the pixels in its view. By detecting differences in color, it can distinguish and track objects or lines.
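A minimal pure-Python sketch of the color-difference idea, assuming grayscale pixel values where the line is dark (near 0) on a light floor (near 255); the threshold and pixel values here are made up:

```python
def line_pixels(row, threshold=60):
    """Columns in one image row that are dark enough to be line."""
    return [x for x, value in enumerate(row) if value < threshold]

def line_center(row, threshold=60):
    """Average column of the line pixels, or None if no line is seen."""
    cols = line_pixels(row, threshold)
    return sum(cols) / len(cols) if cols else None
```

Running this over many rows of the frame gives a set of line-center points from the bottom of the image to the top.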

The video below shows how the camera follows a line. The camera software draws a straight vector from the start of the line to its end, and it is also smart enough to recognize intersections.
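One simple way an intersection can be recognized (my own illustrative heuristic, not necessarily what the camera software does): a cross or T shows up as the tracked line suddenly widening well beyond the width of a single path:

```python
def is_intersection(line_width_px, normal_width_px, factor=1.8):
    """Flag an intersection when the detected line is much wider
    than a single path normally appears in the frame."""
    return line_width_px > factor * normal_width_px
```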

The camera keeps track of where the line is on the screen and reports its location to the Arduino. Ideally, the vector is vertical, pointing from the bottom of the frame to the top, and centered in the field of view. If it is not, the Arduino drives the motors to turn the bot until the line is centered. When the camera detects an intersection, it notifies the Arduino, which updates its intersection matrix and then turns onto the rightmost path. When the robot reaches a dead end, it recognizes this and turns around, going back the way it came until it reaches an intersection again.
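The centering behavior amounts to simple proportional control. Here is a hedged sketch of that idea; the gain, frame width, and sign convention are arbitrary choices, and the real Arduino code may differ:

```python
def steering_correction(line_x, frame_width, gain=0.5):
    """Steering proportional to how far the line sits from frame
    center: positive = steer right, negative = steer left,
    zero = line already centered."""
    error = line_x - frame_width / 2
    return gain * error
```

The Arduino would map the sign and magnitude of this correction onto a speed difference between the left and right motors.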

Project Stages

The project can be split into four stages that build on one another, each more complex than the last. The first three center on the Ardbot, while the final stage moves on to the Jetbot.

About the Author

My name is Juliet O’Riordan. I am a first-semester master’s student at Duke University studying mechanical engineering and materials science.

Last Update: May 2021