The MEMS MS Capstone at Duke University


ME555 Experiment design & Research method
Smart home service robot
(shsbot)

Overview

Project Motivation

Nowadays, when we talk about robot arms, the applications that come to mind for most of us are industrial robots in manufacturing. Industrial robots can move precisely and carry out their tasks efficiently, but they can only perform fixed, pre-programmed jobs and cannot adapt automatically to a new environment. They must be monitored at all times to ensure that a mechanical fault does not cause them to stall, which could lead to losses for the company. With advances in technology, however, monitoring robots has become very easy and can even be done remotely: robots can send feedback on their processes to the operator via text, and in case of a technical hitch the problem can often be corrected remotely without stopping the robot. Still, although the technologies and research behind industrial robot arms are mature and advanced, these arms only perform cyclic, tedious jobs in noisy factories, executing commands from specific programs.

With six degrees of freedom, a robot arm's capabilities should not be limited to simple, repetitive work in factories. Robot arms that can accomplish multiple tasks should be applied in people's daily lives, assisting with simple household chores such as collecting waste paper or passing a cup of coffee. To achieve that goal, studies on robot cognition and human-robot interaction are much needed. With these motivations, we plan to design and build a smart home service robot (SHSbot).

Application

More precise and intelligent robot arms have been, or will soon be, put to use in many fields: assisting patients or the disabled with daily life in hospitals, helping doctors perform operations as highly sophisticated medical robot arms, and sorting garbage more efficiently at waste-disposal sites. Even on the space station, robot arms help clean up the astronauts' living waste. Intelligent service robots are already working in many restaurants, and we believe that more intelligent household robots will soon assist people in their daily lives.

 

Design Iteration

To provide people direct help in daily life, our primary goal is to create a small, affordable robot arm. However, the current consumer robot-arm market is dominated by small, cheap, single-function arms; they are not so much robotic arms as toys with advanced mechanical structures. Although their cost is relatively low, these arms lack artificial intelligence and are of limited help in people's daily lives. We therefore decided to design the arm ourselves, so that its control system and mechanical structure satisfy our requirement of serving people while keeping the cost low.

1. Initial Composition

Based on our reference, our initial idea for this robot's construction and function can be considered in three parts.

  • The first part is a high-precision six-degree-of-freedom robot arm and a depth camera for vision. A high-precision manipulator reduces the error caused by mechanical movement and control, making the robot's service more accurate. Likewise, the camera should have accurate coordinate-recognition ability so that the robot can locate objects precisely. The robot arm will use computer vision and machine learning (a CNN) to recognize objects on the table.
  • The next part is a base rover that lets the robot arm move around the room. To give the SHSbot parallel displacement so the robot can travel freely at home, a base rover is one of its most crucial parts. The base rover performs obstacle avoidance and object recognition using robot vision and the depth camera.
  • Furthermore, in line with the purpose of serving people, we will also add voice recognition so the robot can receive speech commands. Other sensors useful for room service will be added as well, so that, ultimately, the robot can complete, or help others complete, most everyday tasks.

The idea of our initial robot is as follows.

 

2. Review and Improvement

Because of limited time and energy, we decided to study only the first part this semester. After several rounds of review, we conceived the blueprint of a Smart Home Service Robot, the SHSbot: an intelligent and practical robot arm designed to help people collect garbage on a tabletop, such as waste paper or soda cans, and to keep the tabletop organized.

The SHSbot is designed to sit on a corner of a desk and help people perform trivial tasks within the range of the robot arm. It obtains environmental information through the camera and then, according to a human's command, controls the movement of the arm to complete the corresponding task. These tasks can include cleaning the desktop, clearing trash, and passing things around. More specialized applications can be handled by changing the end effector; for example, swapping in welding equipment would let the arm do welding work in a mechanical laboratory, like Tony's robotic arm on the right.

                     

Design Criteria

1. Structure and Hardware

Robot Arm First of all, for the robot arm itself we chose the armbot, which has a relatively complete development foundation and has been used many times in this course. The arm has a large movement radius and relatively high movement accuracy, its code is open source, and some model files are available for ROS, so we only need a little cleanup before we can use it in our project.
 
Depth Camera For the visual sensor, we chose the Realsense D435i depth camera. A depth camera directly outputs the depth of an object without conversion through a separate algorithm, which greatly reduces the error generated when recognizing an object's position. It is a preferred solution for robot navigation and object-recognition applications.
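As a rough sketch of how a depth reading becomes a 3D position, the pinhole camera model deprojects a pixel plus its depth into camera-frame coordinates. The intrinsic values below are illustrative placeholders, not the D435i's real calibration, which is read from the camera itself (e.g. via pyrealsense2):

```python
def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth in meters to a 3D point
    in the camera frame, using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics for illustration only; real values come
# from the camera's factory calibration.
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

# A pixel at the principal point half a meter away maps to (0, 0, 0.5).
point = deproject_pixel(320.0, 240.0, 0.5, FX, FY, CX, CY)
```

The resulting camera-frame point still has to be transformed into the robot arm's base frame, which is exactly what the camera-to-arm calibration step provides.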
 
Platform For the platform and controllers, we chose the Jetson Nano, an Arduino, and the TIC500. The Jetson Nano is a small mobile processor suitable for embedded and robot development. The Arduino and TIC500 are a common controller combination that is relatively simple and suitable for beginners.
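The Jetson sends joint targets to the Arduino over a serial link. The exact wire protocol is up to the firmware; as one hedged sketch (the frame format below is a hypothetical example protocol of our own, not an Arduino or TIC500 API), joint angles can be packed into a simple checksummed ASCII frame:

```python
def make_frame(angles_deg):
    """Pack joint angles (degrees) into an ASCII frame like
    '<J,10.0,20.0,...*CS>' with a simple XOR checksum so the
    firmware can reject corrupted commands. The format itself
    is a hypothetical example protocol."""
    body = "J," + ",".join(f"{a:.1f}" for a in angles_deg)
    checksum = 0
    for byte in body.encode("ascii"):
        checksum ^= byte
    return f"<{body}*{checksum:02X}>"

frame = make_frame([10.0, 20.0, 30.0, 0.0, -15.0, 90.0])
# On the Jetson side this string would then be written out, e.g. with
# pyserial: serial.Serial("/dev/ttyUSB0", 115200).write(frame.encode())
```

On the Arduino side, the firmware would parse between '<' and '>', recompute the XOR over the body, and only move the motors when the checksums match.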

 

                            

2. Development Environment

Comparing the advantages and disadvantages of different development environments, we chose to set up and control the SHSbot using ROS, Arduino, and MoveIt; to do the path planning of the SHSbot in RViz using MoveIt; and to run the simulation in Gazebo in cooperation with MoveIt and RViz.

ROS (Robot Operating System) is a collection of software frameworks for robot software development. We completed the Jetson installation and ROS installation, set up a ROS catkin workspace, created the environment, built SSN, and downloaded the necessary packages.

MoveIt, which provides a platform for developing advanced robotics applications, is software for mobile manipulation that incorporates the latest developments in motion planning, manipulation, kinematics and dynamics, control, and navigation. We initially planned to combine Gazebo and ROS Control into a feasible development platform for the SHSbot; we then finished the path planning via MoveIt.

Gazebo offers the ability to simulate populations of robots accurately and efficiently in complex and varied environments. After finishing the MoveIt setup, we were able to complete the Gazebo simulation integration; the MoveIt Setup Assistant helps configure the SHSbot to work with Gazebo, but additional steps are still required to successfully run MoveIt in Gazebo.
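One of those additional steps is supplying a ros_control controller configuration that the Setup Assistant does not fully generate for Gazebo. A hedged sketch of such a controllers.yaml is below; the joint names are placeholders, not the armbot's actual joint names:

```yaml
# controllers.yaml -- sketch only; joint names are placeholders
arm_controller:
  type: position_controllers/JointTrajectoryController
  joints:
    - joint_1
    - joint_2
    - joint_3
    - joint_4
    - joint_5
    - joint_6
```

This file would be loaded onto the parameter server and the controller spawned via controller_manager; a matching controller entry on the MoveIt side then lets planned trajectories execute in the simulated robot.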

Object recognition (machine learning): the SHSbot applies computer vision and machine learning (a CNN) to recognize specific objects on the tabletop. The sensing algorithm sends the goal position of each object to the reasoning algorithm, which completes the path planning of the robot arm and carries out object grasping and movement by controlling the motors.
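At the core of a CNN is a 2D convolution that slides a kernel over the image. As a minimal illustration of that building block (a hand-picked toy kernel, not our trained network), a valid-mode convolution can be written as:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic operation a CNN
    layer applies, except that a CNN learns its kernels."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly at intensity edges.
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)
img = np.zeros((5, 6))
img[:, :3] = 1.0  # bright left half, dark right half
response = conv2d(img, edge_kernel)  # peaks at the bright/dark boundary
```

A detection network stacks many such learned kernels with nonlinearities and pooling; the peak responses are what ultimately localize an object in pixel coordinates.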

Learning Objectives

Robot Structure

  • Robot arm mechanical structure (3D printing, planetary gear…)
  • Robot arm kinematics (DH parameter, forward and inverse robot arm kinematics)
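Forward kinematics chains one homogeneous transform per joint, each built from that joint's DH parameters (theta, d, a, alpha). A minimal sketch follows; the two-link table at the end is a toy example, not the armbot's actual DH parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link, standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Multiply the per-joint transforms; returns base -> end effector."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Toy planar 2-link arm: both links 1 m, joints at 0 and 90 degrees.
rows = [(0.0, 0.0, 1.0, 0.0), (np.pi / 2, 0.0, 1.0, 0.0)]
T = forward_kinematics(rows)
end_effector = T[:3, 3]  # approximately (1, 1, 0)
```

Inverse kinematics runs in the other direction, solving the joint angles that put the end effector at a desired pose; MoveIt handles that numerically for the full six-joint arm.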

Robot System

  • Robot operating system (MoveIt!, RViz, Gazebo)
  • Arduino (motor control, serial communication)

Robot Learning and Perception

  • Depth camera
  • Deep learning
  • Object detection

Problem Statement

To further simplify the task and make our goals clear and achievable, we set up a specific scenario and task here.

In this project, we want to make a robot arm that can clean up our desks automatically. We use the robot arm to perform pick-and-place motions, incorporating computer vision and deep learning to recognize items and detect their positions.

In the experiment, to demonstrate the service we want to achieve, we separate one desk into two areas. As shown in the picture below, some items we use daily are placed in order in area B. When we use them, we usually just put them back on the desk randomly, which is the situation in area A. So, in the experiment, we record the initial position of each item in area B and then put the items in area A at random. Finally, we want our robot arm to recognize the items, locate their positions, and then pick them up and place them back in order in area B.
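The experiment logic reduces to a small loop: remember each item's home pose in area B, then for every detection in area A, command a pick at the detected pose and a place at the remembered one. A hedged sketch of that bookkeeping (the item names and poses are made-up examples, and the returned tuples stand in for the real motion commands):

```python
# Remembered "home" positions in area B, recorded before the trial.
# Names and (x, y) poses here are illustrative placeholders.
home_pose = {
    "cup":     (0.30, 0.10),
    "stapler": (0.30, 0.20),
    "marker":  (0.30, 0.30),
}

def plan_tidy_up(detections):
    """detections: list of (label, (x, y)) found in area A.
    Returns (label, pick_pose, place_pose) tuples; items without
    a remembered home pose are skipped."""
    plan = []
    for label, pose in detections:
        if label in home_pose:
            plan.append((label, pose, home_pose[label]))
    return plan

plan = plan_tidy_up([("cup", (0.10, 0.25)), ("unknown", (0.12, 0.05))])
# Each tuple would then drive one pick at pick_pose and one place at place_pose.
```

In the real system the detected poses come from the depth camera and the pick/place motions are executed through MoveIt, but the mapping from detections back to area-B positions is exactly this simple lookup.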

System Decomposition

The next step is to organize and divide our project into small, reasonable tasks. Thanks to a special feature of ROS (topic publication and subscription), we can develop the different parts separately and then write communication programs to put them together. Therefore, following some references we found, we divided the project into three parts, and each group member focuses on one of them, represented by a different color in the figure. We have four main subjects: vision, motion planning, simulation, and control. The vision part divides into depth-camera use and calibration, and object detection. MoveIt is used for motion planning. To complete the robot simulation in ROS, we also need to build the robot configuration files and set some property parameters. Finally, to control a real arm, we need to finish the serial communication and controller setup. In addition, we use 3D printing, laser cutting, and other methods to complete the hardware construction.
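The decoupling that topics give us can be illustrated with a tiny in-process publish/subscribe bus. This is a plain-Python analogy, not the ROS API; in ROS the same pattern uses rospy.Publisher and rospy.Subscriber over the network:

```python
class TopicBus:
    """Minimal publish/subscribe bus illustrating the ROS topic idea:
    publishers and subscribers share only a topic name, never a
    direct reference to each other."""
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for callback in self._subs.get(topic, []):
            callback(msg)

bus = TopicBus()
received = []
# The motion-planning side subscribes without knowing who publishes...
bus.subscribe("/target_pose", received.append)
# ...and the vision side publishes without knowing who listens.
bus.publish("/target_pose", {"x": 0.1, "y": 0.2})
```

Because each part only agrees on the topic name and message shape, the vision, simulation, and control work can proceed in parallel and be joined at the end.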

  • Shucheng’s part is using the depth camera to do object detection and publish the position of the target item to a node in MoveIt.
  • Yu’s part is to simulate the armbot in Gazebo.
  • Jiaxun’s part is using ROS to control the motion of the real armbot. (A detailed tutorial is on his personal page.)

More detailed implementation and development will be shown in the Skill Level and Implement Detail part.

                           

 

Conclusion

In conclusion, this is a very complex and ambitious project for one semester. So far, we have finished most of the planned work, but we haven't yet made the whole system work successfully. Some problems are really hard to solve, not only because of theory but also due to multiple hardware bugs. As shown in the picture below, we haven't finished the joint calibration of the camera and robot arm, the communication between object detection and MoveIt, or the communication between MoveIt and Arduino. These parts will be the future work of this project.

                                                     

Skill Level and Implement Detail

Team Member

Jiaxun Liu

jiaxun.liu@duke.edu

Yu Zhou

yu.zhou@duke.edu

Shucheng Zhang

shucheng.zhang@duke.edu

Reference

[1] LeCun Y., Bengio Y., Hinton G. Deep learning. Nature, 2015, 521(7553): 436.

[2] Canny J. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, PAMI-8(6): 679-698.

[3] Zhang Z. A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22: 1330-1334. doi:10.1109/34.888718.

[4] https://towardsdatascience.com/simple-introduction-to-convolutional-neural-networks-cdf8d3077bac

[5] https://wiki.pathmind.com/convolutional-network