Bot-tender
Gazebo & MoveIt
This page describes part of the work done for Bot-tender, a project using a UR5e robot arm to explore motion planning and computer vision strategies for the future of drink service.
Specifically, this page will cover:
- Setting up a simulation environment with Gazebo, as well as creating objects and a simulated robot in the environment.
- Methods developed to work in the Robot Operating System (ROS) and send commands to the nodes of the robot.
Check out this page for more about the project!

Gazebo - Setting Up a Simulation Environment


It is always good practice to test a project in a simulation environment and make sure everything works before moving to the physical equipment. To set up such an environment, one can use the Gazebo simulator and the UR5e robot arm world that comes with it, launched with the following shell command:
roslaunch ur_gazebo ur5.launch limited:=true >/dev/null &
We also want simulated objects, such as bottles and cups, for the robot to interact with. This requires preparing .sdf or .urdf files that describe the models and attributes of these objects. For this project, we used .sdf files with the following structure:
- Model folder
  - .config file
    - Model name
    - Model .sdf file name
    - Author info
    - Description
  - .sdf file
    - sdf info
    - model name
    - Links
    - Joints
In the .sdf file, each link and joint should contain information with the following structure:
- Link or joint
  - inertial
    - mass
    - inertia
    - pose
  - gravity
  - self_collide
  - kinematics
  - enable_wind
  - visual
    - pose
    - geometry
    - material
    - transparency
    - cast_shadow
  - collision
    - pose
    - geometry
    - surface
      - friction
      - bounce
      - contact
Note that when building these files by following the Gazebo online tutorials, most of this information does not need to be changed. The only fields we focused on are:
- inertial
  - pose
- visual
  - pose
  - geometry
- collision
  - pose
  - geometry
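A minimal sketch of what such a model might look like in an .sdf file, following the Gazebo SDF format (the model name, dimensions, and inertial values below are illustrative, not the project's actual files):

```xml
<sdf version="1.6">
  <model name="bottle">
    <link name="body">
      <inertial>
        <mass>0.2</mass>
        <pose>0 0 0.1 0 0 0</pose>
        <inertia>
          <ixx>0.001</ixx><iyy>0.001</iyy><izz>0.0005</izz>
          <ixy>0</ixy><ixz>0</ixz><iyz>0</iyz>
        </inertia>
      </inertial>
      <visual name="visual">
        <pose>0 0 0.1 0 0 0</pose>
        <geometry>
          <cylinder><radius>0.03</radius><length>0.2</length></cylinder>
        </geometry>
      </visual>
      <collision name="collision">
        <pose>0 0 0.1 0 0 0</pose>
        <geometry>
          <cylinder><radius>0.03</radius><length>0.2</length></cylinder>
        </geometry>
      </collision>
    </link>
  </model>
</sdf>
```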
Visual geometry and material can be retrieved from a 3D model, which would increase the simulation's fidelity and facilitate the use of computer vision in simulation. For this project, however, that was not necessary: simple geometries were used to compose the bottle- and cup-shaped objects while keeping the computational complexity low.
With the .sdf files ready, the following shell commands were used to spawn the objects in the simulation:
rosrun gazebo_ros spawn_model -file ./Scene/bottle/model.sdf -sdf -x 1 -y 0 -z 0 -model bottle >/dev/null &
rosrun gazebo_ros spawn_model -file ./Scene/cup/model.sdf -sdf -x -1 -y 0 -z 0 -model cup >/dev/null &
If preferred, RViz is a good interface for controlling the robot. It provides advantages such as intuitive end-effector position control, motion plan previews, and easier integration with the physical robot. To start RViz, use the following shell command:
roslaunch ur5_moveit_config moveit_rviz.launch config:=true >/dev/null &
All of the shell commands above can be integrated into a Python pipeline script to automate this process and to keep everything as functions with parameters for future use. The syntax for such integration is:
import os
shell_command = "roslaunch ..."
os.system(shell_command)
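A minimal sketch of such a pipeline script, wrapping the launch and spawn commands shown earlier as parameterized functions (the function names and defaults here are assumptions, not the project's actual script):

```python
import os


def launch_cmd(package, launch_file, **args):
    """Build a backgrounded roslaunch command with arg:=value pairs."""
    parts = ["roslaunch", package, launch_file]
    parts += [f"{k}:={v}" for k, v in args.items()]
    return " ".join(parts) + " >/dev/null &"


def spawn_model_cmd(sdf_path, name, x=0, y=0, z=0):
    """Build the gazebo_ros spawn_model command for an .sdf file."""
    return (
        f"rosrun gazebo_ros spawn_model -file {sdf_path} -sdf "
        f"-x {x} -y {y} -z {z} -model {name} >/dev/null &"
    )


def run(cmd):
    # os.system blocks until the shell returns; the trailing '&'
    # backgrounds each ROS process so the script can continue.
    return os.system(cmd)


# Example usage (commented out because it actually starts ROS processes):
# run(launch_cmd("ur_gazebo", "ur5.launch", limited="true"))
# run(spawn_model_cmd("./Scene/bottle/model.sdf", "bottle", x=1))
# run(spawn_model_cmd("./Scene/cup/model.sdf", "cup", x=-1))
```

Keeping command construction separate from execution makes each step reusable and easy to test without a running ROS system.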
MoveIt - Talking with the robot
To work with the robot, in simulation or in the physical world, one needs to send commands to the robot's controller. This is done through MoveIt, which is started with the following shell command:
roslaunch ur5_moveit_config ur5_moveit_planning_execution.launch sim:=true limited:=true >/dev/null &
A Python object was created to serve as a user interface; its __init__() method contains the following code, along with other initialization steps necessary for the simulation.
import sys
import rospy
import moveit_commander

rospy.init_node("myUR5eExercise", anonymous=True)
moveit_commander.roscpp_initialize(sys.argv)
robot = moveit_commander.RobotCommander()
scene = moveit_commander.PlanningSceneInterface()
manipulator = moveit_commander.MoveGroupCommander("manipulator")
The code above initializes a ROS node for the simulation and designates all command messages to be sent to the robot's "manipulator" move group.
Within the Python object, the following methods send commands to the robot in a fairly general manner suitable for most tasks.
def neutralPoint()
'''Define a neutral state for the robot. It is good practice to let the robot start from a neutral state before performing tasks, to avoid singularities while keeping the motion safe and the computation efficient.'''
def moveInJointSpace()
'''Take a goal state expressed in joint space and send commands to move the robot to that state.'''
def moveInOperationalSpace()
'''Take a goal state expressed in operational space and send commands to move the robot to that state.'''
def planTraj()
'''Take a goal state expressed in operational space and compute a trajectory for the robot to follow to reach that goal.'''
def moveAlongTraj()
'''Move the robot along a pre-defined trajectory.'''
def checkIfDone()
'''Check whether the robot's current state is close enough to the goal state. Usually called at the end of a motion.'''
def attach()
'''Attach an object to the robot's end-effector. This happens only in the Gazebo simulation and works in "magnet style": the object is set to be fixed to the robot. This mimics a gripper end-effector grabbing the object in the real physical world.'''
def detach()
'''Detach the object from the end-effector. This mimics a gripper end-effector letting go of what it is holding in the real physical world.'''
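As an illustration, the tolerance test inside a method like checkIfDone() can be sketched as a pure comparison between current and goal joint values. This is a hypothetical sketch, not the project's actual code; in the real method, the current values would come from MoveGroupCommander.get_current_joint_values(), and the tolerance is an assumed value.

```python
import math


def check_if_done(current_joints, goal_joints, tol=0.01):
    """Return True when every joint is within `tol` radians of its goal.

    Hypothetical sketch of the end-of-motion check; `tol` is an
    assumed tolerance, not the project's actual setting.
    """
    return all(
        math.isclose(c, g, abs_tol=tol)
        for c, g in zip(current_joints, goal_joints)
    )


# Example: a 6-DOF goal reached within tolerance
goal = [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]
current = [0.001, -1.569, 1.571, 0.0, 1.57, -0.002]
print(check_if_done(current, goal))  # True: every joint is within 0.01 rad
```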
This object and its methods serve as a higher-level control interface: future development can focus on writing front-end programs that describe tasks and find solutions, after which one simply calls these methods to put the robot to work, in simulation or on physical systems.
Author

Victor Xia
This page is authored by Victor Xia and describes work done by him. As of the end of this project and the creation of this page, Victor Xia is a first-year graduate student in the MEMS Department at Duke University.
Check out this page for more about him!