Motivation and Methodology:

Robots are increasingly being developed to function independently. The path to full autonomy, however, has passed through a series of control methods. Initially, robots were driven by remote controllers, which demanded the constant presence of an operator and was rarely practical. Autonomous robots eliminate that need and perform tasks with greater efficiency.

Despite this, there are tasks where a robot's precision is highly desirable but its autonomy is not. Surgical robots, for example, are employed by trained practitioners to operate on extremely delicate organs. In such scenarios, few patients are comfortable letting an unsupervised machine work on them. It is here that a different means of control is needed.

Gesture control offers an effective alternative. The objective here is to drive a 4-wheeled robot wirelessly through the motion of the hands. The palm generates data through changes in its angular position, which can be read with an MPU6050, as in the Self-Balancing Robot project. The pitch and roll angles are then encoded into the motion of the vehicle: a change in pitch drives the robot forwards or backwards, while a change in roll steers it left or right.
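A minimal sketch of this encoding is given below. It assumes the MPU6050 is read over I2C with the Wire library, uses one common accelerometer tilt convention (the exact axes depend on how the sensor is mounted on the palm), and picks ±20° thresholds purely for illustration:

```cpp
// Illustrative sketch: MPU6050 at I2C address 0x68, raw accelerometer reads,
// and ±20 degree thresholds chosen only as an example.
#include <Wire.h>
#include <math.h>

const int MPU_ADDR = 0x68;

void setup() {
  Serial.begin(9600);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);            // PWR_MGMT_1 register
  Wire.write(0);               // wake the MPU6050 from sleep
  Wire.endTransmission(true);
}

void loop() {
  // Request the six accelerometer bytes starting at ACCEL_XOUT_H (0x3B)
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 6, true);
  int16_t ax = Wire.read() << 8 | Wire.read();
  int16_t ay = Wire.read() << 8 | Wire.read();
  int16_t az = Wire.read() << 8 | Wire.read();

  // Tilt angles from gravity (one common convention; axes depend on mounting)
  float pitch = atan2(-ax, sqrt((float)ay * ay + (float)az * az)) * 180.0 / PI;
  float roll  = atan2(ay, az) * 180.0 / PI;

  // Encode tilt into a one-character motion command
  char cmd = 'S';                       // stop by default
  if      (pitch >  20) cmd = 'F';      // palm tilted forward  -> forward
  else if (pitch < -20) cmd = 'B';      // palm tilted backward -> backward
  else if (roll  >  20) cmd = 'R';      // palm rolled right    -> turn right
  else if (roll  < -20) cmd = 'L';      // palm rolled left     -> turn left

  Serial.println(cmd);
  delay(100);
}
```

In the actual project the angles would be handed to the RF transmitter rather than printed over Serial, but the thresholding idea stays the same.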

The animation from Wheel Buddies below illustrates this encoding:

To transmit the data wirelessly, an RF transmitter-receiver pair is used. The transmitter sits beside the MPU6050, reads the angles recorded by the sensor, and immediately transmits them to the receiver on the vehicle. The receiver passes the data to the Arduino, which then drives the motors accordingly. The range over which data can be transmitted is limited by the specifications of the RF module.
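The write-up specifies only a generic RF module, so the receiver-side sketch below assumes an nRF24L01 with the RF24 library; the pin numbers and pipe address are placeholders. The hand-unit transmitter mirrors this sketch, using openWritingPipe() and write() to push the same pitch/roll structure across the link:

```cpp
// Vehicle-side receiver sketch. Assumes an nRF24L01 module and the RF24
// library; CE/CSN pins and the pipe address below are illustrative.
#include <SPI.h>
#include <RF24.h>

RF24 radio(9, 10);                     // CE, CSN pins (assumed wiring)
const byte pipe[6] = "00001";

struct Angles { float pitch; float roll; };

void setup() {
  Serial.begin(9600);
  radio.begin();
  radio.openReadingPipe(0, pipe);      // same address the hand unit transmits on
  radio.startListening();
}

void loop() {
  if (radio.available()) {
    Angles a;
    radio.read(&a, sizeof(a));         // pitch and roll sent by the transmitter
    Serial.print(a.pitch);
    Serial.print('\t');
    Serial.println(a.roll);
    // Map a.pitch / a.roll to motor signals here, e.g. with the thresholds above.
  }
}
```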

Another way to implement gesture control is with Computer Vision and Machine Learning. A neural network can be trained to interpret hand gestures: images of palms in certain configurations (Figure 1) form the training set, and with appropriate data augmentation and convolutional layers the network learns to discriminate between the gestures. As before, the interpreted gesture must be transmitted from the Python script to the Arduino, which then decides the correct signal and drives the motors.
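On the Arduino side, this can reduce to reading a command over the USB serial link and switching the motor pins. The sketch below is only an illustration: it assumes the Python script writes one character per gesture ('F', 'B', 'L', 'R', 'S') and that the motors are driven through a dual H-bridge on the pins shown; both the protocol and the wiring are assumptions, not fixed by the project.

```cpp
// Arduino side of the serial link. Assumes single-character commands from the
// Python script and a dual H-bridge motor driver on the pins below.
const int LEFT_FWD = 5, LEFT_BWD = 6, RIGHT_FWD = 9, RIGHT_BWD = 10;

void setup() {
  Serial.begin(9600);
  pinMode(LEFT_FWD, OUTPUT);
  pinMode(LEFT_BWD, OUTPUT);
  pinMode(RIGHT_FWD, OUTPUT);
  pinMode(RIGHT_BWD, OUTPUT);
}

void drive(int lf, int lb, int rf, int rb) {
  digitalWrite(LEFT_FWD, lf);
  digitalWrite(LEFT_BWD, lb);
  digitalWrite(RIGHT_FWD, rf);
  digitalWrite(RIGHT_BWD, rb);
}

void loop() {
  if (Serial.available() > 0) {
    switch (Serial.read()) {
      case 'F': drive(HIGH, LOW,  HIGH, LOW);  break;   // forward
      case 'B': drive(LOW,  HIGH, LOW,  HIGH); break;   // backward
      case 'L': drive(LOW,  LOW,  HIGH, LOW);  break;   // turn left
      case 'R': drive(HIGH, LOW,  LOW,  LOW);  break;   // turn right
      default:  drive(LOW,  LOW,  LOW,  LOW);  break;   // stop
    }
  }
}
```

On the Python side, a serial library such as pyserial would typically write the same characters to the port the Arduino is connected to, once the network has classified the current frame.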

Figure 1: Four hand gestures indicating the robot's motion

Phases:

Using MPU6050

  1. Acquiring the materials.
  2. Designing the chassis of the robot.
  3. Using the MPU6050 to measure changes in the angles of the palm.
  4. Learning to use the RF module to transmit data wirelessly to the Arduino.

Using Computer Vision

  1. Acquiring the materials.
  2. Learning OpenCV and Machine Learning.
  3. Creating the training dataset of images of gestures.
  4. Training a Convolutional Neural Network to interpret the gestures.
  5. Transmitting data from the Python script to the Arduino.

Gesture-controlled robotics is an interesting theme for experimentation. Covering sensors, microcontrollers, coding, design, and signals, this project offers a broad introduction to anyone new to robotics. Its applications range from vehicles and drones to everyday appliances such as televisions and laptop monitors.

Gesture control carries a wide scope for future applications, as well as ample room for its own development.

Authored by: Shivam Kaul
Last edited: May, 2021