Pumas DSPL (Takeshi Robot)

The Human Support Robot (HSR), called Takeshi by the Bio-Robotics laboratory, is a semi-humanoid service robot designed and developed by Toyota to assist elderly people and people with limited mobility in their daily activities, whether at home or in an office. In the laboratory, its main purpose is to serve as an academic development platform for various algorithms and autonomous systems.

At the beginning of the HSR project in the Bio-Robotics laboratory, the software of the Justina robot was adapted to the Takeshi robot, with the aim of demonstrating the modularity of the system and of the algorithms when implemented on another development platform.

The main areas of development are the following:

  • Object detection with an RGB-D camera.

  • Object manipulation with a robotic arm with four degrees of freedom.

  • Autonomous navigation with obstacle avoidance in dynamic environments.

  • Recognition of natural human communication, such as speech and gestures.

  • Action planning.

Takeshi is part of a software development agreement between UNAM and Toyota Japan, established thanks to the good results obtained with the Justina robot at the RoboCup world competition in 2017. Takeshi has likewise achieved notable results in RoboCup, obtaining second place in 2018 (Montreal, Canada) and fourth place in 2019 (Sydney, Australia) in the RoboCup@Home DSPL category.

Projects and Publications

Object detection with Convolutional Neural Networks

In this master's thesis (“Detección de objetos con Redes Neuronales Profundas para un Robot de Servicio” [Object Detection with Deep Neural Networks for a Service Robot], Edgar Roberto Silva Guzmán, 2020), we propose a system capable of automatically generating a labeled image dataset from video files, which is subsequently used to train a Convolutional Neural Network (YOLOv3). In addition, a second system is proposed that uses the Convolutional Neural Network's object detections on a service robot.

The first system is made up of two main modules developed in ROS: the first module segments an object from each frame of a video file; the second module creates a database of synthetic images, that is, artificial scenes containing the segmented and labeled objects from the previous module. Data augmentation techniques are then applied to generate a more robust dataset. The system can generate thousands of labeled images, each containing approximately 30 objects, in minutes. Finally, the training parameters for the YOLOv3 Convolutional Neural Network are configured.
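The compositing step of this pipeline can be sketched as follows. This is a minimal illustration, not the thesis's actual code: it pastes a pre-segmented object (here a fake NumPy array standing in for a real crop) onto a background scene at a random position and emits a YOLO-format label line (class id, then center and size normalized to [0, 1]). All function and variable names are illustrative.

```python
import random
import numpy as np

def paste_object(scene, obj, mask, label_id, labels):
    """Paste a segmented object (with its binary mask) onto the scene at a
    random position and append the corresponding YOLO-format label line."""
    H, W = scene.shape[:2]
    h, w = obj.shape[:2]
    y = random.randint(0, H - h)
    x = random.randint(0, W - w)
    region = scene[y:y + h, x:x + w]
    region[mask > 0] = obj[mask > 0]          # composite only where the mask is set
    # YOLO label format: class, then center x/y and width/height, normalized
    cx, cy = (x + w / 2) / W, (y + h / 2) / H
    labels.append(f"{label_id} {cx:.6f} {cy:.6f} {w / W:.6f} {h / H:.6f}")

# Example: one synthetic 480x640 scene with two pasted objects.
scene = np.zeros((480, 640, 3), dtype=np.uint8)
obj = np.full((50, 40, 3), 200, dtype=np.uint8)   # stand-in for a segmented crop
mask = np.ones((50, 40), dtype=np.uint8)
labels = []
paste_object(scene, obj, mask, label_id=0, labels=labels)
paste_object(scene, obj, mask, label_id=3, labels=labels)
print(labels)
```

Repeating this over many backgrounds, objects, and augmentations (scaling, rotation, color jitter) yields the thousands of labeled images described above.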

In the second system, a methodology is proposed so that a service robot can manipulate an object autonomously. The system's inputs are the images acquired by an RGB-D camera. The objects are first detected in two-dimensional image space using the trained Convolutional Neural Network model. The centroid of each object is then computed in three-dimensional space, and the best way to grasp the object is evaluated. Finally, the system verifies whether the object was successfully manipulated.
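The step from a 2D detection to a 3D centroid typically relies on the pinhole camera model: the detected pixel is back-projected using the depth reading and the camera intrinsics. A minimal sketch, assuming illustrative intrinsic values rather than the HSR camera's actual calibration:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) to a 3-D point in the
    camera frame using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics, roughly typical of a VGA RGB-D camera.
fx = fy = 525.0
cx, cy = 319.5, 239.5

# E.g. the centroid pixel of a detected bounding box, read at 1.0 m depth:
point = pixel_to_3d(319.5, 239.5, 1.0, fx, fy, cx, cy)
print(point)  # (0.0, 0.0, 1.0) -- the principal point maps to the optical axis
```

In practice the intrinsics come from the camera's calibration (in ROS, the CameraInfo topic), and the resulting point is transformed into the robot's base frame before grasp planning.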

The Takeshi and Justina robots are the main service-robot projects developed in the Bio-Robotics laboratory of the UNAM School of Engineering. Therefore, one of the main objectives of this thesis is to deploy the proposed systems on these service robots.

Application of formal psychological models in the development of Mobile Robots

Thesis: Application of Formal Psychological Models in the Development of Autonomous Mobile Robots (Victor Hugo Sánchez Correa, 2020).

The Generalized Context Model (GCM) is a categorization paradigm proposed by the psychologist Robert Nosofsky in the 1980s. The GCM explains how humans categorize sensory experience into groups. In this work, the model was applied to an object-detection system to evaluate the effect of using models inspired by living creatures on essential, basic tasks in robotics.
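The core of the GCM can be sketched in a few lines: a new stimulus is compared against stored exemplars of each category, similarity decays exponentially with distance in feature space, and the probability of a category is its summed similarity divided by the total. The feature vectors below are invented for illustration; this is not the thesis's implementation.

```python
import math

def similarity(x, y, c=1.0):
    # Exponential decay of similarity with Euclidean distance,
    # with c controlling how sharply similarity falls off.
    return math.exp(-c * math.dist(x, y))

def gcm_probability(stimulus, exemplars):
    """exemplars: dict mapping category name -> list of stored feature vectors.
    Returns P(category | stimulus) according to the Generalized Context Model."""
    sums = {cat: sum(similarity(stimulus, e) for e in ex)
            for cat, ex in exemplars.items()}
    total = sum(sums.values())
    return {cat: s / total for cat, s in sums.items()}

# Toy exemplars in a 2-D feature space (purely illustrative).
exemplars = {"cup": [(0.0, 0.0), (0.1, 0.1)],
             "bottle": [(1.0, 1.0)]}
probs = gcm_probability((0.05, 0.05), exemplars)
print(max(probs, key=probs.get))  # the stimulus is closest to the "cup" exemplars
```

In the object-detection setting, the stored exemplars would be feature vectors extracted from previously seen objects, and the GCM's graded probabilities replace a hard nearest-class decision.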