Research

JARIM

JARIM (Just A Rather Intelligent Machine) is a modified RC car capable of autonomously driving inside a circuit. The car uses a mono camera to acquire images of the circuit and a machine learning algorithm to decide which actions (throttle and steering) to take in order to stay on track. The decision-making algorithm was developed using reinforcement learning methods to train a virtual car in a simulated environment.
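
As a rough illustration of this training pipeline, the sketch below trains a camera-based driving policy with PPO from stable-baselines3 in a gym-style simulator. The environment id "CircuitSim-v0", the two-element [steering, throttle] action vector, and the file names are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch (not the project's actual code): training a driving policy with
# PPO on a hypothetical gym-style circuit simulator whose observations are camera
# frames and whose actions are [steering, throttle].
import gymnasium as gym
from stable_baselines3 import PPO

# "CircuitSim-v0" is a placeholder id for the simulated circuit environment.
env = gym.make("CircuitSim-v0")

# A CNN policy maps raw camera images to continuous steering/throttle commands.
model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)
model.save("jarim_driving_policy")

# At inference time the same policy drives the car from live camera frames.
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
steering, throttle = action  # assumes a 2-element continuous action space
```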

Moreover, an image segmentation algorithm was developed and trained on a dataset of images of the circuit to learn how to label the lines delimiting it. On the hardware side, the actuators of the car (brushed DC motor and steering servo) were re-routed to be controlled by the on-board computer through a driver. A new electrical and power system was designed and integrated to power all components, and both a wired E-stop and a wireless E-stop were implemented for safety purposes. The software framework is based on the Robot Operating System (ROS), which manages camera data acquisition and processing, decision making, and actuator control.
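
The sketch below gives an idea of how such a ROS node could be wired: it subscribes to camera images, runs the (here stubbed) segmentation and policy, and publishes throttle and steering commands. The topic names, message types, and the predict stub are assumptions rather than the project's real interfaces.

```python
# Hedged sketch of the ROS wiring described above (topic names, message types and
# the predict() stub are assumptions, not the project's actual interfaces).
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

class DrivingNode:
    def __init__(self):
        rospy.init_node("jarim_driver")
        # Camera frames in, throttle/steering commands out.
        self.cmd_pub = rospy.Publisher("/jarim/cmd", Twist, queue_size=1)
        rospy.Subscriber("/camera/image_raw", Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        # Placeholder for the segmentation network and the learned driving policy.
        steering, throttle = self.predict(msg)
        cmd = Twist()
        cmd.linear.x = throttle    # forwarded to the DC motor driver
        cmd.angular.z = steering   # forwarded to the steering servo
        self.cmd_pub.publish(cmd)

    def predict(self, image_msg):
        # Dummy values; in practice the trained networks run here.
        return 0.0, 0.2

if __name__ == "__main__":
    DrivingNode()
    rospy.spin()
```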

RLBots

The project aims to research and develop algorithms based on Reinforcement Learning (RL) techniques to increase robot autonomy. Reinforcement learning differs from supervised learning in not requiring labelled input/output pairs to be presented. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
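
As a toy illustration of this trade-off, the snippet below implements an epsilon-greedy rule over tabular Q-values: with a small probability the agent explores a random action, otherwise it exploits its current value estimates. This is a generic example, not project code.

```python
# Epsilon-greedy action selection: a minimal way to balance exploration and
# exploitation over a set of estimated action values.
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon explore a random action; otherwise exploit the
    action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                        # exploration
    return max(range(len(q_values)), key=lambda a: q_values[a])       # exploitation

# Example: three candidate actions with current value estimates.
print(epsilon_greedy([0.2, 0.8, 0.5], epsilon=0.1))
```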

Because of this different learning paradigm, these algorithms have a huge potential for generalization. Nevertheless, modeling the simulation environment and training these algorithms is quite a challenge and an active area of research. The project is currently focused on developing, by the end of this year, a Deep RL agent capable of performing obstacle avoidance in hardware in an unknown environment. Deep Reinforcement Learning agents will then be used to control autonomous bots at different levels, from basic obstacle avoidance up to autonomous movement and planning.
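
To make the obstacle-avoidance goal more concrete, the sketch below shows a small Q-network that maps a vector of normalized range readings to discrete motion commands. The network size, observation layout, and action set are illustrative assumptions, not the team's actual agent.

```python
# Hedged sketch of a Deep RL obstacle-avoidance agent: a small Q-network mapping a
# ring of normalized distance readings to discrete motion commands.
import torch
import torch.nn as nn

ACTIONS = ["forward", "turn_left", "turn_right"]

class QNetwork(nn.Module):
    def __init__(self, n_ranges=8, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_ranges, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, ranges):
        return self.net(ranges)

q_net = QNetwork()
ranges = torch.rand(1, 8)                      # fake normalized distance readings
action = ACTIONS[q_net(ranges).argmax(dim=1).item()]
print(action)
```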

AMAV

Micro Aerial Vehicles (MAVs) are miniature Unmanned Aerial Vehicles (UAVs) restricted in size and mass. MAVs have great advantages over larger and heavier vehicles: easy and fast transportation and deployment in the field, high maneuverability and flexibility in accessing and navigating environments such as caves, pipelines, and buildings, easy implementation of large swarms, low risk of serious damage to objects and people, and expendability. These advantages benefit greatly from extended autonomy and flight duration. The research activity aims to develop a MAV capable of autonomously following an object tracked via deep learning algorithms.
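
As a hedged sketch of what such object following can look like, the snippet below turns a detector's bounding box into yaw and forward-speed commands so the drone keeps the target centred and at a constant apparent size. The gains, frame dimensions, and command interface are assumptions for illustration, not the project's actual controller.

```python
# Illustrative visual-servoing step: convert a detected bounding box into follow
# commands (yaw to centre the target, move forward/backward to keep its size).
def follow_commands(bbox, frame_w=960, frame_h=720, target_area_frac=0.05,
                    k_yaw=1.5, k_fwd=2.0):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; returns (yaw_rate, forward_speed)."""
    x_min, y_min, x_max, y_max = bbox
    cx = (x_min + x_max) / 2.0
    # Horizontal offset of the target from the image centre, scaled to [-1, 1].
    offset_x = (cx - frame_w / 2.0) / (frame_w / 2.0)
    # Apparent size of the target as a fraction of the frame area.
    area_frac = (x_max - x_min) * (y_max - y_min) / float(frame_w * frame_h)
    yaw_rate = k_yaw * offset_x                              # turn towards the target
    forward_speed = k_fwd * (target_area_frac - area_frac)   # keep a constant distance
    return yaw_rate, forward_speed

print(follow_commands((400, 200, 560, 420)))
```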

The goal is to develop and implement full on-board autonomous control, including sensor data acquisition and processing, decision making, and flight control. Specifically, the research activity focuses on the investigation of enabling technologies, including miniaturized high-performance cameras optimized for running deep learning and computer vision algorithms, small-size, low-mass avionics, power supplies with high specific power, telecommunications, and actuators.

VTP

The Visual Trajectory Planning project aims to develop knowledge in the field of visual planning. The project leverages Image Analysis together with non-conventional Computer Vision for object detection and spatial recognition. The first standalone feature the squad is implementing concerns the recognition of the gates and the computation of a suitable trajectory for the drone to pass through them. To do so, the task is split in two: first, the identification of the gate, using a neural network specifically trained to recognize it and return its distance; second, the identification of the trajectory, using a second neural network able to return the best path to drive the drone through the gate itself. This feature is of primary importance in both indoor and outdoor scenarios and is directly related to the capability of the drone to move safely in an environment. The first hardware implementation will use a DJI Tello.
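
A rough sketch of this two-stage pipeline is given below: a first network regresses the gate position and distance from a camera frame, and a second network turns that estimate into waypoints through the gate. The architectures, tensor shapes, and waypoint format are assumptions for illustration only, not the squad's actual models.

```python
# Two-stage sketch: gate detection followed by trajectory generation.
import torch
import torch.nn as nn

class GateDetector(nn.Module):
    """Stage 1: from a camera frame, regress the gate centre (u, v) and its distance."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)  # (u, v, distance)

    def forward(self, frame):
        return self.head(self.backbone(frame))

class TrajectoryPlanner(nn.Module):
    """Stage 2: from the gate estimate, output N waypoints (x, y, z) through the gate."""
    def __init__(self, n_waypoints=5):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, n_waypoints * 3))

    def forward(self, gate_estimate):
        return self.net(gate_estimate).view(-1, self.n_waypoints, 3)

frame = torch.rand(1, 3, 240, 320)                    # fake camera frame
waypoints = TrajectoryPlanner()(GateDetector()(frame))
print(waypoints.shape)                                # torch.Size([1, 5, 3])
```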

LTS

The idea of LTS (Long Term SLAM) is to try any interesting approach that can be found or thought of in order to solve the SLAM (Simultaneous Localization and Mapping) problem, or to improve an already-existing solution. It therefore covers many different issues, directly or indirectly related to the SLAM problem, in both the hardware and software fields.

By studying and testing different set-ups, it is possible to try every conceivable solution while building, inside the team, a solid knowledge base about SLAM that collects all the gathered experience. The aim is to obtain different strategies for performing efficient localization and mapping. Special focus is therefore placed on the choice of the sensor suite, the sensor fusion algorithm, and the testing of advanced, still-developing strategies, such as the application of Deep Learning techniques, to improve the efficiency of SLAM algorithms. All solutions are finally implemented and tested in hardware to evaluate and compare results.
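
As a toy example of the kind of fusion step such pipelines rely on, the snippet below runs a one-dimensional Kalman filter that blends a noisy odometry prediction with a noisy position measurement. It is a generic textbook step, not the team's actual pipeline.

```python
# 1-D Kalman filter step: fuse an odometry-based prediction with a measurement,
# weighting each by its uncertainty.
def kalman_step(x, p, u, z, q=0.01, r=0.1):
    """x, p: state estimate and variance; u: odometry increment; z: measurement;
    q, r: process and measurement noise variances."""
    # Predict: propagate the state with odometry and grow the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend the prediction with the measurement according to their variances.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for u, z in [(0.5, 0.55), (0.5, 1.02), (0.5, 1.49)]:
    x, p = kalman_step(x, p, u, z)
print(round(x, 3), round(p, 3))
```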