The acquisition of manipulation skills in robotics combines object recognition, action-perception coupling and physical interaction with the environment. Several learning strategies have been proposed to acquire such skills. As for humans and other animals, the robot learner needs to be exposed to varied situations: it needs to try and refine the skill many times, and/or to observe several successful attempts by others, in order to adapt and generalize the learned skill to new situations. Such a skill is not acquired in a single training cycle, motivating the need to compare, share and re-use experiments.
We propose LEARN-REAL: Learning physical manipulation skills with simulators using realistic variations.
In LEARN-REAL, we aim to learn manipulation skills through simulation for object, environment and robot, with an innovative toolset comprising: 1) a simulator with realistic rendering of variations allowing the creation of datasets and the evaluation of algorithms in new situations; 2) a virtual-reality interface to interact with the robots within their virtual environments, to teach robots object manipulation skills in multiple configurations of the environment; and 3) a web-based infrastructure for principled, reproducible and transparent benchmarking of learning algorithms for object recognition and manipulation by robots.
LEARN-REAL project official website: https://learn-real.eu/
Associated APRIL Lab Works
1. The Importance and the Limitations of Sim2Real for Robotic Manipulation in Precision Agriculture 
- Selection of graphics and physics engine.
- Rendering various forms of realistic variations for robot manipulation tasks.
- Developing interfaces for researchers to link their robot models, algorithms with the simulator.
2. Leveraging Kernelized Synergies on Shared Subspace for Precision Grasp and Dexterous Manipulation 
This video presents a series of experiments on dexterous manipulation tasks carried out with the proposed "Kernelized Synergies" framework. The framework, inspired by the human sensorimotor organization, preserves the same reduced subspace across different grasping and manipulation tasks. It is trained on basic geometrical objects for elementary grasping (precision) and manipulation primitives (translation and rotation). Using two kernelized synergistic components, several complex tasks are performed, such as pouring coffee, closing a jar, opening latches, sequential grasping and manipulation, and playing a carrom board game.
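As an illustrative sketch only (not the authors' implementation), postural synergies are commonly extracted by applying PCA to recorded hand joint configurations; the kernelized variant described above would replace this linear map with kernel regression, but the linear case conveys the idea of a shared reduced subspace. All data here are synthetic assumptions:

```python
import numpy as np

# Hypothetical data: 50 recorded hand postures, 20 joint angles each.
rng = np.random.default_rng(0)
postures = rng.normal(size=(50, 20))

# Center the data and extract principal components (linear synergies).
mean = postures.mean(axis=0)
centered = postures - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Keep two synergistic components, mirroring the two components used above.
synergies = vt[:2]                   # (2, 20) basis of the reduced subspace
coords = centered @ synergies.T      # low-dimensional activations per posture

# Reconstruct a posture from its two synergy activations.
reconstructed = mean + coords[0] @ synergies
```

Every grasping or manipulation primitive then reduces to a trajectory of two activation values in the same subspace, which is what makes the representation reusable across tasks.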
3. Vision Based Adaptation to Kernelized Synergies for Human Inspired Robotic Manipulation 
This video presents experiments conducted with the updated Kernelized Synergies framework (proposed in the work above) for human-inspired manipulation tasks. The framework is augmented with visual perception for pose estimation of the objects to be manipulated. A simplified perceptual pipeline is defined that uses the RANSAC algorithm together with Euclidean clustering and an SVM classifier for semantic segmentation and classification, respectively. Once an object is recognized, its pose is estimated locally from its point cloud and transformed into the corresponding synergistic values for the given task. The tasks considered are: mounting a bulb on a socket, squeezing a lemon into water, and spraying cleanser on a board.
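A minimal sketch of such a perceptual pipeline, using scikit-learn stand-ins (RANSACRegressor for dominant-plane removal, DBSCAN for Euclidean clustering, SVC for classification) rather than a full PCL implementation; the point cloud, feature choices and object classes below are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic scene: a flat table plane plus one object blob above it.
table = np.c_[rng.uniform(0, 1, (200, 2)), np.zeros(200)]
obj = rng.normal([0.5, 0.5, 0.2], 0.02, (80, 3))
cloud = np.vstack([table, obj])

# 1) RANSAC: fit the dominant plane z = f(x, y) and drop its inliers.
ransac = RANSACRegressor(residual_threshold=0.05).fit(cloud[:, :2], cloud[:, 2])
objects = cloud[~ransac.inlier_mask_]

# 2) Euclidean clustering of the remaining (non-plane) points.
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(objects)

# 3) SVM on simple per-cluster features (here: centroid + bounding extent).
def features(pts):
    return np.r_[pts.mean(axis=0), pts.max(axis=0) - pts.min(axis=0)]

# Toy training set with two hypothetical object classes.
X_train = np.array([features(rng.normal(0, s, (30, 3)))
                    for s in (0.02, 0.1) for _ in range(5)])
y_train = np.array([0] * 5 + [1] * 5)
clf = SVC().fit(X_train, y_train)

for k in set(labels) - {-1}:
    cluster = objects[labels == k]
    pred = clf.predict([features(cluster)])[0]
    # 4) A crude local pose estimate: the cluster centroid.
    centroid = cluster.mean(axis=0)
```

In the actual system the pose estimate would feed the synergy framework, i.e. the centroid and orientation are mapped to synergistic activation values for the chosen task.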
Dr. Fei Chen, Prof. Darwin Caldwell, Istituto Italiano di Tecnologia, Italy
Dr. Sylvain Calinon, Idiap Research Institute, Switzerland
Prof. Liming Chen, École Centrale de Lyon, France
Carlo Rizzardo, Sunny Katyara, Miguel Fernandes, Fei Chen, "The Importance and the Limitations of Sim2Real for Robotic Manipulation in Precision Agriculture", 2nd Workshop on Closing the Reality Gap in Sim2Real Transfer for Robotics, Robotics: Science and Systems (RSS 2020). [arXiv]
Sunny Katyara, Fanny Ficuciello, Darwin Caldwell, Bruno Siciliano, Fei Chen, "Leveraging Kernelized Synergies on Shared Subspace for Precision Grasp and Dexterous Manipulation", under review, 2020. [arXiv]
Sunny Katyara, Fanny Ficuciello, Fei Chen, Bruno Siciliano, Darwin G. Caldwell, "Vision Based Adaptation to Kernelized Synergies for Human Inspired Robotic Manipulation", under review, 2020. [arXiv]