Manipulability Tracking
The code shows simple examples for a manipulability tracking task, where the robot is required to track a desired manipulability ellipsoid either as its main task or as a secondary objective (where the nullspace of the robot is exploited). This approach offers the possibility of transferring posture-dependent task requirements, such as preferred directions for motion and force exertion in operational space, which are encapsulated in manipulability ellipsoids.
The proposed formulation exploits tensor-based representations and takes into account that manipulability ellipsoids lie on the manifold of symmetric positive definite matrices. The proposed mathematical development is compatible with statistical methods providing 4th-order covariances (see [1]), which are here exploited to reflect the tracking precision required by the task.
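As a concrete illustration (not part of this code), the velocity manipulability ellipsoid is obtained from the robot Jacobian as M = J J^T. Below is a minimal NumPy sketch using a hypothetical two-link planar arm with unit link lengths:

```python
import numpy as np

def manipulability(J):
    """Velocity manipulability ellipsoid M = J J^T of a kinematic Jacobian."""
    return J @ J.T

def jacobian_2link(q1, q2):
    """Jacobian of a hypothetical planar two-link arm with unit link lengths."""
    return np.array([
        [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
        [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
    ])

J = jacobian_2link(0.3, 1.2)
M = manipulability(J)

# M is symmetric positive semi-definite: its eigenvectors give the principal
# axes of the velocity ellipsoid and its eigenvalues the squared semi-axes.
eigvals, eigvecs = np.linalg.eigh(M)
```

Because M is symmetric positive (semi-)definite, a tracking cost between a current and a desired ellipsoid has to respect the SPD manifold structure, which is exactly what the formulation above addresses.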
For more details, please check the paper:
- Jaquier, N., Rozo, L., Caldwell, D. and Calinon, S. (2018). Geometry-Aware Tracking of Manipulability Ellipsoids. Robotics: Science and Systems (R:SS).
If you have questions, comments or remarks about this code, please drop me an email! I will be glad to talk with you.
Manipulability Transfer
The code shows simple examples for the manipulability transfer between a teacher and a learner. The former demonstrates how to perform a task with a desired time-varying manipulability profile, while the latter reproduces the task by exploiting its own redundant kinematic structure so that its manipulability ellipsoid matches the demonstration.
This approach offers the possibility of transferring posture-dependent task requirements such as preferred directions for motion and force exertion in operational space, which are encapsulated in the demonstrated manipulability ellipsoids.
The proposed approach is first built on a GMM/GMR model that allows for the geometry of the SPD manifold to encode and retrieve appropriate manipulability ellipsoids. This geometry-aware approach is later exploited for redundancy resolution, allowing the robot to modify its posture so that its manipulability ellipsoid coincides with that of a demonstration.
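The SPD-manifold operations that underlie such geometry-aware encoding can be sketched with the exponential and logarithmic maps of the affine-invariant metric, computed here via eigendecomposition. This is an illustrative NumPy sketch, not the implementation used in this code:

```python
import numpy as np

def _fun_spd(S, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * fun(w)) @ V.T

def spd_logmap(X, S):
    """Map SPD matrix X to the tangent space at base point S (affine-invariant metric)."""
    S_h = _fun_spd(S, np.sqrt)                       # S^{1/2}
    S_ih = _fun_spd(S, lambda w: 1.0 / np.sqrt(w))   # S^{-1/2}
    return S_h @ _fun_spd(S_ih @ X @ S_ih, np.log) @ S_h

def spd_expmap(U, S):
    """Map a symmetric tangent matrix U at S back onto the SPD manifold."""
    S_h = _fun_spd(S, np.sqrt)
    S_ih = _fun_spd(S, lambda w: 1.0 / np.sqrt(w))
    return S_h @ _fun_spd(S_ih @ U @ S_ih, np.exp) @ S_h

# Round trip: expmap(logmap(X, S), S) recovers X
S = np.eye(2)
X = np.array([[2.0, 0.3], [0.3, 1.0]])
X_rec = spd_expmap(spd_logmap(X, S), S)
```

GMM/GMR statistics on manipulability ellipsoids are typically computed in such tangent spaces, so that the retrieved matrices remain valid SPD ellipsoids.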
For more details, please check the paper:
- Rozo, L., Jaquier, N., Calinon, S. and Caldwell, D. (2017). Learning Manipulability Ellipsoids for Task Compatibility in Robot Manipulation. Intl. Conf. on Intelligent Robots and Systems (IROS), pp. 3183-3189.
If you have questions, comments or remarks about this code, please drop me an email! I will be glad to talk with you.
Proactive and reactive collaborative behaviors with ADHSMM
This code implements simple examples for trajectory generation and control of a collaborative robot manipulator, which are built on an adaptive duration hidden semi-Markov model (ADHSMM). This adaptive duration model allows the robot to shape the temporal dynamics of the task based on the interaction with the user. In addition, the ADHSMM also makes it possible to generate proactive behaviors that exploit the temporal coherence observed during demonstrations of the collaborative task.
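To give a rough intuition of the "adaptive duration" idea, a state duration can be conditioned on an observed interaction input via standard Gaussian conditioning. This is a deliberately simplified sketch with hypothetical numbers, not the ADHSMM formulation of the papers below:

```python
import numpy as np

def condition_duration(mu, sigma, u):
    """Condition a joint Gaussian over (input u, duration d) on an observed input.

    mu    : length-2 mean vector [mu_u, mu_d]
    sigma : 2x2 covariance of (u, d)
    u     : observed scalar input (e.g., a feature of the user's motion)
    Returns the conditional mean and variance of the state duration.
    """
    mu_u, mu_d = mu
    s_uu, s_ud, s_dd = sigma[0, 0], sigma[0, 1], sigma[1, 1]
    mean = mu_d + s_ud / s_uu * (u - mu_u)       # conditional mean of d given u
    var = s_dd - s_ud / s_uu * s_ud              # conditional variance of d given u
    return mean, var

# Hypothetical duration model for one state: a faster-than-nominal user input
# (negative correlation between input and duration) shortens the state.
mu = np.array([0.0, 10.0])                       # nominal input, nominal duration
sigma = np.array([[1.0, -2.0],
                  [-2.0, 9.0]])
mean, var = condition_duration(mu, sigma, u=1.0)
```

In the actual model, such conditioned duration distributions modulate the state transitions of the semi-Markov chain, yielding reactive or proactive robot behavior.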
For more details, please check the papers:
- Rozo, L., Silvério, J., Calinon, S. and Caldwell, D. (2016). Learning Controllers for Reactive and Proactive Behaviors in Human-Robot Collaboration. Frontiers in Robotics and AI, 3:30, pp. 1-11.
- Rozo, L., Silvério, J., Calinon, S. and Caldwell, D. (2016). Exploiting Interaction Dynamics for Learning Collaborative Robot Behaviors. International Joint Conference on Artificial Intelligence (IJCAI), Workshop on Interactive Machine Learning, New York - USA, pp. 1-7.
If you have questions, comments or remarks about this code, please drop me an email! I will be glad to talk with you.
Human-robot collaborative transportation task (2D case)
This code shows a simple human-robot cooperative transportation task in a planar scenario. The task consists of transporting an object from an initial position to a target location (both of which vary across demonstrations and reproductions). The code shows:
- Computation of virtual attractor trajectories from the dynamics observed during the demonstrations of the collaborative task.
- Task parametrized GMM learning of the collaborative task.
- BIC-based model selection.
- Stiffness estimation built on a convex optimization.
- Reproduction using GMR with adaptation to new configurations of the task parameters.
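As a toy illustration of the BIC-based model selection step, the snippet below fits standard GMMs (scikit-learn's `GaussianMixture`, not the task-parametrized model of this code) with an increasing number of components and keeps the one with the lowest BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated clusters standing in for demonstrated datapoints
data = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(100, 2)),
    rng.normal([3.0, 3.0], 0.3, size=(100, 2)),
])

# Fit GMMs with 1..5 components and select the number minimizing the BIC,
# which penalizes model complexity against data likelihood.
bics = []
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics.append(gmm.bic(data))
best_k = int(np.argmin(bics)) + 1
```

The same criterion applies to the task-parametrized GMM used here, trading off fit quality against the number of Gaussian components.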
For more details, please check the paper:
- Rozo, L., Calinon, S., Caldwell, D., Jimenez, P. and Torras, C. (2016). Learning Physical Collaborative Robot Behaviors from Human Demonstrations. IEEE Transactions on Robotics.
If you have questions, comments or remarks about this code, please drop me an email! I will be glad to talk with you.
PbDlib
PbDlib is an open-source library for robot programming by demonstration (learning from demonstration), composed of various functionalities at the crossroads of statistical learning, dynamical systems and optimal control. It can, for example, be used in applications requiring task adaptation, human-robot skill transfer, and safe controllers based on the minimal intervention principle, as well as for probabilistic motion analysis and synthesis.
My colleagues (Dr. Davide De Tommaso, Dr. Tohid Alizadeh, Dr. Milad Malekzadeh and Dr. Sylvain Calinon) and I created this library in 2012, and we have been improving it ever since. Several amazing researchers have joined us, making valuable contributions to this initiative.
There is also a MATLAB/Octave version with additional functionalities, which is mainly maintained by Dr. Sylvain Calinon.