J. Hietala, D. Blanco-Mulero, G. Alcan, V. Kyrki
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, October 23–27, 2022
Publication date: 2022

Robotic manipulation of cloth is a challenging task due to the high dimensionality of the configuration space and the complexity of dynamics affected by various material properties. The effect of the complex dynamics is even more pronounced in dynamic folding, for example, when a square piece of fabric is folded in two by a single manipulator. To account for the complexity and uncertainties, feedback on the cloth state, e.g. from vision, is typically needed. However, the construction of visual feedback policies for dynamic cloth folding is an open problem. In this paper, we present a solution that learns policies in simulation using Reinforcement Learning (RL) and transfers the learned policies directly to the real world. In addition, to learn a single policy that manipulates multiple materials, we randomize the material properties in simulation. We evaluate the contributions of visual feedback and material randomization in real-world experiments. The experimental results demonstrate that the proposed solution can successfully fold different fabric types using dynamic manipulation in the real world.
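
As an illustration of the material randomization described above, the sketch below draws cloth material parameters from broad ranges at the start of each simulated training episode. The parameter names, value ranges, and the ClothFoldEnv interface are hypothetical placeholders for illustration, not the paper's actual simulation setup.

# Minimal sketch of per-episode cloth material randomization.
# ClothFoldEnv and the parameter ranges are assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def sample_cloth_params():
    """Draw cloth material parameters from broad ranges so that a single
    policy is trained across many plausible fabrics (domain randomization)."""
    return {
        "stiffness": rng.uniform(0.01, 1.0),  # elasticity of the cloth mesh
        "damping":   rng.uniform(0.01, 0.5),  # internal damping
        "friction":  rng.uniform(0.1, 1.0),   # surface friction coefficient
        "mass":      rng.uniform(0.05, 0.3),  # total fabric mass (kg)
    }

# Hypothetical training loop: each episode resets the simulated cloth with
# freshly randomized material properties before collecting RL experience.
# for episode in range(num_episodes):
#     env = ClothFoldEnv(**sample_cloth_params())  # hypothetical environment
#     obs = env.reset()                            # image-based observation
#     ...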

Project Website

BibTeX

@inproceedings{hietala2022learning,
  title={Learning visual feedback control for dynamic cloth folding},
  author={Hietala, Julius and Blanco-Mulero, David and Alcan, Gokhan and Kyrki, Ville},
  booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={1455--1462},
  year={2022},
  organization={IEEE}
}