
Learning to Model the Grasp Space of an Underactuated Robot Gripper Using Variational Autoencoder

Grasp planning, and most specifically grasp space exploration, is still an open issue in robotics. This article presents a data-driven methodology to model the grasp space of a multi-fingered adaptive gripper for known objects. The method relies on a limited dataset of manually specified expert grasps and uses a variational autoencoder to learn intrinsic grasp features in a representation that is compact from a computational point of view. The learnt model can then be used to generate new, non-learnt gripper configurations to explore the grasp space.
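As a rough sketch of the idea described in the abstract (not the authors' implementation; all shapes, names, and dimensions below are hypothetical), the two VAE ingredients involved are the reparameterization trick, used while learning a compact latent representation of expert grasps, and sampling the latent prior to generate new, non-learnt gripper configurations:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I): keeps latent sampling differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Per-sample KL(q(z|x) || N(0, I)), the VAE regularizer that shapes the latent space."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1)

# Stand-in encoder outputs for a batch of 4 expert grasps with a 2-D latent space.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))

z = reparameterize(mu, log_var, rng)     # latent codes of the known grasps
kl = kl_to_standard_normal(mu, log_var)  # zero here, since q already matches the prior

# New, non-learnt gripper configurations: sample the latent prior,
# then pass the samples through the trained decoder.
z_new = rng.standard_normal((10, 2))
```

Because the KL term pulls the encoded grasps toward a standard normal prior, sampling that prior and decoding is what makes the latent model usable for grasp space exploration.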
accepted at SYSID 2021 conference
ACM Computing Classification System: Theory of Computation (Miscellaneous)
Microsoft Academic Graph classification: Grasp planning, Computer science, GRASP, Underactuated robots, Robotics, Autoencoder, Space exploration, Artificial intelligence
FOS: Computer and information sciences, [INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE], [SPI.AUTO]Engineering Sciences [physics]/Automatic, Control and Systems Engineering, Robotics (cs.RO)
