Combining Motor Primitives for Perception Driven Target Reaching With Spiking Neurons


J. Camilo Vasquez Tieck, Lea Steffen, Jacques Kaiser, Daniel Reichard, Arne Roennau and Ruediger Dillmann (FZI Research Center for Information Technology, Karlsruhe, Germany)
DOI: 10.4018/IJCINI.2019010101

Abstract

Target reaching is one of the most important areas in robotics: object interaction, manipulation, and grasping tasks all require reaching specific targets. The authors avoid the complexity of calculating the inverse kinematics and of motion planning, and instead use a combination of motor primitives. A bio-inspired architecture performs target reaching with a robot arm without planning. A spiking neural network represents motions in a hierarchy of motor primitives, and different correction primitives are combined using an error signal. In this article, two experiments using a simulation of a robot arm are presented: one extensively covers the working space by moving to different points and returning to the start point; the other tests extreme targets and random points in sequence. Robotics applications like target reaching can provide benchmarking tasks and realistic scenarios for validating neuroscience models, and can also exploit the capabilities of spiking neural networks and the properties of the neuromorphic hardware that runs them.

Introduction

Target reaching is one of the most important problems in robotics: object interaction, manipulation, and grasping tasks all require reaching a specific target (Latombe, 2012). Humans learn motions and remember them for execution in different situations. A broadly accepted concept in neuroscience is that the central nervous system (CNS) uses sensory-motor primitives as building blocks for the execution and planning of motions (d’Avella & Lacquaniti, 2013; Johansson & Cole, 1992). The combination of simple primitives representing muscle synergies creates more complex and advanced motions (Bizzi, Cheung, d’Avella, Saltiel, & Tresch, 2008; d’Avella & Lacquaniti, 2013; Thoroughman & Shadmehr, 2000). There have been recent developments in robotics applying these principles, both for dynamic motion primitives (Ijspeert, Nakanishi, Hoffmann, Pastor, & Schaal, 2013; Schaal, 2006) and for a reactive framework of reflexes (Kröger, 2011).

Nevertheless, robotics still relies on classical, well-proven methods to solve most problems.

In classical robotics, the problem of reaching a target is solved by calculating the inverse kinematics (IK) for a target point, validating the resulting configuration, and finally planning the trajectory. These steps are computationally expensive. A complete overview of planning methods is presented in (Latombe, 2012), and a detailed analysis of different methods for solving the IK is given in (Buss, 2004).
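To make the classical IK step concrete, here is a minimal sketch of the closed-form solution for a planar two-link arm; this toy example (link lengths and function name are illustrative assumptions, not from the article) shows the geometric computation that the authors' approach avoids:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Analytic IK for a planar 2-link arm (elbow-down solution).

    Returns joint angles (theta1, theta2) that place the end effector
    at (x, y), or raises ValueError if the target is unreachable.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle; values outside [-1, 1]
    # mean the target lies outside the arm's workspace.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Even in this simplest case, the solution must be checked for reachability and followed by configuration validation and trajectory planning for a real arm, which is where the computational cost accumulates.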

Spiking neural networks (SNN) focus on replicating the way real neurons work and their biological characteristics (Maass, 1997). SNN communicate through spikes, enabling research on brain function, learning, and plasticity mechanisms (Gamez, 2010). For more details on SNN see (Gruening & Bohte, 2014; Maass, 1997; Vreeken, 2003).
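The basic unit of such networks can be illustrated with a leaky integrate-and-fire (LIF) neuron, one of the standard spiking neuron models; the following sketch uses illustrative parameter values and is not the specific model used in the article:

```python
def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The membrane potential v leaks toward v_rest and integrates the
    input current i_in; crossing v_thresh emits a spike and resets v.
    Returns (new_membrane_potential, spiked).
    """
    v = v + dt / tau * (v_rest - v + i_in)
    if v >= v_thresh:
        return v_rest, True  # spike and reset
    return v, False

# Drive the neuron with a constant suprathreshold input and record
# the time steps at which it spikes.
v, spikes = 0.0, []
for t in range(200):
    v, spiked = lif_step(v, i_in=1.5)
    if spiked:
        spikes.append(t)
```

Information is carried by the timing and rate of these spikes rather than by continuous activations, which is what makes such models a natural fit for event-driven neuromorphic hardware.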

Building on previous work with SNN using motor primitives for grasping (Tieck et al., 2017; Tieck, Steffen, Kaiser, Reichard, et al., 2018; Tieck, Steffen, Kaiser, Arne, & Dillmann, 2018; Tieck, Weber, Stewart, Roennau, & Dillmann, 2018), the authors propose a bio-inspired architecture to perform target reaching with a robot arm without planning. A spiking neural network represents motions in a hierarchy of motor primitives. Different correction primitives are combined using an error signal to control a robot arm in a closed-loop scenario, as illustrated in Figure 1.
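The idea of combining correction primitives with an error signal can be sketched in a few lines. The following toy example (the primitive set, activation rule, and gain are illustrative assumptions, not the article's spiking implementation) weights fixed directional primitives by a rectified error signal and sums their contributions in a closed loop:

```python
# Hypothetical correction primitives: fixed motion directions in a
# 2D task space (the real system uses a hierarchy of joint-space
# primitives represented by spiking neurons).
PRIMITIVES = {
    "right": ( 1.0,  0.0),
    "left":  (-1.0,  0.0),
    "up":    ( 0.0,  1.0),
    "down":  ( 0.0, -1.0),
}

def combine(error, gain=0.5):
    """Activate each primitive in proportion to how well its direction
    aligns with the error vector (rectified dot product), then sum."""
    cmd = [0.0, 0.0]
    for dx, dy in PRIMITIVES.values():
        activation = max(0.0, dx * error[0] + dy * error[1])
        cmd[0] += gain * activation * dx
        cmd[1] += gain * activation * dy
    return cmd

# Closed loop: the error signal shrinks as the arm approaches the target.
pos, target = [0.0, 0.0], [1.0, 2.0]
for _ in range(20):
    error = (target[0] - pos[0], target[1] - pos[1])
    cmd = combine(error)
    pos[0] += cmd[0]
    pos[1] += cmd[1]
```

Because each primitive's contribution scales with the remaining error, the combined command converges on the target without any explicit IK solution or planned trajectory, which is the core intuition behind the closed-loop architecture.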

This approach is motivated by how human beings estimate positions and distances. Humans can easily determine which object is in front of or behind another, which is to the left or right, and which of two angles is wider (Pfeifer & Bongard, 2006). The motor systems of the eyes and head provide all this information very quickly. Studies have shown that the human brain uses feedback from vision and from proprioception to execute reaching movements (Filimon, Nelson, Huang, & Sereno, 2009; Saunders & Knill, 2003). The coupling between these two systems suggests that other important components are involved in the generation of motion.

The methods presented in this article were first introduced in (Tieck, Steffen, Kaiser, Reichard, et al., 2018). The approach is implemented with SNN. The authors avoid the complexity of calculating the IK and of motion planning, and instead use a combination of motor primitives.

In this work, experiments using a simulation of a robot arm are presented: one covers the working space by moving to different points and returning to the start, and another tests extreme targets and random points in sequence.
