Applications of Deep Learning in Robotics


DOI: 10.4018/978-1-6684-8098-4.ch010

Abstract

Advances in deep learning over the past decade have driven a surge of research into applying deep artificial neural networks to robotic systems. Robots that can explain the reasoning behind their decisions and beliefs are better able to collaborate with humans. The challenges intensify as robotics moves from the laboratory into real-world scenarios: existing robotic control algorithms struggle to master the wide variability found in real-world contexts. Robots have now advanced to the point that they can be useful in everyday life, made possible by improved algorithmic techniques and greater computational power. Most traditional machine learning techniques require manually designed parameterized models and functions, which makes them unsuitable for many robotic tasks. The pattern recognition paradigm can thus shift from the joint learning of hand-crafted features and analytical classifiers to the joint learning of statistical representations and classifiers.

Introduction To Deep Learning

Much of modern life is driven by machine learning: filtering content on social networks, recommending products on e-commerce websites, and powering a growing number of consumer devices such as cameras and smartphones. Machine learning algorithms are used to select relevant search results, interpret the content of photos, convert speech to text, match news items and messages to users, and pick out objects in images. Increasingly, these applications rely on a family of techniques known as deep learning (DL). One of the primary objectives of deep learning is to generate optimized outputs, addressing the problem of producing efficient results with AI. The term "deep learning" was first applied to machine learning by Dechter (1986) and to artificial neural networks (NNs) by Aizenberg. It then rose to prominence, particularly in the context of deep neural networks, the most successful deep learners, whose origins go back half a century.

Because of its ability to identify complex structure in high-dimensional data, deep learning can be applied across a huge variety of scientific, commercial, and governmental domains. In addition to record-breaking results in speech (Mikolov et al., 2011) and image recognition (Farabet et al., 2013; Krizhevsky et al., 2012), it has outperformed other machine-learning methods at predicting the activity of candidate drug molecules (Ma et al., 2015), reconstructing brain circuits, analyzing particle accelerator data, and predicting the effects of mutations in non-coding DNA on the occurrence of genetic disorders. The biological sciences have adopted machine learning in part because it helps computers handle perceptual problems such as image and speech recognition. Deep-learning systems, built from deep artificial neural networks, employ multiple processing layers to identify patterns and structure in very large data sets. Each layer extracts a representation from its input that subsequent layers can build upon; as depth increases, the learned representations become more abstract. Deep learning extracts features automatically, without depending on hand-engineered preprocessing of the data. For example, a deep neural network trained to recognize shapes may first learn to detect simple edges and then, in subsequent layers, combine those edges into more complicated forms (Rusk, 2016). Since deep learning can extract high-level knowledge from enormous amounts of data, it is extremely useful in the context of big data. Early issues, such as overfitting to spurious correlations in the training data and high computational cost, are being addressed as it gains popularity in genomic research.

Different kinds of neural network models are used for different tasks, depending on the nature of the problem. For instance:

  • 1.

    Convolutional Neural Networks (CNNs): The CNN is one of the most popular types of neural network for image recognition and classification. CNNs are used in a wide range of applications, including scene labelling, object recognition, face detection, and much more. The convolution layer constitutes the first step of feature extraction from an input image. It preserves the spatial relationships between pixels by learning distinctive features over small square patches of the input data. Mathematically, it carries out an operation on two inputs: an image pixel matrix and a kernel (filter) matrix.

  • 2.

    Recurrent Neural Networks (RNNs): RNNs are artificial neural network models used in speech recognition and natural language processing. They loosely mirror the way neurons in the human brain connect and fire, carrying signals forward through time. Recurrent networks are designed to recognise patterns in sequences of data, such as spoken language, handwriting, text, genome sequences, and numerical time series from stock markets and business operations. The neurons in an RNN, which otherwise resembles a feedforward network, are given a memory state: a simple memory of previous inputs is used in each computation.
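The convolution operation described in the CNN item above can be sketched directly. The following is a minimal illustration, not any particular library's implementation: it slides a kernel (filter) matrix over an image pixel matrix and sums the element-wise products at each position, which is what a single convolutional filter computes. The edge-detecting kernel shown is a standard Sobel-style example, chosen here for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode cross-correlation of a 2-D image with a 2-D kernel,
    as performed by a single convolutional filter (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1   # output height
    ow = image.shape[1] - kw + 1   # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # element-wise product of the kernel with one image patch, summed
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge, and a vertical-edge-detecting kernel
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)
print(conv2d(image, kernel))  # strong response everywhere the edge crosses the window
```

In a trained CNN the kernel values are not hand-chosen as here; they are learned by backpropagation, which is how the network discovers edge and shape detectors on its own.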
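The memory state of the RNN item above can likewise be sketched in a few lines. This is a hypothetical minimal vanilla (Elman-style) RNN, not code from the chapter: at each time step the hidden state h is updated from the current input and the previous h, so information from earlier in the sequence is carried forward. All weight matrices and the random demo sequence are illustrative placeholders.

```python
import numpy as np

def rnn_forward(inputs, Wxh, Whh, Why, bh, by):
    """Unroll a vanilla RNN over a sequence.
    h_t = tanh(Wxh x_t + Whh h_{t-1} + bh);  y_t = Why h_t + by."""
    h = np.zeros(Whh.shape[0])          # initial memory state
    outputs = []
    for x in inputs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # memory carried to the next step
        outputs.append(Why @ h + by)
    return outputs, h

# Demo with arbitrary sizes: 3-dim inputs, 4-dim hidden state, 2-dim outputs
rng = np.random.default_rng(0)
Wxh = rng.standard_normal((4, 3))
Whh = rng.standard_normal((4, 4))
Why = rng.standard_normal((2, 4))
bh, by = np.zeros(4), np.zeros(2)
seq = [rng.standard_normal(3) for _ in range(5)]
ys, h_last = rnn_forward(seq, Wxh, Whh, Why, bh, by)
print(len(ys), ys[0].shape, h_last.shape)
```

Because the same weights are reused at every step, the network can process sequences of any length; the recurrent matrix Whh is what gives it the "basic memory" the text refers to.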
