Artificial Neural Networks (ANNs) and Solution of Civil Engineering Problems: ANNs and Prediction Applications

Melda Yucel (Istanbul University-Cerrahpaşa, Turkey), Sinan Melih Nigdeli (Istanbul University-Cerrahpaşa, Turkey) and Gebrail Bekdaş (Istanbul University-Cerrahpaşa, Turkey)
DOI: 10.4018/978-1-7998-0301-0.ch002

Abstract

This chapter reveals the advantages of artificial neural networks (ANNs) in terms of their prediction success and their effect on solutions to various problems. With this aim, multilayer ANNs and their structural properties are first explained. Then, feed-forward ANNs and the back-propagation algorithm used to train these networks are presented. Finally, various structural design problems from civil engineering are optimized and handled to obtain prediction results through the use of ANNs.
Chapter Preview

Multilayer Artificial Neural Networks

Artificial neural networks (ANNs) are computer systems that realize the learning function, one of the principal features of the human brain. They carry out this learning activity with the help of samples.

These systems are used in various practical applications, such as prediction, classification, and control problems. They are computational models composed of a group of interconnected artificial neurons (i.e., nodes: input, hidden, and output neurons), each connection carrying a weight value; they are inspired by biological neural networks. In this regard, an ANN is an adaptive system that can change its structure, during the learning process, according to internal or external information flowing through the network. Learning, or training, is the process in which these weights are determined (Olivas et al., 2009). The weight of every connection between nodes reflects the effect of each neuron on the output value, and nodes are processed according to these weight values. Therefore, the information the network produces also takes its form from these weights.
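The role of the connection weights described above can be illustrated with a single artificial neuron: each input is multiplied by its weight, the products are summed with a bias, and the result is passed through an activation function. This is a minimal sketch; the input values, weights, and sigmoid activation are illustrative assumptions, not taken from the chapter.

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs followed by a sigmoid activation.

    Each weight reflects the influence of one input on the neuron's output,
    as described in the text above.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical example: two inputs with one positive and one negative weight.
y = neuron_output([0.5, 0.8], [0.4, -0.6], bias=0.1)
```

Because the sigmoid squashes the weighted sum into (0, 1), the output can be read as a graded response rather than a hard on/off decision.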

A layer is a section made up of more than one neuron. The basic structure of an ANN includes only input and output layers; multilayer ANNs, however, also contain one or more "hidden layers." Among the three layer types composing the network structure, the input layer receives the training data, and these data are processed through the model continuously.

The second layer type is the hidden layer. In this layer, neurons are trained on the data, and they are not open to direct intervention from the external environment. Although only one hidden layer is generally used in practice, the number of hidden layers is at the designer's discretion and can range from zero upward. The values weighted and produced by the last hidden layer are transferred to the units comprising the output layer, and the network then emits the prediction values generated for the training data (Han & Kamber, 2006; Veintimilla-Reyes et al., 2016).
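The flow from input layer through hidden layer to output layer described above can be sketched as a feed-forward pass. The 2-3-1 architecture, weight matrices, and sigmoid activation below are illustrative assumptions chosen for the example, not values from the chapter.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """Compute one layer's outputs from the previous layer's outputs.

    `weights` holds one row of connection weights per neuron in this layer.
    """
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical 2-3-1 network: 2 input neurons, 3 hidden neurons, 1 output.
x = [0.2, 0.9]                                     # input layer: the sample
hidden = layer_forward(x,
                       [[0.1, 0.4], [-0.3, 0.2], [0.5, -0.1]],
                       [0.0, 0.1, -0.2])           # hidden layer
output = layer_forward(hidden,
                       [[0.3, -0.5, 0.8]],
                       [0.05])                     # output layer: prediction
```

Each layer only sees the outputs of the layer before it, which is why the hidden neurons are closed to direct external intervention, as the text notes.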

In this way, the compatibility of the obtained outputs with the actual values is observed by comparing the final results in the output layer with the real results, and the error rate (variation) is investigated. The network can be considered correctly trained when the variation between the actual output and the output obtained from the neural network lies within an allowable limit (Ormsbee & Reddy, 1995). If this rate is not within an admissible range, the network structure is arranged again by making the required improvements; if necessary, the weights of the connections are modified, too. These operations continue iteratively, and the network structure is completed when the error reaches the minimum acceptable level. The following list reports some positive and negative properties of ANNs:

  • By learning the linear or nonlinear relationship between the input and output data of a problem from available samples, they can generalize and produce proper solutions for samples whose values are unknown or were not presented previously. Networks can work rapidly owing to their learning ability, their adaptability to different problems, and their components, which can operate simultaneously. At the same time, they require less information, and their implementation is easy (Uygunoğlu & Yurtçu, 2006).

  • They can process external information based on their previous experiences, and they simplify complex and time-consuming problems owing to their mapping ability (Gholizadeh, 2015).

  • Neural networks involve long training times, so they are more suitable for applications that can tolerate long training. A network structure generally requires many parameters, which are best determined experimentally (Han & Kamber, 2006).

  • They can solve problems involving uncertain models or data whose variables contain much missing and noisy information. This error-tolerance feature suits data mining problems, because real data are generally dirty and rarely follow the clean structures that statistical models typically assume (Maimon & Rokach, 2010).

  • When an ANN structure is too small, the desired function cannot be carried out. When the network is too big, it learns all of the created samples from a large search area, but it does not generalize well to inputs whose values it does not know or which it has not seen before. In this respect, neural networks tend to show such overfitting behavior when numerous parameters are present in the model (Russell & Norvig, 1995).
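The iterative train-compare-adjust cycle described earlier (compare predictions with actual outputs, measure the error, and modify the weights until the error is acceptably small) can be sketched with the delta rule on a single linear neuron. The toy samples, learning rate, and stopping threshold are invented for illustration; real networks apply the same idea layer by layer via back-propagation.

```python
# Toy training set: (inputs, target) pairs, made up for this sketch.
samples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for epoch in range(1000):
    total_error = 0.0
    for inputs, target in samples:
        # Forward pass: the network's current prediction for this sample.
        predicted = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Compare with the actual output and accumulate the squared error.
        error = target - predicted
        total_error += error ** 2
        # Adjust each weight in proportion to its input and the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error
    if total_error < 1e-4:   # stop once the error reaches an acceptable level
        break
```

The loop mirrors the text: training ends only when the accumulated variation between predicted and actual outputs falls within the admissible range.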
