Artificial Intelligence Based on Biological Neurons: Constructing Neural Circuits for IoT


DOI: 10.4018/978-1-7998-3111-2.ch005

Abstract

This chapter describes a new approach to artificial intelligence based on simulated biological neurons and on the creation of neural circuits for the IoT, which together represent the next generation of AI and IoT. Unlike existing technical implementations of a neuron built on classical nodes oriented toward binary processing, the proposed path rests on simulating biological neurons and creating biologically close neural circuits, in which every device implements the function of either a sensor or a “muscle” within a home-based live AI and IoT. The research demonstrates the developed nervous circuit constructor and its use in building the AI (neural circuit) for the IoT.

Introduction

Although the concept of the “Internet of Things” (IoT) has been around for a long time, it is constantly evolving, especially in light of rapid technological development. We can say that the IoT embodies the gradual merger of the physical and digital worlds: data is collected from an ever-growing number of devices and then combined into so-called “big data.” According to experts and analysts, the number of such IoT devices was expected to reach 50 billion by 2020.

However, when the data collected by IoT devices is transferred to centralized storage, such as a cloud, transmission delay becomes a problem. Even though connection speeds are constantly increasing, they do not keep pace with the growth of the data itself. If “raw,” unprocessed data is transferred indiscriminately, the delay increases and, consequently, overall system performance suffers.
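As a rough illustration of why shipping raw data does not scale, the back-of-envelope sketch below compares the bandwidth needed to stream every raw reading against sending only periodic edge-side aggregates. The device counts, sample rates, and payload sizes are hypothetical assumptions for this example, not figures from the chapter.

```python
# Back-of-envelope comparison: raw streaming vs. edge aggregation.
# All numbers are illustrative assumptions, not measurements.

DEVICES = 10_000            # IoT devices sharing one uplink (assumed)
SAMPLES_PER_SEC = 100       # readings per device per second (assumed)
BYTES_PER_SAMPLE = 16       # timestamp + value + metadata (assumed)

raw_bps = DEVICES * SAMPLES_PER_SEC * BYTES_PER_SAMPLE * 8
print(f"raw stream:       {raw_bps / 1e6:.1f} Mbit/s")

# Edge aggregation: each device sends one compact summary per second.
SUMMARY_BYTES = 32          # min/max/mean/count per window (assumed)
agg_bps = DEVICES * SUMMARY_BYTES * 8
print(f"edge-aggregated:  {agg_bps / 1e6:.2f} Mbit/s")

print(f"reduction factor: {raw_bps / agg_bps:.0f}x")
```

Under these assumptions the raw stream needs 128 Mbit/s while the aggregated one needs about 2.6 Mbit/s, a roughly fiftyfold reduction before any link-speed improvements are considered.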

Data processing is one of the areas in which AI can make a significant contribution. It also opens the way to technological innovation in various fields, from optimizing urban transport flows to improving public safety and the provision of financial services.

Implementing AIoT requires components that can cope with complex and diverse conditions at the edge of the network. The periphery can be almost anything: from airborne vehicles and aircraft to factories or oil installations in the desert. All of this demands a flexible and adaptable approach to building components for the task. An important point is that AI promises to remove the influence of the human factor on decision-making as much as possible. This puts more pressure on system integrators: they must exercise special control over the quality of the system’s functioning, since an accident in a system with artificial intelligence does not always have an obvious culprit or a visible cause.

Another difficulty is that we always have to adjust something in the settings: in some cases we may want a device to work in one mode, in other cases in another, so we would constantly spend time tuning, changing, and updating. Besides, it takes a lot of time to read instructions, learn how to use the software, and so on.

One more problem lies in the fact that what we call “Artificial Intelligence” is not really AI: it provides extremely narrow functionality that only allows selecting the proper “answer” based on a multi-criteria condition. Such AI cannot really think or evolve over time. It requires the constant installation of ever new modules to process new tasks, and its abilities cannot be compared with those of even the most primitive creature.
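A minimal sketch of what such “answer selection by multi-criteria condition” typically looks like is given below. The smart-home scenario, rules, and thresholds are hypothetical, invented purely to illustrate the point: the system can only return one of its pre-coded answers, and anything the rules do not anticipate falls through to a default.

```python
# A hedged illustration of "narrow AI" as multi-criteria answer selection.
# The scenario, rules, and thresholds are assumptions made up for this
# example, not taken from the chapter.

def select_answer(temperature: float, motion: bool, hour: int) -> str:
    """Pick a pre-coded 'answer' from fixed if/else rules."""
    if temperature > 28 and motion:
        return "turn_on_fan"
    if temperature < 17:
        return "turn_on_heater"
    if motion and (hour < 6 or hour > 22):
        return "turn_on_light"
    # Anything outside the anticipated conditions gets a default:
    # the system cannot invent a new behavior on its own.
    return "do_nothing"

print(select_answer(temperature=30.0, motion=True, hour=14))   # turn_on_fan
print(select_answer(temperature=21.0, motion=False, hour=12))  # do_nothing
```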

A good example of the scope of such limited AI is the recognition of text, images, and speech, which we can implement using neural networks and machine learning. During training, such artificial intelligence memorizes thousands, if not millions, of data samples and becomes able to correctly identify an image or object within its zone of action. No matter how sophisticated the predictions of such an AI become, it is still limited to a narrow function. If something goes beyond the given parameters, the AI becomes almost useless. For example, an artificial intelligence trained to recognize handwritten digits can master this task and easily push people out of this sphere of activity, because it works more efficiently, without fatigue or interruptions, but it will be completely useless if given, without retraining, a task such as letter identification.
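As a concrete, hedged illustration, the sketch below trains a small multi-layer perceptron on scikit-learn’s bundled digits dataset; the model size and data split are arbitrary choices for the example, not the chapter’s setup. The key point is that the output space is fixed at training time: even a letter image would be forced into one of the ten digit classes.

```python
# Minimal "narrow AI" demo: a small neural network that learns to
# recognize handwritten digits (0-9) and can never answer anything else.
# Dataset choice, model size, and split are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print(f"digit accuracy: {clf.score(X_test, y_test):.2f}")
# The set of possible answers is frozen: feed this model a letter and
# it will still respond with one of the digits below.
print("possible answers:", clf.classes_)    # [0 1 2 ... 9]
```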

As for the concept of border (edge, or peripheral) computing: the initial idea of the IoT was that data would be sent for processing and subsequent analysis to some central device or to the cloud. However, as the number of devices grows exponentially, many applications have already reached the limit of their capabilities, and the large volume of data transferred back and forth leads to unacceptable delays in decision making and response.

Border computing solves this problem by processing “big data” directly at the edge of the network. The device can independently determine what needs to be sent to the cloud and what can be filtered out as digital garbage. In essence, this concept moves computing power to the “edge” of the network, to where the Internet connects to the various devices.
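A minimal sketch of such edge-side filtering is shown below. The sensor source, the anomaly threshold, and the `send_to_cloud` stub are hypothetical placeholders, not part of the chapter: the device summarizes routine readings locally and forwards only the events worth central attention.

```python
# Hedged sketch of edge-side filtering: keep routine readings local,
# forward only anomalies and periodic summaries to the cloud.
# The sensor, threshold, and transport are illustrative stubs.
import random
import statistics

THRESHOLD = 75.0            # assumed anomaly threshold (e.g., temperature)
WINDOW = 10                 # readings per local summary window (assumed)

def read_sensor() -> float:
    """Stand-in for a real sensor driver."""
    return random.gauss(60.0, 10.0)

def send_to_cloud(message: dict) -> None:
    """Stand-in for a real uplink (MQTT, HTTP, etc.)."""
    print("uplink:", message)

window: list[float] = []
for _ in range(50):
    value = read_sensor()
    if value > THRESHOLD:
        # Anomalies go upstream immediately.
        send_to_cloud({"type": "alert", "value": round(value, 1)})
    window.append(value)
    if len(window) == WINDOW:
        # Routine data is reduced to a compact summary at the edge.
        send_to_cloud({
            "type": "summary",
            "mean": round(statistics.mean(window), 1),
            "max": round(max(window), 1),
        })
        window.clear()
```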

And here we come to another problem, one that lies in neural computation itself.
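Since the chapter’s proposed alternative rests on simulated biological neurons wired between sensor and “muscle” devices, the sketch below gives a flavor of what one such simulated neuron might look like. It is a minimal leaky integrate-and-fire model; this model choice and all of its parameters are assumptions for illustration, not the authors’ nervous circuit constructor.

```python
# A minimal leaky integrate-and-fire (LIF) neuron wired between a "sensor"
# input and a "muscle" output, in the spirit of the chapter's approach.
# Model choice and all parameters are illustrative assumptions.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential
        self.threshold = threshold  # firing threshold
        self.leak = leak            # fraction of potential kept per step

    def step(self, input_current: float) -> bool:
        """Integrate input, leak, and fire when the threshold is crossed."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after a spike
            return True             # spike
        return False

neuron = LIFNeuron()
sensor_readings = [0.1, 0.2, 0.4, 0.5, 0.1, 0.6, 0.7, 0.05]

for t, reading in enumerate(sensor_readings):
    if neuron.step(reading):
        # A spike drives the "muscle" (actuator) device.
        print(f"t={t}: spike -> actuate muscle")
```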
