Scented Node Protocol for MANET Routing

Song Luo (Intelligent Automation Inc, USA), Yalin E. Sagduyu (Intelligent Automation Inc, USA) and Jason H. Li (Intelligent Automation Inc, USA)
DOI: 10.4018/978-1-61350-092-7.ch013


Ant Colony Optimization (ACO) for biologically inspired networking introduces performance gains over classical routing solutions for Mobile Ad Hoc Networks (MANETs). However, current ACO protocols involve a significant amount of overhead and do not fully reflect wireless interference effects in routing decisions. In ant routing, sources send out ant-based control packets for route discovery and path maintenance. Destinations can assist ant packets by disseminating scent messages that provide better guidance for route discovery and thus effectively reduce the protocol overhead. For that purpose, the Scented Node Protocol (SNP) is introduced for interference-aware routing with novel scent diffusion and reinforcement mechanisms. The wireless link rates are measured by identifying the node pairs that are most impacted by wireless interference, and network flows are routed to avoid severe interference effects among concurrent wireless transmissions. The throughput and overhead performance of SNP is evaluated through extensive realistic simulations for dynamic MANET environments. The resulting amount of overhead for scent and ant packets is also evaluated through the asymptotic analysis of scaling laws as the network size grows, and through the dynamic analysis of the finite overhead constraint, by discussing the possible effects of local network coding on scent dissemination between neighbor nodes. Our results verify the throughput and overhead gains of the biologically inspired SNP in wireless networks over existing ACO and MANET routing protocols.
Chapter Preview


Swarm Intelligence (SI) is based on the notion that "swarms," or large collections of simple interacting entities acting in some cooperative fashion, can solve complex problems. A popular example is the large ant colony, which can, with an apparently low level of intelligence, limited functionality and information-processing capability, and minimal information exchange, perform collective tasks, efficiently allocate resources, and flexibly implement division of labor among the workers in the colony.

As an adaptive, decentralized, and robust system, an ant colony solves the shortest path problem to find sources of food through the use of chemical substances known as pheromones, which have a scent that decays over time through the process of evaporation (Bonabeau, Dorigo, & Theraulaz, 1999). Ants first set out in search of a food source by (apparently randomly) choosing several different paths, leaving traces of pheromone along the way. Once ants find a food source, they retrace their path back to their colony by following their scent to their point of origin, and in so doing inform other ants in the colony. Since many ants go out from their colony in search of food, the ants that return first are presumably those that have found the food source closest to the colony, or at least a source that is in some way more accessible. In this way, an ant colony can identify the shortest or "best" path to the most desirable food source.

Ants simply follow the path that has the strongest pheromone traces, again the one associated with the shortest path. This leads more ants to travel along this path, which further strengthens the pheromone trail and thereby reinforces the shortest path to the food source; hence this is a form of reinforcement learning (Kaelbling, Littman, & Moore, 1996). If, for instance, the shortest path is somehow obstructed, then the second-best path will, at some later point in time, attain the strongest pheromone, will hence induce ants to traverse it, and will thereby be strengthened. The decay in the pheromone level allows for redundancy and robustness of the emergent solution, as well as dynamic adaptation.
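The evaporation-and-reinforcement dynamics described above can be sketched in a few lines of Python. This is a minimal illustration, not the chapter's protocol: the function names, the evaporation rate rho, and the deposit rule q / len(path) are all illustrative assumptions.

```python
def evaporate(pheromone, rho=0.1):
    """Decay every link's pheromone level by a factor rho (evaporation)."""
    return {link: (1.0 - rho) * tau for link, tau in pheromone.items()}

def reinforce(pheromone, path, q=1.0):
    """Deposit pheromone along a successful path; a shorter path receives
    a larger per-link deposit (q / len(path)) and so is reinforced faster."""
    deposit = q / len(path)
    for link in zip(path, path[1:]):
        pheromone[link] = pheromone.get(link, 0.0) + deposit
    return pheromone

# Two candidate paths from colony A to food source F (hypothetical nodes):
pheromone = {}
short_path = ["A", "B", "F"]        # 2 hops
long_path = ["A", "C", "D", "F"]    # 3 hops
for _ in range(10):
    pheromone = evaporate(pheromone)
    pheromone = reinforce(pheromone, short_path)
    pheromone = reinforce(pheromone, long_path)

# The first link of the shorter path accumulates more pheromone,
# so subsequent ants are biased toward it.
assert pheromone[("A", "B")] > pheromone[("A", "C")]
```

Because evaporation discounts old deposits geometrically while the shorter path deposits more per link on every trip, the gap between the two paths stabilizes rather than growing without bound, which is what keeps the system adaptive when a path becomes obstructed.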

Such reinforcement learning methods are known in the SI literature as Ant Colony Optimization (ACO) (Dorigo & Stutzle, 2004), or simply ant algorithms, and have been used to provide heuristics for a number of difficult combinatorial optimization problems, such as the Traveling Salesman Problem (Dorigo & Stutzle, 2004), the Job-Shop Scheduling Problem (Colorni, Dorigo, Maniezzo, & Trubian, 1994), the Graph Coloring Problem (Costa & Hertz, 1997), the Vehicle Routing Problem (Bullnheimer, Hartl, & Strauss, 1999), and the Quadratic Assignment Problem (Maniezzo, 1999). In many cases, ACO algorithms are competitive with, if not superior to, well-established heuristics.

In the search for food sources, ants and many other insects are not just guided by pheromones, but are also attracted by the scent emanating from the food sources themselves (Edmund, Mark, & Ryohei, 1993), as illustrated in Figure 1. The food scent grows stronger as insects approach the food source and exhibits a decreasing gradient as they move farther away from it. In particular, insects wander until they pick up the slightest hint of this scent, and then very quickly follow the scent gradient toward the source. This scent-based mechanism is intended to interact with and enhance any of the ACO algorithms, which typically use only pheromones.
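The way a destination scent could bias an ant packet's next-hop choice alongside pheromone can be sketched as a weighted random selection. This is a hypothetical illustration in the spirit of the mechanism above, not SNP itself: the exponents alpha and beta and all node names are assumptions.

```python
import random

def next_hop(neighbors, pheromone, scent, alpha=1.0, beta=2.0, rng=random):
    """Pick a neighbor with probability proportional to
    pheromone^alpha * scent^beta; a stronger scent marks a neighbor
    closer to the destination (decreasing gradient away from it)."""
    weights = [(pheromone[n] ** alpha) * (scent[n] ** beta) for n in neighbors]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    cum = 0.0
    for n, w in zip(neighbors, weights):
        cum += w
        if r <= cum:
            return n
    return neighbors[-1]

# With equal pheromone, the scent gradient alone steers most ant packets
# toward the neighbor nearer the destination.
rng = random.Random(0)              # seeded for reproducibility
neighbors = ["n1", "n2"]
pheromone = {"n1": 1.0, "n2": 1.0}
scent = {"n1": 0.9, "n2": 0.1}      # n1 lies near the destination
picks = sum(1 for _ in range(1000)
            if next_hop(neighbors, pheromone, scent, rng=rng) == "n1")
assert picks > 900
```

Weighting scent more heavily than pheromone (beta > alpha) captures the behavior described above: once any hint of scent is detected, the gradient dominates the wandering phase and pulls packets quickly toward the source.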

Figure 1.

Food (destination) scent attracts ants (ant packets)


The use of pheromones along with food scent forms the basis of what amounts to a clever information storage and retrieval system. The fact that pheromone and food scent strengths or intensities decay over time suggests that they are very simple information-processing mechanisms that can also implement a form of positive and negative feedback. This "processing" capability is illustrated by the simplicity of how ants utilize and respond to pheromones and food scent.
