The past decade could be classified as the “decade of connectivity”; in fact, it is commonplace for computers to be connected to a LAN, which in turn is connected to a WAN, which provides an Internet connection. At the application level, this connectivity allows access to data that even five years earlier were unavailable to the general population. This growth has not occurred without problems, however. The number of users and the complexity and size of their applications continue to mushroom. Many networks are over-subscribed in terms of bandwidth, especially during peak usage periods. Network growth was often unplanned, and such networks suffer from poor design. The explosive growth has also often necessitated crisis management just to keep basic applications running. Whatever the source of the problem, it is clear that proactive design and management strategies need to be employed to optimize available networking resources (Fortier & Desrochers, 1990). This is especially true in today’s world of massive Internet usage (Zhu, Yu, & Doyle, 2001).
Obviously, one way to increase network bandwidth is to increase the speed of the links. However, this may not always be practical due to cost or implementation time. Furthermore, this solution needs to be carefully thought out, because increasing speed in one part of a network could adversely affect response time in another part of that network. Another solution is to optimize the currently available bandwidth through programming logic. Quality of service (QoS) and reservation bandwidth (RB) are two popular methods currently in use. Implementation of these optimization methods is rarely simple and often requires a high degree of experimentation if they are to be configured effectively (Walker, 2000). This experimentation can have detrimental effects on a live network, often taking resources away from mission-critical applications. The client/server model so popular in Internet communication is a prime example of a system that can benefit from an analytical modeling strategy (Postigo-Boix, Garcia-Haro, & Melus-Moreno, 2005).
Key Terms in this Chapter
Markov Chain: A stochastic model in which the probability of the next event, or “state,” depends only on the current state, not on the sequence of states that preceded it.
M/M/1 Model (Exponential/Exponential with One Server): The queuing model that assumes an exponential distribution for inter-arrival times, an exponential distribution for service times, and a single server.
Quality of Service (QoS): A method of marking certain packets for special handling to ensure high reliability or low delay.
Inter-Arrival Time: The amount of time that elapses between the arrival of one packet and the arrival of the next.
Packet: A finite stream of bits sent in one block with header and trailer bits to perform routing and management functions.
Throughput: The number of bytes transferred per second across a computer network.
Packet Intensity: The number of packets transferred per second across a computer network.
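The M/M/1 model defined above admits closed-form steady-state performance metrics. The sketch below illustrates the standard textbook relations (utilization ρ = λ/μ, mean number in system L = ρ/(1 − ρ), mean time in system W = 1/(μ − λ)); the arrival and service rates used in the example are hypothetical, not taken from the chapter.

```python
# Closed-form steady-state metrics for an M/M/1 queue.
# Illustrative sketch only; lam and mu below are hypothetical values.

def mm1_metrics(lam: float, mu: float) -> dict:
    """Return standard M/M/1 steady-state metrics.

    lam -- mean arrival rate (packets/second), i.e., the reciprocal
           of the mean exponential inter-arrival time
    mu  -- mean service rate (packets/second) of the single server
    Requires lam < mu for the queue to be stable.
    """
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number of packets in the system
    Lq = rho * rho / (1 - rho)     # mean number waiting in the queue
    W = 1 / (mu - lam)             # mean time in the system (seconds)
    Wq = rho / (mu - lam)          # mean waiting time before service begins
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: packets arriving at 8 packets/s on a link serving 10 packets/s.
m = mm1_metrics(lam=8.0, mu=10.0)
```

With these hypothetical rates, the server is 80% utilized and a packet spends, on average, half a second in the system; note how both L and W grow without bound as λ approaches μ, which is why over-subscribed links degrade so sharply at peak usage.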