PERFORMANCE OF VARIOUS INPUT-BUFFERED AND OUTPUT-BUFFERED ATM SWITCH DESIGN PRINCIPLES UNDER BURSTY TRAFFIC - SIMULATION STUDY

Authors
Citation
S.C. Liew, PERFORMANCE OF VARIOUS INPUT-BUFFERED AND OUTPUT-BUFFERED ATM SWITCH DESIGN PRINCIPLES UNDER BURSTY TRAFFIC - SIMULATION STUDY, IEEE Transactions on Communications, 42(2-4), 1994, pp. 1371-1379
Citations number
12
Subject categories
Telecommunications; Engineering, Electrical & Electronic
ISSN journal
0090-6778
Volume
42
Issue
2-4
Year of publication
1994
Part
2
Pages
1371 - 1379
Database
ISI
SICI code
0090-6778(1994)42:2-4<1371:POVIAO>2.0.ZU;2-N
Abstract
This paper investigates the packet loss probabilities of several alternative input-buffered and output-buffered switch designs with finite amounts of buffer space. The effects of bursty traffic, modeled by geometrically distributed active and idle periods, are explored. Methods for improving switch performance are classified, and their effectiveness for dealing with bursty traffic is discussed. This work indicates that bursty traffic can degrade switch performance significantly and that it is difficult to alleviate the performance degradation by merely restricting the offered traffic load. Unless buffers are shared, or very large buffers are provided, strategies that improve throughput under uniform random traffic are not very effective under bursty traffic. For input-buffered switches, our investigation suggests that the specific contention resolution scheme we use is a more important performance factor under bursty traffic than it is under uniform random traffic. In addition, many qualitative results true for uniform random traffic are not true for bursty traffic. The work also reveals several interesting, and perhaps unexpected, results: 1) output queueing may have higher loss probabilities than input queueing under bursty traffic; 2) speeding up the switch operation could result in worse performance than having several output ports per output address under bursty traffic; and 3) if buffers are not shared in a fair manner, sharing buffers could make performance worse than not sharing them at high traffic loads. Simulation results and intuitive explanations supporting the above observations are presented.
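The bursty traffic model named in the abstract — cell arrivals driven by geometrically distributed active (on) and idle (off) periods — can be sketched as a simple two-state slot-by-slot generator. This is an illustrative reconstruction, not the paper's simulator; the function name and parameters (`p_end_active`, `p_end_idle`) are assumptions, chosen so that mean burst length is 1/`p_end_active` slots and offered load is the fraction of time spent in the active state.

```python
import random

def bursty_source(p_end_active, p_end_idle, n_slots, seed=0):
    """Generate a 0/1 cell-arrival sequence from a geometric on/off source.

    While active, one cell arrives per slot; the active period ends after
    each slot with probability p_end_active, so burst lengths are geometric
    with mean 1/p_end_active. Idle periods behave symmetrically with
    parameter p_end_idle.
    """
    rng = random.Random(seed)
    slots = []
    active = False  # start in an idle period
    for _ in range(n_slots):
        slots.append(1 if active else 0)
        # State transition at the end of the slot (geometric holding times).
        if active:
            if rng.random() < p_end_active:
                active = False
        else:
            if rng.random() < p_end_idle:
                active = True
    return slots

# Mean burst 10 slots, mean idle period 40 slots:
# expected offered load = 10 / (10 + 40) = 0.2.
cells = bursty_source(0.1, 0.025, 100_000)
load = sum(cells) / len(cells)
```

Sweeping `p_end_active` downward at fixed load lengthens bursts, which is the regime in which the abstract reports that shared or very large buffers become necessary.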