This results in essentially two algorithms: one for accommodating burstiness, and another for maintaining a particular level of traffic. I found it somewhat odd that these algorithms are packet-based rather than byte-based; the authors mitigate this by modifying the algorithm to mark packets with probability proportional to packet size, but this is not quite the same as making the queue calculation itself byte-based.
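To make the distinction concrete, here is a minimal sketch of the two marking modes; the names (mark_probability, MAX_PACKET_SIZE) are mine, not the paper's, and the MTU value is an assumption.

    MAX_PACKET_SIZE = 1500  # bytes; assumed MTU, not specified by the paper

    def mark_probability(p_b: float, packet_size: int, byte_mode: bool) -> float:
        """Return the per-packet marking probability.

        p_b is the base probability computed from the average queue size.
        In byte mode, small packets are marked less often, so a flow's
        chance of being marked becomes roughly proportional to the bytes
        it sends rather than the packets it sends.
        """
        if byte_mode:
            return p_b * packet_size / MAX_PACKET_SIZE
        return p_b  # packet mode: every packet sees the same probability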
The simulation results show that RED accomplishes its goals: it maintains the average queue size and drops packets with less bias against bursty traffic. Given TCP's limited feedback mechanism (a dropped packet is really the sender's only indication of congestion), the simulations demonstrate that RED uses packet dropping effectively to avoid congestion: as congestion begins to build, the gateway drops packets, and the average queue size falls in response.
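For readers who want the mechanics, here is a minimal sketch of RED's core drop decision, assuming the standard parameters from the paper (w_q, min_th, max_th, max_p); the class and method names are hypothetical, and the default values below are illustrative.

    import random

    class RedQueue:
        def __init__(self, w_q=0.002, min_th=5, max_th=15, max_p=0.02):
            self.w_q, self.min_th, self.max_th, self.max_p = w_q, min_th, max_th, max_p
            self.avg = 0.0   # exponentially weighted moving average of queue size
            self.count = -1  # packets since the last marked packet

        def should_drop(self, current_queue_len: int) -> bool:
            # Low-pass filter: lets short bursts through while still
            # tracking longer-term congestion.
            self.avg = (1 - self.w_q) * self.avg + self.w_q * current_queue_len
            if self.avg < self.min_th:
                self.count = -1
                return False   # no congestion: never drop
            if self.avg >= self.max_th:
                self.count = 0
                return True    # sustained congestion: always drop
            # Between the thresholds: drop with a probability that rises
            # linearly with avg, adjusted by count to space drops out.
            self.count += 1
            p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            p_a = p_b / max(1 - self.count * p_b, 1e-9)
            if random.random() < p_a:
                self.count = 0
                return True
            return False

The count term spreads drops out in time, so consecutive packets from the same burst are less likely to be hit together; this is how RED reduces its bias against bursty traffic.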
RED is not fair in the max-min sense, under which no flow can increase its rate without reducing the rate of a flow that is already getting less. In fact, the authors say fairness is not "well-defined," but I think there are reasonable fairness metrics; RED does not in any way guarantee fairness or ensure equitable utilization of gateway resources. Like all the congestion avoidance schemes we've looked at, it assumes cooperative clients and does not deal with malicious users.
In addition, RED is quite sensitive to parameter selection, and different parameters seem to work for different workloads. The authors do provide some guidelines for parameter selection, but it is not clear how the parameters interact with different traffic characteristics. Configuring routers optimally for each kind of traffic thus seems to be a matter of guess and check.
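As an illustration of the sort of rule-of-thumb configuration those guidelines suggest, here is a hypothetical helper; the defaults echo values discussed in the paper (a small w_q, max_th at least twice min_th), but the helper itself and its derivation are my own framing, not the authors'.

    def red_params(target_avg_queue: int) -> dict:
        """Derive a RED configuration from a desired average queue size,
        following the paper's rough guidelines: min_th sized for the
        burstiness you want to absorb, max_th at least twice min_th, and
        a small w_q so transient bursts don't trigger drops.
        """
        min_th = target_avg_queue  # lower threshold on the average queue
        max_th = 2 * min_th        # guideline: max_th >= 2 * min_th
        w_q = 0.002                # small queue weight, as in the paper
        max_p = 0.02               # cap on the base marking probability
        return {"min_th": min_th, "max_th": max_th, "w_q": w_q, "max_p": max_p}

Even with a helper like this, the interaction between w_q and the traffic's burst timescale is what makes the tuning workload-dependent.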
Overall, given the limitations of TCP and the desire for a simple algorithm that prevents congestion, RED is an interesting and usefully simple way to maintain performance as congestion approaches.