Thursday, October 2, 2008

Architecture and Evaluation of Roofnet

Roofnet is an 802.11 mesh network near MIT that relies on volunteers to host access points, providing multi-hop wireless links to the internet via NAT. The major goal is to ease deployment: node placement is unconstrained, configuration happens automatically, and multi-hop rather than single-hop routing extends the footprint of the network even to nodes that do not themselves have a wired connection. The use of multi-hop is one major difference between this network and those we have studied to date.

A custom route-discovery mechanism called Srcr is used to periodically find the best routes from an access point through the network. Although the algorithm is very simple, it seems to work, albeit in a way that does not scale: the flooded queries, combined with re-forwarding whenever a faster path is discovered, would probably hurt performance as the number of nodes increased, or if nodes were packed so closely together that a local path-explosion problem occurred (i.e. the best path keeps changing marginally because all paths are essentially equal up to some jitter). Another custom algorithm manages bitrate selection, using heuristics that differ from stock 802.11: Roofnet will tolerate high packet loss if the resulting throughput is higher than that of a low-bitrate selection where loss is low. IIRC, 802.11 treats packet loss as very bad and reduces the bitrate even at the cost of overall throughput.
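
To picture that flood-and-re-forward behavior, here is a toy sketch of my own, not the actual Srcr implementation; the link weights stand in for whatever transmission-time metric Srcr really uses, and a single source query stands in for the real protocol's packets.

    from collections import deque

    # Toy flood-based route discovery: a node re-broadcasts the query every
    # time it hears a copy that arrived over a better path than any it has
    # seen before. graph maps node -> {neighbor: link cost}; lower is better.
    def flood_discover(graph, source):
        best = {source: 0.0}              # best path metric heard so far
        forwards = {n: 0 for n in graph}  # how often each node re-forwards
        queue = deque([(source, 0.0)])    # copies of the query "in flight"
        while queue:
            node, metric = queue.popleft()
            forwards[node] += 1           # node (re-)broadcasts this copy
            for nbr, link in graph[node].items():
                m = metric + link
                if m < best.get(nbr, float("inf")):
                    best[nbr] = m         # improvement: neighbor floods again
                    queue.append((nbr, m))
        return best, forwards

    # C first hears the query over the poor direct link A-C, then again over
    # the better two-hop path A-B-C, and forwards both copies.
    g = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1}, "C": {"A": 5, "B": 1}}
    print(flood_discover(g, "A"))  # best: A=0, B=1, C=2; C forwards twice

Even in this three-node example C forwards twice; with many near-equal paths the number of re-forwards per query grows, which is the path-explosion worry above.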
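
The bitrate heuristic can be pictured the same way: pick the rate that maximizes expected throughput (bitrate times delivery probability) rather than the fastest rate whose loss stays low. A minimal sketch with made-up delivery probabilities follows; this is not Roofnet's actual code, and the loss-averse rule is only a caricature of what I recall stock 802.11 drivers doing.

    # Hypothetical per-rate delivery probabilities for one lossy link.
    rates_mbps = [1, 2, 5.5, 11]
    delivery = {1: 0.95, 2: 0.9, 5.5: 0.7, 11: 0.4}

    def roofnet_style(rates, p):
        # Maximize expected throughput, even if the loss rate is high.
        return max(rates, key=lambda r: r * p[r])

    def loss_averse(rates, p, max_loss=0.1):
        # Fastest rate whose loss stays under a threshold, otherwise fall
        # back to the lowest rate (a caricature of ARF-like behaviour).
        ok = [r for r in rates if 1 - p[r] <= max_loss]
        return max(ok) if ok else min(rates)

    print(roofnet_style(rates_mbps, delivery))  # 11 Mbps: 11 * 0.4 = 4.4 Mbps expected
    print(loss_averse(rates_mbps, delivery))    # 2 Mbps: loss kept at or below 10%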

Interestingly, the network functions despite a number of shortcomings. For one, most paths go through the same "important" short, high-speed links. Even though the network could survive losing some nodes, losing the nodes on these important links would disproportionately reduce connectivity. In addition, even in the full network, some pairs of nodes cannot route to one another; this isn't much of a problem as long as all communication is to the internet.

The most interesting result for me was that this network uses the original CSMA/CA algorithm from 802.11 (not anything new in 802.11b) and has RTS/CTS disabled. It seems to me that in this kind of network the hidden-node problem would be a big deal; in fact, that was my first thought when the network was described in section 2. However, their experimental results show that RTS/CTS doesn't help at all.

I would like to know why this is: it seems counterintuitive. Is it due to the particular implementation? I believe the CA part of CSMA/CA implements essentially what DS does, so perhaps RTS/CTS is redundant here. Still, it would be interesting to see how prevalent the hidden-node problem is on their network and whether 802.11's CA is sufficient.
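
To make that concrete for myself, here is a toy back-of-the-envelope Monte Carlo (mine, not from the paper) of two senders that cannot hear each other transmitting to a common receiver. It ignores backoff, ACKs, and capture, and the numbers are made up; the point is only that RTS/CTS shrinks the period during which a hidden sender can cause a collision from a whole data frame down to one short RTS, which is why I would naively expect it to help.

    import random

    DATA = 1500 * 8 / 11e6   # ~1.1 ms: a 1500-byte data frame at 11 Mbps
    RTS = 20 * 8 / 1e6       # ~0.16 ms: a 20-byte RTS at the 1 Mbps basic rate
    WINDOW = 0.01            # each sender starts at a random time within 10 ms

    def collision_prob(vulnerable, trials=100_000):
        # A collision occurs when the second sender starts within the
        # vulnerable period of the first, since it cannot carrier-sense it.
        hits = 0
        for _ in range(trials):
            a = random.uniform(0, WINDOW)
            b = random.uniform(0, WINDOW)
            if abs(a - b) < vulnerable:
                hits += 1
        return hits / trials

    # Without RTS/CTS the whole data frame is exposed to the hidden sender;
    # with it, only the RTS is (the later sender hears the CTS and defers).
    print("no RTS/CTS:  ", collision_prob(DATA))
    print("with RTS/CTS:", collision_prob(RTS))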
