Public wifi – why use more radio spectrum than you need?

Here’s the second of my series of little rants about poor public wifi – this time, why use more spectrum than you need?

Using the Wifi Explorer tool, I was recently checking out a venue with a modern public wifi installation. Here’s what the 5GHz spectrum looked like:

[Image: crowded 5GHz spectrum as seen in Wifi Explorer]

I’ve redacted the SSIDs so that we aren’t naming names and we’re hopefully saving face.

You’re probably thinking “They aren’t using much spectrum at all”, right? All their access points are clustered down on just four channels – which in itself is not a good idea.

Note that they are using “wide” 40MHz channels – the signal from each access point occupies two standard 20MHz channels. Networks are usually set up like this to increase the available bandwidth: bonding two channels together lets each transmission between the access point and the client use a wider slice of spectrum, and therefore carry more data.
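For a feel of what bonding buys you, here’s a quick sketch using the headline 802.11n rates from the standard MCS table (short guard interval) – the exact figures a given AP advertises will vary with its radio and configuration:

```python
# Top 802.11n PHY rates (Mb/sec) from the standard MCS table, short guard interval.
# Bonding two 20MHz channels into one 40MHz channel roughly doubles the rate
# for the same number of spatial streams.
max_phy_rate_mbps = {
    # (spatial streams, channel width in MHz): rate
    (1, 20): 72.2,   (1, 40): 150,
    (2, 20): 144.4,  (2, 40): 300,
    (3, 20): 216.7,  (3, 40): 450,
}

for (streams, width), rate in sorted(max_phy_rate_mbps.items()):
    print(f"{streams} stream(s) @ {width}MHz: {rate}Mb/sec")
```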

This was also a brand new installation: the access points support 802.11a, n and ac, and Wifi Explorer reported a theoretical top speed of 540Mb/sec for each AP.

What if I told you the access circuit feeding this public wifi network, and therefore the most bandwidth available to any single client, was 100Mb/sec?

Vanilla 802.11a gives a maximum data rate of 54Mb/sec (probably about 30Mb/sec of usable payload) on a single 20MHz channel; with 802.11n and MIMO that becomes 150 or 300Mb/sec. Plenty for filling that 100Mb/sec circuit.
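As a back-of-the-envelope check – the 55% efficiency factor below is my assumption, real-world overhead from the MAC layer, retries and contention varies – the arithmetic looks roughly like this:

```python
# Rough sketch: PHY rate vs. usable throughput vs. the 100Mb/sec backhaul.
# The 0.55 efficiency factor is an assumption; real overheads vary.
EFFICIENCY = 0.55
BACKHAUL_MBPS = 100

phy_rates_mbps = {
    "802.11a, 20MHz, 1 stream": 54,
    "802.11n, 40MHz, 1 stream": 150,
    "802.11n, 40MHz, 2 streams": 300,
}

for mode, phy in phy_rates_mbps.items():
    usable = phy * EFFICIENCY
    verdict = "fills the backhaul" if usable >= BACKHAUL_MBPS else "below the backhaul"
    print(f"{mode}: ~{usable:.0f}Mb/sec usable ({verdict} at {BACKHAUL_MBPS}Mb/sec)")
```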

So rather than having as many as four overlapping access points sharing the same channels, the overlap could be reduced significantly by using only 20MHz channels. That would mean less radio congestion (fewer clients contending on the same frequency), and probably wouldn’t negatively affect access speeds for the clients on the network.
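Here’s a minimal sketch of what that could look like for the six APs in the sweep. The channel list assumes an ETSI-style regulatory domain where 36–48 are the only non-DFS 5GHz channels, and the round-robin assignment is purely illustrative – a real plan would follow a site survey:

```python
# Minimal sketch: spreading the six visible APs over 20MHz channels instead of
# two bonded 40MHz channels. Channel list assumes an ETSI-style domain where
# 36-48 are the only non-DFS 5GHz channels; adjust for your regulatory domain.
NON_DFS_20MHZ = [36, 40, 44, 48]

aps = [f"AP{i}" for i in range(1, 7)]  # the six APs visible in the sweep

# Round-robin assignment: at most two APs end up sharing any one 20MHz channel,
# versus as many as four sharing a channel in the sweep above.
plan = {ap: NON_DFS_20MHZ[i % len(NON_DFS_20MHZ)] for i, ap in enumerate(aps)}

for ap, channel in plan.items():
    print(f"{ap}: channel {channel} (20MHz)")
```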

There’s also the question of why all six access points visible in this sweep are crammed into just two 40MHz channels.

The main reason is that DFS (Dynamic Frequency Selection) and TPC (Transmit Power Control) are required for any of the channels highlighted with blue numbers in the chart above – this is also known as “Radar Detection”, because some radar systems operate on these channels. An access point running DFS will “listen” first for any radar signals before choosing an unoccupied channel and advertising the network SSID. If it hears any radar transmissions, it will shut down and move to another channel.
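To make that behaviour concrete, here’s a rough sketch of the sequence – the 60-second availability check and 30-minute non-occupancy period are the usual ETSI figures, but exact timers depend on your regulatory domain and channel, so treat them as indicative:

```python
# Rough sketch of the DFS sequence an access point follows on a radar-detection
# channel. The 60-second check and 30-minute non-occupancy period are typical
# ETSI figures; exact timers vary by regulatory domain and channel.
import random

CAC_SECONDS = 60                  # Channel Availability Check: listen before transmitting
NON_OCCUPANCY_SECONDS = 30 * 60   # channel is off-limits this long after radar is heard

DFS_CHANNELS = [52, 56, 60, 64, 100, 104, 108, 112]


def radar_heard(channel: int) -> bool:
    """Stand-in for the radio's radar detector (randomised here for illustration)."""
    return random.random() < 0.05


def bring_up(channels: list[int]) -> int | None:
    """Listen first, advertise the SSID only on a clear channel, move on otherwise."""
    for channel in channels:
        print(f"Listening on channel {channel} for {CAC_SECONDS}s before transmitting...")
        # (a real AP genuinely waits out the CAC period here)
        if radar_heard(channel):
            print(f"Radar heard: channel {channel} blocked for {NON_OCCUPANCY_SECONDS}s")
            continue
        print(f"Channel {channel} clear: advertising the SSID")
        return channel
    return None  # no clear DFS channel found


bring_up(DFS_CHANNELS)
```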

Sure, avoiding the DFS mandatory channels gives more predictability in your channel use, and means you aren’t affected by an access point needing to go off air.

However, one option in designing the network would be to use the DFS-mandatory channels to increase the available spectrum, while strategically placing access points on non-DFS channels spatially in between those using DFS. That way, if a DFS access point has to pause for its “listen on startup” phase (e.g. if there’s a need to reset it), or drops off the air because of a radar detection, the non-DFS access points around it keep the service running.
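A minimal sketch of that mixed plan – again, the channel lists and the simple alternation are assumptions for illustration, not a survey-based design:

```python
# Minimal sketch of a mixed plan: alternate APs between non-DFS and DFS channels
# so that if a DFS AP goes quiet (radar hit, or a restart's listen-first phase),
# its non-DFS neighbours keep the area covered. Channel lists assume an
# ETSI-style domain; a real design would follow a site survey, not alternation.
from itertools import cycle

NON_DFS = cycle([36, 40, 44, 48])
DFS = cycle([52, 56, 60, 64, 100, 104])

aps = [f"AP{i}" for i in range(1, 7)]  # e.g. in walking order along the venue

plan = {}
for i, ap in enumerate(aps):
    pool = NON_DFS if i % 2 == 0 else DFS  # every other AP stays off DFS channels
    plan[ap] = next(pool)

for ap, channel in plan.items():
    kind = "non-DFS" if channel <= 48 else "DFS"
    print(f"{ap}: channel {channel} ({kind})")
```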

Also, remember that this is an indoor deployment, well inside a building, so the chances of encountering radar interference are relatively low. I don’t recall seeing a problem using DFS when I’ve deployed temporary networks for meetings.

The other thing to note is that this deployment is not using a controller-based architecture. It is made up of access points which can signal control messages between each other, but each access point effectively maintains its own view of the world. (Those of you in the Wifi space can probably now work out who I’m talking about.)

Is the above configuration – using so few channels, and using them unwisely given the bandwidth actually available to the wifi clients – just asking for trouble once a few hundred users show up?