A hotel that got wifi right

While I’m normally the one to highlight when something is done badly, I also want to give praise where it is due.

I’m currently staying in a Premier Inn in leafy Abingdon. The data service here that I’d normally tether to is next to non-existent, dropping out all over the place. It looks like I’m in the shadow of some structure, between me and the Three UK antenna. There are also a couple of water courses in between, which might be hindering the signal.

So, I’m forced onto one of my pet hates, paid-for hotel wifi. Remember that Premier Inn are marketed as a “no frills” hotel – but they are almost always spotlessly clean and consistent.

It was either pony up for that or go and track down (and pay for) an O2 PAYG data sim, as I do at least have line of sight from my room here to one of their masts.

Firstly, I fired up Wifi Explorer, and took a look at what is deployed here.

Nice, uncrowded 2.4GHz spectrum, sensibly placed channels.

Not only was the 2.4GHz likely to work okay, but they had 5GHz too!

Wow! 5GHz as well.

So, I decided that it was worth a spin. I signed up for the free half hour. Then I actually found I could get real work done on this connection, so I gave it a speed test.

Reasonably speedy too. I’d guess it’s a VDSL line. Might get crowded later, I guess?

Not only have they got 5GHz, but they have recently slashed their prices. Some would say that it should be free anyway, but £3 for the day, or £20 for a month seemed a reasonable deal, especially if you’re staying in a Premier Inn a lot (I’m actually back here again next week).

I’ve not tried connecting multiple devices simultaneously using the same login, but I suspect you can’t, which is possibly the only downside.

However, big props to the folks at Premier Inn for actually having a wifi install that works, even if that means having to pay for it. I’ve seen much worse services in high-end hotels which have under-provisioned, congested (and often expensive) 2.4GHz networks.

Credit where it is earned, indeed.

Public wifi – why use more radio spectrum than you need?

Here’s the second of my series of little rants about poor public wifi – this time, why use more spectrum than you need?

Using the Wifi Explorer tool, I was recently checking out a venue with a modern public wifi installation. Here’s what the 5GHz spectrum looked like:

I’ve redacted the SSIDs so that we aren’t naming names and we’re hopefully saving face.

You’re probably thinking “They aren’t using much spectrum at all”, right? All their access points are clustered down on just 4 channels – which in itself isn’t a good idea.

Note that they are using “wide” 40MHz channels – the signal from each access point is occupying two standard 20MHz channels. Networks are usually setup like this to increase the amount of available bandwidth, by using multiple signals on multiple radio channels at once between the base station and the client.
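The bonding arithmetic can be sketched in a few lines (channel pairings follow the standard 5GHz plan; the helper function is purely illustrative):

```python
# Purely illustrative: a 40MHz "wide" channel bonds two adjacent 20MHz
# channels. The pairings below follow the standard 5GHz channel plan.
BONDED = {38: (36, 40), 46: (44, 48), 54: (52, 56), 62: (60, 64)}

def occupied_20mhz(wide_channels):
    """Return the set of 20MHz channels consumed by the given 40MHz channels."""
    used = set()
    for ch in wide_channels:
        used.update(BONDED[ch])
    return used

# Six APs clustered on just two wide channels, as in the sweep above:
aps = [38, 38, 38, 46, 46, 46]
print(sorted(occupied_20mhz(aps)))  # only four distinct 20MHz channels in use
```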

This was also a brand new installation, and the access points were supporting 802.11a, n and ac, and the Wifi Explorer tool reported each AP could support a theoretical speed of 540Mb/sec.

What if I told you the access circuit feeding this public wifi network, and therefore the most bandwidth available to any single client, was 100Mb/sec?

Vanilla 802.11a would give a maximum data rate of 54Mb/sec (probably about 30Mb/sec usable payload) on a single channel; with 802.11n (MIMO) this could be 150 or 300Mb/sec. Plenty for getting to that 100Mb.
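As a back-of-envelope sketch (the ~55% payload efficiency is my assumption, roughly matching the 54 → ~30Mb/sec rule of thumb above):

```python
# Rough model: a client never sees more than the smaller of the usable
# wifi payload and the access circuit feeding the network.
def usable_throughput(phy_mbps, circuit_mbps, efficiency=0.55):
    """efficiency=0.55 is an assumed rule of thumb, not a measured figure."""
    return min(phy_mbps * efficiency, circuit_mbps)

print(usable_throughput(54, 100))   # 802.11a: ~30Mb/sec, the wifi is the limit
print(usable_throughput(300, 100))  # 802.11n 2x2: capped by the 100Mb circuit
```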

Thus, rather than having as many as 4 overlapping access points sharing the same channels, the overlap could be reduced significantly by only using 20MHz channels. This would result in less radio congestion (fewer clients on the same frequency), and probably wouldn’t negatively affect access speeds for the clients on the network.

There’s also the question of why all 6 access points visible in this sweep are spread across just two 40MHz channels.

The main reason is that DFS (Dynamic Frequency Selection) and TPC (Transmit Power Control) are required for any of the channels highlighted with blue numbers in the chart above. DFS is also known as “Radar Detection”, because some radar operates in these channels. An access point running DFS will “listen” first for any radar signals before choosing an unoccupied channel and advertising the network SSID. If it hears any radar transmissions once it’s up, it will shut down and move channel.
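The “listen first” behaviour can be caricatured in a few lines (a toy model, not any real driver’s API):

```python
# Toy model of DFS channel selection: perform a Channel Availability
# Check on each candidate and pick the first channel with no radar heard.
def dfs_select(candidates, radar_channels):
    for ch in candidates:
        if ch not in radar_channels:  # CAC passed: no radar on this channel
            return ch
    return None  # every candidate occupied by radar; stay off air

print(dfs_select([52, 56, 60], radar_channels={52}))  # radar on 52 -> picks 56
```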

Sure, avoiding the DFS mandatory channels gives more predictability in your channel use, and means you aren’t affected by an access point needing to go off air.

However, an option when designing the network would be to use the DFS mandatory channels to increase the available spectrum, while strategically placing access points on non-DFS channels spatially in between those using DFS. That mitigates both the “listen on startup” delay (e.g. if an access point needs to be reset) and the risk of the service suddenly going offline because of radar detection.

Also, remember that this is an indoor deployment and well inside a building. The chances of encountering radar interference are relatively low. I don’t recall seeing a problem using DFS when I’ve deployed temporary networks for meetings.

The other thing to note is that this deployment is not using a controller-based architecture. It is made of access points which can signal control messages between each other, but each access point effectively maintains its own view of the world. (Those of you in the Wifi space can now probably work out who I’m talking about.)

Is the above configuration using so few channels, and using them unwisely considering the target bandwidth actually available to the wifi clients, just asking for trouble once a few hundred users show up?

Matisse @Tate Modern: Nuit de no-light

The Tate Modern are currently staging a fantastic exhibition of work from late in Henri Matisse’s life – his Cut Out era.

Some of the works are really quite amazing, from the classic Matisse use of colour, through to the sheer scale of the work, filling whole walls.

The exhibition culminates in the famous collaboration with glass craftsman Paul Bony, “Nuit de Noel”, or “Christmas Eve” to you and me.

However, having slowly unwrapped this “present”, layer on layer through the previous galleries, this amazing piece of art felt like an anti-climax, sat in its darkened room with its one-dimensional backlighting.

It certainly wasn’t how Matisse must have imagined it: constantly changing due to the vagaries of natural light, and casting its coloured patterns back into the room.

I wonder why the Tate didn’t try to exhibit Nuit de Noel with some sort of intelligent and programmable LED backlight that can emulate natural light, and how the light source would move with the day and with the seasons?

Nuit de Noel was originally commissioned by Time magazine to be put in its reception at the Rockefeller Center. Would it have been lit by the low winter sun, shining down the “alleyways” of Manhattan skyscrapers? Surely we can make that happen with modern technology?

Apparently, according to @gogo and @AmericanAir this blog is adult themed.

Well. If you ask American Airlines or GoGo Inflight Wifi, this blog is blocked because it contains “adult-and-pornography”…

Apparently, you’re looking at pr0n. Right now.

A reader just contacted me from Flight Level 330 to let me know he couldn’t read my blog. (Well, I suppose people need something to help them sleep…)

Looks like it’s the attack of some overzealous content filters, or maybe GoGo Inflight didn’t agree with my opinions on implementing event and public wifi?

Event (or Public) Wifi. It’s not that difficult.

A not uncommon source of frustration is poor wifi access in public spaces such as hotels, airports, etc., and by extension, poor wifi at events. We’ve all seen the issues – very slow downloads, congestion, dropped connections.

One of the things I do is regularly attend Internet industry events, which by their nature are full of geeks, nerds and other types of “heavy user”, and so need their own significant wifi capability to support the event. Yet even those events, which don’t rely on the in-house wifi provision, still run into problems – for instance, the most recent NANOG meeting had some significant wifi issues on the first day, though they did have the challenge of serving over 800 users in a relatively tight space.

I’ve also been involved in setting up connectivity at meetings, so I know from experience that it’s not that difficult to provide good quality wifi. You just need to make a little bit of effort.

This is probably the first of a short series of posts, where I’ll share nuggets of wisdom I’ve picked up over the years. I’m going to assume that if you’re reading this, you know a bit about wifi.

1) Band steering does not work reliably enough

Wifi operates on two bands, 5GHz and 2.4GHz, and the 2.4GHz spectrum is congested. Most modern clients support both bands. Band steering is an attempt to force clients that can use the 5GHz spectrum off the 2.4GHz spectrum. It works by having a base station “ignore” 2.4GHz association attempts from a client that is known to be capable of associating on 5GHz.

However, experience shows that this is not reliable for all clients, and many clients which could be associated with a 5GHz base station end up associated not only with a 2.4GHz base station, but one which is suboptimal (e.g. further away).

Band steering seems especially problematic when enabled on autonomous base stations. At least on a centrally orchestrated controller-based network, the controller can simultaneously tell all base stations to band steer a particular client.
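To illustrate the mechanism described above (hypothetical function and parameter names; real vendor implementations differ), a band-steering AP might do something like:

```python
# Sketch of band steering: silently ignore the first few 2.4GHz association
# attempts from a client the AP has also heard on 5GHz, in the hope that
# the client gives up and retries on 5GHz instead.
MAX_IGNORES = 3

def should_respond(client, band, seen_on_5ghz, ignore_counts):
    if band == "2.4" and client in seen_on_5ghz:
        if ignore_counts.get(client, 0) < MAX_IGNORES:
            ignore_counts[client] = ignore_counts.get(client, 0) + 1
            return False  # stay silent; maybe the client will try 5GHz
    return True  # 2.4GHz-only client, or a dual-band client that kept retrying

counts = {}
for attempt in range(4):
    print(should_respond("aa:bb:cc", "2.4", {"aa:bb:cc"}, counts))
```

The failure mode in the text falls straight out of this sketch: a stubborn dual-band client eventually gets accepted on 2.4GHz anyway, quite possibly on a distant, suboptimal AP.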

Which leads me on to…

2) Run separate SSIDs for 5GHz and 2.4GHz bands

If band steering is unreliable, then you need some other way of getting clients on the “right” wifi band. Running separate SSIDs is the best way of doing this.

The majority of modern clients support 5GHz (802.11a). I would therefore recommend that your main/primary SSID is actually 5GHz only. All the clients will ordinarily connect to that.

You can then set up a second SSID, ending in something like “-2.4” or “-g” or “-legacy”, for the non-5GHz clients to connect to. Clients which only support 2.4GHz will only see this SSID and not the 5GHz one. There are significantly fewer 2.4GHz-only clients around these days.

At the end of the day, both the 5GHz and 2.4GHz wifi SSIDs should then still be bridged to the same backend network so that the network services are the same for the 5GHz and 2.4GHz clients.
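As a sketch of the end state (hypothetical, loosely hostapd-flavoured parameter names): one SSID per band, both bridged onto the same backend network.

```python
# Illustrative only: two per-band SSIDs sharing one backend bridge, so
# clients on either band get exactly the same network services.
def ssid_config(name, band, bridge="br-guest"):
    return {"ssid": name, "band": band, "bridge": bridge}

primary = ssid_config("EventWifi", "5GHz")          # most clients land here
legacy = ssid_config("EventWifi-legacy", "2.4GHz")  # 2.4GHz-only clients

print(primary["bridge"] == legacy["bridge"])  # True: one shared backend
```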

3) Turn off 802.11b

Are there any 802.11b only devices still around in regular use?

Disabling 802.11b will restrict your 2.4GHz spectrum to .11g capable devices only, and has the effect of raising the minimum connect speed.

If you know you don’t have to support legacy 802.11b devices, switch off support for it.

4) Restrict the minimum connect speed

The default configuration on some base stations, especially if 802.11b is switched on, is to accept wifi connect speeds as slow as 1Mb/sec.

All you need is one rogue, buggy or distant client to connect at a slow speed, and this acts as a “lowest common denominator”, bringing your wifi to its knees and slowing all clients on that base station to a crawl.
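The “lowest common denominator” effect is easy to see with some rough airtime arithmetic (a simplified model assuming each client gets an equal share of frames, and ignoring management overhead):

```python
# With per-frame fairness, airtime per bit is dominated by the slowest
# client, so the aggregate tends towards the harmonic mean of the rates.
def aggregate_throughput(rates_mbps):
    airtime_per_bit = sum(1.0 / r for r in rates_mbps)  # relative airtime cost
    return len(rates_mbps) / airtime_per_bit

print(aggregate_throughput([54, 54, 54]))  # ~54Mb/sec shared between three
print(aggregate_throughput([54, 54, 1]))   # one 1Mb/sec client -> under 3Mb/sec
```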

Right. That’s it for today.

Eventually I’ll be showing you how you can run wifi for a 200-300 person meeting out of a suitcase.

Waiting for (BT) Infinity – an update

I mentioned in my last post that my partner’s mother was moving home this week, and how it looks like BT have missed an opportunity to give a seamless transition of her VDSL service.

The new house was only around the corner from the old one, so should be on the same exchange, and maybe even on the same DSLAM and cabinet. It had previously had VDSL service, judging from the master socket faceplate.

Was the jumpering in the cab over to the DSLAM still set up? Well, we dug out the old BT VDSL modem and HomeHub 3, and set those up.

Guess what…

The VDSL modem successfully trained up. The line is still connected to the VDSL DSLAM.

However, it’s failing authentication – a steady red “b”. Therefore it looks like the old gear won’t work on the new line.

But then the new HomeHub 5 they’ve needlessly shipped out won’t work either: we set that up too, and get an orange “b” symbol.

Evidently, something isn’t provisioned somewhere on the backend. Maybe the account credentials have been changed, or the port on the DSLAM isn’t provisioned correctly yet.

Does this look like a missed opportunity to provide a seamless transition, without the need for an engineer visit, or what?

When parents-in-law move homes – a tale of being “default” tech support

Sheesh BT.

The MiL has moved. Around the corner from her old house. She had BT Infinity (BT’s Retail FTTC product) at the old house. She ordered the service to be moved. The voice service was activated on the day she moved, but not the Internet access.

The new house has previously had FTTC with the last occupant, it has the FTTC faceplate. One can only assume that the “double jumpering” to the FTTC MSAN is still in place too.

I wouldn’t mind betting that it’s even coming off the same bloody street cab/MSAN.

Can we just take the old Homehub 3 and VDSL modem over and plug those in? Oh no.

BT have sent a new Homehub 5 and scheduled an engineer visit for Friday, 5 days after she’s moved in.

It just feels a bit wrong, and maybe even on the crazy side. In theory this could have been done as a simultaneous provide – i.e. both the voice and the internet service brought up at the same time, and in this case potentially without an engineer visit!

Who knows why it’s not happened. Certainly the MiL wouldn’t have known to ask for a “sim-provide”, but should she have to?