Premier Inn Wifi – If only it were consistent.

I recently heaped praise on Premier Inn for providing a good wifi service in one of their hotels.

Sadly, this is not consistent across all their properties. I’m currently staying in another Premier Inn just down the road from the one with the good wifi (which was already full for this evening).

The wifi performance here isn’t great at all…

This is as good as it got. Fail.

It does have sensibly laid out 5GHz and 2.4GHz spectrum like the other Premier Inn, so the wifi architecture seems sound. What’s different here is the backhaul technology.

The other property was on what appeared to be a VDSL line from a more specialist business ISP. It also had the advantage of being shared between only 20-odd rooms.

This Premier Inn is much larger, but based on the ISP (Sharedband) it is likely using a link-bundled ADSL2 connection, shared amongst many more users – about 140 rooms. I’ve noticed several other Arqiva-managed hotspots using Sharedband as the backhaul technology, and these have all suffered from very slow speeds, high latency and signs of heavy oversubscription and congestion.

Notice the “star rating” on the Speedtest above. One star. Lots of unhappy punters?

I’m currently getting better performance on a 3G modem. (No 4G coverage on my provider in this area.)

It would be great if Premier Inn could offer a more consistent experience in its wifi product – and I mean a consistently good experience such as the one I enjoyed just up the road in Abingdon, not the lowest common denominator of the congested, barely useable mess here.

They aim for a consistent product in the rest of their offerings and for the most part achieve it. However, if I were staying only in this property, I’d be asking for a refund of the wifi charge.

Update at 1am, after the fire alarm went off around 11.30pm and caused the hotel to be evacuated…

I can just about get 3Mb/sec down (and less than 256k up) out of the connection here now, and I assume the large majority of guests are sleeping. Still less than great. This is very obviously based around oversubscribed link-bundled ADSL stuff.

Today’s wifi moan

Currently in St Julians, Malta. This spectrum makes me sad.

Of course, there’s nothing in the 5GHz spectrum

Congested to hell, barely useable and riddled with “supposedly faster” wide channels.

What makes me even sadder is that the hotel wifi firewall is blocking all sorts of stuff, including ssh and vpns. Sigh.

Guess I’ll have to go back to the beach…

A hotel that got wifi right

While I’m normally one to highlight when something is done badly, I also want to give praise where it is due.

I’m currently staying in a Premier Inn in leafy Abingdon. The data service here that I’d normally tether to is next to non-existent, dropping out all over the place. It looks like I’m in the shadow of some structure between me and the Three UK antenna. There are also a couple of watercourses in between, which might be hindering the signal.

So, I’m forced onto one of my pet hates, paid-for hotel wifi. Remember that Premier Inn are marketed as a “no frills” hotel – but they are almost always spotlessly clean and consistent.

It was either pony up for that or go and track down (and pay for) an O2 PAYG data sim, as I do at least have line of sight from my room here to one of their masts.

Firstly, I fired up Wifi Explorer, and took a look at what is deployed here.

Nice, uncrowded 2.4GHz spectrum, sensibly placed channels.

Not only was the 2.4GHz likely to work okay, but they had 5GHz too!

Wow! 5GHz as well.

So, I decided that it was worth a spin. I signed up for the free half hour. Then I actually found I could get real work done on this connection, so I gave it a speed test.

Reasonably speedy too. I’d guess it’s a VDSL line. Might get crowded later, I guess?

Not only have they got 5GHz, but they have recently slashed their prices. Some would say that it should be free anyway, but £3 for the day, or £20 for a month seemed a reasonable deal, especially if you’re staying in a Premier Inn a lot (I’m actually back here again next week).

I’ve not tried connecting multiple devices simultaneously using the same login, but I suspect you can’t, which is possibly the only downside.

However, big props to the folks at Premier Inn for actually having a wifi install that works, even if that means having to pay for it. I’ve seen much worse services in high-end hotels which have under-provisioned, congested (and often expensive) 2.4GHz networks.

Credit where it is earned, indeed.


Update: Sadly, it seems Premier Inn have decided we can’t have too much of a good thing, and need to manage our expectations. It’s alleged that they have therefore plotted with those dastardly people at Arqiva to make Premier Inn wifi universally shit.

Please read this item from Bill Buchan, which reports that the wifi is now clamped to 2.5Mb/sec on the premium “ultimate” offering.

The question I’ve got is this: if the “ultimate wifi” is, as they market it, 8x faster than the free wifi, then that makes the free wifi out to be <500Kb/sec.
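The arithmetic behind that estimate is trivial, but here it is as a throwaway sketch anyway:

```python
# Working backwards from the marketing claim and the reported cap.
ultimate_mbps = 2.5      # alleged clamp on the premium "ultimate" tier
claimed_multiplier = 8   # "8x faster than the free wifi"

free_kbps = ultimate_mbps / claimed_multiplier * 1000
print(f"free tier works out at ~{free_kbps}Kb/sec")  # 312.5 – well under 500
```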

I can just imagine a load of product management muppets sat around some buzzword infested meeting room table, cowed by groupthink, agreeing this is a good idea.

Public wifi – why use more radio spectrum than you need?

Here’s the second of my series of little rants about poor public wifi – this time, why use more spectrum than you need?

Using the Wifi Explorer tool, I was recently checking out a venue with a modern public wifi installation. Here’s what the 5GHz spectrum looked like:

I’ve redacted the SSIDs so that we aren’t naming names and are hopefully saving face.

You’re probably thinking “They aren’t using much spectrum at all“, right? All their access points are clustered down on four channels – which is in itself not a good idea.

Note that they are using “wide” 40MHz channels – the signal from each access point occupies two standard 20MHz channels. Networks are usually set up like this to increase the available bandwidth, by using multiple signals on multiple radio channels at once between the base station and the client.

This was also a brand new installation, and the access points were supporting 802.11a, n and ac, and the Wifi Explorer tool reported each AP could support a theoretical speed of 540Mb/sec.

What if I told you the access circuit feeding this public wifi network, and therefore the most bandwidth available to any single client, was 100Mb/sec?

Vanilla 802.11a would give a maximum data rate of 54Mb/sec (probably about 30Mb/sec usable payload) on a single channel; with 802.11n (MIMO) this could be 150 or 300Mb/sec. Plenty for getting to that 100Mb.

Thus, rather than having as many as four overlapping access points sharing the same channels, this could be reduced significantly by only using 20MHz channels. The result would be less radio congestion (fewer clients on the same frequency), and it probably wouldn’t negatively affect access speeds for the clients on the network.
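To put rough numbers on that argument, here’s a back-of-envelope sketch. The ~60% MAC-efficiency figure is my assumption, not a measurement, and the rate labels are the loose figures from the text rather than precise MCS entries:

```python
# Nominal PHY rates (Mb/sec) for a few configurations mentioned above.
phy_rates = {
    "802.11a (20MHz)": 54,
    "802.11n (one modest config)": 150,
    "802.11n (40MHz, MIMO)": 300,
    "802.11ac (as reported here)": 540,
}
backhaul = 100  # Mb/sec access circuit feeding the whole venue

for name, phy in phy_rates.items():
    usable = phy * 0.6  # crude allowance for MAC/protocol overhead
    ceiling = min(usable, backhaul)
    print(f"{name}: ~{usable:.0f}Mb/sec over the air, "
          f"client ceiling {ceiling:.0f}Mb/sec")
```

The point the numbers make: anything from a 40MHz 802.11n channel upwards is already capped by the 100Mb/sec circuit, so the extra spectrum the wide channels burn buys the clients nothing.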

There’s also the question of why all 6 access points visible in this sweep are spread across just two 40MHz channels.

The main reason is that DFS (Dynamic Frequency Selection) and TPC (Transmit Power Control) are required for any of the channels highlighted with blue numbers in the chart above – this is also known as “Radar Detection”, because some radar operates in these channels. An access point running DFS will “listen” first for any radar signals before choosing an unoccupied channel and advertising the network SSID. If it hears any radar transmissions, it will shut down and move channel.
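That listen-then-vacate behaviour can be sketched in a few lines. This is an illustrative model, not real AP firmware – the channel lists are an example subset and `radar_heard` is a stand-in for the radio’s actual detector:

```python
import random

# Example subset of 5GHz channels where DFS/TPC is mandated;
# the exact list varies by regulatory region.
DFS_CHANNELS = {52, 56, 60, 64, 100, 104, 108, 112, 116,
                120, 124, 128, 132, 136, 140}
NON_DFS_FALLBACKS = [36, 40, 44, 48]

def radar_heard(rng):
    """Stand-in for the radio's radar detector. Indoors, deep inside
    a building, this fires only rarely."""
    return rng.random() < 0.02

def bring_up(channel, rng=None):
    """Sketch of DFS bring-up: on a radar channel the AP must listen
    first (the Channel Availability Check) before it may beacon; if
    radar is heard, it must vacate and pick another channel."""
    rng = rng or random.Random()
    while channel in DFS_CHANNELS and radar_heard(rng):
        channel = rng.choice(NON_DFS_FALLBACKS)  # retreat to a predictable channel
    return channel  # only now may the AP advertise its SSID here

print(bring_up(100))  # usually stays on 100; falls back to 36-48 if "radar" fired
```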

Sure, avoiding the DFS mandatory channels gives more predictability in your channel use, and means you aren’t affected by an access point needing to go off air.

However, an option in designing the network could be to use the DFS mandatory channels to increase available spectrum, but strategically place access points on non-DFS channels spatially in between those using DFS, getting away from the “listen on startup” phase (e.g. if there’s a need to reset an access point), or from the service suddenly going offline because of radar detection.

Also, remember that this is an indoor deployment and well inside a building. The chances of encountering radar interference are relatively low. I don’t recall seeing a problem using DFS when I’ve deployed temporary networks for meetings.

The other thing to note is that this deployment is not using a controller-based architecture. It is made of access points which can signal control messages between each other, but each access point effectively maintains its own view of the world. (Those of you in the wifi space can probably now work out who I’m talking about.)

Is the above configuration using so few channels, and using them unwisely considering the target bandwidth actually available to the wifi clients, just asking for trouble once a few hundred users show up?


Event (or Public) Wifi. It’s not that difficult.

A not uncommon source of frustration is poor wifi access in public spaces such as hotels, airports, etc., and by extension, poor wifi at events. We’ve all seen the issues – very slow downloads, congestion, dropped connections.

One of the things I do is regularly attend Internet industry events, and by nature of what they are, they are full of geeks, nerds and other types of “heavy user”, and they need their own significant wifi capability to support the event. Yet even those events, which don’t rely on the in-house wifi provision, still run into problems – for instance, the most recent NANOG meeting had some significant wifi issues on the first day, though they did have the challenge of serving over 800 users in a relatively tight space.

I’ve also been involved in setting up connectivity at meetings, so I know from experience that it’s not that difficult to provide good quality wifi. You just need to make a little bit of effort.

This is probably the first of a short series of posts, where I’ll share nuggets of wisdom I’ve picked up over the years. I’m going to assume that if you’re reading this, you know a bit about wifi.

1) Band steering does not work reliably enough

Wifi operates in two bands, 5GHz and 2.4GHz. The 2.4GHz spectrum is congested. Most modern clients support both bands. Band steering is an attempt to push clients that can use the 5GHz spectrum off the 2.4GHz spectrum. It works by having a base station “ignore” 2.4GHz association attempts from a client that is known to be capable of associating on 5GHz.

However, experience shows that this is not reliable for all clients, and many clients which could be associated with a 5GHz base station end up associated not only with a 2.4GHz base station, but one which is suboptimal (e.g. further away).

Band steering seems especially problematic when enabled on autonomous base stations. At least on a centrally orchestrated controller-based network, the controller can simultaneously tell all base stations to band steer a particular client.
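The “ignore and hope” logic described above is simple enough to sketch. This is a toy model of the mechanism, not any vendor’s implementation – the function and parameter names are made up for illustration:

```python
def should_respond(client_bands_seen, band, attempt, max_ignores=2):
    """Sketch of the band-steering trick: stay silent to a dual-band
    client's 2.4GHz association attempts, hoping it retries on 5GHz.
    Real clients time out, roam, or latch onto a distant 2.4GHz AP
    instead - which is why this is unreliable in practice."""
    if band == "5GHz":
        return True                 # 5GHz attempts are always answered
    if "5GHz" in client_bands_seen and attempt <= max_ignores:
        return False                # ignore: nudge the client towards 5GHz
    return True                     # give up steering; let it on 2.4GHz

seen = {"2.4GHz", "5GHz"}           # a dual-band capable client
print(should_respond(seen, "2.4GHz", attempt=1))  # False - ignored
print(should_respond(seen, "2.4GHz", attempt=3))  # True - steering gave up
```

Note the failure mode baked into the model: everything depends on the client retrying sensibly, which is exactly the part nobody controls.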

Which leads me on to…

2) Run separate SSIDs for 5GHz and 2.4GHz bands

If band steering is unreliable, then you need some other way of getting clients on the “right” wifi band. Running separate SSIDs is the best way of doing this.

The majority of modern clients support 5GHz (802.11a). I would therefore recommend that your main/primary SSID is actually 5GHz only. All the clients will ordinarily connect to that.

You can then set up a second SSID which could end in “-2.4” or “-g” or “-legacy” for the non-5GHz clients to connect to. The 2.4GHz clients will only see this SSID and not the 5GHz one. There are significantly fewer 2.4GHz clients around these days.

At the end of the day, both the 5GHz and 2.4GHz wifi SSIDs should then still be bridged to the same backend network so that the network services are the same for the 5GHz and 2.4GHz clients.
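As a concrete (if hypothetical) illustration, this is roughly what the two-SSID setup looks like in hostapd terms. Interface names, SSIDs, channels and the bridge name here are all made up – treat this as a sketch, not a drop-in config:

```
# Radio 1: the primary, 5GHz-only SSID (assumed interface wlan0)
interface=wlan0
bridge=br0           # same backend network for both bands
ssid=EventNet
hw_mode=a            # 5GHz
channel=36
ieee80211n=1
ieee80211ac=1

# Radio 2: the legacy 2.4GHz SSID (assumed interface wlan1)
interface=wlan1
bridge=br0           # bridged to the same backend, so services match
ssid=EventNet-legacy
hw_mode=g            # 2.4GHz
channel=1
ieee80211n=1
```

In practice these would be two separate hostapd config files, one per radio; the key points are the distinct SSID names and the shared bridge.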

3) Turn off 802.11b

Are there any 802.11b only devices still around in regular use?

Disabling 802.11b will restrict your 2.4GHz spectrum to .11g capable devices only, and has the effect of raising the minimum connect speed.

If you know you don’t have to support legacy 802.11b devices, switch off support for it.

4) Restrict the minimum connect speed

The default configuration on some base stations, especially if 802.11b is switched on, is to accept wifi connect speeds as slow as 1Mb/sec.

All you need is one rogue, buggy or distant client connecting at a slow speed to act as a “lowest common denominator”, bringing your wifi to its knees and slowing all clients on that base station to a crawl.
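Points 3 and 4 often amount to a couple of lines of AP configuration. Here’s a sketch in hostapd terms (rates are expressed in units of 100kb/sec; illustrative rather than a drop-in config):

```
hw_mode=g
# OFDM rates only - omitting 10/20/55/110 drops the 1/2/5.5/11Mb/sec
# 802.11b rates entirely (point 3).
supported_rates=60 90 120 180 240 360 480 540
# Raise the floor: beacons and management traffic at 12/24Mb/sec, so a
# distant client can't drag everyone down to 1Mb/sec (point 4).
basic_rates=120 240
```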

Right. That’s it for today.

Eventually I’ll be showing you how you can run wifi for a 200-300 person meeting out of a suitcase.

For peering in New York, read New Amsterdam

Dutch East India Company Logo
It’s colonialism all over again. Just not as we know it…

Last week, there was this announcement about the establishment of a new Internet Exchange point in New York by the US arm of the Amsterdam Internet Exchange – “AMS-IX New York” – or should that be “New Amsterdam”… 🙂

This follows on from the vote among AMS-IX members on whether or not the organisation should establish an operation in the US, which was carried by a fairly narrow majority. I wrote about this a few weeks ago.

This completes the moves by the “big three” European IX operators into the US market, arriving on US shores under the umbrella of the Open-IX initiative to increase market choice and competitiveness of interconnection in the US markets.

LINX have established LINX-NoVA in the Washington DC metro area, and AMS-IX are proceeding with their NY-NJ platform, while DECIX have issued a press statement on their plan to enter the NY market in due course.

One of the key things this does is bring these three IXPs into real direct competition in the same market territory for the first time.

There has always been some level of competition among the larger EU exchanges when attracting new international participants; for instance, DECIX carved itself a niche attracting Eastern European and Russian players, because many carrier services to those regions would hub through Frankfurt anyway.

But each exchange always had its indigenous home market to provide a constant base load of members, and there wasn’t massive competition for the local/national peers, even though all three countries have a layer of smaller exchanges active in the home market.

Now, to some extent, they are going head-to-head, not just with US incumbents such as Equinix, TelX and Any2, but potentially with each other as well.

The other thing the AMS-IX move could do is fracture the NY peering market even further – it is already served by three, maybe four, sizeable exchanges. Can it sustain a fifth or sixth?

Is it going to be economical for ISPs and Content Providers to connect to a further co-terminous IXP (or two)? Can the NY market support that? Does it make traffic engineering more complex for networks which interconnect in NY? So complex that it’s not worth it? Or does it present an opportunity to be able to more finely slice-and-dice traffic and share the load?

Don’t forget we’re also in a market which has been traditionally biased toward minimising the amount of public switch-based peering in favour of private bi-lateral cross-connects. Sure, the viewpoint is changing, but are we looking for a further swing in a long-term behaviour?

We found out from experience in the 2000s that London can only really sustain two IXPs – LINX and LONAP. There were at least 4 well-known IXPs in London in the 2000s, along with several smaller ones. (Aside… if you Google for LIPEX today, you get a link to a cholesterol-reducing statin drug.)

Going to locations on the East Coast may have made sense when we sailed there in ships and it took us several weeks to do it, but that’s no reason for history to repeat itself in this day and age, is it? So why choose New York now?

Will the EU players become dominant in these markets? Will they manage to help fractured markets such as NY to coalesce? If they do, they will have achieved something that people have been trying to do for years. Or, will it turn out to be an interesting experiment and learning experience?

It will be interesting to see how this plays out over time.

My recent talk at INEX – Video

Or, I never thought of myself as a narcissist but…

Thanks to the folks at HEAnet, here’s a link to the video of the talk “It’s peering, Jim…” that I gave at the recent INEX meeting in Dublin, where I discuss topics such as changes in the US peering community thanks to Open-IX and try to untangle what people mean when they say “Regional Peering”.

The talk lasts around 20-25 minutes and I was really pleased to get around 15 minutes of questions at the end of it.

I also provide some fairly pragmatic advice to those seeking to start an IX in Northern Ireland during the questions. 🙂
