Premier Inn Wifi – If only it were consistent.

I recently heaped praise on Premier Inn for providing a good wifi service in one of their hotels.

Sadly, this is not consistent across all their properties. I’m currently staying in another Premier Inn just down the road from the one with the good wifi (which was already full for this evening).

The wifi performance here isn’t great at all…

This is as good as it got. Fail.

It does have sensibly laid out 5GHz and 2.4GHz spectrum like the other Premier Inn, so the wifi architecture seems sound. What’s different here is the backhaul technology.

The other property was on what appeared to be a VDSL line from a more specialist business ISP. It also had the advantage that it was only shared between about 20-odd rooms.

This Premier Inn is much larger, but based on the ISP (Sharedband) it is likely to be using a link-bundled ADSL2 connection, shared amongst many more users – about 140 rooms. I’ve noticed several other Arqiva-managed hotspots using Sharedband as the backhaul technology, and these have all suffered from very slow speeds, high latency and signs of heavy oversubscription and congestion.

Notice the “star rating” on the Speedtest above. One star. Lots of unhappy punters?

I’m currently getting better performance on a 3G modem. (No 4G coverage on my provider in this area.)

It would be great if Premier Inn could offer a more consistent experience in its wifi product – and I mean a consistently good experience such as the one I enjoyed just up the road in Abingdon, not the lowest common denominator of the congested, barely usable mess here.

They aim for a consistent product in the rest of their offerings and for the most part achieve it; however, if this were the only property I was staying in, I’d be asking for a refund of the wifi charge.

Update at 1am, after the fire alarm went off around 11.30pm and caused the hotel to be evacuated…

I can just about get 3Mb/sec down (and less than 256k up) out of the connection here now, and I assume the large majority of guests are sleeping. Still less than great. This is very obviously based around oversubscribed link-bundled ADSL stuff.

Today’s wifi moan

Currently in St Julians, Malta. This spectrum makes me sad.

Of course, there’s nothing in the 5GHz spectrum

Congested to hell, barely usable and riddled with “supposedly faster” wide channels.

What makes me even sadder is that the hotel wifi firewall is blocking all sorts of stuff, including ssh and vpns. Sigh.

Guess I’ll have to go back to the beach…

A hotel that got wifi right

I’m normally the one to highlight when something is done badly, but I also want to give praise where it is due.

I’m currently staying in a Premier Inn in leafy Abingdon. The data service here that I’d normally tether to is next to non-existent, dropping out all over the place. It looks like I’m in the shadow of some structure, between me and the Three UK antenna. There are also a couple of water courses in between, which might be hindering the signal.

So, I’m forced onto one of my pet hates, paid-for hotel wifi. Remember that Premier Inn are marketed as a “no frills” hotel – but they are almost always spotlessly clean and consistent.

It was either pony up for that or go and track down (and pay for) an O2 PAYG data sim, as I do at least have line of sight from my room here to one of their masts.

Firstly, I fired up Wifi Explorer, and took a look at what is deployed here.

Nice, uncrowded 2.4GHz spectrum, sensibly placed channels.

Not only was the 2.4GHz likely to work okay, but they also had 5GHz too!

Wow! 5GHz as well.

So, I decided that it was worth a spin. I signed up for the free half hour. Then I actually found I could get real work done on this connection, so I gave it a speed test.

Reasonably speedy too. I’d guess it’s a VDSL line. Might get crowded later, I guess?

Not only have they got 5GHz, but they have recently slashed their prices. Some would say that it should be free anyway, but £3 for the day, or £20 for a month seemed a reasonable deal, especially if you’re staying in a Premier Inn a lot (I’m actually back here again next week).

I’ve not tried connecting multiple devices simultaneously using the same login, but I suspect you can’t, which is possibly the only downside.

However, big props to the folks at Premier Inn for actually having a wifi install that works, even if that means having to pay for it. I’ve seen much worse services in high-end hotels which have under-provisioned, congested (and often expensive) 2.4GHz networks.

Credit where it is earned, indeed.


Update: Sadly, it seems Premier Inn have decided we can’t have too much of a good thing, and need to manage our expectations. It’s alleged that they have therefore plotted with those dastardly people at Arqiva to make Premier Inn wifi universally shit.

Please read this item from Bill Buchan, which reports that the wifi is now clamped to 2.5Mb/sec on the premium “ultimate” offering.

The question I’ve got: if the “ultimate wifi” is, as they market it, 8x faster than the free wifi, then I make the free wifi out to be <500Kb/sec – 2.5Mb/sec divided by 8 is roughly 312Kb/sec.

I can just imagine a load of product management muppets sat around some buzzword infested meeting room table, cowed by groupthink, agreeing this is a good idea.

Public wifi – why use more radio spectrum than you need?

Here’s the second of my series of little rants about poor public wifi – this time, why use more spectrum than you need?

Using the Wifi Explorer tool, I was recently checking out a venue with a modern public wifi installation. Here’s what the 5GHz spectrum looked like:

I’ve redacted the SSIDs so that we aren’t naming names and we’re hopefully saving face.

You’re probably thinking “They aren’t using much spectrum at all”, right? All their access points are clustered down on just 4 channels – which in itself is not a good idea.

Note that they are using “wide” 40MHz channels – the signal from each access point is occupying two standard 20MHz channels. Networks are usually set up like this to increase the amount of available bandwidth, by using multiple signals on multiple radio channels at once between the base station and the client.

This was also a brand new installation, and the access points were supporting 802.11a, n and ac, and the Wifi Explorer tool reported each AP could support a theoretical speed of 540Mb/sec.

What if I told you the access circuit feeding this public wifi network, and therefore the most bandwidth available to any single client, was 100Mb/sec?

Vanilla 802.11a would give a maximum data rate of 54Mb/sec (probably about 30Mb/sec usable payload) on a single channel, and this could be 150 or 300Mb/sec with 802.11n (MIMO). Plenty for getting to that 100Mb.

Thus, rather than having as many as 4 overlapping access points sharing the same channels, the overlap could be reduced significantly by only using 20MHz channels. This would result in less radio congestion (fewer clients on the same frequency), and probably wouldn’t negatively affect access speeds for the clients on the network.
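As a back-of-envelope illustration (the efficiency factor below is my own rough assumption, based on the 54Mb/sec to roughly 30Mb/sec figure above, not a measurement), even narrow channels stack up well against a 100Mb/sec access circuit:

```python
# Rough back-of-envelope: usable wifi throughput vs the 100Mb/sec backhaul.
# PHY rates are the standard headline figures; the 55% "usable payload"
# factor is my own rough assumption, not a measurement.

backhaul_mbps = 100          # access circuit feeding the whole venue
access_points = 6            # APs visible in the sweep above

phy_rates_mbps = {
    "802.11a (20MHz, 1 stream)":  54,
    "802.11n (20MHz, 2 streams)": 144,
    "802.11n (40MHz, 2 streams)": 300,
}

efficiency = 0.55            # assumed fraction of PHY rate left as payload

for name, phy in phy_rates_mbps.items():
    per_ap = phy * efficiency
    print(f"{name}: ~{per_ap:.0f}Mb/sec usable per AP, "
          f"~{per_ap * access_points:.0f}Mb/sec across {access_points} APs, "
          f"against a {backhaul_mbps}Mb/sec backhaul")
```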

There’s also the question of why all 6 access points visible in this sweep are spread across just two 40MHz channels.

The main reason is that DFS (Dynamic Frequency Selection) and TPC (Transmit Power Control) are required for any of the channels highlighted with blue numbers in the chart above – this is also known as “Radar Detection”, because some radar operates in these channels. An access point running DFS will “listen” first for any radar signals before choosing an unoccupied channel and advertising the network SSID. If it hears any radar transmissions, it will shut down and move channel.

Sure, avoiding the DFS mandatory channels gives more predictability in your channel use, and means you aren’t affected by an access point needing to go off air.

However, one option when designing the network is to use the DFS-mandatory channels to increase the available spectrum, but to strategically place access points on non-DFS channels spatially in between those using DFS. That mitigates both the “listen on startup” phase (e.g. if an access point needs to be reset) and the risk of the service suddenly going offline because of radar detection.

Also, remember that this is an indoor deployment and well inside a building. The chances of encountering radar interference are relatively low. I don’t recall seeing a problem using DFS when I’ve deployed temporary networks for meetings.
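To make the idea concrete, here’s a toy sketch of the sort of alternating channel plan I mean. The channel numbers are the usual EU 5GHz allocations; the access point names and their ordering are made up for illustration:

```python
# Toy illustration of alternating DFS and non-DFS channels across APs.
# Channel lists are the usual EU 5GHz allocations; the AP names and their
# spatial ordering are hypothetical stand-ins for APs along a hall/corridor.

non_dfs = [36, 40, 44, 48]                      # no radar detection required
dfs     = [52, 56, 60, 64, 100, 104, 108, 112]  # DFS/TPC mandatory

aps_in_spatial_order = ["ap1", "ap2", "ap3", "ap4", "ap5", "ap6"]

plan = {}
for i, ap in enumerate(aps_in_spatial_order):
    # Alternate: even-positioned APs get a "safe" non-DFS channel, odd ones a
    # DFS channel, so an AP going quiet on radar detection still has non-DFS
    # neighbours either side carrying the SSID.
    pool = non_dfs if i % 2 == 0 else dfs
    plan[ap] = pool[(i // 2) % len(pool)]

for ap, channel in plan.items():
    kind = "non-DFS" if channel in non_dfs else "DFS"
    print(f"{ap}: 20MHz channel {channel} ({kind})")
```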

The other thing to note is that this deployment is not using a controller-based architecture. It is made of access points which can signal control messages between each other, but each access point effectively maintains its own view of the world. (Those of you in the Wifi space can now probably work out who I’m talking about.)

Is the above configuration using so few channels, and using them unwisely considering the target bandwidth actually available to the wifi clients, just asking for trouble once a few hundred users show up?

 

Event (or Public) Wifi. It’s not that difficult.

A not uncommon source of frustration is poor wifi access in public spaces such as hotels, airports, etc., and by extension, poor wifi at events. We’ve all seen the issues – very slow downloads, congestion, dropped connections.

One of the things I do is regularly attend Internet industry events, and by nature of what they are, they are full of geeks, nerds and other types of “heavy user”, and they need their own significant wifi capability to support the event. Yet even those events, which don’t rely on the in-house wifi provision, still run into problems – for instance, the most recent NANOG meeting had some significant wifi issues on the first day, though they did have the challenge of serving over 800 users in a relatively tight space.

I’ve also been involved in setting up connectivity at meetings, so I know from experience that it’s not that difficult to provide good quality wifi. You just need to make a little bit of effort.

This is probably the first of a short series of posts, where I’ll share nuggets of wisdom I’ve picked up over the years. I’m going to assume that if you’re reading this, you know a bit about wifi.

1) Band steering does not work reliably enough

Wi-fi operates on two bands, 5GHz and 2.4GHz. The 2.4GHz spectrum is congested. Most modern clients support both bands. Band steering is an attempt to force clients that can use the 5GHz spectrum off the 2.4GHz spectrum. It works by having a base station “ignore” 2.4GHz association attempts from a client that can associate on 5GHz.

However, experience shows that this is not reliable for all clients, and many clients which could be associated with a 5GHz base station end up associated not only with a 2.4GHz base station, but one which is suboptimal (e.g. further away).

Band steering seems especially problematic when enabled on autonomous base stations. At least on a centrally orchestrated controller-based network, the controller can simultaneously tell all base stations to band steer a particular client.

Which leads me on to…

2) Run separate SSIDs for 5GHz and 2.4GHz bands

If band steering is unreliable, then you need some other way of getting clients on the “right” wifi band. Running separate SSIDs is the best way of doing this.

The majority of modern clients support 5GHz (802.11a). I would therefore recommend that your main/primary SSID is actually 5GHz only. All the clients will ordinarily connect to that.

You can then set up a second SSID which could end in “-2.4” or “-g” or “-legacy” for the non-5GHz clients to connect to. The 2.4GHz clients will only see this SSID and not the 5GHz one. There are significantly fewer 2.4GHz clients around these days.

At the end of the day, both the 5GHz and 2.4GHz wifi SSIDs should then still be bridged to the same backend network so that the network services are the same for the 5GHz and 2.4GHz clients.

3) Turn off 802.11b

Are there any 802.11b only devices still around in regular use?

Disabling 802.11b will restrict your 2.4GHz spectrum to .11g capable devices only, and has the effect of raising the minimum connect speed.

If you know you don’t have to support legacy 802.11b devices, switch off support for it.

4) Restrict the minimum connect speed

The default configuration on some base stations, especially if 802.11b is switched on, is to accept wifi connect speeds as slow as 1Mb/sec.

All you need is one rogue, buggy or distant client to connect at a slow speed and this acts as a “lowest common denominator”, bringing your wifi to its knees and slowing all clients on that base station to a crawl.

Right. That’s it for today.

Eventually I’ll be showing you how you can run wifi for a 200-300 person meeting out of a suitcase.

For peering in New York, read New Amsterdam

Dutch East India Company Logo
It’s colonialism all over again. Just not as we know it…

Last week, there was this announcement about the establishment of a new Internet Exchange point in New York by the US arm of the Amsterdam Internet Exchange – “AMS-IX New York” – or should that be “New Amsterdam”… 🙂

This follows on from the vote among AMS-IX members on whether or not the organisation should establish an operation in the US, which was carried by a fairly narrow majority. I wrote about this a few weeks ago.

This completes the moves by the “big three” European IX operators into the US market, arriving on US shores under the umbrella of the Open-IX initiative to increase market choice and competitiveness of interconnection in the US markets.

LINX have established LINX-NoVA in the Washington DC metro area, and AMS-IX are proceeding with their NY-NJ platform, while DECIX have issued a press statement on their plan to enter the NY market in due course.

One of the key things this does is bring these three IXPs into real direct competition in the same market territory for the first time.

There has always been some level of competition among the larger EU exchanges when attracting new international participants, for instance DECIX carved itself a niche attracting Eastern European and Russian players because many carrier services to these regions would hub through Frankfurt anyway.

But each exchange always had its indigenous home market to provide a constant base load of members; there wasn’t massive competition for the local/national peers, even though all three countries have a layer of smaller exchanges active in the home market.

Now, to some extent, they are going head-to-head, not just with US incumbents such as Equinix, TelX and Any2, but potentially with each other as well.

The other thing the AMS-IX move could do is fracture the NY peering market even further – it is already fractured, being served by three, maybe four, sizeable exchanges. Can it sustain a fifth or sixth?

Is it going to be economical for ISPs and Content Providers to connect to a further co-terminous IXP (or two)? Can the NY market support that? Does it make traffic engineering more complex for networks which interconnect in NY? So complex that it’s not worth it? Or does it present an opportunity to be able to more finely slice-and-dice traffic and share the load?

Don’t forget we’re also in a market which has been traditionally biased toward minimising the amount of public switch-based peering in favour of private bi-lateral cross-connects. Sure, the viewpoint is changing, but are we looking for a further swing in a long-term behaviour?

We found out from experience in the 2000s that London can only really sustain two IXPs – LINX and LONAP. There were at least four well-known IXPs in London at the time, along with several smaller ones. (Aside… if you Google for LIPEX today, you get a link to a cholesterol-reducing statin drug.)

Going to locations on the East Coast may have made sense when we sailed there in ships and it took us several weeks to do it, but that’s no reason for history to repeat itself in this day and age, is it? So why choose New York now?

Will the EU players become dominant in these markets? Will they manage to help fractured markets such as NY to coalesce? If they do, they will have achieved something that people have been trying to do for years. Or, will it turn out to be an interesting experiment and learning experience?

It will be interesting to see how this plays out over time.

My recent talk at INEX – Video

Or, I never thought of myself as a narcissist but…

Thanks to the folks at HEAnet, here’s a link to the video of the talk “It’s peering, Jim…” that I gave at the recent INEX meeting in Dublin, where I discuss topics such as changes in the US peering community thanks to Open-IX and try to untangle what people mean when they say “Regional Peering”.

The talk lasts around 20-25 minutes and I was really pleased to get around 15 minutes of questions at the end of it.

I also provide some fairly pragmatic advice to those seeking to start an IX in Northern Ireland during the questions. 🙂


Errata in RFC1925: The Twelve Networking Truths

Some things in RFC1925, despite it being one of a series of April Fools’ RFCs (and therefore in the good company of the all-time classic RFC1149 and its brethren), actually do hold true, for instance:

Fast, Good, Cheap: Pick any two – still tends to hold true.

However, like all good April Fools’ RFCs, it will declare that ‘ERRATA EXIST’ at the top. In this case, there’s definitely a shred of truth to this. Especially when you look at truth number 4:

Some things in life can never be fully appreciated nor
understood unless experienced firsthand. Some things in
networking can never be fully understood by someone who neither
builds commercial networking equipment nor runs an operational
network.

My concern is that this statement no longer holds true for the makers of commercial networking equipment.

If the makers and protocol designers really understood, we wouldn’t be pushing water uphill with things such as IPv6 deployment and encouraging the use of other networking best practices; they would have made them easier to deploy in the first place.

Therefore a correction is needed: “Some things in networking can never be fully understood by someone who doesn’t run an operational network“.

Getting info out of your HomePlug AV Network

I recently blogged on the trials and tribulations (and gotchas) of using HomePlug AV to glue bits of network together around the house without having to run UTP around the place.

One of the comments I’d made in that article was about monitoring and logging the node-to-node speeds between the HomePlug bridges. Obviously, being a consumer product, they come with a pretty (and depending on your PoV, awful) GUI.

How was the information being gathered by the GUI? Turns out it’s using a Layer 2 protocol, so I cracked out Wireshark.

The head and tail of it is that the requesting station sends an L2 broadcast frame of ethertype 0x88e1 (HomePlug AV), containing a network info request.

The HomePlug bridges reply to the MAC address originating the broadcast with a network info confirmation – there are other sorts of management frames (such as read module data and get device/sw version), but this is the bit we’re interested in, containing the negotiated speed between each pair of bridges.

The speeds are plain hex values, which occur in certain places in the frame.


The number of networks (and therefore how many instances of “Networks informations“) to expect is in byte 21 of the frame, and the number of AV stations in a network is embedded in byte 17 of the Networks Informations, so you then know how many sets of stations informations to expect.

In the stations informations the address of the far end HomePlug bridge is in the first 6 bytes, while the TX rate is in byte 14, and the RX in byte 15.
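To illustrate, a quick Python sketch of pulling those values out of a captured reply might look something like this. The byte positions are as described above, but the record lengths and exact starting offsets are my assumptions from the Wireshark session, so treat it as a sketch rather than a protocol reference:

```python
# Sketch: extract the bridge MAC and TX/RX rates from a captured HomePlug AV
# "network info confirmation" payload (ethertype 0x88e1). The byte positions
# are as described above; the record lengths and starting offsets are my own
# assumptions from the Wireshark session, not taken from a spec.

def parse_network_info(payload: bytes):
    stations = []
    num_networks = payload[21]                 # number of "networks informations"
    offset = 22                                # assumed start of the first block
    for _ in range(num_networks):
        num_stations = payload[offset + 17]    # AV stations in this network
        sta = offset + 18                      # assumed start of station records
        for _ in range(num_stations):
            rec = payload[sta:sta + 16]        # assumed 16-byte station record
            mac = ":".join(f"{b:02x}" for b in rec[0:6])
            tx_mbps, rx_mbps = rec[14], rec[15]
            stations.append((mac, tx_mbps, rx_mbps))
            sta += 16
        offset = sta
    return stations

# e.g. with a payload copied out of Wireshark as hex:
# for mac, tx, rx in parse_network_info(bytes.fromhex(captured_payload_hex)):
#     print(f"{mac}: tx {tx}Mb/sec, rx {rx}Mb/sec")   # then feed into rrdtool
```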

It should therefore, with a bit of scripting, be possible to grab those values and write them into an rrd or something like that, and make graphs, rather than have to fire up a GUI which only helps you in real time. But here I am talking about banging away with my awful scripting, crafting specific L2 frames and interpreting what comes back using regex matching and chomp. Surely someone’s done something like this before, and come up with something more elegant?

Well, it turns out GitHub is your friend, as, it seems, are the people at Qualcomm Atheros who make the chips inside a lot of the HomePlug devices.

They’ve put up something called Open Powerline Utils – which I think may be able to do the job I want.

So, when I get a free evening, I’ll have a read of the docs and see what can be boshed together using this toolset rather than some ugly scripts.

Releasing a bottleneck in the home network, Pt2 – at home with HomePlug

As promised, here’s the next instalment of what happened when I upgraded my home Internet access from ADSL to FTTC, and found some interesting bottlenecks in what is a fairly simple network.

Last time, I left you hanging, with the smoking gun being the HomePlug AV gear which glues the “wired” part of the network together around the house.

HomePlug is basically “powerline networking”, using the existing copper in the energised mains cables already in your walls to get data around without the cost of installing UTP cabling, drilling through walls, etc. As such, it’s very helpful for temporary or semi-permanent installations, and therefore a good thing if you’re renting your home.

The HomePlug AV plant at Casa Mike is a mix of “straight” HomePlug AV (max data rate 200Mb/sec), and a couple of “extended” units based on the Qualcomm Atheros chipset which will talk to each other at up to 500Mb/sec as well as interoperate at up to 200Mb/sec with the vanilla AV units.

One of the 500Mb units is obviously the one in the cupboard in the front room where all the wires come into the house and the router lives. However, despite being the front room, it’s not the lounge – that’s in an extension at the back – so the second 500Mb unit is in the extension, with the second wifi access point hanging off it, giving us good wifi signal (especially 5GHz) where we spend a lot of our time. The other 200Mb units get dotted around the house as necessary, wherever there’s something that needs a wired connection.

So, if you remember, I was only getting around 35Mb/sec if I was on the “wrong side” of the HomePlug network – i.e. not associated with the access point which is hardwired to the router, so this was pointing to the HomePlug setup.

I fired up the UI tool supplied with the gear (after all, it’s consumer grade, what could I expect?), and this shows a little diagram of the HomePlug network, along with the speed between each node. This is gleaned via an L2 management protocol which is spoken by the HomePlug devices (and the UI). I really should look at something which can collect this stuff and graph it.

HomePlug is rate adaptive, which means it can vary the speed depending on conditions such as noise interference, quality of the cabling, etc., and the speed is different for the virtual link between each pair of nodes in the HomePlug network. (When you build a HomePlug network, the HomePlug nodes logically emulate a bus network to the attached Ethernet – the closest thing I can liken it to is ATM LAN emulation, remember that?)

The UI reported a speed of around 75-90Mb between the front and the back of the house, which fluctuated a little. But this doesn’t match my experience of around 35Mb throughput on speed tests.

So where did my throughput go?

My initial reaction was “Is HomePlug half-duplex?” – well, turns out it is.

HomePlug is almost like the sordid love child conceived between two old defunct networking protocols, frequency-hopping wifi and token ring, after a night on the tequilas, but implemented over copper cables, using multiple frequencies, all put together using an encoding technique called Orthogonal Frequency Division Multiplexing (OFDM).

Only one HomePlug station can transmit at a time, and this is controlled using Beaconing (cf token passing in Token Ring) and Time Division Multiplexing between the active HomePlug nodes, orchestrated by the concept of a “master” node called a “Central Coordinator”, which is elected automatically when a network is established.

When you send an Ethernet frame into your HomePlug adaptor, it’s encapsulated into a HomePlug frame (think of your data like a set of Russian dolls, or a 1970s nest of tables), which is then put in a queue called a “MAC frame stream”. These are then chopped up into smaller (512-byte) segments called PHY blocks, which are encrypted and serialised.

Forward error correction is also applied, and as soon as the originating adaptor enters its permission to transmit (its “beacon period”), your data, now chopped down into these tiny PHY block chunks, is striped across the multiple frequencies in the HomePlug network. As the blocks arrive at their destination, acknowledgments are sent back into the network. The sending station keeps transmitting the PHY blocks until the receiving node has acknowledged receipt.

Assuming all the PHY blocks that make up the MAC frame arrive intact at the exit HomePlug bridge, these are decrypted, reassembled, and decapsulated, coughing up the Ethernet frame which was put in the other end, which is written to the wire.

The upshot of this is that there’s a reasonably hefty framing overhead… IP, into Ethernet Frame, into HomePlug AV MAC frame, into PHY block.

Coupled with the half-duplex, beaconing nature, that’s how my ~70Mb turned into ~35Mb.

The thing to remember here is that the advertised speed on HomePlug gear is quoted at the PHY rate – the speed attainable between HomePlug devices, which includes all the framing overhead.

This means that where HomePlug AV says it supports 200Mb/sec, this is not the speed you should expect to get out of the Ethernet port on the bottom, even in ideal conditions. 100Mb/sec seems more realistic, and that would be on perfect cabling, plugged directly into the wall socket.
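As a very rough rule of thumb (the 50% overhead figure below is my own guess, derived from the numbers above rather than anything official), you can knock the negotiated PHY rate down by about half to estimate what the Ethernet port will deliver:

```python
# Rough rule of thumb for HomePlug: advertised/negotiated PHY rate vs what the
# Ethernet port delivers. The ~50% overhead is my own guess, based on 200Mb/sec
# class kit topping out around 100Mb/sec, not a figure from any spec.

def expected_ethernet_mbps(negotiated_phy_mbps: float, overhead: float = 0.5) -> float:
    """Half-duplex beaconing plus framing overhead eats roughly half the PHY rate."""
    return negotiated_phy_mbps * (1 - overhead)

for phy in (200, 90, 75):
    print(f"negotiated {phy}Mb/sec PHY -> roughly "
          f"{expected_ethernet_mbps(phy):.0f}Mb/sec at the Ethernet port")
```

That roughly matches what I saw: 75-90Mb/sec negotiated turning into about 35Mb/sec of real throughput.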

Talking of ideal conditions, one of the things that you are warned against with HomePlug is hanging the devices off power strips, as this reduces the signal arriving at the HomePlug interface. They recommend that you plug the HomePlug bridge directly into a wall socket whenever possible. Given my house was built in the 1800s (no stud-walls, hence the need for HomePlug!), it’s not over-endowed with mains sockets, so of course, mine were plugged into power strips.

However, not to be deterred, I reshuffled things and managed to get the two 500Mb HomePlug bridges plugged directly into the wall sockets, and voila: the negotiated speed went up to around 150-200Mb, and the full 70-odd Mb/sec of the upgraded broadband was available on the other side of the HomePlug network.

Performance is almost doubled by being plugged directly into a wall socket.

In closing, given everything which is going on under the skin, and that it works by effectively superimposing and being able to recover minute amounts of “interference” on your power cables, it’s almost surprising HomePlug works as well as it does.

This HomePlug white paper will make interesting reading if you’re interested in what’s going on under the skin!