Premier Inn Wifi – If only it were consistent.

I recently heaped praise on Premier Inn for providing a good wifi service in one of their hotels.

Sadly, this is not consistent across all their properties. I’m currently staying in another Premier Inn just down the road from the one with the good wifi (which was already full for this evening).

The wifi performance here isn’t great at all…

This is as good as it got. Fail.

It does have sensibly laid out 5GHz and 2.4GHz spectrum like the other Premier Inn, so it seems the wifi architecture is sound. However, what’s different here is the backhaul technology.

The other property was on what appeared to be a VDSL line from a more specialist business ISP. It also had the advantage that it was only shared between about 20-odd rooms.

This Premier Inn is much larger, but based on the ISP (Sharedband) it is likely to be using a link-bundled ADSL2 connection, shared amongst many more users – about 140 rooms. I’ve noticed several other Arqiva-managed hotspots using Sharedband as the backhaul technology, and these have all suffered from very slow speeds, high latency and signs of heavy oversubscription and congestion.

Notice the “star rating” on the Speedtest above. One star. Lots of unhappy punters?

I’m currently getting better performance on a 3G modem. (No 4G coverage on my provider in this area.)

It would be great if Premier Inn could offer a more consistent experience in its wifi product – and I mean a consistently good experience, such as the one I enjoyed just up the road in Abingdon, not the lowest common denominator of the congested, barely usable mess here.

They aim for a consistent product in the rest of their offerings and for the most part achieve it; however, if I were only staying in this property, I’d be asking for a refund of the wifi charge.

Update at 1am, after the fire alarm went off around 11.30pm and caused the hotel to be evacuated…

I can just about get 3Mb/sec down (and less than 256k up) out of the connection here now, and I assume the large majority of guests are sleeping. Still less than great. This is very obviously based around oversubscribed link-bundled ADSL stuff.

A hotel that got wifi right

While I’m normally one to highlight when something is done badly, I also want to give praise where it is due.

I’m currently staying in a Premier Inn in leafy Abingdon. The data service here that I’d normally tether to is next to non-existent, dropping out all over the place. It looks like I’m in the shadow of some structure, between me and the Three UK antenna. There are also a couple of water courses in between, which might be hindering the signal.

So, I’m forced onto one of my pet hates, paid-for hotel wifi. Remember that Premier Inn are marketed as a “no frills” hotel – but they are almost always spotlessly clean and consistent.

It was either pony up for that or go and track down (and pay for) an O2 PAYG data sim, as I do at least have line of sight from my room here to one of their masts.

Firstly, I fired up Wifi Explorer, and took a look at what is deployed here.

Nice, uncrowded 2.4GHz spectrum, sensibly placed channels.

Not only was the 2.4GHz likely to work okay, but they also had 5GHz too!

Wow! 5GHz as well.

So, I decided that it was worth a spin. I signed up for the free half hour. Then I actually found I could get real work done on this connection, so I gave it a speed test.

Reasonably speedy too. I’d guess it’s a VDSL line. Might get crowded later, I guess?

Not only have they got 5GHz, but they have recently slashed their prices. Some would say that it should be free anyway, but £3 for the day, or £20 for a month seemed a reasonable deal, especially if you’re staying in a Premier Inn a lot (I’m actually back here again next week).

I’ve not tried connecting multiple devices simultaneously using the same login, but I suspect you can’t, which is possibly the only downside.

However, big props to the folks at Premier Inn for actually having a wifi install that works, even if that means having to pay for it. I’ve seen much worse services in high-end hotels which have under-provisioned, congested (and often expensive) 2.4GHz networks.

Credit where it is earned, indeed.


Update: Sadly, it seems Premier Inn have decided we can’t have too much of a good thing, and need to manage our expectations. It’s alleged that they have therefore plotted with those dastardly people at Arqiva to make Premier Inn wifi universally shit.

Please read this item from Bill Buchan, which reports that the wifi is now clamped to 2.5Mb/sec on the premium “ultimate” offering.

The question I’ve got: if the “ultimate wifi” is, as they market it, 8x faster than the free wifi, then I make the free wifi out to be less than 500Kb/sec.
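Working that marketing claim through as a quick sum (assuming the reported 2.5Mb/sec cap and the “8x” figure are both accurate):

```python
# If the premium tier is capped at 2.5Mb/sec and marketed as
# "8x faster" than the free tier, the implied free-tier speed is:
premium_mbps = 2.5
marketing_multiplier = 8

free_mbps = premium_mbps / marketing_multiplier
free_kbps = free_mbps * 1000

print(f"Implied free tier: {free_kbps:.1f}Kb/sec")  # Implied free tier: 312.5Kb/sec
```

Barely dial-up-and-a-half territory, on the premium product’s own marketing numbers.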

I can just imagine a load of product management muppets sat around some buzzword infested meeting room table, cowed by groupthink, agreeing this is a good idea.

Waiting for (BT) Infinity – an update

I mentioned in my last post about my partner’s Mother moving home this week, and how it looks like BT have missed an opportunity to give a seamless transition of her VDSL service.

The new house was only around the corner from the old one, so should be on the same exchange, and maybe even on the same DSLAM and cabinet. It had previously had VDSL service, judging from the master socket faceplate.


Was the jumpering in the cab over to the DSLAM still set up? Well, we dug out the old BT VDSL modem and HomeHub 3, and set those up.

Guess what…

The VDSL modem successfully trained up. The line is still connected to the VDSL DSLAM.

However, it’s failing authentication – a steady red “b”. Therefore it looks like the old gear won’t work on the new line.

But then the new HomeHub 5 they’ve needlessly shipped out won’t work either: we set that up too, and get an orange “b” symbol.

Evidently, something isn’t provisioned somewhere on the backend. Maybe the account credentials have been changed, or the port on the DSLAM isn’t provisioned correctly yet.

Does this look like a missed opportunity to provide a seamless transition, without the need for an engineer visit, or what?

 

When parents-in-law move homes – a tale of being “default” tech support

Sheesh BT.

The MiL has moved. Around the corner from her old house. She had BT Infinity (BT’s Retail FTTC product) at the old house. She ordered the service to be moved. The voice service was activated on the day she moved, but not the Internet access.

The new house has previously had FTTC with the last occupant, it has the FTTC faceplate. One can only assume that the “double jumpering” to the FTTC MSAN is still in place too.

I wouldn’t mind betting that it’s even coming off the same bloody street cab/MSAN.

Can we just take the old Homehub 3 and VDSL modem over and plug those in? Oh no.

BT have sent a new Homehub 5 and scheduled an engineer visit for Friday, 5 days after she’s moved in.

It just feels a bit wrong, and maybe even on the crazy side. In theory this could have been done as a simultaneous provide – i.e. both the voice and the internet service brought up at the same time, and in this case potentially without an engineer visit!

Who knows why it’s not happened. Certainly the MiL wouldn’t have known to ask for a “sim-provide”, but should she have to?

“Ambassador, with these Atlas probes, you’re really spoiling us…”

Okay. So I only expect the Brits to get the title of this. Though if you’re desperate to be in on the “joke”, watch this YouTube video of an old British TV ad for some chocolates.

One of the things I do for the community is act as a “RIPE Atlas Ambassador” – that’s someone who helps distribute RIPE Atlas internet measurement probes into the wider Internet community. The Measurements Community Builders at the RIPE NCC send me a box of Atlas probes; I go to conferences, meetings and other get-togethers and give them out to folk who would like to host a probe, answering any questions as best I can.

Recently, Fearghas McKay of the IX Scotland steering group asked me if I had any data from the Atlas project on internet round-trip time for probes located in Scotland, to get to services hosted in Scotland, and if I could talk about it at a meeting of IX Scotland participants.

This is a fairly similar exercise to the one I did for Northern Ireland.

One of the challenges I was faced with was the distinct lack of source data. Firstly, there weren’t that many Atlas probes in Scotland to begin with, and those that were there were mostly located in the “central belt” – around Glasgow and Edinburgh. The furthest north was a single probe in Aberdeen, and Scotland is a big country – it’s around 300 miles from the border at Gretna to Thurso, one of the most northerly towns on the Scottish mainland, as far again as it is from London to Gretna. That’s not even counting the Orkneys, Shetlands or Hebridean Islands, which have their own networking challenges.
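The real probe metadata comes from the RIPE Atlas API, but the sorting exercise itself is trivial. As a sketch, here’s how you might pick out the most northerly probe from a set of records – the IDs, towns and coordinates below are made-up illustrative values, not real Atlas data:

```python
# Hypothetical probe records shaped loosely like RIPE Atlas probe
# metadata (id, town, latitude). Values are illustrative only.
probes = [
    {"id": 1001, "town": "Glasgow",   "lat": 55.86},
    {"id": 1002, "town": "Edinburgh", "lat": 55.95},
    {"id": 1003, "town": "Aberdeen",  "lat": 57.15},
]

# Most northerly = largest latitude
northernmost = max(probes, key=lambda p: p["lat"])
print(northernmost["town"])  # Aberdeen
```

With real data you’d pull the probe list for GB from the Atlas API and filter on coordinates, but the shape of the problem is the same.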

The second problem was that of those probes, only three at the time were on an ISP connected directly to IX Scotland, and one of those was down! The majority were on consumer broadband providers such as BT and Virgin Media, which aren’t connected to many regional exchanges.

I saw attending the IX Scotland meeting as a good chance to redress the balance and extend the usefulness of the Atlas platform by distributing probes to networks which could improve the coverage.

This has resulted in what is currently the most northerly probe in the UK being brought online in Dingwall, not far from Inverness, thanks to the folk at HighNet. They’ve also got a few other probes from me, so expect to see more in that area soon.

Most Northerly Probe in the UK

HighNet aren’t connected to IX Scotland yet, but maybe now they’ve got access to this instrumentation it might help them make a business case to follow up on that.

I also issued a number of probes at UKNOF in Manchester last week and I’m looking forward to seeing where they turn up.

I’d really like to get some of the community broadband projects in the UK instrumented, such as B4RN and Gigaclear. These bring some of their own challenges, such as issues with equipment at the customer premises that can actually handle the available bandwidth on the connection! It would also be great to be able to draw comparisons in performance between the community fibre service and the slower ADSL service provided over long copper tails in those areas.

Releasing a bottleneck in the home network, Pt2 – at home with HomePlug

As promised, here’s the next instalment of what happened when I upgraded my home Internet access from ADSL to FTTC and found some interesting bottlenecks in what is a fairly simple network.

Last time, I left you hanging, with the smoking gun being the HomePlug AV gear which glues the “wired” part of the network together around the house.

HomePlug is basically “powerline networking”, using the existing copper in the energised mains cables already in your walls to get data around without the cost of installing UTP cabling, drilling through walls, etc. As such, it’s very helpful for temporary or semi-permanent installations, and therefore a good thing if you’re renting your home.

The HomePlug AV plant at Casa Mike is a mix of “straight” HomePlug AV (max data rate 200Mb/sec), and a couple of “extended” units based on the Qualcomm Atheros chipset which will talk to each other at up to 500Mb/sec as well as interoperate at up to 200Mb/sec with the vanilla AV units.

One of the 500Mb units is obviously the one in the cupboard in the front room where all the wires come into the house and the router lives. However, despite being the front room, it’s not the lounge, that’s in an extension at the back, so the second 500Mb unit is in the extension, with the second wifi access point hanging off it so we’ve got good wifi signal (especially 5GHz) where we spend a lot of our time. The other 200Mb units get dotted around the house as necessary, wherever there’s something that needs a wired connection.

So, if you remember, I was only getting around 35Mb/sec if I was on the “wrong side” of the HomePlug network – i.e. not associated with the access point which is hardwired to the router, so this was pointing to the HomePlug setup.

I fired up the UI tool supplied with the gear (after all, it’s consumer grade, what could I expect?), and this shows a little diagram of the HomePlug network, along with the speed between each node. This is gleaned via a L2 management protocol which is spoken by the HomePlug devices (and the UI). I really should look at something which can collect this stuff and graph it.

HomePlug is rate adaptive, which means it can vary the speed dependent on conditions such as noise interference, quality of the cabling, etc., and the speed is different for the virtual link between each pair of nodes in the HomePlug network. (When you build a HomePlug network, the HomePlug nodes logically seem to emulate a bus network to the attached Ethernet – the closest thing I can liken it to is ATM LAN emulation. Remember that?)

The UI reported a speed of around 75-90Mb between the front and the back of the house, which fluctuated a little. But this doesn’t match my experience of around 35Mb throughput on speed tests.

So where did my throughput go?

My initial reaction was “Is HomePlug half-duplex?” – well, turns out it is.

HomePlug is almost like the sordid love child conceived between two old defunct networking protocols, frequency-hopping wifi and token ring, after a night on the tequilas, but implemented over copper cables, using multiple frequencies, all put together using an encoding technique called Orthogonal Frequency Division Multiplexing (OFDM).

Only one HomePlug station can transmit at a time, and this is controlled using Beaconing (cf token passing in Token Ring) and Time Division Multiplexing between the active HomePlug nodes, orchestrated by the concept of a “master” node called a “Central Coordinator”, which is elected automatically when a network is established.

When you send an Ethernet frame into your HomePlug adaptor, it’s encapsulated into a HomePlug frame (think of your data like a set of Russian dolls or a 1970s nest of tables), which is then put in a queue called a “MAC frame stream”. These are then chopped up into smaller (512 byte) segments called PHY blocks, which are encrypted and serialised.

Forward error correction is also applied, and as soon as the originating adaptor enters its permission to transmit (its “beacon period”), your data, now chopped down into these tiny PHY block chunks, is striped across the multiple frequencies in the HomePlug network. As the blocks arrive at their destination, acknowledgements are sent back into the network. The sending station keeps transmitting the PHY blocks until the receiving node has acknowledged receipt.

Assuming all the PHY blocks that make up the MAC frame arrive intact at the exit HomePlug bridge, these are decrypted, reassembled, and decapsulated, coughing up the Ethernet frame which was put in the other end, which is written to the wire.

The upshot of this is that there’s a reasonably hefty framing overhead… IP, into Ethernet Frame, into HomePlug AV MAC frame, into PHY block.

Coupled with the half-duplex, beaconing nature, that’s how my ~70Mb turned into ~35Mb.

The thing to remember here is that the advertised speed on HomePlug gear is quoted as the PHY rate – the speed attainable between HomePlug devices, which includes all the framing overhead.

This means that where HomePlug AV says it supports 200Mb/sec, this is not the speed you should expect to get out of the Ethernet port on the bottom, even in ideal conditions. 100Mb/sec seems more realistic, and that would be on perfect cabling, plugged directly into the wall socket.
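As a rough rule of thumb – and to be clear, the 50% figure is my own approximation from the numbers above, not anything from the HomePlug specification – the usable Ethernet throughput works out at around half the negotiated PHY rate:

```python
def usable_throughput_mbps(phy_rate_mbps, efficiency=0.5):
    """Estimate one-way Ethernet throughput over a HomePlug link.

    The PHY rate includes all the framing overhead (MAC frames, PHY
    blocks, FEC, acknowledgements) and the medium is half-duplex, so
    in my experience roughly half the PHY rate is what comes out of
    the Ethernet port. 'efficiency' is an empirical fudge factor, not
    a figure from the HomePlug AV specification.
    """
    return phy_rate_mbps * efficiency

# The ~75-90Mb PHY rate the UI reported maps onto the ~35Mb I measured:
print(usable_throughput_mbps(75))   # 37.5
print(usable_throughput_mbps(200))  # 100.0 - best case for "200Mb" AV kit
```

Which lines up neatly with both my ~35Mb speed tests and the “expect about 100Mb from 200Mb gear” observation above.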

Talking of ideal conditions, one of the things that you are warned against with HomePlug is hanging the devices off power strips, as this reduces the signal arriving at the HomePlug interface. They recommend that you plug the HomePlug bridge directly into a wall socket whenever possible. Given my house was built in the 1800s (no stud-walls, hence the need for HomePlug!), it’s not over-endowed with mains sockets, so of course, mine were plugged into power strips.

However, not to be deterred, I reshuffled things and managed to get the two 500Mb HomePlug bridges directly into the wall sockets, and voilà: the negotiated speed went up to around 150-200Mb, and the full 70-odd Mb/sec of the upgraded broadband was available on the other side of the HomePlug network.

Performance is almost doubled by being plugged directly into a wall socket.

In closing, given everything which is going on under the skin, and that it works by effectively superimposing and being able to recover minute amounts of “interference” on your power cables, it’s almost surprising HomePlug works as well as it does.

This HomePlug white paper will make interesting reading if you’re interested in what’s going on under the skin! 

80 down, 20 up, releasing a bottleneck in the home

A couple of weeks ago, I upgraded the Internet connectivity at home, from an ADSL service which could be a little bit wobbly (likely due to poor condition on some of the cabling) and usually hovered between 2Mb and 3Mb down, to FTTC – reducing the copper run from about 3.5km down to about 200m.

The service is sold as “up to 80Mb/sec” downstream, with upload of up to 20Mb/sec, which turns out to be achievable on my line, as my ISP’s portal reported the initial sync as 80Mb, and this gives around 75Mb of usable capacity at the IP layer once you’ve knocked off the framing and encapsulation overheads.
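That 80Mb-sync-to-75Mb-usable gap can be roughly accounted for. The figures below are approximations (PTM 64/65 encoding on the VDSL2 layer, PPPoE framing, and the TCP/IP headers a speed test never sees), so treat this as a back-of-envelope sketch rather than BT’s official arithmetic:

```python
def fttc_goodput_mbps(sync_mbps=80.0, mtu=1492):
    """Rough estimate of TCP goodput on a VDSL2 (FTTC) line.

    Assumptions (approximate, for illustration only):
      - PTM 64/65-octet encoding at the VDSL2 layer (~1.5% overhead)
      - PPPoE framing: 14-byte Ethernet header + 8 bytes PPPoE/PPP +
        4-byte FCS, with the usual 1492-byte IP MTU
      - 40 bytes of TCP/IP headers per packet, which a speed test
        doesn't count as payload
    """
    ptm = 64 / 65                    # 64/65 encoding efficiency
    wire_bytes = 14 + 8 + mtu + 4    # Ethernet + PPPoE/PPP + IP + FCS
    tcp_payload = mtu - 40           # strip IP and TCP headers
    return sync_mbps * ptm * (tcp_payload / wire_bytes)

print(round(fttc_goodput_mbps(80.0), 1))  # ~75Mb/sec
```

Which comes out at almost exactly the ~75Mb the ISP portal and speed tests agreed on.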

I eagerly headed off to thinkbroadband.co.uk and speedtest.net to run some tests. They confirmed I’d only get 40Mb/sec until I replaced my trusty but ageing Cisco 877 – that’s one bottleneck I already knew about, and I had a replacement router coming. But nevertheless, I was happy with a >10x uplift on the previous downstream speed, and off I went happily streaming things, as can be seen from my daily usage…

Guess when I switched to FTTC?
Guess when I switched to FTTC?

Yes, some of that usage in the first day or two would have been repeatedly running speed tests in giddy abandon at the bandwidth at my disposal, but the daily usage is now generally higher.

There are a number of reasons that could be behind that, but I suspect the most likely is that services which support variable bit-rate video delivery, such as YouTube and BBC iPlayer, will have been automatically stepping up to higher quality streams.

The new router arrived on the 9th, and it was off with the speedtests again… and that’s where I found an interesting bottleneck in the house.

I could happily get 75Mb/sec in one room – where the router and main access point was. However, in the lounge, which is in an extension at the back of the house, I could only get around 30Mb/sec, despite having an access point in the same room.

I’ve ended up with multiple access points in the house, because the original “cottage” was built in 1890 and has fairly thick walls made of something very, very tough (from experience of hanging up pictures) which is also largely impervious to radio waves it seems, while the extension is attached to the “outside” of one of the original external walls, as well as being the furthest point away from where the Internet access comes into the house. This meant that I wasn’t left with much choice but to infill using a second wireless AP.

But both APs are of a similar spec and support 802.11a/b/g/n, and I was connecting on the less congested 5GHz spectrum on both. So, where was the bottleneck?

The attention turned fairly quickly to the HomePlug AV network which I was using between the front and back of the house. It hadn’t caused me much concern in the past, but now it was prime suspect in my quest to wring the maximum out of my shiny new upgraded circuit.

Finding the longest piece of cat5 cable I have (a big yellow monster of a cable), and running that through the middle of the house to the AP, revealed that my suspicions were correct, but I also knew that the bright yellow cable snaking through the kitchen couldn’t stay there.

In the next few days I learned more about HomePlug than is probably healthy, and that will form the basis for my next article…

…and you’re not gonna reach my telephone.

Or, when an FTTC install goes bad.

I finally got around to getting FTTC installed to replace my ADSL service, which seldom did more than about 3Mb/sec and has had its fair share of ups and downs in the past. I didn’t want to commit to the 12 month contract term until I knew the owner was willing to extend our lease, but now that’s happened, I ordered the upgrade, sticking with my existing provider, Zen Internet, who I’m actually really happy with (privately held, decent support when you need it, don’t assume you’re a newbie, well run network, etc…).

For the uninitiated, going FTTC requires an engineer to visit your home and the cabinet in the street that your line runs through, and get busy in a big rat’s nest of wires. The day of the appointment rolled around, and mid-morning a van rolled up outside – “Working on behalf of BT Openreach”. “At least they kept the appointment…”, I thought to myself.

BT doesn’t always send an Openreach employee on these turn-ups; sometimes they send a third-party contractor, and that was the case for this one…

Continue reading “…and you’re not gonna reach my telephone.”

UK 4G LTE Launch and the scramble for spectrum

So, the next path on the road to fast mobile data, 4G LTE finally launches in the UK, after much barracking from competitors, on the “EE” network (the combined Orange and T-Mobile brand).

It’s only available in a handful of markets at the moment, and the BBC’s tech correspondent, Rory Cellan-Jones, did many articles for TV and Radio yesterday, while conducting countless speedtests, which he has extensively blogged about.

Some of the comments have been that it’s no better in terms of speed than a good 3G service in some circumstances, while others complain about the monthly cost of the contracts.

Get locked-in to 4G(EE)?

The initial cost for the early adopters was always going to attract a premium, and those who want it will be prepared to pay for it. It’s also worth noting that there are no “all you can eat” data plans offered on EE’s 4G service. Everything comes with an allowance, and anything above that has to be bought as “extra”.

The most concerning thing as far as the commercial package goes are the minimum contract terms.

12 months appears to be the absolute minimum (SIM only), while 18 months seems to be the offering if you need a device (be it a phone, dongle or MiFi), and 24 month contracts are also being offered.

Pay As You Go is not being offered on EE’s 4G service (as yet), probably because they’ve no incentive to, because there’s no competition.

Are EE trying to make the most of the headstart they have over competitors 3, O2 and Voda and capture those early adopters?

Penetrating matters?

Rory Cellan-Jones referred in his blog about problems with reduced performance when in buildings.

A number of factors affect the propagation of radio waves and how well they penetrate buildings and other obstacles, such as the nature of the building’s construction (for instance a building which exhibits the properties of a Faraday Cage would block radio signals, or attenuate them to the point of being useless), but also the frequency of the radio signal.

Longer wavelengths (lower frequencies) can travel further and are less impacted by having to pass through walls. I’m sure there’s an xkcd on this somewhere, but the best I can find is this….

Electromagnetic Spectrum according to xkcd

The reason EE were able to steal a march on the other mobile phone companies was because OFCOM (the UK regulator, who handle radio spectrum licensing for the nation) allowed EE to “refarm” (repurpose) some of their existing allocated spectrum, previously used for 2G (GSM), and convert it to support 4G. The 2G spectrum available to EE was in the 1800MHz range, as that was the 2G spectrum allocated to EE’s constituent companies, Orange and T-Mobile.

Now, 1800MHz does penetrate buildings, but not as well as the 900MHz spectrum allocated to Voda and O2 for 2G.
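The underlying relationship is simple: wavelength is the speed of light divided by the frequency, so doubling the frequency halves the wavelength, and the longer 900MHz waves have an easier time getting through walls:

```python
# Wavelength = speed of light / frequency
C = 299_792_458  # speed of light, metres per second

for mhz in (900, 1800):
    wavelength_m = C / (mhz * 1e6)
    print(f"{mhz}MHz -> {wavelength_m * 100:.1f}cm")
# 900MHz -> 33.3cm
# 1800MHz -> 16.7cm
```

A 33cm wave versus a 17cm wave doesn’t sound like much, but it makes a real difference to diffraction around obstacles and penetration through building materials.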

Voda are apparently applying to OFCOM for authority to refarm their 900MHz spectrum for 4G LTE. This would give a 4G service with good propagation properties (i.e. travelling further from the mast) and better building penetration. Glossing over the (non-)availability of devices which talk LTE in the 900MHz spectrum, could this actually be good for extra-urban/semi-rural areas which are broadband not-spots?

Well, yes, but it might cause problems in dense urban areas where the device density is so high it’s necessary to have a large number of small cells, in order to limit the number of devices associated with a single cell to a manageable amount – each cell can only deal with a finite number of client devices. This is already the case in places such as city centres, music venues and the like.

Ideally, a single network would have a high density of smaller cells (micro- and femto-cells) running on the higher frequency range in very dense urban areas such as city centres, intentionally limiting each cell’s reach (and therefore the number of connected devices), and a lower density of large cells (known as macro-cells) running on lower frequencies to cover less built-up areas and better manage building penetration.

But, that doesn’t fit with our current model of how spectrum is licensed in the UK (and much of the rest of the world).

Could the system of spectrum allocation and use be changed?

One option could be for the mobile operators to all get together and agree to co-operate, effectively exchanging bits of spectrum so that they have the most appropriate bit of spectrum allocated to each base station. But this would involve fierce competitors to get to together and agree, so there would have to be something in it for them, the best incentive being cost savings. This is happening to a limited extent now.

The more drastic approach could be for OFCOM to decouple the operation of base stations (aka cell towers) from the provision of service – effectively moving the radio part of the service to a wholesale model. Right now, providing the consumer service is tightly coupled to building and operating the radio infrastructure, the notable exception being the MVNOs such as Virgin (among others), who don’t own any radio infrastructure, but sell a service provided over one of the main four.

It wouldn’t affect who the man in the street buys his phone service from – it could even increase consumer choice by allowing further new entrants into the market, beyond the MVNO model – but it could result in better use of spectrum which is, after all, a finite resource.

Either model could ensure that regardless of who is providing the consumer with service, the most appropriate bit of radio spectrum is used to service them, depending on where they are and which base stations their device can associate with.

DSL Diary – 23/10/2012

Latest instalment…

Currently away at the NANOG meeting in Dallas. Got an alert from the RIPE Atlas system that my Atlas probe had become unreachable.

A bit of testing from the outside world showed serious packet loss, and nothing on the home network was actually reachable with anything other than very small pings. I guessed the line had got into one of its seriously errored modes again, but thought I’d try leaving it overnight to see if it cleared itself up. Which it didn’t.

So, how did I get around this, and reset the line, given that by now my tolerant girlfriend would be at work, and couldn’t go into the “internet cupboard” and unplug some wires?

Well, it turns out that you can get BT to do an invasive test on a line using this tool on bt.com. This has the effect of dropping any calls on the line and resetting it.

The line re-negotiated, and came back up with the same speed as before, 3Mb/sec down, 0.45Mb/sec up, no interleave.

Looking at the router log, the VirtualAccess interface state was bouncing up and down during the errored period, so the errors are bad enough to make the PPP session fail and restart (again and again), but the physical layer wasn’t picking this up and renegotiating.

Of course, BT’s test says “No fault found”. In terms of the weather in London, it has been damp and foggy, further fuelling the dry joint theory.

I’ve also had a chat with Mirjam Kuehne from RIPE Labs about seeing if it’s possible to make the Atlas probe’s hardware uptime visible, as well as the “reachability-based” uptime metric. They are looking into it.