Finally got around to getting FTTC installed to replace my ADSL service, which seldom did more than about 3Mb/sec and has had its fair share of ups and downs in the past. I didn’t want to commit to the 12 month contract term until I knew the owner was willing to extend our lease, but now that’s happened, I ordered the upgrade, sticking with my existing provider, Zen Internet, who I’m actually really happy with (privately held, decent support when you need it, don’t assume you’re a newbie, well run network, etc…).
For the uninitiated, going FTTC requires an engineer to visit your home, and to the cabinet in the street that your line runs through, and get busy in a big rat’s nest of wires. The day of the appointment rolled around, and mid-morning a van rolled up outside – “Working on behalf of BT Openreach”. “At least they kept the appointment…”, I think to myself.
BT doesn’t always send an Openreach employee on these turn-ups; sometimes they send a third-party contractor instead, and that was the case for this FTTC turn-up…
It’s only available in a handful of markets at the moment, and the BBC’s tech correspondent, Rory Cellan-Jones, filed a number of pieces for TV and radio yesterday, conducting countless speed tests along the way, which he has blogged about extensively.
Some of the comments have been that it’s no better in terms of speed than a good 3G service in some circumstances, while others complain about the monthly cost of the contracts.
Get locked-in to 4G(EE)?
The initial cost for the early adopters was always going to attract a premium, and those who want it will be prepared to pay for it. It’s also worth noting that there are no “all you can eat” data plans offered on EE’s 4G service. Everything comes with an allowance, and anything above that has to be bought as “extra”.
The most concerning thing as far as the commercial package goes is the minimum contract terms.
12 months appears to be the absolute minimum (SIM only), while 18 months seems to be the offering if you need a device (be it a phone, dongle or MiFi), and 24 month contracts are also being offered.
Pay As You Go is not being offered on EE’s 4G service (as yet), probably because, with no competition, they’ve no incentive to.
Are EE trying to make the most of the head start they have over competitors 3, O2 and Voda and capture those early adopters?
Rory Cellan-Jones referred in his blog to problems with reduced performance inside buildings.
A number of factors affect the propagation of radio waves and how well they penetrate buildings and other obstacles: the nature of the building’s construction, for one (a building which exhibits the properties of a Faraday cage will block radio signals, or attenuate them to the point of being useless), but also the frequency of the radio signal.
Longer wavelengths (lower frequencies) can travel further and are less impacted by having to pass through walls. I’m sure there’s an xkcd on this somewhere, but the best I can find is this….
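To put a rough number on the frequency effect, the standard free-space path loss formula shows what doubling the frequency costs you, all else being equal. A quick sketch (free space only – real-world wall attenuation is on top of this, and also worse at higher frequencies):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    # Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_900 = fspl_db(2.0, 900)
loss_1800 = fspl_db(2.0, 1800)
print(f"900 MHz over 2 km:  {loss_900:.1f} dB")
print(f"1800 MHz over 2 km: {loss_1800:.1f} dB")
# Doubling the frequency adds 20*log10(2) ~= 6 dB of path loss
print(f"difference: {loss_1800 - loss_900:.1f} dB")
```

So, before buildings even come into it, a 1800 MHz signal arrives about 6dB weaker than a 900 MHz one from the same mast.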
The reason EE were able to steal a march on the other mobile phone companies was because OFCOM (the UK regulator, who handle radio spectrum licensing for the nation) allowed EE to “refarm” (repurpose) some of their existing allocated frequency, previously used for 2G (GSM), and convert it to support 4G. The 2G spectrum available to EE was in the 1800 MHz range, as that was the 2G spectrum allocated to EE’s constituent companies, Orange and T-Mobile.
Now, 1800 MHz does penetrate buildings, but not as well as the 900 MHz spectrum which was allocated to Voda and O2 for 2G.
Voda are apparently applying to OFCOM for authority to refarm their 900 MHz spectrum for 4G LTE. Now, this would give a 4G service with good propagation properties (i.e. it would travel further from the mast) and better building penetration. Glossing over the (non-)availability of devices which talk LTE in the 900 MHz spectrum, could this actually be good for extra-urban/semi-rural areas which are broadband not-spots?
Well, yes, but it might cause problems in dense urban areas where the device density is so high it’s necessary to have a large number of small cells, in order to limit the number of devices associated with a single cell to a manageable amount – each cell can only deal with a finite number of client devices. This is already the case in places such as city centres, music venues and the like.
Ideally, a single network would have a high density of smaller cells (micro- and femto-cells) running on the higher frequency range to intentionally limit each cell’s reach (and therefore the number of connected devices) in very dense urban areas such as city centres, and a lower density of large cells (known as macro-cells) running on lower frequencies to cover less built-up areas and possibly better manage building penetration.
But, that doesn’t fit with our current model of how spectrum is licensed in the UK (and much of the rest of the world).
Could the system of spectrum allocation and use be changed?
One option could be for the mobile operators to all get together and agree to co-operate, effectively exchanging bits of spectrum so that they have the most appropriate bit of spectrum allocated to each base station. But this would involve fierce competitors getting together and agreeing, so there would have to be something in it for them, the best incentive being cost savings. This is happening to a limited extent now.
The more drastic approach could be for OFCOM to decouple the operation of base stations (aka cell towers) from the provision of service – effectively moving the radio part of the service to a wholesale model. Right now, providing the consumer service is tightly coupled to building and operating the radio infrastructure, the notable exception being the MVNOs such as Virgin (among others), who don’t own any radio infrastructure, but sell a service provided over one of the main four.
It wouldn’t affect who the man in the street buys his phone service from – it could even increase consumer choice by allowing further new entrants into the market, beyond the MVNO model – but it could result in better use of spectrum which is, after all, a finite resource.
Either model could ensure that regardless of who is providing the consumer with service, the most appropriate bit of radio spectrum is used to service them, depending on where they are and which base stations their device can associate with.
Currently away at the NANOG meeting in Dallas. Got an alert from the RIPE Atlas system that my Atlas probe had become unreachable.
A bit of testing from the outside world showed serious packet loss, and nothing on the home network was actually reachable with anything other than very small pings. I guessed the line had got into one of its seriously errored modes again, but thought I’d try leaving it overnight to see if it cleared itself up. Which it didn’t.
So, how did I get around this, and reset the line, given that by now my tolerant girlfriend would be at work, and couldn’t go into the “internet cupboard” and unplug some wires?
Well, it turns out that you can get BT to run an invasive test on a line using this tool on bt.com. This has the effect of dropping any calls on the line and resetting it.
The line re-negotiated, and came back up with the same speed as before, 3Mb/sec down, 0.45Mb/sec up, no interleave.
Looking at the router log, the VirtualAccess interface state was bouncing up and down during the errored period, so the errors are bad enough to make the PPP session fail and restart (again and again), but the physical layer wasn’t picking this up and renegotiating.
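Spotting those flaps in a log dump can be scripted rather than eyeballed. A minimal sketch, assuming Cisco-style `%LINEPROTO` messages (the real router’s log format may well differ):

```python
import re

# Hypothetical log excerpt in a Cisco-IOS-style format, illustrating the
# PPP Virtual-Access interface bouncing while the physical layer stays up.
log = """\
Nov 13 02:11:04: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access1, changed state to down
Nov 13 02:11:31: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access1, changed state to up
Nov 13 02:14:52: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access1, changed state to down
"""

# Count how many times the PPP session dropped
flaps = len(re.findall(r"Virtual-Access\d+, changed state to down", log))
print(f"{flaps} down transitions")
```

A burst of those in a short window, with no corresponding DSL retrain messages, is the signature of this “errored but still in sync” state.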
Of course, BT’s test says “No fault found”. In terms of the weather in London, it has been damp and foggy, further fuelling the dry joint theory.
I’ve also had a chat with Mirjam Kuehne from RIPE Labs about seeing if it’s possible to make the Atlas probe’s hardware uptime visible, as well as the “reachability-based” uptime metric. They are looking in to it.
The city has successfully applied for EU state aid to build network into underserved areas of the city, aligned with the Council’s regeneration plans for those areas. Virgin contest that it is “overbuilding” on their existing network footprint, and as such is unnecessary, effectively using EU subsidy to attack their revenue stream.
Broadband campaigner Chris Conder, one of the people behind the B4RN project, says that this is a case of VM and BT trying to close the stable door after the horse has bolted.
It’s going to be an interesting and important test case.
The other recent addition to my home network is a RIPE Atlas probe – this is part of a large scale internet measurement project being run by the RIPE NCC. One of the advantages of hosting a probe is that you get access to the measurements running from your probe, and you can also get the collection platform to email you if your probe becomes unreachable for a long period.
As it turned out, my probe appeared to be down for half an hour last night, but I know I was using the internet connection just fine at that time, so maybe I’ll put that down to an interruption between the probe and the collection apparatus?
Well, the current status is that the line has been up for seven days now, at just over 3Mb/sec, Interleaving off.
Still not the fastest connection, but at least it now seems to be more stable.
One thing I’ll keep an eye open for is whether the line goes back to Interleaved, as the Atlas probe should show up the difference in latency that you get with Interleaving enabled.
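That latency difference is big enough to detect automatically: interleaving adds a fixed chunk of delay (commonly somewhere around 20ms of round trip on ADSL). A hypothetical helper sketching the idea, with the baseline and threshold figures being illustrative rather than measured on my line:

```python
import statistics

def likely_interleaved(rtt_samples_ms, fastpath_baseline_ms=10.0, added_ms=20.0):
    # Compare the median RTT against a known fast-path baseline plus the
    # typical fixed delay interleaving introduces. Median rather than mean,
    # so the odd congested sample doesn't trigger a false positive.
    return statistics.median(rtt_samples_ms) >= fastpath_baseline_ms + added_ms

print(likely_interleaved([9.8, 10.2, 11.0]))   # fast path
print(likely_interleaved([31.5, 32.0, 30.9]))  # looks interleaved
```

Something like this run against the probe’s regular ping measurements would flag a silent switch back to Interleaved mode.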
While I was away at the RIPE 65 meeting in Amsterdam last week, my home DSL went down. I suspected that the router and the exchange equipment had got into some crazy state where packets are massively errored, but sync isn’t lost, so there’s no retraining. The only way of recovering is to bounce the adsl interface on the router, either in software, or unplugging from the phone line. Occasionally, since moving, this happens, and seems to be related to the weather, which had been very wet and windy at the start of the week.
Since moving, I live toward the edge of the coverage of my exchange; the line length is estimated to be about 4km, and it has to get across an EM-noise-ridden town centre and an electrified railway line or two to get here. Both of which are potential factors that influence one’s line speed. It’s delivered overhead from the nearby pole on a dropwire, while the rest is UG, though that shouldn’t have any significant issue.
Initially the line synced at around 5Mb with Interleaving, but was retraining several times a day; it eventually settled down to run stable without Interleave at around the 3Mbit mark, which is okay for most things other than TV streaming, but we’re not a Netflix kind of household, so don’t really mind.
My unfailingly patient girlfriend (who also needed to use the internet connection) reset the line, things retrained, and we were off again.
However, when I got home, I found the performance seemed a bit slow, so I checked the router. The line speed had dropped to sub 2.5Mb/sec, with Interleave on.
After a couple of retrains over the last few days, the speed has crept back up and, following an “invasive line test” via BT.com, which forces a drop and renegotiate, it’s now syncing again at 3M, but still with Interleave – which is no great loss to me as I’m not a huge online gamer these days.
(Now realising that’s a way of forcing a remote reset when it’s got into a heavily errored state but hasn’t lost sync. Handy when I’m away and there’s no-one in to pull the plug.)
So, this is not the first time I’ve had fun with the line since moving. During the recent spell of hot weather, things would run fine until there was a sudden cooling, such as rapid cloud cover or a heavy shower, at which point the connection would drop and need to be nudged to renegotiate.
It’s got me wondering if the line is affected by a dry joint or degraded cable somewhere along the way.
Doing a ‘17070’ and quiet line test, it’s got a faint “shushing” noise, rather than total silence, and I did just notice what seemed to be “crosstalk” of ringing current (a faint “click-click, click-click”) for a few seconds.
Not sure whether to argue the toss with BT to get them out to give the line the once over (but risk having an indifferent BTO engineer make it worse rather than better), or just give in and go FTTC, despite the fact I’m 3 months into a 1 yr tenancy and FTTC has 1 yr minimum term and can’t cope with you moving house (yet!).
Had a chat with my old man about this. He’s a retired BT engineer, so generally knows his stuff about copper plant. He agreed with the likelihood of it being a dry joint, and/or the possibility of there being other dry joints in the same cab/DP with a shared earth, given that the “click-click” of ringing current is sometimes audible over a quiet line test.
His suggestion: Phone BT until they are sick to death of you and keep asking for either the joints on your existing routing to be re-made, or a new routing to be provided.
It’s worth noting that the SCC deployment is being done separately from the BDUK umbrella, and it’s been revealed BT were bidding against two other independent contractors, as opposed to their usual BDUK bidding rival Fujitsu.
Of course, one advantage of going with BT for this deployment is that, assuming BT in the main use their existing FTTC/FTTP service models, it shouldn’t be a problem for any ISP to deliver “superfast” service to homes and businesses on the Surrey deployment. It will be done using the same interconnects and provisioning.
Compare that to more “bespoke” superfast networks such as Digital Region, which had been viewed as unattractive to work with because of the additional overheads for a consumer ISP of dealing with their processes and provisioning systems, in addition to the “de facto” wholesale broadband providers such as Be/O2 and the ubiquitous BT.
So, while I was at the IX Leeds meeting last week, I was interested to hear of a new service from Fluidata, which aims to solve the problems commonly associated with delivering service over multiple local access wholesalers, which they are calling “Stop@Nothing”.
Their plan is to offer a wholesale “middleman” service, interconnecting to various local access networks, both national (such as BT and O2) and regional (such as Digital Region), among others, and being able to deliver these over an inter-regional backhaul network to the ISP on a common pipe (or pipes), and provide a common API to the ISP for provisioning, regardless of which last mile network is delivering service to the customer premises.
I can see this helping the ISPs in two ways – potentially doing away with the time and cost implications of integrating a new wholesale broadband provider platform into your own provisioning processes and systems, and giving ISPs who don’t have any local presence cheaper access to regional projects (such as Digital Region), without the risk of building into the area – maybe that becomes something that can be done later if volume warrants it. It potentially also gets around issues such as minimum order commitments from individual ISPs, as these are aggregated behind the Fluidata service.
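The “common API” idea is worth making concrete. A purely illustrative sketch (these class and method names are my invention, not Fluidata’s actual interface): the ISP codes against one provisioning interface, and the middleman routes each order to whichever last-mile network serves that premises.

```python
from abc import ABC, abstractmethod

class LastMileWholesaler(ABC):
    """Common interface the ISP codes against, whoever owns the last mile."""
    @abstractmethod
    def place_order(self, premises_id: str, product: str) -> str: ...

class BTWholesale(LastMileWholesaler):
    def place_order(self, premises_id, product):
        return f"BT:{premises_id}:{product}"      # stand-in for BT's real process

class DigitalRegion(LastMileWholesaler):
    def place_order(self, premises_id, product):
        return f"DR:{premises_id}:{product}"      # stand-in for DR's real process

def provision(premises_id: str, product: str, coverage: dict) -> str:
    # The middleman's job: look up which network covers the premises and
    # translate the one common order into that network's own provisioning.
    wholesaler = coverage[premises_id]
    return wholesaler.place_order(premises_id, product)

coverage = {"LS1-0001": DigitalRegion(), "GU1-0002": BTWholesale()}
print(provision("LS1-0001", "FTTC-80/20", coverage))
```

The ISP only ever calls `provision()`; adding a new regional network is then the middleman’s integration problem, not every ISP’s.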
I haven’t got a clue how cost effective Fluidata’s product will be, as I’ve not seen any pricing for it. I can only assume that it’s competitive or they wouldn’t be doing it.
Meanwhile, the group of determined farmers and country-dwelling folk behind B4RN in the North West continue doing their own thing, their own way, and have recently been digging into a local church hall in Abbeystead:
There’s a whole series of videos on their YouTube channel about how they are progressing and details on the physical elements of their infrastructure such as digs and fibre installs.