…and you’re not gonna reach my telephone.

Or, when an FTTC install goes bad.

Finally got around to getting FTTC installed to replace my ADSL service, which seldom did more than about 3Mb/sec and has had its fair share of ups and downs in the past. I didn’t want to commit to the 12 month contract term until I knew the owner was willing to extend our lease, but now that’s happened, I ordered the upgrade, sticking with my existing provider, Zen Internet, who I’m actually really happy with (privately held, decent support when you need it, they don’t assume you’re a newbie, well run network, etc…).

For the uninitiated, going FTTC requires an engineer to visit your home and the cabinet in the street that your line runs through, and get busy in a big rat’s nest of wires. The day of the appointment rolled around, and mid-morning a van rolls up outside – “Working on behalf of BT Openreach”. “At least they kept the appointment…”, I think to myself.

BT doesn’t always send an Openreach employee on these turn-ups; sometimes they send a third-party contractor instead, and that was the case for this FTTC turn-up…

Continue reading “…and you’re not gonna reach my telephone.”

Unsensationalist SDN

SDN – Software Defined Networking – has become one of the trendy Silicon Valley buzzwords. Often abused and misused, it seems that all the marketing folks are looking at ways of passing off what they already have on their product roadmaps as “SDN”, while startups are hoping that it will bring in the latest tranche of VC money. Some folk would even have you believe that SDN can cook your toast, make your coffee and iron your shirts.

People are talking about massively centrally orchestrated networks, a move away from the distributed intelligence which, on the wider Internet, is one of its strong points. Yes, you could have a massive network with a single brain and very centralised policy control, if that’s what you wanted. For some applications it’s what you need – for instance in Google’s internal networks. But for others you may really need things to be more distributed. The way some people are talking, they make it sound like SDN is an “all or nothing” proposition.

But, is there a place for SDN between distributed and centralised intelligence that people are missing in the hype?

Put all the excitement to one side for a minute, and we’ll go back to basics.

Most large networking equipment today already has a distributed architecture, such that when a packet of data arrives that the receiving interface doesn’t know what to do with, it’s forwarded from the high-speed forwarding silicon to software running on a CPU. In some devices there are multiple CPUs, often arranged in a hierarchy, with a “master” CPU in charge of the overall system and local CPUs delegated to run elements of it, such as logical or physical groups of interfaces, or specific processes.

When the forwarding hardware sends a packet to the CPU, to get a decision on what to do with it, it’s commonly known as a “CPU punt”.

The software running on the CPU examines the packet based on the configuration of the device, compares it against its internal tables and any configured policy, and makes a decision about what to do with it.

It then sends the packet back to the forwarding engine, along with instructions on how to program the forwarding hardware so that it knows what to do with subsequent packets that have the same properties (i.e. from the same data stream, or with the same destination address, etc.) – whether that be to apply policy, forward them, drop them, and so on.
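To make that loop concrete, here’s a rough sketch of the punt-and-program cycle in Python-flavoured pseudocode. Every name in it (the device, routing_table, policy and forwarding_asic objects) is hypothetical – it’s only there to illustrate the shape of what the vendor’s software is doing:

    # A rough sketch of the classic CPU punt loop. All of these objects
    # (device, routing_table, policy, forwarding_asic) are hypothetical.
    def handle_cpu_punt(device, punted_packet):
        # 1. Software examines the packet against its tables and configured policy.
        decision = device.routing_table.lookup(punted_packet.destination)
        if decision is None:
            decision = device.policy.default_action()  # e.g. drop, or use a default route

        # 2. Program the forwarding silicon, so later packets with the same
        #    properties are handled entirely in hardware, with no further punts.
        device.forwarding_asic.program(
            match={"destination": punted_packet.destination},
            action=decision,
        )

        # 3. Finally, deal with the packet that triggered the punt itself.
        device.forwarding_asic.apply(decision, punted_packet)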

This isn’t that different from how OpenFlow works, except that the CPU is abstracted away from the device in the data path and resides in something known as a “Controller”. Effectively, what were CPU punts now become messages to the OpenFlow Controller. And because the Controller runs on a more general-purpose computer, it can make “special” decisions that would be much harder to implement in the software running on a router or switch.
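To illustrate the parallel, here’s roughly what the same cycle looks like from a controller’s point of view. This is a cut-down sketch based on the Ryu OpenFlow controller framework’s standard packet-in pattern (OpenFlow 1.3); the forwarding “decision” here is just a flood placeholder, not anything a real controller would ship:

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3
    from ryu.lib.packet import packet, ethernet

    class PuntsBecomeMessages(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        # A packet the switch doesn't know what to do with arrives here as a
        # packet-in message - the "CPU punt", moved outboard to the controller.
        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            msg = ev.msg
            dp = msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser

            eth = packet.Packet(msg.data).get_protocols(ethernet.ethernet)[0]
            out_port = ofp.OFPP_FLOOD  # placeholder - a real controller would decide properly

            # Program the switch so that subsequent packets with the same
            # properties are handled in its forwarding hardware.
            actions = [parser.OFPActionOutput(out_port)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=parser.OFPMatch(eth_dst=eth.dst),
                                          instructions=inst))

            # And send the punted packet itself back out.
            dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=ofp.OFP_NO_BUFFER,
                                            in_port=msg.match['in_port'],
                                            actions=actions, data=msg.data))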

Basically, SDN in some respects looks like the next step in something we’ve been doing for years. It’s an evolution; does it need to be a revolution?

So, here’s where I think what’s currently a “missed trick” lies…

The forwarding hardware used in SDN doesn’t have to be totally dumb. There are some decisions you might be happy for it to make for itself. It could pick up the details of what those are from the policy held in the Controller, and these can be written to the local CPU and the local forwarding silicon.

But other things – the ones you know you do want to “punt” to the Controller – get set up (through applied policy) and handled that way.
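As a sketch of what that hybrid could look like (again Ryu-flavoured, OpenFlow 1.3, and with made-up matches purely for illustration): when a switch connects, the controller pushes down the rules it’s happy for the hardware to apply entirely on its own, plus a low-priority table-miss entry, so that only the traffic you actually want to inspect ever gets punted.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class HybridPolicy(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser

            def add_flow(priority, match, actions):
                inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
                dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                              match=match, instructions=inst))

            # Traffic the hardware is trusted to forward entirely by itself
            # (an illustrative match and port - real policy would come from config).
            add_flow(100,
                     parser.OFPMatch(eth_type=0x0800, ipv4_dst=("10.0.0.0", "255.0.0.0")),
                     [parser.OFPActionOutput(1)])

            # Everything else falls through to the table-miss entry and is
            # punted to the controller for a decision.
            add_flow(0, parser.OFPMatch(),
                     [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)])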

I can think of occasions in the past where I would have loved to take the stream of CPU punts from a device that can otherwise happily make a lot of its own decisions, farm them out for analysis and processing in ways which weren’t possible on the device itself, and convert the results back into policy, config, and ultimately CAM programming. But could I have done that without basically lobotomising the device?

Does SDN have to be the “all or nothing” approach that some of the SDN evangelists seem to be proposing? Or is a hybrid approach more realistic, and better aligned with how we actually build networks?

Is the Internet facing a “perfect storm”?

The Internet has become a massive part of our everyday lives. If you walk down a British high street, you can’t fail to notice people staring into their phones rather than looking where they are going! I did see a comment on TV this week that you have a 1-in-10 chance of tripping and falling over when walking along looking at your phone and messaging…

There are massive pushes for faster access in countries which already have widespread Internet adoption, both over fixed infrastructure (such as FTTC and FTTH) and wireless (LTE, aka 4G), which at times isn’t without controversy. In the UK, the incumbent, BT, is commonly (and sometimes unfairly) criticised for trying to sweat more and more out of its copper last-mile infrastructure (the wires that go into people’s homes), while not doing enough to “future-proof” and enable remote areas by investing in fibre. There have also been problems over the UK regulator’s decision to allow one mobile phone network to get a head start on its competitors in offering LTE/4G service, using existing allocated radio frequencies (a process known as “spectrum refarming”).

Why do people care? Because the Internet helps foster growth and can reduce the costs of doing business, which is why developing countries are working desperately hard to drive Internet adoption – along the way having to manage the threat of “interfering” actors who either don’t fully understand the Internet or fear change.

However, a bigger threat could be facing the Internet, and it’s coming from multiple angles, technical and non-technical. A perfect storm?

  • IPv4 Resource Exhaustion
    • The existing addressing (numbering) scheme used by the Internet is running out
    • A secondary market for “spare” IPv4 resources is developing; IPv4 addresses will acquire a monetary value, driven by the lack of IPv6 deployment
  • Slow IPv6 Adoption
  • Increasing Regulatory attention
    • On a national level, such as the French Regulator, ARCEP, wishing to collect details on all interconnects in France or involving French entities
    • On a regional level, such as ETNO pushing for regulation of interconnect through use of QoS – nicely de-constructed by my learned colleague Geoff Huston – possibly an attempt to retroactively fix a broken business model?
    • On a global level, through the ITU, who, having disregarded the Internet as “something for academics” and not relevant to public communications back in 1988, now want to update the International Telecommunication Regulations to extend them to cover who “controls the Internet” and how.

All of these things threaten some of the basic foundations of the Internet we have today:

  • The Internet is “open” – anyone can connect, it’s agnostic to the data which is run over it, and this allows people to innovate
  • The Internet is “transparent” – managed using a bottom-up process of policy making and protocol development which is open to all
  • The Internet is “cheap” – relatively speaking, Internet service is inexpensive

These challenges facing the Internet combine to break all of the above.

Close the system off, drive costs up, and make development and co-ordination an invite-only closed shop in which it’s expensive to participate.

Time and effort, and investing a little money (in deploying IPv6, in some regulatory efforts, and in checking your business model is still valid), are the main things which will head off this approaching storm.

Adopting IPv6 should just be a (stay in) business decision. It’s something operational and technical that a business is in control of.

But the regulatory aspect is tougher, unless you’re big enough to afford your own lobbyists. Fortunately, if you live in the UK, it hasn’t reached “write to your MP” time – not just yet. The UK’s position remains one of “light touch” regulation, largely letting the industry regulate itself through market forces, and this is the position being advocated to the ITU. There are also some very bright, talented and respected people trying to get the message through that it’s economically advantageous not to turn the Internet into a closed, top-down operated system.

Nevertheless, the challenges remain very much real. We live in interesting times.

Recent IPv4 Depletion Events

Those of you who follow these things can’t have missed that the RIPE NCC got down to its last /8 of unallocated IPv4 space last week.

They even made a cake to celebrate…

Photo (and cake?) by Rumy Spratley-Kanis

This means the RIPE NCC are down to their last 16 million or so IPv4 addresses, and they can’t get another big block allocated to them, because there aren’t any more to give out.
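For anyone wondering where that figure comes from, a /8 leaves 24 of an IPv4 address’s 32 bits free, so:

    # Number of addresses in a /8
    print(2 ** (32 - 8))   # 16,777,216 - the "roughly 16 million" above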

Continue reading “Recent IPv4 Depletion Events”

Networking equipment vs. Moore’s Law

Last week, I was at the NANOG conference in Vancouver.

The opening day’s agenda featured a thought-provoking keynote talk from Silicon Valley entrepreneur and Sun Microsystems co-founder Andy Bechtolsheim, now co-founder and Chief Development Officer of Arista Networks, entitled “Moore’s Law and Networking”.

The basic gist of the talk is that while Moore’s Law continues to hold true for general-purpose computing chips, it has not applied for some time to the development of networking technology. Continue reading “Networking equipment vs. Moore’s Law”

What might OpenFlow actually open up?

A few weeks ago, I attended the PacketPushers webinar on OpenFlow – a networking technology that, while not seeing widespread adoption as yet, is still creating a buzz on the networking scene.

It certainly busted a lot of myths and misconceptions folks in the audience may have had about OpenFlow, but the big questions it left me with are what OpenFlow stands to open up, and what effect it might have on many well established vendors who currently depend on selling “complete” pieces of networking hardware and software – the classic router, switch or firewall as we know it.

If I think back to my annoyances in the early 2000s, the big one was the amount of feature bloat creeping into network devices while we still tended to have a lot of monolithic operating systems in use, so a bug in a feature that wasn’t even turned on could crash the device, because the code was running regardless. I was annoyed because there was nothing I could do other than apply kludgy workarounds and nag the vendors to ship patched code. I couldn’t decide to rip that bit of code out and replace it with fixed code myself, and when the vendors finally shipped fixed code, it meant a reboot to install it. I didn’t much like being so dependent on a vendor: not working for an MCI or a UUnet (remember, we’re talking 1999-2001 here – they were the big guys), my voice in the “fix this bug” queue was at times a little mouse squeak to their lion’s roar, in spite of heading up a high-profile Internet Exchange.

Eventually we got proper multi-threaded and modular OSes in networking hardware, but I remember asking for “fast, mostly stupid” network hardware a lot back then. No “boiling the sea” – an oft-heard cliché these days.

The other thing I often wished I could do was have hardware vendor A’s forwarding hardware because it rocked, but use vendor B’s routing engine, as vendor A’s was unstable or feature incomplete, or vendor B’s just had a better config language or features I wanted/needed.

So, in theory, OpenFlow could stand to enable network builders to do the sorts of things I describe above – allowing “mix-and-match” of “stuff that does what I want”.

This could stand to threaten the established “classic” vendors who have built their business around hardware/software pairings. So, how do they approach this? Fingers-in-ears “la, la, la, we can’t hear you”? Or embrace it?

You should, in theory, and with the right interface/shim/API/magic in your OpenFlow controller, be able to plug in whatever bits you like to run the control protocols and be the “brains” of your OpenFlow network.
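Hand-waving over a lot of detail, that shim might be little more than an interface the controller defines and a vendor’s “black box” control-plane code implements. The sketch below is made up out of thin air, purely to illustrate the shape of it:

    from abc import ABC, abstractmethod

    # Entirely hypothetical: an interface an OpenFlow controller could expose
    # so that any vendor's control-plane "brains" can be plugged in.
    class ControlPlaneModule(ABC):

        @abstractmethod
        def handle_punt(self, in_port, packet_bytes):
            """Process a punted control packet (e.g. a routing protocol update)."""

        @abstractmethod
        def forwarding_entries(self):
            """Return the (match, action) entries the controller should push
            down into the forwarding hardware."""

    # Vendor A's silicon and Vendor B's routing engine could then meet in the
    # middle, so long as both sides of the "magic" speak this interface.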

So, using the “I like Vendor A’s hardware, but Vendor B’s foo implementation” example, a lot of people like the feature support and predictability of the routing code from folks such as Cisco and Juniper, but find they have different hardware needs (or a smaller budget), and choose a Brocade box.

Given that so much merchant silicon is going into network gear these days, the software is now the main ingredient in the “secret sauce” – the sort of approach that folks such as Arista are taking.

In the case of a Cisco, their “secret sauce” is their industry-standard routing engine. Are they enlightened enough to develop a version of it which can run in an OpenFlow controller environment? I’m not talking about opening the code up, but shipping it as a “black box” with appropriate magic wrapped around it to make it work with other folks’ controllers and silicon.

Could unpackaging these crown jewels be key to long term survival?

Regional Broadband, the Lords Select Committee and Google Fibre in the UK

Some of you may be aware of the Google Fibre project, an experimental project to build a high-speed FTTH network for communities in Kansas City. They chose Kansas City from a number of different communities that responded to Google’s “beauty contest” for the pilot, because they had to pick just one and felt it would have the greatest effect and be the best community to work with.

Like many other Community Broadband projects, Google point out that what the large incumbent telcos sell as “high speed internet” is seldom “high speed” at all, and is commonly sub 4Mb/sec. Google estimate that during the pilot, the cost of works for lighting up each subscriber premises may be as much as $8000 – though this is cheaper than the £10000 that it’s rumoured to cost to deploy high speed broadband to a rural subscriber in the UK.

So, this got me thinking, what could Google potentially bring to the UK with a similar sort of project?

It strikes me that one of the ways Google could help the most is by supporting the existing community-benefit-based FTTC/FTTH groups in building networks in their communities – groups which right now can be frustrated by a lack of access to public money from the super-fast broadband deployment fund (aka BDUK).

A significant amount of BDUK money is going to BT as the incumbent, or needs complex joint-venture constructs (such as Digital Region – though that was not a BDUK-funded project), because these organisations firstly have whole departments dedicated to handling the paperwork required to bid for the public funding, and secondly have a sufficiently high turnover to bid for a large enough amount of public money to deliver the project. These are hurdles for community-led companies, who will most likely drown in the paperwork required to bid for the funding, may not have all the necessary expertise either on staff or under contract, and likely don’t have the turnover needed to support the application.

Meanwhile, the House of Lords Communications Select Committee have issued this request for evidence (.pdf) in respect of an inquiry into whether the Government’s super-fast broadband strategy (and the BDUK funding) will be able to repair “digital divides” (and prevent new ones), deliver enough bandwidth where it’s needed, provide enough of a competitive marketplace in broadband delivery (such as a competitive wholesale fibre market), and generally “do enough”.

Could this be where Google enters, stage left? As opposed to running the project in its entirety, they partner up – managing things such as the funding bid process on behalf of the communities, possibly acting as some sort of guarantor in place of turnover, as well as providing technical know-how and leveraging their buying power and contacts?

This would at least give an alternative route to super-fast broadband. Right now, BT are winning a lot of the County Council led regional/rural fast broadband deployment projects, sometimes because they are the only organisation able to submit a compliant bid.

It remains to be seen whether the money will benefit the real not-spots, or just prop up otherwise marginal BT FTTC roll-outs. Don’t get me wrong, I’ve no axe to grind with BT, but is the current situation, with little or no competition, ultimately beneficial to the communities the awarded funding is purporting to benefit?

This is certainly one of the questions the House of Lords enquiry is looking to answer.

End of the line for buffers?

This could get confusing…

“The End of the Line” by Alan Walker, licensed for reuse under a Creative Commons licence

buffer (noun)

1. something or someone that helps protect from harm,

2. the protective metal parts at the front and back of a train or at the end of a track, that reduce damage if the train hits something,

3. a chemical that keeps a liquid from becoming more or less acidic

…or in this case, none of the above, because I’m referring to network buffers.

For a long time, larger and larger network buffers have been creeping into network equipment, with many equipment vendors telling us big buffers are necessary (and charging us handsomely for them), and various queue management strategies being developed over the years.

Now, with storage networks and clustering moving from specialised platforms such as Fibre Channel and InfiniBand to ethernet and IP, and the transition to distributed clustering (aka “the cloud”), this wisdom is being challenged – not just by researchers, but by operators and even equipment manufacturers.

The fact is, in lots of cases it can often be better to let the higher-level protocols in the end stations deal with network congestion, rather than introduce variable latency through deep buffering and badly configured queueing in the network, which attempts to mask the problem and ends up confusing the congestion control behaviours built into those protocols.
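The arithmetic behind that is simple enough to show with some made-up but plausible numbers – a buffer that stays full just adds queueing delay in proportion to its size and the rate the link drains at:

    # Worst-case queueing delay added by a standing full buffer:
    #   delay = buffered bits / link rate
    buffer_bytes = 1000000       # a 1 MB buffer (illustrative)
    link_bps = 10000000          # draining onto a 10 Mb/s link

    delay = (buffer_bytes * 8) / float(link_bps)
    print("%d ms of added latency" % (delay * 1000))   # -> 800 ms

Long before a buffer that size drains, the end stations’ congestion control has lost the timely feedback it depends on – which is precisely the problem the deep buffer was supposed to be masking.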

So, it was interesting to find two articles in the same day with this as one of the themes.

Firstly, I was at the UKNOF meeting in London, where one of the talks was from a researcher working on the BufferBloat project, which is approaching the problem from the CPE end – looking at the affordable mass-market home router, or more specifically the software which runs on it and the buffer management therein.

The second thing I came across was a great blog post from technology blogger Greg Ferro’s Ethereal Mind, on his visit to Gnodal, an innovative UK ethernet fabric switch startup who are doing some seriously cool stuff that removes buffer bloat from the network (along with some other original ethernet fabric tech). That’s really important for the data centre networking market Gnodal are aiming at, with its sensitivity to latency and jitter.

(Yes, I’m as excited as Greg about what the Gnodal guys are up to – it’s really breaking the mould, and as it’s being developed in the UK, of course I’m likely to be a bit biased!)

Is the writing on the wall for super-deep buffers?

Comcast Residential IPv6 Deployment Pilot

Comcast, long active in the IPv6 arena, have announced that they will be doing a native residential IPv6 deployment in Pleasanton, CA, on the edge of the San Francisco Bay Area. It will be a dual-stacked, native v4/v6 deployment with no NAT.

This is a much needed move to try and break the deadlock that seems to have been holding back wide scale v6 deployment in mass market broadband providers. Apart from isolated islands of activity such as XS4ALL‘s pioneering work in the Netherlands, v6 deployment has largely been available only as an option from providers focused on the tech savvy user (such as A&A in the UK).

Sure, it’s a limited trial, and initially aimed at single devices only (i.e. one device connected directly to the cable modem), but it’s a start, and there are plans to expand it as experience is gained.

Read these good blog articles from Comcast’s John Brzozowski and Jason Livingood about the deployment and its aims.

Abstracting BGP from forwarding hardware with OpenFlow?

Interesting hallway discussion at RIPE 63 last week. Olaf Kolkman stopped me in the coffee break and asked if I could think of any way of intercepting BGP data in a switch/router and somehow farming it out to an external control plane to process the BGP updates – making the routing decisions external to the forwarding device and then updating the FIB in the switch/router.

I had a think for about 15-20 seconds and asked “What about OpenFlow?”

In a classic switch/router type device, BGP packets are identified by the ingress packet processor and punted up to the control plane of the box. The routing process doesn’t run in the forwarding silicon; it runs in software on the system CPU(s). The routing process evaluates the routing update, makes changes to the RIB accordingly, updates the system FIB, and then sends an internal message to program the forwarding silicon to do the right thing with subsequent packets.

I’m assuming at this stage that you understand the rationale for wanting to move the routing decisions outside of the forwarding hardware? There are many reasons why you might want to do this: centralised routing decisions being one (hopefully quicker convergence?), the ability to apply routing policy based on higher-level application needs (for example in cluster environments), running the routing decisions on powerful commodity hardware (rather than specialised, expensive networking hardware), running customised routing code to suit local requirements, or, as Dave Meyer helpfully said, it “Allows you do lots of abstractions”.

So, why not try and do this with OpenFlow?

It’s designed to make pattern matches on incoming packets in a switch and, where necessary, punt those packets to an OpenFlow controller for further processing. The OpenFlow controller can also signal back to the switch what to do with further packets that have the same properties, effectively programming the forwarding hardware in the switch. That’s not significantly different from how BGP updates are processed now, except that today it all happens inside the box.
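For the BGP case, the flow entry to set that up is tiny. Something along these lines should do it (Ryu-flavoured, OpenFlow 1.3, and purely a sketch – “dp” is assumed to be the datapath object for the switch): match TCP port 179 and hand those packets to the controller, while everything else is forwarded by whatever entries are already programmed in the hardware.

    # Sketch: punt BGP (TCP port 179) to the OpenFlow controller.
    # 'dp' is assumed to be the Ryu datapath object for the switch; a second
    # rule matching tcp_src=179 would catch the other direction of the session.
    ofp, parser = dp.ofproto, dp.ofproto_parser

    match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=179)
    actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200,
                                  match=match, instructions=inst))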

It looks like OpenFlow could be the thing – except it’s probably a little young/incomplete to do what we’re asking of it at this stage – but it’s a rapidly developing protocol, and folks who are well deployed in the SP arena, such as Brocade, have already said they are roadmapping OpenFlow functionality into their SP product lines.

It seems to me that there are plenty of open source routing daemons out there (Quagga, OpenBGPd, BIRD) which could serve as the outboard routing control plane.

So what seems to be needed is some sort of OpenFlow-controller-to-routing-daemon shim/abstraction layer, so that an existing BGP daemon can communicate with the OpenFlow controller – which seems to be what the QuagFlow project is doing.
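The shim itself needn’t be conceptually complicated. As a rough sketch – with an entirely hypothetical interface to the routing daemon on one side and to the controller on the other, and no claim that this is how QuagFlow actually structures it – it boils down to a loop that turns RIB changes into flow entries:

    # Rough sketch of a BGP-daemon-to-OpenFlow shim. rib_updates(), push_flow(),
    # delete_flow() and port_for_next_hop() are all hypothetical; a real shim
    # would talk to Quagga/BIRD/OpenBGPd on one side and a controller API on the other.
    def run_shim(routing_daemon, controller):
        for update in routing_daemon.rib_updates():   # blocking stream of RIB changes
            match = {"eth_type": 0x0800,
                     "ipv4_dst": (update.prefix, update.netmask)}

            if update.withdrawn:
                controller.delete_flow(match=match)
                continue

            # Resolve the BGP next-hop to an output port on the switch
            # (hand-waved here - ARP/ND and port mapping are the shim's problem).
            out_port = controller.port_for_next_hop(update.next_hop)

            # Longer prefixes get higher priority, approximating longest-prefix match.
            controller.push_flow(match=match,
                                 actions=[("output", out_port)],
                                 priority=update.prefix_length)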