The problem with the IETF

There have been some good recent efforts to close the rift that's perceived to exist between the Internet operator community and the IETF. I hope I'm not giving them the kiss of death here… 🙂

A sense of frustration had been bubbling for a while: the IETF had become remote from the people who actually deploy the protocols, it had become the preserve of hardware vendors who lack operational experience, and so it's no wonder they ship deficient protocols.

But it can't always have been that way, right? Otherwise the Internet wouldn't work as well as it does?

Well, when the Internet first got going, the people who actually ran the Internet participated in the IETF, because they designed protocols and they hacked at TCP stacks and routing code, as well as running operational networks. Protocols were written with operational considerations to the fore. However, I think people like this are getting fewer and fewer.

As time went by, the Internet moved on, and a lot of these same folk stopped running networks day in, day out and got jobs with the vendors. But they stayed involved in the IETF, because they were part of that community, they were experienced in developing protocols, and they brought operational experience to the working groups that do the development work.

The void in the Network Operations field was filled by the next generation of Network Engineers, and as time has gone by, fewer and fewer of them have been interested in developing protocols, because they were busy running their rapidly growing networks. Effectively, there had been something of a paradigm shift in the sorts of people who were running networks, compared with those who had been doing it in the past. For the Internet to grow the way it did in such a short time, something had to change, and this was it.

At the same time, the operational engineers were finding more and more issues creeping into increasingly complex protocols. That's bad for the Internet, right? How did things derail?

The operational experience within the IETF was suffering from two things: 1) it was becoming more and more stale the longer that key IETF participants didn't have to run networks, and 2) the operator voice present at the IETF was getting quieter and quieter, as things suggested by operators had largely been rejected as impractical.

Randy Bush had started to refer to it as the IVTF, implying that vendors had "taken over".

There have been a few recent attempts to bridge that gap: "outreach" talks and workshops at operations meetings such as RIPE and NANOG sought to get operator input and feedback; however, expressing this without frustration hasn't always been easy.

However, it looks like we're getting somewhere…

Rob Shakir currently has a good Internet Draft out, aimed at building a bridge between the ops community who actually deploy the gear and the folks who write the protocol specs and develop the software and hardware.

This has been long overdue and needs to work. It looks good, and is finding support from both the Vendor and Ops communities.

The "meta-problem" here is that one cannot exist without the other; it's a symbiotic and mutually beneficial relationship that needs to work for a sustainable Internet.

I wonder if it's actually important for people on the protocol design and vendor side to periodically work on production networks, to ensure that their operational knowledge is current rather than ten years old?

Internet Access as a right and the Egyptian Internet Shutdown

I'm not going to do any in-depth analysis (I'll leave that to my good friends at Renesys) – it's everywhere – but unless you've been comatose for the last few days, you can't have helped noticing the situation in Egypt.

Being an internet geek, I'm still going to focus on the country's decision to take itself offline, killing its Internet connectivity to the rest of the world.

Firstly, while it may have slowed down the ability for folks in Egypt to communicate rapidly with the rest of the world, and potentially organise demonstrations, it also seems to have managed to drive folk who might have otherwise stayed in front of their screens out onto the streets, where they can either generally protest at the Mubarak regime, or specifically protest about being isolated from something they now take for granted.

It certainly gives the Police something to do…

The "kill-switch" mechanism appears to have been pretty simplistic, and non-technical in implementation. Analysis from Renesys, RIPE Labs, and others highlights that the main Egyptian ISPs seem to have been called on in turn by folks from the Mukhabarat (the Egyptian equivalent of the secret service) and instructed to shut down external connectivity – by taking down interfaces or BGP peers.

The lack of centralised technical measures shows that it's not necessarily difficult for any administration to do this – either using existing instruments in law, or just having enough agents and judges to churn out the court orders and pay folks a visit.
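To illustrate how little is technically involved, here's a sketch of what such an instruction amounts to at a router. This is a hypothetical IOS-style snippet with made-up AS numbers, addresses, and interface names – not taken from any actual Egyptian network:

```
! Hypothetical IOS-style configuration, illustrative only
router bgp 64500
 neighbor 192.0.2.1 shutdown   ! administratively drop the eBGP session to the
                               ! upstream; prefixes announced over it are withdrawn
!
interface GigabitEthernet0/0
 shutdown                      ! or simply take the physical interface down
```

Either change results in the affected prefixes being withdrawn from the global routing table, which is exactly the sort of disappearance the BGP-level measurements picked up on.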

However, the other thing to consider is that some countries are now starting to treat the Internet as an essential service and almost a fundamental right, like access to water and power.

I'm just on my way back from a visit to New Zealand (where I participated in the NZNOG '11 conference). The NZ Government is currently embarking on a process of using Government subsidy – on the premise that this will get paid back over time – to bootstrap open-access FTTH deployments in major urban areas in NZ, to the extent that they should bring 75% of the country's 4.5M inhabitants within easy reach of high-speed broadband.

The motivation behind such a move is that reliable high-speed internet access will be a cornerstone of economic growth, but that comes with the corollary that it becomes an expectation of the consumer, just as they expect the power or water supply not to go off unless there's a genuine emergency (such as the flooding in Queensland, Australia).

However, it seems like the Government involvement could become a double-edged sword, as the investment threatens to come with various regulatory strings attached – the change in funding, from privately, entrepreneurially built infrastructure, means that the Government feels it has a right to have a say. It remains to be seen how much of one, but there's already dangerous talk of "mandatory peering" and that sort of thing.

As far as I can tell, there's no talk of a massive comic-strip-style busbar "kill switch" being built in NZ, and the NZ Government seem like moderate and reasonable folk, but will the investment in UFB be brought to bear when the NZ Government wants Internet providers to accede to its desires in blocking content or controlling access?

Back to "Internet access as a right" for the closing few words: flipping the question on its head, how would you feel if the Government shut the power off to your neighbourhood because they felt it served their needs?

Finally, a touch of irony. I composed this blog post from 37,000ft over Texas, thanks to the reliable and affordable in-flight internet access provided on board Virgin America.

(While on board, I saw the announcement that Mubarak has appointed his Intelligence chief as Vice President. Says a lot, right?)

That's how richly woven into our often already complex lives ready internet access has become. Think back a few years. People's expectations are already changing.

Update – 31st Jan 2011

Vikram Kumar, Chief Exec of InternetNZ (the association that engages in technical and public policy issues on behalf of the NZ Internet community), has just blogged an article about the possibilities for a take-down of NZ's external connectivity to the rest of the world. Summary: probably unlikely.

Update – 2nd Feb 2011

Internet access in Egypt was restored in the last 12 hours, and there's coverage of this on the Renesys Blog.

Using Social Networking to build Corporate Kudos

Many companies have leapt on the Social Networking bandwagon as part of their marketing and public relations strategy. They have staff for whom posting on things like Twitter and Facebook is a major part of their day.

Why wouldn't you? It's an easier way of getting information out to, and interacting with, your customers (and potential customers).

Airlines have been fairly quick to catch on to this – it's a great way of rapidly disseminating service info during disruption, and collecting rapid feedback from pax.

One of my industry colleagues recently tweeted at United Airlines because he saw something that he thought UAL HQ should know about. Obviously the stress of the weather-related disruption hitting the area was getting the better of both pax and airline employees alike…

"@UnitedAirlines your ground staff is yelling at an old man since 15 mins airport IAD gate D6 flight UA7599 Time Jan 27, 2011 1439"

UA's Twitter person was responding within about half an hour…

"@mhmtkcn The Dulles manager will follow up. Today. Thanks for the heads up."

Two things to take away:

1) Speed of contact and response – this was quicker than sending email, or probably than trying to phone someone at UAL to let them know this was happening. Getting the message to the right person isn't always easy; the Social Media team can act as a rallying point for this info.

2) The positive response from the UA Twitter scribe – "This will be followed up today" – does a lot to show that someone in what could otherwise be perceived as a large, faceless, inaccessible, uncaring corporation does give a damn, that the user can get their attention, and get something done.

This simple action shows how social networking can bridge the communications gap that often exists between large companies and their clients, and does a lot to raise UA's kudos amongst those who saw the message.

Comcast-NBCU Deal Approved

Yesterday, the FCC approved Comcast's proposed purchase of the controlling stake in NBC Universal, but with some conditions. Comcast must give up its Board position (and control) in online video service Hulu, despite still owning a good share of it, and accept other regulatory controls, such as limits on the exclusivity of NBCU-produced content and a requirement to operate according to the FCC's Open Internet Principles. The aim is to keep a rein on what the merged entity can and cannot do, and to retain a reasonably level playing field in the market (to the relief of other cable operators and online video distributors).

It looks like quite the regulatory straitjacket, which has no doubt also cost public money to develop, and will cost more to enforce.

Despite this, all I can hear in the back of my mind is the "thud, thud, thud" of the Stay Puft Marshmallow Man's footsteps, coming to crush anything in its way under its squishy feet.

Auntie Beeb on Net Neutrality

Earlier this week Andy D suggested that I might be listening to too much Radio 4.

I don't necessarily think that's a bad thing, as it means I caught a couple of items on Net Neutrality. Indeed, the Beeb seems to be showing increasing interest in this area – and wouldn't you, if there were something which threatened your editorial freedom?

Imagine for a minute that Sky Broadband subscribers got ultra-fast access to Sky News (and other News Corp) content, while poor old Auntie (among other content providers) got packet-shaped, throttled and capped to a crawl. Now you see what the fuss is about.

Anyway, Radio 4 has discussed Net Neutrality on two occasions in the past few weeks.

Firstly, during the long-running consumer affairs programme "You and Yours" on the 7th October, there was a brief discussion (will open a link to BBC iPlayer) which included comments from ISOC's Leslie Daigle.

This week, on Monday 17th October, there was a further segment on the subject (iPlayer link) in the "Click On" programme – fast forward to around the 17-minute mark – which contains the fantastic quote: "Put three geeks together in a room and you'll get four definitions of Net Neutrality".

I'm not sure if that says more about the issue, or more about the geeks. 🙂

There's also a rather nice BBC blog article from Erik Huggers (Director, BBC FM&T) which neatly sums up elements from both of the above items, including the observation that, despite the appearance of freedom of choice and competition in the UK consumer broadband market, it isn't all it's cracked up to be, due to triple-play lock-ins or the sheer aggro factor.

The closing paragraph raises "thin-end-of-the-wedge" concerns about this gradually creeping in through the back door if the regulators don't use the tools in their power to manage this contentious issue.

While the BBC, as a major content player, do have a vested interest in preserving their editorial freedom and equal opportunity to distribute their content, there's a lot of sense behind it too.

Whither (UK) Regional Peering – Pt.1

Just last month, in mid-September, Andy Davidson brought up the switch at IXLeeds, the latest UK regional IXP.

You'll note I say "the latest", but how many non-London UK IXPs can you name off the top of your head? Not many, I'll wager. Fewer still that are still operating. And no, the LINX PoP in Slough doesn't count in my picture of non-London!

This is the problem: it's often said that there isn't the level of regional IP peering going on in the UK that there probably should be, for redundancy reasons. The majority of IP peering in the UK happens in London, and when it isn't happening in London, it's probably happening in Amsterdam instead.

Let's face it, on an island that's ~15ms round-trip top-to-bottom, we're less likely to peer to reduce latency, especially when the architecture of the incumbent wholesale DSL platform encourages networks to do little besides haul all broadband customer traffic to a central point before dispersing it.

Previous attempts at establishing regional IXPs in the UK have had varying levels of success. The most successful to date, in terms of the number of participants and achieving critical mass, is probably MANAP, founded in Manchester in 1997.

Unlike LINX, which survived (well, successfully resisted) a demutualisation attempt, MANAP only sort of did. It allowed its infrastructure to be taken over by a company funded by the local Regional Development Association, and the exchange became a service provided over an infrastructure which was no longer dedicated to IXP operations, but also carried other traffic and provided other services.

The MANAP that exists today is not the same exchange; it has been subsumed into the NWIX platform and operates as Edge-IX, a distributed exchange which is present in Manchester, elsewhere in the Northwest, and in many other locations, including those in London's Docklands that it was initially intended to provide redundancy for. It has a different flavour, and has lost some elements of its "regionality".

What distinguishes it from a carrier, other than the Edge-IX services being non-profit while the NWIX ones are commercial?

I'm not suggesting that this is a better or worse model, just different, and probably not regional anymore. If this – reinventing yourself as an inter-regional IX – is the only way a regional IXP in the UK can survive, then we'll find it very challenging to reach a position of sustainable regional peering in the UK. Could things have been different in Manchester?

You may be questioning what issue I have with a "wide-area" exchange point distributed over a large geographic area. The main concern is shared fate: a disruption that would otherwise be localised can spread easily. I can probably write a whole article on that. Maybe I will another day…

So, why would a quick hop over the Pennines to Leeds be any different?

Manchester itself is also at risk of becoming unattractive as a location for regional IXs – with the recent purchase of IFL by Telecity Group, there is very little organisational diversity or competition in the Manchester co-lo market beyond existing facilities such as Vialtus Serverbank and recent new entrant Ice Colo. Folks in the Manchester area were very quick on the social networks to state their fears about anticipated price rises and few options as a result of the lack of choice.

The Leeds scene is rather different, with lots of smaller, entrepreneurial companies active in the metro area. This is a double-edged sword: while it results in the sort of competition in the co-lo market that folks like, it also meant that IXLeeds couldn't be present everywhere the potential IX participants wanted to connect, certainly not from day one. There's a future aspiration to expand within the metro.

One of the early strengths of IXLeeds is that it has a good community feel behind it, including the involvement of folks who have experienced peering in Manchester, while the Yorkshire RDA have been involved from the outset in getting folks together, but (so far) haven't felt the temptation to get in the driving seat, instead choosing to play the role of facilitator.

There's a will to succeed, so hopefully they will reach the critical mass that is required to sustain the exchange.

A concern I have is the lack of international capacity into the Leeds area; Manchester is in a better position here due to independent (from London) transatlantic connectivity arriving in the area.

That said, while international bandwidth is something of a pre-requisite for a national exchange point, is it actually necessary for a successful regional IX?

Then again, what are the success criteria for an IX, especially a regional one? A graph that forever goes up and to the right? They probably are, and should be, different from those of a national IX. Is a regional IX not being satisfied with its lot, and wanting to be like its larger neighbour, what actually destroys it? Maybe that's another article in itself?

I'd say it depends on how non-London-centric the early IXLeeds ISPs are, how much of their traffic is delivered locally, and how much traffic they have between each other that they might normally route through London.

If my previous experience is anything to go by – such as opening a new PoP for an already successful IX – these things usually "slow start", so patience is required.

I'm going to come back to this topic in the coming weeks; I'll try and write about some of the side issues I've threatened to cover above, and maybe touch on a missed opportunity.

Still think they should have called it the Rhubarb Internet Exchange. Even if it was just to confuse people. 🙂

Beginning of the end for hotspots?

The Dutch Telecoms Regulator has announced it will require Dutch hotels to register as ISPs (Slashdot article).

Despite the fact that a hotel usually doesn't own the wifi infrastructure on its premises, and certainly isn't an ISP in the normal sense, the Dutch regulator's rationale is that the hotel is reselling the ISP service – i.e. it is a VISP.

I don't see that this is always the case, as hotels in the Netherlands, in my experience, don't rebrand the ISP services as their own. However, they often collect the money and charge it to your room folio.

I suspect the meta question here is what does this mean for hotspots generally, especially the ones which are currently free?

Does this drive up their costs significantly enough to either a) cause free hotspots to start charging, or b) shut the hotspot down, because the costs of the bureaucracy aren't recovered?

What happens if I let someone else use a MiFi that I own? Am I an ISP too?

Seems more thought is needed!