The problem with the IETF

There have recently been some good efforts to close the rift that’s been perceived to exist between the Internet operator community and the IETF. I hope I’m not giving them the kiss of death here… 🙂

A sense of frustration had been bubbling for a while: the IETF had become remote from the people who actually deploy the protocols, it had become the preserve of hardware vendors who lack operational experience, and it was no wonder they shipped deficient protocols.

But it can’t always have been that way, right? Otherwise the Internet wouldn’t work as well as it does.

Well, when the Internet first got going, the people who actually ran the Internet participated in the IETF, because they designed protocols and hacked at TCP stacks and routing code as well as running operational networks. Protocols were written with operational considerations to the fore. However, people like this have become fewer and fewer.

As time went by and the Internet moved on, a lot of these same folk stopped running networks day in, day out and took jobs with the vendors. They stayed involved in the IETF, though, because they were part of that community, they were experienced in developing protocols, and they brought operational experience to the working groups that do the development work.

The void in network operations was filled by the next generation of network engineers, and as time went by, fewer and fewer of them were interested in developing protocols, because they were busy running their rapidly growing networks. Effectively, there had been something of a paradigm shift in the sorts of people running networks compared with those who had done it in the past. For the Internet to grow the way it did in such a short time, something had to change, and this was it.

At the same time, the operational engineers were finding more and more issues creeping into increasingly complex protocols. That’s bad for the Internet, right? How did things derail?

The operational experience within the IETF was suffering from two things: 1) it was becoming more and more stale the longer key IETF participants went without running networks, and 2) the operator voice at the IETF was getting quieter and quieter, with suggestions from operators largely rejected as impractical.

Randy Bush had started to refer to it as the IVTF, implying that the vendors had “taken over”.

There have been a few recent attempts to bridge that gap: “outreach” talks and workshops at operations meetings such as RIPE and NANOG have sought operator input and feedback, though expressing that input without frustration hasn’t always been easy.

However, it looks like we’re getting somewhere…

Rob Shakir currently has a good Internet Draft out, aimed at building a bridge between the ops community who actually deploy the gear and the folks who write the protocol specs and develop the software and hardware.

This is long overdue and needs to work. It looks good, and it is finding support from both the vendor and ops communities.

The “meta-problem” here is that one cannot exist without the other: it’s a symbiotic, mutually beneficial relationship that needs to work for a sustainable Internet.

I wonder if it’s actually important for people on the protocol design and vendor side to periodically work on production networks, to ensure that their operational knowledge is current rather than ten years old?

I am the market Nokia lost

Remember when more than 50% of the mobile phones in people’s hands said “Nokia” on them? When 50% of those phones had that iconic/irritating/annoying signature ring tone, often because folks hadn’t worked out how to change it from the default, forever the prelude to yells of “Hello! I’m on a train/in a restaurant/in a library”?

Well, this week, a memo from the new Nokia CEO, Stephen Elop, has been doing the rounds online. It sums up the ferocious drubbing the once-dominant Finnish company has taken in the handset market, at the hands of Apple’s iPhone and Google’s Android OS, and how it is now poised on the telecoms equivalent of a blazing oil platform.

I am part of the market that Nokia lost, maybe even forgot. I have a drawer which could be called “my life as a mobile phone user”, littered with old Nokia handsets, many of them iconic in their own right… the 2110, 6110, 6150, 6210, 6310i (probably one of the best handsets Nokia ever made), 6600, and three Communicators, the 9210i, 9500 and E90.

Why did I stop using Nokia?

Well, the last Nokia handset I tried was the N97, and since then I’ve been an iPhone convert.

While those around me used swishy iPhones, my previous loyalty to Nokia was rewarded with a slow and clunky UI, a terrible keyboard, and appallingly bad (Windows-only) PC software for backup and synchronisation.

Nokia couldn’t even focus on keeping up with the needs of its previously loyal and high-yielding power users, for whom migrating handsets was always a pain, never mind the fickle throwaway consumer market.

Is it any wonder folks have deserted Nokia?

They have made themselves look like the British Leyland of the mobile phone world.

On a complete sidebar: any guesses as to which airline will start up a HEL-SFO service first? There have got to be yield-management folk looking at this in the wake of the news!

Update: 11 Feb 2011, 0855

As the pundits predicted, Nokia have announced that they are aligning themselves with Microsoft and its Windows Phone platform.

Back at work down the mines for Ethernet Standards Developers…

The ink on the 100GE standard is barely dry, and the first products are only just shipping. “Phew,” thinks the large network operator, “we’re good for another few years.”

Well, among the largest, probably not. They are already faced with needing to aggregate (run in parallel) multiple 100GE interfaces in their busiest areas. This doesn’t come cheaply: a single interface carries a high five-figure list price at minimum (Hankins, NANOG 50), potentially more.
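
To put that in perspective, here’s a rough back-of-the-envelope sketch in Python. The per-port price and the link counts are purely illustrative assumptions pegged to the “high five-figure” remark above, not quoted numbers:

    # Back-of-the-envelope cost of an N x 100GE link aggregation group.
    # The per-port price is an illustrative assumption, not a vendor quote.
    PORT_LIST_PRICE_USD = 90_000  # assumed "high five-figure" list price per 100GE port
    PORTS_PER_LINK = 2            # each parallel link needs a port at both ends

    def lag_interface_cost(links: int) -> int:
        """Interface list price alone for a bundle of `links` parallel 100GE members."""
        return links * PORTS_PER_LINK * PORT_LIST_PRICE_USD

    for links in (2, 4, 8):
        print(f"{links} x 100GE = {links * 100} Gbit/s, "
              f"interfaces ~ ${lag_interface_cost(links):,}")

Even at these assumed prices, doubling capacity by running links in parallel doubles the interface bill, which is why the pressure for a faster single interface builds so quickly.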

Fortunately, having had a little bit of a break, some enlightened folk involved in the 802.3ba standard are getting on the case again.

John D’Ambrosia, who chaired the 802.3ba Task Force, and whose day job is in the Office of the CTO at Force 10 Networks, is in the process of kicking off an “Ethernet Wireline Bandwidth Needs” assessment activity, under the IEEE Industry Connections banner, to steer the next steps for Ethernet so it can keep up with what the network is demanding of it.

There’s not much else online about this as yet; the effort is very new, so I’ll add some links once more information is available.

This is a much-needed activity. During the last iteration of the standards process there were criticisms about whether the faster speed was really needed, and disagreements about how big the market would be: some estimates were almost conservative, while others said the result would be too little, too late, at too high a price.

Good to see the new approach being taken, laying solid groundwork for the next (Terabit? Petabit? Something more creative?) run at the standard.