A week for new 40G toys…

It’s been a week for new 40G launches in the Ethernet switch world…

First out of the gate this week has been Arista, with their 7050S-64: a 1U switch with 48 dual-speed 1G/10G SFP ports and four 40G QSFP ports, 1.28Tbps of switching, 960Mpps, 9MB of packet buffer, front-to-back airflow for friendly top-of-rack deployment, etc., etc.

Next to arrive at the party is Cisco, with their Nexus 3064: a 1U switch with 48 dual-speed 1G/10G SFP ports and four 40G QSFP ports, 1.28Tbps of switching, 950Mpps, 9MB of packet buffer, front-to-back airflow for friendly top-of-rack deployment, etc., etc.

Whoa! Anyone else getting déjà vu?


Flash the Cache

Some trends in content distribution have made me think back to my early days on the Internet, both at University, and during my time at a dialup ISP, back in the 1990s.

Bandwidth was a precious commodity, and web caching – initially with Harvest (remember Harvest?) and latterly its offspring Squid – became quite popular as a way to reduce load on external links. There was also the ability to exchange cache content with other topologically close caches – clustering your cache with those on your neighbour networks.
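For those who never ran one, that clustering was mostly a couple of lines of Squid configuration, using ICP to ask nearby caches whether they already held an object before going out to fetch it. A minimal sketch (the hostnames here are invented) might look something like:

    # squid.conf - query a neighbouring network's cache as an ICP sibling,
    # without keeping local copies of objects fetched through it
    cache_peer cache.neighbour-isp.example sibling 3128 3130 proxy-only

    # and treat a well-populated upstream (e.g. academic) cache as a parent
    cache_peer cache.upstream.example parent 3128 3130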

(I remember that at least one of the UK academic institutions – Imperial College, I think, or maybe it was Janet – ran a cache that openly peered with other caches in the UK and was available via peering across LINX, and as a result it was popular and well populated.)

There were attendant problems – staleness of cached content could blight some of the more active websites, and unless you tried enforced redirection of web traffic (unpopular in some quarters, even today, when transparent caching is commonplace), the ISP often had to convince its users to opt in to using the cache by changing their browser settings.

It was no surprise that once bandwidth prices fell, caches started to fall out of favour.

However, that trend has been reversing in recent times… the cache is making a comeback, but in a slightly different guise: rather than general-purpose caches that take a copy of anything passing by, these are very specific caches, targeted and optimised for the content they carry, part of the worldwide growth of the distributed CDN.

Akamai have long been putting their servers out into ISP networks, and into major Internet Exchanges, delivering content locally to ISP subscribers. They famously say they “don’t have a backbone” – they just distribute their content through these local clusters. Akamai are delivering a local copy of “Akamaized” web content to local eyes, and continuing to experience significant growth.

Google is also in the caching game, with the Google Global Cache (GGC).

I heard a Google speaker at a recent conference explain how a GGC cluster installation at a broadband ISP provided an 85-90% saving in external bandwidth to Google-hosted content. To some extent this is clearly driven by YouTube content, but it has other smarts too.

So, what’s helped make caching popular again?

Herd Mentality – Social Networking is King. A link is posted and within minutes, it can have significant numbers of page impressions. For a network with a cache, that content only needs to be fetched once.

Bandwidth Demand – It’s not unheard of for large broadband networks to have huge (nx10G) private peerings with organisations such as Google. At some point, this is going to run into scaling difficulties, and it makes sense to distribute the content closer to the sink.

Fault Tolerance – User expectation is “it should just work”, and distributed caches can help prevent localised failures from having a wide-scale effect. (Would it have helped the BBC this week?)

Response Speed – Placing the content closer to the user minimises latency and improves the user experience. For instance, GGC apparently takes this one step further, acting as a TCP proxy for more interactive Google services such as Gmail – presumably because the TCP handshakes and retransmissions then happen over the short round-trip to the nearby proxy rather than the long haul back to the origin. This helps remove the “sponginess” of interactive sessions for those in countries with limited or high-latency external connectivity (some African countries, for instance).

So, great, caching is useful and has taken its place in the Network Architect’s Swiss Army Knife again. But what’s the tipping point for installing something like a GGC or Akamai cluster on your network? There are two main things at work: Bandwidth and Power.

Having a CDN cluster on your network doesn’t come for free – even if the CDN owns the hardware, you have to house it, power it and cool it. The normal hardware is a number of general-purpose, high-spec 1U rack-mount PCs.

So the economics seem to be a case of factoring in the cost of bandwidth (whether it’s transit or peering), router interfaces, data centre cross-connects, etc., versus the cost of hosting the gear.
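To make that concrete, here’s a trivial back-of-envelope sketch – the traffic volumes and prices are invented purely for illustration, not anyone’s real numbers:

    # Rough break-even: bandwidth saved by a local CDN/cache cluster versus
    # the cost of housing, powering and cooling it.  All figures are made up.
    offloaded_gbps = 4.0              # average traffic the cluster serves locally
    transit_cost_per_mbps = 5.0       # monthly cost per Mbps of external capacity
    hosting_cost_per_month = 3000.0   # rack space, power, cooling, cross-connects

    bandwidth_saving = offloaded_gbps * 1000 * transit_cost_per_mbps

    print(f"Monthly saving on external capacity: {bandwidth_saving:,.0f}")
    print(f"Monthly cost of hosting the cluster: {hosting_cost_per_month:,.0f}")
    if bandwidth_saving > hosting_cost_per_month:
        print("Hosting the cluster looks worthwhile")
    else:
        print("Cheaper to keep hauling the traffic in over transit/peering")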

Down at Peckham Market… “Get your addresses here. Laaavley v4 addresses!”

One of the first big deals in the IPv4 address secondary market appears to be happening – Microsoft paying $7.5m for pre-RIR (aka “early registration”) IPv4 address space currently held by Nortel.

There have been deals happening on the secondary market already. But this one is significant for two reasons:

  • The size of the deal – over 600k IPv4 addresses
  • That Nortel’s administrators recognise these unused IPv4 addresses – which Nortel paid either nothing, or only a nominal fee, to receive – as a real asset against which they can realise capital.
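
(A quick bit of back-of-envelope arithmetic: $7.5m for something over 600,000 addresses works out at roughly $11-12 per address, assuming the final count lands somewhere in the 625-680k region.)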

Interesting times… Now, where’s my dodgy yellow van?

Paging Air New Zealand, please report to the naughty corner.

A lot of folks who know me will know that I’ve held Air New Zealand in high regard for several years, that I really rated their inflight product and service, and that I would choose to fly Air NZ from Heathrow to LA in preference to other airlines such as BA or Virgin, as well as use them for flying to NZ itself.

I was a particularly big fan of their Pacific Premium Economy product – loads of leg room, and an inflight service which was deserving of the title “Premium” – offering Business class meals and fine NZ wines. I thought it represented very good value for money, and made the Virgin Premium Economy product, and especially the BA World Traveller Plus product, look positively economy by comparison.

On a recent trip down to New Zealand (my third in as many years) in January, for the NZNOG meeting in Wellington, I was able to experience Air New Zealand’s new long-haul Premium Economy product on their much-hyped new Boeing 777-300 aircraft, on one of its first long-haul flights.

Sadly, while the product was innovative, I was not impressed. I felt underwhelmed and disappointed with the experience, compared to what I would have received on the older plane. What’s worse is that it wasn’t all teething troubles. Sure, there were some teething troubles. But many of the problems were what I see as basic issues with the new product.

I found the new “SpaceSeat” anything but spacious – it felt confining, with the TV screen mere inches from your face, and at a weird angle compared to the seatback (and therefore your body), so you have to turn your neck or sit sort of twisted into a side-saddle position to try and be comfortable and watch a film at the same time.

The “seat pocket” as provided was a joke – it was made of a solid material (rather than an elasticated netting), and wasn’t even big enough to fit a book in.

The area of the aircraft I was sat in felt incredibly hot, stuffy and uncomfortable. Despite the crew setting a cooler temperature, where I was sat, it never seemed to cool down or get any sort of noticeable airflow.

I’m not normally a claustrophobic person, but I can only describe what I felt as being “freaked out” by the environment created by the cabin – from the stuffy air to the TV in your face, to the lack of space – and this was in a window seat!

I thought the personal space on the new product rather inferior to what you get on Air NZ’s 747-400 planes.

The fact is, Air NZ have, for some reason, crammed the rows of seats in very tightly. I would probably be inclined to pay a little more if a row or so of seats was pulled out and the space spread between the others. Just to get that TV a bit further away from my face.

The inflight service was of a reduced quality compared to the old service, at least as far as I was concerned. It was very slow, due to the fiddly nature of the service, and because the seat is angled away from the aisle and sits so tight up to the back of the seat in front, it wasn’t really possible to have any sort of interaction with the crew member serving you from the aisle, as that would involve turning your head through much more than 90 degrees.

This meant that it was much more difficult to experience the “soft side” of the service of the great Air NZ crews, as you couldn’t easily make eye-contact with them. They just became this sort of disembodied arm and hand pushing food and drinks in front of you. It also took over two (and more like three) hours to serve the main meal.

This seems to fly in the face of what Air NZ were aiming for, which was a more personal service!

It’s fair to say that on the older aircraft, Air NZ didn’t have the best PE seat in the sky, but I think they had the best Premium Economy soft product, in terms of the food and level of service. Air NZ seem to have gutted that great soft product in order to provide what they perceive as a “better seat”.

There are other comedy errors, such as the location of the Premium Economy galley (over the wing), which meant that it couldn’t have a hot water tap. If passengers ordered coffee or tea, it had to be brought from the other galleys – meaning staff walking through the cabin with jugs/flasks of hot water from the other galleys – not made easy by the fact that the aisles have been made narrower!

The changes don’t just affect the Premium Economy product, either. The quid pro quo of the “Economy Skycouch” product is that the Economy cabin is seated 10-across, which doesn’t sound bad until you realise this is on a Boeing 777, which most other airlines – including Air NZ themselves on their 777-200s – seat only 9-across.

The aisles are noticeably narrower – folk in the aisles will notice they get bumped more often – as are the seats themselves. A friend travelled in the back of the 777-300 and found it unbearably uncomfortable, having to sit with their shoulders “tucked in”. I can understand this on a 20-minute commute to Central London, but not on a 13-hour flight from LA to NZ.

The seat pitch (the spacing from one seat row to the next) in Economy has also been cranked down from 34″ on the 747-400 to 31-32″ on the 777-300. Air NZ have gone from one of the best Economy products in the sky to one of the most unbearably cramped in one fell swoop. It feels like a step backwards, and it’s not just me. There’s plenty of discussion about it on that perennial thorn in the side of airlines, Flyertalk.

When the product was launched, it was accompanied by a lot of fanfare about the months of painstaking research that had gone on behind the scenes. If there has been all this research, how can the product be full of what I (as a 100K-mile-a-year traveller) regard as such schoolboy errors?

Also interesting to observe is that there has been what I perceive to be an astroturfing campaign about how great the new products are via their social media outlets, such as their Twitter account, yet nothing about the nightmares that I know for a fact they have been having with the new service. Oh, and what is it with that dreadful muppet character, Rico? How is that related to (what should be) high-quality air travel?

So, not really enjoying this flight much, I contacted Air NZ to offer my feedback on the flight.

Sadly, after waiting about 6 to 8 weeks, all I got was a dreadful, bland, canned reply which basically indicated a “head in the sand” approach – that there couldn’t really be anything seriously wrong with their wonderful new product, could there, and that these were all flukes which would be fixed by the next time I flew. Like I believed that.

They may as well have just said “You are free to take your business elsewhere”. Well, sadly, that’s what I’ve done on my next trip to California.

On 1st April, the last 747-400-operated NZ1 will leave Heathrow, and the next day London gets the “downgrade” to the 777-300 service. The regulars won’t know what’s hit them.

Update – Thursday 31st March 2011

So, a few folks thought I was just having a rant here. Perhaps because it sounded a bit ranty, or I wasn’t explicit about something I wanted to get across:

What’s really disappointed me here is that an organisation which seemed to be switched on, yet still able to treat its customers with good old-fashioned respect, and which in the past seemed to have a great grasp of what people wanted, could have gone off the rails quite so spectacularly with a string of apparently shallow and unpopular moves.

This week’s oxymoron: Ethernet will never dominate in…

…Broadcast TV and Storage. Apparently.

I’ve just read a blog post by Shehzad Merchant of Extreme Networks, about a panel he recently participated in, where one of the other panelists asserted the above was true.

Fascinating that a conference in 2011 is talking about Ethernet not becoming dominant in broadcast TV.

There are several broadcast organisations already using large-scale 10 Gig Ethernet platforms in places such as their archiving systems and their playout platforms, and I’m not talking niche broadcasters, but big boys like ESPN. Not sure if any of them are using Extreme’s Purple boxes though.

This unnamed panelist would be better off moving into time travel, as it seems they are already able to come here from the past and make these assertions.

I do wonder if it’s actually the stick-in-the-mud storage industry which will be slower to move than the broadcasters!

SQ: Hey, folks in social housing, why not fly Business Class on our A380?

I have to wonder who is doing SQ’s media buying for billboard space, and what they might be smoking.

Why? Because at least four billboards in quick succession along the same road carried large ads suggesting one should try out SQ’s A380 Business Class product. So what, you might say – it’s just some sort of blanket advertising campaign.

But, these billboards are along a road passing through an area which is characterised by social housing along one side, and light industrial units along the other. The average passerby is hardly the target market for round trips to Singapore at £3.5k a pop, right?

Being situated on the way to Belmarsh Prison (once dubbed the “British version of Guantanamo Bay”, and where the UK sends its really quite dangerous criminals), it’s not like it’s a through route for high rollers either. The folks passing by in those vehicles with blacked-out windows aren’t likely to be leaving the country any time soon, unless they depart handcuffed to a police escort.

So, I’ll ask the question again. What is the media buyer responsible for these ads thinking?

I guess it got me thinking. I might fly SQ Biz to SIN, especially if someone else is paying. Maybe that’s the trick?

The problem with the IETF

There have been some good efforts recently to fix the rift that’s perceived to exist between the Internet operator community and the IETF. I hope I’m not giving them the kiss of death here… 🙂

A sense of frustration had been bubbling for a while that the IETF had become remote from the people who actually deploy the protocols, that the IETF had become the preserve of hardware vendors who lack operational experience, and that it’s no wonder they ship deficient protocols.

But, it can’t have always been that way right? Otherwise the Internet wouldn’t work as well as it does?

Well, when the Internet first got going, the people who actually ran the Internet participated in the IETF, because they designed protocols and they hacked at TCP stacks and routing code, as well as running operational networks. Protocols were written with operational considerations to the fore. However, I think people like this are getting fewer and fewer.

As time went by and the Internet moved on, a lot of these same folk stopped running networks day in, day out and got jobs with the vendors, but they stayed involved in the IETF, because they were part of that community, they were experienced in developing protocols, and they brought operational experience to the working groups that do the development work.

The void in the Network Operations field was filled by the next generation of Network Engineers, and as time went by, fewer and fewer of them were interested in developing protocols, because they were busy running their rapidly growing networks. Effectively, there had been something of a paradigm shift in the sorts of people who were running networks, compared with those who had been doing it in the past. For the Internet to grow the way it did in such a short time, something had to change, and this was it.

At the same time, the operational engineers were finding more and more issues creeping into increasingly complex protocols. That’s bad for the Internet, right? How did things derail?

The operational experience within the IETF was suffering from two things: 1) it was becoming more and more stale the longer key IETF participants went without having to run networks, and 2) the operator voice present at the IETF was getting quieter and quieter, with things suggested by operators largely being rejected as impractical.

Randy Bush had started to refer to it as the IVTF – implying that Vendors had “taken over”.

There have been a few recent attempts to bridge that gap – “outreach” talks and workshops at operations meetings such as RIPE and NANOG sought to get operator input and feedback – however, trying to express this without frustration hasn’t always been easy.

However, it looks like we’re getting somewhere…

Rob Shakir currently has a good Internet Draft out, aimed at building a bridge between the ops community who actually deploy the gear and the folks who write the protocol specs and develop the software and hardware.

This has been long overdue and needs to work. It looks good, and is finding support from both the Vendor and Ops communities.

The “meta-problem” here is that one cannot exist without the other; it’s a symbiotic and mutually beneficial relationship that needs to work for a sustainable Internet.

I wonder if it’s actually important for people on the protocol design and vendor side to periodically work on production networks, to ensure that they have current operational knowledge rather than relying on that of 10 years ago?

I am the market Nokia lost

Remember when more than 50% of mobile phones in people’s hands said “Nokia” on them? When 50% of those phones had that iconic/irritating/annoying signature ringtone – often because folks hadn’t worked out how to change it from the default – long a prelude to yells of “Hello! I’m on a train/in a restaurant/in a library“.

Well, this week a memo from the new Nokia CEO, Stephen Elop, has been doing the rounds online, which sums up the ferocious drubbing the once-dominant Finnish company has taken in the handset market at the hands of Apple’s iPhone and Google’s Android OS, and how it is now poised on the telecoms equivalent of a blazing oil platform.

I am part of the market that Nokia lost, maybe even forgot. I have a drawer which could be called “my life as a mobile phone user”, littered with old Nokia handsets, many of them iconic in their own right… the 2110, 6110, 6150, 6210, 6310i (probably one of the best handsets Nokia ever made), 6600, and three Communicators, the 9210i, 9500 and E90.

Why did I stop using Nokia?

Well, the last Nokia handset I tried was the N97, and since then I’ve been an iPhone convert.

While those around me used swishy iPhones, my previous loyalty to Nokia was rewarded with a slow and clunky UI, a terrible keyboard, and appallingly bad (Windows-only) PC software for backup and synchronisation.

Nokia couldn’t even focus on keeping up with the needs of its previously loyal and high-yielding power users, for whom migrating handsets was always a pain, never mind the fickle, throwaway consumer market.

Is it any wonder folks have deserted Nokia?

They have made themselves look like the British Leyland of the mobile phone world.

On a complete sidebar – any guesses on which airline will start up a HEL-SFO service first? There have got to be yield management folk looking at this in the wake of this news!

Update: 11 Feb 2011, 0855

As the pundits predicted, Nokia have announced they have aligned themselves with Microsoft and its Windows Phone platform.

Farewell IANA Free Pool…

Or, with apologies to Rolf Harris, “Can you guess what it is yet?”…

The NRO are inviting us to a webcast of a “special announcement” tomorrow. I wonder what it could be?

Might it be the end of the IANA IPv4 free pool? Or could it be that a few more /8s have been found down the back of the sofa? The latter is very unlikely.

We’re probably looking at a ceremonial doling out of the remaining /8s to the various RIRs.

While it may look a bit profligate to fly a load of RIR folk to Miami, it’s probably a necessary media stunt, as implementers and vendors have been sodding around, sat on their hands, for long enough, to the point that many folks’ home broadband routers and systems won’t do IPv6 and therefore can’t support a dual-stacked (v4 and v6 enabled) environment.

(The Real) Geoff Huston has, as usual, produced an interesting graph:

Per RIR IPv4 depletion to /8 and probability of when it's likely to happen

So we should find APNIC moving to activate their “final /8 policy” first. The idea is that – assuming Cyclone Yasi doesn’t try to finish the job the Queensland floods started in Brisbane – APNIC will start to issue allocations from the final /8 in smaller blocks, with only one allocation available to each APNIC member LIR from that last /8, to give some level of fairness to the run-out.

Anyway, it will certainly be interesting to keep an eye on these graphs!

Fortunately (in some respects), the run-out in the RIPE NCC region still looks to be about 12 months away. That still doesn’t give me the warm fuzzies.

Folk need to start using IPv6, and debugging what’s wrong with it, to stand any chance of being ready.
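
Even a few lines of script will tell you whether a connection actually does v6 end to end. A quick sketch in Python (ipv6.google.com is just one commonly used v6-only test name – substitute any dual-stacked host you like):

    # Quick-and-dirty IPv6 reachability check: look up a AAAA record and try a
    # TCP connection over v6.  The host name is only an example.
    import socket

    def check_ipv6(host="ipv6.google.com", port=80, timeout=5):
        try:
            # Ask the resolver for IPv6 results only
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror as e:
            return f"No usable AAAA record for {host}: {e}"
        family, socktype, proto, _, sockaddr = infos[0]
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            return f"IPv6 looks fine: connected to {sockaddr[0]} port {port}"
        except OSError as e:
            return f"Got an address but could not connect over v6: {e}"
        finally:
            s.close()

    if __name__ == "__main__":
        print(check_ipv6())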