Comcast Residential IPv6 Deployment Pilot

Comcast, long active in the IPv6 arena, have announced that they will be doing a native residential IPv6 deployment in Pleasanton, CA, on the edge of the San Francisco Bay Area. It will be a dual-stacked, native v4/v6 deployment, with no NAT.

This is a much-needed move to try to break the deadlock that seems to have been holding back wide-scale v6 deployment among mass-market broadband providers. Apart from isolated islands of activity, such as XS4ALL’s pioneering work in the Netherlands, v6 has largely been available only as an option from providers focused on the tech-savvy user (such as A&A in the UK).

Sure, it’s a limited trial, and initially aimed at single devices only (i.e. one device connected directly to the cable modem), but it’s a start, and there are plans to expand it as experience is gained.

Read these good blog articles from Comcast’s John Brzozowski and Jason Livingood about the deployment and its aims.

Whither (UK) Regional Peering – Pt 3

Anyone still using C7513s?

At the end of the last post, I vaguely threatened that at some point I’d go on to discuss IX Participant Connectivity.

The topic got a “bump” to the front of the queue last week, thanks to a presentation from Kurtis Lindqvist, CEO of Sweden’s Netnod exchange points, given at the RIPE 63 meeting in Vienna.

Netnod have been facing a dilemma. They provide a highly resilient peering service for Sweden, consisting of multiple discrete exchanges in various Swedish cities, the biggest being in Stockholm, where they operate two physically separate, redundant exchanges. They currently provide all their participants in Stockholm with the facility to connect to both fabrics, so they can benefit from the redundancy this provides. Sounds great, doesn’t it? If one platform in Stockholm goes down, the other is up, and traffic keeps flowing. Continue reading “Whither (UK) Regional Peering – Pt 3”

Latest Datacentre Expansion in Leeds

The Yorkshire Evening Post carried a story today about the future of the former Tetley’s Brewery site in Leeds, which closed back in June.

Leeds-based Internet and telephony services company aql have announced that they are part of a consortium which wants to redevelop part of the site, to include new co-location space, complementing their nearby redevelopment of the historic Salem Church, another Leeds landmark being saved from dereliction.

This is also good news for the rapidly developing Leeds-based IXP, IXLeeds, whose switch is co-located at the aql Salem Church facility. It opens up further access to co-location for the future, and further promotes technology growth in the region.

Old brewery buildings make good bases for uses such as co-location, as they were engineered for high floor loadings. Part of the old Truman Brewery site on London’s Brick Lane was reborn as a datacentre some years ago, so there’s a sound precedent for this part of the redevelopment.

Adam Beaumont, aql founder, said that he’s “always looking for new ways to combine his interests of technology and beer” :).

This new plan deserves to go ahead for a number of reasons, and not only because it is significantly better than Carlsberg’s original proposal: to build a car park, locally dubbed “Probably the most unadventurous redevelopment plan in the world”. Hilarious.

More specifics driving traffic to transit?

Interesting talk at RIPE 63 in Vienna today from Fredy Kuenzler of the Swiss network Init7 – How more specifics increase your transit bill (video transcript).

It proposes that although you may peer directly with a network, any more-specific prefixes or deaggregated routes which they announce to their upstream transit provider will eventually reach you, and circumvent the direct peering. If this pushes traffic onto your transit provider, it costs you money per meg, rather than being covered by your (usually flat) cost of peering.
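
The mechanism is just longest-prefix match in the forwarding table. Here’s a minimal Python sketch of the effect; the prefixes and “next hop” labels are made up for illustration, not real routes:

```python
# Forwarding uses longest-prefix match, so a deaggregated /24 learned
# via transit beats the covering /20 learned via direct peering.
# Prefixes and "next hops" here are illustrative only.
import ipaddress

routes = [
    (ipaddress.ip_network("192.0.0.0/20"), "peer"),     # covering aggregate, flat-cost peering
    (ipaddress.ip_network("192.0.2.0/24"), "transit"),  # leaked more-specific, per-meg transit
]

def lookup(dst):
    """Return the next hop of the longest matching prefix."""
    matches = [(net, nexthop) for net, nexthop in routes if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup(ipaddress.ip_address("192.0.2.10")))  # -> "transit", despite the peering
```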

Of course, if it’s the same transit provider in the middle, they get to double-dip: they are paid twice (once on each side) for the same traffic! Nice if you can get it!

So, the question is how to find these more-specific routes, mark them as unattractive, and keep them out of your forwarding table, preferring the peered route and saving you money.

Geoff Huston suggests he could provide a feed or a list of the duplicate more-specific routes; crunching this sort of thing out of BGP routing data is something he’s been doing for ages, as with the CIDR Report.

But the question remains how to handle these routes: either a) keep them in the table but deprefer the more specifics, which breaks a fundamental piece of decision making in BGP processing (the longest match always wins), or b) filter them out entirely, without sacrificing redundancy should the direct peering fail for any reason.
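
As a thought experiment, here’s a rough offline sketch in Python (made-up prefixes and origin ASNs throughout) of how one might spot candidates for option b): transit-learned routes that are more-specifics of a prefix heard, with the same origin AS, directly from the peer. Turning that into safe router policy without losing the backup path is the hard part.

```python
# Rough offline sketch of option (b): spot transit-learned routes that
# are more-specifics of prefixes already heard directly from a peer,
# with the same origin AS, so they could be filtered out.
# All prefixes and ASNs below are made up for illustration.
import ipaddress

peer_routes = {
    ipaddress.ip_network("198.51.96.0/19"): 64500,  # aggregate heard on the IX
}

transit_routes = [
    (ipaddress.ip_network("198.51.100.0/24"), 64500),  # deaggregate of the above
    (ipaddress.ip_network("203.0.113.0/24"), 64501),   # unrelated route, keep it
]

def duplicate_more_specific(net, origin):
    """True if a covering prefix with the same origin AS is heard via peering."""
    return any(net.subnet_of(agg) and origin == peer_origin
               for agg, peer_origin in peer_routes.items())

accepted = [(net, origin) for net, origin in transit_routes
            if not duplicate_more_specific(net, origin)]
print(accepted)  # only 203.0.113.0/24 survives the filter
```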

I started out being too simplistic, but hmm… having a think about this one…

Couldn’t go with the OpenFlow? Archive is online.

Last week, I missed an event I really wanted to make it to – the OpenFlow Symposium, hosted by PacketPushers and Tech Field Day. There was just too much on (like the LONAP AGM), I’d already done a fair bit of travel recently (NANOG, DE-CIX Customer Forum, RIPE 63), and I couldn’t be away from home yet again. In the end, I managed to catch less than 30 minutes of the webcast.

OpenFlow initially seemed to be aimed at academia doing protocol development, but lots of people are now talking about it, as it has attracted some funding and more interest from folks with practical uses in mind.

I think it’s potentially interesting for either centralised control plane management (almost a bit like a route reflector or a route server for BGP), or for implementing support for new protocols which are really unlikely to make it into silicon any time soon, as well as the originally intended purpose of protocol development and testing against production traffic and hardware.
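
To make the route-reflector analogy concrete, here’s a toy sketch in plain Python (not the OpenFlow protocol itself, and all names are illustrative): a central controller owns the policy and pushes forwarding entries into otherwise dumb switches.

```python
# Toy illustration of a centralised control plane: one controller computes
# forwarding entries and pushes them into dumb switches, a bit like a route
# reflector handing out best paths. Plain Python, not real OpenFlow.

class Switch:
    """Forwarding only: applies whatever flow entries it is given."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination prefix) -> action (output port)

    def install(self, match, action):
        self.flow_table[match] = action

    def forward(self, match):
        # Unknown traffic gets punted to the controller, as in OpenFlow.
        return self.flow_table.get(match, "send-to-controller")

class Controller:
    """Central brain: owns the policy, switches just apply it."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, policy):
        for switch in self.switches:
            for match, action in policy.items():
                switch.install(match, action)

edge = Switch("edge1")
Controller([edge]).push_policy({"10.0.0.0/8": "port1", "0.0.0.0/0": "port2"})
print(edge.forward("10.0.0.0/8"))     # -> "port1"
print(edge.forward("172.16.0.0/12"))  # -> "send-to-controller"
```

The interesting part is that the switch runs no local routing protocol at all; everything it knows arrives from the controller.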

Good news for folks such as myself is that some of the stream content is now being archived online, so I’m looking forward to catching up with proceedings.

Whither (UK) Regional Peering – Pt 2

It’s been a long while since I’ve blogged about this topic…

Probably too long, as IXLeeds, the thing which inspired me to write Pt 1, is now a fully-fledged IX: no longer just a couple of networks plugged into a switch in a co-lo (all IXPs have to start somewhere!), it has formed a company, with directors, and has about 12 active participants connected to its switch. Hurrah!

So, trying to pick up where I left off; in this post, I’m going to talk about shared fate, with respect to Internet Exchanges.

What do I mean by shared fate? Continue reading “Whither (UK) Regional Peering – Pt 2”

Telecity Dominance in Manchester continues?

It’s been a busy week for Manchester-based data centre operator UK Grid. Firstly, they announced a tie-up with AMS-IX and IX Reach to provide a virtual blob of Dutch peering goodness in Manchester’s Science Park; then today, it was announced that Telecity Group have acquired the UK Grid business, adding three further datacentres to their Manchester operations.

This certainly stays true to Telecity’s current form, which is to buy up competing data centre operators, having acquired Manchester’s IFL and Dublin’s DEG in fairly short order over the last 18 months. This significantly increases Telecity Group’s market share, making them something of a dominant player in those markets.

Local (and privately-held) competitor M247 put their own slant on Telecity’s latest move in Manchester by announcing an acquisition of their own: a new building to enable further expansion.

Flash the Cache

Some trends in content distribution have made me think back to my early days on the Internet, both at University, and during my time at a dialup ISP, back in the 1990s.

Bandwidth was a precious commodity, and web caching became quite popular as a way to reduce load on external links, initially by deploying Harvest (remember Harvest?) and latterly its offspring Squid. There was also the ability to exchange cache content with other topologically close caches, clustering your cache with those of neighbouring networks.

(I remember that at least one UK academic institution – Imperial College, I think, or maybe it was Janet – had a cache that openly peered with other caches in the UK and was available via peering across the LINX; as a result, it was popular and well populated.)

There were attendant problems: staleness of cached content could blight some of the more active websites, and unless you tried enforced redirection of web traffic (unpopular in some quarters, even today, when transparent caching is commonplace), the ISP often had to convince its users to opt in to using the cache by changing their browser settings.

It was no surprise that once bandwidth prices fell, caches started to fall out of favour.

However, that trend has been reversing in recent times… the cache is making a comeback, but in a slightly different guise: rather than general-purpose caches that take a copy of anything passing by, these are very specific caches, targeted and optimised for their content, part of the worldwide growth of the distributed CDN.

Akamai have long been putting their servers out into ISP networks, and into major Internet Exchanges, delivering content locally to ISP subscribers. They famously say they “don’t have a backbone” – they just distribute their content through these local clusters. Akamai are delivering a local copy of “Akamaized” web content to local eyes, and continuing to experience significant growth.

Google is also in the caching game, with the Google Global Cache (GGC).

I heard a Google speaker at a recent conference explain how a GGC cluster installed at a broadband ISP provided an 85-90% saving in external bandwidth for Google-hosted content. To some extent this is clearly driven by YouTube traffic, but it has other smarts too.

So, what’s helped make caching popular again?

Herd Mentality – Social Networking is King. A link is posted and within minutes, it can have significant numbers of page impressions. For a network with a cache, that content only needs to be fetched once.

Bandwidth Demand – It’s not unheard of for large broadband networks to have huge (nx10G) private peerings with organisations such as Google. At some point, this is going to run into scaling difficulties, and it makes sense to distribute the content closer to the sink.

Fault Tolerance – User expectation is “it should just work”; distributed caches can help prevent localised failures from having a wide-scale effect. (Would it have helped the BBC this week?)

Response Speed – Placing the content closer to the user minimises latency and improves the user experience. For instance, GGC apparently takes this one step further, acting as a TCP proxy for more interactive Google services such as Gmail; this helps remove the “sponginess” of interactive sessions for those in countries with limited or high-latency external connectivity (some African countries, for instance).

So, great, caching is useful and has taken its place in the Network Architect’s Swiss Army Knife again. But what’s the tipping point for installing something like a GGC or Akamai cluster on your network? There are two main things at work: bandwidth and power.

Having a CDN cluster on your network doesn’t come for free: even if the CDN owns the hardware, you have to house it, power it and cool it. The usual hardware is a number of general-purpose, high-spec 1U rack-mount PCs.

So the economics seem to be a case of factoring in the cost of bandwidth (whether it’s transit or peering), router interfaces, data centre cross-connects, etc., versus the cost of hosting the gear.
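
As a back-of-the-envelope illustration, the trade-off might be sketched like this; every number below is a made-up placeholder, so plug in your own costs:

```python
# Back-of-the-envelope sketch of the cache-vs-transit trade-off.
# All figures are invented placeholders, not real prices.
peak_traffic_mbps  = 2000     # peak traffic towards the CDN's content
cache_hit_rate     = 0.85     # cf. the 85-90% offload quoted for GGC
transit_cost_mbps  = 5.00     # $ per Mbps per month, 95th percentile
hosting_cost_month = 1500.00  # power, cooling and rack space for the cluster

offloaded_mbps = peak_traffic_mbps * cache_hit_rate
monthly_saving = offloaded_mbps * transit_cost_mbps
net_benefit    = monthly_saving - hosting_cost_month

print(f"offloaded: {offloaded_mbps:.0f} Mbps")
print(f"net monthly benefit: ${net_benefit:,.0f}")
```

With numbers like these, the cluster pays for itself comfortably; swap the transit price for a cheap peering port and the case becomes much more marginal.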

Down at Peckham Market… “Get your addresses here. Laaavley v4 addresses!”

One of the first big deals in the IPv4 address secondary market appears to be happening – Microsoft paying $7.5m for pre-RIR (aka “early registration”) IPv4 address space currently held by Nortel.

There have been deals happening on the secondary market already. But this one is significant for two reasons:

  • The size of the deal – over 600k IPv4 addresses (which, at $7.5m, works out to roughly $11-12 per address)
  • That Nortel’s administrators recognise that these unused IPv4 addresses, which Nortel paid either nothing or only a nominal fee to receive, are a real asset against which they can realise capital.

Interesting times… Now, where’s my dodgy yellow van?