BGP Convergence with Jumbo Frames

This is something of a follow-up to “Breaking the IXP MTU Egg is no Chicken’s game”.

One of the arguments for adoption doing the rounds in the wake of Martin Levy’s Internet Draft on enabling jumbo frames across shared-media IXPs is that jumbos will help speed up BGP convergence during startup. The rationale is that session set-up and the bulk exchange of updates will happen more quickly over a jumbo-capable transport.

Something about this didn’t sit right with me; it seemed like a red herring, the tail wagging the dog, so to speak. The primary reasons for wanting jumbos are already documented in the draft and discussed elsewhere. If using jumbos gave a performance boost during convergence, that would be a nice bonus, but it flew in the face of my experience of convergence: that it’s more likely to be bound by CPU than by the speed of the data exchange.

I wondered if any research had been done on this, so I had a quick Google to see what was out there.

No formal research on the first page of hits, but there was some useful (if a few years old) archive material from the Cisco-NSP mailing list, particularly this message:

Spent some time recently trying to tune BGP to get
convergence down as far as possible. Noticed some peculiar
behavior.

I'm running 12.0.28S on GSR12404 PRP-2.

Measuring from when the BGP session first opens, the time to
transmit the full (~128K routes) table from one router to
another, across a jumbo-frame (9000-bytes) GigE link, using
4-port ISE line cards (the routers are about 20 miles apart
over dark fiber).

I noticed that the xmit time decreases from ~ 35 seconds
with a 536-byte MSS to ~ 22 seconds with a 2500-byte MSS.
From there, stays about the same, until I get to 4000,
when it begins increasing dramatically until at 8636 bytes it
takes over 2 minutes.

I had expected that larger frames would decrease the BGP
convergence time. Why would the convergence time increase
(and so significantly) as the MSS increases?

Is there some tuning tweak I'm missing here?

Pete.

While not massively scientific, this does seem like a reasonable strawman test of the router architecture of the day (2004), and it drew this reply from Tony Li:

How are your huge processor buffers set up?

I would not expect a larger MTU/MSS to have much of an
effect, if at all.  BGP is typically not constrained by
throughput.  In fact, what you may be seeing is that
with really large MTUs and without a bigger TCP window,
you're turning TCP into a stop and wait protocol.

Tony

This certainly confirmed my suspicion that BGP convergence is not constrained by throughput but by other factors, primarily processing power.
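Tony’s point about the window is easy to illustrate with some back-of-the-envelope arithmetic. The Python sketch below assumes a fixed 16 KB TCP window (an illustrative default, not a figure from the thread) and shows how much of that window remains usable as the MSS grows: once only one segment fits, the transfer degenerates towards stop-and-wait.

```python
# Back-of-the-envelope sketch: with a fixed TCP window, only whole segments
# can be in flight, so as the MSS approaches the window size the usable
# window shrinks and the sender ends up waiting on ACKs (stop-and-wait).
# The 16 KB window is an assumed value for illustration only.

WINDOW = 16 * 1024  # assumed TCP receive window, in bytes

for mss in (536, 1460, 2500, 4000, 8636):
    segments_in_flight = max(1, WINDOW // mss)   # whole segments per window
    usable_window = segments_in_flight * mss     # bytes actually in flight
    print(f"MSS {mss:5d}: {segments_in_flight:2d} segment(s) in flight, "
          f"{usable_window:5d} of {WINDOW} window bytes usable "
          f"({usable_window / WINDOW:.0%})")
```

At an MSS of 8636 only a single segment fits, so roughly half the window goes unused and each segment has to be acknowledged before the next can be sent; and none of this accounts for the per-update CPU cost, which is the bigger factor in practice.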

There may be some modest gains in convergence time to be had, but there is a danger that the loss of a single frame of routing information (to packet loss of some sort, maybe a congested link or a queue drop in a shallow buffer somewhere) could cause retransmits damaging enough to slow reconvergence.

It suggests that any performance gains in BGP convergence are marginal and “nice to have”, rather than a compelling argument for deploying jumbos on your shared fabric. The primary arguments are far more convincing, and in my opinion we shouldn’t allow a fringe benefit such as this (one that may even have its downsides) to cloud the main reasoning.

It does seem like some more up-to-date research is needed to accompany the Internet Draft, maybe even considering how other applications (besides BGP) handle packet drops and retransmits in a jumbo-framed environment. Does it reach a point where application performance is impacted because a big lump of data has to be retransmitted?

Possibly there is some expertise to be had from the R&E community, which has been using jumbo-capable transport for a number of years?

Breaking the IXP MTU Egg is no Chicken’s game

Networks have a thing called a Maximum Transmission Unit (MTU), and on many networks it’s long been somewhere around 1500 bytes, the default MTU of that pervasive protocol, Ethernet.

Why might you want a larger MTU? For a long time the main reason was efficiency: if you were transferring very large amounts of data, a bigger MTU reduced the framing and encapsulation overheads. More recent reasons include accommodating additional encapsulations (such as MPLS/VPLS) in the network without reducing the end-to-end MTU of the service, carrying protocols which by default have a higher MTU (such as FCoE, which defaults to ~2.5k bytes), and making things like iSCSI more efficient.
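As a rough illustration of that overhead argument (a sketch, not a benchmark), the snippet below compares per-packet efficiency for bulk TCP at a few MTUs, assuming plain IPv4 and TCP headers with no options, and counting preamble, header, FCS and inter-frame gap as fixed per-frame Ethernet overhead:

```python
# Per-packet overhead is fixed, so a larger MTU amortises it across more
# payload. Overheads assumed: 38 bytes of Ethernet wire overhead (preamble +
# header + FCS + inter-frame gap) and 40 bytes of IPv4 + TCP headers.

ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble, header, FCS, inter-frame gap
IP_TCP_HEADERS = 20 + 20         # basic IPv4 + TCP headers, no options

for mtu in (1500, 4470, 9000):
    payload = mtu - IP_TCP_HEADERS       # TCP payload carried per packet
    wire_bytes = mtu + ETH_OVERHEAD      # bytes on the wire per packet
    print(f"MTU {mtu:5d}: {payload:4d} payload bytes per {wire_bytes} wire "
          f"bytes -> {payload / wire_bytes:.1%} efficient")
```

The gain from 1500 to 9000 is real but modest for bulk transfer (roughly 95% to 99% efficiency); the bigger practical motivations are the encapsulation headroom and protocol support mentioned above.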

There have often been discussions about whether Ethernet-based Internet Exchange Points – the places where networks meet and interconnect over a shared fabric – are one of the barriers (there are lots!) to adoption of a higher MTU in the network. Most are based on Ethernet, and most have a standard Ethernet MTU of ~1500 bytes.

Ethernet can carry larger frames, up to around the 9k byte mark in most cases. These are known as “Jumbo Frames”. Here is quite a nice article about the ups and downs of jumbo frame support from the perspective of doing it on your home network.

The inter-provider networks where you can usually depend on a higher end-to-end MTU today are the large Research and Education Networks, such as JANET in the UK. With white-coated (and occasionally bearded) scientists wanting to move huge volumes of experimental data around the world, they’ve long needed and have long enjoyed the benefits of a larger MTU, and they deliberately interconnect their networks directly to ensure the MTU isn’t reduced by a third party en route.

This has recently resurfaced in the form of this Internet Draft submitted by my learned friend Martin Levy of Hurricane Electric – yes, he of “Just deploy IPv6” fame.

With this draft, Martin is trying to break what is at best a chicken-and-egg problem and at worst a deadlock:

  • Should an IXP support jumbo frames?
  • What should the maximum frame size be?

He’s trying to support his argument by collecting the various pieces of rationale for inter-provider jumbos in one place, to guide IXP communities towards the right decision, and hopefully by documenting the pitfalls and things to watch out for – the worst being that your packets go into the bit-bucket because of an MTU mismatch, with PMTUD broken by accident at best or by foolish design at worst.

My own most recent experience dancing around this particular handbag, with an IXP operator’s hat on, gave the following results:

  • A minuscule (<5%) proportion of IXP participants wanted to exchange jumbo frames across the IXP
  • Of those who wanted jumbo frames, it was not possible to reach consensus on a supported maximum frame size. Some wanted 9k, others wanted only 4470, and some wanted different 9k MTUs (9218, 9126, 9000), likely due to limitations of their own equipment and networks.

It was actually easier for this minority to interconnect bilaterally over private pieces of wire or fibre, where they could also set the MTU for that link on a bilateral basis, rather than across a shared fabric where everyone had to agree.

Martin’s rationale is that folk argue about this because there’s no well-known guidance on the subject, so his draft is being proposed to provide just that and break the previous deadlock.

In terms of IXPs which do support a larger MTU today, there are a few, the best known probably being Sweden’s Netnod, which has long had an MTU of 4470. That is largely down to its own ancestry: the exchange originated on FDDI and subsequently used Cisco’s proprietary DPT/SRP technology after it outgrew FDDI (a choice made largely because of a local preference for maintaining a higher MTU). When Netnod moved to a Gigabit/10 Gigabit Ethernet exchange fabric, the 4470 MTU was retained despite the newer Ethernet hardware supporting a ~9k MTU, and Netnod explicitly requires participant interfaces to be configured with a 4470 MTU to avoid mismatches. It seems to be working pretty well.

One of the issues which is likely to cause discussion is where 100Mb Ethernet is deployed at an exchange, as this, generally speaking, cannot support jumbo frames. Does this create a “second class” exchange in some way?

Anyway, I applaud Martin for trying to take this slippery subject head-on. Looking forward to seeing where it goes.

Whither (UK) Regional Peering – Pt 3

Anyone still using C7513s?

At the end of the last post, I vaguely threatened that at some point I’d go on to discuss IX Participant Connectivity.

The topic got a “bump” to the front of the queue last week, thanks to a presentation from Kurtis Lindqvist, CEO of Sweden’s Netnod exchange points, given at the RIPE 63 meeting in Vienna.

Netnod have been facing a dilemma. They provide a highly resilient peering service for Sweden, consisting of multiple discrete exchanges in various Swedish cities, the biggest being in Stockholm, where they operate two physically separate, redundant exchanges. They currently give all their participants in Stockholm the facility to connect to both fabrics, so they can benefit from the redundancy this provides. Sounds great, doesn’t it? If one platform in Stockholm goes down, the other is up and traffic keeps flowing.

Latest Datacentre Expansion in Leeds

The Yorkshire Evening Post carried a story today about the future of the former Tetley’s Brewery site in Leeds, which closed back in June.

Leeds-based Internet and Telephony Services company aql have announced they are part of a consortium that wants to redevelop part of the site to include new co-location space, complementing their nearby redevelopment of the historic Salem Church, another Leeds landmark being saved from dereliction.

This is also good news for the rapidly developing Leeds-based IXP, IXLeeds, whose switch is co-located at the aql Salem Church facility. It opens up further access to co-location for the future, and further promotes technology growth in the region.

Old brewery buildings make good bases for something like co-location, as they were engineered for high floor loadings. Part of the old Truman Brewery site on London’s Brick Lane was reborn as a datacentre some years ago, so there’s a sound precedent for this part of the redevelopment.

Adam Beaumont, aql founder, said that he’s “always looking for new ways to combine his interests of technology and beer” :).

This new plan deserves to go ahead for a number of reasons, and not only because it is significantly better than Carlsberg’s original proposal: to build a car park, locally dubbed “Probably the most unadventurous redevelopment plan in the world”. Hilarious.

More specifics driving traffic to transit?

Interesting talk at RIPE 63 in Vienna today from Fredy Kuenzler of the Swiss network Init7 – How more specifics increase your transit bill (video transcript).

It proposes that although you may peer directly with a network, any more-specific prefixes or deaggregated routes which they announce to their upstream transit provider will eventually reach you and circumvent the direct peering. If this forces traffic via your transit provider, it costs you money per meg, rather than being covered by your (usually flat) cost of peering.

Of course, if it’s the same transit provider in the middle, they get to double-dip – being paid twice (once on each side) for the same traffic! Nice if you can get it!

So, the question is: how do you find these more-specific routes, mark them as unattractive, and avoid installing them in your forwarding table, preferring the peered route and saving you money?

Geoff Huston suggests he could provide a feed or a list of the duplicate more-specific routes; crunching this sort of thing is something he’s been doing for ages with BGP routing data, such as for the CIDR Report.
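As a sketch of what you could do with such a feed (or with a dump of your own RIB), the snippet below flags transit-learned prefixes that are more specifics of routes learned over direct peering. The prefixes, and the idea that you can export (prefix, learned-via) pairs from your routing table, are assumptions for illustration only:

```python
# Flag transit-learned prefixes that are covered by a route learned over
# direct peering -- candidates for filtering or deprefing.
# All prefixes below are from documentation ranges and purely illustrative.

from ipaddress import ip_network

peered_aggregates = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]
transit_routes = [
    ip_network("192.0.2.128/25"),    # more specific of a peered route
    ip_network("198.51.100.64/26"),  # more specific of a peered route
    ip_network("203.0.113.0/24"),    # genuinely transit-only
]

def covered_by_peering(prefix, aggregates):
    """True if prefix is a strict more specific of any peered aggregate."""
    return any(prefix != agg and prefix.subnet_of(agg) for agg in aggregates)

for route in transit_routes:
    if covered_by_peering(route, peered_aggregates):
        print(f"{route}: more specific of a peered route -- candidate to filter")
    else:
        print(f"{route}: no peered covering route -- keep")
```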

But the question remains how to take these routes and either a) keep them in the table but deprefer the more specifics, which breaks a fundamental piece of the BGP decision process, or b) filter them out entirely, without losing redundancy if the direct peering fails for any reason.
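To make concrete why option (a) is so awkward: forwarding is longest-prefix match, so once the more specific is installed it carries the traffic regardless of how strongly you prefer the covering peered route. A toy lookup (made-up prefixes and next hops) shows the effect:

```python
# Toy longest-prefix-match lookup: the most specific installed route wins,
# whatever BGP preference the covering aggregate was given.
# Prefixes and "next hops" are invented for illustration.

from ipaddress import ip_address, ip_network

fib = {
    ip_network("192.0.2.0/24"): "via peer (flat cost)",      # direct peering
    ip_network("192.0.2.128/25"): "via transit (per meg)",   # deaggregate
}

def lookup(dest):
    """Return the next hop of the longest matching prefix, if any."""
    matches = [p for p in fib if ip_address(dest) in p]
    return fib[max(matches, key=lambda p: p.prefixlen)] if matches else None

print(lookup("192.0.2.200"))  # -> via transit (per meg), despite the peered /24
print(lookup("192.0.2.10"))   # -> via peer (flat cost)
```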

I started out being too simplistic, but hmm… having a think about this one…