Errata in RFC1925: The Twelve Networking Truths

Some things in RFC1925, despite it being one of a series of April Fools' RFCs (and therefore in the good company of the all-time classic RFC1149 and its brethren), actually do hold true. For instance:

Fast, Good, Cheap: Pick any two – still tends to hold true.

However, like all good April Fools' RFCs, it declares 'ERRATA EXIST' at the top, and in this case there's definitely a shred of truth to that. Especially when you look at truth number 4:

Some things in life can never be fully appreciated nor
understood unless experienced firsthand. Some things in
networking can never be fully understood by someone who neither
builds commercial networking equipment nor runs an operational
network.

My concern is that this statement no longer holds true for the makers of commercial networking equipment.

If the makers and protocol designers really understood, we wouldn't be pushing water uphill with things such as IPv6 deployment and other networking best practices; they would have made them easier to deploy in the first place.

Therefore a correction is needed: "Some things in networking can never be fully understood by someone who doesn't run an operational network".

Getting info out of your HomePlug AV Network

I recently blogged on the trials and tribulations (and gotchas) of using HomePlug AV to glue bits of network together around the house without having to run UTP around the place.

One of the comments I’d made in that article was about monitoring and logging the node-to-node speeds between the HomePlug bridges. Obviously, being a consumer product, they come with a pretty (and depending on your PoV, awful) GUI.

How was the information being gathered by the GUI? It turns out it uses a Layer 2 protocol, so I cracked out Wireshark.

The head and tail of it is that the requesting station sends an L2 broadcast frame of ethertype 0x88e1 (HomePlug AV), containing a network info request.

The HomePlug bridges reply to the MAC address originating the broadcast with a network info confirmation. There are other sorts of management frames (such as read module data and get device/sw version), but the confirmation is the bit we're interested in, as it contains the negotiated speed between each pair of bridges.
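One way to trigger those confirmations yourself, without fully reverse-engineering the request format, is to capture the GUI's request once in Wireshark and then replay the raw bytes. A minimal sketch (Linux raw sockets, needs root; the interface name and the payload below are placeholders, not real values):

    # A minimal replay sketch (Linux, needs root). REQUEST_PAYLOAD is
    # a placeholder -- paste in the hex of the management payload you
    # captured from the vendor GUI; the zero padding just satisfies
    # the Ethernet minimum so the sketch runs as-is.
    import socket

    ETH_P_HOMEPLUG_AV = 0x88E1
    BROADCAST = b"\xff" * 6
    REQUEST_PAYLOAD = bytes.fromhex("00" * 46)   # your captured bytes

    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind(("eth0", 0))                # assumed interface name
    our_mac = s.getsockname()[4]       # MAC of the bound interface
    s.send(BROADCAST + our_mac
           + ETH_P_HOMEPLUG_AV.to_bytes(2, "big")
           + REQUEST_PAYLOAD)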

The speeds are plain hex values, which occur in certain places in the frame.

[Figure: HomePlug AV frame layout]

The number of networks (and therefore how many instances of "Networks Informations" to expect) is in byte 21 of the frame, and the number of AV stations in a network is in byte 17 of each Networks Informations block, so you then know how many sets of Stations Informations to expect.

In each Stations Informations block, the MAC address of the far-end HomePlug bridge is in the first 6 bytes, while the TX rate is in byte 14 and the RX rate in byte 15.
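As a sketch of what the scripting could look like (Linux-only, needs root; the offsets follow the description above, but whether they're 0- or 1-based, and the sizes of the records, are my assumptions to verify against your own Wireshark capture):

    # A minimal sketch, not production code: listen for HomePlug AV
    # frames (ethertype 0x88e1) and pull the negotiated rates out of
    # a network info confirmation.
    import socket

    ETH_P_HOMEPLUG_AV = 0x88E1

    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                      socket.htons(ETH_P_HOMEPLUG_AV))
    s.bind(("eth0", 0))                    # assumed interface name

    while True:
        frame = s.recv(2048)
        num_networks = frame[21]           # byte 21: number of networks
        offset = 22                        # assumed: first Networks
                                           # Informations block follows
        for _ in range(num_networks):
            num_stations = frame[offset + 17]  # byte 17 of the block
            sta = offset + 18                  # assumed start of the
                                               # Stations Informations
            for _ in range(num_stations):
                mac = frame[sta:sta + 6]       # far-end bridge MAC
                tx = frame[sta + 14]           # TX rate, Mb/sec
                rx = frame[sta + 15]           # RX rate, Mb/sec
                print("%s tx=%dMb rx=%dMb" % (
                    ":".join("%02x" % b for b in mac), tx, rx))
                sta += 16                      # assumed record size
            offset = sta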

It should therefore, maybe with a bit of scripting, be possible to grab those values, write them into an RRD or something like that, and make graphs, rather than having to fire up a GUI which only helps you in real time. But here I am talking about banging away with my awful scripting, crafting specific L2 frames and interpreting what comes back using regex matching and chomp. Surely someone's done something like this before, and come up with something more elegant?
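The RRD half is the easy bit, at least. A rough sketch, assuming rrdtool is installed and you're feeding it one TX/RX pair per poll (the step, bounds and archive length are arbitrary choices):

    # The RRD side, assuming rrdtool is on the PATH. Create the
    # database once, then feed it a TX/RX sample per poll.
    import os
    import subprocess

    RRD = "homeplug.rrd"

    def create_rrd():
        subprocess.run(["rrdtool", "create", RRD,
                        "--step", "300",
                        "DS:tx:GAUGE:600:0:500",
                        "DS:rx:GAUGE:600:0:500",
                        "RRA:AVERAGE:0.5:1:2016"],  # a week at 5 mins
                       check=True)

    def update_rrd(tx, rx):
        subprocess.run(["rrdtool", "update", RRD,
                        "N:%d:%d" % (tx, rx)], check=True)

    if not os.path.exists(RRD):
        create_rrd()
    update_rrd(85, 78)   # e.g. rates pulled out of the frame above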

Well, it turns out GitHub is your friend, as, it seems, are the people at Qualcomm Atheros, who make the chips inside a lot of the HomePlug devices.

They’ve put up something called Open Powerline Utils – which I think may be able to do the job I want.

So, when I get a free evening, I’ll have a read of the docs and see what can be boshed together using this toolset rather than some ugly scripts.

Releasing a bottleneck in the home network, Pt2 – at home with HomePlug

As promised, here's the next instalment of what happened when I upgraded my home Internet access from ADSL to FTTC, and found some interesting bottlenecks in what is a fairly simple network.

Last time, I left you hanging, with the smoking gun being the HomePlug AV gear which glues the “wired” part of the network together around the house.

HomePlug is basically “powerline networking”, using the existing copper in the energised mains cables already in your walls to get data around without the cost of installing UTP cabling, drilling through walls, etc. As such, it’s very helpful for temporary or semi-permanent installations, and therefore a good thing if you’re renting your home.

The HomePlug AV plant at Casa Mike is a mix of “straight” HomePlug AV (max data rate 200Mb/sec), and a couple of “extended” units based on the Qualcomm Atheros chipset which will talk to each other at up to 500Mb/sec as well as interoperate at up to 200Mb/sec with the vanilla AV units.

One of the 500Mb units is obviously the one in the cupboard in the front room where all the wires come into the house and the router lives. However, despite being the front room, it's not the lounge; that's in an extension at the back. So the second 500Mb unit is in the extension, with the second wifi access point hanging off it, so we've got a good wifi signal (especially 5GHz) where we spend a lot of our time. The other 200Mb units get dotted around the house as necessary, wherever there's something that needs a wired connection.

So, if you remember, I was only getting around 35Mb/sec if I was on the "wrong side" of the HomePlug network – i.e. not associated with the access point which is hardwired to the router – so this was pointing to the HomePlug setup.

I fired up the UI tool supplied with the gear (after all, it’s consumer grade, what could I expect?), and this shows a little diagram of the HomePlug network, along with the speed between each node. This is gleaned via a L2 management protocol which is spoken by the HomePlug devices (and the UI). I really should look at something which can collect this stuff and graph it.

HomePlug is rate adaptive, which means it can vary the speed depending on conditions such as noise, interference and the quality of the cabling, and the speed is different for the virtual link between each pair of nodes in the HomePlug network. (When you build a HomePlug network, the HomePlug nodes logically emulate a bus network to the attached Ethernet – the closest thing I can liken it to is ATM LAN Emulation. Remember that?)

The UI reported a speed of around 75-90Mb between the front and the back of the house, fluctuating a little. But this didn't match my experience of around 35Mb throughput on speed tests.

So where did my throughput go?

My initial reaction was “Is HomePlug half-duplex?” – well, turns out it is.

HomePlug is almost like the sordid love child conceived between two old, defunct networking protocols, frequency-hopping wifi and Token Ring, after a night on the tequilas, but implemented over copper cables, using multiple frequencies, all put together using an encoding technique called Orthogonal Frequency Division Multiplexing (OFDM).

Only one HomePlug station can transmit at a time, and this is controlled using beaconing (cf. token passing in Token Ring) and Time Division Multiplexing between the active HomePlug nodes, orchestrated by a "master" node called the "Central Coordinator", which is elected automatically when a network is established.

When you send an Ethernet frame into your HomePlug adaptor, it's encapsulated into a HomePlug frame (think of your data like a set of Russian dolls, or a 1970s nest of tables), which is then put in a queue called a "MAC frame stream". This is then chopped up into smaller (512-byte) segments called PHY blocks, which are encrypted and serialised.
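Just to put a number on the chopping-up (the 512 bytes is from above; this deliberately ignores any per-block headers the spec adds on top):

    # How a full-size Ethernet frame gets chopped into PHY blocks.
    import math
    ethernet_frame = 1518               # bytes, a full-size frame
    phy_block = 512                     # bytes per PHY block, as above
    print(math.ceil(ethernet_frame / phy_block))   # 3 blocks, the
                                                   # last one padded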

Forward error correction is also applied, and as soon as the originating adaptor enters its permission to transmit (its "beacon period"), your data, now chopped down into these tiny PHY block chunks, is striped across the multiple frequencies in the HomePlug network. As the blocks arrive at their destination, acknowledgements are sent back into the network, and the sending station keeps transmitting the PHY blocks until the receiving node has acknowledged receipt.

Assuming all the PHY blocks that make up the MAC frame arrive intact at the exit HomePlug bridge, they are decrypted, reassembled, and decapsulated, coughing up the Ethernet frame that was put in at the other end, which is then written to the wire.

The upshot of this is that there’s a reasonably hefty framing overhead… IP, into Ethernet Frame, into HomePlug AV MAC frame, into PHY block.

Coupled with the half-duplex, beaconing nature, that’s how my ~70Mb turned into ~35Mb.
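A quick back-of-the-envelope check makes the numbers plausible. Note that both factors below are my assumptions for illustration, not measured values:

    # Why a 75-90Mb PHY rate can yield ~35Mb of Ethernet throughput.
    phy_rate = 80.0     # Mb/sec, mid-point of the UI's 75-90 reading
    half_duplex = 0.5   # one station transmits at a time, and a speed
                        # test needs ACKs flowing the other way too
    framing = 0.85      # guess at what survives MAC framing, FEC and
                        # PHY block overhead

    print("%.0f Mb/sec" % (phy_rate * half_duplex * framing))  # ~34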

The thing to remember here is that the advertised speed on HomePlug gear is quoted at the PHY rate – the speed attainable between the HomePlug devices themselves, which includes all the framing overhead.

This means that where HomePlug AV says it supports 200Mb/sec, this is not the speed you should expect to get out of the Ethernet port on the bottom, even in ideal conditions. 100Mb/sec seems more realistic, and even that would be on perfect cabling, plugged directly into the wall socket.

Talking of ideal conditions, one of the things you are warned against with HomePlug is hanging the devices off power strips, as this reduces the signal arriving at the HomePlug interface; the vendors recommend that you plug the HomePlug bridge directly into a wall socket whenever possible. Given my house was built in the 1800s (no stud walls, hence the need for HomePlug!), it's not over-endowed with mains sockets, so of course, mine were plugged into power strips.

However, not to be deterred, I reshuffled things and managed to get the two 500Mb HomePlug bridges directly into wall sockets, and voilà: the negotiated speed went up to around 150-200Mb, and the full 70-odd Mb/sec of the upgraded broadband was available on the other side of the HomePlug network.

Performance is almost doubled by being plugged directly into a wall socket.

In closing, given everything which is going on under the skin, and that it works by effectively superimposing and being able to recover minute amounts of “interference” on your power cables, it’s almost surprising HomePlug works as well as it does.

This HomePlug white paper makes for interesting reading if you want more detail on all of the above!

…and you’re not gonna reach my telephone.

Or, when an FTTC install goes bad.

Finally got around to getting FTTC installed to replace my ADSL service, which seldom did more than about 3Mb/sec and has had its fair share of ups and downs in the past. I didn't want to commit to the 12-month contract term until I knew the owner was willing to extend our lease, but now that's happened, I ordered the upgrade, sticking with my existing provider, Zen Internet, who I'm actually really happy with (privately held, decent support when you need it, they don't assume you're a newbie, well-run network, etc…).

For the uninitiated, going FTTC requires an engineer to visit your home, and also the cabinet in the street that your line runs through, to get busy in a big rat's nest of wires. The day of the appointment rolled around and, mid-morning, a van rolled up outside – "Working on behalf of BT Openreach". "At least they kept the appointment…", I think to myself.

BT doesn't always send an Openreach employee on these turn-ups; sometimes they send a third-party contractor, and that was the case for this FTTC turn-up…

Continue reading “…and you’re not gonna reach my telephone.”

Unsensationalist SDN

SDN – Software Defined Networking – has become one of the trendy Silicon Valley buzzwords. Often abused and misused, it seems that all the marketing folks are looking at ways of passing off what they already have on their product roadmaps as "SDN", while startups are hoping it will bring in the latest tranche of VC money. Some folk would even have you believe that SDN can cook your toast, make your coffee and iron your shirts.

People are talking about massively centrally orchestrated networks, a move away from the distributed intelligence which, on the wider Internet, is one of its strong points. Yes, you could have a massive network with a single brain and very centralised policy control, if that's what you wanted. For some applications, it's what you need – for instance in Google's internal networks. But for others you may really need things to be more distributed. The way some people are talking, they make it sound like SDN is an "all or nothing" proposal.

But, is there a place for SDN between distributed and centralised intelligence that people are missing in the hype?

Put all the excitement to one side for a minute, and we’ll go back to basics.

The majority of large networking equipment today already has a distributed architecture, such that when a packet of data arrives that the receiving interface doesn't know what to do with, it's forwarded from the high-speed forwarding silicon to software running on a CPU. In some devices there are multiple CPUs, often arranged in a hierarchy, with a "master" CPU in charge of the overall system and local CPUs delegated to run elements of it, such as logical or physical groups of interfaces, or specific processes in the system.

When the forwarding hardware sends a packet to the CPU to get a decision on what to do with it, this is commonly known as a "CPU punt".

The software running on the CPU examines the packet based on the configuration of the device, compares it against its internal tables and any configured policy, and makes a decision about what to do with it.

It sends the packet back to the forwarding engine, along with instructions on how to program the forwarding hardware so that it knows what to do with other packets with the same properties (i.e. from the same data stream, or with the same destination address, etc.), whether that's to apply policy, forward them, drop them, and so on.

This isn't that different from how OpenFlow works, except that the CPU is abstracted away from the device in the data path and resides in a thing known as a "Controller". Effectively, what were CPU punts now become messages to the OpenFlow Controller. The Controller, by virtue of residing on a more general-purpose computer, is capable of making "special" decisions that are less likely to be easily doable in the software running on a router or switch.
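To make the parallel concrete, here's a toy sketch of that punt-then-program pattern (plain Python, no real OpenFlow library; every name in it is made up, and a real device would be programming CAM/TCAM entries or sending flow-mods rather than updating a dict):

    # Toy model of the "CPU punt" / OpenFlow packet-in pattern.

    class Controller:
        def decide(self, src, dst):
            # Policy lives here -- the point being that this can run
            # on a general-purpose computer, away from the data path.
            return "drop" if dst == "badhost" else "forward"

    class ForwardingEngine:
        def __init__(self, controller):
            self.flow_table = {}            # (src, dst) -> action
            self.controller = controller

        def packet_in(self, src, dst):
            action = self.flow_table.get((src, dst))
            if action is None:
                # No matching entry: "punt" to the controller, then
                # cache its answer so later packets in the same flow
                # stay in the fast path.
                action = self.controller.decide(src, dst)
                self.flow_table[(src, dst)] = action
            return action

    fe = ForwardingEngine(Controller())
    print(fe.packet_in("a", "b"))   # first packet: punted, programmed
    print(fe.packet_in("a", "b"))   # second: hits the local flow table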

Basically, SDN in some respects looks like the next step in something we've been doing for years. It's an evolution; does it need to be a revolution?

So, here’s where I think what’s currently a “missed trick” lies…

The forwarding hardware used in SDN doesn't need to be totally dumb. Some decisions you might be happy for it to make for itself; it could pick up details of which ones from the policy in the Controller, and these can be written to the local CPU and the local forwarding silicon.

But other things you know you do want to "punt" to the Controller get set up (through applied policy) and handled that way.

I can think of occasions in the past where I would have loved to take the stream of CPU punts from a device that can otherwise happily make a lot of its own decisions, farm them out for analysis and processing in a way that wasn't possible on the device itself, and convert the results back into policy, config, and ultimately CAM programming. But to be able to do this without basically lobotomising the device?
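Extending the toy model above, that hybrid split might look something like this sketch (all of the traffic classes and policies here are made up for illustration):

    # Hybrid split (hypothetical): the device keeps local policy for
    # the common cases and only punts the traffic classes you've
    # marked as interesting to the Controller.

    LOCAL_POLICY = {"known-web": "forward", "known-voip": "prioritise"}
    PUNT_CLASSES = {"unknown-flow", "suspicious"}

    def controller_decide(traffic_class):
        # Stand-in for the heavyweight analysis you'd farm out.
        return "drop" if traffic_class == "suspicious" else "forward"

    def handle(traffic_class):
        if traffic_class in LOCAL_POLICY:
            return LOCAL_POLICY[traffic_class]       # local fast path
        if traffic_class in PUNT_CLASSES:
            return controller_decide(traffic_class)  # punt upstream
        return "drop"                                # default deny

    print(handle("known-web"))       # forwarded, no punt needed
    print(handle("suspicious"))      # punted, Controller says drop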

Does SDN have to be the “all or nothing” approach which seems to be what’s getting proposed by some of the SDN evangelists? Or is a hybrid approach more realistic and aligned with how we actually build networks?

Is the Internet facing a “perfect storm”?

The Internet has become a massive part of our everyday lives. If you walk down a British high street, you can’t fail to notice people staring into their phones rather than looking where they are going! I did see a comment on TV this week that you have a 1-in-10 chance of tripping and falling over when walking along looking at your phone and messaging…

There are massive pushes for faster access in countries which already have widespread Internet adoption, both over fixed infrastructure (such as FTTC and FTTH) and wireless (LTE, aka 4G), which at times isn't without controversy. In the UK, the incumbent, BT, is commonly (and sometimes unfairly) criticised for trying to sweat more and more out of its copper last-mile infrastructure (the wires that go into people's homes), while not doing enough to "future-proof" and enable remote areas by investing in fibre. There have also been problems over the UK regulator's decision to allow one mobile phone network to get a head start on its competitors in offering LTE/4G service, using existing allocated radio frequencies (a process known as "spectrum refarming").

Why do people care? Because the Internet helps foster growth and can reduce the costs of doing business, and it's why developing countries are working desperately hard to drive Internet adoption, along the way having to manage the threats from "interfering" actors who either don't fully understand it or fear change.

However, a bigger threat could be facing the Internet, and it’s coming from multiple angles, technical and non-technical. A perfect storm?

  • IPv4 Resource Exhaustion
    • The existing addressing (numbering) scheme used by the Internet is running out
    • A secondary market for “spare” IPv4 resources is developing, IPv4 addresses will have a monetary value, driven by lack of IPv6 deployment
  • Slow IPv6 Adoption
  • Increasing Regulatory attention
    • On a national level, such as the French Regulator, ARCEP, wishing to collect details on all interconnects in France or involving French entities
    • On a regional level, such as ETNO pushing for regulation of interconnect through use of QoS – nicely de-constructed by my learned colleague Geoff Huston – possibly an attempt to retroactively fix a broken business model?
    • On a global level through the ITU, who, having disregarded the Internet as "something for academics" and not relevant to public communications back in 1988, now want to update the International Telecommunication Regulations to extend them to cover who "controls the Internet" and how.

All of these things threaten some of the basic foundations of the Internet we have today:

  • The Internet is “open” – anyone can connect, it’s agnostic to the data which is run over it, and this allows people to innovate
  • The Internet is “transparent” – managed using a bottom-up process of policy making and protocol development which is open to all
  • The Internet is “cheap” – relatively speaking, Internet service is inexpensive

These challenges facing the Internet combine to break all of the above.

Close the system off, drive costs up, and make development and co-ordination an invite-only closed shop in which it’s expensive to participate.

Time and effort, and investing a little money (in deploying IPv6, in some regulatory efforts, and in checking your business model is still valid), are the main things which will head off this approaching storm.

Adopting IPv6 should just be a (stay in) business decision. It’s something operational and technical that a business is in control of.

But the regulatory aspect is tougher, unless you are big enough to afford your own lobbyists. Fortunately, if you live in the UK, it's not "write to your MP" time, not just yet. The UK's position remains one of "light touch" regulation, largely letting the industry regulate itself through market forces, and this is the position being advocated to the ITU. There are also some very bright, talented and respected people trying to get the message through that it's economically advantageous not to make the Internet a closed, top-down operated system.

Nevertheless, the challenges remain very much real. We live in interesting times.

Recent IPv4 Depletion Events

Those of you who follow these things can't have missed that the RIPE NCC got down to its last /8 of unallocated IPv4 space last week.

They even made a cake to celebrate…

Photo (and cake?) by Rumy Spratley-Kanis

This means the RIPE NCC are down to their last 16 million or so IPv4 addresses, and they can't get another big block allocated to them, because there aren't any more to give out.
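That "16 million" is just the arithmetic of a /8:

    # A /8 leaves 24 host bits of a 32-bit IPv4 address:
    print(2 ** (32 - 8))    # 16,777,216 -- the "16 million"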

Continue reading “Recent IPv4 Depletion Events”