Are venue wifi networks turning the corner?

I’m currently at the APRICOT 2013 conference in Singapore. The conference has over 700 registered attendees, and with the attendees being Internet geeks (and mostly South-East Asian ones, at that), there are lots of wifi-enabled devices here. To cope with the demand, the conference does not rely on the hotel’s own internet access.

Anyone who’s been involved with geek industry events knows from painful experience that most venue-provided internet access solutions are totally inadequate. They can’t cope with the density of wifi clients, nor can their gateways/proxy servers/NATs cope with the amount of network state that we techies create. The network disintegrates into a smouldering heap.

Therefore, the conference installs its own network, and brings its own internet access bandwidth into the hotel – usually at least 100Mb/sec, and generally speaking a lot more, sometimes multiple 1Gbps connections. The conference blankets the ballrooms and various meeting rooms with a super high density of access points. All this takes a lot of time and money.

According to the NOC established for the conference, the peak number of concurrent connections to the network is over 1100, so about 1.6 devices per attendee. Sounds about right: everyone seems to have a combination of laptop and phone, or tablet and phone, or laptop and tablet.

One thing which impressed me was how the hotel hosting the conference has worked in harmony with the conference. Previous experience has been that some hotels and venues won’t allow installation of third-party networks, and insist the event uses their own in-house network. Or even when the event brings its own infrastructure, the deployment isn’t the smoothest.

Sure, we’re in a nice (and not cheap!) hotel, the Shangri-La. It’s very obviously got a recently upgraded in-house wifi system, with a/b/g/n capability, using Ruckus Wireless gear. The wifi in the rooms just works. No constant re-authentication needed from day to day. I can wander around the hotel on a VOIP call on my iPhone, and call quality is rock solid. Handoff between the wifi base stations wasn’t noticeable. I even made VOIP calls outside by the pool. Sure, it’s a top-notch five-star hotel, but so many supposedly equivalent hotels don’t offer such stable and speedy wifi, which makes the Shangri-La stand out in my experience.

There’s even been some anecdotal evidence that performance was better over the hotel network to certain sites, which is almost unheard of!

(This may be something to do with the APRICOT wifi being limited to 24Mb/sec connections on its 802.11a infrastructure. Not sure why they did that?)

As the Shangri-La places aesthetics very high on the list of priorities, they weren’t at all in favour of the conference’s NOC team running cables all over the place, so instead their techs provided them with VLANs on the hotel’s switched infrastructure, as well as access to the structured cabling plant.

This also allowed the APRICOT NOC team to extend the conference LAN onto the hotel’s own wifi system – the conference network ID was visible in the lobby, bar and other communal areas in the hotel without having to install extra (and unsightly) access points into the public areas.

This is one of the few times I’ve seen this done and seen it actually work.

So, in the back of my mind, I’m wondering if we’re actually turning a corner, to reach a point where in-house wifi can be depended on by event managers (and hotel guests!) to such an extent they don’t need to DIY anymore?

Beware the NTP “false-ticker” – or do the time warp again…

For the uninitiated, it’s easy to keep the clocks of computers on the Internet in synch using a protocol called NTP (Network Time Protocol).

Why might you want to do this? It’s very helpful in a large network to know that all your gear is synchronised to the same time, so that things such as transactional and logging information have the correct timestamps. It’s a must-have when you’re debugging and trying to get to the bottom of a problem.

There was an incident earlier this week where the two open NTP servers run by the US Naval Observatory (the “authority” for time within the US) both managed to give out incorrect time – there are reports that computers which synchronised against these (and, more importantly, only these, or just one or two other systems) had their clocks reset to the year 2000. The error was then corrected, and the clocks got put back.

Because the affected systems were chiming only against the affected master clocks, or against too few other sources for the bad clocks to be out-voted, the two incorrect times – coming from an authoritative stratum 1 source – were taken as correct, and the affected systems had their local clocks reset.
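To illustrate why chiming against several sources matters, here’s a minimal Python sketch (using the third-party ntplib package; the pool.ntp.org server names and the 0.5-second threshold are purely illustrative, and this is not the actual NTP selection algorithm): a majority of good clocks lets the odd bad one be spotted and discarded, which a one- or two-server configuration simply can’t do.

```python
# A minimal sketch of the "out-vote the false-ticker" idea -- NOT the real
# NTP selection algorithm. Requires the third-party "ntplib" package
# (pip install ntplib); server names and the 0.5s threshold are just examples.
import statistics
import ntplib

SERVERS = ["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org", "3.pool.ntp.org"]

client = ntplib.NTPClient()
offsets = {}
for host in SERVERS:
    try:
        response = client.request(host, version=3, timeout=2)
        offsets[host] = response.offset  # local clock offset in seconds
    except Exception as exc:
        print(f"{host}: no response ({exc})")

if offsets:
    median = statistics.median(offsets.values())
    for host, offset in offsets.items():
        # A source that disagrees wildly with the majority is suspect;
        # with only one or two sources there is no majority to compare against.
        verdict = "possible false-ticker" if abs(offset - median) > 0.5 else "ok"
        print(f"{host}: offset {offset:+.3f}s ({verdict})")
```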

There’s been discussion about the incident on the NANOG list…


Recent IPv4 Depletion Events

Those of you who follow these things can’t have missed that the RIPE NCC got down to its last /8 of unallocated IPv4 space last week.

They even made a cake to celebrate…

Photo (and cake?) by Rumy Spratley-Kanis

This means the RIPE NCC are down to their last 16 million or so IPv4 addresses, and they can’t get another big block allocated to them, because there aren’t any more to give out.
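For the curious, that figure falls straight out of the prefix length; a quick back-of-the-envelope check:

```python
# A /8 prefix leaves 32 - 8 = 24 bits of host address space:
print(2 ** (32 - 8))  # 16777216 -- roughly 16.8 million IPv4 addresses
```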


Networking equipment vs. Moore’s Law

Last week, I was at the NANOG conference in Vancouver.

The opening day’s agenda featured a thought-provoking keynote talk from Silicon Valley entrepreneur and Sun Microsystems co-founder Andy Bechtolsheim, now co-founder and Chief Development Officer of Arista Networks, entitled “Moore’s Law and Networking”.

The basic gist of the talk is that while Moore’s Law continues to hold true for general-purpose computing chips, it has not applied for some time to the development of networking technology.

A table for 25? Not currying any favour with me…

Many of you will know that I’m involved in organising the UKNOF meetings.

Some of you will know that I don’t understand this obsession that many UKNOF attendees have with going en-masse for a curry (usually with someone’s employer picking up the tab) the evening beforehand.

What is the attraction, apart from maybe not having to pay for it yourself, of sitting at a big long table, when all it achieves is you having to yell at the person next to you in order to have a conversation while receiving iffy service of usually disappointing (sometimes downright poor) food?

It’s no good for mixing and networking, one of the attractions of going for dinner with industry colleagues, as you can only bellow your conversation at your immediate neighbours, either because everyone else is pissed and shouting, or just to make yourself heard over the loud sitar music.

Sitting at tables of 6–8 would help a lot with conversation, and would probably improve the service as well!

It’s also not a good dining experience. The most recent curry being a particular lowlight, when a) I hardly ate any of what I ordered because it was so unpleasant (and it wasn’t as though I’d ordered a phall!), and b) I was later unwell in the middle of the night. I should have seen the warning signs when they handed us each a sticky, laminated menu card, I guess.

While I don’t think of myself as an entirely Grumpy Old Man just yet, I still don’t really see the attraction…

I also can’t talk about drunken behaviour in curry houses without a link to Rowan Atkinson’s Indian Restaurant sketch… It is a tricky bit of floor. Deceptively flat…

Successful 1st IXLeeds Open Meeting

Yesterday I attended what was by all accounts a very successful first open meeting for the IXLeeds exchange point. There were around 120 attendees, including many faces who aren’t regulars on the peering circuit, which made for brilliant networking opportunities. There were also great talks from the likes of BDUK, the Government’s super-fast broadband initiative, and energy-efficient processor giants ARM (behind the technology at the heart of most of the world’s smartphones), as well as more familiar faces such as the RIPE NCC and LINX, among others.

Definitely impressed with the frank discussion that followed the talk by the DCMS’ Robert Ling on BDUK funding and framework, but still sceptical that it’s going to be any easier for smaller businesses to successfully get access to the public purse.

Andy Davidson, IXLeeds Director, was able to proudly announce that IXLeeds now provides support for jumbo frames via a separate VLAN overlaid on their switch, making it probably the only IXP in the UK which officially offers and promotes this service – at least for the time being. Of course, they are supporting a 9k frame size.

Well done to my friends and colleagues of IXLeeds for making it to this major milestone, and doing it in great style. It seems a long, long way from a discussion over some pizza in 2008.

The only thing I didn’t manage to do while in Leeds was take a look at the progress on the next phase of aql’s Salem Church data centre, but I’m sure I’ll just have to ask nicely and drop by aql at some point in the future. 🙂

End of the line for buffers?

This could get confusing… (Photo: “The End of the Line” by Alan Walker, licensed for reuse under the Creative Commons Licence)

buffer (noun)

1. something or someone that helps protect from harm,

2. the protective metal parts at the front and back of a train or at the end of a track, that reduce damage if the train hits something,

3. a chemical that keeps a liquid from becoming more or less acidic

…or in this case, none of the above, because I’m referring to network buffers.

For a long time, larger and larger network buffers have been creeping into network equipment, with many equipment vendors telling us big buffers are necessary (and charging us handsomely for them), and various queue management strategies being developed over the years.

Now, with storage networks and clustering moving from specialised platforms such as Fibre Channel and InfiniBand to ethernet and IP, and with the transition to distributed clustering (aka “the cloud”), this wisdom is being challenged – not just by researchers, but by operators and even equipment manufacturers.

The fact is, in lots of cases it can be better to let the higher-level protocols in the end stations deal with network congestion, rather than have the network introduce variable delays through deep buffering and badly configured queueing, which attempt to mask the problem and end up confusing the congestion control behaviours built into those protocols.
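As a rough illustration (the buffer sizes and link speeds below are purely hypothetical, chosen just to show the arithmetic), the worst-case queueing delay a full buffer adds is simply its size divided by the link speed:

```python
# Back-of-the-envelope: worst-case delay added by a completely full buffer.
def queueing_delay_ms(buffer_bytes: float, link_bps: float) -> float:
    """Time (in milliseconds) to drain a full buffer onto the wire."""
    return buffer_bytes * 8 / link_bps * 1000

# A home router with 256 KB of buffer on a 1 Mb/sec uplink:
print(queueing_delay_ms(256 * 1024, 1e6))   # ~2097 ms -- whole seconds of lag
# A switch port with a 12 MB buffer at 10 Gb/sec:
print(queueing_delay_ms(12 * 2**20, 10e9))  # ~10 ms -- still an age at data centre timescales
```

Either way, that extra delay is seen by the end stations as part of the round-trip time, which is exactly how deep buffering ends up confusing their congestion control.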

So, it was interesting to find two articles in the same day with this as one of the themes.

Firstly, I was at the UKNOF meeting in London, where one of the talks was from a researcher working on the BufferBloat project, which is approaching the problem from the CPE end – looking at affordable mass-market home routers, or more specifically the software which runs on them and the buffer management therein.

The second thing I came across was a great post on technology blogger Greg Ferro’s Ethereal Mind, about his visit to innovative UK ethernet fabric switch startup Gnodal. They are doing some seriously cool stuff which removes buffer bloat from the network (along with some other original ethernet fabric tech), which really matters in the latency- and jitter-sensitive data centre networking market that Gnodal are aiming at.

(Yes, I’m as excited as Greg about what the Gnodal guys are up to, as it’s really breaking the mould – and since it’s being developed in the UK, of course I’m likely to be a bit biased!)

Is the writing on the wall for super-deep buffers?