#didsburydoubles – can we get Metrolink to reinstate double trams to East Didsbury?

The summer is over, and it’s time to get back to work.

Many of us in Manchester breathe a sigh of relief, as it also signals the reconnection of the northern and southern parts of the Metrolink tram network after almost two months with no service through the city centre.

Our messed up commutes could return to something looking like normality, or so we thought…

Last week, Metrolink announced their new service patterns for the re-joined network, no longer constrained by the single-track contraflow system through the St Peter’s Square worksite:

“People of Didsbury rejoice! For we are improving your service, with trams every 6 minutes!”

Now, here’s the catch in the small print:

Note that while there are twice as many trams, they will only be half as long, and half of them will terminate at Deansgate, on the extreme south side of the city centre, which will mean they are no use to some of you.

So, while we get more frequent trams, at least as far as Deansgate, the overall capacity on the line has stayed the same. Yet trams were already busy and crowded when doubles ran every 12 minutes, and on cross-city journeys we now actually have reduced capacity.
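
The headway arithmetic can be sketched like this (a minimal illustration; the 200-passengers-per-unit figure is a placeholder I've made up, not an official Metrolink number):

```python
UNIT_CAPACITY = 200  # assumed passengers per single tram unit (illustrative only)

def units_per_hour(headway_min, units_per_tram):
    """Tram units passing per hour for a given headway and tram length."""
    return (60 // headway_min) * units_per_tram

# Old pattern: double trams every 12 minutes, all running cross-city.
old_total = units_per_hour(12, 2)        # 10 units/hour
old_cross_city = old_total

# New pattern: single trams every 6 minutes, only half running cross-city.
new_total = units_per_hour(6, 1)         # 10 units/hour - unchanged on paper
new_cross_city = units_per_hour(12, 1)   # 5 units/hour - halved

print(old_total, new_total)              # 10 10
print(old_cross_city, new_cross_city)    # 10 5
```

Whatever per-unit figure you plug in, the ratio is the same: total line capacity is flat, cross-city capacity is halved.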

We’re already seeing complaints about crowding and reduction of tram length:

So I’ve decided to start tweeting and hashtagging whenever I observe overcrowding due to single-tram operation on the Didsbury line, using the hashtag #didsburydoubles, and I suggest those similarly affected do the same.

That will make it easier to track, hopefully get this trending on social media, and get Metrolink & TfGM to sit up, listen to their users, and understand how we actually use their tram network.

On paper the capacity is the same, so what’s happened?

Metrolink planners have made an assumption that passengers will always take the first tram and change where necessary.

Taking a look at my more usual trip into town, I’m normally heading to Market Street or Shudehill:

  • Under the old service pattern there was a direct double tram every 12 minutes.
  • Under the new service pattern there is a direct single tram every 12 minutes, or I can take the Deansgate tram, which runs in between the direct trams, and change at Deansgate.

I now have to make a decision: do I take whatever turns up first and proceed accordingly, or do I always wait for the direct?

I’m missing a vital piece of information if I take the Deansgate tram and change: How long will I need to wait at Deansgate for a Market Street/Shudehill tram?

What I don’t have is the planned sequence through Deansgate. I know that each “route” is planned to have a tram every 6 minutes, and it repeats on a 12 minute cycle. I just don’t know the order they are meant to come in, because Metrolink does not publish that information.

If the tram terminating at Deansgate is immediately followed by a cross-city Altrincham – Bury tram, then I’m fine. My end-to-end journey time remains basically the same, I have to change once, and don’t have to wait long.

But what if the sequence of trams means that I’m waiting, let’s say 4 minutes, for the Altrincham – Bury direct tram? Or worse still, my Didsbury – Deansgate tram arrives at Deansgate platform just in time to see the Altrincham – Bury tram pulling away?

I don’t gain anything, I may as well have taken the direct tram, and who’s to say I’ll even be able to get on the next tram? That might be busy too.
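
The uncertainty is easy to put in numbers. A quick sketch (the offsets here are hypothetical, since Metrolink doesn't publish the real sequence) shows how the wait at Deansgate swings with the timetable offset of the connecting tram:

```python
def wait_minutes(arrival, connection_offset, headway=12):
    """Minutes until the next connecting tram, given when in the cycle it runs."""
    return (connection_offset - arrival) % headway

# Say my Deansgate tram drops me on the platform at minute 6 of the cycle.
# If a cross-city Altrincham-Bury tram follows one minute later, I'm fine:
print(wait_minutes(6, 7))   # 1 minute

# But if it departed a minute before I arrived - pulling away as I step off -
# I wait almost the full cycle:
print(wait_minutes(6, 5))   # 11 minutes
```

Same journey, same timetable frequency, and the connection penalty varies from near-zero to nearly a whole 12-minute cycle. Without the published sequence, you can't know which case you're in.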

They have not accounted for human nature: where a direct service exists we will prefer to take it.

Remember that I am a transport geek as well. I’ve studied this stuff, and have a degree from Aston Uni in Transport Management. The thought process above comes naturally to me. Heh… Maybe TfGM/Metrolink could hire me to tell them the blindingly obvious?

An average person won’t even bother going through the thought process above. They will just wait for the direct tram.

On outbound journeys in the evening commute, this situation is made even worse. People are less inclined to change on the way home, because the trams are already at their fullest in the city centre.

One simply daren’t take the first cross-city tram from Shudehill or Market Street and expect to change at Deansgate or Cornbrook because that will mean trying to board an already crowded tram.

This means evening commutes will likely be worse than morning commutes because people will almost certainly wait for the direct.

When the Didsbury line first opened, there were waves of complaints because use of the line outstripped Metrolink’s predictions. That rapidly led to the decision to run Didsbury trams as doubles, which remained the case until this week.

It’s time to make sure TfGM and Metrolink hear our voices again.

We should at least have the through trams operating as double trams, so that cross-city capacity is restored to what it was before the St Peter’s Square works were completed.

This is how the Altrincham and Bury lines work – a 6 minute headway with alternate trams, the cross-city trams, as doubles.

If you experience an uncomfortably crowded journey on the East Didsbury line, or you have to let a tram depart without you onboard because it arrived already full, please tweet about it and use the #didsburydoubles hashtag.

Tech Skills: Business Needs to Give Back to Education?

Another day, another skills and tech education rant…

My thoughts here are partly triggered by this tweet about the HE system being effectively broken, from Martin Bryant of Tech North:

The overall takeaway is that the current HE models aren’t working for the tech sector on two counts: a) It moves too fast and b) There is too much to learn. Sounds like a perfect storm. The syllabuses and the people doing the teaching are continually at risk of being out of date.

I agree with the comments in the article that there is an expectation mismatch going on.

Companies can’t realistically expect graduates to just drop into a position and be as useful and productive as a person with several years’ experience, nor can they expect them to know the ins and outs of every protocol or language. As an employer, there needs to be some sort of development plan in place for every person you hire.

I’ve previously written about my own entry into the tech industry, somewhat by accident, in the 1990s: I left university with a degree in nothing to do with tech, but with a solid understanding of the fundamentals, and basically ended up learning through apprenticeship-type techniques once on the job at an ISP.

Where I believe tech companies can help is by pouring their real-world knowledge and experience back into the teaching, and by this I mean moving beyond just the usual collaboration between industry and education and taking it a stage further: Encouraging their staff to go into Universities and teach, and the Universities to do more to guide and nurture this.

Yes, it takes effort. Yes, for the tech firms releasing staff members one or two days a week to teach, it’s a longer-term investment; the payback may not come for three to five years. But once it does arrive, it seems the gift will keep on giving.

For the Universities, they will need to support people who maybe aren’t experienced in teaching and find new ways of making their curriculum flexible enough to keep up.

I see this potentially having multiple positive paybacks:

  • For the University: they are getting some of the most up-to-date and practical real-world knowledge being taught to their students.
  • For the Students: increased employability as a result of relevant teaching.
  • For the Industry: a reducing skills gap and more inquisitive and employable individuals looking for careers.
  • For the tech employee doing the teaching: a massively rewarding experience that can help increase job satisfaction.

For this, I’d like to put my industry colleague Kevin Epperson on a bit of a pedestal. Kevin is an experienced Internet Engineering professional, and has worked at big industry names such as Microsoft, Level 3 and Netflix, in senior Network Engineering leadership roles. Kevin also teaches at University of Colorado, Boulder and is the instructor of their IP Routing Protocols course.

This is something Kevin has been doing for 15 years, during tenures at three different employers, and he recently received an award for his contributions to their ITP courses.

The value of having a real-world industry expert involved in instructing students can’t be overstated, yet I can think of very few people in my part of the tech industry – Internet Engineering – who actually teach in the formalised way Kevin does. All I can really think of is people doing more informal tutoring at conferences, and helping to organise hackathons and other events, which don’t necessarily reach the HE audience we’re discussing in this article.

The industry needs to be willing to move outside the rigid “full-time” staff model that I still find many companies wedded to, despite claims to the contrary, and be confident about releasing their staff to do these things, in order to give back. It may even improve staff retention and satisfaction for them, as well as produce more productive new hires! I’d be interested to see whether people have real stats, or just good stories, on that.

Yet, at the same time, we’re talking about an industry that won’t give many members of staff a day out of the office to attend a free industry event which will benefit their skills and experience and improve their job satisfaction. Why? Who knows? I have hard data on this from running UKNOF that I’m willing to share if folks are interested.

What I don’t know is how receptive the universities would be to this. Or do they have their own journey of trust to go on?

Net Eng Skills Gap Redux: Entry Routes into Network Engineering

While I was recently at the LINX meeting in London, I ended up having a side-discussion about entry routes into the Internet Engineering industry, and the relatively small amount of new blood coming into the industry.

With my UKNOF Director’s hat on for a moment, we’re concerned about the lack of new faces showing up to our meetings too.

Let me say one thing here and now:

If you work in any sort of digital business, remember that you are nothing without the network, nothing without the infrastructure. This eventually affects you too.

Yes, I know you can just “shove it in the Cloud”, but this has to be built and operated. It has real costs associated with it, and needs real people to keep it healthily developing and running.

I’ve written about this before here, almost 3 years ago. But it seems we’re still not much better off. I think that’s because we’ve not done enough about it.

One twitter correspondent said, “I didn’t know the entry route, so ended up in sysadmin, then internet research, and not netops.”

This pretty much confirmed some of my previous post, that we’d basically destroyed the previous entry route through commoditisation of first-line support, and that was already happening some time around 1998/1999.

It’s too easy to sit here and bleat, blaming “sexy devops” for robbing Net Eng and Network Infrastructure of keen individuals.

But why are things such as devops and more digital and software oriented industries attracting the new entrants?

One comment is that because a large number of network infra companies are well established, there isn’t the same pioneering spirit, nor the same chance to experiment and build, with infrastructure compared to the environment I joined 20 years ago.

My colleague, Paul Thornton, characterised this pioneering spirit in a recent UKNOF presentation titled “None of us knew what we were doing, we made it up as we went along” – note that it is full of jargon and colloquialisms, aimed at a specific techie audience, but if you can excuse that, it captures in a nutshell the mid-90s Internet engineering environment the likes of him and me grew up in.

Typing “debug all” on a core router can liven up your afternoon no end… But I didn’t really know what I wanted to do back then, I was green and wet behind the ears.

Many infrastructure providers are dominated by an obsession with high availability and, as a result, a resistance to change, because they view a stable and available infrastructure as the utopia. An infrastructure which is being changed and experimented upon is, by implication, not as stable.

DO NOT TOUCH ANY OF THESE WIRES

Has a desire to learn (from mistakes if necessary!) become mutually exclusive from running infrastructure?

In many organisations, the “labs” – the development and staging environments – are pitiful. They often aren’t running the same equipment as that which exists in production, but are cobbled together from various hand-me-down pieces of gear. This means it’s not always possible to compare apples with apples, or exactly mimic conditions which will exist in production.

Compare this to the software world, where everything is on fairly generic compute, and the software is largely portable from the development and staging environments, especially so in a world of virtualisation and containerisation. There are more chances to experiment, test, fail, fix and learn in this environment than in one where people are discouraged from touching anything for fear of causing an outage.

This means we Network Engineering types need a lot of preparation, and nerves of steel, before making any changes.

Why are the lab environments often found wanting? Classically it’s because of the high capital cost of network gear, which doesn’t directly earn any revenue. It’s harder to get signoff, unless your company has a clear policy about lab infrastructure.

I’m not saying a blanket “change control is bad”, but a hostile “don’t touch anything” environment may certainly drive away some of the inquisitive folks who are keen to learn through experimentation.

Coupled with the desire of organisations to achieve high availability with the lowest realistically achievable capital spend, it means that when these organisations hire for Network Engineering posts, they often want seasoned and experienced individuals, sometimes with vendor specific certifications. You know how I hold those in high esteem, or not as the case may be, right?

So what do we need to do?

I can’t take all the credit for this, but it’s partly my own opinions, mixed in with what I’ve aggregated from various discussions.

We need to create clear Network and Infra Engineering apprenticeship and potential career paths.

The “Way In” needs to be clearly signposted, and “what’s in it for you” made obvious.

There needs to be an established and recognised industry standard for teaching solid, basic network engineering principles, one that is distinct from vendor-led accreditations.

In some areas of the sector, the “LAIT” (LINX Accredited Internet Technician) programme is recognised and respected for its thoroughness in teaching basic Internet engineering skills, but it’s quite a narrow niche. Is there room to expand the recognition that this scheme, and possibly others, have?

A learning environment needs to exist where we enable people to make mistakes and learn from them, where failure can be tolerated, and priority placed on teaching and information sharing.

This means changing how we approach running the network. Proper labs. Proper tooling. Proper redundant infrastructure. No hostile “change control” environment.

Possibly running more outreach events that are easier for the curious and inquisitive to get into? That’s a whole post in itself. Stay tuned.

CashZone cash machine: Terrible UX

The cash machine at my local Co-Op used to be run by the Co-Op Bank. Then, this banking bit of the Co-Op had an encounter with a capital shortfall somewhere in the region of £1.5bn, and has since been restructured and rehabilitated.

As part of the restructure, the Co-Op group own less of the bank, and most of the cash machines which aren’t in a branch have been sold off to a commercial operator called CashZone, as part of a cost-cutting exercise.

It still gives out money, that’s fine. It’s getting to the point where you receive it that’s frustrating.

The menu system is a paragon of terrible UI design. Here’s an example…

You tell it you want “Cash Only” from the list of options, and it asks you if you want to see your balance. No! If I’d wanted that I’d have pressed “Cash with Balance”. Likewise, if I’d wanted a receipt, I’d have pressed “Cash with Receipt”.

If I press “Cash Only”, I think it’s fairly safe to assume that I only want that.

Most of all, the “circular questioning” of the CashZone menu system seems to seriously confuse some folks. It’s clear that transactions are taking longer since the machine was converted. It frequently has a queue of 4 or 5 people in front of it, whereas it seldom had a queue of 1 or 2 before.

How is this supposed to be an improvement in service? Well, it isn’t. It’s a step back.

The CashZone menu system is an example of a terrible user experience, designed by someone who probably never has to use the damned thing.

However, Co-Op have made £35m out of wasting our time, so that’s okay, I suppose?

Update: One of my twitter followers @jamesheridan pointed out that CashZone may receive a micro-payment for showing a balance enquiry. Still doesn’t make it any less slow or sucky.

Public wifi – why use more radio spectrum than you need?

Here’s the second of my series of little rants about poor public wifi – this time, why use more spectrum than you need?

Using the Wifi Explorer tool, I was recently checking out a venue with a modern public wifi installation. Here’s what the 5GHz spectrum looked like:

I’ve redacted the SSIDs so that we aren’t naming names, and hopefully we’re saving face.

You’re probably thinking “They aren’t using much spectrum at all“, right? All their access points are clustered down on 4 channels – that in itself not being a good idea.

Note that they are using “wide” 40MHz channels – the signal from each access point is occupying two standard 20MHz channels. Networks are usually set up like this to increase the amount of available bandwidth, by using multiple signals on multiple radio channels at once between the base station and the client.

This was also a brand new installation, and the access points were supporting 802.11a, n and ac, and the Wifi Explorer tool reported each AP could support a theoretical speed of 540Mb/sec.

What if I told you the access circuit feeding this public wifi network, and therefore the most bandwidth available to any single client, was 100Mb/sec?

Vanilla 802.11a would give a maximum data rate of 54Mb/sec (probably about 30Mb/sec usable payload) on a single channel; with 802.11n (MIMO) this could be 150 or 300Mb/sec. Plenty for getting to that 100Mb.

So, rather than having as many as 4 overlapping access points sharing the same channels, this could be reduced significantly by only using 20MHz channels. This would result in less radio congestion (fewer clients on the same frequency), and probably wouldn’t negatively affect access speeds for the clients on the network.
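
To put rough numbers on the contention (a sketch; the AP count comes from the scan above, and I'm only considering the lower, non-DFS part of the 5GHz band that this venue was using):

```python
BACKHAUL_MBPS = 100  # the venue's access circuit - the real ceiling on any client

# Non-overlapping 20 MHz channels in the lower (non-DFS) 5 GHz band.
CHANNELS_20 = [36, 40, 44, 48]
# The same spectrum bonded into "wide" 40 MHz pairs, as the venue had it.
CHANNELS_40 = [(36, 40), (44, 48)]

aps = 6  # access points visible in the sweep

print(aps / len(CHANNELS_40))  # 3.0 APs contending per channel on the 40 MHz plan
print(aps / len(CHANNELS_20))  # 1.5 APs per channel on a 20 MHz plan

# Even a 20 MHz 802.11n channel (roughly 72 Mb/s PHY per spatial stream,
# ~144 Mb/s with 2x2 MIMO) comfortably exceeds the 100 Mb/s backhaul,
# so the wide channels buy nothing the access circuit can deliver.
```

Halving the channel width doubles the number of distinct channels in the same spectrum, halving the co-channel contention, while the per-channel rate still outruns the 100Mb/sec circuit.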

There’s also the question of why all 6 access points visible in this sweep are spread across just two 40MHz channels.

The main reason is that DFS (Dynamic Frequency Selection) and TPC (Transmit Power Control) are required for any of the channels highlighted with blue numbers in the chart above – this is also known as “Radar Detection”, because some radar operates in these channels. An access point running DFS will “listen” first for any radar signals before choosing an unoccupied channel and advertising the network SSID. If it hears any radar transmissions, it will shut down and move channel.

Sure, avoiding the DFS mandatory channels gives more predictability in your channel use, and means you aren’t affected by an access point needing to go off air.

However, an option in designing the network could be to use the DFS-mandatory channels to increase available spectrum, but strategically place access points on non-DFS channels spatially in between those using DFS. That mitigates both the “listen on startup” phase (e.g. if there’s a need to reset an access point), and the service suddenly going offline because of radar detection.

Also, remember that this is an indoor deployment and well inside a building. The chances of encountering radar interference are relatively low. I don’t recall seeing a problem using DFS when I’ve deployed temporary networks for meetings.
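One way to sketch that interleaving idea (a hypothetical channel assignment, not the venue's actual plan; channels 36–48 are non-DFS and 52–64 are DFS under European rules):

```python
NON_DFS = [36, 40, 44, 48]  # usable immediately, no radar listen period
DFS = [52, 56, 60, 64]      # require radar detection before transmitting

def interleaved_plan(n_aps):
    """Alternate APs between non-DFS and DFS channels, so every DFS AP
    has stable non-DFS neighbours covering for it if radar forces it off air."""
    plan = []
    for i in range(n_aps):
        pool = NON_DFS if i % 2 == 0 else DFS
        plan.append(pool[(i // 2) % len(pool)])
    return plan

print(interleaved_plan(6))  # [36, 52, 40, 56, 44, 60]
```

Six access points now occupy six distinct 20MHz channels instead of sharing two 40MHz ones, and a radar hit on any single DFS AP leaves its non-DFS neighbours unaffected.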

The other thing to note is that this deployment is not using a controller-based architecture. It is made of access points which can signal control messages between each other, but each access point effectively maintains its own view of the world. (Those of you in the Wifi space can now probably work out who I’m talking about.)

Is the above configuration using so few channels, and using them unwisely considering the target bandwidth actually available to the wifi clients, just asking for trouble once a few hundred users show up?

 

Waiting for (BT) Infinity – an update

I mentioned in my last post about my partner’s Mother moving home this week, and how it looks like BT have missed an opportunity to give a seamless transition of her VDSL service.

The new house was only around the corner from the old one, so should be on the same exchange, and maybe even on the same DSLAM and cabinet. It had previously had VDSL service, judging from the master socket faceplate.


Was the jumpering in the cab over to the DSLAM still set up? Well, we dug out the old BT VDSL modem and HomeHub 3, and set those up.

Guess what…

The VDSL modem successfully trained up. The line is still connected to the VDSL DSLAM.

However, it’s failing authentication – a steady red “b“. Therefore it looks like the old gear won’t work on the new line.

But then the new HomeHub 5 they’ve needlessly shipped out won’t work either: we set that up too, and get an orange “b” symbol.

Evidently, something isn’t provisioned somewhere on the backend. Maybe the account credentials have been changed, or the port on the DSLAM isn’t provisioned correctly yet.

Does this look like a missed opportunity to provide a seamless transition, without the need for an engineer visit, or what?

 

Ken Morrison – A simple business philosophy

Recently the UK supermarket chain Morrisons has been in the news, regarding the state of the business, potential job cuts, and a lambasting for the Board at the recent company AGM from former chairman and straight-talking Yorkshireman Sir Ken Morrison, son of the company’s founder.

While CEO of the company, he was known for reportedly “skip diving” on visits to his Morrisons stores – sifting through the bins to see what was being thrown out and wasted. Sir Ken has a simple philosophy on the supermarket business – “shop in your own shops, get to know your customers and don’t make presidential visits“.

It seemed to work well for him: Morrisons was profitable for a good number of years, until, in 2004, it acquired Safeway UK (by then already independent of its US namesake), a company I used to work for as a teenager.

One of the things which used to irk me about the way that Safeway was managed was the way that the senior management conducted visits of the stores. When it was known that the regional manager was visiting, significant amounts of overtime became available. The store would be scrubbed top to bottom, the normally messy behind the scenes stockrooms would be tidied up, the shelves would be neatly filled and faced up, and almost every checkout would be open.

This resulted in the management not seeing the real experience, but some sort of show, or “shop in a bottle”.

The very “presidential visits” that Ken Morrison speaks of.

To borrow Sir Ken’s turn of phrase, they left with a “bullshit” experience of what shopping in one of their stores was like. They thought it was okay, and didn’t suck.

Even on short notice “unannounced” visits, somehow the store was tipped off, either by other local managers or by more junior flunkies of the regional managers, fearing for their own jobs if a shop was seen in disarray. Of course, overtime was rapidly offered, and 90% of the time you would take it because you wanted the money.

It seems that Morrisons’ management have picked up this behaviour along with a number of other bad habits from the Safeway acquisition.

One of my own pet hates is the way they build-out the aisle ends with free-standing stacks of items on promotion. This narrows the aisle width, reducing circulation area, and making it harder to manoeuvre your trolley, for fear of knocking over this teetering pile of products.

Obviously the idea is you take something from this wobbly pile to reduce it in size!

Tesco still aren’t much better. It’s a confusing environment of bright yellow price tags, contradictory “special non-offers”, and shouty shelf-edge “barkers”. It’s just a meh experience, and that’s after you’ve battled your way in past all the TVs, clothes and other crap they sell in the big stores.

Also, you’ve got to ask whether the business model is wrong. Are Morrisons working to a growth-centric business model? In a saturated market such as grocery shopping, the growth most likely has to come from stealing market share from a competitor. This likely comes with a higher cost of sale, as you’ve got to do something to make that fickle customer choose you today. Should Morrisons instead be looking after its own customers and working to a retention-based business model?

Rather than providing an unpleasant and stressful experience, do something to make your customers want to come back. You can’t compete on price alone or Aldi and Lidl will take your business away, and the niche high-ends are dominated by the likes of Waitrose and M&S.

I can’t help feeling that devouring Safeway was a meal that still gives Morrisons indigestion to this day, and they would maybe do better following Ken Morrison’s three simple tenets by which he ran the business for many years: good staff, good suppliers, loyal customers.

Read the BBC article and watch the interview with Ken Morrison

For peering in New York, read New Amsterdam

Dutch East India Company Logo
It’s colonialism all over again. Just not as we know it…

Last week, there was this announcement about the establishment of a new Internet Exchange point in New York by the US arm of the Amsterdam Internet Exchange – “AMS-IX New York” – or should that be “New Amsterdam”… 🙂

This follows on from the vote among AMS-IX members on whether or not the organisation should establish an operation in the US, which was carried by a fairly narrow majority. I wrote about this a few weeks ago.

This completes the moves by the “big three” European IX operators into the US market, arriving on US shores under the umbrella of the Open-IX initiative to increase market choice and competitiveness of interconnection in the US markets.

LINX have established LINX-NoVA in the Washington DC metro area, and AMS-IX are proceeding with their NY-NJ platform, while DECIX have issued a press statement on their plan to enter the NY market in due course.

One of the key things this does is bring these three IXPs into real direct competition in the same market territory for the first time.

There has always been some level of competition among the larger EU exchanges when attracting new international participants; for instance, DECIX carved itself a niche attracting Eastern European and Russian players, because many carrier services to those regions would hub through Frankfurt anyway.

But each exchange always had its indigenous home market to provide a constant base load of members; there wasn’t massive competition for the local/national peers, even though all three countries have a layer of smaller exchanges active in the home market.

Now, to some extent, they are going head-to-head, not just with US incumbents such as Equinix, TelX and Any2, but potentially with each other as well.

The other thing the AMS-IX move could do is fracture the NY peering market even further – a market which is already fractured, being served by three, maybe four, sizeable exchanges. Can it sustain a fifth or sixth?

Is it going to be economical for ISPs and Content Providers to connect to a further co-terminous IXP (or two)? Can the NY market support that? Does it make traffic engineering more complex for networks which interconnect in NY? So complex that it’s not worth it? Or does it present an opportunity to be able to more finely slice-and-dice traffic and share the load?

Don’t forget we’re also in a market which has been traditionally biased toward minimising the amount of public switch-based peering in favour of private bi-lateral cross-connects. Sure, the viewpoint is changing, but are we looking for a further swing in a long-term behaviour?

We found out from experience in the 2000s that London can only really sustain two IXPs – LINX and LONAP. There were at least 4 well-known IXPs in London in the 2000s, along with several smaller ones. (Aside… if you Google for LIPEX today, you get a link to a cholesterol-reducing statin drug.)

Going to locations on the East Coast may have made sense when we sailed there in ships and it took us several weeks to do it, but that’s no reason for history to repeat itself in this day and age, is it? So why choose New York now?

Will the EU players become dominant in these markets? Will they manage to help fractured markets such as NY to coalesce? If they do, they will have achieved something that people have been trying to do for years. Or, will it turn out to be an interesting experiment and learning experience?

It will be interesting to see how this plays out over time.

IX Scotland – Why might it work this time?

Yesterday the BBC ran this news item about the launch of a new Internet Exchange in Edinburgh – IX Scotland. This is the latest in an emerging trend of local IXPs developing in the UK, such as IX Leeds and IX Manchester.

There was some belief that this is the first Internet Exchange in Scotland; however, those people have short memories. There have been two (or three, depending on how you look at it) attempts at getting a working IXP in Edinburgh in the past 15 years, all of which ultimately failed.

So, why should IX Scotland be any different to its predecessors? Continue reading “IX Scotland – Why might it work this time?”

BA’s Heathrow Lounge Food, Pt 2: The Lord of the Flies?

Following on from my recent post regarding a rather poor Environmental Health Report for BA’s “exclusive” Concorde Room, Hillingdon Council, the local authority responsible for Heathrow, have conducted further inspections of BA’s lounge operations at the airport.

This time, the largest lounge in T5, the “Galleries Club South”, scored 2 out of 5, like its neighbour.

The report for this lounge highlights a number of basic food hygiene failings that seem to indicate a real lack of care.

Continue reading “BA’s Heathrow Lounge Food, Pt 2: The Lord of the Flies?”