Brocade in a stitch over Exit Strategy?

A couple of days ago, on 18th August, storage and Ethernet vendor Brocade released their Q3 2011 results (their Year End is in October).

It roughly followed what was outlined in their preliminary Q3 numbers (flat revenue year-on-year) released on 5th August, which saw their shares take their biggest hit since their IPO back in the ’99 boom time. Despite earlier signs of rallying, their stock is still trading around the $3.40 mark as I write.

It’s no secret that Brocade has been looking for a buyer for a couple of years, since completing the Foundry Networks integration to add Ethernet to their existing storage focus. However, in the buyout beauty contest, likely suitor Dell has just passed Brocade up in favour of fluttering its eyelashes at Force10 Networks, which abandoned its own plans to IPO in order to be bought by Dell.

Interestingly, it’s not as though Foundry ended up being an indigestible meal for Brocade, as some might have predicted. Quite the opposite, it seems: the growth in their Ethernet business is going some way towards offsetting a slowdown in their SAN equipment sales, though it’s unclear whether some element of this is “Ethernet fabric” displacement in what would be classic Fibre Channel space.

So, might Brocade be finding themselves in a stitch over their investors getting their money out? Their Exit Strategy, which seemed to be concentrated on selling on to a large storage/server builder at a profit, doesn’t look like it’s getting much traction anymore. Are there many potential buyers left? What happens if the answer is “no”? How do you keep going?

Here I think lies part of the problem: It seems that the common investor Exit Strategy becomes focused around “doing stuff to make money”, rather than “doing stuff that makes money”.

That has been the Achilles heel I’ve long suspected exists in technology investing: while there may be a good, solid idea at the foundation, one often gets the impression that more corporate priority is given to securing an Exit Strategy for the investors, and to following that road, than to a sound development strategy which ensures that the product or service itself, rather than the way the company is sold, drives its success. The two can find themselves in opposition, and that can be a source of unwanted friction as well.

In Brocade’s case, I think the current marketing style isn’t doing them any favours. Look at brocade.com. Videos of talking heads, trotting out technobabble, or grinning product managers waxing lyrical while stroking their hardware. “Video data sheet” – now what the hell is that all about? Yet, everyone’s up to it. Animated banners on pages where you actually just want the dry data and cold hard facts. It’s almost like they don’t want you to find the information you’re really looking for. Please can I just have a boring old data sheet?

Maybe in cases like this it’s time to go back to basics: Making something you’re proud of, something you can be happy putting your name to, which is how many great products and brands developed in the past. Question is, have tech companies remembered how, and do investors have the longer-term stomachs for it?

A week for new 40G toys…

It’s been a week for new 40G launches in the Ethernet switch world…

First out of the gate this week has been Arista, with their 7050S-64: a 1U switch with 48 dual-speed 1G/10G SFP ports and four 40G QSFP ports, 1.28Tbps of switching, 960Mpps, 9MB of packet buffer, front-to-back airflow for friendly top-of-rack deployment, etc, etc.

Next to arrive at the party is Cisco, with their Nexus 3064: a 1U switch with 48 dual-speed 1G/10G SFP ports and four 40G QSFP ports, 1.28Tbps of switching, 950Mpps, 9MB of packet buffer, front-to-back airflow for friendly top-of-rack deployment, etc, etc.

Whoa! Anyone else getting déjà vu?
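Those near-identical headline figures aren’t a coincidence of marketing copy: they fall straight out of the shared port configuration. Here’s a quick back-of-the-envelope sketch (my own arithmetic, not taken from either data sheet):

```python
# Rough arithmetic showing where the headline numbers come from
# (illustrative only, not taken from either vendor's documentation).

PORTS_10G = 48   # dual-speed 1G/10G SFP ports, assumed running at 10G
PORTS_40G = 4    # 40G QSFP ports

# Aggregate line rate in one direction
one_way_gbps = PORTS_10G * 10 + PORTS_40G * 40        # 640 Gbps

# Vendors quote switching capacity full duplex, i.e. both directions
switching_tbps = one_way_gbps * 2 / 1000              # 1.28 Tbps

# Worst-case forwarding rate: a 64-byte frame occupies 84 bytes on the
# wire once the 8-byte preamble and 12-byte inter-frame gap are added
bits_per_min_frame = (64 + 8 + 12) * 8                # 672 bits
mpps = one_way_gbps * 1e9 / bits_per_min_frame / 1e6  # ~952 Mpps

print(f"{switching_tbps:.2f} Tbps switching, ~{mpps:.0f} Mpps at 64-byte frames")
```

Run it and you land on 1.28 Tbps and roughly 950 Mpps at line rate, which is more or less exactly what both data sheets quote.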


This week’s oxymoron: Ethernet will never dominate in…

…Broadcast TV and Storage. Apparently.

I’ve just read a blog post by Shehzad Merchant of Extreme Networks, about a panel he recently participated in, where one of the other panelists asserted the above was true.

Fascinating that a conference in 2011 is talking about Ethernet not becoming dominant in broadcast TV.

There are several broadcast organisations already using large-scale 10 Gig Ethernet platforms in places such as their archiving systems and their playout platforms, and I’m not talking about niche broadcasters, but big boys like ESPN. Not sure if any of them are using Extreme’s Purple boxes, though.

This unnamed panelist would be better off moving into time travel, as it seems they are already able to come here from the past to make these assertions.

I do wonder if it’s actually the stick-in-the-mud storage industry which will be slower to move than the broadcasters!

The problem with the IETF

There have been some good efforts recently to fix the gap that’s perceived to exist between the Internet operator community and the IETF. I hope I’m not giving them the kiss of death here… 🙂

A sense of frustration had been bubbling for a while: that the IETF had become remote from the people who actually deploy the protocols, that it had become the preserve of hardware vendors who lack operational experience, and that it was therefore no wonder they ship deficient protocols.

But, it can’t have always been that way right? Otherwise the Internet wouldn’t work as well as it does?

Well, when the Internet first got going, the people who actually ran the Internet participated in the IETF, because they designed protocols and they hacked at TCP stacks and routing code, as well as running operational networks. Protocols were written with operational considerations to the fore. However, I think people like this are getting fewer and fewer.

As time went by and the Internet moved on, a lot of these same folk stopped running networks day in, day out and took jobs with the vendors. They stayed involved in the IETF, though, because they were part of that community, were experienced in developing protocols, and brought operational experience to the working groups that do the development work.

The void in the Network Operations field was filled by the next generation of Network Engineers and, as time went by, fewer and fewer of them were interested in developing protocols, because they were busy running their rapidly growing networks. Effectively, there had been something of a paradigm shift in the sorts of people running networks compared with those who had been doing it in the past. For the Internet to grow the way it did in such a short time, something had to change, and this was it.

At the same time, the operational engineers were finding more and more issues creeping into increasingly complex protocols. That’s bad for the Internet, right? How did things derail?

The operational experience within the IETF was suffering from two things: 1) it was becoming more and more stale the longer key IETF participants went without having to run networks, and 2) the operator voice at the IETF was getting quieter and quieter, with things suggested by operators largely being rejected as impractical.

Randy Bush had started to refer to it as the IVTF – implying that Vendors had “taken over”.

There have been a few recent attempts to bridge that gap: “outreach” talks and workshops at operations meetings such as RIPE and NANOG have sought to get operator input and feedback, though expressing that input without frustration hasn’t always been easy.

However, it looks like we’re getting somewhere…

Rob Shakir currently has a good Internet Draft out, aimed at building a bridge between the ops community who actually deploy the gear and the folks who write the protocol specs and develop the software and hardware.

This has been long overdue and needs to work. It looks good, and is finding support from both the Vendor and Ops communities.

The “meta-problem” here is that one cannot exist without the other; it’s a symbiotic and mutually beneficial relationship that needs to work for a sustainable Internet.

I wonder whether it’s actually important for people on the protocol design and vendor side to periodically work on production networks, to ensure they have current operational knowledge rather than relying on what they learned 10 years ago.

I am the market Nokia lost

Remember when more than 50% of the mobile phones in people’s hands said “Nokia” on them? When 50% of those phones had that iconic/irritating/annoying signature ringtone – often because folks hadn’t worked out how to change it from the default – invariably a prelude to yells of “Hello! I’m on a train/in a restaurant/in a library”?

Well, this week a memo from the new Nokia CEO, Stephen Elop, has been doing the rounds online. It sums up the ferocious drubbing the once dominant Finnish company has taken in the handset market at the hands of Apple’s iPhone and Google’s Android OS, and how it now finds itself on the telecoms equivalent of a blazing oil platform.

I am part of the market that Nokia lost, maybe even forgot. I have a drawer which could be called “my life as a mobile phone user”, littered with old Nokia handsets, many of them iconic in their own right… the 2110, 6110, 6150, 6210, 6310i (probably one of the best handsets Nokia ever made), 6600, and three Communicators, the 9210i, 9500 and E90.

Why did I stop using Nokia?

Well, the last Nokia handset I tried was the N97, and since then I’ve been an iPhone convert.

While those around me used swishy iPhones, my previous loyalty to Nokia was rewarded with a slow and clunky UI, a terrible keyboard, and appallingly bad backup and synchronisation software that would only run on a Windows PC.

Nokia couldn’t even focus on keeping up with the needs of its previously loyal and high-yielding power users, for whom migrating handsets was always a pain, never mind the fickle throwaway consumer market.

Is it any wonder folks have deserted Nokia?

They have made themselves look like the British Leyland of the mobile phone world.

On a complete sidebar – any guesses on which airline will start up a HEL-SFO service first? There have got to be yield management folk looking at this in the wake of this news!

Update: 11 Feb 2011, 0855

As the pundits predicted, Nokia have announced that they are aligning themselves with Microsoft and its Windows Phone platform.

Back at work down the mines for Ethernet Standards Developers…

The ink on the 100GE standard is barely dry, and the first products are only just shipping. “Phew,” thinks the large network operator, “we’re good for another few years.”

Well, for the largest of them, probably not. They are already faced with needing to aggregate (run in parallel) multiple 100GE interfaces in their busiest areas. This doesn’t come cheaply: a single interface carries a high five-figure list price at minimum (Hankins, NANOG 50), potentially more.
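To put a rough figure on that, here’s a minimal sketch with hypothetical numbers: the traffic demand is invented, and the per-port price is just a placeholder in the “high five-figure” range mentioned above, not a quoted price.

```python
# Illustration of why running multiples of 100GE in parallel gets
# expensive quickly. All figures are hypothetical placeholders.
import math

demand_gbps = 340             # assumed peak demand on one busy path
port_speed_gbps = 100         # 100GE member links
list_price_per_port = 80_000  # placeholder "high five-figure" list price

members = math.ceil(demand_gbps / port_speed_gbps)  # 100GE links in parallel
cost_per_link = members * list_price_per_port * 2   # interfaces at both ends

print(f"{members} x 100GE in parallel, roughly ${cost_per_link:,} list per link")
```

Even allowing for discounts off list, that adds up quickly once there is more than a handful of such paths in the busiest parts of the network.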

Fortunately, having had a little bit of a break, some enlightened folk involved in the 802.3ba standard are getting on the case again.

John D’Ambrosia, who chaired the 802.3ba Task Force, and whose day job is in the Office of the CTO at Force10 Networks, is in the process of kicking off an “Ethernet Wireline Bandwidth Needs” assessment activity under the IEEE Industry Connections banner, to steer the next steps for Ethernet so it can keep up with what the network is demanding of it.

There’s not much else online about this as yet, as the effort is still very new, so I’ll add some links once more information becomes available.

This is a much needed activity: during the last iteration of the standards process there were criticisms over whether the faster speed was really needed, and disagreements about how big the market would be, with some almost conservative voices saying demand wasn’t there, while others said it would be too little, too late, at too high a price.

Good to see the new approach being taken, laying solid groundwork for the next (Terabit? Petabit? Something more creative?) run at the standard.