Could a bit of cultural sensitivity help make better tech products?

A post from a person I follow on Twitter got me thinking about tech product development…

Dear Word for Mac 2011: No.

This was on a Mac in the UK. With a UK keyboard. With the system locale set to UK. With the system language set to British English.

Yet the software offered an autocomplete using the American styling of “Mom”, seemingly ignoring the locale settings on the machine!
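
The frustrating thing is how little code it takes to get this right. Here’s a minimal sketch in Python (the word table and function names are mine, purely for illustration – this is not how Word actually does it) of picking a spelling variant from the system locale rather than hard-coding the American one:

```python
import locale

# Pick up the user's environment settings (e.g. en_GB on a UK machine).
locale.setlocale(locale.LC_ALL, "")

# Hypothetical spelling table, keyed by language/region code.
SPELLINGS = {
    "en_GB": "Mum",  # British English
    "en_US": "Mom",  # American English
}

def autocomplete_suggestion(prefix):
    """Suggest a completion that respects the system locale."""
    lang, _encoding = locale.getlocale()
    word = SPELLINGS.get(lang or "en_US", SPELLINGS["en_US"])
    return word if word.lower().startswith(prefix.lower()) else None

print(autocomplete_suggestion("Mu"))  # "Mum" on a UK-configured machine
```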

Okay, it’s not escaped me that Word for Mac is a MSFT product. So maybe this is about cultural insensitivity in tech (or maybe all) companies in general, but as this was on a Mac, I’m going to use Apple as an example of what could be done better.

Everyone remembers the Apple Maps launch debacle, right?

So many of the faux-pas could have been avoided if there had been a bit of cultural sensitivity and local knowledge applied when sanity checking the mapping data, especially the place-mark data.

Firstly, there’s a GIGO (garbage in, garbage out) problem at work here. Apple took in some seriously old source data.

For instance, the data was so out-of-date it contained companies long since closed down, gone bust, or merged with competitors. Yet, if there had been a bit of local clue applied, these could have been caught in the sanity checking of the data.

Here are a few examples still there, which could have been eliminated this way, all in the locality in which I live:

Benjys – a sandwich chain – gone bust in 2007
Dewhurst Butchers – into administration in 2005
Safeway – yes, it still exists in the US, but this is Petts Wood, Kent; the store here is still a supermarket, taken over in the UK by Morrisons in 2004
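
The sanity check I have in mind needn’t be clever, either. Here’s a rough sketch (the records and closure dates below are illustrative; the real source would be a companies registry feed or a local-knowledge review pass):

```python
from datetime import date

# Illustrative place-mark records; the "closed" date would come from
# a business registry feed or local reviewers, not the map data itself.
placemarks = [
    {"name": "Benjys", "closed": date(2007, 1, 1)},
    {"name": "Dewhurst Butchers", "closed": date(2005, 1, 1)},
    {"name": "Morrisons (ex-Safeway)", "closed": None},
]

def drop_defunct(records, as_of=None):
    """Keep only place-marks whose business hasn't already closed."""
    as_of = as_of or date.today()
    return [r for r in records if r["closed"] is None or r["closed"] > as_of]

print([r["name"] for r in drop_defunct(placemarks)])
# ['Morrisons (ex-Safeway)']
```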

I understand that Apple did conduct a beta of Maps, but if so, they either didn’t have many beta testers in the UK, the ability to let them correct bad data wasn’t great, or the feedback simply didn’t make it into the released version.

But, that’s okay, now it’s released, it can be corrected by crowd-sourcing – i.e. getting our paying customers to do our jobs for us – right?

Well, there is a “report a problem” option, but that doesn’t seem to be working well: either it’s too hard to report an inaccurate place-mark, there’s a colossal backlog of reports, or they’re going straight to the bit bucket.

If only they had bothered to actually get some local knowledge, obvious clangers like these could have been sifted out early in the process.

Customer Service is Crucial

When I need to get a memory upgrade for one of my own machines or for someone I know, I tend to go straight to Crucial. I usually don’t even bother looking anywhere else anymore. That’s where I went on Monday evening to order an upgrade for one of my machines.

Why? Well, partly because I’m lazy. Partly because I’m a man, and I mostly hate shopping. Even online.

But most of all, there’s a positive reason I’ll go back to Crucial. I’ve never been unhappy with their products or service, I’ve never had to return anything, they are competitively priced, and they always deliver things when they say they will, or do even better!

That memory upgrade I ordered on Monday night? I decided I’d pay the extra couple of quid to get it sent more quickly than their 3-5 day free standard postal delivery. I got an email yesterday, early evening, confirming my memory had been dispatched, and with a tracking number.

It arrived at about 0830 this morning, by Royal Mail Special Delivery, which costs us mere mortals about £6 if we go and send anything SD ourselves from the post office. Fantastic.

I know, I’m waxing lyrical about a company just going about its business of delivering the service and product I’ve paid for. But when you hear so often about companies who can’t keep their promises, it’s great when you find one that consistently can.

This Mac is much happier now, and I’m seeing fewer spinny beachballs.

Can you hear Steve screaming too?

Interesting article on BBC News by the chap behind the “fake Steve” blog, about the impending iPhone 5 launch.

Definitely some valid points made, especially with reference to the pre-launch leaks suggesting how potentially unremarkable the iPhone 5 could be, that Apple’s share of the smartphone market they helped to define is being thumped by the nimbler Asian companies’ Android handsets, and that Apple’s spend on R&D as a percentage of revenue is a paltry 2% under Cook’s leadership. There’s a good argument that a company like Apple needs a yin-and-yang balance at the top: both the eccentric visionary to keep driving new ideas and pushing to take risks, and the number-crunching expert to keep the corporate feet on the ground once in a while and stop the money running out. It’s very rare to find both qualities in the same person, if you ask me.

But there’s one comment which doesn’t sit right with me, and that’s the comment that the UI hasn’t changed in years, and that is somehow a bad thing.

I don’t know about you, but people who lead busy lives don’t appreciate having to start on a whole new learning curve just because they’ve updated their device. People like familiarity, which seems to be something Apple haven’t lost sight of.

The “familiarity” aspect is a huge selling point for those who don’t have time to re-learn, or for people like my parents, who don’t really want to have to re-learn, because they a) don’t much like change, and b) are a bit technophobic, usually because just as they get the hang of something, the UI changes on them.

But, I’ll go one step further. The entire smartphone market is, at first glance, pretty unremarkable now. They are all hand-sized rectangles with a capacitive touch screen, on which you can read your email, take photos with a half-decent point-and-shoot camera, and even make and receive phone calls.

So, does this give grist to the “upgrade the UI” mill? Maybe there’s some way of keeping both camps happy – like a “simple” and an “expert” mode?

As for Steve? I’d say he’s screaming and spinning in his grave.

Networking equipment vs. Moore’s Law

Last week, I was at the NANOG conference in Vancouver.

The opening day’s agenda featured a thought-provoking keynote talk from Silicon Valley entrepreneur and Sun Microsystems co-founder Andy Bechtolsheim, now co-founder and Chief Development Officer of Arista Networks, entitled “Moore’s Law and Networking”.

The basic gist of the talk is that while Moore’s Law continues to hold true for the general purpose computing chips, it has not applied for some time to development of networking technology. Continue reading “Networking equipment vs. Moore’s Law”

What might OpenFlow actually open up?

A few weeks ago, I attended the PacketPushers webinar on OpenFlow – a networking technology that, while not seeing widespread adoption as yet, is still creating a buzz on the networking scene.

It certainly busted a lot of myths and misconceptions folks in the audience may have had about OpenFlow, but the big questions it left me with are what OpenFlow stands to open up, and what effect it might have on many well established vendors who currently depend on selling “complete” pieces of networking hardware and software – the classic router, switch or firewall as we know it.

If I think back to my annoyances in the early 2000s, the biggest was the amount of feature bloat creeping into network devices while monolithic operating systems were still the norm: a bug in a feature you weren’t even using could crash the device, because the code was running regardless. I was annoyed because there was nothing I could do other than apply kludgy workarounds and nag the vendors to ship patched code. I couldn’t decide to rip that bit of code out and replace it with fixed code myself, and when the vendors finally did ship fixed code, installing it meant a reboot. I didn’t much like being so dependent on a vendor. Not working for an MCI or a UUNET (remember, we’re talking 1999-2001 here; they were the big guys), my voice in the “fix this bug” queue could at times be a little mouse squeak to their lion’s roar, in spite of heading up a high-profile Internet Exchange.

Eventually, we got proper multi-threaded, modular operating systems in networking hardware, but I remember asking for “fast, mostly stupid” network hardware a lot back then. No “boiling the sea”, an oft-heard cliché these days.

The other thing I often wished I could do was have hardware vendor A’s forwarding hardware because it rocked, but use vendor B’s routing engine, as vendor A’s was unstable or feature incomplete, or vendor B’s just had a better config language or features I wanted/needed.

So, in theory, OpenFlow could stand to enable network builders to do the sorts of things I describe above – allowing “mix-and-match” of “stuff that does what I want”.

This could stand to threaten the established “classic” vendors who have built their business around hardware/software pairings. So, how do they approach this? Fingers-in-ears “la, la, la, we can’t hear you”? Or embrace it?

You should, in theory, and with the right interface/shim/API/magic in your OpenFlow controller, be able to plug in whatever bits you like to run the control protocols and be the “brains” of your OpenFlow network.
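
To make that concrete, here’s a deliberately naive sketch of the split (every class and method name below is hypothetical – this is not any real controller’s API, just the shape of the idea): the routing brains compute paths, and the controller programs the result into whichever vendor’s forwarding hardware you happen to have bought, as match/action flow entries.

```python
# A toy model of the OpenFlow split: brains in software, fast dumb
# forwarding in hardware, glued together by match/action flow entries.
# All names are hypothetical, for illustration only.

class RoutingEngine:
    """Stand-in for Vendor B's routing code (the 'secret sauce')."""
    def best_paths(self):
        # prefix -> egress port, as computed by BGP/OSPF/whatever
        return [("192.0.2.0/24", 3), ("198.51.100.0/24", 1)]

class Switch:
    """Stand-in for Vendor A's forwarding hardware."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install(self, match, action, priority=100):
        self.flow_table.append((priority, match, action))
        print(f"{self.name}: {match} -> {action}")

class Controller:
    """The 'brains': pushes the routing engine's decisions into hardware."""
    def __init__(self, engine, switches):
        self.engine = engine
        self.switches = switches

    def sync(self):
        for prefix, port in self.engine.best_paths():
            for sw in self.switches:
                sw.install({"dst_ip": prefix}, {"output": port})

Controller(RoutingEngine(), [Switch("tor-1"), Switch("tor-2")]).sync()
```

Swap in a different RoutingEngine or different Switch hardware and, in theory, nothing else changes – which is exactly the mix-and-match I was wishing for a decade ago.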

So, using the “I like Vendor A’s hardware, but Vendor B’s foo implementation” example, a lot of people like the feature support and predictability of the routing code from folks such as Cisco and Juniper, but find they have different hardware needs (or a smaller budget), and choose a Brocade box.

Given that so much merchant silicon is going into network gear these days, the software is now the main ingredient in the “secret sauce” – the sort of approach that folks such as Arista are taking.

In the case of a Cisco, their “secret sauce” is their industry standard routing engine. Are they enlightened enough to develop a version of their routing engine which can run in an OpenFlow controller environment? I’m not talking about opening the code up, but as a “black box” with appropriate magic wrapped around it to make it work with other folks’ controllers and silicon.

Could unpackaging these crown jewels be key to long term survival?

Couldn’t go with the OpenFlow? Archive is online.

Last week, I missed an event I really wanted to make it to – the OpenFlow Symposium, hosted by PacketPushers and Tech Field Day. Just too much on (like the LONAP AGM), already done a fair bit of travel recently (NANOG, DECIX Customer Forum, RIPE 63), and couldn’t be away from home yet again. I managed to catch less than 30 minutes of the webcast.

OpenFlow initially seemed to be aimed at academia doing protocol development, but lots of people are now talking about it, as it has attracted some funding and more interest from folks with practical uses.

I think it’s potentially interesting for either centralised control plane management (almost a bit like a route reflector or a route server for BGP), or for implementing support for new protocols which are really unlikely to make it into silicon any time soon, as well as the originally intended purpose of protocol development and testing against production traffic and hardware.

Good news for folks such as myself is that some of the stream content is now being archived online, so I’m looking forward to catching up with proceedings.

Dell Acquisition Taking Hold at Force 10?

It seems the recent acquisition of Force 10 by Dell is starting to make itself felt, and not only in a change of the logo on the website.

Eagle-eyed followers of the product information on their website will have noticed the complete disappearance of the product information for the chassis-based Zettascale switch, the Z9512, which was announced back in April. Continue reading “Dell Acquisition Taking Hold at Force 10?”

Brocade in a stitch over Exit Strategy?

A couple of days ago, on 18th August, storage and ethernet vendor Brocade released their Q3 2011 results (their financial year ends in October).

It roughly followed what was outlined in their preliminary Q3 numbers (flat revenue year-on-year) released on 5th August, which saw their shares take their biggest hit since their IPO back in the ’99 boom time. Despite showing earlier signs of rallying, their stock is still trading around the $3.40 mark as I write.

It’s no secret that Brocade has been looking for a buyer for a couple of years, since completing the Foundry Networks integration to add ethernet to their existing storage focus. However, in the buyout beauty contest, likely suitor Dell has just passed Brocade up in favour of fluttering its eyelashes at Force10 Networks, which abandoned its own plans to IPO in favour of the Dell acquisition.

Interestingly, it’s not as though Foundry ended up being an indigestible meal for Brocade, as some might have predicted. It seems quite the opposite. The growth in their ethernet business is somewhat offsetting a slowdown in their SAN equipment sales, though it’s unclear if some element of this is “ethernet fabric” displacement in what would be classic fibre-channel space.

So, might Brocade be finding themselves in a stitch over their investors getting their money out? Their Exit Strategy, which seemed to be concentrated on selling on to a large storage/server builder at a profit, doesn’t look like it’s getting much traction anymore. Are there many potential buyers left? What happens if the answer is “no”? How do you keep going?

Here I think lies part of the problem: It seems that the common investor Exit Strategy becomes focused around “doing stuff to make money”, rather than “doing stuff that makes money”.

That has been the Achilles heel I’ve long suspected exists in technology investing: while there might be a good solid idea at the foundation, one often gets the impression that more corporate priority is given to ensuring there’s an Exit Strategy for the investors, and following that road, than to a sound development strategy which ensures the product or service itself drives the success of the company, rather than how you sell it. The two can find themselves at odds, and be a source of unwanted friction, as well.

In Brocade’s case, I think the current marketing style isn’t doing them any favours. Look at brocade.com. Videos of talking heads, trotting out technobabble, or grinning product managers waxing lyrical while stroking their hardware. “Video data sheet” – now what the hell is that all about? Yet, everyone’s up to it. Animated banners on pages where you actually just want the dry data and cold hard facts. It’s almost like they don’t want you to find the information you’re really looking for. Please can I just have a boring old data sheet?

Maybe in cases like this it’s time to go back to basics: Making something you’re proud of, something you can be happy putting your name to, which is how many great products and brands developed in the past. Question is, have tech companies remembered how, and do investors have the longer-term stomachs for it?

A week for new 40G toys…

It’s been a week for new 40G launches in the Ethernet switch world…

First out of the gate this week has been Arista, with their 7050S-64: a 1U switch with 48 dual-speed 1G/10G SFP ports and four 40G QSFP ports, 1.28Tbps of switching, 960Mpps, 9MB of packet buffer, front-to-back airflow for friendly top-of-rack deployment, etc, etc.

Next to arrive at the party is Cisco, with their Nexus 3064: a 1U switch with 48 dual-speed 1G/10G SFP ports and four 40G QSFP ports, 1.28Tbps of switching, 950Mpps, 9MB of packet buffer, front-to-back airflow for friendly top-of-rack deployment, etc, etc.

Whoa! Anyone else getting déjà vu?

Continue reading “A week for new 40G toys…”

This week’s oxymoron: Ethernet will never dominate in…

…Broadcast TV and Storage. Apparently.

I’ve just read a blog post by Shehzad Merchant of Extreme Networks, about a panel he recently participated in, where one of the other panelists asserted the above was true.

Fascinating that a conference in 2011 is talking about Ethernet not becoming dominant in broadcast TV.

There are several broadcast organisations who are already using large scale 10 Gig Ethernet platforms in places such as their archiving systems and in their playout platforms, and I’m not talking niche broadcasters, but big boys like ESPN. Not sure if any of them are using Extreme’s Purple boxes though.

This unnamed panelist would be better off moving into time travel, as it seems they are already able to come here from the past and make these assertions.

I do wonder if it’s actually the stick-in-the-mud storage industry which will be slower to move than the broadcasters!