Could a bit of cultural sensitivity help make better tech products?

A post from a person I follow on Twitter got me thinking about tech product development…

Dear Word for Mac 2011: No.

This was on a Mac in the UK. With a UK keyboard. With the system locale set to UK. With the system language set to British English.

Yet the software offered an autocomplete using the American spelling, “Mom”, seemingly ignoring the locale settings on the machine!
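The operating system already tells applications which regional variant the user has chosen, and checking it is cheap. Here’s a tiny, purely illustrative sketch in Python (emphatically not how Word actually works) of choosing a spelling suggestion from the user’s locale rather than hard-coding the US one:

```python
# Purely illustrative sketch (not how Word does it): consult the user's
# configured locale before choosing a spelling variant, rather than
# defaulting to US English.
import locale

locale.setlocale(locale.LC_ALL, "")   # adopt the user's locale settings
lang, _encoding = locale.getlocale()  # e.g. ("en_GB", "UTF-8") on a UK Mac

SPELLINGS = {"en_GB": "Mum", "en_US": "Mom"}
print(SPELLINGS.get(lang, "Mom"))     # fall back to "Mom" only if the locale is unknown
```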

Okay, it hasn’t escaped me that Word for Mac is an MSFT product. So maybe this is about cultural insensitivity in tech companies (or maybe companies in general), but as this was on a Mac, I’m going to use Apple as an example of what could be done better.

Everyone remembers the Apple Maps launch debacle, right?

So many of the faux pas could have been avoided if a bit of cultural sensitivity and local knowledge had been applied when sanity-checking the mapping data, especially the place-mark data.

Firstly, there’s a GIGO (garbage in, garbage out) problem at work here. Apple took in some seriously old source data.

For instance, the data was so out of date it contained companies long since closed down, gone bust, or merged with competitors. Yet, if a bit of local knowledge had been applied, these could have been caught in the sanity checking of the data.

Here are a few examples still there, which could have been eliminated this way, all in the locality in which I live:

Benjys – a sandwich chain – gone bust in 2007
Dewhurst Butchers – into administration in 2005
Safeway – yes, it still exists in the US, but this is Petts Wood, Kent; it’s still a supermarket here, taken over in the UK by Morrisons in 2004

I understand that Apple did conduct a beta of Maps, but if so, either they didn’t have many beta testers in the UK, the means of letting them correct bad data weren’t great, or the feedback simply didn’t make it into the released version.

But, that’s okay, now it’s released, it can be corrected by crowd-sourcing – i.e. getting our paying customers to do our jobs for us – right?

Well, there is a “report a problem” option, but that doesn’t seem to be working well: either it’s too hard to report an inaccurate place-mark, there’s a colossal backlog of reports, or they are going straight to the bit bucket.

If only they had bothered to actually get some local knowledge, obvious clangers like these could have been sifted out early in the process.

Customer Service is Crucial

When I need to get a memory upgrade for one of my own machines or for someone I know, I tend to go straight to Crucial. I usually don’t even bother looking anywhere else anymore. That’s where I went on Monday evening to order an upgrade for one of my machines.

Why? Well, partly because I’m lazy. Partly because I’m a man, and I mostly hate shopping. Even online.

But most of all, there’s a positive reason I’ll go back to Crucial. I’ve never been unhappy with their products or service, I’ve never had to return anything, they are competitively priced, and they always deliver things when they say they will, or do even better!

That memory upgrade I ordered on Monday night? I decided I’d pay the extra couple of quid to get it sent more quickly than their 3-5 day free standard postal delivery. I got an email early yesterday evening confirming my memory had been dispatched, along with a tracking number.

It arrived at about 0830 this morning, by Royal Mail Special Delivery, which costs us mere mortals about £6 if we go and send anything SD ourselves from the post office. Fantastic.

I know, I’m waxing lyrical about a company just going about its business of delivering the service and product I’ve paid for. But when you hear so often about companies who can’t keep their promises, it’s great when you find one that consistently can.

This Mac is much happier now, and I’m seeing fewer spinny beachballs.

Can you hear Steve screaming too?

Interesting article on BBC News by the chap behind the “Fake Steve” blog about the impending iPhone 5 launch.

He definitely makes some valid points, especially with reference to the leaks in the run-up to the launch about how potentially unremarkable the iPhone 5 could be, that Apple’s share of the smartphone market they helped to define is being thumped by the nimbler Asian companies’ Android handsets, and that Apple’s spend on R&D as a percentage of revenue is a paltry 2% under Cook’s leadership. There’s a good argument that a company like Apple needs a yin-and-yang balance at the top: both the eccentric visionary to keep driving new ideas and push to take risks, and the number-crunching expert to keep the corporate feet on the ground once in a while and stop the money running out. It’s very rare to find those qualities in the same person, if you ask me.

But there’s one comment which doesn’t sit right with me, and that’s the comment that the UI hasn’t changed in years, and that is somehow a bad thing.

I don’t know about you, but people who lead busy lives don’t appreciate having to start on a whole new learning curve just because they’ve updated their device. People like familiarity, which seems to be something Apple haven’t lost sight of.

The “familiarity” aspect is a huge selling point for those who don’t have time to re-learn, or who, like my parents, don’t really want to have to re-learn, because they a) don’t much like change, and b) are a bit technophobic, usually because just as they get the hang of something, the UI changes on them.

But, I’ll go one step further. The entire smartphone market is, at first glance, pretty unremarkable now. They are all hand-sized rectangles with a capacitive touch screen, on which you can read your email, take photos with a half-decent point-and-shoot camera, and even make and receive phone calls.

So, does this give grist to the “upgrade the UI” mill? Maybe there’s some way of keeping both camps happy – like a “simple” and an “expert” mode?

As for Steve? I’d say he’s screaming and spinning in his grave.

Networking equipment vs. Moore’s Law

Last week, I was at the NANOG conference in Vancouver.

The opening day’s agenda featured a thought-provoking keynote talk from Silicon Valley entrepreneur and Sun Microsystems co-founder Andy Bechtolsheim, now co-founder and Chief Development Officer of Arista Networks, entitled “Moore’s Law and Networking”.

The basic gist of the talk is that while Moore’s Law continues to hold true for general-purpose computing chips, it has not applied for some time to the development of networking technology. Continue reading “Networking equipment vs. Moore’s Law”

What might OpenFlow actually open up?

A few weeks ago, I attended the PacketPushers webinar on OpenFlow – a networking technology that, while not seeing widespread adoption as yet, is still creating a buzz on the networking scene.

It certainly busted a lot of myths and misconceptions folks in the audience may have had about OpenFlow, but the big questions it left me with are what OpenFlow stands to open up, and what effect it might have on the many well-established vendors who currently depend on selling “complete” pieces of networking hardware and software – the classic router, switch or firewall as we know it.

If I think back to my annoyances in the early 2000s, the big one was the amount of feature bloat creeping into network devices while we still tended to have a lot of monolithic operating systems in use, so a bug in a feature that wasn’t even enabled could crash the device, because the code was still running underneath. I was annoyed because there was nothing I could do other than apply kludgy workarounds and nag the vendors to ship patched code. I couldn’t decide to rip that bit of code out and replace it with some fixed code myself, and when the vendors finally shipped fixed code, it was a reboot to install it. I didn’t much like being so dependent on a vendor; not working for an MCI or a UUnet (remember, we’re talking 1999-2001 here, when they were the big guys), at times my voice in the “fix this bug” queue would be a little mouse squeak to their lion’s roar, in spite of heading up a high-profile Internet Exchange.

Eventually, we got proper multi-threaded and modular OSes in networking hardware, but I remember asking for “fast, mostly stupid” network hardware a lot back then. No “boiling the sea”, an oft-heard cliché these days.

The other thing I often wished I could do was use vendor A’s forwarding hardware, because it rocked, with vendor B’s routing engine, because vendor A’s was unstable or feature-incomplete, or because vendor B’s just had a better config language or features I wanted/needed.

So, in theory, OpenFlow could stand to enable network builders to do the sorts of things I describe above – allowing “mix-and-match” of “stuff that does what I want”.

This could stand to threaten the established “classic” vendors who have built their business around hardware/software pairings. So, how do they approach this? Fingers-in-ears “la, la, la, we can’t hear you”? Or embrace it?

You should, in theory, and with the right interface/shim/API/magic in your OpenFlow controller, be able to plug in whatever bits you like to run the control protocols and be the “brains” of your OpenFlow network.
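To make the “brains in the controller” idea a bit more concrete, here’s a minimal sketch using the Ryu controller framework, purely as an illustration – any OpenFlow controller would do, and the subnet and output port are invented for the example. The forwarding policy is ordinary software running on a server; the switch hardware just installs the flow entries it is given.

```python
# Illustrative sketch only: a trivial OpenFlow 1.3 "controller as the brains"
# app using the Ryu framework. The prefix and port number are made up.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class BrainsInTheController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Policy decided entirely in the controller: anything destined for
        # 192.0.2.0/24 goes out port 2 on this switch.
        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_dst=("192.0.2.0", "255.255.255.0"))
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

The point is the separation: swap the policy logic for whichever vendor’s control-plane code you prefer, and the hardware underneath doesn’t need to care.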

So, using the “I like Vendor A’s hardware, but Vendor B’s foo implementation” example, a lot of people like the feature support and predictability of the routing code from folks such as Cisco and Juniper, but find they have different hardware needs (or a smaller budget), and choose a Brocade box.

Given that so much merchant silicon is going into network gear these days, the software is now the main ingredient in the “secret sauce” – the sort of approach that folks such as Arista are taking.

In the case of a Cisco, their “secret sauce” is their industry standard routing engine. Are they enlightened enough to develop a version of their routing engine which can run in an OpenFlow controller environment? I’m not talking about opening the code up, but as a “black box” with appropriate magic wrapped around it to make it work with other folks’ controllers and silicon.

Could unpackaging these crown jewels be key to long term survival?

Couldn’t go with the OpenFlow? Archive is online.

Last week, I missed an event I really wanted to make it to – the OpenFlow Symposium, hosted by PacketPushers and Tech Field Day. Just too much on (like the LONAP AGM), already done a fair bit of travel recently (NANOG, DECIX Customer Forum, RIPE 63), and couldn’t be away from home yet again. I managed to catch less than 30 minutes of the webcast.

From being something which initially seemed to be aimed at academia doing protocol development, OpenFlow is now being talked about by lots of people, as it has attracted some funding and more interest from folks with practical uses.

I think it’s potentially interesting for either centralised control plane management (almost a bit like a route reflector or a route server for BGP), or for implementing support for new protocols which are really unlikely to make it into silicon any time soon, as well as the originally intended purpose of protocol development and testing against production traffic and hardware.

Good news for folks such as myself is that some of the stream content is now being archived online, so I’m looking forward to catching up with proceedings.

Dell Acquisition Taking Hold at Force 10?

It seems the recent acquisition of Force 10 by Dell is starting to make itself felt, and not only in a change of the logo on the website.

Eagle-eyed followers of the product information on their website will have noticed the complete disappearance of the product information for the chassis-based Zettascale switch, the Z9512, which was announced back in April. Continue reading “Dell Acquisition Taking Hold at Force 10?”