Or, I never thought of myself as a narcissist but…
Thanks to the folks at HEAnet, here’s a link to the video of the talk “It’s peering, Jim…” that I gave at the recent INEX meeting in Dublin. In it, I discuss topics such as changes in the US peering community thanks to Open-IX, and try to untangle what people mean when they say “Regional Peering”.
The talk lasts around 20-25 minutes and I was really pleased to get around 15 minutes of questions at the end of it.
I also provide some fairly pragmatic advice to those seeking to start an IX in Northern Ireland during the questions. 🙂
Following on from yesterday’s post on the upcoming ITU-T WCIT conference (and what it might mean for how you can use the Internet, and how much you have to pay to use it), Google have also launched their own #freeandopen campaign, with this video giving their take on what top-down Government regulation could mean for you…
With less than two weeks to go until the WCIT (World Conference on International Telecommunications), and with what it could mean for the Internet (see “Is the Internet Facing a Perfect Storm”), the amount of online coverage is increasing rapidly.
Basically in one sentence, some Governments would like to see the ITU (a closed, top-down, Governmental organisation) take the lead in regulating the (open, collaborative, co-operative, bottom-up) Internet.
This video from the folks at accessnow.org explains it pretty well…
Lots of people I know have been working seriously hard in the background to educate those with a vote in the ITU, and keep the Internet based around open standards and collaborative governance.
Let’s face it, if the Internet wasn’t based around open standards, so many of the things which are part of our everyday lives could be very different, or simply wouldn’t even exist.
Fortunately, as far as we know in the UK, our Government is still an advocate of the “light touch” and industry self-regulation when it comes to the Internet, but that doesn’t make what’s going to go on behind closed doors in Dubai in a couple of weeks any less of a concern.
EE’s 4G service is only available in a handful of markets at the moment, and the BBC’s tech correspondent, Rory Cellan-Jones, made many reports for TV and radio yesterday, while conducting countless speed tests, which he has blogged about extensively.
Some of the comments have been that it’s no better in terms of speed than a good 3G service in some circumstances, while others complain about the monthly cost of the contracts.
Get locked-in to 4G(EE)?
Early adopters were always going to pay a premium, and those who want the service will be prepared to pay for it. It’s also worth noting that there are no “all you can eat” data plans offered on EE’s 4G service. Everything comes with an allowance, and anything above that has to be bought as “extra”.
The most concerning thing as far as the commercial package goes is the minimum contract terms.
12 months appears to be the absolute minimum (SIM only), while 18 months seems to be the offering if you need a device (be it a phone, dongle or MiFi), and 24 month contracts are also being offered.
Pay As You Go is not (yet) being offered on EE’s 4G service, probably because, with no competition, they have no incentive to.
Are EE trying to make the most of the headstart they have over competitors 3, O2 and Voda and capture those early adopters?
Rory Cellan-Jones referred in his blog to problems with reduced performance inside buildings.
A number of factors affect the propagation of radio waves and how well they penetrate buildings and other obstacles. One is the nature of the building’s construction (for instance, a building which exhibits the properties of a Faraday cage will block radio signals, or attenuate them to the point of being useless); another is the frequency of the radio signal.
Longer wavelengths (lower frequencies) can travel further and are less impacted by having to pass through walls. I’m sure there’s an xkcd on this somewhere, but the best I can find is this….
The reason EE were able to steal a march on the other mobile phone companies was that OFCOM (the UK regulator, which handles radio spectrum licensing for the nation) allowed EE to “refarm” (repurpose) some of their existing allocated spectrum, previously used for 2G (GSM), to support 4G. The 2G spectrum available to EE was in the 1800 MHz range, as that was the band allocated to EE’s constituent companies, Orange and T-Mobile.
Now, 1800 MHz does penetrate buildings, but not as well as the 900 MHz spectrum allocated to Voda and O2 for 2G.
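The frequency effect is visible even before you account for walls. As a rough sketch (using the standard free-space path loss formula, and ignoring building materials entirely, which add their own frequency-dependent losses), doubling the frequency from 900 MHz to 1800 MHz costs about 6 dB at any given distance:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

loss_900 = fspl_db(1.0, 900)    # ~91.5 dB at 1 km
loss_1800 = fspl_db(1.0, 1800)  # ~97.5 dB at 1 km
print(f"900 MHz at 1 km:  {loss_900:.1f} dB")
print(f"1800 MHz at 1 km: {loss_1800:.1f} dB")
print(f"Penalty for 1800 MHz: {loss_1800 - loss_900:.1f} dB")  # ~6 dB
```

In other words, before a single brick gets in the way, the 1800 MHz signal is already roughly 6 dB (a factor of four in power) worse off than 900 MHz over the same distance.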
Voda are apparently applying to OFCOM for authority to refarm their 900 MHz spectrum for 4G LTE. This would give a 4G service with good propagation properties (i.e. it would travel further from the mast) and better building penetration. Glossing over the (non-)availability of devices which talk LTE in the 900 MHz band, could this actually be good for extra-urban/semi-rural areas which are broadband not-spots?
Well, yes, but it might cause problems in dense urban areas where the device density is so high that it’s necessary to have a large number of small cells, in order to limit the number of devices associated with a single cell to a manageable amount; each cell can only deal with a finite number of client devices. This is already the case in places such as city centres, music venues and the like.
Ideally, a single network would have a high density of smaller cells (micro- and femto-cells) running on the higher frequency range, to intentionally limit their reach (and therefore the number of connected devices) in very dense urban areas such as city centres, and a lower density of large cells (known as macro-cells) running on lower frequencies to cover less built-up areas and possibly better manage building penetration.
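The capacity argument above can be sketched with some back-of-the-envelope arithmetic (the numbers here are purely illustrative assumptions, not any operator’s real figures):

```python
def cells_needed(devices: int, max_per_cell: int) -> int:
    """Minimum number of cells to serve `devices`, given a per-cell limit.

    Ceiling division: every device must be associated with some cell.
    """
    return -(-devices // max_per_cell)

# Assume, for illustration, each cell can handle ~400 active devices.
print(cells_needed(20_000, 400))  # dense city-centre patch: 50 small cells
print(cells_needed(1_200, 400))   # semi-rural patch: 3 larger cells suffice
```

The same per-cell limit forces fifty short-range cells onto the city-centre patch, while three long-range cells comfortably cover the semi-rural one, which is exactly why high frequencies (short reach, many cells) suit the former and low frequencies (long reach, few cells) suit the latter.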
But, that doesn’t fit with our current model of how spectrum is licensed in the UK (and much of the rest of the world).
Could the system of spectrum allocation and use be changed?
One option could be for the mobile operators to all get together and agree to co-operate, effectively exchanging bits of spectrum so that they have the most appropriate bit of spectrum allocated to each base station. But this would require fierce competitors to get together and agree, so there would have to be something in it for them, the best incentive being cost savings. This is happening to a limited extent now.
The more drastic approach could be for OFCOM to decouple the operation of base stations (aka cell towers) from the provision of service – effectively moving the radio part of the service to a wholesale model. Right now, providing the consumer service is tightly coupled to building and operating the radio infrastructure, the notable exception being the MVNOs such as Virgin (among others), who don’t own any radio infrastructure, but sell a service provided over one of the main four.
It wouldn’t affect who the man in the street buys his phone service from – it could even increase consumer choice by allowing further new entrants into the market, beyond the MVNO model – but it could result in better use of spectrum which is, after all, a finite resource.
Either model could ensure that regardless of who is providing the consumer with service, the most appropriate bit of radio spectrum is used to service them, depending on where they are and which base stations their device can associate with.
The Internet has become a massive part of our everyday lives. If you walk down a British high street, you can’t fail to notice people staring into their phones rather than looking where they are going! I did see a comment on TV this week that you have a 1-in-10 chance of tripping and falling over when walking along looking at your phone and messaging…
There are massive pushes for faster access in countries which already have widespread Internet adoption, both over fixed infrastructure (such as FTTC and FTTH) and wireless (LTE, aka 4G), which at times isn’t without controversy. In the UK, the incumbent, BT, is commonly (and sometimes unfairly) criticised for trying to sweat more and more out of its copper last mile infrastructure (the wires that go into people’s homes), while not doing enough to “future-proof” and enable remote areas by investing in fibre. There have also been problems over the UK regulator’s decision to allow one mobile phone network a head-start on its competitors in offering LTE/4G service, using existing allocated radio frequencies (a process known as “spectrum refarming”).
Why do people care? Because the Internet helps foster growth and can reduce the costs of doing business, and it’s why the developing countries are working desperately hard to drive internet adoption, along the way having to manage the threats of “interfering” actors who either don’t fully understand or fear change.
However, a bigger threat could be facing the Internet, and it’s coming from multiple angles, technical and non-technical. A perfect storm?
IPv4 Resource Exhaustion
The existing addressing (numbering) scheme used by the Internet is running out.
A secondary market for “spare” IPv4 resources is developing; IPv4 addresses will acquire a monetary value, driven by the lack of IPv6 deployment.
On a global level, through the ITU, which, having disregarded the Internet as “something for academics” and not relevant to public communications back in 1988, now wants to update the International Telecommunication Regulations to extend them to cover who “controls the Internet” and how.
All of these things threaten some of the basic foundations of the Internet we have today:
The Internet is “open” – anyone can connect, it’s agnostic to the data which is run over it, and this allows people to innovate
The Internet is “transparent” – managed using a bottom-up process of policy making and protocol development which is open to all
The Internet is “cheap” – relatively speaking, Internet service is inexpensive
These challenges facing the Internet combine to break all of the above.
Close the system off, drive costs up, and make development and co-ordination an invite-only closed shop in which it’s expensive to participate.
Time and effort, and investing a little money (in deploying IPv6, in some regulatory efforts, and in checking your business model is still valid), are the main things which will head off this approaching storm.
Adopting IPv6 should just be a (stay in) business decision. It’s something operational and technical that a business is in control of.
But, the regulatory aspect is tougher, unless you are big enough to be able to afford your own lobbyists. Fortunately, if you live in the UK, it’s not reached “write to your MP time”, not just yet. The UK’s position remains one of “light touch” regulation, largely letting the industry self-regulate through market forces, and this is being advocated to the ITU. There are also some very bright, talented and respected people trying to get the message through that it’s economically advantageous not to make the Internet a closed, top-down operated system.
Nevertheless, the challenges remain very much real. We live in interesting times.
The general gist of it is that the mandatory collection of marketing data like age, number of children, etc., was “not as specified”, and it is “being fixed” so that it’s no longer mandatory to enter these details just to change your account data, such as your email address, or opt-in/out status of marketing emails.
They don’t, however, consider the information collected as excessive, as long as it’s optional and you volunteered it in the first place.
But at least they have said they are fixing the inappropriate mandatory fields in their webforms.
However, I did make the train journey whose booking led me to be concerned about the excessive and irrelevant data they were collecting, which could only be stored for one reason: to improve their market intelligence.
During the journey, I used the on-train wifi, which requires you to “register” and asks you to provide another stream of compulsory personal information. While they didn’t want to know my inside leg measurement this time, again they wanted to know who I am, where I live, what my nearest station is, and what my reason for travelling was, all as “mandatory” responses, before allowing me to use the on-train wifi service.
I don’t understand how your nearest station, or your reason for travelling, is relevant to allowing you to access the on-train internet access service. Of course, I didn’t actually put any genuine details in the form.
This wifi registration page also presents the “opt-in” for marketing email as already ticked – so if you don’t notice and don’t untick the box, you’re opted in to their email marketing. While it complies with the letter of the law, it doesn’t really feel to be in the spirit of the law.
What’s your perception of East Coast’s data collection and retention policies based on what you’ve read?
IBM has banned its staff from using Siri. Big Blue allows its staff to BYOD and use their iPhone 4S on the company’s networks, but has banned the use of Siri over fears that the sound bites uploaded for processing by Siri could contain IBM proprietary information, which could be stored indefinitely and analysed by Apple.
This isn’t a new concern for corporates. It came to the forefront when employees commonly used services like MSN Messenger to keep in touch with their colleagues, and of course all but the paranoid thought nothing of discussing company business over IM, in unencrypted packets, routed over the commodity Internet, to some server farm their employer didn’t have any control over. Who knows if, and for how long, a messaging service could retain transcripts of chat sessions? Or if the packets were “sniffed” in transit and the transcript rebuilt?
Companies then got wise and started to provide internal IM systems which they had control over, and to have their IT departments block external chat platforms (let’s assume we’re talking about vanilla users who don’t know how to punch their way through these things for now). This also obviously helped with things like regulatory compliance.
Most recently, this has moved into the social networking arena, with things such as Twitter and Facebook – people have lost their jobs over committing corporate faux-pas on a publicly viewable service. This has opened the doors to platforms such as Yammer, a SaaS-based corporate social networking platform, which seeks to give the company back some control. All the things your employees know and love about social networking, but just for your company and its staff, with you in control of the data and the rules. Your regulatory compliance people can sleep easier at night.
So, while there’s no current evidence to support the notion that Apple are using Siri to spy on Big Blue, it’s fair to say that IBM aren’t bellyaching: I think it’s a legitimate data privacy concern, and it’s one that you should share.
When you post something on Twitter, or Facebook, or write a blog, you know that you’re putting it out into some sort of public (or shared) domain. You expect other people to see it, and you expect it to be stored (though maybe you’re not clear on just how long it’s being stored!).
I think people’s mindset is different when talking to Siri. They have the concept, in their heads, that they are talking to their phone, and overlook the fact that what they’ve just said has been uploaded to a server farm, possibly in a location outside of their home jurisdiction, to be processed. Do those of you who use Siri even think about the fact that this is what happens? Or that what you’ve just said has been placed into storage, potentially forever?
So many of the geeks I know are hoarders by nature, so it’s a force of habit for them to turn on lots of logging and want to keep everything forever (or at least until the storage runs out or they can’t afford any more), “just in case they need it”, and I suspect the backend of Siri is written no differently, because that’s how programmers are.
For a company the size of Apple, I don’t think there are any concerns about the storage running out, and the Siri licence agreement doesn’t say for how long you’re consenting to Apple storing the soundbites collected by Siri. With a large enough sample size, statistical analysis also makes it easier to find needles in such haystacks, and we’re getting increasingly good at it.
Could market intelligence generated from analysis of Siri requests even become a revenue stream for Apple in due course?
My opinion is that it is a legitimate privacy concern…