Posted: Tuesday, May 14th, 2013
BT’s latest move in deploying infrastructure to underpin high-speed connections is Fibre to the Premises on Demand (FTTPoD). Unlike BT’s current FTTP offering, where fibre is run along poles directly to the house, FTTPoD reuses existing FTTC infrastructure as much as possible: fibre is run from an FTTC-enabled cabinet to the premises, delivering speeds of up to 330 Mb/s down and 30 Mb/s up. A pilot has already been launched, with results to follow shortly.
If this service were easily accessible to the general public, FTTPoD could be a big step towards competing with the likes of Hong Kong and South Korea in terms of average speeds. However, the proportion of the population able to access it may be limited. FTTPoD runs off BT’s existing FTTC cabinets, which are still out of reach of large sections of the country and in many areas may never be rolled out. BT has generally targeted highly populated residential areas for this infrastructure, leaving business areas out of reach. FTTPoD also cannot be installed in multi-tenanted premises, which further shows that this is not designed as a business service.
From the information currently available, the on-demand product will have a high install cost, but without the contention or uptime guarantees normally associated with EAD services. This raises interesting questions about how the service will be marketed. Will home users be prepared to pay hundreds for the install in order to get speeds that arguably are not required by the majority? Will small business owners jump at the chance to access speeds previously only available through leased lines or bonded FTTC? While the install costs may well fall in line with the work that needs to be carried out, FTTPoD offers BT a chance to begin replacing last-mile copper lines with cheaper, faster and easier-to-manage fibre optic cable. No doubt over the next few decades copper will be phased out and fibre will become the main choice for last-mile connectivity, so this is a chance for BT customers to foot the bill for them.
While the lack of contention guarantees and SLAs will put off businesses that are more reliant on their connectivity, this technology could be very appealing to prosumers and start-ups. It will be interesting to see if BT’s restrictions will impede businesses putting this to use once it is rolled out across the country.
Posted: Wednesday, May 1st, 2013
In death, as in life, Margaret Thatcher divided opinion. Some obituaries celebrated the ‘saviour of the nation’, while others celebrated the demise of a merciless class-war commander. What we can perhaps all agree on is that she was fundamental in shaping the world we live in now; as citizens, as workers, as businesses, our lives, for better or for worse, owe much to her actions. Of course, in some quarters her legacy is more marked than in others; unbeknownst to many, the telecommunications industry is one such place.
The privatisation of BT in 1984 represents a watershed moment both for Thatcher and her government and for our industry, and it also reveals much about how the ‘Thatcher Revolution’ gathered pace. Before 1981, all telecoms services in Britain were provided by Post Office Telecommunications (known as BT from 1980). Widely considered a ‘natural monopoly’ industry (due to the high infrastructure costs associated with it), liberalisation of the market had been given little consideration until the late ‘70s. However, against a backdrop of public dissatisfaction with increasing delays for telephone line installation, and with new technologies reducing the capex required to enter the industry, this was soon to change under the first Thatcher government.
The 1981 Telecommunications Act separated BT from the Post Office, and this was followed in 1982 by BP, Cable and Wireless and Barclays setting up Mercury, injecting competition into the marketplace. However, at neither of these junctures is there evidence that privatisation was the ultimate vision of Thatcher or her government. Denationalisation was still viewed as a radical and risky policy against the backdrop of 30-plus years of state-ownership consensus, and state sales prior to ’84 reflected this tentativeness: they were small and discreet. The reasons the sale came to fruition were for the most part pragmatic: responses to the problems of the time, formed gradually through multi-stakeholder negotiation.
Modernising BT was a key objective behind the move to separate it from the Post Office; however, financing that modernisation was a challenge. In 1983 the government’s finances were deep in the red, with a deficit of around 4% of GDP, and nationalised industries were competing with key services (health, education etc.) for the Treasury’s limited coppers. Transferring assets into the private sphere not only opened up options to find investment from other sources (i.e. the City) but also raised money for the public purse.
Whilst the primary motivation for the sale of BT was raising funds for future investment, public share ownership was also attractive to Thatcher for both pragmatic and ideological reasons. Knowing that both Labour and the unions were likely to oppose privatisation, the government was able to offer BT employees pre-registered share options (which 90% took up) as a populist bulwark against any attempt to reverse the trend. In tandem with the ‘Right to Buy’ policy, it also began to shape the neoliberal narrative Thatcher was developing about ‘rolling back the frontiers of the state’ and empowering the people through private equity purchases.
The sale of BT encapsulates how Thatcher’s early, pragmatically reasoned social and economic policies evolved into a political ideology. Arriving at the decision to privatise was a policy-building process over a number of years, which always kept a firm eye on what was considered acceptable. When, in November 1984, more than 50 per cent of BT was sold to the public through a share offering, it became the largest and most successful SOE privatisation exercise in history. It was also an almost immediate political success: popular with the millions who purchased shares, invigorating the UK stock exchange (very much the beginning of the ‘Big Bang’ in the City) and raising money for the government. The sale would pave the way for a further 40 mass-market sell-offs during the Thatcher years and irrevocably altered the relationship between state and market. In many ways the embryo of ‘Thatcherism’ was also hatched during the process, setting a template that one could argue has been followed by all subsequent British governments, as well as many others internationally.
As for the impact on telecommunications, opening up the sector allowed other operators to enter the market, challenge BT, and invest in new technologies (such as mobile and Internet services). Of course, without regulation BT’s ‘natural monopoly’ (i.e. owning the underlying infrastructure) would have made for an uneven playing field, so Oftel (later Ofcom) was established at the point of denationalisation to introduce price caps and drive up BT’s efficiency. Oftel oversaw further moves to liberalise the market in 1991, when independent companies were authorised to bulk-buy telecommunications and sell them in packages to customers, and again in 2003, when the telephone exchanges were opened up to LLU operators.
As of 2012, there were over 200 fixed telecommunications providers, over 100 mobile service providers and over 1,000 Internet service providers operating in the UK. For most consumers there is a wide array of services and providers to choose from, and value for money to be gained from doing so. But not for everyone. Many areas of Britain (mostly rural) are without access to fast broadband; for those on the wrong side of the ‘Digital Divide’, in an increasingly digital world, there are serious social and economic consequences for communities and individuals. The reasons for this divide? It’s hard not to conclude that privatisation is the root cause, given that historically operators have refrained from investing in areas where they are unable to identify significant ROI. The establishment of BDUK (Broadband Delivery UK) in 2009 represents a move by the government to intervene in the market and initiate state-led solutions to this problem.
When Margaret Thatcher set the wheels in motion for the liberalisation of telecommunications, she did so with more modest than radical intentions. Just a few years later, she would find herself presiding over change which would not only revolutionise the telecommunications industry, but which constituted a seismic shift in the relationships between state, individual and market, with both immediate and long-lasting economic, political and social consequences across the UK.
When people now debate Thatcher’s legacy, they debate the merits of policies and philosophies which were sharpened, developed and ultimately given momentum by those changes to our industry, over 30 years ago.
Posted: Thursday, April 18th, 2013
Cyber war is, we are told, happening increasingly all around us. However, it doesn’t normally (touch wood) affect the average person in the street – until last month, that is, when millions of ordinary Internet users were caught in an ugly crossfire between warring companies, suffering delays in services and disruption to access.
The target of what became the largest DDoS attack in history (up to 300 Gb/s) was Spamhaus – an anti-spam organisation whose practices and methods have made it unpopular in shadier corners of the internet. The attack, which began on March 18th, fully saturated Spamhaus’ connection to the rest of the Internet and came close to knocking the site offline. If not for the intervention of Cloudflare (who provide protection against such attacks), it likely would have done.
The Spamhaus DDoS attack may be the biggest to date, but it is not an isolated incident; rather, it is the latest in a long list. American Express and HSBC fell victim to large-scale attacks last year, and it’s a trend security vendor Kaspersky expects to continue: “In general, attacks of this type are growing in terms of quantity as well as scale. Among the reasons for this growth is the development of the Internet itself (network capacity and computing power) and past failures in investigating and prosecuting individuals behind past attacks.”
Another trend we are witnessing is cyber criminals exploiting a fundamental feature that allows us to use the internet: DNS. The Domain Name System converts a name to an IP address – your computer asks a server what the IP address is. The chances are the server you ask won’t know the answer, so it will go and fetch it for you from the relevant authoritative servers and, once it has the answer, reply to the original sender. These ‘recursive’ DNS servers are the lifeblood of how we use the internet; without them you would have to memorise each IP address!
However, there are thousands of ‘recursive’ DNS servers out there which will accept queries from any IP address. If spoofed DNS packets are sent to those unsecured servers, they can be used in what is known as a DNS amplification attack, where a query of only a few dozen bytes can elicit a response of 3 or 4 KB – amplifying the attacker’s traffic by a factor of as much as 100x. This means that even with a relatively small number of nodes the bandwidth hit can be enormous. Combating these attacks is possible, but the way in which we do so may hinge on the answers to many other, much broader questions about the future of the internet and, in particular, who governs it.
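To get a feel for those numbers, here is a back-of-the-envelope sketch in Python. The figures are illustrative assumptions for the sake of the arithmetic, not measurements from the Spamhaus incident:

```python
# Back-of-the-envelope model of a DNS amplification attack.
# All figures below are illustrative assumptions.

QUERY_BYTES = 64        # size of one small spoofed DNS query
RESPONSE_BYTES = 3300   # size of a large response from an open resolver

amplification = RESPONSE_BYTES / QUERY_BYTES   # ~52x
print(f"Amplification factor: ~{amplification:.0f}x")

# Even a modest number of nodes sending spoofed queries at ordinary
# broadband upload rates produces an enormous flood at the victim.
nodes = 1000                  # hosts sending spoofed queries
upload_mbps_per_node = 1.0    # Mb/s of query traffic each node sends

attack_gbps = nodes * upload_mbps_per_node * amplification / 1000
print(f"Traffic arriving at the victim: ~{attack_gbps:.0f} Gb/s")
```

The point of the exercise: the victim’s inbound traffic scales with the amplification factor, not with the attacker’s own bandwidth, which is why a handful of open resolvers can turn a small botnet into a 300 Gb/s problem.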
Looking at the Spamhaus attack, it would appear that both unsecured DNS (by design) and unsecured DNS (by misconfiguration) were responsible for the amplification. One way of nullifying this would be for all ISPs to allow only their own customers’ IPs to query their DNS servers (as we do at Fluidata); however, the processing overheads deter many from doing so. As it stands, customers also have the option to build their own recursive DNS servers on their own infrastructure, moving DNS outside the ISP’s responsibility and increasing the potential for misconfiguration – which can be exploited for malicious purposes.
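As a sketch of what that restriction looks like in practice, here is a minimal BIND `named.conf` fragment. The prefixes are documentation placeholders, not a real ISP’s address space:

```
// named.conf fragment: restrict recursion to our own customers.
// 192.0.2.0/24 and 2001:db8::/32 are documentation prefixes used
// here as stand-ins for the ISP's real customer address space.
acl customers {
    192.0.2.0/24;
    2001:db8::/32;
};

options {
    recursion yes;
    allow-recursion { customers; };  // everyone else is refused
};
```

With a policy like this the resolver still answers its own customers normally, but it can no longer be conscripted into amplifying traffic towards arbitrary spoofed source addresses.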
In theory, ISPs could form a united front against DDoS attacks of this nature by insisting that customers only use their recursive DNS servers and by ensuring that those servers are secure. To increase security further, BCP-38 could also be deployed, providing filtering on every edge port so that customers cannot spoof traffic from their links. However, a move to a more regulated system would rely (if it were to be truly effective) on cross-national coordination, and would likely meet opposition from service providers who do not wish to incur the processing overheads associated with such measures.
Overcoming that opposition (i.e. by turning regulation into something more akin to legal statute) would inexorably carry the issue into the contentious territory of who governs the internet, who polices it and whether anybody has the right to do so – a proverbial Pandora’s box with far-reaching consequences for subjects ranging from security to freedom of speech, the right to privacy and the debate over the openness of the web. Given this, raising awareness around responsible DNS use seems the most viable course of action; the legacy of the Spamhaus attack might just be that it encourages people to think a little more about it.
Posted: Thursday, March 28th, 2013
A study recently completed by retail analysts suggests that the arrival of 4G in the UK could have a major impact on retailers, with experts predicting it could increase retail spending by up to “£1.8 billion a year”.
The study suggests that, with the imminent widespread release of the technology bringing faster and more reliable mobile connectivity, consumers will make more online purchases whilst on the move.
4G will allow shoppers to browse the web at speeds between five and ten times faster than the current 3G networks, meaning that shoppers are less likely to become frustrated with slow downloads and slow-loading pages – an all too familiar experience, especially in densely populated areas such as cities, busy train stations and, of course, high streets.
Whilst this is great news for retailers for the most part, further increases in online purchases could speed up the decline of an already contracting high street. In response, some retailers are seeking to innovate and diversify to enhance customers’ in-store experience.
For example, Topshop and Selfridges in London have now built ‘chill out’ style rooms – giving shoppers (or bored husbands and boyfriends) the opportunity to relax before venturing back out onto the busy shop floor. The likes of Burberry have gone a step further, turning their Regent Street store into ‘Burberry Live World’, where they are piloting interactive technology such as 22ft-high screens, 500 hidden speakers, a hydraulic stage, and even RFID micro-chipped clothes and ‘smart mirrors’. Guest iPads for customers to browse collections and view video content are fast becoming a common feature in branded high-street stores.
Of course, while the likes of the Arcadia Group can afford to have snazzy technologies and gadgets in store, and to spend huge amounts of money on Google AdWords and Big Data, the same is not true for smaller, independent stores. How those will cope in a choppy economic climate, and on what is increasingly looking like an even more uneven playing field, remains to be seen.
Posted: Wednesday, February 27th, 2013
At the end of a complex bidding process, the 4G auction has its victors and has raised £2.34bn for the public purse – about 90% less than the price paid at the 3G sale 13 years ago, at the height of the dot-com bubble, and more than £1bn short of what the Chancellor estimated in his Autumn Statement.
The relatively modest amounts raised by the auction may well be attributable to the limited success enjoyed by EE since launching its 4G service. Results published last week for EE’s financial year end show contract net additions falling by over a third in Q4 2012. It’s been suggested that this may be down to the way EE have priced their data bundles, offering only the same amount of data as on 3G platforms (resulting in customers running out of data early in the contracted month). The recent £5-a-month reduction in the price of EE’s most basic tariff may well be a move to remedy these perceived shortcomings.
There were no real surprises as to the companies that succeeded in the auction, with 3, EE, Vodafone and the new kid on the block, Niche (well, BT), all getting portions of the valuable spectrum.
It is interesting that BT, which left the mobile sector a decade ago when it sold off BT Cellnet (now O2), is now back in the market with a healthy chunk of the 2.6 GHz spectrum, which is best suited to handling high data traffic in cities.
BT has stressed that it is not planning to operate a national mobile network, but it will be using its spectrum to boost its fixed and Wi-Fi networks for businesses and consumers.
Even if the Treasury is disappointed, the auction may be good news for the roll out. We can now expect plenty of competition to offer fast new mobile services across the UK. But those people in 3G “notspots” will be hoping that this time they will not be left out of the faster future. Ofcom CEO Ed Richards has said “we will be conducting research at the end of this year to show who is deploying services, in which areas and at what speeds. This will help consumers and businesses to choose their most suitable provider.”
Ofcom has attached a coverage obligation to one of the 800 MHz lots of spectrum. The winner of this lot is Telefónica, which is obliged to provide a mobile broadband service with indoor reception to at least 98% of the UK population (expected to cover at least 99% when outdoors) and to at least 95% of the population of each UK nation by the end of 2017. While the main part of the auction has concluded, there is a final stage in the process to determine where in the 800 MHz and 2.6 GHz bands each winning bidder’s new spectrum will be located. Bidding in this final stage, called the ‘assignment stage’, will take place shortly.
Following that stage, once bidders have paid their full licence fees, Ofcom will grant licences to the winners to use the spectrum. Operators will then be able to start rolling out services.
By 2030, demand for mobile data could be 80 times higher than today. To help meet this demand and avert a possible ‘capacity crunch’, more mobile spectrum is needed over the long term, together with new technologies to make mobile broadband more efficient. Ofcom is planning now to support the release of further spectrum for possible future ‘5G’ mobile services.
As for Fluidata we expect to be able to launch our own 4G services in the not too distant future, if you would like further details please speak with your Account Manager.
Posted: Friday, February 15th, 2013
A recent BBC report unveiled car manufacturers’ plans to have all new vehicles connected to the web within the next few years. In fact Intel, which will invest £64 million in ‘connected cars’ over the next five years, claims the connected car is already the third fastest-growing technological device, after phones and tablets.
The introduction of smart technologies into vehicles could herald a new era of app-laden dashboards, providing useful information on anything from the price of petrol at local garages to the nearest free parking space. Interestingly, this technology isn’t new: McLaren pioneered it over a decade ago with their F1 road car, which could be connected to a mobile phone to send data about the car back to headquarters in Woking.
Now, though, social media and entertainment will be built into new vehicles, with specialist voice commands allowing drivers to check and update Facebook and Twitter without touching a button. BMW already have some of this functionality available in their cars.
Exciting stuff, but will ‘connected cars’ have grave consequences for road safety as drivers are exposed to more distractions? Research suggests that a high number of road accidents are caused by drivers using their mobile phones at the wheel (about 25 per cent in the US, according to the National Safety Council), so the introduction of technology which stops us taking our eyes off the road should actually have a positive impact on safety.
‘Connected cars’ are just the latest example of how the internet is changing and how new connectivity solutions like 4G and 3G will affect how we work, live and play.
Posted: Monday, January 28th, 2013
If, like the vast majority of the population, you’ve never set foot inside a datacentre, then they may well be a bit of a mystery to you – large, nondescript buildings hosting the mystical cloud.
Of course you may have seen photos of the interior of a facility, but if you have – in magazines like Wired or the Economist – then it’s probable you’ve glimpsed the facilities of Google or Facebook: glossy, shiny premises with row upon row of nicely colour-coded servers, routers and switches, all working 24/7.
Datacentres for major corporations like Google (who have seemingly limitless budgets) are one thing, but how do the “real” businesses find data centre space and what should they be looking for?
This article has not been written to harp on about the cloud and its benefits; that subject has been exhausted almost as much as the word ‘cloud’ has been printed in marketing campaigns. Rather, it aims to break through the marketing jargon and be a useful guide to understanding what sort of data centre would be right for your business.
For many of us, irrespective of our technical qualifications, reading a datacentre specification sheet can be a most confusing exercise. In many ways it almost seems as if you are studying maths: the sheets are riddled with algebraic equations and terminology such as 2N power redundancy, N+1 cooling and VESDA gas suppression units delivering FM200…
Working on a recent project at our newest colocation facility in Manchester, Joule House, I’ve managed to gain some understanding of the algebra and with it what businesses should be looking for in a facility.
To begin with, it’s important to understand that both power and cooling are delivered using generators and refrigerators, which are essential to keeping your equipment working 24/7. The total number of generators or refrigerators needed is called “N”. N is the optimum number but has no resiliency, so if a generator were to fail you would lose power. Therefore, what many facilities do is introduce an extra, fully redundant generator; this is referred to as N+1. If the total number of generators required (N) is 1, then you have 100% redundancy. However, if you require three generators then you have only 33.33% redundancy.
The next very familiar equation is N+2. This follows the same principle but delivers two additional generators or refrigerators: if N=1 you have 200% resiliency; if N=4 you have 50% resiliency. What I am hoping to show here is that, because of how datacentres report resiliency, one N+1 facility might be very different from another. However, as we spec up the resiliency we get to 2N. This is the first point at which you can be sure of resiliency, as 2N means the facility has 2xN – double the number of generators and refrigerators needed to operate. Therefore, regardless of whether the facility operates 2 generators or 200, they have confirmed that they have double the capacity.
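The arithmetic above can be captured in a few lines of Python. This is just a toy restatement of the definitions used in this article, not an industry-standard formula:

```python
def redundancy_pct(n_required: int, n_installed: int) -> float:
    """Spare capacity as a percentage of required capacity.

    n_required  -- N, the number of generators/refrigerators needed to run
    n_installed -- the number actually installed
    """
    spare = n_installed - n_required
    return 100.0 * spare / n_required

# N+1 with N=1: one spare for one required unit -> 100% redundancy
print(redundancy_pct(1, 2))              # 100.0
# N+1 with N=3: one spare shared across three required units -> ~33.3%
print(round(redundancy_pct(3, 4), 1))    # 33.3
# 2N doubles capacity regardless of N -> always 100% spare
print(redundancy_pct(4, 8))              # 100.0
```

Running the N+1 cases side by side makes the article’s point concrete: the same “N+1” label can mean anything from 100% spare capacity down to a small fraction, whereas 2N always means the whole load is duplicated.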
So what should I look for?
- Your data centre should be a custom built facility with demonstrable security.
- Your data centre should be away from water and have risk assessments proving they are not at any flood risk (as New York proved).
- You should be asking if the facility is “Carrier Neutral”. Some datacentres are operated by a single carrier who then monopolise the connectivity. This may cause you issues if you take services from other providers or wish to create a mirrored datacentre setup in the future for further resiliency.
- Geography: Most colocation hardware is created to be manageable offsite; therefore geography should not be a major concern. In the event of a reboot being needed or a cable needing to be run you should be able to use the remote hands facility. Therefore understand the remote hands procedure and do not allow geography to limit your choice.
- Touring and Security: Your datacentre will hold the most sensitive parts of your organisation’s data; this may be your CRM or billing platform. Therefore, when you tour the facility (which I strongly recommend you do), be mindful of the security: did they do thorough security checks? Are the suites secure? Who has access to your racks? If you are not satisfied with the security, this is not the facility for you.
- Restrictions: It is important to understand what the limitations of the datacentre are. Many datacentres have strict cabling and rack policies; these are not necessarily bad, as they ensure security and continuity, allow for faster time to repair and reduce the chance of accidental cable damage. You have the most flexibility before the racks are installed, so do your capacity planning thoroughly and talk about your three-to-five-year plans. Future racks may be located in a completely different section of the building, so understand what impact this would have.
- Advice. Always ask for advice and use the pre-sales resources on offer.
I hope this proves useful to those wishing to understand more about datacentres and in particular to anyone confused by DC terminology.
Posted: Friday, January 18th, 2013
This week it was revealed that a US software developer has been caught outsourcing his job, which has been earning him a six-figure salary. Working from home, he had spent his days browsing Reddit and YouTube, whilst a Chinese software company had been working under a contract with him to carry out his work. He was paying them only a fraction of his annual wage.
While this is completely fraudulent, it throws up some very interesting debates on the nature of outsourcing. Both SMEs and larger enterprises often do not have the capacity to run all aspects of their own business, or it simply doesn’t make financial sense, and so look elsewhere to have the work carried out contractually. IT departments especially are often entirely outsourced to individuals or companies offsite or even abroad. This enables companies to have a far wider reach than their headcount or talent pool of full-time employees can offer. But when this chain extends and expands, it can be at the expense of efficiency. In a world where time is money, decisions and directions can end up taking longer, and company core values become more difficult to stick to.
Timothy Ferriss, author of The 4-Hour Workweek and self-styled “serial entrepreneur and ultravagabond”, offers a step-by-step guide on how to outsource your entire life using overseas ‘virtual assistants’. He describes how he streamlined both his personal and work life, enabling him to add more zeros to his salary while spending the majority of his life on holiday. While your family may not appreciate birthday cards written by your Indian personal assistant ‘Honey’, I certainly know a few people who would see the immediate benefits in this. However, somewhere along the way the line blurs between having an assistant do a bit of extra research for a short article and defrauding your company and opening up security breaches.
The fact is, companies will always rely on each other to provide services they are experts in – companies that can do things better, quicker or cheaper than one can in-house. Good working relationships are key, where both sides are clear and open with each other about what they require and expect, along with a good understanding of the business itself.
Posted: Tuesday, January 15th, 2013
Last year we got involved in a project to bring high-speed broadband to a rural community in Hampshire, as part of a number of trials to evaluate what technology could be used to serve residents in a remote pocket of the country. Interestingly, the villages of Little London and Smannell were a stone’s throw from a new housing development being served with a fibre to the premises (FTTP) product from Independent Fibre Networks Ltd, making it a good location to test in.
What was interesting about this project was the use of fibre to the cabinet (FTTC) for Little London and a wireless solution for Smannell, ensuring that all the houses and local businesses were served. The use of multiple technologies meant we were able to maximise the budget while ensuring nobody was left out. This, along with our Service Exchange Platform, meant that the solution also delivered choice to the residents, giving them a number of ISPs to choose from to deliver internet to their homes.
While the final speeds still aren’t near FTTP, they are faster than in most urban areas and a huge increase over the previous ADSL service. This film was made as part of a look into broadband in the UK and was shown this month on BBC South.
Posted: Friday, January 4th, 2013
The gravity of January 1st 1983 continues to slip under the radar for most. Much like Danny Boyle’s nod to Tim Berners-Lee in the Opening Ceremony of the Olympics, or the work done by Bob Metcalfe in the development of Ethernet technology, the significance of “Flag Day” will be lost on those not familiar with the great breakthroughs made in the development of the Internet over the past half-century.
‘Flag Day’ was effectively the day the internet was born: the day when TCP/IP fully replaced the Network Control Program (NCP) as the core networking protocol for ARPAnet (the predecessor to the internet). TCP/IP created a common language for inter-network communication, amalgamating various conventions to allow disparate networks with their own standards to communicate with one another more efficiently, reliably and securely. In particular, it improved on NCP by ensuring that isolated attacks could no longer bring down an entire network. It was upon these foundations that Berners-Lee was later able to devise the World Wide Web.
Although no one individual can claim to have invented the Internet (with the exception of Al Gore!), Vint Cerf, Robert E. Kahn and the others at ARPAnet responsible for making the switch have stronger grounds than most.
ARPAnet itself was formally decommissioned in 1990, but it’s impressive to think that, 30 years on, the reason for the transition towards IPv6 is that we’ve managed to allocate nearly all of the 4.2 billion addresses IPv4 was originally designed to support.
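The 4.2 billion figure falls straight out of the 32-bit address field; a quick sketch comparing it with IPv6’s 128-bit space:

```python
# IPv4 addresses are 32-bit; IPv6 addresses are 128-bit.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4: {ipv4_space:,} addresses")    # 4,294,967,296 – the ~4.2 billion above
print(f"IPv6: {ipv6_space:.3e} addresses")  # roughly 3.4e+38
```

A space that seemed inexhaustible in 1983 turned out to be a rounding error next to what a globally connected world actually needs – hence the move to 128-bit addressing.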