Posted: Tuesday, May 14th, 2013
BT’s latest move forward in the deployment of infrastructure to underpin high-speed connections is Fibre to the Premises on Demand (FTTPoD). Unlike BT’s current native FTTP offering, where fibre is run directly to the home over poles and ducts, FTTPoD reuses existing FTTC infrastructure as much as possible: fibre optic cable is run onward from an FTTC-enabled cabinet to the individual premises, delivering speeds of up to 330 Mb/s down and 30 Mb/s up. A pilot has already been launched, with results to follow shortly.
If this service were easily accessible to the general public, FTTPoD could be a big step towards competing with the likes of Hong Kong and South Korea in terms of average speeds. However, the proportion of the population able to access it may be limited. FTTPoD runs off BT’s existing FTTC cabinets, which are still out of reach of large sections of the country and in many areas may never be rolled out. BT has generally targeted highly populated residential areas for this infrastructure, leaving business areas out of reach. FTTPoD also cannot be installed in multi-tenanted premises, which further suggests that this is not designed as a business service.
From the information currently available, the on-demand product will have a high install cost, but without the contention or uptime guarantees normally associated with EAD services. This raises interesting questions about how the service will be marketed: will home users be prepared to pay hundreds of pounds for the install in order to get speeds that arguably are not required by the majority? Will small business owners jump at the chance to access speeds previously only available through leased lines or bonded FTTC? While the install costs may well fall in line with the work that needs to be carried out, FTTPoD offers BT a chance to begin replacing last-mile copper with cheaper, faster and easier-to-manage fibre optic cable. No doubt over the next few decades copper will be phased out and fibre will become the main choice for last-mile connectivity, so this is a chance for BT’s customers to foot the bill for them.
While the lack of contention guarantees and SLAs will put off businesses that are more reliant on their connectivity, this technology could be very appealing to prosumers and start-ups. It will be interesting to see whether BT’s restrictions will impede businesses putting it to use once it is rolled out across the country.
Posted: Thursday, April 18th, 2013
Cyber war is, we are told, happening increasingly all around us. However, it doesn’t normally (touch wood) affect the average man in the street. That changed last month, when millions of ordinary internet users were caught in an ugly crossfire between warring companies, suffering delays in services and disruption to access.
The target of what became the largest DDoS attack in history (peaking at around 300 Gb/s) was Spamhaus, an anti-spam organisation whose practices and methods have made it unpopular in the shadier corners of the internet. The attack began on March 18th, saturating Spamhaus’ connection to the rest of the internet and coming close to knocking the site offline. If not for the intervention of Cloudflare (who provide protection against such attacks) it probably would have done.
The Spamhaus DDoS attack may be the biggest to date, but it does not stand in isolation; rather it is the latest in a long list of recent incidents. American Express and HSBC fell victim to large-scale attacks last year, and it’s a trend security vendor Kaspersky expects to continue: “In general, attacks of this type are growing in terms of quantity as well as scale. Among the reasons for this growth is the development of the Internet itself (network capacity and computing power) and past failures in investigating and prosecuting individuals behind past attacks.”
Another trend we are witnessing is cyber criminals exploiting a fundamental feature that allows us to use the internet: DNS. The Domain Name System converts names to IP addresses; your computer asks a server what the IP address for a name is. The chances are that the server you ask won’t know the answer itself, so it will go and fetch it for you from the authoritative servers, then reply to the original sender. These ‘recursive’ DNS servers are the lifeblood of how we use the internet; without them you would have to memorise each IP address!
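To make that concrete, a couple of lines of Python illustrate what a recursive resolver does for you every time you type a name: your machine hands the question to its configured resolver, which chases down the answer behind the scenes. This is purely an illustrative sketch and the hostnames are just examples.

```python
# Illustrative only: the OS hands these lookups to the recursive resolver
# configured on the machine (usually your ISP's), which does the legwork.
import socket

for name in ("www.spamhaus.org", "www.bbc.co.uk"):
    infos = socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    print(name, "->", ", ".join(addresses))
```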
However, there are thousands of ‘recursive’ DNS servers out there which will accept queries from any IP address. If spoofed DNS packets are sent to those unsecured servers they can be used for what is known as a DNS amplification attack: a query of only a few dozen bytes can trigger a response of 3 or 4 KB, as much as 100x the traffic the attacker actually sends. This means that even with a relatively small number of nodes the bandwidth hit can be enormous. Combating these attacks is possible, but the way in which we do so may hinge on the answers to much broader questions about the future of the internet and, in particular, who governs it.
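A quick back-of-the-envelope calculation shows why this matters. The figures below are illustrative assumptions rather than measurements from the Spamhaus incident, but they show how a botnet generating only a few gigabits of spoofed queries can land hundreds of gigabits on its victim.

```python
# Rough amplification arithmetic - all figures are assumptions for illustration.
query_bytes = 40                   # a small spoofed DNS query
response_bytes = 4_000             # a large response (~100x the query)
queries_per_second = 10_000_000    # total rate across the botnet and open resolvers

amplification = response_bytes / query_bytes
sent_gbps = query_bytes * queries_per_second * 8 / 1e9          # what the attacker transmits
reflected_gbps = response_bytes * queries_per_second * 8 / 1e9  # what lands on the victim

print(f"amplification factor: {amplification:.0f}x")    # 100x
print(f"attacker sends:       {sent_gbps:.1f} Gb/s")    # ~3 Gb/s
print(f"victim receives:      {reflected_gbps:.0f} Gb/s")  # ~300 Gb/s
```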
Looking at the Spamhaus attack, it appears that both unsecured DNS servers (open by design) and unsecured DNS servers (open through misconfiguration) were responsible for the amplification of the attack. One way of nullifying this would be for all ISPs to allow only their own customers’ IPs to query their DNS servers (as we do at Fluidata), although the processing overheads deter many from doing so. As it stands, customers also have the option to build their own recursive DNS servers on their own infrastructure, moving DNS outside the ISP’s responsibility and increasing the potential for misconfiguration, which can be exploited for malicious purposes.
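In essence, that policy is nothing more than a source-address check before the resolver agrees to recurse. The sketch below is a toy Python version with made-up prefixes, not how any production resolver is actually configured, but it captures the idea.

```python
# Toy illustration: only recurse for sources inside our own customer ranges.
# The prefixes are documentation examples, not real customer allocations.
from ipaddress import ip_address, ip_network

CUSTOMER_RANGES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

def should_recurse(source_ip: str) -> bool:
    addr = ip_address(source_ip)
    return any(addr in net for net in CUSTOMER_RANGES)

print(should_recurse("203.0.113.50"))   # True  - one of our customers
print(should_recurse("192.0.2.7"))      # False - refuse, or we become an open resolver
```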
In theory ISPs could form a united front against DDoS attacks of this nature by insisting that customers use only their recursive DNS servers and by ensuring that those servers are secure. To increase security further, BCP-38 could also be deployed, providing filtering on every edge port so that customers cannot spoof traffic from their links. However, to be truly effective, the move to a more regulated system would rely on cross-national coordination, and it would likely meet opposition from service providers who do not wish to incur the processing overheads associated with such measures.
Overcoming that opposition (by turning regulation into something more akin to legal statute, say) would inexorably carry this issue into the contentious territory of who governs the internet, who polices it and whether anybody has the right to do so: a proverbial Pandora’s box with far-reaching consequences for subjects ranging from security to freedom of speech, the right to privacy and the debate over the openness of the web. Given this, raising awareness around responsible DNS use seems the most viable course of action; the legacy of the Spamhaus attack might just be encouraging people to think a little more about it.
Posted: Wednesday, February 27th, 2013
At the end of a complex bidding process, the 4G auction has its victors and has raised £2.34bn for the public purse. That is about 90% less than the price paid at the 3G sale 13 years ago, at the height of the dot-com bubble, and more than £1bn short of what the chancellor estimated in his autumn statement.
The relatively modest amounts raised by the auction may well be attributable to the limited success enjoyed by EE since launching its 4G service. Results published last week for EE’s financial year-end show contract net additions actually falling by over a third in Q4 2012. It has been suggested that this may be down to the way EE have priced their data bundles, offering only the same amount of data as on 3G tariffs (resulting in customers running out of data early in the contracted month). The recent £5-a-month reduction in the price of EE’s most basic tariff may well be a move to remedy these perceived shortcomings.
There were no real surprises as to the companies that succeeded in the auction, with 3, EE, Vodafone and the new kid on the block, Niche (well, BT), all getting portions of the valuable spectrum.
It is interesting that BT, which left the mobile sector a decade ago when it sold off BT Cellnet (now O2), is now back in the market with a healthy chunk of the 2.6 GHz spectrum, which is best suited to handling high data traffic in cities.
BT has stressed that it is not planning to operate a national mobile network, but it will be using its spectrum to boost its fixed and Wi-Fi networks for businesses and consumers.
Even if the Treasury is disappointed, the auction may be good news for the roll-out. We can now expect plenty of competition to offer fast new mobile services across the UK. But those people in 3G “notspots” will be hoping that this time they will not be left out of the faster future. Ofcom CEO Ed Richards has said: “we will be conducting research at the end of this year to show who is deploying services, in which areas and at what speeds. This will help consumers and businesses to choose their most suitable provider.”
Ofcom has attached a coverage obligation to one of the 800 MHz lots of spectrum. The winner of this lot is Telefónica who is obliged to provide a mobile broadband service for indoor reception to at least 98% of the UK population (expected to cover at least 99% when outdoors) and at least 95% of the population of the UK by the end of 2017. While the main part of the auction has concluded, there is a final stage in the process to determine where in the 800 MHz and 2.6 GHz bands each winning bidder’s new spectrum will be located. Bidding in this final stage, called the ‘assignment stage’, will take place shortly.
Following that stage, once bidders have paid their full licence fees, Ofcom will grant licences to the winners to use the spectrum. Operators will then be able to start rolling out services.
By 2030, demand for mobile data could be 80 times higher than today. To help meet this demand and avert a possible ‘capacity crunch’, more mobile spectrum is needed over the long term, together with new technologies to make mobile broadband more efficient. Ofcom is planning now to support the release of further spectrum for possible future ‘5G’ mobile services.
As for Fluidata we expect to be able to launch our own 4G services in the not too distant future, if you would like further details please speak with your Account Manager.
Posted: Friday, February 15th, 2013
A recent BBC report has unveiled car manufacturers’ plans to have all new vehicles connected to the web within the next few years. In fact Intel, which will invest £64 million in ‘connected cars’ over the next five years, claims the connected car is already the third fastest-growing technological device after phones and tablets.
The introduction of smart technologies into vehicles could herald a new era of app-laden dashboards, providing useful information on anything from the price of petrol at local garages to the nearest free parking space. Interestingly, the technology isn’t new: McLaren pioneered it over a decade ago with their F1 road car, which could be connected to a mobile phone to send data about the car back to headquarters in Woking.
Now, though, social media and entertainment would be built into new vehicles, with specialist voice commands allowing drivers to check and update Facebook and Twitter without touching a button. BMW already offer some of this functionality in their cars.
Exciting stuff, but will ‘connected cars’ have grave consequences for road safety as drivers are exposed to more distractions? Research suggests that a high number of road accidents are caused by drivers using their mobile phones at the wheel (about 25 per cent in the US, according to the National Safety Council), so the introduction of technology which stops us from taking our eyes off the road should actually have a positive impact on safety.
‘Connected cars’ are just the latest example of how the internet is changing and how new connectivity solutions like 4G and 3G will affect how we work, live and play.
Posted: Monday, January 28th, 2013
If, like the vast majority of the population, you’ve never set foot inside a datacentre, then they may well be a bit of a mystery to you – large, nondescript buildings hosting the mystical cloud.
Of course you may have seen some photos of the interior of a facility, but if you have, in magazines like Wired or the Economist, then it’s probable you’ve glimpsed the facilities of Google or Facebook and witnessed glossy, shiny premises with row upon row of nicely colour-coded servers, routers and switches all working 24/7.
Datacentres for major corporations like Google (who have seemingly limitless budgets) are one thing, but how do “real” businesses find datacentre space and what should they be looking for?
This article has not been written to harp on about the cloud and its benefits; that subject has been exhausted almost as much as the word ‘cloud’ has been printed in marketing campaigns. Rather, it is meant to break through the marketing jargon and be a useful guide to understanding what sort of datacentre would be right for your business.
For many of us, irrespective of our technical qualifications, reading a datacentre specification sheet can be a most confusing exercise. In many ways it almost seems as if you are studying maths; the sheets are riddled with algebraic equations and terminology such as 2N power redundancy, N+1 cooling and VESDA gas suppression units delivering FM200…
Working on a recent project at our newest colocation facility in Manchester, Joule House, I’ve managed to gain some understanding of the algebra and with it what businesses should be looking for in a facility.
To begin with it’s important to understand that both power and cooling are delivered using generators and refrigerators, which are essential to keeping your equipment working 24/7. The total number of generators or refrigerators needed is called “N”. N is the optimum number but has no resiliency, so if a generator were to fail you would lose power. Therefore what many facilities do is introduce an extra, fully redundant generator; this is referred to as N+1. If the total number of generators required (N) is 1, then you have 100% redundancy. However, if you require three generators then you have 33.33% redundancy.
The next very familiar equation is N+2. This follows the same principle and delivers two additional generators or refrigerators. If N=1 then you have 200% resiliency; if N=4 you have 50% resiliency. What I am hoping to show here is that, because of how datacentres report resiliency, one N+1 facility might be very different from another. However, as we spec up the resiliency we get to 2N. This is the first point at which you can be sure of resiliency, as 2N means that the facility has 2xN, or double, the number of generators and refrigerators needed to operate. Therefore regardless of whether the facility operates 2 generators or 200, they have confirmed that they have double the capacity.
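To put the arithmetic in one place, here is a small sketch (illustrative only) of how those percentages fall out of N and the number of units actually installed.

```python
# Spare capacity as a percentage of what the facility needs to operate.
def redundancy_percent(needed: int, installed: int) -> float:
    spare = installed - needed
    return 100.0 * spare / needed

print(redundancy_percent(needed=1, installed=2))   # N+1 with N=1 -> 100.0
print(redundancy_percent(needed=3, installed=4))   # N+1 with N=3 -> ~33.3
print(redundancy_percent(needed=4, installed=6))   # N+2 with N=4 -> 50.0
print(redundancy_percent(needed=4, installed=8))   # 2N  with N=4 -> 100.0
```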
So what should I look for?
- Your data centre should be a custom built facility with demonstrable security.
- Your data centre should be away from water and have risk assessments proving they are not at any flood risk (as New York proved).
- You should be asking if the facility is “Carrier Neutral”. Some datacentres are operated by a single carrier who then monopolise the connectivity. This may cause you issues if you take services from other providers or wish to create a mirrored datacentre setup in the future for further resiliency.
- Geography: Most colocation hardware is created to be manageable offsite; therefore geography should not be a major concern. In the event of a reboot being needed or a cable needing to be run you should be able to use the remote hands facility. Therefore understand the remote hands procedure and do not allow geography to limit your choice.
- Touring and Security: Your datacentre will hold the most sensitive parts of your organisation’s data; this may be your CRM or billing platform. So when you tour the facility (which I strongly recommend you do), be mindful of the security: did they do thorough security checks? Are the suites secure? Who has access to your racks? If you are not satisfied with the security, this is not the facility for you.
- Restrictions: It is important to understand what the limitations of the datacentre are. Many datacentres have strict cabling and rack policies; these are not necessarily bad, as they ensure security and continuity, allow for faster time to repair and reduce the chance of accidental cable damage. You have the most flexibility before the racks are installed, so do your capacity planning thoroughly and talk about your three-to-five-year plans. Future racks may be located in a completely different section of the building, so understand what impact this would have.
- Advice: Always ask for advice and use the pre-sales resources on offer.
I hope this proves useful to those wishing to understand more about datacentres and in particular to anyone confused by DC terminology.
Posted: Friday, January 18th, 2013
This week it was revealed that a US software developer has been caught outsourcing his own job, one that earned him a six-figure salary. Working from home, he spent his days browsing Reddit and YouTube, while a Chinese software company worked under contract with him to carry out his work. He was paying them only a fraction of his annual wage.
While this is completely fraudulent, it throws up some very interesting debates on the nature of outsourcing. Both SMEs and larger enterprises often do not have the capacity to run all aspects of their own business, or it simply doesn’t make financial sense, and so look elsewhere to have the work carried out contractually. IT departments especially are often entirely outsourced to individuals or companies offsite or even abroad. This enables companies to have a far wider reach than their headcount or talent pool of full-time employees can offer. But when this chain extends and expands, it can be at the expense of efficiency. In a world where time is money, decisions and directions can end up taking longer, and company core values become more difficult to stick to.
Timothy Ferriss, author of The 4-Hour Workweek and self-styled “serial entrepreneur and ultravagabond”, offers a step-by-step guide on how to outsource your entire life using overseas ‘virtual assistants’. He describes his personal story of how he streamlined both his personal and work life, enabling him to add more zeros to his salary as well as spending the majority of his life on holiday. While your family may not appreciate birthday cards written by your Indian personal assistant ‘Honey’, I certainly know a few people who would see the immediate benefits in this. However, somewhere along the way the line blurs between having an assistant do a bit of extra research for a short article and defrauding your company while opening up security breaches.
The fact is that companies will always rely on others to provide services those others are expert in, companies that can do things better, quicker or cheaper than can be done in-house. Good working relationships are key, where both sides are clear and open with each other about what they require and expect, and have a good understanding of the business itself.
Posted: Tuesday, January 15th, 2013
Last year we got involved in a project to bring high-speed broadband to a rural community in Hampshire, as part of a number of trials to evaluate what technology could be used to serve residents in a remote pocket of the country. Interestingly, the villages of Little London and Smannell were a stone’s throw from a new housing development being served with a fibre to the premises (FTTP) product from Independent Fibre Networks Ltd, making it a good location to test against.
What was interesting about this project was the use of fibre to the cabinet (FTTC) for Little London and a wireless solution for Smannell, ensuring that all the houses and local businesses were served. The use of multiple technologies meant we were able to maximise the budget while ensuring nobody was left out. This, along with our Service Exchange Platform, meant that the solution also delivered choice to the residents, so they had a number of ISPs to choose from to deliver internet to their homes.
While the final speeds still aren’t near FTTP, they are faster than those in most urban areas and a huge increase over the previous ADSL service. The film was made as part of a look into broadband in the UK and was shown this month on BBC South.
Posted: Thursday, December 20th, 2012
Fluidata are delighted to announce that we have completed a £2.5 million upgrade on our network on time and on budget. After months of work and planning, it’s fantastic to have concluded this project and we can now start to deliver the benefits of the new network to our clients.
The network now operates a hybrid core of Juniper MX and Cisco ASR hardware, providing a switching and routing platform capable of supporting 100 Gb/s wavelengths with true MPLS/VPLS support.
The network now also spans 10 UK data centres and supports 16 carriers, allowing us to offer an unrivalled choice of services to both our direct and wholesale clients. With the network expected to incorporate more carriers in 2013, we are building a platform unique in the industry in its capacity and diversity. Already a number of PWAN customers have been migrated onto the new platform, and new fibre services are making the most of the multipoint-to-multipoint functionality.
We believe this upgrade provides us with an improved fabric to support our existing and future requirements; the network is more scalable, more resilient and easier to manage. Furthermore, we’re genuinely excited by the enhanced MPLS and VPLS capabilities, which allow us to offer next-generation WAN infrastructures at an affordable price point.
Posted: Thursday, November 29th, 2012
Digital Region has announced that it is carrying out a phased network upgrade, replacing the cards in certain exchanges. This will enable customers to potentially receive more than the 70 Mb/s download currently achievable using FTTC technology.
This will differ per customer/copper connection as the usual copper caveats apply and it will all depend on what the individual line is capable of achieving at a stable sync rate. It will also only be achievable if the customer is served by an exchange that is part of the network upgrade.
The upgrade will only affect existing and new connections at the higher end, allowing a line to achieve higher speeds if it is capable of them. Digital Region is planning phase 1 of the network upgrade next month.
Posted: Wednesday, November 21st, 2012
During last month’s Superstorm Sandy, New York datacentres, like so much of the city’s infrastructure, felt the havoc-wreaking power of the storm’s brutal force. Datacentres are built with natural disasters in mind, and the storm gave DC providers an unwelcome opportunity to put their backup systems to the test; the success these companies had in dealing with such forces of nature was mixed.
Major providers such as AT&T stood confident in the knowledge that their $600m investment, including 320 mobile technology trailers and fuel tankers, was ready to be positioned where needed. Further preparations to increase wireless capacity were made to see out the full strength of the storm. Some datacentres also deployed generators in the areas expected to be hit by Superstorm Sandy and placed employees in the considerably safer eastern region, on standby or in hotels close to the datacentre, to ensure a fast response when necessary.
Others fared less well, seemingly waiting until mid-storm to execute their recovery plans and switching to backup power in the middle of the event. Datacentres positioned within regional and local FEMA flood zones were reduced to watching their backup systems fail, as flooding crippled diesel pumps based in high-risk areas such as basements and prevented fuel from being pumped to the generators.
In the next few months experts will ascertain the long-term impact of the storm on datacentre systems, and no doubt ways to improve resilience will be put in place. These facilities are highly controlled and designed to cope with disasters, but for many the storm effectively broke through their cocooned environment; one datacentre even experienced temperatures rising above 100 degrees Fahrenheit.
The loss of connectivity across New York had a significant impact on organisations’ internet access and web presence; Huffington Post, Gawker, Gizmodo and BuzzFeed all went down during the storm when their web hosting company’s infrastructure was flooded.
In many ways Sandy illustrated just how much we have come to rely on the internet. Record numbers of Americans turned to online video to view news updates and eye-witness accounts, and social media was abuzz with the latest news on the storm and the resulting crisis. For those without access, though, whether individuals or organisations, an essential service, a utility if you like, was lost. Of course the storm also illustrated that, as important as connectivity is to us, like much of the infrastructure we rely on it is still fallible in the face of these types of disasters.