Posted: Thursday, April 18th, 2013
Cyber war is, we are told, happening all around us with increasing frequency. However, it doesn't normally (touch wood) affect the average man in the street – until last month, that is, when millions of ordinary Internet users were caught in an ugly crossfire between warring companies, suffering delays and disruption to their services.
The target of what became the largest DDoS attack in history (peaking at around 300 Gb/s) was Spamhaus – an anti-spam organisation whose practices and methods have made them unpopular in the shadier corners of the internet. The attack began on March 18th, fully saturating Spamhaus' connection to the rest of the Internet, and came close to knocking their site offline. If not for the intervention of Cloudflare (who provide protection against such attacks) it likely would have done; Cloudflare have published their own account of the 'rescue'.
The Spamhaus DDoS attack may be the biggest to date, but it does not stand in isolation; rather, it is the latest in a long line of recent incidents. American Express and HSBC fell victim to large-scale attacks last year, and it's a trend security vendor Kaspersky expects to continue: "In general, attacks of this type are growing in terms of quantity as well as scale. Among the reasons for this growth is the development of the Internet itself (network capacity and computing power) and past failures in investigating and prosecuting individuals behind past attacks."
Another trend we are witnessing is cyber criminals exploiting a fundamental feature that allows us to use the internet – DNS. The Domain Name System converts a name into an IP address: your computer asks a DNS server what the address is, and the chances are that server won't know the answer itself, so it will fetch it for you from the authoritative servers for the domain. Once it has the answer, it replies to the original sender. These 'recursive' DNS servers are the lifeblood of how we use the internet; without them you would have to memorise every IP address!
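To make the mechanism concrete, here is a minimal sketch of querying a recursive resolver from Python. It assumes the third-party dnspython library (version 2.x) is installed; the resolver address and hostname are just examples:

```python
# Minimal recursive-DNS lookup sketch (assumes dnspython 2.x is installed).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]  # an example recursive resolver

# The recursive resolver does the legwork: if the answer isn't cached,
# it asks the authoritative servers on our behalf and relays the result.
answer = resolver.resolve("www.spamhaus.org", "A")
for record in answer:
    print(record.address)
```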
However, there are thousands of 'recursive' DNS servers out there which will accept queries from any IP address. If spoofed DNS packets are sent to those unsecured servers, they can be used in what is known as a DNS amplification attack: a query of only a few dozen bytes can elicit a response of 3 or 4 KB, as much as 100x the traffic the attacker sent. This means that even with a relatively small number of nodes the bandwidth hit can be enormous. Combating these attacks is possible, but the way in which we do so may hinge on the answers to much broader questions about the future of the internet and, in particular, who governs it.
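The arithmetic behind that amplification is worth sketching out. The figures below are illustrative assumptions rather than measurements from the Spamhaus attack:

```python
# Back-of-the-envelope DNS amplification arithmetic (illustrative figures).
query_bytes = 40        # a small spoofed UDP query
response_bytes = 4000   # a large response, e.g. a query returning big records
amplification = response_bytes / query_bytes  # ~100x

# Bandwidth an attacker would need to generate a 300 Gb/s flood:
target_gbps = 300
attacker_gbps = target_gbps / amplification
print(f"amplification factor: {amplification:.0f}x")
print(f"spoofed query traffic needed: ~{attacker_gbps:.0f} Gb/s")
```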
Looking at the Spamhaus attack, it would appear that both unsecured DNS (by design) and unsecured DNS (by misconfiguration) were responsible for the amplification. One way of nullifying this would be for all ISPs to allow only their own customers' IPs to query their DNS servers (as we do at Fluidata); however, the processing overheads deter many from doing so. As it stands, customers also have the option of building their own recursive DNS servers on their own infrastructure, moving DNS outside the ISP's responsibility and increasing the potential for the sort of misconfiguration that can be exploited for malicious purposes.
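One way to gauge whether a given server is part of the problem is to send it a recursive query from an outside address and see whether it answers with the 'recursion available' flag set. A minimal sketch, again assuming dnspython, with a hypothetical server address:

```python
# Sketch: test whether a DNS server answers recursive queries from
# arbitrary clients, i.e. is an "open resolver". Assumes dnspython 2.x.
import dns.exception
import dns.flags
import dns.message
import dns.query

SERVER = "192.0.2.53"  # hypothetical server under test

query = dns.message.make_query("example.com", "A")  # RD flag set by default
try:
    response = dns.query.udp(query, SERVER, timeout=2)
    if response.flags & dns.flags.RA and response.answer:
        print("open resolver: answered a recursive query from a stranger")
    else:
        print("recursion refused or unavailable")
except dns.exception.Timeout:
    print("no response (query possibly filtered or server down)")
```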
In theory, ISPs could form a united front against DDoS attacks of this nature by insisting that customers use only their recursive DNS servers and ensuring that those servers are secure. To increase security further, BCP 38 could also be deployed, providing filtering on every edge port so that customers cannot spoof traffic from their links. However, a move to a more regulated system would rely (if it were to be truly effective) on cross-national coordination, and would likely meet opposition from service providers who do not wish to incur the processing overheads associated with such measures.
Overcoming that opposition (for example, by turning regulation into something more akin to legal statute) would inexorably carry this issue into the contentious territory of who governs the internet, who polices it and whether anybody has the right to do so – a proverbial Pandora's box with far-reaching consequences for subjects ranging from security to freedom of speech, the right to privacy and the debate over the openness of the web. Given this, raising awareness around responsible DNS use seems the most viable course of action; the legacy of the Spamhaus attack might just be to encourage people to think a little more about it.
Posted: Monday, January 28th, 2013
If, like the vast majority of the population, you’ve never set foot inside a datacentre, then they may well be a bit of a mystery to you – large, nondescript buildings hosting the mystical cloud.
Of course you may have seen photos of the interior of a facility, but if you have – in magazines like Wired or the Economist – it's probable you've glimpsed the facilities of Google or Facebook: glossy, shiny premises with row upon row of nicely colour-coded servers, routers and switches all working 24/7.
Datacentres for major corporations like Google (who have seemingly limitless budgets) are one thing, but how do "real" businesses find datacentre space, and what should they be looking for?
This article has not been written to harp on about the cloud and its benefits; that subject has been exhausted almost as thoroughly as the word 'cloud' has been printed in marketing campaigns. Rather, it aims to break through the marketing jargon and serve as a useful guide to understanding what sort of datacentre would be right for your business.
For many of us, irrespective of our technical qualifications, reading a datacentre specification sheet can be a most confusing exercise. In many ways it almost seems as if you are studying maths; the sheets are riddled with algebraic equations and terminology such as 2N power redundancy, N+1 cooling, VESDA detection and FM200 gas suppression…
Working on a recent project at our newest colocation facility in Manchester, Joule House, I've managed to gain some understanding of the algebra, and with it what businesses should be looking for in a facility.
To begin with, it's important to understand that power and cooling are delivered using generators and refrigerators, essential to keeping your equipment working 24/7. The total number of generators or refrigerators needed to run the facility is called "N". N is the optimum number but offers no resiliency: if a generator were to fail, you would lose power. What many facilities therefore do is introduce an extra, fully redundant generator; this is referred to as N+1. If the total number of generators required (N) is 1 then you have 100% redundancy; however, if you require three generators then you have 33.33% redundancy.
The next very familiar equation is N+2. This follows the same principle and delivers two additional generators or refrigerators. If N=1 you have 200% resiliency; if N=4 you have 50% resiliency. What I am hoping to show here is that, because of how datacentres report resiliency, one N+1 facility might be very different from another. However, as we spec up the resiliency we get to 2N. This is the first point at which you can be sure of the resiliency, as 2N means the facility has double the number of generators and refrigerators needed to operate. Therefore, regardless of whether the facility operates 2 generators or 200, they have confirmed they have double the capacity.
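To make that arithmetic explicit, here is a tiny sketch of the percentages quoted above (the helper name is mine, not an industry term):

```python
# Spare capacity arithmetic for N+k and 2N schemes (illustrative helper).
def spare_capacity_percent(n_required: int, spares: int) -> float:
    """Spare generators/refrigerators as a percentage of those required."""
    return 100.0 * spares / n_required

print(spare_capacity_percent(1, 1))      # N+1 with N=1  -> 100.0
print(spare_capacity_percent(3, 1))      # N+1 with N=3  -> 33.3...
print(spare_capacity_percent(1, 2))      # N+2 with N=1  -> 200.0
print(spare_capacity_percent(4, 2))      # N+2 with N=4  -> 50.0
print(spare_capacity_percent(200, 200))  # 2N: always 100%, at any scale
```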
So what should I look for?
- Your datacentre should be a custom-built facility with demonstrable security.
- Your datacentre should be away from water, and the operator should have risk assessments proving the site is not at any flood risk (as New York proved during Hurricane Sandy).
- You should be asking if the facility is “Carrier Neutral”. Some datacentres are operated by a single carrier who then monopolise the connectivity. This may cause you issues if you take services from other providers or wish to create a mirrored datacentre setup in the future for further resiliency.
- Geography: Most colocation hardware is designed to be managed remotely, so geography should not be a major concern. In the event of a reboot being needed or a cable needing to be run, you should be able to use the remote hands facility. Understand the remote hands procedure and do not allow geography to limit your choice.
- Touring and Security: Your datacentre will hold your organisation's most sensitive data; this may be your CRM or billing platform. So when you tour the facility (which I strongly recommend you do), be mindful of the security: did they do thorough security checks? Are the suites secure? Who has access to your racks? If you are not satisfied with the security, this is not the facility for you.
- Restrictions: It is important to understand what the limitations of the datacentre are. Many datacentres have strict cabling and rack policies. These are not necessarily bad: they ensure security and continuity, allow for faster time to repair and reduce the chance of accidental cable damage. You have the most flexibility before the racks are installed, so do your capacity planning thoroughly and talk about your three-to-five-year plans. Future racks may be located in a completely different section of the building, so understand what impact this would have.
- Advice: Always ask for advice and use the pre-sales resources on offer.
I hope this proves useful to those wishing to understand more about datacentres and in particular to anyone confused by DC terminology.
Posted: Thursday, December 20th, 2012
Fluidata are delighted to announce that we have completed a £2.5 million upgrade of our network, on time and on budget. After months of work and planning, it's fantastic to have concluded this project, and we can now start to deliver the benefits of the new network to our clients.
The network now operates a hybrid core of Juniper MX and Cisco ASR hardware, providing a switching and routing platform capable of supporting 100 Gb/s wavelengths with true MPLS/VPLS support.
The network now also spans 10 UK datacentres and supports 16 carriers – allowing us to offer an unrivalled choice of services to both our direct and wholesale clients. With the network expected to incorporate more carriers in 2013, we are building a platform unique in the industry in its capacity and diversity. A number of PWAN customers have already been migrated onto the new platform, and new fibre services are making the most of the multipoint-to-multipoint functionality.
We believe this upgrade provides us with an improved fabric to support our existing and future requirements: the network is more scalable, more resilient and easier to manage. Furthermore, we're genuinely excited by the enhanced MPLS and VPLS capabilities, which allow us to offer next-generation WAN infrastructures at an affordable price point.
Posted: Wednesday, May 16th, 2012
Fluidata are delighted to announce the launch of our new bonded FTTC service.
The first PureFluid PULSE service went live with a trial customer last month and, as you will see from this case study, we now have every reason to offer it to any client or prospective client lucky enough to be able to benefit from it. The PureFluid PULSE service can aggregate up to three 80 Mb/s down, 20 Mb/s up PULSE circuits, delivering superfast speeds of up to 200 Mb/s down and 60 Mb/s up. As with all PureFluid solutions, PureFluid PULSE also comes fortified with true resilience, via either an additional DSL line (from a separate carrier) or 3G connectivity, delivered over the same IP address.
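For the curious, the headline figures follow from simple aggregation arithmetic. The efficiency factor below is an assumption for illustration, not a published PureFluid PULSE figure:

```python
# Illustrative bonding arithmetic; BOND_EFFICIENCY is an assumed overhead
# factor, not a published figure. Upstream is quoted as the raw sum.
LINES = 3
DOWN_PER_LINE, UP_PER_LINE = 80, 20  # Mb/s per FTTC circuit

BOND_EFFICIENCY = 0.85  # assumed loss to bonding/encapsulation overhead
down = LINES * DOWN_PER_LINE * BOND_EFFICIENCY  # ~204 Mb/s, i.e. "up to 200"
up = LINES * UP_PER_LINE                        # 60 Mb/s
print(f"~{down:.0f} Mb/s down, {up:.0f} Mb/s up")
```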
PureFluid PULSE can claim to be a genuine leased line alternative, offering speeds as good as or even better than fibre, along with a high service level guarantee.
Posted: Monday, April 23rd, 2012
Fluidata Olympic Countdown – 95 days to go.
Resilience and capacity planning are at the heart of our core network design. Preparing for the 2012 Olympics has been a relatively straightforward process for us because the network has been inherently built to cope with huge volumes of traffic and failure scenarios. In many ways 2012 is a simple extension of the good practices we already employ to serve our customers in a resilient and uncontended fashion.
All DSL services are always mapped from suppliers into at least two independent nodes on our core network. Key ancillary services (RADIUS, DNS and SMTP) are similarly spread across separate geographical locations, and all datacentres connect back to two others via diverse dark fibres.
In the event of a blackout at one of our datacentres, all DSL-based services would automatically fail over to one of several other sites and carry on working.
Our customers typically demand high bandwidth as well as uptime, so capacity is carefully engineered, with low thresholds set to trigger upgrades. On our DSL platforms, routers are kept at no more than 30% usage during peak hours, with the average across the network typically 15-20%. This ensures that traffic re-routed during a failure can be absorbed at other sites with ease.
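As a minimal sketch of how such a trigger might work (the 30% ceiling is from this post; the router names and readings are hypothetical, and collection of the data is assumed):

```python
# Sketch of a threshold-based capacity upgrade trigger. The 30% peak
# ceiling is from the post; routers and readings here are hypothetical.
PEAK_CEILING = 0.30  # no DSL router should exceed 30% usage at peak

def needs_upgrade(peak_utilisation: float) -> bool:
    """Flag a router for upgrade well before it becomes congested."""
    return peak_utilisation >= PEAK_CEILING

peak_samples = {"dsl-router-a": 0.18, "dsl-router-b": 0.31}
for router, peak in peak_samples.items():
    status = "trigger upgrade" if needs_upgrade(peak) else "ok"
    print(f"{router}: peak {peak:.0%} - {status}")
```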
Ethernet, colocation and leased line customers connect and route directly via our MPLS core network backbone, and typically place a higher demand on the network. This core is also the fabric which connects our datacentres together, so we use multiple 10 Gb/s wavelengths on our own WDM equipment to provide extremely scalable bandwidth. In preparation for the Olympics we are operating with peak backbone usage typically between 5 and 10%, and have plumbed in an additional 40 Gb/s of Internet transit routing capacity – enough to comfortably serve huge customer demands whilst also cushioning large DoS attacks.
And because demand for network bandwidth is increasing at an exponential rate, no sooner will the Olympics be over than work will begin on a £2.4M upgrade of our network, designed to deliver a fabric for the most demanding applications for several years to come.
Posted: Friday, April 13th, 2012
Fluidata Olympic Countdown – 105 days to go.
The Olympics is affecting businesses up and down the country in myriad ways, from those directly working on the planning and logistics to London retailers likely to capitalise on the increase in footfall. As a London-based business and a telecommunications provider, Fluidata are also witnessing many consequences of the games' imminent arrival in the capital.
Disruption: Many fibre provisions in London have been severely disrupted, starting as early as last autumn. An embargo on all planned street works affecting key parts of the London Olympic Route Network (ORN) was implemented on 1st March 2012 and will run until the 30th of September. Fibre orders across London will also likely see delays after this date, as carriers begin clearing the backlog of work. Needless to say, these disruptions have impacted Fluidata's ability to deliver a number of planned provisions; however, with a strong DSL portfolio at our disposal we have also seen growth in 'leased line alternative' orders, as companies plump for temporary next-best solutions.
Demand: We have witnessed an increase in demand for connectivity solutions in London. A number of luxury hotel chains have invested in improved connectivity for the games, expecting both an increase in custom and greater user demand to view the games on laptops and mobile devices.
Travel: With travel in London likely to be severely disrupted, we are also witnessing organisations looking for remote working solutions, such as reliable home connectivity, 3G connectivity and company-wide voice and video provisions. Of course, with more people working from home, upstream capacity in the office also needs to improve, once more resulting in increased demand for improved connectivity.
Those companies expecting a full complement of staff in the office are also considering improvements or modifications to their connectivity. As Fluidata has demonstrated in previous months, more and more employees use the work internet connection for viewing events like Wimbledon. The Olympics will be no different, and IT managers are investigating bumping up connectivity temporarily or, if they are a bit mean, locking down the likes of BBC iPlayer.
Trouble?: Many experts in our industry are expecting the UK IP infrastructure to be hit with a 'deluge of data' as more people than ever before watch, keep up to date with, and talk about the games over connected devices. Fluidata have made improvements to our network and, though we expect more traffic than ever before, we are confident of coping with the demands and showcasing our network as one of the best in the industry. How everyone else copes remains to be seen.
We will not know the true impact of the games on Fluidata until after their conclusion, but thus far they are throwing up as many opportunities for us as potential challenges.
Posted: Friday, January 6th, 2012
Ok, so I live out in the country and I’m in the privileged position of getting almost 3Mb/s on my ADSL line at home.
But is speed the only problem that rural communities (and councils) face? I'm not convinced. Yes, it takes me an age (relatively) to download updates and programs, but I can equally go for a walk (or to the pub) while I wait for whatever Microsoft update I need.
But streaming, that's where my problem lies – or, more precisely, any need for constant low-latency connectivity to a server. Now, most people will ask why that is important. Let me explain: the reason we all need more speed is the user experience at home or in the office. Part of that user experience is how quickly a piece of data gets from over there to me at my computer. But if you need a constant stream of data – say video and sound, say around the 27th of July this year – and you keep dropping packets, you're not going to be a happy user.
I'm pointing out the problem for a recreational user, but what if you're talking about a home user or small office using Citrix or a thin client solution? This is where the problem often gets very worrying. For those of you who don't know: as soon as a packet is dropped on a Citrix session, the client software has a bit of a moment and decides it needs to check the connection to the central server, meaning the user hopes they saved the last piece of work they were doing and logs back on. This is the problem of latency and packet loss, and unfortunately for the end user and the ISP there is a whole host of things that can cause it, from dodgy old routing equipment (on the ISP's core network) to the end user not having a good enough CPE.
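If you want to see whether latency or loss is the culprit, a rough first check can be done from the client side. Here is a minimal sketch using repeated TCP handshakes as probes (the target host is just an example; ICMP ping would be the more usual tool):

```python
# Rough client-side latency/loss probe: time repeated TCP handshakes
# and count failures. The target endpoint is an example placeholder.
import socket
import time

HOST, PORT, SAMPLES = "example.com", 443, 20

rtts, failures = [], 0
for _ in range(SAMPLES):
    start = time.time()
    try:
        # A fresh TCP handshake approximates one network round trip.
        sock = socket.create_connection((HOST, PORT), timeout=2)
        rtts.append((time.time() - start) * 1000)  # milliseconds
        sock.close()
    except OSError:
        failures += 1  # treat a timeout or refusal as a lost probe
    time.sleep(0.5)

if rtts:
    avg = sum(rtts) / len(rtts)
    print(f"RTT min/avg/max: {min(rtts):.1f}/{avg:.1f}/{max(rtts):.1f} ms")
print(f"failed probes: {failures}/{SAMPLES}")
```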
So are there solutions to this issue? Well, the short answer is "yes"; however, the issues are often specific to each end user because of the number of things that can affect latency. If you are having problems and you don't use our connections, come and have a chat with one of our consultants, who should be able to help.
On a final point (not in an intentionally smug way, but…) I was lucky enough to upgrade my line at home to a BURST connection last year, so high latency and packet loss can't be my excuse for why I'm so rubbish at Battlefield 3!
Posted: Tuesday, October 11th, 2011
Millions of BlackBerry owners across Europe, the Middle East and Africa have been left without services following a large server crash yesterday. The problem appears to have originated in a datacentre in Slough which handles BlackBerry services for the affected regions. Unlike other smartphone services, BlackBerry phones rely on a centralised internet service provided by RIM – making them susceptible to non-carrier outages.
Users around the world had their work, business and social lives severely affected yesterday after the server crashed and their phones were rendered dysfunctional. BlackBerry have sold over 100 million handsets worldwide to date, and RIM (BlackBerry's manufacturer) claimed to have added 1 million subscribers in the EMEA region alone during the month of July. It's believed the EMEA region was the area most affected by yesterday's outage.

This highlights the lack of control for businesses and their IT teams, who were unable to fix the problem because the servers reside in the cloud, outside their remit. As most businesses need to consider their disaster recovery options, I believe it will have come as a surprise to many just how reliant they are on RIM to ensure the services work correctly. It also raises other issues of security and secrecy when it comes to email, and RIM's ability to censor services at the request of any government. Ask most business people who carry a BlackBerry about their reliance on it, and I think most would say it has replaced their laptop and is now their primary method of communication – definitely a concern for the IT department.
The timing of the outage couldn't have been better for BlackBerry's rival Apple. The new iPhone 4S received a mixed reception when unveiled last week, but it will be in prime position to benefit from any disgruntled, want-away BlackBerry customers.
Posted: Tuesday, October 4th, 2011
According to reports yesterday, over 5% (275,000) of BT customers lost connectivity due to a power failure at a major BT exchange in the Birmingham area. Connections started to drop at 13:00, and most residential customers began to see their services logging back on from 15:00 onwards.
Surprisingly, however, it was business connections with higher SLAs and uptime assurances that had to wait even longer for services to return to normal. Reports from the BBC indicate that they did not see their connectivity restored for a "slightly longer period". Whilst we do not know what constitutes 'slightly longer', we do know that even two hours with no internet service is two hours too long in the business world. The fact that it happened during the day, when most businesses would notice and fewer consumers would, probably didn't help matters.
While an SLA is a demonstration of an ISP's confidence in their network, it will mean little, as some BT customers found out, during outages where it takes time to restore services. The problem is that any compensation is going to be insignificant compared to the frustration and potential lost business. Much better to invest in a technology that is delivered over two networks, so you aren't relying on just one – such as our PureFluid or ADVANCE products, which always use multiple carriers to ensure maximum uptime.
Posted: Friday, July 1st, 2011
I remember when a 2Mb/s ADSL connection was called future proofing, but you need to remember a lot of businesses at that time were still using 64Kb/s ISDN lines.
In the technical world there are so many aspects that change constantly and put more pressure on IT Directors and Managers. From the speed of a computer (Moore’s Law), to internet bandwidth and even protocols (IPv6) – change is the only constant.
So what do we advise our customers? In part it depends on the size and needs of the company, but one thing we preach consistently, to big and small alike, is: keep your contracts flexible. For a smaller SME we would suggest something as simple as a short (3-month) contract on our ADSL products. For larger PWAN and CORE customers we would advise taking a one-year contract; although it means we can't spread the capex cost over as long a period, it allows the IT Directors and Managers who work with us to stay flexible and upgrade or downgrade their solutions. Large companies also have the added problem of cost versus flexibility: do they go for the (on the face of it) cheaper-per-year 3 or 5 year deals, or the flexible shorter contracts? Our experience suggests the latter always pays off when it comes to renegotiating in years 2 and 3 – as a rule, technology goes down in price, not up.
This also has the added advantage that both the account manager and our service have to be excellent – first time and every time.