Posted: Monday, May 20th, 2013
This year when we did Tough Mudder I was able to bring along my new GoPro camera to capture some of the action. While I still have much to learn about holding the camera steady and how to use video editing software, the footage does give a flavour of what the event is like.
Posted: Thursday, May 9th, 2013
Last Saturday, 27 Fluidata employees completed Tough Mudder North London, a 20km hardcore obstacle course designed by British special forces. Having doubled the number of company participants in 2013, we raised over £7,500 for Parkinson's Disease UK and tested our mental grit, stamina, strength and camaraderie for the second year running.
This year we entered as the biggest group Tough Mudder UK has seen to date, and we successfully completed the challenge without any major injuries (only a few bruises). The relentless training, long-distance running and dieting helped to prepare the team to run through mud, electricity, fire and an ice-filled plunge pool. The atmosphere in the office has been one of high spirits, and the team will continue to train over the coming months to keep in shape and prepare for the next big challenge.
The question is what is next for the Bandwidth Bandits?
Posted: Wednesday, May 1st, 2013
In death, as in life, Margaret Thatcher divided opinion. Obituaries were penned celebrating the 'saviour of the nation', while others celebrated the demise of a merciless class-war commander. What we can perhaps all agree on is that she was fundamental to shaping the world we live in now; as citizens, as workers, as businesses – our lives, for better or for worse, owe much to her actions. Of course in some quarters her legacy is more marked than in others; unbeknownst to many, the telecommunications industry is one such place.
The privatisation of BT in 1984 represents a watershed moment both for Thatcher and her government and for our industry, whilst also revealing much about how the 'Thatcher Revolution' gathered pace. Before 1981, all telecoms services in Britain were provided through Post Office Telecommunications (known as BT from 1980). Widely considered a 'natural monopoly' industry (due to the high infrastructure costs associated with it), liberalisation of the market had been given little consideration up until the late '70s. However, against a backdrop of public dissatisfaction with increasing delays for telephone line installation, and with new technologies reducing the capex required to enter the industry, this was soon to change under the first Thatcher government.
The 1981 Telecommunications Act separated BT from the Post Office, and this was followed in 1982 by BP, Cable and Wireless and Barclays setting up Mercury – injecting competition into the marketplace. However, at neither of these junctures is there evidence that privatisation was the ultimate vision of Thatcher or her government. Denationalisation was still viewed as a radical and risky policy against the backdrop of thirty-plus years of consensus on state ownership, and state sales prior to '84 reflected this tentativeness, in that they were small and discreet. The reasons for the sale coming to fruition were for the most part pragmatic: responses to the problems of the time, formed gradually through multi-stakeholder negotiation.
Modernising BT was a key objective of the move to separate it from the Post Office; financing that modernisation, however, was a challenge. In 1983 the government's finances were deep in the red, with a deficit of around 4% of GDP, and nationalised industries were competing with key services (health, education etc.) for the Treasury's limited coppers. Transferring assets into the private sphere not only opened up options to find investment from other sources (i.e. the City) but also raised money for the public purse.
Whilst the primary motivation for the sale of BT was raising funds for future investment, public share ownership was also attractive to Thatcher for both pragmatic and ideological reasons. Knowing that both Labour and the unions were likely to oppose privatisation, the government was able to offer BT employees pre-registered share options (which 90% took up) as a populist bulwark against any attempts to reverse the trend. In tandem with the 'Right to Buy' policy, it also began to shape the neoliberal narrative Thatcher was developing on 'rolling back the frontiers of the state' and empowering the people through private equity purchases.
The sale of BT encapsulates how Thatcher's early, pragmatically reasoned social and economic policies evolved into a political ideology. Arriving at the conclusion to privatise was a policy-building process spread over a number of years, which always kept a firm eye on what was considered acceptable. When, in November 1984, more than 50 per cent of BT was sold to the public through a share offering, it became the largest and most successful state-enterprise privatisation in history. It was also an almost immediate political success: popular with the millions who purchased shares, invigorating the UK stock exchange (very much the beginning of the 'Big Bang' in the City) and raising money for the government. The sale would pave the way for a further 40 mass-market sell-offs during the Thatcher years and irrevocably altered the relationship between state and market. In many ways the embryo of 'Thatcherism' was also hatched during the process – setting a template that one could argue has been followed by all subsequent British governments, as well as many others internationally.
As for the impact on telecommunications, opening up the sector allowed other operators to enter the market, challenge BT and invest in new technologies (such as mobile and Internet services). Of course, without regulation BT's 'natural monopoly' (i.e. ownership of the underlying infrastructure) would have made for an uneven playing field, so Oftel (later Ofcom) was established at the point of denationalisation to introduce price caps and drive up BT's efficiency. The regulator oversaw further moves to liberalise the market in 1991, when independent companies were authorised to bulk-buy telecommunications capacity and resell it in packages to customers, and again in 2003 when the telephone exchanges were opened up to LLU operators.
As of 2012, there were over 200 fixed telecommunications providers, over 100 mobile service providers and over 1,000 Internet service providers operating in the UK. For most consumers there is a wide array of services and providers to choose from, and value for money to be gained from doing so. But not for everyone. Many areas of Britain (mostly rural) are without access to fast broadband; for those on the wrong side of the 'Digital Divide', in an increasingly digital world, there are serious social and economic consequences for communities and individuals. The reasons for this divide? It's hard not to arrive at the conclusion that privatisation constitutes the root cause, given that historically operators have refrained from investing in areas where they are unable to identify significant ROI. The establishment of BDUK (Broadband Delivery UK) in 2009 represents a move by the government to intervene in the market and initiate state-led solutions to this problem.
When Margaret Thatcher set the wheels in motion for the liberalisation of telecommunications, she did so with intentions more modest than radical. Just a few years later, she would find herself presiding over change which would not only revolutionise the telecommunications industry, but which constituted a seismic shift in the relationships between state, individual and market, with both immediate and long-lasting economic, political and social consequences across the UK.
When people now debate Thatcher's legacy, they debate the merits of policies and philosophies which were sharpened, developed and ultimately given momentum by those changes to our industry, over 30 years ago.
Posted: Thursday, April 18th, 2013
Cyber war is, we are told, happening increasingly all around us. It doesn't normally (touch wood) affect the average man in the street – until last month, that is, when millions of ordinary Internet users were caught in an ugly crossfire between warring parties, suffering delays in services and disruption to access.
The target of what became the largest DDoS attack in history (up to 300Gb/s) was Spamhaus – an anti-spam organisation whose practices and methods have made them unpopular within shadier corners of the internet. The attack, which began on March 18th, fully saturated Spamhaus' connection to the rest of the Internet and came close to knocking their site offline. If not for the intervention of Cloudflare (who provide protection against such attacks), it likely would have done. Cloudflare's 'rescue' story is below.
The Spamhaus DDoS attacks may be the biggest to date, but they do not stand in isolation; rather, they are the latest in a long list of recent incidents. American Express and HSBC fell victim to large-scale attacks last year, and it's a trend security vendor Kaspersky expects to continue: "In general, attacks of this type are growing in terms of quantity as well as scale. Among the reasons for this growth is the development of the Internet itself (network capacity and computing power) and past failures in investigating and prosecuting individuals behind past attacks."
Another trend we are witnessing is cyber criminals exploiting a fundamental feature that allows us to use the internet – DNS. The Domain Name System converts names to IP addresses: your computer asks a server what the IP address for a name is. The chances are that the server you ask won't know the answer, so it fetches it for you from the authoritative servers and then replies to the original sender. These 'recursive' DNS servers are the lifeblood of how we use the internet; without them you would have to memorise each IP address!
However, there are thousands of recursive DNS servers out there which will accept queries from any IP address. If spoofed DNS packets are sent to those unsecured servers, they are susceptible to what is known as a DNS amplification attack – where a request of only a few kilobytes can generate a response as much as 100x that size. This means that even with a relatively small number of nodes the bandwidth hit can be enormous. Combating these attacks is possible, but the way in which we do so may hinge on the answers to much broader questions about the future of the internet and, in particular, who governs it.
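To make that asymmetry concrete, here is a minimal sketch in Python (standard library only; the 3,000-byte response size is an illustrative assumption, not a measured figure) that builds a raw DNS query by hand and compares its size to the answer an open resolver might return:

```python
import struct

def build_dns_query(name, qtype=255):
    """Build a raw DNS query packet. qtype 255 ('ANY') is favoured in
    amplification attacks because the answers it triggers are large."""
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN
    return header + question

query = build_dns_query("example.com")
print(len(query))            # 29 bytes on the wire

# With the source IP spoofed to the victim's address, an open resolver
# sends its (much larger) answer to the victim instead of the attacker.
assumed_response = 3000      # illustrative size of a large ANY answer
print(f"{assumed_response / len(query):.0f}x amplification")
```

A 29-byte request triggering a multi-kilobyte answer is the whole economics of the attack: the attacker spends almost nothing, the resolver does the heavy lifting.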
Looking at the Spamhaus attack, it would appear that both unsecured DNS by design and unsecured DNS by misconfiguration were responsible for the amplification of the attack. One way of nullifying this would be for all ISPs to allow only their own customers' IPs to query their DNS servers (as we do at Fluidata); however, the processing overheads deter many from doing so. As it stands, customers also have the option to build their own recursive DNS servers on their own infrastructure, moving DNS outside of the ISP's responsibility and increasing the potential for misconfiguration – which can be exploited for malicious purposes.
In theory ISP’s could form a united front against DDoS attacks of this nature; through insisting that customers only use their recursive DNS servers and ensuring that those servers are secure. To increase security further BCP-38 could also be deployed – providing filtering on every edge port so that customers cannot spoof traffic from their links. However the move to a more regulated system would rely on (if it was to be truly effective) cross national coordination and likely meet opposition from service providers who do not wish to incur the processing overheads associated with such measures.
Overcoming that opposition (i.e. by turning regulation into something more akin to legal statute) would inexorably carry this issue into the contentious territory of who governs the internet, who polices it and whether anybody has the right to do so; a proverbial Pandora's box with far-reaching consequences and considerations for subjects ranging from security to freedom of speech, the right to privacy and the debate over the openness of the web. Given this, raising awareness around responsible DNS use seems the most viable course of action; the legacy of the Spamhaus attack might just be that it encourages people to think a little more about it.
Posted: Monday, March 4th, 2013
Within the telecommunications industry we are aware of some of the external problems that can affect our last-mile access networks. During my 16 years working in these circles, I've witnessed everything from DSL slowing down due to frost, to a wireless network's poor performance being blamed on the heat.
Interestingly, it looks like researchers in the Netherlands have figured out a way to use weather-associated network problems to monitor the weather itself – in this instance, using mobile phone signal power loss to map rainfall patterns. Personally, I already monitor the rain in real time by stepping outside. However, if it means the weather forecasters can watch a rain front travel across the country and then give me a warning about it, all the better.
The system uses attenuation (signal power) differences across the mobile networks' links. The researchers cross-referenced this information with weather stations across the country and found a correlation. Off the back of that they can see the fronts as they aim for the most inappropriate place on land to dump their contents.
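The core idea – cross-referencing link attenuation against rain-gauge readings and checking for correlation – can be sketched in a few lines of Python (the numbers below are invented for illustration, not the researchers' data):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative samples: link attenuation in dB alongside gauge rainfall in mm/h
attenuation_db = [0.5, 0.6, 2.1, 3.4, 1.2, 0.5]
rainfall_mm_h  = [0.0, 0.0, 4.0, 8.5, 2.0, 0.1]

print(f"{pearson(attenuation_db, rainfall_mm_h):.2f}")  # strongly positive
```

Once a strong correlation like this is established, the attenuation readings alone become a proxy rain gauge, with one "sensor" per mobile link.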
We will see if O2, EE or Vodafone develop into weather forecasting companies in the near future. Further reading can be found here: http://environmentalresearchweb.org/cws/article/news/52322
Posted: Monday, January 28th, 2013
If, like the vast majority of the population, you’ve never set foot inside a datacentre, then they may well be a bit of a mystery to you – large, nondescript buildings hosting the mystical cloud.
Of course you may have seen some photos of the interior of a facility, but if you have, in magazines like Wired or the Economist, then it's probable you've glimpsed the facilities of Google or Facebook and witnessed glossy, shiny premises with row upon row of nicely colour-coded servers, routers and switches all working 24/7.
Datacentres for major corporations like Google (who have seemingly limitless budgets) are one thing, but how do the “real” businesses find data centre space and what should they be looking for?
This article has not been written to harp on about the cloud and its benefits; that subject has been exhausted almost as thoroughly as the word 'cloud' has been printed in marketing campaigns. Rather, it aims to break through the marketing jargon and be a useful guide to understanding what sort of data centre would be right for your business.
For many of us, irrespective of our technical qualifications, reading a datacentre specification sheet can be a confusing exercise. In many ways it almost seems as if you are studying maths: the sheets are riddled with algebraic terminology such as 2N power redundancy, N+1 cooling and VESDA gas suppression units delivering FM200…
Working on a recent project at our newest colocation facility in Manchester, Joule House, I’ve managed to gain some understanding of the algebra and with it what businesses should be looking for in a facility.
To begin with, it's important to understand that power and cooling are delivered using generators and refrigeration units, essential to keeping your equipment working 24/7. The total number of generators or refrigeration units needed is called "N". N is the optimum number but offers no resiliency, so if a generator were to fail you would lose power. Therefore what many facilities do is introduce an extra, fully redundant generator; this is referred to as N+1. If the total number of generators required (N) is 1, then you have 100% redundancy. However, if you require three generators then you have 33.33% redundancy.
The next very familiar equation is N+2. This follows the same principle and delivers two additional generators or refrigeration units. If N=1 you have 200% resiliency; if N=4 you have 50% resiliency. What I am hoping to show here is that, because of how datacentres report resiliency, one N+1 facility might be very different from another. However, as we spec up the resiliency we get to 2N. This is the first point at which you can be sure of resiliency – 2N means the facility has 2xN, or double, the number of generators and refrigeration units needed to operate. Therefore, regardless of whether the facility operates 2 generators or 200, they have confirmed that they have double the capacity.
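The arithmetic above is simple enough to sketch in a few lines of Python (`redundancy_pct` is an illustrative name, not an industry formula):

```python
def redundancy_pct(n, spares):
    """Spare capacity expressed as a percentage of the required capacity N."""
    return 100 * spares / n

# N+1: the redundancy you actually get depends entirely on N
print(redundancy_pct(1, 1))      # N=1 -> 100.0 (fully redundant)
print(redundancy_pct(3, 1))      # N=3 -> ~33.3

# N+2 follows the same principle
print(redundancy_pct(1, 2))      # N=1 -> 200.0
print(redundancy_pct(4, 2))      # N=4 -> 50.0

# 2N: spares always equal N, so redundancy is 100% whatever the scale
print(redundancy_pct(200, 200))  # -> 100.0
```

The takeaway is that "N+1" on a spec sheet tells you nothing until you also know N, whereas "2N" pins the figure at 100% regardless of facility size.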
So what should I look for?
- Your data centre should be a custom built facility with demonstrable security.
- Your data centre should be away from water and have risk assessments proving they are not at any flood risk (as New York proved).
- You should ask whether the facility is "carrier neutral". Some datacentres are operated by a single carrier who then monopolises the connectivity. This may cause you issues if you take services from other providers or wish to create a mirrored datacentre setup in the future for further resiliency.
- Geography: Most colocation hardware is created to be manageable offsite; therefore geography should not be a major concern. In the event of a reboot being needed or a cable needing to be run you should be able to use the remote hands facility. Therefore understand the remote hands procedure and do not allow geography to limit your choice.
- Touring and Security: Your datacentre will hold the most sensitive parts of your organisational data; this may be your CRM or billing platform. Therefore when you tour the facility (which I strongly recommend you do), be mindful of the security: did they do thorough security checks? Are the suites secure? Who has access to your racks? If you are not satisfied with the security, this is not the facility for you.
- Restrictions: It is important to understand what the limitations of the datacentre are. Many datacentres have strict cabling and rack policies; these are not necessarily bad, as they ensure security and continuity, allow for faster repair times and reduce the chance of accidental cable damage. You have the most flexibility before the racks are installed, so do your capacity planning thoroughly and talk about your three-to-five-year plans. Future racks may be located in a completely different section of the building, so understand what impact this would have.
- Advice: Always ask for advice and use the pre-sales resources on offer.
I hope this proves useful to those wishing to understand more about datacentres and in particular to anyone confused by DC terminology.
Posted: Friday, January 18th, 2013
This week it was revealed that a US software developer had been caught outsourcing his job, which had been earning him a six-figure salary. Working from home, he spent his days browsing Reddit and YouTube, whilst a Chinese software company worked under contract with him to carry out his work. He was paying them only a fraction of his annual wage.
While this is completely fraudulent, it throws up some very interesting debates on the nature of outsourcing. Both SMEs and larger enterprises often do not have the capacity to run all aspects of their own business, or it simply doesn’t make financial sense, and so look elsewhere to have the work carried out contractually. IT departments especially are often entirely outsourced to individuals or companies offsite or even abroad. This enables companies to have a far wider reach than their headcount or talent pool of full-time employees can offer. But when this chain extends and expands, it can be at the expense of efficiency. In a world where time is money, decisions and directions can end up taking longer, and company core values become more difficult to stick to.
Timothy Ferriss, author of The 4-Hour Workweek and self-styled "serial entrepreneur and ultravagabond", offers a step-by-step guide on how to outsource your entire life using overseas 'virtual assistants'. He describes how he streamlined both his personal and work life, enabling him to add more zeros to his salary while spending the majority of his life on holiday. While your family may not appreciate birthday cards written by your Indian personal assistant 'Honey', I certainly know a few people who would see the immediate benefits in this. However, somewhere along the way the line blurs between having an assistant do a bit of extra research for a short article, and defrauding your company and opening up security breaches.
The fact is companies will always rely on each other to provide services they are experts in, companies that can do things better, quicker or cheaper than one can do in house. Good working relationships are key, where both sides are clear and open with each other on what they require and expect, as well as a good understanding of the business itself.
Posted: Tuesday, January 15th, 2013
Last year we got involved in a project to bring high-speed broadband to a rural community in Hampshire, as part of a number of trials to evaluate which technologies could serve residents in a remote pocket of the country. Interestingly, the villages of Little London and Smannell were a stone's throw from a new housing development being served with a fibre-to-the-premises (FTTP) product from Independent Fibre Networks Ltd, making it a good location to test in.
What was interesting about this project was the use of fibre to the cabinet (FTTC) for Little London and a wireless solution for Smannell, ensuring that all the houses and local businesses were served. The use of multiple technologies meant we were able to maximise the budget while ensuring nobody was left out. This, along with our Service Exchange Platform, meant that the solution also delivered choice to the residents, giving them a number of ISPs to choose from to deliver internet to their homes.
While the final speeds still aren't near FTTP, they are faster than in most urban areas and a huge increase over the previous ADSL service. This film was made as part of a look into broadband in the UK and was shown this month on BBC South.
Posted: Monday, January 14th, 2013
Aaron Swartz, the Reddit co-founder and internet activist, has been found dead in his apartment in Brooklyn, New York; he is believed to have committed suicide. His death comes just one month before he was due to go on trial on federal charges that he stole millions of scientific journal articles from a computer archive at the Massachusetts Institute of Technology. If found guilty, it was expected that he would face 35 years in prison and a fine of up to $1 million.
The indictment alleged that in November 2010 Swartz hacked into MIT's system and downloaded nearly five million documents from a digital library (Journal Storage, or JSTOR) which users were supposed to subscribe to and pay for. Interestingly, JSTOR had refused to press charges and were publicly 'not at ease' with the prosecution. JSTOR had in fact decided to release 1,200 journals free of charge prior to the suicide.
Swartz never actually made the information available to the public.
In 2008 Swartz was involved in a similar incident, writing a program to download 20 million pages of documents from Public Access to Court Electronic Records (PACER), a database of federal judicial documents which he believed should be available to the public free of charge. On that occasion the authorities decided to take no action against him. Swartz dedicated much of his time to fighting online censorship, and his court case had become a cause célèbre for many similarly minded figures.
News of his death has resulted in an outpouring of tributes over the internet. Tim Berners-Lee, the man credited with inventing the World Wide Web, tweeted: “Aaron dead. World wanderers, we have lost a wise elder. Hackers for right, we are one down. Parents all, we have lost a child. Let us weep.”
His family also paid their tribute by adding “He used his prodigious skills as a programmer and technologist not to enrich himself but to make the Internet and the world a fairer, better place.”
Whilst the life of the troubled but extremely gifted Aaron Swartz has ended prematurely, the ongoing debate over internet censorship and ownership most certainly hasn't.
Aaron Swartz November 8, 1986 – January 11, 2013
Posted: Friday, January 4th, 2013
The gravity of January 1st 1983 continues to slip under the radar for most. Much like Danny Boyle’s nod to Tim Berners-Lee in the Opening Ceremony of the Olympics, or the work done by Bob Metcalfe in the development of Ethernet technology, the significance of “Flag Day” will be lost on those not familiar with the great breakthroughs made in the development of the Internet over the past half-century.
'Flag Day' was effectively the day the internet was born: the day when TCP/IP fully replaced the Network Control Program (NCP) as the core networking protocol for ARPAnet (the predecessor to the internet). TCP/IP ultimately created a common language for inter-network communication, amalgamating the various conventions to allow disparate networks with their own standards to communicate with one another more efficiently, reliably and securely. In particular it improved on NCP by ensuring that isolated attacks were no longer capable of bringing down an entire network. It was upon these foundations that Berners-Lee was later able to devise the World Wide Web.
Although no one individual can claim to have invented the Internet (with the exception of Al Gore!), Vint Cerf, Robert E. Kahn and the others at ARPAnet responsible for making the switch have stronger grounds than most.
ARPAnet itself was formally decommissioned in 1990, but it's impressive to think that 30 years on, the reason for the transition towards IPv6 is that we've managed to allocate nearly all of the 4.2 billion addresses TCP/IP was originally designed to support.
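That 4.2 billion figure falls straight out of IPv4's 32-bit address field, and a couple of lines of Python show why IPv6's 128-bit field removes the ceiling:

```python
# IPv4 addresses are 32 bits wide, IPv6 addresses 128 bits
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4 addresses: {ipv4_total:,}")        # 4,294,967,296 -- the ~4.2 billion in the text
print(f"IPv6 space is 2**96 times larger, i.e. {ipv6_total // ipv4_total:,}")
```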