Always On or Never-Never?
Article by Peter Barr
Everybody wants to use the Internet whenever they want and wherever they are, but no one seems willing to pay to ensure that it always stays on, whether they're consumers, business users, governments, equipment manufacturers or service providers. As the Internet continues to expand, and our dependence on it also increases, meltdown becomes much more likely and also more scary, but researchers in Scotland are doing their best to ensure that the Next Generation Internet will be fit for purpose – not just to survive a cyber-crime attack but also X Day...
For almost 20 years, the fundamental architecture of the Internet has hardly changed at all. There have been many tweaks and engineering “workarounds”, and the physical layer has changed beyond recognition, but the core protocols (the message exchanges used to manage data traffic) are essentially the same as they were at the time of the last major change in 1993 (and indeed not very different from the mid-1970s), when the Internet emerged from academia and entered the public domain. Today, the Internet is approaching two billion users.
As mobile networks grow and demand from social media, e-commerce, online gaming and video streaming accelerates, the problems are accumulating rapidly, from spam to cyber-terrorism and denial-of-service attacks, as well as worsening traffic congestion and difficulties deploying new applications. Many services have ported to the Internet from traditional business (e.g. retailers and banks) while others are children of the Internet (e.g. Facebook or Twitter). All these users, services and applications put tremendous pressure on the network, slowing traffic down or even threatening breakdown, and, according to many observers, nothing will improve unless the companies who profit from the Internet think there is money to be made, and users are willing to pay more for high-availability, high-speed, high-quality access.
Domestic customers may be willing to tolerate the occasional loss of data, accepting lower quality of service and occasional downtime in return for lower prices. But utility companies, large corporations and government agencies, including the military, can't afford to make the same trade-off – even though they use private networks, they also use the Internet for many mission-critical functions.
The problem is that nobody wants to pay now to solve a problem that may or may not happen in the future – just in case the Internet grinds to a halt. In fact, says Saleem Bhatti, SICSA theme leader for Next Generation Internet, “parts of the network often operate at the limits of their capability,” and within the next two or three years, there will be no more IPv4 addresses available for distribution to new networks – what is called X Day.
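The scale of the problem behind X Day is visible in the raw numbers: IPv4's 32-bit addresses give roughly four billion possibilities, which cannot cover two billion users each with multiple devices. A quick illustrative sketch (the two-billion figure is the article's; the rest is simple arithmetic):

```python
# Illustrative arithmetic: why the IPv4 address pool runs out,
# and how much headroom IPv6 adds.
IPV4_BITS = 32
IPV6_BITS = 128

ipv4_total = 2 ** IPV4_BITS   # about 4.3 billion addresses
ipv6_total = 2 ** IPV6_BITS   # about 3.4e38 addresses

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.2e}")

# With roughly two billion users, and many devices per user,
# the IPv4 pool cannot give every networked device its own
# statically assigned address.
users = 2_000_000_000
print(f"IPv4 addresses per user: {ipv4_total / users:.1f}")
```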
Even though a lack of “statically assigned” address space may not affect domestic users directly, it could have a serious impact on organisations who offer always-on services. And that is why scaling is a critical issue, says Bhatti: “The use of the IP address today is ‘wrong’. The fundamental architecture needs to change.” What we need is not just technological but cultural change – new attitudes to payment and incentives, as well as new core protocols, so users get the services they want and business makes money.
Different rates for different qualities of service may be part of the answer – e.g. gold, silver and bronze data access – but this would be extremely difficult to implement Internet-wide, taking account of the complex connections that users make while travelling, piggybacking on several cross-border networks at once. What the industry is trying to promote as a next-step solution is IPv6, but Bhatti and many other scientists think that the Internet needs much more radical changes.
“We need a system-wide approach,” says Bhatti, “but system-wide changes are hard, and industry and users want incremental changes.” The old Internet was a “co-operative experiment,” but the current version is “a landscape of competitive services,” and incentives are key to success. “We can have excellent technology for the network of today,” Bhatti continues, “but we also need to incentivise innovation for the network of tomorrow.”
So what does the Next Generation Internet need? Bhatti, who is also Professor at the School of Computer Science at the University of St Andrews, wants to “reheat some cold research topics” where progress is harder, such as architecture, naming and routing. “We need experimentation,” he says, “and more sharing of data and code. We need to be disruptive, and test new things out in the wild, but disruptive research and the cost of change can appear to be too high for end users.”
One of the problems with the Internet, Bhatti continues, is that incremental engineering retrofits have made the current technology landscape very complex, so introducing radical changes that affect a large number of components could have a correspondingly complex effect on the network and on its users. In addition, it is widely agreed that if a large customer asks a service provider to deliver a particular service, the inclination is to say “yes”, to keep the customer happy and hold on to the business; but delivering that request may increase the complexity of the existing network engineering, which may in turn lead to major (very costly) problems in the future.
Bhatti also says that IPv6 (which greatly increases the available addresses, taking over from IPv4) is not taking off yet because it offers little benefit to users, other than a larger address space, so vendors and developers still stand to make money out of workaround solutions. In addition, not all applications can be easily ported to IPv6. So what is the answer?
THE PAIN BARRIER
Bhatti explains that the Internet “only just works,” in the sense that problems are usually solved “just in time”, when users and industry are forced to change because they have no other option. But he also believes we can be more proactive in establishing standards and introducing new technologies. “There will be pain,” he says, but incremental change will still be possible.
One of the most critical issues is who will be willing to pay for the changes required, and this is where academic researchers may take the initiative, rather than industry. The scientists will try to develop solutions which meet universal requirements, with backing from government and business partners who recognise the benefits for everyone, including the general public. For example, Bhatti and his colleague Ran Atkinson of Cheltenham Research have published a number of papers describing what they think could be one of the answers to the problems of IP addresses or “namespaces” used on the Internet. ILNP (the Identifier-Locator Network Protocol) is a new network protocol which can be built on IPv6 incrementally, breaking the address into two separate spaces – a Locator and an Identifier – to enable harmonised functionality such as mobility, multi-homing, local addressing and end-to-end security at the network layer “through an improved naming and addressing architecture.” Bhatti explains that the Locator names a single subnetwork and is used only for packet forwarding and in routing protocols – it is never used above the network-layer. The Identifier always names a node, rather than naming an interface, as an IP Address does today.
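In ILNP, as applied to IPv6, the existing 128-bit address is repartitioned: the high-order 64 bits become the Locator and the low-order 64 bits become the Identifier. A minimal sketch of that split is below (Python; the address is an example from documentation space, and this shows only the bit-level partition, not the full protocol behaviour):

```python
import ipaddress

def split_ilnp(addr: str):
    """Split a 128-bit IPv6 address into ILNP's two namespaces:
    the high 64 bits (Locator: names a subnetwork, used only for
    forwarding and routing) and the low 64 bits (Identifier:
    names a node, never an interface)."""
    value = int(ipaddress.IPv6Address(addr))
    locator = value >> 64               # topology-dependent part
    identifier = value & (2 ** 64 - 1)  # topology-independent part
    return locator, identifier

loc, nid = split_ilnp("2001:db8:aaaa:bbbb:cccc:dddd:eeee:ffff")
print(f"Locator:    {loc:#018x}")
print(f"Identifier: {nid:#018x}")
```

The point of the split is that when a node moves or is multi-homed, only the Locator changes; anything above the network layer binds to the stable Identifier.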
According to Bhatti, the threat of X Day is not a technological but an architectural issue, and “the root problem is in the overloaded semantics of the IP address.” The new protocol he is proposing would alter the way that addresses are handled, using one or more “semi-permanent identities” on every device, wherever it is used. This will not solve every problem of scaling, says Bhatti, but it could be part of the solution, meeting important requirements such as routing state scaling, traffic engineering and mobility functionality.
Other new core protocols have been proposed, including LISP (the Locator/ID Separation Protocol), developed by Cisco Systems, and HIP (the Host Identity Protocol), developed within the Internet Engineering Task Force (IETF). All of these promise advances – for example, LISP would enable the design of a “scalable routing and addressing architecture for the Internet,” dealing with end-to-end functions – but none has yet been deployed on a large scale.
Equipment manufacturers may make lots of money by selling new boxes, when the new core protocols start to be used, but most other interested parties seem unwilling to make the first move – even though it makes perfect logical sense and it's in everyone's interest. Bhatti compares this to environmental action – we all know we should try to save the planet but we can't agree what to do first, when to act or decide who should pay. Industry is starting to express considerable interest in ILNP, even though it is still not completely developed, and Bhatti thinks the best way to test it will be to build it and give it away for free to encourage deployment. There is no “grand plan” for the transition, says Bhatti, but something will have to be done.
While Bhatti grapples with core protocols and other key issues like “green ICT,” his colleagues in SICSA are also doing world-class research into Next Generation Internet. For example, researchers at several Scottish universities, led by Professor Bill Buchanan at Edinburgh Napier, are looking into issues of security and trust, including cybercrime, as part of the SICSA-supported Centre of Excellence for Security and Cybercrime, an initiative which brings together academics, representatives from industry, government agencies and the police, to improve people’s lives “through excellence in research, knowledge transfer and teaching,” including the creation of next-generation systems which protect the rights of individuals and reduce the risks they are exposed to.
In Glasgow, SICSA-funded PhD students supervised by Dr Colin Perkins are looking at routing in the Internet. By applying a mixture of mathematical techniques to our current understanding of how the Internet topology grows, they aim to reduce the amount of routing state (routing-related information) needed in the core Internet routers, giving the future Internet better scaling properties.
At the University of St Andrews, other SICSA-funded PhD students, studying under the supervision of Dr Mirco Musolesi, a SICSA-funded lecturer, are looking at some fundamental aspects of network science. They are exploring the dynamics of the formation and growth of networks, including mobile networks and online social networks, and drawing inspiration from other sciences such as biology and psychology.
Bhatti's main research interests are “networked systems architecture and the control of network resources,” but he also focuses on ICT energy usage – which accounts for about 2% of all carbon emissions. The ultimate aim is sustainable ICT systems, but Bhatti believes the solution will come from a combination of factors, including technological advances and changes in attitude. “Legislation is not an incentive,” cautions Bhatti. “Businesses need environmental incentives that are better aligned with their business goals.”
Increasing user expectations and the huge surge in the number of mobile devices are fuelling emissions, says Bhatti. When mobile users send a text, they rarely stop to think about the carbon-hungry infrastructure, such as the servers and routers, that helps them perform this simple task. Increasingly, consumers want always-on services and never switch off their computers, while business and government are becoming ever more dependent on ICT systems for services such as distance learning and telehealthcare. Even using a search engine adds to carbon emissions, with one estimate putting the cost of a single search as high as 7g of CO2 – enough to boil half a kettle of water. The need to comply with new legislation (e.g. requiring the archiving and search of personal data), and the need to make systems robust and redundant, also add to the problem.
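Even taking the upper-bound 7g figure with due scepticism, the per-search cost aggregates quickly at scale. A back-of-envelope sketch (the searches-per-day volume below is a hypothetical input for illustration, not a measured figure):

```python
# Back-of-envelope aggregation, using the article's upper-bound
# estimate of 7 g CO2 per web search (a contested figure).
G_PER_SEARCH = 7               # grams CO2 per search
searches_per_day = 1_000_000   # hypothetical volume for illustration

total_kg = G_PER_SEARCH * searches_per_day / 1000
print(f"{searches_per_day:,} searches/day ≈ {total_kg:,.0f} kg CO2/day")
```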
Manufacturing, disposal and recycling of ICT equipment are also a headache, and Bhatti is concerned that it can be cheaper to build new equipment than to recycle components.
Awareness of energy issues is key to success – to change the way we use computers at home and at work. According to a UK national survey in 2008, only 13% of IT managers monitor energy usage, and Bhatti says that there are not enough incentives to purchase low-energy options. The economics may be complex, but Bhatti is convinced we need a systems-wide approach to carbon emissions – in the same way that we need a total rethink for Next Generation Internet.