Issue Seventeen

From dark demand to dinosaurs

We are standing on a cliff edge and we need to completely rethink how our software is written...

It has to cope with “dark demand” and balance the diverse needs of blue-sky researchers and business (simulating everything from nuclear fission to dinosaur racing), but EPCC faces an even bigger challenge over the next few years: developing new algorithms and software to take full advantage of the next generation of high-performance supercomputers...

“The big change in the last five years has been in the scale of computers,” says Professor Mark Parsons, Executive Director of EPCC (the supercomputing centre at the University of Edinburgh).  “We’ve now hosted three systems with about 100,000 cores (the equivalent of CPUs or central processing units) each, compared to only 2,500 cores in 2010, and when they are running, they shake so much you think they are alive.”

Parsons also believes we are now on the cusp of the most exciting period in high-performance computing (HPC) for 30 years, moving from today’s petascale systems (one thousand million million calculations per second) to exascale systems (one million million million calculations per second) within the next 5–10 years. When he first arrived at EPCC in 1994, the top-performing system was capable of 800 megaflops (a megaflop is one million floating-point operations per second), but even mobile phones today are faster. EPCC now has supercomputers a million times faster than 20 years ago, including ARCHER, based around a Cray XC30 supercomputer, but exascale computers will be another giant leap forward.
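
To put those prefixes on a common scale (the figures below are simple unit conversions, not EPCC benchmark results):

  1 megaflop/s = 10^6  floating-point operations per second
  1 petaflop/s = 10^15 floating-point operations per second
  1 exaflop/s  = 10^18 floating-point operations per second

  800 megaflop/s x 1,000,000 = 8 x 10^14 flop/s, i.e. roughly 0.8 petaflop/s
  (approximately the “million times faster” leap from 1994 to today’s petascale machines)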

Nowadays, computer processors are made up of multiple cores. At the exascale, the jump to 500 million cores, running in parallel, will “create completely different levels of parallelism,” says Parsons, “and change the way simulations are done.” Quite apart from the challenges of power consumption and cooling, many of the algorithms used today will simply not work at that scale. For software first written for systems with 256 parallel processors, the jump will simply be a step too far. “We are standing on a cliff edge,” says Parsons, “and we need to completely rethink how our software is written.”
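
One standard way to see why code written for a few hundred cores cannot simply be scaled up is Amdahl’s law (a textbook argument, not one Parsons cites here): if a fraction s of a program must run serially, then no matter how many cores p are added, the speed-up is capped.

  speedup(p) = 1 / (s + (1 - s) / p)

  With a serial fraction of just 0.1% (s = 0.001):
    256 cores         -> speed-up of about 204
    100,000 cores     -> speed-up of about 990
    500,000,000 cores -> speed-up of about 1,000 - the extra cores are largely wasted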

EPCC will play a major role in the development of new algorithms for future HPC systems. The centre provides a range of services to academics and industry in the UK and Europe, and also asks “what next?” in computing. Sometimes these two roles overlap – for example, in its work for the European Centre for Medium-Range Weather Forecasts (ECMWF), which provides two-week forecasts to organisations across Europe, including the UK Met Office. For the last 20 years, the HPC code used by the ECMWF has been able to cope, and it now runs on systems of about 20,000 cores – but how will it cope with an exascale system?

Ten years from now, the forecasters aim to deliver much more detailed forecasts, moving from a grid of squares 32km across to squares only 2km across, and from modelling percentages of vapour in the air to modelling individual clouds. The changes required go well beyond incremental optimisation or tweaking: they will “disruptively change the code for modelling weather,” says Parsons, as researchers move to exascale platforms. The ECMWF is working in partnership with EPCC to upgrade its modelling software before the new platforms arrive, using EPCC computers for testing. As Parsons points out, the ECMWF needs its in-house computers to deliver its forecasts without interruption, so it makes sense to outsource this research work and also tap the expertise available at EPCC.
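
A back-of-the-envelope calculation (not an ECMWF figure) shows why that resolution change is so demanding:

  32 km / 2 km = 16x finer spacing in each horizontal direction
  16 x 16      = 256x more grid columns to compute at every time step
  (before counting extra vertical levels and the shorter time steps a finer grid requires)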

As the project continues, the ECMWF gets the specialist code it requires and EPCC also develops the new techniques and algorithms needed to take advantage of the next generation of systems – techniques to process arrays of equations (like doing giant algebra) which are openly shared with the rest of the scientific community.
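
As a minimal sketch of what “processing arrays of equations” across many cores involves, the hypothetical C/MPI example below splits the rows of a matrix-vector product y = A*x between processes – the basic building block of much larger parallel solvers, and an illustration only, not actual EPCC or ECMWF code.

/* matvec_mpi.c - illustrative row-distributed matrix-vector product.
   Each MPI process owns a block of rows of A and computes its slice of y = A*x. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024                              /* global problem size (assumed divisible by nprocs) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int local_rows = N / nprocs;                              /* rows owned by this process */
    double *A = malloc((size_t)local_rows * N * sizeof *A);   /* this process's block of A  */
    double *x = malloc(N * sizeof *x);                        /* full input vector          */
    double *y = malloc((size_t)local_rows * sizeof *y);       /* this process's slice of y  */

    /* Fill the local block and the vector with simple test values. */
    for (int i = 0; i < local_rows; i++)
        for (int j = 0; j < N; j++)
            A[i * N + j] = 1.0 / (1.0 + rank * local_rows + i + j);
    for (int j = 0; j < N; j++)
        x[j] = 1.0;

    /* Each process computes only its own rows - this is where the parallelism lies. */
    for (int i = 0; i < local_rows; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += A[i * N + j] * x[j];
        y[i] = sum;
    }

    if (rank == 0)
        printf("%d processes, %d rows each\n", nprocs, local_rows);

    free(A); free(x); free(y);
    MPI_Finalize();
    return 0;
}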

This dual approach – advancing generic computational techniques at the same time as delivering specific solutions to industry clients – is typical for EPCC.  Parsons describes it as striking a balance between different needs. “We can happily work with our industry partners and also focus on doing pure science,” he says. “And if we don't do both, we can't dazzle our industry clients.”

Parsons has spent the last 20 years working with industry clients, but he doesn't agree with the “political view” that the work of an organisation such as EPCC should be driven entirely by industry needs. He also thinks that industry will benefit from pure science over the long term, even though there is not much corporate funding for curiosity-driven research in the field of computing.  To explain this, he cites the example of Nobel Prizewinners Albert Fert and Peter Grünberg, who discovered Giant Magnetoresistance – thus making possible dramatic advances in data storage technology. “Scientists like them often say that they accidentally stumbled on the breakthrough which won them the Prize,” he explains. “Sometimes, researchers need the freedom to follow their noses.”

As well as understanding the need to fund both academic and commercial research, Parsons has also experienced life in both camps. His career began at CERN in Geneva – “the epitome of blue-sky thinking” – and his work at EPCC has focused on industry users since he joined as a programmer in 1994.

According to Parsons, EPCC has three priorities. First, it has to earn a living.  Second, it has an economic imperative to make sure UK companies have access to the best modelling and simulation technologies available today. And third, it should be “intellectually interesting.”

Another project which involves both curiosity-driven and commercial imperatives is the Intel Parallel Computing Centre (IPCC). Intel wants to test and develop new processors and sees a mutual benefit in funding such research (see “Inside Intel”, below).

Large companies such as Intel, Rolls-Royce, Lloyds TSB, AEA Technology and British Aerospace have been among the major users of EPCC services over the years, but smaller companies are also increasingly turning to supercomputing, partly thanks to initiatives such as Supercomputing Scotland (a five-year scheme backed by Scottish Enterprise, now coming to an end) and Fortissimo, a European programme to promote HPC to small and medium-sized enterprises (SMEs). The purchase of Indy, a smaller-scale HPC system with 1,600 cores, is another valuable addition, designed for smaller companies and offering greater flexibility in scheduling as well as pay-per-use pricing. “Our pay-on-demand services have grown hugely over the last two years,” says Parsons.

Amongst the companies to benefit is Glasgow-based IES (see “Low-carbon cities”, below). The company has built a “strong working relationship” with EPCC, becoming a regular user of HPC systems, as well as using EPCC services to optimise its code to run on supercomputers and provide a new pay-on-demand, cloud-based solution for clients. According to Parsons, this means IES can deliver a more personal service, whether it uses EPCC or another HPC service provider.

Fortissimo’s “experiments” with a group of about 60 SMEs with no previous experience of HPC have been designed to demonstrate how it can benefit the sector in general. For Parsons, it’s ironic that EPCC could have done this a long time ago: even though “there is no problem in identifying companies who would benefit from HPC,” he acknowledges that many SMEs are still not convinced they will get a good return on their investment. Another factor is what Parsons and his team call “dark demand” – a lot of companies out there “who don’t know they need HPC,” even in established high-tech industries such as oil and gas. Companies may need to make an initial investment of £40,000, for example, but once projects get underway, costs go down – and programmes such as Fortissimo now offer a collection of success stories which illustrate the benefits for different SMEs.

The work with major multinationals also continues – for example, Rolls-Royce uses HPC systems to model new designs for turbine blades in engines, and also uses EPCC services to “make better decisions about which algorithms to use on the next generation of HPC systems.” For research and development, many large companies outsource simulation because it is more cost-effective than buying their own in-house system – even £40 million can be a lot of money for a large multinational, especially for a system which may be outdated within a few years. Indy only cost about £250,000, but even this would be a major investment for an SME, and would also mean additional spending on training and people – expertise that is already available at EPCC.

The automotive, aerospace, finance, engineering, medical and energy (especially renewables) sectors are still amongst the heaviest industry users of HPC systems. Academic research also makes a big contribution to industrial development, says Parsons, including research enabled by the HPC machines at EPCC to build acoustic models of aeroplane engines, which should be of value to the sector in general. “Scientists push the boundaries,” Parsons continues, “and this pushes back into industry.”

The big growth area, he says, is the medical market, including recent work for the Farr Institute of Health Informatics Research (a collaboration between six Scottish universities and NHS National Services Scotland), which is analysing vast amounts of anonymised patient data (e.g. hospital admissions and prescriptions), looking for patterns which will help to develop better solutions for future health care – a good example of HPC being used for data-driven applications.

Hardware vs software

The older software written for HPC systems with 256 cores is “scaling very badly,” says Parsons, and innovation is stalling. “The hardware is leaving the software behind,” he explains, “and the software is leaving the algorithms behind.” Despite the promise of Moore’s Law (the observation that the number of transistors on a chip doubles roughly every two years), we are heading for a new phase in computing.

“We used to ride the wave of faster processors, but now it is more and more processors,” Parsons continues. And with the science heading for a new generation of exascale supercomputers which could get so hot they lead to system meltdown, the future challenge for the EPCC is not only to conquer the technical problems but also to reassure a new generation of users that HPC – in terms of return on investment – is not too hot to handle.

 

Inside Intel

The Intel Parallel Computing Centre (IPCC) was created to optimise codes for Intel processors, and in particular to port and optimise scientific simulation codes for Intel Xeon Phi co-processors. EPCC’s ARCHER supercomputer contains a large number of Intel Xeon processors and does a lot of work for EPSRC and other UK research funding councils, so it is important that the scientific simulation codes running on it are highly optimised for these processors. The work at the IPCC has therefore focused on improving the performance, on both Intel Xeon and Intel Xeon Phi processors, of a range of codes that are heavily used for computational simulation in the UK.
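
As an illustration only (a generic kernel, not one of the codes the IPCC actually works on), this kind of optimisation often means restructuring loops so they can be spread across many threads and vectorised, for example with OpenMP:

/* stencil_omp.c - illustrative loop restructured for many-core processors.
   The OpenMP directives spread the outer loop over threads and ask the compiler
   to vectorise the inner loop - the style of change involved in tuning codes
   for processors such as Intel Xeon and Xeon Phi. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define NX 2048
#define NY 2048

int main(void) {
    double *a = malloc((size_t)NX * NY * sizeof *a);
    double *b = malloc((size_t)NX * NY * sizeof *b);

    /* Initialise with simple test data. */
    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++)
            a[i * NY + j] = (double)(i + j);

    /* Parallel, vectorised 5-point stencil update over the interior points. */
    #pragma omp parallel for
    for (int i = 1; i < NX - 1; i++) {
        #pragma omp simd
        for (int j = 1; j < NY - 1; j++)
            b[i * NY + j] = 0.25 * (a[(i - 1) * NY + j] + a[(i + 1) * NY + j]
                                  + a[i * NY + j - 1]   + a[i * NY + j + 1]);
    }

    printf("stencil update ran on up to %d threads\n", omp_get_max_threads());
    free(a); free(b);
    return 0;
}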

 

Low-carbon cities

EPCC has worked with Glasgow-based software and consultancy company Integrated Environmental Solutions (IES) to enable its SunCast simulation software (which models the effect of solar energy on buildings to improve energy efficiency) to run on HPC systems.

IES is developing a planning tool for cities that incorporates dynamic simulation of all aspects of a city, including buildings, transport and utilities. Because of the depth of information associated with multiple buildings, this tool will rely heavily on HPC simulation, taking full advantage of cloud-based HPC and pay-per-use services to remove the capital costs of an HPC system and the need for the specialist skills to operate it. For IES, this is expected to open up a whole new market of urban consultants and planners, giving them tools that currently do not exist to support decisions about how to create low-carbon cities.

 

Hear hear

The Auditory pilot project, involving EPCC and the University of Edinburgh’s Acoustics and Audio Group, is using HPC to speed up the creation of computational models of the human ear. According to Dr Michael Newton, such models usually take many hours to complete, and the project investigated how to harness the power of HPC to shorten the run times, thus providing greater opportunities for the rapid development and use of such models in a range of research and clinical environments.

The human ear converts sound waves into neural signals that can be interpreted by the brain – a process called transduction. The Auditory project was concerned with speeding up the computational simulation of this process, which takes place inside the cochlea, or inner ear. The cochlea acts as a kind of frequency analyser, and a good model of this mechanical structure – and of its role in transduction – is key to understanding how hearing works.

 

Sound approach to rooms

The NESS project is developing next-generation sound synthesis techniques based on physical models of acoustical systems, including three-dimensional (3D) rooms. Computer simulation of 3D room acoustics has many practical applications, such as the design of concert halls, virtual reality systems and artificial reverberation effects for electroacoustic music and video games.
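
Physical-model synthesis of this kind typically rests on finite-difference approximations of the wave equation; the sketch below is a deliberately tiny one-dimensional example (the real NESS room models are three-dimensional and far larger, and are not reproduced here).

/* wave1d.c - minimal 1D finite-difference wave equation, the simplest
   relative of the 3D room-acoustics models used in physical-model synthesis. */
#include <stdio.h>
#include <math.h>

#define N     512            /* spatial grid points */
#define STEPS 1000           /* time steps */

int main(void) {
    static double u_prev[N], u[N], u_next[N];
    const double lambda = 0.9;                    /* Courant number; <= 1 for stability */
    const double pi = 3.14159265358979323846;

    /* Initial condition: a raised-cosine pulse in the middle of the domain. */
    for (int i = 0; i < N; i++) {
        double x = (i - N / 2) / 20.0;
        u[i] = u_prev[i] = (fabs(x) < 1.0) ? 0.5 * (1.0 + cos(pi * x)) : 0.0;
    }

    for (int n = 0; n < STEPS; n++) {
        /* Leapfrog update on the interior points; the end points stay fixed
           (reflecting boundaries). */
        for (int i = 1; i < N - 1; i++)
            u_next[i] = 2.0 * u[i] - u_prev[i]
                      + lambda * lambda * (u[i + 1] - 2.0 * u[i] + u[i - 1]);
        for (int i = 1; i < N - 1; i++) {         /* rotate the time levels */
            u_prev[i] = u[i];
            u[i] = u_next[i];
        }
    }

    printf("centre sample after %d steps: %f\n", STEPS, u[N / 2]);
    return 0;
}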

 

Dinosaur racing

To generate excitement and explain HPC to the general public, EPCC uses a program which creates realistic simulations of animals both present and extinct, and invites people to compete in a dinosaur race. The software, created by a team at the University of Manchester, simulates the movements of animals based on fossil evidence, 3D models of their skeletons and biological data. Players choose a dinosaur and EPCC provides the computer which powers the game.

According to Nick Brown of EPCC, this is a great illustration of how simulation has become the third methodology, complementing theory and experiment. Because dinosaurs have been extinct for so long, there is very little one can do in terms of physical experimentation when it comes to researching their movement. But combining theory from a number of different fields – palaeontology, biology and also the physics of movement – results in a detailed and accurate computer model of these ancient creatures.

The race was a popular feature in the PRACE (Partnership for Advanced Computing in Europe) Summer of HPC programme, where students from institutions all over Europe visit HPC centres in other countries to work on a graphically-oriented HPC project, improving the prototype in functional and visual terms, at the same time as seeing first-hand how much computing power it actually takes to simulate a dinosaur.

New HPC funding for SMEs

SMEs that want to trial high-performance computing (HPC) can apply for funding from SHAPE (the SME HPC Adoption Programme in Europe), a pan-European programme supported by PRACE (the Partnership for Advanced Computing in Europe). The programme aims to raise awareness of HPC and give European SMEs the expertise needed to take advantage of the innovation it makes possible, increasing their competitiveness and letting them benefit from the knowledge developed within the PRACE Research Infrastructure.

EPCC in profile

EPCC’s origins date back to the early 1980s, when researchers at the University of Edinburgh started using parallel computers – mainly in the physics department.

In 1987, the first links between academic and industrial projects emerged, and the university won a bid for government funds to buy one of the very first transputer-based computers from Meiko – a UK company. This machine, the so-called Edinburgh Concurrent Supercomputer (ECS), became one of the largest such parallel computers in the world.

In 1990, the growing awareness of the importance of parallel computing resulted in funding for five new research posts – and EPCC was founded “to accelerate the exploitation of parallel computing through industry, commerce and academia,” developing simulation software to run on parallel computers, as well as providing consultancy services and training. It was staffed primarily by physicists who wanted to advance their theoretical research, and its industry partners included Barclays Bank and British Gas.

In 1991, EPCC won £3.5 million in government funding and bought a CM-200 machine, then the fastest and highest-profile computer in the UK. It also launched an industrial programme and partnered with computer manufacturers.

In 1994, EPCC acquired a 256-processor Cray T3D – the first national HPC service it ran and, at the time, Europe’s fastest supercomputer – followed two years later by a Cray T3E system dedicated to particle physics research.

In 2002, EPCC became lead partner in the HPCx consortium, supporting the national supercomputing service for UK academic research, and in 2008 it became the host for HECToR.

Today, the EPCC hosts the ARCHER national supercomputing service and runs the Computational Science and Engineering support service for ARCHER.

EPCC has carried out industrial technology transfer projects with well over 400 companies since 1991, including Rolls-Royce, AEA Technology and British Aerospace, focusing on the development of simulation software. It has also worked with many local SMEs and many different technologies, and provides education and training.

"From dark demand to dinosaurs". Science Scotland (Issue Seventeen)

Science Scotland is a science & technology publication brought to you by The Royal Society of Edinburgh (www.rse.org.uk).