The role of high performance computing in nuclear

1 December 2015



The nuclear energy sector exploits high performance computing in practically all domains, from research to life management to accident safety. By I. Simonovski, B. Žefran and S. Cimerman


Over the last couple of decades, the use of computer simulation in the nuclear energy sector has steadily increased, because the industry needs better predictive tools for describing basic physical phenomena. This is driven by the need for increased safety and for operation beyond the design life.

The latter requires thorough assessment and management of ageing degradation in structures and components. This in turn requires higher-fidelity models, both in resolution and in physics, deeper knowledge of the underlying physics, parametric studies, and better estimation of margins through reduced uncertainties and optimised operations.

The cost, complexity and difficulty of performing experiments on irradiated materials are additional drivers for using simulation tools. Damage in a material starts with point defects at the atomic scale, while the structural engineer who must account for that damage works at the component scale, so these tools must cover a wide range of time and length scales (Figure 1).

High performance computing

Advances in processor design, high-speed memory systems, storage subsystems and hardware designed for parallel computation have all helped to reduce computation times.

At the start, vector processors were used. These were replaced by large shared-memory systems containing a number of processors and, from the 1990s, by distributed-memory systems in which computers are clustered within a fast network, an arrangement described as a high performance computing (HPC) cluster.

An HPC cluster typically consists of a master/login node (i.e. a computer), compute nodes and a storage system, all connected via a fast network. Because off-the-shelf components are used, the price of an HPC system has fallen considerably. Further advances brought multi-core processors within each compute node and the use of graphical accelerators. Consequently, computational power has increased considerably (Figure 2).
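As a flavour of how software exploits such a cluster, the minimal sketch below (assuming Python with the mpi4py MPI bindings, a choice of ours rather than anything named in this article) splits a toy summation across processes that may sit on different compute nodes and then combines the partial results:

# Minimal distributed-memory sketch using MPI (mpi4py assumed installed).
# Each process handles a slice of the work; results are combined at the end.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index, 0..size-1
size = comm.Get_size()   # total number of processes across all nodes

# Toy workload: sum the integers 0..N-1, split evenly across processes.
N = 10_000_000
local_sum = sum(range(rank, N, size))

# Combine the partial sums on process 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Total computed on {size} processes: {total}")

Launched with, for example, mpirun -np 4, the same script runs unchanged on one node or on many.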

Small HPC clusters containing more than 500 compute cores are nowadays relatively inexpensive and can be purchased from a number of vendors.

Computing service

An HPC system is just one of many necessary building blocks in the IT service structure.

IT is central to many tasks, as it needs to provide user-oriented support and HPC life-cycle management. Initially this means support in defining an HPC system that meets the users' needs, followed by procurement and commissioning of the system.

IT also provides daily support to the users and their software applications, along with system administration, back-up and archiving services. The choice of software can significantly influence the level of support required. Open-source software is often much more difficult to compile and install, so more support should be planned in. On the other hand, commercial licences can cost more than a small or medium HPC system, so maximising HPC system usage is especially important. Transparent HPC usage policies and a clear, efficient computational job submission system need to be put in place.

As needs grow and technology evolves, HPC systems are upgraded and new systems are added. System usage and administration should then be standardised to simplify administration and to make the systems easy for users.

For example, switching to a different job submission system may require changes to the job submission scripts. Some users can make these changes themselves, but it takes valuable time from their core work, and IT has to minimise such distractions. Careful planning of system requirements, financial and human resources, and commissioning is essential.
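As a hedged illustration of how such differences might be papered over, the hypothetical helper below (Python; the job script name my_job.sh is made up) submits a job with whichever common scheduler front end, SLURM's sbatch or PBS/Torque's qsub, happens to be installed:

# Sketch: submit a job script with whichever batch scheduler is available.
# Assumes SLURM ("sbatch") or PBS/Torque ("qsub"); extend as needed.
import shutil
import subprocess
import sys

def submit(script_path: str) -> None:
    for submitter in ("sbatch", "qsub"):        # try known schedulers in order
        if shutil.which(submitter):             # is this command installed?
            result = subprocess.run([submitter, script_path],
                                    capture_output=True, text=True)
            print(result.stdout.strip())        # job ID reported by the scheduler
            return
    sys.exit("No known batch scheduler found on this system.")

submit("my_job.sh")   # hypothetical job script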

Communication between the users and IT is also important, and it should be as open as possible. The users often do not understand IT issues and terms, especially in a large organisation. They want the latest and fastest HPC system available, while issues like data standardisation, variation in the hardware of nodes in a cluster, ease of administration, serviceability and uptime are of secondary concern to them. IT is also challenged by the growing size of data sets and by visualisation over high-latency networks. These HPC issues must be addressed by considering user requirements, policies and technology together, in order to obtain the best value.

Fields of application

In nuclear energy HPC is applied in a number of fields, primarily materials science, structural integrity, neutronics and thermal-hydraulics.

Materials science and ageing degradation
Understanding the thermo-mechanical properties of structural and nuclear materials is essential for the safe operation of a nuclear facility. Understanding, predicting and measuring changes in a material involves different approaches: from first principles at the smallest length and time scales, through Monte Carlo and discrete dislocation dynamics, to finite element models at the component scale.
Results from modelling at smaller scales can be fed into models at the next scale up. For example, interatomic potentials calculated from first principles can be used in molecular dynamics simulations. Figure 3 shows the computed intergranular cracking surfaces in 304 stainless steel.

The measured grain structure is recreated within a finite element model to simulate early crack initiation and evolution. Such realism helps in understanding the early crack propagation rate and the effect of the microstructure on it. This is a typical example of a simulation requiring large numbers of processors and large amounts of memory.
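To make the hand-off between scales concrete, the sketch below evaluates a Lennard-Jones pair potential, a generic stand-in for the interatomic potentials that first-principles calculations feed into molecular dynamics; the parameter values are illustrative and are not fitted to 304 stainless steel:

# Sketch: energy and force from a Lennard-Jones pair potential,
# a generic stand-in for potentials derived from first-principles data.
import numpy as np

EPS = 0.0103   # well depth (eV) - illustrative value only
SIG = 3.40     # zero-crossing distance (angstrom) - illustrative value only

def lj_energy_force(r):
    """Pair energy (eV) and force magnitude (eV/angstrom) at separation r."""
    sr6 = (SIG / r) ** 6
    energy = 4.0 * EPS * (sr6**2 - sr6)
    force = 24.0 * EPS * (2.0 * sr6**2 - sr6) / r   # -dE/dr
    return energy, force

for r in np.linspace(3.5, 6.0, 6):
    e, f = lj_energy_force(r)
    print(f"r = {r:4.2f} A  E = {e:+.4f} eV  F = {f:+.4f} eV/A")

A molecular dynamics code evaluates expressions like these billions of times per run, which is one reason the materials simulations described here consume so much computing power.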

Structural integrity
At larger scales, the structural integrity of components is also of concern. HPC is often used for finite element modelling of complex structures and for the corresponding parametric studies. A model is first validated against an experiment and then used to assess the effects of specific parameters, reducing the need for expensive testing.

One example is drop testing of a cask for the transport and storage of spent nuclear fuel and radioactive waste. The cask is dropped from a height of 1m onto a steel bar, with the impact in a region with cooling fins. In this way a validated model reduces the need for repeated physical tests.
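In miniature, such a parametric study looks like the sketch below: a hypothetical 1D finite element bar stands in for the validated model, and its cross-sectional area is swept while the tip displacement is recorded:

# Sketch: parametric study over a toy 1D finite element model (hypothetical).
# A bar fixed at one end, axial load F at the other; sweep cross-section A.
import numpy as np

def tip_displacement(E, A, L, F, n_elem=10):
    """Assemble and solve a 1D bar FE model; return free-end displacement."""
    k = E * A * n_elem / L                      # stiffness of one element
    n = n_elem + 1                              # number of nodes
    K = np.zeros((n, n))
    for e in range(n_elem):                     # standard 2-node bar elements
        K[e:e+2, e:e+2] += k * np.array([[1, -1], [-1, 1]])
    f = np.zeros(n)
    f[-1] = F                                   # load at the free end
    u = np.zeros(n)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # node 0 is fixed
    return u[-1]

# Sweep the cross-sectional area and record the response.
for A in (1e-4, 2e-4, 4e-4):                    # m^2
    d = tip_displacement(E=200e9, A=A, L=1.0, F=1e4)
    print(f"A = {A:.0e} m^2 -> tip displacement {d*1e3:.3f} mm")

In practice the toy solver is replaced by a full 3D model with millions of degrees of freedom, and each point in the sweep may occupy many cores for hours, which is where HPC comes in.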

Neutronics
Modern Monte Carlo computer codes for neutron transport allow calculation of detailed neutron flux and power distributions in complex geometries with a resolution of ~1mm. They can track individual particles, including scattering and absorption events (Figure 4). The Monte Carlo approach is ideally suited to parallelisation and can efficiently use thousands of computer cores. Visualisation of the results, however, can often be challenging.
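The sketch below suggests why the approach parallelises so well: particle histories are independent, so they can simply be divided among processes. It uses a crude one-speed slab model with illustrative, not evaluated, cross-section data:

# Sketch: embarrassingly parallel Monte Carlo - independent particle
# histories split across worker processes. One-speed 1D slab model with
# illustrative (not evaluated) nuclear data.
import random
from multiprocessing import Pool

SIGMA_T = 1.0    # total macroscopic cross-section (1/cm), illustrative
P_SCATTER = 0.7  # scattering probability per collision, illustrative
THICKNESS = 5.0  # slab thickness (cm)

def transmitted(n_histories, seed):
    """Count histories that escape through the far face of the slab."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_histories):
        x, direction = 0.0, 1.0                  # start at left face, moving right
        while True:
            x += direction * rng.expovariate(SIGMA_T)  # distance to next collision
            if x >= THICKNESS:                   # escaped through the far face
                count += 1
                break
            if x < 0.0:                          # leaked back out the near face
                break
            if rng.random() >= P_SCATTER:        # absorbed
                break
            direction = rng.choice((-1.0, 1.0))  # isotropic scatter (1D)
    return count

if __name__ == "__main__":
    n_workers, n_per_worker = 4, 250_000
    with Pool(n_workers) as pool:
        hits = pool.starmap(transmitted,
                            [(n_per_worker, seed) for seed in range(n_workers)])
    total = n_workers * n_per_worker
    print(f"Transmission probability ~ {sum(hits) / total:.4f}")

Because the workers never communicate until the final tally, adding cores scales the throughput almost linearly, which is exactly the property the production codes exploit on thousands of cores.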

Thermal hydraulics
Thermal-hydraulic simulations can also benefit greatly from HPC. Simulation times can run to months, especially for 3D models. A significant number of computer cores is needed, along with a fast interconnect between the compute nodes; typically, these simulations do not require large amounts of memory. Figure 5 shows a severe accident case with a presumed completely melted reactor core. Modelling is applied to study the feasibility of preventing pressure vessel wall failure by flooding the reactor pit with water; a shield enhances water convection and thus the cooling along the wall.
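For a feel for the time-stepping that drives these run times, here is a deliberately tiny sketch: explicit finite-difference heat conduction through a 1D wall. Production thermal-hydraulic codes step 3D meshes with turbulence, convection and phase change, which multiplies the cost enormously; all values here are illustrative:

# Sketch: explicit finite-difference heat conduction through a 1D wall.
# A toy stand-in for the time-stepping at the heart of thermal-hydraulic
# codes; material properties and temperatures are illustrative only.
import numpy as np

alpha = 1e-5          # thermal diffusivity (m^2/s), illustrative
L, n = 0.2, 101       # wall thickness (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha     # time step respecting the stability limit

T = np.full(n, 300.0)        # initial temperature (K)
T[0], T[-1] = 600.0, 300.0   # hot inner face, cooled outer face

for step in range(20_000):   # march in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"Mid-wall temperature after {20_000 * dt:.0f} s: {T[n // 2]:.1f} K")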

Figure 6 shows the results of a scale-adaptive simulation of turbulent flow in a horizontal rod bundle with a split-type spacer grid. Further research showed the need to account for a secondary flow of the second kind, which develops perpendicular to the main flow along the channel. Depending on whether or not a spacer grid is used, this secondary flow can be one or two orders of magnitude weaker than the main flow.


About the authors

Dr. Igor Simonovski heads the development of high performance computational resources at the European Commission's Institute for Energy and Transport.

I. Simonovski (Igor.Simonovski@ec.europa.eu) and S. Clements, European Commission, Joint Research Centre, Institute for Energy and Transport, P.O. Box 2, NL-1755 ZG Petten, The Netherlands

B. Žefran and S. Cimerman, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana, Slovenia

The authors would like to thank all the contributors for providing figures of their results.

Figure 1. Length and time scales involved in simulating phenomena relevant for nuclear materials, as covered by computational methods
Figure 2. Evolution of high performance computing
Figure 3. Example of intergranular cracking in 304 stainless steel. Left: grain 1; centre: granular structure; right: intergranular cracking
Figure 4. Left: neutron tracks (lines) and scattering events (points) from a monoenergetic 3MeV plane source in graphite. The moderation process is shown by colouring the particles by their energy (red - high, blue - low). Right: total neutron flux distribution in a PWR core with burnable absorbers
Figure 5. Simulation of in-vessel melt retention for a VVER 1000 reactor. Volume fractions, temperatures and velocities at time t = 10,000s for the corium (within the vessel), the vessel wall and the coolant (outside the vessel)
Figure 6. Scale-adaptive simulation of the OECD/NEA MATiS-H benchmark: visualisation of the 3D flow and a cross-section of the velocity-magnitude field (right)

