It has long been known that the condition of a nuclear power plant can be effectively monitored by the analysis of small fluctuations (noise) of the process variables around their stationary value. The technique is commonly referred to as noise analysis, noise diagnostics, or reactor diagnostics. The values of many important parameters, such as reactivity coefficients, vibration amplitudes, response times, and others, can be monitored. The abnormal state of the system is discovered either by a shift of these parameters into non-permitted regions, or by the appearance of a changed structure of the noise signatures, usually the frequency spectra, indicating an anomaly.

The advantage of the technique is that it is based on the measurement of process variables during operation, without any external perturbation. Hence, it is a non-intrusive technique that can be used under normal operation. Application of the method requires an understanding of the physical relationships between the various process variables, most notably the effect of the parameters of interest (such as temperature, pressure, or displacement) on the measured quantity (such as the fluctuations of the neutron flux). With such relationships in hand, the sought parameters can be extracted from the measured quantities by signal analysis methods. The majority of the applied research in the area consists of physical modelling of noise phenomena and the elaboration of inversion methods for parameter estimation and anomaly detection.

For various technical reasons, the analysis of test data was originally made off-line by evaluating analogue recordings on tape recorders. However, with the developments in instrumentation, data acquisition, and computing power, on-line applications have become possible with dedicated equipment installed at the plants.

The author started his professional career with the above type of applications of noise analysis and sensor health tests. However, he soon understood that the path from the elaboration of a new method, through its verification and demonstration in an off-line manner for post-mortem analysis, to routine deployment is extremely long. This is true even for a single new application of the same method at another plant, not to mention on-line applications with a predictive capability. First, the utility must trust the performance of the method and the economic gain it may realise from investing in the technology. Second, for on-line or even just in-service applications, the method has to be very robust and reliable; it must neither interfere with the safety, process control, and data acquisition systems of the plant, nor produce false alarms. Utilities fear false alarms far more than missed alarms, due to the unnecessary loss of revenue they cause. This may lead to high threshold settings in the installed OLM system and a consequent insensitivity of the method, whereby trust in its usefulness diminishes. Third, in order for a technology to be accepted for on-line applications, it has to comply with regulatory requirements and standards.

In summary, development and deployment of an OLM system in nuclear power plants is a many-faceted part of applied research and development (R&D) that has to cope with an extremely large number of (sometimes contradictory) boundary conditions. The author’s experience shows that very few, if any, organisations, research establishments, or commercial manufacturers of diagnostic equipment have command of all steps of the above chain, except for companies specialised in robust, but very narrow and very standard, applications. Researchers developing new algorithms are often unaware of the utility and regulatory constraints that have to be taken into account for a method to have a chance of application at operating plants and of acceptance by regulatory bodies. Nor are they always aware of the existing technical problems, simply because they lack experience with the everyday routines of nuclear plant operation. Utilities, on the other hand, may not be aware of an existing solution to a problem, due to a lack of sufficiently close contact between the research and utility environments, or out of concern about interference from the regulatory bodies. For this reason, these techniques have been implemented in nuclear power plants mostly on an as-needed basis rather than for routine condition monitoring applications. In addition, the application of OLM systems has not yet reached the stage of exploiting its true potential.

OLM fundamentals

The term on-line monitoring (OLM) is used in this paper to describe methods for evaluating the health and reliability of nuclear plant sensors, processes, and equipment from data acquired while the plant is operating. Although OLM technologies typically apply to essentially all types of nuclear power reactors, in this paper, pressurised water reactors (PWRs) are used as the reference plant for description of OLM applications.

A PWR plant employs a number of sensors to measure the process parameters for control of the plant and protection of its safety. Figure 1 shows a simplified schematic of a primary loop of a PWR plant and identifies the important sensors in this loop. Depending on the plant design and manufacturer, a PWR plant has 2 to 4 primary coolant loops (with the exception of Russian PWRs, called VVERs or WWERs, which have up to 6 loops).

The number of sensors that are typically found in the primary loop of a PWR plant is shown in Table 1. The normal output of these sensors can often be used to verify the performance of the sensors themselves, and establish the health and condition of the plant.

In Figure 2, the output of a process sensor is shown as a function of time during plant operation. Normally, while the plant is operating, the sensor’s output will have a steady-state value corresponding to the process parameter indicated by the sensor. This steady-state value is often referred to as the static component or DC value. Figure 2 also shows a magnified portion of the sensor’s output signal to illustrate that, in addition to the static component, a small fluctuating signal is naturally present on the sensor output. The fluctuating signal, which is known as the signal’s dynamic component, stems from inherent fluctuations in the process parameter due to turbulence, random flux, random heat transfer, vibration, and other effects. The dynamic component is also referred to as the AC signal or the noise output of the sensor and its analysis constitutes the field of noise analysis that is the main focus of this paper.

The static and dynamic components of the sensor output each contain different information about the process being measured and, as such, can be used for a number of OLM applications. For example, applications that monitor gradual changes in the process over the fuel cycle, such as sensor calibration monitoring, make use of the static component. On the other hand, applications that monitor fast-changing events, such as core barrel motion, use the bandwidth information carried by the dynamic component. Static data is analysed using empirical and physical modelling and averaging techniques involving multiple signals, while dynamic data is analysed in the time domain and frequency domain using single signals or pairs of signals.
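The decomposition into static and dynamic components can be sketched in a few lines of Python. This is an illustrative sketch only; the steady value of 155 (e.g. bar, roughly a PWR primary pressure) and the noise level are invented for the example, not plant data.

```python
import numpy as np

def split_static_dynamic(record):
    """Split a sensor record into its static (DC) component and its
    dynamic (AC, or 'noise') component."""
    record = np.asarray(record, dtype=float)
    dc = record.mean()        # steady-state (static) value
    ac = record - dc          # small fluctuations riding on the DC value
    return dc, ac

# Illustrative record: a steady reading of 155 plus small random fluctuations
rng = np.random.default_rng(0)
record = 155.0 + 0.05 * rng.standard_normal(4096)
dc, ac = split_static_dynamic(record)
```

Static OLM applications work with `dc` trended over months; dynamic applications analyse `ac` in the time or frequency domain.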

The types of OLM applications in nuclear power plants are in large part determined by the sampling rates available for data acquisition. Static OLM applications, such as resistance temperature detector (RTD) cross-calibration and on-line calibration monitoring of pressure transmitters, typically require sampling rates of up to 1Hz, while dynamic OLM applications, such as sensor response time testing, use data sampled in the 1kHz range. Other OLM applications, such as vibration measurement of rotating equipment and loose parts monitoring, may use data sampled at up to 100kHz.

This paper covers OLM applications that monitor I&C sensors such as temperature, pressure, level, flow, and neutron flux, up to data sampling frequencies of around 1kHz. These types of sensors represent the majority of measurement devices in nuclear power plants, and plants thus stand to benefit most readily from OLM applications that use them. Other OLM applications, such as vibration measurement of rotating equipment and loose parts monitoring, which primarily rely on high-frequency acquisition of data from accelerometers, are not covered in this paper, as they rely on a set of sensors separate from the existing process sensors.

Sensing line blockages

One of the practical uses of OLM is in the detection of sensing line blockages. Sensing lines (also called impulse lines) are small-diameter tubes that bring the pressure signal from the process to the pressure sensor. Depending on the application and the type of plant, pressure sensing lines can be as long as 300 metres or as short as 10 metres. They have isolation valves, root valves, and bends along their length, making them susceptible to blockages from residues in the reactor coolant, failure of isolation valves, and other problems. Sensing line blockages are a recurring problem in PWRs, boiling water reactors (BWRs), and essentially all water-cooled nuclear power plants: the lines inherently tend to clog up with sludge, boron, magnetite, and other contaminants. Typically, nuclear plants purge the important sensing lines with nitrogen or backfill them periodically to clear any blockages. This procedure is, of course, time-consuming and radiation-intensive and, more importantly, not always effective in eliminating blockages. Furthermore, except with noise analysis, there is no way to know ahead of time which sensing lines may be blocked.

Depending on the design characteristics of the pressure transmitter, a sensing line blockage can cause the response time of the affected pressure transmitter to increase by an order of magnitude. The danger is that due to a total blockage, the operating pressure may get locked in the transmitter and cause its indication to appear normal. Then, when the pressure changes, the transmitter will not respond and will continue to show the locked-in pressure, which will certainly confuse the reactor operators and can pose a risk to the safety of the plant. If a blocked pressure transmitter happens to be a part of a redundant safety channel, it can trip the plant during a transient. More specifically, the indication of a blocked transmitter will obviously not match the other redundant channels creating a mismatch that could trigger a reactor trip. In fact, this problem has occurred in France where partial blockages in flow transmitters caused two French PWRs to trip during load-following episodes [1].

Response time testing

Pressure, level, and flow transmitters in nuclear power plants behave like filters to the natural plant fluctuations that are presented to their inputs. That is, if one assumes that the input to the transmitter exhibits wide-band frequency characteristics (which is typically the case for nuclear power plant fluctuations), information about the sensor itself can be inferred by measuring the transmitter output. This is the basis of the noise analysis technique to determine the dynamic response of pressure, level, and flow transmitters in nuclear power plants [2].

Dynamic response analysis is based on the assumptions that the dynamic characteristics of the transmitters are linear and the input noise signal (that is, the process fluctuations) has proper spectral characteristics. Frequency-domain and time-domain analyses are two different methods for determination of response time of transmitters. It is usually helpful to analyse the data with both methods and average the results, excluding any outliers.

In frequency-domain analysis, a fast Fourier transform (FFT) of the noise signal yields the auto power spectral density (APSD) of the noise data. Under normal plant conditions, the APSDs of nuclear plant pressure transmitters have characteristic shapes that can be baselined and compared with the APSDs of similar transmitters operating under the same process conditions. Figure 3 shows examples of a few typical nuclear plant APSDs.
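The APSD estimation step can be sketched as a minimal Welch-style estimator in Python. This is a generic sketch, not the author's software; the 1kHz sampling rate and segment length are illustrative assumptions.

```python
import numpy as np

def apsd(noise, fs, nperseg=1024):
    """Estimate the one-sided auto power spectral density (APSD) of a
    noise record by averaging the squared FFT magnitudes of overlapping,
    Hann-windowed segments (Welch's method of averaged periodograms)."""
    x = np.asarray(noise, dtype=float)
    x = x - x.mean()                       # remove the static (DC) component
    window = np.hanning(nperseg)
    step = nperseg // 2                    # 50% segment overlap
    nseg = (len(x) - nperseg) // step + 1
    acc = np.zeros(nperseg // 2 + 1)
    for k in range(nseg):
        seg = x[k * step : k * step + nperseg] * window
        acc += np.abs(np.fft.rfft(seg)) ** 2
    psd = 2.0 * acc / (nseg * fs * np.sum(window ** 2))   # one-sided density
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd
```

A resonance buried in the noise then appears as a peak at its frequency in the returned APSD, which is what allows the characteristic shapes in Figure 3 to be baselined and compared.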

Fig 3a

Fig 3b

Fig 3c

Figure 3. Examples of auto power spectral densities of nuclear plant pressure transmitters

After the APSD is obtained, a mathematical function (model) appropriate for the transmitter under test is fit to the APSD. The model is then used to calculate the dynamic response of the transmitter, from which its response time is determined in-situ.
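As a sketch of this fitting step, assume (hypothetically; real transmitters often require higher-order models) that the transmitter behaves as a first-order lag, whose APSD for wide-band input is A / (1 + (2πfτ)²). The time constant τ can then be recovered by a simple scan:

```python
import numpy as np

def fit_first_order_tau(freqs, psd, tau_grid):
    """Fit the first-order lag model APSD(f) = A / (1 + (2*pi*f*tau)^2)
    to a measured APSD by scanning candidate time constants; for each tau,
    the best amplitude A follows from linear least squares."""
    freqs = np.asarray(freqs, dtype=float)
    psd = np.asarray(psd, dtype=float)
    best_tau, best_err = None, np.inf
    for tau in tau_grid:
        shape = 1.0 / (1.0 + (2.0 * np.pi * freqs * tau) ** 2)
        amp = np.dot(psd, shape) / np.dot(shape, shape)   # least-squares amplitude
        err = np.sum((psd - amp * shape) ** 2)
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau
```

For a first-order sensor, the response time (time to reach 63.2% of a step change) equals the fitted time constant τ, which is how the fitted model yields an in-situ response time.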

On-line calibration

The normal output of nuclear plant pressure transmitters can be monitored during plant operation. The data is compared with an estimate of the process parameter that the transmitter is measuring. One may attribute a divergence of the output from the expected value to sensor drift. If drift is identified and is significant, the transmitter is scheduled for a calibration during an ensuing outage. On the other hand, if the transmitter drift is insignificant, no calibration is performed for as long as eight years, typically. This eight-year period is based on a two-year operating cycle and a redundancy level of four transmitters. In this application, OLM is not a substitute for traditional calibration of pressure transmitters; rather, it is a means for determining when to schedule a traditional calibration for a pressure transmitter.

Reviews of calibration histories of process instruments in nuclear power plants have shown that high-quality instruments – such as nuclear-grade pressure transmitters – typically maintain their calibration for more than a fuel cycle of about two years and do not, therefore, need to be calibrated as often [3,4]. At most plants, the plant computer contains all the data that is needed to verify the calibration of pressure transmitters.

To perform on-line calibration monitoring, the outputs of redundant sensors are averaged and the average value is called the process estimate. This process estimate is then used as a reference to determine the deviation of each sensor from the average of the redundant sensors and to identify the outliers. For non-redundant sensors, a reference value obviously cannot be determined by averaging. Therefore, if there is not enough instrument redundancy, the process estimate for calibration monitoring is determined by analytical modelling of the process.
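A minimal sketch of this averaging-and-deviation logic follows. The readings and the acceptance limit are invented for the example; a real implementation would also account for instrument uncertainties and allowances.

```python
import numpy as np

def calibration_check(readings, limit):
    """On-line calibration monitoring for a redundant group: the process
    estimate is the simple average of the redundant readings, and each
    sensor's deviation from it is compared against an acceptance limit."""
    readings = np.asarray(readings, dtype=float)
    estimate = readings.mean()
    deviations = readings - estimate
    drifted = np.abs(deviations) > limit
    return estimate, deviations, drifted

# Four redundant transmitters; the fourth has drifted high (values invented)
estimate, dev, drifted = calibration_check([100.1, 99.9, 100.0, 103.0], limit=1.0)
```

A sensor flagged as drifted would then be scheduled for a traditional calibration at the next outage, consistent with OLM being a scheduling tool rather than a calibration substitute.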

In-situ cross calibration

PWR plants employ a number of resistance temperature detectors to measure the fluid temperature in the reactor coolant system. The temperatures measured by the RTDs are used by the plant operators for process control and to assess the operational status and safety of the plant. As such, the calibration of the RTDs is normally evaluated at least once every refuelling cycle. Each RTD must meet specific accuracy requirements for the plant to continue to produce power according to its design specifications. There are also core-exit thermocouples (CETs) in PWRs to provide an additional means of monitoring the reactor coolant temperature. Typically, a PWR plant has 20 to 40 RTDs and 50 CETs. The accuracy of CETs is not as important as that of RTDs, because CETs are used mostly for temperature monitoring.

Nevertheless, CETs are sometimes cross-calibrated against RTDs to ensure that their output is reliable. Redundant RTDs and CETs are used to minimise the probability of failure of any one RTD or CET seriously affecting the operator’s ability to safely and efficiently operate the plant. This redundancy of temperature sensors is the basis for a method of cross-calibrating RTDs and CETs.

In cross-calibration, redundant temperature measurements are averaged to produce an estimate of the true process temperature. The result of the averaging is referred to as the process estimate. The measurements of each individual RTD and CET are then subtracted from the process estimate to produce the cross-calibration results in terms of the deviation of each RTD from the average of all redundant RTDs (less any outliers). If the deviations from the process estimate of an RTD or CET are within acceptable limits, the sensor is considered in calibration. However, if the deviation exceeds the acceptance limits, the sensor is considered out of calibration and its use for plant operation is re-evaluated.
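The "less any outliers" step suggests an iterative scheme. A sketch (with invented temperatures and acceptance limit) that repeatedly removes the worst deviator and re-averages until the remaining group is self-consistent:

```python
import numpy as np

def cross_calibrate(temps, limit):
    """Cross-calibration with outlier exclusion: average the redundant
    temperature readings, then repeatedly drop the worst deviator until
    every remaining sensor lies within the acceptance limit."""
    temps = np.asarray(temps, dtype=float)
    good = np.ones(len(temps), dtype=bool)
    while good.sum() > 1:
        estimate = temps[good].mean()
        dev = np.abs(temps - estimate)
        worst = int(np.argmax(np.where(good, dev, -np.inf)))
        if dev[worst] <= limit:
            break                       # remaining group is consistent
        good[worst] = False             # exclude the outlier and re-average
    estimate = temps[good].mean()       # final process estimate
    return estimate, temps - estimate, ~good

# Three consistent RTDs plus one out-of-calibration sensor (values invented)
estimate, dev, outliers = cross_calibrate([310.02, 310.00, 309.98, 312.50], limit=0.1)
```

Sensors flagged as outliers are considered out of calibration, and their use for plant operation is re-evaluated.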

Traditionally, cross-calibration data has been acquired using data acquisition equipment connected temporarily to test points in the plant instrumentation cabinets. While highly accurate, the traditional method causes the plant to lose indication while the data is being acquired, and costs the plant time during shutdown and/or startup to defeat and restore the temperature indications. Now, with new and more advanced plant computers, RTD and CET measurements can be collected in the plant computer, which also provides a centralised location for monitoring and storing the measurements. Using on-line data from the plant computer for cross-calibration can save plant startup time while producing results comparable to those of the traditional method.

Equipment assessment

In addition to evaluating the health of individual sensors as in on-line cross-calibration and transmitter calibration monitoring, static analysis methods may be used for other purposes. Equipment condition assessment (ECA) applications take the idea of on-line calibration monitoring a step further by monitoring for abnormal behaviour in a group of sensors as a means for indicating nuclear plant equipment or system malfunctions. An example of ECA is illustrated in Figure 4, which shows a simplified diagram of a typical chemical and volume control system (CVCS) in a PWR. The primary functions of a typical CVCS in a PWR are:

1. Controlling the volume of primary coolant in the reactor coolant system (RCS)

2. Controlling the chemistry and boron concentration in the RCS

3. Supplying seal water to the reactor coolant pumps (RCPs)

Several transmitters are typically used to monitor various parameters related to the operation of the CVCS. Figure 4 highlights the normal operation of a few of the parameters that are monitored in the CVCS system:

1. Charging Flow measures the flow rate of the coolant being provided from the volume control tank (VCT) to the RCS and RCP seals.

2. Seal Injection Flow measures the flow rate of the coolant provided to the RCP seals.

3. Seal Return Flow measures the flow rate of the coolant returned to the VCT from the RCP seal injection.

4. Letdown Flow measures the flow rate of the reactor coolant as it leaves the RCS and enters the VCT.

Fig 4

Fig. 4. Simplified diagram of chemical and volume control system components: see also text

During normal operation, the measurements of these parameters will fluctuate slightly, but should remain at a relatively consistent level. However, in abnormal conditions such as an RCP seal leak, some parameters may exhibit upward or downward trends indicating a problem in the plant. For example, Figure 5 shows the four flow signals mentioned above during normal operation of a PWR plant. In this figure, the actual flow rates have been scaled to simplify the example. As shown in the figure, the flows remain relatively constant with respect to one another.

Fig 5

Fig. 5. Normal operation of CVCS flow parameters

Figure 6 shows how these flow signals may appear at the onset of an RCP seal leak in this PWR plant. In this example, the onset of the RCP seal leak is first indicated by a downward trend in the seal return flow, measured at time T1. This is followed by an increase in charging pump flow at time T2, as the charging pump compensates for the loss of coolant through the leaking seal. Of course, an abnormal trend in an individual parameter such as seal return flow could mean that the sensor is degrading; however, abnormalities in related parameters that occur close together in time are more likely to indicate the onset of a system or equipment problem. Early warning of these types of failures is thus the key benefit of ECA: it can increase plant safety through early recognition of equipment problems and reduce downtime through timely repair of the affected equipment.
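The two-signal logic described above might be sketched as follows. This is a toy rule with an invented threshold, not the actual ECA algorithm; it only illustrates why requiring correlated trends in related parameters suppresses single-sensor drift alarms.

```python
import numpy as np

def trend_slope(samples):
    """Least-squares slope of a window of equally spaced samples."""
    t = np.arange(len(samples), dtype=float)
    return np.polyfit(t, np.asarray(samples, dtype=float), 1)[0]

def seal_leak_alert(seal_return_flow, charging_flow, threshold):
    """Raise an alert only when the seal return flow trends downward AND
    the charging flow trends upward in the same window, so that drift in
    a single sensor alone does not trigger an alarm."""
    return (trend_slope(seal_return_flow) < -threshold
            and trend_slope(charging_flow) > threshold)
```

With only one of the two signatures present, the rule stays quiet, which is the behaviour a utility wary of false alarms would want.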

Fig 6

Fig. 6. CVCS flow parameters at the onset of a reactor coolant pump seal leak

Predictive maintenance

Typically, vibration sensors (e.g. accelerometers) are located on the top and bottom of the reactor vessel to sound an alarm in case the main components of the reactor system vibrate excessively. However, neutron detectors have proven to be more sensitive in measuring the vibration of the reactor vessel and its internals than accelerometers. This is because the frequency of vibration of reactor internals is normally below 30Hz, which is easier to resolve using neutron detectors than accelerometers. Accelerometers are more suited for monitoring higher-frequency vibrations.

Figure 7 shows the APSD of the neutron signal from an ex-core neutron detector (NI-42) in a PWR plant. This APSD contains the vibration signatures (i.e., amplitude and frequency) of the reactor components, including the reactor vessel, core barrel, fuel assemblies, thermal shield, and so on. It even contains, at 25Hz, the signature of the RCP rotating at 1500 revolutions per minute. These signatures can be trended to identify the onset of ageing degradation that can cause damage to the reactor internals. This approach has been recognised as a predictive maintenance tool that can help guard against vibration-induced mishaps that may be encountered as plants age and become more vulnerable to challenges to their structural integrity. The details are covered in NUREG/CR-5501 [4].
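The rotational-speed signature follows from simple arithmetic (1500 rev/min ÷ 60 s/min = 25Hz), and trending a signature amplitude against its baseline can be sketched as below. The factor-of-two growth rule is a hypothetical criterion for illustration, not a published limit.

```python
def rpm_to_hz(rpm):
    """Fundamental vibration frequency of rotating machinery: one
    signature per revolution, so rev/min divided by 60 gives Hz."""
    return rpm / 60.0

def signature_drifted(baseline_amp, current_amp, factor=2.0):
    """Flag a vibration signature whose APSD peak amplitude has grown
    beyond `factor` times its baselined value (hypothetical rule)."""
    return current_amp > factor * baseline_amp
```

In practice, each signature in the APSD would be baselined early in life and trended over successive measurement campaigns.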

Fig 7

Fig. 7. Auto power spectral density of NI-42 containing vibration signatures of reactor internals

Ageing management of neutron detectors is somewhat dependent upon the detector manufacturer and the nuclear plant's strategy for performance verification of nuclear instrumentation systems. Some manufacturers recommend periodic replacement of the detectors as often as once every five years, while others state that neutron detectors can be used for as long as 40 years if they are in good working condition. In the latter case, manufacturers sometimes recommend cable testing and static and/or dynamic performance monitoring as a means of verifying that the neutron detectors are in good working condition.

The dynamic response of neutron detectors can be monitored using the noise analysis technique. In fact, response time testing as a means of trending the performance of neutron detectors in nuclear power plants goes back nearly 30 years.

Results of such tests performed by the author and his company (AMS Corporation) in a U.S. nuclear power plant are shown in Table 3 for four ex-core neutron detectors, each with an upper and a lower sensor. As demonstrated by these results, the response time of the detectors increases during the first two decades and then stabilises. This is expected of neutron detectors, as well as other sensors. In addition to trending response times, the noise output of neutron detectors can be examined for signs of other problems in the nuclear instrumentation circuit, such as cable and connector anomalies. Continuous monitoring of neutron detectors can reveal problems in the neutron detector circuit so that plant personnel can schedule maintenance accordingly.

Flow monitoring

In a PWR plant, 50 thermocouples are located at the top of the core. These thermocouples are normally used to monitor the reactor coolant temperature at the exit of the core. However, they can also be used in conjunction with the ex-core neutron detectors to monitor flow through the reactor system.

More specifically, by cross-correlating signals from the ex-core neutron detectors and CETs, it is possible to identify the time that it takes for the reactor coolant to travel between the physical location of the neutron detectors and the thermocouple. The result, referred to as transit time (t), can be used with core geometric data to evaluate the reactor coolant’s flow through the system, identify flow anomalies, detect flow blockages, and perform a variety of other diagnostics. The same concept has been used for measurement of primary coolant flow in PWR plants. The procedure is referred to as transit time flow measurement (TTFM).

Furthermore, in a PWR plant, nitrogen-16 is produced by a fast-neutron-induced reaction with oxygen in the primary coolant water. This radioisotope of nitrogen has a half-life of about 7.13 seconds. In its decay back to oxygen, it emits high-energy gamma rays. As the N-16 is transported by the primary coolant, these gamma rays can be detected by radiation monitors (N-16 detectors) installed on the hot leg piping of the primary loop. The coolant flow can then be determined by measuring fluctuations in the intensity of the N-16 gamma radiation and analysing them with the cross-correlation method. An alternative to the cross-correlation technique for determining the transit time is to analyse the data in the frequency domain: the Fourier transform of the detector data yields the phase spectrum, from which the transit time can be determined.
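Both routes to the transit time can be sketched in Python. The sampling rate, the 20ms delay, and the white-noise detector signals below are synthetic stand-ins; real N-16 or neutron/thermocouple signals would require band-limiting and long records.

```python
import numpy as np

def transit_time_xcorr(upstream, downstream, fs):
    """Transit time as the lag that maximises the cross-correlation of
    the two detector signals."""
    up = np.asarray(upstream, dtype=float) - np.mean(upstream)
    dn = np.asarray(downstream, dtype=float) - np.mean(downstream)
    corr = np.correlate(dn, up, mode="full")
    lag = int(np.argmax(corr)) - (len(up) - 1)   # samples of delay
    return lag / fs

def transit_time_phase(upstream, downstream, fs, fmax):
    """Transit time from the slope of the cross-spectrum phase:
    for a pure delay tau, phi(f) = -2*pi*f*tau."""
    up = np.asarray(upstream, dtype=float) - np.mean(upstream)
    dn = np.asarray(downstream, dtype=float) - np.mean(downstream)
    cross = np.fft.rfft(dn) * np.conj(np.fft.rfft(up))
    freqs = np.fft.rfftfreq(len(up), d=1.0 / fs)
    sel = (freqs > 0) & (freqs < fmax)
    phase = np.unwrap(np.angle(cross[sel]))
    slope = np.polyfit(freqs[sel], phase, 1)[0]   # = -2*pi*tau
    return -slope / (2.0 * np.pi)
```

The flow velocity then follows as the detector separation divided by the transit time, which is the essence of TTFM.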

Over the last 40 years, an array of techniques has been developed for equipment and process condition monitoring. These techniques have been implemented in nuclear power plants mostly on an as-needed basis rather than for routine condition monitoring applications. Now, with the advent of fast data acquisition technologies and the proliferation of computers, advanced data processing algorithms, and software packages, condition monitoring can be performed routinely and efficiently.

Author Info:

H.M. Hashemian, founder, AMS Corporation, AMS Technology Center, 9119 Cross Park Drive, Knoxville, Tennessee 37923 USA. This paper is an extract of a nuclear engineering doctoral dissertation at Chalmers University, Sweden, which he successfully defended in May 2009.



[1] C. Meuwisse and C. Puyal, “Surveillance of neutron and thermodynamic sensors in EDF nuclear power plants,” paper presented at the CORECH Meeting, Chatou, France (1987).

[2] J.A. Thie, “Power reactor noise,” American Nuclear Society, La Grange Park, Illinois, USA (1981).

[3] H.M. Hashemian, “On-line testing of calibration of process instrumentation channels in nuclear power plants,” U.S. Nuclear Regulatory Commission, NUREG/CR-6343 (1995).

[4] H.M. Hashemian, et al., “Advanced instrumentation and maintenance technologies for nuclear power plants,” U.S. Nuclear Regulatory Commission, NUREG/CR-5501 (1998).