A signal-based condition indicator is a quantity derived from processing signal data. The condition indicator captures some feature of the signal that changes in a reliable way as system performance degrades. In designing algorithms for predictive maintenance, you use such a condition indicator to distinguish healthy from faulty machine operation. Or, you can use trends in the condition indicator to identify degrading system performance indicative of wear or another developing fault condition.

Signal-based condition indicators can be extracted using any type of signal processing, including time-domain, frequency-domain, and time-frequency analysis. Examples of signal-based condition indicators include:

The mean value of a signal that changes as system performance changes

A quantity that measures chaotic behavior in a signal, the presence of which might be indicative of a fault condition

The peak magnitude in a signal spectrum, or the frequency at which the peak magnitude occurs, if changes in such frequency-domain behavior are indicative of changing machine conditions

In practice, you might need to explore your data and experiment with different condition indicators to find the ones that best suit your machine, your data, and your fault conditions. There are many functions that you can use for signal analysis to generate signal-based condition indicators. The following sections summarize some of them. You can use these functions on signals in arrays or timetables, such as signals extracted from an ensemble datastore. (See Data Ensembles for Condition Monitoring and Predictive Maintenance.)

For some systems, simple statistical features of time signals can serve as condition indicators, distinguishing fault conditions from healthy conditions. For example, the average value of a particular signal (`mean`) or its standard deviation (`std`) might change as system health degrades. Or, you can try higher-order moments of the signal such as `skewness` and `kurtosis`. With such features, you can try to identify threshold values that distinguish healthy operation from faulty operation, or look for abrupt changes in the value that mark changes in system state.
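As an illustration of this idea, the following sketch computes these four statistical features in Python (assuming NumPy and SciPy; the signals and fault model are hypothetical, not drawn from a real machine) for a healthy signal and for one contaminated with impulsive transients:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical signals: smooth "healthy" operation modeled as Gaussian
# noise, and a "faulty" signal with periodic impulsive transients such
# as a damaged bearing might produce (illustrative only).
healthy = rng.normal(0.0, 1.0, 5000)
faulty = healthy.copy()
faulty[::200] += rng.normal(0.0, 8.0, faulty[::200].size)

def time_domain_features(x):
    """Simple statistical condition indicators of a time signal."""
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "skewness": stats.skew(x),
        # Pearson definition: a Gaussian signal has kurtosis close to 3
        "kurtosis": stats.kurtosis(x, fisher=False),
    }
```

Impulsive faults typically drive the kurtosis well above the Gaussian baseline of 3, so a simple threshold on this feature can separate the two states.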

Other functions you can use to extract simple time-domain features include:

In systems that exhibit chaotic signals, certain nonlinear properties can indicate sudden changes in system behavior. Such nonlinear features can be useful in analyzing vibration and acoustic signals from systems such as bearings, gears, and engines. They can reflect changes in the phase-space trajectory of the underlying system dynamics that occur even before a fault condition arises. Thus, monitoring a system's dynamic characteristics using nonlinear features can help identify potential faults earlier, such as when a bearing is slightly worn.

Predictive Maintenance Toolbox™ includes several functions for computing nonlinear signal features. These quantities represent different ways of characterizing the level of chaos in a system. An increase in chaotic behavior can indicate a developing fault condition.

`lyapunovExponent` — Compute the largest Lyapunov exponent, which characterizes the rate of separation of nearby phase-space trajectories.

`approximateEntropy` — Estimate the approximate entropy of a time-domain signal. The approximate entropy quantifies the amount of regularity or irregularity in a signal.

`correlationDimension` — Estimate the correlation dimension of a signal, which is a measure of the dimensionality of the phase space occupied by the signal. Changes in correlation dimension indicate changes in the phase-space behavior of the underlying system.

The computation of these nonlinear features relies on the `phaseSpaceReconstruction` function, which reconstructs the phase space containing all dynamic system variables.
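To make the notion of approximate entropy concrete, here is a minimal Python sketch of the Pincus algorithm (assuming NumPy; this is a conceptual illustration, not the `approximateEntropy` toolbox implementation):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a time series: a regularity
    statistic that grows as the signal becomes more irregular.
    Sketch of the Pincus algorithm; r defaults to 0.2 * std(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # Delay-embed the signal into m-dimensional vectors
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Fraction of vectors within tolerance r of each vector
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A periodic signal yields a small ApEn, while broadband noise yields a much larger value, which is why an increase in this indicator can flag developing irregularity.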

The example Using Simulink to Generate Fault Data uses both simple time-domain features and these nonlinear features as candidates for diagnosing different fault conditions. The example computes all features for every member of a simulated data ensemble, and uses the resulting feature table to train a classifier.

For some systems, spectral analysis can generate signal features that are useful for distinguishing healthy and faulty states. Some functions you can use to compute frequency-domain condition indicators include:

The example Condition Monitoring and Prognostics Using Vibration Signals uses such frequency-domain analysis to extract condition indicators.

For a list of functions you can use for frequency-domain feature extraction, see Identify Condition Indicators.
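As a simple illustration of the kind of frequency-domain indicator described above, this Python sketch (assuming NumPy; the 50 Hz test tone in the usage note is hypothetical) extracts the peak spectral magnitude and the frequency at which it occurs:

```python
import numpy as np

def spectral_peak(x, fs):
    """Return (peak magnitude, peak frequency) of the one-sided
    amplitude spectrum: two candidate frequency-domain condition
    indicators. Conceptual sketch only."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x)) / n   # amplitude per bin
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(spectrum[1:]) + 1         # ignore the DC bin
    return spectrum[k], freqs[k]
```

For a unit-amplitude 50 Hz sine sampled at 1 kHz, the peak lands in the 50 Hz bin with magnitude 0.5 (half the amplitude, since power splits between positive and negative frequencies). A drift of this peak frequency over time could indicate changing machine conditions.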

Time-frequency spectral properties are another way to characterize changes in the spectral content of a signal over time. Available functions for computing condition indicators based on time-frequency spectral analysis include:

`pkurtosis` — Compute *spectral kurtosis*, which characterizes a signal by differentiating stationary Gaussian signal behavior from nonstationary or non-Gaussian behavior in the frequency domain. Spectral kurtosis takes small values at frequencies where only stationary Gaussian noise is present, and large positive values at frequencies where transients occur. Spectral kurtosis can be a condition indicator on its own. You can use `kurtogram` to visualize the spectral kurtosis before extracting features with `pkurtosis`. As preprocessing for other tools such as envelope analysis, spectral kurtosis can supply key inputs such as the optimal bandwidth.

`pentropy` — Compute *spectral entropy*, which characterizes a signal by providing a measure of its information content. Where you expect smooth machine operation to result in a uniform signal such as white noise, higher information content can indicate mechanical wear or faults.
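The idea behind spectral entropy can be sketched in a few lines of Python (assuming NumPy and SciPy; this is a conceptual illustration, not MATLAB's `pentropy`): treat each spectrogram column as a probability distribution over frequency and compute its normalized Shannon entropy.

```python
import numpy as np
from scipy.signal import spectrogram

def spectral_entropy(x, fs, nperseg=256):
    """Normalized spectral entropy per time slice. A flat spectrum
    (white noise) gives values near 1; a pure tone gives values near 0.
    Conceptual sketch only."""
    _, _, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    p = sxx / np.sum(sxx, axis=0, keepdims=True)   # spectrum as a pdf
    h = -np.sum(p * np.log2(p + 1e-12), axis=0)    # Shannon entropy
    return h / np.log2(sxx.shape[0])               # scale to [0, 1]
```

Because the result is one value per time slice, you can also track this quantity over an operating run and watch for a sustained rise.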

The example Rolling Element Bearing Fault Diagnosis uses spectral features of fault data to compute a condition indicator that distinguishes two different fault states in a bearing system.

Time-frequency moments provide an efficient way to characterize
*nonstationary* signals, signals whose frequencies
change in time. Classical Fourier analysis cannot capture the time-varying frequency
behavior. Time-frequency distributions generated by short-time Fourier transform or
other time-frequency analysis techniques can capture the time-varying behavior.
Time-frequency moments provide a way to characterize such time-frequency
distributions more compactly. There are three types of time-frequency
moments:

`tfsmoment` — Conditional spectral moment, which is the variation of the spectral moment over time. For example, for the second conditional spectral moment, `tfsmoment` returns the instantaneous variance of the frequency at each point in time.

`tftmoment` — Conditional temporal moment, which is the variation of the temporal moment with frequency. For example, for the second conditional temporal moment, `tftmoment` returns the variance of the signal at each frequency.

`tfmoment` — Joint time-frequency moment. This scalar quantity captures the moment over both time and frequency.
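The first two conditional spectral moments can be sketched directly from a spectrogram in Python (assuming NumPy and SciPy; this illustrates the concept, not the `tfsmoment` implementation): normalize each time slice into a distribution over frequency, then take its mean and variance.

```python
import numpy as np
from scipy.signal import spectrogram

def conditional_spectral_moments(x, fs, nperseg=256):
    """First and second conditional spectral moments of a signal:
    the instantaneous mean frequency and the instantaneous frequency
    variance at each spectrogram time slice. Conceptual sketch only."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    p = sxx / np.sum(sxx, axis=0, keepdims=True)            # pdf over frequency
    mean_f = np.sum(f[:, None] * p, axis=0)                 # 1st conditional moment
    var_f = np.sum((f[:, None] - mean_f) ** 2 * p, axis=0)  # 2nd central moment
    return t, mean_f, var_f
```

For a pure 100 Hz tone the instantaneous mean frequency stays near 100 Hz at every slice; for a chirp it tracks the sweep.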

You can also compute the instantaneous frequency as a function of time using `instfreq`.