
5 Signal Analysis and Response Measurement


Umberto Melia, Erik W. Jensen and José F. Valencia



Introduction


Monitoring during anaesthesia is based on the measurement of physiological signals that are recorded during surgical or other procedures requiring anaesthetic drug administration. The methods used to process these signals, calculate derived parameters and develop the different indicators of the physiological status of the patient or depth of anaesthesia are difficult to understand for anaesthesia providers without an engineering background. The present chapter aims to fill this gap by introducing the reader to the main concepts related to signal analysis commonly used to measure the response of human physiological systems under anaesthesia.


The purpose is to describe how biomarkers, signals and responses from the human body can be recorded, collected and mathematically analysed to quantify drug effect. The reader will learn how a signal from the body is transformed into a parameter that is displayed on the screen and how that parameter correlates with human responses or outcomes. The examples used to introduce these concepts will pertain to the measurement of (components of) ‘depth’ of anaesthesia that include: (a) the autonomic nervous system (ANS) response to noxious stimuli, such as haemodynamic responses [1], heart rate variability (HRV) [1], plethysmographic responses [2], and pulse wave analysis [2]; and (b) the electroencephalogram (EEG). The EEG is a direct measurement of brain activity from which indices of hypnotic effect and even a measure of pain/nociception have been and are being developed [3, 4, 5, 6, 7].



Signal Analysis: An Overview


A signal is commonly defined as a physical quantity or a source of information that is a function of independent variables such as time and/or space. For example, a signal that varies with time (e.g. the EEG) is represented as a function of the time variable, ‘t’, which is denoted as ‘x(t)’, with ‘x’ being the value of the signal at the time ‘t’.


Signals are classified by different features. A signal can be periodic or non-periodic. A periodic signal contains a sequence of values that is repeated after a fixed time period, ‘T’. The reciprocal of the ‘T’ value is defined as the fundamental frequency, ‘f’, of the signal. A signal is called deterministic if it can be expressed by a mathematical expression, while a signal is random if it is not predictable.


By using different signal processing techniques, it is possible to calculate several parameters that extract the information a signal can provide. There are two commonly used approaches: analysis in the time domain and analysis in the frequency domain. The information contained in the time domain expresses the occurrence of events through variations in signal amplitude. This is the simplest and most intuitive way to represent a signal. In contrast, information in the frequency domain is more indirect and can be extracted by several methods. One of the most important is Fourier analysis, which is based on the theory that every signal (periodic or non-periodic, random or deterministic) can be decomposed into a sum of (infinitely many) periodic signals with different frequencies. The range of these frequency values is defined as the bandwidth of the signal.


The purpose of this section is to list and illustrate the basic concepts that are related to the processing and analysis of physiological signals.



Recording and Representation of Physiological Signals


A physiological signal is a variation in biological electric potential that can be recorded from any part of the body. Recording a physiological signal can provide relevant information to assess the underlying physiological system that generates it.


Physiological signals are acquired by a device that measures the electrical activity using sensors that are placed in different parts of the body. Because the amplitude of this type of signal is quite weak compared to the electrical activity from other sources, these devices are equipped with amplifiers. Furthermore, an ideal recording system should be able to separate the physiological components of interest from any other unwanted electrical activity, which is considered to be noise. Accordingly, the input signal ‘x(t)’ of a recording device can be written as the sum of two terms (Equation 1), the physiological signal of interest ‘s(t)’ and a noise term ‘n(t)’:



x(t) = s(t) + n(t)    (1)

where ‘n(t)’ is any kind of electrical activity, physiological or not, that contaminates the signal of interest ‘s(t)’ and might cause misinterpretation of its real features.


Before being processed, the recorded signal must also be converted from analogue to digital. Biological systems generate signals that are continuous over an interval of time. In order to be processed by software, the analogue signal has to be converted into a digital signal, that is, a sequence of numbers that is discrete in time and amplitude. Thus, the signal x(t) is sampled by taking discrete values of x at a fixed time interval Ts. The reciprocal of the period Ts is defined as the sampling frequency of the digital signal, fs. A consequence of the digitization process is that the frequencies that can be analysed from the resulting digital signal lie only in the range from 0 to half of the fs value, according to the Nyquist theorem.

The signal amplitude is also discretized during recording, because the voltage values are converted into digital numbers that are discrete and have a limited range. The number of available discrete values and the range of the analogue values determine the resolution of the recording system (see ‘EEG Recording in the Operating Room’ in this chapter for more explanation). A low resolution, which occurs when a small number of discrete values is used to represent the whole range of the analogue signal, implies that a range of amplitudes of the analogue signal will be mapped to the same discrete value. Therefore, a variation of the analogue signal smaller than the resolution may go undetected after the analogue-to-digital conversion. Resolution can be enhanced either by increasing the number of available discrete values or by limiting the analogue input range of the recording system. The former increases the computational cost of the system, while the latter causes loss of high-amplitude signal values. For these reasons, before recording and converting a physiological signal, it is very important to know the bandwidth and the amplitude of the signal of interest in order to design an acquisition system with an appropriate sampling frequency, amplitude range and resolution. Fig. 5.1 shows the block diagram of a typical system that records and processes a signal.





Fig. 5.1 Block diagram of a typical system that records and processes a physiological signal. x(t) = analogue signal; xs(t) = processed digital signal.
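To make these concepts concrete, the following Python sketch digitizes a synthetic ‘analogue’ signal; the sinusoid, the sampling rate, the input range and the number of quantization bits are illustrative assumptions, not values prescribed in the text.

```python
import numpy as np

# 'Analogue' signal: a 5 Hz sinusoid of +/-100 microvolt amplitude,
# approximated on a very fine time grid.
t_fine = np.arange(0.0, 1.0, 1e-4)
x_analogue = 100e-6 * np.sin(2 * np.pi * 5 * t_fine)

# Sampling: fs must exceed twice the highest frequency of interest (Nyquist theorem).
fs = 100.0                         # sampling frequency (samples per second)
Ts = 1.0 / fs                      # sampling interval
t = np.arange(0.0, 1.0, Ts)
x_sampled = 100e-6 * np.sin(2 * np.pi * 5 * t)

# Quantization: map an assumed +/-150 microvolt input range onto 2**n_bits levels.
n_bits = 8
v_range = 300e-6                   # total analogue input range
resolution = v_range / 2**n_bits   # smallest amplitude step that can be represented
x_digital = np.round(x_sampled / resolution) * resolution

print(f"Nyquist frequency: {fs / 2} Hz, amplitude resolution: {resolution:.2e} V")
```

Amplitude changes smaller than `resolution` are lost in `x_digital`, which is the loss of detail described above for low-resolution systems.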



Filtering


An important aspect of recording and processing digital signals is the elimination or reduction of ‘noise’, the unwanted component n(t) that can affect the interpretation of the physiological information contained in s(t). When the bandwidth of the original signal s(t) differs from the bandwidth of the noise n(t), it is possible to eliminate the frequency components of the noise by using filters. A filter is a signal processing tool that reduces or removes the energy of a signal in certain frequency bands. There are several types of filters and different criteria to classify them, such as linear or nonlinear, time-invariant or time-variant, causal or non-causal, analogue or digital, discrete-time (sampled) or continuous-time, passive or active, infinite impulse response (IIR) or finite impulse response (FIR). An important feature of a filter is its frequency response, which describes which frequency bands the filter allows to pass (the passband) and which it rejects (the stopband). Table 5.1 lists the most relevant filter types with a short description of which frequency band is passed and which is reduced.




Table 5.1. Common filter types and their behaviour.

FILTER      PASSES                                                        REDUCES
Low-pass    Frequencies lower than a specified frequency                  Frequencies higher than that frequency
High-pass   Frequencies higher than a specified frequency                 Frequencies lower than that frequency
Band-pass   Frequencies within a specified frequency band                 All frequencies outside that band
Band-stop   All frequencies outside a specified frequency band            Frequencies within that band
Notch       All frequencies except one narrow band                        One specific frequency
Comb        Regularly spaced narrow frequency bands                       All frequencies outside those bands
All-pass    All frequencies (only the phase of the output is modified)    None

The transition band of a filter is defined as the band between the passband and the stopband. The cut-off frequency of the filter is the frequency at the division between the passband and the transition band, at which the filtered signal is attenuated by 3 dB compared to the original signal. This means that a filtered sinusoidal signal with a frequency equal to the cut-off frequency will have half the power of the original (unfiltered) signal.


Figure 5.2 shows an example of the frequency response of a low-pass filter with a cut-off frequency of 30 Hz, together with an example of a periodic signal before and after the filtering process.





Fig. 5.2 (a) Frequency response of a low-pass filter with cut-off frequency at 30 Hz. (b) A periodic signal sum of two sinusoids at 2 Hz and 50 Hz before and (c) after filtering by the 30 Hz cut-off low-pass filter.
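A minimal Python sketch of the example in Fig. 5.2, assuming a fourth-order Butterworth design from SciPy (the filter order and sampling frequency are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                   # assumed sampling frequency (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)   # 2 Hz + 50 Hz

# Low-pass Butterworth filter with a 30 Hz cut-off (-3 dB point).
b, a = butter(N=4, Wn=30.0, btype="low", fs=fs)
x_filtered = filtfilt(b, a, x)               # zero-phase (forward-backward) filtering

# The 50 Hz component lies in the stopband and is strongly attenuated,
# while the 2 Hz component passes almost unchanged.
```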



Time and Frequency Domain Analysis


As mentioned above, a signal can be analysed in time and/or in frequency domains. The following sections provide the basic concepts of the tools most commonly used to represent the signal features in time and frequency domains. First, the parameters that can be extracted from time domain analysis are described. Then, a selection of methods for signal representation in frequency and in both the time and frequency domains are introduced.



Time Domain Analysis

Time domain analysis involves analysing the data over a period of time. Periodic signals can be easily characterized by calculating the maximum or minimum amplitude, the mean value and the root mean square (RMS) amplitude with respect to time. The RMS amplitude is defined as the square root of the arithmetic mean of the squares of the signal amplitude; it represents the effective amplitude of a varying signal. For a signal with a mean value of 0, the RMS amplitude equals the standard deviation of the signal amplitude in a defined time frame. A physiological signal that can be easily analysed in the time domain is the electrocardiogram (ECG). Figure 5.3 illustrates the maximum, mean and root mean square values and the duration of a PQRST segment calculated from an ECG signal, as well as the interval between two R peaks.





Fig. 5.3 An example of time domain analysis of a PQRST wave from an ECG signal: (a) mean, maximum, root mean square values and PQRST time duration, (b) time interval between two R peaks.
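The quantities in Fig. 5.3 can be computed with a few lines of code. The sketch below is a simple illustration; the function names and the assumption that R-peak sample positions are already available are hypothetical.

```python
import numpy as np

def time_domain_features(x, fs):
    """Mean, maximum, RMS amplitude and duration of a signal segment x sampled at fs Hz."""
    mean_val = np.mean(x)
    max_val = np.max(x)
    rms_val = np.sqrt(np.mean(x ** 2))   # root mean square amplitude
    duration = len(x) / fs               # segment duration in seconds
    return mean_val, max_val, rms_val, duration

def rr_intervals(r_peak_indices, fs):
    """Time intervals (in seconds) between consecutive R peaks, given their sample indices."""
    return np.diff(r_peak_indices) / fs
```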


However, most physiological signals have random features that can be approximated by a normal, Gaussian distribution. In this case, signals can be summarized by calculating the mean and standard deviation of the amplitude, assuming that the investigated periods have more or less constant statistical properties. Other statistical methods that are widely applied to characterize a signal in the time domain are kurtosis and the autocorrelation function. Kurtosis is a descriptor of the shape of the probability distribution of a random variable that is used to quantify its ‘tailedness’. The autocorrelation function is calculated as the correlation of a signal with a delayed copy of itself as a function of delay. It represents the similarity between observations as a function of the time lag between them. The analysis of autocorrelation is mostly used for finding repeating patterns or identifying the fundamental frequency in a signal.
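As a brief illustration, the sketch below estimates the normalized autocorrelation function and the kurtosis of a signal segment; it is a minimal example, not a complete statistical characterization.

```python
import numpy as np
from scipy.stats import kurtosis

def autocorrelation(x, max_lag):
    """Normalized autocorrelation of x for lags 0..max_lag."""
    x = x - np.mean(x)
    full = np.correlate(x, x, mode="full")       # correlation at all possible lags
    mid = len(full) // 2                         # index of zero lag
    return full[mid:mid + max_lag + 1] / full[mid]

# scipy.stats.kurtosis quantifies the 'tailedness' of the amplitude
# distribution (0 for a Gaussian with the default Fisher definition).
x = np.random.randn(1000)
print(kurtosis(x), autocorrelation(x, 20))
```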


A particular use of time domain parameters in signal analysis occurs when an evoked potential is studied, which is a cortical response to a specific stimulus. The traditional methods for analysing evoked potentials are based on measurements of the amplitude and duration of the waveform. Since the amplitude of these evoked potentials is smaller than the EEG background activity, epochs that contain evoked responses are often averaged. Evoked potentials are widely used in neuromonitoring during different types of surgery to help minimize iatrogenic trauma. Auditory evoked potentials have also been used to assess the depth of hypnotic effect during anaesthesia.
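A minimal sketch of evoked potential averaging, assuming the EEG has already been cut into stimulus-locked epochs of equal length:

```python
import numpy as np

def average_evoked_potential(epochs):
    """Average stimulus-locked epochs (array of shape n_epochs x n_samples).

    The background EEG is uncorrelated with the stimulus and tends to cancel
    out, so the evoked response emerges from the averaged trace.
    """
    return np.mean(np.asarray(epochs), axis=0)
```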



Frequency Domain Analysis: Fourier Transform

The most important method for signal analysis in the frequency domain is the Fourier transform. The Fourier transform is a mathematical method that decomposes a function of time (a signal) into the frequencies that make it up, allowing the energy or the power of the signal to be computed at each frequency. In signal processing, the energy of a signal is defined as the integral of the square of its amplitude in the time domain, while the power is defined as the energy per unit of time. Parseval’s theorem shows that the signal energy can also be computed as the sum, across all frequencies, of the spectral energy density obtained with the Fourier transform. For example, for a signal that represents the electric potential (in volts) of a biological phenomenon, the units of measurement for the signal energy would be volt² × seconds (equivalently volt²/Hz) and those of the power would be volt². The distribution of frequencies that results from Fourier analysis is called the spectrum, or the power spectral density (PSD) when it is calculated per unit of frequency. The sum of the PSD values over all frequencies gives the total power of the signal. The PSD is often normalized by its maximum value or by the area under the curve, so that it can be expressed in arbitrary units (AU).


In digital signal processing applications, the Fourier transform is usually computed with an optimized algorithm called the Fast Fourier Transform (FFT). Instead of applying the plain definition of the Fourier transform, this algorithm exploits mathematical properties that reduce the computational cost and complexity while giving the same result. Figure 5.4 shows an example of a sum of sinusoidal signals with different frequencies and the corresponding PSD in the frequency domain, obtained with the FFT. Note that the PSD shows peaks at the frequency values corresponding to the fundamental frequencies of the sinusoids that compose the signal, while it tends to 0 at the other frequencies.





Fig. 5.4 The periodic signal x (middle pane) can be decomposed into three sinusoids x1, x2 and x3 at 10, 20 and 50 Hz, respectively (upper pane). The power spectral density (lower pane) is computed by the Fourier transform, and provides a measure of how much each frequency contributes to the original signal.
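A sketch along the lines of Fig. 5.4, using NumPy’s FFT to estimate a periodogram-style PSD (the signal composition and sampling rate are illustrative assumptions):

```python
import numpy as np

fs = 500.0                                    # assumed sampling frequency (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
x = (np.sin(2 * np.pi * 10 * t)
     + np.sin(2 * np.pi * 20 * t)
     + np.sin(2 * np.pi * 50 * t))

X = np.fft.rfft(x)                            # FFT of the real-valued signal
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)   # corresponding frequency axis
psd = np.abs(X) ** 2 / (fs * len(x))          # proportional to V**2/Hz (one-sided,
                                              # factor-of-two correction omitted)

# psd shows peaks at 10, 20 and 50 Hz and is close to zero elsewhere.
```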


In the case of digital real-world signals, several limitations can affect the PSD estimation. First, the frequency resolution increases with the length of the signal segments from which the PSD is estimated: longer segments provide better frequency resolution. However, excessively long segments may not be stationary, which affects the estimation of the PSD. Furthermore, the theory of the PSD is based on signals of infinite length, while real signals have finite length. The effect of this limitation on the PSD estimate is called spectral leakage and consists of the generation of new (spurious) frequency components around the main frequency components of the spectrum. The discretization of time also introduces error into the PSD estimate, since, for example, frequencies that are not an integer multiple of the frequency resolution (the reciprocal of the segment length) are not evaluated exactly. For these reasons, the PSD is often estimated using averaging methods in order to minimize the error in the estimation of the spectral power. In practice, the EEG is normally divided into segments of fixed length, with or without overlap, and each segment is multiplied by a window function of a specific shape that reduces spectral leakage. The final spectral density is obtained as the average of the spectral densities estimated on all the processed segments. Hence, the quality of the PSD estimate depends on the shape of the windows, their length and how well they are adapted to the characteristics of the signal.
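In practice this averaging is commonly done with Welch’s method. A minimal sketch, assuming SciPy and an EEG-like sampling rate (the random array stands in for a real recording):

```python
import numpy as np
from scipy.signal import welch

fs = 128.0                                 # assumed sampling frequency (Hz)
eeg = np.random.randn(30 * int(fs))        # placeholder for a 30 s EEG segment

# Split into 4 s Hann-windowed segments with 50% overlap and average the
# segment spectra to reduce leakage and the variance of the estimate.
freqs, psd = welch(eeg, fs=fs, window="hann",
                   nperseg=int(4 * fs), noverlap=int(2 * fs))
```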


One of the parameters that can be extracted from the PSD is the spectral power in specific frequency bands, calculated as the area under the PSD curve in the desired bands. For a digital signal, the spectral power is often calculated by summing the power at each frequency in a defined frequency range. Another parameter is the centroid, or mean frequency, of the PSD curve, which can be computed over the entire frequency spectrum or within specific frequency bands. The mean frequency is the sum of each PSD value multiplied by the respective frequency, divided by the sum of all the PSD values (the total power). A parameter that quantifies one aspect of the frequency characteristics is the ‘spectral edge frequency’ or SEF, defined as the frequency below which a specified percentage of the total power of the signal is contained (usually 50%, or between 75% and 95%). The SEF50 is the median frequency of the spectrum, that is, the frequency that divides the spectrum into two parts with equal power, and it has been used as a surrogate measure of anaesthetic drug effect.
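A minimal sketch of these spectral parameters, computed from a PSD such as the one returned by the Welch estimate above (the band limits and edge percentage are illustrative choices):

```python
import numpy as np

def spectral_parameters(freqs, psd, band=(0.5, 30.0), edge=0.95):
    """Band power, mean frequency (centroid) and spectral edge frequency."""
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f, p = freqs[mask], psd[mask]
    band_power = np.trapz(p, f)                # area under the PSD curve in the band
    mean_freq = np.sum(f * p) / np.sum(p)      # spectral centroid
    cum_power = np.cumsum(p) / np.sum(p)       # normalized cumulative power
    sef = f[np.searchsorted(cum_power, edge)]  # e.g. SEF95; edge=0.5 gives SEF50 (median frequency)
    return band_power, mean_freq, sef
```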



Spectrogram

The main disadvantage of the Fourier transform is the lack of information about how the energy at each frequency evolves over time. Furthermore, because the theory of the Fourier transform is based on comparing the signal with sinusoids that extend over the whole time domain, it also requires stationarity, which means that the statistical properties of the signal do not change during the entire period of observation. If the signal is not stationary, an isolated alteration can affect the whole Fourier spectrum. Since the Fourier transform does not include time information, even a short-lived event is represented in the frequency domain as part of the signal spectrum and can be misinterpreted as a component that is present during the entire recording period.


Time–frequency analysis is a tool that permits description of the evolution of the periodicity and frequency components with respect to time. Different time–frequency analysis methods with different properties have been proposed over the last decades [8]. The result of the time–frequency analysis can be visualized in a spectrogram, a 3D representation of the time, frequency and energy of the signal. A common 2D display of the 3D spectrogram represents time on the horizontal axis, frequency on the vertical axis and the amplitude of a particular frequency at a particular time by the colour of each point.


The simplest method to obtain a spectrogram is the short time Fourier transform, which consists of dividing the signal into short segments of equal length and then computing the Fourier transform of each segment separately. Figure 5.5 shows an example of two sinusoidal signals, their PSDs and their spectrograms. In the signal in Fig. 5.5a the frequency components evolve with respect to time, while the signal in Fig. 5.5b contains the same frequency components at all time instants. With the spectrogram it is possible to observe the evolution of the frequencies of both signals over time, whereas the PSD computed by the Fourier transform represents all frequencies without any time information. Although the two signals are different, it is almost impossible to distinguish them from their respective PSDs, because these have the same morphology.





Fig. 5.5 (a) A periodic signal composed of different sinusoids whose frequencies change over time with values: 2, 10, 20 and 50 Hz, the power spectral density and the spectrogram; (b) another periodic signal sum of four sinusoids at 2, 10, 20 and 50 Hz whose frequencies do not change over time, the power spectral density and the spectrogram.
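A short time Fourier transform spectrogram can be computed as in the sketch below, which mimics the time-varying signal of Fig. 5.5a under assumed window and overlap settings:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500.0                                     # assumed sampling frequency (Hz)
t = np.arange(0.0, 8.0, 1.0 / fs)
quarter = len(t) // 4
# Frequency content changing over time: 2 Hz, then 10, 20 and 50 Hz.
x = np.concatenate([np.sin(2 * np.pi * f0 * t[:quarter]) for f0 in (2, 10, 20, 50)])

# 1 s Hann windows with 50% overlap.
f, time_bins, Sxx = spectrogram(x, fs=fs, window="hann",
                                nperseg=int(fs), noverlap=int(fs) // 2)
# Sxx[i, j] is the power at frequency f[i] in the window centred at time_bins[j].
```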


Nonetheless, the main limitation of the short time Fourier transform remains that it is not possible to simultaneously obtain a very good time resolution and a very good frequency resolution. For that reason, methods based on quadratic transformation and signal windowing, such as the Cohen class distributions, permit improvement of the performance of the time–frequency analysis. A detailed discussion, however, is outside the scope of this chapter.


All the parameters explained in this section that can be computed from the PSD (power, mean frequency or centroid and SEF) can also be calculated by using the time–frequency representation of the signal, obtaining their evolution over time.



Wavelet

The wavelet transform constitutes a specific class of time–frequency analysis. Its main difference from Fourier-based time–frequency representation techniques is that the signal is not decomposed into sines and cosines. The wavelet transform uses functions that are localized in both time and frequency, and the concept of frequency is replaced by the concept of time scale. Hence, while traditional spectral analysis represents a signal in the frequency domain, the frequency being the inverse of time, wavelet analysis represents a signal in the time-scale domain, the time scale being the time divided by a predefined factor. Instead of using sinusoids of infinite duration as the Fourier transform does, the wavelet transform performs a mathematical projection of the signal onto wave oscillations whose amplitude begins at zero, increases and then decreases back to zero, with different time durations. These oscillations are called wavelets, and their duration determines the time scale at which the signal is represented. By using wavelets of different shapes and durations it is possible to recognize or detect specific patterns in a signal. The continuous wavelet transform (CWT) is an implementation of the wavelet transform using arbitrary scales and almost arbitrary wavelets. It can also be used for discrete time series, with the limitation that the smallest wavelet translation step must equal the sampling interval. The discrete wavelet transform (DWT) decomposes the signal by progressively dividing the bandwidth by a power of two at each level of decomposition. Hence, the time-scale factor is always a power of two, and the components represented at the higher levels contain only the lower frequencies of the original signal, so this kind of representation can also act as a bank of band-pass and low-pass filters.


The choice of the wavelet used for signal decomposition is the most important point. Depending on this choice, it is possible to influence the time and frequency resolution of the result and to steer the focus of the analysis towards specific patterns of the signal. Wavelet techniques are mostly used to detect known waveforms in a noisy background signal. An example of a wavelet application is electro-oculogram (EOG) pattern recognition in the EEG signal. Since eye blinking or eye movement produces characteristic low-frequency patterns mimicking delta waves, wavelet decomposition of the EEG at large time scales (low frequencies) can help detect the EOG components. In this case the wavelet of choice should be the one most similar to the EOG waves.


Figure 5.6 shows an example of a DWT decomposition of an EEG segment containing ocular activity. It can be noted that at high decomposition levels (lower frequencies) the EOG components are more visible and separated from the EEG activity, while at low decomposition levels (higher frequencies) the EEG activity is dominant. Another option for EOG detection with wavelet transform can be the design of a specific CWT as a model of experimental EOG recorded data.





Fig. 5.6 An example of (a) a discrete wavelet transform that is similar to EOG activity, (b) an EEG segment that contains EOG activity and (c) the results of the discrete wavelet decomposition at levels three to eight. The result of the DWT in (c) is obtained by mathematical projection of the signal in (b) onto the wavelet in (a), in which the time (x-axis) is scaled by six different factors: Lev3: scale factor 2³, Lev4: scale factor 2⁴, Lev5: scale factor 2⁵, Lev6: scale factor 2⁶, Lev7: scale factor 2⁷, Lev8: scale factor 2⁸. It can be observed that at the higher time scales, 2⁶, 2⁷ and 2⁸, the slower oscillations of the signal in (b), whose shape is similar to the wavelet shape in (a), are more clearly visible.
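A minimal DWT sketch in the spirit of Fig. 5.6, assuming the PyWavelets package and a Daubechies wavelet as an illustrative stand-in for the EOG-like wavelet shown in the figure:

```python
import numpy as np
import pywt

fs = 256.0                                  # assumed sampling frequency (Hz)
eeg = np.random.randn(10 * int(fs))         # placeholder for an EEG segment with blinks

# Eight-level discrete wavelet decomposition; 'db4' is an arbitrary example wavelet.
coeffs = pywt.wavedec(eeg, wavelet="db4", level=8)
# coeffs[0] is the level-8 approximation (lowest frequencies, time scale 2**8);
# coeffs[1:] are the detail coefficients from level 8 down to level 1.
# Slow, high-amplitude EOG components concentrate in the highest levels.
```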


Figure 5.7 shows an example of an EOG wavelet generated from a real EEG recording and the CWT of an EEG window containing EOG patterns. As can be seen, the energy of the wavelet representation (colour scale on the right) is higher on the x-axis at the times of blinking and on the y-axis at the time scales that match the duration of the blinks.





Fig. 5.7 An example of (a) continuous wavelet transform that simulates the EOG pattern, (b) an EEG segment that contains EOG activity and (c) the results of the continuous wavelet decomposition.



Nonlinear Analysis


The state of a dynamical system (its output) is given by a set of variables (inputs) that describe it at a particular time. The system is classified as linear if the change in the output is proportional to the change in its inputs, acting individually or in combination; otherwise, the system is defined as nonlinear. Most physiological systems are inherently nonlinear, showing outputs that may appear chaotic, unpredictable or counterintuitive, in contrast with the much simpler linear systems. Although nonlinear systems can be approximated by linear equations over some range of input values, this procedure may hide important features of the system.


Physiological signals can also be adequately described with methods derived from chaos theory and nonlinear dynamics analysis. Most of these methods are based on the concepts of entropy, fractals, symbolic dynamics and Poincaré plots. They are mathematical methods used to quantify nonlinear dynamics and system complexity, and their outputs can be correlated with physiological conditions.
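As a concrete example of an entropy-based measure, the sketch below implements a simplified sample entropy estimator; the template handling is a common approximation rather than the exact canonical definition.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Simplified sample entropy: negative log of the conditional probability that
    sequences similar for m points remain similar for m + 1 points.
    r is the tolerance, expressed as a fraction of the standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= tol) - 1       # exclude the self-match
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Lower values indicate a more regular, predictable signal; higher values indicate greater complexity.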


The fundamental assumption of nonlinear techniques is that the physiological signals are generated by nonlinear deterministic systems with nonlinear coupling interactions, for example between neuronal populations in case of the EEG. Every neuron can be represented as a signal source and the EEG is the signal that is the result of the nonlinear interactions between the signals of all sources. Hence, the neuronal populations can be represented as a nonlinear deterministic system with nonlinear coupling interactions whose output is the EEG signal. The analysis of the EEG signal by nonlinear techniques permits one to assess the features and the state of the nonlinear system that has generated it, and thus may be used to assess the physiological state of the neurological system. In a complex dynamic system, a large number of interrelated variables are involved. The state of the system at a particular moment in time can be represented by a point in a space (the state space or phase space) with as many dimensions as there are variables. Nonlinear analysis is used to convert data from one dimension (the signal) into a multidimensional phase space that expresses each state of the system by a point, with as many coordinates as the values of the governing variables that are used to describe this specific state. When the system is observed for a long period of time, the sequence of those points in the phase space allows one to obtain a subspace called the attractor of the system. The nonlinear measures are the methods that are used to quantify the geometric and dynamical properties of the attractor.
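The reconstruction of such a phase space from a single recorded signal is commonly done by time-delay embedding, sketched below (the embedding dimension and delay are arbitrary illustrative values):

```python
import numpy as np

def delay_embedding(x, dim=3, tau=10):
    """Phase-space trajectory reconstructed from a scalar signal by time-delay
    embedding: each point is (x[i], x[i + tau], ..., x[i + (dim - 1) * tau])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# The rows of the returned array are points tracing the system's attractor,
# whose geometric and dynamical properties the nonlinear measures quantify.
```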


In general, nonlinear approaches that have mostly been applied to various physiological data are based on attractor feature computation [9, 10, 11, 12, 13]. Table 5.2 shows a list of the parameters that are involved in the most frequently used nonlinear approaches. With these approaches it is possible to evaluate the nonlinearity and complexity of the physiological signals and then to classify their behaviour as deterministic, chaotic or random.

