
Is the Sum of Two Continuous-Time Periodic Signals Also Periodic?

Signals and noise

John Price , Terry Goble , in Telecommunications Engineer's Reference Book, 1993

10.5.2 Periodic and aperiodic signals

A periodic signal is one that repeats the sequence of values exactly after a fixed length of time, known as the period. In mathematical terms a signal x(t) is periodic if there is a number T such that for all t Equation 10.10 holds.

(10.10) x(t) = x(t + T)

The smallest positive number T that satisfies Equation 10.10 is the period and it defines the duration of one complete cycle. The fundamental frequency of a periodic signal is given by Equation 10.11.

(10.11) f = 1/T

It is important to distinguish between the real signal and the quantitative representation, which is necessarily an approximation. The amount of error in the approximation depends on the complexity of the signal, with simple waveforms, such as the sinusoid, having less error than complex waveforms.

A non-periodic or aperiodic signal is one for which no value of T satisfies Equation 10.10. In principle this includes all actual signals since they must start and stop at finite times. However, aperiodic signals can be represented quantitatively in terms of periodic signals.

Examples of periodic signals include the sinusoidal signals and periodically repeated non-sinusoidal signals, such as the rectangular pulse sequences used in radar.

Non-periodic signals include speech waveforms and random signals arising from unpredictable disturbances of all kinds. In some cases it is possible to write explicit mathematical expressions for non-periodic signals and in other cases it is not.

In addition to periodic and non-periodic signals there are signals that are the sum of two or more periodic signals having different periods. No single value of T satisfies Equation 10.10, but the signal has many properties associated with periodic signals and can be represented by a finite number of periodic signals.
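The distinction can be illustrated with a short numerical check (a sketch; the component periods 2 and 3, which share the common period 6, are arbitrary choices):

```python
import numpy as np

# Two periodic components with periods 2 and 3: their sum repeats every
# 6 time units (the least common multiple), but not every 2 time units.
def is_periodic_with(x, T, t, tol=1e-9):
    """Return True if x(t) == x(t + T) on the sampled grid, within tol."""
    return np.allclose(x(t), x(t + T), atol=tol)

x_sum = lambda t: np.cos(2 * np.pi * t / 2) + np.cos(2 * np.pi * t / 3)
t = np.linspace(0, 12, 1001)

print(is_periodic_with(x_sum, 6.0, t))   # True: 6 is a common period
print(is_periodic_with(x_sum, 2.0, t))   # False: 2 alone is not
```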

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780750611626500162

Frequency Analysis: The Fourier Series

Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019

4.6 What Have We Accomplished? Where Do We Go From Here?

Periodic signals are not to be found in practice, so where did Fourier get the intuition to come up with a representation for them? As you will see, the fact that periodic signals are not found in practice does not mean that they are not useful. The Fourier representation of periodic signals will be fundamental in finding a representation for non-periodic signals. A very important concept you have learned in this chapter is that the inverse relation between time and frequency provides complementary information for the signal. The frequency domain constitutes the other side of the coin in representing signals. As mentioned before, it is the eigenfunction property of linear time-invariant systems that holds the theory together, and it will provide the fundamental principle for filtering. You should have started to experience déjà vu in terms of the properties of the Fourier series (Table 4.1 summarizes them); some look like versions of the ones in the Laplace transform, given the connection between these transforms. The Fourier series of some basic signals, normalized so they have unity fundamental period and amplitude, are shown in Table 4.2. You should have also noticed the usefulness of the Laplace transform in finding the Fourier coefficients, avoiding integration whenever possible. The next chapter will extend some of the results obtained in this chapter, thus unifying the treatment of periodic and non-periodic signals and the concept of spectrum. Also, the frequency representation of systems will be introduced and exemplified by its application in filtering. Modulation is the basic tool in communications and can be easily explained in the frequency domain.

Table 4.1. Basic properties of Fourier series

Signals and constants: x(t), y(t) periodic with period T_0; constants α, β ↔ X_k, Y_k
Linearity: αx(t) + βy(t) ↔ αX_k + βY_k
Parseval's power relation: P_x = (1/T_0) ∫_{T_0} |x(t)|^2 dt = Σ_k |X_k|^2
Differentiation: dx(t)/dt ↔ jkΩ_0 X_k
Integration: ∫^t x(τ) dτ, only if X_0 = 0 ↔ X_k/(jkΩ_0), k ≠ 0
Time shifting: x(t − α) ↔ e^{−jkΩ_0 α} X_k
Frequency shifting: e^{jMΩ_0 t} x(t) ↔ X_{k−M}
Symmetry, x(t) real: |X_k| = |X_{−k}| (even function of k), ∠X_k = −∠X_{−k} (odd function of k)
Multiplication: z(t) = x(t)y(t) ↔ Z_k = Σ_m X_m Y_{k−m}

Table 4.2. Fourier series of normalized signals

All signals are normalized to unit period (T_0 = 1) and unit amplitude:

Sinusoid: x_1(t) = cos(2πt + θ)[u(t) − u(t − 1)]; X_1 = 0.5e^{jθ}, X_{−1} = X_1^*, X_k = 0 for k ≠ ±1
Sawtooth: x_1(t) = t[u(t) − u(t − 1)]; X_0 = 0.5, X_k = j/(2πk), k ≠ 0
Rectangular pulse: x_1(t) = u(t) − u(t − d), 0 < d < 1; X_0 = d, X_k = d [sin(πkd)/(πkd)] e^{−jπkd}, k ≠ 0
Square wave: x_1(t) = u(t) − 2u(t − 0.5) + u(t − 1); X_0 = 0, X_k = −j[1 − (−1)^k]/(πk), k ≠ 0
Half-wave rectified: x_1(t) = sin(2πt)u(t) + sin(2π(t − 0.5))u(t − 0.5); X_0 = 1/π, X_1 = −j/4, X_k = [1 + (−1)^k]/[2π(1 − k^2)], k ≠ 0, ±1
Full-wave rectified: x_1(t) = sin(πt)u(t) + sin(π(t − 1))u(t − 1); X_0 = 2/π, X_k = [1 + (−1)^k]/[π(1 − k^2)], k ≠ 0
Triangular: x_1(t) = 2r(t) − 4r(t − 0.5) + 2r(t − 1); X_0 = 0.5, X_k = [(−1)^k − 1]/(k^2 π^2), k ≠ 0
Impulse sequence: x_1(t) = δ(t); X_k = 1
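One entry can be cross-checked numerically (a sketch; the closed form used below, X_k = −j(1 − (−1)^k)/(πk), is the standard result for this unit-period square wave):

```python
import numpy as np

# Numerical cross-check of the square-wave entry of Table 4.2:
# x1(t) = u(t) - 2u(t - 0.5) + u(t - 1) over one period T0 = 1 has X0 = 0 and
# X_k = -j(1 - (-1)^k)/(pi*k) for k != 0.
t = np.linspace(0, 1, 1_000_000, endpoint=False)
x = np.where(t < 0.5, 1.0, -1.0)              # one period of the square wave

for k in (1, 2, 3):
    Xk_num = np.mean(x * np.exp(-1j * 2 * np.pi * k * t))   # (1/T0) * integral
    Xk_formula = -1j * (1 - (-1) ** k) / (np.pi * k)
    print(k, np.allclose(Xk_num, Xk_formula, atol=1e-4))    # True for each k
```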

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128142042000144

Introductory Concepts

William K. Blake , in Mechanics of Flow-Induced Sound and Vibration, Volume 1 (Second Edition), 2017

1.4.3.1 Periodic Signals

Periodic signals are known to be expressible as a summation of sine and cosine functions; i.e., if υ(t) is a periodic signal of period T p, then we may write it as a sum of harmonics of the fundamental frequency 2π/T p

(1.24) υ(t) = a_0/2 + Σ_{n=1}^∞ [a_n cos(2πnt/T_p) + b_n sin(2πnt/T_p)]

where the coefficients

(1.25) a_n = (2/T_p) ∫_{−T_p/2}^{T_p/2} υ(t) cos(2πnt/T_p) dt

and

(1.26) b_n = (2/T_p) ∫_{−T_p/2}^{T_p/2} υ(t) sin(2πnt/T_p) dt

Among other conditions, these equations are convergent as long as

∫_{−T_p/2}^{T_p/2} |υ(t)| dt

is finite, see, e.g., Refs. [11,12]. As an example, this harmonic analysis is particularly useful in describing the response of fan rotors to inflow distortions (see Chapter 12: Noise from Rotating Machinery). A rotating fan blade will respond to some nonuniformity of its inflow periodically each time it rotates one revolution. The periodicity T_p will be n_s^{−1}, where n_s is the rotation speed (revolutions/time). Therefore υ(t) will have a period T_p = n_s^{−1} and will be expressible as a summation of harmonics of frequency mn_s.
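Equations (1.25) and (1.26) can be checked numerically; the following sketch assumes an illustrative two-harmonic signal with T_p = 2 (amplitudes 3 and 0.5 are arbitrary):

```python
import numpy as np

# Numerical sketch of Eqs. (1.25)-(1.26): recover the coefficients a_n, b_n of
# a known periodic signal by averaging over one period T_p.
Tp = 2.0
t = np.linspace(-Tp / 2, Tp / 2, 100_000, endpoint=False)
v = 3 * np.cos(2 * np.pi * t / Tp) + 0.5 * np.sin(4 * np.pi * t / Tp)

def a_n(n: int) -> float:
    # a_n = (2/T_p) * integral of v(t) cos(2*pi*n*t/T_p) over one period
    return 2 * np.mean(v * np.cos(2 * np.pi * n * t / Tp))

def b_n(n: int) -> float:
    # b_n = (2/T_p) * integral of v(t) sin(2*pi*n*t/T_p) over one period
    return 2 * np.mean(v * np.sin(2 * np.pi * n * t / Tp))

print(round(a_n(1), 6), round(b_n(2), 6))   # recovers the amplitudes 3.0 and 0.5
```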

The above representations may be given a little more general appearance by replacing the sine and cosine functions by their exponential equivalences:

cos x = (e^{ix} + e^{−ix})/2,  sin x = (e^{ix} − e^{−ix})/(2i)

Then Eq. (1.24) is replaced by

υ(t) = Σ_{n=−∞}^∞ (1/2)(a_n + ib_n) e^{−inω_1 t}

where ω_1 = 2π/T_p. The coefficients a_n + ib_n may be written

a_n + ib_n = (2/T_p) ∫_{−T_p/2}^{T_p/2} υ(t) e^{inω_1 t} dt

and it is noted that a_n = a_{−n} and b_n = −b_{−n}. Accordingly, letting

V_n = (1/2)(a_n + ib_n)

we can write the Fourier transform pair over the entire range of positive and negative values of n as

(1.27) υ(t) = Σ_{n=−∞}^∞ V_n e^{−inω_1 t}

and

(1.28) V_n = (1/T_p) ∫_{−T_p/2}^{T_p/2} υ(t) e^{inω_1 t} dt

where V n is a complex number, which may be written

(1.29) V_n = |V_n| e^{iφ_n}

The correlation function of the periodic signal defined by Eq. (1.15) and expanded as in Eq. (1.27) is

(1.30) R̂_υυ(τ) = (1/T_p) ∫_{−T_p/2}^{T_p/2} [ ( Σ_n V_n^* e^{inω_1 t} ) ( Σ_m V_m e^{−imω_1(t+τ)} ) ] dt

where the asterisk represents the complex conjugate (i.e., z=x+iy and z*=x−iy). The correlation function then reduces to

(1.31) R̂_υυ(τ) = Σ_n |V_n|^2 e^{−inω_1 τ}

since

(1/T_p) ∫_{−T_p/2}^{T_p/2} cos[(m ± n)ω_1 t] dt = { 1, m ± n = 0;  0, m ± n ≠ 0 }

and likewise for the average value of sin[(m±n)ω 1 t].

In Eq. (1.31) only the diagonal terms of all the possible combinations of V_m^* V_n contribute to the autocorrelation; the off-diagonal terms, m ≠ n, do not contribute. The inverse of Eq. (1.31) is, from Eq. (1.28),

(1.32) |V_n|^2 = (1/T_p) ∫_{−T_p/2}^{T_p/2} R̂_υυ(τ) e^{inω_1 τ} dτ,  −∞ < n < ∞

A similar Fourier transform pair may be defined for cross-correlation functions.

These relationships show that for a correlation function R̂_υυ(τ) there is a spectrum function |V_n|^2 that gives the contribution of each harmonic n of the fundamental frequency ω_1 = 2π/T_p. Certain of these functions will be discussed in Chapter 12, Noise from Rotating Machinery, when we discuss sound fields of rotating sources.
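Eq. (1.31) can be verified with a discrete approximation (a sketch; the two-harmonic test signal and its phases 0.3 and −1.1 are arbitrary assumptions, and the FFT stands in for the coefficients V_n):

```python
import numpy as np

# Numerical sketch of Eq. (1.31): the autocorrelation of a periodic signal is
# determined by |V_n|^2 alone; the harmonic phases drop out. We compare a
# direct time average with the |V_n|^2 series at one integer-sample lag.
N = 4096
w1 = 2 * np.pi                       # fundamental, for T_p = 1
t = np.arange(N) / N
v = 2.0 * np.cos(w1 * t + 0.3) + 0.7 * np.cos(2 * w1 * t - 1.1)

V = np.fft.fft(v) / N                # discrete estimate of the coefficients V_n
m = N // 5                           # lag of roughly 0.2 * T_p
R_direct = np.mean(v * np.roll(v, -m))
R_series = np.real(np.sum(np.abs(V) ** 2 *
                          np.exp(1j * 2 * np.pi * np.arange(N) * m / N)))
print(np.isclose(R_direct, R_series))   # True: only |V_n|^2 terms survive
```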

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128092736000014

Signal Analysis in the Frequency Domain

John Semmlow , in Circuits, Signals and Systems for Bioengineers (Third Edition), 2018

Abstract

Any periodic signal can be broken down into a series of harmonically related sinusoids, and conversely, any periodic signal can be reconstructed from a series of sinusoids. A sinusoid has energy at only one frequency, so sinusoids are used as intermediaries between the time and frequency domain representations of signals. The technique for determining the sinusoidal series representation of a periodic signal is known as Fourier series analysis. Fourier series analysis is often described and implemented using complex representation. If the signal is not periodic, but exists for a finite time period, Fourier decomposition is still possible by assuming that this aperiodic signal is actually periodic with a period that is infinite, an approach known as the Fourier transform.

Fourier decomposition is usually applied to digitized data on a computer using a high-speed algorithm known as the fast Fourier transform (FFT). The inverse fast Fourier transform (IFFT) implements the Fourier synthesis equations. Some signal processing operations involve converting a signal to the frequency domain using the FFT, operating on the signal while it is in the frequency domain, then converting it back to a time domain signal using the IFFT. The great speed of the FFT and IFFT not only makes such involved operations practical, but also greatly enhances the value and practicality of time–frequency conversion in general.
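The FFT → operate → IFFT workflow described above can be sketched as follows (the test signal, sampling rate, and 50 Hz cutoff are illustrative assumptions):

```python
import numpy as np

# Sketch of frequency-domain processing: remove a high-frequency component
# from a two-tone signal by zeroing bins between the FFT and the IFFT.
fs = 1000                        # sampling rate, Hz
t = np.arange(fs) / fs           # 1 second of data
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)               # to the frequency domain (FFT)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
X[freqs > 50] = 0                # crude low-pass: zero bins above 50 Hz
y = np.fft.irfft(X, n=len(x))    # back to the time domain (IFFT)

# y is now essentially the 5 Hz component alone
print(np.max(np.abs(y - np.sin(2 * np.pi * 5 * t))) < 1e-6)   # True
```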

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128093955000035

Signals, Systems, and Spectral Analysis

Ali Grami , in Introduction to Digital Communications, 2016

3.2.6 Periodic and Nonperiodic Signals

A periodic signal repeats itself in time. A periodic continuous-time signal g(t) is a function of time that satisfies the periodicity condition g(t) = g(t ± T_0) for all time t, where t starts from minus infinity and continues forever, and T_0 is a positive number. The smallest value of T_0 that satisfies this condition is called the period. Note that a time-shift to the right or to the left by T_0 results in exactly the same periodic signal g(t). Assuming p(t) is a time-limited signal that defines only one period of the signal g(t), g(t) can be analytically expressed as follows:

(3.9) g(t) = g(t ± T_0) = Σ_{k=−∞}^∞ p(t − kT_0)

The reciprocal of the period is called the fundamental frequency of the periodic signal, and its multiples are called harmonics. Any signal for which no value of T_0 satisfies the above condition is then a nonperiodic signal.

No physical signal can be truly categorized as periodic, as no physical signal can start from minus infinity and continue forever. However, for long enough observation intervals, and of course for analysis and design purposes, it is reasonable to model signals as periodic.

For a discrete-time signal to be periodic, its period must be a positive integer; otherwise, the signal is nonperiodic. It is worth noting that a discrete-time signal obtained by uniform sampling of a periodic continuous-time signal may or may not be periodic, as this highly depends on the sampling rate.

A continuous-time signal consisting of the sum of two time-varying functions is periodic, if and only if both functions are periodic and the ratio of these two periods is a rational number. In such a case, the least common multiple of the two periods is the period of the sum signal. Alternatively, the fundamental frequency of each of the two periodic signals can be found and the greatest common factor of the two fundamental frequencies is then the fundamental frequency of the sum signal. This is in contrast to discrete-time signals, where the sum of two periodic discrete-time signals is always periodic.
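The rational-ratio rule above can be sketched with exact rational arithmetic (the example periods 1/2 and 1/3 are arbitrary):

```python
from fractions import Fraction
from math import gcd, lcm

# For periods given as reduced fractions a/b and c/d, the period of the sum
# is lcm(a, c) / gcd(b, d) -- the least common multiple of the two periods.
def sum_period(T1: Fraction, T2: Fraction) -> Fraction:
    return Fraction(lcm(T1.numerator, T2.numerator),
                    gcd(T1.denominator, T2.denominator))

# cos(4*pi*t) has period 1/2 and cos(6*pi*t) has period 1/3; the ratio 3/2 is
# rational, and the sum repeats every 1 time unit (fundamental frequencies
# 2 Hz and 3 Hz have greatest common factor 1 Hz).
print(sum_period(Fraction(1, 2), Fraction(1, 3)))   # 1
```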

Note that we have defined periodic functions in the time domain. However, it is also possible to define a periodic function in the frequency domain. For instance, a continuous-time signal sampled in the time domain is periodic in the frequency domain, as will be discussed in the context of sampling theorem.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978012407682200003X

SPECTRAL ANALYSIS, CLASSICAL METHODS

S. Braun , in Encyclopedia of Vibration, 2001

Periodic Signals

Purely periodic signals are a theoretical concept. Any measured data will be contaminated to some extent by noise. This usually includes a random component and often deterministic ones (for example, traceable to line interference). Thus, some averaging is usually indicated. The number of averages will depend on the signal-to-noise ratio, and for reasonable situations can be much smaller than for purely random signals.

The analysis parameters are an integral part of any spectral analysis result. A triggered analysis is often preferable for periodic signals. A trigger signal is often available from external devices (like a one-per-rev signal in rotating machines), but can also be obtained from the analyzed signal itself, a so-called self-trigger. In contrast to free-running analysis, temporal patterns are preserved and the phase is not randomized. Thus, in free-running analysis a PSD would be computed, while for triggered analysis the DFT itself could be averaged, keeping the phase information.

In dedicated data acquisition and analysis instruments, triggered data acquisition would automatically imply a triggered spectral analysis.

Windows would normally be used, generally the Hanning one, to decrease leakage and increase the dynamic range. For very close components (say, separated by 2Δf), the increased bandwidth caused by windowing has to be considered, and a rectangular window (or an increase in the analyzed data duration) is advised. Overlapped processing seems unnecessary, as it is easy to use sufficiently long signal durations, certainly in the case of rotating machines under steady-state conditions. Overlapped processing necessitates a window, so the remark concerning very close components applies: overlapped processing may not enable separation.
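The leakage/dynamic-range trade-off can be illustrated with a small sketch (the 1024-point length and the off-bin test frequency are arbitrary choices):

```python
import numpy as np

# A Hanning window lowers far-off leakage (better dynamic range) at the cost
# of a wider main lobe. We measure the spectrum of a sinusoid that falls
# midway between two DFT bins, with and without the window.
N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 100.5 * n / N)      # worst-case off-bin frequency

rect = np.abs(np.fft.rfft(x))              # rectangular (no) window
hann = np.abs(np.fft.rfft(x * np.hanning(N)))

# leakage far from the peak, relative to the peak, in dB
leak_rect = 20 * np.log10(rect[300] / rect.max())
leak_hann = 20 * np.log10(hann[300] / hann.max())
print(leak_hann < leak_rect)               # True: the window suppresses leakage
```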

The number of fundamental periods spanned dictates the location of the spectral line in the DFT presentation.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122270851001879

Fundamentals of the model predictive repetitive control

Alfeu J. Sguarezi Filho , in Model Predictive Control for Doubly-Fed Induction Generators and Three-Phase Power Converters, 2022

8.1.1 IMP for any periodic signal

A periodic signal with period T_p can be represented by using the Fourier series in exponential form, written as [108,110]

(8.1) r_p(t) = Σ_{n=−∞}^∞ a_n e^{j2πnt/T_p}.

For the inclusion of Eq. (8.1) into the control system loop by using the IMP, one can use its transfer function in the Laplace domain, which can be represented as [110]

(8.2) R_p(s) = (1/s) ∏_{n=1}^∞ (2πn/T_p)^2 / [(2πn/T_p)^2 + s^2] = T_p e^{−sT_p/2} / (1 − e^{−sT_p}).

Here T_p e^{−sT_p/2} represents a delay term with gain T_p. In this case, the transfer function 1/(1 − e^{−sT_p}) is enough to insert into the closed loop. The implementation of this transfer function can be achieved by means of a positive feedback loop using the delay e^{−sT_p}, as depicted in Fig. 8.1. It can be noticed that this model has poles on the imaginary axis, s = j2πn/T_p, n ∈ ℤ, and therefore, from the frequency point of view, infinite gain at the harmonic frequencies n/T_p.

Figure 8.1

Figure 8.1. Control system loop by using the IMP.

The implementation of the IMP presented in the transfer function of Eq. (8.2) in the discrete-time domain can be written as

(8.3) G_r(z) = z^{−N}/(1 − z^{−N}) = 1/(z^N − 1),

where we have the relationship N = T_p/T, with T being the sampling time.
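The discrete internal model of Eq. (8.3) can be sketched in a few lines (an illustrative realization, not the book's code; the period-4 input sequence is arbitrary):

```python
from collections import deque

# Repetitive internal model G(z) = z^(-N) / (1 - z^(-N)): positive feedback
# around an N-sample delay, i.e., y[k] = e[k-N] + y[k-N].
class RepetitiveInternalModel:
    def __init__(self, N: int):
        self.buf = deque([0.0] * N, maxlen=N)   # the N-sample delay line

    def step(self, e: float) -> float:
        y = self.buf[0]            # value that entered the delay N samples ago
        self.buf.append(e + y)     # positive feedback into the delay
        return y

model = RepetitiveInternalModel(N=4)
err = [1.0, 0.0, -1.0, 0.0] * 3    # a periodic input whose period matches N
out = [model.step(e) for e in err]
print(out)                          # the pattern builds up period after period
```

Because the input period matches N, each period is accumulated on top of the previous one, which is the time-domain face of the infinite gain at the harmonic frequencies.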

One controller that can realize the IMP is the PID (proportional-integral-derivative) which can be represented in the Laplace domain as

(8.4) PID(s) = k_p + k_i/s + k_d s.

In the case of Eq. (8.4), the integrator 1/s is an internal model of DC signals that allows zero steady-state error. The proportional and derivative terms increase the controller performance and robustness.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780323909648000178

Major Design Issues

Husrev T. Sencar , ... Ali N. Akansu , in Data Hiding Fundamentals and Applications, 2004

7.2.1 Autocorrelation for Restoring the Cropped Signal

Let a periodic signal V be obtained by combining n replicas of the signal W of length T_1 (Fig. 7-1). V is arbitrarily cropped, yielding V_C, and the resulting signal is resampled by the factor 1/τ = T_2/T_1, yielding V_CR. Then T_2 is the size of the resampled W. Let n be a large integer, T_e be the amount of signal (number of coefficients) cropped from V, where T_e < T_1, and L = nT_2 − T_e/τ be the length of V_CR. The resampling factor can also be defined as 1/τ = L/(nT_1 − T_e). The autocorrelation R_{V_CR V_CR}(m) of V_CR is computed as

(7.12) R_{V_CR V_CR}(m) = Σ_{k=1}^{L−|m|} V_CR(k) V_CR(k + m).

In order to recover W, the cropped resampled signal V_CR of size nT_2 − T_e/τ has to be restored to the cropped signal V_C of size nT_1 − T_e by resampling with the factor τ. The autocorrelation function of V_CR is used to estimate 1/τ depending on information about V available to the extractor (i.e., size of V, size of W). It will also be seen that the autocorrelation peak pattern provides insights into the nature of the croppings even when croppings occur at multiple positions (note that if two or more consecutive samples in V are cropped, they will be considered a single cropping). The total amount of cropped signal is assumed to be much smaller than the size of V, T_e ≪ nT_1. The justification for this assumption is that in a typical attack scenario, due to perceptual constraints, the attacker cannot make radical changes to the size of V. Therefore, all copies of W cannot be cropped fatally at the same time. Consequently, in the corresponding autocorrelation function of V_CR the peaks observed at T_2 shifts of the origin, R_{V_CR V_CR}(±iT_2), where i ∈ ℤ, will be relatively greater in strength compared with other peaks, irrespective of the number of croppings. Given that T_1 is known at the extractor, the resampling factor can be found by measuring T_2 through distances between the dominant peaks in the autocorrelation function and calculating T_2/T_1. Alternately, if the size of V prior to cropping, nT_1, is known rather than the size of W, 1/τ can be calculated using the relative peak locations of the autocorrelation function.

Considering the single cropping case of amount T_e, the autocorrelation function of the signal V_CR will indicate the presence of two periodic components with the same period, T_2 = T_1(1/τ). The first component is identified by peaks at T_2 shifts of the origin. The second, on the other hand, generates peaks at the shift of T_2 − T_e(1/τ) with respect to zero shift and at T_2 shifts thereafter. In other words, the first component is due to resampled copies of signal W in V_CR, and the second one is due to the cropping. In the autocorrelation, at every T_2 − T_e(1/τ) shift following a T_2 shift, the incomplete signal period coincides with a copy of itself and generates a peak. The peaks corresponding to the latter component are weaker in strength compared with the former due to the incomplete W. Therefore, other than the peak at zero shift, every peak at T_2 shifts (with respect to zero shift) is accompanied by a peak due to cropped W (assuming n is large enough). The distance d between the peak at kT_2, k ≤ n, and (k − 1)T_2 + T_2 − T_e(1/τ) is calculated as

(7.13) d = kT_2 − ((k − 1)T_2 + T_2 − T_e(1/τ)) = T_e(1/τ).

Being able to measure T_e/τ and T_2, the resampling factor is calculated as 1/τ = nT_2/nT_1 or 1/τ = T_2/T_1, based on the availability of nT_1 or T_1. Then the total cropping amount T_e is calculated using Eq. (7.13). It should also be noted that given either of nT_1 or T_1, one can determine the other using τ and T_2.
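The peak-spacing idea can be illustrated in a much simplified setting (a sketch: an arbitrary random pattern, no cropping or resampling simulated; only the estimation of T_2 from autocorrelation peaks is shown):

```python
import numpy as np

# Estimate the replica size T2 from the first dominant off-origin peak of the
# autocorrelation of a signal built from n = 8 replicas of a pattern W.
rng = np.random.default_rng(0)
W = rng.standard_normal(50)                # one replica, T2 = 50 samples
V = np.tile(W, 8)                          # the periodic signal V

R = np.correlate(V, V, mode="full")[len(V) - 1:]   # R(m) for m >= 0
T2_est = 25 + int(np.argmax(R[25:100]))    # search away from the origin peak
print(T2_est)                              # -> 50, the period T2
```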

Now we shall consider the double cropping case where T_e1 and T_e2 are the amounts of the nonoverlapping cropped samples (T_e1 and T_e2 refer to croppings of W at different locations) from V, with T_e1 + T_e2 < T_1. The autocorrelation function of V_CR may have up to four peaks in every T_2 interval that is (k − 1)T_2, k ≤ n, away from zero shift. These peaks may appear at kT_2 − (T_e1 + T_e2)/τ, kT_2 − T_e1/τ, kT_2 − T_e2/τ, and kT_2. The last one is due to resampled copies of W and has the highest correlation value. The others are due to cropped-resampled copies of W and have smaller strengths. If no croppings are present in the first and last periods of W, for relatively large n and T_1, the distance d between the first and the last peak in any T_2 interval is measured as (T_e1 + T_e2)/τ. Similar to the single cropping case, nT_2 and 1/τ = nT_2/nT_1 are consequently computed.

For more croppings followed by resampling, a similar analogy is applicable. If T_e1, …, T_em are the amounts of the nonoverlapping cropped signals and T_e1 + ⋯ + T_em < T_1, there may, at most, be 2^m peaks at every shift based on how the signal V is cropped (i.e., the number of croppings in each period of W, the location of a cropping in the period of W, the neighborhood of the cropped periods). These croppings may yield correlation peaks at 2^m locations in a T_2 shift (assuming each cropping is nonoverlapping with the others and considering that the first and last periods are not cropped). The corresponding peak locations in the autocorrelation function are at kT_2 − Σ_{j=1}^m T_ej/τ; at kT_2 − Σ_{j=1, j≠i}^m T_ej/τ for all i; at kT_2 − Σ_{j=1, j≠i,l}^m T_ej/τ for all i, l such that i ≠ l; …; at kT_2 − T_ej/τ for all j; and at kT_2. Then the distance d between the first and last peaks in a T_2 shift can be used to estimate the total erasure amount.

When the first and last periods of the signal V are cropped, the autocorrelation function may not generate a peak at kT 2 − (Te 1 + ⋯ + Tem )/τ. Therefore, the distance d, measured between the first and the last peak at a T 2 shift of the autocorrelation function, does not indicate Te /τ. However, as will be explained in Section 7.2.3, d may still be measured using cyclic autocorrelation features for such croppings. Further, if both T 1 and nT 1 are known at the extractor, the amount of cropping, Te, can also be determined by measuring d and 1/τ using Eq. (7.13).

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780120471447500078

Discrete Systems

In Modelling of Mechanical Systems, 2004

8.2.5.6 Correlation of periodic signals

As already emphasized, periodic signals have infinite energy, whereas those which correspond to physical quantities have finite power. As a consequence, the concepts of correlation functions and coefficients can be extended to such periodic signals, provided power is used instead of energy. In this manner we define the auto- and cross-correlation functions:

[8.44] R_XX(τ) = (1/T) ∫_{−T/2}^{+T/2} X(t) X(t + τ) dt;  R_XY(τ) = (1/T) ∫_{−T/2}^{+T/2} X(t) Y(t + τ) dt
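The power-based definition [8.44] can be checked numerically for a sinusoid (a sketch; the amplitude and period below are arbitrary choices):

```python
import numpy as np

# For X(t) = A sin(2*pi*t/T), the power-based autocorrelation of [8.44]
# equals (A**2 / 2) * cos(2*pi*tau/T).
A, T, N = 2.0, 1.0, 10_000
t = np.arange(N) / N * T                   # one period, uniformly sampled
X = A * np.sin(2 * np.pi * t / T)

def Rxx(lag: int) -> float:
    # time average of X(t) X(t + tau) over one period, tau = lag * T / N
    return float(np.mean(X * np.roll(X, -lag)))

print(abs(Rxx(0) - A**2 / 2) < 1e-9)       # True: R(0) is the power A^2/2
print(abs(Rxx(N // 4)) < 1e-9)             # True: R(T/4) vanishes for a sine
```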

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S1874705101800116

Analysis of continuous and discrete time signals

Alvar M. Kabe , Brian H. Sako , in Structural Dynamics Fundamentals and Advanced Applications, 2020

Example 5.1-15

Consider T-periodic signals, x(t) ↔ X_m and w(t) ↔ W_m, where w(t) equals the Hanning window defined in Example 5.1-13. Use Property 7 to calculate the Fourier series coefficients of y(t) = w(t)x(t). Combining (5.1-74) and (5.1-75) in Example 5.1-13 yields the Fourier series coefficients, W_m,

(5.1-102) W_m = { 1/2, m = 0;  1/4, m = ±1;  0, |m| > 1 }

The multiplication property leads to

(5.1-103) Y_m = Σ_{n=−∞}^∞ W_{m−n} X_n = (1/4)X_{m−1} + (1/2)X_m + (1/4)X_{m+1}

Therefore, multiplying x(t) by the Hanning window is equivalent to a 3-pt weighted average of its Fourier series coefficients.
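The 3-point averaging of (5.1-103) can be checked numerically (a sketch; it assumes the Hanning window w(t) = (1 + cos(2πt/T))/2 consistent with the coefficients of (5.1-102), and uses the FFT as a stand-in for the Fourier series coefficients):

```python
import numpy as np

# Multiplying a T-periodic signal by the Hanning window averages its Fourier
# coefficients with weights (1/4, 1/2, 1/4), per Eq. (5.1-103).
T, N = 1.0, 2048
t = np.arange(N) / N * T
x = np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
w = 0.5 * (1 + np.cos(2 * np.pi * t / T))      # assumed Hanning window

X = np.fft.fft(x) / N                          # coefficients X_m
Y = np.fft.fft(w * x) / N                      # coefficients Y_m of w(t)x(t)
Y_pred = 0.25 * np.roll(X, 1) + 0.5 * X + 0.25 * np.roll(X, -1)
print(np.allclose(Y, Y_pred))                  # True
```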

Property 8 is a consequence of the multiplication and conjugation properties. Since ȳ(t) ↔ Ȳ_{−m}, Property 7 implies

(5.1-104) (1/T) ∫_{−T/2}^{T/2} x(t) ȳ(t) e^{−imω_0 t} dt = Σ_{n=−∞}^∞ X_n Ȳ_{n−m}

Setting m = 0 leads to Parseval's theorem, which establishes the equivalence of the time-domain inner product with the inner product of the Fourier coefficients. If y(t) = x(t), we obtain the analogue of Plancherel's theorem known as Parseval's identity,

(5.1-105) (1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt = Σ_{m=−∞}^∞ |X_m|^2

This equates the mean-square of x ( t ) with the sum-squared of its Fourier coefficients. We will refer to periodic signals as square integrable if their mean square over a period is finite.

To summarize, the Fourier series integral, Eq. (5.1-66), associates a periodic signal, x(t), with a unique sequence of its Fourier coefficients. Parseval's identity implies that every square-integrable periodic signal has Fourier coefficients that are square summable. Conversely, Eqs. (5.1-105) and (5.1-65) imply that any square-summable sequence, {X_n}, is associated with a unique square-integrable periodic signal. By establishing the equivalence of the inner products of square-integrable periodic signals and square-summable sequences, Parseval's theorem states that the geometry of these two spaces is the same, i.e., they are isometric.

The set of all square integrable periodic signals with period, T, forms a linear vector space known as a Hilbert space. Hilbert spaces are infinite-dimensional generalizations of finite-dimensional vector spaces with inner products that define their geometries. A Hilbert space, H, possesses an infinite orthonormal basis, {u_m}_{m=1}^∞, such that each vector, x ∈ H, can be represented by a sum of the basis vectors

(5.1-106) x = Σ_{m=1}^∞ ξ_m u_m  and  ξ_m = ⟨x, u_m⟩

The scalars, ξ m , are the coordinates of x with respect to the basis. The orthonormality of the basis vectors means that the pair-wise inner products satisfy

(5.1-107) ⟨u_m, u_n⟩ = { 1, m = n;  0, m ≠ n }

Consider vectors x and y with coordinates ξ_m and η_m, respectively. Then orthonormality implies that

(5.1-108) ⟨x, y⟩ = Σ_{m=1}^∞ ξ_m η̄_m

Hence, if x = y , we obtain

(5.1-109) ‖x‖^2 = ⟨x, x⟩ = Σ_{m=1}^∞ |ξ_m|^2

In the context of periodic square-integrable signals and Fourier series, the orthonormal basis is equal to {e^{imω_0 t}}_{m=−∞}^∞, and the inner product of two signals, x(t) and y(t), is defined by

(5.1-110) ⟨x, y⟩ = (1/T) ∫_{−T/2}^{T/2} x(t) ȳ(t) dt

Therefore, the Fourier series coefficients are simply the coordinates of a periodic signal with respect to the orthonormal basis, and the Fourier series is the representation of a signal with respect to this basis. Also, note that in this Hilbert space setting, Parseval's theorem and identity are immediate consequences of (5.1-108) and (5.1-109), respectively. For more details on Hilbert spaces, refer to Reed and Simon, 1980; Rudin, 1973.

Let x(t) be a periodic signal with period equal to T. We will examine the relation between the Fourier series coefficients of x(t) and its Fourier transform. First note that a periodic function is not absolutely integrable; hence, its Fourier transform is defined in the distribution sense, as we have discussed in the previous section. We start by representing x(t) as a replication of period T (Briggs and Henson, 1995) of a base signal, x_0(t),

(5.1-111) x(t) = R_T{x_0(t)} = Σ_{n=−∞}^∞ x_0(t + nT)

We will often refer to R_T{x_0(t)} as a T-replication of x_0(t). Observe that the T-replication operation produces a signal that is periodic with period equal to T. We will assume that x_0(t) decays sufficiently fast as t → ±∞ so that the infinite sum converges. Note that x_0(t) is not unique. For example, consider the functions y(t) and z(t),

(5.1-112) y(t) = { x(t), 0 < t < T;  0, otherwise }  and  z(t) = { x(t)/3, 0 < t < 3T;  0, otherwise }

Then we could define x_0(t) to be either y(t) or z(t), since both have the same T-replication, x(t).

Next, we establish a relationship between the Fourier series coefficients of x(t) and the Fourier transform of x_0(t), i.e.,

(5.1-113)
X_m = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−imω_0 t} dt,  ω_0 = 2π/T
    = (1/T) ∫_{−T/2}^{T/2} ( Σ_{n=−∞}^∞ x_0(t + nT) ) e^{−imω_0 t} dt
    = (1/T) Σ_{n=−∞}^∞ ∫_{−T/2}^{T/2} x_0(t + nT) e^{−imω_0 t} dt
    = (1/T) Σ_{n=−∞}^∞ e^{imω_0 nT} ∫_{nT−T/2}^{nT+T/2} x_0(τ) e^{−imω_0 τ} dτ
    = (1/T) Σ_{n=−∞}^∞ ∫_{nT−T/2}^{nT+T/2} x_0(τ) e^{−imω_0 τ} dτ   (since e^{imω_0 nT} = e^{im2πn} = 1)
    = (1/T) ∫_{−∞}^∞ x_0(τ) e^{−imω_0 τ} dτ
    = X_0(mω_0)/T

This is a remarkable result, given that there are infinitely many base signals that can yield the same T-replication. Suppose x 0 ( t ) and y 0 ( t ) are two different base signals with equal T-replications,

(5.1-114) x(t) = Σ_{n=−∞}^∞ x_0(t + nT) = Σ_{n=−∞}^∞ y_0(t + nT)

Since x_0(t) ≠ y_0(t), their Fourier transforms are also not equal, i.e., X_0(ω) ≠ Y_0(ω). Eq. (5.1-113) implies that because x_0(t) and y_0(t) have the same T-replication, their Fourier transforms, although different, must be equal at the discrete frequencies ω_m = mω_0.
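This agreement at the sample frequencies can be checked numerically (a sketch assuming x(t) = cos(2πt) with T = 1, and base signals built as in (5.1-112)):

```python
import numpy as np

# Two different base signals with the same T-replication have Fourier
# transforms that agree at the frequencies m*w0 (here w0 = 2*pi for T = 1).
T = 1.0
w0 = 2 * np.pi / T

def ft_at(m, g, a, b, n=300_000):
    """Fourier transform of g at w = m*w0, by quadrature over [a, b)."""
    t = np.linspace(a, b, n, endpoint=False)
    dt = (b - a) / n
    return np.sum(g(t) * np.exp(-1j * m * w0 * t)) * dt

y0 = lambda t: np.cos(2 * np.pi * t)          # x(t) on one period (0, T)
z0 = lambda t: np.cos(2 * np.pi * t) / 3.0    # x(t)/3 on (0, 3T)

for m in (1, 2):
    Y = ft_at(m, y0, 0.0, 1.0)
    Z = ft_at(m, z0, 0.0, 3.0)
    print(np.isclose(Y, Z, atol=1e-8))        # True: equal at the samples m*w0
```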

Eq. (5.1-113) and the Fourier series expansion of x ( t ) lead us to the next theorem of interest:

Theorem 5.7 (Inverse Poisson Summation Formula). Assume that for a continuous-time signal x_0(t), R_T{x_0(t)} converges and is finite. Let ω_0 = 2π/T; then

(5.1-115) R_T{x_0(t)} = (1/T) Σ_{m=−∞}^∞ X_0(mω_0) e^{imω_0 t}

The above theorem states that a discrete inverse Fourier transform of X_0(ω) yields a T-replication of x_0(t). We will discuss a dual version of Theorem 5.7 in the next section on time-domain sampling. The Fourier transform pair e^{imω_0 t} ↔ 2πδ(ω − mω_0) and (5.1-115) imply that x(t) has the Fourier transform,

(5.1-116) R_T{x_0(t)} = (1/T) Σ_{m=−∞}^∞ X_0(mω_0) e^{imω_0 t} ↔ ω_0 Σ_{m=−∞}^∞ X_0(mω_0) δ(ω − mω_0)

Consequently, Eqs. (5.1-115) and (5.1-116) lead to

(5.1-117) R_T{x_0(t)} = Σ_{n=−∞}^∞ x_0(t + nT) ↔ ω_0 Σ_{m=−∞}^∞ X_0(mω_0) δ(ω − mω_0)

That is, the replication in the time domain is equivalent to a sampling of the Fourier transform in the frequency domain, where X 0 ( m ω 0 ) are sample values of X 0 ( ω ) .

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128216156000058


Source: https://www.sciencedirect.com/topics/engineering/periodic-signal
