Discrete-Time Signals in the Frequency Domain
By Sébastien Boisgérault, Mines ParisTech, under CC BY-NC-SA 4.0
April 25, 2017
Introduction
Discrete-time signals are obtained by sampling continuous-time signals – real-valued functions of a real-valued time – at a constant rate (see e.g. Strang (2000)). The analysis of many of their properties is simpler to carry out “in the frequency domain”, once a Fourier transform has been applied to the original data (also called the representation of the signal “in the time domain”).
In the classical setting, the Fourier transform generates functions of the frequency. However, the signals with arguably the simplest frequency content, sinusoidal signals, then cannot be represented in the frequency domain. Hence, this theory should be considered a partial failure, or at least incomplete.
Extensions of the classical approach use generalized functions of the frequency to represent discrete-time signals in the frequency domain. We introduce in this document a type of generalized function called hyperfunctions (Sato 1959; Kaneko 1988), founded on complex analysis, which fits the needs of discrete-time signal processing perfectly.
Terminology & Notation
Signals and Domains
We use in this document a convenient convention that is more popular among physicists than it is among mathematicians. In a given application domain – for us, that is digital signal processing – when an object has several equivalent representations as functions, we use the same name for the object, and distinguish the representations by (a superset of the) domain of definition of the function. To every such domain we also associate fixed variable names.
In this document, we are dealing with discrete-time signals with a sample period \Delta t (or sample rate \Delta f = 1/\Delta t). A signal x is represented in the time domain as a function x(t) where t \in \mathbb{Z}\Delta t, in the frequency (or Fourier) domain as the function x(f) where f \in \mathbb{R}/\Delta f\mathbb{Z}, and in the complex (or Laplace) domain as a function x(z) where z \in \mathbb{C}. We often implicitly favor one representation and refer for example to x(t) as “the signal” instead of “the representation of the signal x in the time domain”.
If t is a free variable, x(t) denotes a function of the time t; if t is bound to some value, x(t) denotes the value of the function at that time. If there is some ambiguity in the choice of the representation, we use an assignment syntax, for example x(t=0) instead of x(0), because the latter could be mistaken for x(f=0).
Sets and Functions
The set of functions from A to B is denoted A→B.
The set \mathbb{C} is the complex plane; \mathbb{U} refers to the unit circle centered on the origin: \mathbb{U} = \{z \in \mathbb{C}, \, |z| = 1\}. The positively oriented closed path t \in [0,1] \mapsto e^{i2\pi t}, which parametrizes \mathbb{U}, is denoted [↺].
For any r > 0, D_r is the open disk with radius r centered on the origin: D_r = \{z \in \mathbb{C}, \, |z| < r\}, and for any r \in [0,1[, A_r is the open annulus with internal radius r and external radius 1/r: A_r = \{z \in \mathbb{C}, \, r < |z| < 1/r\}.
Iverson Bracket
We¹ denote by \{\cdot\} the function defined by: \{b\} = 1 if b is true, 0 if b is false.
This elementary notation supersedes many other ones. For example, we can use \{x \in A\} instead of \chi_A(x) to denote the characteristic function of the set A, \{i = j\} instead of \delta_{ij} to denote the Kronecker delta, and \{t \geq 0\} instead of H(t) to denote the Heaviside function.
Finite Signals
Definition – Signal (Time Domain), Sampling Period/Rate
A discrete-time signal x(t) is a real or complex-valued function defined on \mathbb{Z}\Delta t for some \Delta t > 0, the signal sampling period (or sampling time); the number \Delta f = 1/\Delta t is the signal sampling rate (or sampling frequency).
In the sequel, all signals are discrete-time, hence we often drop this qualifier. Also, in this introductory section, although many definitions and results are valid in a more general setting, for the sake of simplicity, we always assume that signals are finite:
Definition – Finite Signal.
A discrete-time signal x(t) is of finite support – or simply finite – if x(t) = 0 except for a finite set of times t.
Fourier Transform
Definition – Signal in the Frequency Domain, Fourier Transform.
A signal x(t) is represented in the frequency domain as x(f), the (discrete-time) Fourier transform of x(t), defined for f \in \mathbb{R} by:
x(f) = \Delta t \sum_{t \in \mathbb{Z}\Delta t} x(t) e^{-i2\pi f t}.
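As an illustration, here is a minimal numpy sketch – the sample period and the sample values are arbitrary, hypothetical choices – that evaluates this sum for a finite signal on a grid of frequencies:
from numpy import *

dt = 1.0 / 8000.0                              # sample period (assumed value)
x = array([1.0, -0.5, 0.25])                   # hypothetical finite signal: x(0), x(dt), x(2*dt), zero elsewhere
n = arange(len(x))                             # sample indices
f = linspace(-0.5 / dt, 0.5 / dt, 1000)        # one period of frequencies, [-df/2, +df/2]
# x(f) = dt * sum over n of x(n*dt) * exp(-i*2*pi*f*n*dt)
x_f = dt * sum(x[:, newaxis] * exp(-1j * 2 * pi * outer(n * dt, f)), axis=0)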
Remark – Frequency Domain.
Note that x(f) is \Delta f-periodic. Indeed, for any f \in \mathbb{R} and t = n\Delta t with n \in \mathbb{Z},
e^{-i2\pi(f+\Delta f)t} = e^{-i2\pi f t} (e^{-i2\pi})^{n} = e^{-i2\pi f t}
and therefore
x(f+\Delta f) = \Delta t \sum_{t \in \mathbb{Z}\Delta t} x(t) e^{-i2\pi(f+\Delta f)t} = x(f).
As x(f) does not really depend on the value of f \in \mathbb{R} directly, but only on the value of f modulo a multiple of \Delta f, we may alternatively define x(f) as a function on the frequency domain \mathbb{R}/\Delta f\mathbb{Z}, and totally forget about the periodicity, because it is now captured by the domain of definition. An alternate – arguably less contrived – way to deal with the periodicity is to consider only the values of x(f) on one period, for example in the interval [-\Delta f/2, \Delta f/2[.
Remark – Fourier Transform of Continuous-Time Signals.
The discrete-time Fourier transform formula is similar to the continuous-time Fourier transform formula
x(f) = \int_{-\infty}^{+\infty} x(t) e^{-i2\pi f t} \, dt.
Actually, if x(t) is defined for every t \in \mathbb{R} and not only t \in \mathbb{Z}\Delta t – if our discrete-time signal samples the continuous-time signal x(t) – the discrete-time Fourier transform is the continuous one with the integral replaced by its Riemann sum. In many respects, the operator \Delta t \sum_{t \in \mathbb{Z}\Delta t} plays the same role for discrete-time signals as the integral with respect to the time t plays for continuous-time signals.
Theorem – Inversion Formula.
If x(t) is a finite signal, represented in the frequency domain as x(f), we have
x(t) = \int_{-\Delta f/2}^{+\Delta f/2} x(f) e^{i2\pi f t} \, df.
Remark – Continuous-Time Signals (Inverse Fourier Transform).
The Fourier inversion formula for discrete-time signals is also very similar to its counterpart for continuous-time signals, that is:
x(t) = \int_{-\infty}^{+\infty} x(f) e^{i2\pi f t} \, df.
Two differences are obvious: for continuous-time signals, the formula is meaningful for any t \in \mathbb{R}, while in discrete time it is only meaningful for t \in \mathbb{Z}\Delta t; for continuous-time signals, the integral with respect to the frequency f ranges over \mathbb{R}, while for discrete-time signals it ranges over [-\Delta f/2, \Delta f/2[. Unlike continuous-time signals, the information contained in discrete-time signals is structurally contained in a bounded frequency band of width \Delta f.
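As a small numerical sanity check – a sketch only, with the same kind of arbitrary sample values as above – the inversion formula can be verified by computing x(f) on a fine grid and integrating it back numerically:
from numpy import *

dt = 1.0 / 8000.0
df = 1.0 / dt
x = array([1.0, -0.5, 0.25])                              # hypothetical finite signal
n = arange(len(x))
f = linspace(-0.5 * df, 0.5 * df, 2001)                   # one period [-df/2, +df/2]
x_f = dt * sum(x[:, newaxis] * exp(-1j * 2 * pi * outer(n * dt, f)), axis=0)
# inverse transform, evaluated at t = 0, dt, 2*dt with the trapezoidal rule
x_back = [trapz(x_f * exp(1j * 2 * pi * f * (k * dt)), f) for k in n]
print(around(real(x_back), 6))                            # should be close to [1.0, -0.5, 0.25]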
z-Transform
There is yet another useful representation of a finite signal – this time as a function of a complex variable z – and it is closely related to the frequency-domain representation.
Definition – Signal in the Complex Domain, z-Transform.
A signal x(t) is represented in the complex domain as x(z), the z-transform of x(t), defined for some z \in \mathbb{C} by:
x(z) = \Delta t \sum_{t \in \mathbb{Z}\Delta t} x(t) z^{-t/\Delta t} = \Delta t \sum_{n \in \mathbb{Z}} x(t = n\Delta t) z^{-n}.
Remark – z-Transform Domain for Finite Signals.
When x(t) is finite, the z-transform x(z) is defined for any z \in \mathbb{C}^* = \mathbb{C} \setminus \{0\}; it can be extended to \mathbb{C} if x(t) = 0 when t > 0.
We have the straightforward, but nevertheless very useful:
Theorem – z-Transform to Fourier Transform.
The frequency domain representation of a signal x(f) is related to the complex domain representation x(z) by:
x(f) = x(z = e^{i2\pi f \Delta t}).
Example – Unit Impulse.
The unit impulse signal 1 is defined in the time domain as 1(t) = (1/\Delta t) \times \{t = 0\}. It is equal to zero outside t = 0 and satisfies \Delta t \sum_{t \in \mathbb{Z}\Delta t} 1(t) = 1.
Convolution and Filters
Definition – Convolution.
The convolution of the signals x(t) and y(t) is the signal (x \ast y)(t) defined by:
(x \ast y)(t) = \Delta t \sum_{\tau \in \mathbb{Z}\Delta t} x(\tau) y(t - \tau).
Theorem – Representation of the Convolution in the Frequency Domain.
For finite signals, we have (x \ast y)(f) = x(f) \times y(f).
Example – Unit Impulse.
For any finite signal x, the definition of the convolution yields (1 \ast x)(t) = x(t) = (x \ast 1)(t). In other words, the signal 1 is a unit for the convolution. This is also clear from its frequency-domain representation: indeed, we have 1(z) = 1 and 1(f) = 1, and therefore (1 \ast x)(f) = 1(f) \times x(f) = x(f) = x(f) \times 1(f) = (x \ast 1)(f).
Definition – Filter, Impulse Response, Frequency Response, Transfer Function.
A filter is an operator mapping an input signal u(t) to an output signal y(t) related by the operation y(t) = (h \ast u)(t), where h(t) is a signal called the filter impulse response. The filter frequency response is h(f) and its transfer function is h(z).
Remark – Impulse Response.
The “impulse response” terminology is justified by the fact that if u(t) = 1(t), then y(t) = h(t): the impulse response is the filter output when the filter input is the unit impulse. For obvious reasons, the filters we have introduced so far are called finite impulse response (FIR) filters.
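As a concrete illustration – a sketch with arbitrary, hypothetical coefficients, and a helper name (fourier) introduced only for this example – the convolution of two finite signals and the product of their Fourier transforms can be compared numerically:
from numpy import *

dt = 1.0 / 8000.0
h = array([0.5, 0.25, 0.125])                  # hypothetical FIR impulse response h(0), h(dt), h(2*dt)
u = array([1.0, -1.0, 0.5, 0.25])              # hypothetical finite input
y = dt * convolve(h, u)                        # (h * u)(t) = dt * sum over tau of h(tau) u(t - tau)
f = linspace(0.0, 0.5 / dt, 5)                 # a few test frequencies

def fourier(x, f):                             # finite-signal Fourier transform on a frequency grid
    n = arange(len(x))
    return dt * sum(x[:, newaxis] * exp(-1j * 2 * pi * outer(n * dt, f)), axis=0)

# check that (h * u)(f) = h(f) x u(f) on the test frequencies
print(allclose(fourier(y, f), fourier(h, f) * fourier(u, f)))    # expected: True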
Quickly Decreasing Signals
The assumption that x(t) is finite simplifies the theory of the frequency-domain representation of signals, but it is also very restrictive. For example, in speech analysis, we routinely use auto-regressive filters; their impulse responses are not finite, and yet their frequency representation is needed, for example to analyze the acoustic resonances of the vocal tract (or “formants”).
Fortunately, the theory can be extended beyond finite signals. The extension is quite straightforward if x(t) decreases quickly when t \to \pm\infty, where by “quickly decreasing” we mean that it is dominated by a decaying exponential:
Definition – Quickly Decreasing Signal.
A signal x(t) with sample period \Delta t is quickly decreasing if
\exists \sigma > 0, \, \exists \kappa > 0, \, \forall t \in \mathbb{Z}\Delta t, \; |x(t)| \leq \kappa e^{-\sigma |t|}.
Given a quickly decreasing signal x(t) in the time domain, as in the finite-signal case, its representation in the frequency domain is x(f) = \Delta t \sum_{t \in \mathbb{Z}\Delta t} x(t) e^{-i2\pi f t} and in the complex domain x(z) = \Delta t \sum_{n \in \mathbb{Z}} x(t = n\Delta t) z^{-n}. However, the sums are not finite anymore; we consider that the values of the functions x(f) and x(z) are well defined when the sums are absolutely convergent.
Theorem – Quickly Decreasing Signal.
Any quickly decreasing signal x can be equivalently represented as:
a quickly decreasing function x(t),
a holomorphic function x(z) defined on some neighbourhood of \mathbb{U},
a \Delta f-periodic and analytic function x(f) on \mathbb{R}.
Theorem – Inversion Formulas.
Let x(t) be a quickly decreasing signal and x(z) be its representation in the complex domain, defined in the annulus A_\rho for some \rho \in [0,1[. For any r > 0 such that r\mathbb{U} \subset A_\rho, we have
x(t = n\Delta t) = \frac{1}{i2\pi} \int_{r[↺]} \frac{x(z)}{\Delta t} z^{n-1} \, dz.
As a special case, we have
x(t) = \int_{-\Delta f/2}^{+\Delta f/2} x(f) e^{i2\pi f t} \, df.
Example – Auto-Regressive Filter.
The filter whose impulse response h(t) is given by h(t = n\Delta t) = (1/\Delta t) \times 2^{-n} \times \{n \geq 0\} is an auto-regressive filter, governed for finite inputs u(t) by the dynamics y(t) = 1/2 \times y(t - \Delta t) + u(t). The transfer function h(z) of this filter is
h(z) = \Delta t \sum_{n \in \mathbb{Z}} h(t = n\Delta t) z^{-n} = \sum_{n \in \mathbb{N}} \left(\frac{1}{2z}\right)^n.
This sum is absolutely convergent when |1/(2z)| < 1, that is |z| > 1/2, and
h(z) = \frac{1}{1 - \frac{1}{2z}} = \frac{z}{z - 1/2}.
Consequently,
h(f) = \frac{e^{i2\pi f \Delta t}}{e^{i2\pi f \Delta t} - 1/2}.
The modulus and argument of this complex-valued function are called the filter frequency response magnitude and phase. They are usually displayed on separate graphs. We know that h(f) is \Delta f-periodic. Moreover, here h(t) is real-valued, hence for any f \in \mathbb{R}, h(-f) = \overline{h(f)}. We can therefore plot the graphs for f \in [0, +\Delta f/2] because all the information stored in the frequency response is available in this interval. The Python code below can be used to generate the graph data for \Delta f = 8000 Hz.
from numpy import *
df = 8000.0
dt = 1.0 / df
N = 1000
f = linspace(0.0, 0.5 * df, N)
z_f = exp(1j * 2 * pi * f * dt)
h_f = z_f / (z_f - 0.5)
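The frequency response magnitude and phase can then be extracted from this data, for example with:
h_f_magnitude = abs(h_f)                       # frequency response magnitude
h_f_phase = angle(h_f)                         # frequency response phase (radians)
The magnitude is largest at f = 0 and smallest at f = \Delta f/2, the expected low-pass behavior for this filter.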
Slowly Increasing Signals
Once again the theory of representation of signals in the frequency domain can be extended, this time beyond quickly decreasing signals. However, we will have to abandon the representation of x(f) as a function and adopt instead the representation of x(f) as a hyperfunction.
The extension will be valid as long as x(t) “increases slowly” when t→±∞, or more precisely, has an infra-exponential growth:
Definition – Slowly Increasing Signal.
A signal x(t) with sample period \Delta t is slowly increasing if
\forall \sigma > 0, \, \exists \kappa > 0, \, \forall t \in \mathbb{Z}\Delta t, \; |x(t)| \leq \kappa e^{\sigma |t|}.
Remark.
Quickly decreasing signals are obviously slowly increasing, but this class also includes all bounded signals, and even all signals that are dominated by polynomials.
Remark.
There is a way to get rid of the factor \kappa in the definition of slowly increasing signals. Instead, we can check that the signal is eventually dominated by every increasing exponential function of |t|:
\forall \sigma > 0, \, \exists \tau \in \mathbb{N}\Delta t, \, \forall t \in \mathbb{Z}\Delta t, \; |t| > \tau \Rightarrow |x(t)| \leq e^{\sigma |t|}.
Fourier Transform
Definition – Abel-Poisson Windowing.
Let r \in [0,1[. We denote x_r(t) the signal derived from x(t) by x_r(t) = r^{|t/\Delta t|} x(t), the application of the Abel-Poisson window r^{|t/\Delta t|} to the original signal x(t).
Remark.
The family of signals x_r(t), indexed by r, approximates x(t): for any t \in \mathbb{Z}\Delta t, x_r(t) \to x(t) when r \uparrow 1.
Remark.
If x(t) is only known to be slowly increasing, we cannot define its Fourier transform classically. However, for any r \in [0,1[, the signal x_r(t) is quickly decreasing and we may therefore compute its Fourier transform x_r(f); we then leverage this property to define the Fourier transform x(f) of x(t) as the family of functions x_r(f) indexed by r:
Definition – Signal in the Frequency Domain, Fourier Transform.
The representation x(f) in the frequency domain of a slowly increasing signal x(t) is the \Delta f-periodic function with values in [0,1[ \to \mathbb{C} defined by:
x(f) = r \in [0,1[ \; \mapsto \; x_r(f) \in \mathbb{C}.
The periodic hyperfunctions are then simply defined as the images of slowly increasing signals by the Fourier transform:
Definition – Periodic Hyperfunction.
A \Delta f-periodic hyperfunction is a function \phi: \mathbb{R} \to ([0,1[ \to \mathbb{C}) such that there is a slowly increasing signal x(t) with sample rate \Delta f satisfying \phi(f)(r) = x_r(f).
Remark – Multiple Representations in the Frequency Domain.
A signal x(t) that is quickly decreasing is also slowly increasing; therefore it has two distinct representations in the frequency domain: a periodic function f \in \mathbb{R} \mapsto x(f) \in \mathbb{C}, and a periodic hyperfunction f \in \mathbb{R} \mapsto x(f) \in ([0,1[ \to \mathbb{C}). Here, the Fourier-transform-as-a-function x(f) is the uniform limit of the Fourier-transform-as-a-hyperfunction x_r(f) when r \uparrow 1, hence we can easily recover the function representation of x(f) from its hyperfunction representation.
Remark – Hyperfunctions as Limits.
Is x(f) the limit of x_r(f) when r \uparrow 1? The short answer is “yes”, but only when the question is framed appropriately, and we still lack a few tools to do so. At this stage, it is probably more fruitful to think of x(f) as the approximation process r \mapsto x_r(f) itself rather than as its limit².
Example – Fourier Transform of a Constant Signal.
Let x(t) = 1 for every t \in \mathbb{Z}\Delta t. This signal is not quickly decreasing, but it is slowly increasing, hence we may compute its Fourier transform as a periodic hyperfunction. By definition, x_r(t) = r^{|t/\Delta t|} x(t) = r^{|t/\Delta t|}, hence
x_r(f) = \Delta t \sum_{n \in \mathbb{Z}} r^{|n|} e^{-i2\pi f n \Delta t}.
We may split the sum into two:
x_r(f) = \Delta t \sum_{n \leq 0} (r e^{i2\pi f \Delta t})^{-n} + \Delta t \sum_{n > 0} (r e^{-i2\pi f \Delta t})^{n}.
Both terms on the right-hand side are sums of geometric series, which yields
\Delta t \sum_{n \leq 0} (r e^{i2\pi f \Delta t})^{-n} = \frac{\Delta t}{1 - r e^{i2\pi f \Delta t}}, \quad
\Delta t \sum_{n > 0} (r e^{-i2\pi f \Delta t})^{n} = \frac{\Delta t \, r e^{-i2\pi f \Delta t}}{1 - r e^{-i2\pi f \Delta t}} = -\frac{\Delta t}{1 - r^{-1} e^{i2\pi f \Delta t}}.
Hence, if we define x_\pm(z) = \frac{\Delta t}{1 - z}, we can write x_r(f) as
x_r(f) = x_\pm(z = r e^{i2\pi f \Delta t}) - x_\pm(z = r^{-1} e^{i2\pi f \Delta t}).
We may compute another useful expression of x_r(f):
x_r(f) = \frac{\Delta t}{1 - r e^{i2\pi f \Delta t}} + \frac{\Delta t \, r e^{-i2\pi f \Delta t}}{1 - r e^{-i2\pi f \Delta t}} = \frac{\Delta t (1 - r^2)}{1 - 2r \cos 2\pi f \Delta t + r^2}.
A plot of the functions x_r(f) for several values of r – see the sketch below – clearly demonstrates how the energy of the signals concentrates around f = 0 when r \uparrow 1.
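The following numpy sketch – with an assumed sample rate of 8000 Hz and arbitrary values of r – generates these curves from the closed-form expression above:
from numpy import *

df = 8000.0                                    # assumed sample rate (Hz)
dt = 1.0 / df
f = linspace(-0.5 * df, 0.5 * df, 1000)
for r in [0.5, 0.9, 0.99]:
    # x_r(f) = dt * (1 - r**2) / (1 - 2*r*cos(2*pi*f*dt) + r**2)
    x_r_f = dt * (1 - r**2) / (1 - 2 * r * cos(2 * pi * f * dt) + r**2)
    print(r, x_r_f.max())                      # the peak at f = 0 grows as r increases towards 1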
Standard Defining Function
The Fourier transform x(f) computed in the previous example exhibits a very specific structure that is actually shared by all periodic hyperfunctions:
Theorem & Definition – Standard Defining Function.
For every slowly increasing signal x(t), there is a unique function x_\pm(z) – called the standard defining function of x(f) – holomorphic in \mathbb{C} \setminus \mathbb{U}, with x_\pm(z = \infty) = \lim_{|z| \to +\infty} x_\pm(z) = 0, such that for any r \in [0,1[:
x_r(f) = x_\pm(r e^{i2\pi f \Delta t}) - x_\pm(r^{-1} e^{i2\pi f \Delta t}).
This function is defined by:
x_\pm(z) = x_+(z) = +\Delta t \sum_{n \leq 0} x(t = n\Delta t) z^{-n} \; \text{if } |z| < 1,
x_\pm(z) = x_-(z) = -\Delta t \sum_{n > 0} x(t = n\Delta t) z^{-n} \; \text{if } |z| > 1.
Theorem – Inversion Formula.
Any holomorphic function defined on \mathbb{C} \setminus \mathbb{U} and equal to 0 at z = \infty is the standard defining function x_\pm(z) of a unique slowly increasing signal x(t), defined for any r \in ]0,1[ by
x(t = n\Delta t) = \frac{1}{i2\pi} \left[ \int_{r[↺]} - \int_{r^{-1}[↺]} \right] \frac{x_\pm(z)}{\Delta t} z^{n-1} \, dz.
Example – Inversion Formula.
Consider the signal whose Fourier transform has the standard defining function x_\pm(z) = \frac{\Delta t}{1 - z}. The inversion formula provides
x(t = n\Delta t) = \frac{1}{i2\pi} \left[ \int_{r[↺]} - \int_{r^{-1}[↺]} \right] \frac{z^{n-1}}{1 - z} \, dz.
The right-hand side is a line integral over the pair of paths \gamma made of r[↺] (oriented counter-clockwise) and r^{-1}[↺] (oriented clockwise). We have \mathrm{ind}(\gamma, 0) = 0 and \mathrm{ind}(\gamma, 1) = -1, hence the residue theorem yields
x(t = n\Delta t) = \frac{1}{i2\pi} \int_\gamma \frac{z^{n-1}}{1 - z} \, dz = -\mathrm{res}\left(\frac{z^{n-1}}{1 - z}, z = 1\right) = 1.
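This computation can be checked numerically; the sketch below discretizes both circles (the sample index and the radius are arbitrary choices) and approximates the two line integrals by Riemann sums:
from numpy import *

n = 3                                          # arbitrary sample index
r = 0.5                                        # arbitrary radius in ]0, 1[
theta = linspace(0.0, 2 * pi, 10000, endpoint=False)

def contour_integral(radius):
    z = radius * exp(1j * theta)               # positively oriented circle of the given radius
    dz = 1j * z * (theta[1] - theta[0])        # differential element along the path
    return sum(z**(n - 1) / (1 - z) * dz)

x_n = (contour_integral(r) - contour_integral(1 / r)) / (2j * pi)
print(around(x_n, 6))                          # should be close to 1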
Non-Standard Defining Functions
Definition – Defining Function.
Let x(f) be a \Delta f-periodic hyperfunction with standard defining function x_\pm(z). A holomorphic function \phi(z) defined on V \setminus \mathbb{U}, where V is an open neighbourhood of \mathbb{U}, is a defining function of x(f) if \phi(z) - x_\pm(z) has a holomorphic extension to V. In the sequel, unless we use the “standard” qualifier, the notation x_\pm(z) will be used to denote any of the defining functions of a signal x(t).
Theorem – Inversion Formula.
Any holomorphic function defined on A_\rho \setminus \mathbb{U} for some \rho \in [0,1[ is a defining function x_\pm(z) of a unique slowly increasing signal x(t), defined for any r \in ]\rho,1[ by
x(t = n\Delta t) = \frac{1}{i2\pi} \left[ \int_{r[↺]} - \int_{r^{-1}[↺]} \right] \frac{x_\pm(z)}{\Delta t} z^{n-1} \, dz.
Remark.
The domain of definition of a defining function x_\pm(z) always contains a subset A_\rho \setminus \mathbb{U} for some \rho \in [0,1[. As this restriction conveys enough information to describe the signal x(t), it is harmless, and the assumption made in the theorem that the defining function is actually defined on such a set is not overly restrictive.
Example – Quickly Decreasing Signals.
If x(t) is a quickly decreasing signal, its z-transform x(z) is defined in some open neighbourhood of \mathbb{U} by x(z) = \Delta t \sum_{n \in \mathbb{Z}} x(t = n\Delta t) z^{-n} (see section Quickly Decreasing Signals); on the other hand its standard defining function x_\pm(z) is given by
x_\pm(z) = x_+(z) = +\Delta t \sum_{n \leq 0} x(t = n\Delta t) z^{-n} \; \text{if } |z| < 1,
x_\pm(z) = x_-(z) = -\Delta t \sum_{n > 0} x(t = n\Delta t) z^{-n} \; \text{if } |z| > 1.
As x(z) is defined and holomorphic on some open neighbourhood of \mathbb{U}, x_-(z) and x_+(z) can be extended as holomorphic functions to such a domain; if we still denote these extensions by x_-(z) and x_+(z), we can write x(z) = x_+(z) - x_-(z). Hence, the difference between x_\pm(z) = +x(z) \times \{|z| < 1\} and the standard defining function has an analytic extension – namely -x_-(z) – in a neighbourhood of \mathbb{U}, and this x_\pm(z) qualifies as a defining function. The function x_\pm(z) = -x(z) \times \{|z| > 1\}, for similar reasons, also does.
Ordinary Functions as Hyperfunctions
We still need to make our frequency-domain representations as hyperfunctions consistent with the classical framework. If a signal has a classical frequency-domain representation, as a complex-valued, locally integrable, Δf-periodic function x(f) – or “ordinary function” representation – what is its frequency-domain representation as a hyperfunction?
The answer is – at least conceptually – pretty straightforward: if x(f) is an ordinary function, the classical time-domain representation x(t) is given by
x(t) = \int_{-\Delta f/2}^{+\Delta f/2} x(f) e^{i2\pi f t} \, df.
In particular, x(t) is a bounded signal, hence it is a slowly increasing signal, and we may define its frequency-domain representation as a hyperfunction: this is the representation of x(f) as a hyperfunction.
Theorem – Hyperfunction Representation of an Ordinary Function.
If x(f) is an ordinary function, the standard defining function x_\pm(z) of its representation as a hyperfunction is defined by:
x_\pm(z) = \int_{-\Delta f/2}^{+\Delta f/2} x(f) \frac{\Delta t}{1 - z e^{-i2\pi f \Delta t}} \, df.
Example – Constant Frequency-Domain Representation.
The ordinary function x(f) = 1 has a temporal representation given by
x(t) = \int_{-\Delta f/2}^{+\Delta f/2} e^{i2\pi f t} \, df.
If t = 0, x(t) = \Delta f; otherwise, t = n\Delta t for some n \neq 0 and
x(t) = \left[ \frac{e^{i2\pi f n \Delta t}}{i2\pi n \Delta t} \right]_{-\Delta f/2}^{+\Delta f/2} = \frac{(-1)^n - (-1)^{-n}}{i2\pi n \Delta t} = 0.
Hence, x(t) = 1(t): the signal is the unit impulse. At this stage, it is easy to use the definition of the standard defining function to derive that x_\pm(z) = \{|z| < 1\}. With the above theorem, we can also compute x_\pm(z) directly from the definition x(f) = 1. Indeed, we have
x_\pm(z) = \int_{-\Delta f/2}^{+\Delta f/2} \frac{\Delta t}{1 - z e^{-i2\pi f \Delta t}} \, df = \int_{-\Delta f/2}^{+\Delta f/2} \frac{\Delta t}{1 - z e^{-i2\pi f \Delta t}} \, \frac{d(e^{i2\pi f \Delta t})}{i2\pi \Delta t \, e^{i2\pi f \Delta t}},
hence
x_\pm(z) = \frac{1}{i2\pi} \int_{[↺]} \frac{\xi^{-1}}{1 - z\xi^{-1}} \, d\xi = \frac{1}{i2\pi} \int_{[↺]} \frac{1}{\xi - z} \, d\xi,
which yields x_\pm(z) = \{|z| < 1\} as expected.
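A quick numerical check of this result – with an assumed sample rate and arbitrary test points – integrates the defining-function formula over one period for one point inside and one point outside the unit circle:
from numpy import *

df = 8000.0
dt = 1.0 / df
f = linspace(-0.5 * df, 0.5 * df, 100001)

def defining_function(z):
    # x_pm(z) = integral over one period of x(f) * dt / (1 - z * exp(-i*2*pi*f*dt)), here with x(f) = 1
    return trapz(dt / (1.0 - z * exp(-1j * 2 * pi * f * dt)), f)

print(around(defining_function(0.3 + 0.4j), 6))    # |z| < 1: should be close to 1
print(around(defining_function(1.2 - 0.5j), 6))    # |z| > 1: should be close to 0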
Example – Defining Function of a Low-Pass Filter.
The impulse response x(t) of a perfect low-pass filter whose cutoff frequency is f_c = \Delta f/4 – a filter whose passband and stopband have equal size – is defined in the frequency domain by
x(f) = \{|f| < \Delta f/4\}, \quad f \in [-\Delta f/2, \Delta f/2[.
The same kind of computations that we have made when we had x(f) = 1 yields
x_\pm(z) = \frac{1}{i2\pi} \int_\gamma \frac{1}{\xi - z} \, d\xi
where \gamma: f \in [-1/4, 1/4] \mapsto e^{i2\pi f}. Inside or outside of the unit circle, if we differentiate under the integral sign and integrate the resulting expression explicitly, we end up with:
\frac{dx_\pm(z)}{dz} = \frac{1}{i2\pi} \left[ \frac{1}{z - i} - \frac{1}{z + i} \right].
Let \log denote the principal value of the logarithm. Inside the unit circle, the function
z \mapsto \frac{1}{i2\pi} \left[ \log(z - i) - \log(z + i) \right]
is defined and holomorphic, and its derivative matches the derivative of x_\pm(z). A direct computation shows that x_\pm(z=0) = 1/2, while this function takes the value -1/2 at z = 0, hence
x_+(z) = \frac{1}{i2\pi} \left[ \log(z - i) - \log(z + i) \right] + 1.
Outside the unit circle, the function
z \mapsto \frac{1}{i2\pi} \log \frac{z - i}{z + i}
is defined, holomorphic, and has the same derivative as x_\pm(z). Moreover, it has the same limit when |z| \to +\infty, hence
x_-(z) = \frac{1}{i2\pi} \log \frac{z - i}{z + i}.
The time-domain representation of this filter is easy to determine: we have
x(t) = \int_{-\Delta f/2}^{+\Delta f/2} x(f) e^{i2\pi f t} \, df = \int_{-\Delta f/4}^{+\Delta f/4} e^{i2\pi f t} \, df,
hence
x(t = n\Delta t) = \left[ \frac{e^{i2\pi f n \Delta t}}{i2\pi n \Delta t} \right]_{-\Delta f/4}^{+\Delta f/4} = \frac{\Delta f}{2} \, \mathrm{sinc} \, \frac{\pi n}{2}.
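The closed form can be compared with a direct numerical evaluation of the inverse Fourier integral – a sketch with an assumed sample rate and a small range of sample indices (note that numpy's sinc is normalized: sinc(u) = sin(pi*u)/(pi*u)):
from numpy import *

df = 8000.0
dt = 1.0 / df
n = arange(-8, 9)
x_closed = 0.5 * df * sinc(0.5 * n)                # (df/2) * sin(pi*n/2) / (pi*n/2)
f = linspace(-0.25 * df, 0.25 * df, 100001)        # the passband [-df/4, +df/4]
x_direct = array([trapz(exp(1j * 2 * pi * f * (k * dt)), f) for k in n])
print(allclose(x_closed, real(x_direct), atol=1e-3 * df))    # expected: True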
Calculus
The representation of signals in the frequency domain as hyperfunctions allows us to consider a large class of signals – the slowly increasing ones – but we now have to get familiar with the operations that we can perform on these mathematical objects. Some operations that are straightforward with functions cannot be carried over to hyperfunctions – for example, we cannot in general define the value of a hyperfunction x(f) at a given frequency f – some will be equally easy to perform, and finally some – such as differentiation with respect to f – will be much easier to deal with in this new setting.
Linear Combination
As the Fourier transform and the z-transform are linear operators, the multiplication of signals by a complex scalar and the sum of signals can be defined in the time domain, by
(\lambda x)(t) = \lambda x(t), \quad (x + y)(t) = x(t) + y(t),
or equivalently in the frequency domain
(\lambda x)_r(f) = \lambda x_r(f), \quad (x + y)_r(f) = x_r(f) + y_r(f),
as well as in the complex domain
(\lambda x)_\pm(z) = \lambda x_\pm(z), \quad (x + y)_\pm(z) = x_\pm(z) + y_\pm(z).
Modulation
Let x(t) be a signal, f_0 \in \mathbb{R} and y(t) = x(t) e^{i2\pi f_0 t}. Straightforward computations show that
y_r(f) = x_r(f - f_0) \quad \text{and} \quad y_\pm(z) = x_\pm(z e^{-i2\pi f_0 \Delta t}).
Example – Fourier Transform of Sine & Cosine.
Let a > 0, \phi \in \mathbb{R}, f_0 > 0 and let x(t) be the signal defined by x(t) = a\cos(2\pi f_0 t + \phi). We can decompose x(t) using complex exponentials:
x(t) = \frac{a e^{+i\phi}}{2} e^{i2\pi f_0 t} \times 1 + \frac{a e^{-i\phi}}{2} e^{-i2\pi f_0 t} \times 1.
As we know that the standard defining function of t \mapsto 1 is \Delta t/(1-z), given the properties of linear combination and modulation in the complex domain, we have
x_\pm(z) = \frac{a e^{+i\phi}}{2} \frac{\Delta t}{1 - z e^{-i2\pi f_0 \Delta t}} + \frac{a e^{-i\phi}}{2} \frac{\Delta t}{1 - z e^{+i2\pi f_0 \Delta t}}.
Integration (Frequency Domain)
Let x(f) be a \Delta f-periodic hyperfunction. It would be natural to define the integral of x(f) over one period as the limit when r \uparrow 1 of the corresponding integrals of x_r(f):
\int_{-\Delta f/2}^{+\Delta f/2} x(f) \, df = \lim_{r \uparrow 1} \int_{-\Delta f/2}^{+\Delta f/2} x_r(f) \, df,
but does this definition make sense? Are we sure that the limit always exists to begin with? Actually, it does, and in a quite spectacular way: the integral under the limit is eventually independent of r:
Definition & Theorem – Integration in the Frequency Domain.
The integral over one period of a \Delta f-periodic hyperfunction x(f) with defining function x_\pm(z) is defined as
\int_{-\Delta f/2}^{+\Delta f/2} x(f) \, df = \int_{-\Delta f/2}^{+\Delta f/2} x_r(f) \, df = \frac{1}{i2\pi} \left[ \int_{r[↺]} - \int_{r^{-1}[↺]} \right] \frac{x_\pm(z)}{z \Delta t} \, dz
for any r \in ]\rho,1[ if the domain of definition of x_\pm(z) contains A_\rho \setminus \mathbb{U}. This definition is sound: the right-hand sides of this formula are independent of the choice of r; they are also independent of the choice of the defining function.
Example – Constant Signal.
Let x(t) = 1 for every t \in \mathbb{Z}\Delta t. As the standard defining function of x(f) is x_\pm(z) = \Delta t/(1-z),
\int_{-\Delta f/2}^{+\Delta f/2} x(f) \, df = \frac{1}{i2\pi} \int_{r[↺]} \frac{1}{z(1-z)} \, dz - \frac{1}{i2\pi} \int_{r^{-1}[↺]} \frac{1}{z(1-z)} \, dz.
The pair of paths \gamma made of r\mathbb{U} (oriented counter-clockwise) and r^{-1}\mathbb{U} (oriented clockwise) satisfies \mathrm{ind}(\gamma, 0) = 0 and \mathrm{ind}(\gamma, 1) = -1, hence
\int_{-\Delta f/2}^{+\Delta f/2} x(f) \, df = (-1) \times \mathrm{res}\left[\frac{1}{z(1-z)}, z = 1\right] = 1.
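The same value can be observed directly on the approximations x_r(f): the sketch below integrates the closed-form expression of x_r(f) obtained in the constant-signal example over one period, for several arbitrary values of r, and every integral is close to 1:
from numpy import *

df = 8000.0
dt = 1.0 / df
f = linspace(-0.5 * df, 0.5 * df, 100001)
for r in [0.5, 0.9, 0.99]:
    x_r_f = dt * (1 - r**2) / (1 - 2 * r * cos(2 * pi * f * dt) + r**2)
    print(r, around(trapz(x_r_f, f), 6))           # close to 1.0 for every r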
Differentiation (Frequency Domain)
Let x(f) be a \Delta f-periodic hyperfunction. For every r \in [0,1[, the function x_r(f) is differentiable with respect to f. It would be natural to define the derivative of x(f) with respect to f by
\frac{dx(f)}{df} = \frac{\partial x_r(f)}{\partial f},
and then every periodic hyperfunction would be differentiable. But does this definition make sense? Is dx(f)/df well defined as a hyperfunction?
Definition & Theorem – Differentiation in the Frequency Domain.
Let x(f) be a \Delta f-periodic hyperfunction with standard defining function x_\pm(z). The derivative of x(f) with respect to f is the \Delta f-periodic hyperfunction defined as
\frac{dx(f)}{df} = \frac{\partial x_r(f)}{\partial f}
and its standard defining function is (i2\pi\Delta t) \, z \, \frac{dx_\pm(z)}{dz}.
Example – Integral of a Derivative.
Let x(f) be a periodic hyperfunction. We know that the standard defining function of dx(f)/df is (i2\pi\Delta t) \, z \, dx_\pm(z)/dz. Hence, the integral
\int_{-\Delta f/2}^{+\Delta f/2} \frac{dx(f)}{df} \, df
is equal to
\frac{1}{i2\pi} \left[ \int_{r[↺]} - \int_{r^{-1}[↺]} \right] \frac{(i2\pi\Delta t) \, z \, dx_\pm(z)/dz}{z \Delta t} \, dz,
and after obvious simplifications, to
\int_{-\Delta f/2}^{+\Delta f/2} \frac{dx(f)}{df} \, df = \left[ \int_{r[↺]} - \int_{r^{-1}[↺]} \right] \frac{dx_\pm(z)}{dz} \, dz = 0.
Convolution (Time Domain), Product (Frequency Domain)
Theorem – Convolution.
The convolution (x \ast y)(t) between a slowly increasing signal x(t) and a quickly decreasing signal y(t) is a slowly increasing signal.
Definition – Product.
The product w(f) = x(f) \times y(f) of a \Delta f-periodic hyperfunction x(f) and a \Delta f-periodic analytic function y(f) is the hyperfunction defined by w_{\pm}(z) = x_{\pm}(z) \times y(z).
Remark.
The product between arbitrary hyperfunctions is not defined in general.
Remark – Product Soundness.
The definition of the product above is independent of the choice of the defining function for x(f).
Theorem.
The convolution (x \ast y)(t) of a slowly increasing signal x(t) and a quickly decreasing signal y(t) is represented in the frequency domain as (x \ast y)(f) = x(f) \times y(f).
Example – Filtering a Pure Frequency.
Let h(t) be a quickly decreasing signal and consider the filter that associates to the slowly increasing input u(t) the slowly increasing output y(t) = (h \ast u)(t). The transfer function h(z) of this filter is holomorphic in a neighbourhood of \mathbb{U}. Let f_0 > 0; if u(t) = e^{i 2\pi f_0 t}, we have
y_{\pm}(z) = h(z) \times \frac{\Delta t}{1 - z e^{-i2\pi f_0 \Delta t}}.
It is clear that the difference between this defining function and
\phi(z) = h(z = e^{i2\pi f_0 \Delta t}) \times \frac{\Delta t}{1 - z e^{-i2\pi f_0 \Delta t}}
can be extended to a holomorphic function in a neighbourhood of \mathbb{U}. Hence, \phi(z) is also a defining function for y(f) (moreover, it is standard). From this defining function, the results of the section Modulation show that y(t) = h(f = f_0) \times e^{i 2\pi f_0 t}.
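This prediction is easy to test numerically. The sketch below reuses the auto-regressive filter of a previous example, truncates its impulse response (which is quickly decreasing, so the truncation error is negligible), and picks an arbitrary input frequency and output time:
from numpy import *

df = 8000.0
dt = 1.0 / df
f0 = 1000.0                                        # arbitrary pure input frequency (Hz)
n = arange(0, 60)                                  # truncated support of the impulse response
h = (1.0 / dt) * 0.5**n                            # h(n*dt) = (1/dt) * 2**(-n) for n >= 0
t = 5 * dt                                         # arbitrary output time
# y(t) = dt * sum over tau of h(tau) * u(t - tau), with u(t) = exp(i*2*pi*f0*t)
y_t = dt * sum(h * exp(1j * 2 * pi * f0 * (t - n * dt)))
# predicted output value: h(f = f0) * exp(i*2*pi*f0*t)
h_f0 = exp(1j * 2 * pi * f0 * dt) / (exp(1j * 2 * pi * f0 * dt) - 0.5)
print(allclose(y_t, h_f0 * exp(1j * 2 * pi * f0 * t)))    # expected: True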
Fourier Inversion Formula
We already know enough about operational calculus of hyperfunctions to prove some interesting results. For example, we may now deal with the extension of the Fourier Inversion Formula to slowly increasing signals (in the time domain) or hyperfunctions (in the frequency domain).
Theorem – Fourier Inversion Formula.
Let x(t) be a slowly increasing signal and x(f) its Fourier transform. We have
x(t) = \int_{-\Delta f/2}^{+\Delta f / 2} x(f) e^{i2\pi f t} \, df.
Remark.
The first step is obviously to check that the right-hand side means something, before we prove that it is equal to x(t). The Fourier transform x(f) is defined as a \Delta f-periodic hyperfunction. For any time t \in \mathbb{Z} \Delta t, the function f \mapsto e^{i2\pi f t} is analytic and \Delta f-periodic, hence x(f) e^{i2\pi f t} is defined as a \Delta f-periodic hyperfunction. Therefore its integral over one period is well defined.
Bibliography
Notes
Actually, Kenneth Iverson originally used the syntax (\, \cdot \,) while Donald Knuth prefers [\, \cdot \,] (see Knuth 1992).↩
A similar situation happens in the construction of real numbers, at the stage where the rational numbers are available, but not yet the real numbers. You can then think of “\pi” as the sequence of decimal approximations 3, 31/10, 314/100, etc., but the question “Is \pi the limit of this sequence?” is meaningless. It only starts to make sense when you have constructed the set of real numbers, embedded the rational numbers in it and defined a topology on the real numbers. Then, finally, the answer is “yes”!↩