We have already met the process of smoothing in connection with the heat
equation: starting with bounded initial data, $f(x)$, the solution of the
heat equation (para.1),
$$u(x,t) = \int K(x-y,t)\,f(y)\,dy, \qquad K(x,t) = \frac{1}{\sqrt{4\pi t}}\,e^{-x^2/4t},$$
represents the effect of smoothing $f(x)$, so that $u(\cdot,t)$ is $C^\infty$ (in fact analytic) and $u(\cdot,t) \to f$ as $t \downarrow 0$.
A general process of smoothing can be accomplished by convolution with an
appropriate smoothing kernel $Q_\delta(x)$,
$$f_\delta(x) = (Q_\delta * f)(x),$$
such that $f_\delta$ is sufficiently smoother than $f(x)$ is, and $f_\delta \to f$ as $\delta \to 0$.
With the heat kernel, the role of $\delta$ is played by the time $t > 0$. A standard way to construct such filters is the following. We start with a $C^s$-function $Q(x)$ supported on, say, $(-1,1)$, such that it has unit mass and its first $r$ moments vanish,
$$\int Q(x)\,dx = 1, \qquad \int x^j Q(x)\,dx = 0, \quad j = 1,\dots,r. \eqno(smoo.4)$$
Then we set $Q_\delta(x) = \frac{1}{\delta}Q\!\left(\frac{x}{\delta}\right)$ and consider
$$f_\delta(x) = (Q_\delta * f)(x) = \int Q_\delta(x-y)\,f(y)\,dy.$$
Now, assume $f$ is $(r+1)$-differentiable in the neighborhood of $x$; then, since $Q_\delta$ is supported on $(-\delta,\delta)$ and satisfies (smoo.4) as well, we have by Taylor expansion
$$f_\delta(x) = \int Q_\delta(y)\,f(x-y)\,dy = \sum_{j=0}^{r}\frac{(-1)^j}{j!}\,f^{(j)}(x)\int y^j Q_\delta(y)\,dy + O(\delta^{r+1}).$$
The first $r$ moments of $Q_\delta$ vanish and we are left with
$$f_\delta(x) = f(x) + O(\delta^{r+1});$$
i.e., $f_\delta(x)$ converges to $f(x)$ with order $r+1$ as $\delta \to 0$. Moreover, $f_\delta$ is as smooth as $Q$ is, since
$$f_\delta^{(k)}(x) = \int Q_\delta^{(k)}(x-y)\,f(y)\,dy$$
has as many bounded derivatives as $Q$ has; i.e., starting with a function $f$ which is $(r+1)$-differentiable in the neighborhood of $x$, we end up with a regularized function $f_\delta$ in $C^s$.
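As a quick numerical illustration of this mollification process, the sketch below convolves $f(x) = \sin x$ with a scaled hat kernel. The kernel, the grid, and the values of $\delta$ are our own illustrative assumptions, not the text's; since the hat is symmetric, its first moment vanishes as well ($r = 1$), so second order convergence is expected.

```python
import numpy as np

# Mollification f_delta = Q_delta * f with Q_delta(y) = Q(y/delta)/delta.
# The hat kernel is a hypothetical choice: unit mass, supported on (-1,1);
# being symmetric, its first moment also vanishes (r = 1), so we expect
# second order convergence, f_delta - f = O(delta^2).
hat = lambda s: np.maximum(1.0 - np.abs(s), 0.0)

x = np.linspace(-np.pi, np.pi, 8001)
dx = x[1] - x[0]
f = np.sin(x)

errs = []
for delta in (0.2, 0.1):
    n = int(round(delta / dx))
    y = dx * np.arange(-n, n + 1)       # symmetric grid covering (-delta, delta)
    Qd = hat(y / delta) / delta
    Qd /= Qd.sum() * dx                 # enforce unit mass on the grid
    fd = np.convolve(f, Qd * dx, mode="same")
    interior = np.abs(x) < 2.0          # discard boundary artifacts of "same"
    errs.append(np.max(np.abs(fd - f)[interior]))

print(errs[0] / errs[1])                # second order: halving delta ~ factor 4
```

Replacing the hat by a kernel with $r$ vanishing moments should raise the observed rate to $r+1$.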
Example: For regularization choose a unit mass kernel $Q(x) \ge 0$ supported on $(-1,1)$, see Figure 3.10.
Then $f_\delta$ is a smooth function
with first order convergence rate,
$$|f_\delta(x) - f(x)| \le \mathrm{Const}\cdot\delta.$$
To increase the order of convergence, one requires more vanishing moments, (smoo.4), which yield more oscillatory kernels. We note that this smoothing process is purely local -- it involves the $\delta$-neighboring values of the function $f$ -- in order to yield a $C^s$-regularized function $f_\delta$ with $f_\delta - f = O(\delta^{r+1})$; i.e., the convergence rate here is $r+1$.
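To see how vanishing moments raise the order, one can cancel the second moment by combining two scaled hats. The kernel below is our own construction (not from the text): $Q_4(s) = \tfrac{4}{3}\,\mathrm{hat}(s) - \tfrac{1}{6}\,\mathrm{hat}(s/2)$ has unit mass and three vanishing moments, and, as remarked above, it is oscillatory (it changes sign).

```python
import numpy as np

# A hypothetical fourth order kernel built from two scaled hat functions:
#   Q4(s) = (4/3) hat(s) - (1/6) hat(s/2),  hat(s) = (1 - |s|)_+.
# It has unit mass and a vanishing 2nd moment (odd moments vanish by
# symmetry), so r = 3 and we expect convergence of order r + 1 = 4.
hat = lambda s: np.maximum(1.0 - np.abs(s), 0.0)
Q4 = lambda s: (4.0 / 3.0) * hat(s) - (1.0 / 6.0) * hat(s / 2.0)

x = np.linspace(-np.pi, np.pi, 16001)
dx = x[1] - x[0]
f = np.sin(x)

errs = []
for delta in (0.4, 0.2):
    n = int(round(2.0 * delta / dx))    # Q4(./delta) is supported on (-2d, 2d)
    y = dx * np.arange(-n, n + 1)       # symmetric grid
    Qd = Q4(y / delta) / delta
    Qd /= Qd.sum() * dx                 # enforce unit mass on the grid
    fd = np.convolve(f, Qd * dx, mode="same")
    interior = np.abs(x) < 2.0          # discard boundary artifacts of "same"
    errs.append(np.max(np.abs(fd - f)[interior]))

print(errs[0] / errs[1])                # fourth order: halving delta ~ factor 16
```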
Figure 3.10: Unit mass mollifiers
We can also achieve local regularization with spectral convergence. To this
end we set
$$\psi_m(y) = \rho(y)\,D_m(y), \qquad D_m(y) = \frac{\sin\bigl((m+\frac12)y\bigr)}{2\pi\sin(y/2)},$$
where $\rho$ is a $C^\infty$-function supported on $(-\delta,\delta)$ with $\rho(0) = 1$. Figure 3.11 demonstrates such a mollifier. In this case the support of the mollifier is kept fixed; instead, by increasing $m$ -- particularly, by allowing $m$ to increase together with $N$ -- we obtain a highly oscillatory kernel whose monomial moments satisfy (smoo.4) modulo a spectrally small error.
Figure 3.11: A spectral unit mass mollifier
Then $\psi_m * f$ is $C^\infty$ because $\psi_m$ is; and the convergence rate is spectral since, by (2.1.34), convolution with the Dirichlet kernel amounts to truncation of the Fourier expansion,
$$(\psi_m * f)(x) = S_m\bigl[\rho(\cdot)f(x-\cdot)\bigr](0),$$
and since $\rho$ was chosen as a $C^\infty$-function, we are left with a residual term which does not exceed $\mathrm{Const}_s\cdot m^{-s}$ for any $s$.
Thus, the convergence rate is as fast as the local smoothness of $f$ permits (in this case, the smoothness of $f$ in the local neighborhood $(x-\delta, x+\delta)$). Of course, with $\rho \equiv 1$ we recover the global regularization due to the spectral projection. The role of $\rho$ was to localize this process of spectral smoothing.
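A numerical sketch of this localized spectral smoothing follows; the Gevrey-type bump $\rho(y) = \exp\bigl(-y^2/(1-y^2)\bigr)$, the width $\delta = 1$, the evaluation point, and the quadrature are our own illustrative choices.

```python
import numpy as np

# (psi_m * f)(x0) with psi_m(y) = rho(y) D_m(y): rho is a C-infinity bump
# supported on (-delta, delta) with rho(0) = 1, and D_m is the Dirichlet
# kernel, so the convolution equals the truncated Fourier expansion
# S_m[rho(.) f(x0 - .)](0).  All parameters here are illustrative.
def spectral_mollify(f, x0, m, delta=1.0, N=40000):
    h = 2.0 * np.pi / N
    y = -np.pi + (np.arange(N) + 0.5) * h          # midpoints, avoids y = 0
    rho = np.zeros_like(y)
    inside = np.abs(y) < delta
    s = y[inside] / delta
    rho[inside] = np.exp(-s**2 / (1.0 - s**2))     # C^inf bump, rho(0) = 1
    Dm = np.sin((m + 0.5) * y) / (2.0 * np.pi * np.sin(0.5 * y))
    return h * np.sum(rho * Dm * f(x0 - y))        # quadrature of the convolution

x0 = 0.7
errs = [abs(spectral_mollify(np.sin, x0, m) - np.sin(x0)) for m in (8, 64)]
print(errs)                                        # error drops rapidly with m
```

Since $\rho(\cdot)\sin(x_0-\cdot)$ is $C^\infty$, the truncation error decays faster than any fixed power of $m$.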
We can as easily implement such smoothing in the Fourier space. For example,
with the heat kernel we have
$$\hat u(k,t) = e^{-k^2 t}\,\hat f(k),$$
so that for any $t > 0$ the coefficients $\hat u(k,t)$ decay faster than exponentially; hence $u(x, t>0)$ belongs to $H^s$ for any $s$ (and by Sobolev embedding, therefore, is in $C^\infty$ -- in fact analytic). In general we apply a smoothing factor $\sigma$ to the Fourier coefficients,
$$\hat f_\delta(k) = \sigma(\delta k)\,\hat f(k),$$
such that for $f_\delta$ to be in $H^s$ we require sufficiently fast decay of $\sigma$, (smoo.15), and $r+1$ order of convergence follows with
$$\sigma(\xi) = 1 + O(|\xi|^{r+1}), \quad \xi \to 0.$$
Indeed, (smoo.15) implies the required decay of $\hat f_\delta(k) = \sigma(\delta k)\hat f(k)$.
Note: we can deal with any unbounded $f$ by an appropriate splitting of $f$. To
obtain spectral accuracy we may use a $C^\infty$ smoothing factor which equals one near the origin,
$$\sigma(\xi) \equiv 1 \;\;\text{for}\;\; |\xi| \le \tfrac12, \qquad \sigma \in C_0^\infty. \eqno(smoo.17)$$
Clearly $f_\delta$ is $C^\infty$, and the familiar Fourier estimates give us spectral accuracy, $|f_\delta(x) - f(x)| \le \mathrm{Const}_s\,\delta^s\,\|f\|_{H^s}$ for any $s$.
We emphasize that this kind of smoothing in the Fourier space need not be local; rather, the corresponding kernels are negligibly small away from a small interval centered around the origin, whose width depends on $\delta$ or $t$. (This is due to the uncertainty principle.)
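A minimal FFT sketch of heat-kernel smoothing in the Fourier space; the grid size, the time $t = 10^{-3}$, and the discontinuous test data are our own illustrative choices.

```python
import numpy as np

# Multiply the discrete Fourier coefficients by exp(-k^2 t): the high modes
# are damped faster than any power of k, while the solution is essentially
# untouched away from the discontinuities.
N = 256
x = 2.0 * np.pi * np.arange(N) / N
f = np.sign(np.sin(x))                   # discontinuous data (square wave)
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers

f_hat = np.fft.fft(f)
u_hat = f_hat * np.exp(-k**2 * 1e-3)     # heat smoothing at t = 1e-3
u = np.real(np.fft.ifft(u_hat))

high = np.abs(k) > N // 4                # upper half of the spectrum
damping = np.sum(np.abs(u_hat[high])**2) / np.sum(np.abs(f_hat[high])**2)
err_interior = abs(u[N // 4] - 1.0)      # at x = pi/2, far from the jumps
print(damping, err_interior)
```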
The smoothed version of the pseudospectral approximation of (meth_ps.16) reads (smoo.18);
i.e., in each step we smooth the solution either in the real space (convolution) or in the Fourier space (cutting high modes). We claim that this smoothed version is stable, hence convergent, under very mild assumptions on the smoothing kernel. Specifically, (smoo.18) amounts, in the Fourier space -- compare (meth_ps.3) -- to a system for the discrete Fourier coefficients with smoothed differentiation symbols.
The real part of the matrix in question is given in (smoo.20); the product of the smoothing factor with the differentiation symbol is interpreted as the smoothed differentiation operator. Now, looking at (smoo.20) we note the following.
For example, consider the smoothing factor $\sigma(kh) = \sin(kh)/(kh)$. This yields the smoothed differentiation symbols
$$\sigma(kh)\cdot ik = \frac{i\sin(kh)}{h},$$
which corresponds to the second order center differencing in (app_ps.36); stability is immediate by (smoo.21), since these symbols are uniformly bounded by $1/h$.
Yet, this kind of smoothing reduces the overall spectral accuracy to second order; a fourth order smoothing would be, e.g.,
$$\sigma(kh)\cdot ik = \frac{i\,\bigl(8\sin(kh) - \sin(2kh)\bigr)}{6h},$$
corresponding to fourth order center differencing.
In general, the accuracy is determined by the low modes while stability has to do with the high ones. To retain spectral accuracy we may therefore consider smoothing kernels other than trigonometric polynomials (i.e., other than finite differences); rather, compare (smoo.17), smoothing factors which equal one over an increasing portion of the spectrum and taper off at the top modes.
An increasing portion of the spectrum is then differentiated exactly, which yields spectral accuracy; the highest modes are not amplified, because of the smoothing effect in this part of the spectrum.
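The claimed correspondence between smoothed symbols and finite differencing is easy to verify numerically; the grid and the test function below are illustrative choices.

```python
import numpy as np

# Compare the exact differentiation symbol ik with the smoothed symbol
# i*sin(kh)/h, which reproduces second order centered differencing exactly.
N = 128
h = 2.0 * np.pi / N
x = h * np.arange(N)
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers

f = np.exp(np.sin(x))
fx_exact = np.cos(x) * f

f_hat = np.fft.fft(f)
fx_spec = np.real(np.fft.ifft(1j * k * f_hat))                    # exact symbol
fx_smooth = np.real(np.fft.ifft(1j * np.sin(k * h) / h * f_hat))  # smoothed

err_spec = np.max(np.abs(fx_spec - fx_exact))
err_smooth = np.max(np.abs(fx_smooth - fx_exact))

# the smoothed symbol equals the centered difference of the grid values,
# and it is uniformly bounded by 1/h -- no amplification of the top modes
central = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
print(err_spec, err_smooth, np.max(np.abs(fx_smooth - central)))
```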
We close this section noting that if the
differential model contains some dissipation -- e.g.,
the parabolic equation
$$u_t = u_{xx} + a(x)u_x,$$
then stability follows with no extra smoothing. The parabolic dissipation compensates for the loss of ``one derivative'' due to aliasing in the first order terms. To see this we proceed as follows: multiply the equations for the discrete Fourier coefficients
by their complex conjugates and sum to obtain (smoo.27).
Suppressing excessive indices, we have for the RHS of (smoo.27)
Now, the first sum on the right gives us the usual loss of one derivative, and the second compensates with a gain of the same quantity. Petrovski-type stability (gain of derivatives) follows. We shall only sketch the details here. Starting with the first term on the right of (smoo.28) we have
while for the second term
and this last term dominates the RHS of (smoo.29).