If $f(x)$ is an even function, that is, $f(-x) = f(x)$, then $b_n = 0$, reducing its Fourier series to a Fourier cosine series with
$$a_0 = \frac{1}{L}\int_0^L f(x)\,dx, \qquad a_n = \frac{2}{L}\int_0^L f(x)\cos\left(\frac{n\pi x}{L}\right)dx$$
If $f(x)$ is an odd function, that is, $f(-x) = -f(x)$, then $a_n = 0$, reducing its Fourier series to a Fourier sine series with
$$b_n = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx$$
The Complex Fourier Series
$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{\frac{in\pi x}{L}}$$
where
$$c_n = \frac{1}{2L}\int_{-L}^{L} f(x)\, e^{-\frac{in\pi x}{L}}\,dx$$
For even functions f, the complex Fourier coefficients are real. For odd functions f, the complex Fourier coefficients are purely imaginary.
Euler's formula
The above is derived from Euler's formula:
$$e^{ix} = \cos(x) + i\sin(x)$$
Half-Range Expansions
A function given only on $[0, L]$ may be extended to $[-L, L]$ as either an even or an odd function, yielding a Fourier cosine or a Fourier sine series, respectively; these are its half-range expansions.
Approximation by Trigonometric Polynomials
The $N$th partial sum of the Fourier series is an approximation of the given $f(x)$. Among all trigonometric polynomials of degree $N$, the square error is minimal precisely when the coefficients are the Fourier coefficients of $f$.
$$f(x) \approx a_0 + \sum_{n=1}^{N}\left(a_n\cos\left(\frac{n\pi x}{L}\right) + b_n\sin\left(\frac{n\pi x}{L}\right)\right)$$
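As an illustration, the coefficients and the partial sum can be computed numerically. A minimal Python sketch, assuming $f(x) = x$ on $[-\pi, \pi]$ and $N = 10$ as example choices:

```python
import numpy as np
from scipy.integrate import quad

L, N = np.pi, 10                  # half-period and number of terms (assumed example values)
f = lambda x: x                   # example function; it is odd, so the a_n vanish

# Fourier coefficients on [-L, L] by numerical quadrature
a0 = quad(f, -L, L)[0] / (2 * L)
a = [quad(lambda x: f(x) * np.cos(n * np.pi * x / L), -L, L)[0] / L for n in range(1, N + 1)]
b = [quad(lambda x: f(x) * np.sin(n * np.pi * x / L), -L, L)[0] / L for n in range(1, N + 1)]

def partial_sum(x):
    """Evaluate the N-th partial sum of the Fourier series at x."""
    return a0 + sum(a[n - 1] * np.cos(n * np.pi * x / L)
                    + b[n - 1] * np.sin(n * np.pi * x / L) for n in range(1, N + 1))

print(partial_sum(1.0), f(1.0))   # the partial sum approximates f
```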
Parseval’s Identity
$$2a_0^2 + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right) = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)^2\,dx$$
The square error of the $N$th partial sum equals the gap in Parseval's identity when the sum on the left is truncated at $n = N$:
$$E_N = \int_{-\pi}^{\pi} f(x)^2\,dx - \pi\left(2a_0^2 + \sum_{n=1}^{N}\left(a_n^2 + b_n^2\right)\right)$$
When approximating derivatives by finite differences, we keep only the leading terms of the Taylor expansion; the discarded terms become the error. These errors may be written as $O(h)$ for the forward and backward differences and $O(h^2)$ for the central differences of the first and second derivatives.
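A quick numerical check of these orders (a sketch; $f = \sin$ and the evaluation point are assumed examples):

```python
import numpy as np

f, df, x = np.sin, np.cos, 1.0   # assumed test function, its exact derivative, and a point

for h in [0.1, 0.05, 0.025]:
    forward = (f(x + h) - f(x)) / h             # error O(h)
    central = (f(x + h) - f(x - h)) / (2 * h)   # error O(h^2)
    print(h, abs(forward - df(x)), abs(central - df(x)))
# Halving h roughly halves the forward-difference error
# and quarters the central-difference error.
```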
Interpolation
With the direct polynomial approach to solving an interpolation problem where $n+1$ points $(x_i, y_i)$ are given, we let $p(x) = c_n x^n + c_{n-1}x^{n-1} + \cdots + c_1 x + c_0$ and create a linear system of $n+1$ equations $p(x_i) = y_i$, which we solve for all $c_i$, as in the sketch below.
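A minimal sketch of this approach using NumPy (the data points are assumed examples):

```python
import numpy as np

# Given n+1 points, solve the linear system p(x_i) = y_i for the
# coefficients of p(x) = c_0 + c_1 x + ... + c_n x^n.
xi = np.array([0.0, 1.0, 2.0, 3.0])   # assumed example data
yi = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(xi, increasing=True)    # Vandermonde matrix: row i is [1, x_i, x_i^2, ...]
c = np.linalg.solve(V, yi)            # coefficients c_0, ..., c_n

p = lambda x: np.polyval(c[::-1], x)  # polyval expects highest degree first
print(p(xi))                          # reproduces yi
```

The Vandermonde matrix becomes ill-conditioned as $n$ grows, which is one reason piecewise approaches like splines are preferred for many points.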
With spline interpolation, we use piecewise polynomials. Linear splines result in straight lines between each pair of points. Cubic splines are the most important in practice.
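A short sketch using SciPy's CubicSpline (the data points are assumed examples):

```python
import numpy as np
from scipy.interpolate import CubicSpline

xi = np.array([0.0, 1.0, 2.0, 3.0])   # assumed example data
yi = np.array([1.0, 2.0, 0.0, 5.0])

s = CubicSpline(xi, yi)   # piecewise cubic, twice continuously differentiable at the nodes
print(s(1.5), s(xi))      # evaluate between the nodes; at the nodes it reproduces yi
```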
Numerical integration
A numerical quadrature (or quadrature rule) is a formula, usually of the form on the left below, for approximating integrals such as the one on the right. We call the $w_i$ weights and the $x_i$ nodes.
$$Q[f](a,b) = \sum_{i=0}^{n} w_i f(x_i) \approx I[f](a,b) = \int_a^b f(x)\,dx$$
In practice, we will not use a single (or simple) quadrature for the whole interval $[a,b]$, but rather apply a quadrature on each of $m$ subintervals $[X_j, X_{j+1}]$, $j = 0, 1, \ldots, m-1$, and then add the results together. This leads to composite quadratures yielding the approximation
$$I[f](a,b) \approx \sum_{j=0}^{m-1} Q[f](X_j, X_{j+1})$$
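A minimal sketch of a composite rule; the trapezoidal rule on each subinterval is an assumed choice of simple quadrature:

```python
import numpy as np

def composite_trapezoid(f, a, b, m):
    """Apply the trapezoidal rule on each of m subintervals [X_j, X_{j+1}] and sum."""
    X = np.linspace(a, b, m + 1)   # X_0 = a, ..., X_m = b
    return sum((X[j + 1] - X[j]) / 2 * (f(X[j]) + f(X[j + 1])) for j in range(m))

print(composite_trapezoid(np.sin, 0, np.pi, 100))   # ~2.0
```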
It is common to define a quadrature from nodes on the standard interval $[-1,1]$, and then transfer it to an arbitrary interval $[a,b]$. This transformation is done by $x = \frac{b-a}{2}t + \frac{b+a}{2}$, with the weights scaled by the factor $\frac{b-a}{2}$.
For the standard interval $[-1,1]$, choose the nodes as the zeros of the Legendre polynomial of degree $n$ (here written without its normalization constant):
$$L_n(t) = \frac{d^n}{dt^n}\left[(t^2 - 1)^n\right]$$
The resulting quadrature rules have a degree of precision $d = 2n - 1$, and the corresponding composite rules have a convergence order of $2n$.
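A sketch of the resulting Gauss–Legendre rule, using NumPy's `leggauss` for the nodes and weights on $[-1,1]$ and the affine map above:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature, transferred from [-1, 1] to [a, b]."""
    t, w = leggauss(n)                     # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)  # affine map to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

print(gauss_legendre(np.sin, 0, np.pi, 5))   # ~2.0; exact for polynomials of degree <= 9
```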
Error Estimation for Quadratures
We can create an upper bound for the error by maximizing the $f^{(n)}(\xi)$ term. This bound, however, often vastly overestimates the actual error.
A better error estimate may be found as follows (sketched in code after this list):
1. Write $E_m = C_m (2h)^n$, where $2h$ is the step size used by $Q_m$ and the exponent $n$ is given by the power of $h$ in the error term.
2. Write $E_{2m} = C_{2m} h^n$, where $h$ is the step size used by $Q_{2m}$.
3. Assume $C = C_m = C_{2m}$.
4. Solve $I - Q_m \approx C(2h)^n$ and $I - Q_{2m} \approx Ch^n$ for $I$.
5. Insert this $I$ into $E_{2m} = I - Q_{2m}$, which gives the estimate $E_{2m} \approx \frac{Q_{2m} - Q_m}{2^n - 1}$.
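A sketch of this estimate for the composite trapezoidal rule, whose $O(h^2)$ error term gives $n = 2$ (the integrand is an assumed example):

```python
import numpy as np

def trapezoid(f, a, b, m):
    X = np.linspace(a, b, m + 1)
    h = (b - a) / m
    return h * (f(X[0]) / 2 + f(X[1:-1]).sum() + f(X[-1]) / 2)

f, a, b, m, n = np.sin, 0.0, np.pi, 10, 2    # trapezoidal error is O(h^2), so n = 2
Qm, Q2m = trapezoid(f, a, b, m), trapezoid(f, a, b, 2 * m)

# Steps 4-5 give I ≈ Q2m + (Q2m - Qm) / (2^n - 1), hence:
E2m = (Q2m - Qm) / (2**n - 1)
print(E2m, 2.0 - Q2m)   # estimate vs. actual error (the exact integral is 2)
```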
Adaptive Integration
An adaptive quadrature is a recursive algorithm which at each step accepts the result $Q(a,b) + E(a,b)$ if $|E(a,b)| \le \mathrm{Tol}$, and otherwise splits the interval into two halves and calls the procedure on these subintervals with tolerance $\mathrm{Tol}/2$.
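A minimal recursive sketch, assuming Simpson's rule as the underlying quadrature together with its standard error estimate $(Q_2 - Q)/15$:

```python
import numpy as np

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def adaptive(f, a, b, tol):
    """Accept Q(a,b) + E(a,b) if |E| <= tol, else recurse on the halves with tol/2."""
    c = (a + b) / 2
    Q = simpson(f, a, b)
    Q2 = simpson(f, a, c) + simpson(f, c, b)
    E = (Q2 - Q) / 15              # standard error estimate for Simpson's rule
    if abs(E) <= tol:
        return Q2 + E
    return adaptive(f, a, c, tol / 2) + adaptive(f, c, b, tol / 2)

print(adaptive(np.sin, 0, np.pi, 1e-8))   # ~2.0
```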
Numerical Solution of Nonlinear Equations
Existence and uniqueness of a solution
The intermediate value theorem is used to prove the existence of a solution, and monotonicity ($|f'(x)| > 0$) proves that the solution is unique.
Fixed Point Iterations
Given $g$ such that $r = g(r)$, and a starting value $x_0$,
$$x_{k+1} = g(x_k), \qquad k = 0, 1, 2, \ldots$$
If $g$ is continuous and $a < g(x) < b$ on $[a,b]$, and there exists a positive constant $L$ such that $|g'(x)| \le L < 1$ for all $x \in [a,b]$, then
- $g$ has a unique fixed point $r$ in $(a, b)$
- the fixed-point iteration converges toward $r$ for any $x_0 \in [a,b]$
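A minimal sketch of the iteration; the example $g = \cos$ is an assumption (it is a contraction near its fixed point):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = g(x_k) until successive values are within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

print(fixed_point(math.cos, 1.0))   # ~0.739085, the fixed point of cos
```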
Newton's Method
Given $f(x) = 0$ and a starting value $x_0$, Newton's method is given by
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad k = 0, 1, 2, \ldots$$
Assume that $f$ has a root $r$ and is twice continuously differentiable on $I_\delta = [r - \delta, r + \delta]$ for some $\delta > 0$. If there is a positive constant $M$ such that
$$\left|\frac{f''(x_1)}{f'(x_2)}\right| \le 2M \quad \text{for all } x_1, x_2 \in I_\delta,$$
then Newton's iterations converge quadratically,
$$|e_{k+1}| \le M|e_k|^2$$
for all starting values $x_0$ satisfying $|x_0 - r| \le \min\{1/M, \delta\}$.
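A minimal sketch of the scalar method; the example equation $x^2 - 2 = 0$ is an assumption:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        dx = f(x) / df(x)
        x -= dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("no convergence")

print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))   # ~1.41421356
```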
Newton's Method for Systems of Equations
Given a vector function $\mathbf{f}(\mathbf{x}): \mathbb{R}^m \to \mathbb{R}^m$, its Jacobian $J(\mathbf{x})$, and a starting value $\mathbf{x}_0$. For $k = 0, 1, 2, \ldots$
1. Solve the linear system $J(\mathbf{x}_k)\Delta_k = -\mathbf{f}(\mathbf{x}_k)$
2. Let $\mathbf{x}_{k+1} = \mathbf{x}_k + \Delta_k$
Alternatively, with the inverse Jacobian $J(\mathbf{x})^{-1}$,
$$\mathbf{x}_{k+1} = \mathbf{x}_k - J(\mathbf{x}_k)^{-1}\mathbf{f}(\mathbf{x}_k)$$
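A sketch of the first variant, which solves the linear system rather than forming $J^{-1}$; the example system (circle intersected with a line) is an assumption:

```python
import numpy as np

def newton_system(f, J, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by solving J(x_k) Δk = -f(x_k), then x_{k+1} = x_k + Δk."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        delta = np.linalg.solve(J(x), -f(x))
        x += delta
        if np.linalg.norm(delta) < tol:
            return x
    raise RuntimeError("no convergence")

# Example (assumed): intersection of x^2 + y^2 = 1 with the line y = x
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[1] - x[0]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [-1.0, 1.0]])
print(newton_system(f, J, [1.0, 0.5]))   # ~[0.7071, 0.7071]
```

Solving the linear system is cheaper and numerically more stable than computing the inverse Jacobian explicitly.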
Numerical Solution of Partial Differential Equations
Explicit Scheme
For an explicit scheme, we follow the same procedure as in Numerical Solutions of BVP, but use the notation $U_i^n \approx u(x_i, t_n)$ since we now have two variables.
For values outside of the domain (e.g. $U_{-1}^{n}$ from a boundary condition or $U_i^{-1}$ from an initial condition), we once again use an approximation (e.g. a central difference) to represent the value in terms of values within the domain.
The Explicit Euler method uses central differences in the x-direction, and forward differences in the t-direction.
Wave Equation
The Courant–Friedrichs–Lewy (CFL) stability condition states that our numerical method is stable for the wave equation $u_{tt} = c^2 u_{xx}$ (with time step $k$ and space step $h$) if and only if
$$\frac{c^2 k^2}{h^2} \le 1$$
Our numerical method is once again stable for the heat equation $u_t = c^2 u_{xx}$ if and only if
$$\frac{c^2 k}{h^2} \le \frac{1}{2}$$
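A minimal sketch of the explicit scheme for the heat equation with homogeneous Dirichlet boundary conditions; the initial condition and parameters are assumed examples:

```python
import numpy as np

c, M, k = 1.0, 50, 1e-4           # diffusion coefficient, spatial intervals, time step
h = 1.0 / M
assert c**2 * k / h**2 <= 0.5     # stability condition for the explicit scheme

x = np.linspace(0, 1, M + 1)
U = np.sin(np.pi * x)             # initial condition u(x, 0); boundary values stay 0
for n in range(1000):
    # central difference in x, forward difference in t
    U[1:-1] += c**2 * k / h**2 * (U[2:] - 2 * U[1:-1] + U[:-2])
print(U.max())   # decays toward 0, like the exact solution exp(-c^2 pi^2 t)
```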
Implicit Scheme (Implicit Euler)
By swapping the forward differences in the $t$-direction of the explicit Euler method with backward differences, we get the implicit Euler method. The result is a system of linear equations to solve at each time step. The implicit Euler method for the heat equation is unconditionally stable.
Crank–Nicolson method
The idea of the Crank-Nicolson method is to combine the updates for the explicit and the implicit Euler methods.
// TODO: Crank–Nicolson method
Numerical Solution of Ordinary Differential Equations
Scalar ODE
A scalar ODE is an equation of the form
$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0$$
The initial condition is required for a unique solution, and makes it an initial value problem (IVP).
For example,
$$y' = -2ty$$
has the solution
$$y(t) = Ce^{-t^2}$$
where $C$ depends on the initial condition $y(t_0)$.
System of ODEs
A system of ODEs (in vector form) is given by
$$\mathbf{y}'(t) = \mathbf{f}(t, \mathbf{y}(t)), \qquad \mathbf{y}(t_0) = \mathbf{y}_0$$
Higher order ODEs
An ODE of order $m$,
$$y^{(m)} = f(t, y, y', \ldots, y^{(m-1)}), \qquad y(t_0) = y_0,\; y'(t_0) = y_0',\; \ldots$$
may be written as a system of first-order ODEs by setting $u_1 = y,\, u_2 = y',\, \ldots,\, u_m = y^{(m-1)}$, so that $u_1' = u_2,\; u_2' = u_3,\; \ldots,\; u_m' = f(t, u_1, \ldots, u_m)$.
Runge–Kutta Methods
An $s$-stage Runge–Kutta method for $y' = f(t, y)$ computes stage values $k_i = f(t_n + c_i h,\, y_n + h\sum_{j} a_{ij}k_j)$ and the update $y_{n+1} = y_n + h\sum_{i=1}^{s} b_i k_i$. The method is defined by its coefficients $a_{ij}, b_i, c_i$, which may be visualized in a Butcher tableau. The method is explicit if $a_{ij} = 0$ whenever $j \ge i$; otherwise it is implicit.
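A sketch of one explicit Runge–Kutta step driven directly by a Butcher tableau, using the classical RK4 coefficients as an example (scalar ODE for simplicity):

```python
import numpy as np

def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method given its Butcher tableau (A, b, c)."""
    s = len(b)
    k = np.zeros(s)
    for i in range(s):
        # A is strictly lower triangular for an explicit method,
        # so stage i only uses the already-computed k_0, ..., k_{i-1}
        k[i] = f(t + c[i] * h, y + h * np.dot(A[i, :i], k[:i]))
    return y + h * np.dot(b, k)

# Classical RK4 tableau (order 4)
A = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1.0, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0, 0.5, 0.5, 1.0])

# Integrate the earlier example y' = -2ty, y(0) = 1, whose exact solution is exp(-t^2)
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = explicit_rk_step(lambda t, y: -2 * t * y, t, y, h, A, b, c)
    t += h
print(y, np.exp(-1.0))
```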
Lipschitz Continuity
A function $f$ is Lipschitz continuous if there exists a constant $L$ such that
$$\|f(t, y_1) - f(t, y_2)\| \le L\|y_1 - y_2\| \quad \text{for all } t, y_1, y_2 \text{ in the domain}$$
Consider the IVP $y' = f(t, y)$, $y(t_0) = y_0$. If $f(t, y)$ is Lipschitz continuous in a domain $D$, then the ODE has one and only one solution in $D$.
Order of a method
A method is of order $p > 0$ if there is a constant $C > 0$ such that the global error satisfies
$$\max_n \|y(t_n) - y_n\| \le C h^p$$
A reasonable approximation to the unknown local error $l_{n+1}$ is the local error estimate $\mathrm{le}_{n+1}$,
$$\mathrm{le}_{n+1} = \hat{y}_{n+1} - y_{n+1} \approx l_{n+1}$$
where $y_{n+1}$ is found by a method of order $p$ and $\hat{y}_{n+1}$ is found by a method of order $p+1$.
Stepsize control
After a step $n$, the next step size is given by
$$h_{\text{new}} = P\left(\frac{\mathrm{Tol}}{\|\mathrm{le}_{n+1}\|}\right)^{\frac{1}{p+1}} h_n$$
where the pessimist factor $P < 1$ is some chosen constant, normally between 0.5 and 0.95. We only move forward if $\|\mathrm{le}_{n+1}\| < \mathrm{Tol}$; otherwise the step is rejected and redone with the new, smaller step size.
Runge–Kutta Methods with an Error Estimate
By equipping a Runge–Kutta method of order $p$ with a second set of weights $\hat{b}_i$ giving order $p+1$ (an embedded pair), both the solution and its error estimate can be computed from the same stage values $k_i$. The error estimate is then given by
$$\mathrm{le}_{n+1} = h\sum_{i=1}^{s}(\hat{b}_i - b_i)k_i$$
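A sketch combining the embedded error estimate with the step-size control above, using the Heun/Euler pair ($p = 1$, so the exponent is $1/(p+1) = 1/2$); the choice of pair and all parameters are assumptions:

```python
def heun_euler_step(f, t, y, h):
    """One step of the Heun(2)/Euler(1) embedded pair: low-order solution and error estimate."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low = y + h * k1                 # explicit Euler, order p = 1
    y_high = y + h / 2 * (k1 + k2)     # Heun's method, order p + 1 = 2
    return y_low, y_high - y_low       # le = h * sum (b̂_i - b_i) k_i

def adaptive_ode(f, t0, y0, t_end, tol, h=0.1, P=0.8):
    t, y, p = t0, y0, 1                # p = 1: order of the advancing (Euler) method
    while t < t_end - 1e-14:
        h = min(h, t_end - t)
        y_new, le = heun_euler_step(f, t, y, h)
        if abs(le) < tol:              # accept and move forward
            t, y = t + h, y_new
        # new step size from the control formula, whether the step was accepted or not
        h = P * (tol / max(abs(le), 1e-16)) ** (1 / (p + 1)) * h
    return y

print(adaptive_ode(lambda t, y: -2 * t * y, 0.0, 1.0, 1.0, tol=1e-6))   # ~exp(-1)
```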
Linear stability analysis
Linear stability can be analyzed by studying the very simple linear test equation
$$y' = \lambda y, \qquad y(0) = y_0$$
One step of any Runge–Kutta method applied to the linear test equation can always be written as
$$y_{n+1} = R(z)y_n, \qquad z = \lambda h$$
where R(z) is called the stability function of the method.
The stability region of a method is defined by
$$S = \{z \in \mathbb{C} : |R(z)| \le 1\}$$
To get a stable numerical solution, we have to choose the step size $h$ such that $z = \lambda h \in S$.
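For example, the explicit Euler method has $R(z) = 1 + z$, so $S$ is the disk $|1 + z| \le 1$; for real $\lambda < 0$ this requires $h \le -2/\lambda$.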
A-stable methods
A method is A-stable if the stability region $S$ covers $\mathbb{C}^- = \{z : \operatorname{Re}(z) \le 0\}$. This means that it is stable for all $\lambda \in \mathbb{C}^-$ independent of $h$. Explicit methods can never be A-stable.