1. Introduction
Partial differential equations (PDEs) are used for the mathematical modeling of many physical processes. The solution theory of partial differential equations is well developed for linear equations, but still contains many gaps for nonlinear equations. In practice, solutions are usually computed by numerical methods. See also [1].
In the preliminaries, we introduce the variational formulation and Sobolev spaces, which are very useful tools for studying elliptic equations. In fact, by Green’s formula, we can transform a PDE with boundary conditions into a variational formulation. Moreover, we also introduce the finite element method (FEM) in this section. FEM is a general numerical method applied to a wide range of physical problems. By using more and more parameters (e.g., more and smaller mesh elements), the approximate solution can be improved.
Section 3 studies the variational formulation of the Stokes equation. The first problem studied is the Stokes equation with homogeneous Dirichlet conditions on the whole boundary, and we find that the corresponding variational formulation is equivalent to the variational formulation in symmetric form. Then the Stokes equation with a boundary decomposed into two distinct parts and non-homogeneous Dirichlet conditions is studied. Next, the solution of the variational formulation (with homogeneous or non-homogeneous Dirichlet conditions) is approximated by a penalized problem without divergence constraint, and we prove a convergence rate for this approximation. Finally, the approximation of the variational formulation without divergence constraint by the finite element method is analyzed. By analyzing the condition number of the resulting matrix, it is found that the complexity of a direct algorithm would be very large, so it is necessary to compute the corresponding numerical solution with preconditioning techniques combined with iterative methods.
2. Some preliminaries
2.1. Hilbert space
Definition 2.1. A vector space \( U \) equipped with a scalar product \( {(\cdot ,\cdot )_{U}} \) is a Hilbert space if it is complete with respect to the induced norm:
\( ∀u∈U,{∥u∥_{U}}=\sqrt[]{{(u,u)_{U}}}. \) (2.1)
We then define weak convergence in Hilbert spaces, which will be important in what follows; see also Lecture 19 in [2]:
Definition 2.2. In a Hilbert space \( U \) , a sequence \( {({v_{n}})_{n∈N}} \) converges weakly to \( {v_{∞}}∈U \) if
\( ∀v∈U,\underset{n→+∞}{lim}{({v_{n}},v)_{U}}={({v_{∞}},v)_{U}}. \) (2.2)
We denote weak convergence by \( {v_{n}}⇀{v_{∞}} \) when \( n \) tends to \( +∞ \) .
Based on the definition of weak convergence, which can also be seen in Chapter 8 in [3], we derive the following proposition by Cauchy-Schwarz inequality:
Proposition 2.3. Let \( {({v_{n}})_{n∈N}} \) be a sequence in Hilbert space \( V \) that converges strongly to \( {v_{∞}}∈V \) , then \( {({v_{n}})_{n∈N}} \) converges weakly to \( {v_{∞}} \) .
Proposition 2.3 shows that weak convergence is indeed a weaker notion than strong convergence; this justifies the terminology of Definition 2.2.
Proposition 2.4. Any weakly convergent sequence in Hilbert space \( V \) is bounded.
In the converse direction, we have the following conclusion:
Theorem 2.5. Let \( {({v_{n}})_{n∈N}} \) be a bounded sequence in \( V \) . Then we can extract from \( {({v_{n}})_{n∈N}} \) a sub-sequence which converges weakly in \( V \) .
Remark 2.6. We note that bounded sequences must have weakly convergent sub-sequences in infinite dimensional Hilbert spaces, but we are not sure whether sub-sequences are strongly convergent.
Remark 2.7. We note that in a Hilbert space, a sequence is a Cauchy sequence if and only if it converges strongly, by completeness.
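To make Remark 2.6 concrete, the following Python sketch (purely illustrative; the fixed function \( v \) and the grid are our own choices) approximates \( {L^{2}}(0,1) \) inner products of \( {v_{n}}(x)=sin(nπx) \) with a fixed \( v \) : the pairings tend to 0 by the Riemann-Lebesgue lemma, so \( {v_{n}}⇀0 \) , while \( {‖{v_{n}}‖_{{L^{2}}}}=1/\sqrt[]{2} \) stays away from 0, so there is no strong convergence to 0.

```python
import numpy as np

# A weakly but not strongly convergent sequence in L^2(0,1) (Remark 2.6):
# v_n(x) = sin(n*pi*x) satisfies (v_n, v) -> 0 for every fixed v in L^2
# (Riemann-Lebesgue lemma), so v_n converges weakly to 0, while
# ||v_n||_{L^2} = 1/sqrt(2) for every n, so v_n does not converge to 0
# strongly.  The fixed function v and the grid are illustrative choices.
x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]
v = np.exp(x)                                  # a fixed element of L^2(0,1)

def inner(f, g):
    return float(np.sum(f * g) * dx)           # Riemann-sum approximation

ns = (1, 4, 16, 64)
pairings = [abs(inner(np.sin(n * np.pi * x), v)) for n in ns]
norms = [inner(np.sin(n * np.pi * x), np.sin(n * np.pi * x)) ** 0.5 for n in ns]
print(pairings)   # tends to 0: weak convergence to 0
print(norms)      # stays near 1/sqrt(2): no strong convergence to 0
```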
2.2. Sobolev space
First, we specify the weak derivative and weak divergence in \( {L^{2}}(Ω) \) to introduce Sobolev spaces:
Definition 2.8. Let \( v \) be a function in \( {L^{2}}(Ω) \) . \( v \) is said to be weakly derivable in \( {L^{2}}(Ω) \) if there exist functions \( {w_{i}}∈{L^{2}}(Ω) \) for \( i∈\lbrace 1,…,N\rbrace \) such that, for any function \( ϕ∈C_{c}^{∞}(Ω) \) ,
\( \int _{Ω}^{}v(x)\frac{∂ϕ}{∂{x_{i}}}(x)dx=-\int _{Ω}^{}{w_{i}}(x)ϕ(x)dx. \)
Furthermore, let \( σ:Ω→{R^{N}} \) be a vector-valued function whose components all belong to \( {L^{2}}(Ω) \) . \( σ \) admits a weak divergence in \( {L^{2}}(Ω) \) if there is a function \( w∈{L^{2}}(Ω) \) such that, for any function \( ϕ∈C_{c}^{∞}(Ω) \) ,
\( \int _{Ω}^{}σ(x)\cdot ∇ϕ(x)dx=-\int _{Ω}^{}w(x)ϕ(x)dx \)
\( w \) is called the weak divergence of \( σ \) , denoted div \( σ \) ; see also Part II, Chapter 5 in [4].
Remark 2.9. In fact, if the classical partial derivative \( \frac{∂v}{∂{x_{i}}} \) exists and belongs to \( {L^{2}}(Ω) \) , then \( v \) is weakly derivable with respect to \( {x_{i}} \) . However, if \( v \) is weakly derivable with respect to \( {x_{i}} \) , we cannot conclude that the classical partial derivative \( \frac{∂v}{∂{x_{i}}} \) exists. This is why we call it the weak derivative.
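A one-dimensional illustration of Definition 2.8 and Remark 2.9 (the function, the test function and the grid are illustrative choices): \( v(x)=|x-1/2| \) on \( Ω=(0,1) \) has no classical derivative at \( x=1/2 \) , yet \( w(x)=sign(x-1/2) \) is its weak derivative, and the defining identity can be checked numerically.

```python
import numpy as np

# A 1-D illustration of Definition 2.8 and Remark 2.9 on Omega = (0,1):
# v(x) = |x - 1/2| has no classical derivative at x = 1/2, yet its weak
# derivative is w(x) = sign(x - 1/2).  We check the defining identity
#     int_0^1 v(x) phi'(x) dx = - int_0^1 w(x) phi(x) dx
# for one smooth test function vanishing on the boundary (phi and the
# grid are illustrative choices, not imposed by the theory).
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]

v = np.abs(x - 0.5)
w = np.sign(x - 0.5)
phi = x ** 2 * (1 - x) ** 3                    # vanishes at x = 0 and x = 1
dphi = 2 * x * (1 - x) ** 3 - 3 * x ** 2 * (1 - x) ** 2   # phi'(x), by hand

lhs = float(np.sum(v * dphi) * dx)             # int v phi'
rhs = float(-np.sum(w * phi) * dx)             # -int w phi
print(lhs, rhs)                                # the two sides agree
```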
The Sobolev space \( {H^{m}} \) with \( m∈{N^{*}} \) is defined:
Definition 2.10. \( Ω \) is a domain of \( {R^{N}} \) . We define the Sobolev space \( {H^{1}}(Ω) \) by:
\( {H^{1}}(Ω)=\lbrace v∈{L^{2}}(Ω) such that ∀i∈\lbrace 1,…,N\rbrace ,\frac{∂v}{∂{x_{i}}}∈{L^{2}}(Ω)\rbrace , \)
We can also define \( {H^{m}}(Ω)(m≥2) \) by:
\( {H^{m}}(Ω)=\lbrace v∈{L^{2}}(Ω) such that ∀α with |α|≤m,{∂^{α}}v∈{L^{2}}(Ω)\rbrace \)
with
\( {∂^{α}}v(x)=\frac{{∂^{|α|}}v}{∂x_{1}^{{α_{1}}}…∂x_{N}^{{α_{N}}}}(x), \)
Here \( α=({α_{1}},…,{α_{N}}) \) is multi-index with \( {α_{i}}≥0 \) and \( |α|=\sum _{i=1}^{N}{α_{i}} \) .
In fact, we can define a scalar product in \( {H^{m}}(Ω) \) and we can deduce that \( {H^{m}}(Ω) \) is a Hilbert space:
Proposition 2.11. Sobolev space \( {H^{m}}(Ω) \) is a Hilbert space with the scalar product
\( ⟨u,v⟩=\int _{Ω}^{}\sum _{|α|≤m}^{}{∂^{α}}u(x){∂^{α}}v(x)dx \) (2.3)
\( {‖u‖_{{H^{m}}(Ω)}}=\sqrt[]{⟨u,u⟩}={(\int _{Ω}\sum _{|α|≤m}{|{∂^{α}}u(x)|^{2}}dx)^{\frac{1}{2}}}. \) (2.4)
Proof. We prove the case \( m=1 \) ; for \( m≥2 \) the proof is similar. We recall that \( {L^{2}}(Ω) \) is a Hilbert space, and formula \( (2.3) \) is indeed a scalar product in \( {H^{1}}(Ω) \) , so it remains to prove completeness. Let \( {({u_{n}})_{n≥1}} \) be a Cauchy sequence in \( {H^{1}}(Ω) \) . By definition of the norm of \( {H^{1}}(Ω) \) , \( {({u_{n}})_{n≥1}} \) as well as \( {(\frac{∂{u_{n}}}{∂{x_{i}}})_{n≥1}} \) for \( i∈\lbrace 1,…,N\rbrace \) are Cauchy sequences in \( {L^{2}}(Ω) \) . Since \( {L^{2}}(Ω) \) is complete, there are limits \( u \) and \( {w_{i}} \) such that \( {u_{n}} \) converges to \( u \) and \( {(\frac{∂{u_{n}}}{∂{x_{i}}})_{n≥1}} \) tends to \( {w_{i}} \) in \( {L^{2}}(Ω) \) . Now, for any \( ϕ∈C_{c}^{∞}(Ω) \) we have:
\( \int _{Ω}{u_{n}}(x)\frac{∂ϕ}{∂{x_{i}}}(x)dx=-\int _{Ω}\frac{∂{u_{n}}}{∂{x_{i}}}(x)ϕ(x)dx. \) (2.5)
Taking the limit \( n→+∞ \) in (2.5), we obtain:
\( \int _{Ω}u(x)\frac{∂ϕ}{∂{x_{i}}}(x)dx=-\int _{Ω}{w_{i}}(x)ϕ(x)dx \) ,
which proves that \( u \) is weakly derivable with \( \frac{∂u}{∂{x_{i}}}={w_{i}} \) . Consequently, \( u \) belongs to \( {H^{1}}(Ω) \) and \( {({u_{n}})_{n≥1}} \) converges to \( u \) in \( {H^{1}}(Ω) \) .
A crucial subspace of \( {H^{1}}(Ω) \) is now introduced, denoted \( H_{0}^{1}(Ω) \) :
Definition 2.12. Let \( Ω \) be a regular and bounded domain. \( H_{0}^{1}(Ω) \) is the subspace of \( {H^{1}}(Ω) \) consisting of functions which vanish on the boundary \( ∂Ω \) .
Remark 2.13. In fact, \( H_{0}^{1}(Ω) \) is usually defined as the closure of \( C_{c}^{∞}(Ω) \) in \( {H^{1}}(Ω) \) . But under the regularity assumption on \( Ω \) and the trace theorem (Theorem 2.18), one can show that this definition is equivalent to the statement of Definition 2.12.
Before we proceed to the next statement, let us introduce the definition of the equivalent norm:
Definition 2.14. Two norms \( {‖\cdot ‖_{α}} \) and \( {‖\cdot ‖_{β}} \) defined on \( X \) are called equivalent if there are positive real numbers \( C \) and \( D \) so that for all \( x∈X \)
\( C{‖x‖_{α}}≤{‖x‖_{β}}≤D{‖x‖_{α}}. \)
Then we have the following statement:
Corollary 2.15. The norm of \( H_{0}^{1}(Ω) \) can be simplified as
\( {‖v‖_{H_{0}^{1}(Ω)}}={(\int _{Ω}{|∇v(x)|^{2}}dx)^{1/2}} \) (2.6)
For \( H_{0}^{1}(Ω) \) , the norm (2.6) is equivalent to the norm (2.4) with \( m=1 \) , so we can simplify the norm of \( H_{0}^{1}(Ω) \) . To prove this, we introduce the Poincaré inequality:
Proposition 2.16. Let \( Ω \) be a regular and bounded domain. There exists a constant \( C \gt 0 \) such that, for any function \( v∈H_{0}^{1}(Ω) \) ,
\( \int _{Ω}{|v(x)|^{2}}dx≤C\int _{Ω}{|∇v(x)|^{2}}dx \) (2.7)
Using the Poincaré inequality, we can easily prove Corollary 2.15. Before proving the Poincaré inequality, we introduce the Rellich-Kondrachov theorem:
Theorem 2.17. If \( Ω \) is a regular and bounded domain, then from any bounded sequence in \( {H^{1}}(Ω) \) we can extract a sub-sequence which converges in \( {L^{2}}(Ω) \) .
Then we can prove Poincaré inequality through this theorem:
Proof. We prove by contradiction. Suppose there is no constant \( C \gt 0 \) such that, for any function \( v∈H_{0}^{1}(Ω) \) ,
\( \int _{Ω}^{}{|v(x)|^{2}}dx≤C\int _{Ω}^{}{|∇v(x)|^{2}}dx \)
Then for every \( n≥1 \) we can find \( {v_{n}}∈H_{0}^{1}(Ω) \) such that:
\( 1=\int _{Ω}^{}{|{v_{n}}(x)|^{2}}dx \gt n\int _{Ω}^{}{|∇{v_{n}}(x)|^{2}}dx \) (2.8)
In particular, (2.8) shows that the sequence \( {v_{n}} \) is bounded in \( H_{0}^{1}(Ω) \) . The Rellich theorem then gives a sub-sequence \( {v_{{n^{ \prime }}}} \) that converges in \( {L^{2}}(Ω) \) . Moreover, by (2.8), \( ∇{v_{{n^{ \prime }}}} \) converges to 0 in \( {L^{2}}(Ω) \) , so \( {v_{{n^{ \prime }}}} \) is a Cauchy sequence in \( H_{0}^{1}(Ω) \) and converges in \( H_{0}^{1}(Ω) \) to a limit \( v \) . Since
\( \int _{Ω}^{}{|∇v(x)|^{2}}dx=\underset{n \prime →+∞}{lim}\int _{Ω}^{}{|∇{v_{n \prime }}(x)|^{2}}dx≤\underset{n \prime →+∞}{lim}\frac{1}{n \prime }=0 \)
we can deduce that \( v \) is a constant. But since \( v \) is zero on the boundary \( ∂Ω,v \) is identically zero in all \( Ω \) . Moreover,
\( \int _{Ω}^{}{|v(x)|^{2}}dx=\underset{n \prime →+∞}{lim}\int _{Ω}^{}{|{v_{n \prime }}(x)|^{2}}dx=1, \)
which is a contradiction with \( v=0 \) .
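The following Python sketch illustrates the Poincaré inequality (2.7) in the one-dimensional case \( Ω=(0,1) \) , where the best constant is known to be \( C=1/{π^{2}} \) ; the sample functions and the grid are illustrative choices.

```python
import numpy as np

# Poincare inequality (2.7) in the 1-D case Omega = (0,1): for functions
# vanishing at the boundary, int v^2 <= C int (v')^2, and the best
# constant is C = 1/pi^2, attained by sin(pi*x).  We compute the ratio
# int v^2 / int (v')^2 for a few sample functions (illustrative choices).
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]

def ratio(v):
    dv = np.gradient(v, dx)                          # numerical derivative
    return float(np.sum(v ** 2) / np.sum(dv ** 2))   # dx factors cancel

results = {
    "sin(pi x)": ratio(np.sin(np.pi * x)),     # saturates the constant
    "sin(3 pi x)": ratio(np.sin(3 * np.pi * x)),
    "x(1-x)": ratio(x * (1 - x)),
}
for name, r in results.items():
    print(name, r, "<=", 1.0 / np.pi ** 2)
```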
In fact, it is not clear whether the boundary value, or trace, of \( v \) can be defined on the boundary \( ∂Ω \) , since \( ∂Ω \) is a set of zero Lebesgue measure. Fortunately, there is still a way to define the trace \( {v_{∣∂Ω}} \) for functions \( v \) in \( {H^{1}}(Ω) \) . These basic results are called the trace theorems and are detailed below.
Theorem 2.18. (Trace Theorem \( {H^{1}} \) ) \( Ω \) is a regular and bounded domain. We define the trace application \( {γ_{0}} \)
\( \begin{matrix}{H^{1}}(Ω)∩{C^{1}}(\overline{Ω})→{L^{2}}(∂Ω)∩C(∂Ω) \\ v→{γ_{0}}(v)={v|_{∂Ω}}. \\ \end{matrix} \)
This application \( {γ_{0}} \) is extended by continuity to a continuous linear application from \( {H^{1}}(Ω) \) to \( {L^{2}}(∂Ω) \) . In particular, there is a constant \( C \gt 0 \) such that:
\( {‖v‖_{{L^{2}}(∂Ω)}}≤C{‖v‖_{{H^{1}}(Ω)}}. \) (2.9)
Theorem 2.19. (Trace Theorem \( {H^{2}} \) ) \( Ω \) is a regular and bounded domain. We define trace application \( {γ_{1}} \) :
\( \begin{matrix}{H^{2}}(Ω)∩{C^{1}}(\overline{Ω})→{L^{2}}(∂Ω)∩C(∂Ω) \\ v→{γ_{1}}(v)={\frac{∂v}{∂n}|_{∂Ω}}. \\ \end{matrix} \)
with \( \frac{∂v}{∂n}=∇v\cdot n \) . This application \( {γ_{1}} \) is extended by continuity to a continuous linear application from \( {H^{2}}(Ω) \) to \( {L^{2}}(∂Ω) \) . In particular, there is a constant \( C \gt 0 \) such that:
\( {‖\frac{∂v}{∂n}‖_{{L^{2}}(∂Ω)}}≤C{‖v‖_{{H^{2}}(Ω)}} \) (2.10)
According to the trace theorems and the density of \( {C^{∞}}(\overline{Ω}) \) in \( {H^{1}}(Ω) \) and \( {H^{2}}(Ω) \) , we have the following Green’s formula:
Theorem 2.20. Let \( Ω \) be a regular and bounded domain. For \( u,v∈{H^{1}}(Ω) \) we have:
\( \int _{Ω}^{}u(x)\frac{∂v}{∂{x_{i}}}(x)dx=-\int _{Ω}^{}v(x)\frac{∂u}{∂{x_{i}}}(x)dx+\int _{∂Ω}^{}u(x)v(x){n_{i}}(x)ds. \) (2.11)
Moreover, if \( u∈{H^{2}}(Ω) \) and \( v∈{H^{1}}(Ω) \) , we have:
\( \int _{Ω}^{}Δu(x)v(x)dx=-\int _{Ω}^{}∇u(x)\cdot ∇v(x)dx+\int _{∂Ω}^{}\frac{∂u}{∂n}(x)v(x)ds. \) (2.12)
Remark 2.21. Let us compare with the classical Green’s formula: for \( u,v∈{C^{∞}}(\overline{Ω}) \) , formulas (2.11) and (2.12) are classical. We can construct two sequences \( {({u_{n}})_{n≥1}}⊂{C^{∞}}(\overline{Ω}) \) and \( {({v_{n}})_{n≥1}}⊂{C^{∞}}(\overline{Ω}) \) converging to \( u∈{H^{1}}(Ω) \) (respectively \( u∈{H^{2}}(Ω) \) ) and \( v∈{H^{1}}(Ω) \) . Letting \( n→∞ \) , we deduce (2.11) and (2.12). The trace theorems are necessary to pass to the limit in the boundary integral on the right hand side of (2.11) and (2.12).
2.3. Poisson’s equation
Consider Poisson’s equation with homogeneous Dirichlet boundary condition:
\( \begin{cases}\begin{matrix}-Δu=f & in Ω \\ u=0 & on ∂Ω, \\ \end{matrix}\end{cases} \) (2.13)
The corresponding variational formulation is as follows:
Find \( u∈H_{0}^{1}(Ω) \) such that \( \int _{Ω}^{}∇u(x)\cdot ∇v(x)dx=\int _{Ω}^{}f(x)v(x)dx \) for all \( v∈H_{0}^{1}(Ω) \) .
(2.14)
We have transformed (2.13) into (2.14), and we ask whether there is a unique solution \( u∈H_{0}^{1}(Ω) \) of the variational formulation (2.14).
Remark 2.22. We get the corresponding variational formulation:
Find \( u∈V \) such that \( a(u,v)=L(v) \) for all \( v∈V \) . (2.15)
For (2.14), we have:
\( a(u,v)=\int _{Ω}∇u(x)\cdot ∇v(x)dx \) (2.16)
and
\( L(v)=\int _{Ω}^{}f(x)v(x)dx. \) (2.17)
where \( a(\cdot ,\cdot ) \) is a bilinear form on \( V \) and \( L(\cdot ) \) is a linear form on \( V \) . The solution of the variational formulation is sometimes referred to as the weak solution of the corresponding partial differential equation. See also Chapter 6 in [5].
Before analyzing the solution of (2.14), we need to introduce the Lax-Milgram theorem:
Theorem 2.23. (Lax-Milgram Theorem) Let \( V \) be a real Hilbert space. Assume that: (1) \( a(\cdot ,\cdot ) \) is a continuous bilinear form on \( V \) , i.e., there is \( M \gt 0 \) so that:
\( |a(w,v)|≤M‖w‖‖v‖ for all w,v∈V. \) (2.18)
(2) \( a(\cdot ,\cdot ) \) is coercive, i.e., there is \( α \gt 0 \) so that:
\( a(v,v)≥α{‖v‖^{2}}for all v∈V. \) (2.19)
(3) \( L(\cdot ) \) is a continuous linear form on \( V \) , i.e., there is \( C \gt 0 \) so that:
\( |L(v)|≤C‖v‖ for all v∈V. \) (2.20)
Then there exists a unique solution \( u∈V \) of the variational formulation (2.15).
The proof of this theorem is discussed in detail in Chapter \( 6.2 \) .
In fact, variational formulations often have physical interpretations. In particular, if the bilinear form is symmetric, the solution of the variational formulation (2.15) reaches an energy minimum, which arises naturally in physics or mechanics.
Proposition 2.24. Assume the hypotheses of the Lax-Milgram Theorem 2.23 hold, and further assume that the bilinear form is symmetric, i.e., \( a(w,v)=a(v,w) \) for all \( v,w∈V \) . Let \( J(v) \) be the energy defined for \( v∈V \) by:
\( J(v)=\frac{1}{2}a(v,v)-L(v) \)
Then the unique solution \( u \) of the variational formulation (2.15) satisfies:
\( J(u)=\underset{v∈V}{min}{J(v)}. \)
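A finite-dimensional analogue of Proposition 2.24 can be checked directly in Python (the matrix \( A \) , playing the role of the symmetric coercive bilinear form, and the vector \( b \) are arbitrary illustrative data): the solution of \( Au=b \) minimizes the discrete energy \( J(v)=\frac{1}{2}v\cdot Av-b\cdot v \) .

```python
import numpy as np

# Finite-dimensional analogue of Proposition 2.24: a symmetric positive
# definite matrix A plays the role of the symmetric coercive bilinear
# form a(.,.), the vector b plays the role of L, and the energy is
# J(v) = 0.5 v.Av - b.v.  The minimizer of J is the solution of A u = b.
# A and b are arbitrary illustrative data.
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite
b = rng.standard_normal(n)

def J(v):
    return float(0.5 * v @ A @ v - b @ v)

u = np.linalg.solve(A, b)              # solution of the linear system
# J(u + w) = J(u) + 0.5 w.Aw > J(u) for every w != 0
perturbed = [J(u + 0.1 * rng.standard_normal(n)) for _ in range(200)]
print(J(u), min(perturbed))
```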
To apply the Lax-Milgram theorem to the Poisson problem, we just need to verify (2.18)-(2.20) for \( a(u,v) \) and \( L(v) \) of (2.16), (2.17). From the Cauchy-Schwarz inequality and the Poincaré inequality (2.7), we can deduce that:
(1) For \( u,v∈H_{0}^{1}(Ω) \)
\( |a(u,v)|=|\int _{Ω}^{}∇u(x)\cdot ∇v(x)dx|≤{‖∇u‖_{{L^{2}}(Ω)}}{‖∇v‖_{{L^{2}}(Ω)}}≤{‖u‖_{{H^{1}}(Ω)}}{‖v‖_{{H^{1}}(Ω)}}. \) (2.21)
(2) For \( v∈H_{0}^{1}(Ω) \)
\( a(v,v)=\int _{Ω}^{}{|∇v(x)|^{2}}dx≥\frac{1}{2}‖v‖_{H_{0}^{1}(Ω)}^{2}. \) (2.22)
(3) For \( v∈H_{0}^{1}(Ω) \)
\( |L(v)|=|\int _{Ω}^{}f(x)v(x)dx|≤{‖f‖_{{L^{2}}(Ω)}}{‖v‖_{{L^{2}}(Ω)}}≤C{‖v‖_{{H^{1}}(Ω)}}. \) (2.23)
Then we can easily get the conclusion that there exists a unique solution in \( H_{0}^{1}(Ω) \) of (2.14).
We now check that the solution of (2.14) solves (2.13). The solution \( u \) of (2.14) satisfies: \( Δu=∇\cdot ∇u \) exists in the weak sense and \( Δu∈{L^{2}}(Ω) \) (see Remark 2.26). Then, according to the definition of weak divergence, (2.14) can be transformed into:
\( \int _{Ω}^{}(Δu+f)ϕdx=0 ∀ϕ∈C_{c}^{∞}(Ω) \) (2.24)
and therefore \( -Δu=f \) almost everywhere in \( Ω \) . (2.25)
We have verified that the solution of (2.14) solves (2.13).
Based on the above discussion, we have the following conclusion:
Theorem 2.25. Let \( Ω \) be a regular and bounded domain and \( f∈{L^{2}}(Ω) \) . Then there exists a unique weak solution \( u∈H_{0}^{1}(Ω) \) of:
\( \begin{cases}\begin{matrix}-Δu=f & in Ω \\ u=0 & on ∂Ω. \\ \end{matrix}\end{cases} \)
Remark 2.26. We have the following criterion for weak divergence: if \( σ∈{L^{2}}{(Ω)^{N}} \) satisfies, for some \( C \gt 0 \) and all \( ϕ∈C_{c}^{∞}(Ω) \) ,
\( |\int _{Ω}^{}σ(x)\cdot ∇ϕ(x)dx|≤C{‖ϕ‖_{{L^{2}}(Ω)}}. \)
then \( σ \) admits a weak divergence in \( {L^{2}}(Ω) \) . Therefore, if \( u∈H_{0}^{1}(Ω) \) satisfies (2.14), we can deduce that \( ∇u \) admits a weak divergence in \( {L^{2}}(Ω) \) . See also [6,7].
2.4. Finite element method
We now present the finite element method. Its rationale stems directly from the variational methods studied in detail in the previous subsection. The basic idea is to replace the Hilbert space \( V \) with a finite-dimensional subspace \( {V_{h}} \) , and to pose the variational formulation on this subspace. The approximation problem on \( {V_{h}} \) then reduces to the solution of a linear system.
We consider again the general framework of variational formulations introduced above, namely the variational formulation:
\( Find u∈V such that a(u,v)=L(v) ∀v∈V \) (2.26)
and its inner approximation on a finite-dimensional subspace \( {V_{h}}⊂V \) :
\( Find {u_{h}}∈{V_{h}} such that a({u_{h}},{v_{h}})=L({v_{h}}) ∀{v_{h}}∈{V_{h}} \) (2.27)
Then we will get:
Lemma 2.27. The inner approximation (2.27) has a unique solution. Furthermore, this solution can be obtained by solving a linear system whose matrix is positive definite, and symmetric if \( a(u,v) \) is symmetric.
Proof. Let \( {({ϕ_{j}})_{1≤j≤{N_{h}}}} \) be a basis of \( {V_{h}} \) . We write \( {u_{h}}=\sum _{j=1}^{{N_{h}}}{u_{j}}{ϕ_{j}} \) and let \( {U_{h}}=({u_{1}},…,{u_{{N_{h}}}}) \) be the vector in \( {R^{{N_{h}}}} \) of the coordinates of \( {u_{h}} \) . Then (2.27) is equivalent to
\( {K_{h}}{U_{h}}={b_{h}} \) (2.28)
with, for \( 1≤i,j≤{N_{h}} \) ,
\( {({K_{h}})_{ij}}=a({ϕ_{j}},{ϕ_{i}}), {({b_{h}})_{i}}=L({ϕ_{i}}). \)
The coercivity of the bilinear form \( a(u,v) \) leads to the positive definiteness of the matrix \( {K_{h}} \) , which implies its invertibility. In fact, for any vector \( {U_{h}}∈{R^{{N_{h}}}} \) , we have:
\( {K_{h}}{U_{h}}\cdot {U_{h}}≥μ{‖\sum _{j=1}^{{N_{h}}}{u_{j}}{ϕ_{j}}‖^{2}}≥C{|{U_{h}}|^{2}} with C \gt 0. \)
The symmetry of \( a(u,v) \) also implies the symmetry of \( {K_{h}} \) . In mechanical applications, the matrix \( {K_{h}} \) is called the stiffness matrix. Therefore, the matrix problem (2.28) has a unique solution.
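As an illustration of Lemma 2.27 and of the linear system (2.28), the following Python sketch assembles the stiffness matrix of P1 Lagrange finite elements in one dimension for \( -u''=f \) on \( (0,1) \) with \( u(0)=u(1)=0 \) ; the right-hand side \( f={π^{2}}sin(πx) \) , whose exact solution is \( u=sin(πx) \) , the one-point quadrature for the load vector, and the mesh sizes are all illustrative choices.

```python
import numpy as np

# A minimal 1-D sketch of Lemma 2.27 and of the linear system (2.28):
# P1 Lagrange finite elements for -u'' = f on (0,1), u(0) = u(1) = 0.
# We assemble the stiffness matrix K_h, solve K_h U_h = b_h and compare
# with the exact solution u(x) = sin(pi*x) for f = pi^2 sin(pi*x).
def solve_p1(n_elems):
    h = 1.0 / n_elems
    nodes = np.linspace(0.0, 1.0, n_elems + 1)
    n_int = n_elems - 1                        # interior nodes (Dirichlet)
    # P1 stiffness matrix on a uniform mesh: tridiag(-1, 2, -1) / h
    K = (np.diag(2.0 * np.ones(n_int))
         - np.diag(np.ones(n_int - 1), 1)
         - np.diag(np.ones(n_int - 1), -1)) / h
    f = np.pi ** 2 * np.sin(np.pi * nodes[1:-1])
    b = h * f                                  # b_i = int f phi_i ~ h f(x_i)
    U = np.linalg.solve(K, b)                  # solve K_h U_h = b_h
    return float(np.max(np.abs(U - np.sin(np.pi * nodes[1:-1]))))

errors = [solve_p1(n) for n in (10, 20, 40)]
print(errors)      # the error decreases as the mesh is refined
```

Halving the mesh size divides the nodal error by about four, consistent with the second-order accuracy expected of P1 elements on this smooth solution.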
3. Analysis of the Stokes problem
We place ourselves in dimension \( N=2 \) or 3. Consider \( Ω⊂{R^{N}} \) a bounded, regular open set. We are interested in the Stokes problem in \( Ω \) . The unknowns are the pressure field \( p:Ω→R \) as well as the fluid velocity \( u \) , a vector field on \( Ω \) : \( u={({u_{i}})_{1≤i≤N}} \) where \( {u_{i}}:Ω→R \) .
Before we proceed with a detailed discussion of the Stokes problem, we define some notation for simplicity. If each of the components of \( u \) belongs to \( {H^{1}}(Ω) \) , which we note \( u∈{H^{1}}{(Ω)^{N}} \) , we introduce the divergence of \( u \) :
\( divu=\sum _{i=1}^{N}\frac{∂{u_{i}}}{∂{x_{i}}}∈{L^{2}}(Ω) \)
The matrix field \( ∇u \) , whose coefficient located at row \( i \) , column \( j \) , is equal to \( \frac{∂{u_{i}}}{∂{x_{j}}}∈{L^{2}}(Ω) \) :
\( ∇u={(\frac{∂{u_{i}}}{∂{x_{j}}})_{1≤i,j≤N}}∈{({L^{2}}(Ω))^{N×N}}. \)
If \( A \) and \( B \) are matrices of size \( N×N \) , we define:
\( A:B=\sum _{1≤i,j≤N}^{}{A_{ij}}{B_{ij}}∈R \) and \( {|A|^{2}}=A:A≥0. \)
We can then verify that \( H_{0}^{1}{(Ω)^{N}} \) , equipped with the scalar product \( (u,v)=\int _{Ω}^{}∇u:∇v \) , is a Hilbert space. We denote by
\( {‖u‖_{H_{0}^{1}{(Ω)^{N}}}}=\sqrt[]{\int _{Ω}^{}∇u:∇u}=\sqrt[]{\int _{Ω}^{}{|∇u|^{2}}}. \)
the norm associated with this scalar product.
Let \( f∈{L^{2}}{(Ω)^{N}} \) be the volume force in \( Ω \) . The pair \( (u,p) \) is sought as the solution of the variational formulation:
Find \( (u,p)∈H_{0}^{1}{(Ω)^{N}}×{L^{2}}(Ω) \) such that
\( \int _{Ω}^{}∇u:∇v-\int _{Ω}^{}pdivv=\int _{Ω}^{}f\cdot v∀v∈H_{0}^{1}{(Ω)^{N}} \)
\( \int _{Ω}^{}qdivu=0∀q∈{L^{2}}(Ω) \) (3.1)
We admit here, in accordance with the above, that there is a solution \( (u,p) \) to this variational formulation, unique up to the addition of a constant to the pressure \( p \) . See also Chapter 2.1 in [8].
3.1. Variational formulations and boundary conditions
Proposition 3.1. Let \( w={({w_{i}})_{1≤i≤N}} \) and \( v={({v_{i}})_{1≤i≤N}} \) in \( {H^{1}}{(Ω)^{N}} \) . We have \( ∇w:∇v=\sum _{i=1}^{N}∇{w_{i}}\cdot ∇{v_{i}} \) . Moreover, if we suppose in addition that \( w,v∈H_{0}^{1}{(Ω)^{N}} \) and \( w∈{H^{2}}{(Ω)^{N}} \) , we have
\( \int _{Ω}^{}∇w:∇v=-\int _{Ω}^{}Δw\cdot v \) where \( Δw={(Δ{w_{i}})_{1≤i≤N}}. \)
Proof. Applying Green’s formula (2.12) to each component, we have:
\( -\int _{Ω}^{}Δw\cdot v=\int _{Ω}^{}∇w:∇v-\int _{∂Ω}^{}\frac{∂w}{∂n}\cdot vds. \)
As \( v∈H_{0}^{1}{(Ω)^{N}} \) , \( \int _{∂Ω}^{}\frac{∂w}{∂n}\cdot vds=0 \) . Thus we have the original equation.
Lemma 3.2. Suppose that \( (u,p)∈{H^{2}}{(Ω)^{N}}×{H^{1}}(Ω) \) is a solution of (3.1). Then \( (u,p) \) is a solution of the following boundary problem:
\( -Δu+∇p=f \) almost everywhere in \( Ω \) ,
div \( u=0 \) almost everywhere in \( Ω \) ,
\( u=0 \) in the sense of the traces on \( ∂Ω \) .
Proof. We show that (3.1) is the weak form of this boundary problem. We multiply the first equation by a test function \( v∈H_{0}^{1}{(Ω)^{N}} \) and integrate:
\( -\int _{Ω}^{}Δu\cdot v+\int _{Ω}^{}∇p\cdot v=\int _{Ω}^{}f\cdot v, \)
By Proposition 3.1, we can deduce that
\( -\int _{Ω}^{}Δu\cdot v=\int _{Ω}^{}∇u:∇v. \)
According to Green’s formula (2.11):
\( \int _{Ω}^{}∇p\cdot v=-\int _{Ω}^{}p div v+\int _{∂Ω}^{}p v\cdot n ds. \)
As \( v∈H_{0}^{1}{(Ω)^{N}} \) , \( \int _{∂Ω}^{}p v\cdot n ds=0 \) , so \( \int _{Ω}^{}∇p\cdot v=-\int _{Ω}^{}p div v \) . Thus the first equation of the boundary problem gives:
\( \int _{Ω}^{}∇u:∇v-\int _{Ω}^{}p div v=\int _{Ω}^{}f\cdot v \)
The equation above is exactly the first equation in (3.1). We multiply the second equation of the boundary problem, div \( u=0 \) , by any \( q∈{L^{2}}(Ω) \) and integrate:
\( \int _{Ω}^{}qdivu=0∀q∈{L^{2}}(Ω). \)
The equation above is exactly the second equation in (3.1).
As \( u∈{H^{2}}{(Ω)^{N}} \) and \( u=0 \) in the sense of the traces on \( ∂Ω \) , we have \( u∈H_{0}^{1}{(Ω)^{N}} \) in this lemma.
In conclusion, the conditions in (3.1) and Lemma 3.2 are equivalent. Thus the solution of (3.1) is the solution of the equations in Lemma 3.2 as well. See also Chapter VII in [9].
Proposition 3.3. We denote by \( ∇{w^{T}} \) the transpose matrix of \( ∇w \) . For all \( w,v∈H_{0}^{1}{(Ω)^{N}} \) we have
\( \int _{Ω}^{}∇{w^{T}}:∇v=\int _{Ω}^{}div w div v \)
Proof. By Green’s formula (2.11), applied twice, for all \( 1≤i,j≤N \) we have:
\( \int _{Ω}^{}\frac{∂{w_{i}}}{∂{x_{j}}}\frac{∂{v_{j}}}{∂{x_{i}}}=\int _{Ω}^{}\frac{∂{w_{i}}}{∂{x_{i}}}\frac{∂{v_{j}}}{∂{x_{j}}}, \)
Sum the equations for \( 1≤i≤N \) and \( 1≤j≤N \) , and we have:
\( \sum _{i=1}^{N}\sum _{j=1}^{N}\int _{Ω}^{}\frac{∂{w_{i}}}{∂{x_{j}}}\frac{∂{v_{j}}}{∂{x_{i}}}=\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{Ω}^{}\frac{∂{w_{i}}}{∂{x_{i}}}\frac{∂{v_{j}}}{∂{x_{j}}}. \)
As
\( \begin{matrix}\int _{Ω}^{}∇{w^{T}}:∇v=\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{Ω}^{}\frac{∂{w_{i}}}{∂{x_{j}}}\frac{∂{v_{j}}}{∂{x_{i}}} \\ \int _{Ω}^{}div w div v=\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{Ω}^{}\frac{∂{w_{i}}}{∂{x_{i}}}\frac{∂{v_{j}}}{∂{x_{j}}} \\ \end{matrix} \)
We have the original equation proved.
Theorem 3.4. The variational formulation (3.1) is equivalent to (that is, has the same set of solutions as):
Find \( (u,p)∈H_{0}^{1}{(Ω)^{N}}×{L^{2}}(Ω) \) such that
\( \int _{Ω}^{}(∇u+∇{u^{T}}):∇v-\int _{Ω}^{}pdivv=\int _{Ω}^{}f\cdot v ∀v∈H_{0}^{1}{(Ω)^{N}}, \)
\( \int _{Ω}^{}qdivu=0,∀q \) in \( {L^{2}}(Ω) \)
Proof. The two formulations coincide except for their first equations, so we only need to prove that these two equations are the same under the given conditions. That is, we prove:
\( \begin{matrix}\int _{Ω}^{}(∇u+∇{u^{T}}):∇v-\int _{Ω}^{}pdivv=\int _{Ω}^{}∇u:∇v-\int _{Ω}^{}pdivv \\ ⇔\int _{Ω}^{}(∇u+∇{u^{T}}):∇v=\int _{Ω}^{}∇u:∇v \\ ⇔\int _{Ω}^{}∇{u^{T}}:∇v=0. \\ \end{matrix} \)
From Proposition 3.3 we know that \( \int _{Ω}^{}∇{u^{T}}:∇v=\int _{Ω}^{}div u div v \) . Since \( \int _{Ω}^{}qdivu=0 \) for all \( q \) in \( {L^{2}}(Ω) \) , taking \( q=div u \) gives \( div u=0 \) almost everywhere in \( Ω \) . Thus \( \int _{Ω}^{}div u div v=0=\int _{Ω}^{}∇{u^{T}}:∇v \) .
Suppose \( ∂Ω \) is decomposed into two distinct nonempty parts \( {Γ_{1}} \) and \( {Γ_{2}} \) of nonzero \( N-1 \) dimensional measure such that \( ∂Ω={Γ_{1}}∪{Γ_{2}} \) . We denote by \( {γ_{1}}:{H^{1}}(Ω)→{L^{2}}({Γ_{1}}) \) the trace application on \( {Γ_{1}} \) and we note
\( H_{0,{Γ_{1}}}^{1}=\lbrace v∈{H^{1}}{(Ω)^{N}},{γ_{1}}{v_{i}}=0 for 1≤i≤N\rbrace . \)
We consider the two variational formulations
\( Find (u,p)∈H_{0,{Γ_{1}}}^{1}×{L^{2}}(Ω) such that \)
\( \int _{Ω}^{}∇u:∇v-\int _{Ω}^{}pdiv v=\int _{Ω}^{}f\cdot v ∀v∈H_{0,{Γ_{1}}}^{1}, \)
\( \int _{Ω}^{}qdiv u=0 ∀q in {L^{2}}(Ω). \) (3.2)
Find \( (u,p)∈H_{0,{Γ_{1}}}^{1}×{L^{2}}(Ω) \) such that
\( \int _{Ω}^{}(∇u+∇{u^{T}}):∇v-\int _{Ω}^{}pdivv=\int _{Ω}^{}f\cdot v∀v∈H_{0,{Γ_{1}}}^{1} \)
\( \int _{Ω}^{}qdiv u=0 ∀q in {L^{2}}(Ω) \) (3.3)
Lemma 3.5. Let \( w∈{H^{2}}{(Ω)^{N}} \) and \( v∈{H^{1}}{(Ω)^{N}} \) . We assume div \( w=0 \) , and we denote by \( n \) the unit outward normal to \( ∂Ω \) . We give, in terms of \( ∇w \) and \( ∇{w^{T}} \) , the expression of the matrix fields \( A \) and \( B \) of size \( N×N \) such that
\( \int _{Ω}^{}∇w:∇v=-\int _{Ω}^{}Δw\cdot v+\int _{∂Ω}^{}An\cdot v, \)
and
\( \int _{Ω}^{}∇{w^{T}}:∇v=\int _{∂Ω}^{}Bn\cdot v. \)
Proof. For the first equation, Green’s formula gives:
\( \int _{Ω}^{}∇w:∇v=-\int _{Ω}^{}Δw\cdot v+\int _{∂Ω}^{}\frac{∂w}{∂n}\cdot v, \)
Therefore \( A=∇w \) , so that \( An=\frac{∂w}{∂n} \) .
For the second equation, integrating by parts, we have:
\( \begin{matrix}\int _{Ω}^{}∇{w^{T}}:∇v=\sum _{1≤i,j≤N}^{}\int _{Ω}^{}\frac{∂{w_{j}}}{∂{x_{i}}}\cdot \frac{∂{v_{i}}}{∂{x_{j}}} \\ =-\sum _{1≤i,j≤N}^{}\int _{Ω}^{}\frac{∂}{∂{x_{i}}}(\frac{∂{w_{j}}}{∂{x_{j}}}){v_{i}}+\sum _{1≤i,j≤N}^{}\int _{∂Ω}^{}\frac{∂{w_{j}}}{∂{x_{i}}}{v_{i}}{n_{j}} \\ =-\sum _{1≤i≤N}^{}\int _{Ω}^{}\frac{∂}{∂{x_{i}}}(div w){v_{i}}+\sum _{1≤i≤N}^{}\int _{∂Ω}^{}(\sum _{1≤j≤N}^{}\frac{∂{w_{j}}}{∂{x_{i}}}{n_{j}}){v_{i}} \\ \end{matrix} \)
Since div \( w=0 \) , the first term vanishes. Thus \( B=∇{w^{T}} \) , so that \( Bn=∇{w^{T}}n \) .
Theorem 3.6. Suppose that \( (u,p)∈{H^{2}}{(Ω)^{N}}×{H^{1}}(Ω) \) is a solution of (3.2). We give, with justification, the boundary problem verified by \( (u,p) \) , and then do the same in the case where \( (u,p) \) is a solution of (3.3).
Proof. Using Green’s formula, we have the following deductions for (3.2):
\( \begin{matrix}\int _{Ω}^{}∇u:∇v-\int _{Ω}^{}p div v \\ =-\int _{Ω}^{}Δu\cdot v+\int _{∂Ω}^{}∇u n\cdot v+\int _{Ω}^{}∇p\cdot v-\int _{∂Ω}^{}pn\cdot v \\ =-\int _{Ω}^{}Δu\cdot v+\int _{Ω}^{}∇p\cdot v+\int _{∂Ω}^{}(∇u n-pn)\cdot v \\ =\int _{Ω}^{}f\cdot v \\ \end{matrix} \)
As \( \int _{Ω}^{}q div u=0 \) for all \( q∈{L^{2}}(Ω) \) , and the test function \( v \) is arbitrary in \( Ω \) and on \( {Γ_{2}} \) , we have that:
\( \begin{cases}\begin{matrix}-Δu+∇p=f & in Ω \\ div u=0 & in Ω \\ u=0 & on {Γ_{1}} \\ ∇u n-pn=0 & on {Γ_{2}} \\ \end{matrix}\end{cases} \)
Next we do the same thing for (3.3), using Lemma 3.5 (with div \( u=0 \) ) for the \( ∇{u^{T}} \) term:
\( \begin{matrix}\int _{Ω}^{}(∇u+∇{u^{T}}):∇v-\int _{Ω}^{}p div v \\ =-\int _{Ω}^{}Δu\cdot v+\int _{∂Ω}^{}∇u n\cdot v+\int _{∂Ω}^{}∇{u^{T}}n\cdot v+\int _{Ω}^{}∇p\cdot v-\int _{∂Ω}^{}pn\cdot v \\ =-\int _{Ω}^{}Δu\cdot v+\int _{Ω}^{}∇p\cdot v+\int _{∂Ω}^{}((∇u+∇{u^{T}})n-pn)\cdot v \\ =\int _{Ω}^{}f\cdot v. \\ \end{matrix} \)
Similarly, as \( \int _{Ω}^{}q div u=0 \) for all \( q∈{L^{2}}(Ω) \) , we have:
\( \begin{cases}\begin{matrix}-Δu+∇p=f & in Ω \\ div u=0 & in Ω \\ u=0 & on {Γ_{1}} \\ (∇u+∇{u^{T}})n-pn=0 & on {Γ_{2}} \\ \end{matrix}\end{cases} \)
3.2. Approximation of the continuous problem
We return in the following to the Stokes problem with homogeneous Dirichlet conditions:
Find \( (u,p)∈H_{0}^{1}{(Ω)^{N}}×{L^{2}}(Ω) \) such that
\( \int _{Ω}^{}∇u:∇v-\int _{Ω}^{}pdiv v=\int _{Ω}^{}f\cdot v ∀v∈H_{0}^{1}{(Ω)^{N}} \)
\( \int _{Ω}^{}qdiv u=0 ∀q∈{L^{2}}(Ω) \) (3.4)
One of the difficulties appearing in the solution of this problem is related to the constraint div \( u=0 \) imposed on the velocity. One way to get around this difficulty is to approximate \( u \) by a sequence \( {u^{ε}} \) of solutions of problems without constraint on the divergence. For this purpose, we consider, for \( ε \gt 0 \) , the variational formulation
Find \( {u^{ε}}∈H_{0}^{1}{(Ω)^{N}} \) such that
\( \int _{Ω}^{}∇{u^{ε}}:∇v+\frac{1}{ε}\int _{Ω}^{}div{u^{ε}}div v=\int _{Ω}^{}f\cdot v ∀v∈H_{0}^{1}{(Ω)^{N}}. \) (3.5)
Lemma 3.7. There is a unique solution to the formulation (3.5).
Proof. The bilinear form \( {a_{ε}}(u,v)=\int _{Ω}^{}∇u:∇v+\frac{1}{ε}\int _{Ω}^{}div u div v \) is continuous on \( H_{0}^{1}{(Ω)^{N}} \) and coercive, since \( {a_{ε}}(v,v)≥\int _{Ω}^{}{|∇v|^{2}}=∥v∥_{H_{0}^{1}{(Ω)^{N}}}^{2} \) , and \( L(v)=\int _{Ω}^{}f\cdot v \) is a continuous linear form. The Lax-Milgram theorem then gives existence and uniqueness.
Next, we establish the following error estimates:
Theorem 3.8. Choosing suitable test functions \( v \) in (3.4) and (3.5), we have, for all \( ε \gt 0 \) ,
\( \int _{Ω}^{}{|∇({u^{ε}}-u)|^{2}}+\frac{1}{ε}\int _{Ω}^{}{|div{ u^{ε}}|^{2}}+\int _{Ω}^{}p div{ u^{ε}}=0 \)
We deduce that
\( ∀ε \gt 0,{∥div{u^{ε}}∥_{{L^{2}}(Ω)}}≤ε{‖p‖_{{L^{2}}(Ω)}}, \)
and then
\( ∀ε \gt 0,{∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}}≤\sqrt[]{ε}{‖p‖_{{L^{2}}(Ω)}} \) ,
Proof. Taking \( v={u^{ε}}-u \) in (3.5) and in (3.4) and subtracting, using div \( u=0 \) to replace div \( ({u^{ε}}-u) \) by div \( {u^{ε}} \) , we obtain the first identity. As \( \int _{Ω}^{}{|∇({u^{ε}}-u)|^{2}}≥0 \) and \( \frac{1}{ε}\int _{Ω}^{}{|div {u^{ε}}|^{2}}≥0 \) , the Cauchy-Schwarz inequality gives:
\( \begin{matrix}{‖p‖_{{L^{2}}(Ω)}}{∥div{u^{ε}}∥_{{L^{2}}(Ω)}}≥|\int _{Ω}^{}p\cdot div {u^{ε}}| \\ =\int _{Ω}^{}{|∇({u^{ε}}-u)|^{2}}+\frac{1}{ε}\int _{Ω}^{}{|div {u^{ε}}|^{2}} \\ ≥\frac{1}{ε}\int _{Ω}^{}{|div{u^{ε}}|^{2}} \\ ≥\frac{1}{ε}∥div{u^{ε}}∥_{{L^{2}}(Ω)}^{2} \\ \end{matrix} \)
Thus we have:
\( {‖p‖_{{L^{2}}(Ω)}}≥\frac{1}{ε}{∥div{ u^{ε}}∥_{{L^{2}}(Ω)}}. \)
Thus \( ∀ε \gt 0,{∥div {u^{ε}}∥_{{L^{2}}(Ω)}}≤ε{‖p‖_{{L^{2}}(Ω)}} \) .
As \( \int _{Ω}^{}{|∇({u^{ε}}-u)|^{2}}≥0 \) and \( \frac{1}{ε}\int _{Ω}^{}{|div{ u^{ε}}|^{2}}≥0 \) , and the conclusion from the first equation, we have:
\( \begin{matrix}ε‖p‖_{{L^{2}}(Ω)}^{2}≥{‖p‖_{{L^{2}}(Ω)}}{∥div {u^{ε}}∥_{{L^{2}}(Ω)}} \\ ≥|\int _{Ω}^{}p\cdot div {u^{ε}}| \\ =\int _{Ω}^{}{|∇({u^{ε}}-u)|^{2}}+\frac{1}{ε}\int _{Ω}^{}{|div {u^{ε}}|^{2}} \\ ≥\int _{Ω}^{}{|∇({u^{ε}}-u)|^{2}} \\ ≥∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}^{2} \\ \end{matrix} \) ,
Thus we have:
\( ∀ε \gt 0,∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}^{2}≤ε∥p∥_{{L^{2}}(Ω)}^{2} \)
Taking the square root of both sides of this inequality gives exactly the stated estimate.
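The estimates of Theorem 3.8 have an exact finite-dimensional analogue that can be checked in Python: minimize \( \frac{1}{2}x\cdot Ax-b\cdot x \) subject to \( Bx=0 \) (the role of div \( u=0 \) ), with the Lagrange multiplier \( λ \) playing the role of the pressure \( p \) . The penalized system \( (A+\frac{1}{ε}{B^{T}}B){x_{ε}}=b \) mimics (3.5), and the same algebra as in the proof above gives \( |B{x_{ε}}|≤ε|λ| \) and an error of order \( \sqrt[]{ε} \) . All matrices below are arbitrary illustrative data.

```python
import numpy as np

# Finite-dimensional analogue of Theorem 3.8: minimize 0.5 x.Ax - b.x
# subject to Bx = 0 (the role of div u = 0), with Lagrange multiplier lam
# playing the role of the pressure p.  The penalized problem, mimicking
# (3.5), solves (A + (1/eps) B^T B) x_eps = b; the same algebra as in the
# proof above gives |B x_eps| <= eps*|lam| and an O(sqrt(eps)) error.
rng = np.random.default_rng(1)
n, m = 12, 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # symmetric positive definite
B = rng.standard_normal((m, n))
b = rng.standard_normal(n)

# Exact constrained solution via the KKT (saddle-point) system
KKT = np.block([[A, B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([b, np.zeros(m)]))
x, lam = sol[:n], sol[n:]

def penalized(eps):
    return np.linalg.solve(A + B.T @ B / eps, b)

checks = []
for eps in (1e-1, 1e-3, 1e-5):
    x_eps = penalized(eps)
    checks.append((eps,
                   float(np.linalg.norm(B @ x_eps)),   # "divergence" residual
                   float(np.linalg.norm(x_eps - x))))  # error to exact x
    print(checks[-1])
```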
We now reinterpret the approximation of (3.4) by (3.5) from an energy point of view.
Proposition 3.9. We denote \( H_{0,div}^{1}=\lbrace v∈H_{0}^{1}{(Ω)^{N}} such that div v=0 in Ω\rbrace \) . Assume \( (u,p) \) is a solution of the variational problem (3.4). Then \( u \) solves:
Find \( u∈H_{0,div}^{1} \) such that
\( a(u,v)=l(v) ∀v∈H_{0,div}^{1}, \) (3.6)
where \( a(u,v)=\int _{Ω}^{}∇u:∇v \) and \( l(v)=\int _{Ω}^{}f\cdot v \) .
We deduce, by Proposition 2.24, that \( u \) is the solution of the minimization problem \( \underset{v∈H_{0,div}^{1}}{min}{(\frac{1}{2}a(v,v)-l(v))} \) .
3.3. Finite element discretization
We place ourselves in \( {R^{2}} \) in this part and assume in the rest of the problem that the open set \( Ω⊂{R^{2}} \) is polygonal and connected. Let \( {T_{h}} \) be a triangulation of \( Ω \) . We consider the inner approximation of \( H_{0}^{1}{(Ω)^{N}} \) by Lagrangian finite elements of order \( k \) . For this, let us define the space
\( {V_{0h}}=\lbrace v=({v_{1}},{v_{2}})∈{C^{0}}{(\bar{Ω})^{2}} such that v=0 on ∂Ω, {{v_{1}}|_{{K_{i}}}}∈{P_{k}} and {{v_{2}}|_{{K_{i}}}}∈{P_{k}} for all {K_{i}}∈{T_{h}}\rbrace \)
Let us introduce the following problem, posed in the space \( {V_{0h}} \) , a finite-dimensional subspace of \( H_{0}^{1}{(Ω)^{N}} \) :
\( Find u_{h}^{ε}∈{V_{0h}} such that \)
\( \int _{Ω}^{}∇u_{h}^{ε}:∇{v_{h}}+\frac{1}{ε}\int _{Ω}^{}divu_{h}^{ε}div{v_{h}}=\int _{Ω}^{}f\cdot {v_{h}} ∀{v_{h}}∈{V_{0h}}. \) (3.7)
Proposition 3.10. The problem (3.7) has a unique solution.
Proof. As in the proof of Lemma 3.7, the bilinear form of (3.7) is continuous and coercive on \( {V_{0h}}⊂H_{0}^{1}{(Ω)^{N}} \) and the right-hand side is a continuous linear form, so the Lax-Milgram theorem applies.
In the following, we note \( u_{h}^{ε} \) this solution. If \( (u,p) \) is a solution of (3.4), the objective of this part is to show that \( u_{h}^{ε} \) is a good approximation of \( u \) . Specifically, we wish to estimate \( {∥u_{h}^{ε}-u∥_{H_{0}^{1}{(Ω)^{N}}}} \) as a function of \( ε \) and \( h \) .
To do this, we begin, at fixed \( ε \) , by estimating the error due to the finite element discretization of (3.5) by (3.7). See also in [10].
Lemma 3.11. We have:
\( \int _{Ω}^{}∇(u_{h}^{ε}-{u^{ε}}):∇{v_{h}}+\frac{1}{ε}\int _{Ω}^{}div(u_{h}^{ε}-{u^{ε}})div{v_{h}}=0∀{v_{h}}∈{V_{0h}}, \)
Deduce that \( u_{h}^{ε} \) minimizes \( {G^{ε}} \) on \( {V_{0h}} \) where
\( {G^{ε}}(v)=\int _{Ω}^{}{|∇(v-{u^{ε}})|^{2}}+\frac{1}{ε}\int _{Ω}^{}{|div(v-{u^{ε}})|^{2}}, \)
We then define \( {V_{0h,div}}=\lbrace {v_{h}}∈{V_{0h}} \text{ such that } div{v_{h}}=0\rbrace \) and we show that
\( ∥u_{h}^{ε}-{u^{ε}}∥_{H_{0}^{1}{(Ω)^{N}}}^{2}≤\underset{{v_{h}}∈{V_{0h,div}}}{min}|{v_{h}}-{u^{ε}}|_{H_{0}^{1}{(Ω)^{N}}}^{2}+\frac{1}{ε}\int _{Ω}^{}{|div{u^{ε}}|^{2}}. \)
Proof. Since \( {V_{0h}}⊂H_{0}^{1}{(Ω)^{N}} \) , both \( u_{h}^{ε} \) (by (3.7)) and \( {u^{ε}} \) (by (3.5)) satisfy the same variational identity against every \( {v_{h}}∈{V_{0h}} \) :
\( \int _{Ω}^{}∇u_{h}^{ε}:∇{v_{h}}+\frac{1}{ε}\int _{Ω}^{}divu_{h}^{ε}div{v_{h}}=\int _{Ω}^{}f\cdot {v_{h}}=\int _{Ω}^{}∇{u^{ε}}:∇{v_{h}}+\frac{1}{ε}\int _{Ω}^{}div{u^{ε}}div{v_{h}}. \)
Subtracting the two identities gives the orthogonality relation of the lemma. To see that \( u_{h}^{ε} \) minimizes \( {G^{ε}} \) on \( {V_{0h}} \) , write \( {v_{h}}-{u^{ε}}=(u_{h}^{ε}-{u^{ε}})+({v_{h}}-u_{h}^{ε}) \) and expand:
\( {G^{ε}}({v_{h}})={G^{ε}}(u_{h}^{ε})+2(\int _{Ω}^{}∇(u_{h}^{ε}-{u^{ε}}):∇({v_{h}}-u_{h}^{ε})+\frac{1}{ε}\int _{Ω}^{}div(u_{h}^{ε}-{u^{ε}})div({v_{h}}-u_{h}^{ε}))+\int _{Ω}^{}{|∇({v_{h}}-u_{h}^{ε})|^{2}}+\frac{1}{ε}\int _{Ω}^{}{|div({v_{h}}-u_{h}^{ε})|^{2}}. \)
The middle term vanishes by the orthogonality relation applied to the test function \( {v_{h}}-u_{h}^{ε}∈{V_{0h}} \) , so \( {G^{ε}}({v_{h}})≥{G^{ε}}(u_{h}^{ε}) \) for all \( {v_{h}}∈{V_{0h}} \) .
We then deduce, using that \( div{v_{h}}=0 \) (hence \( div({v_{h}}-{u^{ε}})=-div{u^{ε}} \) ) for \( {v_{h}}∈{V_{0h,div}} \) :
\( \begin{matrix}∥u_{h}^{ε}-{u^{ε}}∥_{H_{0}^{1}{(Ω)^{N}}}^{2}=\int _{Ω}^{}{|∇(u_{h}^{ε}-{u^{ε}})|^{2}}dx \\ ≤{G^{ε}}(u_{h}^{ε}) \\ =\underset{{v_{h}}∈{V_{0h}}}{min}{G^{ε}}({v_{h}}) \\ ≤\underset{{v_{h}}∈{V_{0h,div}}}{min}{G^{ε}}({v_{h}}) \\ =\underset{{v_{h}}∈{V_{0h,div}}}{min}(\int _{Ω}^{}{|∇({v_{h}}-{u^{ε}})|^{2}}+\frac{1}{ε}\int _{Ω}^{}{|div{u^{ε}}|^{2}}) \\ =\underset{{v_{h}}∈{V_{0h,div}}}{min}|{v_{h}}-{u^{ε}}|_{H_{0}^{1}{(Ω)^{N}}}^{2}+\frac{1}{ε}\int _{Ω}^{}{|div{u^{ε}}|^{2}}. \\ \end{matrix} \)
Theorem 3.12. Using the bounds obtained in Lemma 3.11, derive a bound of the total error \( {∥u_{h}^{ε}-u∥_{H_{0}^{1}{(Ω)^{N}}}} \) by the sum of an interpolation-type error of \( u \) on \( {V_{0h,div}} \) and a term of order \( \sqrt[]{ε} \) .
Proof.
\( \begin{matrix}{∥u_{h}^{ε}-u∥_{H_{0}^{1}{(Ω)^{N}}}} \\ ≤{∥u_{h}^{ε}-{u^{ε}}∥_{H_{0}^{1}{(Ω)^{N}}}}+{∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}} \\ ≤\underset{{v_{h}}∈{V_{0h,div}}}{min}{∥{v_{h}}-{u^{ε}}∥_{H_{0}^{1}{(Ω)^{N}}}}+\sqrt[]{\frac{1}{ε}\int _{Ω}^{}{|div{u^{ε}}|^{2}}}+{∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}} \\ ≤\underset{{v_{h}}∈{V_{0h,div}}}{min}{∥{v_{h}}-u∥_{H_{0}^{1}{(Ω)^{N}}}}+{∥u-{u^{ε}}∥_{H_{0}^{1}{(Ω)^{N}}}}+\sqrt[]{\frac{1}{ε}\int _{Ω}^{}{|div{u^{ε}}|^{2}}}+{∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}} \\ =\underset{{v_{h}}∈{V_{0h,div}}}{min}{∥{v_{h}}-u∥_{H_{0}^{1}{(Ω)^{N}}}}+\sqrt[]{\frac{1}{ε}\int _{Ω}^{}{|div{u^{ε}}|^{2}}}+2{∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}} \\ ≤\underset{{v_{h}}∈{V_{0h,div}}}{min}{∥{v_{h}}-u∥_{H_{0}^{1}{(Ω)^{N}}}}+3\sqrt[]{ε}∥p{∥_{{L^{2}}(Ω)}}, \\ \end{matrix} \)
where the last inequality uses the previously obtained estimates \( {∥{u^{ε}}-u∥_{H_{0}^{1}{(Ω)^{N}}}}≤\sqrt[]{ε}∥p{∥_{{L^{2}}(Ω)}} \) and \( \int _{Ω}^{}{|div{u^{ε}}|^{2}}≤{ε^{2}}∥p∥_{{L^{2}}(Ω)}^{2} \) .
3.4. Associated matrix problem
We denote by \( {ϕ_{i}},1≤i≤{N_{h}} \) , the functions of the canonical basis of the finite element space \( {V_{0h}} \) , so that each function \( {v_{h}}∈{V_{0h}} \) decomposes into \( {v_{h}}(x)=\sum _{i=1}^{{N_{h}}}{({v_{h}})_{i}}{ϕ_{i}}(x) \)
Lemma 3.13. Show that the solution of problem (3.7) can be reduced to the solution of the linear system:
Find \( U_{h}^{ε}∈{R^{{N_{h}}}} \) such that
\( ({A_{h}}+\frac{1}{ε}{C_{h}})U_{h}^{ε}={F_{h}}. \) (3.8)
Proof. We recall (3.7) and expand \( u_{h}^{ε}=\sum _{j=1}^{{N_{h}}}{(u_{h}^{ε})_{j}}{ϕ_{j}} \) . By linearity, testing against all of \( {V_{0h}} \) is equivalent to testing against the basis functions \( {v_{h}}={ϕ_{i}} \) , which gives
\( \sum _{j=1}^{{N_{h}}}{(u_{h}^{ε})_{j}}(\int _{Ω}^{}∇{ϕ_{j}}:∇{ϕ_{i}}+\frac{1}{ε}\int _{Ω}^{}div{ϕ_{j}}div{ϕ_{i}})=\int _{Ω}^{}f\cdot {ϕ_{i}} ∀1≤i≤{N_{h}}. \)
We let \( {({A_{h}})_{ij}}=\int _{Ω}^{}∇{ϕ_{j}}:∇{ϕ_{i}} \) , \( {({C_{h}})_{ij}}=\int _{Ω}^{}div{ϕ_{j}}div{ϕ_{i}} \) and \( {({F_{h}})_{i}}=\int _{Ω}^{}f\cdot {ϕ_{i}} \) , and we obtain the linear system of the lemma.
Here \( U_{h}^{ε} \) denotes the vector consisting of the unknowns \( {(u_{h}^{ε})_{i}} \) from the decomposition of \( u_{h}^{ε} \) in the basis \( {ϕ_{i}},1≤i≤{N_{h}} \) . The general terms of the matrices \( {A_{h}} \) and \( {C_{h}} \) (square matrices of size \( {N_{h}}×{N_{h}} \) ), as well as the general terms of the column vector \( {F_{h}} \) (of size \( {N_{h}} \) ), are thus expressed in terms of the \( {ϕ_{i}},1≤i≤{N_{h}} \) , and \( f \) .
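As a concrete illustration of how entries such as \( {({A_{h}})_{ij}} \) are computed in practice, here is a minimal Python sketch of the analogous assembly in one dimension: P1 (piecewise-affine "hat") basis functions on a uniform mesh of \( [0,1] \). The 1D scalar setting and the uniform mesh are simplifying assumptions, standing in for the 2D triangulation \( {T_{h}} \) of the text:

```python
import numpy as np

def assemble_stiffness(N_h):
    """Assemble (A_h)_ij = ∫ φ_j' φ_i' for P1 hat functions on a uniform
    mesh of [0, 1] with N_h interior nodes (boundary nodes are excluded,
    mimicking the homogeneous Dirichlet condition)."""
    h = 1.0 / (N_h + 1)
    A = np.zeros((N_h, N_h))
    # Element k spans interior node indices k-1 and k; its local stiffness
    # matrix is (1/h) * [[1, -1], [-1, 1]].
    for k in range(N_h + 1):
        for a, i in enumerate((k - 1, k)):
            for b, j in enumerate((k - 1, k)):
                if 0 <= i < N_h and 0 <= j < N_h:
                    A[i, j] += (1.0 / h) * (1.0 if a == b else -1.0)
    return A

# Tridiagonal result: 2/h on the diagonal, -1/h off it
A = assemble_stiffness(4)
```

The same element-by-element loop, with triangles instead of intervals and local quadrature for the \( ∇{ϕ_{j}}:∇{ϕ_{i}} \) and \( div{ϕ_{j}}div{ϕ_{i}} \) integrals, produces \( {A_{h}} \) and \( {C_{h}} \) in the 2D setting.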
Lemma 3.14. Show that the matrix \( {A_{h}}+\frac{1}{ε}{C_{h}} \) is symmetric, positive definite and that there is a unique solution \( U_{h}^{ε} \) of (3.8).
Proof. We first recall the definition of "positive definite": a symmetric matrix \( A \) is positive definite if
\( {V^{T}}AV \gt 0 ∀V∈{R^{{N_{h}}}},V≠0, \)
or equivalently, \( {V^{T}}AV≥0 \) for all \( V \) , with equality if and only if \( V=0 \) .
Then, first we prove that \( {A_{h}}+\frac{1}{ε}{C_{h}} \) is symmetric.
This is because
\( {({A_{h}})_{ij}}=\int _{Ω}^{}∇{ϕ_{j}}:∇{ϕ_{i}}={({A_{h}})_{ji}}, \)
and that
\( {({C_{h}})_{ij}}=\int _{Ω}^{}div{ϕ_{j}}div{ϕ_{i}}={({C_{h}})_{ji}}, \)
Next, we prove that \( {A_{h}}+\frac{1}{ε}{C_{h}} \) is positive definite.
Let \( V∈{R^{{N_{h}}}} \) and let \( {v_{h}}=\sum _{i=1}^{{N_{h}}}{v_{i}}{ϕ_{i}}∈{V_{0h}} \) be the associated finite element function. We then deduce that:
\( {V^{T}}{A_{h}}V=\int _{Ω}^{}(\sum _{i=1}^{{N_{h}}}{v_{i}}∇{ϕ_{i}}):(\sum _{j=1}^{{N_{h}}}{v_{j}}∇{ϕ_{j}})=\int _{Ω}^{}{|∇{v_{h}}|^{2}}≥0. \)
Similarly, we deduce that:
\( \frac{1}{ε}{V^{T}}{C_{h}}V=\frac{1}{ε}\int _{Ω}^{}{|div{v_{h}}|^{2}}≥0. \)
Adding up the two identities above, we have:
\( {V^{T}}({A_{h}}+\frac{1}{ε}{C_{h}})V=\int _{Ω}^{}{|∇{v_{h}}|^{2}}+\frac{1}{ε}\int _{Ω}^{}{|div{v_{h}}|^{2}}≥0. \)
If this quantity equals \( 0 \) , then \( \int _{Ω}^{}{|∇{v_{h}}|^{2}}=0 \) , so \( {v_{h}} \) is constant on the connected set \( Ω \) ; since \( {v_{h}}=0 \) on \( ∂Ω \) , it follows that \( {v_{h}}=0 \) and hence \( V=0 \) . Conversely, \( V=0 \) clearly makes the quantity vanish.
Based on the deductions above, \( {A_{h}}+\frac{1}{ε}{C_{h}} \) is positive definite. Therefore \( {A_{h}}+\frac{1}{ε}{C_{h}} \) is invertible, and (3.8) admits the unique solution
\( U_{h}^{ε}={({A_{h}}+\frac{1}{ε}{C_{h}})^{-1}}{F_{h}}. \)
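Since the matrix is symmetric positive definite, the system (3.8) can be solved in practice by a Cholesky factorization. A minimal numerical sketch in Python, with small hypothetical matrices standing in for the assembled \( {A_{h}} \) and \( {C_{h}} \):

```python
import numpy as np

# Hypothetical 2x2 stand-ins for the assembled FEM matrices:
# A is symmetric positive definite, C is symmetric positive but singular.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
C = np.array([[1.0, -1.0], [-1.0, 1.0]])   # C @ [1, 1] = 0, so C is not invertible
F = np.array([1.0, 2.0])
eps = 1e-3

M = A + C / eps                   # symmetric positive definite for every eps > 0
L = np.linalg.cholesky(M)         # M = L @ L.T; raises an error if M is not SPD
y = np.linalg.solve(L, F)         # forward substitution: L y = F
U = np.linalg.solve(L.T, y)       # back substitution:    L.T U = y

assert np.allclose(M @ U, F)      # U is the unique solution of (A + C/eps) U = F
```

The two triangular solves are the standard way to exploit the factorization; for the large sparse matrices produced by a real FEM assembly one would use a sparse Cholesky or an iterative method instead.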
In order to understand the difficulty of solving (3.8), we will estimate the conditioning of the matrix \( {A_{h}}+\frac{1}{ε}{C_{h}} \) . If \( B \) is a symmetric matrix, we denote by \( {λ_{max}}(B) \) (resp. \( {λ_{min}}(B) \) ) the largest (resp. smallest) eigenvalue of \( B \) , and recall that, in the special case where \( B \) is positive definite, the condition number (for the \( {ℓ^{2}} \) norm) of \( B \) is the ratio \( {λ_{max}}(B)/{λ_{min}}(B) \) .
Lemma 3.15. For \( A,C \) symmetric matrices, not necessarily positive, show that
\( \begin{matrix}{λ_{min}}(A)+{λ_{max}}(C)≤{λ_{max}}(A+C)≤{λ_{max}}(A)+{λ_{max}}(C), \\ {λ_{max}}(A)+{λ_{min}}(C)≥{λ_{min}}(A+C)≥{λ_{min}}(A)+{λ_{min}}(C). \\ \end{matrix} \)
Proof. We first recall the variational characterizations \( {λ_{max}}(A)=\underset{U≠0}{max}\frac{{U^{T}}AU}{{U^{T}}U} \) and \( {λ_{min}}(A)=\underset{U≠0}{min}\frac{{U^{T}}AU}{{U^{T}}U}=-{λ_{max}}(-A) \) . For the first chain of inequalities, we first have the following:
\( \begin{matrix}{λ_{max}}(A+C)=\underset{U≠0}{max}\frac{{U^{T}}(A+C)U}{{U^{T}}U} \\ ≤\underset{U≠0}{max}\frac{{U^{T}}AU}{{U^{T}}U}+\underset{U≠0}{max}\frac{{U^{T}}CU}{{U^{T}}U} \\ ={λ_{max}}(A)+{λ_{max}}(C). \\ \end{matrix} \)
Moreover, we can deduce that:
\( \begin{matrix}{λ_{max}}(C)={λ_{max}}(A+C-A) \\ ≤{λ_{max}}(A+C)+{λ_{max}}(-A) \\ ={λ_{max}}(A+C)-{λ_{min}}(A), \\ \end{matrix} \)
that is, \( {λ_{min}}(A)+{λ_{max}}(C)≤{λ_{max}}(A+C) \) . Combining the two deductions above, we obtain the first chain of inequalities in the lemma.
For the second chain of inequalities, we use \( {λ_{min}}(B)=-{λ_{max}}(-B) \) and apply the first chain to the symmetric matrices \( -A \) and \( -C \) :
\( {λ_{min}}(-A)+{λ_{max}}(-C)≤{λ_{max}}(-A-C)≤{λ_{max}}(-A)+{λ_{max}}(-C), \)
that is,
\( -{λ_{max}}(A)-{λ_{min}}(C)≤-{λ_{min}}(A+C)≤-{λ_{min}}(A)-{λ_{min}}(C). \)
Multiplying by \( -1 \) reverses the inequalities and yields the second chain of the lemma.
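The two chains of inequalities can be checked numerically. A small Python sketch (random symmetric test matrices are only an illustration, of course, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def lam_min_max(B):
    """Smallest and largest eigenvalues of a symmetric matrix."""
    w = np.linalg.eigvalsh(B)   # eigenvalues in ascending order
    return w[0], w[-1]

tol = 1e-10
for _ in range(100):
    # Random symmetric matrices, not necessarily positive
    X = rng.standard_normal((5, 5))
    Y = rng.standard_normal((5, 5))
    A = (X + X.T) / 2
    C = (Y + Y.T) / 2
    a_min, a_max = lam_min_max(A)
    c_min, c_max = lam_min_max(C)
    s_min, s_max = lam_min_max(A + C)
    # lambda_min(A) + lambda_max(C) <= lambda_max(A+C) <= lambda_max(A) + lambda_max(C)
    assert a_min + c_max - tol <= s_max <= a_max + c_max + tol
    # lambda_max(A) + lambda_min(C) >= lambda_min(A+C) >= lambda_min(A) + lambda_min(C)
    assert a_max + c_min + tol >= s_min >= a_min + c_min - tol
```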
Lemma 3.16. Consider \( A \) a symmetric positive definite matrix, and \( C≠0 \) a symmetric positive, noninvertible matrix. Deduce from the previous question that the conditioning of \( A+\frac{1}{ε}C \) is of order \( \frac{1}{ε} \) when \( ε \) tends to zero.
Proof. We first define Condition number: Cond \( (A)=\frac{{λ_{max}}(A)}{{λ_{min}}(A)} \)
As \( C \) is symmetric positive and not invertible, and \( A \) is positive definite, we have \( {λ_{min}}(C)=0 \) and \( {λ_{min}}(A) \gt 0 \) ; since \( C≠0 \) , we also have \( {λ_{max}}(C) \gt 0 \) .
It therefore suffices to show that \( Cond(A+\frac{1}{ε}C) \) grows like \( \frac{1}{ε} \) when \( ε \) approaches 0.
Using Lemma 3.15, we have:
\( \begin{matrix}{λ_{min}}(A)+\frac{1}{ε}{λ_{max}}(C)≤{λ_{max}}(A+\frac{1}{ε}C)≤{λ_{max}}(A)+\frac{1}{ε}{λ_{max}}(C), \\ {λ_{max}}(A)+\frac{1}{ε}{λ_{min}}(C)≥{λ_{min}}(A+\frac{1}{ε}C)≥{λ_{min}}(A)+\frac{1}{ε}{λ_{min}}(C). \\ \end{matrix} \)
As \( {λ_{min}}(C)=0 \) and \( {λ_{min}}(A) \gt 0 \) , the second line gives \( {λ_{min}}(A)≤{λ_{min}}(A+\frac{1}{ε}C)≤{λ_{max}}(A) \) , and therefore:
\( {{λ_{max}}(A)^{-1}}∙({λ_{min}}(A)+\frac{1}{ε}{λ_{max}}(C))≤Cond(A+\frac{1}{ε}C)≤{{λ_{min}}(A)^{-1}}∙({λ_{max}}(A)+\frac{1}{ε}{λ_{max}}(C)). \)
As \( {λ_{max}}(A) \) , \( {λ_{max}}(C) \) and \( {λ_{min}}(A) \) are positive constants independent of \( ε \) , we conclude that \( Cond(A+\frac{1}{ε}C) \) is of order \( \frac{1}{ε} \) , and hence tends to positive infinity when \( ε \) tends to 0.
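This growth can be observed numerically. In the Python sketch below (hypothetical 2×2 matrices, chosen so that \( C \) is symmetric positive and singular), the product \( ε\cdot Cond(A+\frac{1}{ε}C) \) stabilizes as \( ε→0 \), consistent with a condition number of order \( \frac{1}{ε} \):

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])     # symmetric positive definite
C = np.array([[1.0, -1.0], [-1.0, 1.0]])   # symmetric positive, singular

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    cond = np.linalg.cond(A + C / eps)     # 2-norm condition number
    # eps * cond should approach a constant as eps shrinks
    print(f"eps = {eps:.0e}   cond = {cond:.3e}   eps * cond = {eps * cond:.4f}")
```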
In our case, the matrix \( {C_{h}} \) is likely to have a nonzero kernel, because there exist in \( {V_{0h}} \) functions with zero divergence (or almost zero divergence: the space \( {V_{0h}} \) approaches \( H_{0}^{1}{(Ω)^{N}} \) as \( h→0 \) ). Now, to approximate the solution \( u \) of the Stokes problem with good precision, one will want to take \( ε \) very small. From what we have just seen, this implies a large condition number for \( {A_{h}}+\frac{1}{ε}{C_{h}} \) .
In that case, the solution of the linear system is sensitive to numerical and rounding errors. Moreover, if we use an iterative algorithm to solve the system, it converges more slowly when the condition number is large. To get around this difficulty, we can for example use preconditioning techniques to avoid an excessively long computation time.
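The slowdown of iterative solvers can be illustrated with a plain conjugate gradient method. The sketch below is a generic experiment, not tied to the Stokes matrices: the SPD test matrices are built with a prescribed eigenvalue spread, and the iteration count is seen to grow with the condition number, which is exactly what a good preconditioner is meant to fight:

```python
import numpy as np

def cg_iterations(B, b, rtol=1e-8, max_iter=5000):
    """Plain conjugate gradient for an SPD matrix B;
    returns the number of iterations needed to reach the relative tolerance."""
    x = np.zeros_like(b)
    r = b.copy()                 # residual b - B @ x with x = 0
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        Bp = B @ p
        alpha = rs / (p @ Bp)
        x += alpha * p
        r -= alpha * Bp
        rs_new = r @ r
        if np.sqrt(rs_new) < rtol * b_norm:
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_iter

rng = np.random.default_rng(1)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
b = rng.standard_normal(n)

for kappa in [1e2, 1e4, 1e6]:
    # SPD matrix with eigenvalues log-spaced between 1 and kappa
    B = Q @ np.diag(np.logspace(0, np.log10(kappa), n)) @ Q.T
    print(f"cond = {kappa:.0e}   CG iterations = {cg_iterations(B, b)}")
```

A preconditioner replaces \( B \) by an equivalent system with a much smaller condition number, bringing the iteration count back toward the well-conditioned case.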
4. Conclusion
In this article, we delve into the analysis of the equivalence relations pertaining to the variational formulations of the Stokes equations. Furthermore, we present a rigorous theory for the numerical computation of the variational formulation of the Stokes equation, employing a penalized approximation in conjunction with the finite element method. Moreover, after giving the error analysis, we deduced that, in the actual computation, preconditioning techniques are needed to avoid excessively long computation times.
References
[1]. Buttazzo, G., Giaquinta, M., & Hildebrandt, S. (1998). One-dimensional variational problems: an introduction (Vol. 15). Oxford University Press.
[2]. Ciarlet, P. G. (2002). The finite element method for elliptic problems. Society for Industrial and Applied Mathematics.
[3]. Lieb, E. H., & Loss, M. (1997). Analysis, Graduate Studies in Mathematics, American Mathematical Society.
[4]. DiBenedetto, E. (2009). Partial differential equations. Springer Science & Business Media.
[5]. Evans, L. C. (2022). Partial differential equations (Vol. 19). American Mathematical Society.
[6]. Klainerman, S. (2008). Introduction to analysis. Lecture Notes, Princeton University.
[7]. Chen, W., & Jost, J. (2002). A Riemannian version of Korn's inequality. Calculus of Variations and Partial Differential Equations, 14(4), 517-530.
[8]. Lang, S. (2012). Real and functional analysis (Vol. 142). Springer Science & Business Media.
[9]. Lax, P. D. (2002). Functional analysis (Vol. 55). John Wiley & Sons.
[10]. Mu, L., & Ye, X. (2017). A simple finite element method for the Stokes equations. Advances in Computational Mathematics, 43, 1305-1324.
Cite this article
Yu,Q.;Huang,Z. (2023). Analysis of the stokes problem in a regular bounded open set. Theoretical and Natural Science,12,1-17.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
About volume
Volume title: Proceedings of the 2023 International Conference on Mathematical Physics and Computational Simulation