Stresses Around a Wellbore

Bernt Aadnøy , Reza Looyeh , in Petroleum Rock Mechanics, 2011

10.7.1 Governing Equations

Mathematically, the strain functions must possess sufficient continuous partial derivatives to satisfy the following compatibility equation for plane problems in polar coordinates [Lekhnitskii, 1968]:

(10.21) $\left(\dfrac{\partial^2}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2}{\partial \theta^2}\right)\left(\dfrac{\partial^2 \Psi}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial \Psi}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 \Psi}{\partial \theta^2}\right) = 0$

where Ψ is known as the Airy stress function.

In the absence of body forces, a function that satisfies the compatibility equation (10.21) is given by the following relation between the Cauchy stresses and the Airy stress function:

(10.22) $\sigma_r = \dfrac{1}{r}\dfrac{\partial \Psi}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 \Psi}{\partial \theta^2}, \qquad \sigma_\theta = \dfrac{\partial^2 \Psi}{\partial r^2}, \qquad \tau_{r\theta} = -\dfrac{\partial}{\partial r}\!\left(\dfrac{1}{r}\dfrac{\partial \Psi}{\partial \theta}\right)$

Expanding Equation 10.22 with the Airy stress function results in the so-called Euler differential equation:

(10.23) $\dfrac{d^4 \Psi}{dr^4} + \dfrac{2}{r}\dfrac{d^3 \Psi}{dr^3} - \dfrac{9}{r^2}\dfrac{d^2 \Psi}{dr^2} + \dfrac{9}{r^3}\dfrac{d\Psi}{dr} = 0$

A general solution to the above equation, in a polar coordinate system, is:

(10.24) $\Psi(r,\theta) = \left(C_1 r^2 + C_2 r^4 + \dfrac{C_3}{r^2} + C_4\right)\cos 2\theta$

Inserting Equation 10.24 into 10.22, the expressions for the stresses become:

(10.25) $\sigma_r = -\left(2C_1 + \dfrac{6C_3}{r^4} + \dfrac{4C_4}{r^2}\right)\cos 2\theta, \qquad \sigma_\theta = \left(2C_1 + 12C_2 r^2 + \dfrac{6C_3}{r^4}\right)\cos 2\theta, \qquad \tau_{r\theta} = \left(2C_1 + 6C_2 r^2 - \dfrac{6C_3}{r^4} - \dfrac{2C_4}{r^2}\right)\sin 2\theta$
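Since the constants in Eq. (10.25) are easy to mistype, a short symbolic check is useful. The sympy sketch below (the helper name polar_lap and the symbol names are ours) verifies that Eq. (10.24) satisfies the compatibility condition (10.21) and that substituting it into Eq. (10.22) reproduces the stresses of Eq. (10.25).

```python
# Minimal symbolic consistency check of Eqs. (10.21), (10.22), (10.24) and (10.25).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
C1, C2, C3, C4 = sp.symbols('C1 C2 C3 C4')

Psi = (C1*r**2 + C2*r**4 + C3/r**2 + C4) * sp.cos(2*th)   # Eq. (10.24)

def polar_lap(f):
    # Laplacian in polar coordinates: f_rr + f_r/r + f_thth/r^2
    return sp.diff(f, r, 2) + sp.diff(f, r)/r + sp.diff(f, th, 2)/r**2

# Compatibility (10.21): the biharmonic of Psi must vanish
assert sp.simplify(polar_lap(polar_lap(Psi))) == 0

# Stresses from the Airy function, Eq. (10.22)
sig_r  = sp.diff(Psi, r)/r + sp.diff(Psi, th, 2)/r**2
sig_th = sp.diff(Psi, r, 2)
tau    = -sp.diff(sp.diff(Psi, th)/r, r)

# Compare with Eq. (10.25)
assert sp.simplify(sig_r  + (2*C1 + 6*C3/r**4 + 4*C4/r**2)*sp.cos(2*th)) == 0
assert sp.simplify(sig_th - (2*C1 + 12*C2*r**2 + 6*C3/r**4)*sp.cos(2*th)) == 0
assert sp.simplify(tau    - (2*C1 + 6*C2*r**2 - 6*C3/r**4 - 2*C4/r**2)*sp.sin(2*th)) == 0
print("Eqs. (10.21), (10.22) and (10.25) are mutually consistent.")
```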


URL:

https://www.sciencedirect.com/science/article/pii/B9780123855466000103

Stresses Around a Wellbore

Bernt S. Aadnøy , Reza Looyeh , in Petroleum Rock Mechanics (Second Edition), 2019

11.7.1 Governing Equations

Mathematically, the strain functions must possess sufficient continuous partial derivatives to satisfy the following compatibility equation for plane problems in polar coordinates (Lekhnitskii, 1968)

$\left[\dfrac{\partial^2}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2}{\partial \theta^2}\right]\left[\dfrac{\partial^2 \Psi}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial \Psi}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 \Psi}{\partial \theta^2}\right] = 0$

or

(11.21) $\nabla^4 \Psi = 0$

where Ψ is known as the Airy stress function.

In the absence of body forces, a function that satisfies the compatibility equation (11.21) is given by the following relation between the Cauchy stresses and the Airy stress function:

(11.22) $\sigma_r = \dfrac{1}{r}\dfrac{\partial \Psi}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2 \Psi}{\partial \theta^2}, \qquad \sigma_\theta = \dfrac{\partial^2 \Psi}{\partial r^2}, \qquad \tau_{r\theta} = -\dfrac{\partial}{\partial r}\!\left[\dfrac{1}{r}\dfrac{\partial \Psi}{\partial \theta}\right]$

Expanding Eq. (11.22) with the Airy stress function results in the so-called Euler differential equation

(11.23) $\dfrac{d^4 \Psi}{dr^4} + \dfrac{2}{r}\dfrac{d^3 \Psi}{dr^3} - \dfrac{9}{r^2}\dfrac{d^2 \Psi}{dr^2} + \dfrac{9}{r^3}\dfrac{d\Psi}{dr} = 0$

A general solution to the above equation, in a polar coordinate system, is

(11.24) $\Psi(r,\theta) = \left\{C_1 r^2 + C_2 r^4 + \dfrac{C_3}{r^2} + C_4\right\}\cos 2\theta$

Inserting Eq. (11.24) into (11.22), the expressions for the stresses become

(11.25) $\sigma_r = -\left\{2C_1 + \dfrac{6C_3}{r^4} + \dfrac{4C_4}{r^2}\right\}\cos 2\theta, \qquad \sigma_\theta = \left\{2C_1 + 12C_2 r^2 + \dfrac{6C_3}{r^4}\right\}\cos 2\theta, \qquad \tau_{r\theta} = \left\{2C_1 + 6C_2 r^2 - \dfrac{6C_3}{r^4} - \dfrac{2C_4}{r^2}\right\}\sin 2\theta$
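A quick way to see where the four terms of Eq. (11.24) come from is the classical Cauchy-Euler substitution Ψ = r^m in Eq. (11.23). The sympy sketch below (symbol names are ours) recovers the exponents -2, 0, 2 and 4, i.e. the terms r⁻², the constant, r² and r⁴.

```python
# Sketch: indicial equation of the Euler ODE (11.23) via the ansatz Psi = r**m.
import sympy as sp

r = sp.symbols('r', positive=True)
m = sp.symbols('m')
Psi = r**m

ode = (sp.diff(Psi, r, 4) + 2/r*sp.diff(Psi, r, 3)
       - 9/r**2*sp.diff(Psi, r, 2) + 9/r**3*sp.diff(Psi, r))

indicial = sp.simplify(ode / r**(m - 4))     # divide out the common power of r
print(sp.factor(sp.expand(indicial)))        # m*(m - 4)*(m - 2)*(m + 2)
print(sp.solve(indicial, m))                 # roots -2, 0, 2, 4 -> r^-2, 1, r^2, r^4
```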


URL:

https://www.sciencedirect.com/science/article/pii/B978012815903300011X

Basic Concepts

In Adaptive Sliding Mode Neural Network Control for Nonlinear Systems, 2019

1.1.6.3 Example of Positive Definite Function

1.

V(x) = x₁² + x₂². For all components of the vector x, V(x) has continuous partial derivatives; V(0) = 0 when x = 0, and V(x) > 0 for any x ≠ 0, so V(x) is a positive definite function.

2.

V(x) = (x₁ + x₂)². For all components of the vector x, V(x) has continuous partial derivatives, and V(0) = 0 when x = 0; however, V(x) > 0 does not hold for every x ≠ 0 (for example, V(x) = 0 whenever x₁ = −x₂), so V(x) is not a positive definite function but a positive semidefinite function.

3.

V(x) = −x₁² − x₂². Its negative, −V(x) = x₁² + x₂², has continuous partial derivatives with respect to all components of the vector x; −V(0) = 0 when x = 0, and −V(x) > 0 for any x ≠ 0, so −V(x) is a positive definite function, and V(x) is therefore a negative definite function.

4.

V(x) = x₁x₂ + x₂². For all components of the vector x, V(x) has continuous partial derivatives and V(0) = 0 when x = 0, but for x ≠ 0 the sign of V(x) cannot be determined (it is positive for some x and negative for others), so V(x) is neither a positive definite nor a negative definite function; it is an indefinite function.

(A quadratic form that is a sum of squares, as in Example 1, conforms to the definition of a positive definite function.)
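As a numerical illustration of Examples 1-4 (a sketch, not part of the original text): for these quadratic forms the sign behaviour can be read off the eigenvalues of the associated symmetric matrices. The matrix representations chosen below are ours.

```python
# Sketch: classify the quadratic forms of Examples 1-4 by their eigenvalues.
import numpy as np

forms = {
    "x1^2 + x2^2     (Example 1)": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "(x1 + x2)^2     (Example 2)": np.array([[1.0, 1.0], [1.0, 1.0]]),
    "-x1^2 - x2^2    (Example 3)": np.array([[-1.0, 0.0], [0.0, -1.0]]),
    "x1*x2 + x2^2    (Example 4)": np.array([[0.0, 0.5], [0.5, 1.0]]),
}

for name, A in forms.items():
    eig = np.linalg.eigvalsh(A)
    if np.all(eig > 0):
        verdict = "positive definite"
    elif np.all(eig >= 0):
        verdict = "positive semidefinite"
    elif np.all(eig < 0):
        verdict = "negative definite"
    else:
        verdict = "indefinite"
    print(f"{name}: eigenvalues {np.round(eig, 3)} -> {verdict}")
```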


URL:

https://www.sciencedirect.com/science/article/pii/B978012815372700001X

Functional Equations in Applied Sciences

In Mathematics in Science and Engineering, 2005

The associativity equation

Given the associativity equation

F [ F ( x , y ) , z ] = F [ x , F ( y , z ) ] ,

by differentiating with respect to x and y, and setting y = b we can write

$F_1[F(x,b),z]\,F_1(x,b) = F_1[x,F(b,z)], \qquad F_1[F(x,b),z]\,F_2(x,b) = F_2[x,F(b,z)]\,F_1(b,z),$

where the subindices refer to the variable with respect to which we differentiate, and assuming F 1(x, y) ≠ 0 and F 2(x, y) ≠ 0 we get

$\dfrac{F_1[x,F(b,z)]}{F_2[x,F(b,z)]} = \dfrac{F_1(x,b)}{F_2(x,b)\,F_1(b,z)}.$

If now F(b, z) = u can be solved for z (i.e., z = ψ(u)), we obtain

$\dfrac{F_1(x,u)}{F_2(x,u)} = p'(x)\,q'(u); \qquad \begin{cases} p(x) = \displaystyle\int \dfrac{F_1(x,b)}{F_2(x,b)}\,dx, \\[4pt] q(u) = \displaystyle\int \dfrac{du}{F_1(b,\psi(u))}, \end{cases}$

where p(x) and q(u) are strictly monotonic. This implies

$\dfrac{\partial\big(F(x,y),\,p(x)+q(y)\big)}{\partial(x,y)} = 0 \;\Longrightarrow\; F(x,y) = g\big[p(x)+q(y)\big],$

and substituting back into the initial equation and setting y = b, we get

g { p [ g [ p ( x ) + q ( b ) ] ] + q ( z ) } = g { p ( x ) + q [ g [ p ( b ) + q ( z ) ] ] } ,

which leads to

$p\big[g[p(x)+q(b)]\big] - p(x) = q\big[g[p(b)+q(z)]\big] - q(z) = C,$

that is

$g[p(x)+q(b)] = p^{-1}[p(x)+C] \;\Longrightarrow\; g(u) = p^{-1}[u + C - q(b)],$

$g[p(b)+q(z)] = q^{-1}[q(z)+C] \;\Longrightarrow\; g(u) = q^{-1}[u + C - p(b)],$

or

$p(x) = g^{-1}(x) + C - q(b); \qquad q(y) = g^{-1}(y) + C - p(b).$

Therefore, the associativity equation becomes

$F(x,y) = g\big[g^{-1}(x) + g^{-1}(y) + A\big]; \qquad A = 2C - q(b) - p(b)$

and calling

$f(z) = g(z - A),$

we finally obtain

$F(x,y) = f\big[f^{-1}(x) + f^{-1}(y)\big].$

Therefore the following theorem holds.

Theorem 7.9

(The associativity equation).

The general local solution of the functional equation

(7.77) $F[F(x,y), z] = F[x, F(y,z)]$

is

(7.78) $F(x,y) = f\big[f^{-1}(x) + f^{-1}(y)\big],$

with continuously differentiable and strictly monotonic f, if the domain of (7.77) is such that F possesses continuous partial derivatives and if F₁(x, y) ≠ 0, F₂(x, y) ≠ 0 and F(b, z) = u can be solved for z.
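A minimal check of Theorem 7.9 (a sketch with a particular choice of ours): taking the strictly monotonic function f = exp, the formula (7.78) gives F(x, y) = xy, and the associativity equation (7.77) is verified symbolically.

```python
# Sketch: verify (7.77) for F(x,y) = f[f^{-1}(x) + f^{-1}(y)] with f = exp.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
f, finv = sp.exp, sp.log

F = lambda a, b: f(finv(a) + finv(b))

lhs = F(F(x, y), z)          # F[F(x,y), z]
rhs = F(x, F(y, z))          # F[x, F(y,z)]
assert sp.simplify(lhs - rhs) == 0
print(sp.simplify(F(x, y)))  # x*y, an associative operation
```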

Theorem 7.10

(Generalized auto-distributivity equation).

If the domain of (7.79) is such that F, G, H, M and N have continuous partial derivatives for z ≠ 0; if H₁(x, y) ≠ 0, H₂(x, y) ≠ 0, F₁(x, c) ≠ 0, M₁(x, c) ≠ 0 and N₁(x, c) ≠ 0; if M(x, a) and N(x, a) are constant; and if M(x, c) = u and N(y, c) = v have unique solutions (c ≠ 0), then the general continuous solution, on a real rectangle, of the functional equation

(7.79) F [ G ( x , y ) , z ] = H [ M ( x , z ) , N ( y , z ) ] ,

where we assume G ≠ M, G ≠ N, H ≠ M and H ≠ N, is

(7.80) $F(x,y) = l\big[f(y)\,g^{-1}(x) + \alpha(y) + \beta(y)\big], \quad G(x,y) = g[h(x)+k(y)], \quad H(x,y) = l[m(x)+n(y)], \quad M(x,y) = m^{-1}\big[f(y)\,h(x) + \alpha(y)\big], \quad N(x,y) = n^{-1}\big[f(y)\,k(x) + \beta(y)\big],$

where g, h, k, l, m and n are arbitrary strictly monotonic and continuously differentiable functions, f(a) = 0 and f, α and β are arbitrary continuously differentiable functions.

The two sides of (7.79) can be written as

$l\big\{f(z)\,[h(x)+k(y)] + \alpha(z) + \beta(z)\big\}.$
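The following sketch instantiates (7.80) with simple concrete choices of g, h, k, l, m, n, f, α and β (all choices are ours, with f(a) = 0 for a = 0) and confirms that both sides of (7.79) reduce to the common form displayed above.

```python
# Sketch: a concrete instance of the solution family (7.80) satisfying (7.79).
import sympy as sp

x, y, z = sp.symbols('x y z')

# Illustrative choices (identity maps plus simple smooth functions):
h = k = g = l = m = n = (lambda t: t)
f     = lambda t: t            # f(a) = 0 with a = 0
alpha = lambda t: t**2
beta  = lambda t: sp.sin(t)

G = lambda a, b: g(h(a) + k(b))
H = lambda a, b: l(m(a) + n(b))
M = lambda a, b: f(b)*h(a) + alpha(b)              # m^{-1} is the identity here
N = lambda a, b: f(b)*k(a) + beta(b)               # n^{-1} is the identity here
F = lambda a, b: l(f(b)*a + alpha(b) + beta(b))    # g^{-1} is the identity here

lhs = F(G(x, y), z)               # F[G(x,y), z]
rhs = H(M(x, z), N(y, z))         # H[M(x,z), N(y,z)]
common = f(z)*(h(x) + k(y)) + alpha(z) + beta(z)

assert sp.expand(lhs - rhs) == 0
assert sp.expand(lhs - common) == 0
print("Both sides equal f(z)[h(x)+k(y)] + alpha(z) + beta(z).")
```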


URL:

https://www.sciencedirect.com/science/article/pii/S0076539205800104

Optimal and Extremal Trajectories

Dilmurat M. Azimov , in Analytical Solutions for Extremal Space Trajectories, 2018

2.1 Optimal Control Problem

Let the state of a system at any instant be determined by n quantities x_i (i = 1, ..., n), called the state variables. The vector function x = (x₁, x₂, …, xₙ), x(t) ∈ ℝⁿ, is called the state vector and is considered to be absolutely continuous on the interval [t₀, t₁], where t₀ and t₁ are the initial and final times of the system's motion. It will be assumed that the x_i (i = 1, …, n) are continuous, but their derivatives, in general, may have discontinuities. The behavior of the system is described by n differential equations of first order [130], [121]:

(2.1) x ˙ i = f i ( x 1 , x 2 , , x n , u 1 , u 2 , , u k , t ) ,

Eqs. (2.1) are called the state equations, valid on [t₀, t₁]. The vector function u, where u = (u₁, …, u_k), u(t) ∈ ℝᵏ, u ∈ U, is called the control vector; the quantities u_r (r = 1, ..., k) are defined on the same interval [t₀, t₁] and assumed to be piecewise continuous functions, and U is an open set of controls. The functions f_i possess continuous partial derivatives of sufficiently high order with respect to all their arguments.

At t 0 , the components of x 0 and t 0 satisfy q 1 constraint equations:

(2.2) $E_l(x_{01}, x_{02}, \dots, x_{0n}, t_0) = 0, \qquad l = 1, \dots, q_1, \quad q_1 \le n + 1.$

At t 1 , the components of x 1 and t 1 satisfy q 2 constraint equations:

(2.3) $F_m(x_{11}, x_{12}, \dots, x_{1n}, t_1) = 0, \qquad m = 1, \dots, q_2, \quad q_2 \le n + 1.$

Here q 1 + q 2 < 2 ( n + 1 ) . It is assumed that x and u satisfy the constraints [3], [130], [121]:

(2.4) $\Phi_s(x_1, x_2, \dots, x_n, u_1, u_2, \dots, u_k, \alpha_1, \alpha_2, \dots, \alpha_d) = 0, \qquad s = 1, \dots, p; \quad p \le n; \quad p \le k; \quad d \le k,$

where α = (α₁, α₂, ..., α_d), α ∈ ℝᵈ, are the auxiliary control variables. The vector functions

$E = (E_1, E_2, \dots, E_{q_1}), \; E \in \mathbb{R}^{q_1}, \qquad F = (F_1, F_2, \dots, F_{q_2}), \; F \in \mathbb{R}^{q_2}, \qquad \Phi = (\Phi_1, \Phi_2, \dots, \Phi_p), \; \Phi \in \mathbb{R}^{p}$

are continuous and possess continuous partial derivatives of sufficiently high order with respect to all their arguments. Note that Eqs. (2.2) and (2.3) can be used to express q₁ components of x₀ and q₂ components of x₁ in terms of the remaining n − q₁ and n − q₂ components of x₀ and x₁, respectively. Let us determine the functional Ψ of the problem in the form:

(2.5) $\Psi = g\big(x(t_1), t_1\big) + \int_{t_0}^{t_1} J(x, u, t)\, dt$

Here the scalar functions J and g are also assumed to be continuous and to possess continuous partial derivatives of sufficiently high order with respect to all their arguments. It is then required to determine u(t) and x(t) such that Eqs. (2.1)–(2.4) are satisfied and the functional Ψ in Eq. (2.5) attains its minimum among all possible values. Such u(t) and x(t) are called the optimal control and the optimal trajectory, respectively [121].

It is known that the constraints imposed on the state and control vectors or part of these constraints can also be given in the form of inequalities, such as

$u_{1r} \le u_r \le u_{2r}, \qquad r = 1, \dots, q \le k.$

In this case, the admissible region of controls may be opened by a classical method, that is, by introducing auxiliary control or slack variables, so that the constraint is transformed into an equality constraint of the form (2.4) [132], [133], [92], [131] (a short sketch of this transformation is given after Eq. (2.6) below). The q₁ and q₂ components of the initial and final state vectors can be expressed in terms of the n − q₁ and n − q₂ remaining components of these vectors. In the problem being considered, the equalities in Eqs. (2.4) cover, in general, both equality and inequality constraints, and the auxiliary variables are denoted by α₁, α₂, ..., α_d [134]. The control variable u is admissible if (1) u(t) is defined and piecewise continuous on [t₀, t₁]; and (2) u = u(t) satisfies Eqs. (2.4) [121]. A control vector u(t) given on [t₀, t₁] determines the system's behavior, which is described by the equations

(2.6) x ˙ i = z i ( x 1 , x 2 , . . . , x n , t ) ,

obtained by substituting u(t) into Eqs. (2.1). It is assumed that Eqs. (2.6) satisfy the conditions of the theorem of existence and uniqueness of solutions [135]. In this case, for the given admissible control u(t) and the given initial conditions in Eqs. (2.2), there exists a unique continuous solution x(t) of Eqs. (2.1).
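As mentioned above, an inequality constraint of the form u₁ᵣ ≤ uᵣ ≤ u₂ᵣ can be absorbed into an equality constraint of the form (2.4) by introducing a slack variable. The sketch below uses one classical choice of such a transformation; the particular function Φ and the symbol names are our own illustration.

```python
# Sketch: turn the bound u1 <= u <= u2 into an equality constraint Phi = 0
# using the slack variable alpha, with Phi = (u - u1)*(u2 - u) - alpha**2.
import sympy as sp

u, alpha = sp.symbols('u alpha', real=True)
u1, u2 = 0.0, 1.0                       # example bounds

Phi = (u - u1)*(u2 - u) - alpha**2      # equality constraint of the form (2.4)

# Inside the admissible interval a real slack variable exists:
print(sp.solve(Phi.subs(u, 0.25), alpha))   # two real roots
# Outside the interval there is no real slack variable:
print(sp.solve(Phi.subs(u, 1.50), alpha))   # purely imaginary roots
```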

If it is possible to uniquely find x 0 = ( x 01 , x 02 , . . . , x 0 n ) and t 0 from Eqs. (2.2) in the case of q 1 = n + 1 , then the initial state and the initial time are said to be fixed. Similarly, if q 2 = n and t 1 is free, one can determine the final state vector's components as functions of t 1 by employing Eqs. (2.3). If it is possible to uniquely find t 1 from Eqs. (2.3), then the final time is said to be fixed.

Below, in subsequent sections, the sufficient conditions for positive definiteness of the second differential, the conditions of finiteness of the solutions to the Riccati equation, and the conditions on conjugate points associated with the statement and analysis of the auxiliary optimization problem are studied. The case of extremals with singular arcs, where H_uu = 0, is not considered; studies of this particular case are presented in Refs. [136], [55]. It is assumed that the extremal may consist of various thrust arcs connected at corner points. The problem being considered is generalized to the case when the extremal of the problem includes corner points. Analysis of the differentials of the extended functional allows us to obtain optimality conditions which must be satisfied for each arc, including the conditions at the corner points. Furthermore, a variational problem is provided whose formulation differs from the well-known formulation of Lawden in flight dynamics. It is shown that the presence of constraints that characterize power-limited systems and chemical systems allows us to analyze the low-thrust arcs together with zero- and high-thrust arcs, thereby extending the group of thrust arcs known in the context of the conventional variational problem. The methodology of analytical determination of extremal trajectories is based on the study of the necessary conditions of optimality and the continuity conditions at corner points, and on the determination of the structure of the trajectory. This methodology serves as a tool for determining the extremal trajectories, which represent reference trajectories applicable to the guidance problem [26], [137].


URL:

https://www.sciencedirect.com/science/article/pii/B9780128140581000023

Markov Processes

Alexander S. Poznyak , in Advanced Mathematical Tools for Automatic Control Engineers: Stochastic Techniques, Volume 2, 2009

10.3.2.1 Kolmogorov's backward equation

Theorem 10.2. Let the density p(s, x, t, y) of transition probability for a Markov process, with the drift vector a(s, x) and the diffusion matrix b(s, x) have the derivatives

$\dfrac{\partial}{\partial s} p(s,x,t,y), \qquad \dfrac{\partial}{\partial x} p(s,x,t,y) \qquad \text{and} \qquad \dfrac{\partial^2}{\partial x^2} p(s,x,t,y)$

which are uniformly continuous in y on any finite interval y′ ⩽ y ⩽ y″. Then for any t ∈ [a, b] and any y ∈ ℝ it satisfies the following partial differential equation

(10.25) $-\dfrac{\partial}{\partial s} p(s,x,t,y) = a(s,x)\dfrac{\partial}{\partial x} p(s,x,t,y) + \dfrac{1}{2}\, b(s,x)\dfrac{\partial^2}{\partial x^2} p(s,x,t,y)$

known as the Kolmogorov backward equation .
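Before the proof, a quick sanity check of (10.25) may help: for constant drift a and diffusion b > 0 the transition density is Gaussian, and the sympy sketch below (symbol names are ours) confirms that it satisfies the backward equation.

```python
# Sketch: the Gaussian transition density of dX = a dt + sqrt(b) dW satisfies (10.25).
import sympy as sp

s, x, t, y = sp.symbols('s x t y', real=True)
a, b = sp.symbols('a b', positive=True)

tau = t - s
p = sp.exp(-(y - x - a*tau)**2 / (2*b*tau)) / sp.sqrt(2*sp.pi*b*tau)

backward = sp.diff(p, s) + a*sp.diff(p, x) + sp.Rational(1, 2)*b*sp.diff(p, x, 2)
assert sp.simplify(backward / p) == 0
print("Gaussian transition density satisfies the Kolmogorov backward equation.")
```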

Proof. Consider any continuous function ϕ c (x) which is equal to zero outside some finite interval, i.e.,

$\phi_{[a,b]}(x) := \begin{cases} \phi(x) & \text{if } a \le x \le b, \\ 0 & \text{if } x \notin [a,b], \end{cases}$

and denote

(10.26) $\varphi(s,x,t) := \int_{y=-\infty}^{\infty} \phi_{[a,b]}(y)\, p(s,x,t,y)\, dy = \mathsf{E}_{s,x}\,\phi_{[a,b]}\big(x_t(\omega)\big)$

From the Chapman-Kolmogorov equation for densities (10.24) it follows that for any t₀ ⩽ s ⩽ u ⩽ t ⩽ T

(10.27) $\varphi(s,x,t) := \int_{y=-\infty}^{\infty} \phi_{[a,b]}(y)\, p(s,x,t,y)\, dy = \int_{y=-\infty}^{\infty} \phi_{[a,b]}(y) \int_{z\in\mathbb{R}^n} p(s,x,u,z)\, p(u,z,t,y)\, dz\, dy = \int_{z\in\mathbb{R}^n} p(s,x,u,z) \int_{y=-\infty}^{\infty} \phi_{[a,b]}(y)\, p(u,z,t,y)\, dy\, dz = \int_{z\in\mathbb{R}^n} p(s,x,u,z)\, \varphi(u,z,t)\, dz$

Obviously, φ(s, x, t) has continuous partial derivatives $\dfrac{\partial}{\partial s}\varphi(s,x,t)$, $\dfrac{\partial}{\partial x}\varphi(s,x,t)$ and $\dfrac{\partial^2}{\partial x^2}\varphi(s,x,t)$, and therefore it can be approximated by the first two terms of the Taylor expansion in the neighborhood of the point x (under fixed u and t):

$\varphi(u,z,t) - \varphi(u,x,t) = \dfrac{\partial \varphi(u,x,t)}{\partial x}(z-x) + \left[\dfrac{1}{2}\dfrac{\partial^2 \varphi(u,x,t)}{\partial x^2} + O\big(\delta_\varepsilon(u,x,t)\big)\right](z-x)^2$

where

$\delta_\varepsilon(u,x,t) = \sup_{|z-x|\le\varepsilon} \left| \dfrac{\partial^2 \varphi(u,z,t)}{\partial x^2} - \dfrac{\partial^2 \varphi(u,x,t)}{\partial x^2} \right| \xrightarrow[\varepsilon \to 0]{} 0$

The presentations (10.19)–(10.21) imply

$\varphi(s,x,t) - \varphi(u,x,t) = \int_{z\in\mathbb{R}^n} \big[\varphi(u,z,t) - \varphi(u,x,t)\big]\, p(s,x,u,z)\, dz = \int_{|z-x|\le\varepsilon} \big[\varphi(u,z,t) - \varphi(u,x,t)\big]\, p(s,x,u,z)\, dz + o(u-s) = \dfrac{\partial\varphi(u,x,t)}{\partial x} \int_{|z-x|\le\varepsilon} (z-x)\, p(s,x,u,z)\, dz + \left[\dfrac{1}{2}\dfrac{\partial^2\varphi(u,x,t)}{\partial x^2} + o\big(\delta_\varepsilon(u,x,t)\big)\right] \int_{|z-x|\le\varepsilon} (z-x)^2\, p(s,x,u,z)\, dz = \left\{\dfrac{\partial\varphi(u,x,t)}{\partial x}\, a(s,x) + \left[\dfrac{1}{2}\dfrac{\partial^2\varphi(u,x,t)}{\partial x^2} + o\big(\delta_\varepsilon(u,x,t)\big)\right] b(s,x)\right\}(u-s) + o(u-s)$

which leads to the following identity

$0 = \lim_{u \downarrow s} \dfrac{\varphi(s,x,t) - \varphi(u,x,t)}{u-s} = \dfrac{\partial \varphi(s,x,t)}{\partial s} + \dfrac{\partial \varphi(s,x,t)}{\partial x}\, a(s,x) + \dfrac{1}{2}\dfrac{\partial^2 \varphi(s,x,t)}{\partial x^2}\, b(s,x)$

Taking into account the definition (10.26), the last equation can be rewritten as

$\int_{y=-\infty}^{\infty} \phi_{[a,b]}(y) \left[ \dfrac{\partial p(s,x,t,y)}{\partial s} + \dfrac{\partial p(s,x,t,y)}{\partial x}\, a(s,x) + \dfrac{1}{2}\dfrac{\partial^2 p(s,x,t,y)}{\partial x^2}\, b(s,x) \right] dy = 0$

Remembering that ϕ_[a,b](y) is an arbitrary continuous function, equal to zero outside of [a, b], and extending this interval, we obtain (10.25). The theorem is proven. □

Remark 10.2. Notice that the density p(s, x, t, y) (10.23) of transition probability coincides with the so-called fundamental solution of the elliptic partial differential equation (10.25), which is characterized by the condition (10.27), namely, when for any continuous bounded function we have

$\varphi(s,x,t) = \int_{z\in\mathbb{R}^n} \varphi(u,z,t)\, p(s,x,u,z)\, dz$

Resulting from the multi-dimensional Taylor series expansion, the following generalization of the Kolmogorov equation (10.25) for continuous-time stochastic Markov processes $x_t(\omega)$, $t \in [t_0, T]$, taking values in ℝⁿ, seems to be evident:

(10.28) $-\dfrac{\partial}{\partial s} p(s,x,t,y) = a^{T}(s,x)\dfrac{\partial}{\partial x} p(s,x,t,y) + \dfrac{1}{2}\operatorname{tr}\!\left[ b(s,x)\dfrac{\partial^2}{\partial x^2} p(s,x,t,y) \right]$


URL:

https://www.sciencedirect.com/science/article/pii/B9780080446738000146

Two-Dimensional Irrotational Mixed Subsonic and Supersonic Flow of a Compressible Fluid and the Upper Critical Mach Number

Hsue-shen Tsien Yung-huai Kuo , in Collected Works of H.S. Tsien (1938–1956), 2012

2 Transformation of the Differential Equations

The assumption of irrotationality implies the existence of a velocity potential for such a flow. If this function is introduced to eliminate u and υ, equations (4) and (8) would give rise to a nonlinear partial differential equation of the second order. The problem is further complicated by the possible appearance of supersonic regions, or regions where the speed of flow is larger than the local sonic speed. This means that for some part of the domain the equation is of the elliptic type, while in the other part it is of the hyperbolic type. Thus the equation not only is nonlinear but also is of mixed type, and there is as yet no successful method to deal with it directly in the physical plane. Molenbroek [5] and Chaplygin [6] made some progress in solving the problem by transforming the equations from the physical to the hodograph plane, in which u and υ are taken as the independent variables. If this is done, the differential equations become linear and thus can be solved by well-known methods.

Let the transformation be defined by

(9) u = u ( x , y )

(10) υ = υ ( x , y )

If u and υ are continuous functions of x and y with continuous partial derivatives, and if the Jacobian $\partial(x,y)/\partial(u,\upsilon)$ is finite and nonvanishing, a unique inverse transformation exists. Under these conditions, equations (8) and (4) are easily transformed into

(11) $\left(1 - \dfrac{u^2}{c^2}\right)\dfrac{\partial y}{\partial \upsilon} + \dfrac{2u\upsilon}{c^2}\dfrac{\partial x}{\partial \upsilon} + \left(1 - \dfrac{\upsilon^2}{c^2}\right)\dfrac{\partial x}{\partial u} = 0$

(12) $\dfrac{\partial x}{\partial \upsilon} - \dfrac{\partial y}{\partial u} = 0$

Corresponding to φ(x, y) in the physical plane, there is introduced here a function χ(u, υ) defined by

(13) $\chi = xu + y\upsilon - \varphi; \qquad x = \dfrac{\partial \chi}{\partial u}, \quad y = \dfrac{\partial \chi}{\partial \upsilon}$

While equation (12) is satisfied identically, equation (11) becomes

(14) $\left(1 - \dfrac{u^2}{c^2}\right)\dfrac{\partial^2 \chi}{\partial \upsilon^2} + \dfrac{2\upsilon u}{c^2}\dfrac{\partial^2 \chi}{\partial \upsilon\, \partial u} + \left(1 - \dfrac{\upsilon^2}{c^2}\right)\dfrac{\partial^2 \chi}{\partial u^2} = 0$

As c is a function of q alone, the equation for χ (u, υ) is then linear. From equation (13) it is recognized that if χ (u, υ) is known, a one-to-one correspondence between the space coordinates and the velocity components can be easily established.

However, it is also clear that this function is inconvenient for obtaining the streamlines and the flow in the physical plane. To solve this part of the problem, a plan may be adopted similar to Chaplygin's by introducing both the potential function φ(x, y) and the stream function ψ (x, y) defined by:

(15) $u = \dfrac{\partial \varphi}{\partial x}, \qquad \upsilon = \dfrac{\partial \varphi}{\partial y}$

(16) $\rho u = \rho_0 \dfrac{\partial \psi}{\partial y}, \qquad \rho \upsilon = -\rho_0 \dfrac{\partial \psi}{\partial x}$

From these definitions are obtained immediately the following equivalent relations:

(17) d φ = u d x + υ d y

(18) $\rho_0\, d\psi = -\rho \upsilon\, dx + \rho u\, dy$

For the subsequent calculations, it was found convenient to introduce the polar coordinates in the hodograph plane defined by:

(19) u = q cos θ , υ = q sin θ

where θ is the inclination of the velocity vector to the x-axis. The differentials dx and dy can be solved for from equations (17) and (18). As dx and dy are exact differentials, the conditions of integrability then give:

(20) $\dfrac{\partial \varphi}{\partial q} = -\dfrac{\rho_0}{\rho}\left(1 - \dfrac{q^2}{c^2}\right)\dfrac{1}{q}\dfrac{\partial \psi}{\partial \theta}$

(21) $\dfrac{1}{q}\dfrac{\partial \varphi}{\partial \theta} = \dfrac{\rho_0}{\rho}\dfrac{\partial \psi}{\partial q}$

By eliminating φ between equations (20) and (21), an equation for ψ is obtained:

(22) $q^2\dfrac{\partial^2 \psi}{\partial q^2} + \left(1 + \dfrac{q^2}{c^2}\right) q \dfrac{\partial \psi}{\partial q} + \left(1 - \dfrac{q^2}{c^2}\right)\dfrac{\partial^2 \psi}{\partial \theta^2} = 0$

Equation (14) can also be transformed into polar coordinates. The procedure is straightforward and yields

(23) $q^2\dfrac{\partial^2 \chi}{\partial q^2} + \left(1 - \dfrac{q^2}{c^2}\right) q \dfrac{\partial \chi}{\partial q} + \left(1 - \dfrac{q^2}{c^2}\right)\dfrac{\partial^2 \chi}{\partial \theta^2} = 0$

There is an additional relation between χ and φ derived from equation (13):

(24) $\varphi = q\dfrac{\partial \chi}{\partial q} - \chi$

Since φ is connected with ψ, this relation ensures that ψ and χ are properly connected and represent the same flow pattern in the physical plane. It can thus be considered as the equation of compatibility. Equations (22), (23), and (24) are the three fundamental equations of the present problem dealing with the two-dimensional flow of a compressible fluid.
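Equation (22) also makes the mixed character of the problem explicit: the principal part has coefficients q², 0 and (1 − q²/c²), so the equation is elliptic where q < c and hyperbolic where q > c. A minimal numerical sketch of this classification follows; the numbers used are illustrative and not from the text.

```python
# Sketch: classify Chaplygin's equation (22) at a few speeds via the discriminant
# B**2 - 4*A*C of its principal part, with A = q**2, B = 0, C = 1 - q**2/c**2.
c = 340.0                                  # an illustrative sonic speed, m/s
for q in (0.5*c, 0.9*c, 1.1*c, 2.0*c):
    A, B, C = q**2, 0.0, 1.0 - (q/c)**2
    disc = B**2 - 4.0*A*C
    kind = "elliptic (subsonic)" if disc < 0 else "hyperbolic (supersonic)"
    print(f"q/c = {q/c:.1f}: discriminant = {disc:.3e} -> {kind}")
```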


URL:

https://www.sciencedirect.com/science/article/pii/B9780123982773500191

ROBUSTNESS OF MULTIVARIABLE NON-LINEAR ADAPTIVE FEEDBACK STABILIZATION

A. Hmamed , L. Radouane , in Adaptive Systems in Control and Signal Processing 1983, 1984

EVENTUAL STABILITY

Definition (Lasalle and Rath, 1963) Consider the differential equation

(14) $\dot x(t) = F(x,t), \qquad t \ge t_0$

The origin of (14) is said to be eventually stable if, given ε > 0, there exist numbers δ and T such that ‖x₀‖ < δ implies ‖x(t, t₀, x₀)‖ < ε for all t ≥ t₀ ≥ T. Here x(t, t₀, x₀) denotes the solution of (14) that starts at time t₀ at x₀. Consider the system

(15) $\dot x(t) = F(t,x,y), \qquad \dot y(t) = G(t,x,y)$

where x and y are m₁- and m₂-vectors, respectively. It is assumed that F(t,x,y) is bounded for bounded x and y and all t ≥ t₀. Let v(x,y) be a Lyapunov function satisfying the following conditions:

a)

v(x,y) is positive definite and has continuous partial derivatives

b)

v(x,y)→∞ as ||x||2 +||y||2 →∞

c)

$\dot v(x,y) \le -w(x) + h_1(t)\, q(x,y) + h_2(t)\, v(x,y)$

where

1)

w(x) is continuous and positive definite

2)

q(x,y) is continuous

3)

$\int_0^{\infty} |h_i(t)|\, dt < \infty, \qquad i = 1, 2.$

Theorem 1

If for the system (15) there exists a function v(x,y) satisfying conditions a)-c), then the state x = 0, y = 0 is eventually stable, and corresponding to each r > 0 there exists a T such that ‖x(t₀)‖² + ‖y(t₀)‖² < r² for some t₀ ⩾ T implies that y(t) is bounded and x(t) tends to zero as t tends to infinity. If, in addition, d) for some k > 0 and some 0 < α < 1

$|q(x,y)| \le k\, v^{\alpha}(x,y),$

then all solutions y(t) are bounded and all x(t) → 0 as t → ∞. In the following section we shall apply this theorem to the present scheme.

Theorem 2

The system (13) is eventually stable (all x(t) → 0 and Δk remains bounded as t → ∞) if the matrix F is a (−M)-matrix and if the nonlinear vector function h_i(y(t)) satisfies the condition

(16) $|h_i(y(t))| \le \dfrac{\lambda_{\min}(D_i)}{2\,|P_i|}\, |y_i(t)|, \qquad i = 1, 2, \dots, m$

where $D_i = Q_i + P_i b_i b_i^T P_i$ and $F = (f_{ij})$ with

$f_{ij} = \begin{cases} -\lambda_{\min}\!\left(\tfrac{1}{2}\big(D_i - \lambda_{\min}(D_i)\, I_i\big)\right) & i = j, \\ |P_i|\,|A_{ij}| & i \ne j. \end{cases}$

Proof

Let v(y, Δk) be a Lyapunov function candidate for system (13):

$v(y, \Delta k) = \sum_{i=1}^{m} d_i v_i$

where di are positive scalars for all i

(17) $\Delta k^T = (\Delta k_1^T, \Delta k_2^T, \dots, \Delta k_m^T), \qquad v_i(y_i, \Delta k_i) = \tfrac{1}{2}\, y_i^T(t) P_i y_i(t) + \tfrac{1}{2}\, (a_i - \Delta k_i)^T P_i^{-1} (a_i - \Delta k_i)$

The time derivative of vi along motion (13) is

$\dot v_i = \tfrac{1}{2}\big(\dot y_i^T P_i y_i + y_i^T P_i \dot y_i\big) - (a_i - \Delta k_i)^T P_i^{-1} \Delta\dot k_i = \tfrac{1}{2}\, y_i^T\big[(A_{i0} - b_i b_i^T P_i)^T P_i + P_i (A_{i0} - b_i b_i^T P_i)\big] y_i + (a_i - \Delta k_i)^T\big[y_i (P_i b_i)^T y_i - P_i^{-1} \Delta\dot k_i\big] + y_i^T P_i \sum_{j \ne i} A_{ij} y_j + y_i^T P_i h_i(y(t))$

By tacking into account equtions (11) and (13), we obtain

$\dot v_i = -\tfrac{1}{2}\, y_i^T\big(Q_i + P_i b_i b_i^T P_i\big) y_i + (a_i - \Delta k_i)^T \phi_i(t) + y_i^T P_i \sum_{j \ne i} A_{ij} y_j + y_i^T P_i h_i(y(t))$

By using equation (16), we have

$\dot v_i \le -\tfrac{1}{2}\, y_i^T D_i y_i + (a_i - \Delta k_i)^T \phi_i(t) + y_i^T P_i \sum_{j \ne i} A_{ij} y_j + \tfrac{1}{2}\lambda_{\min}(D_i)\, |y_i|^2$

So we can write

$\dot v_i \le -\lambda_{\min}\!\left(\tfrac{1}{2}\big(D_i - \lambda_{\min}(D_i)\, I_i\big)\right) |y_i|^2 + |a_i - \Delta k_i|\, |\phi_i(t)| + \sum_{j \ne i} |y_i|\, |P_i|\, |A_{ij}|\, |y_j|$

where $|A| = \lambda_{\max}^{1/2}(A^T A)$ and |x| is the Euclidean norm of the vector x.

By using the inequality

$\lambda_{\min}(P_i^{-1})\, |a_i - \Delta k_i|^2 \le (a_i - \Delta k_i)^T P_i^{-1} (a_i - \Delta k_i) \le 2 v_i$

we have

$|a_i - \Delta k_i| \le \sqrt{2 v_i / \lambda_{\min}(P_i^{-1})},$

$\dot v_i \le -\lambda_{\min}\!\left(\tfrac{1}{2}\big(D_i - \lambda_{\min}(D_i)\, I_i\big)\right) |y_i|^2 + |\phi_i(t)|\,\sqrt{2 v_i / \lambda_{\min}(P_i^{-1})} + \sum_{j \ne i} |y_i|\, |P_i|\, |A_{ij}|\, |y_j|$

For the overall system, the time derivative of the associated Lyapunov function is

$\dot v = \sum_{i=1}^{m} d_i \dot v_i \le Z^T D F Z + \sum_{i=1}^{m} d_i |\phi_i|\, \sqrt{2 v_i / \lambda_{\min}(P_i^{-1})} \le \tfrac{1}{2}\, Z^T (D F + F^T D) Z + \sqrt{2 v}\, \sum_{i=1}^{m} d_i |\phi_i| \big/ \sqrt{\lambda_{\min}(\bar P_i)}$

where

$Z^T = (|y_1|, |y_2|, \dots, |y_m|), \qquad D = \operatorname{diag}(d_1, d_2, \dots, d_m)$

It is clear that v satisfies the conditions of Theorem 1, and Theorem 2 is proved.

Given that the system parameters are assumed to be unknown, we cannot guarantee that the matrix F is a (−M)-matrix. In this case it is possible to adjust the linear gain k of each subsystem (Hmamed and Radouane, 1983) or to modify the controller dynamics by introducing another nonlinear term.
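The (−M)-matrix condition on F can be tested numerically once estimates of λ_min(Dᵢ), |Pᵢ| and |A_ij| are available. A minimal sketch is given below, using the characterization that F is a (−M)-matrix when −F has nonpositive off-diagonal entries and all of its eigenvalues have positive real parts; the numerical matrix is only an illustration of ours.

```python
# Sketch: test whether F is a (-M)-matrix, i.e. -F is a nonsingular M-matrix.
import numpy as np

def is_minus_M_matrix(F, tol=1e-12):
    F = np.asarray(F, dtype=float)
    minusF = -F
    off_diag = minusF - np.diag(np.diag(minusF))
    if np.any(off_diag > tol):                 # off-diagonals of -F must be <= 0
        return False
    return bool(np.all(np.linalg.eigvals(minusF).real > tol))

F_example = np.array([[-2.0, 0.5, 0.3],
                      [ 0.4, -1.5, 0.2],
                      [ 0.1, 0.3, -1.0]])
print(is_minus_M_matrix(F_example))            # True for this diagonally dominant case
```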


URL:

https://www.sciencedirect.com/science/article/pii/B9780080305653500599

Geometric Function Theory

Samuel L. Krushkal , in Handbook of Complex Analysis, 2005

6.6 Applications of the Dirichlet principle and of Fredholm eigenvalues. Kühnau's method. Applications

6.6.1

Let us now consider a somewhat different variational approach concerning general quasiconformal maps of finitely connected domains and reveal the extremal properties of the maps onto domains obtained from the sphere by parallel linear cuts. This was established in [Ku5] by extending the strip method of Grötzsch and contour integration, as well as in [Ku14] by minimization of a modified Dirichlet integral. Here we touch on the latter method.

Let G ⊂ ℂ̄ be a finitely connected domain containing the point at infinity, with boundary C = ∂G admitting the application of Green's integral formula. Consider a function p₀(z) ⩾ 1 having piecewise Hölder continuous partial derivatives in G (hence ‖p₀‖∞ < ∞) and assume that p₀(z) ≡ 1 in a neighborhood of infinity.

There exists a quasiconformal homeomorphism g 0 with Lavrentiev's dilatation

$p_w(z) = \dfrac{|\partial_z w| + |\partial_{\bar z} w|}{|\partial_z w| - |\partial_{\bar z} w|}$

equal to p 0(z) and hydrodynamical normalization

$g_0(z) = z + A_{1,0}\, z^{-1} + \cdots,$

mapping the domain G onto a domain g 0(G) whose boundary components are the straight cuts parallel to the real axes R . Consider also the conformal map

$\omega_0(z) = z + \tilde A_{1,0}\, z^{-1} + \cdots$

of G onto a domain bounded by straight cuts parallel to R and put

(6.28) Φ = Re ω 0 , Φ * = Re g 0 = Φ + φ * .

Then, due to [Ku17] (see also [KK1, Part 2]), the function Φ* is a solution of the differential equation

(6.29) $\operatorname{div}\!\left(\dfrac{1}{p_0}\operatorname{grad}\Phi^*\right) = 0.$

The admissible comparison functions ψ on G are those for which

(6.30) $|\operatorname{grad}\psi(z)| \le c\,|z|^{-2} \quad \text{as } z \to \infty$

with a constant c.

Then one obtains the following extremal principle, which is a generalization of the Diaz–Weinstein principle for conformal maps (cf., e.g., [Ku17]).

Theorem 6.14. For all nonconstant admissible ψ, we have

$\left[\iint_G \left(1 - \dfrac{1}{p_0}\right)\operatorname{grad}\Phi \cdot \operatorname{grad}\psi\; dx\, dy\right]^2 \le \iint_G \dfrac{1}{p_0}\operatorname{grad}^2\psi\; dx\, dy \cdot \left[2\pi\,\operatorname{Re}\big(A_{1,0} - \tilde A_{1,0}\big) - \iint_G \left(1 - \dfrac{1}{p_0}\right)\operatorname{grad}^2\Phi\; dx\, dy\right].$

Equality occurs only for ψ = αφ* + β, where α and β are constants.

The most interesting, though simplest, case occurs when G = ℂ and p₀(z) ≡ K on a union of a finite number of distinct simply connected domains G_j bounded by nonintersecting analytic curves C_k ⊂ ℂ, and p₀(z) ≡ 1 in the complement of this union containing the point at infinity.

6.6.2

The above variational principle provides various sharp quantitative estimates. We restrict ourselves to three Kühnau's theorems, referring to [Ku17] and to his Part 2 of the joint book [KK1] (cf. [McL]).

Theorem 6.15. The exact range domain of the Grunsky functional $\sum_{l,s=1}^{n} c_{ls}\, x_l x_s$ on the family F(p₀) of all p₀(z)-quasiconformal maps w(z) = z + a₁z⁻¹ + ⋯ of ℂ with p₀(z) ≡ 1 in a neighborhood of infinity is the closed disk whose boundary circle is located in the open annulus centered at the origin, with radii

$\dfrac{1}{2\pi}\left|\sum_{l,s=1}^{n} x_l \bar x_s \iint \left(1 - \dfrac{1}{p_0}\right) z^{\,l-1}\, \bar z^{\,s-1}\, dx\, dy\right|$

and

$\dfrac{1}{2\pi}\left|\sum_{l,s=1}^{n} x_l \bar x_s \iint (1 - p_0)\, z^{\,l-1}\, \bar z^{\,s-1}\, dx\, dy\right|,$

provided p 0(z) ≢ 1.

Theorem 6.16. The exact range domain of the functional

$\log \dfrac{w(z_1) - w(z_2)}{z_1 - z_2}$

for two fixed distinct points z 1 and z 2 on the family F ( p 0 ) is the closed disk whose boundary circle is located in the open annulus centered at the origin, with radii

$\dfrac{1}{2\pi}\left|\iint \dfrac{\left(1 - \dfrac{1}{p_0}\right) dx\, dy}{|z - z_1|\,|z - z_2|}\right|$

and

$\dfrac{1}{2\pi}\left|\iint \dfrac{(1 - p_0)\, dx\, dy}{|z - z_1|\,|z - z_2|}\right|,$

provided p 0(z) ≢ 1.

Note that one does not require here that p₀(z) be equal to 1 near the fixed points z₁ and z₂. Kühnau has also observed that in many cases the assumption p₀(z) ≡ 1 can be omitted or replaced by the weaker one that p₀ tends to 1 sufficiently fast.

Let us mention here the special cases when z 1 = 0 and the class F ( p 0 ) is either Σ(k) or S(k), which concerns Theorem 6.15. The bounds of log[w(z)/z] on these classes following from Theorem 6.16 can be represented also by means of the complete elliptic integral K(κ) of the first kind. For example, we have the following theorem.

Theorem 6.17. The range domain of log[w(z)/z] with a fixed z C on the maps from S(k) for each k ∈ (0, 1) (i.e., for K = (1 + k)/(1 − k) > 1) is a closed disk whose boundary circle is located in the open annulus centered at the origin, with radii

(6.32) $\dfrac{1}{2\pi}\left(1 - \dfrac{1}{K}\right)\int_0^{|z|} K(\kappa)\, d\kappa \qquad \text{and} \qquad \dfrac{1}{2\pi}\,(K - 1)\int_0^{|z|} K(\kappa)\, d\kappa$

for |z| ⩽ 1 and

(6.33) $\dfrac{1}{2\pi}\left(1 - \dfrac{1}{K}\right)\left\{2G + \int_1^{|z|} K\!\left(\dfrac{1}{\kappa}\right)\dfrac{d\kappa}{\kappa}\right\} \qquad \text{and} \qquad \dfrac{1}{2\pi}\,(K - 1)\left\{2G + \int_1^{|z|} K\!\left(\dfrac{1}{\kappa}\right)\dfrac{d\kappa}{\kappa}\right\}$

for |z| > 1. Here G denotes the Catalan constant.

The bounds (6.33) follow also from Theorem 6.13.

6.6.3

The general Theorem 6.14 can be combined with the properties of the Fredholm eigenvalues λ C of a finite union of Jordan curves C = j C j (cf. Section 2.5). This provides, for example, the following result.

Assume that a domain G is of the same type as in Theorem 6.14 and that its boundary curves are analytic. Let I = I(G*) denote the (finite) area of the complementary domain G* = ℂ̂ ∖ Ḡ. Consider the class F(K) of univalent holomorphic functions f(z) = z + b₁z⁻¹ + ⋯ on G having K-quasiconformal extensions to ℂ̂. Put

$\Lambda_C = \dfrac{\lambda_C + 1}{\lambda_C - 1} > 1.$

Theorem 6.18 [Ku17]. The range domain of the coefficient b 1 on F ( K ) is the disk whose boundary circle is located in the open annulus centered at the origin, with radii

(6.34) $\dfrac{I(K-1)}{2\pi}\left(1 - \dfrac{K-1}{1/\Lambda_C + K}\right) \qquad \text{and} \qquad \dfrac{I(K-1)}{2\pi}\left(1 - \dfrac{K-1}{\Lambda_C + K}\right).$

Both quantities in (6.34) coincide only if Λ_C = 1, i.e., λ_C = ∞, which occurs when C consists of one curve which is a circle. Then F(K) = Σ(k) and (6.34) reduces to the well-known bound |b₁| ⩽ k; the equality holds only for the function

(6.35) $f(z) = \begin{cases} z + t z^{-1} & \text{for } |z| \ge 1, \\ z + t \bar z & \text{for } |z| < 1, \end{cases}$

with |t| = 1. This was first established in [Ku7].
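As a numerical cross-check of this reduction (using the form of (6.34) reconstructed above, so the check is only as good as that reconstruction): with Λ_C = 1 and I = π both radii collapse to k = (K − 1)/(K + 1).

```python
# Sketch: at Lambda_C = 1 and I = pi (the unit circle case), both radii in (6.34)
# as written above reduce to the classical bound |b_1| <= k = (K-1)/(K+1).
import sympy as sp

K, Lam, I = sp.symbols('K Lambda_C I', positive=True)
r_inner = I*(K - 1)/(2*sp.pi) * (1 - (K - 1)/(1/Lam + K))
r_outer = I*(K - 1)/(2*sp.pi) * (1 - (K - 1)/(Lam + K))

k = (K - 1)/(K + 1)
vals = {Lam: 1, I: sp.pi}
assert sp.simplify(r_inner.subs(vals) - k) == 0
assert sp.simplify(r_outer.subs(vals) - k) == 0
print("At Lambda_C = 1 and I = pi, both radii equal k = (K-1)/(K+1).")
```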


URL:

https://www.sciencedirect.com/science/article/pii/S1874570905800095

Preliminaries

In Cooperative Control of Multi-Agent Systems, 2020

2.2 Stability theory

In this section, some basic concepts of stability based on Lyapunov functions are provided. Consider a nonlinear system

(2.1) x ˙ = f ( x ) ,

where x D R n is the state of the system, and f : D R n R n is a continuous function with x = 0 as an equilibrium point, that is f ( 0 ) = 0 , and with x = 0 as an interior point of D . D denotes a domain around the equilibrium x = 0 .

Definition 2.4 Lyapunov stability

For the system (2.1), the equilibrium point x = 0 is said to be Lyapunov stable if for any given positive real number R there exists a positive real number r such that ‖x(t)‖ < R for all t > 0 whenever ‖x(0)‖ < r. Otherwise, the equilibrium point is unstable.

Definition 2.5 Asymptotic stability

For the system (2.1), the equilibrium point x = 0 is asymptotically stable if it is (Lyapunov) stable and furthermore $\lim_{t\to\infty} x(t) = 0$.

Definition 2.6 Exponential stability

For the system (2.1), the equilibrium point x = 0 is exponentially stable if there exist two positive real numbers α and λ such that the inequality

$\|x(t)\| < \alpha \|x(0)\|\, e^{-\lambda t}$

holds for t > 0 in some neighborhood D ⊆ ℝⁿ containing the equilibrium point.

Definition 2.7 Global asymptotic stability

If the asymptotic stability defined in Definition 2.5 holds for any initial state in R n , the equilibrium point is said to be globally asymptotically stable.

Definition 2.8 Global exponential stability

If the exponential stability defined in Definition 2.6 holds for any initial state in R n , the equilibrium point is said to be globally exponentially stable.

Definition 2.9 Positive definite function

A function V(x): D ⊆ ℝⁿ → ℝ is said to be locally positive definite if V(x) > 0 for x ∈ D except at x = 0, where V(x) = 0. If D = ℝⁿ, i.e., the above property holds for the entire state space, V(x) is said to be globally positive definite.

Definition 2.10 Lyapunov function

If in D R n containing the equilibrium point x = 0 , the function V ( x ) is positive definite and has continuous partial derivatives, and if its time derivative along any state trajectory of system (2.1) is nonpositive, i.e.,

$\dot V(x) \le 0,$

then V ( x ) is a Lyapunov function.

Definition 2.11 Radially unbounded function

A positive definite function V(x): ℝⁿ → ℝ is said to be radially unbounded if V(x) → ∞ as ‖x‖ → ∞.

Theorem 2.1

Lyapunov theorem for global stability, Theorem 4.3 in [37]

For the system (2.1) with D R n , if there exists a function V ( x ) : R n R with continuous first order derivatives such that

V ( x ) is positive definite

V ˙ ( x ) is negative definite

V ( x ) is radially unbounded

then the equilibrium point x = 0 is globally asymptotically stable.
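A minimal example of Theorem 2.1 (the system and the candidate function below are illustrative choices of ours): for ẋ = −x³, the function V(x) = x² is positive definite and radially unbounded, and its derivative along trajectories is negative definite, so x = 0 is globally asymptotically stable.

```python
# Sketch: Lyapunov analysis of the scalar system x_dot = -x**3 with V(x) = x**2.
import sympy as sp

x = sp.symbols('x', real=True)
f = -x**3                      # system (2.1): x_dot = f(x), with f(0) = 0
V = x**2                       # Lyapunov candidate, positive definite and radially unbounded

V_dot = sp.diff(V, x) * f      # derivative of V along trajectories
print(sp.simplify(V_dot))      # -2*x**4, nonpositive, zero only at x = 0
assert V.subs(x, 0) == 0 and V_dot.subs(x, 0) == 0
assert all(V_dot.subs(x, v) < 0 for v in (-2, -0.5, 0.5, 2))
```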

The optimal consensus control in this book is designed via the inverse optimal control theory as follows.

Lemma 2.3

[11]

Consider the nonlinear controlled dynamical system

(2.2) $\dot{\hat X}(t) = f(\hat X(t), U(t)), \qquad \hat X(0) = \hat X_0, \qquad t \ge 0$

with f ( 0 , 0 ) = 0 and a cost functional given by

(2.3) $J(\hat X_0, U(\cdot)) \triangleq \int_0^{\infty} T(\hat X(t), U(t))\, dt$

where U(·) is an admissible control. Let D ⊆ ℝⁿ be an open set and Ω ⊆ ℝᵐ. Assume that there exist a continuously differentiable function V: D → ℝ and a control law ϕ: D → Ω such that

(2.4) V ( 0 ) = 0

(2.5) $V(\hat X) > 0, \qquad \hat X \in D, \quad \hat X \ne 0$

(2.6) ϕ ( 0 ) = 0

(2.7) $V'(\hat X)\, f(\hat X, \phi(\hat X)) < 0, \qquad \hat X \in D, \quad \hat X \ne 0$

(2.8) $H(\hat X, \phi(\hat X)) = 0, \qquad \hat X \in D$

(2.9) $H(\hat X, U) \ge 0, \qquad \hat X \in D, \quad U \in \Omega$

where $H(\hat X, U) \triangleq T(\hat X, U) + V'(\hat X)\, f(\hat X, U)$ is the Hamiltonian function. The superscript ′ denotes partial differentiation with respect to $\hat X$.

Then, with the feedback control

(2.10) U ( ) = ϕ ( X ˆ ( ) )

the solution $\hat X(t) \equiv 0$ of the closed-loop system is locally asymptotically stable and there exists a neighborhood of the origin D₀ ⊆ D such that

(2.11) $J(\hat X_0, \phi(\hat X(\cdot))) = V(\hat X_0), \qquad \hat X_0 \in D_0$

In addition, if $\hat X_0 \in D_0$, then the feedback controller (2.10) minimizes $J(\hat X_0, U(\cdot))$ in the sense that

(2.12) $J(\hat X_0, \phi(\hat X(\cdot))) = \min_{U(\cdot) \in S(\hat X_0)} J(\hat X_0, U(\cdot))$

where $S(\hat X_0)$ denotes the set of asymptotically stabilizing controllers for each initial condition $\hat X_0 \in D$. Finally, if D = ℝⁿ, Ω = ℝᵐ, and

(2.13) $V(\hat X) \to \infty \quad \text{as} \quad \|\hat X\| \to \infty,$

the solution $\hat X(t) \equiv 0$ of the closed-loop system is globally asymptotically stable.

Proof

Omitted. Refer to [11].  

Remark 2.1

This Lemma underlines the fact that the steady-state solution of the Hamilton–Jacobi–Bellman equation is a Lyapunov function for the nonlinear system and thus guarantees both stability and optimality.
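A scalar sanity check of Lemma 2.3 (a sketch with our own choices, not from [11]): take f(X̂, U) = U and running cost T(X̂, U) = X̂² + U². Then V(X̂) = X̂² and ϕ(X̂) = −X̂ satisfy conditions (2.4)–(2.9), and the Hamiltonian is the perfect square (X̂ + U)², which makes (2.8) and (2.9) transparent.

```python
# Sketch: verify conditions (2.4)-(2.9) of Lemma 2.3 for a scalar example of ours:
# f(X, U) = U, T(X, U) = X**2 + U**2, with V(X) = X**2 and phi(X) = -X.
import sympy as sp

X, U = sp.symbols('X U', real=True)

f   = U                    # dynamics X_dot = f(X, U)
T   = X**2 + U**2          # running cost
V   = X**2                 # candidate value function (positive definite, (2.5))
phi = -X                   # candidate optimal feedback

Vp = sp.diff(V, X)                         # V'(X)
H  = T + Vp*f                              # Hamiltonian H(X, U) = T + V' f

assert V.subs(X, 0) == 0 and phi.subs(X, 0) == 0          # (2.4), (2.6)
assert sp.simplify(H.subs(U, phi)) == 0                   # (2.8): H(X, phi(X)) = 0
assert sp.simplify(H - (X + U)**2) == 0                   # hence (2.9): H >= 0 everywhere
print(sp.simplify(Vp*f.subs(U, phi)))                     # -2*X**2 < 0 for X != 0, (2.7)
```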


URL:

https://www.sciencedirect.com/science/article/pii/B9780128201183000119