In vector analysis, the divergence theorem, also called the Green–Ostrogradsky theorem, asserts the equality between the integral of the divergence of a vector field over a volume of ℝ³ and the flux of this field through the boundary of the volume, which is a surface integral.
The equality is the following: ∫∫∫_{V} div F^{→} dV = ∯_{∂V} F^{→} · dS^{→}
where
V : the volume
∂V : the boundary of V
dS^{→} : the normal vector to the surface, directed towards the outside and of length equal to the surface element it represents
F^{→} : a vector field continuously differentiable at every point of V
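As a hedged numerical sketch (not part of the original text), the two sides of the theorem can be compared by Monte Carlo on the unit ball, with an illustrative field F = (x³, y³, z³) chosen for the example:

```python
import math
import random

# Illustrative check of the divergence theorem on the unit ball for
# F(x, y, z) = (x^3, y^3, z^3), so div F = 3(x^2 + y^2 + z^2).
# Both sides should approach 12*pi/5.
random.seed(0)
N = 200_000

# Left side: Monte Carlo volume integral of div F over the unit ball,
# sampling uniformly in the cube [-1, 1]^3 (volume 8) and rejecting
# points outside the ball.
acc_vol = 0.0
for _ in range(N):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    if x*x + y*y + z*z <= 1.0:
        acc_vol += 3.0 * (x*x + y*y + z*z)
volume_integral = 8.0 * acc_vol / N

# Right side: flux of F through the unit sphere.  The outward normal at a
# surface point is the point itself, so F . n = x^4 + y^4 + z^4; we average
# it over uniform points on the sphere (Gaussian trick) and multiply by
# the sphere area 4*pi.
acc_surf = 0.0
for _ in range(N):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x*x + y*y + z*z)
    x, y, z = x / r, y / r, z / r
    acc_surf += x**4 + y**4 + z**4
flux = 4.0 * math.pi * acc_surf / N

exact = 12.0 * math.pi / 5.0
```

Both estimates approach the exact value 12π/5; the agreement is statistical, with an error of order 1/√N.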
This theorem follows from the theorem of Stokes, which itself generalizes the fundamental theorem of calculus.
Physical interpretation
It is an important result in mathematical physics, in particular in electrostatics and fluid dynamics, where this theorem expresses a conservation law. According to its sign, the divergence expresses the dispersion or the concentration of a quantity (such as a mass, for example), and the preceding theorem indicates that a dispersion within a volume is necessarily accompanied by an equivalent total flux leaving its boundary.
This theorem in particular makes it possible to recover the integral version of the theorem of Gauss in electromagnetism starting from the Maxwell–Gauss equation: div E^{→} = ρ ⁄ ε_{0}
Other relations
This theorem makes it possible to deduce certain useful formulas of vector calculus. In the expressions below, ∇^{→} · F^{→} = div F^{→}, ∇^{→}g = grad^{→} g, ∇^{→} ∧ F^{→} = rot^{→} F^{→} :
∫∫∫_{V} (F^{→} · ∇^{→}g + g (∇^{→} · F^{→})) dV = ∯_{∂V} g F^{→} · dS^{→}
∫∫∫_{V} ∇^{→}g dV = ∯_{∂V} g dS^{→}
∫∫∫_{V} (G^{→} · (∇^{→} ∧ F^{→}) − F^{→} · (∇^{→} ∧ G^{→})) dV = ∯_{∂V} (F^{→} ∧ G^{→}) · dS^{→}
∫∫∫_{V} ∇^{→} ∧ F^{→} dV = ∯_{∂V} dS^{→} ∧ F^{→}
∫∫∫_{V} (ƒ ∇^{→}²g + ∇^{→}ƒ · ∇^{→}g) dV = ∯_{∂V} ƒ ∇^{→}g · dS^{→}
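The gradient identity among these formulas can be checked numerically on a simple domain; a sketch on the unit cube, with the illustrative choice g(x, y, z) = xy:

```python
# Illustrative numerical check of the identity
# ∫∫∫_V grad g dV = ∯_{∂V} g dS on the unit cube [0,1]^3
# with the example function g(x, y, z) = x*y, so grad g = (y, x, 0).
n = 50
h = 1.0 / n
mid = [(k + 0.5) * h for k in range(n)]

# Volume integral of grad g, by the midpoint rule.
vol = [0.0, 0.0, 0.0]
for x in mid:
    for y in mid:
        for z in mid:
            vol[0] += y * h**3
            vol[1] += x * h**3
            # the third component of grad g is 0

# Surface integral of g times the outward normal, face by face.
g = lambda x, y, z: x * y
surf = [0.0, 0.0, 0.0]
for u in mid:
    for v in mid:
        surf[0] += (g(1.0, u, v) - g(0.0, u, v)) * h**2   # faces x=1, x=0
        surf[1] += (g(u, 1.0, v) - g(u, 0.0, v)) * h**2   # faces y=1, y=0
        surf[2] += (g(u, v, 1.0) - g(u, v, 0.0)) * h**2   # faces z=1, z=0
```

Both sides come out as (1/2, 1/2, 0); the midpoint rule is exact here because the integrands are at most bilinear.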
Theorem of Gauss
In electromagnetism, the theorem of Gauss makes it possible to calculate the flux of an electric field through a surface, taking into account the charge distribution. It is due to Carl Friedrich Gauss.
The flux of the electric field through a closed surface S is equal to the sum of the charges contained in the volume V delimited by this surface, divided by ε_{0}, the permittivity of vacuum: ∯_{S} E^{→} · dS^{→} = Q_{int} ⁄ ε_{0}
This equation is the integral form of the Maxwell–Gauss equation: div E^{→} = ρ ⁄ ε_{0}
The flux integral is greatly simplified if an adequate Gaussian surface is chosen. This choice depends on the symmetry of the charge distribution. Three symmetries are very commonly used:
Spherical distribution
Cylindrical distribution
Plane distribution
This is a general property in physics coming from the principle of Curie: the effects have, at least, the same symmetries as the causes.
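For the spherical case, the symmetry argument reduces the flux integral to E(R) · 4πR²; a small Python sketch (the charge value and radii below are illustrative) checks that the flux is q/ε₀ whatever the radius:

```python
import math

# Sketch of the spherical case: for a point charge q at the origin, symmetry
# forces E to be radial with constant magnitude on any concentric sphere,
# so the flux through every such sphere equals q / eps0.
eps0 = 8.854e-12      # vacuum permittivity, F/m
q = 1e-9              # example charge, C

def E(r):
    """Magnitude of the Coulomb field at distance r."""
    return q / (4.0 * math.pi * eps0 * r * r)

fluxes = []
for R in (0.5, 1.0, 2.0):
    # Numerical surface integral over the sphere of radius R:
    # dS = R^2 sin(theta) dtheta dphi, and E . n = E(R) everywhere.
    n = 200
    dth = math.pi / n
    dph = 2.0 * math.pi / n
    flux = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            flux += E(R) * R * R * math.sin(th) * dth * dph
    fluxes.append(flux)

expected = q / eps0   # same value whatever R is
```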
Theorem of Stokes
In mathematics, and more particularly in differential geometry, the theorem of Stokes is a central result on the integration of differential forms, which generalizes the fundamental theorem of calculus as well as many theorems of vector analysis. It has multiple applications, providing a form which physicists and engineers readily use, particularly in fluid mechanics.
The theorem is attributed to Sir George Gabriel Stokes, but the first to discover this result was actually Lord Kelvin. The mathematician and the physicist maintained an active correspondence on this subject for five years, from 1847 to 1853. The form initially discovered by Kelvin, often called the Kelvin–Stokes theorem, or sometimes simply the theorem of Stokes, is the particular case of the theorem concerning the circulation of the curl, which is described in the paragraph on the physical meaning of the theorem.
Statement and proof
Theorem of Stokes. Let M be an oriented differential manifold of dimension n, and ω an (n−1)-differential form with compact support on M, of class C^{1}.
Then one has ∫_{M} dω = ∫_{∂M} i*ω, where d denotes the exterior derivative, ∂M the boundary of M provided with the outgoing orientation, and i : ∂M → M the canonical injection.
The usual proof requires having a good definition of integration; its apparent simplicity is misleading. The idea is to use a partition of unity adapted to the problem in the definition of the integral of a differential form, and to reduce to an almost obvious case.
Let {U_{i}}_{i∈I} be a locally finite covering of M by domains of local charts Φ_{i} : U_{i} → Φ_{i}(U_{i}) ⊂ ℝ^{n} such that Φ_{i}(U_{i} ∩ ∂M) = Φ_{i}(U_{i}) ∩ ({0} × ℝ^{n−1}). Let us introduce χ_{i}, a partition of unity subordinate to {U_{i}}. Since the support of ω is compact, the differential form ω is written ω = Σ_{i} χ_{i}ω, where the summation has finite support. Let us set β_{i} = (Φ_{i}^{−1})* [χ_{i}ω], a differential form with compact support on M′ = ℝ_{+} × ℝ^{n−1}. The restriction of Φ_{i} is a diffeomorphism onto its image preserving the outgoing orientations; one thus has ∫_{∂M} χ_{i}ω = ∫_{∂M′} β_{i}. Since Φ_{i} commutes with the exterior derivative d, one has ∫_{M} d[χ_{i}ω] = ∫_{M′} dβ_{i}. By summation, the theorem of Stokes is proved once it is established in the particular case M′ = ℝ_{+} × ℝ^{n−1}.
An (n−1)-form ω on M′ = ℝ_{+} × ℝ^{n−1} is written ω = ∑_{i=1}^{n} ƒ_{i} dx_{1} ∧ ... ∧ d̂x_{i} ∧ ... ∧ dx_{n}, where the hat indicates an omission. One then finds dω = ∑_{i=1}^{n} (∑_{j=1}^{n} (∂ƒ_{i} ⁄ ∂x_{j}) dx_{j}) ∧ dx_{1} ∧ ... ∧ d̂x_{i} ∧ ... ∧ dx_{n} = ∑_{i=1}^{n} (−1)^{i−1} (∂ƒ_{i} ⁄ ∂x_{i}) dx_{1} ∧ ... ∧ dx_{n}
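The Kelvin–Stokes special case mentioned above lends itself to a quick numerical check; the field F = (−y, x, 0) and the unit circle are illustrative choices for this sketch:

```python
import math

# Numerical sketch of the Kelvin-Stokes special case: circulation of F
# around a closed curve equals the flux of curl F through a spanning
# surface.  Example field F = (-y, x, 0), with curl F = (0, 0, 2); the
# curve is the unit circle, the surface the unit disk.
n = 10_000
dt = 2.0 * math.pi / n

# Circulation: integral of F(r(t)) . r'(t) dt with r(t) = (cos t, sin t, 0).
circ = 0.0
for k in range(n):
    t = (k + 0.5) * dt
    x, y = math.cos(t), math.sin(t)
    fx, fy = -y, x                        # F on the curve
    dx, dy = -math.sin(t), math.cos(t)    # r'(t)
    circ += (fx * dx + fy * dy) * dt

# Flux of curl F = (0, 0, 2) through the unit disk (normal = +z):
flux = 2.0 * math.pi * 1.0**2             # curl_z times disk area
```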
Distribution of Dirac
The distribution of Dirac, also called by abuse of language the δ function of Dirac, introduced by Paul Dirac, can be informally regarded as a function δ which takes an infinite value at 0 and the value zero everywhere else, and whose integral over ℝ is equal to 1. The graph of the function δ can be likened to the whole x-axis together with the positive half of the y-axis. In addition, δ corresponds to the derivative of the Heaviside function. But this Dirac function is not a function; it extends the concept of function.
The δ function of Dirac is very useful as an approximation of functions whose graph has the shape of a tall narrow spike. It is the same type of abstraction which represents a concentrated load, a point mass or a point electron. For example, to calculate the speed of a tennis ball struck by a racket, we can liken the force of the racket striking the ball to a δ function. In this manner, we not only simplify the equations, but we can also calculate the movement of the ball by considering only the total impulse of the racket against the ball, rather than requiring knowledge of the details of how the racket transferred energy to the ball.
By extension, the expression "a Dirac" is thus often used by physicists to indicate a function or a curve spiked at a given value.
Formal introduction
We place ourselves in ℝ^{n}.
The δ function of Dirac is the Borel measure which charges only the singleton {0} : δ({0}) = 1, δ(Q) = 0 (Q a cube which does not contain 0)
Let A be a Borel set. A direct calculation establishes that δ is indeed a Borel measure, by checking:
If A contains 0 : δ(A) = 1, i.e. ∫ 1_{A} dδ = 1
If not : δ(A) = 0, i.e. ∫ 1_{A} dδ = 0
Since any measurable function ƒ is a limit of simple step functions, one has : ∫ ƒ dδ = ƒ(0)
From the point of view of Radon measures, one sets : δ : C_{c}(ℝ^{n}) → ℝ, ƒ ↦ ƒ(0).
Let K be a compact set: the restriction of such a linear form to C_{K}(ℝ^{n}) is clearly continuous (δ is thus indeed a Radon measure), of norm 1. Indeed : sup {|ƒ(0)| : ‖ƒ‖_{∞} = 1} = 1
We can then see δ as a distribution of order 0 : δ : D → ℝ, φ ↦ φ(0) is a linear form such that: ∀ K compact, |δ(φ)| ≤ ‖φ‖_{0} (φ ∈ D_{K})
(Let us recall that ‖·‖_{0} denotes the norm ‖·‖_{∞} in the context of distributions.)
Let us apply the definition of the support of a distribution: spt δ = {0} is compact. Consequently, δ is tempered.
Other presentations (on ℝ) :
Any element ƒ of L^{1}_{loc}(ℝ) (that is, locally integrable in the sense of Lebesgue) is identified with a linear form:
∀ φ ∈ C_{c}(ℝ), 〈ƒ,φ〉 = ∫_{−∞}^{+∞} ƒ(x) φ(x) dx. By analogy, δ is defined by the following equality : ∀ φ ∈ C_{c}(ℝ), 〈δ,φ〉 = ∫_{−∞}^{+∞} δ(x) φ(x) dx = φ(0)
The only mathematical object δ which rigorously satisfies this equation is a measure. This is why the existence of δ has a meaning within the mathematical framework of distributions. By defining the functions δ_{n} by: δ_{n}(x) = n for |x| < 1 ⁄ 2n and δ_{n}(x) = 0 everywhere else, we have :
∀ φ ∈ C_{c}(ℝ), lim_{n→∞} ∫_{−∞}^{+∞} φ(x) δ_{n}(x) dx = φ(0). The sequence δ_{n} converges in the weak sense towards δ.
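This weak convergence can be illustrated numerically; φ below is an arbitrary smooth test function chosen for the sketch:

```python
import math

# Sketch of the weak convergence above: delta_n(x) = n on |x| < 1/(2n)
# averages a test function over a shrinking window, and these averages
# tend to phi(0).  phi is an illustrative smooth test function.
phi = lambda x: math.cos(x) + x * x    # phi(0) = 1

errors = []
for n in (10, 100, 1000):
    half = 1.0 / (2 * n)
    m = 2001                           # midpoint quadrature points
    h = 2.0 * half / m
    integral = sum(phi(-half + (k + 0.5) * h) * n * h for k in range(m))
    errors.append(abs(integral - phi(0.0)))
```

The error shrinks as the window narrows, as the limit statement predicts.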
Thus, by abuse of language, one says that the δ function of Dirac is zero everywhere except at 0, where its infinite value corresponds to a mass of 1; that is, it corresponds to the measure which associates with a subset of ℝ the value 1 if 0 belongs to the subset and 0 if not.
This distribution can also be seen as the derivative, in the sense of distributions, of the Heaviside function.
Let us set T_{H}(φ) := ∫ Hφ (φ ∈ D). Then T′_{H}(φ) = −T_{H}(φ′) = −∫_{0}^{+∞} φ′ = −[φ]_{0}^{∞} = φ(0) = δ(φ). δ is the neutral element of convolution : (δ * φ)(x) := 〈δ, φ(x − ·)〉 = φ(x − 0) = φ(x) (x ∈ ℝ^{n}). Whence : δ * φ = φ
This property is abundantly used in signal processing. It is said that a signal corresponding to a distribution of Dirac has a white spectrum: each frequency is present with an identical intensity. This property makes it possible to analyze the frequency response of a system without having to sweep all the frequencies.
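A discrete analogue of this remark: the DFT of a unit impulse has the same magnitude in every frequency bin. A minimal sketch:

```python
import cmath

# Illustrative discrete analogue of the "white spectrum" remark: the DFT
# of a unit impulse has the same magnitude at every frequency bin.
N = 8
x = [1.0] + [0.0] * (N - 1)    # discrete impulse (a "Dirac" of samples)
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]
magnitudes = [abs(Xk) for Xk in X]   # all equal to 1
```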
Transform of Fourier
The Fourier transform of the δ function of Dirac is identified with the constant function 1 : 〈δ^{∧},φ〉 := 〈δ,φ^{∧}〉 = φ^{∧}(0) = ∫ φ(x) e^{−i0x} dx = ∫ 1 · φ(x) dx (φ ∈ S). Consequence : 〈1^{∨},φ〉 = 〈1,φ^{∨}〉 = ∫ φ^{∨}(x) dx = (φ^{∨})^{∧}(0) = φ(0) = 〈δ,φ〉, thus δ is the inverse Fourier transform of 1.
Derivatives
The derivative of the δ function of Dirac is the distribution δ′ defined by: for any test function φ, 〈δ′,φ〉 := −φ′(0). More generally, one has for the nth derivative of δ, δ^{(n)} : 〈δ^{(n)},φ〉 := (−1)^{n} φ^{(n)}(0). The derivatives of the Dirac δ are important because they appear in the Fourier transforms of polynomials. A useful identity is δ(g(x)) = Σ_{i} δ(x − x_{i}) ⁄ |g′(x_{i})|, where the x_{i} are the (presumed simple) roots of the function g(x). It is equivalent to the integral form : ∫_{−∞}^{+∞} ƒ(x) δ(g(x)) dx = Σ_{i} ƒ(x_{i}) ⁄ |g′(x_{i})|
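The integral form of this identity can be checked with a narrow Gaussian as a nascent delta; g, f and the width σ below are illustrative choices:

```python
import math

# Numerical sketch of the identity above, using a narrow Gaussian as a
# nascent delta.  Example: g(x) = x^2 - 1 (simple roots +1 and -1, with
# |g'(+-1)| = 2) and f(x) = x + 2, so the right-hand side equals
# f(1)/2 + f(-1)/2 = 3/2 + 1/2 = 2.
sigma = 0.01

def delta_approx(u):
    """Normalized Gaussian of width sigma, a nascent delta."""
    return math.exp(-u * u / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

f = lambda x: x + 2.0
g = lambda x: x * x - 1.0

# Midpoint-rule integration of f(x) * delta_approx(g(x)) over [-3, 3].
h = 1e-4
lhs = sum(f(-3.0 + (k + 0.5) * h) * delta_approx(g(-3.0 + (k + 0.5) * h)) * h
          for k in range(60_000))
rhs = f(1.0) / 2.0 + f(-1.0) / 2.0
```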
Representations of the function
General information
The function δ can be viewed as the limit of a family (δ_{a}) of functions: δ(x) = lim_{a→0} δ_{a}(x). Some call such functions δ_{a} nascent delta functions.
These can be useful in specific applications. But if the limit is employed in too vague a way, nonsense can result, as indeed in any branch of mathematical analysis.
The concept of approximate identity has a particular significance in harmonic analysis, in connection with the limit of a family having for limit a neutral element for the convolution operation. Here the assumption is made that the limit is that of a family of positive functions.
Probability
A probability density, for example that of the normal law, is represented by a curve which encloses an area equal to 1. If one lets its variance tend towards 0, one obtains in the limit a delta which represents the probability density of a variable that is certain with probability 1. This is a curiosity of limited practical interest, but it generalizes in an interesting manner.
The simplest manner to describe a discrete variable which takes values belonging to a countable set consists in using its probability function, which associates a probability with each value. One can also consider a pseudo-density of probability consisting of a sum of Dirac functions associated with each value, with weights equal to their probabilities. Under these conditions, the integral formulas which calculate the expectations of continuous variables apply to discrete variables as well, taking account of the equation mentioned above.
Analysis of recordings
To determine the content of the recording of a physical phenomenon as a function of time, one generally uses the Fourier transformation: TF(ƒ(x)) = F(ν) = ∫_{−∞}^{+∞} ƒ(x) e^{−2πiνx} dx. One can note the Fourier transform of the Dirac function : TF(δ(x)) = ∫_{−∞}^{+∞} δ(x) e^{−2πiνx} dx = 1
Nowadays, the continuous analog recordings of physical phenomena have given way to digital recordings sampled with a certain time step. In this field one uses the discrete Fourier transform, which is an approximation over a certain sampling duration.
The multiplication of a continuous function by a Dirac comb, a sum of equidistant deltas, has a Fourier transform equal to the approximation of that of the original function by the rectangle method. By using a Fourier series expansion of the comb, one shows that the result gives the sum of the true transform and of all its translates by the sampling frequency. If those encroach on the true transform, that is, if the signal contains frequencies higher than half the sampling frequency, the spectrum is folded. In the contrary case it is possible to reconstruct the signal exactly by the Shannon formula.
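The folding phenomenon can be seen directly on samples; in this sketch, a 9 Hz cosine sampled at 10 Hz (Nyquist frequency 5 Hz) produces exactly the samples of a 1 Hz cosine:

```python
import math

# Sketch of spectrum folding (aliasing): a 9 Hz cosine sampled at 10 Hz
# yields exactly the same samples as a 1 Hz cosine, because 9 = 10 - 1
# folds across the Nyquist frequency (5 Hz).
fs = 10.0
samples_9hz = [math.cos(2 * math.pi * 9.0 * n / fs) for n in range(20)]
samples_1hz = [math.cos(2 * math.pi * 1.0 * n / fs) for n in range(20)]
max_gap = max(abs(a - b) for a, b in zip(samples_9hz, samples_1hz))
```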
Transform of Laplace
In mathematics, the Laplace transformation is an integral transformation, that is, an operation associating with a function ƒ(t) with values in ℝ^{n} or ℂ^{n} a new function known as the transform of ƒ(t), traditionally noted F(p), via an integral. The Laplace transformation is bijective, and by the use of tables it is possible to invert it. The great advantage of the Laplace transformation is that the majority of the current operations on the original function ƒ(t), such as differentiation or a shift in the variable t, have a simpler translation on the transform F(p). Thus the Laplace transform of the derivative ƒ′(t) is simply pF(p) − ƒ(0^{−}), and the transform of the shifted function ƒ(t − τ) is simply e^{−pτ} F(p). This transformation was introduced for the first time, in a form close to that used by Laplace, in 1774, within the framework of the theory of probability.
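The differentiation rule quoted above can be verified numerically; f(t) = cos(3t) and p = 2 are illustrative choices for this sketch:

```python
import math

# Numerical sketch of the rule L{f'}(p) = p F(p) - f(0), with the example
# f(t) = cos(3t) on t >= 0, for which F(p) = p / (p^2 + 9) and
# f'(t) = -3 sin(3t).
p = 2.0
T, h = 20.0, 1e-3        # truncation horizon and step (e^{-2T} is negligible)

def laplace(func):
    """Crude trapezoidal approximation of the one-sided Laplace transform at p."""
    n = int(T / h)
    total = 0.5 * (func(0.0) + func(T) * math.exp(-p * T))
    for k in range(1, n):
        t = k * h
        total += func(t) * math.exp(-p * t)
    return total * h

F = p / (p * p + 9.0)                               # exact transform of cos(3t)
lhs = laplace(lambda t: -3.0 * math.sin(3.0 * t))   # numerical L{f'}
rhs = p * F - 1.0                                   # p F(p) - f(0)
```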
The Laplace transform is close to the Fourier transform, which is also used to solve differential equations; but contrary to the latter, it takes account of the initial conditions and can thus be used in the theory of mechanical vibrations or in electricity, for the study of forced regimes without neglecting the transient regime. In this type of analysis, the Laplace transform is often interpreted as a passage from the time domain, in which the inputs and outputs are functions of time, to the frequency domain, in which the same inputs and outputs are functions of the frequency p. Thus it is possible to analyze simply the effect of the system on the input to give the output in terms of simple algebraic operations (cf. the theory of transfer functions in electronics or mechanics).
Definition
In mathematics, and in particular in functional analysis, the one-sided Laplace transform of a function ƒ (possibly generalized, such as the Dirac function) of a real variable t, with positive support, is the function F of the complex variable p, defined by: F(p) = ∫_{0^{−}}^{+∞} e^{−pt} ƒ(t) dt
More precisely, this formula is valid when ℜ(p) > α, where α (−∞ ≤ α ≤ +∞) is the abscissa of convergence and ƒ is a germ of distributions defined in an open neighborhood, bounded below, of the interval I = ]0, +∞[, whose restriction to the complement of I in this neighborhood is an indefinitely differentiable function. It is such a germ that we call here, by abuse of language, a generalized function with positive support, and the Laplace transformation, applied to these generalized functions, is injective. The abscissa of convergence α is defined as follows: for a real β, let ƒ_{β} : t ↦ e^{−βt} ƒ(t). Then α is the infimum of the set B of the β for which ƒ_{β} is a tempered distribution, if B is non-empty, and α = +∞ if not.
The Dirac function is of this nature. Its Laplace transform is worth 1, with an abscissa of convergence of −∞.
The properties of this transformation confer on it a great utility in the analysis of linear dynamical systems. The most interesting of these properties is that integration and differentiation are transformed into division and multiplication by p, in the same way that the logarithm transforms multiplication into addition. It thus makes it possible to reduce the resolution of linear differential equations with constant coefficients to the resolution of affine equations whose solutions are rational functions of p.
The Laplace transformation is very much used by engineers to solve differential equations and to determine the transfer function of a linear system. For example, in electronics, contrary to the Fourier decomposition which is used for the determination of the spectrum of a periodic or even arbitrary signal, it takes account of the existence of a transient regime preceding the steady-state regime.
It is indeed enough to transpose the differential equation into the Laplace domain to obtain an equation much simpler to handle.
Inverse Laplace transform
The inversion of the Laplace transformation is carried out by means of an integral in the complex plane. Using the residue theorem, one proves the Bromwich–Mellin formula: ƒ(t) = (1 ⁄ 2πi) ∫_{γ−i∞}^{γ+i∞} e^{pt} F(p) dp
where γ is selected so that the integral is convergent, which implies that γ is greater than the real part of any singularity of F(p), and that at infinity F(p) tends towards 0 at least as quickly as 1 ⁄ |p|². When this last condition is not satisfied, the formula above is still usable if there exists an integer n such that p^{−n} F(p) tends towards 0 as quickly as 1 ⁄ |p|², that is to say when, for |p| tending towards infinity, F(p) is bounded by a polynomial in |p|. By replacing F(p) by p^{−n} F(p) in the integral above, one finds in the left-hand member of the equality a generalized function with positive support whose derivative of order n, in the sense of distributions, is the generalized function with positive support which is sought.
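A hedged numerical sketch of the Bromwich–Mellin formula for the illustrative choice F(p) = 1/(p+1)², which decays like 1/|p|² so that the contour ℜ(p) = 0 works; its inverse is t e^{−t}:

```python
import cmath
import math

# Numerical sketch of the Bromwich-Mellin formula for F(p) = 1/(p+1)^2,
# whose inverse transform is f(t) = t e^{-t}.  The contour Re(p) = gamma = 0
# is valid because the only singularity is at p = -1.
def bromwich(t, W=200.0, dw=0.01):
    # f(t) = (1/2*pi) * integral of e^{i w t} F(i w) dw over [-W, W]
    n = int(2 * W / dw)
    total = 0.0 + 0.0j
    for k in range(n):
        w = -W + (k + 0.5) * dw
        p = 1j * w                             # point on the contour
        total += cmath.exp(p * t) / (p + 1.0) ** 2 * dw
    return (total / (2.0 * math.pi)).real

approx = bromwich(1.0)
exact = 1.0 * math.exp(-1.0)    # f(1) = 1 * e^{-1}
```

The residual error comes from truncating the contour at |ω| = W; it shrinks like 1/W here.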
In practice, nevertheless, the Bromwich–Mellin formula is little used, and one calculates the inverses of Laplace transforms starting from tables of Laplace transforms.
Use of the Laplace transform in electricity
One considers a circuit known as R, C, made up of an electrical resistance of value R and a capacitor of capacitance C placed in series. In all cases, the circuit is placed at the terminals of an ideal voltage generator, delivering a generally variable voltage u(t), only at an instant chosen as the origin of dates, and the capacitor is initially discharged. One thus has, respectively, for the charge q(t) of the capacitor and the intensity in the circuit i(t) = dq/dt, the following initial conditions : q(0^{−}) = 0, i(0^{−}) = 0.
Charging of a capacitor by a voltage step
One applies the following voltage u(t): u(t) = 0 if t < 0, and u(t) = U_{0} = const if t ≥ 0. The differential equation connecting the response q(t) to the input u(t) is, by applying the usual laws of electricity: U_{0} Υ(t) = R dq ⁄ dt + q(t) ⁄ C, that is to say, by setting τ ≡ RC (this quantity having the dimension of a duration): CU_{0} ⁄ τ = q(t) ⁄ τ + dq ⁄ dt. One takes the Laplace transform of this last equation member by member, noting Q(p) the transform of q(t); taking account of the fact that q(0^{−}) = 0, it comes: Q(p) = CU_{0} (1 ⁄ τ) ⁄ [p ((1 ⁄ τ) + p)], which can also be written in the form: Q(p) = H(p) U(p), with H(p) ≡ (1 ⁄ τ) ⁄ [(1 ⁄ τ) + p] the transfer function of the system R, C, and U(p) = CU_{0} ⁄ p
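Inverting Q(p) gives q(t) = CU₀(1 − e^{−t/τ}); a sketch comparing this analytic solution with a direct Euler integration of the differential equation (the component values are illustrative):

```python
import math

# Sketch: the solution q(t) = C*U0*(1 - exp(-t/tau)) obtained via the
# Laplace transform, checked against a direct Euler integration of
# dq/dt = (C*U0 - q)/tau with q(0) = 0.  Component values are illustrative.
R, C, U0 = 1000.0, 1e-6, 5.0
tau = R * C

dt = tau / 10_000
q = 0.0
t = 0.0
while t < 3.0 * tau:          # integrate over three time constants
    q += dt * (C * U0 - q) / tau
    t += dt

q_exact = C * U0 * (1.0 - math.exp(-t / tau))
rel_err = abs(q - q_exact) / q_exact
```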
Bilateral transform of Laplace
In analysis, the bilateral Laplace transform is the most general form of the Laplace transform, in which the integration is done from minus infinity rather than from zero.
Definition
The bilateral Laplace transform of a function ƒ of the real variable is the function F of the complex variable defined by:
F(p) = ζ{ƒ}(p) = ∫_{−∞}^{+∞} e^{−pt} ƒ(t) dt
This integral converges for α < ℜ(p) < β (−∞ ≤ α, β ≤ +∞), that is to say for p belonging to a band of convergence in the complex plane (instead of ℜ(p) > α, α indicating the abscissa of convergence, in the case of the one-sided transform). In a precise way, within the framework of the theory of distributions, this transform converges for all the values of p for which t ↦ e^{−ℜ(p)t} ƒ(t) (in abusive notation) is a tempered distribution and thus admits a Fourier transform.
Ambiguities to be avoided
It is essential, when one uses the bilateral Laplace transform, to specify the band of convergence. Let, for example, F(p) = 1/p.
If the band of convergence is ℜ(p) > 0, the antecedent of this Laplace transform is the Heaviside function Υ. On the other hand, if the band of convergence is ℜ(p) < 0, this antecedent is t ↦ −Υ(−t).
Convolution and derivation
Let T and S be two convolvable distributions, for example each one having a support bounded on the left, or one of them having compact support.
ζ{T * S} = ζ{T} ζ{S}. In particular, ζ(δ^{(n)}) = p^{n} and δ^{(n)} * S = S^{(n)}, thus ζ(S^{(n)}) = p^{n} ζ(S)
Laplace transforms of hyperfunctions
One can extend the Laplace transform to the case of hyperfunctions. For a hyperfunction defined by a distribution, one recovers the theory which precedes. But for example
ƒ(t) = ∑_{k=0}^{∞} ((−1)^{k} ⁄ (k!(k+1)!)) δ^{(k)}(t),
although not being a distribution (because it is locally of infinite order, namely at 0), is a hyperfunction whose support is {0} and which admits for Laplace transform
F(p) = J_{1}(2√p) ⁄ √p
where J_{1} indicates the usual Bessel function of the first kind, namely the entire function
J_{1}(x) = ∑_{k=0}^{∞} ((−1)^{k} ⁄ (k!(k+1)!)) (x ⁄ 2)^{2k+1},
which is quite coherent with the definition of ƒ(t), since ζ(δ^{(k)}) = p^{k}.
Relation between the bilateral transform and the one-sided transform
Elementary theory
Let ƒ be a function defined in an open neighborhood of I = [0, +∞[, continuous at 0, and admitting a bilateral Laplace transform ζ(ƒ).
Its one-sided Laplace transform, which we will note here ζ_{+}(ƒ), is given by ζ_{+}(ƒ) = ζ(ƒΥ), where Υ is the Heaviside function. One has d ⁄ dt (ƒΥ) = ƒ′Υ + ƒδ = ƒ′Υ + ƒ(0)δ, consequently p ζ_{+}(ƒ) = ζ_{+}(ƒ′) + ƒ(0), whence the traditional formula ζ_{+}(ƒ′) = p ζ_{+}(ƒ) − ƒ(0)
Generalization
Let T be a distribution with positive support, g an indefinitely differentiable function in an open interval containing I = [0, +∞[, and ƒ = T + g. By setting g_{+} = gΥ, ƒ_{+} = T + g_{+} is a distribution with positive support, whose Laplace transform is, in abusive notation,
where α is the abscissa of convergence. The distributions ƒ and g have the same restriction on any open interval of the form ]−ε, 0[ as soon as ε > 0 is sufficiently small. One can thus write ƒ^{(i)}(0^{−}) = g^{(i)}(0) for any integer i ≥ 0. In addition,
ζ_{+}(ƒ′) = ζ_{+}(T′) + ζ_{+}(g′), with ζ_{+}(T′) = ζ(T′) = pζ(T) and, according to the elementary theory above, ζ_{+}(g′) = pζ_{+}(g) − g(0). Finally, ζ_{+}(ƒ′) = pζ_{+}(T + g) − g(0) = pζ_{+}(ƒ) − ƒ(0^{−})
Let us now define the following equivalence relation: ƒ_{1} and ƒ_{2} indicating two distributions such as above, we will write ƒ_{1} ∼ ƒ_{2} if ƒ_{1} and ƒ_{2} have the same restriction on the interval ]−ε, +∞[ as soon as ε > 0 is sufficiently small. Then ζ_{+}(ƒ_{1}) depends only on the equivalence class of ƒ_{1}, which is called a germ of generalized function defined in a neighborhood of [0, +∞[ and, by abuse of language, a generalized function with positive support. One will write ζ_{+}(ƒ) = ζ_{+}(ƒ_{1}) = ζ(ƒ_{1+}). Let us note finally that ζ_{+}(ƒ) = 0 if, and only if, ƒ = 0.
Applications
The bilateral Laplace transform is used in particular for the design of traditional analog filters and of the optimal Wiener filter; in statistics, where it defines the moment-generating function of a distribution; it plays a crucial role in the continuous-time formulation of direct and inverse causal spectral factorization; finally, it is very much used to solve integral equations.
Generalization to the case of several variables
The bilateral Laplace transform extends to the case of functions or distributions of several variables, and Laurent Schwartz made the complete theory of it. Let T be a distribution defined on ℝ^{n}. The set of p belonging to ℂ^{n} for which x ↦ exp(−p·x) T(x) (in abusive notation) is a tempered distribution on ℝ^{n} is this time a cylinder of the form Γ + iℝ^{n}, where Γ is a convex subset of ℝ^{n} (in the case of one variable, Γ is none other than the band of convergence mentioned above). Let then, for ξ in Γ, be the distribution x ↦ exp(−ξ·x) T(x), again in abusive notation. This distribution is tempered. Let us note E(ξ) its Fourier transform. The function ξ ↦ E(ξ) is called the Laplace transform of T.
Transformation of Mellin
In mathematics, the Mellin transformation is an integral transformation which can be regarded as the multiplicative version of the bilateral Laplace transformation. It is defined by: {Mƒ}(s) = ∫_{0}^{+∞} x^{s−1} ƒ(x) dx. This integral transformation is strongly connected to the theory of Dirichlet series, and is often used in number theory and in the theory of asymptotic expansions; it is also strongly connected to the Laplace transformation, to the Fourier transformation, to the theory of the gamma function and to the special functions.
The inverse transformation is ƒ(x) = (1 ⁄ 2πi) ∫_{c−i∞}^{c+i∞} x^{−s} {Mƒ}(s) ds. The notation supposes that it is a curvilinear integral taken over a vertical line in the complex plane. The conditions under which this inversion is valid are given in the Mellin inversion theorem.
The transformation was named in honor of the Finnish mathematician Hjalmar Mellin.
Relationships to the other transformations
The bilateral Laplace transformation can be defined in terms of the Mellin transformation by
{Bƒ}(s) = {Mƒ(−ln x)}(s)
and conversely, we can obtain the Mellin transformation from the bilateral Laplace transformation by
{Mƒ}(s) = {Bƒ(e^{−x})}(s)
The Mellin transformation can be seen as an integration against a kernel x^{s} with respect to the multiplicative Haar measure dx ⁄ x, which is invariant under the dilation x ↦ αx, that is to say d(αx) ⁄ αx = dx ⁄ x; the bilateral Laplace transformation instead respects the additive Haar measure dx, which is translation-invariant, that is to say d(x + α) = dx.
We can also define the Fourier transformation in terms of the Mellin transformation, and vice versa; if we define the Fourier transformation as above, then
{Fƒ}(s) = {Bƒ}(is) = {Mƒ(−ln x)}(is)
We can also reverse the process and obtain
{Mƒ}(s) = {Bƒ(e^{−x})}(s) = {Fƒ(e^{−x})}(−is)
The Mellin transformation is also connected to the Newton series or binomial transformations, together with the generating function of the Poisson law, by means of the Poisson–Mellin–Newton cycle.
Integral of Cahen–Mellin
For c > 0, ℜ(y) > 0 and y^{−s} on the principal branch, one has
e^{−y} = (1 ⁄ 2πi) ∫_{c−i∞}^{c+i∞} Γ(s) y^{−s} ds
where Γ(s) is the gamma function of Euler. This integral is known under the name of the Cahen–Mellin integral.
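Underlying this formula is the fact that the Mellin transform of e^{−x} is Γ(s); a numerical sketch at the illustrative point s = 2.5:

```python
import math

# Numerical sketch: the Mellin transform of e^{-x} is the gamma function,
# M{e^{-x}}(s) = integral of x^{s-1} e^{-x} dx over (0, inf) = Gamma(s),
# the fact underlying the Cahen-Mellin integral.
s = 2.5
h = 1e-3
upper = 50.0                      # e^{-50} is negligible
n = int(upper / h)

# Midpoint-rule integration of x^{s-1} e^{-x} on (0, 50).
mellin = sum(((k + 0.5) * h) ** (s - 1.0) * math.exp(-(k + 0.5) * h) * h
             for k in range(n))
gamma_s = math.gamma(s)           # Gamma(2.5) = 3*sqrt(pi)/4
```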
Transformations for three-phase systems
The diagonalization of the flux ⁄ currents matrix: one seeks a basis of vectors in which the equations describing the operation of an electrical machine are decoupled, i.e. in which the quantities relative to one phase do not depend on the other phases; the matrices linking the different quantities are then diagonal. Once the basis addressing these concerns is found, the difficulty is then to interpret physically the quantities that compose it.
Extension of the theorem of Blondel, which allows one to replace a three-phase system by a two-phase equivalent. Indeed, between two coils whose axes have an angular difference of 90°, the mutual fluxes are zero, hence the disappearance of the off-diagonal terms of the flux ⁄ currents matrix.
The transform of Concordia is a mathematical tool used in electrical engineering in order to model a three-phase system by means of a two-phase model.
Philosophy of the transform of Concordia
A two-phase system made up of two coils perpendicular to each other and traversed by currents out of phase with each other by π ⁄ 2 makes it possible to create a field rotating at the speed ω.
A three-phase system made up of coils and currents out of phase with each other by 2π ⁄ 3 also makes it possible to create a field rotating at the speed ω.
Setting in equation
One can model the rotating field created by a three-phase system by a two-phase system thanks to the following transformations:
(i_{α}, i_{β})^{T} = C_{23} (i_{a}, i_{b}, i_{c})^{T} and (i_{a}, i_{b}, i_{c})^{T} = C_{32} (i_{α}, i_{β})^{T}
with:
(m_{a}, m_{b}, m_{c})^{T} = T (m_{h}, m_{α}, m_{β})^{T}
where T is the matrix of Concordia.
Transformation of Fortescue
Any unbalanced system of three-phase quantities can be put in the form of the sum of three balanced, or symmetrical, systems:
a direct balanced system noted G_{d}
an inverse balanced system noted G_{i}
a homopolar (zero-sequence) system noted G_{0}
Homopolar three-phase systems
g_{0} = G_{0} sin (ωt + φ_{0})
g_{0} = G_{0} sin (ωt + φ_{0})
g_{0} = G_{0} sin (ωt + φ_{0})
The interest of this false three-phase system is to facilitate the matrix writing of the transformation of Fortescue.
Matrix of the transformation
The goal is to find the values G_{d}, G_{i} and G_{0} starting from G_{1}, G_{2} and G_{3}.
Calculation of G_{0}: as the sum of the three quantities of a balanced system is zero, one necessarily has: 3G_{0} sin(ωt + φ_{0}) = G_{1} sin(ωt + φ_{1}) + G_{2} sin(ωt + φ_{2}) + G_{3} sin(ωt + φ_{3})
Operator of rotation: a
It is a complex number of modulus 1 and argument 2π ⁄ 3 : a = e^{j2π⁄3}
The result of its multiplication by the complex number associated with a quantity corresponds to another quantity of the same amplitude, out of phase by 2π ⁄ 3 compared with the initial quantity. It corresponds to a rotation of 2π ⁄ 3 in the Fresnel plane.
It satisfies the following properties: a³ = 1 and 1 + a + a² = 0
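These properties, and the resulting component formulas (written here with the usual convention, an assumption of this sketch), can be checked with complex numbers:

```python
import cmath

# Sketch of the symmetrical-component decomposition using the rotation
# operator a.  The component formulas below follow the usual convention
# (an assumption of this sketch).  For a balanced direct system
# (G1, G2, G3) = (1, a^2, a), the direct component is 1 and the inverse
# and homopolar components vanish.
a = cmath.exp(2j * cmath.pi / 3)

# Properties of the operator stated above.
prop1 = abs(a ** 3 - 1.0)
prop2 = abs(1.0 + a + a * a)

G1, G2, G3 = 1.0, a ** 2, a              # balanced direct system (phasors)
G0 = (G1 + G2 + G3) / 3.0                 # homopolar component
Gd = (G1 + a * G2 + a * a * G3) / 3.0     # direct component
Gi = (G1 + a * a * G2 + a * G3) / 3.0     # inverse component
```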
Transformation of Clarke
One seeks to express the various relations originally written in the system of axes (O1, O2, O3) in the system (Oα, Oβ). Here, we choose the axis Oβ behind the axis Oα. This is of interest in the study of the synchronous machine, since the excitation flux is on one axis and the induced emfs are along the quadrature axis, behind. O1 = 1, O2 = aO1, O3 = a²O1, with a = e^{j2π⁄3}; Oα = 1, Oβ = jOα
We then seek the transformation C such that:
(1, a, a²)^{T} = C (1, j)^{T}
with: C = [[c_{11}, c_{12}], [c_{21}, c_{22}], [c_{31}, c_{32}]]
By identification, one finds the relations: 1 = c_{11} + jc_{12} ⇒ c_{11} = 1 and c_{12} = 0; a = c_{21} + jc_{22} ⇒ c_{21} = −½ and c_{22} = √3 ⁄ 2; a² = c_{31} + jc_{32} ⇒ c_{31} = −½ and c_{32} = −√3 ⁄ 2
It is not suitable for a system with a nonzero homopolar (zero-sequence) component. One then adds a third component so that the zero-sequence X_{0} is equal to the sum of the three quantities X_{1}, X_{2}, X_{3}.
Transformation of Clarke
The transformation of Clarke is obtained in this way:
C =
1 −½ −½
0 √3⁄2 −√3⁄2
1 1 1
This transformation indeed diagonalizes the matrix L (flux ⁄ currents); it was built for this, among other things. In the system (α, β, 0) we get:
C L C^{−1} =
L−M 0 0
0 L−M 0
0 0 L+2M
with:
C^{−1} = ⅔
1 0 ½
−½ √3⁄2 ½
−½ −√3⁄2 ½
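A numerical sketch of the diagonalization, using the matrices displayed above with illustrative values L = 2, M = 0.5; with these matrices the similarity transform that comes out diagonal is C L C⁻¹:

```python
import math

# Sketch of the diagonalization claim with illustrative values L = 2,
# M = 0.5, so L - M = 1.5 and L + 2M = 3.  Cm and Cinv are the matrices
# displayed above.
s = math.sqrt(3.0) / 2.0
Cm = [[1.0, -0.5, -0.5],
      [0.0,  s,   -s],
      [1.0,  1.0,  1.0]]
Cinv = [[2.0 / 3.0 * v for v in row] for row in
        [[ 1.0,  0.0, 0.5],
         [-0.5,  s,   0.5],
         [-0.5, -s,   0.5]]]
L, M = 2.0, 0.5
Lmat = [[L, M, M], [M, L, M], [M, M, L]]   # flux/currents matrix

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

D = matmul(matmul(Cm, Lmat), Cinv)         # C . L . C^{-1}
off_diag = max(abs(D[i][j]) for i in range(3) for j in range(3) if i != j)
diag = [D[i][i] for i in range(3)]         # expected (L-M, L-M, L+2M)
```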
Transformation of Park
The flux ⁄ currents relations are now fully decoupled; however, there is still a dependence on the angle θ, i.e., for a rotating machine, a dependence on time. We can then consider modifying the transformation of Clarke by assuming the axes (Oα, Oβ) offset by an angle θ relative to the axis of phase 1; we christen the new axes (Od, Oq), named respectively the direct axis and the quadrature axis.
We pass system (α,β) the system (d, q, 0) by applying a simple rotation of angle θ around the z axis (see figure):
x_α
x_β
x_0
=
cos θ −sin θ 0
sin θ cos θ 0
0 0 1
×
x_d
x_q
x_0
By computing the product of the two matrices, we obtain Park's transformation:
P =
cos θ cos(θ − 2π/3) cos(θ − 4π/3)
−sin θ −sin(θ − 2π/3) −sin(θ − 4π/3)
1 1 1
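The product can be checked numerically; the sketch below (assuming the sign conventions reconstructed above) verifies that P equals the rotation of angle −θ applied to the Clarke matrix:

```python
import numpy as np

theta = 0.7  # arbitrary test angle

# Clarke matrix from the preceding section.
C = np.array([[1.0, -0.5, -0.5],
              [0.0, np.sqrt(3)/2, -np.sqrt(3)/2],
              [1.0, 1.0, 1.0]])
# Rotation of angle -theta about the z axis (alpha-beta -> d-q).
R_inv = np.array([[np.cos(theta),  np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])

# Park matrix written directly from the formula in the text.
c, s = np.cos, np.sin
P = np.array([[c(theta), c(theta - 2*np.pi/3), c(theta - 4*np.pi/3)],
              [-s(theta), -s(theta - 2*np.pi/3), -s(theta - 4*np.pi/3)],
              [1.0, 1.0, 1.0]])

assert np.allclose(P, R_inv @ C)   # P is the product of the two matrices
print("P == R(-theta) @ C")
```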
Ku transformation
The Ku transformation is another transformation of electrical quantities. If the coordinates resulting from it are denoted (x_b, x_f, x_0), we have, for example, for a current:
Hereafter are various mathematical matrices that can be used in research and scientific computation in electricity and electronics.
Critical point (mathematics)
In multivariable analysis, a critical point of a real-valued function of several variables is a point where its gradient vanishes. Critical points are used as intermediaries in the search for the extrema of such a function.
More generally, one can define the notion of critical point of a differentiable map between two differentiable manifolds: these are the points where the differential does not have maximal rank.
Roughly speaking, one must find all the points a such that: ∇f(a) = 0.
Critical points and extrema of a numerical function
A saddle point (point-col) is a critical point which is not a local extremum.
Let f be a function of n variables x_1, …, x_n, with real values, differentiable on an open set U. We say that f admits a critical point at a point u of U when its gradient vanishes at that point.
In particular, if u is a local extremum of f, then it is a critical point. The converse is false: once the critical points of a function have been determined, their nature must be examined, for example by computing the Hessian matrix of f.
Critical points and critical values for a map between manifolds
Let f be a differentiable map between two manifolds M and N. We say that f has a critical point at the point m of M if the tangent linear map of f at m is not surjective (that is, if f is not a submersion at m).
The images of the critical points under f are called critical values. Sard's theorem ensures that for an infinitely differentiable function, the set of critical values has measure zero.
Hessian matrix
In mathematics, the Hessian matrix (or simply the Hessian) of a numerical function f is the square matrix, denoted H(f), of its second partial derivatives.
More precisely, given a function f with real values
f(x_1, x_2, …, x_n)
and supposing that all the second partial derivatives of f exist, the coefficient of index i, j of the Hessian matrix of f is
H_ij(f) = ∂²f / ∂x_i ∂x_j
The determinant of this matrix is called the Hessian (or Hessian discriminant).
The term "Hessian" was introduced by James Joseph Sylvester, in homage to the German mathematician Ludwig Otto Hesse.
Let f in particular be a function of class C² defined on an open set U of the space E, with real values. Its Hessian matrix is well defined and, by Schwarz's theorem, it is symmetric.
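As a sketch, the second partial derivatives can be approximated by central finite differences (the helper name `hessian_fd` and the example function are illustrative choices, not from the source), and the symmetry promised by Schwarz's theorem checked numerically:

```python
import numpy as np

def hessian_fd(f, x, h=1e-5):
    """Central finite-difference approximation of H_ij = d2f / dx_i dx_j."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.zeros(n), np.zeros(n)
            e_i[i], e_j[j] = h, h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4*h*h)
    return H

# Example: f(x, y) = x**2 * y + y**3; the exact Hessian at (1, 2) is
# [[2y, 2x], [2x, 6y]] = [[4, 2], [2, 12]].
f = lambda v: v[0]**2 * v[1] + v[1]**3
H = hessian_fd(f, np.array([1.0, 2.0]))
print(np.round(H, 3))                  # approximately [[4, 2], [2, 12]]
assert np.allclose(H, H.T, atol=1e-4)  # symmetric, per Schwarz's theorem
```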
Application to the study of critical points
Suppose f is of class C² on an open set U. The Hessian matrix allows, in many cases, the nature of the critical points of the function f, that is, the points where its gradient vanishes, to be determined.
Necessary condition for a local extremum
if a is a local minimum of f, then it is a critical point and the Hessian at a is positive (semi-definite)
if a is a local maximum of f, then it is a critical point and the Hessian at a is negative (semi-definite)
In particular, if the Hessian at a critical point admits at least one strictly positive eigenvalue and one strictly negative eigenvalue, the critical point is a saddle point.
Sufficient condition for a local extremum
Precisely, a critical point of f is said to be degenerate when the Hessian discriminant vanishes, in other words when 0 is an eigenvalue of the Hessian. At a non-degenerate critical point, the signs of the eigenvalues (all nonzero) determine the nature of the point (local extremum or saddle point):
if the Hessian is positive definite, the function attains a local minimum at the critical point
if the Hessian is negative definite, the function attains a local maximum at the critical point
if there are eigenvalues of each sign, the critical point is a saddle point
In this last case, the index of the critical point is defined as the number of negative eigenvalues.
In dimension two in particular, the Hessian discriminant being the product of the eigenvalues, its sign suffices to determine the nature of a non-degenerate critical point.
Finally, for a degenerate critical point, none of these implications holds.
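The eigenvalue criteria above translate directly into code; a minimal sketch (the function name `classify_critical_point` is a hypothetical choice):

```python
import numpy as np

def classify_critical_point(H, tol=1e-10):
    """Classify a critical point from the eigenvalues of its Hessian matrix."""
    eig = np.linalg.eigvalsh(H)   # real eigenvalues of a symmetric matrix
    if np.any(np.abs(eig) < tol):
        return "degenerate (no conclusion)"
    if np.all(eig > 0):
        return "local minimum"
    if np.all(eig < 0):
        return "local maximum"
    return "saddle point (index %d)" % np.sum(eig < 0)

# f(x, y) = x**2 - y**2 has a critical point at the origin with Hessian
# diag(2, -2): one eigenvalue of each sign, hence a saddle point of index 1.
print(classify_critical_point(np.diag([2.0, -2.0])))  # saddle point (index 1)
print(classify_critical_point(np.diag([2.0, 3.0])))   # local minimum
```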
Hessian curve
If C is the algebraic curve with (homogeneous) projective equation f(X, Y, Z) = 0, the Hessian curve (or simply Hessian) of C is the curve whose projective equation is H(f)(X, Y, Z) = 0, where H(f) is the Hessian (the determinant of the Hessian matrix) of f. The Hessian of C intersects C in the critical points and the inflection points of C. If C is of degree d, its Hessian is of degree 3(d − 2); by Bézout's theorem, the number of inflection points of a regular curve of degree d is thus 3d(d − 2), which is a particular case of one of Plücker's formulas.
Extension to the framework of manifolds
When M is a differentiable manifold and f an infinitely differentiable numerical function on M, it is possible to define the differential of f at any point, but not the Hessian matrix, as can be seen by writing a change-of-chart formula.
However, when m is a critical point of the function f, the Hessian matrix of f at m can indeed be defined. One can thus speak of degenerate and non-degenerate critical points and extend the results of the preceding paragraph.
Morse's lemma
Morse's lemma shows that the behavior of a regular function in the neighborhood of a non-degenerate critical point is entirely determined by the index of the critical point.
Morse's lemma: Let f be a C^∞ function on a differentiable manifold of dimension n. Consider a non-degenerate critical point m of the function f, and let k denote its index. Then there exists a local coordinate system x_1, …, x_n, centered at m, such that the corresponding expression of f is
f = f(m) − x_1² − … − x_k² + x_{k+1}² + … + x_n²
It follows from the lemma, in particular, that non-degenerate critical points are isolated.
Morse's lemma extends to Hilbert spaces under the name of the Morse–Palais lemma.
Morse theory
A function all of whose critical points are non-degenerate is called a Morse function. Morse theory aims to connect the study of the topology of a manifold to that of the critical points of the functions that can be defined on it.
The Jacobian matrix
In vector analysis, the Jacobian matrix is a matrix associated with a vector function at a given point. Its name comes from the mathematician Carl Gustav Jacob Jacobi. The determinant of this matrix, called the Jacobian, plays an important role in the solution of nonlinear problems.
The Jacobian matrix is the matrix of the first-order partial derivatives of a vector function.
Let f be a function from an open set of R^n to R^m. Such a function is defined by its m component functions with real values:
f : (x_1, …, x_n) ↦ (f_1(x_1, …, x_n), …, f_m(x_1, …, x_n))
The partial derivatives of these functions at a point M, if they exist, can be arranged in a matrix with m rows and n columns, called the Jacobian matrix of f:
J_f(M) =
∂f_1/∂x_1 … ∂f_1/∂x_n
⋮ ⋱ ⋮
∂f_m/∂x_1 … ∂f_m/∂x_n
This matrix is denoted J_f(M), ∂(f_1, …, f_m)/∂(x_1, …, x_n), or D(f_1, …, f_m)/D(x_1, …, x_n).
For i = 1, …, m, the i-th row of this matrix is the transpose of the gradient vector, at the point M, of the function f_i, when it exists. The Jacobian matrix is also the matrix of the differential of the function, when it exists. It can be shown that the function f is of class C¹ if and only if its partial derivatives exist and are continuous.
Example: the Jacobian matrix of the function f from R³ to R⁴ defined by:
Properties
The composition f ∘ g of differentiable functions is differentiable, and its Jacobian matrix is given by the formula:
J_{f∘g} = (J_f ∘ g) · J_g
Jacobian determinant
If m = n, then the Jacobian matrix of f is a square matrix. We can then define its determinant det J_f, called the Jacobian determinant, or Jacobian. Saying that the Jacobian is nonzero thus amounts to saying that the Jacobian matrix is invertible.
A function f of class C¹ is invertible in the neighborhood of M, with an inverse f⁻¹ of class C¹, if and only if its Jacobian at M is nonzero. Moreover, the Jacobian matrix of f⁻¹ is deduced from the inverse of the Jacobian matrix of f by means of the formula
J_{f⁻¹} = (J_f ∘ f⁻¹)⁻¹
The change-of-variables theorem in multiple integrals involves the absolute value of the Jacobian.
It is not necessary to suppose that V is open, nor that f is a homeomorphism from U onto V: this follows from the hypotheses, by the invariance-of-domain theorem.
One first proves this theorem when f is a diffeomorphism, which, by the local inversion theorem, simply amounts to adding the hypothesis that the Jacobian of f does not vanish at any point of U; one is then freed from this hypothesis thanks to Sard's theorem.
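The Jacobian matrix and its determinant can be illustrated numerically; in the sketch below (`jacobian_fd` is a hypothetical helper name), the Jacobian of the polar-coordinates map (r, t) ↦ (r cos t, r sin t) is approximated by central differences, and its determinant is compared with the exact value r:

```python
import numpy as np

def jacobian_fd(f, x, h=1e-6):
    """Finite-difference Jacobian J_ij = df_i/dx_j of f: R^n -> R^m."""
    fx = f(x)
    J = np.empty((len(fx), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x))
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2*h)
    return J

# Polar coordinates f(r, t) = (r cos t, r sin t): exact Jacobian determinant r.
f = lambda v: np.array([v[0]*np.cos(v[1]), v[0]*np.sin(v[1])])
x = np.array([2.0, 0.5])
J = jacobian_fd(f, x)
print(round(np.linalg.det(J), 6))   # approximately 2.0, i.e. r
```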
Newton's method
In numerical analysis, Newton's method (or the Newton–Raphson method) is, in its simplest application, an efficient algorithm for numerically finding a precise approximation of a zero (or root) of a real function of a real variable.
Presentation
In its modern form, the algorithm can be presented as follows: at each iteration, the function whose zero is sought is linearized at the current iterate, and the next iterate is taken to be the zero of the linearized function. This summary description indicates that at least two conditions are necessary for the proper operation of the algorithm: the function must be differentiable at the points visited; to this is added the strong constraint of having to take the first iterate rather close to a regular zero of the function, so that the convergence of the process is assured.
The principal interest of Newton's algorithm is its local quadratic convergence. In picturesque but imprecise terms, this means that the number of correct significant digits of the iterates doubles with each iteration, asymptotically. Indeed, if the initial iterate is not taken sufficiently close to a zero, the sequence of iterates generated by the algorithm has an erratic behavior, whose possible convergence can only be the fruit of chance.
Applied to the derivative of a real function, this algorithm makes it possible to obtain critical points. This observation is at the origin of its use in optimization, with or without constraints.
Example: Newton takes as initial iterate the point x_1 = 2, which differs by less than 10% from the true value of a root. He writes x = 2 + d_1, where d_1 is thus the increment to add to 2 to obtain the root x. He replaces x by 2 + d_1 in the equation, which becomes
d_1³ + 6d_1² + 10d_1 − 1 = 0
and whose root must be found in order to add it to 2. He neglects d_1³ + 6d_1² because of its smallness (one supposes |d_1| ≪ 1), so that there remains 10d_1 − 1 = 0, i.e. d_1 = 0.1, which gives as new approximation x_2 = x_1 + d_1 = 2.1. He then writes d_1 = 0.1 + d_2, where d_2 is thus the increment to add to 0.1 to obtain the root of the preceding polynomial. He therefore replaces d_1 by 0.1 + d_2 in the preceding polynomial to obtain
d_2³ + 6.3d_2² + 11.23d_2 + 0.061 = 0
One would obtain the same equation by replacing x by 2.1 + d_2 in the initial polynomial. Neglecting the first two terms, there remains 11.23d_2 + 0.061 = 0, i.e. d_2 ≈ −0.0054, which gives as new approximation x_3 = x_2 + d_2 ≈ 2.0946. One can continue the operations as long as desired.
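The incremental computation above corresponds to the cubic x³ − 2x − 5 = 0 (recovered by undoing the substitution x = 2 + d_1). A minimal sketch of the modern iteration x_{k+1} = x_k − f(x_k)/f′(x_k) reproduces the same sequence of approximations:

```python
# Newton's method on f(x) = x**3 - 2*x - 5, starting from x = 2.
f = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2

x = 2.0
for _ in range(4):
    x = x - f(x) / df(x)   # first two steps give 2.1 then ~2.094568
print(round(x, 6))   # 2.094551, consistent with x_3 = 2.0946 above
```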
Real function of a real variable
The algorithm: we thus seek to construct a good approximation of a zero of the function of a real variable f(x) by relying on its first-order Taylor expansion. For this, starting from a point x_0 that is preferably chosen close to the zero to be found (by making rough estimates, for example), we approximate the function to first order; in other words, we consider it approximately equal to its tangent at this point:
f(x) ≈ f(x_0) + f′(x_0)(x − x_0)
From there, to find a zero of this approximating function, it suffices to compute the intersection of the tangent line with the x-axis, that is, to solve the affine equation:
0 = f(x_0) + f′(x_0)(x − x_0)
We then obtain a point x_1 which, in general, has a good chance of being closer to the true zero of f than the preceding point x_0. By this operation we can therefore hope to improve the approximation by successive iterations: we again approximate the function by its tangent at x_1 to obtain a new point x_2, and so on.
Illustration of Newton's method
This method requires that the function have a tangent at each of the iterates constructed; for this it suffices, for example, that f be differentiable.
Formally, we start from a point x_0 belonging to the domain of definition of the function and construct by recurrence the sequence:
x_{k+1} = x_k − f(x_k) / f′(x_k), where f′ denotes the derivative of the function f.
The point x_{k+1} is indeed the solution of the affine equation f(x_k) + f′(x_k)(x − x_k) = 0.
It may happen that the recurrence must terminate: if, at step k, x_k does not belong to the domain of definition, or if the derivative f′(x_k) is zero; in these cases, the method fails.
If the unknown zero α is isolated, then there exists a neighborhood of α such that for all starting values x_0 in this neighborhood, the sequence (x_k) converges to α. Moreover, if f′(α) is nonzero, then the convergence is quadratic, which intuitively means that the number of correct digits roughly doubles at each step.
Although the method is very efficient, certain practical aspects must be taken into account. Above all, Newton's method requires that the derivative actually be computed. When the derivative is merely estimated by taking the slope between two points of the function, the method takes the name of the secant method, which is less efficient (of order 1.618, the golden ratio) and inferior to other algorithms. Moreover, if the starting value is too far from the true zero, Newton's method can enter an infinite loop without producing an improved approximation. Because of this, every implementation of Newton's method should include a check on the iteration count.
Convergence
The speed of convergence of a sequence (x_n) obtained by Newton's method can be obtained as an application of the Taylor–Lagrange formula. It is a matter of bounding log |x_n − α| from above.
Let f be a function defined in a neighborhood of α and twice continuously differentiable. Suppose that α is a zero of f which we seek to approximate by Newton's method. We assume that α is a zero of order 1, in other words that f′(α) is nonzero. The Taylor–Lagrange formula is written:
0 = f(α) = f(x) + f′(x)(α − x) + (f″(ξ)/2)(α − x)², with ξ between x and α.
Starting from the approximation x, Newton's method provides after one iteration:
N_f(x) − α = (f″(ξ) / 2f′(x)) (x − α)²
For a compact interval I containing x and α and included in the domain of definition of f, we set m_1 = min_{x∈I} |f′(x)| and M_2 = max_{x∈I} |f″(x)|. Then, for all x ∈ I:
|N_f(x) − α| ≤ (M_2 / 2m_1) |x − α|². By immediate recurrence, it follows that K|x_n − α| ≤ (K|x_0 − α|)^{2^n}, where K = M_2 / 2m_1. Passing to the logarithm: log |x_n − α| ≤ 2^n log(K|x_0 − α|) − log K
The convergence of x_n to α is thus quadratic, provided that |x_0 − α| < 1/K.
Stopping criterion
Possible stopping criteria, defined relative to a numerically negligible quantity, are:
|f(x_k)| < ε_1 or |x_{k+1} − x_k| < ε_2, where ε_1, ε_2 ∈ R⁺ represent approximation errors characterizing the quality of the numerical solution.
In all cases, it may happen that the stopping criterion is satisfied at points that do not correspond to solutions of the equation to be solved.
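The two stopping criteria, together with the iteration-count guard recommended earlier, can be combined in a short sketch (the helper name `newton` and the default tolerances are illustrative):

```python
import math

def newton(f, df, x0, eps1=1e-12, eps2=1e-12, max_iter=50):
    """Newton's method with the stopping criteria |f(x_k)| < eps1 and
    |x_{k+1} - x_k| < eps2, plus an iteration cap guarding against the
    infinite loops mentioned above."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps1:
            return x
        d = df(x)
        if d == 0:
            raise ZeroDivisionError("derivative vanished: the method fails")
        x_new = x - fx / d
        if abs(x_new - x) < eps2:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example: the zero of cos(x) - x, starting near 0.7.
root = newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 0.7)
print(round(root, 6))   # 0.739085
```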
Square root
A particular case of Newton's method is the Babylonian algorithm, also known as Heron's method: it consists, in order to compute the square root of a, in applying Newton's method to the solution of
f(x) = x² − a
One then obtains, using the formula for the derivative f′(x) = 2x, a method of approximation of the solution √a given by the following iterative formula:
x_{k+1} = (1/2)(x_k + a / x_k)
The convergence of the sequence (x_k) is shown by recurrence: for a given k, one can show that if 0 < √a ≤ x_k, then 0 < √a ≤ x_{k+1} ≤ x_k. Moreover, if 0 < x_k ≤ √a, then √a ≤ x_{k+1}. The sequence is thus decreasing at least from the second term onward. It is also bounded below, therefore it converges. It remains to show that this limit ℓ is indeed equal to √a: one obtains this result by showing that ℓ = √a is necessary for x_{k+1} − ℓ to tend to 0 as k tends to +∞.
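The iteration above can be sketched in a few lines (`heron_sqrt` is a hypothetical name; the stopping test uses |x² − a| as in the general criteria):

```python
def heron_sqrt(a, x0=None, tol=1e-12):
    """Babylonian (Heron) method: Newton applied to f(x) = x**2 - a,
    which gives the iteration x_{k+1} = (x_k + a / x_k) / 2."""
    x = a if x0 is None else x0
    while abs(x*x - a) > tol:
        x = (x + a / x) / 2
    return x

print(round(heron_sqrt(2.0), 10))   # 1.4142135624
```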
Intersection of graphs
One can determine an intersection point of the graphs of two differentiable real functions f and g, that is, a point x such that f(x) = g(x), by applying Newton's method to the function f − g.
Complex function
Newton's method applied to the polynomial z³ − 1 with complex variable z converges, starting from every point of the plane (of complex numbers) colored red, green, or blue, to one of the three roots of this polynomial, each color corresponding to a different root. The remaining points, lying on the lighter structure called the Newton fractal, are the starting points for which the method does not converge.
The method can also be applied to find zeros of complex functions. Several behaviors are possible:
convergence to a zero
infinite limit
the sequence admits a limit cycle; in other words, the sequence can be partitioned into p disjoint subsequences of the form (z_{n_0 + kp})_k, each of which converges to distinct points (which are not zeros of f) forming a periodic cycle for the function z ↦ z − f(z)/f′(z)
the sequence approaches the set of zeros of the function without there being a limit cycle, and at each step of the iteration it finds itself close to a zero different from the previous ones
chaotic behavior of the sequence
Generalizations/variants
Systems of equations in several variables
One can also use Newton's method to solve a system of n (nonlinear) equations with n unknowns x = (x_1, …, x_n), which amounts to finding a zero of a function F from R^n to R^n, which must be differentiable. In the formulation given above, one must multiply by the inverse of the Jacobian matrix F′(x_k) instead of dividing by f′(x_k). In practice, rather than computing the inverse of this matrix, one solves the linear system:
F’ (x_{k}) (x_{k + 1}  x_{k}) =  F (x_{k})
in the unknown x_{k+1} − x_k. Once again, this method works only for an initial value x_0 sufficiently close to a zero of F.
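A minimal sketch for a system in R² (the example system, intersecting the unit circle with the line y = x, is an illustrative choice): at each step we solve the linear system F′(x_k)(x_{k+1} − x_k) = −F(x_k) rather than inverting the Jacobian matrix.

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 1.0,   # unit circle
                     x - y])              # line y = x

def JF(v):
    x, y = v
    return np.array([[2*x, 2*y],
                     [1.0, -1.0]])

x = np.array([1.0, 0.5])   # initial point, close enough to a zero of F
for _ in range(20):
    step = np.linalg.solve(JF(x), -F(x))   # solve instead of inverting
    x = x + step
    if np.linalg.norm(step) < 1e-12:
        break
print(np.round(x, 6))   # [0.707107 0.707107], i.e. (1/sqrt(2), 1/sqrt(2))
```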
Symmetric matrix
In linear and bilinear algebra, a symmetric matrix is a square matrix that is equal to its own transpose.
Every diagonal matrix is symmetric.
Properties
A matrix representing a bilinear form is symmetric if and only if that form is symmetric.
The set of symmetric matrices of order n with coefficients in a commutative field is a vector subspace of dimension n(n + 1)/2 of the vector space of square matrices of order n, and, if the characteristic of the field is different from 2, a complementary subspace is that of the antisymmetric matrices.
Real symmetric matrices: Euclidean structure
We denote by S_n(R), or simply S_n if no confusion is possible, the vector space of real symmetric matrices of order n. This vector space of dimension n(n + 1)/2 is canonically endowed with a Euclidean structure, which is that of M_n(R). The scalar product is defined by
(A, B) ∈ S_n × S_n ↦ ⟨A, B⟩ := tr(AᵀB) = ∑_{1≤i≤n, 1≤j≤n} A_ij B_ij ∈ R, where tr(A) = ∑_{i=1}^{n} A_ii denotes the trace of A and A_ij denotes the (i, j) entry of A. The norm associated with this scalar product is the Frobenius norm, denoted here simply ‖A‖ = √(tr(AᵀA)) = (∑_{1≤i≤n} ∑_{1≤j≤n} A_ij²)^{1/2}. With these notations, the Cauchy–Schwarz inequality is then written: for all A and B ∈ S_n, |⟨A, B⟩| ≤ ‖A‖ ‖B‖
Spectral theory
Spectral decomposition
The spectral theorem (in finite dimension) states that every symmetric matrix with real entries is diagonalizable by means of an orthogonal matrix. Its eigenvalues are therefore real and its eigenspaces are mutually orthogonal.
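In practice, numpy's `eigh` computes exactly this decomposition: real eigenvalues (in ascending order) and an orthogonal matrix V with A = V Diag(λ) Vᵀ. A sketch with an illustrative matrix whose eigenvalues are 1, 2, and 4:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, V = np.linalg.eigh(A)   # eigenvalues ascending, columns of V orthonormal

assert np.allclose(V @ V.T, np.eye(3))          # V is orthogonal
assert np.allclose(V @ np.diag(lam) @ V.T, A)   # spectral decomposition
print(np.round(lam, 6))   # [1. 2. 4.] — all real
```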
Fan's inequality. We denote by λ_i(A) ∈ R the n eigenvalues of A ∈ S_n, arranged in decreasing order: λ_1(A) ≥ λ_2(A) ≥ … ≥ λ_n(A). We introduce the map λ : S_n → R^n : A ↦ (λ_1(A), …, λ_n(A)) and, for a (column) vector v ∈ R^n, we denote by vᵀ the transposed vector and by Diag(v) the diagonal matrix whose (i, i) entry is v_i.
Fan's inequality: For all A and B ∈ S_n, we have ⟨A, B⟩ ≤ λ(A)ᵀ λ(B), with equality if and only if the ordered spectral decompositions of A and B can be obtained with the same orthogonal matrix, that is, if and only if there exists an orthogonal V such that A = V Diag(λ(A)) Vᵀ and B = V Diag(λ(B)) Vᵀ
Positive symmetric matrices. See: positive matrix and positive definite matrix.
A real symmetric matrix S of order n is said to be positive (semi-definite) if the associated symmetric bilinear form is positive, that is, if
∀ x ∈ R^{n}, x^{T} Sx ≥ 0
A real symmetric matrix S of order n is said to be positive definite if the associated bilinear form is definite and positive, that is, if
∀ x ∈ R^{n} \ {0}, x^{T} Sx > 0
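By the spectral theorem, positive definiteness is equivalent to all eigenvalues being strictly positive, which gives a simple numerical test (a sketch; `is_positive_definite` is a hypothetical helper name):

```python
import numpy as np

def is_positive_definite(S):
    """A real symmetric matrix is positive definite iff all its
    eigenvalues are strictly positive."""
    return bool(np.all(np.linalg.eigvalsh(S) > 0))

S1 = np.array([[2.0, -1.0],
               [-1.0, 2.0]])   # x^T S1 x = x1^2 + (x1 - x2)^2 + x2^2 > 0
S2 = np.array([[1.0, 2.0],
               [2.0, 1.0]])    # eigenvalues 3 and -1: indefinite
print(is_positive_definite(S1), is_positive_definite(S2))   # True False
```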
In this chapter, K denotes a commutative field.
Definitions
Let n and p be two nonzero natural integers.
We call a matrix with entries in K of type (n, p) any map from {1, 2, …, n} × {1, 2, …, p} to K (a family of elements of K indexed by {1, 2, …, n} × {1, 2, …, p}), that is, a rectangular array with n rows and p columns of the form:
a11 a12 … a1p
a21 a22 … a2p
⋮ ⋮ ⋱ ⋮
an1 an2 … anp
where a_11, a_12, …, a_np ∈ K are called the elements or coefficients of the matrix. Such a matrix is also denoted (a_ij)_{1≤i≤n, 1≤j≤p}, or more simply (a_ij).
The set of matrices of type (n, p) with elements in K is denoted M_{n,p}(K).
When n = p, the matrix is said to be square of order n.
When p = 1, the matrix comprises only one column of n elements:
a_1
a_2
⋮
a_n
and one speaks of a (column) vector. The set of square matrices of type (n, n), or of order n, is denoted K_n. When K = R, the matrix is said to be real; when K = C, complex. The elements a_11, a_22, …, a_nn form the principal diagonal of the matrix.
Inverse matrix
Let M be a matrix. The inverse of M, if it exists, is defined as the unique matrix N such that: M · N = N · M = I_n
Transposed matrix
First of all, one speaks of the transpose of a matrix. The transpose of a matrix M is denoted ᵗM.
It is the matrix obtained from M by exchanging the rows and the columns; that is, to obtain N = ᵗM, we set n_ij = m_ji (with N = (n_ij) and M = (m_ij)).
Other notation: ᵗM = (m_ji).
Property: when the matrix M is symmetric, we have m_ij = m_ji, which gives ᵗM = M.
Diagonal matrix
A square matrix (a_ij) is said to be diagonal if all the entries off the diagonal are zero: ∀(i, j) ∈ {1, …, n}², i ≠ j ⇒ a_ij = 0. Such a matrix is denoted diag(a_11, a_22, …, a_nn).
The set of diagonal matrices is denoted D_n(K).
Triangular matrix
Lower triangular matrix
A square matrix (a_ij) is said to be lower triangular if all the entries located above the principal diagonal are zero: ∀(i, j) ∈ {1, …, n}², i < j ⇒ a_ij = 0.
If, moreover, the entries of the principal diagonal are zero, the matrix is said to be strictly lower triangular.
1 0 0
2 3 0
4 5 6
A lower triangular matrix
0 0 0
2 0 0
4 5 0
A strictly lower triangular matrix
The set of lower triangular matrices is denoted T_i(K).
Upper triangular matrix
Similarly, a square matrix (a_ij) is said to be upper triangular if all the entries located below the principal diagonal are zero: ∀(i, j) ∈ {1, …, n}², i > j ⇒ a_ij = 0.
If, moreover, the entries of the principal diagonal are zero, the matrix is said to be strictly upper triangular.
1 2 3
0 4 5
0 0 6
An upper triangular matrix
0 2 3
0 0 5
0 0 0
A strictly upper triangular matrix
The set of upper triangular matrices is denoted T_s(K).
The determinant of a triangular matrix is the product of the entries of the principal diagonal. For the upper triangular example above: det = 1 × 4 × 6 = 24
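This property can be checked numerically on the upper triangular example above (a sketch using numpy):

```python
import numpy as np

# Upper triangular example from the text; determinant = product of diagonal.
T = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])
print(round(np.linalg.det(T)), int(np.prod(np.diag(T))))   # 24 24
```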
Diagonal matrix
A square matrix is called a diagonal matrix when a_ij = 0 for all i ≠ j, which means that all the entries located off the principal diagonal are zero. If all the nonzero entries of a diagonal matrix are equal, the matrix is called a scalar matrix.
1 0 0
0 2 0
0 0 3
A diagonal matrix
2 0 0
0 2 0
0 0 2
A scalar matrix
Identity matrix
An identity matrix is a scalar matrix in which a_ii = 1.
1 0 0
0 1 0
0 0 1
A 3x3 identity matrix. When one multiplies a matrix by the identity matrix, one recovers the starting matrix: A_{n×m} · I_m = A_{n×m}
Symmetric and antisymmetric matrices. A matrix A is said to be symmetric if it is equal to its transpose: ᵗA = A. A matrix A is said to be antisymmetric if it is equal to the opposite of its transpose: ᵗA = −A
Orthogonal matrices
Two matrices M and N are said to be mutually orthogonal if MN = NM = 0. (Note that a single matrix M is usually called orthogonal when ᵗM · M = I_n.)
Idempotent matrices
These matrices have the following property: M² = M (and hence Mⁿ = M for every n ≥ 1).
Nilpotent matrices
A matrix M is said to be nilpotent if: ∃p ∈ N : M^p = 0
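These last two properties can be checked directly (a sketch; strictly triangular matrices such as the upper triangular example earlier are a classic source of nilpotent matrices):

```python
import numpy as np

# A strictly upper triangular matrix is nilpotent: here N**3 = 0.
N = np.array([[0, 2, 3],
              [0, 0, 5],
              [0, 0, 0]])
print(np.linalg.matrix_power(N, 3))   # the 3x3 zero matrix

# An orthogonal projection is idempotent: P @ P == P.
P = np.array([[1, 0],
              [0, 0]])
assert np.array_equal(P @ P, P)
```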