Divergence theorem

In vector analysis, the divergence theorem, also called the Green-Ostrogradsky theorem, asserts the equality between the integral of the divergence of a vector field over a volume in ℝ³ and the flux of this field through the boundary of the volume, which is a surface integral.
The equality is the following:
∫∫∫V div F dV = ∯∂V F · dS
where
V : is the volume
∂V : is the boundary of V
dS : is the normal vector to the surface, directed towards the outside and of length equal to the surface element it represents
F : is continuously differentiable at every point of V
This theorem follows from Stokes' theorem, which itself generalizes the fundamental theorem of calculus.
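As a concrete check, here is a minimal numerical sketch (assuming numpy is available) for the field F(x, y, z) = (x, y, z) over the unit ball: div F = 3, so both sides of the equality should equal 4π.

```python
import numpy as np

# Check div-theorem equality for F(x,y,z) = (x,y,z) on the unit ball:
# div F = 3, so the volume integral is 3*(4/3)*pi = 4*pi, and on the unit
# sphere F.n = 1, so the outgoing flux is the sphere area 4*pi.

# Volume integral of div F by Monte Carlo over the unit ball
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200_000, 3))
inside = (pts**2).sum(axis=1) <= 1.0
cube_volume = 8.0
vol_integral = 3.0 * cube_volume * inside.mean()   # div F = 3 everywhere

# Flux through the unit sphere by quadrature in spherical angles
theta = np.linspace(0.0, np.pi, 400)          # polar angle
phi = np.linspace(0.0, 2.0 * np.pi, 800)      # azimuth
dth, dph = theta[1] - theta[0], phi[1] - phi[0]
T, P = np.meshgrid(theta, phi, indexing="ij")
# On the unit sphere, F = n, so F.n = 1 and dS = sin(theta) dtheta dphi
flux = np.sum(np.sin(T)) * dth * dph

print(vol_integral, flux, 4.0 * np.pi)   # all three agree closely
```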
Physical interpretation
It is an important result in mathematical physics, in particular in electrostatics and fluid dynamics, where the theorem expresses a conservation law. According to its sign, the divergence expresses the dispersion or the concentration of a quantity (such as a mass, for example), and the preceding theorem states that a dispersion within a volume is necessarily accompanied by an equivalent total flux leaving its boundary.
This theorem makes it possible in particular to recover the integral version of Gauss's theorem in electromagnetism from the Maxwell-Gauss equation: div E = ρ ⁄ ε0
Other relations
This theorem also makes it possible to deduce several useful formulas of vector calculus.
In the expressions below,
∇ · F = div F, ∇g = grad g, ∇ ∧ F = rot F
∫∫∫V (F · ∇g + g(∇ · F)) dV = ∯∂V gF · dS
∫∫∫V ∇g dV = ∯∂V g dS
∫∫∫V (G · (∇ ∧ F) − F · (∇ ∧ G)) dV = ∯∂V (F ∧ G) · dS
∫∫∫V ∇ ∧ F dV = ∯∂V dS ∧ F
∫∫∫V (ƒ∇²g + ∇ƒ · ∇g) dV = ∯∂V ƒ∇g · dS

Gauss's theorem

In electromagnetism, Gauss's theorem makes it possible to calculate the flux of an electric field through a surface, taking the charge distribution into account. It is due to Carl Friedrich Gauss.
The flux of the electric field through a closed surface S is equal to the sum of the charges contained in the volume V delimited by this surface, divided by ε0, the permittivity of free space:
∯S E · dS = (1 ⁄ ε0) ∫∫∫V ρ dτ = ∑Qint ⁄ ε0
This equation is the integral form of the Maxwell-Gauss equation: div E = ρ ⁄ ε0
The flux integral is greatly simplified if an adequate Gaussian surface is chosen. This choice depends on the symmetry of the charge distribution. Three symmetries are very commonly used:
Spherical distribution
Cylindrical distribution
Plane distribution
This is a general property in physics arising from Curie's principle: the effects have, at least, the same symmetries as the causes.
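Below is a small numerical sketch (numpy assumed) of the theorem for a point charge: the flux through a centred sphere equals Q ⁄ ε0 regardless of the radius; the charge value is an arbitrary test value.

```python
import numpy as np

# Flux of the field of a point charge Q at the origin through a sphere of
# radius r centred on it: Gauss's theorem predicts Q/eps0 for every r.
eps0 = 8.8541878128e-12
Q = 1.0e-9                                   # 1 nC, arbitrary test charge

def flux_through_sphere(r, n_theta=400, n_phi=800):
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    T, _ = np.meshgrid(theta, phi, indexing="ij")
    # E is radial: E.n = Q/(4 pi eps0 r^2); dS = r^2 sin(theta) dth dph
    En = Q / (4.0 * np.pi * eps0 * r**2)
    return np.sum(En * r**2 * np.sin(T)) * dth * dph

for r in (0.1, 1.0, 10.0):
    print(r, flux_through_sphere(r), Q / eps0)   # same flux for every radius
```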

Stokes' theorem

In mathematics, and more particularly in differential geometry, Stokes' theorem is a central result on the integration of differential forms, which generalizes the fundamental theorem of calculus as well as many theorems of vector analysis. It has multiple applications, thus providing a form that physicists and engineers readily use, particularly in fluid mechanics.
The theorem is attributed to Sir George Gabriel Stokes, but the first to discover this result was actually Lord Kelvin. The mathematician and the physicist maintained an active correspondence on this subject for five years, from 1847 to 1853. The form initially discovered by Kelvin, often called the Kelvin-Stokes theorem or sometimes simply Stokes' theorem, is the particular case of the theorem concerning the circulation of the curl, which is described in the paragraph on the physical meaning of the theorem.
Statement and proof
Stokes' theorem: let M be an oriented differential manifold of dimension n and ω an (n−1)-differential form with compact support on M, of class C¹.
Then one has
∫M dω = ∫∂M i*ω
where d denotes the exterior derivative,
∂M the boundary of M, equipped with the outgoing orientation, and
i : ∂M → M is the canonical injection.
The usual proof requires a good definition of integration; its apparent simplicity is misleading. The idea is to use a partition of unity adapted to the problem in the definition of the integral of a differential form, and to reduce the statement to an almost obvious case.
Let {Ui}i be a locally finite covering of M by domains of local charts
φi : Ui → φi(Ui) ⊂ ℝⁿ
such that
φi(Ui ∩ ∂M) = φi(Ui) ∩ ({0} × ℝⁿ⁻¹)
Let us introduce χi, a partition of unity subordinate to {Ui}.
Since the support of ω is closed, the differential form ω is written
ω = Σ χiω
where the summation has finite support.
Let us set βi = (φi⁻¹)*[χiω],
a differential form with compact support on M′ = ℝ₊ × ℝⁿ⁻¹.
The restriction of φi is a diffeomorphism onto its image preserving the outgoing orientations;
one thus has
∫∂M χiω = ∫∂M′ βi
Since the pullback by φi commutes with the exterior derivative d, one has
∫M d[χiω] = ∫M′ dβi
By summation, Stokes' theorem is proved once the particular case M′ = ℝ₊ × ℝⁿ⁻¹ is established.
An (n−1)-form ω on M′ = ℝ₊ × ℝⁿ⁻¹
is written
ω = Σi=1..n ƒi dx1 ∧ ... ∧ d̂xi ∧ ... ∧ dxn
where the hat indicates an omission. One then finds
dω = Σi=1..n (Σj=1..n (∂ƒi ⁄ ∂xj) dxj) ∧ dx1 ∧ ... ∧ d̂xi ∧ ... ∧ dxn = Σi=1..n (−1)^(i−1) (∂ƒi ⁄ ∂xi) dx1 ∧ ... ∧ dxn

Dirac distribution

The Dirac distribution, also called by abuse of language the Dirac δ function, introduced by Paul Dirac, can informally be regarded as a function δ which takes an infinite value at 0 and the value zero everywhere else, and whose integral over ℝ is equal to 1. The graph of the function δ can be likened to the whole x-axis together with the positive half of the y-axis. In addition, δ corresponds to the derivative of the Heaviside function. But this Dirac "function" is not a function; it extends the concept of function.
The Dirac δ function is very useful as an approximation of functions whose graph has the shape of a tall narrow spike. It is the same type of abstraction that represents a point charge, a point mass or a point electron. For example, to calculate the speed of a tennis ball struck by a racket, we can liken the force of the racket striking the ball to a δ function. In this way, we not only simplify the equations, but we can also calculate the movement of the ball by considering only the total impulse of the racket against the ball, rather than requiring knowledge of the details of how the racket transferred energy to the ball.
By extension, the expression "a Dirac" is thus often used by physicists to denote a function or a curve spiked at a given value.
Formal introduction
One works in ℝⁿ.
The Dirac δ function is the Borel measure that charges only the singleton {0}: δ({0}) = 1, δ(Q) = 0 for any cube Q which does not contain 0.
Let A be a Borel set. A direct calculation establishes that δ is indeed a Borel measure, by simply checking:
If A contains 0: δ(A) = 1, i.e. ∫ 1A dδ = 1
Otherwise: δ(A) = 0, i.e. ∫ 1A dδ = 0
Since any measurable function ƒ is a pointwise limit of step functions, one has: ∫ ƒ dδ = ƒ(0)
From the point of view of Radon measures, one sets: δ : Cc(ℝⁿ) → ℝ, ƒ → ƒ(0).
Let K be a compact set: the restriction of such a linear form to CK(ℝⁿ) is clearly continuous (δ is thus indeed a Radon measure), of norm 1. Indeed: sup {|ƒ(0)| : ||ƒ|| = 1} = 1
We can then see δ as a distribution of order 0: δ : D → ℝ, φ → φ(0) is a linear form such that: ∀ K compact: |δ(φ)| ≤ ||φ||0 (φ ∈ DK)
(Recall that ||·||0 denotes the norm ||·||∞ in the context of distributions.)
Let us apply the definition of the support of a distribution: spt δ = {0} is compact. Consequently, δ is tempered.
Other presentations (on ℝ):
Any element ƒ of L¹loc(ℝ) (that is, locally integrable in the sense of Lebesgue) is identified with a linear form:
∀ φ ∈ Cc(ℝ), ⟨ƒ,φ⟩ = ∫-∞+∞ ƒ(x) φ(x) dx
By analogy, δ is defined by the following equality:
∀ φ ∈ Cc(ℝ), ⟨δ,φ⟩ = ∫-∞+∞ δ(x) φ(x) dx = φ(0)
The only mathematical object δ which rigorously satisfies this equation is a measure. This is why the existence of δ has a meaning within the mathematical framework of distributions. Defining the functions δn by: δn(x) = n for |x| < 1 ⁄ 2n and δn(x) = 0 everywhere else, we have:
∀ φ ∈ Cc(ℝ), lim n→∞ ∫-∞+∞ φ(x) δn(x) dx = φ(0)
The sequence δn converges in the weak sense towards δ.
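The weak convergence can be illustrated numerically; the following sketch (numpy assumed, test function φ = cos chosen arbitrarily) evaluates ∫ δn(x)φ(x) dx by a rectangle rule and watches it approach φ(0) = 1.

```python
import numpy as np

# delta_n(x) = n for |x| < 1/(2n), 0 elsewhere; tested against phi = cos.
# The integrals approach phi(0) = 1 as n grows.
phi = np.cos
for n in (1, 10, 100, 1000):
    x = np.linspace(-1.0 / (2 * n), 1.0 / (2 * n), 10_001)
    integral = np.sum(n * phi(x)) * (x[1] - x[0])   # rectangle rule
    print(n, integral)   # tends towards phi(0) = 1
```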
Thus, by abuse of language, one says that the Dirac δ function is zero everywhere except at 0, where its infinite value corresponds to a mass of 1; that is, it corresponds to a measure which associates with a subset of ℝ the value 0 if 0 is not in the subset, and 1 otherwise.
This distribution can also be seen as the derivative, in the sense of distributions, of the Heaviside function H.
Let us set TH(φ) := ∫ Hφ (φ ∈ D)
Then
T′H(φ) = −TH(φ′) = −∫0+∞ φ′ = −[φ]0+∞ = φ(0) = δ(φ)
δ is the neutral element of convolution:
(δ ∗ φ)(x) := δ(φ(x − ·)) = φ(x − 0) = φ(x) (x ∈ ℝⁿ)
Whence:
δ ∗ φ = φ
This property is abundantly used in signal processing. One says that a signal corresponding to a Dirac distribution has a white spectrum: each frequency is present with an identical intensity. This property makes it possible to analyze the frequency response of a system without having to sweep through all the frequencies.
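A discrete analogue of this white spectrum can be seen in a few lines (numpy assumed): the discrete Fourier transform of a unit impulse has modulus 1 in every frequency bin.

```python
import numpy as np

# The DFT of a unit impulse is perfectly flat: every bin has modulus 1.
N = 64
impulse = np.zeros(N)
impulse[0] = 1.0
spectrum = np.fft.fft(impulse)
print(np.allclose(np.abs(spectrum), 1.0))   # True: a white spectrum
```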

Fourier transform

The Fourier transform of the Dirac δ function is identified with the constant function 1:
⟨δ̂,φ⟩ := ⟨δ,φ̂⟩ = φ̂(0) = ∫ φ(x) e^(−i0x) dx = ∫ 1 · φ(x) dx (φ ∈ S, the Schwartz space)
Consequence:
⟨1̂,φ⟩ = ⟨1,φ̂⟩ = ∫ φ̂(x) dx = φ(0) = ⟨δ,φ⟩
thus δ is the Fourier transform of 1.
Derivative
The derivative of the Dirac δ function is the distribution δ′ defined by:
for any test function φ, ⟨δ′,φ⟩ := −φ′(0)
More generally, one has for the nth derivative of δ, δ⁽ⁿ⁾:
⟨δ⁽ⁿ⁾,φ⟩ := (−1)ⁿ φ⁽ⁿ⁾(0)
The derivatives of the Dirac δ are important because they appear in the Fourier transform of polynomials.
A useful identity is
δ(g(x)) = Σi δ(x − xi) ⁄ |g′(xi)|
where the xi are the (presumed simple) roots of the function g(x). It is equivalent to the integral form:
∫-∞+∞ ƒ(x) δ(g(x)) dx = Σi ƒ(xi) ⁄ |g′(xi)|
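The integral form can be checked numerically by replacing δ with a narrow Gaussian nascent delta; in the sketch below (numpy assumed) the choices g(x) = x² − 1 and ƒ(x) = x⁴ + 1 are arbitrary test functions, with simple roots at ±1 and |g′(±1)| = 2, so the expected value is ƒ(1)/2 + ƒ(−1)/2 = 2.

```python
import numpy as np

# Narrow Gaussian standing in for delta; its width a is small but resolvable
# on the grid below.
def delta_a(g, a=1e-3):
    return np.exp(-g**2 / (2 * a**2)) / (a * np.sqrt(2 * np.pi))

x = np.linspace(-3.0, 3.0, 2_000_001)
f = x**4 + 1.0                  # arbitrary smooth f
g = x**2 - 1.0                  # simple roots at +1 and -1, |g'| = 2 there
lhs = np.sum(f * delta_a(g)) * (x[1] - x[0])
print(lhs)                      # close to f(1)/2 + f(-1)/2 = 2.0
```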
Representations of the δ function
General information
The function δ can be viewed as the limit of a family (δa) of functions:
δ(x) = lim a→0 δa(x)
Some call such functions δa nascent delta functions.
These can be useful in specific applications. But if the limit is employed too loosely, nonsense can result, as indeed in any branch of mathematical analysis.
The notion of an approximation of the unit has a particular significance in harmonic analysis, in connection with the limit of a family having as its limit a neutral element for the convolution operation. Here the assumption is made that the limit is that of a family of positive functions.
Probability
A probability density, for example that of the normal law, is represented by a curve which encloses an area equal to 1. If one lets its variance tend towards 0, one obtains in the limit a delta which represents the probability density of a variable that is certain with probability 1. This is a curiosity of limited practical interest, but it generalizes in an interesting manner.
The simplest way to describe a discrete variable taking values in a countable set is to use its probability function, which associates a probability with each value. One can also consider a pseudo-density of probability consisting of a sum of Dirac functions associated with each value, with weights equal to their probabilities. Under these conditions, the integral formulas which compute the expectations of continuous variables apply to discrete variables, taking account of the equation pointed out above.
Analysis of recordings
To determine the frequency content of the recording of a physical phenomenon as a function of time, one generally uses the Fourier transform:
TF(ƒ(x)) = F(ν) = ∫-∞+∞ ƒ(x) e^(−2πiνx) dx
One can compute the Fourier transform of the Dirac function:
TF(δ(x)) = ∫-∞+∞ δ(x) e^(−2πiνx) dx = 1
Nowadays, continuous analog recordings of physical phenomena have given way to digital recordings, sampled with a certain time step. In this field one uses the discrete Fourier transform, which is an approximation over a certain sampling duration.
The multiplication of a continuous function by a Dirac comb, a sum of equidistant deltas, has a Fourier transform equal to the approximation of that of the original function by the rectangle method. Using a Fourier series expansion of the comb, one shows that the result gives the sum of the true transform and of all its copies shifted by the sampling rate. If those encroach on the true transform, that is, if the signal contains frequencies higher than half the sampling rate, the spectrum is folded over (aliasing). Otherwise it is possible to reconstruct the signal exactly by the Shannon formula.
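The folding can be demonstrated in a few lines (numpy assumed): a 7 Hz sine sampled at 10 Hz, below the Nyquist requirement of 14 Hz, shows its spectral peak at the folded frequency 3 Hz.

```python
import numpy as np

# A 7 Hz sine sampled at 10 Hz (Nyquist frequency 5 Hz) aliases to |7-10| = 3 Hz.
fs = 10.0                       # sampling rate, too low for a 7 Hz signal
t = np.arange(0, 10, 1 / fs)    # 10 s of samples
x = np.sin(2 * np.pi * 7.0 * t)
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print(freqs[np.argmax(spectrum)])   # 3.0 Hz, not 7 Hz: the spectrum folded
```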

Laplace transform

In mathematics, the Laplace transformation is an integral transformation, that is, an operation associating with a function ƒ(t) with values in ℝⁿ or ℂⁿ a new function, called the transform of ƒ(t) and traditionally noted F(p), via an integral. The Laplace transformation is bijective, and by the use of tables it is possible to invert the transformation. The great advantage of the Laplace transformation is that most of the common operations on the original function ƒ(t), such as differentiation or a shift of the variable t, have a simpler translation on the transform F(p). Thus the Laplace transform of the derivative ƒ′(t) is simply pF(p) − ƒ(0⁻), and the transform of the shifted function ƒ(t − τ) is simply e^(−pτ) F(p). This transformation was introduced for the first time, in a form close to that used here, by Laplace in 1774, within the framework of probability theory.
The Laplace transform is close to the Fourier transform, which is also used to solve differential equations, but contrary to the latter it takes account of the initial conditions and can thus be used in the theory of mechanical vibrations, or in electricity, for the study of forced regimes without neglecting the transient regime. In this type of analysis, the Laplace transform is often interpreted as a passage from the time domain, in which the inputs and outputs are functions of time, to the frequency domain, in which the same inputs and outputs are functions of the frequency p. Thus it is possible to analyze simply the effect of the system on the input to obtain the output in terms of simple algebraic operations (cf. the theory of transfer functions in electronics or mechanics).
Definition
In mathematics, and in particular in functional analysis, the one-sided Laplace transform of a function ƒ (possibly generalized, such as the Dirac function) of a real variable t, with positive support, is the function F of the complex variable p defined by:
F(p) = ℒ{ƒ(t)} = ∫0-+∞ e^(−pt) ƒ(t) dt
More precisely, this formula is valid when ℜ(p) > α, where α, −∞ ≤ α ≤ +∞, is the abscissa of convergence and ƒ is a germ of distributions defined in an open neighborhood, bounded below, of the interval I = [0,+∞[, whose restriction to the complement of I in this neighborhood is an indefinitely differentiable function. It is such a germ that we call here, by abuse of language, a generalized function with positive support, and the Laplace transformation is injective when applied to these generalized functions. The abscissa of convergence α is defined as follows: for a real β, let ƒβ : t → e^(−βt)ƒ(t). Then α is the infimum of the set B of the β for which ƒβ is a tempered distribution, if B is nonempty, and α = +∞ otherwise.
The Dirac function is of this nature. Its Laplace transform equals 1, with an abscissa of convergence of −∞.
The properties of this transformation give it great utility in the analysis of linear dynamic systems. The most interesting of these properties is that integration and differentiation are transformed into division and multiplication by p, in the same way that the logarithm transforms multiplication into addition. It thus makes it possible to reduce the resolution of linear differential equations with constant coefficients to the solution of affine equations whose solutions are rational functions of p.
The Laplace transformation is very widely used by engineers to solve differential equations and to determine the transfer function of a linear system. For example, in electronics, contrary to the Fourier decomposition, which is used for the determination of the spectrum of a periodic or even arbitrary signal, it takes account of the existence of a transient regime preceding the permanent regime.
It is indeed enough to transpose the differential equation into the Laplace domain to obtain an equation much simpler to handle.
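As a small illustration of the operational rule quoted above (the transform of a derivative), here is a sketch assuming sympy is available; the function ƒ is an arbitrary test choice.

```python
import sympy as sp

# Check L{f'} = p F(p) - f(0) on an arbitrary test function.
t, p = sp.symbols("t p", positive=True)
f = sp.exp(-t) * sp.cos(3 * t)

F = sp.laplace_transform(f, t, p, noconds=True)
Fp = sp.laplace_transform(sp.diff(f, t), t, p, noconds=True)
print(sp.simplify(Fp - (p * F - f.subs(t, 0))))   # 0: the rule holds
```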

Inverse Laplace transform

The inversion of the Laplace transformation is carried out by means of an integral in the complex plane. Using the residue theorem, one proves the Bromwich-Mellin formula:
ƒ(t) = ℒ⁻¹{F(p)} = (1 ⁄ 2πi) ∫γ−i∞γ+i∞ e^(pt) F(p) dp
where γ is chosen so that the integral is convergent, which implies that γ is greater than the real part of every singularity of F(p), and that at infinity |F(p)| tends towards 0 at least as quickly as 1 ⁄ |p|². When this last condition is not satisfied, the above formula is still usable if there exists an integer n such that |p⁻ⁿ F(p)| tends towards 0 as quickly as 1 ⁄ |p|², that is, when, for |p| tending towards infinity, |F(p)| is bounded by a polynomial in |p|. Replacing F(p) by p⁻ⁿ F(p) in the above integral, one finds on the left-hand side of the equality a generalized function with positive support whose derivative of order n (in the sense of distributions) is the generalized function with positive support being sought.
In practice, nevertheless, the Bromwich-Mellin formula is little used, and the inverses of Laplace transforms are computed from tables of Laplace transforms.
Use of the Laplace transform in electricity
One considers a so-called R, C circuit, made up of an electrical resistance of value R and a capacitor of capacitance C placed in series. One considers that the circuit is connected to the terminals of an ideal voltage generator, delivering a generally variable voltage u(t), only at an instant chosen as the origin of time, and that the capacitor is initially discharged. One thus has, respectively for the charge q(t) of the capacitor and the current in the circuit i(t) = dq ⁄ dt, the following initial conditions: q(0⁻) = 0, i(0⁻) = 0.
Charging of a capacitor by a voltage step
One applies the following voltage u(t):
u(t) = 0 if t < 0, and u(t) = U0 = const if t ≥ 0
and the differential equation connecting the response q(t) to the input u(t), obtained by applying the usual laws of electricity, is:
U0 Υ(t) = R dq ⁄ dt + q(t) ⁄ C
or again, setting τ ≡ RC (this quantity has the dimension of a duration):
CU0 ⁄ τ = q(t) ⁄ τ + dq ⁄ dt
One takes the Laplace transform of this last equation member by member,
noting Q(p) the transform of q(t);
taking into account the fact that q(0⁻) = 0, one obtains:
Q(p) = CU0 (1 ⁄ τ) ⁄ [p ((1 ⁄ τ) + p)]
which can also be written in the form:
Q(p) = H(p) U(p), with H(p) ≡ (1 ⁄ τ) ⁄ [(1 ⁄ τ) + p]
the transfer function of the R, C system, and U(p) = CU0 ⁄ p
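As a sketch (sympy assumed), the expression Q(p) above can be inverted symbolically instead of through a table, recovering the familiar exponential charging law.

```python
import sympy as sp

# Invert Q(p) = C*U0*(1/tau) / (p*((1/tau) + p)) back to the time domain.
t, p = sp.symbols("t p", positive=True)
tau, C, U0 = sp.symbols("tau C U0", positive=True)

Qp = C * U0 * (1 / tau) / (p * (1 / tau + p))
qt = sp.inverse_laplace_transform(Qp, p, t)
print(sp.simplify(qt))   # C*U0*(1 - exp(-t/tau)), times Heaviside(t)
```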

Bilateral Laplace transform

In analysis, the bilateral Laplace transform is the most general form of the Laplace transform, in which the integration is done from minus infinity rather than from zero.
Definition
The bilateral Laplace transform of a function ƒ of a real variable is the function F of the complex variable defined by:
F(p) = ℒ{ƒ}(p) = ∫-∞+∞ e^(−pt) ƒ(t) dt
This integral converges for α < ℜ(p) < β, −∞ ≤ α ≤ β ≤ +∞, that is, for p belonging to a strip of convergence in the complex plane (instead of ℜ(p) > α, α then denoting the abscissa of convergence, in the case of the one-sided transform). In a precise way, within the framework of the theory of distributions, this transform converges for all the values of p for which t → e^(−ℜ(p)t)ƒ(t), in abusive notation, is a tempered distribution and thus admits a Fourier transform.
Ambiguities to be avoided
It is essential, when one uses the bilateral Laplace transform, to specify the strip of convergence. Consider for example F(p) = 1 ⁄ p.
If the strip of convergence is ℜ(p) > 0, the antecedent of this Laplace transform is the Heaviside function Υ. On the other hand, if the strip of convergence is ℜ(p) < 0, this antecedent is t → −Υ(−t).
Convolution and differentiation
Let T and S be two convolvable distributions, for example each having a support bounded on the left, or one of them having compact support. Then
ℒ{T ∗ S} = ℒ{T} ℒ{S}
In particular
ℒ(δ⁽ⁿ⁾) = pⁿ
and
δ⁽ⁿ⁾ ∗ S = S⁽ⁿ⁾
thus
ℒ(S⁽ⁿ⁾) = pⁿ ℒ(S)

Laplace transforms of hyperfunctions

One can extend the Laplace transform to the case of hyperfunctions. For a hyperfunction defined by a distribution, one recovers the theory which precedes. But, for example,
ƒ(t) = Σk=0..∞ ((−1)ᵏ ⁄ (k!(k+1)!)) δ⁽ᵏ⁾(t) = [−(1 ⁄ 2πi) e^(1/z)]
although not a distribution (because it is locally of infinite order, namely at 0), is a hyperfunction whose support is {0} and which admits as Laplace transform
F(p) = J₁(2√p) ⁄ √p
where J₁ denotes the usual Bessel function of the first kind, namely the entire function
J₁(s) = (s ⁄ 2) Σk=0..∞ ((−1)ᵏ ⁄ (2²ᵏ k!(1+k)!)) s²ᵏ
One indeed obtains, by substituting this expression into the preceding one,
F(p) = Σk=0..∞ ((−1)ᵏ ⁄ (k!(k+1)!)) pᵏ
which is quite consistent with the definition of ƒ(t), since ℒ(δ⁽ᵏ⁾) = pᵏ.
Relation between the bilateral transform and the one-sided transform
Elementary theory
Let ƒ be a function defined in an open neighborhood of I = [0,+∞[, continuous at 0, and admitting a bilateral Laplace transform ℒ(ƒ).
Its one-sided Laplace transform, which we will note here
ℒ₊(ƒ)
is given by
ℒ₊(ƒ) = ℒ(ƒΥ)
where Υ is the Heaviside function. One has
d ⁄ dt (ƒΥ) = ƒ′Υ + ƒδ = ƒ′Υ + ƒ(0)δ
consequently
pℒ₊(ƒ) = ℒ₊(ƒ′) + ƒ(0)
whence the traditional formula
ℒ₊(ƒ′) = pℒ₊(ƒ) − ƒ(0)
Generalization
Let T be a distribution with positive support, g an indefinitely differentiable function on an open interval containing I = [0,+∞[, and ƒ = T + g. Setting g₊ = gΥ, ƒ₊ = T + g₊ is a distribution with positive support, whose Laplace transform is, in abusive notation,
ℒ₊(ƒ) = ℒ(ƒ₊) = ∫0-+∞ ƒ(t) e^(−pt) dt (ℜ(p) > α)
where α is the abscissa of convergence. The distributions ƒ and g have the same restriction on any open interval of the form ]−ε, 0[ as soon as ε > 0 is sufficiently small. One can thus write ƒ⁽ⁱ⁾(0⁻) = g⁽ⁱ⁾(0) for any integer i ≥ 0. In addition,
ℒ₊(ƒ′) = ℒ₊(T′) + ℒ₊(g′)
with
ℒ₊(T′) = ℒ(T′) = pℒ(T)
and, according to the elementary theory above,
ℒ₊(g′) = pℒ₊(g) − g(0)
Finally
ℒ₊(ƒ′) = pℒ₊(T + g) − g(0) = pℒ₊(ƒ) − ƒ(0⁻)
Let us now define the following equivalence relation: ƒ₁ and ƒ₂ denoting two distributions as above, we will write ƒ₁ ∼ ƒ₂ if ƒ₁ and ƒ₂ have the same restriction on the interval ]−ε, +∞[ as soon as ε > 0 is sufficiently small. Then ℒ₊(ƒ₁) depends only on the equivalence class of ƒ₁, which is called a germ of a generalized function defined in a neighborhood of [0,+∞[ and, by abuse of language, a generalized function with positive support. We will write ℒ₊(ƒ) = ℒ₊(ƒ₁) = ℒ(ƒ₁₊). Let us note finally that ℒ₊(ƒ) = 0 if, and only if, ƒ = 0.
Applications
The bilateral Laplace transform is used in particular for the design of traditional analog filters and of the optimal Wiener filter; in statistics, where it defines the moment generating function of a distribution; it plays a crucial role in the continuous-time formulation of direct and inverse causal spectral factorization; and it is finally widely used to solve integral equations.
Generalization to the case of several variables
The bilateral Laplace transform extends to the case of functions or distributions of several variables, and Laurent Schwartz gave the complete theory of it. Let T be a distribution defined on ℝⁿ. The set of the p belonging to ℂⁿ for which x → exp(−p · x) T(x), in abusive notation, is a tempered distribution on ℝⁿ is this time a cylinder of the form Γ + iℝⁿ, where Γ is a convex subset of ℝⁿ (in the case of one variable, Γ is none other than the strip of convergence mentioned above). Consider then, for ξ in Γ, the distribution x → exp(−ξ · x) T(x), again in abusive notation. This distribution is tempered. Let us note E(ξ) its Fourier transform. The function ξ → E(ξ) is called the Laplace transform of T.

Mellin transformation

In mathematics, the Mellin transformation is an integral transformation which can be regarded as the multiplicative version of the bilateral Laplace transformation. This integral transformation is strongly connected to the theory of Dirichlet series, and is often used in number theory and in the theory of asymptotic expansions; it is also strongly connected to the Laplace transformation, to the Fourier transformation, to the theory of the gamma function and to special functions.
The Mellin transformation of a function ƒ is:
{Mƒ}(s) = φ(s) = ∫0∞ xˢ ƒ(x) dx ⁄ x
The inverse transformation is
{M⁻¹φ}(x) = ƒ(x) = (1 ⁄ 2πi) ∫c−i∞c+i∞ x⁻ˢ φ(s) ds
The notation supposes that this is a contour integral taken over a vertical line in the complex plane. The conditions under which this inversion is valid are given in the Mellin inversion theorem.
The transformation was so named in honor of the Finnish mathematician Hjalmar Mellin.
Relations to the other transformations
The bilateral Laplace transformation can be defined in terms of the Mellin transformation by
{Bƒ}(s) = {Mƒ(−ln x)}(s)
and conversely, we can obtain the Mellin transformation from the bilateral Laplace transformation by
{Mƒ}(s) = {Bƒ(e⁻ˣ)}(s)
The Mellin transformation can be seen as integration against a kernel xˢ with respect to the multiplicative Haar measure dx ⁄ x, which is invariant under the dilation x → αx, so that d(αx) ⁄ αx = dx ⁄ x; the bilateral Laplace transformation integrates with respect to the additive Haar measure dx, which is translation invariant, so that d(x + α) = dx.
We can also define the Fourier transformation in terms of the Mellin transformation, and vice versa; if we define the Fourier transformation as above, then
{Fƒ}(s) = {Bƒ}(is) = {Mƒ(−ln x)}(is)
We can also reverse the process and obtain
{Mƒ}(s) = {Bƒ(e⁻ˣ)}(s) = {Fƒ(e⁻ˣ)}(−is)
The Mellin transformation is also connected to the Newton series or binomial transformations, together with the generating function of the Poisson law, by means of the Poisson-Mellin-Newton cycle.
Cahen-Mellin integral
For c > 0, ℜ(y) > 0 and y⁻ˢ on the principal branch, one has
e⁻ʸ = (1 ⁄ 2πi) ∫c−i∞c+i∞ Γ(s) y⁻ˢ ds
where Γ(s) is the Euler gamma function. This integral is known under the name of the Cahen-Mellin integral.
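The forward counterpart of this identity, namely that the Mellin transform of e⁻ˣ is Γ(s), can be checked numerically; the sketch below assumes mpmath is available and uses a few arbitrary values of s.

```python
import mpmath as mp

# Numerical check that integral_0^oo x^(s-1) exp(-x) dx = Gamma(s).
for s in (0.5, 1.0, 2.5, 4.0):
    mellin = mp.quad(lambda x: x**(s - 1) * mp.exp(-x), [0, 1, mp.inf])
    print(s, mellin, mp.gamma(s))   # the two columns agree
```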

Transformations of three-phase systems

The diagonalization of the flux/current matrix: one seeks a basis of vectors in which the equations describing the operation of an electrical machine are decoupled, that is, in which the magnitudes relative to one phase do not depend on the other phases; the matrices linking the different magnitudes are then diagonal. Once a basis addressing these concerns has been found, the difficulty is then to interpret physically the quantities that compose it.
Extension of Blondel's theorem, which allows a three-phase system to be replaced by an equivalent two-phase system. Indeed, between two coils whose axes have an angular difference of 90°, the mutual fluxes are zero, hence the disappearance of the off-diagonal terms of the flux/current matrix:


V1 = R1I1 + d⁄dt ψ1 = R1I1 + L1 d⁄dt I1
V2 = R2I2 + d⁄dt ψ2 = R2I2 + L2 d⁄dt I2

or, in matrix form:

[V1]   [R1  0 ] [I1]   [L1  0 ]      [I1]
[V2] = [0   R2] [I2] + [0   L2] d⁄dt [I2]

Concordia transform

The Concordia transform is a mathematical tool used in electrical engineering in order to model a three-phase system by means of a two-phase model.
Philosophy of the Concordia transform
A two-phase system made up of two coils perpendicular to one another and traversed by currents out of phase with each other by π ⁄ 2 makes it possible to create a field rotating at the speed ω.
(diagram)
A three-phase system made up of coils and currents out of phase with each other by 2π ⁄ 3 makes it possible to create a field rotating at the speed ω.
(diagram)
Setting up the equations
One can model the rotating field created by the three-phase system with a two-phase system thanks to the following transformations:
[iα]         [ia]          [ia]         [iα]
[iβ] = C23   [ib]   and    [ib] = C32   [iβ]
             [ic]          [ic]

with:

[ma]     [mh]
[mb] = T [mα]
[mc]     [mβ]

where T is the Concordia matrix.

Fortescue transformation

Any unbalanced system of three-phase quantities can be put in the form of the sum of three balanced or symmetrical systems:
a direct balanced system, noted Gd
an inverse balanced system, noted Gi
a homopolar system, noted G0
Homopolar three-phase systems
g1 = G0 sin (ωt + φ0)
g2 = G0 sin (ωt + φ0)
g3 = G0 sin (ωt + φ0)
The interest of this false three-phase system is to facilitate the matrix writing of the Fortescue transformation.
Matrix of the transformation
The goal is to find the values Gd, Gi and G0 starting from G1, G2 and G3.
Calculation of G0
As the sum of the three quantities of a balanced system is zero, one necessarily has:
3G0 sin (ωt + φ0) = G1 sin (ωt + φ1) + G2 sin (ωt + φ2) + G3 sin (ωt + φ3)
The rotation operator a
It is a complex number of modulus 1 and argument 2π ⁄ 3: a = e^(j2π⁄3)
The result of its multiplication by the complex number associated with a quantity corresponds to another quantity of the same amplitude, out of phase by 2π ⁄ 3 compared to the initial quantity. It corresponds to a rotation by 2π ⁄ 3 in the Fresnel plane.
It satisfies the following properties:
a³ = 1
1 + a + a² = 0

Clarke transformation

One seeks to express the various relations, originally written in the system of axes (O1, O2, O3), in the system (Oα, Oβ). Here, we choose the axis Oβ lagging behind the axis Oα. This is of interest in the study of the synchronous machine, since the excitation flux lies along one axis and the induced emfs lie along the quadrature axis behind it.
O1 = 1
O2 = aO1
O3 = a²O1
with
a = e^(j2π⁄3)
Oα = 1
Oβ = −jOα
We then seek the transformation C such that:
[1 ]        [ 1]              [c11 c12]
[a ] = C    [−j]   with C =   [c21 c22]
[a²]                          [c31 c32]

By identification, one finds the relations:
1 = c11 - jc12 ⇒ c11 = 1 and c12 = 0
a = c21 - jc22 ⇒ c21 = -½ and c22 = -√3 ⁄ 2
a² = c31 - jc32 ⇒ c31 = -½ and c32 = √3⁄2
This form is not suitable for a system with a nonzero homopolar (zero-sequence) component. One then adds a third column so that the zero-sequence component X0 is tied to the sum of the three quantities X1, X2, X3, as in the full matrix given in the next section.

Clarke transformation matrix

The full Clarke transformation is obtained in this way:

    [ 1     0      1]
C = [−½   −√3⁄2    1]
    [−½    √3⁄2    1]

This transformation indeed diagonalizes the matrix L of the flux/current relations (it was built for this, among other things), and in the system (α, β, 0) one obtains:

          [L−M    0      0   ]
C⁻¹LC =   [0      L−M    0   ]
          [0      0      L+2M]

with:

          [1    −½      −½   ]
C⁻¹ = ⅔   [0    −√3⁄2    √3⁄2]
          [½     ½       ½   ]
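As a sketch (numpy assumed, and using the matrices as reconstructed above, with the β axis lagging), applying C⁻¹ to a balanced three-phase set yields two quadrature components and a zero homopolar component; the instant θ is arbitrary.

```python
import numpy as np

# C maps (alpha, beta, homopolar) quantities to the three phases; C_inv
# inverts it. A balanced three-phase set maps to quadrature components.
s3 = np.sqrt(3.0)
C = np.array([[1.0,   0.0,     1.0],
              [-0.5, -s3 / 2,  1.0],
              [-0.5,  s3 / 2,  1.0]])
C_inv = (2.0 / 3.0) * np.array([[1.0, -0.5,    -0.5],
                                [0.0, -s3 / 2,  s3 / 2],
                                [0.5,  0.5,     0.5]])
print(np.allclose(C @ C_inv, np.eye(3)))        # True

th = 0.7                                         # arbitrary instant omega*t
i_abc = np.array([np.cos(th),
                  np.cos(th - 2 * np.pi / 3),
                  np.cos(th - 4 * np.pi / 3)])
print(np.round(C_inv @ i_abc, 6))                # [cos(th), -sin(th), 0]
```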

Park transformation

The flux/current relations are now fully decoupled; however, there remains a dependence on the angle θ, that is, for a rotating machine, a dependence on time. One may then consider modifying the Clarke transformation by assuming the axes (Oα, Oβ) offset by an angle θ relative to the axis of phase 1; the new axes are christened (Od, Oq), respectively named the direct axis and the quadrature axis.
One passes from the system (α, β, 0) to the system (d, q, 0) by applying a simple rotation of angle θ around the z axis (see figure):

[xα]   [cos θ   −sin θ   0] [xd]
[xβ] = [sin θ    cos θ   0] [xq]
[x0]   [0        0       1] [x0]

By computing the product of the two matrices, we obtain the Park transformation:

    [cos θ   cos (θ − 2π⁄3)   cos (θ − 4π⁄3)]
P = [sin θ   sin (θ − 2π⁄3)   sin (θ − 4π⁄3)]
    [1       1                1             ]
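The point of the Park transformation can be shown in a few lines (numpy assumed): applying P, evaluated at the same electrical angle θ as a balanced set of currents, gives constant (d, q, 0) components whatever θ is; the factor 3/2 reflects the non-normalized matrix written above.

```python
import numpy as np

# Park matrix evaluated at electrical angle theta.
def park(theta):
    d = [theta, theta - 2 * np.pi / 3, theta - 4 * np.pi / 3]
    return np.array([[np.cos(a) for a in d],
                     [np.sin(a) for a in d],
                     [1.0, 1.0, 1.0]])

for th in (0.0, 0.9, 2.4):
    i_abc = np.cos([th, th - 2 * np.pi / 3, th - 4 * np.pi / 3])
    print(np.round(park(th) @ i_abc, 6))   # always [1.5, 0, 0]
```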

Ku transformation

An electrical transformation introduced by Ku. If the coordinates resulting from this transformation are baptized (xb, xf, x0), we have, for example, for a current:

[i1]       [ib]          [ib]         [i1]
[i2] = K   [iƒ]   and    [iƒ] = K⁻¹   [i2]
[i3]       [i0]          [i0]         [i3]

Then:
ib = e^(jθ) (i1 + a²i2 + ai3)
iƒ = e^(−jθ) (i1 + ai2 + a²i3)
i0 = i1 + i2 + i3
We see that the components (b, f, 0) are related to the symmetrical components (d, i, 0) of the Fortescue transformation by:
ib = e^(jθ) id
iƒ = e^(−jθ) ii
i0 = i0

Mathematical matrices

Hereafter are various mathematical matrices that can be used in research and scientific computation in electricity and electronics.

Critical point (mathematics)
In multivariable analysis, a critical point of a function of several variables, with numerical values, is a point where its gradient vanishes. The critical points are used as intermediaries in the search for the extrema of such a function.
More generally, one can define the concept of a critical point of a differentiable map between two differential manifolds; these are the points where the differential is not of maximal rank.
Roughly speaking, one must find all the points a such that: ∇ƒ(a) = 0.
Critical points and extremum points of a numerical function

A saddle point or col point is a critical point.
Let ƒ be a function of n variables x1,...,xn, with real values, differentiable on an open set U. One says that it admits a critical point at a point u of U when its gradient is zero at this point.
In particular, if u is a local extremum point of ƒ, it is a critical point. The converse is false: once the critical points of a function have been determined, their nature should be examined, for example by computing the Hessian matrix of ƒ.
Critical points and critical values for a map between manifolds
Let ƒ be a differentiable map between two manifolds M and N. One says that ƒ has a critical point at the point m of M if the tangent linear map of ƒ at m is non-surjective (that is, ƒ is not a submersion at m).
One calls critical values the images of the critical points by ƒ. Sard's theorem ensures that, for an indefinitely differentiable function, the set of critical values has measure zero.

Hessian matrix

In mathematics, the Hessian matrix (or simply the Hessian) of a numerical function ƒ is the square matrix, noted H(ƒ), of its second partial derivatives.
More precisely, given a function ƒ with real values
ƒ(x1, x2, ..., xn)
and supposing that all the second partial derivatives of ƒ exist, the coefficient of index i, j of the Hessian matrix of ƒ is
Hij(ƒ) = ∂²ƒ ⁄ ∂xi∂xj
One calls the Hessian (or Hessian discriminant) the determinant of this matrix.
The term "Hessian" was introduced by James Joseph Sylvester, in homage to the German mathematician Ludwig Otto Hesse.
Let in particular ƒ be a function of class C² defined on an open set U of the space E, with real values. Its Hessian matrix is well defined and, by virtue of Schwarz's theorem, it is symmetric.
Application to the study of critical points
One supposes ƒ to be of class C² on an open set U. The Hessian matrix makes it possible, in many cases, to determine the nature of the critical points of the function ƒ, that is, of the points where the gradient vanishes.
Necessary condition for a local extremum
if a is a local minimum point of ƒ, then it is a critical point and the Hessian at a is positive;
if a is a local maximum point of ƒ, then it is a critical point and the Hessian at a is negative.
In particular, if the Hessian at a critical point admits at least one strictly positive eigenvalue and one strictly negative eigenvalue, the critical point is a saddle point.
Sufficient condition for a local extremum
Precisely, a critical point of ƒ is said to be degenerate when the Hessian discriminant vanishes, in other words when 0 is an eigenvalue of the Hessian. At a non-degenerate critical point, the sign of the eigenvalues (all nonzero) determines the nature of this point (local extremum point or saddle point), as illustrated in the sketch below:
if the Hessian is positive definite, the function reaches a local minimum at the critical point;
if the Hessian is negative definite, the function reaches a local maximum at the critical point;
if there are eigenvalues of each sign, the critical point is a saddle point.
In this last case, one defines the index of the critical point as the number of negative eigenvalues.
In dimension two in particular, the Hessian discriminant being the product of the eigenvalues, its sign suffices to determine the nature of a non-degenerate critical point.
Finally, for a degenerate critical point, none of these implications holds.
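As a sketch of this classification (numpy assumed), take the test function ƒ(x, y) = x³ − 3x + y², whose critical points are (1, 0) and (−1, 0) and whose Hessian is diag(6x, 2).

```python
import numpy as np

# Hessian of f(x, y) = x**3 - 3*x + y**2 at a point (x, y).
def hessian(x, y):
    return np.array([[6.0 * x, 0.0],
                     [0.0, 2.0]])

for (x, y) in [(1.0, 0.0), (-1.0, 0.0)]:
    eig = np.linalg.eigvalsh(hessian(x, y))
    if np.all(eig > 0):
        kind = "local minimum"
    elif np.all(eig < 0):
        kind = "local maximum"
    elif np.all(eig != 0):
        kind = "saddle point (index = number of negative eigenvalues)"
    else:
        kind = "degenerate"
    print((x, y), eig, kind)
# (1, 0):  eigenvalues [2, 6]  -> local minimum
# (-1, 0): eigenvalues [-6, 2] -> saddle point
```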
Hessian curve
If C is the algebraic curve with (homogeneous) projective equation ƒ(x, y, z) = 0, one calls the Hessian curve (or simply the Hessian) of C the curve whose projective equation is H(ƒ)(x, y, z) = 0, where H(ƒ) is the Hessian (the determinant of the Hessian matrix) of ƒ. The Hessian of C has as its intersection with C the critical points and the inflection points of C. If C is of degree d, its Hessian is of degree 3(d − 2); according to Bézout's theorem, the number of inflection points of a regular curve of degree d is thus 3d(d − 2), which is a particular case of one of the Plücker formulas.
Extension to the framework of manifolds
When M is a differential manifold and ƒ an indefinitely differentiable numerical function on M, it is possible to define the differential of ƒ at any point, but not the Hessian matrix, as one sees by writing a change-of-chart formula.
However, when m is a critical point of the function ƒ, the Hessian matrix of ƒ at m can indeed be defined. One can thus speak of a degenerate or non-degenerate critical point and extend the results of the preceding paragraph.
Morse lemma
The Morse lemma shows that the behavior of a regular function in the vicinity of a non-degenerate critical point is entirely determined by the knowledge of the index of that critical point.
Morse lemma. Let ƒ be a C∞ function on a differential manifold of dimension n. Consider a non-degenerate critical point m of the function ƒ, and note k its index. Then there exists a local coordinate system x1, ....., xn, centered at m, such that the corresponding expression of ƒ is
ƒ(x) = ƒ(m) − x1² − ... − xk² + xk+1² + ... + xn²
Such a coordinate system is called a Morse chart.
It follows in particular from the lemma that non-degenerate critical points are isolated.
The Morse lemma extends to Hilbert spaces under the name of the Morse-Palais lemma.
Morse theory
A function all of whose critical points are non-degenerate is called a Morse function. Morse theory aims to connect the study of the topology of the manifold to that of the critical points of the functions which can be defined on it.

The Jacobian matrix

In vector analysis, the Jacobian matrix is a matrix associated with a vector function at a given point. Its name comes from the mathematician Charles Jacobi. The determinant of this matrix, called the Jacobian, plays an important role in the solution of nonlinear problems.
The Jacobian matrix is the matrix of the first-order partial derivatives of a vector function.
Let ƒ be a function from an open set of ℝⁿ with values in ℝᵐ. Such a function is defined by its m component functions with real values:
F : (x1,....,xn) → (ƒ1(x1,....,xn), ..., ƒm(x1,....,xn))
The partial derivatives of these functions at a point M, if they exist, can be arranged in a matrix with m rows and n columns, called the Jacobian matrix of ƒ:

        [∂ƒ1 ⁄ ∂x1  ⋯  ∂ƒ1 ⁄ ∂xn]
JF(M) = [    ⋮             ⋮    ]
        [∂ƒm ⁄ ∂x1  ⋯  ∂ƒm ⁄ ∂xn]

This matrix is noted:
JF(M), ∂(ƒ1,...,ƒm) ⁄ ∂(x1,...,xn)
or
D(ƒ1,...,ƒm) ⁄ D(x1,...,xn)
For i = 1,..., m, the i-th row of this matrix is the transpose of the gradient vector at the point M of the function ƒi, when it exists. The Jacobian matrix is also the matrix of the differential of the function, when it exists. One shows that the function ƒ is of class C¹ if and only if its partial derivatives exist and are continuous.
Properties
The composite ƒ∘g of differentiable functions is differentiable, and its Jacobian matrix is obtained by the formula:
J(ƒ∘g) = (Jƒ ∘ g) · Jg
Jacobian determinant
If m = n, then the Jacobian matrix of ƒ is a square matrix. We can then define its determinant det Jƒ, called the Jacobian determinant, or Jacobian. To say that the Jacobian is nonzero thus amounts to saying that the Jacobian matrix is invertible.
A function ƒ of class C¹ is invertible in the vicinity of M, with a reciprocal ƒ⁻¹ of class C¹, if and only if its Jacobian at M is nonzero (local inversion theorem). Moreover, the Jacobian matrix of ƒ⁻¹ is deduced from the inverse of the Jacobian matrix of ƒ by means of the formula
J(ƒ⁻¹) = (Jƒ ∘ ƒ⁻¹)⁻¹
The theorem of change of variables in multiple integrals involves the absolute value of the Jacobian.
It is not necessary to suppose that V is open, nor that ƒ is a homeomorphism from U onto V: this follows from the assumptions, according to the theorem of invariance of domain.
One first proves this theorem when ƒ is a diffeomorphism (which, according to the local inversion theorem, simply amounts to adding the assumption that the Jacobian of ƒ does not vanish at any point of U); one then removes this assumption thanks to Sard's theorem.
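A small symbolic sketch (sympy assumed) for the classic polar change of variables: the Jacobian determinant is r, the factor that appears in double integrals.

```python
import sympy as sp

# Jacobian of (r, t) -> (r*cos t, r*sin t) and its determinant.
r, t = sp.symbols("r t", positive=True)
f = sp.Matrix([r * sp.cos(t), r * sp.sin(t)])
J = f.jacobian([r, t])
print(J)                      # [[cos(t), -r*sin(t)], [sin(t), r*cos(t)]]
print(sp.simplify(J.det()))   # r
```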

Newton's method

In numerical analysis, Newton's method, or the Newton-Raphson method, is, in its simplest application, an efficient algorithm for numerically finding a precise approximation of a zero (or root) of a real function of a real variable.
Presentation
In its modern form, the algorithm can be presented as follows: at each iteration, the function whose zero is sought is linearized at the current iterate, and the next iterate is taken equal to the zero of the linearized function. This summary description indicates that at least two conditions are required for the algorithm to work well: the function must be differentiable at the visited points, and to this is added the strong constraint of having to take the first iterate rather close to a regular zero of the function, so that the convergence of the process is assured.
The principal interest of Newton's algorithm is its local quadratic convergence. In picturesque but imprecise terms, this means that the number of correct significant digits of the iterate doubles at each iteration, asymptotically. On the other hand, if the initial iterate is not taken sufficiently close to a zero, the sequence of iterates generated by the algorithm has an erratic behavior, whose possible convergence can only be the fruit of chance.
Applied to the derivative of a real function, this algorithm makes it possible to obtain critical points. This observation is at the origin of its use in optimization, without or with constraints.
Newton applied his method to the polynomial equation x³ − 2x − 5 = 0, taking as initial iterate the point x1 = 2, which differs by less than 10% from the true value of a root. He writes x = 2 + d1, where d1 is thus the increment to be added to 2 to obtain the root x. He replaces x by 2 + d1 in the equation, which becomes
d1³ + 6d1² + 10d1 − 1 = 0
and whose root must be found in order to add it to 2. He neglects d1³ + 6d1² because of its smallness (one supposes |d1| ≪ 1), so that there remains 10d1 − 1 = 0, or d1 = 0.1, which gives as a new approximation of the root x2 = x1 + d1 = 2.1. He then writes d1 = 0.1 + d2, where d2 is thus the increment to be given to d1 to obtain the root of the preceding polynomial. He thus replaces d1 by 0.1 + d2 in the preceding polynomial to obtain
d2³ + 6.3d2² + 11.23d2 + 0.061 = 0
One would obtain the same equation by replacing x by 2.1 + d2 in the initial polynomial. Neglecting the first two terms, there remains 11.23d2 + 0.061 = 0, or d2 ≈ −0.0054, which gives as a new approximation of the root x3 = x2 + d2 ≈ 2.0946. One can continue the operations as long as desired.
Real function of a real variable
With the algorithm, one will thus seek to construct a good approximation of a zero of the function of a real variable ƒ(x) by relying on its first-order Taylor expansion. For that, starting from a point x0 that one preferably chooses close to the zero to be found (by making rough estimates, for example), one approximates the function to first order; in other words, one considers it approximately equal to its tangent at this point:
ƒ(x) ≈ ƒ(x0) + ƒ′(x0)(x − x0)
Starting from there, to find a zero of this approximating function, it suffices to compute the intersection of the tangent line with the x-axis, that is, to solve the affine equation:
0 = ƒ(x0) + ƒ′(x0)(x − x0)
One then obtains a point x1 which in general has a good chance of being closer to the true zero of ƒ than the preceding point x0. By this operation, one can thus hope to improve the approximation by successive iterations: one again approximates the function by its tangent at x1 to obtain a new point x2.

Illustration of Newton's method (figure)
This method requires that the function have a tangent at each of the iterates constructed; for example, it suffices that ƒ be differentiable.
Formally, one starts from a point x0 belonging to the set of definition of the function and one constructs by recurrence the sequence:
xk+1 = xk − ƒ(xk) ⁄ ƒ′(xk)
where
ƒ′ denotes the derivative of the function ƒ.
The point xk+1 is indeed the solution of the affine equation ƒ(xk) + ƒ′(xk)(x − xk) = 0.
It may happen that the recurrence must terminate: if at step k, xk does not belong to the domain of definition, or if the derivative ƒ′(xk) is zero, the method fails.
If the unknown zero α is isolated, then there exists a neighborhood of α such that for all starting values x0 in this neighborhood, the sequence xk converges towards α. Moreover, if ƒ′(α) is nonzero, then the convergence is quadratic, which means intuitively that the number of correct digits is roughly doubled at each step.
Although the method is very efficient, certain practical aspects must be taken into account. Above all, Newton's method requires that the derivative actually be computed. When the derivative is merely estimated by taking the slope between two points of the function, the method takes the name of the secant method, less efficient (of order 1.618, the golden ratio) and inferior to other algorithms. Moreover, if the starting value is too far from the true zero, Newton's method can enter an infinite loop without producing an improved approximation. Because of this, any implementation of Newton's method must include a check on the iteration count; a sketch with such a safeguard follows.
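Here is such a minimal sketch in Python, with the iteration-count safeguard, applied to Newton's historical example x³ − 2x − 5 = 0 treated earlier; names and tolerances are illustrative.

```python
# Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k) with safeguards.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):             # safeguard against infinite loops
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fprime(x)
        if d == 0.0:                      # method fails: zero derivative
            raise ZeroDivisionError("f'(x) = 0 at x = %r" % x)
        x = x - fx / d
    raise RuntimeError("no convergence after %d iterations" % max_iter)

root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root)   # 2.0945514815423265..., matching the hand computation above
```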
Convergence
The speed of convergence of a sequence xn obtained by Newton's method can be obtained as an application of the Taylor-Lagrange formula. It is a question of obtaining a bound on log |xn − α|.
ƒ is a function defined in the neighborhood of α and twice continuously differentiable. It is supposed that α happens to be a zero of ƒ which one tries to approach by Newton's method. The assumption is made that α is a zero of order 1, in other words that ƒ′(α) is nonzero. The Taylor-Lagrange formula is written:
0 = ƒ(α) = ƒ(x) + ƒ′(x)(α − x) + (ƒ″(ξ) ⁄ 2)(α − x)²
with ξ between x and α.
Starting from the approximation x, Newton's method provides at the end of one iteration:
Nƒ(x) − α = x − ƒ(x) ⁄ ƒ′(x) − α = (ƒ″(ξ) ⁄ 2ƒ′(x))(x − α)²
For a compact interval I containing x and α and included in the domain of definition of ƒ, one sets: m1 = min x∈I |ƒ′(x)| and M2 = max x∈I |ƒ″(x)|. Then, for all x ∈ I:
|Nƒ(x) − α| ≤ (M2 ⁄ 2m1) |x − α|²
By immediate recurrence, it comes:
K |xn − α| ≤ (K |x0 − α|)^(2ⁿ)
where
K = M2 ⁄ 2m1. Passing to the logarithm:
log |xn − α| ≤ 2ⁿ log (K |x0 − α|) − log K
The convergence of xn towards α is thus quadratic, provided that |x0 − α| < 1 ⁄ K.
Stopping criterion
Possible stopping criteria, defined relative to a numerically negligible quantity, are:
||ƒ(xk)|| < ε1
or
||xk+1 − xk|| < ε2
where
ε1, ε2 ∈ ℝ₊
represent approximation errors characterizing the quality of the numerical solution.
In all cases, it may happen that the stopping criterion is satisfied at points not corresponding to solutions of the equation to be solved.
Square root
A particular case of Newton's method is the Babylonian algorithm, also known as Heron's method: it consists, in order to compute the square root of a, in applying Newton's method to the resolution of
ƒ(x) = x² − a
One then obtains, using the formula for the derivative ƒ′(x) = 2x, a method of approximation of the solution √a given by the following iterative formula:
xk+1 = xk − (xk² − a) ⁄ 2xk = (1 ⁄ 2)(xk + a ⁄ xk)
This method converges for all a ≥ 0 and any starting point x0 > 0.
One can extend it to the computation of any nth root of a number a with the formula:
xk+1 = xk − (xkⁿ − a) ⁄ (n xkⁿ⁻¹) = xk (1 + (1 ⁄ n)(a ⁄ xkⁿ − 1))
The convergence of the sequence (xk) is shown by recurrence: for a given k, one can show that if 0 ≤ √a ≤ xk then 0 ≤ √a ≤ xk+1 ≤ xk. Moreover, if 0 < xk ≤ √a, then √a ≤ xk+1. The sequence is thus decreasing, at least from the second term onwards. It is also bounded, therefore it converges. It remains to show that this limit ℓ is indeed equal to √a: one obtains this result by showing that ℓ = √a is necessary for xk+1 − ℓ to tend towards 0 as k tends towards +∞.
Intersection of graphs
One can determine an intersection of the graphs of two differentiable real functions ƒ and g, that is, a point x such that ƒ(x) = g(x), by applying Newton's method to the function ƒ − g.
Complex function
Newton's method applied to the polynomial z³ − 1 with complex variable z converges from all the points of the plane (of complex numbers) colored in red, green or blue towards one of the three roots of this polynomial, each color corresponding to a different root. The remaining points, lying on the lighter structure called the Newton fractal, are the starting points for which the method does not converge.
The method can also be applied to find zeros of complex functions. Depending on the starting point, the sequence of iterates may exhibit several behaviors:
convergence towards a zero
an infinite limit
the sequence admits a limit cycle; in other words, the sequence can be split into p disjoint subsequences of the form (z(n0 + kp))k, each of which converges towards distinct points (which are not zeros of ƒ) forming a periodic cycle for the function z − ƒ(z) ⁄ ƒ′(z)
the sequence approaches the set of zeros of the function without there being a limit cycle; at each step of the iteration, one finds oneself near a zero different from the previous ones
a chaotic behavior
Generalizations/alternatives
Systems of equations in several variables
One can also use Newton's method to solve a system of n (nonlinear) equations with n unknowns x = (x1,....,xn), which amounts to finding a zero of a function F from ℝⁿ to ℝⁿ, which must be differentiable. In the formulation given above, one must multiply by the inverse of the Jacobian matrix F′(xk) instead of dividing by ƒ′(xk). Rather than actually computing this inverse, one can solve the linear system
F′(xk)(xk+1 − xk) = −F(xk)
in the unknown xk+1 − xk. Once again, this method works only for an initial value x0 sufficiently close to a zero of F; a sketch follows.
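A minimal sketch (numpy assumed) of this multivariate iteration, solving the linear system at each step rather than inverting the Jacobian; the example system (a circle and a hyperbola) and the starting point are arbitrary.

```python
import numpy as np

# Solve x^2 + y^2 = 4 and x*y = 1 simultaneously by Newton's method.
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):                                     # Jacobian matrix of F
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [y, x]])

v = np.array([2.0, 0.5])                      # starting point near a solution
for _ in range(20):
    step = np.linalg.solve(J(v), -F(v))       # linear solve, no inversion
    v = v + step
    if np.linalg.norm(step) < 1e-12:
        break
print(v, F(v))   # v solves both equations to machine precision
```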

Symmetric matrix

In linear and bilinear algebra, a symmetric matrix is a square matrix that is equal to its own transpose.
Any diagonal matrix is symmetric.
Properties
A matrix representing a bilinear form is symmetric if and only if the latter is symmetric.
The set of symmetric matrices of order n with coefficients in a commutative field is a vector subspace of dimension n(n+1) ⁄ 2 of the vector space of square matrices of order n, and, if the characteristic of the field is different from 2, a complementary subspace is that of the antisymmetric matrices.
Real symmetric matrices
Euclidean structure
One notes Sn(ℝ), or simply Sn if there is no possible confusion, the vector space of real symmetric matrices of order n. This vector space of dimension n(n+1) ⁄ 2 is canonically equipped with a structure of Euclidean space, which is that of Mn(ℝ). The scalar product is defined by
(A,B) ∈ Sn × Sn → ⟨A,B⟩ := tr(AᵀB) = ∑1≤i≤n, 1≤j≤n AijBij ∈ ℝ
where
tr(A) = ∑i=1..n Aii
denotes the trace of A, and Aij denotes the (i,j) element of A.
The norm associated with this scalar product is the Frobenius norm, which one notes here simply
||A|| = √(tr(AᵀA)) = (∑1≤i≤n, 1≤j≤n Aij²)^(1/2)
With these notations, the Cauchy-Schwarz inequality is then written, for all A and B ∈ Sn:
|⟨A,B⟩| ≤ ||A|| ||B||
Spectral theory
Spectral decomposition
The spectral theorem (in finite dimension) states that any symmetric matrix with real entries is diagonalizable by means of an orthogonal matrix. Its eigenvalues are thus real and its eigenspaces are orthogonal, as the sketch below illustrates.
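This can be observed directly (numpy assumed): for a random real symmetric matrix, numpy's eigh routine returns real eigenvalues and an orthogonal eigenvector matrix.

```python
import numpy as np

# Spectral theorem in action: A = V diag(lam) V^T with V orthogonal.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0                           # a real symmetric matrix
lam, V = np.linalg.eigh(A)
print(np.allclose(V @ V.T, np.eye(4)))        # True: V is orthogonal
print(np.allclose(V @ np.diag(lam) @ V.T, A)) # True: A is diagonalized
```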
Fan's inequality
One notes λi(A) ∈ ℝ
the n eigenvalues of A ∈ Sn, which one arranges in decreasing order:
λ1(A) ≥ λ2(A) ≥ ... ≥ λn(A)
One introduces the map
λ : Sn → ℝⁿ : A → (λ1(A),...,λn(A))
and, for a (column) vector
v ∈ ℝⁿ,
one notes vᵀ the transposed vector and
Diag(v) the diagonal matrix whose (i,i) element is vi.
Fan's inequality:
For all A and B ∈ Sn, one has
⟨A,B⟩ ≤ λ(A)ᵀ λ(B)
with equality if and only if the ordered spectral decompositions
λ(A) and λ(B) of A and B can be obtained with the same orthogonal matrix, that is, if and only if
∃ V orthogonal: A = V Diag(λ(A)) Vᵀ and B = V Diag(λ(B)) Vᵀ
Positive symmetric matrices
Positive matrix and positive definite matrix.
A real symmetric matrix S of order n is said to be positive if the associated symmetric bilinear form is positive, that is, if
∀ x ∈ ℝⁿ, xᵀ Sx ≥ 0
A real symmetric matrix S of order n is said to be positive definite if the associated bilinear form is definite and positive, that is, if
∀ x ∈ ℝⁿ \ {0}, xᵀ Sx > 0

In this chapter, K denotes a commutative field.
Definitions
Let n and p be two nonzero natural integers.
We call a matrix with elements in K, of type (n, p), any map from {1,2,..., n} × {1,2,..., p} to K (a family of elements of K indexed by {1,2,..., n} × {1,2,..., p}), that is, a rectangular table with n rows and p columns of the form:
a11 a12 ⋯ a1p
a21 a22 ⋯ a2p
⋮   ⋮       ⋮
an1 an2 ⋯ anp

where the
a11, a12, ..., anp
∈ K are called the elements or the coefficients of the matrix.
Such a matrix is also noted
(aij) 1 ≤ i ≤ n, 1 ≤ j ≤ p
or more simply (aij).
The set of matrices of type (n, p) with elements in K is noted Mnp(K).
When n = p, the matrix is said to be square, of order n.
When p = 1, the matrix comprises a single column of n elements:
a1
a2
⋮
an

and one speaks of a column vector.
The set of square matrices of type (n, n), or of order n, is noted Mn(K).
When K = ℝ, the matrix is said to be real.
When K = ℂ, the matrix is said to be complex.
The elements a11, a22, ..., ann form the principal diagonal of the matrix.
Inverse matrix
Let M be a matrix. The inverse of M, if it exists, is defined as the unique matrix N such that: M · N = N · M = In
Transposed matrix
First of all, one speaks of the transpose of a matrix. The transpose of a matrix M is noted ᵗM.
It is the matrix obtained from M by exchanging the rows and the columns: to obtain N = ᵗM, one sets nij = mji (with N = (nij) and M = (mij)).
Other notation: ᵗM = (mji).
Property: when the matrix M is symmetric, one has mij = mji, which gives ᵗM = M.
Diagonal matrix
A square matrix (aij) is said to be diagonal if all the elements off the diagonal are zero: ∀ (i,j) ∈ {1,...,n}², i ≠ j ⇒ aij = 0. Such a matrix is noted diag(a11, a22, ..., ann).
The set of diagonal matrices is noted Dn(K).
Triangular matrix
Lower triangular matrix
A square matrix (aij) is said to be lower triangular (or lower trigonal) if all the elements located above the principal diagonal are zero: ∀ (i,j) ∈ {1,...,n}², i < j ⇒ aij = 0.
If, moreover, the elements of the principal diagonal are zero, the matrix is said to be strictly lower triangular (or strictly lower trigonal).
1 0 0
2 3 0
4 5 6

A lower triangular matrix
0 0 0
2 0 0
4 5 0

A strictly lower triangular matrix
The set of lower triangular matrices is noted Ti(K).
Upper triangular matrix
In a similar way, a square matrix (aij) is said to be upper triangular (or upper trigonal) if all the elements located below the principal diagonal are zero: ∀ (i,j) ∈ {1,...,n}², i > j ⇒ aij = 0.
If, moreover, the elements of the principal diagonal are zero, the matrix is said to be strictly upper triangular (or strictly upper trigonal).
1 2 3
0 4 5
0 0 6

An upper triangular matrix
0 2 3
0 0 5
0 0 0

A strictly upper triangular matrix
The set of upper triangular matrices is noted Ts(K).
The determinant of a triangular matrix is equal to the product of the terms of the principal diagonal. For the first example: det = 1 × 4 × 6 = 24
Diagonal matrix
A square matrix is called a diagonal matrix when aij = 0 for all i ≠ j, which means that all the elements located off the principal diagonal are zero. If all the nonzero elements of a diagonal matrix are equal, the matrix is called a scalar matrix.
1 0 0
0 2 0
0 0 3

A diagonal matrix
2 0 0
0 2 0
0 0 2

A scalar matrix
Identity matrix
An identity matrix is a scalar matrix where aii = 1.
1 0 0
0 1 0
0 0 1

A 3×3 identity matrix
When one multiplies a matrix by the identity matrix, one recovers the starting matrix: An×m · Im = An×m
Symmetric and antisymmetric matrices
A matrix A is said to be symmetric if it is equal to its transpose:
ᵗA = A
A matrix A is said to be antisymmetric if it is equal to the opposite of its transpose:
ᵗA = −A
Orthogonal matrices
M and N are two mutually orthogonal matrices if MN = NM = 0
Idempotent matrices
These matrices have the following property: M² = M (and hence Mⁿ = M for every n ≥ 1)
Nilpotent matrices
A matrix M is said to be nilpotent if: ∃ p ∈ ℕ : Mᵖ = 0
