In mathematics, tensor calculus, tensor analysis, or Ricci calculus is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime).
Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita,[1] it was used by Albert Einstein to develop his general theory of relativity. Unlike the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold.
In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.
Tensor notation makes use of upper and lower indexes on objects to label a variable object as covariant (lower index), contravariant (upper index), or mixed covariant and contravariant (having both upper and lower indexes). In fact, conventional math syntax makes use of covariant indexes when dealing with Cartesian coordinate systems $(x_1, x_2, x_3)$, frequently without realizing that this is a limited use of tensor syntax as covariant indexed components.
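For instance (a purely notational illustration, not drawn from the text above), the same Cartesian triple written in full tensor syntax would ordinarily carry upper, contravariant indices, $(x^1, x^2, x^3)$, while a mixed object such as $T^i_{\ j}$ carries one index of each kind.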
Tensor notation allows an upper index on an object that may be confused with normal power operations from conventional math syntax. For example, in normal math syntax, $e = mc^2 = mcc$; in tensor syntax, however, a parenthesis should be used around an object before raising it to a power, to disambiguate the use of a tensor index from a normal power operation. In tensor syntax we would write $e = m(c^1)^2 = m(c^1)(c^1)$ and $e = m(c^2)^2 = m(c^2)(c^2)$. The number in the inner parenthesis distinguishes the contravariant component, while the number outside the parenthesis distinguishes the power to raise the quantity to. Of course this is just an arbitrary equation; we could have specified that $c$ is not a tensor, and known that this particular variable does not need a parenthesis around it to raise the quantity $c$ to the power of 2. However, if $c$ were a vector, it could be represented as a tensor, and this tensor would need to be distinguished from normal math indexes that indicate raising a quantity to a power.
For example, in physics you start with a vector field, you decompose it with respect to the covariant basis, and that is how you get the contravariant components. For orthonormal Cartesian coordinates, the covariant and contravariant bases are identical, since the basis set in this case is just the identity matrix; however, for non-affine coordinate systems such as polar or spherical coordinates, there is a need to distinguish between decomposition with respect to the contravariant basis set and decomposition with respect to the covariant basis set when generating the components of the coordinate system.
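As a concrete sketch (plane polar coordinates $(r, \theta)$ are an assumed example, not taken from the text above), the covariant basis vectors are the partial derivatives of the position vector with respect to each coordinate, and a vector $\mathbf{v}$ is expanded in that basis with contravariant components $v^i$:

$$ \mathbf{e}_r = \frac{\partial \mathbf{r}}{\partial r} = (\cos\theta,\ \sin\theta), \qquad \mathbf{e}_\theta = \frac{\partial \mathbf{r}}{\partial \theta} = (-r\sin\theta,\ r\cos\theta), \qquad \mathbf{v} = v^r \mathbf{e}_r + v^\theta \mathbf{e}_\theta . $$

Because $\mathbf{e}_\theta$ has length $r$ rather than $1$, the contravariant components $v^i$ and the covariant components $v_i = \mathbf{v} \cdot \mathbf{e}_i$ no longer coincide, which is exactly the distinction that never arises in orthonormal Cartesian coordinates.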
The metric tensor represents a matrix with scalar elements ($Z_{ij}$ or $Z^{ij}$) and is a tensor object which is used to raise or lower the index on another tensor object by an operation called contraction, thus allowing a covariant tensor to be converted to a contravariant tensor, and vice versa.
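In index form (a generic sketch of the contraction just described, not tied to any particular coordinate system), lowering and raising an index on a vector reads

$$ V_i = Z_{ij} V^j, \qquad V^i = Z^{ij} V_j , $$

where the repeated index $j$ is summed over (the Einstein summation convention); that summation over one upper and one lower index is the contraction referred to above.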
This means that if we take every permutation of a basis vector set, dot the members of each pair against each other, and then arrange the results into a square matrix, we would have a metric tensor. The caveat here is which of the two vectors in the permutation is used for projection against the other vector; that is the distinguishing property of the covariant metric tensor in comparison with the contravariant metric tensor.
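In symbols (continuing the assumed polar-coordinate sketch from above), the covariant metric tensor collects the dot products of the covariant basis vectors, while the contravariant metric tensor does the same for the dual (contravariant) basis:

$$ Z_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j = \begin{pmatrix} 1 & 0 \\ 0 & r^2 \end{pmatrix}, \qquad Z^{ij} = \mathbf{e}^i \cdot \mathbf{e}^j = \begin{pmatrix} 1 & 0 \\ 0 & 1/r^2 \end{pmatrix}. $$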
Two flavors of metric tensors exist: (1) the contravariant metric tensor ($Z^{ij}$), and (2) the covariant metric tensor ($Z_{ij}$). These two flavors of metric tensor are related by the identity

$$ Z^{ik} Z_{kj} = \delta^i_j , $$

that is, each is the matrix inverse of the other.
For an orthonormal Cartesian coordinate system, the metric tensor is just the Kronecker delta $\delta_{ij}$ or $\delta^{ij}$, which is just a tensor equivalent of the identity matrix, and $\delta_{ij} = \delta^{ij} = \delta_j^i$.
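A short numerical sketch can tie these relations together (this is an illustration only; the coordinate choice, sample point, and variable names are assumptions, not anything prescribed by the text above):

```python
import numpy as np

# Plane polar coordinates (r, theta) at an arbitrary sample point -- an assumed example.
r, theta = 2.0, 0.3

# Covariant basis vectors e_r and e_theta, written out in Cartesian components.
e_r = np.array([np.cos(theta), np.sin(theta)])
e_theta = np.array([-r * np.sin(theta), r * np.cos(theta)])
basis = np.array([e_r, e_theta])

# Covariant metric tensor Z_ij = e_i . e_j (all pairwise dot products of the basis).
Z_lower = basis @ basis.T            # [[1, 0], [0, r**2]]

# Contravariant metric tensor Z^ij is its matrix inverse.
Z_upper = np.linalg.inv(Z_lower)     # [[1, 0], [0, 1/r**2]]

# The two flavors contract to the Kronecker delta (identity matrix).
assert np.allclose(Z_upper @ Z_lower, np.eye(2))

# Lowering and raising an index on a vector with contravariant components (v^r, v^theta).
v_upper = np.array([1.0, 0.5])       # arbitrary contravariant components
v_lower = Z_lower @ v_upper          # v_i = Z_ij v^j  (lowering)
assert np.allclose(Z_upper @ v_lower, v_upper)  # Z^ij v_j recovers v^i (raising)
```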
In contrast with tensor calculus, the gradient vector formula of standard calculus depends on the coordinate system in use (for example, the Cartesian gradient vector formula vs. the polar gradient vector formula vs. the spherical gradient vector formula, etc.). In standard calculus, each coordinate system has its own specific formula, whereas tensor calculus has only one gradient formula that is equivalent for all coordinate systems. This is made possible by an understanding of the metric tensor that tensor calculus makes use of.
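The single formula in question is standard tensor-calculus material and can be sketched as follows: the contravariant gradient components are obtained from the partial derivatives through the contravariant metric tensor,

$$ (\nabla f)^i = Z^{ij} \frac{\partial f}{\partial x^j} , $$

and specializing the metric reproduces each familiar formula. In the plane polar example above, $Z^{rr} = 1$ and $Z^{\theta\theta} = 1/r^2$, so $\nabla f = \frac{\partial f}{\partial r}\,\mathbf{e}_r + \frac{1}{r^2}\frac{\partial f}{\partial \theta}\,\mathbf{e}_\theta$, which is the usual polar gradient once $\mathbf{e}_\theta$ is rewritten in terms of the unit vector $\hat{\boldsymbol{\theta}} = \mathbf{e}_\theta / r$.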
Introductory course in modern differential geometry focusing on examples, broadly aimed at students in mathematics, the sciences, and engineering. Emphasis is on rigorously presented concepts, tools and ideas rather than on proofs. Topics covered include differentiable manifolds, tangent spaces and orientability; vector and tensor fields; differential forms; integration on manifolds and Generalized Stokes' Theorem; Riemannian metrics, Riemannian connections and geodesics. Applications to configuration and phase spaces, Maxwell equations and relativity theory will be discussed.
THE vector analysis of Gibbs and Heaviside and the more general tensor analysis of Ricci are now recognized as standard tools in mechanics, hydro-dynamics and electrodynamics. Their use not only materially simplifies and condenses the exposition, but also makes mathematical concepts more tangible and easy to grasp. Moreover, tensor analysis provides a simple automatic method for constructing invariants. Since a tensor equation has precisely the same form in all co-ordinate systems, the desirability of stating physical laws or geometrical properties in tensor form is manifest. The perfect adaptability of the tensor calculus to the theory of relativity was responsible for its original renown. It has since won a firm place in mathematical physics and engineering technology. Thus Sir Edmund Whittaker rates the tensor calculus as one of the three principal mathematical advances in the last quarter of the nineteenth century.
This is a concise but thorough text in vectors and tensors from the physics (not linear algebra) point of view. The text is unusual in that tensors take central place. It starts out with vectors, but at least half the material in the rest of the book is stated for general tensors. It is one of a series of texts that Richard A. Silverman prepared in the 1960s and 1970s by translating and freely adapting Russian-language texts. The present volume is a Dover 1979 corrected reprint of the 1968 Prentice-Hall edition.
A good knowledge of physics is almost essential to use this book. The vector and tensor parts start from scratch, but all the examples are drawn from physics, with no explanation of the physical concepts used. The main areas of physics covered are fluid dynamics and electromagnetic theory. There is a great deal on the metric tensor but (oddly) nothing on relativity.
The applications come at the end of the book, in the last chapter, but are worth the wait. They are extended analyses of physics problems and use all the preceding material. I think the reason they come so late is that they make heavy use of vector calculus, and that is the last thing developed. There are also little examples scattered through the text, but most of these just point out that some physical quantity is a tensor without telling you how this information is useful.
In this section, we briefly introduce tensors, their significance to fluid dynamics and their applications. Tensor analysis is a powerful tool that enables the reader to study and to understand more effectively the fundamentals of fluid mechanics. Once the basics of tensor analysis are understood, the reader will be able to derive all conservation laws of fluid mechanics without memorizing a single equation. In this section, we focus on tensor-analytical applications rather than mathematical details and proofs that are not primarily relevant to engineering students. To avoid unnecessary repetition, we present the definition of tensors from a unified point of view and use exclusively the three-dimensional Euclidean space, with N = 3 as the number of dimensions. The material presented in this chapter draws on classical tensor and vector analysis texts, among others those mentioned in the References. It is tailored to the specific needs of fluid mechanics and is considered to be helpful for readers with limited knowledge of tensor analysis.
Scalar, vector, tensor - a mathematical representation of a physical entity that may be characterized by a magnitude and/or directions associated with it. Scalars, vectors and tensors are quantities which do not change if the system of coordinates is changed (e.g. between Cartesian, cylindrical, and spherical).1)2)
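A more precise way to phrase this (a standard statement, added here for clarity): the entity itself is invariant, while its components transform in a compensating way under a change of coordinates $x^i \to x'^i$. For the contravariant components of a vector, for example,

$$ v'^i = \frac{\partial x'^i}{\partial x^j}\, v^j , \qquad \mathbf{v} = v^j \mathbf{e}_j = v'^i \mathbf{e}'_i , $$

so the individual components change from one coordinate system to the next while the vector they describe does not.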
Vectors can be analysed from the viewpoint of covariant and contravariant components, and can be transformed between various coordinate systems, including non-orthogonal ones. There are many detailed implications regarding vectors and calculations based on them, and it is best to study the relevant textbooks and literature, as well as practice with simpler cases, before performing more complex calculations.1) This article contains only the most basic information.