



In order to transform an equation from the differential form to the integral form (or vice versa), the Gauss (divergence) theorem is applied. This theorem is important in fluid dynamics.
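As a concrete illustration, the statement of the Gauss theorem can be checked numerically. The sketch below (a minimal illustration, not from the original notes) uses the hypothetical field $\mathbf{F} = (x, y, z)$ on the unit cube, whose divergence is 3 everywhere, and compares the volume integral of $\nabla \cdot \mathbf{F}$ with the flux of $\mathbf{F}$ through the cube's surface:

```python
# Numerical check of the Gauss (divergence) theorem on the unit cube
# for the illustrative field F = (x, y, z), whose divergence is 3.

def volume_integral(n=20):
    # Midpoint rule for the integral of div F = 3 over the unit cube.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += 3.0 * h**3   # div F = 3 at every midpoint
    return total

def surface_integral(n=20):
    # Flux of F = (x, y, z) through the six faces of the unit cube.
    # On the face x = 1 the outward normal is (1, 0, 0), so F.n = x = 1;
    # on the face x = 0, F.n = -x = 0, and likewise for y and z.
    h = 1.0 / n
    flux = 0.0
    for a in range(n):
        for b in range(n):
            flux += 3 * (1.0 * h**2)  # the three faces where F.n = 1
            flux += 3 * (0.0 * h**2)  # the three faces where F.n = 0
    return flux

print(volume_integral())   # 3.0 up to floating-point rounding
print(surface_integral())  # 3.0 up to floating-point rounding
```

Both integrals agree, as the theorem requires: $\oint_S \mathbf{F} \cdot \mathbf{n}\, dS = \int_V \nabla \cdot \mathbf{F}\, dV$.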
For vector and tensor equations, the longest but clearest notation is the Cartesian one. If an equation contains several similar terms that can be summed up, this notation can be abbreviated by applying the Einstein summation convention.
The Cartesian form is given, for example for the divergence of the velocity vector, by:

$$ \nabla \cdot \mathbf{u} = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} $$

To simplify this equation, the Einstein summation convention can be applied. Commonly, the summation sign is omitted, and summation is implied over any index that appears twice in a term. For example:

$$ \frac{\partial u_i}{\partial x_i} = \sum_{i=1}^{3} \frac{\partial u_i}{\partial x_i} = \frac{\partial u_1}{\partial x_1} + \frac{\partial u_2}{\partial x_2} + \frac{\partial u_3}{\partial x_3} $$
The rules of the Einstein summation convention
If a subscript appears more than once in the same term, summation over that subscript is implied; such a subscript is called a dummy index.
the dot product of two vectors: $\mathbf{a} \cdot \mathbf{b} = a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3$
the derivative of the velocity vector (its divergence): $\frac{\partial u_i}{\partial x_i} = \frac{\partial u_1}{\partial x_1} + \frac{\partial u_2}{\partial x_2} + \frac{\partial u_3}{\partial x_3}$
If two different subscripts appear in one term, summation is implied over the subscript that appears twice, while the subscript that appears only once is not summed; instead, the term is evaluated once for each coordinate axis, one axis at a time. Such a subscript is called a free index.
Example 1: $a_i = T_{ij} b_j$ stands for three equations, one for each value of the free index $i$; $j$ is a dummy index and is summed.
Example 2: $\frac{\partial u_i}{\partial x_j}$ has two free indices and therefore represents nine components.
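The two rules above can be sketched in plain Python (the names `dot` and `mat_vec` are illustrative, not from the original notes): a dummy index becomes an explicit sum, and a free index becomes one evaluation per coordinate axis.

```python
# What the Einstein summation convention abbreviates, written out explicitly.

def dot(a, b):
    # a_i b_i : the index i appears twice -> dummy index, summation implied
    return sum(a[i] * b[i] for i in range(3))

def mat_vec(T, b):
    # c_i = T_ij b_j : j is a dummy index (summed over),
    # i is a free index (the expression is evaluated once per axis i)
    return [sum(T[i][j] * b[j] for j in range(3)) for i in range(3)]

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
T = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 3.0]]

print(dot(a, b))      # 1*4 + 2*5 + 3*6 = 32.0
print(mat_vec(T, b))  # [4.0, 10.0, 18.0]
```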
Taking the N-S (Navier-Stokes) momentum equations as an example:

$$ \begin{cases} \dfrac{\partial u}{\partial t} + u\dfrac{\partial u}{\partial x} + v\dfrac{\partial u}{\partial y} + w\dfrac{\partial u}{\partial z} = f_x - \dfrac{1}{\rho}\dfrac{\partial p}{\partial x} + \nu\left(\dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2} + \dfrac{\partial^2 u}{\partial z^2}\right) \\ \dfrac{\partial v}{\partial t} + u\dfrac{\partial v}{\partial x} + v\dfrac{\partial v}{\partial y} + w\dfrac{\partial v}{\partial z} = f_y - \dfrac{1}{\rho}\dfrac{\partial p}{\partial y} + \nu\left(\dfrac{\partial^2 v}{\partial x^2} + \dfrac{\partial^2 v}{\partial y^2} + \dfrac{\partial^2 v}{\partial z^2}\right) \\ \dfrac{\partial w}{\partial t} + u\dfrac{\partial w}{\partial x} + v\dfrac{\partial w}{\partial y} + w\dfrac{\partial w}{\partial z} = f_z - \dfrac{1}{\rho}\dfrac{\partial p}{\partial z} + \nu\left(\dfrac{\partial^2 w}{\partial x^2} + \dfrac{\partial^2 w}{\partial y^2} + \dfrac{\partial^2 w}{\partial z^2}\right) \end{cases} $$
Each of the three equations contains terms of the same structure; only the coordinate direction and the velocity component change.
Based on the discussion above, the N-S equation can be re-written in tensor form:
in vector form:

$$ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = \mathbf{f} - \frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u} $$

in tensor (index) form:

$$ \frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = f_i - \frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \frac{\partial^2 u_i}{\partial x_j \partial x_j} $$

Here $i$ is a free index (one equation per axis) and $j$ is a dummy index (summed).
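To see what the compact index form packs together, the convective term $u_j \frac{\partial u_i}{\partial x_j}$ can be expanded back into its Cartesian components programmatically. This is a small illustrative sketch (the function name `convective_terms` is made up for this example); each string it produces matches one line of the component-wise N-S system above.

```python
# Expand the index-notation convective term u_j du_i/dx_j into its
# three Cartesian components: i is free (one line per axis),
# j is a dummy index (summed inside each line).

def convective_terms():
    u = ["u", "v", "w"]   # velocity components u_1, u_2, u_3
    x = ["x", "y", "z"]   # coordinates x_1, x_2, x_3
    eqs = []
    for i in range(3):                                          # free index i
        terms = [f"{u[j]}*d{u[i]}/d{x[j]}" for j in range(3)]   # dummy index j
        eqs.append(" + ".join(terms))
    return eqs

for eq in convective_terms():
    print(eq)
# component 1: u*du/dx + v*du/dy + w*du/dz
# component 2: u*dv/dx + v*dv/dy + w*dv/dz
# component 3: u*dw/dx + v*dw/dy + w*dw/dz
```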
Inner product or dot product
In general, in engineering contexts the inner product and the dot product are treated as the same operation.
The inner product of two vectors gives a scalar: $\mathbf{a} \cdot \mathbf{b} = a_i b_i$
The inner product of a vector and a second-rank tensor gives a vector: $(\mathbf{a} \cdot \mathbf{T})_j = a_i T_{ij}$; if the tensor is symmetric ($T_{ij} = T_{ji}$), then $\mathbf{a} \cdot \mathbf{T} = \mathbf{T} \cdot \mathbf{a}$.
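The two inner products can be spelled out in plain Python to show how contraction lowers the rank (the function names are illustrative): contracting one index pair turns vector·vector into a scalar and vector·tensor into a vector.

```python
# Inner products in index notation, written out explicitly.

def inner_vec_vec(a, b):
    # a . b = a_i b_i : rank 1 with rank 1 contracts to rank 0 (a scalar)
    return sum(a[i] * b[i] for i in range(3))

def inner_vec_tensor(a, T):
    # (a . T)_j = a_i T_ij : rank 1 with rank 2 contracts to rank 1 (a vector)
    return [sum(a[i] * T[i][j] for i in range(3)) for j in range(3)]

a = [1.0, 2.0, 3.0]
T = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]

print(inner_vec_vec(a, a))     # 1 + 4 + 9 = 14.0
print(inner_vec_tensor(a, T))  # [30.0, 36.0, 42.0]
```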
The outer product (dyadic product) of two vectors gives a second-rank tensor: $(\mathbf{a} \otimes \mathbf{b})_{ij} = a_i b_j$
Usually, the sign $\otimes$ is omitted and the dyadic product is written simply as $\mathbf{a}\mathbf{b}$. But the first notation, $\mathbf{a} \otimes \mathbf{b}$, is the mathematically correct one.
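In index notation the dyadic product has two free indices and no dummy index, so nothing is summed and the result has nine components. A minimal sketch (the name `dyadic` is illustrative):

```python
# Dyadic (outer) product: (a (x) b)_ij = a_i b_j.
# Both i and j are free indices, so the result is a rank-2 tensor
# represented here as a 3x3 nested list.

def dyadic(a, b):
    return [[a[i] * b[j] for j in range(3)] for i in range(3)]

print(dyadic([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
# [[4.0, 5.0, 6.0], [8.0, 10.0, 12.0], [12.0, 15.0, 18.0]]
```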
The magnitude of the cross product of two vectors equals the area of the parallelogram spanned by them: $|\mathbf{a} \times \mathbf{b}| = |\mathbf{a}||\mathbf{b}|\sin\theta$
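This geometric interpretation is easy to verify numerically; the sketch below (function names are illustrative) computes $|\mathbf{a} \times \mathbf{b}|$ for two rectangles whose areas are known in advance.

```python
import math

def cross(a, b):
    # a x b in Cartesian components
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def parallelogram_area(a, b):
    # |a x b| = area of the parallelogram spanned by a and b
    c = cross(a, b)
    return math.sqrt(sum(ci * ci for ci in c))

# Unit square in the x-y plane: area 1
print(parallelogram_area([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 1.0
# A 2-by-3 rectangle: area 6
print(parallelogram_area([2.0, 0.0, 0.0], [0.0, 3.0, 0.0]))  # 6.0
```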
Differential Operators
The spatial derivative of a variable (scalar, vector or tensor) is taken by applying the nabla operator, $\nabla = \mathbf{e}_i \frac{\partial}{\partial x_i}$.
the gradient of a scalar gives a vector: $(\nabla \phi)_i = \frac{\partial \phi}{\partial x_i}$
the gradient of a vector gives a second-rank tensor: $(\nabla \mathbf{u})_{ij} = \frac{\partial u_j}{\partial x_i}$
So the gradient operation increases the rank of the tensor by one.
The divergence of a vector gives a scalar, $\nabla \cdot \mathbf{u} = \frac{\partial u_i}{\partial x_i}$. So the divergence operation decreases the rank of the tensor by one.
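These rank-changing rules can be demonstrated with central finite differences. The sketch below (an illustration with made-up fields, not from the original notes) uses $\phi = xyz$, whose gradient is $(yz,\, xz,\, xy)$, and $\mathbf{u} = (x, y, z)$, whose divergence is 3: the gradient of a scalar comes out as a list (a vector), and the divergence of a vector comes out as a single number (a scalar).

```python
# Gradient raises the rank by one; divergence lowers it by one.
# Demonstrated with central finite differences on illustrative fields.

h = 1e-5  # finite-difference step

def grad(phi, p):
    # (grad phi)_i = d(phi)/dx_i : scalar field (rank 0) -> vector (rank 1)
    out = []
    for i in range(3):
        q_plus = list(p);  q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        out.append((phi(q_plus) - phi(q_minus)) / (2 * h))
    return out

def div(u, p):
    # div u = du_i/dx_i : vector field (rank 1) -> scalar (rank 0)
    total = 0.0
    for i in range(3):
        q_plus = list(p);  q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        total += (u(q_plus)[i] - u(q_minus)[i]) / (2 * h)
    return total

phi = lambda p: p[0] * p[1] * p[2]   # grad phi = (yz, xz, xy)
u = lambda p: [p[0], p[1], p[2]]     # div u = 3

print(grad(phi, [1.0, 2.0, 3.0]))   # approximately [6.0, 3.0, 2.0]
print(div(u, [1.0, 2.0, 3.0]))      # approximately 3.0
```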