- Matrix rules
- Determinant rules
- Vector calculation
- Vector scalar
- Vector addition
- Vector subtraction
- Inner product
- Cross product
- Length
- Matrix

#### Graphical

#### Calculator

- Adjugate matrix
- Inverse Matrix
- Determinant 2x2
- Determinant 3x3
- Determinant 3x3 symbolic
- Determinant 4x4
- Determinant 4x4 symbolic
- Determinant 5x5
- Determinant NxN

#### The Term Matrix

The term matrix was introduced in 1850 by **James Joseph Sylvester**. James Joseph Sylvester (born September 3, 1814 in London; died March 15, 1897) was a British mathematician. Even before matrices, determinants were analyzed as a property of linear systems, in the late 16th century by **Cardano**. About 100 years later, **Leibniz** continued these investigations on a more general basis.

As early as 200 BC, Chinese mathematicians knew solution methods for 3x3 systems of equations. They recognized that the solution of a linear system depends only on its coefficients, which led them to a matrix notation.

Algebraic properties of matrices were introduced by **Carl Friedrich Gauss**, though not to describe linear systems but linear maps.

The first to systematically investigate matrices as algebraic objects was **Arthur Cayley** (1821-1895). He recognized the connection between matrices and systems of linear equations and defined the sum, scalar multiple, and product of matrices. He also introduced the inverse of a matrix.

A complete classification of complex matrices is based on the Jordan normal form, which was introduced by **Camille Jordan** (1838-1922).

A matrix is a system of elements a _{ ij } arranged in a two-dimensional rectangular scheme. If the scheme has m rows and n columns, one speaks of an (m, n)-matrix. The position of an element within the matrix is characterized by two subscripts: the first index is the row number and the second index is the column number. The numbering starts at the top left of the matrix and proceeds from left to right and from top to bottom.

If n = m holds for a matrix, it is referred to as a **square matrix**.
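As an illustration, an (m, n)-matrix can be represented in Python as a nested list of rows (a minimal sketch; the variable names are my own, and note that Python indices start at 0 while the notation above starts at 1):

```python
# A (2, 3)-matrix: m = 2 rows, n = 3 columns
A = [
    [1, 2, 3],   # row 1: a11, a12, a13
    [4, 5, 6],   # row 2: a21, a22, a23
]

m = len(A)       # number of rows
n = len(A[0])    # number of columns

# a_{12} (row 1, column 2) is A[0][1] with 0-based indexing
a12 = A[0][1]

is_square = (m == n)   # True only for a square matrix
```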

#### Main Diagonal

The elements of the matrix with equal subscripts, i = j, i.e. the elements a _{ ii }, are the diagonal elements. Together they form the **main diagonal** of the matrix. The elements from the lower left to the upper right are referred to as the secondary diagonal. When speaking of the **n**-th main diagonal, one means the n-th parallel to the main diagonal. The same applies to the secondary diagonals.
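The two diagonals can be read off with a short Python sketch (helper names are my own):

```python
A = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
n = len(A)

# main diagonal: elements a_{ii}, top left to bottom right
main_diagonal = [A[i][i] for i in range(n)]

# secondary diagonal: lower left to upper right
secondary_diagonal = [A[n - 1 - i][i] for i in range(n)]
```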

#### Unit Matrix

The matrix in which all elements of the main diagonal are equal to 1 and all other elements are equal to 0 is called the **unit matrix E**.
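A unit matrix of any size can be built directly from this definition, e.g. in Python (a small sketch; the function name is my own):

```python
def unit_matrix(n):
    # 1 on the main diagonal (i == j), 0 everywhere else
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

E = unit_matrix(3)   # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```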

#### Transposed Matrix

The matrix mirrored at its main diagonal is called the transposed matrix. For a matrix A = (a _{ ij }), the transposed matrix is given by A ^{ T } = (a _{ ji }). The transpose of a transposed matrix is the matrix itself: A = (A^{T})^{T}.
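Transposition can be sketched in Python by swapping the two subscripts (the function name is my own); applying it twice confirms A = (A^{T})^{T}:

```python
def transpose(A):
    # (A^T)_{ij} = a_{ji}: rows become columns and vice versa
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]
At = transpose(A)   # a (3, 2)-matrix: [[1, 4], [2, 5], [3, 6]]
```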

#### Determinant

Each square matrix can be assigned a unique number, which is referred to as the determinant det(A) of the matrix.

For 2x2 and 3x3 matrices, the determinant is calculated by multiplying the elements along each diagonal parallel to the main diagonal and adding these products; the products along the secondary diagonals are then subtracted.

Larger determinants are reduced to smaller determinants by means of the Laplace expansion theorem. In this method, the determinant is expanded along an arbitrary row or column.
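The Laplace expansion can be sketched as a recursive Python function (an illustration, not an efficient implementation; here the expansion is always along the first row):

```python
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # subdeterminant: remove row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # cofactor sign (-1)^{0+j}
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

For a 2x2 matrix this reproduces a11 * a22 - a12 * a21.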

#### Calculation Rules for Matrices

Matrix multiplication is associative:

A *( B * C ) = ( A * B ) * C

Matrix multiplication is distributive over matrix addition:

A *( B + C ) = A * B + A * C

For addition and multiplication by real numbers λ, μ:

(λ + μ)A = λA + μA

and: λ (A + B) = λA + λB

There are zero divisors, i.e. matrices A ≠ 0 and B ≠ 0 for which:

A * B = 0

For square matrices, the following holds:

det(A * B) = det(A) * det(B)
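These rules can be checked numerically, e.g. with NumPy (a quick sketch; the matrices are arbitrarily chosen examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [1.0, 3.0]])

# associativity and distributivity
assert np.allclose(A @ (B @ C), (A @ B) @ C)
assert np.allclose(A @ (B + C), A @ B + A @ C)

# product rule for determinants
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# zero divisors: Z1 != 0 and Z2 != 0, but Z1 * Z2 = 0
Z1 = np.array([[1.0, 0.0], [0.0, 0.0]])
Z2 = np.array([[0.0, 0.0], [0.0, 1.0]])
assert np.allclose(Z1 @ Z2, 0)
```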

#### Inverse Matrix

The inverse matrix A ^{ -1 } is defined by the following equation

A * A^{-1} = E

Matrices for which an inverse exists are referred to as regular matrices. Matrices which have no inverse are called singular matrices.

For the inverse matrix, the following calculation rules are valid:

(A * B)^{-1} = B^{-1} * A^{-1}

(A^{-1})^{-1} = A

The inverse matrix A ^{ -1 } is calculated either by means of the Gauss-Jordan algorithm or via the adjugate. The Gauss-Jordan method transforms the matrix (A | E) into the form (E | A ^{ -1 }), from which A ^{ -1 } can be read directly. With the adjugate and the determinant, the inverse can be specified directly as A^{-1} = 1/det(A) * adj(A)^{T}.
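The adjugate route can be sketched in plain Python (helper names are my own; det uses the Laplace expansion from above, and the matrix is assumed to be regular, i.e. det(A) ≠ 0):

```python
def det(A):
    # Laplace expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def minor(A, i, j):
    # submatrix with row i and column j removed
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def inverse(A):
    n = len(A)
    d = det(A)   # must be non-zero (regular matrix)
    # entry (i, j) of the inverse is the cofactor (-1)^{i+j} * det(minor)
    # taken from the transposed position (j, i), divided by det(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) / d
             for j in range(n)]
            for i in range(n)]

A = [[4.0, 7.0],
     [2.0, 6.0]]
Ainv = inverse(A)   # [[0.6, -0.7], [-0.2, 0.4]]
```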

#### Classes of Matrices

A square matrix A is called a

- *symmetric* matrix if and only if A ^{ T } = A
- *antisymmetric* matrix if and only if A ^{ T } = -A
- *orthogonal* matrix if and only if A^{T} = A^{-1}
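These defining equations can be verified numerically, e.g. with NumPy (a small sketch with hand-picked example matrices):

```python
import numpy as np

S = np.array([[1.0, 2.0], [2.0, 3.0]])    # symmetric: S^T = S
K = np.array([[0.0, 2.0], [-2.0, 0.0]])   # antisymmetric: K^T = -K
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # orthogonal (rotation by 90 degrees)

assert np.array_equal(S.T, S)
assert np.array_equal(K.T, -K)
assert np.allclose(R.T @ R, np.eye(2))    # equivalent to R^T = R^{-1}
```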

#### Adjugate Matrix

The adjugate of matrix A is calculated in such a way that for each matrix element a_{ij} a subdeterminant is formed by removing row i and column j. The value of this determinant is multiplied by (-1)^{i+j}; this gives the element (i, j) of the adjugate matrix.
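For the 3x3 case this construction can be sketched in Python (function names are my own); each entry is the 2x2 subdeterminant with row i and column j deleted, multiplied by the sign (-1)^{i+j}:

```python
def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def adjugate3(A):
    # for each (i, j): delete row i and column j, take the 2x2
    # subdeterminant and apply the sign (-1)^{i+j}
    return [[(-1) ** (i + j)
             * det2([[A[r][c] for c in range(3) if c != j]
                     for r in range(3) if r != i])
             for j in range(3)]
            for i in range(3)]

A = [[1, 2, 3],
     [0, 1, 4],
     [5, 6, 0]]
adj = adjugate3(A)   # [[-24, 20, -5], [18, -15, 4], [5, -4, 1]]
```

Since det(A) = 1 for this example, the transposed result is already the inverse of A.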