A matrix is a system of elements a_{ij} arranged in a two-dimensional rectangular pattern. A scheme of n rows and m columns is called an (n, m)-matrix or an n x m matrix. The position of an element within the matrix is given by two subscripts: the first index is the row number and the second index is the column number. The numbering starts at the top left of the matrix and runs from left to right and from top to bottom. If n = m, the matrix is called a square matrix.
$A=\left({a}_{ij}\right)=\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{nm}\end{array}\right)$
The elements of the matrix with subscripts i = j are the main diagonal elements. The elements from the lower left to the upper right are referred to as the secondary diagonal.
Here the main diagonal consists of the elements ${a}_{11}, {a}_{22}, \dots, {a}_{nm}$:
$\left(\begin{array}{cccc}{{a}_{11}}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {{a}_{22}}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {{a}_{nm}}\end{array}\right)$
and the secondary diagonal consists of the elements ${a}_{1m}, {a}_{2,m-1}, \dots, {a}_{n1}$:
$\left(\begin{array}{ccccc}{a}_{11}& {a}_{12}& \dots & {a}_{1,m-1}& {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2,m-1}& {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{n,m-1}& {a}_{nm}\end{array}\right)$
The matrix in which all elements of the main diagonal are equal to 1 and all other elements are equal to 0 is called the identity (unit) matrix E.
$E=\left(\begin{array}{cccc}1& 0& \dots & 0\\ 0& 1& \dots & 0\\ & \vdots \\ 0& 0& \dots & 1\end{array}\right)$
The matrix mirrored at the main diagonal is called the transposed matrix. For a matrix A = (a_{ij}) the transpose is A^{T} = (a_{ji}). The transpose of the transpose is the matrix itself: A = (A^{T})^{T}.
${A}^{T}={\left({a}_{ij}\right)}^{T}={\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{nm}\end{array}\right)}^{T}=\left(\begin{array}{cccc}{a}_{11}& {a}_{21}& \dots & {a}_{n1}\\ {a}_{12}& {a}_{22}& \dots & {a}_{n2}\\ & \vdots \\ {a}_{1m}& {a}_{2m}& \dots & {a}_{nm}\end{array}\right)$
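As an illustration, the transpose can be computed in a few lines of plain Python (a sketch with matrices as lists of rows; the function name is ours):

```python
def transpose(a):
    """Return the transpose of matrix a (a list of rows)."""
    # Row i, column j of the result is row j, column i of the input.
    return [[a[i][j] for i in range(len(a))] for j in range(len(a[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(A))  # → [[1, 4], [2, 5], [3, 6]]
```

Transposing twice returns the original matrix, matching A = (A^{T})^{T}.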
Each square matrix can be assigned a unique number, called the determinant det(A) of the matrix. In general, the determinant of an n x n matrix is defined by the Leibniz formula:
$$\mathrm{det}\,A=\sum _{\sigma \in {S}_{n}}\left(\mathrm{sgn}\left(\sigma \right)\underset{i=1}{\overset{n}{\Pi}}{a}_{i,\sigma \left(i\right)}\right)$$
Here the sum extends over all permutations σ of the n indices. That is, from the elements of A all possible products of n elements are formed such that each product contains exactly one element from each row and each column. These products are added, and the sum is the determinant of A. The sign of a summand is positive for even permutations and negative for odd permutations.
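The Leibniz formula translates almost literally into Python with `itertools.permutations`; this sketch (function names are ours) is only practical for small n, since the sum has n! terms:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation: +1 for even, -1 for odd (counted by inversions)."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Determinant via the Leibniz formula: sum over all permutations."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        prod = sign(sigma)
        for i in range(n):
            prod *= a[i][sigma[i]]  # exactly one element from each row and column
        total += prod
    return total

print(det_leibniz([[1, 2], [3, 4]]))  # → -2
```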
The inverse matrix A^{-1} is defined by the following equation:
$$A\cdot {A}^{-1}=E$$
Matrices for which an inverse exists are referred to as regular matrices. Matrices which have no inverse are called singular matrices.
For the inverse matrix, the following calculation rules are valid:
$${(A\cdot B)}^{-1}={B}^{-1}\cdot {A}^{-1}$$
$${\left({A}^{-1}\right)}^{-1}=A$$
The calculation of the inverse matrix A^{-1} is done either with the Gauss-Jordan algorithm or with the adjugate. The Gauss-Jordan method transforms the augmented matrix (A | E) into the form (E | A^{-1}), from which the inverse can be read directly. With the adjugate and the determinant, the inverse can be calculated directly as
$${A}^{-1}=\frac{1}{\mathrm{det}\left(A\right)}{\mathrm{adj}(A)}^{T}$$
A square matrix A is called symmetric if and only if A^{T} = A, antisymmetric if A^{T} = -A, and orthogonal if and only if A^{T} = A^{-1}.
The matrix multiplication is associative:
$$A\cdot (B\cdot C)=(A\cdot B)\cdot C$$
The matrix multiplication and matrix addition are distributive:
$$A\cdot (B+C)=A\cdot B+A\cdot C$$
For multiplication by real numbers λ, μ, the distributive laws hold:
$$(\lambda +\mu )\cdot A=\lambda \cdot A+\mu \cdot A$$
and:
$$\lambda \cdot (A+B)=\lambda \cdot A+\lambda \cdot B$$
There exist zero divisors, i.e. matrices A ≠ 0 and B ≠ 0 for which
$$A\cdot B=0$$
For square matrices the determinant is multiplicative:
$$\mathrm{det}(A\cdot B)=\mathrm{det}\left(A\right)\cdot \mathrm{det}\left(B\right)$$
Note that in general $\mathrm{det}(A+B)\ne \mathrm{det}\left(A\right)+\mathrm{det}\left(B\right)$.
The addition of two matrices A and B is done by adding the corresponding elements of the matrices: C = A + B with c_{ij} = a_{ij} + b_{ij}.
$\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{nm}\end{array}\right)+\left(\begin{array}{cccc}{b}_{11}& {b}_{12}& \dots & {b}_{1m}\\ {b}_{21}& {b}_{22}& \dots & {b}_{2m}\\ & \vdots \\ {b}_{n1}& {b}_{n2}& \dots & {b}_{nm}\end{array}\right)=\left(\begin{array}{cccc}{a}_{11}+{b}_{11}& {a}_{12}+{b}_{12}& \dots & {a}_{1m}+{b}_{1m}\\ {a}_{21}+{b}_{21}& {a}_{22}+{b}_{22}& \dots & {a}_{2m}+{b}_{2m}\\ & \vdots \\ {a}_{n1}+{b}_{\mathrm{n}1}& {a}_{n2}+{b}_{n2}& \dots & {a}_{nm}+{b}_{nm}\end{array}\right)$
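As a minimal sketch in plain Python (assuming matrices are lists of rows of equal shape; the function name is ours):

```python
def mat_add(a, b):
    """Element-wise sum of two matrices of equal shape."""
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))]
            for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # → [[6, 8], [10, 12]]
```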
A general calculator for the sum of NxM matrices is here: Sum and dif of MxN matrices
The subtraction of two matrices A and B is done by subtracting the corresponding elements of the matrices: C = A - B with c_{ij} = a_{ij} - b_{ij}.
$\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{nm}\end{array}\right)-\left(\begin{array}{cccc}{b}_{11}& {b}_{12}& \dots & {b}_{1m}\\ {b}_{21}& {b}_{22}& \dots & {b}_{2m}\\ & \vdots \\ {b}_{n1}& {b}_{n2}& \dots & {b}_{nm}\end{array}\right)=\left(\begin{array}{cccc}{a}_{11}-{b}_{11}& {a}_{12}-{b}_{12}& \dots & {a}_{1m}-{b}_{1m}\\ {a}_{21}-{b}_{21}& {a}_{22}-{b}_{22}& \dots & {a}_{2m}-{b}_{2m}\\ & \vdots \\ {a}_{n1}-{b}_{n1}& {a}_{n2}-{b}_{n2}& \dots & {a}_{nm}-{b}_{nm}\end{array}\right)$
A general calculator for the subtraction of NxM matrices is here: Sum and dif of MxN matrices
Multiplying a matrix by a scalar is done by multiplying each matrix element by the scalar: λ · B = (λ · b_{ij}).
$\lambda \cdot \left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{nm}\end{array}\right)=\left(\begin{array}{cccc}\lambda \cdot {a}_{11}& \lambda \cdot {a}_{12}& \dots & \lambda \cdot {a}_{1m}\\ \lambda \cdot {a}_{21}& \lambda \cdot {a}_{22}& \dots & \lambda \cdot {a}_{2m}\\ & \vdots \\ \lambda \cdot {a}_{n1}& \lambda \cdot {a}_{n2}& \dots & \lambda \cdot {a}_{nm}\end{array}\right)$
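A plain-Python sketch of scalar multiplication (function name is ours):

```python
def scalar_mul(lam, a):
    """Multiply every element of matrix a by the scalar lam."""
    return [[lam * x for x in row] for row in a]

print(scalar_mul(3, [[1, 2], [3, 4]]))  # → [[3, 6], [9, 12]]
```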
The multiplication of two matrices A and B requires that the number of columns of the first matrix equals the number of rows of the second matrix. The elements of the product are obtained by multiplying row elements with column elements and summing: for the element in row i and column j of the result, the elements of row i of the first matrix are multiplied by the elements of column j of the second matrix and summed.
$\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{nm}\end{array}\right)\cdot \left(\begin{array}{cccc}{b}_{11}& {b}_{12}& \dots & {b}_{1j}\\ {b}_{21}& {b}_{22}& \dots & {b}_{2j}\\ & \vdots \\ {b}_{m1}& {b}_{m2}& \dots & {b}_{mj}\end{array}\right)=\left(\begin{array}{cccc}\sum _{k=1}^{m}\left({a}_{1k}\cdot {b}_{k1}\right)& \sum _{k=1}^{m}\left({a}_{1k}\cdot {b}_{k2}\right)& \dots & \sum _{k=1}^{m}\left({a}_{1k}\cdot {b}_{kj}\right)\\ \sum _{k=1}^{m}\left({a}_{2k}\cdot {b}_{k1}\right)& \sum _{k=1}^{m}\left({a}_{2k}\cdot {b}_{k2}\right)& \dots & \sum _{k=1}^{m}\left({a}_{2k}\cdot {b}_{kj}\right)\\ & \vdots \\ \sum _{k=1}^{m}\left({a}_{nk}\cdot {b}_{k1}\right)& \sum _{k=1}^{m}\left({a}_{nk}\cdot {b}_{k2}\right)& \dots & \sum _{k=1}^{m}\left({a}_{nk}\cdot {b}_{kj}\right)\end{array}\right)$
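The summation formula above translates directly into a triple loop; a plain-Python sketch (function name is ours):

```python
def mat_mul(a, b):
    """Product of an n x m matrix a with an m x p matrix b."""
    n, m, p = len(a), len(b), len(b[0])
    assert len(a[0]) == m, "columns of a must equal rows of b"
    # c[i][j] = sum over k of a[i][k] * b[k][j]
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # → [[19, 22], [43, 50]]
```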
A general calculator for the multiplication of NxM matrices is here: Multiplication of matrices
The determinant of a square 3x3 matrix is computed according to the rule of Sarrus by subtracting the sum of the products along the secondary diagonals from the sum of the products along the main diagonals.
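A sketch of the rule of Sarrus for a 3x3 matrix in plain Python (function name is ours):

```python
def det3_sarrus(a):
    """Determinant of a 3x3 matrix by the rule of Sarrus."""
    # Sum of products along the three main diagonals ...
    plus = (a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0]
            + a[0][2]*a[1][0]*a[2][1])
    # ... minus the products along the three secondary diagonals.
    minus = (a[0][2]*a[1][1]*a[2][0] + a[0][0]*a[1][2]*a[2][1]
             + a[0][1]*a[1][0]*a[2][2])
    return plus - minus

print(det3_sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # → -3
```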
A general determinant solver is here: Determinant NxN
We look for the inverse matrix A^{-1} of the matrix A. For this, the matrix is first augmented with the identity matrix E to form (A | E). By suitable row transformations this is brought into the form (E | A^{-1}). In the following, the steps of an example can be performed.
$A=\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1N}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2N}\\ & \vdots \\ {a}_{N1}& {a}_{N2}& \dots & {a}_{NN}\end{array}\right)$
Gauss-Jordan approach
$\left(A\mid E\right)=\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1N}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2N}\\ & \vdots \\ {a}_{N1}& {a}_{N2}& \dots & {a}_{NN}\end{array}\,\middle|\,\begin{array}{cccc}1& 0& \dots & 0\\ 0& 1& \dots & 0\\ & \vdots \\ 0& 0& \dots & 1\end{array}\right)$
Row transformations are applied to obtain the following form:
$\left(E\mid {A}^{-1}\right)=\left(\begin{array}{cccc}1& 0& \dots & 0\\ 0& 1& \dots & 0\\ & \vdots \\ 0& 0& \dots & 1\end{array}\,\middle|\,\begin{array}{cccc}{b}_{11}& {b}_{12}& \dots & {b}_{1N}\\ {b}_{21}& {b}_{22}& \dots & {b}_{2N}\\ & \vdots \\ {b}_{N1}& {b}_{N2}& \dots & {b}_{NN}\end{array}\right)$
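The transformation steps can be sketched in plain Python. This illustrative implementation adds partial pivoting for numerical stability, which goes beyond the text's "suitable transformations" but is standard practice:

```python
def invert_gauss_jordan(a):
    """Invert a square matrix by Gauss-Jordan elimination on (A | E)."""
    n = len(a)
    # Build the augmented matrix (A | E).
    m = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        m[col], m[pivot] = m[pivot], m[col]
        # Scale the pivot row so the pivot becomes 1.
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    # The right half of the augmented matrix is now A^-1.
    return [row[n:] for row in m]

print(invert_gauss_jordan([[2.0, 1.0], [1.0, 1.0]]))  # → [[1.0, -1.0], [-1.0, 2.0]]
```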
Calculator for the inverse matrix: Solver Inverse Matrix
The adjugate of matrix A is calculated by forming, for each matrix element a_{ij}, the subdeterminant obtained by removing row i and column j. The value of this determinant, multiplied by (-1)^{i+j}, gives the element i, j of the adjugate matrix.
${a}_{ij}^{*}={\left(-1\right)}^{i+j}\left|\begin{array}{ccccccc}{a}_{11}& {a}_{12}& \dots & {a}_{1,j-1}& {a}_{1,j+1}& \dots & {a}_{1n}\\ & \vdots \\ {a}_{i-1,1}& {a}_{i-1,2}& \dots & {a}_{i-1,j-1}& {a}_{i-1,j+1}& \dots & {a}_{i-1,n}\\ {a}_{i+1,1}& {a}_{i+1,2}& \dots & {a}_{i+1,j-1}& {a}_{i+1,j+1}& \dots & {a}_{i+1,n}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{n,j-1}& {a}_{n,j+1}& \dots & {a}_{nn}\end{array}\right|$
The result is the adjugate matrix.
$\mathrm{adj}(A)=\left(\begin{array}{cccc}{a}_{11}^{*}& {a}_{12}^{*}& \dots & {a}_{1n}^{*}\\ {a}_{21}^{*}& {a}_{22}^{*}& \dots & {a}_{2n}^{*}\\ & \vdots \\ {a}_{n1}^{*}& {a}_{n2}^{*}& \dots & {a}_{nn}^{*}\end{array}\right)$
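The construction of the elements a*_{ij} can be sketched in plain Python (function names are ours; `det` uses Laplace expansion along the first row, so this is only practical for small matrices):

```python
def minor(a, i, j):
    """Submatrix of a with row i and column j removed."""
    return [row[:j] + row[j+1:] for k, row in enumerate(a) if k != i]

def det(a):
    """Determinant by Laplace expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j))
               for j in range(len(a)))

def cofactor_matrix(a):
    """Matrix with elements a*_ij = (-1)^(i+j) * det of the ij-minor."""
    n = len(a)
    return [[(-1) ** (i + j) * det(minor(a, i, j)) for j in range(n)]
            for i in range(n)]

A = [[2, 1], [1, 1]]
C = cofactor_matrix(A)
print(C)  # → [[1, -1], [-1, 2]]
# A^-1 = adj(A)^T / det(A), with adj as defined in the text:
inv = [[C[j][i] / det(A) for j in range(2)] for i in range(2)]
print(inv)  # → [[1.0, -1.0], [-1.0, 2.0]]
```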
Calculator for the adjugate matrix: Solver Adjugate matrix
The product of a matrix with a vector is a linear map. The multiplication is defined if the number of columns of the matrix equals the number of elements of the vector. The result is a vector whose number of components equals the number of rows of the matrix. This means that a matrix with 2 rows always maps a vector to a vector with two components.
$A\cdot \overrightarrow{v}=\left(\begin{array}{cccc}{a}_{11}& {a}_{12}& \dots & {a}_{1m}\\ {a}_{21}& {a}_{22}& \dots & {a}_{2m}\\ & \vdots \\ {a}_{n1}& {a}_{n2}& \dots & {a}_{nm}\end{array}\right)\cdot \left(\begin{array}{c}{v}_{1}\\ {v}_{2}\\ \vdots \\ {v}_{m}\end{array}\right)=\left(\begin{array}{c}{a}_{11}{v}_{1}+{a}_{12}{v}_{2}+\dots +{a}_{1m}{v}_{\mathrm{m}}\\ {a}_{21}{v}_{1}+{a}_{22}{v}_{2}+\dots +{a}_{2m}{v}_{\mathrm{m}}\\ \vdots \\ {a}_{n1}{v}_{1}+{a}_{n2}{v}_{2}+\dots +{a}_{nm}{v}_{\mathrm{m}}\end{array}\right)$
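A plain-Python sketch of the matrix-vector product (function name is ours):

```python
def mat_vec(a, v):
    """Apply matrix a to vector v: one entry per row of a."""
    assert len(a[0]) == len(v), "columns of a must match the length of v"
    return [sum(a_ik * v_k for a_ik, v_k in zip(row, v)) for row in a]

# A 2x3 matrix maps a 3-component vector to a 2-component vector.
A = [[1, 2, 3], [4, 5, 6]]
print(mat_vec(A, [1, 0, -1]))  # → [-2, -2]
```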
Calculator for the matrixvector product: Graphical MatrixVector product
The eigenvalue equation
$Av=\lambda v$
can be transformed into the homogeneous equation system
$(A-\lambda E)v=0$
The system of equations has a nontrivial solution if and only if the determinant vanishes, that is, if
$\mathrm{det}(A\lambda E)=0$
The polynomial det(A - λE) is called the characteristic polynomial of A, and the equation is the characteristic equation of A. The solutions λ_{i} of the characteristic equation are the eigenvalues of A. For an eigenvalue λ_{i}, the nontrivial solutions v of (A - λ_{i}E)v = 0 are the eigenvectors of A to the eigenvalue λ_{i}.
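For a 2x2 matrix the characteristic polynomial is λ² - (a_{11}+a_{22})λ + det(A), so the eigenvalues follow from the quadratic formula. A plain-Python sketch for the real-eigenvalue case (function name is ours):

```python
import math

def eigenvalues_2x2(a):
    """Real eigenvalues of a 2x2 matrix from its characteristic equation:
    lambda^2 - trace(A)*lambda + det(A) = 0."""
    tr = a[0][0] + a[1][1]
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    disc = tr * tr - 4 * d
    if disc < 0:
        raise ValueError("eigenvalues are complex")
    r = math.sqrt(disc)
    return (tr + r) / 2, (tr - r) / 2

print(eigenvalues_2x2([[2, 1], [1, 2]]))  # → (3.0, 1.0)
```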
Calculator for eigenvalues: Eigenvalues
Here is a list of further useful calculators and sites:
Matrix calculators:
Sum and dif of MxN matrices, Multiplication of matrices, Solver Adjugate matrix, Solver Inverse Matrix, QR decomposition, Eigenvector-eigenvalue identity
Determinant calculators with step by step calculation of the determinant value:
Determinant 2x2, Determinant 3x3, Determinant 4x4, Determinant 5x5, Determinant NxN
Calculation rules:
Determinant rules, Vector calculation, Eigenvalues
Graphical calculation methods:
Graphical Vector addition, Graphical Vector subtraction, Graphical Matrix-Vector product, Graphical Inner product