## Regression

Regression analysis is used to determine or estimate model parameters. The starting point is a set of measured values and a model that is assumed to underlie them. Since the measured values are generally subject to error, regression analysis optimizes the model parameters for the best possible fit. The basic procedure is the method of least squares.

## Least squares method

The method of least squares is a method of fitting calculation. It computes an optimal compromise by minimizing the sum of the squared deviations of the measured values from the model function.

Given the measured values (xi, yi) and the model function f, the squared deviation is minimized. To achieve this, the parameters ai of the model function are determined so that the following condition is satisfied.

$\sum _{i=1}^{n}{\left(f\left({x}_{i},\stackrel{\to }{a}\right)-{y}_{i}\right)}^{2}\to \text{min}$

### Model function: Linear fit

To determine the regression line, a linear model function f is used in the least squares method.

$f\left(x,\stackrel{\to }{a}\right)={a}_{0}+{a}_{1}x$

The deviations of the regression line from the measured values are then given as follows.

$\begin{array}{c}{a}_{0}+{a}_{1}{x}_{1}-{y}_{1}={r}_{1}\\ {a}_{0}+{a}_{1}{x}_{2}-{y}_{2}={r}_{2}\\ ⋮\\ {a}_{0}+{a}_{1}{x}_{n}-{y}_{n}={r}_{n}\end{array}$

The goal is now to make the sum of the squared deviations ri from the straight line as small as possible.

$\sum _{i=1}^{n}{r}_{i}^{2}$ $={\left({a}_{0}+{a}_{1}{x}_{1}-{y}_{1}\right)}^{2}+\dots$ $+{\left({a}_{0}+{a}_{1}{x}_{n}-{y}_{n}\right)}^{2}$ $\to \text{min}$

The extremum is determined by setting the partial derivatives with respect to a0 and a1 to zero.

$\frac{\partial }{\partial {a}_{0}}\left[{\left({a}_{0}+{a}_{1}{x}_{1}-{y}_{1}\right)}^{2}+\dots +{\left({a}_{0}+{a}_{1}{x}_{n}-{y}_{n}\right)}^{2}\right]=0$

$\frac{\partial }{\partial {a}_{1}}\left[{\left({a}_{0}+{a}_{1}{x}_{1}-{y}_{1}\right)}^{2}+\dots +{\left({a}_{0}+{a}_{1}{x}_{n}-{y}_{n}\right)}^{2}\right]=0$

Solving this system of equations yields the parameters a0 and a1 of the regression line.

${a}_{0}=\overline{y}-{a}_{1}\overline{x}$

${a}_{1}=\frac{\sum _{i=1}^{n}\left({x}_{i}-\overline{x}\right)\left({y}_{i}-\overline{y}\right)}{\sum _{i=1}^{n}{\left({x}_{i}-\overline{x}\right)}^{2}}$
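The closed-form solution above can be checked with a short script. This is a minimal sketch in plain Python; the function name `linear_fit` is my own choice and not from the original text.

```python
def linear_fit(xs, ys):
    """Regression line y = a0 + a1*x by the least squares formulas:
    a1 = sum((xi - x_mean)(yi - y_mean)) / sum((xi - x_mean)^2), a0 = y_mean - a1*x_mean."""
    n = len(xs)
    xm = sum(xs) / n  # arithmetic mean of x
    ym = sum(ys) / n  # arithmetic mean of y
    a1 = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / sum((x - xm) ** 2 for x in xs)
    a0 = ym - a1 * xm
    return a0, a1

# Example: points lying exactly on y = 1 + 2x are reproduced exactly
a0, a1 = linear_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(a0, a1)  # → 1.0 2.0
```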

Online-Calculator: Fitting linear line

### Fitting of exponential functions

If the measured values follow an exponential relationship, the linear model of the best-fit line can still be used. To do so, the logarithm of the measured values is taken, since substitution then yields a linear equation.

$y=b\cdot {a}^{x}$

Taking the logarithm leads to a linear equation.

$\mathrm{ln}y=\mathrm{ln}b+x\mathrm{ln}a$

With the logarithmized measured values y' and the substitutions a' = ln a and b' = ln b, the linear model is obtained.

$\mathrm{y\text{'}}=\mathrm{b\text{'}}+\mathrm{a\text{'}}x$
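The substitution can be sketched in code: fit a straight line to (x, ln y), then transform the intercept and slope back with the exponential function. A minimal sketch in Python; the function name is illustrative.

```python
import math

def exp_fit(xs, ys):
    """Fit y = b * a**x by a linear least squares fit to (x, ln y)."""
    n = len(xs)
    lys = [math.log(y) for y in ys]  # y' = ln y (requires y > 0)
    xm = sum(xs) / n
    ym = sum(lys) / n
    # linear fit y' = b' + a'x with a' = ln a, b' = ln b
    a1 = sum((x - ym0) * (y - ym) for x, ym0, y in zip(xs, [xm] * n, lys))  # placeholder, see below
    a1 = sum((x - xm) * (y - ym) for x, y in zip(xs, lys)) / sum((x - xm) ** 2 for x in xs)
    b1 = ym - a1 * xm
    return math.exp(b1), math.exp(a1)  # back-substitute: b = e^{b'}, a = e^{a'}

# Example: exact data for y = 2 * 3**x gives b ≈ 2, a ≈ 3
b, a = exp_fit([0, 1, 2, 3], [2, 6, 18, 54])
```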

### Model function: Power functions

The fitting of a power function is performed by reducing it to the linear model function.

$y=a\cdot {x}^{b}$

Taking the logarithm leads to a linear equation.

$\mathrm{ln}y=\mathrm{ln}a+b\mathrm{ln}x$

With the logarithmized measured values y' and the substitutions a' = ln a and x' = ln x, the linear model is obtained.

$\mathrm{y\text{'}}=\mathrm{a\text{'}}+b\mathrm{x\text{'}}$
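The power-function fit works the same way, except that both coordinates are logarithmized before the linear fit. A minimal sketch in Python; the function name is illustrative.

```python
import math

def power_fit(xs, ys):
    """Fit y = a * x**b by a linear least squares fit to (ln x, ln y)."""
    lxs = [math.log(x) for x in xs]  # x' = ln x (requires x > 0)
    lys = [math.log(y) for y in ys]  # y' = ln y (requires y > 0)
    n = len(lxs)
    xm = sum(lxs) / n
    ym = sum(lys) / n
    # linear fit y' = a' + b x' with a' = ln a
    b = sum((x - xm) * (y - ym) for x, y in zip(lxs, lys)) / sum((x - xm) ** 2 for x in lxs)
    lna = ym - b * xm
    return math.exp(lna), b  # back-substitute: a = e^{a'}

# Example: exact data for y = 5 * x**2 gives a ≈ 5, b ≈ 2
a, b = power_fit([1, 2, 3, 4], [5, 20, 45, 80])
```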

Online-Calculator: Power function

### Fitting of the Gaussian distribution

The Gaussian distribution or normal distribution is defined as follows:

$f\left(x\right)=\frac{1}{\sqrt{2\pi }\sigma }\phantom{\rule{0.3em}{0ex}}{e}^{-\frac{1}{2}\frac{{\left(x-\mu \right)}^{2}}{{\sigma }^{2}}}$

The Gaussian distribution is fitted to the measured values by forming the weighted mean of the measured values. The weighted mean corresponds to μ in the Gaussian distribution. The standard deviation of the measured values from the mean corresponds to σ in the normal distribution.

$\mu =\frac{\sum _{i=1}^{n}{x}_{i}{y}_{i}}{\sum _{i=1}^{n}{y}_{i}}$

$\sigma =\sqrt{\frac{\sum _{i=1}^{n}{\left({x}_{i}-\mu \right)}^{2}{y}_{i}}{\sum _{i=1}^{n}{y}_{i}}}$
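The two formulas translate directly into code. A minimal sketch in Python, treating the yi values as weights as described above; the function name is illustrative.

```python
import math

def gauss_fit(xs, ys):
    """mu = weighted mean of the xi, sigma = weighted standard deviation (weights yi)."""
    w = sum(ys)  # total weight
    mu = sum(x * y for x, y in zip(xs, ys)) / w
    sigma = math.sqrt(sum((x - mu) ** 2 * y for x, y in zip(xs, ys)) / w)
    return mu, sigma

# Example: symmetric data peaks at x = 2, so mu = 2
mu, sigma = gauss_fit([1, 2, 3], [1, 2, 1])
```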

Online-Calculator: Normal distribution, Normal Distribution Plot

### Model function: Periodic (Fourier series)

Measured values can also be approximated by periodic functions. The procedure for this is the expansion into a Fourier series. The elements of the Fourier series are sine and cosine functions, and the expansion proceeds in ascending order of frequencies.

The Fourier series is:

${s}_{n}\left(x\right)=\frac{{a}_{0}}{2}+\sum _{k=1}^{n}\left({a}_{k}\mathrm{cos}\left(k\omega x\right)+{b}_{k}\mathrm{sin}\left(k\omega x\right)\right)$

with the Fourier coefficients ak and bk and ω = 2π/T. Here the period is T = b - a, where a is the start and b the end of the interval.

The Fourier coefficients ak and bk satisfy the least squares condition for the associated sine or cosine function. The coefficients are calculated as follows.

${a}_{k}=\frac{2}{T}{\int }_{a}^{b}f\left(x\right)\mathrm{cos}\left(k\omega x\right)\phantom{\rule{0.2em}{0ex}}dx$

${b}_{k}=\frac{2}{T}{\int }_{a}^{b}f\left(x\right)\mathrm{sin}\left(k\omega x\right)\phantom{\rule{0.2em}{0ex}}dx$
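For measured or tabulated data the integrals have to be evaluated numerically. The following is a sketch using the trapezoidal rule over one period, not the method of any particular calculator; the function name and step count are my own choices.

```python
import math

def fourier_coeffs(f, a, b, n, steps=1000):
    """Approximate the Fourier coefficients a_k, b_k of f on one period [a, b]
    by evaluating the integrals with the trapezoidal rule."""
    T = b - a               # period length
    w = 2 * math.pi / T     # angular frequency omega
    h = T / steps
    xs = [a + i * h for i in range(steps + 1)]

    def trapz(g):
        vals = [g(x) for x in xs]
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    ak = [2 / T * trapz(lambda x, k=k: f(x) * math.cos(k * w * x)) for k in range(n + 1)]
    bk = [2 / T * trapz(lambda x, k=k: f(x) * math.sin(k * w * x)) for k in range(1, n + 1)]
    return ak, bk

# Example: f(x) = cos(x) on [0, 2π] has a_1 = 1 and all other coefficients near 0
ak, bk = fourier_coeffs(math.cos, 0.0, 2 * math.pi, 3)
```

On a uniform grid over a full period the trapezoidal rule is very accurate for smooth periodic functions, which is why the recovered coefficients are close to exact here.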

Online-Calculator: Fourier approximation

### Polynomial approximation using the QR method

The linear approximation problem is solved by the QR decomposition. The calculator determines the coefficients of the n-th degree polynomial.

The starting point is the over-determined system of equations:

$Ax=b$

$\text{with}\phantom{\rule{1em}{0ex}}x\phantom{\rule{0.5em}{0ex}}\in \phantom{\rule{0.5em}{0ex}}{\mathbb{R}}^{n}\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}A\phantom{\rule{0.5em}{0ex}}\in \phantom{\rule{0.5em}{0ex}}{\mathbb{R}}^{m\times n}$

The QR decomposition leads to the factorization of the matrix A:

$A=QR$

For the fitting problem it follows that:

${\mathrm{||}Ax-b\mathrm{||}}_{2}^{2}={\mathrm{||}QRx-b\mathrm{||}}_{2}^{2}={\mathrm{||}{R}^{*}x-{Q}^{\mathrm{T*}}b\mathrm{||}}_{2}^{2}$

Here R and Q are reduced to the relevant parts: R* is the upper triangular matrix of R, and QT* contains the corresponding rows of QT.

Replacing A by the Vandermonde matrix of the measured values xi and b by the measured values yi yields the coefficients of the fitting polynomial as the solution of the system of equations.
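The steps above can be sketched with NumPy, which provides both the Vandermonde matrix and the QR decomposition. This is a minimal illustration of the technique, not the implementation of the calculator mentioned below; the function name is my own.

```python
import numpy as np

def poly_fit_qr(xs, ys, deg):
    """Least-squares polynomial coefficients via QR decomposition of the Vandermonde matrix:
    solve R* x = Q^T* b after factoring A = QR."""
    A = np.vander(xs, deg + 1, increasing=True)  # columns 1, x, x^2, ...
    Q, R = np.linalg.qr(A)                       # reduced QR: R is (deg+1) x (deg+1)
    return np.linalg.solve(R, Q.T @ ys)          # back-substitution for R x = Q^T b

# Example: data sampled from y = 1 + x^2, coefficients come out as approximately [1, 0, 1]
coeffs = poly_fit_qr(np.array([0., 1., 2., 3., 4.]),
                     np.array([1., 2., 5., 10., 17.]), 2)
```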

Online-Calculator: Polynomial fitting

## Mean values and standard deviation

The arithmetic means are:

$\overline{x}=\frac{1}{n}\sum _{i=1}^{n}{x}_{i}$

$\overline{y}=\frac{1}{n}\sum _{i=1}^{n}{y}_{i}$

The standard deviation from the mean is:

$\sigma =\sqrt{\frac{1}{n-1}\sum _{i=1}^{n}{\left({x}_{i}-\overline{x}\right)}^{2}}$

For the standard deviation from the regression line, the mean value is replaced by the function value of the straight line at the respective measuring point.
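As a quick check of the formulas, here is a minimal Python sketch of the arithmetic mean and sample standard deviation; the function name is illustrative.

```python
import math

def mean_and_std(xs):
    """Arithmetic mean and sample standard deviation (divisor n - 1)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return m, s

m, s = mean_and_std([2, 4, 4, 4, 5, 5, 7, 9])  # mean is 5
```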

### Weighted average and standard deviation

The weighted average μ is formed by multiplying the measured values xi by their respective weights yi.

$\mu =\frac{\sum _{i=1}^{n}{x}_{i}{y}_{i}}{\sum _{i=1}^{n}{y}_{i}}$

For the standard deviation, the respective weights yi must also be taken into account.

$\sigma =\sqrt{\frac{\sum _{i=1}^{n}{\left({x}_{i}-\mu \right)}^{2}{y}_{i}}{\sum _{i=1}^{n}{y}_{i}}}$


## Calculator

The online calculator performs a least squares fit for the following functions: regression line, power function, polynomial, normal distribution and Fourier approximation. The measured values can be entered in a table, or alternatively the data can be read from a file. The parameters of the fitted function are calculated and the function is displayed graphically.

Online-Calculator: Curve fitting for linear line, power function, polynomial, normal distribution, Fourier series; Fourier series calculator

List of further sites:

- Index
- Trigonometric calculations
- Normal Distribution Plot
- NxN Gauss method
- Derivation rules