Linear Regression



An overview and implementation of linear regression analysis.

*Figure: linear regression correlation.*

Simple


$$ \large y_i=mx_i+b $$

where $m$ is the slope of the line (the angular coefficient) and $b$ is the y-intercept (the linear coefficient). The least-squares estimates are:

$$ \large m=\frac{\sum_i^n (x_i-\overline{x})(y_i-\overline{y})}{\sum_i^n (x_i-\overline{x})^2} $$

$$ \large b=\overline{y}-m\overline{x} $$
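As a minimal sketch, these closed-form expressions translate directly to NumPy; the data arrays `x` and `y` below are hypothetical, generated just for illustration:

```python
import numpy as np

def fit_simple(x, y):
    """Least-squares slope m and intercept b for y = m*x + b."""
    x_mean, y_mean = x.mean(), y.mean()
    m = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
    b = y_mean - m * x_mean
    return m, b

# Hypothetical example data: y = 2x + 1 plus Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)

m, b = fit_simple(x, y)
print(f"m = {m:.3f}, b = {b:.3f}")  # close to 2 and 1
```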

*Figure: linear regression prediction.*

The quality of the fit can be evaluated with the mean squared error (MSE) between the observed values $y_i$ and the values $\hat{y}_i$ predicted by the line:

$$ \large MSE=\frac{1}{n} \sum_i^n (y_i- \hat{y}_i)^2 $$
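A short sketch of the error computation, reusing the fitted `m`, `b` and the hypothetical `x`, `y` arrays from the snippet above:

```python
def mse(y_true, y_pred):
    """Mean squared error between observed and predicted values."""
    return np.mean((y_true - y_pred) ** 2)

y_hat = m * x + b                    # predictions of the fitted line
print(f"MSE = {mse(y, y_hat):.3f}")  # roughly the variance of the noise
```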

*Figure: linear regression residuals.*

Multiple


With multiple independent variables, the model generalizes to one coefficient $m_j$ per variable:

$$ \large y=m_1x_1+m_2x_2+\dots+m_nx_n+b $$
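A sketch of the multiple case using NumPy's least-squares solver; appending a column of ones to the design matrix lets the solver absorb the intercept $b$ as an extra coefficient. The data here are again hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))             # 100 samples, 3 features
true_m = np.array([1.5, -2.0, 0.5])
y = X @ true_m + 4.0 + rng.normal(scale=0.1, size=100)

# Augment with a ones column so the last coefficient is the intercept b
X1 = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
m_hat, b_hat = coef[:-1], coef[-1]
print(m_hat, b_hat)  # close to [1.5, -2.0, 0.5] and 4.0
```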

*Figures: multiple linear regression fit; multiple regression residuals.*

Gradient Descent


Gradient descent minimizes the mean squared error, written here as a function of the parameters $m$ and $b$:

$$ \large e_{m,b}=\frac{1}{n} \sum_i^n (y_i-(mx_i+b))^2 $$

To perform gradient descent on this error function, it is necessary to compute its gradient vector $\nabla$, given by:

$$ \large \nabla e_{m,b}=\Big\langle\frac{\partial e}{\partial m},\frac{\partial e}{\partial b}\Big\rangle $$

where:

$$ \large \begin{aligned} \frac{\partial e}{\partial m}&=\frac{2}{n} \sum_{i}^{n}-x_i(y_i-(mx_i+b)), \\ \frac{\partial e}{\partial b}&=\frac{2}{n} \sum_{i}^{n}-(y_i-(mx_i+b)) \end{aligned} $$
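These two partial derivatives give the update rule directly: step each parameter against its gradient. A minimal sketch of the loop follows; the learning rate and iteration count are arbitrary choices, and `x`, `y` are the hypothetical arrays from the simple-regression sketch above. With enough iterations the result approaches the closed-form solution:

```python
import numpy as np

def gradient_descent(x, y, lr=0.02, n_iter=5000):
    """Fit y = m*x + b by gradient descent on the mean squared error."""
    m = b = 0.0
    n = len(x)
    for _ in range(n_iter):
        y_pred = m * x + b
        grad_m = (2 / n) * np.sum(-x * (y - y_pred))  # de/dm from above
        grad_b = (2 / n) * np.sum(-(y - y_pred))      # de/db from above
        m -= lr * grad_m                              # step against the gradient
        b -= lr * grad_b
    return m, b

m_gd, b_gd = gradient_descent(x, y)
print(f"m = {m_gd:.3f}, b = {b_gd:.3f}")  # matches the closed-form fit
```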

*Figure: gradient descent.*

Non-linear analysis


When the relationship between the variables is not linear, a straight-line fit leaves visible structure in the residuals, signalling that a non-linear model is needed.

*Figures: non-linear fit; non-linear residuals.*