Basic Statistical Principles and Gradient Descent

林嶔 (Lin, Chin)

Lesson 1

Basic Knowledge (1)

- A statistical model produces a prediction \(\hat{y}\) from an input \(x\):

\[\hat{y} = f(x)\]

- In simple linear regression, \(f\) is a straight line with intercept \(b_{0}\) and slope \(b_{1}\):

\[\hat{y} = f(x) = b_{0} + b_{1}x\]
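- As a quick illustration, the minimal sketch below (with simulated data; all names are illustrative) fits such a line using base R's lm():

# Simulate data from a known line: b0 = 2, b1 = 3
set.seed(123)
x = rnorm(100)
y = 2 + 3 * x + rnorm(100)

# Fit the simple linear regression y-hat = b0 + b1 * x
fit = lm(y ~ x)
coef(fit)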

Basic Knowledge (2)

\[loss = diff(y, \hat{y})\]

- Taking the loss function of simple linear regression as an example, the quantity sought is the residual sum of squares, so the formula can be rewritten as:

\[loss = diff(y, \hat{y}) = \sum \limits_{i=1}^{n} \left(y_{i} - \hat{y_{i}}\right)^{2}\]

- Substituting the model \(f(x)\) for \(\hat{y}\), the loss becomes a function of the regression coefficients:

\[loss = diff(y, f(x))\]

\[loss = diff(y, f(x)) = \sum \limits_{i=1}^{n} \left(y_{i} - \left(b_{0} + b_{1}x_{1,i}\right)\right)^{2}\]
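- The sketch below (illustrative names; x and y are assumed to be numeric vectors of equal length) computes this residual sum of squares directly:

# Residual sum of squares for a given intercept b0 and slope b1
loss.fun = function(b0, b1, x, y) {
  y.hat = b0 + b1 * x
  return(sum((y - y.hat)^2))
}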

Basic Knowledge (3)

- Model fitting thus reduces to finding the parameters \(b_{0}, b_{1}\) that minimize the loss:

\[\min(loss)\]
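- As a sketch of this idea, base R's general-purpose optimizer optim() can search for the loss-minimizing parameters numerically (loss.fun and the simulated x, y come from the sketches above):

# Wrap the loss so that optim() receives the parameters as a single vector
rss = function(b, x, y) {
  return(loss.fun(b[1], b[2], x, y))
}

# Search for (b0, b1) starting from (0, 0)
optim(c(0, 0), rss, x = x, y = y)$par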

Extremum Problems (1)

\[f(x) = x^{2} + 2x + 1\]

- Next, we differentiate this function and look for the point where the derivative equals 0; this gives the location of the function's extremum:

\[\frac{\partial}{\partial x} f(x) = 2x + 2 = 0\]

\[x = -1\]
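- As a quick numerical check, base R's optimize() finds the same minimum:

# Numerically minimize f(x) = x^2 + 2x + 1 over an interval containing the minimum
optimize(function(x) x^2 + 2*x + 1, interval = c(-10, 10))

- The returned $minimum is (up to numerical tolerance) \(-1\), matching the analytic result.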

Extremum Problems (2)

Gradient Descent (1)

- Gradient descent starts from an arbitrary initial value, for example:

\[x_{\left(epoch:0\right)} = 10\]

- At each epoch, \(x\) is moved against the gradient, with the step size controlled by the learning rate \(lr\):

\[x_{\left(epoch:t\right)} = x_{\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:t - 1\right)})\]

- Since the derivative of the earlier function is \(2x + 2\), we can substitute it into the formula (the calculations below use \(lr = 0.05\)):

\[ \begin{align} x_{\left(epoch:1\right)} & = x_{\left(epoch:0\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:0\right)}) \\ & = 10 - lr \cdot \frac{\partial}{\partial x}f(10) \\ & = 10 - 0.05 \cdot (2\cdot10+2)\\ & = 8.9 \end{align} \]

Gradient Descent (2)

\[ \begin{align} x_{\left(epoch:2\right)} & = x_{\left(epoch:1\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:1\right)}) \\ & = 8.9 - lr \cdot \frac{\partial}{\partial x}f(8.9) \\ & = 8.9 - 0.05 \cdot (2\cdot8.9+2)\\ & = 7.91 \end{align} \]

\[ \begin{align} x_{\left(epoch:3\right)} & = 7.91 - 0.891 = 7.019 \\ x_{\left(epoch:4\right)} & = 7.019 - 0.8019 = 6.2171 \\ x_{\left(epoch:5\right)} & = 6.2171 - 0.72171 = 5.49539 \\ & \dots \\ x_{\left(epoch:\infty\right)} & = -1 \end{align} \]
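- The short loop below is a minimal sketch of the same hand calculation (start at 10, \(lr = 0.05\), derivative \(2x + 2\)):

x = 10
lr = 0.05
for (i in 1:5) {
  # One gradient descent update per iteration
  x = x - lr * (2 * x + 2)
  print(x)
}

- It prints 8.9, 7.91, 7.019, 6.2171, 5.49539, reproducing the values above.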

Gradient Descent (3)

- Let us now implement the same procedure in R, this time for the function:

\[f(x) = x^{2}\]

# The target function f(x) = x^2 and its derivative f'(x) = 2x
original.fun = function(x) {
  return(x^2)
}

differential.fun = function(x) {
  return(2*x)
}

# Settings: initial value, learning rate, and number of iterations
start.value = 5
learning.rate = 0.1
num.iteration = 1000

# Pre-allocate storage for the value of x at every iteration
result.x = rep(NA, num.iteration)

for (i in 1:num.iteration) {
  if (i == 1) {
    # The first entry is the starting point
    result.x[1] = start.value
  } else {
    # Gradient descent update: x(t) = x(t-1) - lr * f'(x(t-1))
    result.x[i] = result.x[i-1] - learning.rate * differential.fun(result.x[i-1])
  }
}

# Final value of x, which should be close to the true minimum at x = 0
print(tail(result.x, 1))

[1] 7.68895e-97
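- This matches the closed form: with \(lr = 0.1\) and derivative \(2x\), each update multiplies \(x\) by \(1 - 2 \cdot 0.1 = 0.8\), so after 999 updates the one-liner below reproduces (up to floating-point rounding) the value printed above:

print(5 * 0.8^999)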


Gradient Descent (4)