Understanding Gradient Descent Optimization for Neural Network Training
Gradient Descent Algorithm
The gradient descent method is a fundamental optimization technique for minimizing an objective function. It relies on three core components:
Components:
- Objective function f(x): The function we want to minimize
- Gradient function g(x): The derivative of the objective function
- Learning rate η: A fixed step size controlling descent speed
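Putting these together, each iteration moves the current point a small step against the gradient. In the notation above, the update rule is:
xₜ₊₁ = xₜ - η · g(xₜ)
so steps are large where the slope is steep and shrink as the slope flattens near a minimum.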
Algorithm:
import numpy as np
import matplotlib.pyplot as plt
def gradient_descent(initial_x, learning_rate, gradient_fn, max_iterations=20, tolerance=1e-6):
    """
    Iteratively update variable x in the opposite direction of the gradient.
    """
    current_x = initial_x
    for iteration in range(max_iterations):
        # Step against the gradient, scaled by the learning rate
        gradient_value = gradient_fn(current_x)
        current_x = current_x - learning_rate * gradient_value
        print(f'Iteration {iteration}: gradient={gradient_value:.6f}, x={current_x:.6f}')
        # Stop early once the slope is essentially flat
        if abs(gradient_value) < tolerance:
            break
    return current_x

# Define the quadratic function: f(x) = x² - 2x + 1
def objective(x):
    return x**2 - 2*x + 1

# Gradient of the function: g(x) = 2x - 2
def gradient(x):
    return 2*x - 2
# Visualization
x_range = np.linspace(-5, 7, 100)
y_values = objective(x_range)
plt.plot(x_range, y_values)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Quadratic Function Optimization')
plt.show()
# Execute optimization starting from x=5
optimal_x = gradient_descent(initial_x=5.0, learning_rate=0.1, gradient_fn=gradient)
print(f'\nConverged to x = {optimal_x}')
Output:
Iteration 0: gradient=8.000000, x=4.200000
Iteration 1: gradient=6.400000, x=3.560000
Iteration 2: gradient=5.120000, x=3.048000
Iteration 3: gradient=4.096000, x=2.638400
Iteration 4: gradient=3.276800, x=2.310720
Iteration 5: gradient=2.621440, x=2.048576
Iteration 6: gradient=2.097152, x=1.838861
Iteration 7: gradient=1.677722, x=1.671089
Iteration 8: gradient=1.342177, x=1.536871
Iteration 9: gradient=1.073742, x=1.429497
Iteration 10: gradient=0.858993, x=1.343597
Iteration 11: gradient=0.687195, x=1.274878
Iteration 12: gradient=0.549756, x=1.219902
Iteration 13: gradient=0.439805, x=1.175922
Iteration 14: gradient=0.351844, x=1.140737
Iteration 15: gradient=0.281475, x=1.112590
Iteration 16: gradient=0.225180, x=1.090072
Iteration 17: gradient=0.180144, x=1.072058
Iteration 18: gradient=0.144115, x=1.057646
Iteration 19: gradient=0.115292, x=1.046117
Converged to x = 1.046117
Interpretation:
The parabola f(x) = x² - 2x + 1 = (x - 1)² has its minimum at x = 1. Starting from x₀ = 5, each update shrinks the gradient by a constant factor of 1 - 2η = 0.8, which is visible in the printed values (8.0, 6.4, 5.12, ...). When the 20-iteration cap is reached at iteration 19, the solution sits at x ≈ 1.046, close to the optimum but not yet within the 1e-6 gradient tolerance, demonstrating how gradient descent systematically navigates toward the minimum by taking steps proportional to the negative gradient.
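Because the per-step shrink factor on this quadratic is |1 - 2η|, the learning rate determines whether and how quickly the method converges. The short sketch below illustrates this by reusing the gradient function defined above; the specific rates 0.05, 0.5, and 1.1 are arbitrary values chosen only for this comparison, not taken from the run above.
# Illustrative sketch (assumed rates): compare how different learning rates
# behave on the same quadratic after 20 updates, reusing gradient(x) = 2x - 2.
for eta in [0.05, 0.1, 0.5, 1.1]:
    x = 5.0
    for _ in range(20):
        x = x - eta * gradient(x)  # same update rule as in gradient_descent
    print(f'learning_rate={eta}: x after 20 iterations = {x:.6f}')
With η = 0.5 the shrink factor is zero, so the minimum is reached in a single step; with η = 1.1 every step overshoots by more than it corrects, and x diverges away from 1.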