As part of my explorations into machine learning, I’ve been brushing up on the basics, starting with linear algebra. Nice to see good old eigenvectors again after so many years.

An eigenvector of a square matrix $A$ is a non-zero vector $v$ such that multiplication by $A$ alters only the scale of $v$:

$Av = \lambda v$

The scalar $\lambda $ is known as the eigenvalue corresponding to this eigenvector.
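As a quick sanity check, here is a minimal NumPy sketch (the matrix is just an arbitrary symmetric example I picked for illustration) that computes eigenpairs and verifies the defining property $Av = \lambda v$ for each one:

```python
import numpy as np

# A small symmetric matrix, chosen for illustration; any square matrix works,
# though non-symmetric matrices can have complex eigenvalues and eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose *columns*
# are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify A v = lambda v for each eigenpair (up to floating-point tolerance).
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)
```

Note that `eig` normalizes each eigenvector to unit length, but any non-zero scalar multiple of an eigenvector is also an eigenvector with the same eigenvalue.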

Excerpted from Chapter 2 of *Deep Learning* by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.