Summary of Linear Algebra
I've spent one more week studying linear algebra. I find it more abstract than calculus: most of the time I can understand what the textbook is talking about, but when confronted with a specific linear algebra problem, I don't know how to get started. After reading the whole textbook, though, I am on track now. Generally speaking, linear algebra concerns itself with vector spaces and linear mappings between such spaces. The scalars of a vector space come from a field, which satisfies the conditions described below:
Let K be a subset of the complex numbers C. K is a field if:
1. if x and y belong to K, then x + y and xy belong to K;
2. if x belongs to K, then -x belongs to K; and if x is nonzero, 1/x also belongs to K;
3. 0 and 1 belong to K.
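As a small sanity check of these axioms, here is a sketch (not from the textbook) that uses Python's `fractions.Fraction` to illustrate why the rational numbers Q form a field: the results of +, *, negation, and reciprocal all stay rational, and 0 and 1 are rational.

```python
from fractions import Fraction

x, y = Fraction(2, 3), Fraction(-5, 7)

# 1. closure: sums and products of rationals are rational
assert isinstance(x + y, Fraction) and isinstance(x * y, Fraction)

# 2. inverses: -x is rational, and 1/x is rational when x != 0
assert x + (-x) == 0
assert x * (1 / x) == 1

# 3. the identities 0 and 1 belong to Q
assert Fraction(0) == 0 and Fraction(1) == 1
```

By contrast, the integers Z fail condition 2: 1/2 is not an integer, so Z is not a field.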
The term mapping here refers to transformations that take vectors from one space to another. Recall that in high school we defined a vector as a combination of direction and length. In the traditional Cartesian coordinate system, a vector has two components, x and y; the pair (x, y) determines the vector's direction relative to the x and y axes, and we define the length of the vector as (x^2 + y^2)^(1/2). We call this kind of vector a 2-dimensional vector. If we allow a vector to have more than two components, we get 3-dimensional, 4-dimensional, ..., and n-dimensional vectors.

To simplify things and make notation easier, mathematicians invented the term space. A space comes with a set of base vectors, called a basis, and every vector in the space can be expressed as a linear combination of that basis. Here we might think a basis element is a scalar, but it is not: a basis element is itself a vector, and a basis may consist of several vectors. Basis vectors cannot be linearly dependent, meaning that no one of them can be formed as a linear combination of the others.

I think these two concepts, vector spaces and linear mappings, are the essential parts of linear algebra. Other topics, such as eigenvectors and eigenvalues, singular value decomposition, and the rest, are built on them. For machine learning, linear algebra provides a good way to handle features, to optimize multivariate models, and to do numeric calculations.
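The length formula and the linear-independence idea above can be sketched in a few lines of Python. This is my own illustration, not something from the textbook; for two 2-dimensional vectors, independence reduces to a nonzero determinant.

```python
import math

def length(v):
    # Euclidean length: sqrt(x1^2 + x2^2 + ... + xn^2)
    return math.sqrt(sum(x * x for x in v))

def independent_2d(u, v):
    # Two 2-D vectors are linearly independent iff the
    # determinant u[0]*v[1] - u[1]*v[0] is nonzero;
    # a zero determinant means one is a multiple of the other.
    return u[0] * v[1] - u[1] * v[0] != 0

print(length((3, 4)))                   # 5.0
print(independent_2d((1, 0), (0, 1)))   # True: the standard basis of R^2
print(independent_2d((1, 2), (2, 4)))   # False: (2, 4) = 2 * (1, 2)
```

For n vectors in n dimensions the same test generalizes to the determinant (or rank) of the matrix whose columns are the vectors.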