:- linear algebra :-

:- Vectors and spaces :-

A vector space V is a collection of objects with a (vector) addition and a scalar multiplication defined on it, which is closed under both operations and which in addition satisfies the axioms listed below. In the study of mathematics we encounter many examples of mathematical objects that can be added to each other and multiplied by real numbers. First of all, the real numbers themselves are such objects. Other examples are real-valued functions, the complex numbers, infinite series, vectors in n-dimensional spaces, and vector-valued functions. The notion of a vector space includes all these examples, and many others, as special cases.

The fundamental notions of linear independence, basis and dimension, which will be discussed here, are potent and effective tools in all branches of mathematics.

Ordered n-tuples of real numbers

A set (x1, x2, …, xn) of n real numbers arranged in order is called an ordered n-tuple of reals, or simply an n-tuple.

Example: In a two-dimensional co-ordinate system, any point such as (7, 15) is an ordered 2-tuple.

(-5, 7, 0, 8) is a 4-tuple.

Each row of an m × n matrix is an ordered n-tuple, and each of its columns is an ordered m-tuple.

Equality of two n-tuples :- Two n-tuples (x1, x2, …, xn) and (y1, y2, …, yn) are equal, written (x1, x2, …, xn) = (y1, y2, …, yn), if x1 = y1, x2 = y2, …, xn = yn.

Sum of two n-tuples

Let α = (x1, x2, …, xn) and ξ = (y1, y2, …, yn) be two n-tuples. Then their sum is α + ξ = (x1 + y1, x2 + y2, …, xn + yn).

Example:

(1, 0, 5, ½) + (7, 8, -7, ½) = (8, 8, -2, 1)

Multiplication of an ordered n-tuple by a real number

Let α = (x1, x2, …, xn) be an n-tuple and c be a real number. Then the product of α by c is c.α = (cx1, cx2, …, cxn).

Example:

2.(7, 8, -7, ½) = (14, 16, -14, 1)

Difference of two n-tuples

The difference of two n-tuples α and ξ, written α – ξ, is defined as α – ξ = α + (-1).ξ
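These component-wise operations are straightforward to mirror in code. A minimal sketch in Python, reusing the example tuples above:

    def add(x, y):
        # Component-wise sum of two n-tuples
        return tuple(a + b for a, b in zip(x, y))

    def scale(c, x):
        # Product of an n-tuple by a real number c
        return tuple(c * a for a in x)

    def subtract(x, y):
        # Difference: alpha - xi = alpha + (-1).xi
        return add(x, scale(-1, y))

    print(add((1, 0, 5, 0.5), (7, 8, -7, 0.5)))  # (8, 8, -2, 1.0)
    print(scale(2, (7, 8, -7, 0.5)))             # (14, 16, -14, 1.0)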

Definition of Vector Space

Let V be a non-empty set and R be the set of all real numbers. Suppose we have two compositions: one, ‘+’, between two members of V, and another, ‘.’, between a member of R and a member of V. V is said to be a vector space (or, linear space) over R if the following axioms hold:

α + β ∈ V for all α and β in V [closure property under ‘+’]

α + (β + γ) = (α + β) + γ for all α, β, γ in V [Associative property under ‘+’]

V contains a member, say θ, such that α + θ = α for all α in V.

Corresponding to each α in V, there exists an element, say ‘-α’ in V such that α + (-α) = θ.

α + β = β + α for all α and β in V. [Commutative property under ‘+’]

Thus V is an abelian group under ‘+’.

a.α ∈ V for all a in R and for all α in V. [closure property under ‘.’]

1.α = α for all α in V.

(ab). α = a.(b. α) for all a, b in R and for all α in V.

a.( α+ β) = a. α + a.β for all a in R and for α, β in V.

(a + b). α = a. α + b. α for all a, b in R and for α in V.

The elements of a vector space are called vectors and the elements of R are called scalars.

Theorem

In a vector space V over a field F the following results hold:

(i) 0.α = θ for all α in V

(ii) c.θ = θ for all c in F

(iii) (-1).α = –α for all α ∈ V

(iv) c.α = θ ⇒ either c = 0 or α = θ.

Proof:

(i) In F, we have 0 + 0 = 0

Therefore, for α in V, we have

(0 + 0). α = 0. α

Or, 0. α + 0. α = 0.α

Or, 0. α + 0. α = 0.α + θ

⇒ 0.α = θ, by the left cancellation rule in the abelian group.

(ii) We have in V, θ + θ = θ

Therefore for any c in F,

c.( θ + θ) = c.θ

⇒ c.θ + c.θ = c.θ

⇒ c.θ + c.θ = c.θ + θ

So by the left cancellation rule in V, c.θ = θ.

(iii) We have in F, 0 = 1 + (-1)

Therefore, for α in V, we have

0.α = (1 + (-1)).α

⇒ 0.α = 1.α + (-1).α

Now 0.α = θ and 1.α = α, so

θ = α + (-1).α

⇒ -α + θ = -α + α + (-1).α

⇒ -α = (-1).α

(iv) Let c.α = θ and suppose that c ≠ 0. Then c⁻¹ exists in F.

Hence, c⁻¹.(c.α) = c⁻¹.θ

⇒ (c⁻¹c).α = θ

⇒ 1.α = θ

⇒ α = θ

Again, if c = 0, then c.α = θ for any α ∈ V.

Therefore, c.α = θ implies either c = 0 or α = θ.

:- Subspace of a Vector Space :-

If V is a vector space over a field F and W ⊆ V, then W is a subspace of V if, under the operations of V, W itself forms a vector space over F.

It is clear that {θ} and V are both subspaces of V. These are trivial subspaces. Any subspace of a vector space V other than {θ} and V itself is called a proper subspace of V.

Theorem

A non-empty subset W of a vector space V over a field F is a subspace of V if and only if

(1) α ∈ W, β ∈ W ⇒ α + β ∈ W

(2) α ∈ W, c ∈ F ⇒ c.α ∈ W

Proof:

Let us suppose that the conditions (1) and (2) hold in W. Let α, β ∈ W. Since F is a field, 1 ∈ F and so -1 ∈ F.

By condition (2), we have (-1).β ∈ W, or -β ∈ W. Hence by condition (1), α + (-β) ∈ W, or α – β ∈ W, whenever α, β ∈ W.

This shows that (W, +) is a subgroup of the additive group (V, +). But since V is abelian, W is abelian. Hence, using condition (2) and the heredity property, we can conclude that W is a vector space over the field F. Hence W is a subspace of V over F. Thus the sufficiency of the conditions is established. The necessity of the conditions (1) and (2) follows from the definition of a vector space.

Note: The above theorem may be stated in the alternative form, giving only one condition, as follows:

A non-empty subset W of a vector space V over a field F is a subspace of V if and only if a.α + b.β ∈ W for all α, β ∈ W and all a, b ∈ F.

Example

Let S be the subset of R3 defined by S = {(x, y, z) ∈ R3 | y = z =0}. Then S is a non-empty subset of R3, since (0, 0, 0) ∈ S.

Let α = (a, 0, 0) and β = (b, 0, 0) ∈ S, where a, b ∈ R.

Then α + β = (a, 0, 0) + (b, 0, 0) = (a + b, 0, 0) ∈ S

Thus α ∈ S, β ∈ S ⇒ α + β ∈ S.

Let c ∈ R; then c.α = c.(a, 0, 0) = (ca, 0, 0) ∈ S.

Hence, by definition, S is a subspace of R3.

Linear combination and span:

Let A = {a1, a2, a3, …, an} be a set of n vectors,

and c1, c2, c3, …, cn be n constants (scalars, which can take different values).

A linear combination of a set of vectors is formed when each vector in the set is multiplied by a scalar and the products are added together.

Then the linear combination of these vectors (and scalars) will be:

c1a1 + c2a2 + c3a3 + … + cnan

Let this linear combination be equal to 0:

c1a1 + c2a2 + c3a3 + … + cnan = 0

This equation will be satisfied when all the scalars (c1, c2, c3, …, cn) are equal to 0:

c1 = c2 = c3 = … = cn = 0


But if 0 is the only possible value of the scalars for which the equation is satisfied, then that set of vectors is called linearly independent.

Linear Dependence

For a vector space V defined over a field F, the n vectors α1, α2, …, αn ∈ V are said to be linearly dependent if there exists a set of scalars c1, c2, …, cn ∈ F, not all zero (where zero is the additive identity of F), such that c1α1 + c2α2 + … + cnαn = θ.

Linear Independence

For a vector space V defined over a field F, the n vectors α1, α2, …, αn ∈ V are said to be linearly independent if and only if c1α1 + c2α2 + … + cnαn = θ, ci ∈ F (i = 1, 2, …, n), implies that c1 = c2 = … = cn = 0.

Example

The coordinate vectors α1 = (1, 1, 0), α2 = (3, 2, 1) and α3 = (2, 1, 1) are linearly dependent if there exists a set of scalars c1, c2, c3, not all zero, such that c1(1, 1, 0) + c2(3, 2, 1) + c3(2, 1, 1) = (0, 0, 0).

This requires that

c1 + 3c2 +2c3 = 0

c1 + 2c2 +c3 = 0

c2 + c3 = 0

This system of homogeneous linear equations has a non-zero solution, as the rank of the coefficient matrix is 2 (< 3). We may also solve directly to check that c1 = 1, c2 = -1, c3 = 1 is a solution of the system. Hence (1)α1 + (-1)α2 + (1)α3 = θ.

Thus the vectors α1, α2, α3 are linearly dependent, and any one of the vectors can be written as a linear combination of the other two. For example, α1 = α2 – α3.
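This example is easy to verify numerically. A short sketch with NumPy:

    import numpy as np

    # Columns of A are the vectors alpha1, alpha2, alpha3
    A = np.array([[1, 3, 2],
                  [1, 2, 1],
                  [0, 1, 1]])

    # Rank 2 < 3, so A c = 0 has a non-zero solution and the
    # vectors are linearly dependent
    print(np.linalg.matrix_rank(A))  # 2

    # Check the dependence relation (1)alpha1 + (-1)alpha2 + (1)alpha3 = 0
    a1, a2, a3 = np.array([1, 1, 0]), np.array([3, 2, 1]), np.array([2, 1, 1])
    print(a1 - a2 + a3)              # [0 0 0]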

Theorem 01

A collection of vectors containing the null vector is linearly dependent.

Theorem 02

A collection of vectors which contains a collection of linearly dependent vectors is linearly dependent.

 Example

The vectors (1, 2, 3), (2, 4, 6), (5, 9, 1), (-6, 7, 8) and (11, 2, 5) are linearly dependent.

We see that 2.(1, 2, 3) + (-1).(2, 4, 6) = (0, 0, 0).

So (1, 2, 3) and (2, 4, 6) are linearly dependent, and so by the theorem above the given five vectors are also linearly dependent.

 Theorem 03

Any part of a collection of linearly independent vectors is linearly independent.

 Theorem 04

The n n-tuples (a11, a12, …, a1n), (a21, a22, …, a2n), …, (an1, an2, …, ann) will be independent if and only if the determinant of the n × n matrix whose rows are these n-tuples is non-zero.
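This gives a quick computational test for independence. A NumPy sketch, using the three vectors from the earlier example:

    import numpy as np

    # Rows are the n-tuples; they are independent iff the determinant is non-zero
    M = np.array([[1, 1, 0],
                  [3, 2, 1],
                  [2, 1, 1]])

    print(np.linalg.det(M))  # 0.0 -> dependent, matching the example above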

Dot product and cross product of vectors

Dot product and cross product are two types of vector multiplication. The basic difference between the dot product and the cross product is that the dot product always gives a scalar quantity, while the cross product always gives a vector quantity. The dot product is commonly used to calculate the angle between two vectors.

What is dot product of two vectors?

When two vectors are multiplied with each other and the answer is a scalar quantity, then such a product is called the scalar product or dot product of vectors.

A dot (.) is placed between the vectors which are multiplied with each other; that’s why it is also called the “dot product”.

Scalar = vector .vector

Vector dot product examples

  • The product of force F and displacement S is work “W”.

i.e.     W =F . S

  • The product of force F and velocity V is power “P”.

i.e.     P =F . V

  • The product of electric intensity E and area vector A is electric flux Φ.

i.e.     Φ = E . A

The dot product formula

The dot product of two vectors is the product of the magnitudes of the vectors and the cosine of the angle between them. Consider two vectors A and B making an angle θ with each other.

 A . B = AB Cos θ

Where “B Cos θ” is the component of B along vector A, and 0 ≤ θ ≤ π.

Scalar product properties

  •  If vector A is parallel to B then their scalar product is maximum.

i.e.  A . B = AB Cos 0° = AB (1) = AB

  • The scalar product of the same vectors is equal to the square of their magnitude.

                A . A = AA Cos 0° = A² (1) = A²

  • If two vectors are opposite to each other then their scalar product will be negative.

i.e.  A . B = AB Cos 180° = AB (-1) = -AB

  • If vector A is perpendicular to B then their scalar product is zero.

i.e.  A . B = AB Cos 90° = AB (0) = 0

  • For unit vectors i, j and k, the dot product of the same unit vectors is 1, and for different unit vectors it is zero.

i.e.                  i . i = j . j = k . k = 1

and

                    i . j = j . k = k . i = 0
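A minimal NumPy sketch of the dot product and the angle formula (the vectors a and b below are assumed examples):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, -5.0, 6.0])

    dot = np.dot(a, b)   # scalar result: 4 - 10 + 18 = 12
    cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
    theta = np.degrees(np.arccos(cos_theta))   # angle between a and b
    print(dot, theta)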

What is Vector cross product of two vectors?

“When two vectors are multiplied with each other and the answer is also a vector quantity then such a product is called vector cross product or vector product.”

A cross (×) is placed between the vectors which are multiplied with each other; that’s why it is also known as the “cross product”. i.e.

Vector = Vector × Vector

Examples of vector cross product

  • The product of position vector “r” and force “F” is torque, which is represented as “τ”.

i.e.           τ = r × F

  • The product of angular velocity ω and radius vector “r” is tangential velocity.

i.e.          Vt = ω × r

Cross product formula

The cross product is defined by the relation

C = A × B = AB Sin θ û

Where û is a unit vector perpendicular to both A and B.
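A short NumPy sketch of the torque example above (the vectors r and F are assumed values):

    import numpy as np

    r = np.array([1.0, 0.0, 0.0])   # position vector
    F = np.array([0.0, 2.0, 0.0])   # force

    tau = np.cross(r, F)            # torque = r x F
    print(tau)                      # [0. 0. 2.]
    print(np.dot(tau, r), np.dot(tau, F))  # 0.0 0.0 -- perpendicular to both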

Matrices for solving systems by elimination :

Gaussian elimination

It’s a row-reduction algorithm for solving systems of linear equations.

To perform Gaussian elimination, the coefficients of the terms in the system of linear equations are used to create a type of matrix called an augmented matrix.

Then, elementary row operations are used to simplify the matrix.

The goal of Gaussian elimination is to get the matrix in row-echelon form.

A matrix in row-echelon form is also said to be in triangular form.

Some definitions of Gaussian elimination say that the matrix result has to be in reduced row-echelon form.

Gaussian elimination that creates a reduced row-echelon matrix result is sometimes called Gauss-Jordan elimination.

To be simpler, here is the structure:

  • Algorithm: Gaussian Elimination
  • Step 1: Rewrite the system as an Augmented Matrix.
  • Step 2: Simplify the matrix with elementary row operations.
  • Result:
  • Row-Echelon Form or
  • Reduced Row-Echelon Form

And if we require the result to be in RREF, the algorithm is called:

  • Algorithm: Gauss-Jordan Elimination
  • Step 1: Rewrite the system as an Augmented Matrix.
  • Step 2: Simplify the matrix with elementary row operations.
  • Result: Only in Reduced Row-Echelon Form

Elementary Row Operations

Elementary row operations are used to simplify the matrix.

The three types of row operations used are listed below; a short code sketch of the full elimination follows the list.

  • Type 1: Switching one row with another row.
  • Type 2: Multiplying a row by a non-zero number.
  • Type 3: Adding a multiple of one row to another row. (Note: you can only ADD rows, not subtract, but you can add a negative multiple.)
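A minimal SymPy sketch of Gauss-Jordan elimination on an assumed example system (the coefficients are hypothetical, chosen only for illustration):

    from sympy import Matrix

    # Augmented matrix for the assumed system:
    #    x + 2y +  z = 3
    #   2x +  y -  z = 0
    #    x -  y + 2z = 5
    M = Matrix([[1, 2, 1, 3],
                [2, 1, -1, 0],
                [1, -1, 2, 5]])

    # rref() applies elementary row operations until the matrix is in
    # reduced row-echelon form; it also returns the pivot columns.
    rref_matrix, pivots = M.rref()
    print(rref_matrix)   # last column holds the solution x=1, y=0, z=2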


Column Space

Similar to the row space, the column space is the vector space formed by the set of all linear combinations of the column vectors of the matrix.

If the column vectors of a matrix A are a1, a2 and a3, then the column space of A is the set of all linear combinations of the column vectors a1, a2 and a3.

Both of these spaces have the same dimension (the same number of independent vectors), and that dimension is equal to the rank of the matrix. Why?

Because the rank of a matrix is the maximum number of linearly independent vectors among its rows or columns, and the dimension is the maximum number of linearly independent vectors in a vector space (like the column space or row space).

The rows and columns of a matrix have the same rank, so the two spaces have the same dimension.
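A quick NumPy check of this fact, reusing the matrix from the linear dependence example:

    import numpy as np

    A = np.array([[1, 3, 2],
                  [1, 2, 1],
                  [0, 1, 1]])

    # The rank is the dimension of both the row space and the column space
    print(np.linalg.matrix_rank(A))    # 2
    print(np.linalg.matrix_rank(A.T))  # 2 -- row rank equals column rank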

Null Space

We are familiar with the matrix representation A.x = b of a system of linear equations.

We can also find its solution (the values of the variables for which the equations are satisfied) using the Gaussian elimination algorithm.

If we take the set of all solution vectors of the homogeneous system A.x = 0 (all possible values of “x”), then the vector space formed by that set is called the null space.

Or

The null space contains all possible solutions of a given homogeneous system of linear equations.

Nullity

The dimension of the null space is called the nullity.

For an m × n matrix A, the rank–nullity theorem connects the two: rank(A) + nullity(A) = n.
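For the matrix used in the earlier examples (rank 2, so nullity 3 − 2 = 1), a SymPy sketch:

    from sympy import Matrix

    A = Matrix([[1, 3, 2],
                [1, 2, 1],
                [0, 1, 1]])

    # Basis of the null space: all solutions of A x = 0
    print(A.nullspace())   # one basis vector, so the nullity is 1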

Matrix Transformations

Now we specialize the general notions and vocabulary of transformations to the functions defined by matrices.

Definition: Let A be an m × n matrix. The matrix transformation associated to A is the transformation T: Rn → Rm defined by T(x) = Ax.

This is the transformation that takes a vector x in Rn to the vector Ax in Rm.

If A has n columns, then it only makes sense to multiply A by vectors with n entries. This is why the domain of T(x)=Ax is Rn.

If A has m rows, then Ax has m entries for any vector x in Rn; this is why the codomain of T(x) = Ax is Rm.

The definition of a matrix transformation T tells us how to evaluate T on any given vector: we multiply the input vector by the matrix A.
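A small NumPy sketch of a matrix transformation (the matrix A below is an assumed example):

    import numpy as np

    A = np.array([[1, 0, 2],
                  [0, 1, -1]])   # 2 x 3, so T maps R^3 to R^2

    def T(x):
        # The matrix transformation associated to A
        return A @ x

    print(T(np.array([1, 2, 3])))   # [7 -1]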

Inverse of a matrix
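A minimal NumPy sketch of computing an inverse, using an assumed 2 × 2 example matrix:

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [4.0, 2.0]])   # assumed example; det = 2, so A is invertible

    A_inv = np.linalg.inv(A)
    print(A_inv)                 # [[ 1.  -0.5] [-2.   1.5]]
    print(A @ A_inv)             # the identity matrix (up to rounding)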

Consider two matrices A and B which have the same set of elements. Though they have the same elements, are they equal?

The answer is no. That’s because their orders are not the same. Now, there is an important observation: there can be many matrices which have exactly the same elements as A has.

When the number of rows and columns in A is equal to the number of columns and rows in B respectively, and the (i, j)-th element of B equals the (j, i)-th element of A, the matrix B is known as the transpose of the matrix A. The transpose of matrix A is represented by A′ or Aᵀ.
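A one-line check with NumPy (the matrix is an assumed example):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])     # order 2 x 3

    print(A.T)                    # the transpose, order 3 x 2
    print(A.shape, A.T.shape)     # (2, 3) (3, 2)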


DETERMINANTS

Every square matrix A is associated with a real number called the determinant of A, written |A|.
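For instance, for a 2 × 2 matrix A with rows (a, b) and (c, d), the determinant is |A| = ad − bc. Taking the assumed matrix with rows (3, 1) and (4, 2) from the inverse example above, |A| = 3·2 − 1·4 = 2.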



Alternate coordinate systems (bases)

The orthogonal complement of a subspace v of the vector space Rn is the set of vectors which are orthogonal to all elements of v. For example, the orthogonal complement of the subspace generated by two non-proportional vectors u, v of the real space R3 is the subspace formed by all vectors normal to the plane spanned by u and v.
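In R3 this normal direction can be computed with a cross product. A NumPy sketch with assumed example vectors:

    import numpy as np

    u = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 1.0, 1.0])      # not proportional to u

    n = np.cross(u, v)                 # normal to the plane spanned by u and v
    print(n)                           # [ 0. -1.  1.] spans the orthogonal complement
    print(np.dot(n, u), np.dot(n, v))  # 0.0 0.0 -- orthogonal to both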


Orthogonal projection

In this subsection, we change perspective and think of the orthogonal projection xW of a vector x onto a subspace W as a function of x. This function turns out to be a linear transformation with many nice properties, and is a good example of a linear transformation which is not originally defined as a matrix transformation.
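A NumPy sketch of the standard projection formula xW = A(AᵀA)⁻¹Aᵀx, where the columns of A form a basis of W (the matrix and vector below are assumed examples):

    import numpy as np

    # Columns of A form a basis of the subspace W
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])

    x = np.array([1.0, 2.0, 0.0])

    # Orthogonal projection of x onto W: x_W = A (A^T A)^(-1) A^T x
    x_W = A @ np.linalg.solve(A.T @ A, A.T @ x)
    print(x_W)                # [0. 1. 1.]
    print(A.T @ (x - x_W))    # [0. 0.] -- the residual is orthogonal to W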


Throughout this section, we restrict our attention to vector spaces that are finite-dimensional. If we have a (finite) basis for such a vector space V , then, since the vectors in a basis span V , any vector in V can be expressed as a linear combination of the basis vectors. The next theorem establishes that there is only one way in which we can do this.

Theorem 4.7.1

If V is a vector space with basis {v1, v2, …, vn}, then every vector v ∈ V can be written uniquely as a linear combination of v1, v2, …, vn.

Proof: Since v1, v2, …, vn span V, every vector v ∈ V can be expressed as

v = a1v1 + a2v2 + ··· + anvn, (4.7.1)

for some scalars a1, a2, …, an. Suppose also that

v = b1v1 + b2v2 + ··· + bnvn, (4.7.2)

for some scalars b1, b2, …, bn. We will show that ai = bi for each i, which will prove the uniqueness assertion of this theorem.

Subtracting Equation (4.7.2) from Equation (4.7.1) yields

(a1 − b1)v1 + (a2 − b2)v2 + ··· + (an − bn)vn = 0. (4.7.3)

But {v1, v2, …, vn} is linearly independent, and so Equation (4.7.3) implies that a1 − b1 = 0, a2 − b2 = 0, …, an − bn = 0.

That is, ai = bi for each i = 1, 2, …, n.
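These unique scalars are the coordinates of v with respect to the basis. A NumPy sketch with an assumed basis of R3 (basis vectors as the columns of B):

    import numpy as np

    # Columns of B are the basis vectors (assumed example basis of R^3)
    B = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    v = np.array([2.0, 3.0, 4.0])

    # The unique coordinates a with B a = v
    a = np.linalg.solve(B, v)
    print(a)   # [ 3. -1.  4.]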


Orthonormal bases and the Gram-Schmidt process

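A minimal sketch of the classical Gram-Schmidt process in Python (the input vectors are assumed examples):

    import numpy as np

    def gram_schmidt(vectors):
        # Classical Gram-Schmidt: turn a list of linearly independent
        # vectors into an orthonormal basis of their span.
        basis = []
        for v in vectors:
            w = v.astype(float)
            for q in basis:
                w = w - np.dot(q, w) * q   # remove the component along q
            norm = np.linalg.norm(w)
            if norm > 1e-12:               # skip (near-)dependent vectors
                basis.append(w / norm)
        return np.array(basis)

    Q = gram_schmidt([np.array([1, 1, 0]), np.array([1, 0, 1])])
    print(Q @ Q.T)   # ~ identity: the rows of Q are orthonormal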

Introduction to eigenvalues and eigenvectors

Eigenvectors and Eigenvalues

Eigenvectors are usually normalized to be unit vectors, which means that their length or magnitude is equal to 1.0. They are often referred to as right vectors, which simply means a column vector (as opposed to a row vector or a left vector). A right vector is a vector as we usually understand it.

Eigenvalues are the coefficients applied to eigenvectors that give the vectors their length or magnitude: a non-zero vector v is an eigenvector of A with eigenvalue λ if A.v = λ.v. For example, a negative eigenvalue may reverse the direction of the eigenvector as part of scaling it.

A matrix that has only positive eigenvalues is referred to as a positive definite matrix, whereas if the eigenvalues are all negative, it is referred to as a negative definite matrix.
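A NumPy sketch (the matrix A is an assumed example):

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])

    # Eigenvalues and unit-length eigenvectors (columns of 'vectors')
    values, vectors = np.linalg.eig(A)
    print(values)    # [2. 3.]
    print(vectors)   # columns are the eigenvectors

    # Check A v = lambda v for the first eigenpair
    v, lam = vectors[:, 0], values[0]
    print(np.allclose(A @ v, lam * v))   # True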


Eigenspaces

The eigenvalues of A are given by the roots of the characteristic polynomial det(A − λIn) = 0. The corresponding eigenvectors are the non-zero solutions of the linear system (A − λIn)x = 0. Collecting all solutions of this system, we get the corresponding eigenspace.
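A SymPy sketch computing an eigenspace as a null space (the matrix and eigenvalue are assumed examples):

    from sympy import Matrix, eye

    A = Matrix([[2, 1],
                [0, 2]])

    lam = 2   # root of det(A - lam*I) = 0 for this matrix

    # Eigenspace = null space of (A - lam*I)
    print((A - lam * eye(2)).nullspace())   # basis: [Matrix([[1], [0]])]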
