# Vector Spaces

Vector spaces and linear transformations are the primary objects of study in linear algebra. A vector space (which I'll define below) consists of two sets: a set of objects called vectors and a field (the scalars).

Definition. A vector space $V$ over a field $F$ is a set $V$ equipped with an operation called (vector) addition, which takes vectors $u$ and $v$ and produces another vector $u + v$.

There is also an operation called scalar multiplication, which takes an element $k \in F$ and a vector $v \in V$ and produces a vector $k \cdot v$ (usually written $kv$).

These operations satisfy the following axioms:

1. Vector addition is associative: If $u, v, w \in V$, then $(u + v) + w = u + (v + w)$.

1. Vector addition is commutative: If $u, v \in V$, then $u + v = v + u$.

1. There is a zero vector $0$ which satisfies $v + 0 = v = 0 + v$ for all $v \in V$.

1. For every vector $v \in V$, there is a vector $-v \in V$ which satisfies $v + (-v) = 0 = (-v) + v$.

1. If $k \in F$ and $u, v \in V$, then $k \cdot (u + v) = k \cdot u + k \cdot v$.

1. If $j, k \in F$ and $v \in V$, then $(j + k) \cdot v = j \cdot v + k \cdot v$.

1. If $j, k \in F$ and $v \in V$, then $j \cdot (k \cdot v) = (jk) \cdot v$.

1. If $v \in V$, then $1 \cdot v = v$.

The elements of V are called vectors; the elements of F are called scalars. As usual, the use of words like "multiplication" does not imply that the operations involved look like ordinary "multiplication".

Example. If $F$ is a field, then $F^n$ denotes the set

$$F^n = \{(x_1, x_2, \ldots, x_n) \mid x_i \in F\}.$$

$F^n$ is called the vector space of $n$-dimensional vectors over $F$. The elements $x_1$, $x_2$, ..., $x_n$ are called the vector's components.

$F^n$ becomes a vector space over $F$ with the following operations:

$$(x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n),$$

$$k \cdot (x_1, \ldots, x_n) = (k x_1, \ldots, k x_n).$$

It's easy to check that the axioms hold. For example, I'll check Axiom 6. Let $j, k \in F$, and let $v = (v_1, v_2, \ldots, v_n) \in F^n$. Then

$$(j + k) \cdot v = ((j + k) v_1, \ldots, (j + k) v_n) = (j v_1 + k v_1, \ldots, j v_n + k v_n)$$

$$= (j v_1, \ldots, j v_n) + (k v_1, \ldots, k v_n) = j \cdot v + k \cdot v.$$
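If you like, computations like this are easy to spot-check on sample vectors. Here's a minimal Python sketch (the sample values are my own arbitrary choices) that tests Axiom 6 for one $j$, $k$, and $v$ over the rationals, using exact `Fraction` arithmetic so the equality test is exact:

```python
from fractions import Fraction

def add(u, v):
    """Componentwise vector addition in F^n."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(k, v):
    """Scalar multiplication in F^n: k * (v_1, ..., v_n)."""
    return tuple(k * vi for vi in v)

j, k = Fraction(2, 3), Fraction(-5, 7)
v = (Fraction(1), Fraction(4, 9), Fraction(-3))

# Axiom 6: (j + k) v = j v + k v
assert scale(j + k, v) == add(scale(j, v), scale(k, v))
print("Axiom 6 holds for this sample.")
```

Of course, one sample is not a proof; the algebraic check above is what actually establishes the axiom.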

As a specific example, $\mathbb{R}^3$ consists of 3-dimensional vectors with real components, like

$$(3, -2, 5) \quad \text{or} \quad \left( \sqrt{2}, 0, -\frac{17}{4} \right).$$

You're probably familiar with addition and scalar multiplication for these vectors:

$$(1, -2, 3) + (4, 5, -6) = (5, 3, -3), \qquad 7 \cdot (1, -2, 3) = (7, -14, 21).$$

(Sometimes people write $\langle 1, -2, 3 \rangle$, using angle brackets to distinguish vectors from points. I'll use angle brackets when there's a danger of confusion.)

$\mathbb{Z}_3^2$ consists of 2-dimensional vectors with components in $\mathbb{Z}_3$. Since each of the two components can be any element in $\mathbb{Z}_3 = \{0, 1, 2\}$, there are $3^2 = 9$ such vectors:

$$(0, 0), \ (0, 1), \ (0, 2), \ (1, 0), \ (1, 1), \ (1, 2), \ (2, 0), \ (2, 1), \ (2, 2).$$

Here are examples of vector addition and scalar multiplication in $\mathbb{Z}_3^2$:

$$(1, 2) + (2, 2) = (0, 1), \qquad 2 \cdot (2, 1) = (1, 2).$$
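Since the components are added and multiplied mod 3, these computations are easy to reproduce in a few lines of Python (just an illustration of the arithmetic above):

```python
# Arithmetic in Z_3^2: compute componentwise, then reduce mod 3.
def add_mod3(u, v):
    return tuple((ui + vi) % 3 for ui, vi in zip(u, v))

def scale_mod3(k, v):
    return tuple((k * vi) % 3 for vi in v)

print(add_mod3((1, 2), (2, 2)))  # (0, 1)
print(scale_mod3(2, (2, 1)))     # (1, 2)
```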

Example. The set $\mathbb{R}[x]$ of polynomials with real coefficients is a vector space over $\mathbb{R}$, using the standard operations on polynomials. For example,

$$(x^2 + 2x + 3) + (4x^2 - x) = 5x^2 + x + 3, \qquad 5 \cdot (x^2 + 2x + 3) = 5x^2 + 10x + 15.$$

Unlike $\mathbb{R}^n$, $\mathbb{R}[x]$ is infinite dimensional (in a sense to be made more precise shortly). Intuitively, you need an infinite set of polynomials, like

$$1, \ x, \ x^2, \ x^3, \ldots,$$

to "construct" all the elements of $\mathbb{R}[x]$.

Example. Let $C[0, 1]$ denote the set of continuous real-valued functions defined on the interval $[0, 1]$. Add functions pointwise:

$$(f + g)(x) = f(x) + g(x) \quad \text{for } x \in [0, 1].$$

From calculus, you know that the sum of continuous functions is a continuous function.

If $k \in \mathbb{R}$ and $f \in C[0, 1]$, define scalar multiplication in pointwise fashion:

$$(k \cdot f)(x) = k \cdot f(x) \quad \text{for } x \in [0, 1].$$

For example, if $f(x) = x^2 + 1$ and $k = 3$, then

$$(3 \cdot f)(x) = 3x^2 + 3.$$

These operations make $C[0, 1]$ into an $\mathbb{R}$-vector space.
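The pointwise definitions translate directly into code: a "vector" here is a function, and the sum and scalar multiple are new functions built from old ones. Here's a short Python sketch (the function names are my own):

```python
import math

def f_plus_g(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def k_times_f(k, f):
    """Pointwise scalar multiple: (k f)(x) = k * f(x)."""
    return lambda x: k * f(x)

f = lambda x: x**2 + 1
h = f_plus_g(f, math.sin)

print(h(0.5))                 # (0.25 + 1) + sin(0.5)
print(k_times_f(3, f)(0.5))   # 3 * 1.25 = 3.75
```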

Like $\mathbb{R}[x]$, $C[0, 1]$ is infinite dimensional. However, its dimension is uncountably infinite, while $\mathbb{R}[x]$ has countably infinite dimension over $\mathbb{R}$.

You can also define a "dot product" for two vectors $f, g \in C[0, 1]$:

$$\langle f, g \rangle = \int_0^1 f(x) g(x)\,dx.$$

The product of continuous functions is continuous, so the integral of $f(x) g(x)$ is defined. This example shows that abstract vectors do not have to look like little arrows!
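Here is a rough numerical version of this dot product, approximating the integral by a midpoint sum (a sketch for experimentation, not an exact computation):

```python
def dot(f, g, n=100_000):
    """Approximate <f, g> = integral from 0 to 1 of f(x) g(x) dx."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n))

# <x, x> = integral of x^2 from 0 to 1 = 1/3
print(dot(lambda x: x, lambda x: x))  # about 0.33333...
```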

Proposition. Let V be a vector space over a field F.

(a) $0 \cdot v = 0$ for all $v \in V$.

(b) $(-1) \cdot v = -v$ for all $v \in V$.

(c) $k \cdot 0 = 0$ for all $k \in F$.

Proof. (a) Note that the "0" on the left is the zero *scalar* in $F$, whereas the "0" on the right is the zero *vector* in $V$. Since $0 = 0 + 0$ in $F$,

$$0 \cdot v = (0 + 0) \cdot v = 0 \cdot v + 0 \cdot v.$$

Subtracting $0 \cdot v$ from both sides, I get $0 = 0 \cdot v$.

(b) (The "$-1$" on the left is the scalar $-1$; the "$-v$" on the right is the "negative" of $v$.) Using part (a),

$$v + (-1) \cdot v = 1 \cdot v + (-1) \cdot v = (1 + (-1)) \cdot v = 0 \cdot v = 0.$$

Adding $-v$ to both sides gives $(-1) \cdot v = -v$.

(c) Since $0 = 0 + 0$ in $V$,

$$k \cdot 0 = k \cdot (0 + 0) = k \cdot 0 + k \cdot 0.$$

Subtracting $k \cdot 0$ from both sides, I get $0 = k \cdot 0$.

Definition. Let $V$ be a vector space over a field $F$, and let $W \subseteq V$, $W \neq \emptyset$. $W$ is a subspace of $V$ if:

1. If $u, v \in W$, then $u + v \in W$.
2. If $k \in F$ and $w \in W$, then $k \cdot w \in W$.

In other words, W is closed under addition of vectors and under scalar multiplication.

Lemma. Let W be a subspace of a vector space V.

(a) The zero vector is in $W$.

(b) If $w \in W$, then $-w \in W$.

Proof. (a) Take any vector $w \in W$ (which you can do because W is nonempty), and take $k = 0$. Since W is closed under scalar multiplication, $0 \cdot w \in W$. But $0 \cdot w = 0$, so $0 \in W$.

(b) Since $w \in W$ and $-w = (-1) \cdot w$, closure under scalar multiplication shows that $-w$ is in W.

Example. If V is a vector space over a field F, then $\{0\}$ and V are subspaces of V.

Example. Consider the real vector space $\mathbb{R}^2$, the usual $x$-$y$ plane. Then

$$W_1 = \{(x, 0) \mid x \in \mathbb{R}\} \quad \text{and} \quad W_2 = \{(0, y) \mid y \in \mathbb{R}\}$$

are subspaces of $\mathbb{R}^2$. (These are just the $x$ and $y$-axes, of course.)

I'll check that $W_1$ is a subspace. First, I have to show that two elements of $W_1$ add to an element of $W_1$. An element of $W_1$ is a pair with the second component 0. So here are two elements of $W_1$: $(a, 0)$, $(b, 0)$. Add them:

$$(a, 0) + (b, 0) = (a + b, 0).$$

The sum $(a + b, 0)$ is in $W_1$, because its second component is 0. Thus, $W_1$ is closed under sums.

Next, I have to show that $W_1$ is closed under scalar multiplication. Take a scalar $k \in \mathbb{R}$ and a vector $(a, 0) \in W_1$. Take their product:

$$k \cdot (a, 0) = (ka, 0).$$

The product is in $W_1$ because its second component is 0. Therefore, $W_1$ is closed under scalar multiplication.

Thus, $W_1$ is a subspace.

Notice that in doing the proof, I did not use specific vectors in $W_1$ like $(3, 0)$ or $(-7, 0)$. I'm trying to prove statements about arbitrary elements of $W_1$, so I use "variable" elements.

In general, the subspaces of $\mathbb{R}^2$ are $\{0\}$, $\mathbb{R}^2$, and lines passing through the origin. (Why can't a line which doesn't pass through the origin be a subspace?)

In $\mathbb{R}^3$, the subspaces are $\{0\}$, $\mathbb{R}^3$, and lines or planes passing through the origin.

And so on.

Example. Prove or disprove: The following subset of $\mathbb{R}^3$ is a subspace:

$$W = \{(a, b, 1) \mid a, b \in \mathbb{R}\}.$$

If you're trying to decide whether a set is a subspace, it's always good to check whether it contains the zero vector before you start checking the axioms. In this case, the set consists of 3-dimensional vectors whose third components are equal to 1. Obviously, the zero vector doesn't satisfy this condition.

Since W doesn't contain the zero vector, it's not a subspace of $\mathbb{R}^3$.

Example. Let

$$W = \{(x, x^2) \mid x \in \mathbb{R}\}.$$

Prove or disprove: W is a subspace of $\mathbb{R}^2$.

Note that $(0, 0) \in W$, since $0^2 = 0$. This is not one of the axioms for a subspace, but it's a good thing to check first because you can usually do it quickly. If the zero vector is not in a set, then the lemma above shows that the set is not a subspace. In this case, the zero vector is in W, so the issue isn't settled, and I'll try to check the subspace axioms.

First, I might try to check that the set is closed under sums. I take two vectors in W --- say $(a, a^2)$ and $(b, b^2)$. I add them:

$$(a, a^2) + (b, b^2) = (a + b, a^2 + b^2).$$

The last vector isn't in the right form --- it would be if $a^2 + b^2$ were equal to $(a + b)^2$. That doesn't sound right, so I suspect that W is not a subspace. I try to get a specific counterexample to contradict closure under addition.

First,

$$(1, 1) \in W, \quad \text{since} \quad 1^2 = 1.$$

On the other hand,

$$(1, 1) + (1, 1) = (2, 2).$$

For $(2, 2)$ I have $2^2 = 4 \neq 2$, so $(2, 2) \notin W$.

Since W is not closed under vector addition, it is not a subspace.
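You can confirm the counterexample mechanically: the membership test is just the defining condition of W. A quick Python check (using the set W as defined above):

```python
def in_W(v):
    """Test membership in W = {(x, x^2) : x real}."""
    x, y = v
    return y == x**2

u = (1, 1)
s = (u[0] + u[0], u[1] + u[1])   # u + u = (2, 2)
print(in_W(u))      # True
print(s, in_W(s))   # (2, 2) False: W is not closed under addition
```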

Example. Let $F$ be a field, and let $A$ be a fixed $n \times n$ matrix with entries in $F$. Consider the following subset of the vector space $M(n, F)$ of $n \times n$ matrices over $F$:

$$W = \{B \in M(n, F) \mid AB = BA\}.$$

Prove or disprove: W is a subspace.

This set is defined by a property rather than by appearance, and axiom checks for this kind of set often give people trouble. The problem is that elements of W don't "look like" anything --- if you need to refer to a couple of arbitrary elements of W, you might call them u and v (for instance). There's nothing about the symbols u and v which tells you that they belong to W. But u and v are like people who belong to a club: You can't tell from their appearance that they're club members, but they're carrying membership cards in their pockets.

With this in mind, I'll check closure under addition. Let $u, v \in W$. I must show that $u + v \in W$.

Since u and v are in W,

$$Au = uA \quad \text{and} \quad Av = vA.$$

Adding the equations and factoring out $A$, I get

$$Au + Av = uA + vA, \quad \text{so} \quad A(u + v) = (u + v)A.$$

The last equation shows that $u + v \in W$.

Warning: Don't say "$A(u + v) = (u + v)A \in W$" --- it doesn't make sense! "$A(u + v) = (u + v)A$" is an equation that $u + v$ satisfies; it can't be an element of W, because elements of W are vectors.

Next, I'll check closure under scalar multiplication. Let $k \in F$ and let $u \in W$. Since $u \in W$, I have

$$Au = uA.$$

Multiply both sides by k, then commute the matrices and the scalar:

$$k(Au) = k(uA), \quad \text{so} \quad A(ku) = (ku)A.$$

The last equation says that $ku \in W$.

Since W is closed under addition and scalar multiplication, it's a subspace.
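For a concrete sanity check, here's a small NumPy experiment (the matrix $A$ is my own arbitrary choice). Powers and polynomials in $A$ always commute with $A$, which gives easy elements of W to test:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])

def in_W(B):
    """Test whether B commutes with A, i.e. AB = BA."""
    return np.array_equal(A @ B, B @ A)

u = A @ A                          # A^2 commutes with A
v = np.eye(2, dtype=int) + 2 * A   # I + 2A commutes with A

print(in_W(u), in_W(v))  # True True
print(in_W(u + v))       # True: closed under addition
print(in_W(5 * u))       # True: closed under scalar multiplication
```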

Example. Consider the following subsets of the polynomial ring $\mathbb{R}[x]$:

$$W_1 = \{f(x) \in \mathbb{R}[x] \mid f(2) = 0\}, \qquad W_2 = \{f(x) \in \mathbb{R}[x] \mid f(2) = 1\}.$$

$W_1$ is a subspace; it consists of all polynomials having $2$ as a root.

$W_2$ is not a subspace. One way to see this is to notice that the zero polynomial (i.e. the zero vector) is not in $W_2$, because the zero polynomial does not give 1 when you plug in $x = 2$.

Alternatively, the constant polynomial $f(x) = 1$ is an element of $W_2$ --- it gives 1 when you plug in 2 --- but $2 \cdot f(x) = 2$ is not. So $W_2$ is not closed under scalar multiplication.
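Representing a polynomial by its list of coefficients makes these membership checks concrete. A small Python sketch (the representation and names are mine):

```python
def evaluate(p, x):
    """Evaluate a polynomial given as a coefficient list [a0, a1, a2, ...]."""
    return sum(c * x**i for i, c in enumerate(p))

def in_W1(p):
    return evaluate(p, 2) == 0   # W1: polynomials with f(2) = 0

p = [-2, 1]         # x - 2
q = [-4, 0, 1]      # x^2 - 4
total = [-6, 1, 1]  # their sum, x^2 + x - 6
print(in_W1(p), in_W1(q), in_W1(total))  # True True True

def in_W2(p):
    return evaluate(p, 2) == 1   # W2: polynomials with f(2) = 1

one = [1]                            # the constant polynomial 1
print(in_W2(one))                    # True
print(in_W2([2 * c for c in one]))   # False: 2 * 1 is not in W2
```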

Lemma. If A is an $m \times n$ matrix over the field F, the set of n-dimensional vectors x which satisfy

$$Ax = 0$$

is a subspace of $F^n$ (the solution space of the system).

Proof. If $Ax = 0$ and $Ay = 0$, then

$$A(x + y) = Ax + Ay = 0 + 0 = 0.$$

Therefore, if x and y are in the set, so is $x + y$.

If $Ax = 0$ and k is a scalar, then

$$A(kx) = k(Ax) = k \cdot 0 = 0.$$

Therefore, if x is in the set, then so is $kx$.

Therefore, the solution space is a subspace.

Example. Consider the following system of linear equations over $\mathbb{R}$:

$$x + y - 2z = 0, \qquad x - y = 0.$$

The solution can be written as

$$x = t, \quad y = t, \quad z = t, \quad \text{where } t \in \mathbb{R}.$$

Thus,

$$(x, y, z) = t \cdot (1, 1, 1).$$

The Lemma says that the set of all vectors of this form constitutes a subspace of $\mathbb{R}^3$.

For example, if you add two vectors of this form, you get another vector of this form:

$$s \cdot (1, 1, 1) + t \cdot (1, 1, 1) = (s + t) \cdot (1, 1, 1).$$

You can check that the set is also closed under scalar multiplication.
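Here's a quick NumPy verification of closure for this particular system, using the coefficient matrix of the system above:

```python
import numpy as np

A = np.array([[1, 1, -2],
              [1, -1, 0]])

x = 2 * np.array([1, 1, 1])   # the solution with t = 2
y = 5 * np.array([1, 1, 1])   # the solution with t = 5

print(A @ x, A @ y)   # [0 0] [0 0]: both are solutions
print(A @ (x + y))    # [0 0]: the sum is again a solution
print(A @ (7 * x))    # [0 0]: so is any scalar multiple
```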

Definition. If $v_1$, $v_2$, ..., $v_n$ are vectors in a vector space V, a linear combination of the v's is a vector

$$k_1 v_1 + k_2 v_2 + \cdots + k_n v_n,$$

where the k's are scalars.

Example. Take $u = (1, 1)$ and $v = (1, -1)$ in $\mathbb{R}^2$. Here is a linear combination of u and v:

$$2u + 3v = 2 \cdot (1, 1) + 3 \cdot (1, -1) = (5, -1).$$

$-u + 4v = (3, -5)$ is also a linear combination of u and v. u and v are themselves linear combinations of u and v, as is the zero vector (why?).

In fact, it turns out that any vector in $\mathbb{R}^2$ is a linear combination of u and v.

On the other hand, there are vectors in $\mathbb{R}^2$ which are not linear combinations of $(1, 2)$ and $(2, 4)$. Do you see how this pair is different from the first?
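The difference shows up if you try to solve for the coefficients. Writing $x u + y v = w$ as a matrix equation whose columns are u and v, NumPy solves it when the columns are independent, and a zero determinant exposes the dependent pair (a sketch using the vectors above):

```python
import numpy as np

# Columns are u = (1, 1) and v = (1, -1): solve x*u + y*v = (5, -1).
M1 = np.array([[1.0, 1.0],
               [1.0, -1.0]])
print(np.linalg.solve(M1, np.array([5.0, -1.0])))  # [2. 3.], so 2u + 3v

# Columns are (1, 2) and (2, 4): dependent columns span only a line,
# so not every vector in R^2 is reachable.
M2 = np.array([[1.0, 2.0],
               [2.0, 4.0]])
print(np.linalg.det(M2))  # 0.0
```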

Definition. If S is a subset of a vector space V, the span of S is the set of all linear combinations of vectors in S.

Theorem. If S is a subset of a vector space V, the span of S is a subspace of V.

Proof. Here are typical elements of the span of S:

$$j_1 u_1 + j_2 u_2 + \cdots + j_m u_m \quad \text{and} \quad k_1 v_1 + k_2 v_2 + \cdots + k_n v_n,$$

where the j's and k's are scalars and the u's and v's are elements of S.

Take two elements of the span and add them:

$$(j_1 u_1 + \cdots + j_m u_m) + (k_1 v_1 + \cdots + k_n v_n) = j_1 u_1 + \cdots + j_m u_m + k_1 v_1 + \cdots + k_n v_n.$$

This humongous sum is an element of the span, because it's a sum of vectors in S, each multiplied by a scalar. Thus, the span is closed under taking sums.

Take an element of the span and multiply it by a scalar $c$:

$$c \cdot (k_1 v_1 + k_2 v_2 + \cdots + k_n v_n) = (c k_1) v_1 + (c k_2) v_2 + \cdots + (c k_n) v_n.$$

This is an element of the span, because it's a sum of vectors in S, each multiplied by a scalar. Thus, the span is closed under scalar multiplication.

Therefore, the span is a subspace.

Example. Prove that the span of $(1, -1, 0)$ and $(1, 1, 0)$ in $\mathbb{R}^3$ is

$$V = \{(a, b, 0) \mid a, b \in \mathbb{R}\}.$$

To show that two sets are equal, you need to show that each is contained in the other. To do this, take a typical element of the first set and show that it's in the second set. Then take a typical element of the second set and show that it's in the first set.

Let W be the span of $(1, -1, 0)$ and $(1, 1, 0)$ in $\mathbb{R}^3$. A typical element of W is a linear combination of the two vectors:

$$x \cdot (1, -1, 0) + y \cdot (1, 1, 0) = (x + y, -x + y, 0).$$

Since the sum is a vector of the form $(a, b, 0)$ for $a, b \in \mathbb{R}$, it is in V. This proves that $W \subseteq V$.

Now let $(a, b, 0) \in V$. I have to show that this vector is a linear combination of $(1, -1, 0)$ and $(1, 1, 0)$. This means that I have to find real numbers x and y such that

$$x \cdot (1, -1, 0) + y \cdot (1, 1, 0) = (a, b, 0).$$

If I expand the left side, I get

$$(x + y, -x + y, 0) = (a, b, 0).$$

Equating corresponding components, I get

$$x + y = a, \qquad -x + y = b.$$

This is a system of linear equations which you can solve by row reduction or matrix inversion (for instance). The solution is

$$x = \frac{a - b}{2}, \qquad y = \frac{a + b}{2}.$$

In other words,

$$\frac{a - b}{2} \cdot (1, -1, 0) + \frac{a + b}{2} \cdot (1, 1, 0) = (a, b, 0).$$

Since $(a, b, 0)$ is a linear combination of $(1, -1, 0)$ and $(1, 1, 0)$, it follows that $(a, b, 0) \in W$. This proves that $V \subseteq W$.

Since $W \subseteq V$ and $V \subseteq W$, I have $W = V$.
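As a final check, the formulas for $x$ and $y$ can be tested on a few sample values of $a$ and $b$ (a quick numerical confirmation of the computation above):

```python
import numpy as np

u = np.array([1.0, -1.0, 0.0])
v = np.array([1.0, 1.0, 0.0])

for a, b in [(3.0, 7.0), (-2.5, 4.0), (0.0, 1.0)]:
    x, y = (a - b) / 2, (a + b) / 2
    print(x * u + y * v)  # prints [a, b, 0] each time
```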