Vector Spaces

Vector spaces and linear transformations are the primary objects of study in linear algebra. A vector space (which I'll define below) consists of two sets: a set of objects called vectors and a field (the scalars).

Definition. A vector space V over a field F is a set V equipped with an operation called (vector) addition, which takes vectors u and v and produces another vector $u + v$ .

There is also an operation called scalar multiplication, which takes an element $a \in F$ and a vector $u \in
   V$ and produces a vector $au \in V$ .

These operations satisfy the following axioms:

  1. Vector addition is associative: If $u, v, w \in V$ , then

$$(u + v) + w = u + (v + w).$$

  2. Vector addition is commutative: If $u, v \in V$ , then

$$u + v = v + u.$$

  3. There is a zero vector 0 which satisfies

$$0 + u = u = u + 0 \quad\hbox{for all}\quad u \in V.$$

  4. For every vector $u \in V$ , there is a vector $-u \in V$ which satisfies

$$u + (-u) = 0 = (-u) + u.$$

  5. If $a, b \in F$ and $x \in V$ , then

$$a(bx) = (ab)x.$$

  6. If $a, b \in F$ and $x \in V$ , then

$$(a + b)x = ax + bx.$$

  7. If $a \in F$ and $x, y \in V$ , then

$$a(x + y) = ax + ay.$$

  8. If $x \in V$ , then

$$1\cdot x = x.$$

The elements of V are called vectors; the elements of F are called scalars. As usual, the use of words like "multiplication" does not imply that the operations involved look like ordinary "multiplication".


Example. If F is a field, then $F^n$ denotes the set

$$F^n = \{(a_1, \ldots, a_n) \mid a_1, \ldots, a_n \in F\}.$$

$F^n$ is called the vector space of n-dimensional vectors over F. The elements $a_1$ , ..., $a_n$ are called the vector's components.

$F^n$ becomes a vector space over F with the following operations:

$$(a_1, \ldots, a_n) + (b_1, \ldots, b_n) = (a_1 + b_1, \ldots, a_n + b_n).$$

$$p\cdot (a_1, \ldots, a_n) = (pa_1, \ldots, pa_n), \quad\hbox{where}\quad p \in F.$$

It's easy to check that the axioms hold. For example, I'll check Axiom 6. Let $p, q \in F$ , and let $(a_1, \ldots, a_n) \in F^n$ . Then

$$\matrix{(p + q)(a_1, \ldots, a_n) & = & \left((p + q)a_1, \ldots, (p + q)a_n\right) & \hbox{Definition of scalar multiplication} \cr & = & \left(pa_1 + qa_1, \ldots, pa_n + qa_n\right) & \hbox{Field axiom: Distributivity} \cr & = & (pa_1, \ldots, pa_n) + (qa_1, \ldots, qa_n) & \hbox{Definition of vector addition} \cr & = & p(a_1, \ldots, a_n) + q(a_1, \ldots, a_n) & \hbox{Definition of scalar multiplication} \cr}$$

As a specific example, $\real^3$ consists of 3-dimensional vectors with real components, like

$$(3, -2, \pi) \quad\hbox{or}\quad \left(\dfrac{1}{2}, 0, -1.234\right).$$

You're probably familiar with addition and scalar multiplication for these vectors:

$$(1, -2, 4) + (4, 5, 2) = (1 + 4, -2 + 5, 4 + 2) = (5, 3, 6).$$

$$7\cdot (-2, 0, 3) = (7\cdot (-2), 7\cdot 0, 7\cdot 3) = (-14, 0, 21).$$
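The componentwise operations above are easy to check by machine. Here is a minimal sketch using plain Python tuples; the helper names `add` and `scale` are my own, not from any library.

```python
# Componentwise operations on R^3 vectors, represented as tuples.
def add(u, v):
    """Vector addition: add corresponding components."""
    return tuple(a + b for a, b in zip(u, v))

def scale(p, u):
    """Scalar multiplication: multiply every component by p."""
    return tuple(p * a for a in u)

print(add((1, -2, 4), (4, 5, 2)))   # (5, 3, 6)
print(scale(7, (-2, 0, 3)))         # (-14, 0, 21)
```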

(Sometimes people write $\langle 3,
   -2, \pi\rangle$ , using angle brackets to distinguish vectors from points. I'll use angle brackets when there's a danger of confusion.)

$\integer_3^2$ consists of 2-dimensional vectors with components in $\integer_3$ . Since each of the two components can be any element in $\{0, 1, 2\}$ , there are $3\cdot 3 = 9$ such vectors:

$$(0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2).$$

Here are examples of vector addition and multiplication in $\integer_3^2$ :

$$(1,2) + (1,1) = (1 + 1, 2 + 1) = (2,0).$$

$$2\cdot (2,1) = (2\cdot 2, 2\cdot 1) = (1,2).\quad\halmos$$
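The $\integer_3^2$ computations can be sketched the same way, except that every sum and product is reduced mod 3. The function names here are ad hoc.

```python
# Componentwise arithmetic in Z_3^2: reduce each component mod 3.
def add_mod3(u, v):
    return tuple((a + b) % 3 for a, b in zip(u, v))

def scale_mod3(k, u):
    return tuple((k * a) % 3 for a in u)

print(add_mod3((1, 2), (1, 1)))  # (2, 0), since 2 + 1 = 0 mod 3
print(scale_mod3(2, (2, 1)))     # (1, 2), since 2 * 2 = 1 mod 3
```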


Example. The set $\real[x]$ of polynomials with real coefficients is a vector space over $\real$ , using the standard operations on polynomials. For example,

$$(2x^2 + 3x + 5) + (x^3 + 7x - 11) = x^3 + 2x^2 + 10x - 6.$$

$$4\cdot (-3x^2 + 10) = -12x^2 + 40.$$

Unlike $\real^n$ , $\real[x]$ is infinite dimensional (in a sense to be made more precise shortly). Intuitively, you need an infinite set of polynomials, like

$$1, x, x^2, x^3, \ldots$$

to "construct" all the elements of $\real[x]$ .


Example. Let $C[0,1]$ denote the continuous real-valued functions defined on the interval $0 \le x \le 1$ . Add functions pointwise:

$$(f + g)(x) = f(x) + g(x) \quad\hbox{for}\quad f, g \in C[0,1].$$

From calculus, you know that the sum of continuous functions is a continuous function.

If $a \in \real$ and $f
   \in C[0,1]$ , define scalar multiplication in pointwise fashion:

$$(af)(x) = a\cdot f(x).$$

For example, if $f(x) = x^2$ and $a = 3$ , then

$$(af)(x) = 3x^2.$$

These operations make $C[0,1]$ into an $\real$ -vector space.

Like $\real[x]$ , $C[0,1]$ is infinite dimensional. However, its dimension is uncountably infinite, while $\real[x]$ has countably infinite dimension over $\real$ .

You can also define a "dot product" for two vectors $f, g \in C[0,1]$ :

$$f\cdot g = \int_0^1 f(x)g(x)\,dx.$$

The product of continuous functions is continuous, so the integral of $f(x)g(x)$ is defined. This example shows that abstract vectors do not have to look like little arrows!
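Here is a rough numerical sketch of this "dot product", approximating the integral with a midpoint Riemann sum; the step count `n = 10000` is an arbitrary choice, and `dot` is my own name for the function.

```python
# Approximate the integral of f(x)g(x) over [0, 1] with a midpoint sum.
def dot(f, g, n=10000):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

# For f(x) = x and g(x) = x, the exact value is the integral of x^2,
# which is 1/3.
print(dot(lambda x: x, lambda x: x))
```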


Proposition. Let V be a vector space over a field F.

  (a) $0\cdot x = 0$ for all $x \in V$ .
  (b) $(-1)\cdot x = -x$ for all $x \in V$ .
  (c) $-(-x) = x$ for all $x \in V$ .

Proof. (a) Note that the "0" on the left is the zero {\it scalar} in F, whereas the "0" on the right is the zero {\it vector} in V.

$$0\cdot x = (0 + 0)\cdot x = 0\cdot x + 0\cdot x.$$

Subtracting $0\cdot x$ from both sides, I get $0 = 0\cdot x$ .

(b) (The "-1" on the left is the scalar -1; the "$-x$ " on the right is the "negative" of $x \in V$ .)

$$(-1)\cdot x + x = (-1)\cdot x + 1\cdot x = \left((-1) + 1\right)\cdot x = 0\cdot x = 0.$$
Adding $-x$ to both sides of $(-1)\cdot x + x = 0$ gives $(-1)\cdot x = -x$ .

(c)

$$-(-x) = (-1)\cdot [(-1)\cdot x] = [(-1)\cdot (-1)]x = 1\cdot x = x.\quad\halmos$$

Definition. Let V be a vector space over a field F, and let $W \subset V$ , $W \ne \emptyset$ . W is a subspace of V if:

  1. If $u, v \in W$ , then $u
   + v \in W$ .
  2. If $k \in F$ and $u \in W$ , then $ku \in W$ .

In other words, W is closed under addition of vectors and under scalar multiplication.

Lemma. Let W be a subspace of a vector space V.

  (a) The zero vector is in W.
  (b) If $w \in W$ , then $-w \in W$ .

Proof. (a) Take any vector $w \in W$ (which you can do because W is nonempty), and take $0 \in F$ . Since W is closed under scalar multiplication, $0\cdot w \in W$ . But $0\cdot w =
   0$ , so $0 \in W$ .

(b) Since $w \in W$ and $-1
   \in F$ , $(-1)\cdot w = -w$ is in W.


Example. If V is a vector space over a field F, $\{0\}$ and V are subspaces of V.


Example. Consider the real vector space $\real^2$ , the usual x-y plane. Then

$$W_1 = \{(x,0) \mid x \in \real\} \quad\hbox{and}\quad W_2 = \{(0,y) \mid y \in \real\}$$

are subspaces of $\real^2$ . (These are just the x and y-axes, of course.)

I'll check that $W_1$ is a subspace. First, I have to show that two elements of $W_1$ add to an element of $W_1$ . An element of $W_1$ is a pair with the second component 0. So here are two elements of $W_1$ : $(x_1,0)$ , $(x_2,0)$ . Add them:

$$(x_1,0) + (x_2,0) = (x_1 + x_2,0).$$

$(x_1 + x_2,0)$ is in $W_1$ , because its second component is 0. Thus, $W_1$ is closed under sums.

Next, I have to show that $W_1$ is closed under scalar multiplication. Take a scalar $k
   \in \real$ and a vector $(x,0) \in W_1$ . Take their product:

$$k\cdot (x,0) = (kx,0).$$

The product $(kx,0)$ is in $W_1$ because its second component is 0. Therefore, $W_1$ is closed under scalar multiplication.

Thus, $W_1$ is a subspace.

Notice that in doing the proof, I did not use specific vectors in $W_1$ like $(42,0)$ or $(-17,0)$ . I'm trying to prove statements about arbitrary elements of $W_1$ , so I use "variable" elements.
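A random spot check like the following can build confidence, but it is evidence, not proof; the argument with "variable" elements above is what actually establishes closure. The helper `in_W1` is my own name.

```python
import random

# W1 = {(x, 0)}: membership just means the second component is 0.
def in_W1(v):
    return v[1] == 0

# Sample random pairs of elements of W1 and random scalars, and check
# that sums and scalar multiples stay in W1.
for _ in range(100):
    u = (random.uniform(-10, 10), 0.0)
    w = (random.uniform(-10, 10), 0.0)
    k = random.uniform(-10, 10)
    assert in_W1((u[0] + w[0], u[1] + w[1]))
    assert in_W1((k * u[0], k * u[1]))
print("closure spot check passed")
```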

In general, the subspaces of $\real^2$ are $\{0\}$ , $\real^2$ , and lines passing through the origin. (Why can't a line which doesn't pass through the origin be a subspace?)

In $\real^3$ , the subspaces are $\{0\}$ , $\real^3$ , and lines or planes passing through the origin.

And so on.


Example. Prove or disprove: The following subset of $\real^3$ is a subspace:

$$W = \{(x, y, 1) \mid x, y \in \real\}.$$

If you're trying to decide whether a set is a subspace, it's always good to check whether it contains the zero vector before you start checking the axioms. In this case, the set consists of 3-dimensional vectors whose third components are equal to 1. Obviously, the zero vector $(0, 0, 0)$ doesn't satisfy this condition.

Since W doesn't contain the zero vector, it's not a subspace of $\real^3$ .


Example. Let

$$W = \left\{(x, \sin x) \mid x \in \real\right\}.$$

Prove or disprove: W is a subspace of $\real^2$ .

Note that $(0,0) = (0, \sin 0) \in
   W$ . This is not one of the axioms for a subspace, but it's a good thing to check first because you can usually do it quickly. If the zero vector is not in a set, then the lemma above shows that the set is not a subspace. In this case, the zero vector is in W, so the issue isn't settled, and I'll try to check the subspace axioms.

First, I might try to check that the set is closed under sums. I take two vectors in W --- say $(x, \sin
   x)$ and $(y, \sin y)$ . I add them:

$$(x, \sin x) + (y, \sin y) = (x + y, \sin x + \sin y).$$

The last vector isn't in the right form --- it would be if $\sin x + \sin y$ was equal to $\sin
   (x + y)$ . That doesn't sound right, so I suspect that W is not a subspace. I try to get a specific counterexample to contradict closure under addition.

First,

$$\left(\dfrac{\pi}{2}, \sin \dfrac{\pi}{2}\right) = \left(\dfrac{\pi}{2}, 1\right) \in W \quad\hbox{and}\quad (\pi, \sin \pi) = (\pi, 0) \in W.$$

On the other hand,

$$\left(\dfrac{\pi}{2}, \sin \dfrac{\pi}{2}\right) + (\pi, \sin \pi) = \left(\dfrac{\pi}{2}, 1\right) + (\pi, 0) = \left(\dfrac{3\pi}{2}, 1\right) \not\in W.$$

This is because $\sin \dfrac{3\pi}{2} = -1 \ne 1$ .

Since W is not closed under vector addition, it is not a subspace.
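The counterexample can be computed numerically as well. This sketch just evaluates the two vectors and their sum with the standard library's `math.sin`:

```python
import math

# Both of these points lie on the graph of sine, so they are in W.
u = (math.pi / 2, math.sin(math.pi / 2))   # (pi/2, 1)
v = (math.pi, math.sin(math.pi))           # (pi, 0), up to rounding

# Their sum has first component 3*pi/2 and second component 1,
# but sin(3*pi/2) = -1, so the sum is NOT on the graph.
s = (u[0] + v[0], u[1] + v[1])
print(s[1], math.sin(s[0]))  # roughly 1.0 versus -1.0
```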


Example. Let F be a field, and let $A, B \in M(n, F)$ . Consider the following subset of $F^n$ :

$$W = \{v \in F^n \mid Av = Bv\}.$$

Prove or disprove: W is a subspace.

This set is defined by a property rather than by appearance, and axiom checks for this kind of set often give people trouble. The problem is that elements of W don't "look like" anything --- if you need to refer to a couple of arbitrary elements of W, you might call them u and v (for instance). There's nothing about the symbols u and v which tells you that they belong to W. But u and v are like people who belong to a club: You can't tell from their appearance that they're club members, but they're carrying membership cards in their pockets.

With this in mind, I'll check closure under addition. Let $u, v \in W$ . I must show that $u + v \in W$ .

Since u and v are in W,

$$Au = Bu \quad\hbox{and}\quad Av = Bv.$$

Adding the equations and factoring out, I get

$$\eqalign{Au + Av &= Bu + Bv \cr A(u + v) &= B(u + v) \cr}$$

The last equation shows that $u +
   v \in W$ .

Warning: Don't say "$A(u + v) = B(u + v) \in W$ " --- it doesn't make sense! "$A(u + v) = B(u + v)$ " is an equation that $u + v$ satisfies; it can't be an element of W, because elements of W are vectors.

Next, I'll check closure under scalar multiplication. Let $k \in F$ and let $v \in W$ . Since $v \in W$ , I have

$$Av = Bv.$$

Multiply both sides by k, then commute the matrices and the scalar:

$$\eqalign{k(Av) &= k(Bv) \cr A(kv) &= B(kv) \cr}$$

The last equation says that $kv
   \in W$ .

Since W is closed under addition and scalar multiplication, it's a subspace.
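Here is the closure computation made concrete with a pair of small matrices I chose arbitrarily, using a hand-rolled matrix-vector product.

```python
# Multiply a 2x2 matrix (tuple of rows) by a 2-vector.
def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

A = ((1, 2), (3, 4))
B = ((1, 2), (0, 1))
# A - B = ((0, 0), (3, 3)), so Av = Bv exactly when 3x + 3y = 0,
# i.e. when v = (t, -t).
u, v = (1, -1), (2, -2)
assert matvec(A, u) == matvec(B, u) and matvec(A, v) == matvec(B, v)

s = (u[0] + v[0], u[1] + v[1])   # u + v = (3, -3)
kv = (5 * v[0], 5 * v[1])        # 5v = (10, -10)
assert matvec(A, s) == matvec(B, s)      # u + v is in W
assert matvec(A, kv) == matvec(B, kv)    # kv is in W
print("u + v and 5v stay in W")
```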


Example. Consider the following subsets of the polynomial ring $\real[x]$ :

$$V_1 = \{f(x) \in \real[x] \mid f(2) = 0\}, \qquad V_2 = \{f(x) \in \real[x] \mid f(2) = 1\}.$$

$V_1$ is a subspace; it consists of all polynomials having $x = 2$ as a root.

$V_2$ is not a subspace. One way to see this is to notice that the zero polynomial (i.e. the zero vector) is not in $V_2$ , because the zero polynomial does not give 1 when you plug in $x = 2$ .

Alternatively, the constant polynomial $f(x) = 1$ is an element of $V_2$ --- it gives 1 when you plug in 2 --- but $2\cdot f(x)$ is not. So $V_2$ is not closed under scalar multiplication.


Lemma. If A is an $m \times n$ matrix over the field F, the set of n-dimensional vectors x which satisfy

$$Ax = 0$$

is a subspace of $F^n$ (the solution space of the system).

Proof. If $Ax = 0$ and $Ay = 0$ , then

$$A(x + y) = Ax + Ay = 0 + 0 = 0.$$

Therefore, if x and y are in the set, so is $x + y$ .

If $Ax = 0$ and k is a scalar, then

$$A(kx) = k(Ax) = k\cdot 0 = 0.$$

Therefore, if x is in the set, then so is $kx$ .

Therefore, the solution space is a subspace.


Example. Consider the following system of linear equations over $\real$ :

$$\left[\matrix{1 & 1 & 0 & 1 \cr 0 & 0 & 1 & 3 \cr}\right] \left[\matrix{w \cr x \cr y \cr z \cr}\right] = \left[\matrix{0 \cr 0 \cr 0 \cr 0 \cr}\right].$$

The solution can be written as

$$w = -s - t, \quad x = s, \quad y = -3t, \quad z = t.$$

Thus,

$$\left[\matrix{w \cr x \cr y \cr z \cr}\right] = \left[\matrix{-s - t \cr s \cr -3t \cr t \cr}\right].$$

The Lemma says that the set of all vectors of this form constitutes a subspace of $\real^4$ .

For example, if you add two vectors of this form, you get another vector of this form:

$$\left[\matrix{-s - t \cr s \cr -3t \cr t \cr}\right] + \left[\matrix{-s' - t' \cr s' \cr -3t' \cr t' \cr}\right] = \left[\matrix{-(s + s') - (t + t') \cr s + s' \cr -3(t + t') \cr t + t' \cr}\right].$$

You can check that the set is also closed under scalar multiplication.
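You can also check the parametric solution by machine: for random choices of s and t, the resulting vector should satisfy both equations of the system.

```python
import random

# The coefficient matrix of the system above.
A = [[1, 1, 0, 1], [0, 0, 1, 3]]

# For random s and t, build the parametric solution and verify Ax = 0.
for _ in range(100):
    s, t = random.uniform(-5, 5), random.uniform(-5, 5)
    x = [-s - t, s, -3 * t, t]
    for row in A:
        assert abs(sum(a * xi for a, xi in zip(row, x))) < 1e-9
print("all sampled solutions satisfy Ax = 0")
```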


Definition. If $v_1$ , $v_2$ , ..., $v_n$ are vectors in a vector space V, a linear combination of the v's is a vector

$$k_1v_1 + k_2v_2 + \cdots + k_nv_n,$$

where the k's are scalars.


Example. Take $u = \langle 1,2\rangle$ and $v = \langle -3,7\rangle$ in $\real^2$ . Here is a linear combination of u and v:

$$2u - 5v = 2\cdot \langle 1,2\rangle - 5\langle -3,7\rangle = \langle 17,-31\rangle.$$

$(\sqrt{2} - 17)u +
   \dfrac{\pi^2}{4}v$ is also a linear combination of u and v. u and v are themselves linear combinations of u and v, as is the zero vector (why?).

In fact, it turns out that any vector in $\real^2$ is a linear combination of u and v.

On the other hand, there are vectors in $\real^2$ which are not linear combinations of $p = \langle 1,-2\rangle$ and $q = \langle -2,4\rangle$ . Do you see how this pair is different from the first?
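One way to see the difference between the two pairs is a 2-by-2 determinant test: the pair spans $\real^2$ exactly when the determinant of the matrix with the two vectors as columns is nonzero. (This connection is developed later; here it is just a quick computational check.)

```python
# Determinant of the 2x2 matrix with columns u and v.
def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

print(det2((1, 2), (-3, 7)))   # 13, nonzero: u and v span R^2
print(det2((1, -2), (-2, 4)))  # 0: p and q are parallel (q = -2p)
```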


Definition. If S is a subset of a vector space V, the span $\langle S\rangle$ of S is the set of all linear combinations of vectors in S.

Theorem. If S is a subset of a vector space V, the span $\langle S\rangle$ of S is a subspace of V.

Proof. Here are typical elements of the span of S:

$$j_1u_1 + j_2u_2 + \cdots + j_nu_n, \quad k_1v_1 + k_2v_2 + \cdots + k_mv_m,$$

where the j's and k's are scalars and the u's and v's are elements of S.

Take two elements of the span and add them:

$$(j_1u_1 + j_2u_2 + \cdots + j_nu_n) + (k_1v_1 + k_2v_2 + \cdots + k_mv_m) = j_1u_1 + j_2u_2 + \cdots + j_nu_n + k_1v_1 + k_2v_2 + \cdots + k_mv_m.$$

This humongous sum is an element of the span, because it's a sum of vectors in S, each multiplied by a scalar. Thus, the span is closed under taking sums.

Take an element of the span and multiply it by a scalar:

$$k\cdot(k_1v_1 + k_2v_2 + \cdots + k_nv_n) = kk_1v_1 + kk_2v_2 + \cdots + kk_nv_n.$$

This is an element of the span, because it's a sum of vectors in S, each multiplied by a scalar. Thus, the span is closed under scalar multiplication.

Therefore, the span is a subspace.


Example. Prove that the span of $\langle 3,1,0\rangle$ and $\langle 2,1,0\rangle$ in $\real^3$ is

$$V = \left\{\langle a, b, 0\rangle \mid a, b \in \real\right\}.$$

To show that two sets are equal, you need to show that each is contained in the other. To do this, take a typical element of the first set and show that it's in the second set. Then take a typical element of the second set and show that it's in the first set.

Let W be the span of $\langle
   3,1,0\rangle$ and $\langle 2,1,0\rangle$ in $\real^3$ . A typical element of W is a linear combination of the two vectors:

$$x\cdot \langle 3,1,0\rangle + y\cdot \langle 2,1,0\rangle = \langle 3x + 2y, x + y, 0\rangle.$$

Since the sum is a vector of the form $\langle a, b, 0\rangle$ for $a, b \in \real$ , it is in V. This proves that $W \subset V$ .

Now let $\langle a, b, 0\rangle
   \in V$ . I have to show that this vector is a linear combination of $\langle 3,1,0\rangle$ and $\langle 2,1,0\rangle$ . This means that I have to find real numbers x and y such that

$$x\cdot \langle 3,1,0\rangle + y\cdot \langle 2,1,0\rangle = \langle a, b, 0\rangle.$$

If I expand the left side, I get

$$\langle 3x + 2y, x + y, 0\rangle = \langle a, b, 0\rangle.$$

Equating corresponding components, I get

$$3x + 2y = a, \quad x + y = b.$$

This is a system of linear equations which you can solve by row reduction or matrix inversion (for instance). The solution is

$$x = a - 2b, \quad y = -a + 3b.$$

In other words,

$$(a - 2b)\cdot \langle 3,1,0\rangle + (-a + 3b)\cdot \langle 2,1,0\rangle = \langle a, b, 0\rangle.$$

Since $\langle a, b, 0\rangle$ is a linear combination of $\langle 3,1,0\rangle$ and $\langle 2,1,0\rangle$ , it follows that $\langle a, b,
   0\rangle \in W$ . This proves that $V \subset W$ .

Since $W \subset V$ and $V
   \subset W$ , I have $W = V$ .



Copyright 2008 by Bruce Ikenaga