Lecture 1: Vectors, Linear Independence, and Spanning Sets
Instructions
- This section covers the concepts listed below. You can click the button to go directly to that topic.
- After each video, there are notes for the material in the video.
- When you have finished the material below, you can go to the next section or return to the main page.
Vectors and Vector Spaces
Video Notes
For the beginning of the course, we will define a vector and vector space in this way (this is not the most abstract or the best definition, but it is what we will start with):
Definition. A real vector is a column of \(n\) real numbers. We call such a vector an \(n\) dimensional real vector. The set of all \(n\) dimensional vectors is \(\mathbb{R}^n.\)
In this series of videos, we will concentrate mostly on the \(n=2\) and \(n=3\) cases. The reason is that most Math 308 classes only use 2 and 3 dimensional vectors, and once the basic application of linear algebra to differential equations is understood, you can come back to the subject after you have had a proper linear algebra class.
Example. Here are some examples:
- The vector \(\begin{pmatrix}1 \\ 2\end{pmatrix}\) is a vector in \(\mathbb{R}^2\).
- The vector \(\begin{pmatrix}1 \\ 0 \\ 1\end{pmatrix}\) is a vector in \(\mathbb{R}^3\).
- \(\begin{pmatrix}1 \\ 2 + 3i \\ 0\end{pmatrix}\) is not in \(\mathbb{R}^3\) since the second entry, \(2 + 3i\), is not a real number.
Example. All the vectors in the previous example are complex vectors: since every real number is also a complex number, the first two vectors are complex vectors as well as real vectors.
In the study of differential equations, the vectors in question are functions of some variable, for example \(t\). This means that each entry in the vector is a function. For example, the following vectors are typical of those that come up in differential equations:
- \(\begin{pmatrix}t\\t^2\end{pmatrix}\)
- \(\begin{pmatrix}e^t\\e^{2t}\\1\end{pmatrix}\)
- \(e^t\begin{pmatrix}\cos t + \sin t\\\cos t - \sin t\\\cos t\end{pmatrix}\)
Video Notes
There are two important ways to create new vectors out of old vectors that we now discuss. The first that we discuss is scalar multiplication. Given a vector \(\boldsymbol{v}\in\mathbb{R}^n\), and a real or complex number \(c\) (real and complex numbers are called scalars) the product \(c \boldsymbol{v}\) is defined as follows:\begin{align*}
c \boldsymbol{v}
=c \begin{pmatrix}a_1\\a_2\\\vdots\\a_n\end{pmatrix}
= \begin{pmatrix}c a_1 \\ c a_2\\\vdots \\c a_n\end{pmatrix}.
\end{align*}
That is, each entry of \(\boldsymbol{v}\) is just multiplied by the scalar \(c\). As you will learn when you take an actual linear algebra class (and as you might know from Physics–related applications), multiplying a vector by a scalar has the geometric interpretation of "stretching'' the vector. While geometric interpretations are always helpful for the intuition they bring, this won't be a focus of this mini course.
Example. Compute \(3\begin{pmatrix}1\\2\\-2\end{pmatrix}\).
Solution. We just multiply each entry by \(3\):
\begin{align*}
3\begin{pmatrix}1\\2\\-2\end{pmatrix}
= \begin{pmatrix}3\\6\\-6\end{pmatrix}.
\end{align*}
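The same rule applies when the entries are functions, as in the differential equations setting described above. For example,
\begin{align*}
2\begin{pmatrix}e^t\\ \cos t\end{pmatrix}
= \begin{pmatrix}2e^t\\ 2\cos t\end{pmatrix}.
\end{align*}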
Video Notes
The next way to combine vectors that we discuss is adding two vectors together. Let \(\boldsymbol{v},\boldsymbol{w}\) be two vectors in \(\mathbb{R}^n\). Then we can define the sum of these vectors as:\begin{align*}
\boldsymbol{v} + \boldsymbol{w}
=\begin{pmatrix}a_1\\\vdots\\a_n\end{pmatrix} + \begin{pmatrix}b_1\\\vdots\\b_n\end{pmatrix}
=\begin{pmatrix}a_1 + b_1 \\ \vdots \\ a_n + b_n\end{pmatrix}.
\end{align*}
That is, we just add the vectors "entry-wise''. Note that to add two vectors together, they must both be in \(\mathbb{R}^n\). That is, addition of a vector in \(\mathbb{R}^2\) and a vector in \(\mathbb{R}^3\) is not defined.
Example. Find the sum of \(\begin{pmatrix}1\\0\\-1\end{pmatrix}\) and \(\begin{pmatrix}2\\0\\1\end{pmatrix}\).
Solution. \begin{align*}
\begin{pmatrix}1\\0\\-1\end{pmatrix} + \begin{pmatrix}2\\0\\1\end{pmatrix}
= \begin{pmatrix}1 + 2 \\0 + 0 \\ -1 + 1\end{pmatrix}
= \begin{pmatrix}3\\0\\0\end{pmatrix}.
\end{align*}
Example
Find the sum of \(e^t\begin{pmatrix}\cos t\\ \sin t\end{pmatrix}\) and \(e^{2t}\begin{pmatrix}\cos 2t \\ \sin 2t\end{pmatrix}.\)
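Solution. Adding entry-wise (after multiplying each entry by its scalar function) gives
\begin{align*}
e^t\begin{pmatrix}\cos t\\ \sin t\end{pmatrix} + e^{2t}\begin{pmatrix}\cos 2t \\ \sin 2t\end{pmatrix}
= \begin{pmatrix} e^t\cos t + e^{2t}\cos 2t \\ e^t \sin t + e^{2t}\sin 2t\end{pmatrix}.
\end{align*}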
Video Notes
The last example is an example of something called a linear combination: this is when we combine both scalar multiplication and vector addition into one operation. In general, if \(\boldsymbol{v}_1, \ldots, \boldsymbol{v}_m\) are vectors in \(\mathbb{R}^n\) and \(c_1,\ldots, c_m\) are scalars, then we can form the linear combination:\begin{align*}c_1 \boldsymbol{v}_1 + \cdots + c_m\boldsymbol{v}_m
&= c_1 \begin{pmatrix} a^1_1\\\vdots\\a^1_n\end{pmatrix}
+ \cdots +
c_m \begin{pmatrix} a^m_1\\\vdots\\a^m_n\end{pmatrix}
\\[4mm]&= \begin{pmatrix} c_1 a^1_1\\\vdots\\c_1 a^1_n\end{pmatrix}
+ \cdots +
\begin{pmatrix} c_m a^m_1\\\vdots\\c_m a^m_n\end{pmatrix}
\\[4mm]&= \begin{pmatrix}c_1 a^1_1 + \cdots + c_m a^m_1 \\ \vdots \\
c_1 a^1_n + \cdots + c_m a^m_n \end{pmatrix}.
\end{align*}
Example. Compute \(2\begin{pmatrix}1\\2\end{pmatrix} - 3\begin{pmatrix}10\\2\end{pmatrix} + \begin{pmatrix}2\\1\end{pmatrix}\).
Solution. \begin{align*}
2\begin{pmatrix}1\\2\end{pmatrix} - 3\begin{pmatrix}10\\2\end{pmatrix} + \begin{pmatrix}2\\1\end{pmatrix}
= \begin{pmatrix}2\\4\end{pmatrix}
+ \begin{pmatrix}-30\\-6\end{pmatrix}
+ \begin{pmatrix}2\\1\end{pmatrix}
= \begin{pmatrix}-26\\-1\end{pmatrix}.
\end{align*}
Motivation from Systems of Equations
Video Notes
Oftentimes (and we will see specific applications to ODEs later in the course), we want to solve systems of equations like:\begin{align*}
c_1\begin{pmatrix}1\\1\\1\end{pmatrix}
+ c_2\begin{pmatrix}1\\0\\-1\end{pmatrix}
+ c_3\begin{pmatrix}0\\1\\-1\end{pmatrix}
=
\begin{pmatrix}
2\\5\\2
\end{pmatrix}
\end{align*}
This means that we want to find all \(c_1,c_2,c_3\) that satisfy this equation. So there are two questions to ask: (1) Is there a solution? and (2) If there is, is the solution unique?
For this particular problem:
\begin{align*}
\begin{pmatrix}
2\\5\\2
\end{pmatrix}
=
3\begin{pmatrix}1\\1\\1\end{pmatrix}
-1\begin{pmatrix}1\\0\\-1\end{pmatrix}
+ 2\begin{pmatrix}0\\1\\-1\end{pmatrix}
\end{align*}
And so one solution is:
\begin{align*}
c_1 = 3 \hspace{.25in}
c_2 = -1\hspace{.25in}
c_3 = 2.
\end{align*}
Next, we need to answer: is there more than one solution?
If \(c_1,c_2,c_3\) satisfy the equation, then the top equation implies \(c_2 = 2-c_1\) and the middle equation implies \(c_3 = 5-c_1\). We can now plug these into the bottom equation to get:
\begin{align*}
2
&= c_1 - c_2 - c_3
\\&= c_1 - (2-c_1) - (5-c_1)
\\&= 3c_1 - 7
\end{align*}
so \(c_1 = 3\), and then \(c_2 = 2-c_1 = -1\) and \(c_3 = 5-c_1 = 2\). So this is the only solution.
The questions above are called "existence" and "uniqueness" questions and are related to the concepts of "spanning sets" and "linear independence" that we will discuss in this lecture.
Linear Independence
Video Notes
Definition 3. A set of vectors, \(\boldsymbol{v}_1, \ldots, \boldsymbol{v}_m\) in \(\mathbb{R}^n\) is said to be linearly independent if the only solution to:\begin{align*}
c_1 \boldsymbol{v}_1 + c_2 \boldsymbol{v}_2 + \cdots + c_m \boldsymbol{v}_m = \boldsymbol{0}
\end{align*}
is \(c_1=c_2=\cdots =c_m=0\).
Observe that if \(c_1=\cdots=c_m = 0\), then this is a solution to the above equation. So a set of vectors is linearly independent if this is the only solution. The terminology can be explained as follows. If \(\boldsymbol{v}_1, \ldots, \boldsymbol{v}_m\) are linearly dependent, then there is some choice of scalars \(c_1, \ldots, c_m\), not all zero, such that \(c_1 \boldsymbol{v}_1 + c_2 \boldsymbol{v}_2 + \cdots + c_m \boldsymbol{v}_m = \boldsymbol{0}\). If we assume that \(c_1\neq 0\), then:
\begin{align*}
\boldsymbol{v}_1 = - \frac{1}{c_1}( c_2 \boldsymbol{v}_2 + \cdots + c_m \boldsymbol{v}_m).
\end{align*}
That is, \(\boldsymbol{v}_1\) depends on the other vectors. Another way to define linear independence is:
Definition 4. A set of vectors is linearly dependent if one vector can be written as a linear combination of the others, and a set of vectors is linearly independent if it is not linearly dependent.
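For example, since \(\begin{pmatrix}2\\4\end{pmatrix} = 2\begin{pmatrix}1\\2\end{pmatrix}\), the vectors \(\begin{pmatrix}1\\2\end{pmatrix}\) and \(\begin{pmatrix}2\\4\end{pmatrix}\) are linearly dependent: one of them is a linear combination (here, a scalar multiple) of the other.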
Let's look at a few examples.
Video Notes
If \(\boldsymbol{v}_1\) and \(\boldsymbol{v}_2\) are linearly independent, then the only solutions of the equation\[
c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2=0
\]
are \(c_1=c_2=0\).
If \(\boldsymbol{v}_1\) and \(\boldsymbol{v}_2\) are linearly dependent, then there are two scalars \(c_1\) and \(c_2\), at least one of which is not zero, such that
\[
c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2 = 0.
\]
Assuming \(c_1\neq 0\), we can solve the equation to get
\[
\boldsymbol{v}_1 =-\frac{c_2}{c_1}\boldsymbol{v}_2
\]
and so \(\boldsymbol{v}_1\) is a scalar multiple of \(\boldsymbol{v}_2\).
Therefore, two vectors are linearly dependent if and only if one vector is a scalar multiple of the other. Alternatively, two vectors are linearly independent if and only if neither vector is a scalar multiple of the other. Note that this criterion only works for determining whether two vectors are linearly independent; for three or more vectors, we must go back to the definition.
Examples
1. Determine if the vectors \(\begin{pmatrix}1\\1\end{pmatrix}, \begin{pmatrix}1\\-1\end{pmatrix}\) are linearly independent.
2. Determine if the vectors \(\begin{pmatrix}1\\1\end{pmatrix}, \begin{pmatrix}1\\-1\end{pmatrix}, \begin{pmatrix}-2\\0\end{pmatrix}\) are linearly independent.
3. Show that the vectors
\begin{align*}
\begin{pmatrix}1\\1\\1\end{pmatrix},
\begin{pmatrix}1\\0\\-1\end{pmatrix},
\begin{pmatrix}0\\1\\-1\end{pmatrix}
\end{align*}
are linearly independent.
4. Use the fact that the vectors are linearly independent to show that the previously found solution (\(c_1 = 3\), \(c_2 = -1\), and \(c_3 = 2\)) to the following equation is unique:\begin{align*}
c_1\begin{pmatrix}1\\1\\1\end{pmatrix}
+ c_2\begin{pmatrix}1\\0\\-1\end{pmatrix}
+ c_3\begin{pmatrix}0\\1\\-1\end{pmatrix}
=
\begin{pmatrix}
2\\5\\2
\end{pmatrix}
\end{align*}
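Solution sketches.
1. Neither vector is a scalar multiple of the other (every scalar multiple of \(\begin{pmatrix}1\\1\end{pmatrix}\) has equal entries), so the vectors are linearly independent.
2. Since \(1\cdot\begin{pmatrix}1\\1\end{pmatrix} + 1\cdot\begin{pmatrix}1\\-1\end{pmatrix} + 1\cdot\begin{pmatrix}-2\\0\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}\), there is a linear combination with nonzero coefficients equal to the zero vector, so the vectors are linearly dependent.
3. Suppose \(c_1\begin{pmatrix}1\\1\\1\end{pmatrix} + c_2\begin{pmatrix}1\\0\\-1\end{pmatrix} + c_3\begin{pmatrix}0\\1\\-1\end{pmatrix} = \boldsymbol{0}\). The top and middle entries give \(c_2 = -c_1\) and \(c_3 = -c_1\), and then the bottom entry gives \(c_1 - c_2 - c_3 = 3c_1 = 0\). So \(c_1 = c_2 = c_3 = 0\) is the only solution, and the vectors are linearly independent.
4. Suppose \(d_1, d_2, d_3\) also satisfy the equation. Subtracting the two equations gives \((c_1 - d_1)\begin{pmatrix}1\\1\\1\end{pmatrix} + (c_2 - d_2)\begin{pmatrix}1\\0\\-1\end{pmatrix} + (c_3 - d_3)\begin{pmatrix}0\\1\\-1\end{pmatrix} = \boldsymbol{0}\). Since the vectors are linearly independent, \(c_1 - d_1 = c_2 - d_2 = c_3 - d_3 = 0\), so the solution \(c_1 = 3\), \(c_2 = -1\), \(c_3 = 2\) is the only one.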
Spanning Sets and Basis
Video Notes
The concepts of "spanning set" and "basis" are related to the concept of "linear independence". A set of vectors \(\boldsymbol{v}_1, \ldots, \boldsymbol{v}_m\) in \(\mathbb{R}^n\) is a spanning set for \(\mathbb{R}^n\) if every vector in \(\mathbb{R}^n\) can be written as a linear combination of these vectors.
A general fact is that \(n\) linearly independent vectors in \(\mathbb{R}^n\) form a spanning set. In addition, a set of vectors that is (1) linearly independent and (2) a spanning set is called a basis.
Example
Determine if \(\begin{pmatrix}1\\1\\1\end{pmatrix},\begin{pmatrix}1\\0\\-1\end{pmatrix},\begin{pmatrix}0\\1\\-1\end{pmatrix}\) form a spanning set for \(\mathbb{R}^3\).
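Solution sketch. These are the three vectors from Example 3 above, which are linearly independent. By the general fact stated above, three linearly independent vectors in \(\mathbb{R}^3\) form a spanning set, so these vectors span \(\mathbb{R}^3\); being linearly independent as well, they form a basis of \(\mathbb{R}^3\).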