Chapter 04.01: Prerequisites to Simultaneous Linear Equations
Learning Objectives
After successful completion of this lesson, you should be able to:
1) Define what a matrix is.
2) Identify special types of matrices.
What does a matrix look like?
Matrices are everywhere. If you have used a spreadsheet such as Excel or written numbers in a table, you have used a matrix. Matrices make the presentation of numbers clearer and make calculations easier to program. Look at the matrix below about the sale of tires in a Blowoutr’us store – given by quarter and make of tires.
\[\begin{matrix} Tirestone\\ Michigan\\ Copper\\ \end{matrix} \stackrel{\mbox{Q1. Q2. Q3. Q4.}}{\begin{bmatrix} 25 & 20 & 3 & 2 \\ 5 & 10 &15 &25 \\ 6 & 16 &7 & 27 \\ \end{bmatrix}}\]
To find how many Copper tires were sold in Quarter \(4\), go along the row Copper and the column Q4 and find that it is \(27\).
So, what is a matrix?
A matrix is a rectangular array of elements. The elements can be symbolic expressions and/or numbers. Matrix \(\lbrack A\rbrack\) is denoted by
\[\displaystyle \lbrack A\rbrack = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\a_{21} & a_{22} &\cdots & a_{2n} \\ \vdots & & & \vdots \\a_{m1} & a_{m2} & \cdots & a_{\text{mn}} \\\end{bmatrix}\]
Row \(i\) of \(\lbrack A\rbrack\) has \(n\) elements and is
\[\left\lbrack a_{i1}a_{i2}\ldots a_{{in}} \right\rbrack\]
and column \(j\) of \(\lbrack A\rbrack\) has \(m\) elements and is
\[\begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{{mj}} \\\end{bmatrix}\]
Each matrix has rows and columns, and this defines the size of the matrix. If a matrix \(\lbrack A\rbrack\) has \(m\) rows and \(n\) columns, the size of the matrix is denoted by \(m \times n\). The matrix \(\lbrack A\rbrack\) may also be denoted by \(\lbrack A\rbrack_{m \times n}\) to show that \(\lbrack A\rbrack\) is a matrix with \(m\) rows and \(n\) columns.
Each entry in the matrix is called an element of the matrix and is denoted by \(a_{{ij}}\), where \(i\) is the row number and \(j\) is the column number of the element.
The matrix for the tire sales example could be denoted by the matrix \(A\) as
\[\lbrack A\rbrack = \begin{bmatrix} 25 & 20 & 3 & 2 \\ 5 & 10 & 15 & 25 \\ 6 & 16 & 7 & 27 \\ \end{bmatrix}\]
There are \(3\) rows and \(4\) columns, so the size of the matrix is \(3 \times 4\). In the above \(\lbrack A\rbrack\) matrix, \(a_{34} = 27\).
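To make the notation concrete, here is a minimal sketch (added for illustration, assuming Python with the NumPy library) that stores the tire-sales matrix as a two-dimensional array. Note that NumPy indexes rows and columns starting from \(0\), so the element written \(a_{34}\) in matrix notation is accessed as `A[2, 3]`.

```python
import numpy as np

# Tire sales: rows are Tirestone, Michigan, Copper; columns are Q1, Q2, Q3, Q4
A = np.array([[25, 20,  3,  2],
              [ 5, 10, 15, 25],
              [ 6, 16,  7, 27]])

print(A.shape)   # (3, 4): 3 rows and 4 columns
print(A[2, 3])   # a_34 = 27 (NumPy uses 0-based indexing)
```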
What are the special types of matrices?
Vector: A vector is a matrix that has only one row or one column. There are two types of vectors – row vectors and column vectors.
Row Vector
If a matrix \(\lbrack B\rbrack\) has one row, it is called a row vector \(\lbrack B\rbrack = \lbrack b_{1}\ b_{2}\ \ldots\ b_{n}\rbrack\), and \(n\) is the dimension of the row vector.
Column vector
If a matrix \(\lbrack C\rbrack\) has one column, it is called a column vector
\[\lbrack C\rbrack = \begin{bmatrix} c_{1} \\ \vdots \\ \vdots \\ c_{m} \\ \end{bmatrix}\]
and \(m\) is the dimension of the vector.
Submatrix
If some row(s) and/or column(s) of a matrix \(\lbrack A\rbrack\) are deleted (no rows or columns need be deleted), the remaining matrix is called a submatrix of \(\lbrack A\rbrack\).
Square matrix
If the number of rows \(m\) of a matrix \(\lbrack A\rbrack\) is equal to the number of columns \(n\), that is, \(m = n\), then \(\lbrack A\rbrack\) is called a square matrix. The entries \(a_{11},a_{22},...,a_{{nn}}\) are called the diagonal elements of a square matrix. Sometimes the diagonal of the matrix is also called the principal or main diagonal of the matrix.
Example 3
Give an example of a square matrix.
Solution
\[\lbrack A\rbrack = \begin{bmatrix} 25 & 20 & 3 \\ 5 & 10 & 15 \\ 6 & 15 & 7 \\ \end{bmatrix}\]
is a square matrix as it has the same number of rows and columns, that is, \(3\). The diagonal elements of \(\lbrack A\rbrack\) are \[a_{11} = 25,\ \ a_{22} = 10,\ \ a_{33} = 7.\]
Upper triangular matrix
An \(n \times n\) matrix for which \(a_{{ij}} = 0\) for all \(i > j\) is called an upper triangular matrix. That is, all the elements below the diagonal entries are zero.
Lower triangular matrix
An \(n \times n\) matrix for which \(a_{{ij}} = 0\) for all \(j > i\) is called a lower triangular matrix. That is, all the elements above the diagonal entries are zero.
Diagonal matrix
A square matrix with all non-diagonal elements equal to zero is called a diagonal matrix, that is, only the diagonal entries of the square matrix can be non-zero (\(a_{ij} = 0,\ \ i \neq j\)).
Example 6
Give examples of diagonal matrices.
Solution
\[\lbrack A\rbrack = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2.1 & 0 \\ 0 & 0 & 5 \\ \end{bmatrix}\]
is a diagonal matrix.
Any or all the diagonal entries of a diagonal matrix can be zero. For example
\[\lbrack A\rbrack = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2.1 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}\]
is also a diagonal matrix.
Identity matrix
A diagonal matrix with all diagonal elements equal to \(1\) is called an identity matrix (\(a_{{ij}} = 0\) for all \(i \neq j\), and \(a_{{ii}} = 1\) for all \(i\)).
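As a sketch of how these special matrices might be generated or checked numerically (an illustrative addition, assuming NumPy), the helpers `np.diag`, `np.eye`, `np.triu`, and `np.tril` build a diagonal matrix, an identity matrix, and the upper and lower triangular parts of a matrix, respectively.

```python
import numpy as np

A = np.array([[25, 20,  3],
              [ 5, 10, 15],
              [ 6, 15,  7]])

D = np.diag([3.0, 2.1, 5.0])   # diagonal matrix with the given diagonal entries
I = np.eye(3)                  # 3 x 3 identity matrix
U = np.triu(A)                 # upper triangular part of A (entries below the diagonal set to zero)
L = np.tril(A)                 # lower triangular part of A (entries above the diagonal set to zero)

# A square matrix M is upper triangular if it equals its own upper triangular part
def is_upper_triangular(M):
    return np.allclose(M, np.triu(M))

print(is_upper_triangular(U))  # True
print(is_upper_triangular(A))  # False
```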
Learning Objectives
After successful completion of this lesson, you should be able to:
1) Add one matrix to another
2) Subtract one matrix from another
3) Multiply one matrix by another
How do you add two matrices?
Two matrices \(\left\lbrack A \right\rbrack\) and \(\left\lbrack B \right\rbrack\) can be added if they are of the same size. The addition is then shown as
\[\left\lbrack C \right\rbrack = \left\lbrack A \right\rbrack + \left\lbrack B \right\rbrack\]
where
\[c_{{ij}} = a_{{ij}} + b_{{ij}}\]
Example 1
Add the following two matrices.
\[\left\lbrack A \right\rbrack = \begin{bmatrix}5 & 2 & 3 \\1 & 2 & 7 \\\end{bmatrix}\] \[\left\lbrack B \right\rbrack = \begin{bmatrix}6 & 7 & - 2 \\3 & 5 & 19 \\\end{bmatrix}\]
Solution
\[\begin{split} \left\lbrack C \right\rbrack &= \left\lbrack A \right\rbrack + \left\lbrack B \right\rbrack\\ &= \begin{bmatrix} 5 & 2 & 3 \\ 1 & 2 & 7 \\ \end{bmatrix} + \begin{bmatrix} 6 & 7 & - 2 \\ 3 & 5 & 19 \\ \end{bmatrix}\\ &= \begin{bmatrix} 5 + 6 & 2 + 7 & 3 - 2 \\ 1 + 3 & 2 + 5 & 7 + 19 \\ \end{bmatrix}\\ &= \begin{bmatrix} 11 & 9 & 1 \\ 4 & 7 & 26 \\ \end{bmatrix} \end{split}\]
How do you subtract two matrices?
Two matrices \(\left\lbrack A \right\rbrack\) and \(\left\lbrack B \right\rbrack\) can be subtracted if they are the same size. The subtraction is then shown as
\[\left\lbrack D \right\rbrack = \left\lbrack A \right\rbrack - \left\lbrack B \right\rbrack\]
where
\[d_{{ij}} = a_{{ij}} - b_{{ij}}\]
Example 2
Subtract matrix \(\left\lbrack B \right\rbrack\) from matrix \(\left\lbrack A \right\rbrack\).
\[\left\lbrack A \right\rbrack = \begin{bmatrix} 5 & 2 & 3 \\ 1 & 2 & 7 \\ \end{bmatrix}\]
\[\left\lbrack B \right\rbrack = \begin{bmatrix} 6 & 7 & - 2 \\ 3 & 5 & 19 \\ \end{bmatrix}\]
Solution
\[\begin{split} \left\lbrack D \right\rbrack &= \left\lbrack A \right\rbrack - \left\lbrack B \right\rbrack\\ &= \begin{bmatrix} 5 & 2 & 3 \\ 1 & 2 & 7 \\ \end{bmatrix} - \begin{bmatrix} 6 & 7 & - 2 \\ 3 & 5 & 19 \\ \end{bmatrix}\\ &= \begin{bmatrix} \left( 5 - 6 \right) & \left( 2 - 7 \right) & \left( 3 - \left( - 2 \right) \right) \\ \left( 1 - 3 \right) & \left( 2 - 5 \right) & \left( 7 - 19 \right) \\ \end{bmatrix}\\ &= \begin{bmatrix} - 1 & - 5 & 5 \\ - 2 & - 3 & - 12 \\ \end{bmatrix}\end{split}\]
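Because addition and subtraction are element by element, they are easy to verify numerically. The following sketch (added for illustration, assuming NumPy) reproduces the results of Examples 1 and 2 with the `+` and `-` operators, which require the arrays to be of the same size.

```python
import numpy as np

A = np.array([[5, 2, 3],
              [1, 2, 7]])
B = np.array([[6, 7, -2],
              [3, 5, 19]])

C = A + B   # c_ij = a_ij + b_ij
D = A - B   # d_ij = a_ij - b_ij

print(C)    # [[11  9  1]
            #  [ 4  7 26]]
print(D)    # [[ -1  -5   5]
            #  [ -2  -3 -12]]
```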
How do you multiply two matrices?
Two matrices \(\left\lbrack A \right\rbrack\) and \(\left\lbrack B \right\rbrack\) can be multiplied only if the number of columns of \(\left\lbrack A \right\rbrack\) is equal to the number of rows of \(\left\lbrack B \right\rbrack\) to give
\[\left\lbrack C \right\rbrack_{m \times n} = \left\lbrack A \right\rbrack_{m \times p}\left\lbrack B \right\rbrack_{p \times n}\]
If \(\left\lbrack A \right\rbrack\) is an \(m \times p\) matrix and \(\left\lbrack B \right\rbrack\) is a \(p \times n\) matrix, the resulting matrix \(\left\lbrack C \right\rbrack\) is an \(m \times n\) matrix.
So how does one calculate the elements of the \(\left\lbrack C \right\rbrack\) matrix?
\[\begin{split} c_{{ij}} &= \sum_{k = 1}^{p}{a_{{ik}}b_{{kj}}}\\ &= a_{i1}b_{1j} + a_{i2}b_{2j} + \ldots + a_{{ip}}b_{{pj}} \end{split}\]
for each \(i = 1,\ 2,\ \ldots\ \ ,\ m\) and \(j = 1,\ 2,\ \ldots\ \ ,\ n\).
To put it in simpler terms, the element in the \(i^{{th}}\) row and \(j^{{th}}\) column of the \(\left\lbrack C \right\rbrack\) matrix in \(\left\lbrack C \right\rbrack = \left\lbrack A \right\rbrack\left\lbrack B \right\rbrack\) is calculated by multiplying the \(i^{{th}}\) row of \(\left\lbrack A \right\rbrack\) by the \(j^{{th}}\) column of \(\left\lbrack B \right\rbrack\). That is,
\[\begin{split} c_{{ij}} &= \begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{{ip}} \\ \end{bmatrix}\begin{bmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{{pj}} \\ \end{bmatrix}\\ &= a_{i1}b_{1j} + a_{i2}b_{2j} + \ldots + a_{{ip}}b_{{pj}}\\ &= \sum_{k = 1}^{p}{a_{{ik}}b_{{kj}}}\end{split}\]
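To make the summation concrete, here is a minimal sketch (an illustrative addition, not part of the original text) that computes every \(c_{ij}\) with the triple loop implied by the formula above; plain Python lists are used so each step stays visible. For instance, `matmult([[1, 2]], [[3], [4]])` returns `[[11.0]]`.

```python
def matmult(A, B):
    """Multiply an m x p matrix A by a p x n matrix B using c_ij = sum_k a_ik * b_kj."""
    m, p, n = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "number of columns of [A] must equal number of rows of [B]"
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):           # i-th row of [A]
        for j in range(n):       # j-th column of [B]
            for k in range(p):   # summation index
                C[i][j] += A[i][k] * B[k][j]
    return C
```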
Example 3
Given
\[\left\lbrack A \right\rbrack = \begin{bmatrix} 5 & 2 & 3 \\ 1 & 2 & 7 \\ \end{bmatrix}\]
\[\left\lbrack B \right\rbrack = \begin{bmatrix} 3 & - 2 \\ 5 & - 8 \\ 9 & - 10 \\ \end{bmatrix}\]
Find
\[\left\lbrack C \right\rbrack = \left\lbrack A \right\rbrack\left\lbrack B \right\rbrack\]
Solution
\(c_{12}\) can be found by multiplying the first row of \(\left\lbrack A \right\rbrack\) by the second column of \(\left\lbrack B \right\rbrack\),
\[\begin{split} c_{12} &= \begin{bmatrix} 5 & 2 & 3 \\ \end{bmatrix}\begin{bmatrix} - 2 \\ - 8 \\ - 10 \\ \end{bmatrix}\\ &= \left( 5 \right)\left( - 2 \right) + \left( 2 \right)\left( - 8 \right) + \left( 3 \right)\left( - 10 \right)\\ &= - 56\end{split}\]
Similarly, one can find the other elements of \(\left\lbrack C \right\rbrack\) to give
\[\left\lbrack C \right\rbrack = \begin{bmatrix} 52 & - 56 \\ 76 & - 88 \\ \end{bmatrix}\]
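The same result can be checked with NumPy's matrix product, the `@` operator (a sketch added for illustration); the sizes must be compatible, here \((2 \times 3)\) times \((3 \times 2)\), giving a \(2 \times 2\) product.

```python
import numpy as np

A = np.array([[5, 2, 3],
              [1, 2, 7]])
B = np.array([[3,  -2],
              [5,  -8],
              [9, -10]])

C = A @ B        # matrix product; number of columns of A equals number of rows of B
print(C)         # [[ 52 -56]
                 #  [ 76 -88]]
print(C.shape)   # (2, 2)
```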
Learning Objectives
After successful completion of this lesson, you should be able to:
1) Develop simultaneous linear equations model from a physical problem
2) Set up simultaneous linear equations in matrix form
Matrix algebra is used for solving systems of equations. Can you illustrate this concept?
Matrix algebra is used to solve a system of simultaneous linear equations. In fact, for many mathematical procedures such as the solution to a set of nonlinear equations, interpolation, integration, and differential equations, the solutions reduce to a set of simultaneous linear equations. Let us illustrate with an example for interpolation.
Example 1
The upward velocity of a rocket is given at three different times in the following table.
Table 1. Velocity vs. time data for a rocket
Time, \(t\) (\(\text{s}\)) | Velocity, \(v\) (\(\text{m/s}\)) |
---|---|
\(5\) | \(106.8\) |
\(8\) | \(177.2\) |
\(12\) | \(279.2\) |
The velocity data is approximated by a polynomial as
\[v\left( t \right) = at^{2} + {bt} + c,\ 5 \leq t \leq 12\;\;\;\;\;\;\;\;\;\;\;\; (E1.1)\]
Set up the equations in matrix form to find the coefficients \(a,b,c\) of the velocity profile.
Solution
The polynomial is going through three data points \(\left( t_{1},v_{1} \right),\left( t_{2},v_{2} \right), \text{ and} \left( t_{3},v_{3} \right)\) where from Table 1
\[t_{1} = 5,\ v_{1} = 106.8\]
\[t_{2} = 8,\ v_{2} = 177.2\]
\[t_{3} = 12,\ v_{3} = 279.2\]
Requiring that \(v\left( t \right) = at^{2} + {bt} + c\) passes through the three data points gives
\[\begin{split} v\left( t_{1} \right) &= v_{1} = at_{1}^{2} + bt_{1} + c\\ v\left( t_{2} \right) &= v_{2} = at_{2}^{2} + bt_{2} + c\\ v\left( t_{3} \right) &= v_{3} = at_{3}^{2} + bt_{3} + c \end{split}\]
Substituting the data \(\left( t_{1},v_{1} \right),\left( t_{2},v_{2} \right),\text{ and } \left( t_{3},v_{3} \right)\) gives
\[\begin{split} a\left( 5^{2} \right) + b\left( 5 \right) + c &= 106.8\\ a\left( 8^{2} \right) + b\left( 8 \right) + c &= 177.2\\ a\left( 12^{2} \right) + b\left( 12 \right) + c &= 279.2\end{split}\]
or
\[\begin{split} 25a + 5b + c &= 106.8\\ 64a + 8b + c &= 177.2\\ 144a + 12b + c &= 279.2 \end{split}\]
This set of equations can be rewritten in the matrix form as
\[\begin{bmatrix} 25a + & 5b + & c \\ 64a + & 8b + & c \\ 144a + & 12b + & c \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}\]
The above equation can be written as a linear combination as follows
\[a\begin{bmatrix} 25 \\ 64 \\ 144 \\ \end{bmatrix} + b\begin{bmatrix} 5 \\ 8 \\ 12 \\ \end{bmatrix} + c\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}\]
and further using matrix multiplication gives
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}\]
The above is an illustration of why matrix algebra is needed. The complete solution to the set of equations is given later in this chapter.
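As an illustrative sketch (not part of the original text; it assumes NumPy and uses the library solver `np.linalg.solve`, while the solution methods themselves are developed later in the chapter), the coefficient matrix and right-hand side vector can be assembled directly from the data in Table 1.

```python
import numpy as np

# Times and velocities from Table 1
t = np.array([5.0, 8.0, 12.0])
v = np.array([106.8, 177.2, 279.2])

# Each row of [A] is [t_i^2, t_i, 1], so that [A][a b c]^T equals the velocity vector
A = np.column_stack((t**2, t, np.ones_like(t)))
print(A)
# [[ 25.   5.   1.]
#  [ 64.   8.   1.]
#  [144.  12.   1.]]

a, b, c = np.linalg.solve(A, v)   # solves [A][X] = [C]
print(a, b, c)                    # approximately 0.2905, 19.69, 1.086
```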
A general set of \(m\) linear equations and \(n\) unknowns,
\[\begin{split} &a_{11}x_{1} + a_{12}x_{2} + \cdots + a_{1n}x_{n} = c_{1}\\ &a_{21}x_{1} + a_{22}x_{2} + \cdots + a_{2n}x_{n} = c_{2}\\ &\qquad\qquad\qquad\vdots\\ &a_{m1}x_{1} + a_{m2}x_{2} + \cdots + a_{\text{mn}}x_{n} = c_{m} \end{split}\]
can be rewritten in the matrix form as
\[\begin{bmatrix} a_{11} & a_{12} & . & . & a_{1n} \\ a_{21} & a_{22} & . & . & a_{2n} \\ \vdots & & & & \vdots \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & . & . & a_{\text{mn}} \\ \end{bmatrix}\begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ \vdots \\ x_{n} \\ \end{bmatrix} = \begin{bmatrix} c_{1} \\ c_{2} \\ \vdots \\ \vdots \\ c_{m} \\ \end{bmatrix}\]
Denoting the matrices by \(\left\lbrack A \right\rbrack\), \(\left\lbrack X \right\rbrack\), and \(\left\lbrack C \right\rbrack\), the system of equations is \(\left\lbrack A \right\rbrack\ \left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\), where \(\left\lbrack A \right\rbrack\) is called the coefficient matrix, \(\left\lbrack C \right\rbrack\) is called the right-hand side vector, and \(\left\lbrack X \right\rbrack\) is called the solution vector.
Sometimes the \(\left\lbrack A \right\rbrack\ \left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\) system of equations is written in the augmented form, that is,
\[[A\ \vdots\ C] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & \vdots & c_{1} \\ a_{21} & a_{22} & \cdots & a_{2n} & \vdots & c_{2} \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{\text{mn}} & \vdots & c_{m} \\ \end{bmatrix}\]
As an example, for the set of equations \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}\] the augmented matrix form is \[\begin{bmatrix} 25 & 5 & 1 & | & 106.8 \\ 64 & 8 & 1 & | & 177.2 \\ 144 & 12 & 1 & | & 279.2 \\ \end{bmatrix}\]
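As a small sketch (added for illustration, assuming NumPy), the augmented matrix is just the coefficient matrix with the right-hand side vector appended as an extra column.

```python
import numpy as np

A = np.array([[ 25,  5, 1],
              [ 64,  8, 1],
              [144, 12, 1]], dtype=float)
C = np.array([106.8, 177.2, 279.2])

aug = np.hstack((A, C.reshape(-1, 1)))   # the augmented matrix [A : C]
print(aug)
```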
Learning Objectives
After successful completion of this lesson, you should be able to:
1) Define the inverse of a matrix
2) Know important statements about the inverse of a matrix
3) Solve a set of equations where the inverse of the coefficient matrix is given
Can you divide two matrices?
If \(\lbrack A\rbrack\ \lbrack B\rbrack = \lbrack C\rbrack\) is defined, it might seem intuitive that \(\displaystyle \lbrack A\rbrack = \frac{\left\lbrack C \right\rbrack}{\left\lbrack B \right\rbrack}\), but matrix division is not defined like that. However, an inverse of a matrix can be defined for certain types of square matrices. The inverse of a square matrix \(\lbrack A\rbrack\), if it exists, is denoted by \(\lbrack A\rbrack^{- 1}\) such that
\[\lbrack A\rbrack\ \lbrack A\rbrack^{- 1} = \lbrack I\rbrack = \lbrack A\rbrack^{- 1}\lbrack A\rbrack\]
where \(\lbrack I\rbrack\) is the identity matrix.
In other words, let \(\lbrack A\rbrack\) be a square matrix. If \(\lbrack B\rbrack\) is another square matrix of the same size such that \(\lbrack B\rbrack\ \lbrack A\rbrack = \lbrack I\rbrack\), then \(\lbrack B\rbrack\) is the inverse of \(\lbrack A\rbrack\). \(\lbrack A\rbrack\) is then said to be invertible or nonsingular. If \(\lbrack A\rbrack^{- 1}\) does not exist, \(\lbrack A\rbrack\) is called noninvertible or singular.
If \(\lbrack A\rbrack\) and \(\lbrack B\rbrack\) are two \(n \times n\) matrices such that \(\lbrack B\rbrack\ \lbrack A\rbrack = \lbrack I\rbrack\), then the following statements are also true:
a) \([B]\) is the inverse of \([A]\)
b) \([A]\) is the inverse of \([B]\)
c) \([A]\) and \([B]\) are both invertible
d) \([A] [B]= [I]\)
e) \([A]\) and \([B]\) are both nonsingular
f) all columns of \([A]\) and \([B]\) are linearly independent
g) all rows of \([A]\) and \([B]\) are linearly independent
Example 1
Determine if
\[\lbrack B\rbrack = \begin{bmatrix} 3 & 2 \\ 5 & 3 \\ \end{bmatrix}\]
is the inverse of
\[\lbrack A\rbrack = \begin{bmatrix} - 3 & 2 \\ 5 & - 3 \\ \end{bmatrix}\]
Solution
\[\begin{split} \lbrack B\rbrack\lbrack A\rbrack &= \begin{bmatrix} 3 & 2 \\ 5 & 3 \\ \end{bmatrix}\begin{bmatrix} - 3 & 2 \\ 5 & - 3 \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} \\ &= \lbrack I\rbrack\end{split}\]
Since
\[\left\lbrack B \right\rbrack\left\lbrack A \right\rbrack = \left\lbrack I \right\rbrack,\]
\(\lbrack B\rbrack\) is the inverse of \(\lbrack A\rbrack\), and \(\lbrack A\rbrack\) is the inverse of \(\lbrack B\rbrack\).
But, we can also show that
\[\begin{split} \lbrack A\rbrack\lbrack B\rbrack &= \begin{bmatrix} - 3 & 2 \\ 5 & - 3 \\ \end{bmatrix}\begin{bmatrix} 3 & 2 \\ 5 & 3 \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix}\\ &= \lbrack I\rbrack \end{split}\]
to prove that \(\lbrack A\rbrack\) is the inverse of \(\lbrack B\rbrack\).
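A quick numerical check of Example 1 (an illustrative sketch, assuming NumPy): form both products and compare them with the identity matrix.

```python
import numpy as np

A = np.array([[-3,  2],
              [ 5, -3]])
B = np.array([[ 3,  2],
              [ 5,  3]])

print(B @ A)                          # [[1 0]
                                      #  [0 1]]
print(A @ B)                          # [[1 0]
                                      #  [0 1]]
print(np.allclose(B @ A, np.eye(2)))  # True, so [B] is the inverse of [A]
```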
Can I use the concept of the inverse of a matrix to find the solution of a set of equations [A][X] = [C]?
Yes, if the number of equations is the same as the number of unknowns, the coefficient matrix \(\lbrack A\rbrack\) is a square matrix.
Given
\[\lbrack A\rbrack\ \lbrack X\rbrack = \lbrack C\rbrack\]
Then, if \(\lbrack A\rbrack^{- 1}\) exists, multiplying both sides by \(\lbrack A\rbrack^{- 1}\) gives
\[\lbrack A\rbrack^{- 1}\lbrack A\rbrack\lbrack X\rbrack = \lbrack A\rbrack^{- 1}\lbrack C\rbrack\]
\[\lbrack I\rbrack\ \lbrack X\rbrack = \lbrack A\rbrack^{- 1}\lbrack C\rbrack\]
\[\lbrack X\rbrack = \lbrack A\rbrack^{- 1}\lbrack C\rbrack\]
This implies that if we are able to find \(\lbrack A\rbrack^{- 1}\), the solution vector of \(\lbrack A\rbrack\ \lbrack X\rbrack = \lbrack C\rbrack\) is simply a multiplication of \(\lbrack A\rbrack^{- 1}\) and the right-hand side vector, \(\lbrack C\rbrack\).
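Written as a short sketch (an illustration only, assuming NumPy), this reads as follows. In practice, calling a solver such as `np.linalg.solve` directly is usually preferred over forming the inverse explicitly, but the inverse-based form mirrors the algebra above.

```python
import numpy as np

def solve_via_inverse(A, C):
    """Solve [A][X] = [C] as [X] = [A]^(-1)[C]; assumes [A] is square and nonsingular."""
    A_inv = np.linalg.inv(A)   # raises LinAlgError if [A] is singular
    return A_inv @ C
```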
How do I find the inverse of a matrix?
If \(\lbrack A\rbrack\) is an \(n \times n\) matrix, then \(\lbrack A\rbrack^{- 1}\) is an \(n \times n\) matrix, and according to the definition of the inverse of a matrix,
\[\lbrack A\rbrack\ \lbrack A\rbrack^{- 1} = \lbrack I\rbrack\]
Denoting
\[\lbrack A\rbrack = \begin{bmatrix} a_{11} & a_{12} & \cdot & \cdot & a_{1n} \\ a_{21} & a_{22} & \cdot & \cdot & a_{2n} \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ a_{n1} & a_{n2} & \cdot & \cdot & a_{{nn}} \\ \end{bmatrix}\]
\[\lbrack A\rbrack^{- 1} = \begin{bmatrix} a_{11}^{\prime} & a_{12}^{\prime} & \cdot & \cdot & a_{1n}^{\prime} \\ a_{21}^{\prime} & a_{22}^{\prime} & \cdot & \cdot & a_{2n}^{\prime} \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ a_{n1}^{\prime} & a_{n2}^{\prime} & \cdot & \cdot & a_{\text{nn}}^{\prime} \\ \end{bmatrix}\]
\[\lbrack I\rbrack = \begin{bmatrix} 1 & 0 & \cdot & \cdot & 0 \\ 0 & 1 & \cdot & \cdot & 0 \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & \cdot & \cdot & 1 \\ \end{bmatrix}\]
Using the definition of matrix multiplication, the first column of the \(\lbrack A\rbrack^{- 1}\) matrix can then be found by solving
\[\begin{bmatrix} a_{11} & a_{12} & \cdot & \cdot & a_{1n} \\ a_{21} & a_{22} & \cdot & \cdot & a_{2n} \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ a_{n1} & a_{n2} & \cdot & \cdot & a_{\text{nn}} \\ \end{bmatrix}\begin{bmatrix} a_{11}^{\prime} \\ a_{21}^{\prime} \\ \cdot \\ \cdot \\ a_{n1}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \cdot \\ \cdot \\ 0 \\ \end{bmatrix}\]
Similarly, one can find the other columns of the \(\lbrack A\rbrack^{- 1}\) matrix by changing the right-hand side accordingly.
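The column-by-column idea can be written as a brief sketch (an illustrative addition, assuming NumPy): solve \(n\) systems whose right-hand sides are the columns of the identity matrix, and place each solution into the corresponding column of \(\lbrack A\rbrack^{- 1}\).

```python
import numpy as np

def inverse_by_columns(A):
    """Build [A]^(-1) one column at a time by solving [A] x = e_j for each unit vector e_j."""
    n = A.shape[0]
    A_inv = np.zeros((n, n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0                         # j-th column of the identity matrix
        A_inv[:, j] = np.linalg.solve(A, e_j)
    return A_inv
```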
Example 2
The upward velocity of the rocket is given by
Table 1. Velocity vs. time data for a rocket
Time, \(t\) (\(\text{s}\)) | Velocity, \(v\) (\(\text{m/s}\)) |
---|---|
\(5\) | \(106.8\) |
\(8\) | \(177.2\) |
\(12\) | \(279.2\) |
In an earlier example, we wanted to approximate the velocity profile by
\[v\left( t \right) = at^{2} + {bt} + c,\ 5 \leq t \leq 12\]
We found that the coefficients \(a,\ b,\text{ and }\ c\) in \(v\left( t \right)\) are given by solving
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a \\ b \\ c \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}\]
First, find the inverse of
\[\left\lbrack A \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\]
and then use the definition of the inverse to find the coefficients \(a,\ b,\ \text{and}\ c\), and the velocity profile.
Solution
If
\[\left\lbrack A \right\rbrack^{- 1} = \begin{bmatrix} a_{11}^{\prime} & a_{12}^{\prime} & a_{13}^{\prime} \\ a_{21}^{\prime} & a_{22}^{\prime} & a_{23}^{\prime} \\ a_{31}^{\prime} & a_{32}^{\prime} & a_{33}^{\prime} \\ \end{bmatrix}\]
is the inverse of \(\lbrack A\rbrack\), then
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{11}^{\prime} & a_{12}^{\prime} & a_{13}^{\prime} \\ a_{21}^{\prime} & a_{22}^{\prime} & a_{23}^{\prime} \\ a_{31}^{\prime} & a_{32}^{\prime} & a_{33}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}\]
gives three sets of equations
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{11}^{\prime} \\ a_{21}^{\prime} \\ a_{31}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix}\]
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{12}^{\prime} \\ a_{22}^{\prime} \\ a_{32}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix}\]
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{13}^{\prime} \\ a_{23}^{\prime} \\ a_{33}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ \end{bmatrix}\]
Solving the above three sets of equations separately gives
\[\begin{bmatrix} a_{11}^{\prime} \\ a_{21}^{\prime} \\ a_{31}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} 0.04762 \\ - 0.9524 \\ 4.571 \\ \end{bmatrix}\]
\[\begin{bmatrix} a_{12}^{\prime} \\ a_{22}^{\prime} \\ a_{32}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} - 0.08333 \\ 1.417 \\ - 5.000 \\ \end{bmatrix}\]
\[\begin{bmatrix} a_{13}^{\prime} \\ a_{23}^{\prime} \\ a_{33}^{\prime} \\ \end{bmatrix} = \begin{bmatrix} 0.03571 \\ - 0.4643 \\ 1.429 \\ \end{bmatrix}\]
Hence
\[\lbrack A\rbrack^{- 1} = \begin{bmatrix} 0.04762 & - 0.08333 & 0.03571 \\ - 0.9524 & 1.417 & - 0.4643 \\ 4.571 & - 5.000 & 1.429 \\ \end{bmatrix}\]
Now
\[\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\]
where
\[\left\lbrack X \right\rbrack = \begin{bmatrix} a \\ b \\ c \\ \end{bmatrix}\]
\[\left\lbrack C \right\rbrack = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}\]
Using the definition of \(\left\lbrack A \right\rbrack^{- 1},\)
\[\left\lbrack A \right\rbrack^{- 1}\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack A \right\rbrack^{- 1}\left\lbrack C \right\rbrack\]
\[\left\lbrack X \right\rbrack = \left\lbrack A \right\rbrack^{- 1}\left\lbrack C \right\rbrack\]
\[\begin{bmatrix} a \\ b \\ c \\ \end{bmatrix} =\begin{bmatrix} 0.04762 & - 0.08333 & 0.03571 \\ - 0.9524 & 1.417 & - 0.4643 \\ 4.571 & - 5.000 & 1.429 \\ \end{bmatrix}\begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix}\]
Conducting matrix multiplication of the right-hand side gives
\[\begin{bmatrix} a \\ b \\ c \\ \end{bmatrix} = \begin{bmatrix} 0.2905 \\ 19.69 \\ 1.086 \\ \end{bmatrix}\]
So
\[v\left( t \right) = 0.2905t^{2} + 19.69t + 1.086,\ 5 \leq t \leq 12\]
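As a final check (an illustrative sketch, assuming NumPy), recomputing the inverse numerically and multiplying it by the right-hand side vector reproduces the coefficients above, and the resulting polynomial can then be evaluated anywhere in the interval \(5 \leq t \leq 12\).

```python
import numpy as np

A = np.array([[ 25,  5, 1],
              [ 64,  8, 1],
              [144, 12, 1]], dtype=float)
C = np.array([106.8, 177.2, 279.2])

A_inv = np.linalg.inv(A)        # matches the inverse computed above (to rounding)
a, b, c = A_inv @ C
print(np.round([a, b, c], 4))   # approximately [0.2905, 19.69, 1.086]

def v(t):
    """Velocity profile v(t) = a*t**2 + b*t + c, valid for 5 <= t <= 12 s."""
    return a * t**2 + b * t + c

print(v(6.0))                   # velocity at t = 6 s
```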
Multiple Choice Test
(1). Given
\[[A] =\begin{bmatrix} 6 & 2 & 3 & 9 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 4 & 5 \\ 0 & 0 & 0 & 6 \\ \end{bmatrix}\]
then \([A]\) is a ______________ matrix.
(A) diagonal
(B) identity
(C) lower triangular
(D) upper triangular
(2). A square matrix \([A]\) is lower triangular if
(A) \(a_{{ij}} = 0,j > i\)
(B) \(a_{{ij}} = 0,i > j\)
(C) \(a_{{ij}} \neq 0,i > j\)
(D) \(a_{{ij}} \neq 0,j > i\)
(3). Given
\[\lbrack A\rbrack = \begin{bmatrix} 12.3 & - 12.3 & 20.3 \\ 11.3 & - 10.3 & - 11.3 \\ 10.3 & - 11.3 & - 12.3 \\ \end{bmatrix},\ \lbrack B\rbrack = \begin{bmatrix} 2 & 4 \\ - 5 & 6 \\ 11 & - 20 \\ \end{bmatrix}\]
then if
\([C] = [A] [B]\), then
\(c_{31}=\) _____________________
(A) \(-58.2\)
(B) \(-37.6\)
(C) \(219.4\)
(D) \(259.4\)
(4). The following system of equations has ____________ solution(s).
\[x + y = 2\]
\[6x+6y =12\]
(A) infinite
(B) no
(C) two
(D) unique
(5). Consider there are only two computer companies in a country. The companies are named Dude and Imac. Each year, company Dude keeps 1/5th of its customers, while the rest switch to Imac. Each year, Imac keeps 1/3rd of its customers, while the rest switch to Dude. If in 2003, Dude had 1/6th of the market and Imac had 5/6th of the market, what will be the share of Dude computers when the market becomes stable?
(A) \(37/90\)
(B) \(5/11\)
(C) \(6/11\)
(D) \(53/90\)
(6). Three kids, Jim, Corey, and David, receive an inheritance of \(\$2,253,453\). The money is put in three trusts but is not divided equally to begin with. Corey’s trust is three times that of David’s because Corey made an A in Dr. Kaw’s class. Each trust is put in an interest-generating investment. The three trusts of Jim, Corey, and David pay interest of \(6\%\), \(8\%\), and \(11\%\), respectively. The total interest of all three trusts combined at the end of the first year is \(\$190,740.57\). The equations to find the trust money of Jim (\(J\)), Corey (\(C\)), and David (\(D\)) in matrix form are
(A) \(\begin{bmatrix} 1 & 1 & 1 \\ 0 & 3 & - 1 \\ 0.06 & 0.08 & 0.11 \\ \end{bmatrix}\begin{bmatrix} J \\ C \\ D \\ \end{bmatrix} = \begin{bmatrix} 2,253,453 \\ 0 \\ 190,740.57 \\ \end{bmatrix}\)
(B) \(\begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & - 3 \\ 0.06 & 0.08 & 0.11 \\ \end{bmatrix}\begin{bmatrix} J \\ C \\ D \\\end{bmatrix} = \begin{bmatrix} 2,253,453 \\ 0 \\ 190,740.57 \\ \end{bmatrix}\)
(C) \(\begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & - 3 \\ 6 & 8 & 11 \\ \end{bmatrix}\begin{bmatrix} J \\ C \\ D \\ \end{bmatrix} = \begin{bmatrix} 2,253,453 \\ 0 \\ 190,740.57 \\ \end{bmatrix}\)
(D) \(\begin{bmatrix} 1 & 1 & 1 \\ 0 & 3 & - 1 \\ 6 & 8 & 11 \\ \end{bmatrix}\begin{bmatrix} J \\ C \\ D \\ \end{bmatrix} = \begin{bmatrix} 2,253,453 \\ 0 \\ 19,074,057 \\ \end{bmatrix}\)
For the complete solution, go to
http://nm.mathforcollege.com/mcquizzes/04sle/quiz_04sle_background_solution.pdf