CHAPTER 04.06: GAUSSIAN ELIMINATION: Naive Gaussian Elimination: Theory: Part 1 of 2

In this segment, we'll talk about how the Naive Gaussian method works, and about the background of the method so far as the steps involved in it are concerned, so let's go ahead and do that.

Now, the Naive Gaussian method is basically a way to solve simultaneous linear equations which can be written in the form A X = C, where A is the coefficient matrix, C is the right-hand side vector, and X is called the solution vector, or sometimes the unknown vector. The Naive Gaussian method has two steps: forward elimination, followed by back substitution.

What you do in forward elimination is take the equations and reduce them so that the coefficient matrix becomes upper triangular. In back substitution, you then solve those equations starting from the last equation, and that's why it's called back substitution; we'll get familiar with what we mean by that once we start talking about the individual steps, the forward elimination step and the back substitution step. So the goal of forward elimination is to transform the coefficient matrix into an upper triangular matrix.

I'm going to show you an example first, because that's a better way of teaching this. Let's suppose I have three equations in three unknowns, like this one, and what I'm going to do is manipulate these equations by doing some addition, subtraction, multiplication, and division, and what I'm going to end up with is a set of equations like this one right here. The solution to this new set of equations is the same as the solution to the original set, because it has been found by simply doing some manipulations of addition,
multiplication, division, and subtraction, which are allowed row operations, so that the solution stays the same. What you are basically seeing is that the coefficient matrix turns into an upper triangular matrix, which is evident from the 0s you get below the diagonal, and that allows you to use the back substitution steps I was talking about: one equation at a time, one unknown at a time.

So if you look at this one here, by converting the system into one with an upper triangular coefficient matrix, you are able to solve one equation with one unknown at a time. If you look at the last equation, x3 is the only unknown. If you look at the equation above it, you might think, hey, there are two unknowns, x2 and x3, but you just found x3, so there's only one unknown left, which is x2. Then you go to the third-from-last equation, which in this case is the first equation, and here you might say, hey, in that equation I have three unknowns, x1, x2, and x3, but you just found x2 and you just found x3, so there's only one unknown left in that equation, x1. So the reason you go from the coefficient matrix which is given to you to a coefficient matrix which is upper triangular is mainly to be able to solve one equation with one unknown at a time by using the steps of back substitution.

So now let's go ahead and look at the algorithm for forward elimination. These are the equations which are given to you. Let's suppose somebody gives you n equations in n unknowns; the unknowns are x1, x2, x3, and so on up to xn, the a's which you have here are the elements of the coefficient matrix, and b1 and so on are the elements of the right-hand side vector.
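The one-equation, one-unknown-at-a-time idea behind back substitution can be sketched in Python; the upper triangular system below uses made-up numbers purely for illustration, not the values on the slide.

```python
import numpy as np

def back_substitute(U, d):
    """Solve U x = d for upper triangular U, one equation and one
    unknown at a time, starting from the last equation."""
    n = len(d)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # every unknown below row i is already known, so subtract its
        # contribution and divide by the diagonal coefficient
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# a made-up upper triangular system
U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
d = np.array([3.0, 7.0, 8.0])
x = back_substitute(U, d)   # the last equation gives x3 = 8/4 first
```

Notice the loop runs backward: row i has only the single unknown x[i] left by the time the loop reaches it, which is exactly the point of making the matrix upper triangular.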
And one thing you've got to understand: there are n-minus-1 steps of forward elimination, because each step of forward elimination makes certain elements 0 so as to convert the coefficient matrix into an upper triangular matrix. What do we mean by that? In the first step of forward elimination, I want to make this element 0, and all of the elements in the first column, in the second row and below, I want to make 0. In order to make them 0, I can take the first row, divide it by this element, a11, and multiply it by a21, let's suppose, and what that's going to do is give me a21 as the coefficient of x1, and then I can subtract the result from the second equation to make this element 0, and I'll continue the same process down to the last row.

So let me show you on the next slide. I'm taking the first row, and I'm going to multiply it by a21 divided by a11. How do I come up with this multiplier? It's because I know this coefficient is a11; if I divide by a11, I'll get 1, and if I then multiply by a21, the coefficient of x1 will turn out to be a21, and now if I subtract this particular equation from the second equation, the coefficient of x1 will become 0. So let's look at that. This is what's going to happen:
This is the second equation which I have, and this is the equation which I get by multiplying the first row by a21 divided by a11. You can see that the coefficient of x1 in one and the coefficient of x1 in the other are the same, so when I subtract, I don't have any x1 term left; the coefficient of x1 is 0. The other terms, of course, change, and hence I get the second equation to be something like this. I use this prime notation here to tell you that, hey, this coefficient, this element, has changed, but the main thing to understand is that there is no x1 term in the second equation anymore. Now I can do the same thing for all of the elements in the first column below the first row. I just made this one 0; I'll make this one 0 x1 plus, and this one 0 x1 plus, and in the last equation also I'll have 0 x1 plus, so all the elements of the first column, except for the one in the first row, have been made 0. That is the end of the first step of forward elimination, so it's very important to understand: if you have n equations, you'll have n-minus-1 steps of forward elimination to conduct. If you have, let's suppose, five equations, you'll have four steps to conduct; in the example which I showed you there were three equations and three unknowns, so I had to conduct two steps of forward elimination.

So let's go ahead and see what I have to do next. What I'll have to do next is take this as my pivot element, and make this element 0, and all of the elements which are below it 0 as well. What that means is that if I want to make this one 0, I'll have to divide the second row by a22-prime and multiply it by a32-prime. So I'll take this equation, and I'll multiply it by this . . .
multiplier, and then when I subtract it from this particular equation, I'll get a 0 there, and you're going to use the same methodology to make every element below the second row in the second column 0. This is what you're going to end up with: if you look at this particular portion of the equations right here, they are all going to become 0 by following the procedure which I talked about, so you don't see any x2 terms here; the coefficients are all 0. That's the end of step two, and you can see very well how you have to conduct one step for each column.

So the first step of forward elimination corresponded to the first column, the second step of forward elimination made the second column below the second row 0, and the third step of forward elimination will make everything below the third row in the third column 0, so you can see algorithmically how this can be written in a loop to conduct these forward elimination steps. As we said, there are going to be n-minus-1 steps of forward elimination, and this is what I am going to obtain at the end of the forward elimination steps: you can see that I get 0s all the way right here, 0s all the way right there, and 0s all the way here, and if I write this in matrix form, the coefficient matrix turns out to be an upper triangular matrix. So that's what I'm showing you here: we are writing down the equations which we get at the end of all the n-minus-1 forward elimination steps in matrix form, and you are getting an upper triangular matrix right here, because all of these elements are 0.
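The loop structure just described, where step k of forward elimination zeroes out column k below the pivot row, can be sketched as follows. "Naive" means no row swapping, so the sketch assumes no diagonal element ever becomes 0; the 3-by-3 coefficients are made up for illustration.

```python
import numpy as np

def forward_eliminate(A, b):
    """n-1 steps of naive forward elimination: step k zeroes column k
    below the pivot row using the multipliers a_ik / a_kk.
    No pivoting, so every diagonal element is assumed nonzero."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):              # the n-1 steps
        for i in range(k + 1, n):       # rows below the pivot row
            m = A[i, k] / A[k, k]       # multiplier, e.g. a21/a11
            A[i, k:] -= m * A[k, k:]    # entry (i, k) becomes 0
            b[i]     -= m * b[k]        # same operation on the rhs
    return A, b

A = np.array([[25.0,  5.0, 1.0],        # made-up 3x3 system
              [64.0,  8.0, 1.0],
              [144.0, 12.0, 1.0]])
b = np.array([106.8, 177.2, 279.2])
U, d = forward_eliminate(A, b)          # U is now upper triangular
```

With n = 3, the outer loop runs twice, matching the two steps of forward elimination in the three-equation example; and because only allowed row operations are used, the reduced system U x = d has the same solution as A x = b.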
Now, pay special attention to these primes which I am showing you right here; they show you how many times that element of a has changed. This element changes in the first step of forward elimination, and then it changes again in the second step, so that's why you see a double-prime right here, telling you that that element has changed twice; it's just to show you that it's not the same element which you had in the previous step. And you can very well see that the element in the last row and last column will change n-minus-1 times. The same goes for the right-hand side: this right-hand side element changes once, this right-hand side element changes twice, and this last one changes n-minus-1 times, because the last row goes through all n-minus-1 steps of forward elimination.
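Putting the two phases together, here is a self-contained sketch of the whole Naive Gaussian method: n-minus-1 forward elimination steps followed by back substitution, with no pivoting. The test system is made up so that the answer is easy to check by hand.

```python
import numpy as np

def naive_gauss_solve(A, C):
    """Solve A X = C by naive Gaussian elimination: forward
    elimination to upper triangular form, then back substitution.
    No pivoting, so a 0 on the diagonal at any step would break it."""
    A, C = A.astype(float).copy(), C.astype(float).copy()
    n = len(C)
    # forward elimination: n-1 steps, one per column
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            C[i]     -= m * C[k]
    # back substitution: last equation first, one unknown at a time
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):
        X[i] = (C[i] - A[i, i + 1:] @ X[i + 1:]) / A[i, i]
    return X

A = np.array([[4.0, 1.0, -1.0],   # made-up system; solution is all 1s
              [1.0, 3.0,  1.0],
              [2.0, 1.0,  5.0]])
C = np.array([4.0, 5.0, 8.0])
X = naive_gauss_solve(A, C)       # X is approximately [1, 1, 1]
```

The bookkeeping the primes describe is visible here: entry (i, j) of A is overwritten once per elimination step that touches it, so the last-row, last-column entry is rewritten in every one of the n-minus-1 steps.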