CHAPTER 04.06: GAUSSIAN ELIMINATION: Pitfalls of Naive Gauss Elimination Method
In this segment, we're going to talk about the pitfalls of the Naive Gauss elimination method, which is a method to solve simultaneous linear equations. Let's look at the first pitfall, which is division by zero. Suppose somebody gives you three equations in three unknowns, and when you put them in matrix form, the very first element of the coefficient matrix is a 0. At the first step of forward elimination, you would choose that 0 as your pivot element, and you would want to make the 6 below it zero and the 5 below that zero. To calculate the multiplier that makes the 6 become zero, you would have to take 6 and divide by 0; the same thing happens for the 5, where the multiplier is 5 divided by 0. You can very well see that this is a big problem, because you get division by zero in both multipliers. So that's one of the pitfalls of the Naive Gauss elimination method.

Let's look at another example. Again we have three equations in three unknowns, written in matrix form, and here people might say, "I don't see any division-by-zero problem, because at the first step of forward elimination the pivot is 12, and the elements below it are 6 and 5, so the multipliers are 6 divided by 12 and 5 divided by 12." But one of the things you have to understand is that division by zero is not only an issue when you start forward elimination; it has to be checked at the start of every step of forward elimination.
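The slide itself isn't reproduced in the transcript, so the matrix below is a hypothetical stand-in: only its first column, [0, 6, 5], comes from the numbers quoted above, and the remaining entries are made up for illustration. A minimal sketch of what goes wrong when naive forward elimination meets a zero pivot:

```python
# Hypothetical system whose first pivot is zero; only the first
# column [0, 6, 5] is taken from the lecture -- the rest is made up.
A = [[0.0, 2.0, 3.0],
     [6.0, 1.0, 2.0],
     [5.0, 4.0, 1.0]]

pivot_failure = None
try:
    # First step of naive forward elimination: the multiplier for
    # row 2 is a21 / a11 = 6 / 0 -- division by zero.
    m21 = A[1][0] / A[0][0]
except ZeroDivisionError as exc:
    pivot_failure = str(exc)

print("multiplier 6/0 failed:", pivot_failure)
```

Naive Gauss elimination has no mechanism to recover here; the method simply breaks at its very first multiplier.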
So here is what you get at the end of the first step of forward elimination: you start with the equations given to you, and you end up with a coefficient matrix in which the second pivot element has become zero. Now, at the second step of forward elimination, you would choose that zero as the pivot element, and to make the element below it zero you would have a multiplier of -21 divided by 0, so you can very well see that there, also, you have a division-by-zero problem. It's a common myth among students that division by zero only has to be looked at when you first start the forward elimination steps. Division by zero has to be looked at as a possibility, and a pitfall, at the beginning of every step of forward elimination, and that's something you have to be careful about when you understand what this pitfall is.

Now let's look at the second pitfall. You can already see that even if you don't have division by zero, division by a small number is also going to create problems, and that brings us to pitfall number two: large round-off errors. Here we are given a coefficient matrix and a right-hand-side vector, and the exact solution is x1 = 1, x2 = 1, x3 = 1: if you plug those values into the equations, you get exactly the right-hand side, so that is the exact solution, if nothing else. Let's go ahead and see what happens if we conduct forward elimination,
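This second system is also not fully shown in the transcript; the matrix below is an assumption chosen to be consistent with the multipliers 6/12 and 5/12 quoted above. A short sketch showing that the zero pivot only appears after the first elimination step, not before it:

```python
# Assumed system: the first pivot, 12, looks fine, but the second
# pivot becomes zero after the first elimination step.
A = [[12.0, 10.0, -7.0],
     [ 6.0,  5.0,  3.0],
     [ 5.0, -1.0,  5.0]]

# Step 1 of forward elimination: eliminate column 0 from rows 1 and 2.
for i in (1, 2):
    m = A[i][0] / A[0][0]        # multipliers 6/12 and 5/12 -- no problem yet
    for j in range(3):
        A[i][j] -= m * A[0][j]

second_pivot = A[1][1]           # 5 - (6/12)*10 = 0
print("pivot for step 2:", second_pivot)
# Step 2 would now need A[2][1] / A[1][1] -- division by zero.
```

This is exactly the point of the lecture: checking the pivots of the original matrix is not enough, because a zero pivot can be created by the elimination itself.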
that is, Naive Gauss elimination, on a six-significant-digit computer with chopping. What we are saying is that all calculations, intermediate as well as final, carry six significant digits, and "chopping" means that any digits beyond the sixth are simply dropped: you do not use the seventh significant digit to round off any of the numbers. The solution you get this way is very close to the exact solution of 1, 1, 1, so we don't need to be worried about that. But now suppose we use Naive Gauss elimination with five significant digits with chopping, which is still a pretty reasonable computer to have: anything after the fifth significant digit is ignored, and all calculations, intermediate as well as final, use five significant digits with chopping. Now you get a different solution. The exact solution is still 1, 1, 1, but you get a number very close to 1 for x3, you get 1.5 for x2, and for x1 you get 0.625, which is quite far away, about 37 percent away from the exact value. So we have two pitfalls in the Naive Gauss elimination method, and the question arises: is there a way to reduce the round-off error and to avoid the division by zero? What are the possibilities? What we can do, in order to avoid the pitfalls, is this:
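The coefficient matrix for this experiment isn't shown in the transcript either; the one below is an assumption chosen to be consistent with the quoted results (exact solution [1, 1, 1], and a badly degraded five-digit chopped solution with x2 = 1.5 and x1 = 0.625). A sketch that simulates chopping to a fixed number of significant digits inside naive Gauss elimination:

```python
import math

def chop(x, digits):
    """Truncate x to the given number of significant digits (no
    rounding).  digits=None means full double precision."""
    if digits is None or x == 0.0:
        return x
    scale = 10.0 ** (digits - 1 - math.floor(math.log10(abs(x))))
    return math.trunc(x * scale) / scale

def naive_gauss(A, b, digits=None):
    """Naive Gauss elimination, chopping every intermediate result."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):                      # forward elimination
        for i in range(k + 1, n):
            m = chop(A[i][k] / A[k][k], digits)
            for j in range(k, n):
                A[i][j] = chop(A[i][j] - chop(m * A[k][j], digits), digits)
            b[i] = chop(b[i] - chop(m * b[k], digits), digits)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # back substitution
        s = b[i]
        for j in range(i + 1, n):
            s = chop(s - chop(A[i][j] * x[j], digits), digits)
        x[i] = chop(s / A[i][i], digits)
    return x

# Assumed example: the exact solution is [1, 1, 1].
A = [[20.0, 15.0, 10.0],
     [-3.0, -2.249, 7.0],
     [ 5.0,  1.0,  3.0]]
b = [45.0, 1.751, 9.0]

x_full = naive_gauss(A, b)            # full double precision
x_five = naive_gauss(A, b, digits=5)  # five significant digits, chopped
err_full = max(abs(v - 1.0) for v in x_full)
err_five = max(abs(v - 1.0) for v in x_five)
print("full precision error:", err_full)
print("5-digit chopped error:", err_five)
```

The culprit is the tiny second pivot (0.001 after the first step): dividing by it turns a last-digit chopping error into an error of order one in the solution, even though nothing was ever divided by an exact zero.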
we can increase the number of significant digits. If you are using single precision, you can go to double precision, or even quad precision. Increasing the precision will decrease the round-off error, but it cannot avoid division by zero: whether you are using single, double, or quad precision, if you are dividing by a zero, the extra precision is not going to help you at all. So that addresses only one of the pitfalls: if you want to reduce the round-off error, you can increase the number of significant digits by going from single to double to quad precision.

The other way of avoiding the pitfalls is Gauss elimination with partial pivoting, which is a separate method based on the Naive Gauss method. What it does is avoid division by zero: if the solution is unique, partial pivoting will avoid division by zero no matter what, and the method itself also reduces the round-off error. So that seems to be a good antidote to the pitfalls of the Naive Gauss method. Coupling Gauss elimination with partial pivoting with an increased number of significant digits, by going to double or quad precision, lets you decrease the penalty you would otherwise pay for the pitfalls, whether division by zero or large round-off error. And that's the end of this segment.
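As a sketch of the remedy, here is Gauss elimination with partial pivoting applied to the same assumed system from earlier whose second pivot vanishes under naive elimination (first column [12, 6, 5]); the row swap replaces the zero pivot and the method recovers the exact solution:

```python
def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting: before each elimination
    step, swap in the row whose pivot-column entry is largest in
    absolute value."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: largest |entry| in column k among rows k..n-1.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # back substitution
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

# Assumed system that defeats naive elimination (zero second pivot);
# its exact solution is [1, 1, 1].
A = [[12.0, 10.0, -7.0],
     [ 6.0,  5.0,  3.0],
     [ 5.0, -1.0,  5.0]]
b = [15.0, 14.0, 9.0]
x = gauss_partial_pivot(A, b)
print(x)
```

For a nonsingular matrix, some entry in the pivot column is always nonzero, so the swap guarantees a nonzero pivot at every step; choosing the largest entry, rather than just any nonzero one, also keeps the multipliers at most 1 in magnitude, which is what tames the round-off growth.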