CHAPTER 05.01: BACKGROUND OF INTERPOLATION: Uniqueness of Polynomial Interpolant: Part 2 of 2
In this segment, we're going to continue our discussion of the uniqueness of the interpolating polynomial. We left off at this point: I want to show that if R2(x), a second-order polynomial, has three zeros, then R2(x) is identically equal to zero. So let's go ahead and see how that can be proven. Suppose R2(x) is a second-order polynomial, R2(x) = a0 + a1 x + a2 x^2, and suppose it has three zeros, at x0, x1, and x2. So R2(x0) = 0, R2(x1) = 0, and R2(x2) = 0; that's what we showed in the last segment. If I write these down as equations, that means

a0 + a1 x0 + a2 x0^2 = 0
a0 + a1 x1 + a2 x1^2 = 0
a0 + a1 x2 + a2 x2^2 = 0

So because we have three zeros of R2, at x0, x1, and x2, we can set up three equations in the three unknowns a0, a1, and a2. Let's go ahead and write this in matrix form, because that will make it a little clearer what we're looking for and how we can show that R2(x) is identically equal to zero. The vector of unknowns is [a0, a1, a2]^T, the coefficient matrix is

[1  x0  x0^2]
[1  x1  x1^2]
[1  x2  x2^2]

and the right-hand-side vector is [0, 0, 0]^T. You can see right away that a trivial solution of this system is a0 = a1 = a2 = 0.
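To make the matrix form concrete, here's a small sketch in Python using NumPy. The nodes 3, 5, 9 are just illustrative values (any three distinct x values work); solving the system with a zero right-hand side returns the trivial solution.

```python
import numpy as np

# Illustrative nodes; any three distinct x values would do.
x0, x1, x2 = 3.0, 5.0, 9.0

# Coefficient matrix of the system R2(xi) = 0 for i = 0, 1, 2.
A = np.array([
    [1.0, x0, x0**2],
    [1.0, x1, x1**2],
    [1.0, x2, x2**2],
])

b = np.zeros(3)  # right-hand side: all three residuals are zero

# For a zero right-hand side, the solver returns a0 = a1 = a2 = 0.
a = np.linalg.solve(A, b)
print(a)  # [0. 0. 0.]
```

Of course, getting the zero vector from the solver only exhibits one solution; the argument that follows is about showing it is the only one.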
So the zero vector is a solution, because if I put a0 = a1 = a2 = 0, I get 0 = 0 in each equation. But this does not yet prove that R2(x) is identically equal to zero, because I have to show that this trivial solution is the only solution. The way to show that is to show that the coefficient matrix is invertible, or what they call nonsingular. Only if I'm able to show that can I say that the trivial solution is the only solution: if the coefficient matrix is invertible, the solution is unique, and since we have already found one solution, that must be the only one. So how do I go about doing that? By finding the determinant of this matrix, and I'm going to leave the calculation as homework, with a few hints in a little bit. The determinant of

[1  x0  x0^2]
[1  x1  x1^2]
[1  x2  x2^2]

turns out to be (x1 - x0)(x2 - x0)(x2 - x1). Now, since you cannot have two identical x values when you're doing interpolation, we know x1 is not the same as x0, x2 is not the same as x0, and x2 is not the same as x1, so the determinant is not equal to zero. And when the determinant of a coefficient matrix is not equal to zero, that automatically proves that the coefficient matrix is nonsingular, or invertible. So what that means is that a0 = a1 = a2 = 0 is the only solution, and hence we have shown that R2(x) is identically equal to zero.
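As a numerical check of the homework result, here's a quick sketch comparing NumPy's determinant of the coefficient matrix against the product formula, again using the illustrative nodes 3, 5, 9:

```python
import numpy as np

# Illustrative distinct nodes.
x0, x1, x2 = 3.0, 5.0, 9.0

A = np.array([
    [1.0, x0, x0**2],
    [1.0, x1, x1**2],
    [1.0, x2, x2**2],
])

det_numeric = np.linalg.det(A)
det_formula = (x1 - x0) * (x2 - x0) * (x2 - x1)

# For these nodes: (5-3)(9-3)(9-5) = 2 * 6 * 4 = 48.
print(det_numeric, det_formula)
```

Since the nodes are distinct, each factor is nonzero, so the determinant is nonzero and the matrix is nonsingular.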
Now, somebody might say, hey, can you show me how to go about finding that the determinant equals this quantity? Here are some hints. You follow the forward elimination steps of Naive Gauss elimination, and what do you have at the end of forward elimination? You get an upper triangular matrix. And once you have an upper triangular matrix, you know that its determinant is the same as the product of the diagonal elements. So basically you're using two theorems about determinants. Theorem 1 says that if a multiple of one row is added to or subtracted from another row of an n-by-n matrix, its determinant does not change. That's what each forward elimination step does, subtracting a multiple of one row from another, so the determinant of the original matrix does not change as you go through the forward elimination process of the Naive Gauss method. Theorem 2 says that for an upper triangular n-by-n matrix A, the determinant of A is a11 times a22, all the way up to ann, which is simply the product of the diagonal elements, and that's what you have at the end of forward elimination. Apply those two theorems, and you will see that the determinant of the matrix turns out to be what I showed you on the previous board.
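The two theorems above can be sketched together in a few lines of Python. This is a minimal illustration of the homework hint, not a production determinant routine (it's the naive method, so it assumes no zero pivots arise):

```python
import numpy as np

def det_by_forward_elimination(A):
    """Determinant via Naive Gauss forward elimination.

    Theorem 1: row operations R_j <- R_j - m * R_i leave the
    determinant unchanged.
    Theorem 2: the determinant of the resulting upper triangular
    matrix is the product of its diagonal elements.
    """
    U = A.astype(float).copy()
    n = U.shape[0]
    for i in range(n - 1):
        for j in range(i + 1, n):
            m = U[j, i] / U[i, i]  # naive method: assumes nonzero pivot
            U[j, i:] -= m * U[i, i:]
    return float(np.prod(np.diag(U)))

# Illustrative nodes again.
x0, x1, x2 = 3.0, 5.0, 9.0
A = np.array([[1, x0, x0**2], [1, x1, x1**2], [1, x2, x2**2]])
print(det_by_forward_elimination(A))       # matches the product formula
print((x1 - x0) * (x2 - x0) * (x2 - x1))
```

Carrying out the same elimination symbolically is exactly the homework: the diagonal of the upper triangular matrix comes out as 1, (x1 - x0), and (x2 - x0)(x2 - x1)/(x1 - x0) scaled so the product is (x1 - x0)(x2 - x0)(x2 - x1).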
Now, one more thing I want to mention here, because it's a confusion, or myth, that some of my students develop. Suppose somebody gives you three data points, (3, 40), (5, 82), and (9, 214), and says, hey, go ahead and find the second-order polynomial that goes through those three data points. By the direct method, you'll get

y = 2 x^2 + 5 x + 7

If you use Lagrange interpolation, you will get

y = (10/3)(x - 5)(x - 9) - (41/4)(x - 3)(x - 9) + (107/12)(x - 3)(x - 5)

And if you use Newton's divided difference polynomial method, this is what you're going to get:

y = 40 + 21(x - 3) + 2(x - 3)(x - 5)

So for the same three data points you're getting one expression from the direct method, another from the Lagrange method, and another from Newton's divided difference method. That might give the false impression that you're getting three different polynomials from these three different methods, but that's not the case. This polynomial is exactly the same as that polynomial, which is exactly the same as the third; the only difference is the form in which the polynomials are written. So that's something you do need to understand: if somebody gives you three data points and you find the second-order polynomial that goes through them, the polynomial you get from any polynomial interpolation method of order 2 or less, whether the direct method, the Lagrange method, or Newton's divided difference polynomial method, is exactly the same.
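You can convince yourself of this numerically. Here's a small sketch that codes the three forms from the board and evaluates them on a grid; they agree to machine precision, because the second-order interpolant through three points with distinct x values is unique:

```python
import numpy as np

# Three forms of the interpolant through (3, 40), (5, 82), (9, 214).

def direct(x):
    # Direct method: standard-form coefficients.
    return 2 * x**2 + 5 * x + 7

def lagrange(x):
    # Lagrange form (function values folded into the coefficients).
    return ((10 / 3) * (x - 5) * (x - 9)
            - (41 / 4) * (x - 3) * (x - 9)
            + (107 / 12) * (x - 3) * (x - 5))

def newton(x):
    # Newton's divided difference form.
    return 40 + 21 * (x - 3) + 2 * (x - 3) * (x - 5)

xs = np.linspace(0.0, 10.0, 11)
print(np.allclose(direct(xs), lagrange(xs)))  # True
print(np.allclose(direct(xs), newton(xs)))    # True
```

Same polynomial, three different forms; expanding the Lagrange or Newton form by hand gives back 2x^2 + 5x + 7.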
And that's the end of this segment.