CHAPTER 06.04: NONLINEAR REGRESSION: Power Model: Transformed Data: Derivation
In this segment, we're going to talk about power model regression, and again the derivation will be through transformed data. So we're going to take certain data points and regress them to a power model, but the derivation will be through transformed data. What we mean by transformed data will become evident as we go through the process. So let's have the problem statement first: you are given the data points (x1, y1), all the way up to (xn, yn), and you want to best fit y = a x^b to the data. So let's suppose somebody gives you data, y versus x, and they want you to regress it to a power model. So you've got y = a x^b, a nonlinear model best fitting those data points. What we want to be able to do is find the constants of this model, and the two constants of the model are a and b.
But what we're going to do is find them by transforming the data, and that's what we meant when we said the derivation would be based on transformed data. Here is why. If we were to start from the sum of the square of the residuals like we have been, Sr = sum from i = 1 to n of (yi - a xi^b)^2, where yi is the observed value and a xi^b is the predicted value, then to find the minimum of Sr we would take the derivative of Sr with respect to a and with respect to b and put those equal to zero, and we would get two simultaneous nonlinear equations. It is to avoid solving simultaneous nonlinear equations that we transform the data. So let's go ahead and see how the data is transformed. You have y = a x^b as your model. What we can do is take the natural log of both sides: ln y = ln(a x^b), and that's ln a plus ln(x^b), which gives ln y = ln a + b ln x. This is all coming from the formulas ln(uv) = ln u + ln v and ln(u^v) = v ln u. Now let's suppose I call ln y as z, ln a as a0, b as a1, and ln x as w. If I make those substitutions, I get z = a0 + a1 w, and what you are seeing is that z versus w is linear now.
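The transformation described above can be summarized compactly:

```latex
y = a x^{b}
\;\Rightarrow\; \ln y = \ln a + b \ln x
\;\Rightarrow\; z = a_0 + a_1 w,
\quad \text{where } z = \ln y,\; w = \ln x,\; a_0 = \ln a,\; a_1 = b.
```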
z versus w is linear, because the relationship between z and w is a straight line, with a0 being the intercept and a1 being the slope. Since z versus w is linear, we can use the relationships we derived for our linear regression formulas to calculate a0 and a1, and then backtrack to calculate a and b from there, because a0 is nothing but ln a, which gives you a = e^(a0), and b is nothing but a1. So once you have found a0 and a1 from the linear regression formulas, you can calculate a by just taking the exponential of a0, and then b = a1, and you will be able to find out what the constants of the original model are. But first you have to transform the data, because the zi values will be ln(yi). So you have to take the natural log of all the y values in order to generate the z values. That's why we call it transforming the data. Same thing for the wi values, which are the individual w values you will need: wi = ln(xi). So that's the transformation that is needed: the original y versus x data has to be transformed into z versus w data by taking the natural log of the y values and the natural log of the x values, because that is the data you need to calculate a0 and a1 from the linear regression formulas for z versus w, which are as follows: a1 = [n (sum of zi wi) - (sum of zi)(sum of wi)] / [n (sum of wi^2) - (sum of wi)^2], with all summations running from i = 1 to n.
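The a1 formula above can be coded directly from the summations. Here is a minimal sketch; the (x, y) data values are made up for illustration and are not from the lecture:

```python
import math

# Hypothetical data assumed to roughly follow a power model y = a * x**b
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 3.9, 8.1, 13.7, 21.0]
n = len(x)

# Transform the data: w_i = ln(x_i), z_i = ln(y_i)
w = [math.log(xi) for xi in x]
z = [math.log(yi) for yi in y]

# Slope a1 from the linear regression summation formula
sum_w = sum(w)
sum_z = sum(z)
sum_wz = sum(wi * zi for wi, zi in zip(w, z))
sum_w2 = sum(wi * wi for wi in w)

a1 = (n * sum_wz - sum_z * sum_w) / (n * sum_w2 - sum_w ** 2)
print(a1)
```

For this particular made-up data set, a1 comes out to roughly 1.78, which is the exponent b of the power model.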
That's how I'm going to get a1, and a0 is nothing but z-bar minus a1 times w-bar, where z-bar and w-bar are the averages of the zi and wi values. So that's how I'm going to get the values of a0 and a1 from the linear regression formulas with the transformed data, where zi is the natural log of the yi values and wi is the natural log of the xi values. Once you have found a0 and a1, you backtrack to find a and b, and that's how we get the derivation of the power model by using the transformed data. However, what I want to point out is that what we wanted to minimize was the sum of the square of the residuals between the observed values and the predicted values, Sr = sum from i = 1 to n of (yi - a xi^b)^2. You take the difference between the observed value yi and the predicted value a xi^b, that gives you the residual, you square it, and you add them up, and that is what we would have liked to minimize with respect to a and b. But that results in two simultaneous nonlinear equations, so you would have to solve simultaneous nonlinear equations to find the values of a and b. That's why we took the approach of using the transformed data. So you do have to appreciate that when you find the values of a and b by using the transformed data, you are not actually minimizing that expression. What you're minimizing is the sum of the square of the residuals between the observed and predicted z values, sum from i = 1 to n of (zi - a0 - a1 wi)^2, because zi is the observed value and a0 + a1 wi is the predicted value from your linear regression model. If we write it down in the original terms, zi is nothing but ln(yi), a0 is nothing but ln a, a1 is nothing but b, and wi is nothing but ln(xi), so what you are minimizing is the sum from i = 1 to n of (ln yi - ln a - b ln xi)^2.
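The whole procedure described above can be sketched end to end. In this example the data is generated exactly from y = 2 x^1.5 (an assumed example, not course data), so the transformed-data fit should recover a = 2 and b = 1.5:

```python
import math

# Synthetic data generated exactly from y = 2 * x**1.5, so the fit
# should recover a = 2 and b = 1.5 (assumed values for illustration)
a_true, b_true = 2.0, 1.5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [a_true * xi ** b_true for xi in x]
n = len(x)

# Step 1: transform the data, w_i = ln(x_i), z_i = ln(y_i)
w = [math.log(xi) for xi in x]
z = [math.log(yi) for yi in y]

# Step 2: linear regression of z on w
sum_w, sum_z = sum(w), sum(z)
sum_wz = sum(wi * zi for wi, zi in zip(w, z))
sum_w2 = sum(wi * wi for wi in w)
a1 = (n * sum_wz - sum_z * sum_w) / (n * sum_w2 - sum_w ** 2)
a0 = sum_z / n - a1 * sum_w / n  # a0 = z_bar - a1 * w_bar

# Step 3: backtrack to the power-model constants
a = math.exp(a0)  # a = e^(a0)
b = a1            # b = a1
print(a, b)
```

Because the synthetic data lies exactly on the model curve, the recovered constants match a_true and b_true to within floating-point error.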
So that's what you're actually minimizing when you are using the transformed data. What you should have been minimizing is the sum of the square of the residuals of the original data, but the reason you choose to minimize the transformed expression instead is simply mathematical convenience and nothing else. If you minimize the transformed expression, you're going to get one pair of values of a and b, and if you minimize the original sum of the square of the residuals by going back to the basics, you're going to get a different pair of values of a and b. So this is not the statistically optimal way of finding the values of a and b; the only reason we do it is mathematical convenience. And that's the end of this segment.
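The distinction above can be seen numerically: with noisy data, the transformed-data fit and a direct minimization of the original Sr give different constants. Here is a rough sketch using made-up noisy data and a coarse brute-force grid search in place of solving the nonlinear equations (both the data and the search ranges are assumptions for illustration):

```python
import math

# Made-up noisy data roughly following y = 2 * x**1.5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 5.5, 10.8, 15.5, 23.0]
n = len(x)

# --- Fit 1: transformed-data (log-log) linear regression ---
w = [math.log(v) for v in x]
z = [math.log(v) for v in y]
a1 = (n * sum(wi * zi for wi, zi in zip(w, z)) - sum(z) * sum(w)) \
     / (n * sum(wi * wi for wi in w) - sum(w) ** 2)
a0 = sum(z) / n - a1 * sum(w) / n
a_log, b_log = math.exp(a0), a1

# --- Fit 2: brute-force minimization of the original Sr(a, b) ---
def Sr(a, b):
    """Sum of squared residuals of the untransformed model."""
    return sum((yi - a * xi ** b) ** 2 for xi, yi in zip(x, y))

# Coarse grid: a in [1.00, 3.00], b in [1.00, 2.00], step 0.01
best = min(
    (Sr(ai / 100, bi / 100), ai / 100, bi / 100)
    for ai in range(100, 301)
    for bi in range(100, 201)
)
_, a_dir, b_dir = best
print(a_log, b_log, a_dir, b_dir)
```

The two (a, b) pairs come out close but not identical, which is exactly the point of the segment: the transformed-data fit minimizes the residuals of ln y, not of y, and is chosen for convenience rather than statistical optimality. In practice a nonlinear least-squares routine would replace the grid search.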