CHAPTER 03.03: BISECTION METHOD: Advantages & Drawbacks of Bisection Method

 

In this segment, we're going to talk about the bisection method and look at its advantages and drawbacks.

 

Now, let's go ahead and enumerate the advantages to begin with. The first advantage of the bisection method is that it is always convergent, and the reason is evident from the approach itself. Looking at it graphically, you start with a bracket of the root of the equation f(x) = 0: a lower guess and an upper guess between which the function changes sign. At each iteration you choose the midpoint as the new guess and keep whichever half of the interval still brackets the sign change, so the interval length keeps on halving as you go from one iteration to the next. The method always has an interval in hand where the function is changing sign, so it always converges; there is no way for it to diverge. That's a big advantage of the bisection method.
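To make the procedure concrete, here is a minimal sketch of the bisection loop in Python; the example function, bracket, and iteration count are illustrative choices, not part of the lecture.

```python
def bisection(f, xl, xu, max_iters=40):
    """Approximate a root of f(x) = 0, assuming f(xl) and f(xu)
    have opposite signs (a valid starting bracket)."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must differ in sign")
    for _ in range(max_iters):
        xm = (xl + xu) / 2    # new guess: the midpoint of the bracket
        if f(xm) == 0.0:
            return xm         # landed exactly on a root
        if f(xl) * f(xm) < 0:
            xu = xm           # the sign change is in the left half
        else:
            xl = xm           # the sign change is in the right half
    return (xl + xu) / 2

# Illustrative use: sqrt(3) is a root of x**2 - 3 = 0, bracketed by [1, 2]
print(bisection(lambda x: x * x - 3, 1.0, 2.0))   # ~1.7320508
```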

 

The second advantage is that the error can be controlled. What I mean by that is this: if we know the root lies between the lower guess and the upper guess, then the maximum error in taking any point of that bracket as the root is the width of the bracket, because wherever the true root sits inside the starting bracket, it can be no farther away than that. Once you have halved the bracket, you know the root lies within the new, half-width interval, so the maximum possible true error is half of what it was. As you apply one iteration after another, the maximum error keeps getting halved, so the error can be controlled: at the very start, you can already say how much error you may have after any number of iterations.
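Put as a formula: if the starting bracket is [xl, xu], the maximum possible absolute error after n iterations is (xu - xl)/2^n, so you can solve for the n that guarantees any tolerance you want. A minimal sketch in Python, with illustrative numbers:

```python
from math import ceil, log2

xl, xu = 1.0, 2.0   # illustrative starting bracket
eps = 1e-6          # desired maximum absolute error

# Maximum possible error after n iterations is (xu - xl) / 2**n.
# Requiring that to be at most eps and solving for n:
n = ceil(log2((xu - xl) / eps))
print(n)            # 20 -- twenty iterations guarantee an error below 1e-6
```

Now, let's go ahead and look at some of the drawbacks of the bisection method.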

 

One of the main drawbacks of the bisection method is that convergence is slow. Although convergence is guaranteed, it is generally slow, because each iteration only halves the interval, and the maximum error therefore only halves as you go from one iteration to the next. If you take different examples of the bisection method and run them through, you will find that to be the case. A related drawback is that choosing a guess close to the root may still result in needing many iterations to converge, as the sketch below shows. Suppose you choose the lower guess very close to the root and the upper guess far from it. As the interval is halved again and again, it still takes many iterations to zero in on the root, because the method cannot take advantage of the fact that one initial guess is already nearly there. It is going to converge, but the number of iterations you will need to reach the same accuracy can be quite large.
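As a sketch of that behavior (the function, bracket, and tolerance here are illustrative), you can count how many halvings are needed even when one guess starts very close to the root:

```python
def bisection_iters(f, xl, xu, eps=1e-6):
    """Count iterations until the bracket width drops below eps."""
    iters = 0
    while (xu - xl) > eps:
        xm = (xl + xu) / 2
        if f(xl) * f(xm) < 0:
            xu = xm
        else:
            xl = xm
        iters += 1
    return iters

f = lambda x: x * x - 3                # root at sqrt(3), about 1.7320508
print(bisection_iters(f, 1.7, 10.0))   # 23 iterations, even though the
                                       # lower guess starts within 0.033
                                       # of the root
```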

 

Let's look at some other drawbacks. Another drawback is that bisection cannot find the roots of some equations. Suppose the function is f(x) = x^2, so the equation is x^2 = 0; we know that x = 0 is the root of that equation. But you won't be able to find a lower guess and an upper guess where the function changes sign, because the function is non-negative everywhere. Again, this is not necessarily the fault of the bisection method itself; the method simply requires a lower guess and an upper guess such that the function changes sign between them, and if no such pair exists, you cannot apply this method to find the root of an equation like that one.
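A quick illustrative check in Python shows why bisection never even gets started on this equation:

```python
f = lambda x: x * x        # f(x) = x^2, with its only root at x = 0

# f is non-negative everywhere, so no bracket has a sign change:
xl, xu = -1.0, 1.0
print(f(xl) * f(xu) > 0)   # True -- the bracketing condition fails,
                           # so bisection cannot be applied at all
```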

 

Another drawback is that the method may seek out a singularity as if it were a root. An example is the equation f(x) = 1/x = 0. If you plot f(x) = 1/x and take a lower guess on the negative side and an upper guess on the positive side, the function changes sign across the bracket: it is negative at the lower guess and positive at the upper guess. What happens then is that the iterations converge to the singular point, x = 0, which is not a root at all. Again, this is not the fault of the method's underlying theorem, because the bisection method requires the function to be real and continuous between xl and xu. The function 1/x is continuous to the right of the y-axis and continuous to the left of the y-axis, but any bracket that crosses the y-axis contains the discontinuity at x = 0. If you apply the method without checking whether the function is continuous between the lower and upper guesses, you can run into this phenomenon of converging on a singular point, even though the function changes sign between the two guesses.
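Here is an illustrative sketch of that pitfall in Python; the bracket below straddles the discontinuity of 1/x, the sign-change test passes, and the iterations squeeze in on x = 0, where the function blows up rather than vanishes.

```python
f = lambda x: 1.0 / x
xl, xu = -1.0, 2.0       # f(-1) < 0 and f(2) > 0, so the sign test passes

for _ in range(40):
    xm = (xl + xu) / 2   # with this bracket, xm never lands exactly on 0
    if f(xl) * f(xm) < 0:
        xu = xm
    else:
        xl = xm

print(xm, f(xm))   # xm is ~1e-12 while |f(xm)| is ~1e12: we have homed in
                   # on the singularity at x = 0, not on a root
```

So those are the advantages and drawbacks of the bisection method. And that's the end of this segment.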