Mod-01 Lec-09 Newton-Raphson method contd…

So, we will continue with our discussion on the Newton-Raphson method. I quickly derived the Newton-Raphson method on the blackboard using the graphical interpretation. We will now go through the derivation a little more slowly, and then we will see the important points concerning the Newton-Raphson method. As is our practice, we will solve some problems using it: first with one variable, and then we will go on to solve a two-variable problem with the Newton-Raphson method. In order to see the potency and the power of the method, it would be better to take a problem which we have already solved using the method of successive substitution. So, we may decide either on the fan problem or, what is the other one… The engine characteristics problem. And then we will see, when we apply the Newton-Raphson method to multiple unknowns, whether it fares better or not.

The Newton-Raphson method is basically a root extraction method. Why do we say it is a root extraction method? Because we are trying to set f(x) = 0: we are trying to find the values of x at which f(x) becomes numerically zero, because invariably we encounter nonlinear equations which cannot be solved analytically. So, look at the plot of f(x) versus x for a typical function whose root we are seeking to obtain. As you are all able to see, the root is where the curve crosses the x axis. This is again basically an iterative technique. We start from somewhere; I call this x_i. Corresponding to x_i we have f(x_i), and the root is somewhere else. How does this method work? We draw the tangent at x_i, and wherever this tangent intercepts the x axis we call it x_{i+1}, the next iterate. The vertical side of the resulting triangle is f(x_i) − 0, so the slope of the tangent, f'(x_i), is approximately (f(x_i) − 0)/(x_i − x_{i+1}). Rearranging this gives the algorithm; we just call it the Newton-Raphson formula.
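To put the blackboard geometry on record, here is the same step written out (notation as in the lecture: x_i is the current iterate, x_{i+1} is where the tangent cuts the x axis):

```latex
% Slope of the tangent drawn at the current iterate x_i:
f'(x_i) \approx \frac{f(x_i) - 0}{x_i - x_{i+1}}
% Solving for the x-intercept of the tangent gives the Newton-Raphson formula:
x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}
```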

This is among the most potent and powerful techniques for solving a single-variable problem, and extended to multiple variables it is still a very powerful technique. As engineers, you must not leave this place without knowing the Newton-Raphson method; this you should never forget: x_{i+1} = x_i − f(x_i)/f'(x_i). So, what happens next? The new point is x_{i+1}; you can go there and draw the tangent again. Hopefully it will converge within a few iterations. The major difference between this method and the method of successive substitution is that it uses information from the derivative. Since it uses derivative information, the convergence should hopefully be very fast, but we cannot say that convergence is always guaranteed. Why do I say that? Look at the formula and tell me what the possible limitations or shortcomings of the Newton-Raphson method could be.

Limitations. Let us not write the advantages; we are able to see the advantages: it uses first-derivative information, it should be fast, it should converge rapidly, and all that. Let us see the limitations, which are not so apparent at first sight. If f'(x_i) = 0 at any iterate, it is a real disaster for the method: the step blows up and the solution diverges. In fact, f'(x_i) need not even be exactly zero; if it is very small, this fellow will oscillate too much, it will shake too much; that is the problem. So, the correction f(x_i)/f'(x_i) should be small compared to x_i, so that you get a sensible new x_{i+1}. Sometimes this causes the solution to diverge; sometimes it gets trapped around local minima or maxima.

For each of these we can take a particular example and show when it converges slowly, when f' = 0 becomes a disaster, and all that. But again, please be reminded that this is not a course like MA401 numerical analysis or something; we are looking at optimization. So, I am going to stop at a certain stage, and those of you who are interested can always look at some books on numerical analysis to get more ideas about this method.
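As a minimal sketch of what the algorithm looks like in code — the function names and tolerances here are illustrative choices, not something fixed in the lecture — one might write it with an explicit guard for the f'(x_i) ≈ 0 disaster discussed above:

```python
def newton_raphson(f, f_prime, x0, tol=1e-8, max_iter=100):
    """Find a root of f(x) = 0 starting from the initial guess x0.

    Guards against the limitation discussed above: if f'(x_i) is
    (numerically) zero, the step f(x_i)/f'(x_i) blows up.
    """
    x = x0
    for i in range(max_iter):
        fp = f_prime(x)
        if abs(fp) < 1e-14:          # f'(x_i) ~ 0: disaster for the method
            raise ZeroDivisionError(f"f'(x) ~ 0 at x = {x}; pick a new guess")
        x_new = x - f(x) / fp        # the Newton-Raphson formula
        if abs(x_new - x) < tol:     # converged: successive iterates agree
            return x_new, i + 1
        x = x_new
    raise RuntimeError("did not converge within max_iter iterations")
```

The guard is exactly the limitation above: when |f'(x_i)| is tiny, the correction f(x_i)/f'(x_i) is enormous compared to x_i, and the iterate shoots off.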

Now, how do we get this algorithm using the Taylor series? What we have so far is a graphical depiction; it is possible to get the same algorithm using the Taylor series expansion. There is always confusion whether it is Taylor series or Taylor's series — Taylor-apostrophe-s, or Taylors series. There are several versions; I am using the plain vanilla, simple Taylor series, so even if I make a mistake it is all right. I mean, there is a level of simplicity associated with this Taylor series expansion.

Now, how do we do the Taylor series expansion of f(x_{i+1}) in the vicinity of x_i? Can you proceed? We call the expansion point x_{i+1}, and ξ is somewhere between x_i and x_{i+1}; the term with f''(ξ) plus higher-order terms comes after the first-derivative term. Of course, when x_i and x_{i+1} are close to the true solution x_true, it does not matter. Why the second derivative should be evaluated at ξ, and so on, becomes a really mathematically involved question; let us not get into those debates.

Now, if x_{i+1} and x_i are sufficiently close, and therefore the second-derivative term is small, we neglect the second- and higher-order terms; neglecting them does not take us away from the solution. What are we trying to make zero? f(x_{i+1}). Setting it to zero and solving — minus, is that a minus? yes — we recover the same formula. So, using the Taylor series expansion also we get the same result: the graphical method and the Taylor series expansion give the same correct algorithm.
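Written out, the expansion and the truncation look like this (ξ lies between x_i and x_{i+1}, as in the lecture):

```latex
f(x_{i+1}) = f(x_i) + f'(x_i)\,(x_{i+1} - x_i)
           + \frac{f''(\xi)}{2}\,(x_{i+1} - x_i)^2 + \cdots
% Demand f(x_{i+1}) = 0 and neglect the second- and higher-order terms:
0 \approx f(x_i) + f'(x_i)\,(x_{i+1} - x_i)
\;\Longrightarrow\;
x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}
```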

Now, we will start with an example where the Newton-Raphson method is not powerful at all. Everybody says "all is well", but all is not well with the Newton-Raphson method. Let us take an example like this. Is it problem number eight? No — problem nine. And note, it is not "exactly equal"; at some places we put "exactly equal", but I want to remind you that we are chopping off so many higher-order terms when I use this. In the graphical method there is no problem and we can put the equality, but in the Taylor series expansion I want you to remember there are higher-order terms which we neglect. We also have to talk about the convergence of this method, which involves a little more advanced mathematics; we will do it at five o'clock — please be reminded that we have class at five o'clock anyway. We will be working on a lot of problems on the Newton-Raphson method, so it should be fun.

So, problem number nine: determine the root of the equation f(x) = x^8 − 1 — x to the power of eight minus one, not x to the power of seven — using the Newton-Raphson method, with an initial guess of x = 0.5. We will call it x_0, the initial guess. The answer is so obvious, but it is very painful. What is the answer? One — the elusive one. We start with 0.5; you will see the fun. Shall we? First write the algorithm before you begin to draw the tabular column: x_{i+1} = x_i − (x_i^8 − 1)/(8 x_i^7). That is the algorithm.
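Running problem 9 through the sketch given earlier (here f and f_prime are just the lambdas for x^8 − 1 and 8x^7; the iteration count of roughly twenty-five depends on the tolerance, so take it as indicative):

```python
f = lambda x: x**8 - 1
f_prime = lambda x: 8 * x**7

root, n = newton_raphson(f, f_prime, x0=0.5)
print(root, n)
# First step: x1 = 0.5 - (-0.99609)/0.0625 = 16.4375 -- a wild overshoot,
# because f'(0.5) = 0.0625 is small. For large x each step multiplies x by
# roughly 7/8, so it takes on the order of 25 iterations to crawl back to 1.
```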

I am just showing how this function behaves. Some values are decent, but some indecent values are also there. So, let us plot. Now, tell me why this fellow is misbehaving. Where did we start? Here — and now look at the slope. The algorithm becomes very slow when the function is like this, moving very gently; it converges nicely when the slope is healthy. If the initial guess is in the flat region, it takes forever to converge.

Today we put x_{i+1}, x_i and all that into a table — no need to write everything. The solution comes at about x_25 or x_30. What is the value of x_24? We necessarily have to work through it, because the iterate initially goes way outside. In the interest of time, we will just show iterations one, two, three and then go to twenty-six. From 0.5, the first step leads far away — and please be reminded that you have to fill the table up; I have the freedom not to do everything on the board.

Now, x_24 — what is the value? So, it is slowly converging. All the big talk about the Newton-Raphson method — that because it uses f'(x) it is very powerful and all that — it is not really so here. It does converge, but the convergence is very slow. Now let us work out a problem in which the Newton-Raphson method really works.

Problem number ten: use the Newton-Raphson method to determine the first positive root of the equation f(x) = x − 2 sin x, where x is in radians, with the starting guess x_0 = 2 radians. So, what is the algorithm? x_{i+1} = x_i − (x_i − 2 sin x_i)/(1 − 2 cos x_i) — correct. So, I will plot this. Enough. This just shows the graphical variation of the two functions, y = x and y = 2 sin x; where they intersect is the solution, which is about x = 1.895. So, which is the sinusoidal function here? What about you, Jayaraman — are you still there?
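Problem 10 through the same sketch (math.sin and math.cos from the Python standard library; the printed iteration count will again depend on the tolerance):

```python
import math

f = lambda x: x - 2 * math.sin(x)
f_prime = lambda x: 1 - 2 * math.cos(x)

root, n = newton_raphson(f, f_prime, x0=2.0)
print(root, n)   # ~1.8955, reached in just a handful of iterations
```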

So, you can see exactly: the yellow one is the difference between the two, that is, the actual f(x), and you can see where f(x) becomes zero. There you get the first positive root. There are also negative roots — the line y = x keeps going on the other side as well, so negative roots exist too. So it is pertinent, it is imperative, that I tell you whether I want the first positive root or whatever.

Now, if you work out the steps with x_{i+1} as above, what do you find? Fast — absolutely fast; there is no comparison with successive substitution. Generally, if your initial guess is in the right region and the function f(x) is not changing too slowly with x, there is a good chance that you will get very quick and fast convergence.

Now, please retry this problem with an initial guess of one radian. So, this is ten(b); ten(a), with the initial guess of 2, goes to 1.895. Ten(b) will be x_0 equal to one radian; I have also shown it graphically. Now, work out part (a) and part (b). In part (a) we started with two radians and it converged; in part (b) you start with one radian, and you are going to tell me whether it converges or not. Then look at the graph and develop your own understanding as to why it works in one case and does not work in the other. Do ten steps on the calculator: when we start with one, the next iterate is minus seven point something; it goes totally off, and then it comes back.
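The first step of part (b) is easy to check by hand with the same f and f_prime as above, and it reproduces the "minus seven point something" seen in class:

```python
x0 = 1.0
# f(1)  = 1 - 2*sin(1) ~ -0.6829
# f'(1) = 1 - 2*cos(1) ~ -0.0806   (close to zero: trouble)
x1 = x0 - f(x0) / f_prime(x0)
print(x1)   # ~ -7.47: the iterate swings far away before eventually recovering
```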

Now, graphically explain the difference when starting at one. What happens at one? What is f'(x) at 1? Close to zero. I told you that when f'(x) is close to zero, it is a disaster for the algorithm. It is close to zero, and suddenly from one the iterate swings to about minus 7.5; from there it swings to some other value. Eventually it converges, but it takes time. So that is why the initial guess is very important.

Some people will then say there is some kind of cheating involved: if the initial guess is so important, then knowing the initial guess means you already know the answer or something. But I think we cannot call it cheating, because oftentimes in engineering — the mathematician can argue anyway; I have no quarrel with that — as an engineer you will have a good idea of the approximate range in which you expect the solution.

In optimization problems — if the problem involves maximization or minimization of a particular function; leave root-seeking aside for the time being — you use a technique which is similar to the Newton-Raphson method: what are called conjugate gradient or descent methods, where you use information on the first derivative. So if it is going to give trouble like this — if you start at the wrong value, it oscillates left and right and becomes difficult for you — then one possibility is to have some other technique which will localize the search; I mean, which will ensure that you give an initial guess in the proper region and direct the solution such that you get accelerated convergence. How do you do that? You can use ant colony optimization, or a genetic algorithm, or simulated annealing, or some other technique; we will see these in the later part of this course.

Suppose you have got a function y = f(x1, x2) that you are trying to optimize. There is what is called the feasible space: all combinations of x1 and x2 that satisfy all the constraints of the problem; they are all working solutions to the problem. But if you have an objective function, each of them will not be equally desirable: some will have a high value of y, some will have a low value of y, and depending upon the context — whether you want to maximize or minimize — some are more desirable compared to the others.

Now, suppose this is your optimum point and you use a gradient method. If you start with an initial guess from here, it may work; but if you start from there, it may not work. So what you generally do in these cases is use a global optimization search technique — a genetic algorithm or simulated annealing — to identify a region where you expect the solution. Once you are in this region, then you fire your gradient method from anywhere and it will work.
Now, you will ask me: sir, if I am able to narrow down the region using the genetic algorithm, why not proceed with it all the way to convergence? Because genetic algorithms — these global optimization techniques generally — are extremely slow. So you initially use a global optimization technique, narrow down the region in which you expect the solution, and then, once you hit upon the narrow region, use a very powerful calculus-based technique which uses information on the first derivative — sometimes you can also use information on the second derivatives, the Hessian, and all that — to finish the search. Such techniques are called hybrid optimization techniques. They have become the norm now for multi-variable optimization problems; for nonlinear optimization problems involving several variables, people are increasingly trying to use them.
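As a toy illustration of the hybrid idea — and only an illustration: the objective function, the box bounds, the sample count, and the step size below are all made up, a plain random search stands in for the genetic algorithm or simulated annealing stage, and a fixed-step gradient descent stands in for the calculus-based stage:

```python
import random

def objective(x1, x2):
    # Hypothetical y = f(x1, x2) to be minimized over the feasible box.
    return (x1 - 1.0) ** 2 + 10.0 * (x2 - 2.0) ** 2

def gradient(x1, x2):
    return 2.0 * (x1 - 1.0), 20.0 * (x2 - 2.0)

# Stage 1: global search (here plain random sampling) narrows the region.
random.seed(0)
best = min(
    ((random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(500)),
    key=lambda p: objective(*p),
)

# Stage 2: fast derivative-based refinement from inside the good region.
x1, x2 = best
for _ in range(200):
    g1, g2 = gradient(x1, x2)
    x1, x2 = x1 - 0.04 * g1, x2 - 0.04 * g2   # fixed-step gradient descent

print(x1, x2)   # ~ (1.0, 2.0), the minimum of the toy objective
```

The division of labour is exactly as described in the lecture: the slow, derivative-free stage only has to land somewhere in the right region; the fast, derivative-based stage then does the precise convergence.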