## Abstract

In this article, we propose a new computational method for second order initial value problems in ordinary differential equations. The algorithm is based on a local representation of the theoretical solution of the second order initial value problem by a non-linear interpolating function. Numerical examples, both linear and non-linear, are solved to assess the computational performance of the algorithm. From the results obtained, the algorithm can be said to be computationally efficient and effective.

## 1 Introduction

Many phenomena that occur in the chemical, biological, engineering, physical and social sciences can be modelled mathematically in the form of either ordinary or partial differential equations. However, it is often difficult to obtain exact solutions for these differential equations by analytical means, especially when they are nonlinear, so we consider approximate solutions to these problems. There are numerous ways in which an approximate solution can be constructed, and in numerical analysis the concept of approximation plays a very important role. Thus, solving approximately those practical problems which are modelled as differential equations is one of the main preoccupations of numerical analysis.

Consider second order initial value problems in ordinary differential equations of the form

$$y^{\prime\prime} = f(x, y, y^{\prime}), \qquad x \in [a, b], \tag{1.1}$$

subject to initial conditions

$$y(x_0) = y_0, \qquad y^{\prime}(x_0) = y_0^{\prime}.$$

In the literature, problems of the form (1.1) are conventionally solved by reducing the differential system to first order equations. Some eminent authors have contributed to this specific area of research [1,2,3,4,11]. Another approach to the solution of such problems is the shooting method, either simple or multiple [8]. In recent years, researchers [5,6] have applied a nonstandard method and obtained results competitive with those of other methods. Much research on the numerical integration of initial value problems has been reported in the literature, much of it excellent work, but the need for a new algorithm to solve equation (1.1) cannot be over-emphasized.

In this article, we develop a new single step algorithm capable of solving equations of the form (1.1). A similar algorithm was first reported in [7] for first order initial value problems. Having seen the performance of that algorithm on first order initial value problems, we were motivated to investigate what happens when a similar idea is used to derive an algorithm for second order initial value problems.

The existence and uniqueness of the solution to the initial value problem (1.1) is assumed. Further, we assume that problem (1.1) is well posed with continuous derivatives and that the solution depends differentiably on the initial conditions. The specific assumptions on *f*(*x*, *y*, *y*′) that ensure existence and uniqueness will not be considered here [8,9,10].

The remainder of this paper is organized as follows. Section 2 deals with the derivation and development of the algorithm, while the truncation error and convergence of the algorithm are treated in Section 3. The stability of the algorithm is discussed in Section 4, and numerical experiments on four model problems are presented in Section 5.

## 2 Development of Algorithm

We define *N*, the finite number of nodal points of the interval [*a*, *b*] on which the solution of problem (1.1) is sought, through the nodes

$$x_j = a + jh, \qquad j = 0, 1, \ldots, N, \tag{2.2}$$

where the step length *h* on the right side of expression (2.2) is defined as *h* = (*b* – *a*)/*N*.
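As a minimal sketch of this mesh construction (the function name is ours, and [*a*, *b*] = [1, 2] with *N* = 4 is an arbitrary illustration):

```python
# Hypothetical sketch of the uniform mesh above: N subintervals of [a, b],
# step length h = (b - a) / N and nodal points x_j = a + j*h, j = 0..N.
def uniform_mesh(a, b, N):
    h = (b - a) / N
    return h, [a + j * h for j in range(N + 1)]

h, x = uniform_mesh(1.0, 2.0, 4)
# h = 0.25 and x = [1.0, 1.25, 1.5, 1.75, 2.0]
```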

Suppose we have to determine a number *y*_{j}, the numerical approximation to the value of the theoretical solution *y*(*x*) of problem (1.1) at the nodal point *x*_{j}, *j* = 1, 2, …, *N*; other notations are defined similarly, e.g. *f*_{j} = *f*(*x*_{j}, *y*_{j}, *y*′_{j}). We assume that the theoretical solution *y*(*x*) of the initial value problem (1.1) can be locally represented in the interval [*x*_{j}, *x*_{j+1}] by the interpolating function

where *a*_{0}, *a*_{1}, *a*_{2} and *a*_{3} are undetermined coefficients.

To determine these coefficients, we impose the following conditions:

1. The interpolating function and its first derivative with respect to *x* must coincide with the theoretical solution *y*(*x*) and its derivative *y*′(*x*) at *x* = *x*_{j} and *x* = *x*_{j+1}, i.e.
$$F(x_j)=y(x_j)\quad\text{and}\quad F(x_{j+1})=y(x_{j+1}),$$
$$F^{\prime}(x_j)=y^{\prime}(x_j)\quad\text{and}\quad F^{\prime}(x_{j+1})=y^{\prime}(x_{j+1}).$$
2. The second and third derivatives of the interpolating function with respect to *x* must respectively coincide with *f*(*x*, *y*, *y*′) and the derivative of *f*(*x*, *y*, *y*′) with respect to *x* at *x* = *x*_{j}, i.e.
$$F^{(2)}(x_j)=f_j\quad\text{and}\quad F^{(3)}(x_j)=f_j^{\prime}.$$

Thus, from conditions (2.4) and (2.5), we get

Solving the system of equations (2.6) for *a*_{0}, *a*_{1}, …, we obtain

From equation (1.4) we have

Using equation (2.2) and substituting the values of *a*_{1}, *a*_{2} and *a*_{3} from (2.7) into equation (2.8), we have

We replace

So we obtain our single step implicit algorithm, of the form

where *ϕ* and *φ* are increment functions. These increment functions depend on *h*, *f*_{j} and *x*_{j+1}, and on the values *y*_{j+1} and *y*′_{j+1}.

## 3 The Local truncation error and Convergence

In this section, we consider the error associated with the proposed algorithm (2.9). Let the local truncation error *T*_{n+1} be defined as in [13]:

Substituting the value of *y*_{n+1} from (2.9) into (3.11) and expanding *y*(*x*_{n} + *h*) in a Taylor series about the point *x*_{n}, we have

where *b* = max(*x*_{n}) in [*a*, *b*]. Thus the local truncation error *T*_{n+1} is bounded. Since we know *x*_{0} and *y*(*x*_{0}) exactly, then using algorithm (2.9) we can compute *y*_{n+1}, *n* = 0, 1, 2, 3, …, *N*, with maximum error which tends to zero as *h* → 0, i.e. for large *N*. Similarly, we can find the maximum error in the second algorithm of (2.9), for the computation of the derivative of the solution. Thus we conclude that method (2.9) is convergent for large *N*.
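For comparison, a one-step update that reproduces the Taylor expansion of the solution through the *h*^{3} term has local truncation error

$$T_{n+1} = y(x_n + h) - y(x_n) - h\,y^{\prime}(x_n) - \frac{h^2}{2} f_n - \frac{h^3}{6} f_n^{\prime} = \frac{h^4}{24}\, y^{(4)}(\xi_n), \qquad \xi_n \in (x_n, x_{n+1}),$$

so that $|T_{n+1}| \le \frac{h^4}{24} \max_{x \in [a,b]} |y^{(4)}(x)|$, which tends to zero as *h* → 0. The expansion for (2.9) need not have exactly these coefficients, but the bounding argument is of this general type.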

## 4 Stability property

To discuss the stability properties of the algorithm (2.9), we follow the same approach as discussed in [12,13]. Consider the Dahlquist test equation for stability,

subject to initial conditions *y*(*x*_{0}) = *y*_{0}, *y*′(*x*_{0}) = *λy*_{0}. Applying the method (2.9) to this test equation, and assuming that the contributions of the terms of *O*(*h*^{2}) and higher are negligible, we obtain a finite difference equation

where the stability function *E*(*hλ*) is an approximation to *e^{hλ}*. For the algorithm (2.9) to be stable, we require

Solving inequality (4.14), we obtain the corresponding interval of absolute stability of (2.9) as (–2, 0).
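As a small hedged illustration of how such an interval arises (the function below is the first-order approximation *E*(*z*) = 1 + *z* to *e^{z}*, not necessarily the stability function of (2.9)), the condition |*E*(*z*)| ≤ 1 for real *z* yields exactly the interval [–2, 0]:

```python
# Hypothetical illustration: for a stability function approximating e^z,
# here the first-order E(z) = 1 + z (NOT necessarily the E of algorithm
# (2.9)), |1 + z| <= 1 holds for real z exactly when -2 <= z <= 0.
def stable(z):
    return abs(1 + z) <= 1

assert stable(-2.0) and stable(-1.0) and stable(0.0)
assert not stable(-2.1) and not stable(0.1)
```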

## 5 Numerical experiment

In this section, four numerical examples, linear and nonlinear, are considered to illustrate our algorithm (2.9) and to demonstrate its computational efficiency and accuracy. In the tables, we show the maximum absolute error, computed over the nodal points of the interval of integration, in both the solution and the derivative of the solution for these examples. Let *y*_{i} and *y*′_{i} be the numerical approximations to the solution *y*(*x*) and the derivative of the solution *y*′(*x*) at the point *x* = *x*_{i}; the maximum absolute error is calculated in both the solution and the derivative of the solution.

All computations in the examples considered were performed in the GNU FORTRAN environment (gcc 2.95 compiler) running on a MS Windows 2000 Professional operating system.
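As a small illustration of this error measure (the helper name and sample approximate values below are ours; the exact solution of Example 5.2 supplies the exact values):

```python
# Sketch of the error measure used in the tables: the maximum absolute error
# over the nodal points. The approximate values here are made up for
# illustration; only the exact solution y(x) = (1 + x)**-2 is from the text.
def max_abs_error(exact, approx):
    return max(abs(e - a) for e, a in zip(exact, approx))

nodes = [0.0, 0.5, 1.0]
exact = [(1 + x) ** -2 for x in nodes]      # 1.0, 0.4444..., 0.25
approx = [1.0, 0.4446, 0.2502]              # hypothetical computed values
mae = max_abs_error(exact, approx)          # about 2.0e-4
```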

**Example 5.1.** Consider the initial value problem

The exact solution is considered on [1, 2]. The maximum absolute errors in *y*(*x*) and *y*′(*x*) are given in Table 1.

Maximum absolute error in *y*(*x*) and *y*′(*x*) for Example 5.1.

| MAE | N = 64 | N = 128 | N = 256 | N = 512 | N = 1024 | N = 2048 | N = 4096 |
|---|---|---|---|---|---|---|---|
| *y* | .5621729(-3) | .2748708(-3) | .1313720(-3) | .5965512(-4) | .2474944(-4) | .1009797(-4) | .4115111(-5) |
| *y*′ | .5267575(-6) | .1289518(-6) | .3187597(-7) | .7879862(-8) | .1913576(-8) | .5345364(-9) | .1852914(-9) |

**Example 5.2.** Consider the nonlinear initial value problem

The exact solution in [0, 1] is *y*(*x*) = (1 + *x*)^{–2}. The maximum absolute errors in *y*(*x*) and *y*′(*x*) are given in Table 2.

Maximum absolute error in *y*(*x*) = (1 + *x*)^{–2} and *y*′(*x*) for Example 5.2.

| MAE | N = 128 | N = 256 | N = 512 | N = 1024 | N = 2048 | N = 4096 | N = 8192 |
|---|---|---|---|---|---|---|---|
| *y* | .9100055(-2) | .5043587(-2) | .2747672(-2) | .1475068(-2) | .7822510(-3) | .4106779(-3) | .2136840(-3) |
| *y*′ | .1660321(-1) | .8981451(-2) | .4802107(-2) | .2541303(-2) | .1330271(-2) | .6887614(-3) | .3566145(-3) |
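The differential equation of Example 5.2 is not written out above; one can check, though, that the stated exact solution is consistent with, for instance, *y*″ = 6*y*^{2} (a reconstruction of ours, not necessarily the paper's equation):

```python
# The stated exact solution y(x) = (1 + x)**-2 has
#   y'(x)  = -2*(1 + x)**-3,
#   y''(x) =  6*(1 + x)**-4 = 6*y(x)**2,
# so y'' = 6*y**2 is one ODE consistent with it (a reconstruction, not
# necessarily the paper's equation).
def y(x):   return (1 + x) ** -2
def ypp(x): return 6 * (1 + x) ** -4   # analytic second derivative

for x in [0.0, 0.5, 1.0]:
    assert abs(ypp(x) - 6 * y(x) ** 2) < 1e-12
```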

**Example 5.3.** Consider the nonlinear initial value problem

The exact solution in [1, 2] is *y*(*x*) = (1 + *x*)^{–1}. The maximum absolute errors in *y*(*x*) and *y*′(*x*) are given in Table 3.

Maximum absolute error in *y*(*x*) = (1 + *x*)^{–1} and *y*′(*x*) for Example 5.3.

| MAE | N = 128 | N = 256 | N = 512 | N = 1024 | N = 2048 | N = 4096 | N = 8192 |
|---|---|---|---|---|---|---|---|
| *y* | .1363998(-1) | .7091403(-2) | .3645122(-2) | .1860261(-2) | .9453296(-3) | .4789233(-3) | .2411603(-3) |
| *y*′ | .6387859(-2) | .3583610(-2) | .1952946(-2) | .1042857(-2) | .5497336(-3) | .2868771(-3) | .1467168(-3) |

**Example 5.4.** Consider the nonlinear initial value problem

The exact solution in [0, 1] is *y*(*x*) = sin^{2}(…). The maximum absolute errors in *y*(*x*) and *y*′(*x*) are given in Table 4.

Maximum absolute error in *y*(*x*) and *y*′(*x*) for Example 5.4.

| MAE | N = 128 | N = 256 | N = 512 | N = 1024 | N = 2048 | N = 4096 | N = 8192 |
|---|---|---|---|---|---|---|---|
| *y* | .3083482(-1) | .2080867(-1) | .1396590(-1) | .9322166(-2) | .6191766(-2) | .4095947(-2) | .2701349(-2) |
| *y*′ | .2283364(-2) | .1416236(-2) | .8000433(-3) | .4311204(-3) | .2279877(-3) | .1192688(-3) | .6163130(-4) |

## 6 Conclusion

In this paper, we have described a new method that is efficient, stable and convergent for solving second order initial value problems in ordinary differential equations. The implementation of the method is simple. The results obtained for the examples show that the method is computationally efficient and accurate. Our future work will deal with extending the present method to higher order boundary value problems and improving its order of accuracy.

Communicated by Juan L.G. Guirao

## References

- [2] Henrici P., Discrete Variable Methods in Ordinary Differential Equations, John Wiley and Sons, New York (1962).
- [3] Cash J.R. and Wright M.H., A deferred correction method for nonlinear two-point boundary value problems, SIAM J. Sci. Stat. Comput., no. 12, 971-989 (1991).
- [4] Collatz L., Numerical Treatment of Differential Equations (3/e), Springer-Verlag, Berlin (1966).
- [5] Mickens R.E., Nonstandard Finite Difference Models of Differential Equations, World Scientific, Singapore (1994).
- [6] Sunday J. and Odekunle M.R., A New Numerical Integrator for the Solution of Initial Value Problems in Ordinary Differential Equations, The Pacific Journal of Science and Technology, Vol. 13, no. 1, pp. 221-227 (2012).
- [7] Fatunla S.O., A New Algorithm for Numerical Solution of Ordinary Differential Equations, Computers and Mathematics with Applications, no. 2, 247-253 (1973).
- [8] Keller H.B., Numerical Methods for Two Point Boundary Value Problems, Blaisdell, Waltham, Mass. (1968).
- [9] Stoer J. and Bulirsch R., Introduction to Numerical Analysis (2/e), Springer-Verlag, Berlin Heidelberg (1991).
- [10] Baxley J.V., Nonlinear Two Point Boundary Value Problems, in Ordinary and Partial Differential Equations (Everitt W.N. and Sleeman B.D., Eds.), 46-54, Springer-Verlag, New York (1981).
- [11] Gear C.W., Numerical Initial Value Problems in Ordinary Differential Equations, Prentice Hall (1971).
- [13] Jain M.K., Iyengar S.R.K. and Jain R.K., Numerical Methods for Scientific and Engineering Computations (2/e), Wiley Eastern Ltd., New Delhi (1987).