
1 Introduction

Consider the numerical solution of the initial value problem

$$\left\{\begin{array}{ll} y'(t) = f(t,y(t)), & \quad t \in [0,T], \\ y(0) = y_0, & \end{array}\right. \tag{1}$$

where T > 0 and $y_0$ are given real numbers and $f:[0,T]\times \mathbf{R} \to \mathbf{R}$ is a continuously differentiable function in a neighborhood $\{(t, y) \mid t \in [0, T],\ |y - y(t)| \le a\}$ of the solution $(t, y(t))$, $t \in [0, T]$, of (1). Although the following results can also be applied to systems of ODEs, for simplicity we restrict our considerations to a single equation.

Assume that approximations $y_j, y'_j$, $j = 0, \ldots, n-1$, $n \ge k$, to the true solution and its derivative are known at the grid points $t_j$, $j = 0,\ldots,n-1$, and we want to advance the numerical solution by computing new approximations at $t_n = t_{n-1} + h_n$ with a k-step Adams-Moulton method (AMk). Then, denote by $P_n(t)$ the polynomial of degree $\le k+1$ such that

$$P_n(t_n) = y_n, \quad P_n(t_{n-1}) = y_{n-1}, \quad P'_n(\bar t_{n-j}) = f(\bar t_{n-j},\, P_{n-1}(\bar t_{n-j})), \quad j = 1,\ldots,k, \tag{2}$$

where $\bar t_{n-j}$ are the back points that define the class of the AMk method, and the new approximation $y_n$ is determined so that $P_n(t)$ satisfies the differential equation at $t_n$, i.e.,

$$P'_n(t_n) = f(t_n, y_n). \tag{3}$$

By substituting the Lagrange form of the interpolatory polynomial (2) into (3), one gets the difference equations of the AMk method. However, for our purposes it turns out to be more convenient to deal with the polynomial form of the AMk method. To define the method in this way, note that the polynomials $P_n$ and $P_{n-1}$ associated with $t_n$ and $t_{n-1}$ respectively satisfy

$$P_n(t) = P_{n-1}(t) + \gamma\, q_n(t), \qquad q_n(t) = \int_{t_{n-1}}^t (s-\bar t_{n-1}) \cdots (s-\bar t_{n-k})\,\mathrm{d}s, \tag{4}$$

where γ is a real constant which, together with $y_n$, is determined from the two conditions

$$P_n(t_n) = y_n, \qquad P'_n(t_n) = f(t_n, y_n). \tag{5}$$

The equations (4) imply that $P_n$ and $P_{n-1}$ are such that their derivatives agree at the k back points $\bar t_{n-1}, \ldots, \bar t_{n-k}$, whereas $y_n$ is chosen so that $P_n(t)$ satisfies the ODE at the advanced point $t_n$.

On a uniform grid we take as back points $\bar t_{n-j} = t_n - jh$, where h is the fixed stepsize of the grid. For non-uniform grids there are many possibilities for the choice of back points. Firstly, taking as $\bar t_{n-j}$, $j = 1,\ldots,k$, the past grid points, we get the AMk method with the so-called variable coefficient technique for stepsize changing. On the other hand, taking the k equally spaced back points $\bar t_{n-j} = t_n - jh_n$, $j = 1,\ldots,k$, we have the AMk method with the interpolation technique for stepsize changing. Other techniques have been proposed by Skeel [14], Gupta and Wallace [8] and Jackson and Sack-Davis [10].
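As a minimal illustration only (the helper name and signature are ours, not part of the paper), the two classical choices of back points can be sketched in Python as:

```python
# Back points for a k-step AM method; helper name and signature are ours.

def back_points(k, t_n, h_n=None, past_grid=None):
    """Variable coefficient technique: the k past grid points t_{n-1},...,t_{n-k}.
    Interpolation technique: the equally spaced points t_n - j*h_n, j = 1..k."""
    if past_grid is not None:                        # variable coefficient technique
        return list(past_grid[-k:])[::-1]            # t_{n-1}, ..., t_{n-k}
    return [t_n - j * h_n for j in range(1, k + 1)]  # interpolation technique

print(back_points(3, t_n=1.0, h_n=0.1))                           # [0.9, 0.8, 0.7]
print(back_points(3, t_n=1.0, past_grid=[0.55, 0.7, 0.82, 0.92])) # [0.92, 0.82, 0.7]
```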

The above techniques are the most usual in practical codes for non-stiff IVPs. Thus, STEP written by Shampine and Gordon [12], DVDQ by Krogh [11] and EPISODE by Byrne and Hindmarsh [1] employ AM formulas with the variable coefficient technique, while DIFSUB by Gear [5] and LSODE by Hindmarsh [9] use the interpolation technique.

The stability properties of these techniques have been studied by several authors (e.g. Gear and Tu [6], Crouzeix and Lisbona [4], Grigorieff [7], Skeel and Jackson [13] and Calvo, Lisbona and Montijano [2, 3]). From these papers it follows that the variable coefficient technique is more stable than the interpolation one. However, since the computational cost of the former is higher than that of the latter, for problems whose stability requirements are not too strong the interpolation technique can be used to advantage.

In this paper, a new technique for stepsize change in Adams methods is proposed. Its computational cost is similar to that of the interpolation technique, but it has better stability properties and a smaller leading error term. The paper is organized in the following manner: In section 2 the new technique is introduced in the framework of the polynomial formulation of AM methods. In section 3 the leading term of the local error is obtained. Finally, in section 4 the stability (0-stability) of this technique is studied and some comparisons with the interpolation technique are presented.

2 The new technique of stepsize change

As was remarked earlier, when AMk methods are written in polynomial form, the difference between the interpolation and variable coefficient techniques lies in the choice of the back points $\bar t_{n-j}$ at which the derivatives of two consecutive polynomials agree, $P'_n = P'_{n-1}$. Moreover, when the back points are equally spaced, the calculations required to advance one step are considerably simplified.

In view of these facts we will take the back points $\bar t_{n-j} = t_n - j\bar h$, $j = 1,\ldots,k$, equally spaced with a stepsize $\bar h$ adjusted to improve the stability and leading error term over the corresponding ones of the interpolation technique.

Denoting by $r_n = h_n/h_{n-1}$ the stepsize ratio, we have considered $\bar h$ given by

$$\bar h = h_n\, \phi(r_n),$$

where $\phi(r)$ is a bounded function for $0 \le r_* < r < r^*$ with $\phi(1) = 1$. In particular, the following functions have been studied:

$$\phi(r) = \alpha_k + (1-\alpha_k)\, r^{-1}, \tag{T1}$$

$$\phi(r) = \begin{cases} \alpha_k + (1-\alpha_k)\, r^{-1}, & \text{if } r > 1, \\ 1, & \text{if } r \le 1, \end{cases} \tag{T2}$$

$$\phi(r) = \begin{cases} \alpha_k, & \text{if } r > 1, \\ 1, & \text{if } r \le 1, \end{cases} \tag{T3}$$

where $\alpha_k$ is, for the AMk method, a constant that will be determined by requiring that the stability of the technique and the leading error term behave optimally, in a sense that will be made precise later.

Note that if $\alpha_k < 1$, then $\phi(r) < 1$ for all $r > 1$ for the T3 technique, and the same holds for the T1 and T2 techniques if $\alpha_k \in (0, 1)$. Hence, when the current stepsize is increased ($r_n > 1$), we have $\bar h < h_n$. This means that our interpolation points $\bar t_{n-j}$ are closer to $t_n$ than those used in the interpolation technique. The choice $\phi(r) = 1$ for $r \le 1$ in T2 and T3 is mainly due to the fact that the stability of the interpolation technique is already satisfactory for $r \le 1$.

On the other hand, in the techniques T1 and T2, the interpolation stepsize $\bar h$ is taken as a convex combination of the last two stepsizes $h_{n-1}$ and $h_n$, i.e.

$$\bar h = \alpha_k h_n + (1-\alpha_k) h_{n-1}.$$
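As a concrete illustration (a minimal Python sketch; the function names are ours, not from the paper), the three choices of $\phi$ and the resulting interpolation stepsize $\bar h$ can be computed as follows:

```python
# Minimal sketch of the stepsize functions (T1)-(T3); names are ours.

def phi_T1(r, alpha):
    # (T1): phi(r) = alpha_k + (1 - alpha_k) / r
    return alpha + (1.0 - alpha) / r

def phi_T2(r, alpha):
    # (T2): as (T1) when the stepsize is increased (r > 1), else 1
    return alpha + (1.0 - alpha) / r if r > 1.0 else 1.0

def phi_T3(r, alpha):
    # (T3): the constant alpha_k when r > 1, else 1
    return alpha if r > 1.0 else 1.0

def interpolation_stepsize(h_n, h_prev, phi, alpha):
    # h_bar = h_n * phi(r_n) with r_n = h_n / h_{n-1}; for (T1) this equals
    # the convex combination alpha*h_n + (1 - alpha)*h_{n-1}
    return h_n * phi(h_n / h_prev, alpha)

# Example: AM2 with alpha_2 = 0.7677 (Table 2), stepsize raised by 20%:
print(interpolation_stepsize(0.12, 0.10, phi_T1, 0.7677))  # 0.1153... < h_n
```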

Next, to obtain the propagation matrices we write the AMk methods in matrix form by means of Nordsieck vectors (see [15]). Let

$$q_n(t) = \int_{\bar t_{n-1}}^t (s-\bar t_{n-1}) \cdots (s-\bar t_{n-k})\,\mathrm{d}s \tag{6}$$

be the modifier polynomial of the new technique, and let $N_n(t, h)$ and $C_n(t, h)$ be the $(k+2)$-dimensional Nordsieck vectors at the point t, with stepsize h, of $P_n(t)$ and $q_n(t)$ respectively, given by

$$\begin{aligned} N_n(t,h) &= \Big(P_n(t),\ h P'_n(t),\ \ldots,\ \frac{h^{k+1}}{(k+1)!} P_n^{(k+1)}(t)\Big)^T,\\ C_n(t,h) &= \Big(q_n(t),\ h q'_n(t),\ \ldots,\ \frac{h^{k+1}}{(k+1)!} q_n^{(k+1)}(t)\Big)^T. \end{aligned} \tag{7}$$

With these notations, the first equation of (4) can be written equivalently in the form

$$N_n(t_n,h_n) = N_{n-1}(t_n,h_n) + \gamma\, C_n(t_n,h_n). \tag{8}$$

Now, using the Taylor formula to expand each component of $N_{n-1}(t_n,h_n)$ at the point $t_{n-1}$ and replacing $h_n$ by $r_n h_{n-1}$, we have

$$N_{n-1}(t_n,h_n) = P\, D(r_n)\, N_{n-1}(t_{n-1},h_{n-1}), \tag{9}$$

where $D(r) = \mathrm{diag}(1, r, \ldots, r^{k+1})$ and $P = (p_{ij})_{i,j=0}^{k+1} \in \mathbf{R}^{(k+2)\times(k+2)}$ is the Pascal matrix whose entries are

$$p_{ij} = \begin{cases} \dbinom{j}{i}, & \text{if } k+1 \ge j \ge i \ge 0, \\ 0, & \text{otherwise.} \end{cases}$$
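As a sketch (assuming NumPy; helper names are ours), the Pascal matrix and the rescaling matrix D(r) of the prediction (9) can be formed as:

```python
import numpy as np
from math import comb

def pascal_matrix(k):
    # P of order k+2 with p_ij = binomial(j, i) for j >= i, zero otherwise
    m = k + 2
    return np.array([[comb(j, i) if j >= i else 0 for j in range(m)]
                     for i in range(m)], dtype=float)

def D(r, k):
    # D(r) = diag(1, r, ..., r^{k+1})
    return np.diag(r ** np.arange(k + 2, dtype=float))

# Prediction (9): shift the Nordsieck vector from (t_{n-1}, h_{n-1}) to (t_n, h_n)
k, r_n = 2, 1.2
N_prev = np.array([1.0, 0.5, 0.1, 0.02])   # a placeholder Nordsieck vector
N_shifted = pascal_matrix(k) @ D(r_n, k) @ N_prev
```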

On the other hand, setting $s = t_n + x\bar h$ in (6), the Nordsieck vector of $q_n(t)$ at $t_n$ becomes

$$C_n(t_n, h_n) = \bar h^{k+1} D(\bar r_n)\, c, \tag{10}$$

where $c = (c_0, c_1, \ldots, c_{k+1})^T \in \mathbf{R}^{k+2}$ is a constant vector whose components are the coefficients of the polynomial

$$\Lambda(x) = \int_{-1}^x (s+1) \cdots (s+k)\,\mathrm{d}s = \sum_{j=0}^{k+1} c_j x^j, \tag{11}$$

and $\bar r_n = h_n/\bar h = 1/\phi(r_n)$.
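The coefficients $c_j$ of (11) are easily generated by polynomial manipulation. The following sketch (assuming NumPy; function name is ours) also checks that $c_1 = \Lambda'(0) = k!$, the fact used for $\omega(\bar r_n)$ in (14) below:

```python
import numpy as np
from math import factorial
from numpy.polynomial import polynomial as poly

def modifier_coeffs(k):
    # c_0, ..., c_{k+1}: coefficients of Lambda(x) = int_{-1}^x (s+1)...(s+k) ds
    p = np.array([1.0])
    for j in range(1, k + 1):
        p = poly.polymul(p, [float(j), 1.0])  # multiply by (s + j)
    return poly.polyint(p, lbnd=-1.0)         # antiderivative vanishing at x = -1

for k in range(2, 9):
    c = modifier_coeffs(k)
    assert c[1] == factorial(k)               # c_1 = Lambda'(0) = k!
```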

Substituting (9) and (10) into (8) and putting Nj = Nj(tj, hj) we have

$$N_n = P\, D(r_n)\, N_{n-1} + \gamma\, \bar h^{k+1} D(\bar r_n)\, c. \tag{12}$$

Finally, the value of the constant γ is determined from the second component of (12) and the equation (5). Thus, we arrive at the matrix equation of the AMk method with the new technique of stepsize change

$$N_n = \Omega(r_n)\, N_{n-1} + \frac{h_n}{c_1 \bar r_n}\, f(t_n, y_n)\, D(\bar r_n)\, c, \tag{13}$$

where the propagation matrix Ω(r) is given by

$$\begin{aligned} \Omega(r_n) &= \big(I - \omega(\bar r_n)^{-1} D(\bar r_n)\, c\, e_2^T\big)\, P\, D(r_n),\\ \omega(\bar r_n) &= e_2^T D(\bar r_n)\, c = k!\, \bar r_n, \qquad e_2 = (0,1,0,\ldots,0)^T \in \mathbf{R}^{k+2},\\ \bar r_n &= 1/\phi(r_n). \end{aligned} \tag{14}$$
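A direct transcription of (14) into Python (a sketch assuming NumPy, with the modifier coefficients computed as in the previous sketch) is:

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import polynomial as poly

def propagation_matrix(k, r, phi):
    # Omega(r) = (I - omega(rbar)^{-1} D(rbar) c e2^T) P D(r), eq. (14)
    m = k + 2
    rbar = 1.0 / phi(r)
    p = np.array([1.0])
    for j in range(1, k + 1):
        p = poly.polymul(p, [float(j), 1.0])
    c = poly.polyint(p, lbnd=-1.0)                      # coefficients of (11)
    P = np.array([[comb(j, i) if j >= i else 0 for j in range(m)]
                  for i in range(m)], dtype=float)
    D = lambda s: np.diag(s ** np.arange(m, dtype=float))
    e2 = np.zeros(m); e2[1] = 1.0
    omega = factorial(k) * rbar                         # = e2^T D(rbar) c
    return (np.eye(m) - np.outer(D(rbar) @ c, e2) / omega) @ P @ D(r)

# For k = 2 the trailing 2x2 block reproduces the matrix Omega2bar(r) of section 4
Om = propagation_matrix(2, 1.2, lambda r: 0.7677 + (1 - 0.7677) / r)
print(Om[2:, 2:])
```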

3 The local truncation error

Although the matrix form (13)-(14) of the AMk method will be useful to study the stability of the methods, for practical purposes it is more convenient to write the equations in the equivalent form

$$\begin{aligned} N_{n,0} &= P\, D(r_n)\, N_{n-1} = \Big(y_{n,0},\ h_n y'_{n,0},\ \ldots,\ \frac{h_n^{k+1}}{(k+1)!} y_{n,0}^{(k+1)}\Big)^T,\\ y_n &= y_{n,0} + \frac{1}{\ell_1 \bar r_n}\big(h_n f(t_n, y_n) - h_n y'_{n,0}\big),\\ N_n &= N_{n,0} + (y_n - y_{n,0})\, D(\bar r_n)\, \ell, \end{aligned} \tag{15}$$

where $\ell$ is the $(k+2)$-vector defined by $\ell = \frac{1}{c_0}\, c = (1, \ell_1, \ldots, \ell_{k+1})^T$. The first equation of (15) is the prediction stage, in which the Nordsieck vector at $t_n$ is predicted from the corresponding one at $t_{n-1}$. The second equation of (15) is a nonlinear algebraic equation which must be solved by some iterative method (e.g. fixed point) to get $y_n$. In the last equation of (15) the predicted $N_{n,0}$ is corrected to the final value $N_n$.
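A sketch of one step in the form (15) (assuming NumPy; the fixed-point iteration, tolerance and names are our choices, not prescribed by the paper):

```python
import numpy as np

def am_step(f, t_n, h_n, rbar_n, N_pred, ell, tol=1e-12, itmax=50):
    # One corrector step of (15). N_pred = P D(r_n) N_{n-1} is the predicted
    # Nordsieck vector and ell = c / c_0 = (1, ell_1, ..., ell_{k+1})^T.
    ell = np.asarray(ell, dtype=float)
    y0, hy0 = N_pred[0], N_pred[1]            # y_{n,0} and h_n * y'_{n,0}
    y = y0
    for _ in range(itmax):                    # fixed-point iteration for y_n
        y_new = y0 + (h_n * f(t_n, y) - hy0) / (ell[1] * rbar_n)
        if abs(y_new - y) <= tol:
            y = y_new
            break
        y = y_new
    # Correction: N_n = N_{n,0} + (y_n - y_{n,0}) D(rbar_n) ell
    return N_pred + (y - y0) * ell * rbar_n ** np.arange(len(ell))
```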

To define the truncation error EL(tn) at the grid point tn, we assume that the true solution y(t) is sufficiently differentiable and then

$$EL(t_n) = y(t_n) - y_n^*,$$

where $y_n^*$ is the solution computed by using (15) with an exact Nordsieck vector at $t_{n-1}$, i.e.

$$N_{n-1}^* = \Big( y(t_{n-1}),\ h_{n-1} y'(t_{n-1}),\ \ldots,\ \frac{h_{n-1}^{k+1}}{(k+1)!}\, y^{(k+1)}(t_{n-1}) \Big)^T.$$

Next, let us calculate the leading term of EL(tn) when it is expanded in powers of hn. We write it in the form

$$EL(t_n) = \big(y(t_n) - y_{n,0}\big) + \big( y_{n,0} - y_n^*\big). \tag{16}$$

Taking into account the first equation of (15), the first term on the right-hand side of (16) is

$$y(t_n) - y_{n,0} = \frac{h_n^{k+2}}{(k+2)!}\, y^{(k+2)}(t_{n-1}) + \mathscr{O}(h_n^{k+3}).$$

Similarly, from the second equation of (15) we get

$$y_n^* - y_{n,0} = \frac{1}{\ell_1 \bar r_n}\, \frac{h_n^{k+2}}{(k+1)!}\, y^{(k+2)}(t_{n-1}) + \mathscr{O}(h_n^{k+3}),$$

and therefore,

$$EL(t_n) = C_{k+2}\, h_n^{k+2}\, y^{(k+2)}(t_{n-1}) + \mathscr{O}(h_n^{k+3}),$$

with

$$C_{k+2} = \frac{1}{(k+2)!}\left( 1 - \frac{k+2}{\bar r_n\, \ell_1} \right).$$

The values of $(k+2)/\ell_1$ for $2 \le k \le 8$ are given in Table 1.

Table 1. Values of $(k+2)/\ell_1$ for the AMk methods.

| k | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|
| $(k+2)/\ell_1$ | $\frac{5}{3}$ | $\frac{15}{8}$ | $\frac{251}{120}$ | $\frac{665}{288}$ | $\frac{19087}{7560}$ | $\frac{5257}{1920}$ | $\frac{1070017}{362880}$ |

Since all these values are greater than unity, it follows that

$$|C_{k+2}(1)| > |C_{k+2}(\bar r)|$$

whenever $1 < \bar r < \dfrac{(k+2)/\ell_1}{2 - (k+2)/\ell_1}$, i.e., when $1 > \phi(r) > \dfrac{2\ell_1 - k - 2}{k+2}$ (for $k \ge 4$ one has $(k+2)/\ell_1 > 2$, so the upper bound on $\bar r$ is vacuous and any $\bar r > 1$ will do). This means that for both the T2 and T3 techniques with $\alpha_k \in (0, 1)$ the coefficient of $EL(t_n)$ is smaller than the corresponding one for the interpolation technique (IT). On the other hand, for the T1 technique this coefficient is smaller only if $r_n > 1$.
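The entries of Table 1 (and hence the bound above) can be reproduced exactly with rational arithmetic; a short sketch:

```python
from fractions import Fraction
from math import factorial

def error_ratio(k):
    # (k+2)/ell_1 with ell_1 = c_1/c_0, where c_1 = k! and
    # c_0 = int_{-1}^0 (s+1)...(s+k) ds, all computed exactly
    p = [Fraction(1)]                 # coefficients of prod(s+j), low order first
    for j in range(1, k + 1):
        q = [Fraction(0)] * (len(p) + 1)
        for i, a in enumerate(p):
            q[i] += a * j             # j * s^i
            q[i + 1] += a             # s^{i+1}
        p = q
    c0 = -sum(a * Fraction(-1) ** (i + 1) / (i + 1) for i, a in enumerate(p))
    return Fraction(k + 2) * c0 / factorial(k)

for k in range(2, 9):
    print(k, error_ratio(k))   # 5/3, 15/8, 251/120, 665/288, 19087/7560, ...
```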

4 The 0-stability of the new techniques

In this section we study the 0-stability of AMk methods with the techniques T1 and T2 for stepsize changing introduced in section 2. Taking into account that f(t, y) satisfies a uniform Lipschitz condition in a neighborhood of the true solution and assuming that the stepsize ratios are bounded, it may be proved (e.g. [4, 7]) that the stability of (13) is independent of the nonlinear term, and therefore the method (13) is stable if and only if there exist numbers $K, r^*, h^* > 0$ such that for any grid $\{t_n\} \subset [0, T]$ with $h_n = t_n - t_{n-1} \le h^*$ and $r_n \le r^*$, the inequality

$$\Vert \Omega_k(r_n)\,\Omega_k(r_{n-1})\cdots\Omega_k(r_m)\Vert < K < \infty \tag{17}$$

holds for all m ≤ n.

Since the propagation matrix $\Omega_k(r)$ depends only on the stepsize ratio and on the function $\phi(r)$, following the ideas of [2] we introduce

Definition 1.

A sequence of real positive numbers {rj} is called a stable sequence of stepsize ratios if inequality (17) holds for all mn with a positive constant K independent of n and m.

A set of real positive numbers Jk is called a stability set for (13) if every sequence {rj} with rjJk is a stable sequence.

To simplify the condition (17), let us remark that the propagation matrices Ω(rn) have the special form

$$\Omega_k(r_n) = \begin{pmatrix} 1 & b_n & \bar b_n^T \\ 0 & 0 & 0^T \\ 0 & \bar a_n & \bar\Omega_k(r_n) \end{pmatrix},$$

where $b_n \in \mathbf{R}$, $\bar a_n, \bar b_n \in \mathbf{R}^k$ and $\bar\Omega_k(r_n) \in \mathbf{R}^{k\times k}$ (the first column of $\Omega_k$ is $e_1$ because the first column of $P D(r)$ is $e_1$, and its second row vanishes because the second component of $N_n$ is carried entirely by the $f(t_n, y_n)$ term in (13)). Consequently, any product of propagation matrices $\Omega_k(r_n)\cdots\Omega_k(r_m)$ can be written as

$$\Omega_k(r_n)\cdots\Omega_k(r_m) = \begin{pmatrix} 1 & b_{n,m} & \bar b_{n,m}^T \\ 0 & 0 & 0^T \\ 0 & \bar a_{n,m} & \bar\Omega_{n,m} \end{pmatrix}$$

with

$$\begin{aligned} \bar\Omega_{n,m} &= \bar\Omega_k(r_n)\,\bar\Omega_k(r_{n-1})\cdots\bar\Omega_k(r_m), & \bar a_{n,m} &= \bar\Omega_{n,m+1}\, \bar a_m,\\ b_{n,m} &= b_m + \sum_{i=m+1}^n \bar b_i^T\, \bar\Omega_{i-1,m+1}\, \bar a_m, & \bar b_{n,m}^T &= \bar b_m^T + \sum_{i=m+1}^n \bar b_i^T\, \bar\Omega_{i-1,m}, \end{aligned}$$

and $\bar\Omega_{i,j} = I$ if $i < j$.

We are now in a position to state:

Theorem 1.

The method (13)-(14) is stable if and only if there exist positive constants $K_1$, $K_2$ such that the inequalities

$$\Vert\bar\Omega_{n,m}\Vert = \Vert\bar\Omega_k(r_n)\,\bar\Omega_k(r_{n-1})\cdots\bar\Omega_k(r_m)\Vert \le K_1, \qquad \Big\Vert\sum_{i=m+1}^n \bar b_i^T\, \bar\Omega_{i-1,m}\Big\Vert \le K_2

hold for all $n \ge m \ge k - 1$.

One important stability result on interpolatory AMk methods, given by Gear and Tu [6] (see also [13]), is that stability can be ensured if the stepsize remains fixed for at least k steps before a stepsize change. This property was generalized by Calvo, Lisbona and Montijano [3], who showed that even with a finite number of stepsize changes between k consecutive equal stepsizes, the AMk method remains stable. Using the same reasoning as in [3], the following properties can be proved:

Theorem 2.

If there is a positive integer p such that in any set of p + 2(k + 1) consecutive steps there are at least k + 1 consecutive steps with constant size, then the AMk method is stable.

If there is a norm ∥⋅∥ such that $\Vert \bar\Omega_k(r) \Vert \le K_1 < 1$ for all r ≤ 1, then the AMk method remains stable with the following strategy of stepsize variation: arbitrary decreases of the stepsize are allowed, while the stepsize can be increased only after k + 1 consecutive steps of the same length.

Next, we turn our attention to the choice of the α-parameter in the new techniques. We consider first the two-step AM method. In this case, after some elementary calculations it is found that $\bar\Omega_2(r)$ is the 2 × 2 matrix

$$\bar\Omega_2(r) = r^2 \begin{pmatrix} 1 - \frac{3}{2}\bar r & \big(3 - \frac{9}{4}\bar r\big)\, r \\[2pt] -\frac{1}{3}\bar r^2 & \big(1 - \frac{1}{2}\bar r^2\big)\, r \end{pmatrix},$$

where $\bar r = 1/\phi(r)$. By using the Schur criterion, it follows that the spectral radius $\rho(\bar\Omega_2(r)) < 1$ if and only if the following inequalities hold:

$$\begin{aligned} r^5\, |(\bar r - 1)(\bar r - 2)| &< 2,\\ r^2\, |(2 - 3\bar r) + r(2 - \bar r^2)| &< |2 + r^5 (\bar r - 1)(\bar r - 2)|. \end{aligned} \tag{18}$$
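These inequalities are easy to check numerically against the eigenvalues of $\bar\Omega_2(r)$; a sketch (assuming NumPy; the test values are ours):

```python
import numpy as np

def schur_ok(r, rbar):
    # Schur conditions (18) for the 2x2 matrix Omega2bar(r)
    d = r**5 * (rbar - 1.0) * (rbar - 2.0)
    return abs(d) < 2.0 and r**2 * abs((2 - 3*rbar) + r*(2 - rbar**2)) < abs(2.0 + d)

def rho2(r, rbar):
    # Direct spectral radius of Omega2bar(r), for cross-checking
    M = r**2 * np.array([[1 - 1.5*rbar, (3 - 2.25*rbar) * r],
                         [-rbar**2 / 3.0, (1 - 0.5*rbar**2) * r]])
    return max(abs(np.linalg.eigvals(M)))

phi = lambda r, a=0.7677: a + (1 - a) / r           # T1 with alpha_2 = 0.7677
for r in (1.5, 1.79, 1.81):
    rbar = 1.0 / phi(r)
    print(r, schur_ok(r, rbar), rho2(r, rbar) < 1)  # agree; both fail past r = 1.803
```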

Thus, starting with the technique T1, for each value $\alpha = \alpha_2$ we may find the set of all $r \ge 0$ such that $\rho(\bar\Omega_2(r)) < 1$, which is given by (18). This set has the form $[0, \varphi_1(\alpha))$. Figure 1 shows the function $\varphi_1(\alpha)$; a remarkable fact is that it attains a maximum at $\alpha = \alpha_2^* = 0.7677$. In view of this, a reasonable choice for the AM2 method with the T1 technique would be $\alpha_2 = 0.7677$. Note that the interpolation technique corresponds to $\alpha_2 = 1$, and then the spectral radius $\rho(\bar\Omega_2(r)) < 1$ only for $r \in [0, 1.695)$, while for the new technique the same condition is satisfied for $r \in [0, 1.803)$. A similar study can be carried out for the remaining techniques as well as for $k \ge 3$. In Table 2 the optimal values of the α-parameters for the T1 and T3 techniques and $k \le 7$ are given. Moreover, Table 2 includes the greatest value $r_k$ such that $\rho(\bar\Omega_k(r)) < 1$ for $r \in [0, r_k)$ for the T1, T3 and IT techniques respectively. In all cases, it is clear that the stability of the new techniques is better than that of the IT.
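A rough numerical reconstruction of the curve $\varphi_1(\alpha)$ for the T1 technique (a sketch; the scan resolution and ranges are our choices, not the paper's actual search code):

```python
import numpy as np

def rho2_T1(r, alpha):
    # Spectral radius of Omega2bar(r) for the T1 technique
    rb = 1.0 / (alpha + (1.0 - alpha) / r)
    M = r**2 * np.array([[1 - 1.5*rb, (3 - 2.25*rb) * r],
                         [-rb**2 / 3.0, (1 - 0.5*rb**2) * r]])
    return max(abs(np.linalg.eigvals(M)))

def phi1(alpha, dr=1e-3, rmax=3.0):
    # First r >= 1 where rho reaches 1: the end of the set [0, phi1(alpha))
    r = 1.0
    while r < rmax and rho2_T1(r, alpha) < 1.0:
        r += dr
    return r

alphas = np.linspace(0.5, 1.0, 101)
best = max(alphas, key=phi1)
print(best, phi1(best), phi1(1.0))   # about 0.77, 1.80 and 1.695 (cf. Table 2)
```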

Fig. 1. Plot of $\varphi_1(\alpha)$ for the AM2 method with the T1 and T3 techniques.

Table 2. Optimal parameters $\alpha_k^*$ and the greatest $r_k$ with $\rho(\bar\Omega_k(r)) < 1$ for $r \in [0, r_k)$.

| k | T1: $\alpha_k^*$ | T1: $r_k$ | T3: $\alpha_k^*$ | T3: $r_k$ | interp.: $r_k$ |
|---|---|---|---|---|---|
| 2 | 0.7677 | 1.803 | 0.8987 | 1.803 | 1.695 |
| 3 | 0.7374 | 1.491 | 0.9161 | 1.489 | 1.439 |
| 4 | 0.7172 | 1.321 | 0.9322 | 1.321 | 1.297 |
| 5 | 0.7272 | 1.251 | 0.9524 | 1.250 | 1.233 |
| 6 | 0.7373 | 1.196 | 0.9685 | 1.194 | 1.187 |
| 7 | 0.8989 | 1.162 | 0.9846 | 1.163 | |

Although the sets $I_k = \{ r \mid \rho(\bar\Omega_k(r)) < 1 \}$, shown in Figures 2-7, are good guides for selecting the optimal values of the parameters, $I_k$ is in general not a stability set (see e.g. Th. 4.1 of [3]). One approach to obtaining stability sets consists in finding a suitable matrix H such that the set $M_k$ defined by

$$M_k = \{ r \in \mathbf{R}^+ \mid \Vert H^{-1}\, \bar\Omega_k(r)\, H \Vert_1 < 1 \} \subset I_k$$

is as large as possible. Moreover, it is convenient for practical applications that $M_k$ have the form $[0, d_k)$ with $d_k > 1$. It has been proved in [3] that every closed subset of $M_k$ is a stability set for the AMk method.

Fig. 2. Spectral radius of the AM2 methods for the T1, T3 and interpolation techniques.

Fig. 3. Spectral radius of the AM3 methods for the T1, T3 and interpolation techniques.

Fig. 4. Spectral radius of the AM4 methods for the T1, T3 and interpolation techniques.

Fig. 5. Spectral radius of the AM5 methods for the T1, T3 and interpolation techniques.

Fig. 6. Spectral radius of the AM6 methods for the T1, T3 and interpolation techniques.

Fig. 7. Spectral radius of the AM7 methods for the T1, T3 and interpolation techniques. Note that there exists an interval near 0.8 where the spectral radius of the T1 and interpolation techniques is greater than one.

Finally, we briefly outline the choice of H for the new techniques and the corresponding stability sets. The approach is purely numerical and follows essentially the same ideas used in [3]. For a value $r \in I_k$ such that $\bar\Omega_k(r)$ possesses a complete set of eigenvectors, we take these vectors as the columns of H(r). Then we get the corresponding set $M_k(H(r))$. By a numerical search we obtain an optimal $r_k \in I_k$. In Table 3 the optimal values of $r_k$ for $k \le 6$ are given. It must be pointed out that for the T1 technique $M_k = I_k$ for $k = 2,\ldots,5$, and for the T3 technique $M_k = I_k$ only for $k = 2$ and $k = 4$. Therefore, optimal norms have been obtained in these cases.
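The procedure just described can be sketched for k = 2 and the T1 technique as follows (assuming NumPy; the scan ranges, resolution and reference values are our own choices, and this is only an approximation of the search actually performed):

```python
import numpy as np

def Om2bar(r, alpha):
    # Omega2bar(r) for the T1 technique, cf. the matrix given above
    rb = 1.0 / (alpha + (1.0 - alpha) / r)
    return r**2 * np.array([[1 - 1.5*rb, (3 - 2.25*rb) * r],
                            [-rb**2 / 3.0, (1 - 0.5*rb**2) * r]])

def M_bound(alpha, rc, dr=1e-3, rmax=3.0):
    # End point of the interval on which ||H^{-1} Om2bar(r) H||_1 < 1, where
    # the columns of H are the eigenvectors of Om2bar at the reference ratio rc
    _, H = np.linalg.eig(Om2bar(rc, alpha))
    r, nrm = dr, 0.0
    while r < rmax and nrm < 1.0:
        nrm = np.linalg.norm(np.linalg.solve(H, Om2bar(r, alpha) @ H), 1)
        r += dr
    return r

# Crude search over reference ratios rc; cf. r_2 = 1.803 reported in Table 3
print(max(np.linspace(1.2, 1.8, 61), key=lambda rc: M_bound(0.7677, rc)))
```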

Table 3. Optimal reference ratios $r_k$, the corresponding sets $M_k(r_k)$, and the $\alpha_k$ used in the norm search.

| k | T1: $r_k$ | T1: $M_k(r_k)$ | T1: $\alpha_k$ | T3: $r_k$ | T3: $M_k(r_k)$ | T3: $\alpha_k$ | interp.: $r_k$ | interp.: $M_k(r_k)$ |
|---|---|---|---|---|---|---|---|---|
| 2 | 1.803 | [0, 1.803] | 0.8122 | 1.803 | [0, 1.803] | 0.9184 | 1.695 | [0, 1.695] |
| 3 | 1.462 | [0, 1.491] | 0.8000 | 1.422 | [0, 1.461] | 0.9898 | 1.419 | [0, 1.437] |
| 4 | 1.315 | [0, 1.321] | 0.8000 | 1.296 | [0, 1.321] | 1.0020 | 1.297 | [0, 1.297] |
| 5 | 1.221 | [0, 1.251] | 0.9000 | 1.094 | [0, 1.123] | 0.9796 | 0.905 | [0, 1.056] |
| 6 | 1.120 | [0, 1.172] | 0.7489 | 1.017 | [0, 1.096] | 1.0351 | 1.170 | [0, 0.257] ∪ [0.971, 1.181] |
