
High-accuracy approximation of piecewise smooth functions using the Truncation and Encode approach



Introduction

Uncertainty Quantification (UQ) is the science that studies the quantification and reduction of uncertainties in real applications with an intensive computational component. An example would be the computation of the fuel consumption of a car. Suppose we know how to compute the consumption as a function of some parameters (car and wind velocity, wheel condition, car weight, etc.), but we do not know the exact value of these parameters because they are random variables. Hence, the fuel consumption is also a random variable. UQ studies how to infer the model of the fuel consumption random variable by assuming models for the parameters. To carry out such a study, many simulations of the consumption, changing the parameters between simulations, may be required. These computations tend to be very expensive, because they may involve numerically solving systems of partial differential equations.

One of the aims of UQ is to find the statistical moments of the random variable that provides the solution to a given problem. In [1, 11], the authors propose a new multi-scale technique, the Truncation and Encode approach (TE henceforth), to lower the cost of the computation (by quadrature rules) of the moments of the solution of the following stochastic PDE

$$ \begin{equation*}% \label{eq:PDE} \partial_t u(x,t,\xi) + \partial_x f(x,t,\xi,u(x,t,\xi)) = 0, \end{equation*} $$

where $\xi \in \mathbb{R}^N$ is a random vector with known probability distribution. Notice that for each $\xi$, a PDE must be solved to compute $u_\xi(x,t) = u(x,t,\xi)$, namely

$$ \begin{equation*}%{eq:PDE_xi} \partial_t u_\xi(x,t) + \partial_x f_\xi(x,t,u_\xi(x,t)) = 0. \end{equation*} $$

Consequently, the calculation of these moments is a computationally intensive task, since function evaluations for given parameter values usually require a complex numerical simulation. Many numerical methods have been specially developed for this kind of UQ problem (for instance, methods generalizing or improving the classical Monte Carlo method, or those based on the Polynomial Chaos representation [14]), and all of them try to use as few function evaluations as possible.

The TE technique is specifically designed to meet a target accuracy in the computation of the desired integral, by ensuring a predetermined accuracy in the required function evaluations. It is especially suited for the integration of functions which are only piecewise smooth, since in this case other alternatives such as the Polynomial Chaos method may be highly inefficient.

In this paper we analyze a basic ingredient of the TE technique designed in [1, 11]: the influence of the approximation method for piecewise smooth functions on the global efficiency of the TE technique. For this, we consider a simplified framework in which we seek to compute integrals of the type

$$ \begin{equation*}\nonumber E=\int_0^1 f(\xi) d\xi \end{equation*} $$

by means of a quadrature rule using point values on an equally spaced grid. For the sake of simplicity, let us assume that we use the trapezoidal rule and a uniform mesh of grid-size $h_K = 2^{-K}$, i.e.

$$ \begin{equation*}%{eq:integration_rule1} E_K = 2^{-K}\left(\frac12 v_0^K + \sum_{i=1}^{2^K-1} v_i^K + \frac12 v_{2^K}^K\right), \qquad v_i^K=f(i2^{-K}). \end{equation*} $$

By standard results, we know that when f is sufficiently smooth

$$ |E-E_K|= O(2^{-2K}) $$

while

$$ |E- E_K| = O(2^{-K}) $$

when f is only piecewise smooth.
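These two rates are easy to check numerically. The following sketch (assuming NumPy is available; the integrands are our own illustrative choices, mimicking the test functions used later in the paper) estimates the observed order of the trapezoidal rule for a smooth and a piecewise smooth function:

```python
# Sketch: observed convergence orders of the composite trapezoidal rule on
# dyadic grids. Expected: ~2 for the smooth integrand, ~1 for the jump one.
import numpy as np

def trapezoid_dyadic(f, K):
    """E_K: trapezoidal rule on the uniform grid of size h_K = 2^{-K}."""
    v = f(np.linspace(0.0, 1.0, 2**K + 1))
    return (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1]) * 2.0**-K

f_smooth = lambda x: np.sin(2 * np.pi * x**2)      # smooth on [0, 1]
f_jump   = lambda x: f_smooth(x) + (x > 0.55)      # jump at x = 11/20

for f, name in [(f_smooth, "smooth"), (f_jump, "piecewise smooth")]:
    E_ref = trapezoid_dyadic(f, 22)                # fine-grid reference value
    errs = [abs(trapezoid_dyadic(f, K) - E_ref) for K in range(4, 12)]
    orders = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
    print(f"{name}: observed orders {np.round(orders, 2)}")
```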

The TE algorithm seeks to replace as many $v_i^K$ values as possible, which require function evaluations of f, with modified $\hat v_i^K$ values, so that

$$ \begin{equation*}%{eq:integration_rule2} \hat E_K = 2^{-K}\left(\frac12 \hat v_0^K + \sum_{i=1}^{2^K-1} \hat v_i^K + \frac12 \hat v_{2^K}^K\right) \end{equation*} $$

satisfies

$$ |E-\hat E_K| \leq \epsilon $$

where ε is a predetermined, user-specified accuracy, even when f is only piecewise smooth.

Notice that

$$ \begin{align*} |E_K - \hat E_K| &= 2^{-K}\left |\left(\frac12 v_0^K + \sum_{i=1}^{2^K-1} v_i^K + \frac12 v_{2^K}^K\right) - \left(\frac12 \hat v_0^K + \sum_{i=1}^{2^K-1} \hat v_i^K + \frac12 \hat v_{2^K}^K\right)\right | \leq \|v^K - \hat v^K\|_1 \leq \|v^K - \hat v^K\|_\infty, \end{align*} $$

where

$$ \begin{equation*}\nonumber \|v^K - \hat v^K\|_1 = 2^{-K}\sum_{i=0}^{2^K} |v_i^K - \hat v_i^K|, \qquad \|v^K-\hat v^K\|_\infty = \sup_{0\leq i \leq 2^K} |v_i^K - \hat v_i^K|. \end{equation*} $$

Hence,

$$ \begin{equation} |E - \hat E_K| \leq |E - E_K| + \|\hat v^K - v^K \|_1. \end{equation} $$

Thus, for a given (piecewise smooth) f, it is possible to ensure a target accuracy by appropriately choosing the mesh (i.e. K) and the target accuracy ε so that $\|\hat v^K - v^K \|_\infty \leq \epsilon$.

The TE algorithm in [1, 11] is based on Harten’s Multiresolution Framework (MRF), which provides a set of tools to manage data in a multi-scale setting. In the last two decades, the MRF has been successfully applied in various contexts, and in particular in the design of adaptive schemes for the numerical solution of conservation laws and systems [6–8, 13]. The TE algorithm follows a strategy similar to that used in the cost-effective schemes described in [7] and used in [6, 8, 13]. Both algorithms use the ideas in Harten’s MRF to compute a sequence $\hat v^K$ such that $\|\hat v^K - v^K\| \leq \epsilon$ for a specified norm, with $v^K=\left ( f(\xi_i^K)\right )_{i=0}^{2^K}$.

These strategies proceed from coarse to fine resolution levels as follows: at each resolution level $k \leq K$ (associated to a uniform grid with spacing $2^{-k}$), a vector $\hat v^k$ is obtained, whose components either coincide with or approximate the values of f on the corresponding grid, i.e. $\hat v^k_i\approx f(i2^{-k})$. Approximations are computed in [1, 7, 11] via an interpolatory reconstruction. The TE authors also consider a multilevel thresholding strategy $(\epsilon^k)_{k=0}^K$, which should be properly chosen to guarantee certain precision in the output $\hat v^K$:

$$ \begin{equation*}%{eq:error_control0} \| v^K - \hat v^K\|_\infty \lessapprox \epsilon. \end{equation*} $$

In this work we show that the simple strategy $\epsilon^k = \epsilon$, also considered in [7], is more effective for the current purpose than the one proposed in [1, 11], $\epsilon^k = 2^{k-K}\epsilon$. Another advantage of taking $\epsilon^k = \epsilon$ is that K can be taken as large as needed without increasing the number of function evaluations. Thus, $|E-E_K|$ can be made arbitrarily small in (1), which implies $|E - \hat E_K|\lessapprox \epsilon$.

In this work we also analyze the use of high accuracy interpolation techniques, and we establish the growth rate of the number of evaluations ($n_{eval}$) with respect to the precision $(\|v^K - \hat v^K\|_\infty)$:

$$ \begin{equation*}\nonumber n_{eval} = O(\|v^K - \hat v^K\|_\infty^{-1/s}) \quad \equiv \quad \|v^K - \hat v^K\|_\infty = O(n_{eval}^{-s}), \end{equation*} $$

where s is the approximation order of the interpolation technique. This result is independent of the smoothness of f. That is, the number of function evaluations increases slowly even if f has some discontinuities. Hence the TE algorithm is very convenient in such situations.

A comparison between linear and non-linear interpolation techniques is also carried out. Our numerical results will show that non-linear techniques are dispensable in the TE strategy; since linear ones are computationally faster, they are preferable. In particular, we compare piecewise polynomial Lagrange interpolation of degrees 1 and 3 with the nonlinear PCHIP interpolation developed in [5]. In [1, 11], ENO interpolation was chosen instead of PCHIP, but PCHIP interpolation is monotone, non-oscillatory, stable as a subdivision process and faster than the ENO technique.

The paper is organized as follows: in Section 2 we recall Harten’s MRF and define the specific prediction schemes to be considered in our study. In Section 3 we recall the Truncation and Encode strategy described in [1, 11], and in Section 4 we perform a series of numerical experiments that confirm our observations. We close the paper with some conclusions and perspectives for future work.

Harten’s Multiresolution Framework
Presentation

The TE technique is based on Harten’s Multiresolution Framework (MRF), which relies on two basic elements: discretization and (compatible) reconstruction operators. In this paper we only consider the interpolatory framework for functions defined in the unit interval. The reader is referred to [3, 12] for a more complete description of Harten’s MRF.

In the interpolatory framework, the discretization operators $(\mathscr{D}_k)_{k\geq 0}$:

$$ \begin{equation*}\mathscr{D}_k:\mathscr{C}([0,1])\rightarrow \mathbb{R}^{n_k+1}, \qquad \mathscr{D}_k f = (f(\xi_i^k))_{i=0}^{n_k}\in \mathbb{R}^{n_k+1}\end{equation*} $$

obtain the point values of a function f on a sequence of nested grids $(\xi^k)_{k\geq 0}$, i.e.

$$ \begin{equation*}\nonumber \xi^k=(\xi_i^k)_{i=0}^{n_k}\in\mathbb{R}^{n_k+1}, \quad \xi_0^k=0, \, \xi_{n_k}^k=1,\quad \xi_i^k\lt\xi_{i+1}^k, \quad \xi^k_i=\xi^{k+1}_{2i}, \quad n_{k+1}=2 n_k. \end{equation*} $$

Together with the discretization operators, a sequence of interpolatory reconstruction operators $(\mathscr{R}_k)_{k\geq 0}$ is considered, which verify

$$ \begin{equation*}\mathscr{R}_k:\mathbb{R}^{n_k+1}\rightarrow\mathscr{C}([0,1]), \quad \mathscr{R}_k v^k\in \mathscr{C}([0,1]), \quad \mathscr{R}_k v^k(\xi_i^k)=v_i^k, \quad i=0,1,\ldots,n_k, \end{equation*} $$

with $\mathscr{C}([0,1]) = \{f:[0,1] \rightarrow \mathbb{R},\ f \text{ continuous}\}$.

Since $\mathscr{R}_k$ is interpolatory, the compatibility condition

$$ \begin{equation} \mathscr{D}_k\mathscr{R}_k = \mathbb{I}_k \end{equation} $$

is fulfilled, where $\mathbb{I}_k:\mathbb{R}^{n_k+1}\rightarrow\mathbb{R}^{n_k+1}$ is the identity operator.

Remark 1

An example of these concepts is the following. On the one hand, let $\xi_i^k=i2^{-k}$, $n_k = 2^k$. Then

$$ \begin{equation*}\nonumber \mathscr{D}_k f = (f(i2^{-k}))_{i=0}^{2^k} \in \mathbb{R}^{2^k+1}. \end{equation*} $$

On the other hand, we consider the reconstruction operators

$$ \begin{equation*}\nonumber \mathscr{R}_k v^k\in \mathscr{C}([0,1]), \quad (\mathscr{R}_k v^k)(\xi) = \mathscr{I}_1(\xi;\xi^k,v^k), \end{equation*} $$

where $\mathscr{I}_1$ is the 1st degree polynomial interpolation:

$$ \begin{equation*}\nonumber \mathscr{I}_1(\xi;\xi^k,v^k) = v_{i+1}^k \frac{\xi - \xi^k_{i}}{\xi^k_{i+1}-\xi^k_i} - v_i^k \frac{\xi - \xi^k_{i+1}}{\xi^k_{i+1}-\xi^k_i}, \qquad \xi\in [\xi^k_i,\xi^k_{i+1}]. \end{equation*} $$

Actually, $\mathscr{R}_kv^k$ is the unique polygonal (piecewise linear function) satisfying $(\mathscr{R}_kv^k)(i2^{-k})=v_i^k$ for all $0 \leq i \leq n_k$.
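As an illustration of Remark 1 (a minimal Python sketch assuming NumPy; the names D_k and R_k are ours, not from [1, 11]), the discretization and polygonal reconstruction can be realized as follows, and the compatibility condition can be verified directly:

```python
# Sketch of Remark 1: point-value discretization on dyadic grids and the
# polygonal (piecewise linear) reconstruction.
import numpy as np

def D_k(f, k):
    """Discretization: sample f at the 2^k + 1 dyadic points i 2^{-k}."""
    return f(np.linspace(0.0, 1.0, 2**k + 1))

def R_k(vk, xi):
    """Reconstruction: piecewise linear interpolant of v^k, evaluated at xi."""
    grid = np.linspace(0.0, 1.0, len(vk))
    return np.interp(xi, grid, vk)

k = 4
vk = D_k(lambda x: np.cos(np.pi * x), k)
xi_k = np.linspace(0.0, 1.0, 2**k + 1)
# Compatibility D_k R_k = I_k: reconstructing and re-sampling returns v^k.
assert np.allclose(R_k(vk, xi_k), vk)
```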

$\mathscr{D}_kf$ is the representation of f at the resolution level k, and will be denoted by $v^k$. In Lemma 3.1 of [12] it is proved that the operator that sends $v^{k+1} = \mathscr{D}_{k+1}f$ to $v^k = \mathscr{D}_kf$ is well-defined and linear. It is called the decimation operator and it is written $D_{k+1}^k$. Moreover, it can always be expressed as

$$ \begin{equation} D_{k+1}^k = \mathscr{D}_k\mathscr{R}_{k+1}, \end{equation} $$

although $\mathscr{R}_{k+1}$ could be non-linear. In particular, for the point-value discretization $\mathscr{D}_k$, the decimation operator is

$$ \begin{equation} D_{k+1}^k v^{k+1} = (v_{2i}^{k+1})_{i=0}^{n_{k+1}/2}. \end{equation} $$

Conversely, the prediction operator $P_k^{k+1}$ gives an approximation of $\mathscr{D}_{k+1}f$ from $\mathscr{D}_kf$:

$$ \begin{equation*}\nonumber P_k^{k+1}:\mathbb{R}^{n_k+1}\rightarrow\mathbb{R}^{n_{k+1}+1}, \quad P_k^{k+1}:=\mathscr{D}_{k+1}\mathscr{R}_k. \end{equation*} $$

Notice the similarity with (3). The error of $P_k^{k+1}$ is measured as

$$ \begin{equation*}\nonumber \delta^{k+1}:= \mathscr{D}_{k+1} f - P_k^{k+1} \mathscr{D}_k f. \end{equation*} $$

By (4), (3) and (2), in this order,

$$ \begin{equation*}(\delta^{k+1}_{2i})_{i=0}^{n_{k+1}/2}=D_{k+1}^k \delta^{k+1}= \mathscr{D}_k\mathscr{R}_{k+1} \mathscr{D}_{k+1}f - \mathscr{D}_k\mathscr{R}_{k+1}\mathscr{D}_{k+1}\mathscr{R}_k\mathscr{D}_k f = \mathscr{D}_kf - \mathscr{D}_k\mathscr{R}_k\mathscr{D}_k f = \mathscr{D}_kf - \mathscr{D}_k f = 0. \end{equation*} $$

So the even values of δk+1 are 0. Hence, we will pay attention exclusively to the odd values of δk+1, which are named the detail coefficients:

$$ \begin{equation*} d_i^{k}:=\delta^{k+1}_{2i+1}= f(\xi_{2i+1}^{k+1}) - (P_k^{k+1} \mathscr{D}_k f)_{2i+1}, \qquad 0\leq i \lt n_{k+1}/2 = n_k. \end{equation*} $$

For simplicity, let us denote $v^k := \mathscr{D}_kf$. Then

$$ \begin{equation} d_i^{k}= v_{2i+1}^{k+1} - (P_k^{k+1} v^k)_{2i+1}, \qquad 0\leq i \lt n_{k+1}/2 = n_k. \end{equation} $$

Let $a \in \mathbb{R}^\alpha$ and $b \in \mathbb{R}^\beta$. Let us denote by $(a;b) \in \mathbb{R}^{\alpha+\beta}$ the concatenation of a and b. Since $v^k \in \mathbb{R}^{n_k+1} = \mathbb{R}^{n_{k+1}/2+1}$ and $d^k \in \mathbb{R}^{n_{k+1}/2}$, then $(v^k;d^k) \in \mathbb{R}^{n_{k+1}+1}$. In fact, there exists a bijection between $(v^k;d^k)$ and $v^{k+1} \in \mathbb{R}^{n_{k+1}+1}$:

$$ \begin{equation*}v^{k+1} = P_k^{k+1} v^k + \delta^{k+1} \longleftrightarrow \begin{cases} v^k=D_{k+1}^k v^{k+1} \\ \delta^{k+1}= v^{k+1} - P_k^{k+1} v^k \end{cases}. \end{equation*} $$

That is

$$ \begin{equation} \label{eq:bijection} \begin{cases} v^{k+1}_{2i}=v^k_i \\ v^{k+1}_{2i+1}= (P_k^{k+1} v^k)_{2i+1} + d_i^k \end{cases} \longleftrightarrow \begin{cases} v^k=D_{k+1}^k v^{k+1} \\ d_i^{k}= v^{k+1}_{2i+1} - (P_k^{k+1}v^k)_{2i+1} \end{cases}. \end{equation} $$

Abusing notation, we denote by $(a^1;a^2;a^3;\ldots)$ the concatenation of several vectors. Note that (6) can be applied recursively to obtain the following equivalences

$$ v^k \leftrightarrow (v^{k-1};d^{k-1}) \leftrightarrow (v^{k-2};d^{k-2};d^{k-1}) \leftrightarrow \cdots \leftrightarrow (v^0;d^0;d^1;\ldots;d^{k-1})\in\mathbb{R}^{n_k+1}. $$

The multiresolution decomposition of k levels, $M^k$ (or multiresolution transform), is defined as

$$ \begin{equation*}\nonumber M^k v^k = (v^0;d^0;d^1;\ldots;d^{k-1}). \end{equation*} $$

We denote its inverse, the inverse multiresolution transform, by

$$ \begin{equation*}\nonumber M^{-k} (v^0;d^0;d^1;\ldots;d^{k-1}) = v^k. \end{equation*} $$
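The multiresolution transform and its inverse are straightforward to implement. The sketch below is our own Python rendering (assuming NumPy and $n_0 = 1$; the midpoint local rule `I(v, i)` is passed as a parameter, and concrete choices are given in Remark 2 below), following (4) and (6):

```python
# Sketch of M^k and M^{-k} in the point-value setting. `I(v, i)` is a local
# midpoint rule predicting the value at xi_{2i+1}^{k+1} from level-k data.
import numpy as np

def decimate(v_fine):
    """D_{k+1}^k, eq. (4): keep the even-indexed samples."""
    return v_fine[::2]

def predict(v_coarse, I):
    """P_k^{k+1}: even entries are copied, odd ones come from the rule I."""
    v_fine = np.empty(2 * len(v_coarse) - 1)
    v_fine[::2] = v_coarse
    v_fine[1::2] = [I(v_coarse, i) for i in range(len(v_coarse) - 1)]
    return v_fine

def mr_forward(vK, I):
    """M^K: v^K -> (v^0; d^0; ...; d^{K-1}), using the bijection (6)."""
    details = []
    v = vK
    while len(v) > 2:                      # down to n_0 = 1, i.e. two points
        v_coarse = decimate(v)
        details.append(v[1::2] - predict(v_coarse, I)[1::2])
        v = v_coarse
    return v, details[::-1]                # details in coarse-to-fine order

def mr_inverse(v0, details, I):
    """M^{-K}: rebuild v^K from (v^0; d^0; ...; d^{K-1})."""
    v = v0
    for d in details:
        v = predict(v, I)
        v[1::2] += d
    return v

# Round trip with the polygonal rule of Remark 2 (exact reconstruction).
I_poly = lambda v, i: 0.5 * (v[i] + v[i + 1])
vK = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 2**6 + 1)**2)
v0, ds = mr_forward(vK, I_poly)
assert np.allclose(mr_inverse(v0, ds, I_poly), vK)
```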

Interpolatory prediction operators

A classical approach to design prediction operators consists in using piecewise polynomial interpolation techniques, $\mathscr{I}(\xi;\xi^k, v^k)$; see for instance Remark 1. Given some values $v^k$ on a grid $\xi^k$, they provide a function of $\xi$ satisfying

$$ \begin{equation*}\nonumber \mathscr{I}(\xi^k_i;\xi^k,v^k) = v^k_i. \end{equation*} $$

The corresponding prediction operators are obtained by evaluating $\mathscr{I}(\cdot;\xi^k, v^k)$ on the finer grid $\xi^{k+1}$:

$$ \begin{equation*} P_k^{k+1} v^k = (\mathscr{I}(\xi^{k+1}_i;\xi^k,v^k))_{i=0}^{n_{k+1}}. \end{equation*} $$

For equally-spaced grids, that is $\xi^k_{i+1}-\xi^k_i = h_k$, the reconstruction technique can usually be expressed in terms of a local rule I, as follows:

$$ \begin{equation*} \mathscr{I}(\xi^{k+1}_{2i} ;\xi^k,v^k) = v_i^k, \qquad \mathscr{I}(\xi^{k+1}_{2i+1} ;\xi^k,v^k) = I(v^k_{i-l+1},\ldots,v_{i}^k,\ldots,v_{i+r-1}^k), \end{equation*} $$

for some $l, r > 0$. In such a case,

$$ \begin{equation*}\nonumber (P_k^{k+1} v^k)_{2i} = v_i^k, \qquad (P_k^{k+1} v^k)_{2i+1} = I(v^k_{i-l+1},\ldots,v_{i}^k,\ldots,v_{i+r-1}^k). \end{equation*} $$

The accuracy of the interpolation technique is an important aspect to be taken into account. $\mathscr{I}$ has order of approximation s if

$$ \begin{equation*} \| \mathscr{I}(\,\cdot\,;\xi^k,\mathscr{D}_k f) - f\|_\infty \leq C h_k^s, \quad \forall k\geq 0, \end{equation*} $$

for any f sufficiently smooth. It can be written in terms of the local rule as

$$ \begin{equation} I(f(\xi_{i-l+1}^k),\ldots,f(\xi_{i}^k),\ldots,f(\xi_{i+r-1}^k))=f(\xi_{2i+1}^{k+1})+O(h_k^{s}), \quad \forall i,k. \end{equation} $$

Remark 2

Some examples of local rules are

Polygonal rule or 1st degree polynomial rule:

$$ \begin{equation*}\nonumber I(v_i^k,v_{i+1}^k)=\frac12 v_i^k + \frac12 v_{i+1}^k. \end{equation*} $$

Cubic rule or 3rd degree polynomial rule:

$$ \begin{equation*}\nonumber I(v_{i-1}^k,v_i^k,v_{i+1}^k,v_{i+2}^k)=-\frac1{16} v_{i-1}^k + \frac{9}{16} v_{i}^k + \frac{9}{16} v_{i+1}^k - \frac{1}{16} v_{i+2}^k. \end{equation*} $$

PCHIP rule:

$$ \begin{equation*}I(v_{i-1}^k,v_i^k,v_{i+1}^k,v_{i+2}^k)= \frac12 v_i^k + \frac12 v_{i+1}^k -\frac18\left( H(v_{i+1}^k-v_{i}^k,v_{i+2}^k-v_{i+1}^k) - H(v_{i}^k-v_{i-1}^k,v_{i+1}^k-v_{i}^k) \right), \end{equation*} $$

where $H(x,y)=\frac{2xy}{x+y}$ if x, y have the same sign, and $H(x, y) = 0$ otherwise.

Notice that the polygonal and cubic rules are linear, while the PCHIP rule is nonlinear. It can be proven that the polygonal rule has approximation order 2, the cubic rule has order 4, and the PCHIP rule has order 4 if $(v_{i-1}^k,v_i^k,v_{i+1}^k,v_{i+2}^k)$ is strictly monotone, but order 2 in general.
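In code, the three local rules read as follows (a Python sketch; the fallback to the polygonal rule on the two boundary cells is our own choice among several reasonable ones, since the centered four-point stencil is not available there):

```python
# The three midpoint local rules of Remark 2, acting on level-k data v.
def I_polygonal(v, i):
    return 0.5 * v[i] + 0.5 * v[i + 1]

def I_cubic(v, i):
    if i == 0 or i == len(v) - 2:          # stencil clipped at the boundary
        return I_polygonal(v, i)
    return (-v[i - 1] + 9.0 * v[i] + 9.0 * v[i + 1] - v[i + 2]) / 16.0

def H(x, y):
    """Harmonic mean, switched off when x and y have opposite signs."""
    return 2.0 * x * y / (x + y) if x * y > 0.0 else 0.0

def I_pchip(v, i):
    if i == 0 or i == len(v) - 2:
        return I_polygonal(v, i)
    return (0.5 * v[i] + 0.5 * v[i + 1]
            - 0.125 * (H(v[i + 1] - v[i], v[i + 2] - v[i + 1])
                       - H(v[i] - v[i - 1], v[i + 1] - v[i])))
```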

Prediction operators in Harten’s MRF define subdivision schemes. These are iterative processes where denser and denser sets of data are generated by applying the local rules recursively. A subdivision scheme is convergent if this process converges to a continuous function. See for instance [4, 9, 10].

In [5] the subdivision scheme defined from the PCHIP rule was studied. Its authors found some advantages with respect to other subdivision schemes, such as the ENO scheme [2]. They proved that PCHIP is monotonicity preserving, which means that it converges to a monotone function if monotone data is used. It is also Lipschitz stable (see [5] for specific definitions), which makes it more suitable as a prediction operator in Harten’s MRF than the non-linear Essentially Non Oscillatory (ENO) reconstruction techniques used in [1, 11]. The monotonicity preserving property is particularly interesting when data coming from discontinuous functions needs to be handled, because it avoids oscillatory behavior in the reconstructed data around jumps and steep gradients (see [5]), just as the ENO reconstructions do.

If f is sufficiently smooth, by (7)

$$ \begin{equation*}d_i^{k}= v^{k+1}_{2i+1} - (P_k^{k+1} v^k)_{2i+1} = f(\xi_{2i+1}^{k+1}) - I(f(\xi_{i-l+1}^k),\ldots,f(\xi_{i+r-1}^k)) = O(h_k^s). \end{equation*} $$

So

$$ \begin{equation*}\nonumber d_{2i}^{k+1} \approx \left(\frac{h_{k+1}}{h_k}\right)^s d_i^k \approx d_{2i+1}^{k+1}. \end{equation*} $$

We are assuming that the grids $(\xi^k)_{k\geq 0}$ are nested and equally spaced. Thus

$$ \begin{equation*}\nonumber O(h_k^s) =O(2^{-ks}), \qquad \left(\frac{h_{k+1}}{h_k}\right)^s = 2^{-s}. \end{equation*} $$

If f has a discontinuity in its $s_0$-th derivative, $0 \leq s_0 \leq s$, then

$$ \begin{equation*}\nonumber d_i^k = O(2^{-ks_0}), \qquad d_{2i}^{k+1} \approx 2^{-s_0} d_i^k \approx d_{2i+1}^{k+1}. \end{equation*} $$
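These decay rates can be observed numerically. A quick check (our own illustration, reusing I_polygonal and I_cubic from the sketch in Remark 2 and assuming NumPy) for a smooth f: the maximal interior detail should shrink by a factor of about $2^s$ from one level to the next, with $s = 2$ for the polygonal rule and $s = 4$ for the cubic rule:

```python
import numpy as np

f = lambda x: np.sin(2 * np.pi * x**2)
for I, s in [(I_polygonal, 2), (I_cubic, 4)]:
    d_prev = None
    for k in range(4, 9):
        vk  = f(np.linspace(0.0, 1.0, 2**k + 1))
        vk1 = f(np.linspace(0.0, 1.0, 2**(k + 1) + 1))
        mids = vk1[1::2][1:-1]             # interior midpoints of level k+1
        pred = np.array([I(vk, i) for i in range(1, 2**k - 1)])
        d = np.max(np.abs(mids - pred))    # max interior detail at level k
        if d_prev is not None:
            print(f"s = {s}: level {k}, decay factor {d_prev / d:.1f}")
        d_prev = d
```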

The Truncation and Encode approach

In this section, we describe the TE method restricted to the approximation of functions.

Given a target error $\epsilon > 0$, a piecewise smooth function $f: [0, 1]\rightarrow \mathbb{R}$ and a suitable finest resolution level K, the goal is to obtain some $\hat v^K\in\mathbb{R}^{n_K+1}$ such that

$$ \begin{equation} \|v^K-\hat v^K\|_\infty\lessapprox \epsilon, \qquad v^K = \mathscr{D}_K f, \end{equation} $$

using as few evaluations of f as possible.

The algorithm computes $\hat v^k$ recursively, from the coarsest level $k = 0$ to the finest one $k = K$. The construction of $(\hat v^k)_{k=0}^K$ begins by evaluating f at $\xi^0$ and $\xi^1$: $\hat v^0 = \mathscr{D}_0 f$ and $\hat v^1 = \mathscr{D}_1 f$. The details are computed using (5):

$$ \begin{equation*} \hat d^0_j = \hat v_{2j+1}^{1} - (P_{0}^{1}\hat v^{0})_{2j+1} = d^0_j = v_{2j+1}^{1} - (P_{0}^{1} v^{0})_{2j+1}. \end{equation*} $$

We compute $\hat v^k$, $k > 1$, iteratively: for each $0 \leq i \leq n_k$, and for each $1 \leq k < K$ (K prefixed), we set $\hat v^{k+1}_{2i}=\hat v^k_i$ and

$$ \begin{equation} \label{eq:recurrence} \hat v_{2i+1}^{k+1}= \begin{cases} (P_k^{k+1}\hat v^k)_{2i+1}, &\text{if }|\hat d^{k-1}_j|\lt\epsilon^k \\ f(\xi_{2i+1}^{k+1}), &\text{if }|\hat d^{k-1}_j|\geq\epsilon^k \end{cases}, \qquad \forall i\in\{2j,2j+1\}, \end{equation} $$

where

$$ \begin{equation} \hat d_j^{k-1}=\hat v_{2j+1}^{k} - (P_{k-1}^{k}\hat v^{k-1})_{2j+1}. \end{equation} $$

Figure 1 shows a sketch of recurrence (9). It has a simple interpretation: on one hand, if $|\hat d^{k-1}_j|\lt\epsilon^k$, then a 'small' prediction error indicates that $P_{k-1}^k$ approximates well the function at $\xi^k_{2j+1}$; thus $P_k^{k+1}$ may also provide a good approximation at $\xi^{k+1}_{2i+1}$, $i \in \{2j, 2j + 1\}$, and we do not need to evaluate f at these points.

Fig. 1

Sketch of recurrence (9). The value of $\hat v^{k+1}_{2i+1}$, $i \in \{2j, 2j + 1\}$, depends on $\hat d_j^{k-1}=\hat v_{2j+1}^{k} - (P_{k-1}^{k}\hat v^{k-1})_{2j+1}$.

On the other hand, $|\hat d^{k-1}_j|\geq\epsilon^k$ means that $P_{k-1}^k$ did not provide a sufficiently 'good' approximation to the function values, hence we prefer to evaluate f to obtain $\hat v^{k+1}_{2i+1}$ as the exact value of $v^{k+1}_{2i+1}$. It should be noted that if $|\hat d^{k-1}_j|\lt\epsilon^k$ then $\hat d^{k}_i=0$, and if $|\hat d^{k-1}_j|\geq \epsilon^k$ then $\hat d^k_i = f(\xi^{k+1}_{2i+1}) - (P_k^{k+1}\hat v^k)_{2i+1}$, $i \in \{2j, 2j + 1\}$. Also, $|\hat d^{k-1}_j|\geq \epsilon^k$ implies that $v^{k+1}_{4j+1}$ ($i = 2j$) and $v^{k+1}_{4j+3}$ ($i = 2j + 1$) must be computed, so f is evaluated twice.
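For concreteness, the sketch below implements recurrence (9)-(10) in Python with the flat thresholds $\epsilon^k = \epsilon$ studied in this paper (our own rendering, assuming NumPy; `I` is one of the local rules of Remark 2, $n_0 = 1$ as in the experiments, and boundary handling is inherited from the rule):

```python
import numpy as np

def te_approximate(f, K, eps, I):
    """Return (v_hat, n_eval): v_hat approximates f on the 2^K + 1 dyadic
    points following (9)-(10), evaluating f only where details are large."""
    v = f(np.linspace(0.0, 1.0, 3))            # v^1 = D_1 f exactly (n_0 = 1)
    n_eval = 3
    d_prev = np.array([v[1] - I(v[::2], 0)])   # d^0, also computed exactly
    for k in range(1, K):
        xi = np.linspace(0.0, 1.0, 2 * len(v) - 1)
        v_new = np.empty(2 * len(v) - 1)
        v_new[::2] = v                     # even entries are inherited
        d = np.zeros(len(v) - 1)
        for i in range(len(v) - 1):        # one midpoint per level-k cell
            pred = I(v, i)                 # (P_k^{k+1} v_hat^k)_{2i+1}
            if abs(d_prev[i // 2]) < eps:  # small detail: trust the prediction
                v_new[2 * i + 1] = pred    # and d_i^k = 0
            else:                          # large detail: evaluate f
                v_new[2 * i + 1] = f(xi[2 * i + 1])
                d[i] = v_new[2 * i + 1] - pred
                n_eval += 1
        v, d_prev = v_new, d
    return v, n_eval
```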

At the end of the recurrence, we obtain the vectors $\hat v^K$ and $(\hat d^k)_{k=0}^{K-1}$. Moreover, we know the MR decomposition of $\hat v^k$ because of (10):

$$ \begin{equation*}\nonumber M^k \hat v^k=(\hat v^0;\hat d^0;\ldots;\hat d^{k-1})=(v^0;d^0;\hat d^1;\ldots;\hat d^{k-1}). \end{equation*} $$

In [1] the authors conclude that (8) is fulfilled if $\epsilon^k = 2^{k-K}\epsilon$. However, we notice in the numerical experiments of [1, 11] that this thresholding strategy leads to

$$ \begin{equation*} \| v^K - \hat v^K \|_\infty \overset{K\rightarrow\infty}{\longrightarrow}0. \end{equation*} $$

Thus, the real precision of $\hat v^K$ depends on ε and K. Moreover, if a large K is needed, because a very fine mesh is demanded, then it turns out that $\| v^K - \hat v^K \|_\infty$ is unnecessarily small and the technique loses efficiency. We will see that the thresholding criterion $\epsilon^k = 2^{k-K}\epsilon$ proposed in [1, 11] can be relaxed to $\epsilon^k = \epsilon$, still ensuring (8) with a precision that stays close to ε. Our strategy reduces the number of evaluations. In addition, the non-dependence on K allows us to take this value as large as necessary to ensure the desired accuracy in the numerical integration of f.

Another advantage of using $\epsilon^k = \epsilon$ is that it is not mandatory to prefix the finest level K, so we can select another criterion to stop the recurrence, for example, that a certain number of evaluations has been reached. On the contrary, taking $\epsilon^k = 2^{k-K}\epsilon$ demands knowing K before the execution starts.

Numerical experiments

The purpose of the present section is to examine the efficiency of the TE algorithm under the various strategies considered in the previous sections. In particular, we show that the threshold strategy $\epsilon^k = \epsilon$, $k \geq 0$, leads to the desired target accuracy in a much more efficient manner than $\epsilon^k = 2^{k-K}\epsilon$, as proposed in [1, 11].

In this paper, the efficiency is a measure of the number of evaluations required to reach a target error in the computation of v^K $\hat v^K$ . We show in this section that the efficiency of the TE strategy relies on the interpolation technique used in the prediction operator. In addition, we show numerically that the strategy εk = ε ensures

$$ \begin{equation*} 2^{-s}\epsilon \lessapprox\|v^k - \hat v^k\|_\infty\lessapprox \epsilon, \end{equation*} $$

where s is the order of approximation of the local rule; we also show that the number of evaluations increases slowly with respect to the precision:

$$ \begin{equation*} n_{eval} = O(\|v^k - \hat v^k\|_\infty^{-1/s}) = O(\epsilon^{-1/s}). \end{equation*} $$

Hence, highly accurate interpolation techniques improve the efficiency.

On the other hand, we will see that non-linear interpolation techniques do not provide, in general, any advantage with respect to the linear ones. Thus, linear prediction operators are preferable, because they are faster to compute.

The TE authors consider the following efficiency measures. The first one is

$$ \begin{equation*} \mu_{cr} = \frac{n_K +1}{n_w + n_0 + 1}, \qquad n_w=\#\{|\hat d^k_i| \geq \epsilon\}, \end{equation*} $$

where $n_w$ is the number of detail coefficients with absolute value greater than or equal to ε. The second one is

$$ \begin{equation*} \tau = \frac{n_K+1}{n_{eval}}, \end{equation*} $$

where $n_{eval}$ is the total number of function evaluations used to obtain $\hat v^K$. $\tau = 1$ if the function was evaluated everywhere in $\xi^K$, and the larger τ is, the more values $v_i^k$ were interpolated instead of evaluated.

In the TE method, each $|\hat d^{k-1}_i| \geq \epsilon$ implies two new function evaluations at the resolution level $k + 1$. So $n_{eval}$ can be written as

$$ \begin{equation} n_{eval} = 2 n_w + n_0. \end{equation} $$

Using (11), τ can be expressed as

$$ \begin{equation*}\nonumber \tau = \frac{n_K+1}{2 n_w + n_0} \quad \Longrightarrow \quad \mu_{cr}\approx 2 \tau, \end{equation*} $$

as shown in Tables 1 and 2 of [1, 11]. In this paper, we will only consider τ.
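A tiny worked instance of these measures (our own numbers, chosen only for illustration), using (11):

```python
# With n_0 = 1, K = 10 and, say, n_w = 8 details above the threshold:
n_0, K, n_w = 1, 10, 8
n_K = n_0 * 2**K                       # finest level has n_K + 1 points
n_eval = 2 * n_w + n_0                 # (11): 17 evaluations
tau = (n_K + 1) / n_eval               # = 1025 / 17 ~ 60.3
mu_cr = (n_K + 1) / (n_w + n_0 + 1)    # = 1025 / 10 = 102.5, i.e. ~ 2 tau
print(n_eval, tau, mu_cr)
```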

Table 1. Parameters for the numerical experiments.

$n_{eval}$: number of function evaluations done by TE to compute $\hat v^K$.
$n_K+1 = 2^K+1$: number of function evaluations needed to compute $v^K$.
$\tau = \frac{n_K+1}{n_{eval}}$.
$e_K=\| v^K - \hat v^K\|_\infty = \sup_i |v^K_i - \hat v^K_i|$.
$\| v^K - \hat v^K\|_1 = n_K^{-1}\sum_{i=0}^{n_K} |v^K_i - \hat v^K_i|$.
$E=\int_0^1 f(x) dx$.
$\hat E_K$ is obtained by applying the trapezoidal rule to $\hat v^K$.

Table 2. Parameters as specified in Table 1 for $f_1$, $\epsilon^k = \epsilon = 10^{-1}$ and the polygonal rule.

K  | $n_{eval}$ | $n_K+1$    | τ          | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
3  | 1.3000e+01 | 1.7000e+01 | 1.3077e+00 | 4.6488e-02 | 5.5437e-03 | 4.2125e-03
4  | 1.7000e+01 | 3.3000e+01 | 1.9412e+00 | 4.6488e-02 | 9.2544e-03 | 1.0395e-03
5  | 1.7000e+01 | 6.5000e+01 | 3.8235e+00 | 4.6488e-02 | 1.1582e-02 | 1.0395e-03
6  | 1.7000e+01 | 1.2900e+02 | 7.5882e+00 | 4.7016e-02 | 1.2225e-02 | 1.0395e-03
7  | 1.7000e+01 | 2.5700e+02 | 1.5118e+01 | 4.7016e-02 | 1.2411e-02 | 1.0395e-03
8  | 1.7000e+01 | 5.1300e+02 | 3.0176e+01 | 4.7026e-02 | 1.2470e-02 | 1.0395e-03
9  | 1.7000e+01 | 1.0250e+03 | 6.0294e+01 | 4.7033e-02 | 1.2491e-02 | 1.0395e-03
10 | 1.7000e+01 | 2.0490e+03 | 1.2053e+02 | 4.7033e-02 | 1.2499e-02 | 1.0395e-03
11 | 1.7000e+01 | 4.0970e+03 | 2.4100e+02 | 4.7034e-02 | 1.2503e-02 | 1.0395e-03

In Tables 2 to 7, we carry out the same experiment as in section 5.1 of [11]. Let us consider the parameters of Table 1 and the following functions, which are shown in Figure 2:

Fig. 2

Graphic representation of $f_1$ (left) and $f_2$ (right). Both functions are defined in (12).

$$ \begin{equation} \label{eq:f1_f2} f_1(x)=\sin(2\pi x^2), \qquad f_2(x)=\begin{cases} \sin(2\pi x^2), & x\leq 11/20 \\ \sin(2\pi x^2) + 1, & x\gt 11/20 \end{cases}, \qquad x\in[0,1]. \end{equation} $$

In Tables 2 to 4 we apply the TE strategy with $\epsilon^k = 10^{-1}$, $k \geq 0$, to the smooth function $f_1$ in Figure 2 and display the parameters of Table 1. We use the prediction operators $P_k^{k+1}$ described in Remark 2, i.e. the polygonal, cubic and PCHIP rules, while in [11] the polygonal, cubic and ENO rules were used. As mentioned previously, we consider that the PCHIP prediction is more convenient because of its stability. As in the original papers, we set $n_0 = 1$, so $\xi^0 = \{0, 1\}$. From these experiments we extract the following conclusions.
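A hedged usage sketch of this experiment, reusing te_approximate and the local rules from the previous sections ($f_1$ and $f_2$ as in (12)). Exact evaluation counts depend on implementation details such as the boundary stencils, so the output need not reproduce the tables to the last digit:

```python
import numpy as np

f1 = lambda x: np.sin(2 * np.pi * x**2)
f2 = lambda x: f1(x) + (x > 11.0 / 20.0)

for K in range(3, 12):
    v_hat, n_eval = te_approximate(f1, K, 1e-1, I_cubic)
    vK = f1(np.linspace(0.0, 1.0, 2**K + 1))
    e_K = np.max(np.abs(vK - v_hat))
    print(f"K = {K:2d}  n_eval = {n_eval:4d}  "
          f"tau = {(2**K + 1) / n_eval:7.2f}  e_K = {e_K:.4e}")
```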

Table 3. Parameters as specified in Table 1 for $f_1$, $\epsilon^k = \epsilon = 10^{-1}$ and the cubic rule.

K  | $n_{eval}$ | $n_K+1$    | τ          | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
3  | 1.3000e+01 | 1.7000e+01 | 1.3077e+00 | 5.6435e-03 | 7.8277e-04 | 4.4742e-03
4  | 1.3000e+01 | 3.3000e+01 | 2.5385e+00 | 1.1245e-02 | 1.7642e-03 | 1.1215e-03
5  | 1.3000e+01 | 6.5000e+01 | 5.0000e+00 | 1.1245e-02 | 1.9634e-03 | 2.7580e-04
6  | 1.3000e+01 | 1.2900e+02 | 9.9231e+00 | 1.2010e-02 | 2.0193e-03 | 6.3898e-05
7  | 1.3000e+01 | 2.5700e+02 | 1.9769e+01 | 1.2010e-02 | 2.0373e-03 | 1.0892e-05
8  | 1.3000e+01 | 5.1300e+02 | 3.9462e+01 | 1.2010e-02 | 2.0439e-03 | 2.3614e-06
9  | 1.3000e+01 | 1.0250e+03 | 7.8846e+01 | 1.2010e-02 | 2.0465e-03 | 5.6749e-06
10 | 1.3000e+01 | 2.0490e+03 | 1.5762e+02 | 1.2012e-02 | 2.0477e-03 | 6.5032e-06
11 | 1.3000e+01 | 4.0970e+03 | 3.1515e+02 | 1.2012e-02 | 2.0482e-03 | 6.7103e-06

Table 4. Parameters as specified in Table 1 for $f_1$, $\epsilon^k = \epsilon = 10^{-1}$ and the PCHIP rule.

K  | $n_{eval}$ | $n_K+1$    | τ          | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
3  | 1.3000e+01 | 1.7000e+01 | 1.3077e+00 | 1.0606e-02 | 1.5196e-03 | 4.2125e-03
4  | 1.3000e+01 | 3.3000e+01 | 2.5385e+00 | 2.3126e-02 | 3.8381e-03 | 1.0814e-03
5  | 1.3000e+01 | 6.5000e+01 | 5.0000e+00 | 2.3126e-02 | 4.3212e-03 | 2.3874e-04
6  | 1.3000e+01 | 1.2900e+02 | 9.9231e+00 | 2.3126e-02 | 4.4675e-03 | 2.6119e-05
7  | 1.3000e+01 | 2.5700e+02 | 1.9769e+01 | 2.3126e-02 | 4.5131e-03 | 2.7146e-05
8  | 1.3000e+01 | 5.1300e+02 | 3.9462e+01 | 2.3144e-02 | 4.5297e-03 | 4.0470e-05
9  | 1.3000e+01 | 1.0250e+03 | 7.8846e+01 | 2.3177e-02 | 4.5360e-03 | 4.3801e-05
10 | 1.3000e+01 | 2.0490e+03 | 1.5762e+02 | 2.3177e-02 | 4.5387e-03 | 4.4634e-05
11 | 1.3000e+01 | 4.0970e+03 | 3.1515e+02 | 2.3177e-02 | 4.5399e-03 | 4.4842e-05

As observed in Table 2, the polygonal rule uses 13 evaluations for $K = 3$, but 17 for $K = 4$. The cubic and PCHIP rules only use 13 evaluations for any $K \geq 3$, as shown in Tables 3 and 4. From this fact, we deduce that there is a level $k_0$ such that for all $k \geq k_0$ the TE algorithm does not evaluate $f_1$ anymore. Actually, this property may hold for a general smooth function and a given threshold $\epsilon > 0$.

Notice also that $e_K = \|v^K-\hat v^K\|_\infty$ is always smaller than ε, but it stays close to ε, a clear advantage of using $\epsilon^k = \epsilon$. Indeed, it can be observed in Tables 1 and 2 of [1, 11] that $e_K$ tends to zero when K increases when the strategy $\epsilon^k = 2^{k-K}\epsilon$ is selected. Since our target accuracy is $e_K \lessapprox \epsilon$ with as few evaluations as possible, if the TE strategy uses just the necessary evaluations, then $e_K$ should be close to ε. Thus, the fact that $e_K$ tends to zero with the criterion $\epsilon^k = 2^{k-K}\epsilon$ means that the function is being evaluated more than is actually needed.

As explained in Section 1, the following condition is fulfilled:

$$ \begin{equation} |E-\hat E_K| \leq |E - E_K| + \|v^K - \hat v^K \|_1 \leq |E - E_K| + \|v^K - \hat v^K \|_\infty. \end{equation} $$

Since $f_1$ is smooth, $|E-E_K| = O(2^{-2K})$. Thus, for large values of K, (13) becomes

$$ \begin{equation} |E-\hat E_K| \lessapprox \|v^K - \hat v^K \|_1 \leq \|v^K - \hat v^K \|_\infty \lessapprox \epsilon, \end{equation} $$

which can also be checked in Tables 2 to 4 for any K > 3.

The smallest $e_K$ is achieved with the cubic rule, and the largest with the polygonal rule. Between them is the PCHIP rule, with smaller $e_K$ and fewer evaluations than the polygonal rule, but larger $e_K$ with the same number of evaluations as the cubic rule. Hence the cubic rule seems better than the PCHIP rule for smooth functions, probably a consequence of the better approximation properties of the cubic rule in all smooth regions, while the PCHIP formula is as accurate as the cubic rule only in monotone regions.

In Tables 5 to 7 we repeat the same experiment for the function $f_2$, which has a jump discontinuity at $\xi = 11/20$. Once again, we use $\epsilon^k = \epsilon = 10^{-1}$, and we explore the effect of increasing K.

Table 5. Parameters as specified in Table 1 for $f_2$, $\epsilon^k = \epsilon = 10^{-1}$ and the polygonal rule.

K  | $n_{eval}$ | $n_K+1$    | τ          | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
3  | 1.3000e+01 | 1.7000e+01 | 1.3077e+00 | 4.6488e-02 | 5.5437e-03 | 2.2962e-02
4  | 1.9000e+01 | 3.3000e+01 | 1.7368e+00 | 4.6488e-02 | 7.8089e-03 | 3.5761e-03
5  | 2.1000e+01 | 6.5000e+01 | 3.0952e+00 | 4.6488e-02 | 9.9419e-03 | 4.0614e-03
6  | 2.3000e+01 | 1.2900e+02 | 5.6087e+00 | 4.7016e-02 | 1.0549e-02 | 1.3192e-04
7  | 2.5000e+01 | 2.5700e+02 | 1.0280e+01 | 4.7016e-02 | 1.0726e-02 | 1.8242e-03
8  | 2.7000e+01 | 5.1300e+02 | 1.9000e+01 | 4.7026e-02 | 1.0781e-02 | 8.4798e-04
9  | 2.9000e+01 | 1.0250e+03 | 3.5345e+01 | 4.7033e-02 | 1.0800e-02 | 3.5975e-04
10 | 3.1000e+01 | 2.0490e+03 | 6.6097e+01 | 4.7033e-02 | 1.0808e-02 | 6.0389e-04
11 | 3.3000e+01 | 4.0970e+03 | 1.2415e+02 | 4.7034e-02 | 1.0811e-02 | 7.2597e-04

Table 6. Parameters as specified in Table 1 for $f_2$, $\epsilon^k = \epsilon = 10^{-1}$ and the cubic rule.

K  | $n_{eval}$ | $n_K+1$    | τ          | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
3  | 1.3000e+01 | 1.7000e+01 | 1.3077e+00 | 6.1742e-02 | 4.3700e-03 | 1.9318e-02
4  | 1.5000e+01 | 3.3000e+01 | 2.2000e+00 | 9.7336e-02 | 7.6400e-03 | 1.6578e-03
5  | 1.7000e+01 | 6.5000e+01 | 3.8235e+00 | 9.7336e-02 | 9.0579e-03 | 9.4329e-03
6  | 1.9000e+01 | 1.2900e+02 | 6.7895e+00 | 9.7601e-02 | 9.8133e-03 | 5.2198e-03
7  | 2.1000e+01 | 2.5700e+02 | 1.2238e+01 | 9.8885e-02 | 1.0169e-02 | 3.5400e-03
8  | 2.3000e+01 | 5.1300e+02 | 2.2304e+01 | 9.8885e-02 | 1.0359e-02 | 4.6595e-03
9  | 2.5000e+01 | 1.0250e+03 | 4.1000e+01 | 9.9210e-02 | 1.0448e-02 | 5.0960e-03
10 | 2.7000e+01 | 2.0490e+03 | 7.5889e+01 | 9.9210e-02 | 1.0496e-02 | 4.8203e-03
11 | 2.9000e+01 | 4.0970e+03 | 1.4128e+02 | 9.9272e-02 | 1.0518e-02 | 4.7122e-03

Table 7. Parameters as specified in Table 1 for $f_2$, $\epsilon^k = \epsilon = 10^{-1}$ and the PCHIP rule.

K  | $n_{eval}$ | $n_K+1$    | τ          | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
3  | 1.3000e+01 | 1.7000e+01 | 1.3077e+00 | 5.2400e-02 | 3.9780e-03 | 2.0350e-02
4  | 1.5000e+01 | 3.3000e+01 | 2.2000e+00 | 5.3922e-02 | 7.1545e-03 | 8.9677e-04
5  | 1.7000e+01 | 6.5000e+01 | 3.8235e+00 | 5.7556e-02 | 7.8826e-03 | 7.5834e-03
6  | 1.9000e+01 | 1.2900e+02 | 6.7895e+00 | 5.7616e-02 | 8.1229e-03 | 3.8474e-03
7  | 2.1000e+01 | 2.5700e+02 | 1.2238e+01 | 5.7920e-02 | 8.2001e-03 | 1.9526e-03
8  | 2.3000e+01 | 5.1300e+02 | 2.2304e+01 | 5.7920e-02 | 8.2293e-03 | 2.9445e-03
9  | 2.5000e+01 | 1.0250e+03 | 4.1000e+01 | 5.7920e-02 | 8.2408e-03 | 3.4357e-03
10 | 2.7000e+01 | 2.0490e+03 | 7.5889e+01 | 5.7922e-02 | 8.2458e-03 | 3.1923e-03
11 | 2.9000e+01 | 4.0970e+03 | 1.4128e+02 | 5.7922e-02 | 8.2481e-03 | 3.0704e-03

For $K > 4$ in Table 5 and $K > 3$ in Tables 6 and 7, the TE algorithm evaluates $f_2$ only two more times at each new level. This happens because there is only one detail coefficient satisfying $|\hat d_j^{k-1}|\geq \epsilon$, which implies two new evaluations at level $k + 1$; we refer to Figure 1 for the sake of clarity. This $d_j^{k-1}$ is located around the discontinuity, which is easily seen in Fig. 5 of [11].

In spite of the discontinuity of $f_2$, $\|v^K-\hat v^K\|_\infty$ is smaller than ε, and also near ε, which supports the criterion $\epsilon^k = \epsilon$. Now $|E-E_K| = O(2^{-K})$, thus (14) is satisfied for $K > 5$.

Note that the polygonal rule uses more $f_2$ evaluations, but its $e_K$ is also smaller than that of the other rules. $e_K$ has similar magnitudes for the PCHIP and cubic rules, and both use the same number of evaluations.

Once again, $e_K$ converges to some value close to ε as $K \rightarrow \infty$, in contrast to what is observed in Tables 1 and 2 of [1, 11], where $e_K \rightarrow 0$. This confirms that the thresholding strategy $\epsilon^k = \epsilon$, instead of $\epsilon^k = 2^{k-K}\epsilon$, is the most appropriate to attain a predetermined target accuracy in the computation.

In order to validate the error control property

$$ \begin{equation} 2^{-s}\epsilon \lessapprox\|v^K - \hat v^K\|_\infty\lessapprox \epsilon, \end{equation} $$

we repeat the same experiment for a fixed value of K, $K = 15$, and several ε values. We do not consider τ in Tables 8 to 13, because if K is fixed, then $\tau = O(n_{eval}^{-1})$. In the following, we study the approximation of the smooth function $f_1$, which is shown in Tables 8 to 10.
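A sketch of this sweep (again reusing te_approximate and the local rules from the previous sections; the printed values are indicative only):

```python
import numpy as np

K = 15
x = np.linspace(0.0, 1.0, 2**K + 1)
vK = f1(x)                                   # f1 as in (12)
for p in range(1, 11):
    eps = 10.0**-p
    v_hat, n_eval = te_approximate(f1, K, eps, I_cubic)
    e_K = np.max(np.abs(vK - v_hat))
    print(f"eps = 1e-{p:02d}  n_eval = {n_eval:6d}  e_K = {e_K:.4e}")
```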

Table 8. Parameters as specified in Table 1 for $f_1$, $K = 15$ and the polygonal rule. $\epsilon^k = \epsilon$ varies between $10^{-1}$ and $10^{-10}$.

$\epsilon$ | $2^{-2}\epsilon$ | $n_{eval}$ | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
1e-01 | 2.5e-02 | 1.7000e+01 | 4.7034e-02 | 1.2506e-02 | 1.0395e-03
1e-02 | 2.5e-03 | 5.7000e+01 | 2.4734e-03 | 9.4241e-04 | 1.3285e-04
1e-03 | 2.5e-04 | 1.7500e+02 | 2.6667e-04 | 8.3589e-05 | 2.2476e-05
1e-04 | 2.5e-05 | 4.9500e+02 | 4.1356e-05 | 1.1371e-05 | 2.2269e-06
1e-05 | 2.5e-06 | 1.7930e+03 | 6.6194e-06 | 8.9509e-07 | 1.7367e-07
1e-06 | 2.5e-07 | 5.4810e+03 | 2.5200e-07 | 8.5716e-08 | 2.1467e-08
1e-07 | 2.5e-08 | 1.5791e+04 | 8.1472e-08 | 1.1234e-08 | 2.0251e-09
1e-08 | 2.5e-09 | 5.6865e+04 | 2.5045e-09 | 8.1255e-10 | 1.5742e-10
1e-09 | 2.5e-10 | 1.7420e+05 | 7.8479e-10 | 4.0906e-11 | 2.0754e-11
1e-10 | 2.5e-11 | 2.5368e+05 | 3.2548e-11 | 4.7299e-13 | 1.5278e-11

Table 9. Parameters as specified in Table 1 for $f_1$, $K = 15$ and the cubic rule. $\epsilon^k = \epsilon$ varies between $10^{-1}$ and $10^{-10}$.

$\epsilon$ | $2^{-4}\epsilon$ | $n_{eval}$ | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
1e-01 | 6.25e-03 | 1.3000e+01 | 1.2012e-02 | 2.0488e-03 | 6.7794e-06
1e-02 | 6.25e-04 | 2.7000e+01 | 4.4450e-04 | 9.2550e-05 | 4.9354e-05
1e-03 | 6.25e-05 | 4.3000e+01 | 2.7188e-04 | 2.2672e-05 | 7.3722e-06
1e-04 | 6.25e-06 | 8.1000e+01 | 7.0349e-06 | 9.1132e-07 | 5.9591e-08
1e-05 | 6.25e-07 | 1.3700e+02 | 6.5275e-07 | 1.1135e-07 | 4.9200e-09
1e-06 | 6.25e-08 | 2.5500e+02 | 6.3729e-08 | 1.0246e-08 | 2.4199e-09
1e-07 | 6.25e-09 | 4.0900e+02 | 1.1524e-08 | 1.4623e-09 | 7.9961e-10
1e-08 | 6.25e-10 | 7.1900e+02 | 6.2685e-10 | 1.1760e-10 | 8.0592e-11
1e-09 | 6.25e-11 | 1.3610e+03 | 6.9934e-11 | 9.8732e-12 | 1.9488e-11
1e-10 | 6.25e-12 | 2.5150e+03 | 7.3016e-12 | 9.4366e-13 | 1.5464e-11

Table 10. Parameters as specified in Table 1 for $f_1$, $K = 15$ and the PCHIP rule. $\epsilon^k = \epsilon$ varies between $10^{-1}$ and $10^{-10}$.

$\epsilon$ | $2^{-4}\epsilon$ | $n_{eval}$ | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
1e-01 | 6.25e-03 | 1.3000e+01 | 2.3177e-02 | 4.5411e-03 | 4.4912e-05
1e-02 | 6.25e-04 | 2.7000e+01 | 1.8042e-02 | 1.1650e-03 | 5.0535e-04
1e-03 | 6.25e-05 | 6.9000e+01 | 1.4751e-04 | 1.3376e-05 | 1.3180e-06
1e-04 | 6.25e-06 | 1.2300e+02 | 2.7379e-05 | 1.4956e-06 | 1.4601e-07
1e-05 | 6.25e-07 | 2.3300e+02 | 2.8833e-06 | 2.0876e-07 | 1.4208e-08
1e-06 | 6.25e-08 | 4.2100e+02 | 7.0168e-07 | 1.1231e-08 | 1.6390e-09
1e-07 | 6.25e-09 | 7.5700e+02 | 6.8925e-08 | 1.3645e-09 | 6.3232e-11
1e-08 | 6.25e-10 | 1.3610e+03 | 1.5733e-09 | 1.1174e-10 | 2.6616e-12
1e-09 | 6.25e-11 | 2.4230e+03 | 1.1084e-10 | 1.1353e-11 | 1.4272e-11
1e-10 | 6.25e-12 | 4.3150e+03 | 2.2858e-11 | 1.1252e-12 | 1.5153e-11

Table 11. Parameters as specified in Table 1 for $f_2$, $K = 15$ and the polygonal rule. $\epsilon^k = \epsilon$ varies between $10^{-1}$ and $10^{-10}$.

$\epsilon$ | $2^{-2}\epsilon$ | $n_{eval}$ | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
1e-01 | 2.5e-02 | 4.5000e+01 | 4.7034e-02 | 1.0814e-02 | 6.5158e-04
1e-02 | 2.5e-03 | 8.1000e+01 | 2.4734e-03 | 9.1579e-04 | 1.5833e-04
1e-03 | 2.5e-04 | 1.9500e+02 | 2.6667e-04 | 8.3169e-05 | 2.1751e-05
1e-04 | 2.5e-05 | 5.1300e+02 | 4.1356e-05 | 1.1319e-05 | 1.1348e-06
1e-05 | 2.5e-06 | 1.8070e+03 | 6.6194e-06 | 8.9427e-07 | 9.6992e-07
1e-06 | 2.5e-07 | 5.4910e+03 | 2.5200e-07 | 8.5703e-08 | 1.1229e-06
1e-07 | 2.5e-08 | 1.5799e+04 | 8.1472e-08 | 1.1232e-08 | 1.1424e-06
1e-08 | 2.5e-09 | 5.6869e+04 | 2.5045e-09 | 8.1253e-10 | 1.1443e-06
1e-09 | 2.5e-10 | 1.7420e+05 | 7.8479e-10 | 4.0906e-11 | 1.1444e-06
1e-10 | 2.5e-11 | 2.5368e+05 | 3.2548e-11 | 4.7299e-13 | 1.1444e-06

Table 12. Parameters as specified in Table 1 for $f_2$, $K = 15$ and the cubic rule. $\epsilon^k = \epsilon$ varies between $10^{-1}$ and $10^{-10}$.

$\epsilon$ | $2^{-4}\epsilon$ | $n_{eval}$ | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
1e-01 | 6.25e-03 | 4.1000e+01 | 9.9324e-02 | 1.0541e-02 | 4.7904e-03
1e-02 | 6.25e-04 | 1.0700e+02 | 4.4450e-04 | 7.5032e-05 | 4.8158e-05
1e-03 | 6.25e-05 | 1.2100e+02 | 5.0725e-05 | 7.7497e-06 | 2.9716e-06
1e-04 | 6.25e-06 | 1.5300e+02 | 7.0349e-06 | 7.2132e-07 | 8.9567e-07
1e-05 | 6.25e-07 | 2.0300e+02 | 6.5275e-07 | 1.0494e-07 | 1.1429e-06
1e-06 | 6.25e-08 | 3.1500e+02 | 6.3729e-08 | 1.0056e-08 | 1.1418e-06
1e-07 | 6.25e-09 | 4.6300e+02 | 1.1524e-08 | 1.4565e-09 | 1.1436e-06
1e-08 | 6.25e-10 | 7.6700e+02 | 6.2685e-10 | 1.1741e-10 | 1.1443e-06
1e-09 | 6.25e-11 | 1.4070e+03 | 6.9934e-11 | 9.7495e-12 | 1.1444e-06
1e-10 | 6.25e-12 | 2.5570e+03 | 7.3016e-12 | 9.3789e-13 | 1.1444e-06

Table 13. Parameters as specified in Table 1 for $f_2$, $K = 15$ and the PCHIP rule. $\epsilon^k = \epsilon$ varies between $10^{-1}$ and $10^{-10}$.

$\epsilon$ | $2^{-4}\epsilon$ | $n_{eval}$ | $\|v^K-\hat v^K\|_\infty$ | $\|v^K-\hat v^K\|_1$ | $|E-\hat E_K|$
1e-01 | 6.25e-03 | 4.1000e+01 | 5.7922e-02 | 8.2502e-03 | 3.1449e-03
1e-02 | 6.25e-04 | 5.9000e+01 | 1.8042e-02 | 1.1149e-03 | 5.0165e-04
1e-03 | 6.25e-05 | 1.0700e+02 | 2.7833e-04 | 1.2775e-05 | 3.1659e-06
1e-04 | 6.25e-06 | 1.6900e+02 | 3.7144e-05 | 1.4538e-06 | 1.3326e-06
1e-05 | 6.25e-07 | 2.8500e+02 | 4.2689e-06 | 2.0742e-07 | 1.1600e-06
1e-06 | 6.25e-08 | 4.7500e+02 | 7.0168e-07 | 1.1186e-08 | 1.1461e-06
1e-07 | 6.25e-09 | 8.0500e+02 | 6.8925e-08 | 1.3631e-09 | 1.1445e-06
1e-08 | 6.25e-10 | 1.4090e+03 | 1.5733e-09 | 1.1040e-10 | 1.1444e-06
1e-09 | 6.25e-11 | 2.4650e+03 | 1.1084e-10 | 1.1313e-11 | 1.1444e-06
1e-10 | 6.25e-12 | 4.3510e+03 | 2.2858e-11 | 1.1239e-12 | 1.1444e-06

The first feature we observe is that (15) is satisfied, which provides a strong control on the error $e_K = \|v^K - \hat v^K\|_\infty$. We also notice that the growth rate of the number of evaluations, $n_{eval}$, is larger for the polygonal rule. We can easily see this by plotting the results of the tables, as shown in Figures 3 and 4. From such figures it is deduced that the growth rate is $n_{eval}=O(e_K^{-1/2})$ for the polygonal rule and $n_{eval}=O(e_K^{-1/4})$ for the cubic and PCHIP rules. It can be equivalently formulated as $e_K = O(n_{eval}^{-2})$ and $e_K = O(n_{eval}^{-4})$. Remember that 2 and 4 are the highest orders of approximation of the considered local rules: PCHIP has only second order of approximation, in general, but fourth order in monotone regions. Despite this, PCHIP verifies $e_K = O(n_{eval}^{-4})$ because, in the TE algorithm, the rule is applied in small monotone regions along the whole interval [0,1], where PCHIP has order 4. In addition, we clearly observe that the cubic rule is a bit better than PCHIP, and much better than the polygonal rule.

Fig. 3

Solid lines: $\|v^{15} - \hat v^{15}\|_\infty$ as a function of $n_{eval}$ for $\epsilon^k = \epsilon$ varying from $10^{-1}$ to $10^{-10}$, for each local rule applied to $f_1$. Dashed lines: the reference growth rates $O(n_{eval}^{-2})$ and $O(n_{eval}^{-4})$.

Fig. 4

Solid lines: $|E-\hat E_{15}|$ as a function of $n_{eval}$ for $\epsilon^k = \epsilon$ varying from $10^{-1}$ to $10^{-10}$, for each local rule applied to $f_1$. Dashed lines: the reference growth rates $O(n_{eval}^{-2})$ and $O(n_{eval}^{-4})$.

Another issue we can observe in Tables 8 to 10 is that $|E-\hat E_K|$ decreases together with ε if $\epsilon \geq 10^{-8}$, but for smaller ε the precision of the integral gets stuck around $10^{-11}$. It can be explained using the triangle inequality:

$$ \begin{equation*} |E - \hat E_K| \geq |E- E_K| - |E_K - \hat E_K| \geq |E - E_K| - \|v^K - \hat v^K\|_1 \geq |E - E_K| - \epsilon. \end{equation*} $$

Since $f_1$ is a smooth function,

$$ \begin{equation} |E-E_{15}| = \frac{2^{-30}}{12} |f''(\eta)| \cong \text{7.8e-11}\, |f''(\eta)|, \qquad \eta\in[0,1]. \end{equation} $$

Thus,

$$ \begin{equation*} |E - \hat E_K| \geq \text{7.8e-11}\, |f''(\eta)| - \epsilon. \end{equation*} $$

This says that, if $\epsilon \rightarrow 0$, then $|E-\hat E_K|$ is lower bounded by a quantity proportional to $|f''(\eta)|$, which is consistent with the value of about 1.5e-11 observed in Tables 8 to 10.

Comparing Figure 4 in this paper with Fig. 4 of [11], we note that our plot displays fewer oscillations. This may indicate that the dependence between $e_K$ and the number of evaluations is stronger and smoother when the criterion $\epsilon^k = \epsilon$ is selected.

We repeat the experiment for the function $f_2$, which has a jump discontinuity at $\xi = 11/20$, in Tables 11 to 13.

Notice that (15) is satisfied and the growth of $n_{eval}$ is faster for the polygonal rule. Observing Figures 5 and 6, which correspond to these tables, we see that the growth rates are $n_{eval}=O(e_K^{-1/2})$ for the polygonal rule and $n_{eval}=O(e_K^{-1/4})$ for the cubic and PCHIP rules. As shown in Figure 5, the cubic rule is better than the PCHIP rule at approximating $v^K$. However, the PCHIP rule is now better than the cubic rule at computing the integral; at least this holds for $\epsilon > 10^{-4}$, before $|E-\hat E_K|$ gets stuck.

Fig. 5

Solid lines: $\|v^{15} - \hat v^{15}\|_\infty$ as a function of $n_{eval}$ for $\epsilon^k = \epsilon$ varying from $10^{-1}$ to $10^{-10}$, for each local rule applied to $f_2$. Dashed lines: the reference growth rates $O(n_{eval}^{-2})$ and $O(n_{eval}^{-4})$.

Fig. 6

Solid lines: $|E-\hat E_{15}|$ as a function of $n_{eval}$ for $\epsilon^k = \epsilon$ varying from $10^{-1}$ to $10^{-10}$, for each local rule applied to $f_2$. Dashed lines: the reference growth rates $O(n_{eval}^{-2})$ and $O(n_{eval}^{-4})$.

A similar argument to (16) can be used to explain why $|E-\hat E_K| \rightarrow$ 1.1e-06 in Tables 11 to 13. Since $f_2$ is not continuous, the error of the trapezoidal rule decreases as $O(2^{-K})$. Thus

$$ \begin{equation*}\nonumber |E - \hat E_K| \geq O(2^{-K}) - \epsilon. \end{equation*} $$

Again, a comparison between our Figure 6 and Fig. 6 of [11] seems to indicate that the strategy $\epsilon^k = \epsilon$ makes the TE approach more robust than $\epsilon^k = 2^{k-K}\epsilon$.

We conclude that PCHIP only provides better performance with respect to the usual cubic interpolation when the integral of a discontinuous function is computed. However, the cubic rule was better in all the other cases: integration of continuous functions and approximation of continuous or even discontinuous functions. The TE authors also arrived at the conclusion that no advantages were seen using ENO interpolation for $f_1$, but ENO was better for $f_2$ in terms of accuracy and number of evaluations.

The results included in the present paper indicate that non-linear interpolatory reconstructions may not lead to substantial advantages over linear ones within the TE framework. We have observed that, in general, the cubic interpolation leads to better results than the PCHIP interpolation, while both always fulfill $n_{eval}=O(e_K^{-1/4})$. On the other hand, the usual reason for using ENO or PCHIP interpolatory techniques is their capability of approximating jumps in the data without spurious oscillations. But in the TE strategy this is not needed, because the algorithm itself has a mechanism to detect the problematic zones of the function. In addition, linear interpolations are always faster than nonlinear ones.

We wish to remark again that the choice $\epsilon^k = \epsilon$ confers a high control over the error. In contrast, the original selection $\epsilon^k = 2^{k-K}\epsilon$ made by the TE authors only guarantees that ε is an upper bound of the error. Observe that in [1, 11] all the numerical experiments were done with $\epsilon = 10^{-1}$, but the error converges to zero as $K \rightarrow \infty$. In our case, the error converges to some number below ε, but greater than $2^{-s}\epsilon$, where s is the order of the interpolation technique. This is a practical advantage, because K determines the resolution of the approximation $\hat v^K$, and we can set it as large as we need, but the number of evaluations and the error depend exclusively on ε.

Conclusions

In this paper we have examined some features of the Truncation and Encode strategy described in [1, 11], in particular its ability to compute integrals by quadrature rules to a given target accuracy.

In the TE strategy, the values used in the quadrature rule are computed by combining function evaluations and suitably interpolated values in a multilevel fashion, following the basic guidelines of Harten’s MRF.

Our theoretical observations and numerical experiments allow us to conclude that the TE technique becomes more efficient if the thresholding strategy proposed in [1, 11] is relaxed to $\epsilon^k = \epsilon$ for all resolution levels. In addition, higher order prediction schemes lead to improved performance, but it does not seem necessary to resort to nonlinear prediction schemes.

More research should be done to support these conclusions at a theoretical level.

Acknowledgements

The authors acknowledge support from Project MTM2014-54388 (MINECO, Spain) and the FPU14/02216 grant (MECD, Spain).
