A unified algorithm for closed Newton-Cotes quadrature formulas

In this article we present an algorithm that merges all closed Newton-Cotes quadrature
formulas.

The literature treats the Newton-Cotes quadrature formulas separately, and computational mathematicians have created a plethora of specific algorithms, one for each of them: the trapezoidal rule, Simpson's 1/3 rule, Simpson's 3/8 rule, Boole's rule, and they usually do not go beyond the latter. We were ourselves surprised to discover how easy it is to encompass all of them in a single simple algorithm. The starting point is the Theorem and its Corollary below, which show how to compute the Newton-Cotes quadrature weights conveniently. As far as we know, the proof presented for that theorem is new, although its content is known.

Let *f* : [*a* ; *b*] → **R** be a continuous function. A polynomial interpolatory quadrature of order *m* of *f* takes the form

(1)    \int_a^b f(x)\,dx \approx \sum_{i=0}^{m} A_i\, f(x_i),

where the *A_i* are the weights and the *x_i* ∈ [*a* ; *b*] are the nodes of the quadrature. Requiring (1) to be exact for the monomials 1, *x*, . . . , *x^m* yields the linear system

(2)    \sum_{i=0}^{m} A_i\, x_i^{\,k} = \frac{b^{k+1} - a^{k+1}}{k+1}, \qquad k = 0, 1, \ldots, m.

Since the nodes are distinct, the coefficient matrix of that system is a Vandermonde matrix, hence nonsingular, so the weights are uniquely determined by the nodes.

If the nodes *a* = *x*_{0} < *x*_{1} < . . . < *x _{m}* = *b* are evenly spaced, that is, *x_i* = *a* + *ih* with *h* := (*b* − *a*)/*m*, the quadrature is called a closed Newton-Cotes formula of order *m*.

We will prove in the Theorem below that, up to the step-size factor *h*, the weights depend only on the order of the quadrature.

**Theorem**. *Up to the common factor h* = (*b* − *a*)/*m*, *the weights of a closed Newton-Cotes quadrature formula depend only on its order m.*

*Proof*. Let *a* be any real number and *h* any positive real number, and set *b* := *a* + *mh*. We first prove that the system **(2)** is equivalent to

(3)    \sum_{i=0}^{m} A_i\,(ih)^k = \frac{(mh)^{k+1}}{k+1}, \qquad k = 0, 1, \ldots, m.

Replacing *x_i* by *a* + *ih* and *b* by *a* + *mh* in **(2)**, we obtain

(4)    \sum_{i=0}^{m} A_i\,(a + ih)^k = \frac{(a + mh)^{k+1} - a^{k+1}}{k+1}, \qquad k = 0, 1, \ldots, m,

where both sides may be expanded by the binomial theorem.

The second equation in (4) can be written as

a \sum_{i=0}^{m} A_i + \sum_{i=0}^{m} A_i\,(ih) = a\,(mh) + \frac{(mh)^2}{2},

and then simplified using the first equation of the same system, \sum_{i} A_i = mh. The resulting equation is the second one in **(3)**. Similarly, simplifying the third equation in **(4)** by means of the first and second ones in (3), we get the third equation in (3). We see by induction that the system (4) is equivalent to the system (3).

Now, replacing *A_i* by *h w_i* in (3) and cancelling the common factor *h*^{k+1}, we obtain

(5)    \sum_{i=0}^{m} w_i\, i^{\,k} = \frac{m^{k+1}}{k+1}, \qquad k = 0, 1, \ldots, m.

This linear system has a unique solution, the vector *w* := (*w*_{0}, *w*_{1}, *w*_{2}, . . . , *w _{m}*), which depends neither on *a* nor on *h*, but only on *m*. The weights are then *A_i* = *h w_i*. ∎
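The reduced system \sum_{i=0}^{m} w_i\, i^k = m^{k+1}/(k+1), k = 0, . . . , m, is easy to solve numerically. As a sketch (in Python with NumPy rather than MATLAB, so it can be tested independently; the function name is ours), solving it recovers the classical normalized weights:

```python
import numpy as np

def nc_weights(m):
    """Normalized closed Newton-Cotes weights w_i = A_i / h of order m,
    obtained from the system  sum_i w_i * i**k = m**(k+1)/(k+1),  k = 0..m."""
    i = np.arange(m + 1)
    A = np.vander(i, increasing=True).T          # A[k, j] = j**k
    c = np.array([m**(k + 1) / (k + 1) for k in range(m + 1)])
    return np.linalg.solve(A, c)

print(nc_weights(1))  # trapezoidal rule:    [1/2, 1/2]
print(nc_weights(2))  # Simpson's 1/3 rule:  [1/3, 4/3, 1/3]
print(nc_weights(3))  # Simpson's 3/8 rule:  [3/8, 9/8, 9/8, 3/8]
```

Multiplying each weight vector by *h* reproduces the familiar textbook coefficients of the corresponding rule.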

The proof of the Theorem above contains the proof of the following

**Corollary**. *The weights of a closed Newton-Cotes quadrature formula of order m on any interval* [*a* ; *b*] *are h* = (*b* − *a*)/*m* *times the weights of that quadrature on the interval* [1 ; *m* + 1].
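The Corollary can be illustrated numerically by comparing the weights obtained from the moment system on [1 ; *m* + 1] with those obtained from the translated system on [0 ; *m*]; a Python/NumPy sketch (variable names ours), here for *m* = 4:

```python
import numpy as np

m = 4
# Moment system on [1; m+1]:  sum_j w_j * nodes[j]**k = ((m+1)**(k+1) - 1)/(k+1)
nodes = np.arange(1, m + 2)
V = np.vander(nodes, increasing=True).T
mom = np.array([((m + 1)**(k + 1) - 1) / (k + 1) for k in range(m + 1)])
w_ref = np.linalg.solve(V, mom)

# Translated system on [0; m]:  sum_j w_j * j**k = m**(k+1)/(k+1)
V0 = np.vander(np.arange(m + 1), increasing=True).T
mom0 = np.array([m**(k + 1) / (k + 1) for k in range(m + 1)])
w0 = np.linalg.solve(V0, mom0)

print(np.allclose(w_ref, w0))  # the weights do not depend on the interval
print(w_ref)                   # Boole's rule weights (2/45)*(7, 32, 12, 32, 7)
```

The two solutions coincide, as translation invariance of the nodes predicts.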

The main consequence of the **Theorem** and its Corollary is that they allow us to unify all closed Newton-Cotes quadrature formulas, whether simple or composite, in a single algorithm.

In view of the **Theorem** and the **Corollary**, the construction of a program that encompasses all closed Newton-Cotes formulas is now very easy. We exemplify with MATLAB code.

function q=newcot(f,a,b,m,n)
% -----------------------------------------------------------
% Input:
%   f: function to be integrated, given as a string in the
%      variable x; for example, f may be 'x.*exp(-x.^2)';
%   a, b: limits of integration;
%   m: order of quadrature: m = 1 for the trapezoidal rule,
%      m = 2 for Simpson's 1/3 rule, m = 3 for Simpson's 3/8
%      rule, and so on;
%   n: number of times we want to apply a simple rule of
%      order m on the interval [a;b], that is, [a;b] is
%      divided into n subintervals and a simple rule is
%      applied on each of them.
% Output:
%   q: quadrature of f.
% -----------------------------------------------------------
if a==b, q=0; return, end
if m<=0, error('You must take m>0.'), end
% Compute the weights, using the interval [1;m+1] (Corollary):
A=fliplr(vander(1:m+1))';
c=zeros(1,m+1);
for i=1:m+1, c(i)=((m+1)^i-1)/i; end
w=(A\c')';
% Compute the quadrature:
h=(b-a)/(m*n);
x=a:h:b;
y=eval(f)';
Y=zeros(m+1,n);
for i=1:n, Y(1:m+1,i)=y(m*(i-1)+1:i*m+1); end
sq=h*w*Y;
q=sum(sq);

In order to compute the weights, the program solves the system (2) on the interval [1 ; *m* + 1], based on the Corollary. The main strategy of the algorithm resides in the second for-loop. The matrix *Y* generated there is the (*m* + 1)-by-*n* matrix whose *i*-th column contains the values of *f* at the *m* + 1 nodes of the *i*-th subinterval; consecutive columns share one endpoint value. The product *h w Y* is then the row vector of the *n* simple quadratures, whose sum is *q*.
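For readers without MATLAB, the same algorithm can be sketched in Python/NumPy (a rough transcription, not the author's code; we pass a callable instead of an eval string):

```python
import numpy as np

def newcot(f, a, b, m, n):
    """Composite closed Newton-Cotes quadrature of order m with n panels."""
    if a == b:
        return 0.0
    if m <= 0:
        raise ValueError('You must take m>0.')
    # Weights from the reference interval [1; m+1] (Corollary):
    V = np.vander(np.arange(1, m + 2), increasing=True).T
    c = np.array([((m + 1)**(k + 1) - 1) / (k + 1) for k in range(m + 1)])
    w = np.linalg.solve(V, c)
    # Sample f on the m*n + 1 evenly spaced nodes and group them by panel;
    # consecutive panels share one endpoint sample.
    h = (b - a) / (m * n)
    y = f(np.linspace(a, b, m * n + 1))
    Y = np.array([y[m * i:m * i + m + 1] for i in range(n)]).T
    return (h * w @ Y).sum()

# Same test integral as in the text: int_0^2 -2x e^{-x^2} dx = e^{-4} - 1
q = newcot(lambda x: -2 * x * np.exp(-x**2), 0.0, 2.0, 4, 100)
print(q)
```

With *m* = 4 (Boole's rule) the result agrees with e^{−4} − 1 to well beyond ten digits.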

The above algorithm has a pretty low computational cost. It also has the advantage of dispensing with any subroutine to store the function to be integrated. On the other hand, it solves an (*m* + 1)-by-(*m* + 1) linear system whose coefficient matrix is a Vandermonde matrix, which is known to be ill-conditioned; but for *m* ≤ 10 the system is fairly stable, and we do not need to take *m* greater than 10. Usually there is no need to go beyond *m* = 4, since we may increase *n* as much as necessary instead. Moreover, the cited instability is minimized by working in double precision, as MATLAB does.
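The growth of the ill-conditioning with *m* is easy to observe. A small Python/NumPy check (an illustration, not part of the original program) of the 2-norm condition number of the Vandermonde matrix on the nodes 1, . . . , *m* + 1:

```python
import numpy as np

def vcond(m):
    """Condition number of the (m+1)x(m+1) Vandermonde matrix on nodes 1..m+1."""
    return np.linalg.cond(np.vander(np.arange(1, m + 2), increasing=True).T)

for m in (1, 2, 4, 10):
    print(m, vcond(m))   # grows rapidly with m
```

The condition number grows roughly exponentially with *m*, yet at *m* = 10 it still leaves several significant digits in double precision, consistent with the remark above.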

We tested our algorithm by computing the integral

\int_0^2 -2x\,e^{-x^2}\,dx = e^{-4} - 1 \approx -9.816843611112658e-001:

f='-2*x.*exp(-x.^2)';

newcot(f,0,2,4,500) = -9.816843611112658e-001.

A PC166 took 0.0016 sec and did 17,331 flops to execute the task. We also see that our choice reached machine precision. On the same computer, we changed the order to 20, maintaining *n* = 500, and we got the answer

newcot(f,0,2,20,500) = -9.816843611112646e-001

in 0.55 sec with 75,565 flops. We note here a slight loss of accuracy, certainly due to polynomial wiggle and to the ill-conditioning of the Vandermonde matrix.

The program *newcot* works perfectly with complicated functions as well, such as the gamma function and Bessel functions of any kind. We exemplify with the Bessel function of the first kind of order zero, *x* → besselj(0, *x*):

newcot('besselj(0,x)',1,10,4,3000) = 1.067011303956739e+000

All the digits in the answer are significant; the elapsed time for the computation was 1.6 sec, and the total number of flops was 1,719,592.

The familiar formulas for evaluating approximate derivatives work well only when they are of low order, up to three or four at most. We propose *newcot* to estimate derivatives of any order, through Cauchy's integral formula:

f^{(k)}(z_0) = \frac{k!}{2\pi i} \int_C \frac{f(z)}{(z - z_0)^{k+1}}\,dz.

Here *f* is an analytic function on some simply connected open set Ω ⊂ **C** containing *z*_{0}, and *C* ⊂ Ω is, say, the unit circle centered at *z*_{0} and described counterclockwise. Hence *C* is given by

z(x) = z_0 + e^{ix}, \qquad 0 \le x \le 2\pi,

so that

f^{(k)}(z_0) = \frac{k!}{2\pi} \int_0^{2\pi} f(z_0 + e^{ix})\,e^{-ikx}\,dx.
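The last formula can also be checked independently of *newcot*. A Python/NumPy sketch (names ours) that approximates the 2π-periodic integral with the equally weighted trapezoidal rule, which is spectrally accurate for smooth periodic integrands:

```python
import math
import numpy as np

def cauchy_deriv(f, z0, k, N=256):
    """k-th derivative of f at z0 via
    f^(k)(z0) = k!/(2*pi) * int_0^{2*pi} f(z0 + e^{ix}) e^{-ikx} dx."""
    x = 2 * np.pi * np.arange(N) / N     # N equally spaced angles on [0, 2*pi)
    return math.factorial(k) * np.mean(f(z0 + np.exp(1j * x)) * np.exp(-1j * k * x))

for k in (1, 2, 3, 9):
    print(k, cauchy_deriv(np.sin, 0.0, k))   # sin derivatives at 0: 1, 0, -1, 1
```

For the periodic integrand the plain trapezoidal mean already converges spectrally, so even the ninth derivative comes out to nearly full precision.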

We test our proposition by computing the first, second, third, fourth and ninth derivatives of *f*(*z*) = sin(*z*) at *z*_{0} = 0, setting up *newcot* for Simpson's 1/3 rule (*m* = 2) and writing c=1/(2*pi) for the constant in front of the integral:

D1=c*newcot('sin(exp(j*x)).*exp(-j*x)',0,2*pi,2,1000)

D1 =

1.00000000000000 + 0.00000000000000i

D2=2*c*newcot('sin(exp(j*x)).*exp(-2*j*x)',0,2*pi,2,1000)

D2 =

-4.362219094368365e-017 +4.915369354826616e-017i

D3=6*c*newcot('sin(exp(j*x)).*exp(-3*j*x)',0,2*pi,2,1000)

D3 =

-1.00000000000000 - 0.00000000000000i

D4=12*c*newcot('sin(exp(j*x)).*exp(-4*j*x)',0,2*pi,2,1000)

D4 =

1.938150509016830e-016 -9.051302647577998e-016i

D9=gamma(10)*c*newcot('sin(exp(j*x)).*exp(-9*j*x)',0,2*pi,2,1000)

D9 =

0.99999999999305 + 0.00000000001224i

The answers agree with sin′(0) = 1, sin″(0) = 0, sin‴(0) = −1, sin⁗(0) = 0 and sin^{(9)}(0) = 1.

We see that this appears to be an efficient way of computing numerical derivatives.

Oclide José Dotto

Universidade de Caxias do Sul - UCS

Departamento de Matemática e Estatística

E-mail: dotto@nutecnet.com.br