A Useful Note on Newton-Cotes Quadratures


Oclide José Dotto


Abstract


In this article we present a single algorithm that unifies all closed Newton-Cotes quadrature formulas.


1. Introduction

The literature treats the Newton-Cotes quadrature formulas separately, and computational mathematicians have created a plethora of specific algorithms, one for each of them: the trapezoidal rule, Simpson's 1/3 rule, Simpson's 3/8 rule, Boole's rule, and usually they do not go beyond the latter. We were surprised to discover that it is easy to encompass all of them in a single simple algorithm. The starting point is the Theorem and its Corollary below, which show how to compute the Newton-Cotes quadrature weights conveniently. As far as we know, the proof presented for that theorem is new, although its content is known.


2. Preliminaries

Let f : [a ; b] → R be a continuous function. A polynomial interpolatory quadrature of order m of f takes the form

\[
\int_a^b f(x)\,dx \approx \sum_{i=0}^{m} A_i\, f(x_i), \tag{1}
\]

where the A_i and the x_i are the coefficients and the nodes of the quadrature, respectively. We take for granted that the interpolation polynomial is of order m and that the nodes are distinct. By a polynomial of order m we mean that the interpolation data are m + 1 points (x_i, f(x_i)). Obviously (1) is an equality for all polynomials f of degree ≤ m. Therefore, if we choose the polynomials 1, x, . . . , x^m, we obtain an m + 1 by m + 1 linear system in the unknowns A_i:

\[
\begin{aligned}
A_0 + A_1 + \cdots + A_m &= b - a,\\
A_0 x_0 + A_1 x_1 + \cdots + A_m x_m &= \frac{b^2 - a^2}{2},\\
&\;\;\vdots\\
A_0 x_0^m + A_1 x_1^m + \cdots + A_m x_m^m &= \frac{b^{m+1} - a^{m+1}}{m+1}.
\end{aligned} \tag{2}
\]

Since the nodes are distinct, the coefficient matrix of that system,

\[
V = \begin{pmatrix}
1 & 1 & \cdots & 1\\
x_0 & x_1 & \cdots & x_m\\
\vdots & \vdots & & \vdots\\
x_0^m & x_1^m & \cdots & x_m^m
\end{pmatrix},
\qquad \det V = \prod_{0 \le i < j \le m} (x_j - x_i) \neq 0,
\]

is non-singular because it is a Vandermonde matrix, so the linear system (2) has a unique solution.
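For concreteness, the coefficients for any choice of distinct nodes can be computed numerically by solving (2) directly. The following lines are a minimal MATLAB sketch of our own (the nodes 0, 1/2, 1 on [0 ; 1] are just an illustration; they yield the coefficients 1/6, 2/3, 1/6 of Simpson's rule on [0 ; 1]):

% Sketch: interpolatory quadrature coefficients from the system (2).
a=0; b=1;
x=[0 0.5 1];                             % distinct nodes; here m = 2
m=length(x)-1;
V=fliplr(vander(x))';                    % row k+1 holds x_0^k, ..., x_m^k
rhs=((b.^(1:m+1)-a.^(1:m+1))./(1:m+1))'; % integrals of 1, x, ..., x^m over [a;b]
A=(V\rhs)'                               % coefficients A_0, ..., A_m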

If the nodes a = x_0 < x_1 < ... < x_m = b are evenly spaced, (1) is called a closed Newton-Cotes formula. In such a case, the coefficients of the quadrature of order m divided by h := (b - a)/m, that is, w_i := A_i/h, for i = 0, 1, . . ., m, are the weights of quadrature of that formula.

We will prove in the Theorem below that the weights depend only on the order of quadrature.


Theorem. The weights of a closed Newton-Cotes quadrature formula depend only on its order m.

Proof. Let a be any real number and h any positive real number. Let us set b := a + mh. We first prove that the system (2) is equivalent to

\[
\begin{aligned}
A_0 + A_1 + \cdots + A_m &= m\,h,\\
A_0\cdot 1 + A_1\cdot 2 + \cdots + A_m\,(m+1) &= \frac{(m+1)^2 - 1}{2}\,h,\\
&\;\;\vdots\\
A_0\cdot 1^m + A_1\cdot 2^m + \cdots + A_m\,(m+1)^m &= \frac{(m+1)^{m+1} - 1}{m+1}\,h.
\end{aligned} \tag{3}
\]

Replacing x_i by a + ih, for i = 0, 1, . . . , m, in the system (2), we find

\[
\sum_{i=0}^{m} A_i\,(a + ih)^k = \frac{(a + mh)^{k+1} - a^{k+1}}{k+1},
\qquad k = 0, 1, \ldots, m, \tag{4}
\]

where it is convenient to write a + ih = (a - h) + (i + 1)h, so that, by the binomial theorem, each power (a + ih)^k is a combination of the powers (i + 1)^0, (i + 1), . . . , (i + 1)^k.

The second equation in (4), the one with k = 1, can be written as

\[
(a - h)\bigl(A_0 + A_1 + \cdots + A_m\bigr)
+ h\bigl(A_0\cdot 1 + A_1\cdot 2 + \cdots + A_m\,(m+1)\bigr)
= amh + \frac{m^2 h^2}{2}
\]

and then simplified using the first equation of the same system, A_0 + A_1 + · · · + A_m = mh. The resulting equation is the second one in (3). Similarly, simplifying the third equation in (4) by means of the first and second ones in (3), we get the third equation in (3). We see by induction that the system (4) is equivalent to the system (3).

Now, replacing A_i by w_i h in the system (3), which gives the quadrature coefficients of order m, we obtain the following linear system in the unknowns w_i:

\[
\begin{aligned}
w_0 + w_1 + \cdots + w_m &= m,\\
w_0\cdot 1 + w_1\cdot 2 + \cdots + w_m\,(m+1) &= \frac{(m+1)^2 - 1}{2},\\
&\;\;\vdots\\
w_0\cdot 1^m + w_1\cdot 2^m + \cdots + w_m\,(m+1)^m &= \frac{(m+1)^{m+1} - 1}{m+1}.
\end{aligned} \tag{5}
\]

This linear system has a unique solution, the vector w := (w_0, w_1, w_2, . . . , w_m) of the weights, and we see that it depends only on the order m of the quadrature.

The proof of the Theorem above contains the proof of the following


Corollary. The weights of a closed Newton-Cotes quadrature formula of order m on any interval [a ; b] are the coefficients of that quadrature on the interval [1 ; m + 1].

The main consequence of the Theorem and its Corollary is that they allow us to unify all closed Newton-Cotes quadrature formulas, be they simple or composite, into a single algorithm.
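As a check on the system (5), take m = 2. The system reads

\[
\begin{aligned}
w_0 + w_1 + w_2 &= 2,\\
w_0 + 2w_1 + 3w_2 &= 4,\\
w_0 + 4w_1 + 9w_2 &= \tfrac{26}{3},
\end{aligned}
\]

whose unique solution is w = (1/3, 4/3, 1/3); hence, on any interval, the order-2 quadrature is (h/3)(f(x_0) + 4f(x_1) + f(x_2)), which is Simpson's 1/3 rule.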


3. A unified algorithm for closed Newton-Cotes quadrature formulas

The construction of a program that encompasses all closed Newton-Cotes formulas is now very easy, in view of the Theorem and the Corollary. We exemplify with MATLAB code.


function q=newcot(f,a,b,m,n)
% -----------------------------------------------------------
% Input:
%   f:    function to be integrated, given as a string in the
%         variable x; for example, f may be 'x.*exp(-x.^2)';
%   a, b: limits of integration;
%   m:    order of quadrature: m = 1 for the trapezoidal rule,
%         m = 2 for Simpson's 1/3 rule, m = 3 for Simpson's 3/8
%         rule, and so on;
%   n:    number of times we want to apply a simple rule of
%         order m on the interval [a;b], that is, [a;b] is
%         divided into n subintervals and a simple rule of
%         order m is applied on each of them.
% Output:
%   q:    quadrature of f.
% ------------------------------------------------------------

if a==b, q=0; return, end
if m<=0, error('You must take m>0.'), end

% Compute the weights by solving the system (5) on [1;m+1]:
A=fliplr(vander(1:m+1))';            % (k,j) entry is j^(k-1)
for i=1:m+1, c(i)=((m+1)^i-1)/i; end % right-hand sides of (5)
w=(A\c')';                           % row vector of the weights

% Compute the quadrature:
h=(b-a)/(m*n);
x=a:h:b;
y=eval(f)';                          % values of f at all the nodes
Y=zeros(m+1,n);
for i=1:n, Y(1:m+1,i)=y(m*(i-1)+1:i*m+1); end
sq=h*w*Y;                            % simple quadratures on the n subintervals
q=sum(sq);

 

In order to compute the weights, the program solves the system (2) on the interval [1 ; m + 1], that is, the system (5), as the Corollary allows. The main strategy of the algorithm resides in the second for-loop. The matrix generated there is the m + 1 by n matrix

\[
Y = \begin{pmatrix}
y_1 & y_{m+1} & \cdots & y_{m(n-1)+1}\\
y_2 & y_{m+2} & \cdots & y_{m(n-1)+2}\\
\vdots & \vdots & & \vdots\\
y_{m+1} & y_{2m+1} & \cdots & y_{mn+1}
\end{pmatrix},
\qquad y_j := f\bigl(a + (j-1)h\bigr).
\]

The ith column of Y is the vector of the values of f needed to calculate the simple Newton-Cotes quadrature on the ith subinterval [a + (i - 1)mh ; a + imh], i = 1, 2, . . . , n; accordingly, the last entry in each column of Y and the first one in the next column are the same. Left multiplication of Y by the row vector hw of the quadrature coefficients produces a row vector whose entries are the quadratures on the n subintervals of [a ; b]. Finally, summing those entries, we end up with the desired quadrature.

The above algorithm has a very low computational cost. It also has the advantage of dispensing with any subroutine to store the function to be integrated. On the other hand, it solves an m + 1 by m + 1 linear system whose coefficient matrix is a Vandermonde matrix, which is known to be ill-conditioned; but for m ≤ 10 the system is fairly stable, and we do not need to take m greater than 10. Usually there is no need to go beyond m = 4, since we may increase n as much as necessary instead. Moreover, the cited instability is mitigated by working in double precision, as MATLAB does.
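As a quick check (our own addition, not part of the original program), the weight-computing lines of newcot can be run alone for m = 1, . . . , 4; they reproduce the classical weights:

% Sketch: recompute the Newton-Cotes weights of orders 1 to 4 exactly as
% newcot does, and display them as rationals.
for m=1:4
  A=fliplr(vander(1:m+1))';
  c=((m+1).^(1:m+1)-1)./(1:m+1);
  w=(A\c')';
  disp(rats(w))
end
% Known values: m=1: 1/2 1/2 (trapezoidal); m=2: 1/3 4/3 1/3 (Simpson 1/3);
% m=3: 3/8 9/8 9/8 3/8 (Simpson 3/8); m=4: 14/45 64/45 24/45 64/45 14/45 (Boole).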


4. Example

We tested our algorithm by computing the integral

\[
\int_0^2 -2x\,e^{-x^2}\,dx = e^{-4} - 1 = -9.816843611112658\mathrm{e}{-001}\ldots :
\]

f='-2*x.*exp(-x.^2)';
newcot(f,0,2,4,500) = -9.816843611112658e-001.

A PC166 took 0.0016 sec and did 17,331 flops to execute the task. We also see that our choice reached machine precision. On the same computer, we changed the order to 20, keeping n = 500, and we got the answer

 

newcot(f,0,2,20,500) = -9.816843611112646e-001

 

in 0.55 sec with 75,565 flops. We note here a slight loss of accuracy, which is certainly due to polynomial wiggle and to the ill-conditioning of the Vandermonde matrix.

 

The program newcot works perfectly well with complicated functions too, such as the gamma function and Bessel functions of any kind. We exemplify with the Bessel function of the first kind of order zero, x ↦ besselj(0, x):

 

newcot('besselj(0,x)',1,10,4,3000) = 1.067011303956739e+000

 

All the digits in the answer are significant; the elapsed time for the computation was 1.6 sec and the total number of flops was 1,719,592.

5. Another Application

The familiar finite-difference formulas for approximating derivatives work well when the order is low, up to three or four at most. We propose using newcot to estimate derivatives of any order, through Cauchy's integral formula:

\[
f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz.
\]

Here f is an analytic function on some simply connected open set Ω of the complex plane containing z_0, and C ⊂ Ω is, say, the unit circle centered at z_0 and described counterclockwise. Hence C is given by

\[
z = z_0 + e^{i\theta}, \qquad 0 \le \theta \le 2\pi,
\]

so that

\[
f^{(n)}(z_0) = \frac{n!}{2\pi}\int_0^{2\pi} f\bigl(z_0 + e^{i\theta}\bigr)\,e^{-in\theta}\,d\theta.
\]

We test our proposition by computing the first, second, third, fourth, and ninth derivatives of f(z) = sin(z) at z_0 = 0, setting up newcot for Simpson's 1/3 rule (m = 2) with n = 1000:

c=1/(2*pi);

 

D1=c*newcot('sin(exp(j*x)).*exp(-j*x)',0,2*pi,2,1000)

D1 =

1.00000000000000 + 0.00000000000000i

 

D2=2*c*newcot('sin(exp(j*x)).*exp(-2*j*x)',0,2*pi,2,1000)

D2 =

-4.362219094368365e-017 +4.915369354826616e-017i

 

D3=6*c*newcot('sin(exp(j*x)).*exp(-3*j*x)',0,2*pi,2,1000)

D3 =

-1.00000000000000 - 0.00000000000000i

 

D4=12*c*newcot('sin(exp(j*x)).*exp(-4*j*x)',0,2*pi,2,1000)

D4 =

1.938150509016830e-016 -9.051302647577998e-016i

 

D9=gamma(10)*c*newcot('sin(exp(j*x)).*exp(-9*j*x)',0,2*pi,2,1000)

D9 =

0.99999999999305 + 0.00000000001224i
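These calls all follow one pattern, which could be wrapped in a small helper. The sketch below is our own illustration (the name cauchyder is hypothetical, and it assumes a real z0 and a vectorized one-argument function given by its name as a string):

function d=cauchyder(fname,z0,n,N)
% Sketch (not from the article): estimate the n-th derivative, at the real
% point z0, of the analytic function named by the string fname (e.g. 'sin'),
% by applying newcot with Simpson's 1/3 rule (m = 2, N panels) to the
% integral over the unit circle centered at z0.
g=sprintf('%s(%.16g+exp(j*x)).*exp(-%d*j*x)',fname,z0,n);
d=gamma(n+1)/(2*pi)*newcot(g,0,2*pi,2,N);

For example, cauchyder('sin',0,9,1000) reproduces the value of D9 above.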

 


This appears to be an efficient way of computing numerical derivatives of any order.




Oclide José Dotto
Universidade de Caxias do Sul - UCS
Departamento de Matemática e Estatística
E-mail: dotto@nutecnet.com.br


