Some Current Work

Vectorized Solution of ODEs in Matlab with Control of Residual and Error

Vectorization is very important to the efficiency of computation in the popular problem-solving environment Matlab. Here we develop an explicit Runge--Kutta (7,8) pair of formulas that exploits vectorization. Conventional Runge--Kutta pairs control local error at the end of a step. The new method controls the extended local error at 8 points equally spaced in the span of a step. This is a byproduct of a control of the residual at these points. A new solver based on this pair, odevr7, not only has a very much stronger control of error than the recommended Matlab solver ode45, but competes well at modest tolerances and is notably more efficient at stringent tolerances.
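As a small illustration of the kind of vectorization such methods exploit (this is only the general idea, not the interface of odevr7 itself): an ODE function coded in vectorized form returns derivative values at several arguments in a single call.

    % Illustrative sketch only, not the odevr7 interface: a vectorized ODE
    % function evaluates f at several arguments in one call, which is what
    % block and residual-control methods exploit in Matlab.
    f = @(t,y) [ y(2,:); -sin(y(1,:)) ];   % pendulum equations, vectorized
    t = linspace(0,1,8);                   % 8 points spanning a step
    Y = [ cos(t); -sin(t) ];               % approximate solution values there
    F = f(t,Y);                            % one call returns all 8 derivative columns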

A preliminary version of a paper on this work, the solver itself, and two sets of test problems due to Krogh and Hull et al. are available in


Vectorized Solution of ODEs in Matlab

Vectorization is very important to the efficiency of computation in the popular problem-solving environment Matlab. It is shown that a class of Runge-Kutta (RK) methods investigated by Milne and Rosser that compute a block of new values at each step are well-suited to vectorization. It is shown how to obtain local error estimates and continuous extensions for explicit block RK methods. A (7,8) pair is derived and implemented in a program BV78 that is shown to perform quite well when compared to the well-known Matlab ODE solver, ode45, which is based on a (4,5) pair.  A preliminary version of a paper on this work is available here.

Krogh proposed a small set of test problems carefully chosen to illuminate the behavior of programs for solving non-stiff IVPs for ODEs. We have used them to compare BV78 and ode45 numerically. The test programs and BV78 itself are available as


Stability of the Leapfrog/Midpoint Method

Here is the abstract of a short communication on this topic:

The leapfrog method is popular because of its good stability when solving partial differential equations with oscillatory solutions. ``It has the disadvantage that the solution at odd time steps tends to drift farther and farther from the solution for even time steps, so it is common to stop the integration every twenty time steps or so and reinitialize with the first order forward Euler method ... ''. We prove that restarting in this way results in a method that is not stable. We further show that if the step size is not too big, perturbations grow so slowly that the computations are stable enough for practical purposes. The leapfrog method is not dissipative, but we show that restarting results in a method with a useful amount of dissipation. We also show that Gragg's smoothing scheme improves the stability of the method.
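A minimal sketch (not taken from the paper) of the scheme discussed there: leapfrog time stepping for u_t + u_x = 0 on a periodic grid, restarted with forward Euler every twenty steps.

    % Leapfrog for u_t + u_x = 0 with periodic boundary conditions,
    % restarted with forward Euler every 20 steps.
    m  = 100;  dx = 1/m;  x = (0:m-1)'*dx;
    dt = 0.5*dx;                          % well inside the CFL limit
    D  = @(u) (circshift(u,-1) - circshift(u,1))/(2*dx);   % central difference in x
    uold = sin(2*pi*x);                   % u at step n-1
    unew = uold - dt*D(uold);             % first step by forward Euler
    for n = 2:2000
        if mod(n,20) == 0
            utmp = unew - dt*D(unew);     % restart: forward Euler step
        else
            utmp = uold - 2*dt*D(unew);   % leapfrog step
        end
        uold = unew;  unew = utmp;
    end
    plot(x,unew,x,sin(2*pi*(x - 2000*dt)),'o')   % compare with the exact solution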

Delay Differential Equations (DDEs)

For the I Coloquio de Ecuaciones Diferenciales y sus Aplicaciones to be held at the Universidad Autonoma de Yucatan, I have prepared two lectures on DDEs, with special emphasis on solving DDEs in MATLAB. The first lecture presents basic material for (retarded) DDEs with constant lags and their solution using dde23. The second takes up more difficult problems with time- and state-dependent lags as well as singular problems with a vanishing lag. These problems can be solved with ddesd. DDEs that include derivatives with delayed arguments are said to be of "neutral" type. Solving such problems is an active research area. A new approach and a Matlab program ddeNsd are briefly discussed.
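As a small example of the constant-lag case treated in the first lecture (essentially the classic problem of Wille and Baker, not a program from the lectures themselves):

    % y1'(t) = y1(t-1), y2'(t) = y1(t-1) + y2(t-0.2), y3'(t) = y2(t)
    lags = [1, 0.2];
    ddefun = @(t,y,Z) [ Z(1,1);
                        Z(1,1) + Z(2,2);
                        y(2) ];           % Z(:,j) holds y(t - lags(j))
    history = ones(3,1);                  % constant history for t <= 0
    sol = dde23(ddefun, lags, history, [0, 5]);
    plot(sol.x, sol.y)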

The talks involve a great many overlays, so here are versions without overlays for reading: Read1.pdf, Read2.pdf

DDEs of Neutral Type

ddeNsd is a Matlab program to solve delay differential equations (DDEs) of neutral type, which is to say that the differential equations may involve derivatives of the solution with delayed arguments.  These problems are difficult because the usual discontinuity in the first derivative at the initial point propagates throughout the interval of integration at a spacing determined by the delays.  Because DDEs of retarded type, which do not involve delayed derivatives of the solution, are so much easier to solve, a method has been devised that solves equations of neutral type by approximating them with equations of retarded type.  We describe the approach as ``dissipative'' because it is reminiscent of using artificial dissipation to deal with shocks when solving hyperbolic partial differential equations.  In the present context jumps occur in the first derivative rather than in the solution itself, but the approach is similar in that we approximate the given problem with another that is easier to solve because its solution is smoother.

The ddeNsd program is a driver for the ddesd program of Matlab itself, and the user interface is almost identical.  The file contains a preliminary version of a paper about the method, the ddeNsd program, and a large collection of example programs.  (A bug in the example P2p3p2.m was corrected 8/25/08.)  To make the solver easy to use, a quantity controlling the amount of dissipation has been hard-coded in the program.  This limits the solver to moderate accuracies, but that is a natural restriction anyway in view of the low order of the numerical method of ddesd.  Like the underlying program ddesd, the new program controls the size of the residual of the numerical solution.  Here is the abstract of the paper:

We approximate the solution of a neutral DDE with the solution of a retarded DDE to exploit the fact that retarded DDEs are much easier to solve numerically. After demonstrating the validity of the approach, we develop a Matlab program, ddeNsd, that solves DDEs of neutral type.  The new approach and a simple user interface make it easy to solve a wide range of test problems from the literature to moderate accuracy.


EZ versions of DDE23 and DDESD

The MATLAB programs DDE23 and DDESD solve delay differential equations (DDEs).  They require that the DDEs be coded to accept delayed terms in a compact form. Some users find this form to be confusing, so programs EZDDE23 and EZDDESD were written that have a different syntax for the equations. These EZ versions are simply drivers that allow a user to code the DDEs in a way that some find more natural. Here is the current syntax as taken from the prolog to DDE23:

DDE23 Solve delay differential equations (DDEs) with constant delays.
    SOL = DDE23(DDEFUN,LAGS,HISTORY,TSPAN) integrates a system of DDEs
    y'(t) = f(t,y(t),y(t - tau_1),...,y(t - tau_k)). The constant, positive
    delays tau_1,...,tau_k are input as the vector LAGS. DDEFUN is a function
    handle. DDEFUN(T,Y,Z) must return a column vector corresponding to
    f(t,y(t),y(t - tau_1),...,y(t - tau_k)). In the call to DDEFUN, a scalar T
    is the current t, a column vector Y approximates y(t), and a column Z(:,j)
    approximates y(t - tau_j) for delay tau_j = LAGS(J).

Here is the syntax as stated in the prolog to EZDDE23:

EZDDE23 Solve delay differential equations (DDEs) with constant delays.
    SOL = EZDDE23(DDEFUN,LAGS,HISTORY,TSPAN) integrates a system of DDEs
    y'(t) = f(t,y(t),y(t - tau_1),...,y(t - tau_k)). The constant, positive
    delays tau_1,...,tau_k are input as the vector LAGS. DDEFUN is a function
    handle. DDEFUN(T,Y,YLAG1,YLAG2,...,YLAGK) must return a column vector
    corresponding to f(t,y(t),y(t - tau_1),...,y(t - tau_k)). In the call to
    DDEFUN, a scalar T is the current t, a column vector Y approximates y(t),
    and a column vector YLAGJ approximates y(t - tau_j) for delay
    tau_j = LAGS(J).

The file contains the two drivers and three example programs, EZDDEX1, EZDDEX2, and EZDDEX3.  The example programs are the examples DDEX1, DDEX2, and DDEX3 of DDE23 and DDESD coded to use the different syntax for the differential equations.  The many example programs provided in MFILES6.ZIP with the DDE Tutorial of Matlab have also been recoded for EZDDE23.  (They work with DDESD as well.)  This is not just a matter of changing to the new syntax for the equations.  Since MFILES6.ZIP was assembled, the functionality of DDEVAL has been included in the built-in function DEVAL, making DDEVAL obsolete.  A number of the programs were also recoded to pass parameters to nested functions instead of through the various call lists.  The new versions are available in
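For a scalar equation y'(t) = -y(t - 1) the two codings look like this (an assumed sketch, not one of the supplied examples; the lowercase call name ezdde23 is assumed):

    % DDE23 syntax: delayed values arrive as the columns of a single matrix Z.
    sol1 = dde23(@(t,y,Z) -Z(1,1), 1, 1, [0, 10]);
    % EZDDE23 syntax: each delayed solution is a separate argument.
    sol2 = ezdde23(@(t,y,ylag1) -ylag1(1), 1, 1, [0, 10]);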

Quadrature in 2D

TwoD, a Matlab program to compute integrals over plane regions, a large collection of examples, and a paper describing the algorithms and the performance of the program are available in TwoD.  At present Matlab has only one program for the task, dblquad.  TwoD integrates over a much larger class of regions, is much more reliable, and uses many fewer vector evaluations of the integrand. Here is an abstract for the paper, which is to appear in Applied Mathematics and Computation:

We discuss here the algorithms of TwoD, a Matlab program for approximating integrals over generalized rectangles and sectors. Capabilities of the language are exploited to make TwoD very easy to use. Vectorization is exploited by a novel algorithm to make the program efficient and reliable. TwoD provides for moderate singularities on boundaries of the region.
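A hypothetical sketch of a call follows. The syntax is assumed here to have the form Q = TwoD(f,a,b,c,d) with c and d either constants or functions of x; the actual syntax should be taken from the prolog in the TwoD package.

    % Hypothetical usage sketch only; consult the TwoD prolog for the real syntax.
    f = @(x,y) ones(size(x));            % vectorized integrand
    Q = TwoD(f, 0, 1, 0, @(x) 1 - x);    % area of a triangle, exactly 1/2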


Skip Thompson of Radford University and I have written two articles for Scholarpedia, a peer-reviewed online encyclopedia.  One is on Initial Value Problems and the other is on Stiff Systems.  A particularly nice part of the second article is the display Skip put together showing the numerical solution of a combustion model due to Bob O'Malley computed with two codes, one intended for stiff systems and one for non-stiff systems.  Though chosen to run in a reasonable amount of time with a non-stiff solver, this example is a nice one because it is non-stiff in part of the interval and stiff in the rest; the distinction is clear as the results are computed and displayed.

Integral Equations

Ken Atkinson of the Department of Mathematics at the University of Iowa and I have written mathematical software that solves a class of integral equations.  It greatly expands the kinds of kernels allowed by his pioneering FORTRAN 77 solver and, by exploiting Matlab, is much easier to use. There is a paper, Solving Fredholm Integral Equations of the Second Kind in Matlab, that is to appear in the ACM Transactions on Mathematical Software.  It has the abstract

We present here the algorithms and user interface of a Matlab program, Fie, that solves numerically Fredholm integral equations of the second kind on an interval [a,b] to a specified, modest accuracy. The kernel function K(s,t) is to be moderately smooth on [a,b] x [a,b] except possibly across the diagonal s = t. If the interval is finite, Fie provides for kernel functions that behave in a variety of ways across the diagonal, viz. K(s,t) may be smooth, have a discontinuity in a low-order derivative, have a logarithmic singularity, or have an algebraic singularity. Fie also solves a large class of integral equations with moderately smooth kernel function on [0,infinity).
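For reference, a Fredholm integral equation of the second kind on [a,b] has the form

    lambda*x(s) - integral from a to b of K(s,t)*x(t) dt = f(s),    a <= s <= b,

where lambda, the kernel K, and the right-hand side f are given and the function x is the unknown.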

A preliminary version of the paper can be downloaded from Fie_paper.pdf.  The program and a good many examples can be downloaded from Fie.package.

Accurate Numerical Derivatives in Matlab

Complex step differentiation (CSD) is a technique for computing very accurate numerical derivatives in languages that support complex arithmetic. We describe here the development of a CSD package in Matlab called PMAD. We have extended work done in other languages for scalars to the arrays that are fundamental to Matlab. This extension raises many new questions that we have been able to resolve in a satisfactory way. Our goal has been to make it as easy as possible to compute approximate Jacobians in Matlab that are all but exact. Although PMAD has a fast option for the expert that implements CSD as in previous work, the default is an object-oriented implementation that asks very little of the user. 
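The idea of CSD itself is easy to state (this sketch illustrates the technique, not the PMAD interface): for a real analytic function f evaluated in complex arithmetic, f'(x) is approximately imag(f(x + i*h))/h, and because there is no subtractive cancellation, h can be taken extremely small.

    % Complex step differentiation of a standard test function (Squire & Trapp).
    f = @(x) exp(x)./(cos(x).^3 + sin(x).^3);
    x = pi/4;  h = 1e-20;
    dcsd = imag(f(x + 1i*h))/h;           % accurate to essentially full precision
    dfd  = (f(x + 1e-8) - f(x))/1e-8;     % a forward difference loses about half the digits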

An example of Xiao Huang of Kennesaw State University is illuminating.  The function to be differentiated involves an evaluation of "besseli(nu,z)".  The Matlab implementation of the Bessel I function allows complex "z", so CSD can be used to differentiate with respect to this argument, and PMAD provides for differentiation of the Bessel functions with respect to it.  However, the Matlab implementation requires "nu" to be real, so CSD cannot be used to differentiate with respect to that argument.

The file contains a PDF file with a preliminary version of the paper Accurate Numerical Derivatives in Matlab, ACM Trans. Math. Software, 33 (2007) 26-42, the PMAD package (version of March 2008), a README file with instructions for installing the package, and some example programs.  Corrections/improvements and new functions are welcome.  To make this easier, the programs are available in this version as M-files.

Correspondence with Justin Kao of Northwestern University led to changes made in January 2008.  He pointed out the value of "end" for array indexing--not providing for this was an oversight.  The "size" function was corrected/improved and several new functions were added.  One is "transpose", which raises an important matter in using PMAD--the CSD technique is basically limited to real functions evaluated in real (double precision) arithmetic.  As the paper explains, this requires that transposition by means of a prime be altered to perform the non-conjugate transpose.  Accordingly, the function "transpose" does the same thing as a prime when using PMAD.
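As a reminder of the distinction in ordinary Matlab (nothing PMAD-specific in this snippet):

    A = [1+2i 3; 4 5i];
    B = A';                 % conjugate transpose (ctranspose)
    C = A.';                % non-conjugate transpose (transpose)
    isequal(B, conj(C))     % true: a prime conjugates, a dot-prime does not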

PMAD_DfDy evaluates the partial derivative of a function f(x,y), but often you will want a function Jac(xa,ya) that evaluates the partial derivatives of f(x,y) with respect to y at (xa,ya).  This can be accomplished easily with an appropriate anonymous function using f(x,y) and PMAD_DfDy as illustrated in Xfirst.  The form f(y) is handled in a similar way.  Xfirst was added in March 2008 as a straightforward application of PMAD that is also useful as a minimal check on the installation of the package.

Because the backslash operator applies to matrices, the original version of PMAD was able to differentiate the solutions of linear systems.  However, Mohsen Bouaissa of Laval University (Quebec) had an example for which it is convenient to use the inverse of a matrix.  A virtue of CSD is that it is very easy to add to PMAD any real function for which Matlab allows complex arguments, so "inv(M)" was added.  (This allows you to differentiate a numerical approximation to an inverse matrix.)  At the same time, the LU decomposition was added, though not all options in "lu" were implemented:  PMAD now allows "[L,U] = lu(M)" and "[L,U,P] = lu(M)".  These changes were made in March 2008.  It is worth remarking that PMAD differentiates a vector function of a vector argument; the package does not provide for matrix functions.


A paper on a quadrature program called quadva with abstract

Adaptive quadrature codes process a collection of subintervals one at a time. We show how to process them all simultaneously and so exploit vectorization and the use of fast built-in functions and array operations that are so important to efficient computation in Matlab.  Using algebraic transformations we have made it just as easy for users to solve problems on infinite intervals and with moderate end point singularities as problems with finite intervals and smooth integrands. Piecewise-smooth integrands are handled effectively with breakpoints.

was published in revised form in the Journal of Computational and Applied Mathematics 211 (2008) 131-140.  In collaboration with M. Hosea of The MathWorks, quadva was improved, renamed quadgk, and added to Matlab in release R2007b.  Among the improvements is the ability to integrate easily over polygonal paths in the complex plane.  Programs that use quadva should be changed to use quadgk.  Though the syntax of quadgk is a little different from that of quadva, conversion is very easy.
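A few illustrative calls to quadgk (not taken from the paper) showing the capabilities mentioned above:

    q1 = quadgk(@(x) exp(-x.^2), -Inf, Inf);       % infinite interval, sqrt(pi)
    q2 = quadgk(@(x) log(x)./sqrt(x), 0, 1);       % endpoint singularity, exactly -4
    % Integration along a polygonal path in the complex plane using waypoints:
    q3 = quadgk(@(z) 1./(2*z - 1), -1-1i, -1-1i, 'Waypoints', [1-1i, 1+1i, -1+1i]);
    % q3 approximates pi*1i because the contour encloses the pole at z = 1/2.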


Moving Averages of Solutions of ODEs

Skip Thompson of Radford University and I have written a paper on the topic of this section.  The paper, Fortran 90 and Matlab programs to compute moving averages, and some examples are available here.  In revised form the paper has appeared in Applied Mathematics and Computation 193 (2007) 175-182.  Here is an abstract for the paper:

Moving averages of the solution of an initial value problem for a system of ordinary differential equations are used to extract the general behavior of the solution without following it in detail. They can be computed directly by solving delay differential equations. Because they vary much less rapidly and are smoother, they are easier to compute.
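The connection with DDEs is easy to see: the moving average A(t) = (1/T) * integral of y(s) over [t-T, t] satisfies A'(t) = (y(t) - y(t-T))/T, a DDE with the constant lag T.  Here is a minimal sketch, not taken from the paper, that computes a moving average along with the solution using dde23:

    T = 2*pi;                              % averaging window (arbitrary choice)
    f = @(t,y) -0.1*y + sin(5*t);          % an example scalar ODE, not from the paper
    ddefun = @(t,u,Z) [ f(t,u(1));
                        (u(1) - Z(1,1))/T ];   % u = [y; A], Z(:,1) = u(t - T)
    sol = dde23(ddefun, T, [1; 1], [0, 50]);   % constant history for t <= 0
    plot(sol.x, sol.y)                     % the average A is meaningful once t > T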

Boundary Value Problems for ODEs

Jacek Kierzenka of The MathWorks, Inc. and I wrote a program called bvp5c to solve BVPs for ODEs.  The program has been added to Matlab.  The algorithms and software are presented in J. Kierzenka and L.F. Shampine, A BVP Solver that Controls Residual and Error, JNAIAM 3 (2008) 27-41, which appeared in a special issue of that journal devoted to problem-solving environments.  A preliminary version of the paper is available here.  Here is an abstract for the paper:

Programs for the numerical solution of boundary value problems (BVPs) for ordinary differential equations generally try to control the true error. BVP solvers require users to provide guesses for the desired solution and a mesh that reveals its behavior. Because these guesses may be poor, the asymptotic approximations that justify error estimates may not be very good, or perhaps not even valid. This is of particular concern when a user asks for only modest accuracy. A more robust approach is to control a residual in the numerical solution because it is possible to estimate it reliably, no matter how bad the guesses. This is the approach taken in bvp4c, the Matlab BVP solver. Unfortunately control of the residual controls only indirectly the true error that is of most interest to users. Nevertheless, we show that for some methods, a control of a kind of residual controls the true error directly. Our investigation has resulted in a new BVP solver called bvp5c. It is used exactly like bvp4c and has some advantages in addition to a robust and more natural control of error.
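A minimal sketch of a call, using the familiar bvp4c example problem y'' = -|y|, y(0) = 0, y(4) = -2; bvp5c is called in exactly the same way as bvp4c:

    odefun = @(x,y) [y(2); -abs(y(1))];          % y'' = -|y| as a first order system
    bcfun  = @(ya,yb) [ya(1); yb(1) + 2];        % y(0) = 0, y(4) = -2
    solinit = bvpinit(linspace(0,4,5), [1; 0]);  % crude mesh and guess
    sol = bvp5c(odefun, bcfun, solinit);
    plot(sol.x, sol.y(1,:))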

Here is a talk on this material for a CAIMS-Canadian Mathematical Society meeting in Montreal.

Paul Muir of St. Mary's University, Halifax, and I have written A User-Friendly Fortran BVP Solver.  A paper about this development, the Fortran 90 code, and selected examples are available here.  Here is an abstract for the paper:

MIRKDC is a FORTRAN 77 code widely used to solve boundary value problems (BVPs) for ordinary differential equations (ODEs). A significant issue with the use of this package and other similar packages is that the user interface is rather complicated; indeed, so complicated that many potential users of such packages are reluctant to invest the time needed to learn how to use them properly. We have applied our experience in writing user interfaces for ODE solvers in Matlab and Fortran 90/95 to the development of a user-friendly Fortran 90/95 BVP solver, developed from an extensive modification of MIRKDC. In the course of developing a completely new user interface, we have added significantly to the algorithmic capabilities of MIRKDC. In particular, the new solver, BVP_SOLVER, extends the problem class of MIRKDC to problems with unknown parameters and problems with ODEs having a singular coefficient and uses more effective Runge-Kutta formulas and continuous extensions. We have also written a number of auxiliary routines to provide further convenience to users of this package such as routines for saving and retrieving solution information and routines for extending the problem domain, which are useful in the context of parameter continuation computations.

Hyperbolic PDEs

We have developed software in MATLAB to solve initial-boundary value problems for first order systems of hyperbolic partial differential equations (PDEs) in one space variable x and time t. We allow PDEs of three general forms and we allow general boundary conditions. Solving PDEs of this generality is not routine and the success of our software is not assured. On the other hand, it is very easy to use and has performed well on a wide variety of problems.  Provided here is a manuscript about this work and the solver itself.  A revised version of the manuscript has appeared as Solving Hyperbolic PDEs in Matlab, Appl. Numer. Anal. & Comput. Math., 2 (2005) 346-358.  Accompanying the solver are many examples.


IRKC: an IMEX Solver for Stiff Diffusion-Reaction PDEs

B.P. Sommeijer and J.G. Verwer of the Center for Mathematics and Computer Science (CWI), Amsterdam, and I have collaborated on a paper with the title of this section that is to appear in the Journal of Computational and Applied Mathematics.  Here is an abstract for the paper:

The Fortran 90 code IRKC is intended for the time integration of systems of partial differential equations (PDEs) of diffusion-reaction type for which the reaction Jacobian has real (negative) eigenvalues. It is based on a family of implicit-explicit Runge--Kutta--Chebyshev methods which are unconditionally stable for reaction terms and which impose a stability constraint associated with the diffusion terms that is quadratic in the number of stages. Special properties of the family make it possible for the code to select at each step the most efficient stable method as well as the most efficient step size. Moreover, they make it possible to apply the methods using just a few vectors of storage. A further step towards minimal storage requirements and optimal efficiency is achieved by exploiting the fact that the implicit terms, originating from the stiff reactions, are not coupled over the spatial grid points. Hence, the systems to be solved have a small dimension (viz., equal to the number of PDEs). These characteristics of the code make it especially attractive for problems in several spatial variables. IRKC is a successor to the RKC code (J. Comput. Appl. Math. 88 (1997), pp. 315--326) that solves similar problems without stiff reaction terms.

The Fortran 90 code for IRKC as well as for the accompanying examples can be obtained by anonymous ftp. IRKC can also be downloaded from netlib (send irkc.f90 from ode). A preliminary version of the paper is available here.

Numerical/Analytical Methods

For some time I have been interested in combining numerical and analytical methods for the effective solution of ODE problems.  An early result of this interest was an ODE solver for Maple that I wrote with Rob Corless.  It is now a part of the Maple problem solving environment.  The code actually implements two solvers, an explicit Runge-Kutta method for non-stiff problems and a Rosenbrock method for stiff problems.  Although we originally intended to use AD (automatic differentiation) to evaluate Jacobians, we ended up using Maple tools inside the solver to construct a procedure for evaluating the Jacobian analytically from a procedure for evaluating the differential equations.  This work is described in Lawrence F. Shampine and Robert M. Corless, "Initial value problems for ODEs in problem solving environments", J. Comp. Appl. Math., 125 (2000), 31-40.  A preprint version is available here.

Shaun Forth has developed a nice AD package, called MAD, for the Matlab problem solving environment.  It underlies the AMOR Report 2003/04, "Using AD to solve BVPs in MATLAB", by L. F. Shampine, Robert Ketzscher, and Shaun A. Forth.  An abstract follows:

The MATLAB program bvp4c solves two-point boundary value problems (BVPs) of considerable generality. The numerical method requires partial derivatives of several kinds. To make solving BVPs as easy as possible, the default in bvp4c is to approximate these derivatives with finite differences. The solver is more robust and efficient if analytical derivatives are supplied. In this paper we investigate how to use automatic differentiation (AD) to obtain the advantages of analytical derivatives without giving up the convenience of finite differences. In bvp4cAD we have approached this ideal by a careful use of the MAD AD tool and some modification of bvp4c.
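To see where analytical partial derivatives enter, here is a sketch (not bvp4cAD itself) that supplies them to bvp4c by hand for Bratu's problem y'' + exp(y) = 0, y(0) = y(1) = 0; AD would generate functions like fjac and bcjac automatically:

    odefun = @(x,y) [y(2); -exp(y(1))];
    bcfun  = @(ya,yb) [ya(1); yb(1)];
    fjac   = @(x,y) [0, 1; -exp(y(1)), 0];               % partial derivatives of odefun
    bcjac  = @(ya,yb) deal([1 0; 0 0], [0 0; 1 0]);      % partials of bcfun w.r.t. ya, yb
    options = bvpset('FJacobian', fjac, 'BCJacobian', bcjac);
    solinit = bvpinit(linspace(0,1,5), [0.1; 0]);
    sol = bvp4c(odefun, bcfun, solinit, options);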

MSWtalk.pdf        MSWtalk.tex

Bug fixes in Matlab solvers

Robert Piche of Tampere University of Technology, Finland, discovered a bug in odezero.  Jacek Kierzenka of The MathWorks fixed the bug, but the change implies a change in dde23, too.  The revised programs are available here.  The bug is present in the odezero of v. 7.0 (R14) and v. 7.01 (R14 SP1).  These corrected programs may not work with earlier versions of Matlab.  They will be added to Matlab in R14 SP2.

Solving ODEs in Matlab

is the name of a book by L.F. Shampine, I. Gladwell, and S. Thompson published in 2003 by Cambridge University Press.  It is a text for a one-semester course for upper-level undergraduates and beginning graduate students in engineering, science, and mathematics. Prerequisites are a first course in the theory of ODEs and a survey course in numerical analysis, in addition to some programming experience, preferably in MATLAB, and knowledge of elementary matrix theory. Professionals will also find that this useful concise reference contains reviews of technical issues and realistic and detailed examples. The programs for the examples are supplied on the support page and can serve as templates for solving other problems. Each chapter begins with a discussion of the "facts of life" for the problem, mainly by means of examples. Numerical methods for the problem are then developed, but only those methods most widely used. The treatment of each method is brief and technical issues are minimized, but all the issues important in practice and for understanding the codes are discussed. The last part of each chapter is a tutorial that shows how to solve problems by means of small, but realistic, examples.

The support page also has information for instructors.  In particular, an instructor can learn there how to obtain solutions to all the exercises, including M-files.

Delay Differential Equations

For some time I have been developing methods and software for delay differential equations, DDEs.  The first product of this work was a code, dde23, developed in collaboration with Skip Thompson that is now a part of MATLAB.  Associated papers can be found here.  This code solves only retarded DDEs with constant lags.  This is a restricted class of DDEs, but probably the most common. 

Subsequently I developed the ddesd code for a bigger class of DDEs.  Because it is based on an entirely different approach, it can serve as an alternative to dde23, but it was written to solve problems with time- and state-dependent delays.  It is now part of Matlab. An abstract for the paper "Solving ODEs and DDEs with residual control", Appl. Numer. Math. 52 (2005) 113-126 follows:

We first consider the numerical integration of ordinary differential equations (ODEs) with Runge-Kutta methods that have continuous extensions. For some methods of this kind we develop robust and inexpensive estimates of both the local error and the size of the residual. We then develop an effective program, ddesd, to solve delay differential equations (DDEs) with time- and state-dependent delays. To get reliable results for these difficult problems, the code estimates and controls the size of the residual. The user interface of ddesd makes it easy to formulate and solve DDEs, even those with complications like event location and restarts.
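A minimal sketch (illustrative, not from the paper) of ddesd applied to a scalar DDE with a time-dependent lag; the delays function returns the delayed arguments t - tau(t) themselves:

    ddefun = @(t,y,Z) -Z(1,1);                  % y'(t) = -y(t - tau(t))
    delays = @(t,y) t - (1 + 0.5*sin(t));       % tau(t) = 1 + 0.5*sin(t) > 0
    sol = ddesd(ddefun, delays, 1, [0, 20]);    % constant history y = 1 for t <= 0
    plot(sol.x, sol.y)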

The Fortran 90 DDE solver, many examples, and a preprint are now available here. An abstract of the paper "A Friendly Fortran DDE Solver", Appl. Numer. Math. 56 (2006) 503-516 by S. Thompson and L.F. Shampine follows:

DKLAG6 is a FORTRAN 77 code widely used to solve delay differential equations (DDEs). Like all the popular Fortran DDE solvers, it seems formidable to new users and in many respects it is not easy to use. We have applied our experience writing DDE solvers in Matlab and the capabilities of Fortran 90 to the development of a friendly Fortran DDE solver. How we accomplished this may be of interest to others who would like to modernize legacy code. In the course of developing a completely new user interface, we have added significantly to the capabilities of DKLAG6.

Pascal Gahinet of The MathWorks and I considered the solution of a class of neutral DDEs that arise in an important application.  A preprint of the paper "DDAEs in Control Theory", Appl. Numer. Math. 56 (2006) 574-588 by L.F. Shampine and Pascal Gahinet is available and the abstract follows:

Linear time invariant systems are basic models in control theory. When they are generalized to include state delays, the resulting models are described by a system of DDAEs, delay-differential-algebraic equations. We exploit the special properties of the models that arise in our control theory application to solve these difficult problems with enough accuracy and enough speed for design purposes.

Some Papers

The Lax-Friedrichs method is not dissipative, but we show that a variant is dissipative of order two.  It also damps middle frequencies less than the usual form.  It is very convenient to code this variant together with Richtmyer's variant of the Lax-Wendroff method.  This was done in the solver described in the section above on hyperbolic partial differential equations.  A preprint of  "Two-Step Lax-Friedrichs Method" is available here.  The paper has now been published in Applied Mathematics Letters 18 (2005) 1134-1136.

The paper "Error Estimation and Control for ODEs" is to appear in a special volume of the Journal of Scientific Computing dedicated to the AFOSR workshop entitled "Advances and Challenges in Time-Integration of PDE's".  A preprint is available here.  An abstract follows:

This article is about the numerical solution of initial value problems for systems of ordinary differential equations (ODEs). At first these problems were solved with a fixed method and constant step size, but nowadays the general-purpose codes vary the step size, and possibly the method, as the integration proceeds. Estimating and controlling some measure of error by variation of step size/method inspires some confidence in the numerical solution and makes possible the solution of hard problems. Common ways of doing this are explained briefly in the article.

The paper "Singular boundary value problems for ODEs" that appeared in Appl. Math. Comp., 138 (2003) 99-112 describes an extension to the bvp4c code.  A preprint version is available here.  An abstract follows:

This paper is concerned with the numerical solution of a system of ordinary differential equations (ODEs),  y' = S y/t + f(t,y,p), on an interval [0,b] subject to boundary conditions  0 =  g(y(0),y(b),p). The ODEs have a coefficient that is singular at t = 0, but it is assumed that the boundary value problem (BVP) has a smooth solution. Some popular methods for BVPs evaluate the ODEs at t = 0. This paper deals with the practical issues of solving this class of singular BVPs with such a method. The bvp4c solver of Matlab has been modified accordingly so that it can solve a class of singular BVPs as effectively as it previously solved non-singular BVPs.
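A minimal sketch of the facility: the Emden equation y'' + (2/t) y' + y^5 = 0 on [0,1] with y'(0) = 0 and y(1) = sqrt(3)/2 has the form y' = S*y/t + f(t,y) with S = [0 0; 0 -2], and the singular term is passed to bvp4c through an option:

    S = [0 0; 0 -2];
    options = bvpset('SingularTerm', S);
    odefun = @(t,y) [y(2); -y(1)^5];             % the f(t,y) part only
    bcfun  = @(ya,yb) [ya(2); yb(1) - sqrt(3)/2];
    solinit = bvpinit(linspace(0,1,5), [sqrt(3)/2; 0]);
    sol = bvp4c(odefun, bcfun, solinit, options);
    % The exact solution is y(t) = 1/sqrt(1 + t^2/3).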

In the paper of Budd, Koch, and Weinmueller, "From Nonlinear PDEs to Singular ODEs", for the proceedings of the Volterra meeting in Tempe, a hard BVP is solved using some new software. To make the point that standard software is not suitable for this problem with singularities at both ends of the interval, an attempt was made to solve the problem with bvp4c. Not surprisingly, this was not successful. However, we show in a note that if the program is applied skillfully following the guidance of the text "Solving ODEs in Matlab" for handling the singularities, it is possible to solve the BVP with bvp4c.  An M-file for the example of the note is available here.

The paper "Solving 0 = F(t,y(t),y'(t)) in Matlab" that appeared in J. Numer. Math., 10 (2002) 291-310 describes a solver that is to appear in the next release of Matlab.  A preprint version is available here.  An abstract follows:

Important algorithms and design decisions of a new code, ode15i, for solving 0 = F(t,y(t),y'(t)) are presented. They were developed to exploit Matlab, a popular problem solving environment (PSE). Codes widely-used in general scientific computation (GSC) compute consistent initial conditions only for restricted forms of the differential equations. ode15i does this for the general problem, a task that is qualitatively different. Unlike popular codes in GSC, ode15i saves partial derivatives because this is more efficient in the PSE. A new representation of fixed leading coefficient BDFs is based on the Lagrangian form of polynomial interpolation. Basic computations are both clear and efficient in the PSE when using this form. Some numerical experiments show that ode15i is an effective way to solve fully implicit ODEs and DAEs of index 1 in Matlab.
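A minimal sketch of ode15i in use on Weissinger's implicit ODE, whose exact solution is y(t) = sqrt(t^2 + 1/2); decic computes consistent initial conditions, here holding y(1) = sqrt(3/2) fixed:

    fun = @(t,y,yp) t*y^2*yp^3 - y^3*yp^2 + t*(t^2 + 1)*yp - t^2*y;
    t0 = 1;  y0 = sqrt(3/2);  yp0 = 0;           % crude guess for y'(1)
    [y0, yp0] = decic(fun, t0, y0, 1, yp0, 0);   % fix y0, compute a consistent yp0
    [t, y] = ode15i(fun, [1, 10], y0, yp0);
    plot(t, y, t, sqrt(t.^2 + 0.5), 'o')         % compare with the exact solution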