# Stiff delay equations

Nicola Guglielmi and Ernst Hairer (2007), Scholarpedia, 2(11):2850. doi:10.4249/scholarpedia.2850, revision #91824

Curator: Nicola Guglielmi

A system of delay differential equations in a quite general form is given by $\tag{1} \left\{ \begin{array}{rcl} M \dot{y}(t) & = & f\Big(t,y(t), y \big( \alpha(t,y(t)) \big) \Big), \quad t_{0} \leq t\leq t_f \\ y(t) & = & \phi(t), \quad t < t_{0}, \qquad y(t_0) = y_0 \end{array} \right.$

where $$y(t) \in \R^d\ ,$$ $$M$$ is a constant square matrix, $$f$$ a real vector function, $$\phi$$ a given initial function, and $$y_0$$ a given initial vector. The deviating argument $$\alpha(t,y(t))$$ is assumed to be bounded above by $$t\ .$$ The corresponding delay is given by the non-negative function $$\tau (t,y(t))= t - \alpha(t,y(t)).$$ All statements of this article extend to the case of several deviating arguments. If the matrix $$M$$ is the identity, the problem is said to be explicit (see Delay Differential Equations), while in all other cases it is said to be in implicit form.

As for ordinary differential equations (see Stiff Systems), a rigorous mathematical definition of stiffness is not possible. Stiffness is characterized by phenomena such as strong contractivity of neighbouring solutions, multiple time scales (fast transient phases), and the inability of explicit numerical integrators to reproduce a correct approximation of the solution in an efficient way. A general reference for the numerical treatment of delay equations (including stiff problems) is the book by Bellen and Zennaro (2003); for stiff ordinary differential equations, see the book by Hairer and Wanner (1996).

## Stability of linear delay equations

Much insight can be gained by considering linear delay equations with constant delay, whose exact solution is well understood.

#### The scalar case

It is common to start by studying the scalar equation $\tag{2} \dot y (t) = a y(t) + b y(t-1) ,$

where $$a$$ and $$b$$ are real (or complex) numbers. Without loss of generality the delay can be assumed to be $$\tau = 1\ ,$$ because any constant delay can be reduced to it by a linear time transformation. Searching for solutions of the form $$y(t)=c e^{\lambda t}\ ,$$ one is led to the characteristic equation $\tag{3} \lambda -a -b \, e^{-\lambda} = 0 ,$

which for $$b\ne 0$$ possesses infinitely many solutions. The general solution is then obtained as a sum of such exponentials. The zeros of (3) are plotted in the left picture of Figure 2 for the case $$(a,b)=(0.5,-1)\ .$$ Notice that they all lie in the left half-plane, so that the solution will tend to zero for $$t\to\infty$$ (despite a positive $$a$$).
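The roots of (3) can in fact be enumerated with the Lambert W function: rewriting $$\lambda -a -b\,e^{-\lambda}=0$$ as $$(\lambda -a)\,e^{\lambda -a}=b\,e^{-a}$$ gives $$\lambda = a + W_k(b\,e^{-a})$$ over the integer branches $$k\ .$$ A minimal sketch (using SciPy's `lambertw`) for the pair $$(a,b)=(0.5,-1)$$ discussed above:

```python
import numpy as np
from scipy.special import lambertw

def char_roots(a, b, branches=range(-3, 4)):
    # lambda - a - b e^{-lambda} = 0  <=>  (lambda - a) e^{lambda - a} = b e^{-a},
    # so lambda = a + W_k(b e^{-a}) for every branch k of the Lambert W function
    return [a + lambertw(b * np.exp(-a), k) for k in branches]

roots = char_roots(0.5, -1.0)
for lam in roots:
    assert abs(lam - 0.5 + np.exp(-lam)) < 1e-6   # residual of (3) with (a,b) = (0.5,-1)
    assert lam.real < 0                           # all roots in the left half-plane
```

The assertions confirm numerically what the left picture of Figure 2 shows: for this pair all characteristic roots lie in the left half-plane.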

The white region in the right picture of Figure 2 shows the values of real pairs $$(a,b)$$ for which all solutions of the delay equation tend to zero. The shaded regions correspond to pairs, where the characteristic equation has zeros with positive real part (increasing darkness indicates an increasing number of unstable modes), so that solutions are unstable. The boundary of the stability domain can be obtained by the root locus technique: put $$\lambda = i\theta$$ with real $$\theta\ ,$$ and solve the characteristic equation for $$(a,b)\ .$$

Equation (2) gives the simplest prototype of a stiff problem for large values of $$a<0$$ and $$b\ ,$$ but also for $$(a,b)$$-pairs close to the lower boundary of the stability region (white set in the right picture of Figure 2). The stability restrictions associated with a numerical integration by the forward Euler method are illustrated in Figure 3.

#### Linear systems

The stability of a linear system of delay equations $\tag{4} \dot y (t) = A y(t) + B y(t-1) ,$

(with square matrices $$A$$ and $$B$$) can be analyzed in a similar way. Denoting by $$I$$ the identity matrix, the characteristic equation just has to be replaced by $${\rm det}\,\Big( \lambda I - A - B\,e^{-\lambda} \Big) = 0\ .$$ The main difficulty here is that in general the matrices $$A$$ and $$B$$ do not commute, so that they cannot be transformed simultaneously to a simple (diagonal) form. This is in contrast to the situation for linear ordinary differential equations.

Suitable algorithms for the numerical computation of the characteristic roots of a linear system have been developed recently (see e.g. Engelborghs and Roose, 2002 and Breda, Maset and Vermiglio, 2004).

Such linear systems of delay equations, when the solution is stable and the matrices $$A$$ and/or $$B$$ contain elements of large modulus, are the first candidates for stiff delay equations. They can arise in the space discretization of partial delay differential evolution equations (see e.g. Wu, 1996 and van der Houwen, Sommeijer and Baker, 1986).

## Further examples of stiff delay equations

All situations that can be simulated by stiff ordinary differential equations will sooner or later also lead to the consideration of stiff delay equations (delayed phenomena are often present in the physical problem, but their simulation is suppressed to keep the model as simple as possible).

#### Singularly perturbed problems

An important class of stiff problems are equations in singularly perturbed form: $\begin{array}{rcl} \dot y (t) &=& f\Big( y(t),z(t),y(\alpha (t)),z(\alpha (t))\Big) \\ \varepsilon\,\dot z (t) &=& g\Big( y(t),z(t),y(\alpha (t)),z(\alpha (t))\Big) \end{array}$ where $$0<\varepsilon \ll 1$$ is a very small parameter, and the derivative of $$g$$ with respect to the $$z$$ variables is such that the solutions are stable when $$\varepsilon\to 0\ .$$ Of course, $$\alpha (t)$$ can be replaced by a state-dependent delay. This system is of the form (1) with a matrix $$M={\rm diag}(I,\varepsilon I)\ .$$

#### Differential algebraic equations

Putting formally $$\varepsilon =0$$ in the above singularly perturbed problem, one obtains a delay differential equation coupled with a delay algebraic equation. It is in some sense infinitely stiff. The system is still of the form (1), but the matrix $$M$$ is singular. Any system (1) with singular matrix $$M$$ is called a delay differential algebraic equation. In the absence of delayed terms (see Hairer and Wanner, 1996) such systems are well understood, and they are classified in terms of the index. Most of these results carry over to our situation as long as the delay is bounded away from zero. Applications to control theory are discussed in (Shampine and Gahinet, 2006).

#### Neutral problems

If the right-hand side of a delay equation also depends on the derivative of the solution, it is called neutral. A typical problem is $\dot y (t) = f\Big( y(t),y(\alpha (t)),\dot y (\alpha (t))\Big) .$ Introducing $$z(t)=\dot y(t)$$ as new variable, this neutral equation is seen to be equivalent to the system $\begin{array}{rcl} \dot y (t) &=& z(t) \\ 0 &=& f\Big( y(t),y(\alpha (t)),z(\alpha (t))\Big) - z(t) , \end{array}$ which is a differential algebraic delay equation with matrix $$M={\rm diag}(I,0)\ .$$

Example. Consider the neutral equation analyzed in (Enright and Hayashi, 1997) and (Castleton and Grimm, 1973) $\tag{5} \begin{array}{rcl} \dot y (t) &=& z(t) \\ 0 &=& \cos{(t)} \Big( 1 + y( t y(t)^2 ) + 0.6 y(t) z( t y(t)^2 )\Big) - z(t), \end{array}$

with initial function $$y(t)=-t/2$$ and $$z(t)=-1/2$$ for $$t \le 0.25\ .$$ Notice that at $$t=4.09...$$ a classical solution ceases to exist.

The component $$z(t)$$ is plotted in Figure 1. The blue and red circles indicate simultaneously the values of $$z(t)$$ and $$z( t y(t)^2 )\ ,$$ respectively.

## Breaking points

Discontinuities may occur in various orders of the derivative of the solution, independently of the regularity of the right-hand side. In fact, if either $$y_0 \ne \phi(t_0)$$ or some right-hand derivative of the solution at $$t_0$$ is different from the corresponding left-hand derivative, the discontinuity at $$t_0$$ may propagate along the integration interval by means of the deviating argument $$\alpha(t,y(t))\ .$$ Evidently, as soon as $$\alpha(\xi,y(\xi)) = t_0$$ for some $$\xi \ge t_0\ ,$$ due to the fact that $$y$$ is not regular at $$t_0\ ,$$ $$f\Big(t,y(t),y\big( \alpha (t,y(t))\big) \Big)$$ is not smooth at $$\xi\ .$$ Such discontinuity points are referred to in the literature as breaking points (see for example Bellen and Zennaro, 2003 and Baker, Paul and Willé, 1995).

#### Determination of breaking points

If the deviating arguments do not depend on the solution itself, that is $$\alpha = \alpha(t)\ ,$$ such points may be computed and possibly inserted in advance into a mesh of integration. This allows for decomposing the Cauchy problem (1) into a finite (if the number of breaking points is finite) sequence of regular problems. But in the general state-dependent case this computation is not possible and one has to deal with a truly non-smooth problem.
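In the time-dependent case the breaking points obey the recursion $$\alpha(\xi_{k+1})=\xi_k\ ,$$ which can be solved in advance by standard root-finding. A sketch under the assumption of a hypothetical increasing deviating argument $$\alpha(t)=t-\tau(t)$$ with $$\tau(t)=1+0.5\sin t$$ (so $$\alpha'(t)=1-0.5\cos t>0$$):

```python
import math
from scipy.optimize import brentq

def breaking_points(alpha, t0, t_end, n_max=50):
    # successive breaking points: each new point xi solves alpha(xi) = previous point
    points, zeta = [], t0
    for _ in range(n_max):
        if alpha(t_end) <= zeta:      # no further breaking point before t_end
            break
        # alpha is assumed increasing, so there is exactly one root in (zeta, t_end)
        xi = brentq(lambda t: alpha(t) - zeta, zeta, t_end)
        points.append(xi)
        zeta = xi
    return points

# hypothetical example: tau(t) = 1 + 0.5 sin t, integration on [0, 10]
alpha = lambda t: t - (1.0 + 0.5 * math.sin(t))
pts = breaking_points(alpha, 0.0, 10.0)
```

The computed points can then be inserted into the integration mesh, decomposing (1) into a sequence of regular problems as described above.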

#### Termination and bifurcation

If some component of the solution $$y$$ has a jump discontinuity at some breaking point $$\zeta\ ,$$ the right-hand side of (1) can be discontinuous at $$t = \xi$$ as soon as $$\alpha(\xi,y(\xi)) = \zeta\ .$$ Usually, a classical solution will continue to exist, but when the delay depends on the solution, $$y(t)$$ might either cease to exist (termination) or lose uniqueness (bifurcation). The loss of uniqueness is not generic. Detecting automatically the occurrence of these situations is an important capability of a numerical code and should be a by-product of an accurate computation of breaking points (cf. Guglielmi and Hairer, 2007).

## Numerical stability of simple integrators

Numerical phenomena that can be observed in solving stiff ordinary differential equations (see e.g., Hairer and Wanner, 1996), are also relevant for stiff delay equations. We concentrate here on aspects that are particular to delay equations.

#### Stability domains

Let us apply the explicit Euler discretization to the scalar test equation (2) with a step size $$h=1/m\ ,$$ so that the delay $$\tau =1$$ is an integral multiple of the step size. This yields the recursion $y_{n+1} = y_n + h\Big(a y_n + b y_{n-m}\Big) .$ To study its stability one looks for solutions of the form $$y_n=\zeta^n\ ,$$ and so obtains the characteristic equation $\zeta^{m+1} = \zeta^m \Big( 1+ \frac am \Big)+ \frac bm .$ This polynomial equation has $$m+1$$ solutions which are related to the dominant roots of (3) via $$\zeta =e^{\lambda /m}\ .$$ If all of them lie inside the unit disc, a stable numerical solution is obtained. The set of real pairs $$(a,b)\ ,$$ for which the numerical solution is stable, is called the stability region of the method. It is plotted for $$m=2,3,4,5$$ as a red shaded region in Figure 3. For increasing $$m\ ,$$ the stability region becomes larger and asymptotically tends to the set of pairs for which the analytic solution is stable (white region in the right picture of Figure 2). Notice that there exist pairs $$(a,b)$$ (with $$a$$ not necessarily large and negative) for which the numerical solution diverges but the analytic solution is stable.
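This root test is easy to carry out numerically: form the polynomial $$\zeta^{m+1}-(1+a/m)\zeta^m-b/m$$ and check whether all its roots lie inside the unit disc. A sketch with two hypothetical pairs, one mildly stiff and one strongly stiff:

```python
import numpy as np

def explicit_euler_stable(a, b, m):
    # characteristic polynomial zeta^{m+1} - (1 + a/m) zeta^m - b/m, with
    # coefficients listed from the highest power down, as numpy.roots expects
    coeffs = np.zeros(m + 2)
    coeffs[0] = 1.0
    coeffs[1] = -(1.0 + a / m)
    coeffs[-1] = -b / m
    return np.max(np.abs(np.roots(coeffs))) < 1.0

# (a, b) = (-2, 0.1): the analytic solution is stable, and h = 1/10 is small enough
assert explicit_euler_stable(-2.0, 0.1, 10)
# (a, b) = (-30, -1): analytically stable for every delay, yet the explicit Euler
# solution with m = 10 diverges -- the typical stiffness restriction on the step size
assert not explicit_euler_stable(-30.0, -1.0, 10)
```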

A similar analysis can be done for other numerical methods. It is interesting to note that the implicit Euler method, the mid-point rule and the trapezoidal rule have the property that whenever the analytic solution of (2) is stable, also the numerical solution is stable for every integer $$m\ ,$$ see Guglielmi (1998).
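For illustration, the implicit Euler recursion for (2) with $$h=1/m$$ reads $$y_{n+1}=(y_n + h\,b\,y_{n+1-m})/(1-h\,a)\ ,$$ since the delayed value falls exactly on a grid point. A sketch with a hypothetical stiff but stable pair (the step size and parameters are illustrative):

```python
def implicit_euler_dde(a, b, m, n_steps, phi=1.0):
    # y'(t) = a y(t) + b y(t-1), constant history y(t) = phi for t <= 0,
    # step size h = 1/m so that y(t_{n+1} - 1) is the stored grid value y[-m]
    h = 1.0 / m
    y = [phi] * (m + 1)               # grid values on [-1, 0]
    for _ in range(n_steps):
        y.append((y[-1] + h * b * y[-m]) / (1.0 - h * a))
    return y

# stiff but stable pair a = -100, b = 1 (mu(a) + |b| < 0): the implicit Euler
# solution decays even with the large step h = 1/4
traj = implicit_euler_dde(-100.0, 1.0, m=4, n_steps=200)
```

With the same data the explicit Euler recursion would require $$h \lesssim 2/100$$ just to avoid blow-up of the non-delayed part.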

Another interesting topic, which is not discussed here for the sake of conciseness, is the stability behavior of numerical integrators applied to neutral delay differential equations (the reader is referred e.g. to Bellen, Jackiewicz and Zennaro (1988), Koto (1996) and Guglielmi (2001)).

#### One-sided Lipschitz conditions

For linear systems, the stability analysis is rather involved. It is therefore common to restrict the class of problems (4) to matrices satisfying $\tag{6} \mu (A) + \| B \| \le 0 ,$

where $$\mu (A) = \sup \{ \langle v,Av\rangle ; \| v\| =1 \}$$ is the logarithmic norm and $$\| B\| = \sup \{ \| Bv\| ; \| v\| =1 \}$$ the matrix norm corresponding to a scalar product. This condition implies stability of the analytic solution, even when the lag-term $$y(t-1)$$ is replaced by $$y(t-\tau )$$ with an arbitrary non-negative $$\tau\ .$$ This is the reason why a stability analysis based on this condition is often called stability for all delays or delay-independent stability. For the scalar case, this restriction corresponds to the sector $$|b|\le -a$$ which covers a large part of the analytic stability region (Figure 2). The implicit Euler method, the trapezoidal rule, and the mid-point rule produce stable numerical solutions when applied to linear systems of delay equations satisfying (6).
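For the Euclidean inner product, $$\mu_2(A)$$ is the largest eigenvalue of the symmetric part $$(A+A^T)/2$$ and $$\|B\|_2$$ the largest singular value of $$B\ ,$$ so condition (6) is straightforward to verify. A sketch with hypothetical matrices:

```python
import numpy as np

def delay_independent_stable(A, B):
    # mu_2(A): largest eigenvalue of the symmetric part of A
    mu = np.linalg.eigvalsh((A + A.T) / 2.0).max()
    # ||B||_2: largest singular value (spectral norm) of B
    return mu + np.linalg.norm(B, 2) <= 0.0

A = np.array([[-3.0, 1.0], [0.0, -2.0]])
B = np.array([[0.5, 0.0], [0.0, 0.25]])
assert delay_independent_stable(A, B)   # mu_2(A) + ||B||_2 is about -1.29 <= 0
```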

For an analysis of the stability properties of A-stable methods under condition (6) the reader is referred to (Zennaro, 1986), (Koto, 1994) and to (Bellen and Zennaro, 2003).

## Stiff integrators

A natural approach consists in suitably extending stiff integrators for ordinary differential equations (ODEs) to delay differential equations (DDEs).

#### Implicit Runge-Kutta methods (Radau IIA)

Let us explain such an extension for the example of a stiffly-accurate $$s$$-stage implicit Runge-Kutta method with abscissae $$\{c_{i} \}$$ and coefficients $$\{a_{ij} \}\ .$$ Using the step size $$h_{n}=t_{n+1}-t_{n}$$ and an approximation $$y_n$$ to the solution at time $$t_n\ ,$$ one step of the method is given by $\tag{7} M \left( Y_i^{(n)} - y_n \right) \, = \, h_{n}\sum_{j=1}^{s} a_{ij} f \Big( t_n+c_j h_n, Y_j^{(n)}, Z_{j}^{(n)} \Big), \qquad i=1,\ldots ,s$

where $$Z_{j}^{(n)}$$ has to approximate $$y\Big( \alpha (t,y(t))\Big)$$ at $$t=t_n+c_j h_n$$ and $$y_{n+1} = Y_s^{(n)}$$ approximates the solution at time $$t_{n+1}\ .$$

For ordinary differential equations, where $$f$$ only depends on $$t,y\ ,$$ this represents a nonlinear system for the internal stage values $$\{ Y_i^{(n)} \}\ .$$ For delay equations instead, it is necessary to define $$Z_{j}^{(n)}\ .$$ Setting $$\alpha_{j}^{(n)} = \alpha(t_{n} + c_{j}\,h_{n}, Y_{j}^{(n)})\ ,$$ it is usual to put $$Z_{j}^{(n)} = \phi \left( \alpha_{j}^{(n)} \right)$$ if $$\alpha_{j}^{(n)} < t_{0}$$ and $\tag{8} Z_{j}^{(n)} = u_m \left( \alpha_{j}^{(n)} \right) \qquad \mbox{if } \quad t_{m} \le \alpha_{j}^{(n)} < t_{m+1} \quad \mbox{for some } \ 0\le m\le n,$

where $$u_m(t)$$ is a continuous extension of the solution computed at the $$m$$-th step (which is available for $$t_m \le t \le t_{m+1}$$). For a collocation method, $$u_m(t)$$ can be the Lagrange polynomial interpolating the $$s+1$$ pairs $$(t_m,y_m)$$ and $$\{ (t_m + c_j h_m, Y_j^{(m)} ) \}_{j=1}^{s}\ .$$
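Such a dense output can be sketched with the standard abscissae $$c_1=(4-\sqrt6)/10\ ,$$ $$c_2=(4+\sqrt6)/10\ ,$$ $$c_3=1$$ of the 3-stage Radau IIA method; the check below uses values of a hypothetical smooth function in place of actual stage values:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

sqrt6 = np.sqrt(6.0)
c = np.array([(4 - sqrt6) / 10, (4 + sqrt6) / 10, 1.0])   # 3-stage Radau IIA abscissae

def dense_output(t_m, h_m, y_m, Y_stages):
    # Lagrange polynomial through (t_m, y_m) and the stage pairs (t_m + c_j h_m, Y_j)
    nodes = np.concatenate(([t_m], t_m + c * h_m))
    return BarycentricInterpolator(nodes, np.concatenate(([y_m], Y_stages)))

# with s + 1 = 4 interpolation points the dense output reproduces cubics exactly
f = lambda t: 2.0 + t - 0.5 * t**2 + 0.1 * t**3
u = dense_output(1.0, 0.2, f(1.0), f(1.0 + 0.2 * c))
for t in np.linspace(1.0, 1.2, 7):
    assert abs(u(t) - f(t)) < 1e-12
```

In an actual integration the values `Y_stages` come from solving the nonlinear system (7), and the resulting $$u_m$$ is exactly the collocation polynomial mentioned above.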

On a general mesh, due to the dependence of the right hand side on the dense output, the order of convergence is usually reduced with respect to the case of ODEs. But on specially constrained meshes it is possible to preserve the classical order (see Bellen, 1984).

Stability. An analysis of Runge-Kutta methods applied to (2) with a step size $$h=1/m$$ shows that Gauss methods and Radau IIA methods of any order are stable for all $$m$$ (see Guglielmi and Hairer, 1999). This means that they preserve the correct asymptotic vanishing behavior of the true solutions for all pairs $$(a,b)$$ in the white region in Figure 2. This is not true for all A-stable Runge-Kutta methods (e.g. Lobatto IIIC methods). Recall that a method is A-stable if, when applied to the test equation $$\dot y = \lambda y$$ with $$\Re \lambda < 0$$ and an arbitrary step size, the numerical solution tends to zero.

For linear systems of delay equations a stability analysis is much more complicated and the results are negative. It is for example known (Guglielmi, 2001 and Maset, 2002) that for every numerical method and for every $$m$$ there exists a real linear system (4) in dimension $$2\ ,$$ such that the analytic solution is stable but the numerical solution diverges. Assuming instead delay independent asymptotic stability conditions, i.e., condition (6), it is known that all A-stable methods preserve the correct asymptotically vanishing behavior of true solutions (see Zennaro, 1986 and Koto, 1994).

When passing to nonlinear systems it is common to make use of a one-sided Lipschitz condition combined with a standard Lipschitz condition for arguments with delay. In this situation, many contractivity results (B-stability, algebraic stability) for numerical methods applied to stiff ordinary differential equations can be extended to stiff delay equations (see e.g. Zennaro, 1997 and Bellen and Zennaro, 2003 with the bibliography included therein).

#### Multistep methods (BDF)

The same idea can be applied to multistep stiff integrators such as BDF schemes. One step of the $$k$$-step BDF method is given by $\tag{9} M \Big( y_{n+1} - \sum_{j=1}^{k} \alpha_{n,j} y_{n+1-j} \Big) \, = \, h_{n}\,\beta_{n} f \big( t_{n+1}, y_{n+1}, z_{n+1} \big),$

where $$\{ \alpha_{n,j} \},\beta_{n}$$ denote the coefficients (depending on the stepsize ratios) of the BDF formula and $$z_{n+1}$$ approximates $$y\Big( \alpha (t,y(t))\Big)$$ at $$t_{n+1}=t_{n}+h_{n}\ .$$ Similarly to the previous case, with $$\alpha^{(n)} = \alpha (t_{n+1}, y_{n+1})\ ,$$ $\tag{10} z_{n+1} = v_m \left( \alpha^{(n)} \right) \qquad \mbox{if } \quad t_{m} \le \alpha^{(n)} < t_{m+1} \quad \mbox{for some } \ 0\le m\le n,$

where $$v_m(t)$$ is a continuous extension of the solution computed at the $$m$$-th step.
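For a constant step size with the delay an exact multiple of it, the 2-step BDF applied to the test equation (2) gives the recursion sketched below; the coefficients $$\alpha_{n,1}=4/3\ ,$$ $$\alpha_{n,2}=-1/3\ ,$$ $$\beta_n=2/3$$ are the classical constant-step BDF2 values, and the chosen pair $$(a,b)$$ is illustrative:

```python
def bdf2_dde(a, b, m, n_steps, phi=1.0):
    # y'(t) = a y(t) + b y(t-1) with h = 1/m; z_{n+1} = y_{n+1-m} is a grid value,
    # and the two starting values come from the constant history phi
    h = 1.0 / m
    y = [phi] * (m + 1)
    for _ in range(n_steps):
        rhs = (4.0 * y[-1] - y[-2]) / 3.0 + (2.0 * h / 3.0) * b * y[-m]
        y.append(rhs / (1.0 - 2.0 * h * a / 3.0))
    return y

# stiff pair satisfying (6): mu(a) + |b| = -50 + 1 < 0; the BDF2 solution decays
traj = bdf2_dde(-50.0, 1.0, m=5, n_steps=300)
assert abs(traj[-1]) < 1e-8
```

When the delay is not a multiple of the step, `y[-m]` has to be replaced by the evaluation of the continuous extension $$v_m$$ as in (10).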

For a discussion the reader is referred to (Bocharov, Marchuk and Romanyukha, 1996) and - for differential algebraic equations - to (Ascher and Petzold, 1995).

#### Implementation issues

The numerical integration of a stiff delay differential equation presents several new difficulties compared to the treatment of stiff ordinary differential equations.

Nonlinear equations. Explicit numerical integrators are not suitable for solving stiff problems. As long as the step size is smaller than the delay, the same difficulties are encountered as for ordinary differential equations and there are well-established techniques (simplified Newton method, defect correction) to solve the arising nonlinear equations. If the step size is larger than the delay, one has to take care of the fact that the vectors $$Z_j^{(n)}$$ in the Runge-Kutta formula (or the vector $$z_{n+1}$$ in the BDF scheme) also depend on the unknown variables, and it may be necessary to use a more sophisticated Jacobian approximation during the simplified Newton iterations.

Implicit computation of breaking points. For state-dependent delay equations, where the breaking points are not known in advance, the step sizes may be severely restricted near the jump discontinuities of the solution or some low order derivative. It is therefore important to detect and accurately compute them during the integration. It is possible to take advantage of the use of an implicit method. Once the presence of a breaking point in the current interval is detected (a zero of the equation $$\alpha(\xi,y(\xi)) = \zeta ,$$ where $$\zeta$$ is a previous breaking point), a natural idea is to consider the step size $$h_n$$ as a new variable and to add the scalar equation $\tag{11} \alpha \Big( t_n+h_n, u_n(t_n+h_n) \Big) = \zeta$

to the system (7) (or (9)). This yields the breaking point $$t_n+h_n$$ at the same time as the numerical solution of the implicit integrator (see Guglielmi and Hairer, 2007 and Enright, Jackson, Nørsett and Thomsen, 1988 for details of this strategy).
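A sketch of this augmented system on a hypothetical scalar problem $$\dot y(t) = -y\big(\alpha(t,y(t))\big)$$ with $$\alpha(t,y)=t-1-0.2y$$ and history $$y(t)=1$$ for $$t\le 0\ :$$ one implicit Euler step (standing in for (7)) is coupled with the breaking-point condition (11), and the step size and the new value are solved for simultaneously. For this problem the exact solution is $$y(t)=1-t$$ up to the first breaking point $$\xi=1\ ,$$ which the augmented system reproduces since implicit Euler is exact on linear solutions:

```python
from scipy.optimize import fsolve

alpha = lambda t, y: t - 1.0 - 0.2 * y   # hypothetical deviating argument
phi = lambda t: 1.0                      # history for t <= 0

def step_to_breaking_point(t_n, y_n, zeta):
    # unknowns: the new value y_{n+1} and the step size h_n, as in (11)
    def residual(u):
        y_new, h = u
        a_new = alpha(t_n + h, y_new)
        r1 = y_new - y_n - h * (-phi(a_new))   # implicit Euler step for y' = -y(alpha)
        r2 = a_new - zeta                      # breaking-point condition (11)
        return [r1, r2]
    y_new, h = fsolve(residual, [y_n, 0.5])
    return t_n + h, y_new

xi, y_xi = step_to_breaking_point(0.0, 1.0, zeta=0.0)   # gives xi = 1, y(xi) = 0
```

In a real code the delayed value would come from the dense output $$u_n$$ rather than from the history function alone, and the augmented system would be solved by the same simplified Newton iteration as the Runge-Kutta stages.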

Vanishing or small delays. It is essential that a stiff integrator can deal with vanishing or small delays. A typical situation for stiff problems is the existence of quasi-stationary solutions, where extremely large step sizes have to be used. If the solution is sufficiently smooth in such a region, the method of steps strategy would be very inefficient.

Error control. Step size selection strategies for stiff ordinary differential equations are usually based on error estimations at grid points. For delay equations the accuracy of the dense output strongly influences the performance. Thus suitable uniform error estimates have to be provided in order to control the error. For BDF formulas an accurate uniform error estimate (of the same order as that obtained at grid points) is immediately available. For Runge-Kutta methods one has to pay attention because the dense output is usually less accurate than the numerical solution at grid points (see Enright, 1989 and Guglielmi and Hairer, 2001).

#### An example from immunology

This is a system of delay equations which models the antibody response to antigen challenge (see Waltman, 1978).

The problem consists of six equations $\begin{array}{rcl} \dot{y}_1(t) & = & -r y_1(t) y_2(t) - s y_1(t) y_4(t)\\[1mm] \dot{y}_2(t) & = & -r y_1(t) y_2(t) + \alpha r y_1\big( y_5(t)\big) y_2\big( y_5(t)\big) H(t-t_0)\\[1mm] \dot{y}_3(t) & = & r y_1(t) y_2(t)\\[1mm] \dot{y}_4(t) & = & -s y_1(t) y_4(t) - \gamma y_4(t) + \beta r y_1\big( y_6(t)\big) y_2\big( y_6(t)\big) H(t-t_1)\\[1mm] \dot{y}_5(t) & = & H(t-t_0) \big( y_1(t) y_2(t) + y_3(t)\big) \big/ \big( y_1\big( y_5(t)\big) y_2\big( y_5(t)\big) + y_3\big( y_5(t)\big) \big) \\[1mm] \dot{y}_6(t) & = & H(t-t_1) \big( y_2(t)+y_3(t)\big) \big/ \big( y_2\big( y_6(t)\big)+y_3\big( y_6(t)\big)\big) \end{array}$ where $$H(x)$$ is the Heaviside function ($$H(x)=0$$ if $$x<0$$ and $$H(x)=1$$ if $$x\ge 0$$). Choosing $$\alpha =1.8\ ,$$ $$\beta =20\ ,$$ $$\gamma =0.002\ ,$$ $$r=5\cdot 10^4\ ,$$ $$s=10^5\ ,$$ $$t_0=35\ ,$$ $$t_1=197$$ and the initial functions $$y_1(t)=5\cdot 10^{-6}\ ,$$ $$y_2(t)=10^{-15}\ ,$$ and $$y_3(t)=y_4(t)=y_5(t)=y_6(t)=0$$ for $$t\le 0$$ the obtained solutions are shown in the plot.
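The right-hand side can be transcribed directly; the sketch below assumes a user-supplied function `u(t)` that returns the (history or dense-output) solution vector at an earlier time, and the name `waltman_rhs` and this calling convention are illustrative rather than taken from any particular solver interface:

```python
import numpy as np

# parameters from the text
r, s = 5e4, 1e5
alpha_, beta_, gamma_ = 1.8, 20.0, 0.002
t0, t1 = 35.0, 197.0
H = lambda x: 1.0 if x >= 0 else 0.0      # Heaviside function

def waltman_rhs(t, y, u):
    # u(t) evaluates the solution at an earlier time; the deviating
    # arguments are the state-dependent components y5(t) and y6(t)
    y1, y2, y3, y4, y5, y6 = y
    u5, u6 = u(y5), u(y6)
    f = np.empty(6)
    f[0] = -r * y1 * y2 - s * y1 * y4
    f[1] = -r * y1 * y2 + alpha_ * r * u5[0] * u5[1] * H(t - t0)
    f[2] = r * y1 * y2
    f[3] = -s * y1 * y4 - gamma_ * y4 + beta_ * r * u6[0] * u6[1] * H(t - t1)
    f[4] = H(t - t0) * (y1 * y2 + y3) / (u5[0] * u5[1] + u5[2])
    f[5] = H(t - t1) * (y2 + y3) / (u6[1] + u6[2])
    return f

# evaluation at t = 0 with the constant initial functions given above
y0 = np.array([5e-6, 1e-15, 0.0, 0.0, 0.0, 0.0])
f0 = waltman_rhs(0.0, y0, lambda t: y0)
```

Note that the deviating arguments $$y_5(t)\ ,$$ $$y_6(t)$$ start at zero and grow with $$t\ ,$$ which is what makes the delay state-dependent and eventually small.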

This problem has several difficulties: the delay is state-dependent, it becomes very small, and it vanishes asymptotically (see Figure 4). The function $$y_2$$ is extremely steep at $$t_0=35$$ and the functions $$y_4, y_6$$ at $$t_1=197\ ,$$ and the problem is very stiff (as already observed by Waltman (1978)).

## Software

• Dde-Stride: a Fortran77 code by Baker, Butcher and Paul (1992) for the integration of stiff delay and neutral differential equations. The algorithm extends Butcher's Stride code for ODEs and is based on a SIRK method.
• Delh: a Fortran77 code by Weiner and Strehmel (1988) for the integration of partitioned stiff/nonstiff systems. The algorithm is based on Rosenbrock type methods.
• Difsub-dde: a Fortran77 code by Bocharov, Marchuk and Romanyukha (1996) for the integration of non-neutral stiff delay differential equations. The algorithm suitably extends Gear's code Difsub for ODEs which is based on BDF formulas.
• Radar5: http://www.unige.ch/~hairer/software.html. A Fortran90 code by Guglielmi and Hairer (2001,2007) for the integration of implicit problems of the form (1). The algorithm extends Hairer and Wanner's Radau5 code, and is based on the 3-stage Radau IIA collocation method.
• Snddelm: a Fortran77 code by Jackiewicz and Lo (1993) for the integration of systems of neutral delay differential equations. The algorithm is based on Adams predictor corrector methods.
• Baker C.T.H., Butcher J.C. and Paul C.A.H. (1992) Experience of Stride applied to delay differential equations. Tech. Rep. 208, Univ. of Manchester.
• Baker C.T.H., Paul C.A.H. and Willé D.R. (1995) Issues in the numerical solution of evolutionary delay differential equations. Adv. Comput. Math. 3:171-196.
• Bellen A. (1984) One-step collocation for delay differential equations. J. Comput. Appl. Math. 10:275-283.
• Bellen A., Jackiewicz Z. and Zennaro M. (1988) Stability analysis of one-step methods for neutral delay-differential equations. Numer. Math. 52:605-619.
• Bellen A. and Zennaro M. (2003) Numerical solution of delay differential equations. Oxford University Press, Oxford.
• Bocharov G.A., Marchuk G.I. and Romanyukha A.A. (1996) Numerical solution by LMMs of a stiff delay-differential system modelling an immune response. Numer. Math. 73:131-148.
• Breda D., Maset S. and Vermiglio R. (2004) Computing the characteristic roots for delay differential equations. IMA J. Numer. Anal. 24:1-19.
• Castleton R.N. and Grimm L.J. (1973) A first order method for differential equations of neutral type. Math. Comp. 27:571-577.
• Engelborghs K. and Roose D. (2002) On Stability of LMS-methods and Characteristic Roots of Delay Differential Equations. SIAM J. Numer. Anal. 40:629-650.
• Enright W.H., Jackson K.R., Nørsett S.P. and Thomsen P.G. (1988) Effective solution of discontinuous IVPs using a Runge-Kutta formula pair with interpolants. Appl. Math. Comput. 27:313-335.
• Enright W.H. (1989) Analysis of error control strategies for continuous Runge-Kutta methods. SIAM J. Numer. Anal. 26:588-599.
• Feldstein A., Neves K.W. and Thompson, S. (2006) Sharpness results for state dependent delay differential equations: an overview. Appl. Numer. Math. 56:472-487.
• Guglielmi N. (1998) Delay dependent stability regions of Theta-methods for delay differential equations. IMA J. Numer. Anal. 18:399-418.
• Guglielmi N. (2001) Asymptotic stability barriers for natural Runge-Kutta processes for delay equations. SIAM J. Numer. Anal. 39:763-783.
• Guglielmi N. and Hairer E. (1999) Order stars and stability for delay differential equations. Numer. Math. 83:371-383.
• Guglielmi N. and Hairer E. (2001) Implementing Radau IIa methods for stiff delay differential equations. Computing 67:1-12.
• Guglielmi N. and Hairer E. (2007) Computing breaking points in implicit delay differential equations. Adv. Comput. Math. in press.
• Hairer E. and Wanner, G. (1996) Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Springer-Verlag, Berlin. 
• Jackiewicz Z. and Lo E. (1993) The algorithm SNDDELM for the numerical solution of systems of neutral delay differential equations, Appendix in: Kuang Y.: Delay Differential Equations with Applications in Population Dynamics. Academic Press, Boston.
• Koto T. (1994) A stability property of A-stable natural Runge-Kutta methods for systems of delay differential equations. BIT 34:262-267.
• Koto T. (1996) A stability property of A-stable collocation-based Runge-Kutta methods for neutral delay differential equations. BIT 36:855-859.
• Maset S. (2002) Instability of Runge-Kutta methods when applied to linear systems of delay differential equations. Numer. Math. 90:555-562.
• Shampine L.F. and Gahinet P. (2006) Delay-differential-algebraic equations in control theory. Appl. Numer. Math. 56:574-588.
• van der Houwen P.J., Sommeijer B.P. and Baker C.T.H. (1986) On the stability of predictor-corrector methods for parabolic equations with delay. IMA J. Numer. Anal. 6:1-23.
• Waltman P. (1978) A threshold model of antigen-stimulated antibody production. Theoretical Immunology, Immunology Series 8:437-453. Dekker, New York.
• Weiner R. and Strehmel K. (1988) A type insensitive code for delay differential equations basing on adaptive and explicit Runge-Kutta interpolation methods. Computing 40:255-265.
• Wu, J. (1996) Theory and applications of partial functional-differential equations. Applied Mathematical Sciences, 119. Springer-Verlag, New York. 
• Zennaro M. (1986) P-stability properties of Runge-Kutta methods for delay differential equations. Numer. Math. 46:305-318.
• Zennaro M. (1997) Asymptotic stability analysis of Runge-Kutta methods for nonlinear systems of delay differential equations. Numer. Math. 77:549-563.
