Donald Estep


Modern society is affected in myriad ways by the behavior of complex multiscale, multiphysics systems, in which multiple physical processes interact across a wide range of scales. The impact of hurricanes on coasts, pollutant flow in porous media, and radiation damage in uranium fuel rods are just a few of many examples. A characteristic of multiphysics systems is that certain aspects of behavior can be observed experimentally at some scales, and well-validated process models can be constructed for some component behaviors at some scales, but not all important behaviors and interactions can be observed or modeled directly. Thus, the study of multiphysics systems faces significant challenges that require, at a minimum, integrating all available data and models. The long-term goal of my research program is to develop powerful computational and statistical tools to draw inferences about complex multiphysics systems, predict their behavior, and quantify uncertainties in the results.

Stochastic Inverse Problems

Over the last few years, my collaborators and I have been working on the formulation, analysis, and numerical solution of the stochastic inverse problem: determining input parameters to a model from data on the model's output. Considering a model solution as a (measurable) map from the space of parameter values to the space of computed quantities, the stochastic inverse problem consists of imposing a probability structure on the output and determining the corresponding probability structure(s) on the input space. The stochastic inverse problem is of central importance to scientific inference and engineering design. My collaborators and I have developed a strikingly new formulation and solution methodology for stochastic inverse problems that already promises a significant advance in this difficult area.

The major difficulty in inverse problems arises from the generic situation in which the input space has higher dimension than the output space, so that the inverse of a single output value is a set of input values. A well-known example is the contour curve corresponding to a value of a surface function. The standard mathematical and statistical approaches to such inverse problems both involve changing the model, e.g., by regularization, in order to obtain an inverse problem that has a single solution. Yet scant attention is paid to the effect of such a change on the original model and its impact on any resulting inferences. In contrast, our approach uses measure theory to work directly in the space of set-valued inverses. The approach decomposes a probability structure on the input parameter space into a component structure on the space of set-valued inverses and another component structure on a space that is “transverse”. This decomposition is then employed to compute an inverse probability structure corresponding to the structure imposed on the output.
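
One simple numerical sketch in this spirit (the forward map, densities, and sample sizes below are invented for illustration): for a map from a two-dimensional parameter space to a scalar output, the inverse of each output value is a contour curve, and a probability density imposed on the output can be pulled back by reweighting an initial set of input samples by the ratio of that density to the pushforward density of the samples. The weight is constant along each contour, so only the “transverse” structure is updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward map Q: R^2 -> R; the inverse of one output value
# is a contour curve in the parameter space.
def Q(lam):
    return lam[:, 0] ** 2 + lam[:, 1] ** 2

# Sample an initial measure on the 2-D input space.
lam = rng.uniform(-1.0, 1.0, size=(100_000, 2))
q = Q(lam)

# Estimate the pushforward density of the sampled inputs with a histogram.
hist, edges = np.histogram(q, bins=50, range=(0.0, 2.0), density=True)

def pushforward(qv):
    idx = np.clip(np.digitize(qv, edges) - 1, 0, hist.size - 1)
    return hist[idx]

# Imposed (observed) density on the output: a normal pdf, purely illustrative.
def observed(qv):
    return np.exp(-0.5 * ((qv - 0.5) / 0.1) ** 2) / (0.1 * np.sqrt(2.0 * np.pi))

# Reweight the input samples so their pushforward matches the observed
# density; the weight depends only on Q(lam), i.e., is constant on contours.
w = observed(q) / np.maximum(pushforward(q), 1e-12)
w /= w.sum()

# The weighted samples approximate a solution of the stochastic inverse
# problem; e.g., the mean output under the updated measure:
mean_q = np.sum(w * q)
```

This sketch uses a crude histogram estimate of the pushforward density; the rigorous formulation works with measures and set-valued inverses directly.
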
Our papers have established a complete basic theory for this formulation, as well as a complete numerical analysis of the approximate solution. The method has also been applied to very high dimensional problems in hurricane forecasting, using data from storm surge measurements at a set of measurement stations during Katrina to determine characteristics such as bathymetry fields and fields determining bottom friction. Funding is provided by the Air Force, DOD, DOE, Idaho National Laboratory, and NSF.

Forward Stochastic Sensitivity Analysis and Uncertainty Quantification for Differential Equations

My collaborators and I are interested in developing methods for stochastic sensitivity analysis for differential equations. The general problem is to efficiently approximate the effects of introducing stochastic parameters and/or data as input to a differential equation. In the situation of coupled physics, the sensitivity analysis becomes very complex because of stochastic feedback between the physical components and because of the nominally high dimension. We have a promising approach that borrows from the framework for a posteriori analysis of multiphysics problems described below. This method also provides a systematic way to reduce the complexity of parameter dependence in many cases. Moreover, we continue to pursue the derivation of a posteriori error estimates for statistical quantities computed from the output that account for all sources of deterministic and stochastic error, and we use this information to derive generalized adaptive algorithms that balance numerical and stochastic (i.e., sampling) errors. We also pursue new algorithms that reverse the order of the standard Monte Carlo approach, which places the sampling loop on the outside and the solve loop on the inside. For example, we recently devised an algorithm for stochastic elliptic equations in which the principal cost is almost independent of the number of samples. Support is provided by the DTRA, DOD, DOE, Idaho National Laboratory, and NSF.
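
The reversed loop order can be sketched on an invented linear model problem, where linearity makes the reversal easy: the expensive discretization and solve happen once, outside the sampling loop, and each sample then costs only a scalar operation (illustrative only; the actual algorithm treats far more general stochastic elliptic equations).

```python
import numpy as np

# Invented model problem: -u'' = xi * sin(pi x) on (0, 1), u(0) = u(1) = 0,
# with a random amplitude xi. By linearity, one deterministic solve suffices.
n = 199
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Assemble the finite difference stiffness matrix once (solve loop "outside").
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u_base = np.linalg.solve(K, np.sin(np.pi * x))   # solution for xi = 1

# Sampling loop "inside": each sample is a cheap scaling, not a new PDE
# solve, so the principal cost is independent of the number of samples.
rng = np.random.default_rng(1)
xi = rng.normal(1.0, 0.2, size=50_000)
qoi = xi * u_base[n // 2]   # quantity of interest: solution value at x = 0.5
```
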

A Posteriori Error Analysis

A significant part of my research lies in the area of deriving and implementing accurate computational estimates of the error in specified quantities of interest computed from the solution of a differential equation. In practice, the information sought from solving a differential equation is rarely the solution itself, but rather some set of functionals and/or statistics computed from solutions. Moreover, most problems of practical interest involve multiple scales of behavior and/or multiple closely interacting physical processes. The complexity and computational cost of model solutions mean that numerical error is always significant, and the numerical methods are almost never operating in the regime of ideal asymptotic behavior (optimal convergence rates). This raises the need to estimate the error in particular computations.

Early in my career, I helped pioneer a systematic approach, known as a posteriori error analysis, to computing accurate estimates of the error in information computed from a finite element solution of a differential equation. Rooted in functional analysis, this approach uses duality, adjoint problems, computable residuals, and variational analysis to produce a computable error estimate. Since its introduction in the early 1990s, the approach has been widely adopted in computational mathematics, engineering, and science. Over that time, my colleagues and I have worked continuously both to expand the range of problems and discretization methods to which the approach applies and to deepen theoretical understanding of the approach.
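
The mechanics of the duality argument are easiest to see in a finite-dimensional analogue (a sketch, not the variational derivation used for differential equations): for a linear system A x = b with quantity of interest q(x) = c . x, the error in the computed functional is exactly the computable residual weighted by the solution of the adjoint problem A^T phi = c.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20

# A linear "model" A x = b and a quantity of interest q(x) = c . x.
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

x_exact = np.linalg.solve(A, b)
x_approx = x_exact + 1e-3 * rng.standard_normal(n)  # stands in for a numerical solution

# Adjoint (dual) problem, driven by the quantity of interest.
phi = np.linalg.solve(A.T, c)

# Computable residual of the approximate solution.
r = b - A @ x_approx

# Error representation: c . (x_exact - x_approx) = phi . r, exactly,
# in this linear finite-dimensional setting.
est = phi @ r
true_err = c @ (x_exact - x_approx)
```

For differential equations the residual is a weak residual and Galerkin orthogonality enters, but the estimate has the same adjoint-weighted-residual structure.
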

In particular, over the last decade, we have made a significant advance in the systematic extension of a posteriori analysis to multiscale, multiphysics models. Current problems we study include: multiscale nuclear fuel performance models, gravitational waveform models, chemical reactions in cells, (stacked) shallow water equation models used in climate and weather simulations, hyperbolic conservation laws, reacting flows, and restricted MHD models. Each of these problems poses some significant challenges. The technical issues we study include the solution of multiphysics problems using multiscale operator decomposition and treatment of explicit and non-converged implicit time integration schemes.

Support is provided by the DTRA, DOE, Idaho National Laboratory, NSF, and the Lawrence Livermore National Laboratory.

Efficient Control of Discretization Error

I have a long-standing interest in algorithms for enriching a discretization to achieve a desired accuracy in computed information efficiently. Most common adaptive algorithms are based on optimally distributing a bound on the error, which sidesteps the issue of error cancellation. However, such strategies often prove inefficient for controlling the error itself, which can be much smaller than an error bound. The distinguishing feature of our work in this area is the attempt to account for the effects of error cancellation in the selection of the discretization. Support has been provided by the DTRA, DOE, and NSF.
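
A toy quadrature example (invented for illustration) shows why cancellation matters: the signed element errors of the trapezoid rule applied over a full period of a sine nearly cancel, so the true error is orders of magnitude below the bound obtained by summing absolute values, and refining to control the bound would waste effort.

```python
import numpy as np

# Trapezoid-rule quadrature of f on [0, 1]; element errors are signed
# and can cancel in the total.
f = lambda x: np.sin(2.0 * np.pi * x)
F = lambda x: -np.cos(2.0 * np.pi * x) / (2.0 * np.pi)   # antiderivative

n = 40
edges = np.linspace(0.0, 1.0, n + 1)

# Signed error indicator on each element: exact integral minus trapezoid value.
exact = F(edges[1:]) - F(edges[:-1])
trap = 0.5 * (f(edges[:-1]) + f(edges[1:])) * np.diff(edges)
indicators = exact - trap

signed_total = indicators.sum()      # true error in the computed integral
bound = np.abs(indicators).sum()     # bound that ignores cancellation
```
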

A Posteriori Analysis for Problems with Complex Geometric Features

My collaborators and I are investigating a posteriori error estimation and efficient solution of problems that involve some aspect of complex geometry. The range of problems is rather diverse. One example is the treatment of elliptic problems that have coefficients that are discontinuous across an arbitrary, smooth curve. We are interested in analyzing the effects of error that arises when the discontinuous interface curve is unresolved by a mesh, which appears as a kind of uncertainty in the elliptic coefficient. We are also interested in problems involving the solution of a differential equation posed on a manifold, say the heat equation posed on a metal shell, the formation of patterns in the skin of an animal, or models of black holes, where the manifold description is subject to error. Such errors may arise from resolution or from measurement. Of particular interest is the case in which the manifold evolves in time in a way that depends on the solution of the differential equation, so that the error in the description of the manifold also changes with time. Another class of problems involves the solution of partial differential equations on domains whose boundaries are either described stochastically or with some degree of uncertainty that is modeled stochastically. Support is provided by DTRA, DOE, and NSF.

Stochastic Modeling of Multiscale Behavior

My collaborators and I have been developing a systematic approach to creating stochastic models of under-resolved microscale behavior (either deterministic or stochastic) for inclusion in a coupled multiscale, multiphysics model. One could think of our approach as a kind of stochastic upscaling. In its simplest form, we create a smooth stochastic spline representation by randomly sampling the fine scale model, where the computed distributions of the spline parameters provide a characterization of the (under-resolved) fine scale properties. This representation enters the macroscale model as a stochastic parameter. Support has been provided by the DOE.
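
In outline (with a hypothetical fine-scale model, and polynomial fits standing in for splines): each realization of the fine-scale model is sampled and fit with a smooth representation, and the resulting distribution of fit coefficients is what enters the macroscale model as a stochastic parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fine-scale "model": a smooth trend plus an unresolved
# oscillation whose phase varies randomly between realizations.
def fine_scale(x, phase):
    return x * (1.0 - x) + 0.05 * np.sin(80.0 * x + phase)

# Points at which the macroscale model can afford to query the fine scale.
x = np.linspace(0.0, 1.0, 12)

# Fit a smooth representation (a cubic polynomial here, standing in for a
# spline) to each random realization of the fine-scale model.
coeff_samples = np.array([
    np.polyfit(x, fine_scale(x, rng.uniform(0.0, 2.0 * np.pi)), 3)
    for _ in range(500)
])

# The distribution (here: mean and covariance) of the fitted coefficients
# characterizes the under-resolved fine-scale behavior as a stochastic
# macroscale parameter.
mean_coeff = coeff_samples.mean(axis=0)
cov_coeff = np.cov(coeff_samples.T)
```

The smooth trend x(1 - x) dominates the mean coefficients, while the unresolved oscillation appears as spread in the coefficient distribution.
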

Miscellaneous Interdisciplinary Projects

I am involved in a couple of projects in which scientists and engineers approached me with challenging mathematical and computational problems that needed some help. These include modeling the behavior of large wireless networks using partial differential equations and developing new “first principles” methods for computing small angle X-ray scattering (SAXS) images from complex bio-molecules. Funding is provided by the NIH and NSF.


The background image is a realization of bathymetry off the coast of Louisiana drawn from a solution of a stochastic inverse problem using wave height data from Katrina.