CSE 397/GEO 391/ME 397/ORI 397: Computational and Variational Methods for Inverse Problems

Assignment 5

I. An inverse problem for Burgers' equation

Consider the inverse problem for the viscosity field $m$ in the one-dimensional Burgers' equation (this equation is often taken as a one-dimensional surrogate for the Navier-Stokes equations). Taking $[0, L]$ as the spatial domain and $[0, T]$ as the temporal interval, the solution $u = u(x, t) : [0, L] \times [0, T] \to \mathbb{R}$ satisfies
\[
\begin{aligned}
u_t + u\, u_x - (m\, u_x)_x &= f && \text{in } (0, L) \times (0, T), && \text{(1a)}\\
u(0, t) = u(L, t) &= 0 && \text{for all } t \in [0, T], && \text{(1b)}\\
u(x, 0) &= 0 && \text{for all } x \in [0, L]. && \text{(1c)}
\end{aligned}
\]
Here, $m = m(x) : [0, L] \to \mathbb{R}$ is the spatially varying viscosity field we wish to invert for, $f = f(x, t)$ is a given source term, and the subscripts $t$ and $x$ denote partial derivatives with respect to the time and space coordinates. Conditions (1b) and (1c) are the boundary and initial conditions, respectively.¹

We are given observations $d = d(x, t)$ for a portion of the time interval, i.e., for $t \in [T_1, T]$ with $T_1 > 0$. To invert for the viscosity field $m$, we minimize the functional
\[
F(m) := \frac{1}{2} \int_{T_1}^{T}\!\! \int_0^L (u - d)^2 \, dx \, dt \;+\; \frac{\beta}{2} \int_0^L \frac{dm}{dx}\,\frac{dm}{dx} \, dx \qquad \text{(2)}
\]
with regularization parameter $\beta > 0$. An efficient optimization method for (2) requires the gradient of $F$ with respect to $m$.

1. Derive a weak form of (1) by multiplying (1a) with a test function $p(x, t) : [0, L] \times [0, T] \to \mathbb{R}$ that satisfies Dirichlet boundary conditions analogous to (1b), and integrating over space and time. There is no need to impose the initial condition explicitly via a Lagrange multiplier (as we did in class for the initial-condition inversion problem), since the inversion parameter $m$ does not appear in the initial condition. Instead, build the satisfaction of the initial condition into the definition of the solution space (just as you would do with a Dirichlet boundary condition). Use integration by parts on just the viscous term to derive the weak form of Burgers' equation. (A sketch of one such weak form appears at the end of this assignment.)

2. Using the Lagrangian approach, derive expressions for the adjoint equation and for the gradient of $F$ with respect to $m$. Give weak and strong forms of these expressions. Note that $m$, as well as its variation $\hat{m}$, is a function of space only, while $u$ and the adjoint $p$ are functions of space and time. (A sketch of the corresponding Lagrangian appears at the end of this assignment.)

¹Note that the boundary $\partial\Omega$ of the one-dimensional interval $\Omega = (0, L)$ consists of just the two points $x = 0$ and $x = L$, i.e., $\partial\Omega = \{0, L\}$.

II. Inverse advection-diffusion-reaction problem, continued from Assignment 4

Here we continue the solution of the advection-diffusion-reaction inverse problem begun in Assignment 4. There, we employed a steepest descent method to minimize the cost functional. Here, we will extend a state-of-the-art inexact Newton-CG method, the source code for which can be found on Canvas.

We wish to solve the inverse problem for the advection-diffusion-reaction equation on $\Omega = [0, 1] \times [0, 1]$:
\[
\min_m \; F(m) := \frac{1}{2} \int_\Omega (u - d)^2 \, dx + \frac{\beta}{2} \int_\Omega \nabla m \cdot \nabla m \, dx, \qquad \text{(3)}
\]
where $u(x)$ depends on the log reaction coefficient $m(x)$ through
\[
-\nabla \cdot (k \nabla u) + v \cdot \nabla u + 100 \exp(m)\, u^3 = f \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega, \qquad \text{(4)}
\]
with advective velocity $v(x) = (v_x, v_y)^T = (1, 0)^T$, regularization parameter $\beta > 0$, diffusion coefficient $k = 1$, measurement data $d(x)$, and source term
\[
f(x, y) = \max\{0.5, \ \exp(-25(x - 0.7)^2 - 25(y - 0.7)^2)\}.
\]
The data $d$ are synthesized by solving the state equation with the "true" log reaction coefficient field
\[
m(x, y) = \begin{cases} 4 & \text{if } (x - 0.5)^2 + (y - 0.5)^2 \le 0.2^2, \\ 8 & \text{otherwise}, \end{cases}
\]
and adding noise to simulate actual instrument noise.

1. Derive weak and strong forms of the Hessian action for this problem.
2. Extend 4.InexactNewtonCG.ipynb (the FEniCS implementation of the inexact Newton and Gauss-Newton methods²) to solve the advection-diffusion-reaction inverse problem defined in (3)–(4). Initialize the random generator seed by including the following line at the beginning of the notebook: np.random.seed(seed=1). Solve the inverse problem for discretizations of the domain with 20 × 20, 40 × 40, and 80 × 80 finite elements, and report the numbers of Gauss-Newton and total CG iterations for each case. Note that the state and adjoint variables and their incremental variants are discretized with quadratic finite elements, while the inversion parameter (log reaction coefficient) field is discretized with linear elements. Discuss how the number of iterations changes as the inversion parameter mesh is refined, i.e., as the parameter dimension increases. In these experiments, the noise level should be fixed at the default value (1%) while the mesh is refined. The "optimal" regularization parameter can be found manually (i.e., by experimenting with a few different values and finding the one that results in a reconstruction that best matches the "true" log field), or else by the discrepancy principle (if you are so inclined). (Sketches of one possible FEniCS problem setup and of the discrepancy principle appear at the end of this assignment.)

3. Optional, for extra credit: Fix the mesh size to 40 × 40 and consider two noise levels: low noise (1%) and high noise (10%). Determine the "optimal" regularization parameter in each case. For each noise level, solve the inverse problem by both the Newton and Gauss-Newton methods. Report the numbers of nonlinear (Newton/Gauss-Newton) and total linear (CG) iterations. Draw conclusions on the relative performance of Newton and Gauss-Newton as a function of the noise level.

4. Optional, for extra credit: Replace Tikhonov regularization with total variation regularization and repeat sub-problem 2 above (i.e., report the numbers of Gauss-Newton and CG iterations as the mesh is refined). (A sketch of a smoothed total variation term appears at the end of this assignment.)

²Recall that the Gauss-Newton approximation of the Hessian drops all terms (in the incremental adjoint equation and the Hessian action expression) that involve the adjoint variable $p$.
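For Part I, problem 1, the following is a sketch of one possible weak form of (1). It assumes the solution space builds in the initial condition (1c) and the boundary conditions (1b), the test functions $p$ vanish at $x = 0$ and $x = L$, and only the viscous term is integrated by parts:
\[
\int_0^T\!\!\int_0^L \left( u_t\, p + u\, u_x\, p + m\, u_x\, p_x - f\, p \right) dx\, dt = 0
\qquad \text{for all admissible } p,
\]
where the boundary term $\left[ m\, u_x\, p \right]_{x=0}^{x=L}$ produced by the integration by parts drops out because $p(0, t) = p(L, t) = 0$.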
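For Part I, problem 2, one common starting point is the Lagrangian obtained by adjoining the weak form above to the objective (2), with the adjoint $p$ acting as the Lagrange multiplier; the expression below is one such sketch:
\[
\mathcal{L}(u, m, p) := \frac{1}{2}\int_{T_1}^{T}\!\!\int_0^L (u - d)^2\, dx\, dt
+ \frac{\beta}{2}\int_0^L \left(\frac{dm}{dx}\right)^{\!2} dx
+ \int_0^T\!\!\int_0^L \left( u_t\, p + u\, u_x\, p + m\, u_x\, p_x - f\, p \right) dx\, dt.
\]
Requiring variations of $\mathcal{L}$ with respect to $p$, $u$, and $m$ to vanish yields, respectively, the weak form of the state equation, the adjoint equation, and the expression for the gradient of $F$ with respect to $m$.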
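For Part II, problem 2, the snippet below is a minimal sketch of how the forward (state) problem (4) might be set up in legacy FEniCS, using quadratic elements for the state and linear elements for the parameter, and solving with the "true" parameter field (e.g., to synthesize the data). It illustrates only the problem setup, not the inexact Newton-CG driver in 4.InexactNewtonCG.ipynb; the variable names (nx, Vu, Vm, m_true, res, etc.) are placeholders introduced here and are not taken from the notebook.

    import numpy as np
    from dolfin import *

    np.random.seed(seed=1)                     # fix the random seed as required

    nx = 20                                    # try 20, 40, and 80
    mesh = UnitSquareMesh(nx, nx)

    Vu = FunctionSpace(mesh, "Lagrange", 2)    # state/adjoint: quadratic elements
    Vm = FunctionSpace(mesh, "Lagrange", 1)    # parameter: linear elements

    # "true" log reaction coefficient used to synthesize the data
    m_true = interpolate(
        Expression("(pow(x[0]-0.5,2) + pow(x[1]-0.5,2)) <= pow(0.2,2) ? 4.0 : 8.0",
                   degree=1), Vm)

    # source term f(x,y) = max{0.5, exp(-25(x-0.7)^2 - 25(y-0.7)^2)},
    # written with a C++ ternary instead of max()
    f = Expression("exp(-25.0*(pow(x[0]-0.7,2) + pow(x[1]-0.7,2))) > 0.5 ? "
                   "exp(-25.0*(pow(x[0]-0.7,2) + pow(x[1]-0.7,2))) : 0.5",
                   degree=5)

    v = Constant((1.0, 0.0))                   # advective velocity
    k = Constant(1.0)                          # diffusion coefficient

    u = Function(Vu)                           # state
    p = TestFunction(Vu)
    bc = DirichletBC(Vu, Constant(0.0), "on_boundary")

    # weak form of (4): residual r(u; p) = 0 for all test functions p
    res = (k*inner(grad(u), grad(p)) + inner(v, grad(u))*p
           + Constant(100.0)*exp(m_true)*u**3*p - f*p)*dx

    solve(res == 0, u, bc)                     # Newton solve of the nonlinear state equation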
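If the discrepancy principle is used to choose $\beta$ in problem 2, the usual criterion (a standard rule of thumb, not something prescribed by this assignment) is to pick $\beta$ so that the data misfit of the reconstruction matches the noise level:
\[
\left\| u(m_\beta) - d \right\|_{L^2(\Omega)} \approx \delta,
\]
where $m_\beta$ denotes the reconstruction obtained with regularization parameter $\beta$ and $\delta$ is an estimate of the norm of the noise in the data.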
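For the extra-credit problem 4, a common choice is a smoothed total variation term in place of the Tikhonov term in (3). The sketch below shows one way to express such a term in FEniCS; the smoothing parameter eps (needed to keep the functional differentiable at grad(m) = 0) and the numerical values of beta and eps are assumptions introduced here for illustration and do not come from the assignment.

    from dolfin import *

    mesh = UnitSquareMesh(40, 40)
    Vm = FunctionSpace(mesh, "Lagrange", 1)
    m = Function(Vm)

    beta = Constant(1e-3)   # regularization parameter (illustrative value)
    eps  = Constant(1e-4)   # TV smoothing parameter (illustrative value)

    # smoothed total variation term: beta * int sqrt(grad(m).grad(m) + eps) dx
    R_tv = beta*sqrt(inner(grad(m), grad(m)) + eps)*dx

    print(assemble(R_tv))         # value of the regularization functional
    dR = derivative(R_tv, m)      # its variation, needed for gradient/Hessian terms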
