Computational and Variational Methods for Inverse Problems
CSE 397/GEO 391/ME 397/ORI 397
Assignment 4

The first problem in this assignment is a paper-and-pencil exercise in which you work out expressions for the gradient and the Hessian action (in a given direction) for a frequency-domain inverse wave propagation problem. This is a technologically important problem: it is how we image the subsurface and how we non-destructively evaluate materials for internal defects, and it is the inverse problem underlying certain medical imaging methods. In the second problem, you will be asked to solve an inverse problem governed by an advection-diffusion-reaction equation (this is the prototype inverse problem we discussed in class). For this assignment we will use just a steepest descent method; the next assignment will generalize the solver to a Newton method. You will be given an existing implementation of a steepest descent inverse problem solver for a Poisson inverse problem, and will be asked to modify it.

1. Frequency-domain inverse wave propagation problem

Here we formulate and solve an inverse acoustic wave propagation problem in the frequency domain, which is governed by the Helmholtz equation. Let $\Omega \subset \mathbb{R}^d$ be a bounded domain ($d \in \{2,3\}$) with boundary $\partial\Omega$. The idea is to propagate harmonic waves from multiple sources $f_j(x)$ ($j = 1,\dots,N_s$) at multiple frequencies $\omega_i$ ($i = 1,\dots,N_f$) into a medium and measure the amplitude $u_{ij}(x)$ of the reflected wavefields at points $x_k$ ($k = 1,\dots,N_d$) for each source and frequency, with the goal of inferring the sound speed $c(x)$ of the medium. We can formulate this inverse problem as follows:

\[
\min_m J(m) := \frac{1}{2} \sum_{i=1}^{N_f} \sum_{j=1}^{N_s} \sum_{k=1}^{N_d} \int_\Omega \bigl(u_{ij}(m) - u_{ij}^{\mathrm{obs}}\bigr)^2 \,\delta(x - x_k)\, dx + \frac{\beta}{2} \int_\Omega \nabla m \cdot \nabla m \, dx,
\]

where $u_{ij}(x)$ depends on the medium parameter field $m(x) = 1/c(x)^2$ through the solution of the Helmholtz problem

\[
\begin{aligned}
-\Delta u_{ij} - \omega_i^2\, m\, u_{ij} &= f_j && \text{in } \Omega, \qquad i = 1,\dots,N_f, \; j = 1,\dots,N_s,\\
u_{ij} &= 0 && \text{on } \partial\Omega.
\end{aligned}
\]

In the problem above, $u_{ij}^{\mathrm{obs}}(x)$ denotes the given measurements (for frequency $i$ and source $j$), $u_{ij}$ is the amplitude of the acoustic wavefield, and $\beta > 0$ is the regularization parameter.

1. Derive an expression for the (infinite-dimensional) gradient of $J$ with respect to the medium $m$ using the Lagrangian method for a single source and frequency, i.e., for $N_f = N_s = 1$. Give both weak and strong forms of the gradient.

2. Derive an expression for the (infinite-dimensional) action of the Hessian in a direction $\tilde m$ in the single source and frequency case. Give both weak and strong forms of the Hessian action.

3. Derive an expression for the (infinite-dimensional) gradient for an arbitrary number of sources and frequencies. How many state and adjoint equations have to be solved for a single gradient computation? (Hint: every state equation with solution $u_{ij}$ has a corresponding adjoint variable $p_{ij}$; the Lagrangian functional now involves the sum over all state equations.)

2. Inverse advection-diffusion problem

We would like to solve the inverse problem for the advection-diffusion-reaction equation we discussed in class, on the domain $\Omega = [0,1] \times [0,1]$:

\[
\min_m J(m) := \frac{1}{2} \int_\Omega \bigl(u(m) - u^{\mathrm{obs}}\bigr)^2 \, dx + \frac{\beta}{2} \int_\Omega \nabla m \cdot \nabla m \, dx, \tag{1}
\]

where the field $u(x)$ depends on the reaction coefficient $m(x)$ through the solution of

\[
\begin{aligned}
-\nabla \cdot (k \nabla u) + v \cdot \nabla u + 100 \exp(m)\, u^3 &= f && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega,
\end{aligned} \tag{2}
\]

where $k(x)$ is the diffusivity, $v(x)$ is the advective velocity vector with components $(v_x, v_y)$, $\beta > 0$ is the regularization parameter, and $f(x)$ is the source term. We synthesize the measurement data $u^{\mathrm{obs}}(x)$ by solving the forward advection-diffusion-reaction equation with the "true" reaction coefficient field

\[
m(x,y) =
\begin{cases}
4 & \text{if } (x-0.5)^2 + (y-0.5)^2 \le 0.2^2,\\
8 & \text{otherwise,}
\end{cases}
\]

for

\[
f(x,y) = \max\bigl\{0.5,\; \exp\bigl(-25\,[(x-0.7)^2 + (y-0.7)^2]\bigr)\bigr\}, \qquad k = 1, \qquad v = (1,0)^T.
\]
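For orientation, here is a minimal sketch of how this data-synthesis step could look: one nonlinear forward solve of Eq. (2) with the "true" coefficient, followed by adding noise. It assumes the legacy FEniCS (dolfin) interface used by Poisson_SD.ipynb; the mesh size, element degrees, and noise scaling are illustrative choices, not part of the assignment.

```python
# Hedged sketch: synthesize u_obs by one nonlinear forward solve of Eq. (2).
# Assumes legacy FEniCS (dolfin); mesh/degree/noise choices are illustrative.
import numpy as np
from dolfin import *

mesh = UnitSquareMesh(64, 64)
V  = FunctionSpace(mesh, "Lagrange", 2)   # state space
Vm = FunctionSpace(mesh, "Lagrange", 1)   # parameter space

# "true" reaction coefficient: 4 inside the circle of radius 0.2, 8 outside
m_true = interpolate(
    Expression("(pow(x[0]-0.5,2) + pow(x[1]-0.5,2) <= 0.04) ? 4.0 : 8.0",
               degree=1), Vm)

# source term f(x,y) = max{0.5, exp(-25((x-0.7)^2 + (y-0.7)^2))}
f = Expression("exp(-25.0*(pow(x[0]-0.7,2) + pow(x[1]-0.7,2))) > 0.5 ?"
               " exp(-25.0*(pow(x[0]-0.7,2) + pow(x[1]-0.7,2))) : 0.5",
               degree=3)

k  = Constant(1.0)                        # diffusivity
v  = Constant((1.0, 0.0))                 # advective velocity
bc = DirichletBC(V, Constant(0.0), "on_boundary")

# weak form of Eq. (2): residual F(u; u_test) = 0
u = Function(V)
u_test = TestFunction(V)
F = (k*inner(grad(u), grad(u_test)) + dot(v, grad(u))*u_test
     + Constant(100.0)*exp(m_true)*u**3*u_test - f*u_test)*dx
solve(F == 0, u, bc)                      # Newton solve of the forward problem

# add ~1% noise relative to max|u| (noise level 0.01, as in Problem 2.1(a))
u_obs = Function(V)
noise = 0.01*np.abs(u.vector().get_local()).max()*np.random.randn(V.dim())
u_obs.vector().set_local(u.vector().get_local() + noise)
```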
Noise is added to this "data" to simulate actual instrument noise.

On Canvas, we have provided the FEniCS code Poisson_SD.ipynb, which is an implementation of the steepest descent method for a Poisson inverse problem. (Note that we do not advocate the steepest descent method for variational inverse problems, since its convergence will generally be very slow; for all cases, please use a maximum number of iterations of at least 1000—you will need it! In Assignment 5, we will solve this problem using an inexact Newton-CG method, which is far more efficient.) You should extend the implementation to include the advection and reaction terms, and invert for the log reaction coefficient $m$ (a sketch of the state, adjoint, and gradient steps of one steepest descent iteration appears at the end of this handout). Please turn in your modified code along with numerical results and a discussion of the following:

1. Report the solution of the inverse problem and the number of required iterations (for a $10^{-6}$ relative reduction in the norm of the gradient) for the following cases:

   (a) Noise level of 0.01 (roughly 1% noise), regularization parameter $\beta = 10^{-8}$, and initial guess $m(x) = 8$.

   (b) Same as (a), but with $\beta = 0$, i.e., no regularization. You will have to decide when to terminate the iterations to produce a good inverse solution. (Note: inverse problems can be "iteratively regularized," which means that an iterative method for solving the inverse problem is terminated early in the iterations, thus providing a regularizing effect without the need for any formal regularization term. The reason is that iterative methods such as conjugate gradients first capture the modes that are most informed by the data, and then proceed to modes that are less informed by the data. Another way of putting this is that CG eliminates error components associated with large eigenvalues of the Hessian before proceeding to smaller ones. The challenge in iterative regularization is to terminate the iterations late enough that the data-informed modes are captured, but not so late that noisy modes are also brought into the solution. See, for example, the book by Kaipio and Somersalo for more on iterative regularization methods.)

   (c) Same as (a), but with an initial guess of $m(x) = 4$.

2. Since the coefficient $m$ is discontinuous, a better choice of regularization is total variation rather than Tikhonov regularization, to prevent an overly smooth reconstruction. Modify the implementation and plot the result for a reasonably chosen regularization parameter (a UFL sketch of the TV term follows below). Use $\delta = 0.1$ to make the non-differentiable TV regularization term differentiable ($\delta$ has the same meaning as in Assignment 3). In other words, your regularization functional should be

\[
R_{TV}^{\delta}(m) := \beta \int_\Omega \bigl(\nabla m \cdot \nabla m + \delta\bigr)^{1/2} dx.
\]

Just to be sure, note that the first variation of $R_{TV}^{\delta}$ with respect to $m$, i.e., the weak form of its gradient, is given by

\[
\delta_m R_{TV}^{\delta}(m)(\tilde m) := \beta \int_\Omega \bigl(\nabla m \cdot \nabla m + \delta\bigr)^{-1/2}\, \nabla m \cdot \nabla \tilde m \, dx \qquad \forall\, \tilde m \in H^1(\Omega).
\]
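Here is a hedged UFL sketch of this smoothed TV term and its first variation; the mesh, spaces, placeholder parameter field, and the value of beta are illustrative, and UFL's automatic derivative is used only as a consistency check on the hand-coded weak form above.

```python
# Hedged sketch of the smoothed TV functional R_TV^delta and its first
# variation in UFL (legacy dolfin); all setup below is illustrative.
from dolfin import *

mesh = UnitSquareMesh(32, 32)
Vm = FunctionSpace(mesh, "Lagrange", 1)
m  = interpolate(Expression("8.0 - 4.0*x[0]", degree=1), Vm)  # placeholder

beta  = Constant(1e-3)   # the "reasonably chosen" value is up to you
delta = Constant(0.1)    # smoothing parameter specified in the assignment
m_tilde = TestFunction(Vm)

# R_TV^delta(m) = beta * int_Omega (grad m . grad m + delta)^(1/2) dx
R_tv = beta*sqrt(inner(grad(m), grad(m)) + delta)*dx

# hand-coded first variation:
#   beta * int (grad m . grad m + delta)^(-1/2) (grad m . grad m_tilde) dx
dR_tv = beta*inner(grad(m), grad(m_tilde)) \
        / sqrt(inner(grad(m), grad(m)) + delta)*dx

# consistency check: UFL's automatic Gateaux derivative should match
g_hand = assemble(dR_tv)
g_auto = assemble(derivative(R_tv, m, m_tilde))
print((g_hand - g_auto).norm("l2"))       # should be near machine precision
```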

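Finally, to help connect the steepest descent pieces for Problem 2, below is a hedged sketch of a single iteration (state solve, adjoint solve, gradient assembly, parameter update) obtained from the Lagrangian of (1)-(2). It is a sketch under assumed names, not the reference implementation: u_obs is assumed to come from the data-synthesis step, f, beta, the spaces, and the step size are illustrative, and Poisson_SD.ipynb organizes these steps its own way (including an Armijo line search).

```python
# Hedged sketch of one steepest descent iteration for (1)-(2).
# Assumes legacy dolfin; u_obs, f, beta, and alpha are placeholders.
from dolfin import *

mesh = UnitSquareMesh(32, 32)
V  = FunctionSpace(mesh, "Lagrange", 2)
Vm = FunctionSpace(mesh, "Lagrange", 1)

k, v = Constant(1.0), Constant((1.0, 0.0))
beta = Constant(1e-8)
f  = Constant(1.0)                    # placeholder source
u_obs = Function(V)                   # assumed: synthesized noisy data
bc = DirichletBC(V, Constant(0.0), "on_boundary")

m = interpolate(Constant(8.0), Vm)    # initial guess from Problem 2.1(a)
u, p = Function(V), Function(V)       # state and adjoint
u_test, m_test = TestFunction(V), TestFunction(Vm)

# --- state solve: weak residual of Eq. (2) ---
r = (k*inner(grad(u), grad(u_test)) + dot(v, grad(u))*u_test
     + Constant(100.0)*exp(m)*u**3*u_test - f*u_test)*dx
solve(r == 0, u, bc)

# --- adjoint solve: (dr/du)^T p = -(u - u_obs, u_test) ---
A = assemble(adjoint(derivative(r, u)))
b = assemble(-(u - u_obs)*u_test*dx)
bc.apply(A, b)                        # homogeneous Dirichlet on the adjoint
solve(A, p.vector(), b)

# --- gradient in weak form: 100 exp(m) u^3 p m_test + Tikhonov term ---
g = assemble((Constant(100.0)*exp(m)*u**3*p*m_test
              + beta*inner(grad(m), grad(m_test)))*dx)

# --- update (the notebook instead uses an Armijo backtracking line search
#     and a mass-matrix Riesz map to turn g into a descent direction) ---
alpha = 1.0e-2                        # illustrative fixed step length
m.vector().axpy(-alpha, g)
```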