Abstract
In this chapter we briefly outline a new and remarkably fast algorithm for solving a large class of high dimensional Hamilton-Jacobi (H-J) initial value problems arising in optimal control and elsewhere [1]. This is done without the use of grids or numerical approximations. Moreover, by using the level set method [8] we can rapidly compute projections of a point in \(\mathbb{R}^{n}\), n large, to a fairly arbitrary compact set [2]. The method seems to generalize widely beyond what we will present here to some nonconvex Hamiltonians, new linear programming algorithms, differential games, and perhaps state dependent Hamiltonians.
Keywords
- Nonconvex Hamiltonians
- Linear Programming Algorithm
- Differential Games
- Initial Value Problem
- Hopf Formula
1 Introduction
We begin with the Hamilton-Jacobi (HJ) initial value problem
\[
\begin{cases}
\dfrac{\partial \varphi}{\partial t}(x,t) + H(\nabla_x \varphi(x,t)) = 0 & \text{in } \mathbb{R}^{n} \times (0,+\infty),\\[4pt]
\varphi(x,0) = J(x) & \text{for } x \in \mathbb{R}^{n}.
\end{cases}
\tag{12.1}
\]
We assume \(J: \mathbb{R}^{n} \rightarrow \mathbb{R}\) is convex and 1-coercive, i.e., \(\lim _{\|x\|_{2}\rightarrow +\infty }\frac{J(x)} {\|x\|_{2}} = +\infty\), and that \(H: \mathbb{R}^{n} \rightarrow \mathbb{R}\) is convex and positively 1-homogeneous (we sometimes relax these assumptions).
A good example of this is the eikonal-type equation obtained by taking \(H = \|\cdot\|_{p}\),
\[
\frac{\partial \varphi}{\partial t}(x,t) + \|\nabla_x \varphi(x,t)\|_{p} = 0.
\]
Here \(\|v\|_{p} = \left(\sum_{i=1}^{n}\vert v_{i}\vert ^{p}\right)^{1/p}\) for \(p \geq 1\) and \(\langle x, v\rangle = \sum_{i=1}^{n} x_{i} v_{i}\).
Let us consider a convex Lipschitz function J having the property that, for Ω a convex compact set of \(\mathbb{R}^{n}\),
\[
J(x) < 0 \ \text{for } x \in \mathrm{int}\,\varOmega, \qquad J(x) = 0 \ \text{for } x \in \partial\varOmega, \qquad J(x) > 0 \ \text{otherwise}.
\]
We call this level set initial data. Then, for the eikonal Hamiltonian \(H = \|\cdot\|_{2}\), the set of points for which φ(x, t) = 0, t > 0, is exactly the set of points at distance t from the boundary of Ω. In fact, given \(\bar{x}\ \notin \ \varOmega\), the closest point x<sub>opt</sub> from \(\bar{x}\) to \(\varOmega \setminus \mathrm{int}\,\varOmega\) is exactly
\[
x_{\mathrm{opt}} = \bar{x} - t\,\frac{\nabla_x \varphi(\bar{x}, t)}{\|\nabla_x \varphi(\bar{x}, t)\|_{2}}, \qquad \text{where } t > 0 \text{ satisfies } \varphi(\bar{x}, t) = 0.
\tag{12.2}
\]
To solve (12.1) we use the Hopf formula [5]
\[
\varphi(x,t) = \left(J^{*} + tH\right)^{*}(x) = -\min_{v\in\mathbb{R}^{n}} \left\{ J^{*}(v) + tH(v) - \langle x, v\rangle \right\},
\tag{12.3}
\]
where the Fenchel-Legendre transform \(f^{{\ast}}: \mathbb{R}^{n} \rightarrow \mathbb{R} \cup \{+\infty \}\) of the convex function f is defined by
\[
f^{*}(v) = \sup_{x\in\mathbb{R}^{n}} \left\{ \langle x, v\rangle - f(x) \right\}.
\]
Moreover, for free we get that the minimizer of (12.3) satisfies
\[
\nabla_x \varphi(x,t) = \arg\min_{v\in\mathbb{R}^{n}} \left\{ J^{*}(v) + tH(v) - \langle x, v\rangle \right\}
\]
whenever φ(⋅ , t) is differentiable at x. Let us note here that our algorithm computes φ(x, t) but also ∇ x φ(x, t).
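As a concrete illustration (not taken from the chapter), consider \(J = \frac{1}{2}\|\cdot\|_{2}^{2}\), so that \(J^{*} = J\), together with \(H = \|\cdot\|_{1}\). The Hopf minimization then decouples coordinate by coordinate and its minimizer is the soft-thresholding of x with threshold t; the sketch below evaluates both φ(x, t) and ∇<sub>x</sub>φ(x, t) this way and cross-checks φ against a brute-force minimization of the Hopf objective on a fine grid.

```python
import numpy as np

def shrink1(z, alpha):
    """Soft-thresholding: closed-form minimizer of 0.5*v**2 + alpha*|v| - z*v
    in each coordinate."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def hopf_solve(x, t):
    """Evaluate the Hopf formula for J = J* = 0.5*||.||_2^2 and H = ||.||_1.
    Returns (phi(x,t), grad_x phi(x,t)); the minimizer v* is the gradient."""
    v = shrink1(x, t)                                    # argmin of J*(v) + t*H(v) - <x,v>
    phi = -(0.5 * v @ v + t * np.abs(v).sum() - x @ v)   # phi = minus the minimum value
    return phi, v

x = np.array([3.0, 0.5, -2.0]); t = 1.0
phi, grad = hopf_solve(x, t)

# Independent check: minimize the separable Hopf objective on a fine 1-D grid.
vg = np.linspace(-5, 5, 100001)
phi_grid = -sum(np.min(0.5 * vg**2 + t * np.abs(vg) - xi * vg) for xi in x)
assert abs(phi - phi_grid) < 1e-6
```

Note that the gradient comes out of the same computation at no extra cost, exactly as the text describes.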
Also, we can use the Hopf-Lax formula [5, 6] to solve (12.1) for convex H:
\[
\varphi(x,t) = \min_{u\in\mathbb{R}^{n}} \left\{ J(u) + tH^{*}\!\left(\frac{x-u}{t}\right) \right\}.
\tag{12.4}
\]
From (12.4) it is easy to show the following. Suppose we have k different initial value problems, i = 1, …, k,
\[
\frac{\partial \varphi_i}{\partial t}(x,t) + H(\nabla_x \varphi_i(x,t)) = 0, \qquad \varphi_i(x,0) = J_i(x),
\]
with the usual hypotheses. Then (12.4) implies, for any \(x\ \in \ \mathbb{R}^{n},\ t> 0\),
\[
\min_{i=1,\ldots,k} \varphi_i(x,t) = \min_{u\in\mathbb{R}^{n}} \left\{ \min_{i=1,\ldots,k} J_i(u) + tH^{*}\!\left(\frac{x-u}{t}\right) \right\}.
\]
So
\[
\varphi(x,t) = \min_{i=1,\ldots,k} \varphi_i(x,t)
\]
solves the initial value problem
\[
\frac{\partial \varphi}{\partial t}(x,t) + H(\nabla_x \varphi(x,t)) = 0, \qquad \varphi(x,0) = \min_{i=1,\ldots,k} J_i(x).
\tag{12.5}
\]
This means that if \(\varOmega = \bigcup_{i=1,\ldots,k} \varOmega_i\), where each \(\varOmega_i\) is compact and convex and the \(\varOmega_i\) may overlap, then we can easily compute the set of all points at distance t from Ω, which is exactly the zero level set of the solution to (12.5) where each \(J_i\) is a level set function for \(\varOmega_i\). Moreover, at every point \(\bar{x}\) outside of \(\bar{\varOmega }\) for which there is one i such that \(\varphi _{i}(\bar{x},t) <\varphi _{i'}(\bar{x},t)\) for all i′ ≠ i, the closest point x<sub>opt</sub> from \(\bar{x}\) to Ω is again
\[
x_{\mathrm{opt}} = \bar{x} - t\,\frac{\nabla_x \varphi(\bar{x}, t)}{\|\nabla_x \varphi(\bar{x}, t)\|_{2}}, \qquad \varphi(\bar{x}, t) = 0.
\]
If there are several i for which \(\varphi _{i}(\bar{x},t)\) is the minimum among all k of them, then ∇ x φ will be “multivalued”, i.e., it will have jumps, but any of the x opt defined above will be a closest point on Ω to \(\bar{x}\).
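The min-of-solutions property can be checked numerically in a toy setting of my own choosing: take \(H = \|\cdot\|_{2}\), so that H* is the characteristic function of the unit ball and the Hopf-Lax formula reduces to minimizing the initial data over the ball of radius t centered at x, and take two quadratic level-set-like functions \(J_i(u) = \frac{1}{2}\|u - c_i\|_{2}^{2}\). Each φ<sub>i</sub> then has an obvious closed form, and their pointwise minimum should agree with a brute-force Hopf-Lax minimization using \(J = \min_i J_i\).

```python
import numpy as np

# Two convex pieces J_i(u) = 0.5*||u - c_i||^2 (a hypothetical choice);
# with H = ||.||_2 the Hopf-Lax formula minimizes the initial data over
# the ball of radius t centered at x.
centers = [np.array([-1.0, 0.0]), np.array([2.0, 1.0])]

def phi_i(x, t, c):
    # Closed form: minimum of 0.5*||u - c||^2 over the ball ||u - x||_2 <= t.
    return 0.5 * max(np.linalg.norm(x - c) - t, 0.0) ** 2

x = np.array([0.5, -0.5]); t = 0.4
phi_min = min(phi_i(x, t, c) for c in centers)

# Brute-force Hopf-Lax with initial data J = min_i J_i on a fine 2-D grid.
g = np.linspace(-3, 3, 601)
U = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
ball = np.linalg.norm(U - x, axis=1) <= t
J = np.minimum(*[0.5 * np.sum((U - c) ** 2, axis=1) for c in centers])
phi_grid = J[ball].min()
assert abs(phi_min - phi_grid) < 2e-2   # grid-resolution tolerance
```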
2 Split Bregman
We solve the optimization problem (12.3) by using the split Bregman algorithm [4, 3, 9] as follows:
\[
v^{k+1} = \arg\min_{v\in\mathbb{R}^{n}} \left\{ J^{*}(v) - \langle x, v\rangle + \frac{\lambda}{2}\|d^{k} - v - b^{k}\|_{2}^{2} \right\},
\tag{12.6}
\]
\[
d^{k+1} = \arg\min_{d\in\mathbb{R}^{n}} \left\{ tH(d) + \frac{\lambda}{2}\|d - v^{k+1} - b^{k}\|_{2}^{2} \right\},
\tag{12.7}
\]
\[
b^{k+1} = b^{k} + v^{k+1} - d^{k+1}.
\tag{12.8}
\]
Here the sequences \((v^{k})_{k\in \mathbb{N}},(d^{k})_{k\in \mathbb{N}}\) both converge to ∇ x φ(x, t). Let us emphasize again that our numerical algorithm not only computes the solution φ(x, t) but also computes ∇ x φ(x, t) when φ(⋅ , t) is differentiable.
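To make the iteration concrete, here is a minimal sketch (my own, not the authors' code) for the illustrative choice \(J^{*} = \frac{1}{2}\|\cdot\|_{2}^{2}\) and \(H = \|\cdot\|_{1}\): the v-step is then an explicit quadratic solve and the d-step is soft-thresholding. For this choice the true minimizer is known in closed form, so convergence of both sequences to ∇<sub>x</sub>φ(x, t) can be verified directly.

```python
import numpy as np

def shrink1(z, alpha):
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def split_bregman_hopf(x, t, lam=1.0, iters=500):
    """Split Bregman for J*(v) + t*H(v) - <x,v> with J* = 0.5*||.||_2^2
    and H = ||.||_1 (an illustrative choice). Both v and d converge to
    grad_x phi(x,t)."""
    v = np.zeros_like(x); d = np.zeros_like(x); b = np.zeros_like(x)
    for _ in range(iters):
        # v-step: quadratic in v, solved in closed form
        v = (x + lam * (d - b)) / (1.0 + lam)
        # d-step: proximal map of (t/lam)*||.||_1, i.e., soft-thresholding
        d = shrink1(v + b, t / lam)
        # Bregman/multiplier update
        b = b + v - d
    phi = -(0.5 * v @ v + t * np.abs(d).sum() - x @ v)
    return phi, v, d

x = np.array([3.0, 0.5, -2.0]); t = 1.0
phi, v, d = split_bregman_hopf(x, t)
# Closed-form minimizer for this J, H is soft-thresholding of x:
assert np.allclose(v, shrink1(x, t), atol=1e-6)
```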
Both (12.6) and (12.7), up to a change of variables, can be reformulated as finding the unique minimizer of
\[
w \mapsto \frac{1}{2}\|w - z\|_{2}^{2} + \alpha f(w),
\]
which is the proximal map of f evaluated at z. Equation (12.6) can be solved if either J* or J has an easily computable proximal map, which often occurs, especially if one of them is smooth.
Equation (12.7) can be easily solved if \(H(d) = \|d\|_{2}\) via the shrink2 operator defined by
\[
\mathrm{shrink}_{2}(z,\alpha) = \begin{cases} \left(1 - \dfrac{\alpha}{\|z\|_{2}}\right) z & \text{if } \|z\|_{2} \geq \alpha,\\[4pt] 0 & \text{otherwise},\end{cases}
\]
and we have
\[
d^{k+1} = \mathrm{shrink}_{2}\!\left(v^{k+1} + b^{k}, \frac{t}{\lambda}\right).
\]
If \(H(d) = \|d\|_{1}\) we use the shrink1 operator defined, for any i = 1, …, n, by
\[
\left(\mathrm{shrink}_{1}(z,\alpha)\right)_{i} = \mathrm{sign}(z_{i})\max(\vert z_{i}\vert - \alpha,\, 0),
\]
and we have
\[
d^{k+1} = \mathrm{shrink}_{1}\!\left(v^{k+1} + b^{k}, \frac{t}{\lambda}\right).
\]
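Both shrink operators are proximal maps, so their correctness can be sanity-checked against the optimality conditions of the corresponding minimization; the short sketch below (an illustration, not the chapter's code) does this for shrink2 via its first-order condition and for shrink1 via brute-force one-dimensional minimization.

```python
import numpy as np

def shrink2(z, alpha):
    """Prox of alpha*||.||_2: scales z toward 0, or returns 0 if ||z||_2 <= alpha."""
    nz = np.linalg.norm(z)
    return np.zeros_like(z) if nz <= alpha else (1.0 - alpha / nz) * z

def shrink1(z, alpha):
    """Prox of alpha*||.||_1: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

z = np.array([0.6, -0.8, 2.0]); alpha = 0.5
d2 = shrink2(z, alpha)
# Optimality for d = prox of alpha*||.||_2 at z (when d != 0):
#   d - z + alpha * d / ||d||_2 = 0
assert np.allclose(d2 - z + alpha * d2 / np.linalg.norm(d2), 0.0)

d1 = shrink1(z, alpha)
# Brute-force check per coordinate that d1 minimizes alpha*|d| + 0.5*(d - z_i)^2.
g = np.linspace(-3, 3, 60001)
for zi, di in zip(z, d1):
    best = g[np.argmin(alpha * np.abs(g) + 0.5 * (g - zi) ** 2)]
    assert abs(best - di) < 1e-3
```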
To solve (12.7) for more general convex, positively 1-homogeneous H, or to find the proximal map for f of that type, we use the fact that H* is the characteristic function of a closed convex set \(C \subset \mathbb{R}^{n}\):
\[
H^{*}(v) = \begin{cases} 0 & \text{if } v \in C,\\ +\infty & \text{otherwise}.\end{cases}
\]
By using the Moreau identity [7] we realize that the proximal map of H can be obtained by projecting onto C. To do this projection, we merely solve the eikonal equation with level set initial data for C via split Bregman as above in (12.6), (12.7), (12.8) with H(d) = ∥ d ∥ 2. This is easy using the shrink2 operator. We then use (12.2) to obtain the projection and repeat the entire iteration.
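The Moreau identity gives, for 1-homogeneous H, \(\mathrm{prox}_{\alpha H}(z) = z - \mathrm{Proj}_{\alpha C}(z)\). As a sanity check (my own example), take \(H = \|\cdot\|_{1}\): then C is the unit ℓ<sup>∞</sup> ball, the projection onto αC is a componentwise clip, and the identity should reproduce the shrink1 operator exactly.

```python
import numpy as np

def shrink1(z, alpha):
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def prox_via_moreau(z, alpha):
    """Prox of alpha*||.||_1 via the Moreau identity:
    prox_{alpha*H}(z) = z - Proj_{alpha*C}(z), where C = {v : ||v||_inf <= 1}
    is the set whose characteristic function is H* for H = ||.||_1."""
    proj = np.clip(z, -alpha, alpha)   # projection onto alpha*C, a box here
    return z - proj

z = np.array([1.7, -0.2, 0.9]); alpha = 0.5
assert np.allclose(prox_via_moreau(z, alpha), shrink1(z, alpha))
```

For a general C the box projection is replaced by the split Bregman eikonal projection described above; the identity itself is unchanged.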
3 Numerical Experiments
Numerical experiments, run on an Intel Core i5-5300U laptop at 2.3 GHz, are now presented. We consider diagonal matrices D defined by \(D_{ii} = 1 + \frac{1+i} {n}\) for i = 1, …, n. We also consider matrices A defined by \(A_{ii} = 2\) for i = 1, …, n and \(A_{ij} = 1\) for i ≠ j. Table 12.1 presents the average time (in seconds) to evaluate the solution over 1,000,000 samples (x, t) uniformly drawn in [−10, 10]<sup>n</sup> × [0, 10]. The method is remarkably fast: each function evaluation takes on the order of \(10^{-8}\) to \(10^{-6}\) seconds on a standard laptop. Figure 12.1 depicts 2-dimensional slices at different times for the (H-J) equation with a weighted ℓ<sup>1</sup> Hamiltonian \(H = \|D\,\cdot\|_{1}\), initial data \(J = \frac{1} {2}\| \cdot \|_{2}^{2}\), and n = 8.
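For the weighted ℓ<sup>1</sup> case with diagonal D and \(J = \frac{1}{2}\|\cdot\|_{2}^{2}\), the Hopf minimization decouples and each evaluation is a vectorized soft-threshold with per-coordinate threshold \(tD_{ii}\) (a derivation sketch of mine, not the authors' code). The snippet below mirrors the sampling setup of the experiments; timing numbers will of course vary by machine.

```python
import numpy as np
import time

n = 8
Dii = 1.0 + (1.0 + np.arange(1, n + 1)) / n   # D_ii = 1 + (1+i)/n

def phi_weighted_l1(x, t):
    """phi(x,t) for H = ||D . ||_1 (D diagonal) and J = 0.5*||.||_2^2,
    via the per-coordinate closed form of the Hopf minimization."""
    m = np.maximum(np.abs(x) - t * Dii, 0.0)
    return 0.5 * np.sum(m * m, axis=-1)

# Sampling mirroring the experiments: (x, t) uniform in [-10,10]^n x [0,10].
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(100000, n))
T = rng.uniform(0, 10, size=(100000, 1))
t0 = time.perf_counter()
vals = phi_weighted_l1(X, T)
avg = (time.perf_counter() - t0) / len(X)   # seconds per evaluation
assert vals.shape == (100000,)
```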
4 Summary and Future Work
We have derived a very fast and totally parallelizable method to solve a large class of high dimensional, state independent H-J initial value problems. We do this using the Hopf formula and convex optimization via splitting, which overcomes the "curse of dimensionality". This is also done without the use of grids or numerical approximations, and it yields not only the solution but also its gradient.
We also, as a step in this procedure, very rapidly compute the projections from a point in \(\mathbb{R}^{n}\), n large, to a fairly arbitrary compact set.
In future work, we expect to extend this set of ideas to nonconvex Hamiltonians, including some that arise in differential games, to new linear programming algorithms, to fast methods for redistancing in level set methods and, hopefully, to a wide class of state dependent Hamiltonians.
References
Darbon, J., Osher, S.: Algorithms for overcoming the curse of dimensionality for certain Hamilton-Jacobi equations arising in control theory and elsewhere. Research in the Mathematical Sciences (to appear)
Darbon, J., Osher, S.: Fast projections onto compact sets in high dimensions using the level set method, Hopf formulas and optimization. (In preparation)
Glowinski, R., Marroco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. ESAIM: Mathematical Modelling and Numerical Analysis 9 (R2), 41–76 (1975)
Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences 2 (2), 323–343 (2009)
Hopf, E.: Generalized solutions of non-linear equations of first order. Journal of Mathematics and Mechanics 14, 951–973 (1965)
Lax, P.D.: Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves. SIAM, Philadelphia, PA (1990)
Moreau, J.J.: Proximité et dualité dans un espace hilbertien. Bulletin de la Société Mathématique de France 93, 273–299 (1965)
Osher, S., Sethian, J.A.: Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics 79 (1), 12–49 (1988)
Yin, W., Osher, S.: Error forgetting of Bregman iteration. Journal of Scientific Computing 54 (2–3), 684–695 (2013)
Acknowledgements
Research supported by ONR grants N000141410683, N000141210838 and DOE grant DE-SC00183838.
© 2016 Springer International Publishing Switzerland
Darbon, J., Osher, S.J. (2016). Splitting Enables Overcoming the Curse of Dimensionality. In: Glowinski, R., Osher, S., Yin, W. (eds) Splitting Methods in Communication, Imaging, Science, and Engineering. Scientific Computation. Springer, Cham. https://doi.org/10.1007/978-3-319-41589-5_12