HJB Equation

Classical mechanics

In the Lagrangian formalism, we define the action functional S as follows: \[\begin{aligned} S[x(t)]=\int_0^{T}L(x(t),\dot x(t),t)\, dt\end{aligned}\] We vary \(x(t)\) to minimize the action. The action evaluated along the minimizing trajectory, viewed as a function of its endpoint, is called the Hamilton principal function. Then, \[\begin{aligned} L=\frac{dS}{dt}&=\frac{\partial S}{\partial t}+\frac{\partial S}{\partial x}\dot x\end{aligned}\] Since the conjugate momentum is \(p=\frac{\partial S}{\partial x}\), using the definition of the Hamiltonian, \(H=p\dot x-L\), we get the Hamilton-Jacobi equation \[\begin{aligned} \frac{\partial S}{\partial t}+\frac{\partial S}{\partial x}\dot x-L&=0\\ \frac{\partial S}{\partial t}+H\left(x,\frac{\partial S}{\partial x}\right)&=0\end{aligned}\]
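As a quick sanity check (my own example, not part of the derivation above): for a free particle starting at the origin, the Hamilton principal function is \(S=mx^2/2t\), and we can verify symbolically that it satisfies the Hamilton-Jacobi equation with \(H=p^2/2m\):

```python
import sympy as sp

m, x, t = sp.symbols("m x t", positive=True)

# Hamilton principal function for a free particle starting at the origin
S = m * x**2 / (2 * t)

# Hamilton-Jacobi equation: dS/dt + H(x, dS/dx) = 0, with H = p**2 / (2*m)
p = sp.diff(S, x)                 # conjugate momentum p = dS/dx
residual = sp.diff(S, t) + p**2 / (2 * m)

print(sp.simplify(residual))      # -> 0
```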

Optics

According to Fermat's principle of least time, we minimize the following action (the optical path length): \[\begin{aligned} S[\vec{r}(s)]=\int_A^{B}n(\vec{r}(s))\,ds\end{aligned}\] The ray equation is given by \[\begin{aligned} \nabla n=\frac{d}{ds}\left(n\frac{d\vec{r}}{ds}\right)\end{aligned}\] Here S is known as the eikonal; its gradient is the conjugate momentum, \(\nabla S=n\frac{d\vec r}{ds}\), and it satisfies the eikonal equation \[\begin{aligned} (\nabla S)^2=n^2\end{aligned}\] https://galileo-unbound.blog/2019/05/30/the-iconic-eikonal-and-the-optical-path/

Control Theory

Is this correct: the value function is the time-reversed Hamilton principal function? We call the Hamilton principal function the value function, and the Lagrangian the cost rate function. Since the value function is the cost-to-go, it decreases at the cost rate along the trajectory: \[\begin{aligned} -C=\frac{dV}{dt}&=\frac{\partial V}{\partial t}+\frac{\partial V}{\partial x}\dot x \\ &=\frac{\partial V}{\partial t}+\frac{\partial V}{\partial x}F(x,u)\end{aligned}\] \[\begin{aligned} V(x(t),t)&=V(x(t+dt),t+dt)+\int_{t}^{t+dt}C(x(s),u(s))\,ds\\ &=V(x(t),t)+\frac{\partial V}{\partial t}dt+\frac{\partial V}{\partial x}\dot x \,d t+C(x(t),u(t))dt\end{aligned}\] This gives the HJB equation, \[\begin{aligned} \frac{\partial V}{\partial t}+\frac{\partial V}{\partial x}F +C(x(t),u(t))=0\end{aligned}\] So, with \(H(x,u,p)=pF(x,u)-C(x,u)\) evaluated at \(p=-\frac{\partial V}{\partial x}\), \[\begin{aligned} \frac{\partial V}{\partial t}-H\left(x,u,-\frac{\partial V}{\partial x}\right)=0\end{aligned}\]
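To make the HJB equation concrete, here is a scalar LQR sketch of my own (system and costs are assumed, not from the derivation above): dynamics \(F(x,u)=ax+bu\), cost rate \(C(x,u)=qx^2+ru^2\). With the stationary ansatz \(V(x)=Px^2\), the HJB equation reduces to an algebraic Riccati equation for P, and we can check numerically that the residual vanishes at the optimal control:

```python
import numpy as np

# Assumed scalar system: F(x, u) = a*x + b*u, cost rate C = q*x**2 + r*u**2
a, b, q, r = -1.0, 1.0, 2.0, 1.0

# Stationary HJB with V(x) = P*x**2 gives the Riccati equation
# b**2 * P**2 / r - 2*a*P - q = 0; take the positive root.
P = r * (a + np.sqrt(a**2 + q * b**2 / r)) / b**2

# HJB residual C + V_x * F at the minimizing control u = -b*P*x/r
x = 1.7                               # arbitrary test state
u = -b * P * x / r
residual = (q * x**2 + r * u**2) + 2 * P * x * (a * x + b * u)
print(abs(residual) < 1e-12)          # -> True
```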

Reinforcement Learning

Maybe I need to consider the stochastic version of the above equation to get the full Bellman equation. \[\begin{aligned} v(s)=\mathbb{E}[r+v(s')]\end{aligned}\]
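The Bellman equation is a fixed-point condition, so it can be solved by iteration. A minimal sketch on a two-state MDP of my own invention (transition matrix, rewards, and discount factor are all assumed), with a discount \(\gamma\) added so the fixed point exists:

```python
import numpy as np

# Assumed tiny MDP: P[s, s'] = transition probability, R[s] = expected reward
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
R = np.array([1.0, 0.0])
gamma = 0.9                           # discount factor

# Bellman equation v(s) = E[r + gamma * v(s')] as a fixed-point iteration
v = np.zeros(2)
for _ in range(1000):
    v = R + gamma * P @ v

# The fixed point also solves the linear system (I - gamma * P) v = R
v_exact = np.linalg.solve(np.eye(2) - gamma * P, R)
print(np.allclose(v, v_exact))        # -> True
```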